While these samples are representative of the content of Science.gov,

they are not comprehensive nor are they the most current set.

We encourage you to perform a real-time search of Science.gov

to obtain the most current and comprehensive results.

Last update: August 15, 2014.

1

Condensed history Monte Carlo methods for photon transport problems

We study methods for accelerating Monte Carlo simulations that retain most of the accuracy of conventional Monte Carlo algorithms. These methods – called Condensed History (CH) methods – have been very successfully used to model the transport of ionizing radiation in turbid systems. Our primary objective is to determine whether or not such methods might apply equally well to the transport of photons in biological tissue. In an attempt to unify the derivations, we invoke results obtained first by Lewis, Goudsmit and Saunderson and later improved by Larsen and Tolar. We outline how two of the most promising of the CH models – one based on satisfying certain similarity relations and the second making use of a scattering phase function that permits only discrete directional changes – can be developed using these approaches. The main idea is to exploit the connection between the space-angle moments of the radiance and the angular moments of the scattering phase function. We compare the results obtained when the two CH models studied are used to simulate an idealized tissue transport problem. The numerical results support our findings based on the theoretical derivations and suggest that CH models should play a useful role in modeling light-tissue interactions.

Bhan, Katherine; Spanier, Jerome

2007-01-01
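The connection the abstract draws between the angular moments of the scattering phase function and CH models can be illustrated with a short sketch. It assumes the Henyey-Greenstein phase function, a common tissue-optics choice that the abstract does not name: the mean of the sampled scattering cosine recovers the anisotropy g (the phase function's first angular moment), and the first-order similarity relation preserves the reduced scattering coefficient mus*(1-g).

```python
import random
import math

def sample_hg_cos_theta(g, rng=random.random):
    """Sample the scattering-angle cosine from the Henyey-Greenstein
    phase function with anisotropy g (the first angular moment)."""
    if abs(g) < 1e-6:
        return 2.0 * rng() - 1.0              # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng())
    return (1.0 + g * g - s * s) / (2.0 * g)

def reduced_mus(mus, g):
    """First-order similarity relation: the transport (reduced) scattering
    coefficient mus' = mus * (1 - g) is what a simplified model must preserve."""
    return mus * (1.0 - g)

random.seed(1)
g = 0.9
# the sample mean of cos(theta) estimates g itself
mean = sum(sample_hg_cos_theta(g) for _ in range(200_000)) / 200_000
```

This is only a sketch of the moment bookkeeping, not the CH derivations themselves.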

2

National Technical Information Service (NTIS)

A Monte Carlo code (MOPET) has been developed to simulate the flux transport and energy deposition of photons transmitted through multiple slab absorbers. Photon energies may range from 1 keV to 300 keV where a black body or monoenergetic incident source ...

D. G. Simons S. W. Madigosky

1969-01-01

3

Modelling photon transport in non-uniform media for SPECT with a vectorized Monte Carlo code

A vectorized Monte Carlo code has been developed for modelling photon transport in nonuniform media for single-photon-emission computed tomography (SPECT). The code is designed to compute photon detection kernels, which are used to build system matrices for simulating SPECT projection data acquisition and for use in matrix-based image reconstruction. Nonuniform attenuating and scattering regions are constructed from simple 3D geometric

M. F. Smith

1993-01-01

4

Overview of physical interaction models for photon and electron transport used in Monte Carlo codes

NASA Astrophysics Data System (ADS)

The physical principles and approximations employed in Monte Carlo simulations of coupled electron-photon transport are reviewed. After a brief analysis of the assumptions underlying the trajectory picture used to generate random particle histories, we concentrate on the physics of the various interaction processes of photons and electrons. For each of these processes we describe the theoretical models and approximations that lead to the differential cross sections employed in general-purpose Monte Carlo codes. References to relevant publications and data resources are also provided.

Salvat, Francesc; Fernández-Varea, José M.

2009-04-01

5

A GPU implementation of EGSnrc's Monte Carlo photon transport for imaging applications

NASA Astrophysics Data System (ADS)

EGSnrc is a well-known Monte Carlo simulation package for coupled electron-photon transport that is widely used in medical physics applications. This paper proposes a parallel implementation of the photon transport mechanism of EGSnrc for graphics processing units (GPUs) using NVIDIA's Compute Unified Device Architecture (CUDA). The implementation is specifically designed for imaging applications in the diagnostic energy range and does not model electrons. No approximations or simplifications of the original EGSnrc code were made other than using single floating-point precision instead of double precision and a different random number generator. To avoid performance penalties due to the random nature of the Monte Carlo method, the simulation was divided into smaller steps that could easily be performed in a parallel fashion suitable for GPUs. Speedups of 20 to 40 times for 64³ to 256³ voxels were observed while the accuracy of the simulation was preserved. A detailed analysis of the differences between the CUDA simulation and the original EGSnrc was conducted. The two simulations were found to produce equivalent results for scattered photons, and an overall systematic deviation of less than 0.08% was observed for primary photons.

Lippuner, Jonas; Elbakri, Idris A.

2011-11-01

6

Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy

NASA Astrophysics Data System (ADS)

The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation times. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision and, ultimately, to reduce the accurate radiotherapy dose calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport was presented with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with a slight reduction in calculation accuracy; second, a variety of MC acceleration methods were applied, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the MC convergence rate. The fast MC method was tested on many simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.

Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui

2014-06-01

7

TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code

TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users use only the most recent version of TART97 and its data files.

Cullen, D.E.

1997-11-22

8

COMET-PE as an Alternative to Monte Carlo for Photon and Electron Transport

NASA Astrophysics Data System (ADS)

Monte Carlo methods are a central component of radiotherapy treatment planning, shielding design, detector modeling, and other applications. Long calculation times, however, can limit the usefulness of these purely stochastic methods. The coarse mesh method for photon and electron transport (COMET-PE) provides an attractive alternative. By combining stochastic pre-computation with a deterministic solver, COMET-PE achieves accuracy comparable to Monte Carlo methods in only a fraction of the time. The method's implementation has been extended to 3D, and in this work, it is validated by comparison to DOSXYZnrc using a photon radiotherapy benchmark. The comparison demonstrates excellent agreement; of the voxels that received more than 10% of the maximum dose, over 97.3% pass a 2% / 2mm acceptance test and over 99.7% pass a 3% / 3mm test. Furthermore, the method is over an order of magnitude faster than DOSXYZnrc and is able to take advantage of both distributed-memory and shared-memory parallel architectures for increased performance.

Hayward, Robert M.; Rahnema, Farzad

2014-06-01
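The 2% / 2mm and 3% / 3mm acceptance tests cited above combine a dose-difference criterion with a distance-to-agreement (DTA) search. A simplified 1-D version can be sketched; the profile data and tolerances below are illustrative inventions, not taken from the paper's benchmark.

```python
import numpy as np

def pass_rate(ref, test, spacing_mm, dd=0.02, dta_mm=2.0, threshold=0.10):
    """Simplified 1-D dose-difference / distance-to-agreement check.

    A voxel passes if its dose difference is within `dd` of the maximum
    reference dose, or if some reference point within `dta_mm` matches the
    test dose to the same tolerance. Only voxels above `threshold` of the
    maximum dose are scored (mirroring the abstract's 10% cutoff)."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    tol = dd * ref.max()
    reach = int(round(dta_mm / spacing_mm))
    scored = ref > threshold * ref.max()
    passed = np.zeros_like(scored)
    for i in np.flatnonzero(scored):
        lo, hi = max(0, i - reach), min(len(ref), i + reach + 1)
        passed[i] = np.min(np.abs(ref[lo:hi] - test[i])) <= tol
    return passed[scored].mean()

# toy example: a Gaussian profile shifted by 1 mm, on 1 mm voxels;
# the shift is inside the 2 mm DTA window, so every scored voxel passes
x = np.arange(0.0, 100.0, 1.0)
ref = np.exp(-((x - 50.0) / 12.0) ** 2)
test = np.exp(-((x - 51.0) / 12.0) ** 2)
rate = pass_rate(ref, test, spacing_mm=1.0)
```

A full clinical implementation interpolates between voxels and works in 3-D; this sketch only shows the pass/fail logic.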

9

Space applications of the MITS electron-photon Monte Carlo transport code system

The MITS multigroup/continuous-energy electron-photon Monte Carlo transport code system has matured to the point that it is capable of addressing more realistic three-dimensional adjoint applications. It is first employed to efficiently predict point doses as a function of source energy for simple three-dimensional experimental geometries exposed to simulated uniform isotropic planar sources of monoenergetic electrons up to 4.0 MeV. Results are in very good agreement with experimental data. It is then used to efficiently simulate dose to a detector in a subsystem of a GPS satellite due to its natural electron environment, employing a relatively complex model of the satellite. The capability for survivability analysis of space systems is demonstrated, and results are obtained with and without variance reduction.

Kensek, R.P.; Lorence, L.J.; Halbleib, J.A. [Sandia National Labs., Albuquerque, NM (United States); Morel, J.E. [Los Alamos National Lab., NM (United States)

1996-07-01

10

During the past decade, the Monte Carlo method has found wide application in optical imaging for simulating the photon transport process inside tissues. However, the method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, which consists of the simulation of photon transport both in tissues and in free space. Specifically, the simplification theory of lens systems is utilized to model the camera lens equipped in the optical imaging system, and the Monte Carlo method is employed to describe the energy transformation from the tissue surface to the CCD camera. Also, the focusing effect of the camera lens is considered to establish the relationship of corresponding points between the tissue surface and the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and effective. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results.

Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Bin; Wang, Lin; Peng, Kuan; Liang, Jimin; Tian, Jie

2010-01-01

11

Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speedup was obtained using a GPU compared to a single-core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

Badal, Andreu; Badano, Aldo [Division of Imaging and Applied Mathematics, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, Maryland 20993-0002 (United States)

2009-11-15

12

The MOSSO Program for Monte Carlo simulation of electron and photon transport

A model was developed to simulate the transport of electrons, positrons, and {gamma}-photons of energies ranging from 0.001 to 100 MeV in laminate multicomponent structures. The software environment created for this program facilitates the following: modeling the radiation source, preparing the structure to be investigated (materials and geometry), controlling the imitation of particle penetration through the layers, and analyzing the absorbed doses and the spectra of primary knocked-off atoms.

Konyukov, V.V.; Krainyukov, V.I.; Maev, G.A.; Nosyrev, V.I.; Trufanov, A.I. [Irkutsk State Technical Univ. (Russian Federation)

1995-03-01

13

NASA Astrophysics Data System (ADS)

Minimizing the differences between dose distributions calculated at the treatment planning stage and those delivered to the patient is an essential requirement for successful radiotherapy. Accurate calculation of dose distributions in the treatment planning process is important and can be done only by using a Monte Carlo calculation of particle transport. In this paper, we perform a further validation of our previously developed parallel Monte Carlo electron and photon transport (PMCEPT) code [Kum and Lee, J. Korean Phys. Soc. 47, 716 (2005) and Kim and Kum, J. Korean Phys. Soc. 49, 1640 (2006)] for applications to clinical radiation problems. A linear accelerator, Siemens' Primus 6 MV, was modeled and commissioned. A thorough validation includes both small fields, closely related to intensity modulated radiation treatment (IMRT), and large fields. Two-dimensional comparisons with film measurements were also performed. The PMCEPT results, in general, agreed well with the measured data within a maximum error of about 2%. However, considering the experimental errors, the PMCEPT results can provide the gold standard of dose distributions for radiotherapy. The computing time was also much faster, compared to that needed for experiments, although it is still a bottleneck for direct applications to the daily routine treatment planning procedure.

Kum, Oyeon; Han, Youngyih; Jeong, Hae Sun

2012-05-01

14

The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class "u" A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent.
Second, the ability to calculate radiation dose due to the neutron environment around a MEA is shown. An uncertainty of a factor of three in the MEA calculations is shown to be due to uncertainties in the geometry modeling. It is believed that the methodology is sound and that good agreement between simulation and experiment has been demonstrated.

Morgan C. White

2000-07-01

15

The MC21 Monte Carlo Transport Code

MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.

Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H

2007-01-09

16

Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor, or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168

Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali

2014-01-01
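The photon-history loop the abstract describes, following each optical photon until it reaches a receptor or is attenuated, can be caricatured in one dimension. The geometry, attenuation coefficient, and back-surface reflectivity below are invented for illustration and are far simpler than ScintSim1's 2D-array model.

```python
import random
import math

def collection_efficiency(n_photons, thickness_mm, mu_mm, back_reflect=0.9, seed=0):
    """Toy 1-D optical-photon transport in a scintillator element.

    Each photon starts at a uniform random depth and heads either toward
    the photodetector face or the back surface. Survival over a path of
    length L follows Beer-Lambert attenuation exp(-mu*L); the back
    surface reflects with probability `back_reflect`."""
    rng = random.Random(seed)
    collected = 0
    for _ in range(n_photons):
        z = rng.random() * thickness_mm          # distance from detector face
        if rng.random() < 0.5:                   # emitted toward the detector
            path = z
        else:                                    # emitted toward the back surface
            if rng.random() >= back_reflect:
                continue                         # lost at the back surface
            path = 2.0 * thickness_mm - z        # down, reflect, then back up
        if rng.random() < math.exp(-mu_mm * path):
            collected += 1
    return collected / n_photons

eff = collection_efficiency(100_000, thickness_mm=10.0, mu_mm=0.01)
```

A real model would also handle side walls, refraction, and cross-talk between elements, which is exactly the added complexity the 2D-array code addresses.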


18

Monte Carlo simulation of linearly polarized photons

The collision routine actually used in computer codes for Monte Carlo simulation of (partially) linearly polarized photons follows essentially one intuitive approach which simulates the unpolarized fraction of the beam by generating many polarized photons with their electric field vectors randomly oriented on the polarization plane. Clearly, this approach is more inefficient for simulating unpolarized than polarized radiation, and produces

J. E. Fernández

1997-01-01
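The intuitive approach this abstract examines, representing the unpolarized fraction of a beam by many polarized photons with randomly oriented electric-field vectors, can be sketched directly. The Stokes-parameter check below is an illustration, not code from the paper.

```python
import random
import math

def random_polarization_angle(rng):
    """Assign a photon an electric-field orientation drawn uniformly in
    the polarization plane -- the intuitive approach the abstract
    describes for simulating the unpolarized fraction of a beam."""
    return math.pi * rng.random()            # angle in [0, pi)

rng = random.Random(42)
n = 200_000
q = u = 0.0
for _ in range(n):
    phi = random_polarization_angle(rng)
    q += math.cos(2.0 * phi)                 # Stokes Q of the ensemble
    u += math.sin(2.0 * phi)                 # Stokes U of the ensemble
q /= n
u /= n
# both average to ~0: the ensemble carries no net linear polarization
```

The inefficiency the abstract points to is visible here: many individually polarized histories are needed before the ensemble averages converge to the unpolarized limit.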

19

ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

2005-09-01

20

Treatment of Compton scattering of linearly polarized photons in Monte Carlo codes

NASA Astrophysics Data System (ADS)

The basic formalism of Compton scattering of linearly polarized photons is reviewed, and some simple prescriptions to deal with the transport of polarized photons in Monte Carlo simulation codes are given. Fortran routines, based on the described method, have been included in MCNP, a widely used code for neutron, photon, and electron transport. As this improved version of the code can be of general use, the implementation and the procedures to employ the new version of the code are discussed.

Matt, Giorgio; Feroci, Marco; Rapisarda, Massimo; Costa, Enrico

1996-10-01
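The kind of sampling prescription the abstract refers to can be sketched as rejection sampling of the azimuthal angle from the linearly polarized Klein-Nishina distribution, p(phi) proportional to kr + 1/kr - 2 sin²(theta) cos²(phi), with phi measured from the incident polarization vector and kr = E'/E the Compton energy ratio. This is the textbook distribution, not the MCNP routine itself.

```python
import random
import math

def sample_azimuth(theta, k_ratio, rng):
    """Rejection-sample the azimuthal scattering angle phi from the
    polarized Klein-Nishina azimuthal distribution
        p(phi) ~ kr + 1/kr - 2*sin(theta)^2*cos(phi)^2,
    where kr = E'/E for the given polar angle theta."""
    base = k_ratio + 1.0 / k_ratio
    pmax = base                              # the cos^2 term only subtracts
    while True:
        phi = 2.0 * math.pi * rng.random()
        p = base - 2.0 * math.sin(theta) ** 2 * math.cos(phi) ** 2
        if rng.random() * pmax <= p:
            return phi

rng = random.Random(7)
theta = math.pi / 2                          # 90-degree scattering, kr ~ 1
samples = [sample_azimuth(theta, 1.0, rng) for _ in range(100_000)]
# photons scatter preferentially perpendicular to the E vector,
# so the ensemble mean of cos(2*phi) is negative (analytically -1/2 here)
mean_cos2phi = sum(math.cos(2.0 * p) for p in samples) / len(samples)
```

This azimuthal asymmetry is what makes Compton scattering usable as a polarimeter, the application context of the paper.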

21

The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.

Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.

2000-03-01

22

THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons, and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory

2007-01-10

23

Implementation of a Monte Carlo method to model photon conversion for solar cells

A physical model describing different photon conversion mechanisms is presented in the context of photovoltaic applications. To solve the resulting system of equations, a Monte Carlo ray-tracing model is implemented, which takes into account the coupling of the photon transport phenomena to the non-linear rate equations describing luminescence. It also separates the generation of rays from the two very different

C. del Cañizo; I. Tobías; J. Perezbedmar; A. C. Pan; A. Luque

2008-01-01

24

PENELOPE, an algorithm and computer code for Monte Carlo simulation of electron-photon showers.

National Technical Information Service (NTIS)

The FORTRAN 77 subroutine package PENELOPE performs Monte Carlo simulation of electron-photon showers in arbitrary materials for a wide energy range, from about 1 keV to several hundred MeV. Photon transport is simulated by means of the standard, detailed...

F. Salvat J. M. Fernandez-Varea J. Baro J. Sempau

1996-01-01

25

Automated Monte Carlo biasing for photon-generated electrons near surfaces.

This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.

Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick

2009-09-01
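The weight-window mechanism at the heart of the biasing scheme described above can be sketched as follows. The window bounds and survival weight here are arbitrary illustrative values, not settings from the report's adjoint-flux calculations.

```python
import random

def apply_weight_window(weight, w_low, w_high, w_survive, rng):
    """Weight-window check: particles above the window are split into
    copies (unbiased rounding of weight / w_survive); particles below
    play Russian roulette with survival weight `w_survive`. Returns the
    list of surviving weights (possibly empty). Expected total weight
    is conserved in both branches."""
    if weight > w_high:
        n = int(weight / w_survive + rng.random())   # unbiased rounding
        return [weight / n] * n if n else []
    if weight < w_low:
        if rng.random() < weight / w_survive:
            return [w_survive]                       # survives roulette
        return []                                    # killed
    return [weight]

rng = random.Random(3)
# Monte Carlo check that roulette conserves weight in expectation:
# a weight-0.01 particle below the window should average back to 0.01
total = sum(sum(apply_weight_window(0.01, 0.1, 1.0, 0.5, rng))
            for _ in range(200_000)) / 200_000
```

In the report the window settings come from an adjoint-flux Monte Carlo calculation; here they are simply fixed by hand.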

26

Photon transport in binary photonic lattices

NASA Astrophysics Data System (ADS)

We present a review of the mathematical methods that are used to theoretically study classical propagation and quantum transport in arrays of coupled photonic waveguides. We focus on analyzing two types of binary photonic lattices: those where either self-energies or couplings alternate. For didactic reasons, we split the analysis into classical propagation and quantum transport, but all methods can be implemented, mutatis mutandis, in a given case. On the classical side, we use coupled mode theory and present an operator approach to the Floquet-Bloch theory in order to study the propagation of a classical electromagnetic field in two particular infinite binary lattices. On the quantum side, we study the transport of photons in equivalent finite and infinite binary lattices by coupled mode theory and linear algebra methods involving orthogonal polynomials. Curiously, the dynamics of finite size binary lattices can be expressed as the roots and functions of Fibonacci polynomials.

Rodríguez-Lara, B. M.; Moya-Cessa, H.

2013-03-01
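The coupled-mode description of a finite binary lattice with alternating self-energies and uniform nearest-neighbour coupling can be written down directly. This numerical sketch evolves the amplitudes through an eigendecomposition rather than the orthogonal-polynomial machinery the review develops; all parameter values are illustrative.

```python
import numpy as np

def binary_lattice_hamiltonian(n_sites, beta_a, beta_b, coupling):
    """Coupled-mode Hamiltonian of a finite binary lattice whose site
    self-energies alternate (beta_a, beta_b, beta_a, ...), with uniform
    nearest-neighbour coupling."""
    beta = np.where(np.arange(n_sites) % 2 == 0, beta_a, beta_b)
    h = np.diag(beta).astype(complex)
    h += coupling * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))
    return h

def propagate(h, a0, z):
    """Evolve the mode amplitudes: i dA/dz = H A  =>  A(z) = exp(-iHz) A0,
    evaluated through the eigendecomposition of the Hermitian H."""
    vals, vecs = np.linalg.eigh(h)
    return vecs @ (np.exp(-1j * vals * z) * (vecs.conj().T @ a0))

h = binary_lattice_hamiltonian(21, beta_a=0.0, beta_b=1.5, coupling=1.0)
a0 = np.zeros(21, complex)
a0[10] = 1.0                                 # single-waveguide excitation
a = propagate(h, a0, z=5.0)
intensity = np.abs(a) ** 2                   # light spreads across the array
```

Because the evolution is unitary, the total intensity is conserved exactly, which is a convenient sanity check on any such propagation code.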

27

Vectorization of Monte Carlo particle transport

Fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements of the vector and scalar implementations were modeled in a modified Amdahl's Law that accounts for additional data motion in the vector code. The performance and implementation strategy of the vector codes are related to architectural features of each machine. Speedups between fifteen and eighteen for Cyber 205/ETA-10 architectures, and about nine for CRAY X-MP/Y-MP architectures are observed. The best single processor execution time for the problem was 0.33 seconds on the ETA-10G, and 0.42 seconds on the CRAY Y-MP. 32 refs., 12 figs., 1 tab.

Burns, P.J.; Christon, M.; Schweitzer, R.; Lubeck, O.M.; Wasserman, H.J.; Simmons, M.L.; Pryor, D.V. (Colorado State Univ., Fort Collins, CO (USA). Computer Center; Los Alamos National Lab., NM (USA); Supercomputing Research Center, Bowie, MD (USA))

1989-01-01
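The modified Amdahl's Law mentioned above, accounting for additional data motion in the vector code, might take a form like the following; the exact functional form used in the paper is not reproduced here, so treat this as a plausible sketch under stated assumptions.

```python
def modified_amdahl_speedup(vector_fraction, vector_speedup, data_motion_overhead=0.0):
    """Hypothetical modified Amdahl's Law: the vectorizable fraction f
    runs v times faster, the scalar remainder runs at unit speed, and
    `data_motion_overhead` adds a fixed fraction of the original runtime
    for the extra data movement the vector code incurs."""
    f, v = vector_fraction, vector_speedup
    return 1.0 / ((1.0 - f) + f / v + data_motion_overhead)

# e.g. a 95%-vectorizable code on hardware with 50x vector speedup:
# the scalar residue and data motion cap the net speedup well below 50
s_clean = modified_amdahl_speedup(0.95, 50.0)
s_loaded = modified_amdahl_speedup(0.95, 50.0, data_motion_overhead=0.02)
```

The qualitative lesson matches the paper's measurements: the unvectorized residue and data motion, not the raw vector rate, dominate observed speedups.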

28

Monte Carlo Method and Transport Equation in Plant Canopies

Plant canopy reflectance is calculated using the governing equation for photon transport. The integral equation of transfer is solved by the Monte Carlo method. The main emphasis is on statistical estimation and simulation of the Markov chain. The leaf dimensions are taken into account in obtaining the hot-spot effect of the canopy. Finally, numerical results for the transport equation obtained

Victor S. Antyufeev; Alexander L. Marshak

29

The many applications of Monte Carlo modeling in nuclear medicine imaging make it desirable to increase the accuracy and computational speed of Monte Carlo codes. The accuracy of Monte Carlo simulations strongly depends on the accuracy in the probability functions and, thus, on the cross-section libraries used for photon-transport calculations. A comparison between different photon cross-section libraries and parameterizations implemented

Habib Zaidi

2000-01-01

30

Precise Monte Carlo simulation of single-photon detectors

NASA Astrophysics Data System (ADS)

We demonstrate the importance and utility of Monte Carlo simulation of single-photon detectors. Devising an optimal simulation is strongly influenced by the particular application because of the complexity of modern, avalanche-diode based single-photon detectors. Using a simple yet very demanding example of random number generation via detection of Poissonian photons exiting a beam splitter, we present a Monte Carlo simulation that faithfully reproduces the serial autocorrelation of random bits as a function of detection frequency over four orders of magnitude of the incident photon flux. We conjecture that this simulation approach can be easily modified for use in many other applications.

Stipčević, Mario; Gauthier, Daniel J.

2013-05-01
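The random-bit generation scheme in this abstract can be sketched in a few lines. The model below is deliberately idealized (a perfect 50:50 beam splitter, no dead time or afterpulsing; the function names are our own), so it yields near-zero serial autocorrelation by construction, whereas the paper's full simulation reproduces the autocorrelation of real detectors:

```python
import random

def random_bits_from_beam_splitter(n, seed=0):
    """Idealized model: each detected photon exits a 50:50 beam splitter
    toward detector 0 or detector 1 with equal probability, and the
    detector that fires defines one random bit."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n)]

def lag1_autocorrelation(bits):
    """Serial (lag-1) autocorrelation coefficient of a bit sequence."""
    n = len(bits)
    mean = sum(bits) / n
    num = sum((bits[i] - mean) * (bits[i + 1] - mean) for i in range(n - 1))
    den = sum((b - mean) ** 2 for b in bits)
    return num / den

bits = random_bits_from_beam_splitter(100_000)
r = lag1_autocorrelation(bits)  # near zero for ideal, uncorrelated bits
```

Detector effects such as dead time introduce correlations between successive bits at high count rates, which is exactly the frequency dependence the paper's Monte Carlo reproduces.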

31

Overview of Monte Carlo radiation transport codes

The Radiation Safety Information Computational Center (RSICC) is the designated central repository of the United States Department of Energy (DOE) for nuclear software in radiation transport, safety, and shielding. Since the center was established in the early 60's, there have been several Monte Carlo particle transport (MC) computer codes contributed by scientists from various countries. An overview of the neutron transport computer codes in the RSICC collection is presented.

Kirk, Bernadette Lugue [ORNL

2010-01-01

32

Enhanced Electron-Photon Transport in MCNP6

NASA Astrophysics Data System (ADS)

Recently a variety of improved and enhanced methods for low-energy photon/electron transport have been developed for the Monte Carlo particle transport code MCNP6. Aspects of this development include a significant reworking of the MCNP coding to allow for consideration of much more detail in atomic relaxation processes, new algorithms for reading and processing the Evaluated-Nuclear-Data-File photon, electron, and relaxation data capable of supporting such detailed models, and extension of the electron/photon transport energy range below the traditional 1-kilovolt limit in MCNP, with the goal of performing transport of electrons and photons down to energies in the few-electron-volt range. In this paper we provide an overview of these developments.

Hughes, H. Grady

2014-06-01

33

MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations

The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.

Forster, R.A.; Little, R.C.; Briesmeister, J.F.

1989-01-01

34

Monte Carlo simulation of photon way in clinical laser therapy

NASA Astrophysics Data System (ADS)

The multiple scattering of light can increase the efficiency of laser therapy of inflammatory diseases by enlarging the treated area. Light absorption is essential for treatment, while scattering dominates. Multiple scattering effects must be introduced using the Monte Carlo method to model light transport in tissue and ultimately to calculate the optical parameters. Diffuse reflectance measurements were made on highly concentrated live leukocyte suspensions under conditions similar to in-vivo measurements. The results were compared with the values determined by MC calculations, and the latter were adjusted to match the specified values of diffuse reflectance. The principal idea of MC simulations applied to absorption and scattering phenomena is to follow the optical path of a photon through the turbid medium. The concentrated live cell solution is a compromise between the homogeneous layer of the MC model and the light-live cell interaction of in-vivo experiments. In this way MC simulation allows us to compute the absorption coefficient. The values of optical parameters, derived from simulation by best fitting of the measured reflectance, were used to determine the effective cross section. Thus we can compute the absorbed radiation dose at the cellular level.

Ionita, Iulian; Voitcu, Gabriel

2011-06-01

35

A number of accelerated Monte Carlo (MC) codes have been developed in recent years for brachytherapy applications, one of which is PTRAN_CT. Developed as an extension to the well-benchmarked PTRAN code, PTRAN_CT can be used to perform efficient patient-specific dose calculations. The code can explicitly account for the patient geometry converted from computed-tomography (CT) images, as well as perturbations due

E. Poon; Y. Le; J. F. Williamson; F. Verhaegen

2008-01-01

36

Photon beam description in PEREGRINE for Monte Carlo dose calculations

The goal of PEREGRINE is to provide the capability for accurate, fast Monte Carlo calculation of radiation therapy dose distributions, for routine clinical use and for research into the efficacy of improved dose calculation. An accurate, efficient method of describing and sampling radiation sources is needed, and a simple, flexible solution is provided. The teletherapy source package for PEREGRINE, coupled with state-of-the-art Monte Carlo simulations of treatment heads, makes it possible to describe any teletherapy photon beam to the precision needed for highly accurate Monte Carlo dose calculations in complex clinical configurations that use standard patient modifiers such as collimator jaws, wedges, blocks, and/or multi-leaf collimators. Generic beam descriptions for a class of treatment machines can readily be adjusted to yield dose calculations that match specific clinical sites.

Cox, L. J., LLNL

1997-03-04

37

NASA Astrophysics Data System (ADS)

Organic photovoltaics (OPVs) have received increasing attention as alternatives to inorganic solar cells. To understand the physics of OPVs, the dynamic Monte Carlo (DMC) method for simulating exciton and charge carrier movements has been regarded as a suitable method. However, simulation of light absorption has been ignored. We present a simulation of the performance of OPVs by the DMC method, solving the Maxwell equations for light absorption. We especially focused on the ordered bulk heterojunction (OBHJ) OPV, which is composed of P3HT and PCBM. Our analysis indicated that the locations of light absorption differ at different wavelengths, which suggests that the simulation of light absorption is essential. In the wavelength range of 300 to 400 nm, light absorption occurred dominantly near the interface between the P3HT and PCBM. This implies that the generated excitons can be more efficiently dissociated into free charges. For wavelengths longer than 400 nm, most of the light is absorbed away from the interface between the P3HT and PCBM. As a result, the internal quantum efficiencies gradually decrease from 44.6% to 30.2% as the wavelength increases from 300 to 700 nm.

Jung, Buyoung

2012-02-01

38

Photon beam characterization and modelling for Monte Carlo treatment planning

NASA Astrophysics Data System (ADS)

Photon beams of 4, 6 and 15 MV from Varian Clinac 2100C and 2300C/D accelerators were simulated using the EGS4/BEAM code system. The accelerators were modelled as a combination of component modules (CMs) consisting of a target, primary collimator, exit window, flattening filter, monitor chamber, secondary collimator, ring collimator, photon jaws and protection window. A full phase space file was scored directly above the upper photon jaws and analysed using beam data processing software, BEAMDP, to derive the beam characteristics, such as planar fluence, angular distribution, energy spectrum and the fractional contributions of each individual CM. A multiple-source model has been further developed to reconstruct the original phase space. Separate sources were created with accurate source intensity, energy, fluence and angular distributions for the target, primary collimator and flattening filter. Good agreement (within 2%) between the Monte Carlo calculations with the source model and those with the original phase space was achieved in the dose distributions for field sizes of 4 cm × 4 cm to 40 cm × 40 cm at source surface distances (SSDs) of 80-120 cm. The dose distributions in lung and bone heterogeneous phantoms have also been found to be in good agreement (within 2%) for 4, 6 and 15 MV photon beams for various field sizes between the Monte Carlo calculations with the source model and those with the original phase space.

Deng, Jun; Jiang, Steve B.; Kapur, Ajay; Li, Jinsheng; Pawlicki, Todd; Ma, C.-M.

2000-02-01

39

Monte Carlo radiation transport & parallelism

This talk summarizes the main aspects of the LANL ASCI Eolus project and its major unclassified code project, MCNP. The MCNP code provides state-of-the-art Monte Carlo radiation transport to approximately 3000 users world-wide. Almost all hardware platforms are supported because we strictly adhere to the FORTRAN-90/95 standard. For parallel processing, MCNP uses a mixture of OpenMP combined with either MPI or PVM (shared and distributed memory). This talk summarizes our experiences on various platforms using MPI with and without OpenMP. These platforms include PC-Windows, Intel-LINUX, BlueMountain, Frost, ASCI-Q and others.

Cox, L. J. (Lawrence J.); Post, S. E. (Susan E.)

2002-01-01

40

Calculation of radiation therapy dose using all particle Monte Carlo transport

The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.

Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.

1999-02-09

41

Calculation of radiation therapy dose using all particle Monte Carlo transport

The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.

Chandler, William P. (Tracy, CA); Hartmann-Siantar, Christine L. (San Ramon, CA); Rathkopf, James A. (Livermore, CA)

1999-01-01

42

Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport

Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.

McKinley, M S; Brooks III, E D; Daffin, F

2004-12-13

43

Vertical Photon Transport in Cloud Remote Sensing Problems

NASA Technical Reports Server (NTRS)

Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting based on the maximum penetration of reflected photons proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is derived to accurately determine both weightings, avoiding time consuming Monte Carlo methods. Superposition calculations are made for a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Effective radius retrievals from modeled vertically inhomogeneous liquid water clouds are then made using the standard near-infrared bands, and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.

Platnick, S.

1999-01-01

44

Benchmarking of Proton Transport in Super Monte Carlo Simulation Program

NASA Astrophysics Data System (ADS)

The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy scale can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, which is a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamical 3D datasets and geometry models is supported by RVIS, which is a nuclear radiation virtual simulation and assessment system. Continuous-energy cross section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronic fixed-source and critical design parameter calculations for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, have been achieved in our former version of SuperMC. Recently, proton transport has also been integrated into SuperMC in the energy region up to 10 GeV. The physical processes considered for proton transport include electromagnetic processes and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes.
In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat the intermediate-energy nuclear reactions for protons. Some other hadronic models are also being developed now. The benchmarking of proton transport in SuperMC has been performed against the Accelerator Driven subcritical System (ADS) benchmark data and model released by the IAEA from its Coordinated Research Project (CRP). The incident proton energy is 1.0 GeV. The neutron flux and energy deposition were calculated. The results simulated using SuperMC and FLUKA are in agreement within the statistical uncertainty inherent in the Monte Carlo method. The proton transport in SuperMC has also been applied in the China Lead-Alloy cooled Reactor (CLEAR), which is designed by the FDS Team, for the calculation of the spallation reaction in the target.

Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican

2014-06-01

45

Errors in glass photon transport calculation

A calculational capability for photon sources and photon transport in a reactor lattice was added to the GLASS system in 1973. The calculation has been used in a variety of applications since 1973, and has always produced results that appear reasonable. The GLASS photon transport calculation, however, was never compared to an independent photon transport calculation at any stage of its development. Recently, the GLASS calculation was compared to calculations performed by the SHIELD system module SNONE (SHIELD system version of the LASL DTF-IV code) and significant differences were found in the calculation of deposited photon heat. This led to the discovery of certain errors in the GLASS calculations, as discussed in this report.

Finch, D.R.

1981-03-03

46

A Heterogeneous Coarse Mesh Method for Coupled Photon Electron Transport Problems

A hybrid Monte Carlo/deterministic coarse mesh transport method (COMET-PE) has been developed for pure photon or coupled photon and electron transport in heterogeneous problems. The accuracy of the method was evaluated in two highly stylized 2D benchmark problems: (1) a homogeneous rectangular water phantom and (2) a heterogeneous problem that is typical of a 2D vertical slice of lung. It

Dingkang Zhang; Farzad Rahnema

2011-01-01

47

Monte Carlo simulation of photon coherent behavior in half-infinite turbid medium by scaling method

NASA Astrophysics Data System (ADS)

A Monte Carlo simulation procedure is accelerated by a scaling method based on baseline data from a standard Monte Carlo calculation in a turbid medium. A Gaussian beam is modeled by a hyperboloid of one sheet, matching actual conditions, to obtain the distribution of photons on the sample surface. The depth-dependent coherent signal and photon distribution are calculated in this way, which is important for the reconstruction of optical parameters by inverse Monte Carlo. Numerical results have verified this method in turbid media of different optical parameters with acceptable relative errors.

Lin, Lin; Zhang, Mei; Liu, Huazhu

2012-12-01
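The scaling idea above (reusing trajectories from one baseline simulation at reference optical parameters instead of resimulating for each new parameter set) can be illustrated schematically. This is a sketch under common assumptions: free-path lengths scale with the ratio of total interaction coefficients, and the packet weight is corrected for the change in single-scattering albedo at each interaction. The names and exact reweighting convention are ours, not the paper's.

```python
def rescale_trajectory(steps, mu_t_ref, mu_t_new, albedo_ref, albedo_new):
    """Map a baseline trajectory (a list of free-path lengths sampled at
    total interaction coefficient mu_t_ref) onto a medium with mu_t_new,
    and compute the weight correction for the changed albedo over
    len(steps) scattering events."""
    scale = mu_t_ref / mu_t_new
    new_steps = [s * scale for s in steps]
    weight = (albedo_new / albedo_ref) ** len(steps)
    return new_steps, weight

# Halving mu_t doubles every free path; equal albedos leave the weight at 1.
steps, w = rescale_trajectory([1.0, 2.0], 10.0, 5.0, 0.9, 0.9)
```

Because the baseline trajectories are generated once, estimates for many candidate parameter sets, as needed by inverse Monte Carlo, cost only a rescaling pass instead of a full simulation.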

48

National Technical Information Service (NTIS)

A description of the BETA-2 program for Monte Carlo calculation of electron and photon transport in complex geometries is presented. The program description includes a users manual describing the preparation of input data cards, the printout from a sample...

T. M. Jordan

1971-01-01

49

Thread Divergence and Photon Transport on the GPU (U). LA-UR-13-27057

NASA Astrophysics Data System (ADS)

Monte Carlo methods are commonly used to solve particle transport problems numerically. A major disadvantage of Monte Carlo methods is the time required to obtain accurate solutions. Graphical Processing Units (GPUs) have seen increasing use as accelerators for improving performance in high-performance computing. Extracting the best performance from GPUs requires careful consideration of code execution and data movement. In particular, performance can be reduced if threads diverge due to branching, and Monte Carlo codes are susceptible to branching penalties. We explore different schemes to reduce thread divergence in photon transport and report on our performance findings.

Aulwes, Rob T.; Zukaitis, Anthony

2014-06-01

50

PARALLELIZATION OF THE PENELOPE MONTE CARLO PARTICLE TRANSPORT SIMULATION PACKAGE

We have parallelized the PENELOPE Monte Carlo particle transport simulation package (1). The motivation is to increase the efficiency of Monte Carlo simulations for medical applications. Our parallelization is based on the standard MPI message passing interface. The parallel code is especially suitable for a distributed memory environment, and has been run on up to 256 processors on the Indiana University

R. B. Cruise; R. W. Sheppard; V. P. Moskvin

2003-01-01

51

Monte Carlo Heat Conduction Using the Transport Equation Approximation.

National Technical Information Service (NTIS)

The use of Monte Carlo radiation transport codes to solve heat conduction problems was shown to be applicable to steady state and time dependent multi-media problems. An improved method for treating problems with given surface temperature distributions is...

S. K. Fraley

1977-01-01

52

Photonic sensor applications in transportation security

NASA Astrophysics Data System (ADS)

There is a broad range of security sensing applications in transportation that can be facilitated by using fiber optic sensors and photonic sensor integrated wireless systems. Many of these vital assets are under constant threat of being attacked. It is important to realize that the threats are not just from terrorism but an aging and often neglected infrastructure. To specifically address transportation security, photonic sensors fall into two categories: fixed point monitoring and mobile tracking. In fixed point monitoring, the sensors monitor bridge and tunnel structural health and environment problems such as toxic gases in a tunnel. Mobile tracking sensors are being designed to track cargo such as shipboard cargo containers and trucks. Mobile tracking sensor systems have multifunctional sensor requirements including intrusion (tampering), biochemical, radiation and explosives detection. This paper will review the state of the art of photonic sensor technologies and their ability to meet the challenges of transportation security.

Krohn, David A.

2007-10-01

53

In the algorithm of Leksell GAMMAPLAN (the treatment planning software of Leksell Gamma Knife), scattered photons from the collimator system are presumed to have negligible effects on the Gamma Knife dosimetry. In this study, we used the EGS4 Monte Carlo (MC) technique to study the scattered photons coming out of the single beam channel of Leksell Gamma Knife. The PRESTA (Parameter Reduced Electron-Step Transport Algorithm) version of the EGS4 (Electron Gamma Shower version 4) MC computer code was employed. We simulated the single beam channel of Leksell Gamma Knife with the full geometry. Primary photons were sampled from within the ⁶⁰Co source and radiated isotropically in a solid angle of 4π. The percentages of scattered photons within all photons reaching the phantom space using different collimators were calculated with an average value of 15%. However, this significant amount of scattered photons contributes negligible effects to single beam dose profiles for different collimators. Output spectra were calculated for the four different collimators. To increase the efficiency of simulation by decreasing the semiaperture angle of the beam channel or the solid angle of the initial directions of primary photons will underestimate the scattered component of the photon fluence. The generated backscattered photons from within the ⁶⁰Co source and the beam channel also contribute to the output spectra.

Cheung, Joel Y.C.; Yu, K.N. [Gamma Knife Centre, Canossa Hospital, 1 Old Peak Road, Hong Kong (China); Department of Physics and Materials Science, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong, Hong Kong (China)

2006-01-15

54

In the algorithm of Leksell GAMMAPLAN (the treatment planning software of Leksell Gamma Knife), scattered photons from the collimator system are presumed to have negligible effects on the Gamma Knife dosimetry. In this study, we used the EGS4 Monte Carlo (MC) technique to study the scattered photons coming out of the single beam channel of Leksell Gamma Knife. The PRESTA (Parameter Reduced Electron-Step Transport Algorithm) version of the EGS4 (Electron Gamma Shower version 4) MC computer code was employed. We simulated the single beam channel of Leksell Gamma Knife with the full geometry. Primary photons were sampled from within the 60Co source and radiated isotropically in a solid angle of 4pi. The percentages of scattered photons within all photons reaching the phantom space using different collimators were calculated with an average value of 15%. However, this significant amount of scattered photons contributes negligible effects to single beam dose profiles for different collimators. Output spectra were calculated for the four different collimators. To increase the efficiency of simulation by decreasing the semiaperture angle of the beam channel or the solid angle of the initial directions of primary photons will underestimate the scattered component of the photon fluence. The generated backscattered photons from within the 60Co source and the beam channel also contribute to the output spectra. PMID:16485407

Cheung, Joel Y C; Yu, K N

2006-01-01

55

A simple model of photon transport

In this paper I describe a simple model of photon transport. This simple model includes: tabulated cross sections and average expected energy losses for all elements between hydrogen (Z = 1) and fermium (Z = 100) over the energy range 10 eV to 1 GeV, simple models to analytically describe coherent and incoherent scattering, and a simple model to describe

Dermott E. Cullen

1995-01-01
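A tabulated-cross-section model of this kind typically selects the interaction channel at each collision in proportion to its partial cross section. A minimal sketch (the cross-section values below are placeholders for illustration, not data from the paper's tables):

```python
import random

def sample_interaction(cross_sections, rng):
    """Pick an interaction type with probability proportional to its
    partial cross section (photoelectric, coherent, incoherent, ...)."""
    total = sum(cross_sections.values())
    u = rng.random() * total
    for kind, sigma in cross_sections.items():
        u -= sigma
        if u <= 0:
            return kind
    return kind  # guard against floating-point round-off

# Placeholder partial cross sections (barns) at some photon energy:
xs = {"photoelectric": 0.8, "coherent": 0.3, "incoherent": 1.9}
rng = random.Random(2)
counts = {k: 0 for k in xs}
for _ in range(30_000):
    counts[sample_interaction(xs, rng)] += 1
# Sampled frequencies approach sigma_i / sigma_total as samples grow.
```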

56

Review of Monte Carlo modeling of light transport in tissues

NASA Astrophysics Data System (ADS)

A general survey is provided on the capability of Monte Carlo (MC) modeling in tissue optics while paying special attention to the recent progress in the development of methods for speeding up MC simulations. The principles of MC modeling for the simulation of light transport in tissues, which includes the general procedure of tracking an individual photon packet, common light-tissue interactions that can be simulated, frequently used tissue models, common contact/noncontact illumination and detection setups, and the treatment of time-resolved and frequency-domain optical measurements, are briefly described to help interested readers achieve a quick start. Following that, a variety of methods for speeding up MC simulations, which includes scaling methods, perturbation methods, hybrid methods, variance reduction techniques, parallel computation, and special methods for fluorescence simulations, as well as their respective advantages and disadvantages are discussed. Then the applications of MC methods in tissue optics, laser Doppler flowmetry, photodynamic therapy, optical coherence tomography, and diffuse optical tomography are briefly surveyed. Finally, the potential directions for the future development of the MC method in tissue optics are discussed.

Zhu, Caigang; Liu, Quan

2013-05-01
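The "general procedure of tracking an individual photon packet" that this review describes reduces, in its simplest form, to sampling exponential free paths and depositing a fraction of the packet weight at each interaction. The bare-bones sketch below assumes an infinite homogeneous medium with no boundaries and omits direction sampling; real tissue codes such as those surveyed add geometry, refraction, and anisotropic phase functions:

```python
import math
import random

def track_packet(mu_a, mu_s, rng, w_min=1e-4, survive=0.1):
    """Track one photon packet in an infinite homogeneous medium:
    sample an exponential (Beer-Lambert) free path, deposit the
    fraction mu_a/mu_t of the current weight at each interaction,
    and terminate low-weight packets unbiasedly by Russian roulette."""
    mu_t = mu_a + mu_s
    w, absorbed = 1.0, 0.0
    while w > 0.0:
        step = -math.log(rng.random()) / mu_t
        # (a geometry-aware code would move the packet by `step` here
        #  and sample a new direction from the phase function)
        absorbed += w * mu_a / mu_t   # partial absorption at this site
        w *= mu_s / mu_t              # surviving (scattered) weight
        if w < w_min:                 # Russian roulette termination
            if rng.random() < survive:
                w /= survive          # unbiased weight boost
            else:
                w = 0.0
    return absorbed

# With no boundaries, essentially all launched weight is eventually
# absorbed, so the mean absorbed weight per packet is 1 in expectation.
rng = random.Random(3)
mean_absorbed = sum(track_packet(0.1, 10.0, rng) for _ in range(200)) / 200
```

Russian roulette is one of the variance reduction techniques the review discusses: it keeps the estimator unbiased while preventing packets from being tracked forever at negligible weight.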

57

Monte Carlo simulations of phonon transport in silicon

In this paper, the development of a computational procedure to simulate thermal transport in small semiconductor structures is described. On a microscopic scale, heat transport can be described mathematically using a Boltzmann equation for phonons. Direct numerical solution of this equation is difficult without extensive approximation, because of the quantity and complexity of the anharmonic phonon-phonon interactions. Therefore, Monte Carlo

A. Asokan; R. W. Kelsall

2004-01-01

58

Benchmark calculations for Monte Carlo simulations of electron transport

Benchmark calculations have been performed for electron transport coefficients with an aim to produce a body of data required to verify the codes used in plasma modeling. The present code for time-resolved Monte Carlo simulation (MCS) was shown to properly represent DC transport coefficients in a purely electric field, in crossed electric and magnetic fields, and in the

Z. M. Raspopovic; S. Sakadzic; S. A. Bzenic; Z. Lj. Petrovic

1999-01-01

59

3D extension of the Monte Carlo code MCSHAPE for photon matter interactions in heterogeneous media

NASA Astrophysics Data System (ADS)

MCSHAPE is a Monte Carlo code for the simulation of gamma and X-ray diffusion in matter which gives a general description of the evolution of the polarisation state of the photons. The model is derived from the so-called 'vector' transport equation. The three-dimensional (3D) version of the code can accurately simulate the propagation of photons in heterogeneous media originating from either polarised (i.e. synchrotron) or unpolarised sources, such as X-ray tubes. Photoelectric effect, Rayleigh and Compton scattering, the three most important interaction types for photons in the considered energy range (1-1000 keV), are included in the simulation with a state-of-the-art level of detail. In this paper, the 3D version of the code MCSHAPE is presented. The sample is described using the so-called voxel model. Results from the validation studies and applications of the code to scanning XRF and XRF tomography experiments are discussed.

Scot, V.; Fernandez, J. E.; Vincze, L.; Janssens, K.

2007-10-01

60

Purpose: This paper presents the results of a series of calculations to determine buildup factors for ordinary concrete, baryte concrete, lead, steel, and iron in broad beam geometry for photon energies from 0.125 to 25.125 MeV at 0.250 MeV intervals. Methods: The Monte Carlo N-particle radiation transport computer code has been used to determine the buildup factors for the studied shielding materials. Results: The computation of the primary broad beams using buildup factor data was done for nine published megavoltage photon beam spectra ranging from 4 to 25 MV in nominal energies, representing linacs made by the three major manufacturers. The first tenth-value layer and the equilibrium tenth-value layer are calculated from the broad beam transmission for these nine primary megavoltage photon beam spectra. Conclusions: The results, compared with published data, show the ability of these buildup factor data to predict shielding transmission curves for the primary radiation beam. Therefore, the buildup factor data can be combined with primary, scatter, and leakage x-ray spectra to compute broad beam transmission for barriers in radiotherapy shielding x-ray facilities.

Karim Karoui, Mohamed [Faculte des Sciences de Monastir, Avenue de l'environnement 5019 Monastir -Tunisia (Tunisia); Kharrati, Hedi [Ecole Superieure des Sciences et Techniques de la Sante de Monastir, Avenue Avicenne 5000 Monastir (Tunisia)

2013-07-15
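The quantities in this abstract are tied together by the broad-beam relation T(x) = B(mu*x) * exp(-mu*x), where B is the buildup factor, and a tenth-value layer is the thickness at which T falls by a factor of ten. The sketch below uses a hypothetical linear buildup factor and attenuation coefficient (illustrative values only, not the paper's data) and assumes T decreases monotonically with thickness:

```python
import math

def transmission(x, mu, buildup):
    """Broad-beam transmission: T(x) = B(mu*x) * exp(-mu*x)."""
    return buildup(mu * x) * math.exp(-mu * x)

def tenth_value_layer(mu, buildup, lo=0.0, hi=100.0):
    """Thickness at which transmission drops to one tenth of the
    incident value, found by bisection (T assumed monotone on [lo, hi])."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if transmission(mid, mu, buildup) > 0.1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical linear buildup factor in mean free paths (illustrative only).
B = lambda mfp: 1.0 + 0.5 * mfp
mu = 0.05  # hypothetical attenuation coefficient, 1/cm
tvl = tenth_value_layer(mu, B)
# Buildup raises the transmission at every depth, so this TVL exceeds
# the narrow-beam (B = 1) value of ln(10)/mu.
```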

61

Monte Carlo simulations of the transport of sputtered particles

NASA Astrophysics Data System (ADS)

Program SPATS models the transport of neutral particles during magnetron sputtering deposition. The 3D Monte Carlo simulation provides information about spatial distribution of the fluxes, density of the sputtered particles in the chamber glow discharge area, and kinetic energy distribution of the arrival flux. Collision events are modelled by scattering in Biersack's potential, Lennard-Jones potential, or by binary hard sphere collision approximation. The code has an interface for Monte Carlo TRIM simulated results of the sputtered particles.

Macàk, Karol; Macàk, Peter; Helmersson, Ulf

1999-08-01

62

Monte Carlo Methods for Neutrino Transport in Core Collapse Supernovae

NASA Astrophysics Data System (ADS)

Core-collapse supernovae are among the most powerful events in Nature. Despite decades of effort, the details of the explosion mechanism remain uncertain. Recent studies indicate that the neutrino-driven explosion mechanism is a fundamentally three-dimensional phenomenon, implying that it is necessary to model such an event in three dimensions using large parallel supercomputers. Monte Carlo methods for radiation transport have been known for their simplicity and ease of parallel implementation. In this talk, I will present results of our explorations of Monte Carlo methods for neutrino transport in core-collapse supernovae.

Abdikamalov, Ernazar; Burrows, Adam; Loeffler, Frank; Ott, Christian D.; Schnetter, E.; Diener, Peter

2011-04-01

63

Disorder-enhanced transport in photonic quasicrystals.

Quasicrystals are aperiodic structures with rotational symmetries forbidden to conventional periodic crystals; examples of quasicrystals can be found in aluminum alloys, polymers, and even ancient Islamic art. Here, we present direct experimental observation of disorder-enhanced wave transport in quasicrystals, which contrasts directly with the characteristic suppression of transport by disorder. Our experiments are carried out in photonic quasicrystals, where we find that increasing disorder leads to enhanced expansion of the beam propagating through the medium. By further increasing the disorder, we observe that the beam progresses through a regime of diffusive-like transport until it finally transitions to Anderson localization and the suppression of transport. We study this fundamental phenomenon and elucidate its origins by relating it to the basic properties of quasicrystalline media in the presence of disorder. PMID:21566156

Levi, Liad; Rechtsman, Mikael; Freedman, Barak; Schwartz, Tal; Manela, Ofer; Segev, Mordechai

2011-06-24

64

Wavefunction Monte Carlo for Transport in Open Quantum Systems

NASA Astrophysics Data System (ADS)

The wavefunction Monte Carlo method is a technique for solving the stochastic differential equation associated with the master equation (Lindblad equation) for transport in an open quantum system. For an anisotropic, spin-1/2 XXZ Heisenberg chain in an external magnetic field, whose ends interact with heat baths, we compute the heat transport through the chain as a function of chain length, temperature difference at the ends, and the anisotropy of the chain's exchange interaction, from both a wavefunction Monte Carlo simulation and a deterministic solution of the master equation for the open system's density matrix. Having both solutions creates benchmarks for the more fundamental objective of studying the consequence of replacing a piecewise deterministic step, which is typically part of the wavefunction Monte Carlo method, with a stochastic step. This replacement affords the potential of simulating longer chain lengths.

Gubernatis, James

2013-03-01
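The quantum-jump unravelling underlying the wavefunction Monte Carlo method can be sketched on the simplest possible case: a resonantly driven, damped two-level atom rather than the XXZ chain of the abstract. Between jumps the state evolves under a non-Hermitian effective Hamiltonian; a jump (photon emission) collapses it to the ground state. All parameters below are illustrative.

```python
import numpy as np

def mcwf_excited_population(omega, gamma, t_max, dt, n_traj, seed=0):
    """First-order Monte Carlo wave-function unravelling of a driven,
    damped two-level system (basis: [excited, ground]).  Returns the
    excited-state population time-averaged over the second half of each
    trajectory, averaged over trajectories."""
    rng = np.random.default_rng(seed)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    proj_e = np.diag([1.0, 0.0]).astype(complex)          # |e><e|
    h_eff = 0.5 * omega * sx - 0.5j * gamma * proj_e      # non-Hermitian
    u = np.eye(2) - 1j * h_eff * dt                       # no-jump step
    n_steps = int(t_max / dt)
    acc, n_acc = 0.0, 0
    for _ in range(n_traj):
        psi = np.array([0.0, 1.0], dtype=complex)         # start in |g>
        for step in range(n_steps):
            if rng.random() < gamma * dt * abs(psi[0]) ** 2:
                psi = np.array([0.0, 1.0], dtype=complex) # jump: emit photon
            else:
                psi = u @ psi
                psi /= np.linalg.norm(psi)                # renormalize
            if step >= n_steps // 2:                      # skip transients
                acc += abs(psi[0]) ** 2
                n_acc += 1
    return acc / n_acc

p_ee = mcwf_excited_population(omega=1.0, gamma=1.0, t_max=20.0, dt=0.01, n_traj=200)
```

For these parameters the optical Bloch equations give a steady-state excited population of 1/3, which the trajectory average approaches.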

65

MORSE Monte Carlo radiation transport code system

This report is an addendum to the MORSE report, ORNL-4972, originally published in 1975. This addendum contains descriptions of several modifications to the MORSE Monte Carlo Code, replacement pages containing corrections, Part II of the report, which was previously unpublished, and a new Table of Contents. The modifications include a Klein-Nishina estimator for gamma rays. Use of such an estimator required changing the cross-section routines to process pair production and Compton scattering cross sections directly from ENDF tapes and writing a new version of subroutine RELCOL. Another modification is the use of free-form input for the SAMBO analysis data. This required changing subroutine SCORIN and adding the new subroutine RFRE. References are updated, and errors in the original report have been corrected. (WHK)

Emmett, M.B.

1983-02-01

66

Monte Carlo simulations of the transport of sputtered particles

Program SPATS models the transport of neutral particles during magnetron sputtering deposition. The 3D Monte Carlo simulation provides information about spatial distribution of the fluxes, density of the sputtered particles in the chamber glow discharge area, and kinetic energy distribution of the arrival flux. Collision events are modelled by scattering in Biersack's potential, Lennard-Jones potential, or by binary hard sphere

Karol Macàk; Peter Macàk; Ulf Helmersson

1999-01-01

67

A maximum likelihood method for linking particle-in-cell and Monte-Carlo transport simulations

NASA Astrophysics Data System (ADS)

The expectation-maximization (E-M) algorithm [Dempster et al., J. R. Stat. Soc. B 39 (1977) 1-38] is a maximum likelihood technique to estimate the probability density function (PDF) of a set of measurements. A high performance implementation of the E-M algorithm to characterize multidimensional data sets using a PDF parameterized as a Gaussian mixture was developed. The resulting PDFs compare favorably to histogram-based techniques: they show no binning artifacts and are less noisy (especially in the tails). The motivation, the mathematical properties and the implementation details will be discussed. The PDF estimator is used extensively in the radiographic chain model [Kwan et al., Comput. Phys. Comm. 142 (2001) 263-269] in simulations which quantify bremsstrahlung X-ray emission from rod-pinch diodes and other devices. In these devices, electrons hit an anode and produce X-ray photons. The PIC code MERLIN [Kwan and Snell, in: Lecture Notes in Physics, Springer, 1985] is used to model the dynamics of a low-energy (up to ~2.25 MeV) radiographic electron source. The photon production is modeled with the Monte-Carlo transport code MCNP [Briesmeister, ed., MCNP—A General Monte Carlo N-Particle Transport Code, 2000]. The estimator is used to upsample and uniformly weight the PIC electrons to provide a suitable population for the Monte-Carlo calculation that would be computationally prohibitive to generate directly.

Bowers, Kevin J.; Devolder, Barbara G.; Yin, Lin; Kwan, Thomas J. T.

2004-12-01
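A minimal one-dimensional sketch of the E-M iteration for a Gaussian mixture, in the spirit of the abstract (the implementation described there is multidimensional and high performance; this toy is not):

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=100):
    """Fit a k-component 1-D Gaussian mixture by expectation-maximization.
    E-step: responsibilities r[i, j] = P(component j | x_i).
    M-step: reweight, re-centre and re-spread each component."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread-out initial means
    var = np.full(k, np.var(x))
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: per-sample component responsibilities
        pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 0.5, 3000), rng.normal(3.0, 1.0, 7000)])
w, mu, var = em_gmm_1d(x, 2)
```

On this well-separated synthetic mixture the fitted means recover (-2, 3) and the weights recover (0.3, 0.7).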

68

Design, Implementation and Optimization of a Parallel Monte Carlo Particle Transport Code.

National Technical Information Service (NTIS)

The design, implementation and optimization of a parallel, Monte Carlo particle transport code is presented. MERCURY is a modern Monte Carlo code being developed at the Lawrence Livermore National Laboratory (LLNL). It is capable of modeling the transport...

2004-01-01

69

Efficient, automated Monte Carlo methods for radiation transport

Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In cases where still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.

Kong Rong; Ambrose, Martin [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Spanier, Jerome [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Beckman Laser Institute and Medical Clinic, University of California, 1002 Health Science Road E., Irvine, CA 92612 (United States)], E-mail: jspanier@uci.edu

2008-11-20
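The staged "geometric learning" idea can be caricatured on a one-dimensional integral: each stage fits a cheap surrogate whose integral is known exactly, and the next stage samples only the residual, so the statistical error shrinks stage by stage. This toy is only an analogue of the control-variate staging the abstract describes, not the authors' transport algorithm.

```python
import numpy as np

def staged_mc(f, n_per_stage=2000, n_stages=5, degree=3, seed=0):
    """Toy staged Monte Carlo for I = integral of f over [0, 1].
    Each stage fits a polynomial surrogate g to f from its own samples;
    the next stage integrates g exactly and samples only the residual
    f - g, shrinking the statistical error from stage to stage."""
    rng = np.random.default_rng(seed)
    coeffs = np.zeros(degree + 1)              # surrogate g, initially zero
    estimates = []
    for _ in range(n_stages):
        u = rng.random(n_per_stage)
        exact_part = np.polyint(np.poly1d(coeffs))(1.0)   # integral of g
        estimates.append(exact_part + np.mean(f(u) - np.polyval(coeffs, u)))
        coeffs = np.polyfit(u, f(u), degree)   # learn a better surrogate
    return estimates

f = lambda u: np.exp(u)                        # exact integral: e - 1
est = staged_mc(f)
```

Stage one is plain Monte Carlo; by the final stage the estimator samples only a tiny residual and its error drops by orders of magnitude for the same per-stage cost.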

70

Efficient, Automated Monte Carlo Methods for Radiation Transport

Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In cases where still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.

Kong, Rong; Ambrose, Martin; Spanier, Jerome

2012-01-01

71

Optix: A Monte Carlo scintillation light transport code

NASA Astrophysics Data System (ADS)

The paper reports on the capabilities of the Monte Carlo scintillation light transport code Optix, an extended version of the previously introduced code Optics. Optix provides the user with a variety of both numerical and graphical outputs within a very simple and user-friendly input structure. A benchmarking strategy has been adopted based on comparison with experimental results, semi-analytical solutions, and other Monte Carlo simulation codes to verify various aspects of the developed code. In addition, extensive comparisons have been made against the tracking abilities of the general-purpose MCNPX and FLUKA codes. The presented benchmark results for the Optix code exhibit promising agreement.

Safari, M. J.; Afarideh, H.; Ghal-Eh, N.; Davani, F. Abbasi

2014-02-01

72

Equivalence of four Monte Carlo methods for photon migration in turbid media.

In the field of photon migration in turbid media, different Monte Carlo methods are usually employed to solve the radiative transfer equation. We consider four different Monte Carlo methods, widely used in the field of tissue optics, that are based on four different ways to build photons' trajectories. We provide both theoretical arguments and numerical results showing the statistical equivalence of the four methods. In the numerical results we compare the temporal point spread functions calculated by the four methods for a wide range of the optical properties in the slab and semi-infinite medium geometry. The convergence of the methods is also briefly discussed. PMID:23201658

Sassaroli, Angelo; Martelli, Fabrizio

2012-10-01
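A minimal sketch of one common way photon trajectories are built in tissue optics: exponential free paths, analog absorption at each collision, and Henyey-Greenstein deflection. The four methods compared in the abstract are variants of this kind of walk; the optical coefficients below are illustrative only.

```python
import numpy as np

def photon_max_depth(mu_s, mu_a, g, rng):
    """One photon history in an infinite medium, launched along +z:
    exponential free paths, analog absorption with probability
    mu_a/mu_t per collision, Henyey-Greenstein scattering otherwise.
    Returns the maximum depth reached along z before absorption."""
    mu_t = mu_s + mu_a
    pos = np.zeros(3)
    u = np.array([0.0, 0.0, 1.0])             # direction cosines
    zmax = 0.0
    while True:
        pos = pos + u * rng.exponential(1.0 / mu_t)
        zmax = max(zmax, pos[2])
        if rng.random() < mu_a / mu_t:        # analog absorption
            return zmax
        # sample the Henyey-Greenstein scattering cosine
        if g != 0.0:
            s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
            ct = (1.0 + g * g - s * s) / (2.0 * g)
        else:
            ct = 2.0 * rng.random() - 1.0
        st = np.sqrt(max(0.0, 1.0 - ct * ct))
        phi = 2.0 * np.pi * rng.random()
        if abs(u[2]) > 0.99999:               # avoid the polar singularity
            u = np.array([st * np.cos(phi), st * np.sin(phi),
                          ct * np.sign(u[2])])
        else:
            d = np.sqrt(1.0 - u[2] ** 2)
            u = np.array([
                st * (u[0] * u[2] * np.cos(phi) - u[1] * np.sin(phi)) / d + u[0] * ct,
                st * (u[1] * u[2] * np.cos(phi) + u[0] * np.sin(phi)) / d + u[1] * ct,
                -st * np.cos(phi) * d + u[2] * ct,
            ])

rng = np.random.default_rng(0)
depths = [photon_max_depth(mu_s=10.0, mu_a=1.0, g=0.9, rng=rng) for _ in range(500)]
```

Absorption-as-weight-reduction and other variants reshuffle where randomness enters but, as the abstract argues, leave the sampled statistics equivalent.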

73

Monte-Carlo Study of Axonal Transport in a Neuron

NASA Astrophysics Data System (ADS)

A living cell has an infrastructure much like that of a city. A key component is the transportation system that consists of roads (filaments) and molecular motors (proteins) that haul cargo along these roads. We will present a Monte Carlo simulation of intracellular transport inside an axon in which motor proteins carry cargos along microtubules and are able to switch from one microtubule to another. The breakdown of intracellular transport in neurons has been associated with neurodegenerative diseases such as Alzheimer's, Lou Gehrig's disease (ALS), and Huntington's disease.

Shrestha, Uttam; Yu, Clare; Jia, Zhiyuan; Erickson, Robert; Gross, Steven

2011-03-01
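A toy version of such a simulation, with a single motor alternating between processive runs and detached pauses; the rates and speeds below are invented for illustration, not taken from the study.

```python
import numpy as np

def axonal_cargo_run(n_steps, v=1.0, dt=0.01, p_detach=0.01, p_reattach=0.1, seed=0):
    """Toy Monte Carlo of a cargo hauled along a microtubule: the motor
    steps forward while attached, wiggles weakly when detached, and can
    reattach -- a crude cartoon of switching between tracks."""
    rng = np.random.default_rng(seed)
    x, attached = 0.0, True
    xs = np.empty(n_steps)
    for i in range(n_steps):
        if attached:
            x += v * dt                        # processive run
            if rng.random() < p_detach:
                attached = False
        else:
            x += rng.normal(0.0, 0.02)         # unbound thermal wiggle
            if rng.random() < p_reattach:
                attached = True
        xs[i] = x
    return xs

traj = axonal_cargo_run(5000)
```

With these rates the motor is attached about 90% of the time, so the cargo drifts steadily forward; lowering the reattachment rate mimics the transport breakdown the abstract links to disease.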

74

Testing Monte Carlo computer codes for simulations of electron transport in matter.

In this paper, three Monte Carlo codes were tested for electron transport in various materials. MCNPX (version 2.4.0), Penelope (version 2003) and EGSnrc codes were used for modeling simple problems. These problems were focused on bremsstrahlung, energy deposition in matter, electron ranges and production of secondary electrons by gamma radiation. The electrons were primary particles, except in the last exercise, where photons were used. Various materials, e.g., water, lead and tungsten were used. The energy of the primary particles was within the energy range from 20 to 450 keV. The simulation results were compared with each other. PMID:20116266

Sídlová, Vera; Trojek, Tomás

2010-01-01

75

Monte Carlo simulation of photon densities inside the dermis in LLLT (low level laser therapy)

NASA Astrophysics Data System (ADS)

In this work, the photon distribution of a He:Ne laser within dermis tissue is studied. The dermis, a highly scattering medium, was irradiated by a low-power laser. The photon densities as well as the corresponding isothermal contours were obtained by two different numerical methods, i.e., Lambert-Beer and Welch. The results were subsequently compared with those of a Monte Carlo simulation.

Parvin, Parviz; Eftekharnoori, Somayeh; Dehghanpour, Hamid Reza

2009-09-01
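The gap between a simple Lambert-Beer estimate and the effective attenuation that a Monte Carlo simulation captures can be shown in a few lines; the optical coefficients below are illustrative placeholders, not the paper's dermis values.

```python
import numpy as np

# Lambert-Beer (ballistic) attenuation vs an effective diffusion-theory
# falloff for a He:Ne beam in a dermis-like, scattering-dominated medium.
mu_a, mu_s, g = 0.3, 20.0, 0.8             # absorption/scattering, 1/mm
mu_t = mu_a + mu_s                          # total attenuation coefficient
mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s * (1.0 - g)))  # effective attenuation

z = np.linspace(0.0, 2.0, 201)              # depth, mm
collimated = np.exp(-mu_t * z)              # Lambert-Beer: unscattered light only
diffuse = np.exp(-mu_eff * z)               # crude multiple-scattering falloff
```

Lambert-Beer discards scattered photons and so decays far faster than the effective falloff; this is why the simpler estimates are benchmarked against Monte Carlo, which keeps the scattered light.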

76

Arbitrary perturbations in Monte Carlo neutral-particle transport

NASA Astrophysics Data System (ADS)

Monte Carlo techniques are widely used to model particle transport in complex models, as both the transport physics and the level of geometric detail can be simulated with arbitrary precision. The major drawback of the Monte Carlo method is its computational cost. This is particularly true in design studies, where the effects of small changes in the model may be masked by statistical fluctuations unless prohibitively long simulation times are used. Perturbation methods have been developed to model the effects of small changes in material density, composition or reaction cross-sections. In this paper, I describe how this approach can be extended to allow nearly arbitrary perturbations in the transport problem specification to be made, including material properties, the model geometry and the radiation source description. The major problem, handling arbitrary variations in the model geometry, is overcome using a modified form of the Woodcock neutral-particle tracking algorithm. The approach has been implemented as an extension to the general-purpose Monte Carlo code EGSnrc. I discuss the details of this implementation, including how the specification of a perturbation simulation can be generated automatically from two or more unperturbed simulation models. I present an example of the application of the method to the modelling of a simple X-ray fluorescence instrument.

Tickner, James

2014-06-01
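The Woodcock (delta) tracking idea mentioned in the abstract can be sketched in one dimension: fly with a constant majorant cross-section and accept collisions with probability mu(x)/mu_max, so no geometry boundary ever needs to be intersected. The slab and cross-sections below are invented for the example.

```python
import numpy as np

def woodcock_collision(mu_of_x, mu_max, rng, x_end):
    """Sample a collision site in [0, x_end) of a 1-D heterogeneous
    medium by Woodcock (delta) tracking: fly with the constant majorant
    mu_max, then accept a real collision with probability mu(x)/mu_max
    (otherwise the collision is virtual and the flight continues).
    Returns the collision position, or None if the particle escapes."""
    x = 0.0
    while True:
        x += rng.exponential(1.0 / mu_max)     # tentative flight
        if x >= x_end:
            return None                         # escaped the slab
        if rng.random() < mu_of_x(x) / mu_max:
            return x                            # real (non-virtual) collision

# Invented geometry: a dense band in the middle of a 10-unit slab.
mu = lambda x: 1.0 if 4.0 <= x <= 6.0 else 0.1
rng = np.random.default_rng(0)
hits = [woodcock_collision(mu, mu_max=1.0, rng=rng, x_end=10.0) for _ in range(20000)]
escape_frac = sum(h is None for h in hits) / len(hits)
```

The analytic escape probability here is exp(-(0.1·8 + 1.0·2)) ≈ 0.061, which the sampled fraction reproduces without the tracking routine ever locating the band boundaries; that insensitivity to geometry is what makes the algorithm attractive for arbitrary geometric perturbations.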

77

GPU-accelerated object-oriented Monte Carlo modeling of photon migration in turbid media

NASA Astrophysics Data System (ADS)

Due to the recent intense developments in lasers and optical technologies, a number of novel revolutionary imaging and photonic-based diagnostic modalities have arisen. Utilizing various features of light, these techniques provide new practical solutions in a range of biomedical, environmental and industrial applications. Conceptual engineering design of new optical diagnostic systems requires a clear understanding of the light-tissue interaction and the peculiarities of optical radiation propagation therein. The description of photon migration within random media is based on radiative transfer theory, which forms the basis of Monte Carlo modelling of light propagation in complex turbid media such as biological tissues. In the current presentation, as a further development of the Monte Carlo technique, we introduce a novel Object-Oriented Programming (OOP) paradigm, accelerated by a Graphics Processing Unit, that provides an opportunity to increase the performance of standard Monte Carlo simulation by over 100 times.

Doronin, Alex; Meglinski, Igor

2010-10-01

78

Confidence interval procedures for Monte Carlo transport simulations

Monte Carlo simulations performed with radiation transport codes such as MCNP are extensively used to model radiation transport. Resulting sampled tally distributions can be extremely right-skewed. This skewness hampers valid confidence interval (CI) formation, lengthening the time needed to obtain a sample representative of the entire underlying distribution. The relevant concern is whether the number of histories n is large enough so that the standardized sample mean follows an approximately Gaussian distribution; i.e., when the conditions of the central limit theorem are satisfied. When these conditions are satisfied, CIs are valid in that they cover the true mean at the proper rate.

Pederson, S.P.; Forster, R.A.; Booth, T.E. [Los Alamos National Lab., NM (United States)

1995-12-31
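The abstract's point, that right-skewed tally distributions undermine nominal confidence-interval coverage until n satisfies the central limit theorem conditions, can be demonstrated with a lognormal stand-in for a skewed tally distribution:

```python
import numpy as np

def ci_coverage(sample_size, n_trials=2000, seed=0):
    """Empirical coverage of the nominal-95% CLT confidence interval
    for the mean of a right-skewed (lognormal) tally distribution.
    Undercoverage at small n signals that the central limit theorem
    conditions are not yet satisfied."""
    rng = np.random.default_rng(seed)
    true_mean = np.exp(0.5)                  # mean of lognormal(0, 1)
    covered = 0
    for _ in range(n_trials):
        x = rng.lognormal(0.0, 1.0, sample_size)
        m = x.mean()
        s = x.std(ddof=1) / np.sqrt(sample_size)
        if m - 1.96 * s <= true_mean <= m + 1.96 * s:
            covered += 1
    return covered / n_trials

cov_small = ci_coverage(10)      # few histories: CI undercovers badly
cov_large = ci_coverage(2000)    # many histories: coverage near 95%
```

The small-sample interval misses the true mean far more often than the nominal 5%, exactly the failure mode the abstract warns about for skewed MCNP tallies.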

79

Composition PDF/photon Monte Carlo modeling of moderately sooting turbulent jet flames

A comprehensive model for luminous turbulent flames is presented. The model features detailed chemistry, radiation and soot models and state-of-the-art closures for turbulence-chemistry interactions and turbulence-radiation interactions. A transported probability density function (PDF) method is used to capture the effects of turbulent fluctuations in composition and temperature. The PDF method is extended to include soot formation. Spectral gas and soot radiation is modeled using a (particle-based) photon Monte Carlo method coupled with the PDF method, thereby capturing both emission and absorption turbulence-radiation interactions. An important element of this work is that the gas-phase chemistry and soot models that have been thoroughly validated across a wide range of laminar flames are used in turbulent flame simulations without modification. Six turbulent jet flames are simulated with Reynolds numbers varying from 6700 to 15,000, two fuel types (pure ethylene, 90% methane-10% ethylene blend) and different oxygen concentrations in the oxidizer stream (from 21% O₂ to 55% O₂). All simulations are carried out with a single set of physical and numerical parameters (model constants). Uniformly good agreement between measured and computed mean temperatures, mean soot volume fractions and (where available) radiative fluxes is found across all flames. This demonstrates that with the combination of a systematic approach and state-of-the-art physical models and numerical algorithms, it is possible to simulate a broad range of luminous turbulent flames with a single model. (author)

Mehta, R.S.; Haworth, D.C.; Modest, M.F. [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA 16802 (United States)

2010-05-15

80

Monte Carlo Characterization of a Highly Efficient Photon Detector

Highly efficient photon detectors play a major role in countless applications in physics, nuclear engineering, and medical physics. In nuclear engineering, radioactive waste can be characterized with techniques such as the nondestructive assay technique (PNDA). In medical physics, photon detectors are extensively used for diagnostic X-ray and computerized tomography (CT) imaging, nuclear medicine, and quite recently radiation therapy of cancer.1,2 In radiation therapy of cancer, ever more accurate delivery techniques spur the need for efficient detectors of high-energy photons in the mega-electron-volt range in order to allow imaging of the patient during radiation delivery. In particular, in tomotherapy, a megavoltage detector is used both for CT imaging and for verifying the dose received by the patients. Conventional megavoltage detection systems usually suffer from intrinsically low subject contrast.2 A high signal-to-noise ratio of the detection system can be achieved by keeping the noise as low as possible and/or by increasing the quantum efficiency of the detector. In this work, a candidate highly efficient detection system, i.e., an arc-shaped xenon gas ionization chamber, was characterized in terms of efficiency and spatial resolution.

Harry Keller; M. Glass; R. Hinderer; K. Ruchala; R. Jeraj; G. Olivera; T. R. Mackie; M. L. Corradini

2001-06-17

81

Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems

NASA Astrophysics Data System (ADS)

This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow and determine the feasibility of these methods on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique a group of modern preconditioning strategies are researched. MCSA when compared to conventional Krylov methods demonstrated improved iterative performance over GMRES by converging in fewer iterations when using the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, it was found that FANM had better iterative performance than the Newton-Krylov method by converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain decomposed algorithm to parallelize MCSA aimed at leveraging leadership-class computing facilities was developed by utilizing parallel strategies from the radiation transport community. The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm. It was found that for the current implementation of MCSA, both weak and strong scaling improved on that observed for production implementations of Krylov methods.

Slattery, Stuart R.
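The core of the kind of Monte Carlo linear solver that MCSA accelerates can be sketched for a fixed-point system x = Hx + b: random walks over the index set sample terms of the Neumann series. This is a generic textbook scheme, not the MCSA implementation described in the abstract.

```python
import numpy as np

def mc_solve_component(h, b, i, n_walks=5000, p_absorb=0.3, seed=0):
    """Estimate component i of the solution of x = H x + b by sampling
    terms of the Neumann series x = b + H b + H^2 b + ... with random
    walks over the index set.  Assumes the series converges and the
    walk-weight variance stays bounded."""
    rng = np.random.default_rng(seed)
    n = len(b)
    total = 0.0
    for _ in range(n_walks):
        k, w = i, 1.0
        while True:
            total += w * b[k]                    # score current series term
            if rng.random() < p_absorb:          # terminate the walk
                break
            j = int(rng.integers(n))             # uniform next index
            w *= h[k, j] * n / (1.0 - p_absorb)  # importance correction
            k = j
    return total / n_walks

# Jacobi splitting of a small diagonally dominant system A x = rhs
a = np.array([[4.0, 1.0], [2.0, 5.0]])
rhs = np.array([1.0, 2.0])
d_inv = np.diag(1.0 / np.diag(a))
h = np.eye(2) - d_inv @ a                        # iteration matrix
b = d_inv @ rhs
x0 = mc_solve_component(h, b, 0)                 # estimates x[0]
x_exact = np.linalg.solve(a, rhs)
```

MCSA wraps estimators like this inside a deterministic fixed-point iteration so that only a correction, not the full solution, must be resolved stochastically.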

82

Neutron and Photon Transport in Sea-Going Cargo Containers

Factors affecting sensing of small quantities of fissionable material in large sea-going cargo containers by neutron interrogation and detection of {beta}-delayed photons are explored. The propagation of variable-energy neutrons in cargos, subsequent fission of hidden nuclear material and production of the {beta}-delayed photons, and the propagation of these photons to an external detector are considered explicitly. Detailed results of Monte Carlo simulations of these stages in representative cargos are presented. Analytical models are developed both as a basis for a quantitative understanding of the interrogation process and as a tool to allow ready extrapolation of the results to cases not specifically considered here.

Pruet, J; Descalle, M; Hall, J; Pohl, B; Prussin, S G

2005-02-09

83

Neutron and photon transport in seagoing cargo containers

Factors affecting sensing of small quantities of fissionable material in large seagoing cargo containers by neutron interrogation and detection of {beta}-delayed photons are explored. The propagation of variable-energy neutrons in cargos, subsequent fission of hidden nuclear material and production of the {beta}-delayed photons, and the propagation of these photons to an external detector are considered explicitly. Detailed results of Monte Carlo simulations of these stages in representative cargos are presented. Analytical models are developed both as a basis for a quantitative understanding of the interrogation process and as a tool to allow ready extrapolation of our results to cases not specifically considered here.

Pruet, J.; Descalle, M.-A.; Hall, J.; Pohl, B.; Prussin, S.G. [Lawrence Livermore National Laboratory, N-Division, 7000 East Avenue, Livermore, California 94550 (United States); Department of Nuclear Engineering, University of California at Berkeley (United States)

2005-05-01

84

Current status of the PSG Monte Carlo neutron transport code

PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX fuelled VENUS-2 reactor dosimetry benchmark. (authors)

Leppaenen, J. [VTT Technical Research Centre of Finland, Laempoemiehenkuja 3, Espoo, FI-02044 VTT (Finland)

2006-07-01

85

Resonance fluorescence near a photonic band edge: Dressed-state Monte Carlo wave-function approach

NASA Astrophysics Data System (ADS)

We introduce a dressed-state Monte Carlo wave-function technique to describe resonance fluorescence in a broad class of non-Markovian reservoirs with strong atom-reservoir interaction. The method recaptures photon localization effects which are beyond the Born and Markovian approximations, and describes the influence of the driving field on the atom-reservoir interaction. Using this approach, we predict a number of fundamentally new features in resonance fluorescence near the edge of a photonic band gap. In particular, the atomic population exhibits inversion for moderate applied field intensity. For a low external field intensity, the atomic system retains a long-time memory of its initial state.

Quang, Tran; John, Sajeev

1997-11-01

86

The varying low-energy contribution to the photon spectra at points within and around radiotherapy photon fields is associated with variations in the responses of non-water equivalent dosimeters and in the water-to-material dose conversion factors for tissues such as the red bone marrow. In addition, the presence of low-energy photons in the photon spectrum enhances the RBE in general and in particular for the induction of second malignancies. The present study discusses the general rules valid for the low-energy spectral component of radiotherapeutic photon beams at points within and in the periphery of the treatment field, taking as an example the Siemens Primus linear accelerator at 6 MV and 15 MV. The photon spectra at these points and their typical variations due to the target system, attenuation, single and multiple Compton scattering, are described by the Monte Carlo method, using the code BEAMnrc/EGSnrc. A survey of the role of low-energy photons in the spectra within and around radiotherapy fields is presented. In addition to the spectra, some data compression has proven useful to support the overview of the behaviour of the low-energy component. A characteristic indicator of the presence of low-energy photons is the dose fraction attributable to photons with energies not exceeding 200 keV, termed P(D)(200 keV). Its values are calculated for different depths and lateral positions within a water phantom. For a pencil beam of 6 or 15 MV primary photons in water, the radial distribution of P(D)(200 keV) is bell-shaped, with a wide-ranging exponential tail of half-value 6 to 7 cm. The P(D)(200 keV) value obtained on the central axis of a photon field shows an approximately proportional increase with field size. Out-of-field P(D)(200 keV) values are up to an order of magnitude higher than on the central axis for the same irradiation depth. The 2D pattern of P(D)(200 keV) for a radiotherapy field visualizes the regions, e.g. at the field margin, where changes of detector responses and dose conversion factors, as well as increases of the RBE, have to be anticipated. Parameter P(D)(200 keV) can also be used as guidance supporting the selection of a calibration geometry suitable for radiation dosimeters to be used in small radiation fields. PMID:21530198

Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn

2011-09-01

87

Radiative heat transfer plays a central role in many combustion and engineering applications. Because of its highly nonlinear and nonlocal nature, the computational cost can be extremely high to model radiative heat transfer effects accurately. In this paper, we present a parallel software framework for distributed memory architectures that implements the photon Monte Carlo method of ray tracing to simulate

Ivana Veljkovic; Paul E. Plassmann

2005-01-01

88

Tracking B Cells from Two-Photon Microscopy Images Using Sequential Monte Carlo

A Python-based software package that implements Sequential Monte Carlo (SMC) tracking is used for extracting dynamical information on a large collection of individual cells from two-photon microscopy image sequences. We show how our software tool is useful for quantifying the motility of B cells involved in the immune response and for validating computational immunologic models. We describe the theory behind our

David Olivieri; Ivan Gomez Conde; Jose Faro

89

New capabilities for Monte Carlo simulation of deuteron transport and secondary products generation

NASA Astrophysics Data System (ADS)

Several important research programs are dedicated to the development of facilities based on deuteron accelerators. In designing these facilities, the definition of a validated computational approach able to simulate deuteron transport and to evaluate deuteron interactions and the production of secondary particles with acceptable precision is a very important issue. Current Monte Carlo codes, such as MCNPX or PHITS, when applied to deuteron transport calculations use built-in semi-analytical models to describe deuteron interactions. These models are found unreliable in predicting the neutrons and photons generated by low-energy deuterons, typically present in those facilities. We present a new computational tool, resulting from an extension of the MCNPX code, which significantly improves the treatment of problems where any secondary product (neutrons, photons, tritons, etc.) generated by low-energy deuteron reactions could play a major role. First, it handles evaluated deuteron data libraries, which allow a better description of low-energy deuteron interactions. Second, it includes a variance reduction technique for the production of secondary particles by charged-particle-induced nuclear interactions, which drastically reduces the computing time needed in transport and nuclear response calculations. Verification of the computational tool is successfully achieved. This tool can be very helpful in addressing design issues such as selection of the dedicated neutron production target and accelerator radioprotection analysis. It can also be helpful for testing the deuteron cross-sections under development in the frame of different international nuclear data programs.

Sauvan, P.; Sanz, J.; Ogando, F.

2010-03-01

90

Optimization of Monte Carlo transport simulations in stochastic media

This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, such as Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)

Liang, C.; Ji, W. [Dept. of Mechanical, Aerospace and Nuclear Engineering, Rensselaer Polytechnic Inst., 110 8th street, Troy, NY (United States)

2012-07-01

91

All-band photonic transport system and its device technologies

To open up new optical frequency resources for communications, a concept called all-band photonics is proposed. This concept focuses on 1-mum waveband photonic transmission and device technologies, thereby pioneering a new waveband for photonic transport systems (PTSs). To construct the 1-mum PTS, a novel semiconductor light-source, optical-fiber transmission lines, and optical amplifiers are developed. In this paper, we demonstrate a

Naokatsu Yamamoto; Hideyuki Sotobayashi

2009-01-01

92

The equations of radiation transport for thermal photons are notoriously difficult to solve in thick media without resorting to asymptotic approximations such as the diffusion limit. One source of this difficulty is that in thick, absorbing media, thermal emission and absorption are almost completely balanced. A new formulation for thermal radiation transport, called the difference formulation, was recently introduced in order to remove the stiff balance between emission and absorption. In the new formulation, thermal emission is replaced by derivative terms that become small in thick media. It was proposed that the difficulties of solving the transport equation in thick media would be ameliorated by the difference formulation, while preserving full rigor and accuracy of the transport solution in the streaming limit. In this paper, the transport equation is solved by the symbolic implicit Monte Carlo method and comparisons are made between the standard formulation and the difference formulation. The method is easily adapted to the derivative source terms of the difference formulation, and a remarkable reduction in noise is obtained when the difference formulation is applied to problems involving thick media.
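
In outline (standard grey, non-scattering notation, not the paper's exact equations), the standard formulation transports the intensity I with a stiff emission term, while the difference formulation transports D = I − B so that emission is replaced by derivative sources that vanish in thick, near-equilibrium media:

```latex
% standard formulation: emission \sigma B nearly cancels absorption \sigma I
\frac{1}{c}\frac{\partial I}{\partial t} + \Omega\cdot\nabla I
  = \sigma\,(B - I)
% difference formulation: substitute D = I - B
\frac{1}{c}\frac{\partial D}{\partial t} + \Omega\cdot\nabla D
  = -\sigma D
    \;-\; \frac{1}{c}\frac{\partial B}{\partial t}
    \;-\; \Omega\cdot\nabla B
```

The derivative terms in B are small when the material is thick and close to equilibrium, which is the qualitative reason for the noise reduction reported above.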

Brooks, Eugene D. [Lawrence Livermore National Laboratory, University of California, P.O. Box 808, Livermore, CA 94550 (United States)]. E-mail: brooks3@llnl.gov; McKinley, Michael Scott [Lawrence Livermore National Laboratory, University of California, P.O. Box 808, Livermore, CA 94550 (United States); Daffin, Frank [Lawrence Livermore National Laboratory, University of California, P.O. Box 808, Livermore, CA 94550 (United States); Szoeke, Abraham [Lawrence Livermore National Laboratory, University of California, P.O. Box 808, Livermore, CA 94550 (United States)

2005-05-20

93

Wavelength selectable photonic transport system applicable to unequal channel spacing

A photonic transport system with optoelectronic wavelength converter applicable to unequal channel spacing is demonstrated. A compact 8×8 wavelength selective switch and a polarization independent electroabsorption modulator are developed for designing the large-scale system

T. Kawai; M. Teshima; H. Yasaka; M. Kobayashi; M. Koga

1998-01-01

94

Photon-mediated electron transport in hybrid circuit-QED

NASA Astrophysics Data System (ADS)

We investigate photon-mediated transport processes in a hybrid circuit-QED structure consisting of two double quantum dots coupled to a common microwave cavity. Under suitable resonance conditions, electron transport in one double quantum dot is facilitated by the transport in the other dot via photon-mediated processes through the cavity. We calculate the average current in the quantum dots, the mean cavity photon occupation, and the current cross-correlations with both a full numerical simulation and a recursive perturbation scheme that allows us to include the influence of the cavity order-by-order in the couplings between the cavity and the quantum dot systems. We can then clearly identify the photon-mediated transport processes.

Lambert, Neill; Flindt, Christian; Nori, Franco

2013-07-01

95

Quantum Monte Carlo simulation of high-field electron transport: An application to silicon dioxide

A new approach to the Monte Carlo simulation of electron transport is presented. The Monte Carlo technique is regarded as a stochastic evaluation of the Green's function expressed as a Feynman path integral. By the proper weighting of the randomly generated trajectories, conventional Monte Carlo simulations can be used to obtain the correct quantum solution. This technique is applied to

Massimo V. Fischetti; D. J. Dimaria

1985-01-01

96

In the algorithm of Leksell GAMMAPLAN (the treatment planning software of Leksell Gamma Knife), scattered photons from the collimator system are presumed to have negligible effects on the Gamma Knife dosimetry. In this study, we used the EGS4 Monte Carlo (MC) technique to study the scattered photons coming out of the single beam channel of Leksell Gamma Knife. The PRESTA

K. N. Yu; Joel Y. C. Cheung

2006-01-01

97

Few-photon transport in low-dimensional systems

We analyze the role of quantum interference effects induced by an embedded two-level system on the photon transport properties in waveguiding structures that exhibit cutoffs (band edges) in their dispersion relation. In particular, we demonstrate that these systems invariably exhibit single-particle photon-atom bound states and strong effective nonlinear responses on the few-photon level. Based on this, we find that the properties of these photon-atom bound states may be tuned via the underlying dispersion relation and that their occupation can be controlled via multiparticle scattering processes. This opens an interesting route for controlling photon transport properties in a number of solid-state-based quantum optical systems and the realization of corresponding functional elements and devices.

Longo, Paolo; Schmitteckert, Peter; Busch, Kurt [Institut fuer Theoretische Festkoerperphysik, Karlsruhe Institute of Technology, Wolfgang-Gaede-Str. 1, D-76131 Karlsruhe (Germany); Institute of Nanotechnology, Karlsruhe Institute of Technology, D-76344 Eggenstein-Leopoldshafen (Germany); Institut fuer Theoretische Festkoerperphysik and DFG-Center for Functional Nanostructures, Karlsruhe Institute of Technology, Wolfgang-Gaede-Str. 1, D-76131 Karlsruhe (Germany)

2011-06-15

98

Radiation transport in random disperse media implemented in the Monte Carlo code PRIZMA

NASA Astrophysics Data System (ADS)

The paper describes PRIZMA capabilities for modeling radiation transport in random disperse media by the Monte Carlo method. It proposes a method for simulating radiation transport in binary media with variable volume fractions.
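
The binary-mixture idea can be sketched with textbook chord-length sampling (illustrative mean chord lengths; this is a generic Markovian-mixture sketch, not the PRIZMA method itself):

```python
import random

def sample_material_segments(path_length, mean_chords, rng, start_mat=0):
    """Chord-length sampling for a binary (Markovian) random mixture:
    along a ray, segments of each material are exponentially distributed
    with the given mean chord lengths; volume fractions follow as
    mean_chords[i] / sum(mean_chords)."""
    segments = []
    s, mat = 0.0, start_mat
    while s < path_length:
        chord = rng.expovariate(1.0 / mean_chords[mat])
        seg = min(chord, path_length - s)  # clip at the end of the path
        segments.append((mat, seg))
        s += seg
        mat = 1 - mat  # switch to the other material of the binary mix
    return segments

rng = random.Random(7)
segments = sample_material_segments(10.0, [0.5, 2.0], rng)
```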

Kashayeva, Elena A.; Malyshkin, Gennady N.; Mukhamadiev, Rim F.

2014-06-01

99

Electron transport through a quantum dot assisted by cavity photons

NASA Astrophysics Data System (ADS)

We investigate transient transport of electrons through a single quantum dot controlled by a plunger gate. The dot is embedded in a finite wire with length Lx assumed to lie along the x-direction with a parabolic confinement in the y-direction. The quantum wire, originally with hard-wall confinement at its ends, ±Lx/2, is weakly coupled at t = 0 to left and right leads acting as external electron reservoirs. The central system, the dot and the finite wire, is strongly coupled to a single cavity photon mode. A non-Markovian density-matrix formalism is employed to take into account the full electron-photon interaction in the transient regime. In the absence of a photon cavity, a resonant current peak can be found by tuning the plunger-gate voltage to lift a many-body state of the system into the source-drain bias window. In the presence of an x-polarized photon field, additional side peaks can be found due to photon-assisted transport. By appropriately tuning the plunger-gate voltage, the electrons in the left lead are allowed to undergo coherent inelastic scattering to a two-photon state above the bias window if initially one photon was present in the cavity. However, this photon-assisted feature is suppressed in the case of a y-polarized photon field due to the anisotropy of our system caused by its geometry.

Abdullah, Nzar Rauf; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar

2013-11-01

100

The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is also adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space. PMID:20935713

Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

2010-10-10

101

Comparison of space radiation calculations for deterministic and Monte Carlo transport codes

For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit and HETC-HEDS. The deterministic codes and Monte Carlo codes complement each other

Zi-Wei Lin; James Adams; Abdulnasser Barghouty; Sharmalee Randeniya; Ram Tripathi; John Watts; Pablo Yepes

2010-01-01

102

Controlling single-photon transport with three-level quantum dots in photonic crystals

NASA Astrophysics Data System (ADS)

We investigate how to control single-photon transport along the photonic crystal waveguide with the recently experimentally demonstrated artificial atoms [i.e., Λ-type quantum dots (QDs)] [S. G. Carter et al., Nat. Photon. 7, 329 (2013), 10.1038/nphoton.2013.41] in an all-optical way. Adopting full quantum theory in real space, we analytically calculate the transport coefficients of single photons scattered by a Λ-type QD embedded in single- and two-mode photonic crystal cavities (PCCs), respectively. Our numerical results clearly show that the photonic transmission properties can be exactly manipulated by adjusting the coupling strengths of the waveguide-cavity and QD-cavity interactions. Specifically, for the PCC with two degenerate orthogonal polarization modes coupled to a Λ-type QD with two degenerate ground states, we find that the photonic transmission spectra show three Rabi-splitting dips and that the present system could serve as a single-photon polarization beam splitter. The feasibility of our proposal with the current photonic crystal technique is also discussed.

Yan, Cong-Hua; Jia, Wen-Zhi; Wei, Lian-Fu

2014-03-01

103

Parallelization of a Monte Carlo particle transport simulation code

NASA Astrophysics Data System (ADS)

We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators were also integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and improve statistics as more particle tracks can be simulated in a short response time.
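
Independent per-worker random streams, a key ingredient of this kind of parallelization, can be sketched as follows (a toy π estimator stands in for the transport kernel; seeds and counts are arbitrary, and a production code would use a parallel generator family such as SPRNG or DCMT):

```python
import random

def worker(seed, n_samples):
    """One Monte Carlo worker with its own independent random stream
    (stands in for one MPI rank)."""
    rng = random.Random(seed)
    # toy transport surrogate: estimate pi by rejection sampling
    return sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)

def combined_estimate(n_workers=4, n_per_worker=50_000):
    # distinct seeds per worker so the streams are independent
    hits = sum(worker(1000 + rank, n_per_worker) for rank in range(n_workers))
    return 4.0 * hits / (n_workers * n_per_worker)

pi_hat = combined_estimate()
```

Because each worker's tally depends only on its own seed, the workers can run concurrently and the partial tallies are simply summed, which is why the speedup stays near-linear for large problems.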

Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

2010-05-01

104

Phonon transport analysis of semiconductor nanocomposites using monte carlo simulations

NASA Astrophysics Data System (ADS)

Nanocomposites are composite materials which incorporate nanosized particles, platelets or fibers. The addition of nanosized phases into the bulk matrix can lead to significantly different material properties compared to their macrocomposite counterparts. For nanocomposites, thermal conductivity is one of the most important physical properties. Manipulation and control of thermal conductivity in nanocomposites have impacted a variety of applications. In particular, it has been shown that the phonon thermal conductivity can be reduced significantly in nanocomposites due to the increase in phonon interface scattering while the electrical conductivity can be maintained. This extraordinary property of nanocomposites has been used to enhance the energy conversion efficiency of thermoelectric devices, which is proportional to the ratio of electrical to thermal conductivity. This thesis investigates phonon transport and thermal conductivity in Si/Ge semiconductor nanocomposites through numerical analysis. The Boltzmann transport equation (BTE) is adopted for the description of phonon thermal transport in the nanocomposites. The BTE employs the particle-like nature of phonons to model heat transfer, which accounts for both ballistic and diffusive transport phenomena. Due to the implementation complexity and computational cost involved, the phonon BTE is difficult to solve in its most generic form. Gray media (frequency-independent phonons) are often assumed in the numerical solution of the BTE using conventional methods such as finite volume and discrete ordinates methods. This thesis solves the BTE using the Monte Carlo (MC) simulation technique, which is more convenient and efficient when non-gray media (frequency-dependent phonons) are considered. In the MC simulation, phonons are displaced inside the computational domain under the various boundary conditions and scattering effects.
In this work, under the relaxation time approximation, thermal transport in the nanocomposites is computed using both gray-media and non-gray-media approaches. The non-gray-media simulations take into consideration the dispersion and polarization effects of phonon transport. The effects of volume fraction, size, shape and distribution of the nanowire fillers on heat flow, and hence thermal conductivity, are studied. In addition, the computational performances of the gray and non-gray media approaches are compared.
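
The core sampling step under the relaxation-time approximation can be sketched as follows (illustrative, roughly Si-like parameter values, not the thesis's dispersion data):

```python
import math
import random

def free_flight_distance(group_velocity, relaxation_time, rng):
    """Relaxation-time approximation: a phonon's free-flight time is
    exponentially distributed with mean tau, so its flight distance has
    mean v_g * tau (the mean free path)."""
    t = -relaxation_time * math.log(1.0 - rng.random())
    return group_velocity * t

rng = random.Random(3)
v_g, tau = 5000.0, 1e-12   # illustrative values: 5 km/s, 1 ps
n = 200_000
mean_path = sum(free_flight_distance(v_g, tau, rng) for _ in range(n)) / n
# sample mean free path should approach v_g * tau = 5e-9 m
```

In a non-gray simulation, v_g and tau would be drawn per phonon from the frequency-dependent dispersion and scattering-rate data rather than held constant.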

Malladi, Mayank

105

NASA Astrophysics Data System (ADS)

Three-dimensional Monte Carlo coupled electron-photon-positron transport calculations are often performed to determine characteristics such as energy or charge deposition in a wide range of systems exposed to radiation fields, such as electronic circuitry in a space environment, tissues exposed to radiotherapy linear accelerator beams, or radiation detectors. Modeling these systems constitutes a challenging problem for the available computational methods and resources because they can involve: (i) very large attenuation, (ii) a large number of secondary particles due to the electron-photon-positron cascade, and (iii) large and highly forward-peaked scattering. This work presents a new automated variance reduction technique, referred to as ADEIS (Angular adjoint-Driven Electron-photon-positron Importance Sampling), that takes advantage of the capability of deterministic methods to rapidly provide approximate information about the complete phase-space in order to automatically evaluate variance reduction parameters. More specifically, this work focuses on the use of discrete ordinates importance functions to evaluate angular transport and collision biasing parameters, and uses them through a modified implementation of the weight-window technique. The application of this new method to complex Monte Carlo simulations has resulted in speedups as high as five orders of magnitude. Due to numerical difficulties in obtaining physical importance functions devoid of numerical artifacts, a limited form of smoothing was implemented to complement a scheme for automatic selection of discretization parameters. This scheme improves the robustness, efficiency and statistical reliability of the methodology by optimizing the accuracy of the importance functions with respect to the additional computational cost of generating and using these functions. It was shown that it is essential to bias different species of particles with their specific importance functions.
In the case of electrons and positrons, even though the physical scattering and energy-loss models are similar, the importance of positrons can be many orders of magnitude larger than the electron importance. More specifically, not explicitly biasing the positrons with their own set of importance functions results in an undersampling of the annihilation photons and, consequently, introduces a bias in the photon energy spectra. It was also shown that the implementation of the weight-window technique within the condensed-history algorithm of a Monte Carlo code requires that the biasing be performed at the end of each major energy step. Applying the weight window earlier in the step, i.e., before the last substep, will result in a biased electron energy spectrum. This bias is a consequence of systematic errors introduced in the energy-loss prediction due to an inappropriate application of the weight-window technique where the actual path length differs from the pre-determined path length used for evaluating the energy-loss straggling distribution.
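
The weight-window splitting/roulette logic discussed above can be sketched in its generic textbook form (window bounds are arbitrary; this is not the ADEIS implementation):

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Textbook weight-window check: split particles whose weight is
    above the window, play Russian roulette on those below it.
    Expected total weight is preserved in all branches."""
    w_survive = 0.5 * (w_low + w_high)
    if weight > w_high:
        # split into n copies of reduced weight
        n_split = min(int(weight / w_high) + 1, 10)
        return [weight / n_split] * n_split
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_survive
        if rng.random() < weight / w_survive:
            return [w_survive]   # survivor absorbs the killed weight
        return []                # particle terminated
    return [weight]              # inside the window: unchanged

rng = random.Random(0)
```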

Dionne, Benoit

106

Monte Carlo-based revised values of dose rate constants at discrete photon energies

Absorbed dose rate to water at 0.2 cm and 1 cm due to a point isotropic photon source as a function of photon energy is calculated using the EDKnrc user-code of the EGSnrc Monte Carlo system. This code system utilizes the widely used XCOM photon cross-section dataset for the calculation of absorbed dose to water. Using the above dose rates, dose rate constants are calculated. The air-kerma strength Sk needed for deriving the dose rate constant is based on the mass-energy absorption coefficient compilations of Hubbell and Seltzer published in 1995. A comparison of absorbed dose rates in water at the above distances to the published values reflects the differences in photon cross-section datasets in the low-energy region (the difference is up to 2% in dose rate values at 1 cm in the energy range 30–50 keV and up to 4% at 0.2 cm at 30 keV). A maximum difference of about 8% is observed in the dose rate value at 0.2 cm at 1.75 MeV when compared to the published value. Sk calculations based on the compilation of Hubbell and Seltzer show a difference of up to 2.5% in the low-energy region (20–50 keV) when compared to the published values. The deviations observed in the values of dose rate and Sk affect the values of dose rate constants by up to 3%.
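
The quantities involved are related by simple arithmetic; a sketch with purely illustrative numbers (not values from the paper):

```python
def dose_rate_constant(dose_rate_1cm, air_kerma_strength):
    """TG-43-style dose rate constant: Lambda = D-dot(1 cm) / S_K,
    in cGy h^-1 U^-1 when the dose rate is in cGy/h and S_K in U."""
    return dose_rate_1cm / air_kerma_strength

def point_source_scaling(dose_rate_1cm, r_cm):
    """Inverse-square scaling for a point isotropic source in vacuum;
    real water dose rates additionally need attenuation and scatter."""
    return dose_rate_1cm / r_cm ** 2

# purely illustrative numbers
lam = dose_rate_constant(1.036, 1.0)
near = point_source_scaling(1.0, 0.2)
```

The inverse-square factor of 25 between 1 cm and 0.2 cm is why small cross-section differences are amplified at the shorter distance.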

Selvam, T. Palani; Shrivastava, Vandana; Chourasiya, Ghanashyam; Babu, D. Appala Raju

2014-01-01

107

The adult reference male and female computational voxel phantoms recommended by the ICRP are adapted into the Monte Carlo transport code FLUKA. The FLUKA code is then utilised for the computation of dose conversion coefficients (DCCs), expressed in absorbed dose per air kerma free-in-air, for colon, lungs, stomach wall, breast, gonads, urinary bladder, oesophagus, liver and thyroid due to a broad parallel beam of mono-energetic photons impinging in anterior-posterior and posterior-anterior directions in the energy range of 15 keV-10 MeV. The computed DCCs of colon, lungs, stomach wall and breast are found to be in good agreement with the results published in ICRP publication 110. The present work thus validates the use of the FLUKA code in the computation of organ DCCs for photons using the ICRP adult voxel phantoms. Further, the DCCs for gonads, urinary bladder, oesophagus, liver and thyroid are evaluated and compared with the results published in ICRP 74 in the above-mentioned energy range and geometries. Significant differences in DCCs are observed for breast, testis and thyroid above 1 MeV, and for most of the organs at energies below 60 keV, in comparison with the results published in ICRP 74. The DCCs of the female voxel phantom were found to be higher than those of the male phantom for almost all organs in both geometries. PMID:21147784

Patni, H K; Nadar, M Y; Akar, D K; Bhati, S; Sarkar, P K

2011-11-01

108

Monte Carlo-based energy response studies of diode dosimeters in radiotherapy photon beams.

This study presents Monte Carlo-calculated absolute and normalized (relative to a (60)Co beam) sensitivity values for a variety of commercially available silicon diode dosimeters used in radiotherapy photon beams in the energy range (60)Co-24 MV. These values were obtained at 5 cm depth along the central axis of a water-equivalent phantom for a 10 cm × 10 cm field size. The Monte Carlo calculations were based on the EGSnrc code system. The diode dosimeters considered in the calculations have different buildup materials such as aluminum, brass, copper, and stainless steel + epoxy. The calculated normalized sensitivity values of the diode dosimeters were then compared to previously published measured values for photon beams at (60)Co-20 MV. The comparison showed reasonable agreement for some diode dosimeters and deviations of 5-17% (17% for the 3.4 mm brass buildup case for a 10 MV beam) for others. The larger deviations from the measurements suggest that these models of the diode dosimeter were too simple. The effect of wall materials on the absorbed dose to the diode was studied and the results are presented. Spencer-Attix and Bragg-Gray stopping power ratios (SPRs) of water-to-diode were calculated at 5 cm depth in water. The Bragg-Gray SPRs of water-to-diode compare well with Spencer-Attix SPRs for Δ = 100 keV and above at all beam qualities. PMID:23180010

Arun, C; Palani Selvam, T; Dinkar, Verma; Munshi, Prabhat; Kalra, Manjit Singh

2013-01-01

109

NASA Astrophysics Data System (ADS)

A photon-cell interactive Monte Carlo (pciMC) that tracks photon migration in both the extra- and intracellular spaces is developed without using the macroscopic scattering phase functions and anisotropy factors required by conventional Monte Carlo methods (MCs). The interaction of photons at the plasma-cell boundary of randomly oriented 3-D biconcave red blood cells (RBCs) is modeled using geometric optics. The pciMC incorporates different photon velocities from the extra- to the intracellular space, whereas the conventional MC treats RBCs as points in space with a constant velocity. In comparison to the experiments, the pciMC yielded mean errors in photon migration time of 9.8 ± 6.8% and 11.2 ± 8.5% for suspensions of small and large RBCs (RBCsmall, RBClarge), averaged over the optically diffusing region from 2000 to 4000 µm, while the conventional random walk Monte Carlo simulation gave statistically higher mean errors of 19.0 ± 5.8% (p < 0.047) and 21.7 ± 19.1% (p < 0.055), respectively. The gradients of optical density in the diffusing region yielded statistically insignificant differences between the pciMC and the experiments, with the mean errors between them being 1.4 and 0.9% in RBCsmall and RBClarge, respectively. The pciMC based on geometric optics can be used to accurately predict photon migration in an optically diffusing, turbid medium.
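
The geometric-optics step at a plasma-cell boundary amounts to the vector form of Snell's law; a sketch with illustrative refractive indices (plasma ~1.33, RBC interior ~1.40 are assumed values, not the pciMC constants):

```python
import math

def refract(direction, normal, n1, n2):
    """Vector Snell's law at an interface: 'direction' is the unit
    propagation vector travelling from medium n1 into n2, 'normal' is
    the unit surface normal pointing against the incoming ray.
    Returns the refracted unit vector, or None on total internal
    reflection."""
    cos_i = -sum(d * n for d, n in zip(direction, normal))
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:
        return None  # total internal reflection
    cos_t = math.sqrt(k)
    return tuple(eta * d + (eta * cos_i - cos_t) * n
                 for d, n in zip(direction, normal))

# normal incidence passes straight through
straight = refract((0.0, 0.0, 1.0), (0.0, 0.0, -1.0), 1.33, 1.40)
```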

Sakota, Daisuke; Takatani, Setsuo

2010-11-01

110

Confidence interval procedures for Monte Carlo transport simulations

The problem of obtaining valid confidence intervals based on estimates from sampled distributions using Monte Carlo particle transport simulation codes such as MCNP is examined. Such intervals can cover the true parameter of interest at a lower than nominal rate if the sampled distribution is extremely right-skewed by large tallies. Modifications to the standard theory of confidence intervals are discussed and compared with some existing heuristics, including batched means normality tests. Two new types of diagnostics are introduced to assess whether the conditions of central limit theorem-type results are satisfied: the relative variance of the variance determines whether the sample size is sufficiently large, and estimators of the slope of the right tail of the distribution are used to indicate the number of moments that exist. A simulation study is conducted to quantify the relationship between various diagnostics and coverage rates and to find sample-based quantities useful in indicating when intervals are expected to be valid. Simulated tally distributions are chosen to emulate behavior seen in difficult particle transport problems. Measures of variation in the sample variance s² are found to be much more effective than existing methods in predicting when coverage will be near nominal rates. Batched means tests are found to be overly conservative in this regard. A simple but pathological MCNP problem is presented as an example of false convergence using existing heuristics. The new methods readily detect the false convergence and show that the results of the problem, which are a factor of 4 too small, should not be used. Recommendations are made for applying these techniques in practice, using the statistical output currently produced by MCNP.
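
The "relative variance of the variance" diagnostic can be sketched directly from sample moments (this is the MCNP-style estimator to the best of my understanding; the 0.1 threshold is the commonly quoted rule of thumb):

```python
import random

def variance_of_variance(tallies):
    """Relative variance of the sample variance (the MCNP 'VOV'
    diagnostic): VOV = sum((x-m)^4) / (sum((x-m)^2))^2 - 1/N.
    Values above ~0.1 suggest the sample variance itself is not yet
    reliable, i.e. the sample size is too small."""
    n = len(tallies)
    m = sum(tallies) / n
    s2 = sum((x - m) ** 2 for x in tallies)
    s4 = sum((x - m) ** 4 for x in tallies)
    return s4 / s2 ** 2 - 1.0 / n

rng = random.Random(5)
well_behaved = [rng.random() for _ in range(10_000)]
vov = variance_of_variance(well_behaved)
```

For a heavy right tail (the pathological case described above), the fourth-moment sum is dominated by a few large tallies and the VOV stays large even as the mean appears to converge.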

Pederson, S.P. [Georgia Inst. of Tech., Atlanta, GA (United States). School of Industrial and Systems Engineering; Forster, R.A.; Booth, T.E. [Los Alamos National Lab., NM (United States)

1997-09-01

111

Purpose: The goal of this work is to compare D(m,m) (radiation transported in medium; dose scored in medium) and D(w,m) (radiation transported in medium; dose scored in water) obtained from Monte Carlo (MC) simulations for a subset of human tissues of interest in low energy photon brachytherapy. Using low dose rate seeds and an electronic brachytherapy source (EBS), the authors quantify the large cavity theory conversion factors required. The authors also assess whether applying large cavity theory utilizing the sources' initial photon spectra and average photon energy induces errors related to spatial spectral variations. First, ideal spherical geometries were investigated, followed by clinical brachytherapy LDR seed implants for breast and prostate cancer patients. Methods: Two types of dose calculations are performed with the GEANT4 MC code. (1) For several human tissues, dose profiles are obtained in spherical geometries centered on four types of low energy brachytherapy sources: (125)I, (103)Pd, and (131)Cs seeds, as well as an EBS operating at 50 kV. Ratios of D(w,m) over D(m,m) are evaluated in the 0-6 cm range. In addition to mean tissue composition, compositions corresponding to one standard deviation from the mean are also studied. (2) Four clinical breast (using (103)Pd) and prostate (using (125)I) brachytherapy seed implants are considered. MC dose calculations are performed based on postimplant CT scans using prostate and breast tissue compositions. PTV D90 values are compared for D(w,m) and D(m,m). Results: (1) Differences (D(w,m)/D(m,m)-1) of -3% to 70% are observed for the investigated tissues. For a given tissue, D(w,m)/D(m,m) is similar for all sources within 4% and does not vary more than 2% with distance due to very moderate spectral shifts. Variations of tissue composition about the assumed mean composition influence the conversion factors up to 38%.
(2) The ratio of D90(w,m) over D90(m,m) for clinical implants matches D(w,m)/D(m,m) at 1 cm from the single point sources. Conclusions: Given the small variation with distance, using conversion factors based on the emitted photon spectrum (or its mean energy) of a given source introduces minimal error. The large differences observed between scoring schemes underline the need for guidelines on the choice of media for dose reporting. Providing such guidelines is beyond the scope of this work.
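
Large cavity theory reduces the conversion factor to a spectrum-weighted ratio of mass energy-absorption coefficients; a sketch with placeholder data (the spectrum and coefficient values are invented for illustration, not taken from the paper):

```python
def large_cavity_factor(spectrum, muen_water, muen_medium):
    """Large cavity theory estimate of D(w,m)/D(m,m): the ratio of
    energy-fluence-weighted mass energy-absorption coefficients of
    water to those of the medium."""
    num = sum(w * muen_water[e] for e, w in spectrum.items())
    den = sum(w * muen_medium[e] for e, w in spectrum.items())
    return num / den

# hypothetical two-line spectrum: energy (keV) -> energy-fluence weight
spectrum = {27.4: 0.7, 31.4: 0.3}
muen_w = {27.4: 0.46, 31.4: 0.30}   # cm^2/g, illustrative
muen_m = {27.4: 0.42, 31.4: 0.27}   # cm^2/g, illustrative
factor = large_cavity_factor(spectrum, muen_w, muen_m)
```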

Landry, Guillaume; Reniers, Brigitte; Pignol, Jean-Philippe; Beaulieu, Luc; Verhaegen, Frank [Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Department of Radiation Oncology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario M4N 3M5 (Canada); Departement de Radio-Oncologie et Centre de Recherche en Cancerologie, Universite Laval, CHUQ Pavillon L'Hotel-Dieu de Quebec, Quebec G1R 2J6 (Canada) and Departement de Physique, de Genie Physique et d'Optique, Universite Laval, Quebec G1K 7P4 (Canada); Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands) and Department of Oncology, McGill University, Montreal General Hospital, Montreal, Quebec H3G 1A4 (Canada)

2011-03-15

112

NASA Astrophysics Data System (ADS)

The application of a strong transverse magnetic field to a volume undergoing irradiation by a photon beam can produce localized regions of dose enhancement and dose reduction. This study uses the PENELOPE Monte Carlo code to investigate the effect of a slice of uniform transverse magnetic field on a photon beam using different magnetic field strengths and photon beam energies. The maximum and minimum dose yields obtained in the regions of dose enhancement and dose reduction are compared to those obtained with the EGS4 Monte Carlo code in a study by Li et al (2001), who investigated the effect of a slice of uniform transverse magnetic field (1 to 20 Tesla) applied to high-energy photon beams. PENELOPE simulations yielded maximum dose enhancements and dose reductions as much as 111% and 77%, respectively, where most results were within 6% of the EGS4 result. Further PENELOPE simulations were performed with the Sheikh-Bagheri and Rogers (2002) input spectra for 6, 10 and 15 MV photon beams, yielding results within 4% of those obtained with the Mohan et al (1985) spectra. Small discrepancies between a few of the EGS4 and PENELOPE results prompted an investigation into the influence of the PENELOPE elastic scattering parameters C1 and C2 and low-energy electron and photon transport cut-offs. Repeating the simulations with smaller scoring bins improved the resolution of the regions of dose enhancement and dose reduction, especially near the magnetic field boundaries where the dose deposition can abruptly increase or decrease. This study also investigates the effect of a magnetic field on the low-energy electron spectrum that may correspond to a change in the radiobiological effectiveness (RBE). Simulations show that the increase in dose is achieved predominantly through the lower energy electron population.
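
A back-of-the-envelope check on why tesla-scale fields matter here: the relativistic electron gyroradius at therapy-beam energies is millimetre-scale, comparable to the scoring-bin sizes near the field boundaries (standard physics, not a calculation from the paper):

```python
import math

def electron_larmor_radius(kinetic_energy_mev, b_tesla):
    """Relativistic gyroradius r = p/(eB) for an electron moving
    perpendicular to the field; with pc in MeV and B in tesla,
    r [m] = pc / (299.792458 * B)."""
    mc2 = 0.511  # electron rest energy, MeV
    pc = math.sqrt((kinetic_energy_mev + mc2) ** 2 - mc2 ** 2)
    return pc / (299.792458 * b_tesla)

r_1mev_1t = electron_larmor_radius(1.0, 1.0)  # a few millimetres
```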

Nettelbeck, H.; Takacs, G. J.; Rosenfeld, A. B.

2008-09-01

113

The application of a strong transverse magnetic field to a volume undergoing irradiation by a photon beam can produce localized regions of dose enhancement and dose reduction. This study uses the PENELOPE Monte Carlo code to investigate the effect of a slice of uniform transverse magnetic field on a photon beam using different magnetic field strengths and photon beam energies. The maximum and minimum dose yields obtained in the regions of dose enhancement and dose reduction are compared to those obtained with the EGS4 Monte Carlo code in a study by Li et al (2001), who investigated the effect of a slice of uniform transverse magnetic field (1 to 20 Tesla) applied to high-energy photon beams. PENELOPE simulations yielded maximum dose enhancements and dose reductions as much as 111% and 77%, respectively, where most results were within 6% of the EGS4 result. Further PENELOPE simulations were performed with the Sheikh-Bagheri and Rogers (2002) input spectra for 6, 10 and 15 MV photon beams, yielding results within 4% of those obtained with the Mohan et al (1985) spectra. Small discrepancies between a few of the EGS4 and PENELOPE results prompted an investigation into the influence of the PENELOPE elastic scattering parameters C(1) and C(2) and low-energy electron and photon transport cut-offs. Repeating the simulations with smaller scoring bins improved the resolution of the regions of dose enhancement and dose reduction, especially near the magnetic field boundaries where the dose deposition can abruptly increase or decrease. This study also investigates the effect of a magnetic field on the low-energy electron spectrum that may correspond to a change in the radiobiological effectiveness (RBE). Simulations show that the increase in dose is achieved predominantly through the lower energy electron population. PMID:18723929

Nettelbeck, H; Takacs, G J; Rosenfeld, A B

2008-09-21

114

Progress Towards Optimally Efficient Schemes for Monte Carlo Thermal Radiation Transport

In this summary we review the complementary research being undertaken at AWE and LLNL aimed at developing optimally efficient algorithms for Monte Carlo thermal radiation transport based on the difference formulation. We conclude by presenting preliminary results on the application of Newton-Krylov methods for solving the Symbolic Implicit Monte Carlo (SIMC) energy equation.

Smedley-Stevenson, R P; Brooks III, E D

2007-09-26

115

3-D Combinatorial Geometry in the MERCURY Monte Carlo Particle Transport Code

Lawrence Livermore National Laboratory is in the process of developing a new Monte Carlo Particle Transport code named MERCURY. This new code features a 3-D Combinatorial Geometry tracking algorithm. This paper details some of the characteristics of this Monte Carlo tracker.

Greenman, G

2004-09-27

116

Simulation is often used to predict the response of gamma-ray spectrometers in technology viability and comparative studies for homeland and national security scenarios. Candidate radiation transport methods generally fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are the most heavily used in the detection community and are particularly effective for calculating pulse-height spectra

Leon E. Smith; Christopher J. Gesh; Richard T. Pagh; Erin A. Miller; Mark W. Shaver; Eric D. Ashbaker; Michael T. Batdorf; J. Edward Ellis; William R. Kaye; Ronald J. McConn; George H. Meriwether; Jennifer J. Ressler; Andrei B. Valsan; Todd A. Wareing

2008-01-01

117

Photon mediated transport and crystallization in optically driven Rydberg gases

NASA Astrophysics Data System (ADS)

We show that excitations in a gas of atoms driven to Rydberg states by near-resonant laser radiation in a two-photon coupling scheme experience photon-mediated transport. Thus, even if the center-of-mass motion of the atoms can be neglected, this results in a kinetic Hamiltonian for the Rydberg excitations. The corresponding mass is identical to that of the dark-state polaritons of the optical coupling scheme. The kinetic energy competes with the Rydberg dipole-dipole interactions and can prevent the formation of quasi-crystal structures. Using DMRG simulations we calculate the Luttinger parameter for a one-dimensional gas of resonantly driven Rydberg atoms, taking into account the photon-mediated transport, and derive conditions under which quasi-crystallization can be observed.

Otterbach, Johannes; Lauer, Achim; Muth, Dominik; Fleischhauer, Michael

2012-06-01

118

Monte Carlo Simulation of the Benchmark Experiment on Neutron Transport in Thick Sodium.

National Technical Information Service (NTIS)

Monte Carlo simulation of the benchmark experiment on neutron transport through thick sodium has been taken up to validate some of the modified variance reduction devices and random sampling techniques developed here. The transmitted neutron spectrum meas...

I. Murthy K. P. N. Murthy

1981-01-01

119

Memory Bottlenecks and Memory Contention in Multi-Core Monte Carlo Transport Codes

NASA Astrophysics Data System (ADS)

We have extracted a kernel that executes only the most computationally expensive steps of the Monte Carlo particle transport algorithm - the calculation of macroscopic cross sections - in an effort to expose bottlenecks within multi-core, shared memory architectures.
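In serial form, the kernel the authors isolate — the macroscopic cross-section calculation — reduces to an energy-grid search, an interpolation, and a sum over nuclides. A minimal sketch with synthetic data; the names (`egrid`, `micro_xs`, `densities`) and the linear-interpolation scheme are illustrative assumptions, not the kernel's actual API:

```python
import bisect

def macro_xs(energy, egrid, micro_xs, densities):
    """Total macroscopic cross section at `energy` (synthetic data).

    egrid     : ascending energy grid shared by all nuclides
    micro_xs  : per-nuclide lists of microscopic cross sections on egrid
    densities : per-nuclide number densities
    """
    # Binary search for the bracketing grid interval -- the memory-bound
    # step that dominates runtime in multi-core transport kernels.
    i = bisect.bisect_right(egrid, energy) - 1
    i = max(0, min(i, len(egrid) - 2))
    f = (energy - egrid[i]) / (egrid[i + 1] - egrid[i])
    total = 0.0
    for sigma, n in zip(micro_xs, densities):
        # Linear interpolation on the grid, weighted by number density.
        total += n * (sigma[i] + f * (sigma[i + 1] - sigma[i]))
    return total
```

Because each lookup touches scattered grid data for every nuclide, many threads doing this concurrently contend for memory bandwidth, which is the bottleneck the study probes.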

Tramm, John R.; Siegel, Andrew R.

2014-06-01

120

National Technical Information Service (NTIS)

The techniques of learning theory and pattern recognition are used to learn splitting surface locations for the Monte Carlo neutron transport code MCN. A study is performed to determine default values for several pattern recognition and learning parameter...

J. L. Maconald E. D. Cashwell

1978-01-01

121

Monte Carlo-based revised values of dose rate constants at discrete photon energies.

Absorbed dose rate to water at 0.2 cm and 1 cm due to a point isotropic photon source as a function of photon energy is calculated using the EDKnrc user-code of the EGSnrc Monte Carlo system. This code system utilizes the widely used XCOM photon cross-section dataset for the calculation of absorbed dose to water. Using the above dose rates, dose rate constants are calculated. The air-kerma strength Sk needed for deriving the dose rate constant is based on the mass-energy absorption coefficient compilations of Hubbell and Seltzer published in 1995. A comparison of absorbed dose rates in water at the above distances to the published values reflects the differences in the photon cross-section datasets in the low-energy region (the difference is up to 2% in dose rate values at 1 cm in the energy range 30-50 keV and up to 4% at 0.2 cm at 30 keV). A maximum difference of about 8% is observed in the dose rate value at 0.2 cm at 1.75 MeV when compared to the published value. Sk calculations based on the compilation of Hubbell and Seltzer show a difference of up to 2.5% in the low-energy region (20-50 keV) when compared to the published values. The deviations observed in the values of dose rate and Sk affect the values of dose rate constants by up to 3%. PMID:24600166
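How dataset differences propagate into the dose rate constant is simple arithmetic: the constant is the dose rate at the reference point divided by the air-kerma strength, so relative shifts in the two quantities partly cancel or compound. A sketch with illustrative numbers (not values from the paper):

```python
def dose_rate_constant(dose_rate_ref, sk):
    """Dose rate constant: dose rate to water at the reference point
    (e.g. 1 cm) per unit air-kerma strength Sk."""
    return dose_rate_ref / sk

# If the dose rate shifts by +2% and Sk by +2.5% (illustrative numbers),
# the constant shifts by roughly the difference of the two:
shift = 100.0 * (dose_rate_constant(1.02, 1.025) / dose_rate_constant(1.0, 1.0) - 1.0)
```

With these numbers `shift` is about -0.5%, showing why percent-level dataset differences in both quantities can move the constant by a few percent at most.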

Selvam, T Palani; Shrivastava, Vandana; Chourasiya, Ghanashyam; Babu, D Appala Raju

2014-01-01

122

Photon propagation correction in 3D photoacoustic image reconstruction using Monte Carlo simulation

NASA Astrophysics Data System (ADS)

Purpose: The purpose of this study is to develop a new 3-D iterative Monte Carlo algorithm to recover the heterogeneous distribution of molecular absorbers within a solid tumor. Introduction: Spectroscopic imaging (PCT-S) has the potential to identify a molecular species and quantify its concentration with high spatial fidelity. To accomplish this task, correcting for tissue attenuation losses during photon propagation in heterogeneous 3D objects is necessary. An iterative recovery algorithm has been developed to extract 3D heterogeneous parametric maps of absorption coefficients, implementing a MC algorithm based on a single-source photoacoustic scanner, and to determine the influence of the reduced scattering coefficient on the uncertainty of the recovered absorption coefficient. Material and Methods: This algorithm is tested for spheres and ellipsoids embedded in a simulated mouse torso with optical absorption values ranging from 0.01-0.5/cm, for the same objects where the optical scattering is unknown (μs' = 7-13/cm), and for a heterogeneous distribution of absorbers. Results: Systematic and statistical errors in μa with a priori knowledge of μs' and g are <2% (sphere) and <4% (ellipsoid) for all μa, and without a priori knowledge of μs' are <3% and <6%. For heterogeneous distributions of μa, errors are <4% and <5.5% for each object with a priori knowledge of μs' and g, rising to 7% and 14% when μs' varied from 7-13/cm. Conclusions: A Monte Carlo code has been successfully developed and used to correct for photon propagation effects in simulated objects consistent with tumors.

Cheong, Yaw Jye; Stantz, Keith M.

2010-02-01

123

Monte Carlo Modeling of Electron Transport in Repeated Overshoot Structures,

National Technical Information Service (NTIS)

Repeated velocity overshoot has been proposed as a way of obtaining high average velocities over significant distances in semiconductor devices. The potential of this concept is examined using a fully self-consistent particle-field Monte Carlo simulation....

G. I. Haddad T. L. Crandle J. R. East P. A. Blakey

1989-01-01

124

We examine the relative error of Monte Carlo simulations of radiative transport that employ two commonly used estimators that account for absorption differently, either discretely, at interaction points, or continuously, between interaction points. We provide a rigorous derivation of these discrete and continuous absorption weighting estimators within a stochastic model that we show to be equivalent to an analytic model, based on the radiative transport equation (RTE). We establish that both absorption weighting estimators are unbiased and, therefore, converge to the solution of the RTE. An analysis of spatially resolved reflectance predictions provided by these two estimators reveals no advantage to either in cases of highly scattering and highly anisotropic media. However, for moderate to highly absorbing media or isotropically scattering media, the discrete estimator provides smaller errors at proximal source locations while the continuous estimator provides smaller errors at distal locations. The origin of these differing variance characteristics can be understood through examination of the distribution of exiting photon weights.
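The two estimators differ only in how the photon weight absorbs: discretely, by multiplying by the single-scattering albedo at each collision, or continuously, by multiplying by exp(-μa·s) along each path segment. A minimal sketch in a 1-D rod geometry (the geometry and function names are illustrative, not the authors' code):

```python
import math
import random

def transmit(mu_a, mu_s, length, n, mode, rng):
    """Estimate transmission through a 1-D rod [0, length] with
    forward/backward scattering, using either absorption estimator."""
    mu_t = mu_a + mu_s
    total = 0.0
    for _ in range(n):
        x, d, w = 0.0, 1, 1.0
        while True:
            if mode == "discrete":
                s = rng.expovariate(mu_t)                # collide at rate mu_t
            else:
                s = rng.expovariate(mu_s) if mu_s > 0 else math.inf
            edge = (length - x) if d > 0 else x          # distance to a face
            if mode == "continuous":
                w *= math.exp(-mu_a * min(s, edge))      # attenuate en route
            if s >= edge:                                # photon escapes
                if d > 0:
                    total += w                           # transmitted weight
                break
            x += d * s
            if mode == "discrete":
                w *= mu_s / mu_t                         # survive the collision
            d = rng.choice((-1, 1))                      # scatter in 1-D
    return total / n
```

Both estimators are unbiased, so their means agree; only their variances differ, which mirrors the proximal/distal behaviour the abstract reports.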

Hayakawa, Carole K.; Spanier, Jerome; Venugopalan, Vasan

2014-01-01

125

SHIELD-HIT12A - a Monte Carlo particle transport program for ion therapy research

NASA Astrophysics Data System (ADS)

Purpose: The Monte Carlo (MC) code SHIELD-HIT simulates the transport of ions through matter. Since SHIELD-HIT08 we have added numerous features that improve speed, usability and the underlying physics, and thereby the user experience. The "-A" fork of SHIELD-HIT also aims to attach SHIELD-HIT to a heavy ion dose optimization algorithm to provide MC-optimized treatment plans that include radiobiology. Methods: SHIELD-HIT12A is written in FORTRAN and carefully retains platform independence. A powerful scoring engine is implemented, scoring relevant quantities such as dose and track-averaged LET. It supports native formats compatible with the heavy ion treatment planning system TRiP. Stopping power files follow the ICRU standard and are generated using the libdEdx library, which allows the user to choose from a multitude of stopping power tables. Results: SHIELD-HIT12A runs on Linux and Windows platforms. In our experience, new users quickly learn to use SHIELD-HIT12A and set up new geometries. Contrary to previous versions of SHIELD-HIT, the 12A distribution comes with easy-to-use example files and an English manual. A new implementation of Vavilov straggling resulted in a massive reduction of computation time. Scheduled for later release are CT import and photon-electron transport. Conclusions: SHIELD-HIT12A is an interesting alternative ion transport engine. Apart from being a flexible particle therapy research tool, it can also serve as a back end for an MC ion treatment planning system. More information about SHIELD-HIT12A and a demo version can be found at http://www.shieldhit.org.

Bassler, N.; Hansen, D. C.; Lühr, A.; Thomsen, B.; Petersen, J. B.; Sobolevsky, N.

2014-03-01

126

A macro Monte Carlo method for electron beam dose calculations

The macro Monte Carlo (MMC) method is a fast Monte Carlo (MC) algorithm for high energy electron transport in an absorbing medium. Incident electrons are transported in large-scale macroscopic steps through the absorber. Electron parameters after each step are calculated from probability distributions. Transport of secondary electrons and bremsstrahlung photons is taken into account. For electron beam dose calculations, the
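The large-step idea can be caricatured in a few lines: rather than simulating every interaction, each macroscopic step draws its energy loss from a precomputed distribution. A Gaussian stands in here for the real MMC distributions (which come from microscopic pre-simulations), and the step and straggling parameters below are invented for illustration:

```python
import random

def macro_mc_depth(e0, step, stopping_power, sigma, n, rng):
    """Toy macro-step transport: mean penetration depth of electrons that
    lose a randomly drawn amount of energy per large step."""
    total = 0.0
    for _ in range(n):
        e, z = e0, 0.0
        while e > 0.0:
            z += step                                  # one macroscopic step
            # Per-step energy loss sampled from a precomputed distribution
            # (Gaussian here purely for illustration).
            loss = max(0.0, rng.gauss(step * stopping_power, sigma))
            e -= loss
        total += z
    return total / n
```

The speedup of the real method comes from replacing thousands of microscopic interactions per step with a single draw like this one.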

H. Neuenschwander; E. J. Born

1992-01-01

127

Monte Carlo calculations of pressure vessel (PV) neutron fluence have been performed to benchmark discrete ordinates (SN) transport methods. These calculations, along with measured data at the ex-vessel cavity dosimeter, provide a means to examine various uncertainties associated with the SN transport calculations. For the purpose of the PV fluence calculations, synthesized 3-D deterministic models are shown to produce results

J. C. Wagner; A. Haghighat; B. G. Petrovic; H. L. Hanshaw

128

Monte Carlo transient phonon transport in silicon and germanium at nanoscales

Heat transport at nanoscales in semiconductors is investigated with a statistical method. The Boltzmann transport equation (BTE), which characterizes phonon motion and interaction within the crystal lattice, has been simulated with a Monte Carlo technique. Our model takes into account media frequency properties through the dispersion curves for longitudinal and transverse acoustic branches. The BTE collisional term involving phonon scattering

David Lacroix; Karl Joulain; Denis Lemonnier

2005-01-01

129

LDRD project 151362 : low energy electron-photon transport.

At sufficiently high energies, the wavelengths of electrons and photons are short enough to interact with only one atom at a time, leading to the popular "independent-atom approximation". We attempted to incorporate atomic structure in the generation of cross sections (which embody the modeled physics) to improve transport at lower energies. We document our successes and failures. This was a three-year LDRD project. The core team consisted of a radiation-transport expert, a solid-state physicist, and two DFT experts.

Kensek, Ronald Patrick; Hjalmarson, Harold Paul; Magyar, Rudolph J.; Bondi, Robert James; Crawford, Martin James

2013-09-01

130

NASA Astrophysics Data System (ADS)

In particle transport computations, the Monte Carlo simulation method is a widely used algorithm. There are several Monte Carlo codes available that perform particle transport simulations. However, the geometry packages and geometric modeling capability of Monte Carlo codes are limited, as they cannot handle complicated geometries made up of complex surfaces. Previous research exists that takes advantage of the modeling capabilities of CAD software. The two major approaches are the Converter approach and the CAD-engine-based approach. By carefully analyzing the strategies and algorithms of these two approaches, the CAD-engine-based approach has been identified as the more promising approach. Though currently the performance of this approach is not satisfactory, there is room for improvement. The development and implementation of an improved CAD-based approach is the focus of this thesis. Algorithms to accelerate the CAD-engine-based approach are studied. The major acceleration algorithm is the Oriented Bounding Box algorithm, which is used in computer graphics. The difference in application between computer graphics and particle transport has been considered and the algorithm has been modified for particle transport. The major work of this thesis has been the development of the MCNPX/CGM code and the testing, benchmarking and implementation of the acceleration algorithms. MCNPX is a Monte Carlo code and CGM is a CAD geometry engine. A facet representation of the geometry provided the least slowdown of the Monte Carlo code. The CAD model generates the facet representation. The Oriented Bounding Box algorithm was the fastest acceleration technique adopted for this work. The slowdown of MCNPX/CGM relative to MCNPX was reduced to a factor of 3 when the facet model is used. MCNPX/CGM has been successfully validated against test problems in medical physics and a fusion energy device. MCNPX/CGM gives exactly the same results as the standard MCNPX when an MCNPX geometry model is available. For the case of the complicated fusion device, the stellarator, the MCNPX/CGM results closely match a one-dimensional model calculation performed by the ARIES team.
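The bounding-box culling idea behind this acceleration can be illustrated with the standard slab method. The sketch below uses an axis-aligned box for brevity (the thesis uses oriented boxes) and the names are ours:

```python
def ray_hits_box(origin, direction, lo, hi):
    """Slab-method ray/axis-aligned-box test: returns True if the ray
    could enter the box, letting a tracker skip the detailed surface
    checks for geometry contained in boxes the ray cannot reach."""
    t_near, t_far = 0.0, float("inf")
    for o, d, a, b in zip(origin, direction, lo, hi):
        if abs(d) < 1e-12:                 # ray parallel to this slab pair
            if o < a or o > b:
                return False
        else:
            t1, t2 = (a - o) / d, (b - o) / d
            if t1 > t2:
                t1, t2 = t2, t1            # order the slab intersections
            t_near, t_far = max(t_near, t1), min(t_far, t2)
            if t_near > t_far:             # intervals no longer overlap
                return False
    return True
```

An oriented box adds a rotation of the ray into the box's local frame before the same test; the payoff in either case is that most facet or surface tests are skipped outright.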

Wang, Mengkuo

131

Photon-assisted electron transport in graphene: Scattering theory analysis

Photon-assisted electron transport in ballistic graphene is analyzed using scattering theory. We show that the presence of an ac signal (applied to a gate electrode in a region of the system) has interesting consequences on electron transport in graphene, where the low energy dynamics is described by the Dirac equation. In particular, such a setup describes a feasible way to probe energy dependent transmission in graphene. This is of substantial interest because the energy dependence of transmission in mesoscopic graphene is the basis of many peculiar transport phenomena proposed in the recent literature. Furthermore, we discuss the relevance of our analysis of ac transport in graphene to the observability of zitterbewegung of electrons that behave as relativistic particles (but with a lower effective speed of light)

Trauzettel, B. [Instituut-Lorentz, Universiteit Leiden, P.O. Box 9506, 2300 RA Leiden (Netherlands); Department of Physics and Astronomy, University of Basel, Klingelbergstrasse 82, 4056 Basel (Switzerland); Blanter, Ya. M.; Morpurgo, A. F. [Kavli Institute of Nanoscience, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft (Netherlands)

2007-01-15

132

Time-dependent transport of electrons through a photon cavity

NASA Astrophysics Data System (ADS)

We use a non-Markovian master equation to describe the transport of Coulomb-interacting electrons through an electromagnetic cavity with one quantized photon mode. The central system is a finite parabolic quantum wire that is coupled weakly to external parabolic quasi-one-dimensional leads at t=0. With a stepwise introduction of complexity to the description of the system and a corresponding stepwise truncation of the ensuing many-body spaces, we are able to describe the time-dependent transport of Coulomb-interacting electrons through a geometrically complex central system. We take into account the full electromagnetic interaction of electrons and cavity photons, without resorting to the rotating-wave approximation or reducing the electron states to two levels. We observe that the number of initial cavity photons and their polarizations can have important effects on the transport properties of the system. The quasiparticles formed in the central system have lifetimes limited by the coupling to the leads and radiation processes active on a much longer time scale.

Gudmundsson, Vidar; Jonasson, Olafur; Tang, Chi-Shung; Goan, Hsi-Sheng; Manolescu, Andrei

2012-02-01

133

TRIPOLI-4.3 Monte Carlo transport code has been used to evaluate the QUADOS (Quality Assurance of Computational Tools for Dosimetry) problem P4, neutron and photon response of an albedo-type thermoluminescence personal dosemeter (TLD) located on an ISO slab phantom. Two enriched ⁶LiF and two ⁷LiF TLD chips were used and they were protected, in front or behind, with a boron-loaded dosemeter-holder. Neutron response of the four chips was determined by counting ⁶Li(n,t)⁴He events using the ENDF/B-VI.4 library and photon response by estimating absorbed dose (MeV g⁻¹). Ten neutron energies from thermal to 20 MeV and six photon energies from 33 keV to 1.25 MeV were used to study the energy dependence. The fraction of the neutron and photon response owing to phantom backscatter has also been investigated. Detailed TRIPOLI-4.3 solutions are presented and compared with MCNP-4C calculations. PMID:16381740

Lee, Y K

2005-01-01

134

We describe a tissue optics plug-in that interfaces with the GEANT4/GAMOS Monte Carlo (MC) architecture, providing a means of simulating radiation-induced light transport in biological media for the first time. Specifically, we focus on the simulation of light transport due to the Čerenkov effect (light emission from charged particles traveling faster than the local speed of light in a given medium), a phenomenon which requires accurate modeling of both the high energy particle and subsequent optical photon transport, a dynamic coupled process that is not well-described by any current MC framework. The results of validation simulations show excellent agreement with currently employed biomedical optics MC codes [i.e., Monte Carlo for Multi-Layered media (MCML), Mesh-based Monte Carlo (MMC), and diffusion theory], and examples relevant to recent studies into detection of Čerenkov light from an external radiation beam or radionuclide are presented. While the work presented within this paper focuses on radiation-induced light transport, the core features and robust flexibility of the plug-in-modified package also make it extensible to more conventional biomedical optics simulations. The plug-in, user guide, example files, as well as the necessary files to reproduce the validation simulations described within this paper are available online at http://www.dartmouth.edu/optmed/research-projects/monte-carlo-software.

Glaser, Adam K.; Kanick, Stephen C.; Zhang, Rongxiao; Arce, Pedro; Pogue, Brian W.

2013-01-01

135

We describe a tissue optics plug-in that interfaces with the GEANT4/GAMOS Monte Carlo (MC) architecture, providing a means of simulating radiation-induced light transport in biological media for the first time. Specifically, we focus on the simulation of light transport due to the Čerenkov effect (light emission from charged particles traveling faster than the local speed of light in a given medium), a phenomenon which requires accurate modeling of both the high energy particle and subsequent optical photon transport, a dynamic coupled process that is not well-described by any current MC framework. The results of validation simulations show excellent agreement with currently employed biomedical optics MC codes [i.e., Monte Carlo for Multi-Layered media (MCML), Mesh-based Monte Carlo (MMC), and diffusion theory], and examples relevant to recent studies into detection of Čerenkov light from an external radiation beam or radionuclide are presented. While the work presented within this paper focuses on radiation-induced light transport, the core features and robust flexibility of the plug-in-modified package also make it extensible to more conventional biomedical optics simulations. The plug-in, user guide, example files, as well as the necessary files to reproduce the validation simulations described within this paper are available online at http://www.dartmouth.edu/optmed/research-projects/monte-carlo-software. PMID:23667790

Glaser, Adam K; Kanick, Stephen C; Zhang, Rongxiao; Arce, Pedro; Pogue, Brian W

2013-05-01

136

PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

Iandola, F N; O'Brien, M J; Procassini, R J

2010-11-29

137

Purpose: Radiopharmaceutical applications in nuclear medicine require a detailed dosimetry estimate of the radiation energy delivered to the human tissues. Over the past years, several publications addressed the problem of internal dose estimate in volumes of several sizes considering photon and electron sources. Most of them used Monte Carlo radiation transport codes. Despite the widespread use of these codes due to the variety of resources and potentials they offer to carry out dose calculations, several aspects like physical models, cross sections, and numerical approximations used in the simulations still remain an object of study. Accurate dose estimates depend on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most used codes worldwide: MCNP and GEANT4. Methods: For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Results: Considerable discrepancies have been found in some cases not only between the different codes but also between different cross sections and algorithms in the same code. Maximum differences found between the two codes are 5.0% and 10%, respectively, for photons and electrons. Conclusion: Even for simple problems such as spheres and uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters in the simulation.

Yoriyaz, Helio; Moralles, Mauricio; Tarso Dalledone Siqueira, Paulo de; Costa Guimaraes, Carla da; Belonsi Cintra, Felipe; Santos, Adimir dos [Instituto de Pesquisas Energeticas e Nucleares, IPEN-CNEN/SP, Sao Paulo 05508-000 (Brazil)

2009-11-15

138

Monte Carlo Simulation of H⁻ Ion Transport

In this work we study in detail the kinetics of H⁻ ion swarms in velocity space: this provides a useful contrast to the usual literature in the field, where device features in configuration space are often included in detail but kinetic distributions are only marginally considered. To this aim a Monte Carlo model is applied, which includes several collision processes of H⁻ ions with neutral particles as well as Coulomb collisions with positive ions. We characterize the full velocity distribution, i.e. including its anisotropy, for different values of E/N, the atomic fraction and the H⁺ mole fraction, which makes our results of interest for both source modeling and beam formation. A simple analytical theory for highly dissociated hydrogen is formulated and checked by Monte Carlo calculations.

Diomede, P. [Dipartimento di Chimica dell'Universita di Bari, Via Orabona 4, 70126 Bari (Italy); Longo, S.; Capitelli, M. [Dipartimento di Chimica dell'Universita di Bari, Via Orabona 4, 70126 Bari (Italy); IMIP/CNR, Bari Section, Via Orabona 4, 70126 Bari (Italy)

2009-03-12

139

Monte Carlo modelling of positron transport in real world applications

NASA Astrophysics Data System (ADS)

Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interaction with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.
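A standard ingredient of such swarm codes is null-collision (fictitious-collision) flight sampling, which handles an energy-dependent collision frequency ν(E) by padding it up to a constant majorant. A minimal sketch, using a constant ν for testability (the names are illustrative, not from the paper's code):

```python
import random

def sample_flight(nu, nu_max, rng):
    """Null-collision sampling: draw tentative flights at the constant
    majorant rate nu_max and accept a real collision with probability
    nu / nu_max; rejected events are 'null' collisions that change
    nothing. For constant nu the result is exactly Exp(nu)."""
    t = 0.0
    while True:
        t += rng.expovariate(nu_max)      # tentative (possibly null) flight
        if rng.random() < nu / nu_max:    # accepted as a real collision
            return t
```

In a real swarm code `nu` would be re-evaluated at the particle's current energy inside the loop; the majorant trick is what keeps free-flight sampling cheap despite that energy dependence.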

Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj.

2014-05-01

140

The FERMI-Elettra FEL Photon Transport System

The FERMI-Elettra free electron laser (FEL) user facility is under construction at Sincrotrone Trieste (Italy), and it will be operational in late 2010. It is based on a seeded scheme providing an almost perfectly transform-limited and fully spatially coherent photon beam. FERMI-Elettra will cover the wavelength range 100 to 3 nm with the fundamental harmonics, and down to 1 nm with higher harmonics. We present the layout of the photon beam transport system that includes: the first common part providing on-line and shot-to-shot beam diagnostics, called PADReS (Photon Analysis Delivery and Reduction System), and 3 independent beamlines feeding the experimental stations. Particular emphasis is given to the solutions adopted to preserve the wavefront, and to avoid damage on the different optical elements. Peculiar FEL devices, not common in synchrotron radiation facilities, are described in more detail, e.g. the online photon energy spectrometer measuring shot-by-shot the spectrum of the emitted radiation, the beam splitting and delay line system dedicated to cross/auto correlation and pump-probe experiments, and the wavefront-preserving active optics adapting the shape and size of the focused spot to meet the needs of the different experiments.

Zangrando, M. [Laboratorio TASC INFM-CNR, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy); Cudin, I.; Fava, C.; Godnig, R.; Kiskinova, M.; Masciovecchio, C.; Parmigiani, F.; Rumiz, L.; Svetina, C.; Turchet, A.; Cocco, D. [Sincrotrone Trieste SCpA, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy)

2010-06-23

141

National Technical Information Service (NTIS)

The versatile MCNP-3B Monte Carlo code, written in FORTRAN77 for simulation of the radiation transport of neutral particles, has been subjected to vectorization and parallelization of essential parts, without touching its versatility. Vectorization is not...

R. Seidel

1995-01-01

142

Estimation of gamma- and X-ray photons buildup factor in soft tissue with Monte Carlo method

Buildup factor of gamma- and X-ray photons in the energy range 0.2-2 MeV in water and soft tissue is computed using the Monte Carlo method. The results are compared with the existing buildup factor data for pure water. The difference between the soft tissue and water buildup factors is studied. Soft tissue is assumed to have a composition of H63C6O28N. The importance of
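Once collided and uncollided tallies are scored, the buildup factor simply multiplies the narrow-beam exponential attenuation law; fitted forms such as Berger's B = 1 + a·μx·exp(b·μx) are commonly used to tabulate the result. A sketch with illustrative coefficients (a and b below are not values from the paper):

```python
import math

def berger_buildup(mu_x, a, b):
    """Berger-form buildup factor: B(mu*x) = 1 + a * mu_x * exp(b * mu_x),
    where a and b are medium- and energy-dependent fit coefficients."""
    return 1.0 + a * mu_x * math.exp(b * mu_x)

def dose_with_buildup(d0, mu, x, a, b):
    """Broad-beam dose behind a shield: narrow-beam attenuation
    exp(-mu*x) multiplied by the buildup factor."""
    mu_x = mu * x
    return d0 * berger_buildup(mu_x, a, b) * math.exp(-mu_x)
```

At zero thickness B reduces to 1 and the dose to the unshielded value, which is the sanity check usually applied to any fitted buildup tabulation.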

Dariush Sardari; Ali Abbaspour; Samaneh Baradaran; Farshid Babapour

2009-01-01

143

Comparison of space radiation calculations for deterministic and Monte Carlo transport codes

NASA Astrophysics Data System (ADS)

For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit and HETC-HEDS. The deterministic codes and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. Therefore it is important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry and material. Typical space radiation environments such as the 1977 solar minimum galactic cosmic ray environment are used as the well-defined input, and simple geometries made of aluminum, water and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.

Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo

144

NASA Astrophysics Data System (ADS)

This study presents data for verification of the iPlan RT Monte Carlo (MC) dose algorithm (BrainLAB, Feldkirchen, Germany). MC calculations were compared with pencil beam (PB) calculations and verification measurements in phantoms with lung-equivalent material, air cavities or bone-equivalent material to mimic head and neck and thorax and in an Alderson anthropomorphic phantom. Dosimetric accuracy of MC for the micro-multileaf collimator (MLC) simulation was tested in a homogeneous phantom. All measurements were performed using an ionization chamber and Kodak EDR2 films with Novalis 6 MV photon beams. Dose distributions measured with film and calculated with MC in the homogeneous phantom are in excellent agreement for oval, C and squiggle-shaped fields and for a clinical IMRT plan. For a field with completely closed MLC, MC is much closer to the experimental result than the PB calculations. For fields larger than the dimensions of the inhomogeneities the MC calculations show excellent agreement (within 3%/1 mm) with the experimental data. MC calculations in the anthropomorphic phantom show good agreement with measurements for conformal beam plans and reasonable agreement for dynamic conformal arc and IMRT plans. For 6 head and neck and 15 lung patients a comparison of the MC plan with the PB plan was performed. Our results demonstrate that MC is able to accurately predict the dose in the presence of inhomogeneities typical for head and neck and thorax regions with reasonable calculation times (5-20 min). Lateral electron transport was well reproduced in MC calculations. We are planning to implement MC calculations for head and neck and lung cancer patients.

Petoukhova, A. L.; van Wingerden, K.; Wiggenraad, R. G. J.; van de Vaart, P. J. M.; van Egmond, J.; Franken, E. M.; van Santvoort, J. P. C.

2010-08-01

145

A novel method to accelerate Monte Carlo (MC) simulations of photon migration in turbid media is presented. It is specifically suited for transillumination studies in slab geometries including some deep inhomogeneity. Propagation up to the inhomogeneity, at a given depth S1, is replaced by theoretical calculations using well established models. Then, photon propagation is continued inside the complete slab using

Héctor O. Di Rocco; Daniela I. Iriarte; Juan A. Pomarico; Héctor F. Ranea-Sandoval

2009-01-01

146

NASA Astrophysics Data System (ADS)

The variations of depth and surface dose with bone heterogeneity and beam angle were compared between unflattened and flattened photon beams using Monte Carlo simulations. Phase-space files of the 6 MV photon beams with a field size of 10×10 cm2 were generated with and without the flattening filter based on a Varian TrueBeam linac. Depth and surface doses were calculated in bone and water phantoms using the EGSnrc-based Monte Carlo code. Dose calculations were repeated with the angles of the unflattened and flattened beams turned from 0° to 15°, 30°, 45°, 60°, 75° and 90° in the bone and water phantoms. Monte Carlo results of depth doses showed that, compared to the flattened beam, the unflattened photon beam had a higher dose in the build-up region but a lower dose beyond the depth of maximum dose. Dose ratios of the unflattened to flattened beams were in the range of 1.6-2.6, with the beam angle varying from 0° to 90° in water. Similar results were found in the bone phantom. In addition, surface doses about 2.5 times higher were found with beam angles of 0° and 15° in the bone and water phantoms. However, the surface dose deviation between the unflattened and flattened beams became smaller with increasing beam angle. Dose enhancements due to bone backscatter were also found at the water-bone and bone-water interfaces for both the unflattened and flattened beams in the bone phantom. With the Monte Carlo beams cross-calibrated to the monitor unit in the simulations, the variations of depth and surface dose with bone heterogeneity and beam angle were investigated and compared. For the unflattened and flattened photon beams, the surface dose and the range of depth dose ratios (unflattened to flattened beam) decreased with increasing beam angle. The dosimetric comparison in this study is useful for understanding the characteristics of the unflattened photon beam with respect to depth and surface dose in the presence of bone heterogeneity.

Chow, James C. L.; Owrangi, Amir M.

2014-08-01

147

Methods of Monte Carlo electron transport in particle-in-cell codes

An algorithm has been implemented in CCUBE and ISIS to treat electron transport in materials using a Monte Carlo method in addition to the electron dynamics determined by the self-consistent electromagnetic, relativistic, particle-in-cell simulation codes that have been used extensively to model generation of electron beams and intense microwave production. Incorporation of a Monte Carlo method to model the transport of electrons in materials (conductors and dielectrics) in a particle-in-cell code represents a giant step toward realistic simulation of the physics of charged-particle beams. The basic Monte Carlo method used in the implementation includes both scattering of electrons by background atoms and energy degradation.

Kwan, T.J.T.; Snell, C.M.

1985-01-01

148

Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields

NASA Astrophysics Data System (ADS)

The application of small photon fields in modern radiotherapy requires the determination of total scatter factors S_cp or field factors Ω^{f_clin,f_msr}_{Q_clin,Q_msr} with high precision. Both quantities require knowledge of the field-size-dependent and detector-dependent correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr}. The aim of this study is the determination of the correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr} for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the mentioned correction factor for different small square field sizes. In addition, the mean water-to-air stopping power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr}; this perturbation may contribute up to 50% to the correction factor. Field-dependent changes in stopping-power ratios are negligible. The magnitude of k^{f_clin,f_msr}_{Q_clin,Q_msr} is of the order of 1.2 at a field size of 1 × 1 cm2 for the large-volume ion chamber PTW31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW31014 and PTW31016. For the diode detectors included in this study (PTW60016, PTW60017), the correction factor deviates by no more than 2% from unity for field sizes between 10 × 10 and 1 × 1 cm2, but below this field size there is a steep decrease of k^{f_clin,f_msr}_{Q_clin,Q_msr} below unity, i.e. a strong overestimation of dose.
Besides the field size and detector dependence, the results reveal a clear dependence of the correction factor on the accelerator geometry for field sizes below 1 × 1 cm2, i.e. on the beam spot size of the primary electrons hitting the target. This effect is especially pronounced for the ionization chambers. In conclusion, comparing all detectors, the unshielded diode PTW60017 is highly recommended for small-field dosimetry, since its correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr} is closest to unity in small fields and mainly independent of the electron beam spot size.
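The correction factor referenced throughout this abstract is, in the usual small-field formalism, a ratio of dose-to-water to dose-to-detector ratios between the clinical field f_clin (beam quality Q_clin) and the machine-specific reference field f_msr (beam quality Q_msr); a sketch of the definition, with notation following the common convention and assumed to match the paper's:

```latex
k^{f_{\mathrm{clin}},f_{\mathrm{msr}}}_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}
  = \frac{\left[ D_{w} / \bar{D}_{\mathrm{det}} \right]^{f_{\mathrm{clin}}}_{Q_{\mathrm{clin}}}}
         {\left[ D_{w} / \bar{D}_{\mathrm{det}} \right]^{f_{\mathrm{msr}}}_{Q_{\mathrm{msr}}}}
```

Both the dose to water D_w and the mean dose to the detector's sensitive volume D̄_det are exactly the quantities the EGSnrc calculations described above provide, which is why Monte Carlo is the natural tool for computing this factor.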

Czarnecki, D.; Zink, K.

2013-04-01

149

Testing Monte Carlo computer codes for simulations of electron transport in matter

In this paper, three Monte Carlo codes were tested for electron transport in various materials. MCNPX (version 2.4.0), Penelope (version 2003) and EGSnrc codes were used for modeling simple problems. These problems were focused on bremsstrahlung, energy deposition in matter, electron ranges and production of secondary electrons by gamma radiation. The electrons were primary particles, except in the last exercise,

Věra Šídlová; Tomáš Trojek

2010-01-01

150

Monte Carlo Study of Ambipolar Transport and Quantum Effects in Carbon Nanotube Transistors

In this paper, we investigate the device operation of carbon nanotube field-effect transistors (CNFETs) using Monte Carlo simulation. Two types of contacts (ohmic and Schottky) are considered and the effect of ambipolar transport with Schottky barriers is analysed. We also discuss the actual influence of quantum effects on the basis of Wigner simulation results. The output and high-frequency characteristics of different structures are presented

Huu Nha Nguyen; Sylvie Retailleau; Damien Querlioz; Arnaud Bournel; Philippe Dollfus

2009-01-01

151

A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems

Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain,

K P Keady; P Brantley

2010-01-01

152

A benchmark comparison of Monte Carlo particle transport algorithms for binary stochastic mixtures

We numerically investigate the accuracy of two Monte Carlo algorithms originally proposed by Zimmerman [1] and Zimmerman and Adams [2] for particle transport through binary stochastic mixtures. We assess the accuracy of these algorithms using a standard suite of planar geometry incident angular flux benchmark problems and a new suite of interior source benchmark problems. In addition to comparisons of

Patrick S. Brantley

2011-01-01

153

Domain Decomposition of a Constructive Solid Geometry Monte Carlo Transport Code

Domain decomposition has been implemented in a Constructive Solid Geometry (CSG) Monte Carlo neutron transport code. Previous methods to parallelize a CSG code relied entirely on particle parallelism; but in our approach we distribute the geometry as well as the particles across processors. This enables calculations whose geometric description is larger than what could fit in memory of a single

M J OBrien; K I Joy; R J Procassini; G M Greenman

2008-01-01

154

Monte Carlo evaluation of the transport coefficients in a n+ – n – n+ silicon diode

Hydrodynamic-like models are commonly used for describing carrier transport in semiconductor devices. One major problem of this formulation is how to model the production terms. In this paper the relaxation-time approximation and the moments expansion of the production terms are checked with Monte Carlo simulations for a one dimensional n+ – n – n+ silicon diode in the spherical parabolic

Orazio Muscato

2000-01-01

155

Update On the Status of the FLUKA Monte Carlo Transport Code*

NASA Technical Reports Server (NTRS)

The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. We review the progress achieved since the last CHEP Conference on the physics models, some technical improvements to the code and some recent applications. On the physics side, improvements include the extension of PEANUT to higher energies for p, n, pi, pbar/nbar and for nbars down to the lowest energies, the addition of an online capability to evolve radioactive products and obtain subsequent dose rates, and an upgraded treatment of EM interactions that eliminates the need to separately prepare preprocessed files. A new coherent photon scattering model, an updated treatment of the photo-electric effect, an improved pair production model and new photon cross sections from the LLNL Cullen database have been implemented. In the field of nucleus-nucleus interactions, the electromagnetic dissociation of heavy ions has been added, along with the extension of the interaction models for some nuclide pairs to energies below 100 MeV/A using the BME approach, as well as the development of an improved QMD model for intermediate energies. Both DPMJET 2.53 and 3 remain available, along with rQMD 2.4 for heavy ion interactions above 100 MeV/A. Technical improvements include the ability to use parentheses in setting up the combinatorial geometry, the introduction of pre-processor directives in the input stream, a new random number generator with full 64 bit randomness, and new routines for mathematical special functions (adapted from SLATEC). Finally, work is progressing on the deployment of a user-friendly GUI input interface as well as a CAD-like geometry creation and visualization tool.
On the application front, FLUKA has been used to extensively evaluate the potential space radiation effects on astronauts for future deep space missions, the activation dose for beam target areas, dose calculations for radiation therapy as well as being adapted for use in the simulation of events in the ALICE detector at the LHC.

Ferrari, A.; Lorenzo-Sentis, M.; Roesler, S.; Smirnov, G.; Sommerer, F.; Theis, C.; Vlachoudis, V.; Carboni, M.; Mostacci, A.; Pelliccioni, M.

2006-01-01

156

Implicitly causality enforced solution of multidimensional transient photon transport equation.

A novel method for solving the multidimensional transient photon transport equation for laser pulse propagation in biological tissue is presented. A Laguerre expansion is used to represent the time dependency of the incident short pulse. Owing to the intrinsic causal nature of Laguerre functions, the technique automatically preserves the causality constraints of the transient signal. This expansion of the radiance in a Laguerre basis transforms the transient photon transport equation into its steady-state version. The resulting equations are solved using the discrete ordinates method with a finite volume approach. The method therefore handles general anisotropic, inhomogeneous media in a single formulation, with added flexibility from the ability to invoke higher-order discrete ordinate quadrature sets. Compared with existing strategies, it represents the intensity with high accuracy, minimizing numerical dispersion and false propagation errors. The application of the method to one-, two- and three-dimensional geometries is provided. PMID:20052050
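The reduction from a transient to a steady-state equation rests on a standard property of the Laguerre basis; a hedged sketch of the expansion (the coefficient symbols I_k and truncation order N are notational assumptions for illustration):

```latex
I(\mathbf{r},\hat{\mathbf{s}},t) \approx \sum_{k=0}^{N} I_k(\mathbf{r},\hat{\mathbf{s}})\, e^{-t/2} L_k(t),
\qquad
\frac{d}{dt}\!\left[e^{-t/2} L_k(t)\right]
  = -\tfrac{1}{2}\, e^{-t/2} L_k(t) \;-\; e^{-t/2} \sum_{j=0}^{k-1} L_j(t).
```

Because each basis function's time derivative involves only itself and lower orders, substituting the expansion into the transient transport equation turns the ∂/∂t term into an algebraic coupling among the coefficients I_k, leaving one steady-state transport equation per order, solvable in sequence by discrete ordinates.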

Handapangoda, Chintha C; Premaratne, Malin

2009-12-21

157

MC++: A parallel, portable, Monte Carlo neutron transport code in C++

MC++ is an implicit multi-group Monte Carlo neutron transport code written in C++ and based on the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, SMPs, and clusters of UNIX workstations. MC++ is being developed to provide transport capabilities to the Accelerated Strategic Computing Initiative (ASCI). It is also intended to form the basis of the first transport physics framework (TPF), which is a C++ class library containing appropriate abstractions, objects, and methods for the particle transport problem. The transport problem is briefly described, as well as the current status and algorithms in MC++ for solving the transport equation. The alpha version of the POOMA class library is also discussed, along with the implementation of the transport solution algorithms using POOMA. Finally, a simple test problem is defined and performance and physics results from this problem are discussed on a variety of platforms.

Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States); Nolen, S.D. [Texas A & M Univ., College Station, TX (United States)

1997-03-01

158

NASA Astrophysics Data System (ADS)

MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10^-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information.
Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each tally. The tally system has been optimized to maintain a high level of efficiency, even as the number of edit regions becomes very large.

Griesheimer, D. P.; Gill, D. F.; Nease, B. R.; Sutton, T. M.; Stedry, M. H.; Dobreff, P. S.; Carpenter, D. C.; Trumbull, T. H.; Caro, E.; Joo, H.; Millman, D. L.

2014-06-01

159

Time series analysis of Monte Carlo neutron transport calculations

NASA Astrophysics Data System (ADS)

A time series based approach is applied to the Monte Carlo (MC) fission source distribution to calculate the non-fundamental mode eigenvalues of the system. The approach applies Principal Oscillation Patterns (POPs) to the fission source distribution, transforming the problem into a simple autoregressive order one (AR(1)) process. Proof is provided that the stationary MC process is linear to first order approximation, which is a requirement for the application of POPs. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern k-eigenvalue MC codes calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. The strength of this approach is contrasted against the Fission Matrix method (FMM) in terms of accuracy versus computer memory constraints. Multi-dimensional problems are considered since the approach has strong potential for use in reactor analysis, and the implementation of the method into production codes is discussed. Lastly, the appearance of complex eigenvalues is investigated and solutions are provided.
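The core of the POP approach described above reduces to estimating the lag-1 autocorrelation of a projected fission-source series; a minimal sketch in Python, using a synthetic AR(1) series in place of real Monte Carlo cycle data (the dominance ratio alpha_true and all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical illustration: a projected fission-source coefficient that
# behaves as an AR(1) process x_{n+1} = alpha * x_n + noise, where alpha
# equals the ratio of the desired higher-mode eigenvalue to the
# fundamental eigenvalue k0.
alpha_true = 0.85            # assumed dominance ratio k1/k0
n_cycles = 20000
x = np.empty(n_cycles)
x[0] = 0.0
for n in range(1, n_cycles):
    x[n] = alpha_true * x[n - 1] + rng.normal()

# Estimate the lag-1 autocorrelation coefficient of the stationary series.
alpha_hat = np.corrcoef(x[:-1], x[1:])[0, 1]

# With the fundamental eigenvalue k0 known from the k-eigenvalue run,
# the higher-mode eigenvalue follows directly.
k0 = 1.000
k1_hat = alpha_hat * k0
print(f"estimated k1 = {k1_hat:.3f} (true ratio {alpha_true})")
```

In the actual method the series comes from projecting the cycle-by-cycle fission source onto a principal oscillation pattern; the point here is only that the fitted AR(1) coefficient recovers the eigenvalue ratio.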

Nease, Brian Robert

160

National Technical Information Service (NTIS)

Experimental data of VHTRC (Very High Temperature Reactor Critical Assembly) were analyzed using Monte Carlo code MVP (general purpose Monte Carlo code of neutron and photon transport calculations based on the continuous energy method). The calculation a...

N. Nojiri K. Yamashita N. Fujimoto M. Nakano F. Akino

1997-01-01

161

GGRESRC: A Monte Carlo generator for the two-photon process e+e- → e+e-R(J=0) in the single-tag mode

NASA Astrophysics Data System (ADS)

The Monte Carlo event generator GGRESRC is described. The generator is developed for the simulation of events of the two-photon process e+e- → e+e-R, where R is a pseudoscalar resonance: π0, η, η', ηc, or ηb. The program is optimized for the generation of two-photon events in the single-tag mode. For single-tag events, radiative correction simulation is implemented in the generator, including photon emission from the initial and final states.

Druzhinin, V. P.; Kardapoltsev, L. V.; Tayursky, V. A.

2014-01-01

162

Monte Carlo path sampling approach to modeling aeolian sediment transport

NASA Astrophysics Data System (ADS)

Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, the research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains is itself influenced by the flux of saltating grains and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders of magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear especially over complex landscapes. 
Current computational approaches have limitations as well; single grain models are mathematically simple but are computationally intractable even with modern computing power, whereas cellular automata-based approaches are computationally efficient but evolve the system according to rules that are abstractions of the governing physics. This work presents the Green function solution to the continuity equations that govern sediment transport. The Green function solution is implemented using a path sampling approach whereby sand mass is represented as an ensemble of particles that evolve stochastically according to the Green function. In this approach, particle density is a particle representation that is equivalent to the field representation of elevation. Because aeolian transport is nonlinear, particles must be propagated according to their updated field representation with each iteration. This is achieved using a particle-in-cell technique. The path sampling approach offers a number of advantages. The integral form of the Green function solution makes it robust to discontinuities in complex terrains. Furthermore, this approach is spatially distributed, which can help elucidate the role of complex landscapes in aeolian transport. Finally, path sampling is highly parallelizable, making it ideal for execution on modern clusters and graphics processing units.
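As a toy illustration of the particle-in-cell idea described above (not the authors' implementation; the Gaussian kernel, the saturation law and all parameters are invented for this sketch), one can propagate an ensemble of particles with a stochastic step whose drift depends on the density field rebuilt at every iteration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch: sand mass as an ensemble of particles on a 1-D
# transect.  Each iteration the field (particle density per cell) is
# rebuilt from the particles, and every particle takes a stochastic step
# drawn from a Gaussian kernel whose drift weakens where local density is
# high -- a crude stand-in for the wind losing speed as it carries more
# grains (the nonlinearity described in the abstract).
n_particles, n_cells, n_steps = 5000, 50, 100
x = rng.uniform(0, 10, n_particles)            # initial positions
edges = np.linspace(0, n_cells, n_cells + 1)

for _ in range(n_steps):
    density, _ = np.histogram(x, bins=edges)   # particle-in-cell field
    cell = np.clip(x.astype(int), 0, n_cells - 1)
    drift = 0.5 / (1.0 + density[cell] / 200.0)  # assumed saturation law
    x = x + rng.normal(loc=drift, scale=0.3)
    x = np.clip(x, 0.0, n_cells - 1e-9)        # keep particles in domain

final_density, _ = np.histogram(x, bins=edges)
print("mass conserved:", final_density.sum() == n_particles)
```

Rebuilding the density before each step is the particle-in-cell element: the ensemble evolves against its own updated field representation, mirroring the feedback between grain flux and wind speed.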

Hardin, E. J.; Mitasova, H.; Mitas, L.

2011-12-01

163

Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.

Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.

2008-10-31

164

A Comparison of Monte Carlo Particle Transport Algorithms for Binary Stochastic Mixtures

Two Monte Carlo algorithms originally proposed by Zimmerman and Zimmerman and Adams for particle transport through a binary stochastic mixture are numerically compared using a standard set of planar geometry benchmark problems. In addition to previously-published comparisons of the ensemble-averaged probabilities of reflection and transmission, we include comparisons of detailed ensemble-averaged total and material scalar flux distributions. Because not all benchmark scalar flux distribution data used to produce plots in previous publications remains available, we have independently regenerated the benchmark solutions including scalar flux distributions. Both Monte Carlo transport algorithms robustly produce physically-realistic scalar flux distributions for the transport problems examined. The first algorithm reproduces the standard Levermore-Pomraning model results for the probabilities of reflection and transmission. The second algorithm generally produces significantly more accurate probabilities of reflection and transmission and also significantly more accurate total and material scalar flux distributions.
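The flavor of such chord-length-sampling transport algorithms can be sketched as follows; this is a hedged, purely absorbing, normal-incidence toy in the spirit of the Levermore-Pomraning-based algorithm, with invented cross sections and chord lengths rather than the benchmark configuration of the paper:

```python
import math, random

random.seed(1)

# Illustrative binary stochastic slab: two materials with different total
# cross sections and mean chord lengths (all values are assumptions).
sigma_t = [1.0, 0.1]     # total cross sections of materials 0 and 1
chord = [0.5, 1.5]       # mean chord lengths of the two materials
thickness = 5.0
n_hist = 20000

transmitted = 0
for _ in range(n_hist):
    # choose the starting material with probability equal to its volume
    # fraction (proportional to mean chord length)
    mat = 0 if random.random() < chord[0] / (chord[0] + chord[1]) else 1
    z = 0.0
    while True:
        d_coll = -math.log(random.random()) / sigma_t[mat]  # to collision
        d_int = -math.log(random.random()) * chord[mat]     # to interface
        z += min(d_coll, d_int)
        if z >= thickness:          # escaped through the back face
            transmitted += 1
            break
        if d_int < d_coll:
            mat = 1 - mat           # crossed an interface: switch material
        else:
            break                   # collision: treated as pure absorption

print("transmission probability ~", transmitted / n_hist)
```

At each flight the distance to the next material interface is sampled alongside the distance to collision, and the shorter one wins; this on-the-fly realization of the mixture is what distinguishes these algorithms from simply transporting through ensemble-averaged (atomic mix) material properties.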

Brantley, P S

2009-02-23

165

In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported as well.

Naff, R. L.; Haley, D. F.; Sudicky, E. A.

1998-01-01

166

A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems

Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. 
Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model) for deep penetration problems such as examined in this paper. In this research, we investigate the application of a variant of the hybrid Monte Carlo-deterministic method proposed by Cooper and Larsen to global deep penetration problems involving binary stochastic media. To our knowledge, hybrid Monte Carlo-deterministic methods have not previously been applied to problems involving a stochastic medium. We investigate two approaches for computing the approximate deterministic estimate of the forward scalar flux distribution used to automatically generate the weight windows. The first approach uses the atomic mix approximation to the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. The second approach uses the Levermore-Pomraning model for the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. In both cases, we use Monte Carlo Algorithm B with weight windows automatically generated from the approximate forward scalar flux distribution to obtain the solution of the transport problem.
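The weight-window generation step described above can be sketched in a few lines; following the Cooper-Larsen idea, window centers are taken proportional to the approximate forward scalar flux so that the Monte Carlo particle population stays roughly uniform across the domain (the flux profile and window half-width below are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

# Crude deep-penetration flux estimate by cell, as a deterministic
# low-order calculation might provide (assumed exponential profile).
phi = np.exp(-0.8 * np.arange(20))

# Window centers proportional to the flux, normalized to 1 at the source
# cell.  Deep cells get tiny windows, so a particle streaming there with
# a near-unit weight exceeds the upper bound and is split into many
# low-weight copies, repopulating the deep-penetration region.
w_center = phi / phi.max()
w_lower = w_center / 2.0      # assumed window half-width
w_upper = w_center * 2.0

for i in (0, 10, 19):
    print(f"cell {i:2d}: window [{w_lower[i]:.3g}, {w_upper[i]:.3g}]")
```

The user never specifies window parameters by hand: the deterministic flux estimate, however approximate, drives the whole construction, which is the automation this hybrid method contributes.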

Keady, K P; Brantley, P

2010-03-04

167

NASA Astrophysics Data System (ADS)

This study investigated the variations of the dose and dose distribution in a small-animal irradiation due to the photon beam energy and the presence of inhomogeneity. Based on the same mouse computed tomography image set, three Monte Carlo phantoms, namely inhomogeneous, homogeneous and bone-tissue phantoms, were used in this study. These phantoms were generated by overriding the relative electron density of no voxels (inhomogeneous), all voxels (homogeneous) or only the bone voxels (bone-tissue) to one. 360° photon arcs with beam energies of 50-1250 kV were used in mouse irradiations. Doses in the above phantoms were calculated using the EGSnrc-based DOSXYZnrc code through the DOSCTP. It was found that the dose conformity increased with the photon beam energy from the kV to the MV range. For the inhomogeneous mouse phantom, increasing the photon beam energy from 50 kV to 1250 kV increased the dose deposited at the isocenter by a factor of about 21. For the bone dose enhancement, the mean dose was 1.4 times higher when the bone inhomogeneity was not neglected using the 50 kV photon beams in the mouse irradiation. Bone dose enhancement affecting the mean dose in the mouse irradiation was found for photon beams in the energy range of 50-200 kV, and the dose enhancement decreased with increasing beam energy. Moreover, the MV photon beam had a higher dose at the isocenter and a better dose conformity compared to the kV beam.

Chow, James C. L.

2013-05-01

168

Estimation of gamma- and X-ray photons buildup factor in soft tissue with Monte Carlo method.

Buildup factors of gamma- and X-ray photons in the energy range 0.2-2 MeV in water and soft tissue are computed using the Monte Carlo method. The results are compared with the existing buildup factor data for pure water, and the difference between the soft tissue and water buildup factors is studied. Soft tissue is assumed to have the composition H(63)C(6)O(28)N. The importance of such work arises from the fact that in medical applications of X- and gamma-rays, soft tissue is usually approximated by water. It is shown that the difference between the water and soft tissue buildup factors is usually more than 10%. On the other hand, the buildup factor in water obtained from the Monte Carlo method is compared with the experimental data appearing in references; there appears to be around 10% error in the reference data as well. PMID:19362488
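The buildup factor itself is just the ratio of the total response at a given depth to the uncollided response exp(-μd); a heavily simplified Monte Carlo sketch of that ratio (single energy, isotropic scattering, no energy loss, each photon scored once at the detector plane; the coefficients are illustrative assumptions, not the soft-tissue data of the paper):

```python
import math, random

random.seed(7)

mu = 0.1          # total attenuation coefficient, 1/cm (illustrative)
p_scatter = 0.7   # scatter probability per interaction (illustrative)
depth = 20.0      # detector plane depth, cm
n_hist = 50000

crossings = 0
for _ in range(n_hist):
    z, cos_t = 0.0, 1.0                  # start on the surface, forward
    while True:
        s = -math.log(random.random()) / mu   # free path to interaction
        z += s * cos_t
        if cos_t > 0 and z >= depth:
            crossings += 1               # photon crosses the detector plane
            break
        if z < 0 or random.random() > p_scatter:
            break                        # leaked backward or absorbed
        cos_t = 2.0 * random.random() - 1.0   # isotropic re-direction

uncollided = math.exp(-mu * depth)
buildup = (crossings / n_hist) / uncollided
print(f"buildup factor at {depth} cm ~ {buildup:.2f}")
```

Scattered photons reaching the plane make the numerator exceed the purely exponential attenuation, so the ratio comes out above unity; the paper's calculation does the same bookkeeping with realistic cross sections and energy-dependent scattering.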

Sardari, Dariush; Abbaspour, Ali; Baradaran, Samaneh; Babapour, Farshid

2009-01-01
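The buildup-factor definition used in the abstract above (total transmitted flux relative to the uncollided flux) can be illustrated with a toy Monte Carlo sketch. The single-group transport, isotropic scattering, and the `albedo` survival probability below are simplifying assumptions for illustration, not the paper's cross-section model:

```python
import math
import random

def buildup_factor(mu_t, albedo=0.5, n_photons=20000, seed=1):
    """Toy buildup factor B at optical depth mu_t:
    B = (total transmitted weight) / (expected uncollided weight).
    Single-group, isotropic scattering; `albedo` is the per-collision
    survival probability (illustrative, not a real cross-section set)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_photons):
        x, mu, w = 0.0, 1.0, 1.0   # depth (mean free paths), direction cosine, weight
        while w > 1e-6:
            x += -math.log(1.0 - rng.random()) * mu   # sample next free flight
            if x >= mu_t:
                total += w                  # transmitted (collided or not)
                break
            if x < 0.0:
                break                       # reflected back out of the slab
            w *= albedo                     # collision: carry survival as weight
            mu = 2.0 * rng.random() - 1.0   # isotropic re-direction
    return total / (n_photons * math.exp(-mu_t))
```

By construction B ≥ 1: the uncollided photons alone reproduce exp(-mu_t), and any scattered photons that still escape forward add to the numerator.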

169

Data decomposition of Monte Carlo particle transport simulations via tally servers

An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.

Romano, Paul K., E-mail: paul.k.romano@gmail.com [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States); Siegel, Andrew R., E-mail: siegala@mcs.anl.gov [Argonne National Laboratory, Theory and Computing Sciences, 9700 S Cass Ave., Argonne, IL 60439 (United States); Forget, Benoit, E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States); Smith, Kord, E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)

2013-11-01
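The non-overlapping decomposition described in the abstract above can be sketched in a few lines. The contiguous-block, round-robin mapping and the block size below are illustrative assumptions, not OpenMC's actual scheme:

```python
def partition_ranks(n_ranks, n_servers):
    """Non-overlapping split of compute ranks into particle trackers and
    tally servers, as in the tally-server decomposition."""
    if not 0 < n_servers < n_ranks:
        raise ValueError("need at least one tracker and one server")
    n_trackers = n_ranks - n_servers
    return list(range(n_trackers)), list(range(n_trackers, n_ranks))

def server_for_tally(tally_index, servers, block=64):
    """Owner of a global tally bin: contiguous blocks of bins dealt
    round-robin across the servers (a hypothetical mapping)."""
    return servers[(tally_index // block) % len(servers)]

# Trackers simulate particles and never store the global tally array;
# each scoring event becomes a (bin, value) message sent to
# server_for_tally(bin, servers), which accumulates it locally.
trackers, servers = partition_ranks(8, 2)
```

This is the essence of the approach: tally memory scales with the number of servers rather than being replicated on every tracking node.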

170

A Novel Implementation of Massively Parallel Three Dimensional Monte Carlo Radiation Transport

NASA Astrophysics Data System (ADS)

The goal of our summer project was to implement the difference formulation for radiation transport into Cosmos++, a multidimensional, massively parallel, magnetohydrodynamics code for astrophysical applications (Peter Anninos - AX). The difference formulation is a new method for Symbolic Implicit Monte Carlo thermal transport (Brooks and Szöke - PAT). Formerly, simultaneous implementation of fully implicit Monte Carlo radiation transport in multiple dimensions on multiple processors had not been convincingly demonstrated. We found that a combination of the difference formulation and the inherent structure of Cosmos++ makes such an implementation both accurate and straightforward. We developed a "nearly nearest neighbor physics" technique to allow each processor to work independently, even with a fully implicit code. This technique, coupled with the increased accuracy of an implicit Monte Carlo solution and the efficiency of parallel computing systems, allows us to demonstrate the possibility of massively parallel thermal transport. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.

Robinson, P. B.; Peterson, J. D. L.

2005-12-01

171

Correlated few-photon transport in one-dimensional waveguides: Linear and nonlinear dispersions

We address correlated few-photon transport in one-dimensional waveguides coupled to a two-level system (TLS), such as an atom or a quantum dot. We derive exactly the single-photon and two-photon current (transmission) for linear and nonlinear (tight-binding sinusoidal) energy-momentum dispersion relations of photons in the waveguides and compare the results for the different dispersions. A large enhancement of the two-photon current for the sinusoidal dispersion has been seen at a certain transition energy of the TLS away from the single-photon resonances.

Roy, Dibyendu [Department of Physics, University of California, San Diego, La Jolla, California 92093-0319 (United States)

2011-04-15

172

Purpose: The impact of photon beam energy and tissue heterogeneities on dose distributions and dosimetric characteristics such as point dose, mean dose, and maximum dose was investigated in the context of small-animal irradiation using Monte Carlo simulations based on the EGSnrc code. Methods: Three Monte Carlo mouse phantoms, namely, heterogeneous, homogeneous, and bone homogeneous, were generated based on the same mouse computed tomography image set. These phantoms were generated by overriding the tissue type of none of the voxels (heterogeneous), all voxels (homogeneous), and only the bone voxels (bone homogeneous) to that of soft tissue. Phase space files of the 100 and 225 kVp photon beams based on a small-animal irradiator (XRad225Cx, Precision X-Ray Inc., North Branford, CT) were generated using BEAMnrc. A 360° photon arc was simulated and three-dimensional (3D) dose calculations were carried out using the DOSXYZnrc code through DOSCTP in the above three phantoms. For comparison, the 3D dose distributions, dose profiles, mean, maximum, and point doses at different locations such as the isocenter, lung, rib, and spine were determined in the three phantoms. Results: The dose gradient resulting from the 225 kVp arc was found to be steeper than for the 100 kVp arc. The mean dose was found to be 1.29 and 1.14 times higher for the heterogeneous phantom when compared to the mean dose in the homogeneous phantom using the 100 and 225 kVp photon arcs, respectively. The bone doses (rib and spine) in the heterogeneous mouse phantom were about five (100 kVp) and three (225 kVp) times higher when compared to the homogeneous phantom. However, for the 225 kVp photon beams, the lung dose did not vary significantly among the heterogeneous, homogeneous, and bone homogeneous phantoms, in contrast to the 100 kVp beams. Conclusions: A significant bone dose enhancement was found when the 100 and 225 kVp photon beams were used in small-animal irradiation. This dosimetric effect, due to the presence of the bone heterogeneity, was more significant than that due to the lung heterogeneity. Hence, for kV photon energies in the range used in small-animal irradiation, the increase of the mean and bone dose due to the photoelectric effect could be a dosimetric concern.

Chow, James C. L.; Leung, Michael K. K.; Lindsay, Patricia E.; Jaffray, David A. [Radiation Medicine Program, Princess Margaret Hospital, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada) and Department of Physics, Ryerson University, Toronto, Ontario M5B 2K3 (Canada); Department of Medical Biophysics, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Radiation Medicine Program, Princess Margaret Hospital, University of Toronto, Toronto, Ontario M5G 2M9 (Canada) and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Physics and Ontario Cancer Institute, Princess Margaret Hospital, University of Toronto, Toronto, Ontario M5G 2M9 (Canada) and Department of Radiation Oncology and Department of Medical Biophysics, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

2010-10-15

173

''Hybrid'' Monte Carlo simulation of ripple transport in stellarators

A rapid simulation has been developed to accurately describe ripple effects on particle motion in stellarators at low collision frequencies. The majority of ripple-trapped particles are followed through the iterative conservation of their bounce action J, while those particles in the region of phase space on either side of the ripple trap/detrap boundary are followed using guiding center equations. This ''hybrid'' formulation provides the most accurate numerical description of ripple-trapped particle orbits to date, short of a fully guiding center code. This is important since these orbits are often substantially different from those allowed for in analytic theory and in other transport codes. Further, the hybrid simulation is much faster than a fully guiding center treatment, making all collision frequency regimes of interest accessible at reasonable costs of computer time. The methods employed allow the examination of stellarator configurations for which the magnitude of the toroidal well, ε_t, is larger than that of the helical well, ε_h, as well as the more often treated configurations in which ε_h > ε_t. Results are obtained for the usual analytic model of the helical ripple, ε_h = ε_h(r), as well as a more realistic model for which ε_h = ε_h(r,θ). The results are largely explainable in terms of existing analytic theories, although some slight modifications seem to be necessary.

Beidler, C.D.; Hitchon, W.N.G.; Shohet, J.L.

1986-02-01

174

Monte Carlo-drift-diffusion simulation of electron current transport in III-N LEDs

NASA Astrophysics Data System (ADS)

Performance of III-N based solid-state lighting is to a large extent limited by current transport effects that are also expected to contribute to the efficiency droop in real devices. To enable studying the contributions of electron transport in drooping more accurately, we develop and study a coupled Monte Carlo-drift-diffusion (MCDD) method to model the details of electron current transport in III-N optoelectronic devices. In the MCDD method, electron and hole distributions are first simulated by solving the standard drift-diffusion (DD) equations. The hole density and recombination rate density obtained from solving the DD equations are used as inputs in the Monte Carlo (MC) simulation of the electron system. The MC simulation involves solving the Boltzmann transport equation for the electron gas to accurately describe electron transport. As a hybrid of the DD and MC methods, the MCDD represents a first-order correction for electron transport in III-N LEDs as compared to DD, predicting a significant hot electron population in the simulated multi-quantum well (MQW) LED device at strong injection.

Kivisaari, Pyry; Sadi, Toufik; Oksanen, Jani; Tulkki, Jukka

2014-03-01

175

Implicit Monte Carlo methods and non-equilibrium Marshak wave radiative transport

Two enhancements to the Fleck implicit Monte Carlo method for radiative transport are described, for use in transparent and opaque media respectively. The first introduces a spectral mean cross section, which applies to pseudoscattering in transparent regions with a high frequency incident spectrum. The second provides a simple Monte Carlo random walk method for opaque regions, without the need for a supplementary diffusion equation formulation. A time-dependent transport Marshak wave problem of radiative transfer, in which a non-equilibrium condition exists between the radiation and material energy fields, is then solved. These results are compared to published benchmark solutions and to new discrete ordinate S-N results, for both spatially integrated radiation-material energies versus time and to new spatially dependent temperature profiles. Multigroup opacities, which are independent of both temperature and frequency, are used in addition to a material specific heat which is proportional to the cube of the temperature. 7 refs., 4 figs.

Lynch, J.E.

1985-01-01

176

Minimizing the cost of splitting in Monte Carlo radiation transport simulation

A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally as well as time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S_N (discrete ordinates) solution technique, allowing for the prediction of computer cost (formulated as the product of sample variance and time per particle history, σ_s²·τ_p) associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed (e.g., deep penetration calculations).

Juzaitis, R.J.

1980-10-01
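The splitting-with-roulette mechanics whose cost the abstract above analyzes can be sketched in a toy 1D deep-penetration problem. This illustrates only the variance-reduction machinery, not the paper's deterministic S_N cost model; the optical thicknesses, scattering albedo, and splitting ratio are illustrative assumptions:

```python
import math
import random

def deep_transmission(mu_t, split_at, ratio=4, albedo=0.3, n=5000, seed=2):
    """Estimate transmission through a 1D slab of optical thickness mu_t,
    with geometric splitting at the internal surface `split_at` and
    Russian roulette for particles re-crossing it backwards.
    Single-group isotropic scattering with implicit capture."""
    rng = random.Random(seed)
    score = 0.0
    for _ in range(n):
        # each entry: (position, direction cosine, weight, in_deep_region)
        stack = [(0.0, 1.0, 1.0, False)]
        while stack:
            x, mu, w, deep = stack.pop()
            x_new = x + -math.log(1.0 - rng.random()) * mu
            if not deep and x_new >= split_at:
                # forward crossing of the importance surface: split
                for _ in range(ratio):
                    stack.append((split_at, mu, w / ratio, True))
            elif deep and x_new < split_at:
                # backward crossing: roulette with survival 1/ratio
                if rng.random() < 1.0 / ratio:
                    stack.append((split_at, mu, w * ratio, False))
            elif x_new >= mu_t:
                score += w                 # transmitted through the back face
            elif x_new < 0.0:
                pass                       # reflected out the front face
            else:
                w *= albedo                # collision: implicit capture
                if w > 1e-9:
                    stack.append((x_new, 2.0 * rng.random() - 1.0, w, deep))
    return score / n
```

Restarting a crossing particle at the surface is unbiased because the exponential free flight is memoryless; the splitting ratio trades more histories in the deep region against roulette losses at the surface, which is exactly the cost trade-off the paper quantifies.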

177

Charge transport in a-Si:H detectors: comparison of analytical and Monte Carlo simulations

To understand the signal formation in hydrogenated amorphous silicon (a-Si:H) p-i-n detectors, dispersive charge transport due to multiple trapping in a-Si:H tail states is studied both analytically and by Monte Carlo simulations. An analytical solution is found for the free electron and hole distributions n(x,t) and the transient current I(t) due to an initial electron-hole pair generated at an arbitrary

L.-A. Hamel; W. C. Chen

1995-01-01

178

Charge transport in a-Si:H detectors: comparison of analytical and Monte Carlo simulations

To understand the signal formation in hydrogenated amorphous silicon (a-Si:H) p-i-n detectors, dispersive charge transport due to multiple trapping in a-Si:H tail states is studied both analytically and by Monte Carlo simulations. An analytical solution is found for the free electron and hole distributions n(x,t) and the transient current I(t) due to an initial electron-hole pair generated at an arbitrary

L.-A. Hamel; Wen Chao Chen

1994-01-01

179

Monte Carlo simulation of negative ion transport in the negative ion source (Camembert III)

The transport process of negative hydrogen ions (H⁻) in a large hybrid multicusp H⁻ source, "Camembert III," has been analyzed by a three-dimensional Monte Carlo simulation code. The realistic geometry and multicusp magnetic-field configuration are taken into account. Various important destruction processes of H⁻ and Coulomb collisions with the background plasma are also included in the model. Both the volume- and surface-produced

T. Sakurabayashi; A. Hatayama; K. Miyamoto; M. Ogasawara; M. Bacal

2002-01-01

180

A hollow cathode discharge (HCD) in He is studied based on a Monte Carlo-fluid hybrid model combined with a transport model for metastable He atoms. The Monte Carlo model describes the movement of fast electrons as particles, while in the fluid model, the slow electrons and positive ions are treated as a continuum. The continuity equations are solved together with

N. Baguer; A. Bogaerts; R. Gijbels

2003-01-01

181

NASA Astrophysics Data System (ADS)

An electron-photon coupled Monte Carlo code ARCHER -

Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George

2014-06-01

182

Development of A Monte Carlo Radiation Transport Code System For HEDS: Status Update

NASA Technical Reports Server (NTRS)

Modifications of the Monte Carlo radiation transport code HETC are underway to extend the code to include transport of energetic heavy ions, such as are found in the galactic cosmic ray spectrum in space. The new HETC code will be available for use in radiation shielding applications associated with missions, such as the proposed manned mission to Mars. In this work the current status of code modification is described. Methods used to develop the required nuclear reaction models, including total, elastic and nuclear breakup processes, and their associated databases are also presented. Finally, plans for future work on the extended HETC code system and for its validation are described.

Townsend, Lawrence W.; Gabriel, Tony A.; Miller, Thomas M.

2003-01-01

183

A Deterministic-Monte Carlo Hybrid Method for Time-Dependent Neutron Transport Problems

A new deterministic-Monte Carlo hybrid solution technique is derived for the time-dependent transport equation. This new approach is based on dividing the time domain into a number of coarse intervals and expanding the transport solution in a series of polynomials within each interval. The solutions within each interval can be represented in terms of arbitrary source terms by using precomputed response functions. In the current work, the time-dependent response function computations are performed using the Monte Carlo method, while the global time-step march is performed deterministically. This work extends previous work by coupling the time-dependent expansions to space- and angle-dependent expansions to fully characterize the 1D transport response/solution. More generally, this approach represents an incremental extension of the steady-state coarse-mesh transport method that is based on global-local decompositions of large neutron transport problems. A homogeneous slab problem is discussed as an example of the new developments.

Justin Pounders; Farzad Rahnema

2001-10-01

184

A portable, parallel, object-oriented Monte Carlo neutron transport code in C++

We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.

Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States)]; Nolen, S.D. [Texas A and M Univ., College Station, TX (United States); Los Alamos National Lab., NM (United States)]

1997-05-01

185

Purpose: By using Monte Carlo simulations, the authors investigated the energy and angular dependence of the response of plastic scintillation detectors (PSDs) in photon beams. Methods: Three PSDs were modeled in this study: a plastic scintillator (BC-400) and a scintillating fiber (BCF-12), both attached by a plastic-core optical fiber stem, and a plastic scintillator (BC-400) attached by an air-core optical fiber stem with a silica tube coated with silver. The authors then calculated, with low statistical uncertainty, the energy and angular dependences of the PSDs' responses in a water phantom. For energy dependence, the response of the detectors is calculated as the detector dose per unit water dose. The perturbation caused by the optical fiber stem connected to the PSD to guide the optical light to a photodetector was studied in simulations using different optical fiber materials. Results: For the energy dependence of the PSDs in photon beams, the PSDs with plastic-core fiber have excellent energy independence, within about 0.5%, at photon energies ranging from 300 keV (monoenergetic) to 18 MV (linac beam). The PSD with an air-core optical fiber with a silica tube also has good energy independence, within 1%, in the same photon energy range. For the angular dependence, the relative response of all three modeled PSDs is within 2% for all angles in a 6 MV photon beam. This is also true in a 300 keV monoenergetic photon beam for the PSDs with plastic-core fiber. For the PSD with an air-core fiber with a silica tube in the 300 keV beam, the relative response varies within 1% for most angles, except when the fiber stem points directly at the radiation source, in which case the PSD may over-respond by more than 10%. Conclusions: At the ±1% level, no beam energy correction is necessary for the response of any of the three PSDs modeled in this study in the photon energy range from 200 keV (monoenergetic) to 18 MV (linac beam). The PSD would be even closer to water equivalent if there were a silica tube around the sensitive volume. The angular dependence of the response of the three PSDs in a 6 MV photon beam is not of concern at the 2% level.

Wang, Lilie L. W.; Klein, David; Beddar, A. Sam [Department of Radiation Physics, University of Texas M. D. Anderson Cancer Center, Houston, Texas 77030 (United States)

2010-10-15

186

NASA Astrophysics Data System (ADS)

This study investigated the dose enhancement due to the presence of mouse bone irradiated by kilovoltage (kV) photon beams. Dosimetry of bone adjacent to soft and lung tissue was determined at millimeter scale by Monte Carlo simulations using the EGSnrc-based code. Two inhomogeneous phantoms, each with a 2 mm bone layer sandwiched by (1) water and lung (bone-lung phantom) or (2) water alone (bone-water phantom), were used. Relative depth doses along the central beam axes in the phantoms, and dose enhancement ratios (bone dose in the above inhomogeneous phantoms relative to the dose at the same point in a water phantom), were determined using the 100 and 225 kVp photon beams. For the 100 kVp photon beams, the depth dose gradient in the bone was significantly larger than in a water phantom without the bone. This is due to the beam hardening effect: some low-energy photons are filtered out at greater depths, resulting in fewer photoelectric interactions and hence lower energy deposition in the bone. Moreover, dose differences between the top and downstream (bottom) bone edges at depths of 1-5 mm were 168-192% and 149-166% for the bone-lung and bone-water phantoms, respectively. These differences were larger than the 21-27% (bone-lung) and 12-23% (bone-water) found for the 225 kVp photon beams. The maximum dose enhancement ratio in the bone for the bone-lung and bone-water phantoms at various depths was about 5.7 using the 100 kVp photon beams, more than twice the value (2.4) for the 225 kVp photon beams. It is concluded that, apart from basic beam characteristics such as attenuation and penumbra, which are related to the photon beam energy in mouse irradiation, the bone dose is another important factor to consider when selecting the beam energy in small-animal treatment planning, provided that bone dose enhancement is a concern in the preclinical model.

Chow, James C. L.

2010-05-01

187

The purpose of this study was to investigate the relative influence of scatter, attenuation, depth-dependent collimator response and finite spatial resolution upon the image characteristics in cardiac single-photon emission tomography (SPET). An acquisition of an anthropomorphic cardiac phantom was performed together with corresponding SPET Monte Carlo simulations. The cardiac phantom and the Monte Carlo simulations were designed so that

Georges N. El Fakhri; Irène Buvat; Mélanie Pélégrini; Habib Benali; Pedro Almeida; Bernard Bendriem; Andrew Todd-Pokropek; Robert Di Paola

1999-01-01

188

The Monte Carlo simulation of the electron transport through air slabs is studied with four codes: PENELOPE, GEANT3, Geant4 and EGSnrc. Monoenergetic electron beams with energies 6, 12 and 18 MeV are considered to impinge on air slabs with thicknesses ranging from 10 to 100 cm. The angular and radial distributions of the transmitted electrons are used to make a

M. Vilches; S. Garcia-Pareja; R. Guerrero; M. Anguiano; A. M. Lallena

2008-01-01

189

The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: simulation system for emission tomography (SimSET) and GEANT4 application for emission tomography (GATE). Specifically, we retained the strength of GATE in describing complex collimator-detector configurations to meet the anticipated needs for studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE in simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations, and, more importantly, a significant computational speedup (up to ~10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data was also observed. In conclusion, the authors have successfully integrated the SimSET photon history generator in GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques.

Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.

2008-01-01

190

Monte Carlo simulations of the particle transport in semiconductor detectors of fast neutrons

NASA Astrophysics Data System (ADS)

Several Monte Carlo all-particle transport codes are under active development around the world. In this paper we focus on the capabilities of the MCNPX code (Monte Carlo N-Particle eXtended) to follow particle transport in a semiconductor detector of fast neutrons. A semiconductor detector based on semi-insulating GaAs was the object of our investigation. As converter material capable of producing charged particles through the (n, p) interaction, high-density polyethylene (HDPE) was employed. As the source of fast neutrons, a 239Pu-Be neutron source was used in the model. The simulations were performed using the MCNPX code, which makes it possible to track not only neutrons but also recoil protons at all energies of interest. Hence, the MCNPX code enables seamless particle transport and no other computer program is needed to process the particle transport. The determination of the optimal thickness of the conversion layer and the minimum thickness of the active region of the semiconductor detector, as well as the simulation of energy spectra, were the principal goals of the computer modeling. Theoretical detector responses showed that the best detection efficiency is achieved for a 500 μm thick HDPE converter layer. The minimum detector active region thickness was estimated to be about 400 μm.

Sedláčková, Katarína; Zaťko, Bohumír; Šagátová, Andrea; Nečas, Vladimír

2013-05-01

191

A bone composition model for Monte Carlo x-ray transport simulations

In the megavoltage energy range although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single-bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in mass attenuation coefficients of the bones is several times greater, and the expected error from applying this type of model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with the bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of other elements. To test this model using the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from this formula and the radiation-tissue interaction cross-section data were calculated. The total energy released per unit mass of primary photons (terma) and Monte Carlo calculations performed using this model and the single-bone model were compared, which demonstrated that at kilovoltage energies the discrepancy could be more than 100% in bony dose and 30% in soft tissue dose. Percentage terma computed with the model agrees with that calculated on the published compositions to within 2.2% for kV spectra and 1.5% for MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams as well as for dosimetry of pediatric or animal subjects whose bone composition may differ substantially from that of adult human bones.

Zhou Hu; Keall, Paul J.; Graves, Edward E. [Department of Radiation Oncology and Department of Molecular Imaging Program at Stanford, Stanford University, Stanford, California 94305 (United States)

2009-03-15
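The density-to-composition fit described in the abstract above can be sketched as follows. The tabulated pairs are hypothetical stand-ins for the published adult-bone data the paper fits, and the polynomial order and clipping range are assumptions for illustration:

```python
import numpy as np

# Hypothetical (density, calcium weight fraction) pairs standing in for the
# tabulated adult human bone compositions; NOT real ICRU/ICRP data.
density = np.array([1.1, 1.3, 1.5, 1.7, 1.9])        # g/cm^3
ca_fraction = np.array([0.05, 0.10, 0.15, 0.20, 0.25])

# Fit the Ca fraction as a low-order polynomial of bone density,
# mirroring the paper's observation that Ca (and P) content correlates
# strongly with density.
coeffs = np.polyfit(density, ca_fraction, deg=2)
ca_model = np.poly1d(coeffs)

def bone_calcium_fraction(rho):
    """Modeled Ca weight fraction for a voxel of density rho (g/cm^3),
    clipped to a loosely physical range."""
    return float(np.clip(ca_model(rho), 0.0, 0.4))
```

In a Monte Carlo preprocessing step, each bone voxel's CT-derived density would be mapped through such a fit to a per-voxel elemental composition before cross sections are generated, instead of assigning one fixed bone material everywhere.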

192

A study on 3D Monte Carlo modeling of photon propagation through tissue

Monte Carlo techniques have become popular for modeling random events, taking advantage of powerful computing systems. In particular, they have been applied to simulate processes involving random behavior, such as the diffusion of gamma rays through matter and electron concentrations in semiconductors. Recent medical innovations such as Computer Automated Tomography (CAT) and Positron Emission Tomography (PET) are ideal for

M. Kemal Kiymik

1995-01-01

193

Photon migration in upscaled tissue models: measurements and Monte Carlo simulations

In order to investigate the applicability of Monte-Carlo simulations for (Doppler) light scattering in tissue, two upscaled experimental models were constructed. The models consisted of thin layers, either water or gelatin, with scatterers, which can be moved relative to each other. Measurements and simulations of the scattered intensity and the Doppler frequency moments are in rather good agreement.

Frits F. de Mul; M. H. Koelink; M. Kok; Jan Greve; Reindert Graaff; A. C. Dassel; Jan G. Aarnoudse

1993-01-01

194

Photon migration in upscaled tissue models: measurements and Monte Carlo simulations

NASA Astrophysics Data System (ADS)

In order to investigate the applicability of Monte-Carlo simulations for (Doppler) light scattering in tissue, two upscaled experimental models were constructed. The models consisted of thin layers, either water or gelatin, with scatterers, which can be moved relative to each other. Measurements and simulations of the scattered intensity and the Doppler frequency moments are in rather good agreement.

de Mul, Frits F. M.; Koelink, Marco H.; Kok, M.; Greve, Jan; Graaff, Reindert; Dassel, A. C. M.; Aarnoudse, Jan G.

1993-09-01

195

This study evaluated the dosimetric impact of surface dose reduction due to the loss of backscatter from the bone interface in kilovoltage (kV) X-ray radiation therapy. Monte Carlo simulation was carried out using the EGSnrc code. An inhomogeneous phantom containing a thin water layer (0.5-5 mm) on top of a bone (thickness = 1 cm) was irradiated by a clinical 105 kVp photon beam produced by a Gulmay D3225 X-ray machine. Field sizes of 2, 5, and 10 cm diameter and a source-to-surface distance of 20 cm were used. Surface doses for different phantom configurations were calculated using the DOSXYZnrc code. Photon energy spectra at the phantom surface and bone were determined from the phase-space files at the particle scoring planes, which included the multiple crossers. For comparison, all Monte Carlo simulations were repeated in a phantom with the bone replaced by water. Surface dose reduction was found when bone was underneath the water layer. When the water thickness was 1 mm for the circular field of 5 cm diameter, a surface dose reduction of 6.3% was found. The dose reduction decreased to 4.7% and 3.4% when the water thickness increased to 3 and 5 mm, respectively. This shows that the impact of the surface dose uncertainty decreased as the water thickness over the bone increased. This result was supported by the decrease in relative intensity of the lower-energy photons in the energy spectrum when the water layer lay over the bone, compared to the case without bone. We concluded that a surface dose reduction of 7.8%-1.1% occurs as the water thickness increases from 0.5 to 5 mm for circular fields with diameters ranging from 2 to 10 cm. This decrease of surface dose results in an overestimation of the prescribed dose at the patient's surface, and might be a concern when using kV photon beams to treat skin tumors in sites such as the forehead, chest wall, and kneecap. PMID:22955657

Chow, James C L; Owrangi, Amir M

2012-01-01

196

The number of negatively charged nitrogen-vacancy centers (N-V){sup -} in fluorescent nanodiamond (FND) has been determined by photon correlation spectroscopy and Monte Carlo simulations at the single particle level. By taking account of the random dipole orientation of the multiple (N-V){sup -} fluorophores and simulating the probability distribution of their effective numbers (N{sub e}), we found that the actual number (N{sub a}) of the fluorophores is in linear correlation with N{sub e}, with correction factors of 1.8 and 1.2 in measurements using linearly and circularly polarized lights, respectively. We determined N{sub a}=8{+-}1 for 28 nm FND particles prepared by 3 MeV proton irradiation.

Hui, Y.Y.; Chang, Y.-R.; Lee, H.-Y.; Chang, H.-C. [Institute of Atomic and Molecular Sciences, Academia Sinica, Taipei 106, Taiwan (China); Lim, T.-S. [Department of Physics, Tunghai University, Taichung 407, Taiwan (China); Fann Wunshain [Institute of Atomic and Molecular Sciences, Academia Sinica, Taipei 106, Taiwan (China); Department of Physics, National Taiwan University, Taipei 106, Taiwan (China)

2009-01-05

197

Monte Carlo Simulation Model of Energetic Proton Transport through Self-generated Alfvén Waves

NASA Astrophysics Data System (ADS)

A new Monte Carlo simulation model for the transport of energetic protons through self-generated Alfvén waves is presented. The key point of the model is that, unlike the previous ones, it employs the full form (i.e., includes the dependence on the pitch-angle cosine) of the resonance condition governing the scattering of particles off Alfvén waves—the process that approximates the wave-particle interactions in the framework of quasilinear theory. This allows us to model the wave-particle interactions in weak turbulence more adequately, in particular, to implement anisotropic particle scattering instead of isotropic scattering, which the previous Monte Carlo models were based on. The developed model is applied to study the transport of flare-accelerated protons in an open magnetic flux tube. Simulation results for the transport of monoenergetic protons through the spectrum of Alfvén waves reveal that the anisotropic scattering leads to spatially more distributed wave growth than isotropic scattering. This result can have important implications for diffusive shock acceleration, e.g., affect the scattering mean free path of the accelerated particles in, and the size of, the foreshock region.

Afanasiev, A.; Vainio, R.

2013-08-01
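As a toy illustration of the contrast the abstract draws, the sketch below compares isotropic re-sampling of the pitch-angle cosine with weak (quasilinear-like) pitch-angle diffusion in a 1D walk along a field line; the particle numbers, step sizes, and the Gaussian small-angle kick are hypothetical choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_part, n_steps, v, dt = 2000, 400, 1.0, 0.05

def transport(anisotropic):
    z = np.zeros(n_part)                      # distance along the flux tube
    mu = rng.uniform(-1, 1, n_part)           # pitch-angle cosine
    for _ in range(n_steps):
        z += v * mu * dt                      # free streaming between scatterings
        if anisotropic:
            # small-angle kicks: slow diffusion in mu, direction persists
            mu = np.clip(mu + rng.normal(0.0, 0.1, n_part), -1.0, 1.0)
        else:
            mu = rng.uniform(-1, 1, n_part)   # isotropic re-scattering each step
    return z

z_iso = transport(False)
z_aniso = transport(True)
# weakly (anisotropically) scattered particles stream much farther
assert z_aniso.std() > z_iso.std()
```

With isotropic re-sampling the displacement decorrelates every step, while small-angle kicks keep each particle moving in nearly the same direction for many steps, hence the larger spatial spread of the population.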

198

MONTE CARLO SIMULATION MODEL OF ENERGETIC PROTON TRANSPORT THROUGH SELF-GENERATED ALFVEN WAVES

A new Monte Carlo simulation model for the transport of energetic protons through self-generated Alfven waves is presented. The key point of the model is that, unlike the previous ones, it employs the full form (i.e., includes the dependence on the pitch-angle cosine) of the resonance condition governing the scattering of particles off Alfven waves-the process that approximates the wave-particle interactions in the framework of quasilinear theory. This allows us to model the wave-particle interactions in weak turbulence more adequately, in particular, to implement anisotropic particle scattering instead of isotropic scattering, which the previous Monte Carlo models were based on. The developed model is applied to study the transport of flare-accelerated protons in an open magnetic flux tube. Simulation results for the transport of monoenergetic protons through the spectrum of Alfven waves reveal that the anisotropic scattering leads to spatially more distributed wave growth than isotropic scattering. This result can have important implications for diffusive shock acceleration, e.g., affect the scattering mean free path of the accelerated particles in, and the size of, the foreshock region.

Afanasiev, A.; Vainio, R., E-mail: alexandr.afanasiev@helsinki.fi [Department of Physics, University of Helsinki (Finland)

2013-08-15

199

Monte Carlo simulations of transport of the bremsstrahlung produced by relativistic runaway electron avalanches are performed for altitudes up to the orbit altitudes where terrestrial gamma-ray flashes (TGFs) have been detected aboard satellites. The photon flux per runaway electron and angular distribution of photons on a hemisphere of radius similar to that of the satellite orbits are calculated as functions of the source altitude z. The calculations yield general results, which are recommended for use in TGF data analysis. The altitude z and polar angle are determined for which the calculated bremsstrahlung spectra and mean photon energies agree with TGF measurements. The correlation of TGFs with variations of the vertical dipole moment of a thundercloud is analyzed. We show that, in agreement with observations, the detected TGFs can be produced in the fields of thunderclouds with charges much smaller than 100 C and that TGFs are not necessarily correlated with the occurrence of blue jets and red sprites.

Babich, L. P., E-mail: babich@elph.vniief.ru; Donskoy, E. N.; Kutsyk, I. M. [All-Russian Research Institute of Experimental Physics, Russian Federal Nuclear Center (Russian Federation)

2008-07-15

200

Hybrid Parallel Programming Models for AMR Neutron Monte-Carlo Transport

NASA Astrophysics Data System (ADS)

This paper deals with High Performance Computing (HPC) applied to neutron transport theory on complex geometries, using both an Adaptive Mesh Refinement (AMR) algorithm and a Monte-Carlo (MC) solver. Several parallelism models are presented and analyzed in this context, among them shared-memory and distributed-memory ones such as domain replication and domain decomposition, together with hybrid strategies. The study is illustrated by weak and strong scalability tests on complex benchmarks on several thousand cores of the petaflop-class supercomputer Tera100.

Dureau, David; Poëtte, Gaël

2014-06-01
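The domain-replication strategy mentioned in the abstract can be sketched in a few lines: every worker owns a full copy of the problem and runs an independent batch of histories, and only the tallies are reduced at the end. The toy slab, its cross section, and the thread pool below are illustrative assumptions, not the paper's benchmark:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Each replica owns a full copy of the (toy) domain and runs an
# independent batch of histories; tallies are reduced at the end.
N_REPLICAS, HISTORIES_PER_REPLICA = 4, 50_000
SIGMA_T = 1.0           # total cross section of the toy slab (1/cm)
SLAB_THICKNESS = 2.0    # cm; collisions are treated as absorbing here

def run_replica(seed):
    rng = np.random.default_rng(seed)       # independent stream per replica
    # distance to first collision; a history "leaks" if it crosses the slab
    d = rng.exponential(1.0 / SIGMA_T, HISTORIES_PER_REPLICA)
    return np.count_nonzero(d > SLAB_THICKNESS)

with ThreadPoolExecutor(max_workers=N_REPLICAS) as pool:
    leaked = sum(pool.map(run_replica, range(N_REPLICAS)))

leakage = leaked / (N_REPLICAS * HISTORIES_PER_REPLICA)
# uncollided leakage through 2 mean free paths should be close to exp(-2)
assert abs(leakage - np.exp(-2.0)) < 0.01
```

Because histories are independent, the reduction is a plain sum; domain decomposition, by contrast, would require exchanging particles between workers whenever a history crosses a subdomain boundary.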

201

NASA Astrophysics Data System (ADS)

The generation of photocurrent in organic solar cells starts with a photon being absorbed in the active layer and creating an excited electron/hole pair (exciton). The exciton is mobile and dissociates into electron and hole at an interface between donor and acceptor material, unless it decays before it reaches the interface. If they do not recombine, the charge carriers migrate toward the appropriate electrode and contribute to the photocurrent. Thus, the efficiency of organic solar cells depends strongly on the morphology and electronic properties of the donor/acceptor materials. Simulating in detail the processes described above is of interest since it enables the modeling of devices with different architectures and materials properties. Since processes such as exciton absorption, electron hopping, and recombination take place on vastly different time scales, we employ an event-driven Monte Carlo algorithm to simulate a coarse grained lattice model of the active layer of organic solar cells.

Robbiano, Vincent; Luettmer-Strathmann, Jutta

2011-04-01
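An event-driven (kinetic) Monte Carlo step of the kind described reduces to selecting the next event in proportion to its rate, so fast and slow processes coexist without a fixed time step. The sketch below is a minimal 1D caricature, with made-up hop and decay rates, of an exciton either decaying or random-walking to the donor/acceptor interface:

```python
import numpy as np

rng = np.random.default_rng(1)
K_HOP, K_DECAY = 1.0, 0.02      # hypothetical hop/decay rates (events per ps)
START, N_EXC = 5, 20_000        # start 5 lattice sites from the D/A interface

dissociated = 0
for _ in range(N_EXC):
    x = START
    while x > 0:
        # event-driven step: choose the next event by its share of the total rate
        if rng.random() < K_DECAY / (K_HOP + K_DECAY):
            break                        # exciton decayed en route
        x += rng.choice((-1, 1))         # hop to a random neighbouring site
        if x > 2 * START:
            break                        # wandered too deep into the donor: lost
    else:
        dissociated += 1                 # reached x == 0: dissociation at interface

yield_frac = dissociated / N_EXC
assert 0.1 < yield_frac < 0.6
```

In a full simulation the waiting time of each event would also be drawn from an exponential with the total rate, and electron/hole hopping and recombination would enter the same event queue with their own (much slower) rates.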

202

NASA Astrophysics Data System (ADS)

Nuclear heating evaluation by Monte-Carlo simulation requires a coupled neutron-photon calculation so as to take into account the contribution of secondary photons. Nuclear data are essential for a good calculation of neutron and photon energy deposition and for secondary photon generation. However, a number of isotopes in the most common nuclear data libraries are affected by energy and/or momentum conservation errors in photon production, or by inaccurate thresholds for photon-emission cross sections. In this paper, we perform a comprehensive survey of the three evaluations JEFF3.1.1, JEFF3.2T2 (beta version) and ENDF/B-VII.1, over 142 isotopes. The aim of this survey is, on the one hand, to check the existence of photon production data by neutron reaction and, on the other hand, to verify the consistency of these data using the kinematic limits method recently implemented in the TRIPOLI-4 Monte-Carlo code, developed by CEA (Saclay center). Then, the impact of these inconsistencies on energy deposition scores has been estimated for two materials using a specific nuclear heating calculation scheme in the context of the OSIRIS Material Testing Reactor (CEA/Saclay).

Péron, A.; Malouch, F.; Zoia, A.; Diop, C. M.

2014-06-01

203

The aim of the present study is to demonstrate the potential of accelerated dose calculations, using the fast Monte Carlo (MC) code referred to as PENFAST, rather than the conventional MC code PENELOPE, without losing accuracy in the computed dose. For this purpose, experimental measurements of dose distributions in homogeneous and inhomogeneous phantoms were compared with simulated results using both PENELOPE and PENFAST. The simulations and experiments were performed using a Saturne 43 linac operated at 12 MV (photons), and at 18 MeV (electrons). Pre-calculated phase space files (PSFs) were used as input data to both the PENELOPE and PENFAST dose simulations. Since depth-dose and dose profile comparisons between simulations and measurements in water were found to be in good agreement (within ±1% or 1 mm), the PSF calculation is considered to have been validated. In addition, measured dose distributions were compared to simulated results in a set of clinically relevant, inhomogeneous phantoms, consisting of lung and bone heterogeneities in a water tank. In general, the PENFAST results agree to within 1% or 1 mm with those produced by PENELOPE, and to within 2% or 2 mm with measured values. Our study thus provides a pre-clinical validation of the PENFAST code. It also demonstrates that PENFAST provides accurate results for both photon and electron beams, equivalent to those obtained with PENELOPE. CPU time comparisons between both MC codes show that PENFAST is generally about 9-21 times faster than PENELOPE. PMID:19342258

Habib, B; Poumarede, B; Tola, F; Barthe, J

2010-01-01
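A simplified version of the dose/distance agreement criterion used in such comparisons (a point passes if the dose agrees within a tolerance, or a nearby reference point does) can be written directly; the depth-dose curves below are synthetic stand-ins, not PENFAST or PENELOPE output:

```python
import numpy as np

# toy depth-dose curves on a 1 mm grid (hypothetical data)
depth = np.arange(0.0, 100.0, 1.0)                # mm
ref   = np.exp(-depth / 60.0)                     # "measured" curve
test  = np.exp(-(depth + 0.4) / 60.0) * 1.005     # slightly shifted and scaled

def passes(ref, test, depth, dose_tol=0.02, dist_tol=2.0):
    """Point passes if dose agrees within dose_tol (relative to the maximum)
    OR some reference point within dist_tol mm matches the test dose."""
    ok = np.abs(test - ref) <= dose_tol * ref.max()
    for i in np.flatnonzero(~ok):
        near = np.abs(depth - depth[i]) <= dist_tol
        if np.any(np.abs(ref[near] - test[i]) <= dose_tol * ref.max()):
            ok[i] = True
    return ok

pass_rate = passes(ref, test, depth).mean()
assert pass_rate > 0.95    # a 2% / 2 mm criterion tolerates this small shift
```

This is the composite "percent difference or distance-to-agreement" test; the gamma index commonly reported in clinical comparisons combines the two tolerances into a single continuous metric rather than an either/or check.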

204

Single-photon transport and mechanical NOON-state generation in microcavity optomechanics

NASA Astrophysics Data System (ADS)

We investigate the single-photon transport in a single-mode optical fiber coupled to an optomechanical system in the single-photon strong-coupling regime. The single-photon transmission amplitude is analytically obtained with a real-space approach and the effects of cavity and mechanical dissipations are studied via master-equation simulations. Based on the theoretical framework, we further propose a heralded probabilistic scheme to generate mechanical NOON states with arbitrary phonon numbers by measuring the sideband photons. The efficiency and fidelity of the scheme are discussed finally.

Ren, Xue-Xin; Li, Hao-Kun; Yan, Meng-Yuan; Liu, Yong-Chun; Xiao, Yun-Feng; Gong, Qihuang

2013-03-01

205

NASA Astrophysics Data System (ADS)

A new concept for the design of flattening filters applied in the generation of 6 and 15 MV photon beams by clinical linear accelerators is evaluated by Monte Carlo simulation. The beam head of the Siemens Primus accelerator has been taken as the starting point for the study of the conceived beam head modifications. The direction-selective filter (DSF) system developed in this work is midway between the classical flattening filter (FF) by which homogeneous transversal dose profiles have been established, and the flattening filter-free (FFF) design, by which advantages such as increased dose rate and reduced production of leakage photons and photoneutrons per Gy in the irradiated region have been achieved, whereas dose profile flatness was abandoned. The DSF concept is based on the selective attenuation of bremsstrahlung photons depending on their direction of emission from the bremsstrahlung target, accomplished by means of newly designed small conical filters arranged close to the target. This results in the capture of large-angle scattered Compton photons from the filter in the primary collimator. Beam flatness has been obtained up to any field cross section which does not exceed a circle of 15 cm diameter at 100 cm focal distance, such as 10 × 10 cm2, 4 × 14.5 cm2 or less. This flatness offers simplicity of dosimetric verifications, online controls and plausibility estimates of the dose to the target volume. The concept can be utilized when the application of small- and medium-sized homogeneous fields is sufficient, e.g. in the treatment of prostate, brain, salivary gland, larynx and pharynx as well as pediatric tumors and for cranial or extracranial stereotactic treatments. Significant dose rate enhancement has been achieved compared with the FF system, with enhancement factors 1.67 (DSF) and 2.08 (FFF) for 6 MV, and 2.54 (DSF) and 3.96 (FFF) for 15 MV. 
Shortening the delivery time per fraction matters with regard to workflow in a radiotherapy department, patient comfort, reduction of errors due to patient movement, and a slight, probably just noticeable improvement of the treatment outcome due to radiobiological reasons. In comparison with the FF system, the number of head leakage photons per Gy in the irradiated region has been reduced at 15 MV by factors 1/2.54 (DSF) and 1/3.96 (FFF), and the source strength of photoneutrons was reduced by factors 1/2.81 (DSF) and 1/3.49 (FFF).

Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn

2011-07-01

206

Unified single-photon and single-electron counting statistics: From cavity QED to electron transport

A key ingredient of cavity QED is the coupling between the discrete energy levels of an atom and photons in a single-mode cavity. The addition of periodic ultrashort laser pulses allows one to use such a system as a source of single photons--a vital ingredient in quantum information and optical computing schemes. Here we analyze and time-adjust the photon-counting statistics of such a single-photon source and show that the photon statistics can be described by a simple transport-like nonequilibrium model. We then show that there is a one-to-one correspondence of this model to that of nonequilibrium transport of electrons through a double quantum dot nanostructure, unifying the fields of photon-counting statistics and electron-transport statistics. This correspondence empowers us to adapt several tools previously used for detecting quantum behavior in electron-transport systems (e.g., super-Poissonian shot noise and an extension of the Leggett-Garg inequality) to single-photon-source experiments.

Lambert, Neill [Advanced Science Institute, RIKEN, Saitama 351-0198 (Japan); Chen, Yueh-Nan [Department of Physics and National Center for Theoretical Sciences, National Cheng-Kung University, Tainan 701, Taiwan (China); Nori, Franco [Advanced Science Institute, RIKEN, Saitama 351-0198 (Japan); Physics Department, University of Michigan, Ann Arbor, MI 48109-1040 (United States)

2010-12-15
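One of the transport-statistics tools the abstract alludes to, the variance-to-mean (Fano) ratio of the counts, is easy to demonstrate: a pulsed source emitting at most one photon per cycle is sub-Poissonian, while a coherent source with the same mean count is Poissonian. The emission probability and run counts below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)
P_EMIT, N_PULSES, N_RUNS = 0.6, 100, 5000

# ideal pulsed single-photon source: at most one photon per pulse
sps_counts = rng.binomial(N_PULSES, P_EMIT, N_RUNS)
# coherent (laser-like) source with the same mean: Poissonian counts
poisson_counts = rng.poisson(N_PULSES * P_EMIT, N_RUNS)

def fano(c):
    """Fano factor: variance-to-mean ratio of the counting distribution."""
    return c.var() / c.mean()

# single-photon statistics are sub-Poissonian (F = 1 - p < 1);
# coherent light gives F close to 1
assert fano(sps_counts) < 0.5 < fano(poisson_counts)
```

The same ratio, applied to electron counts through a quantum dot, is how super-Poissonian shot noise is diagnosed in the transport setting, which is the correspondence the paper exploits.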

207

Comparison of generalized transport and Monte-Carlo models of the escape of a minor species

NASA Technical Reports Server (NTRS)

The steady-state diffusion of a minor species through a static background species is studied using a Monte Carlo model and a generalized 16-moment transport model. The two models are in excellent agreement in the collision-dominated region and in the 'transition region'. In the 'collisionless' region the 16-moment solution contains two singularities, and physical meaning cannot be assigned to the solution in their vicinity. In all regions, agreement between the models is best for the distribution function and for the lower-order moments and is less good for higher-order moments. Moments of order higher than the heat flow and hence beyond the level of description provided by the transport model have a noticeable effect on the shape of distribution functions in the collisionless region.

Demars, H. G.; Barakat, A. R.; Schunk, R. W.

1993-01-01

208

Multi-dimensional impurity transport code by Monte Carlo method including gyro-orbit effects

NASA Astrophysics Data System (ADS)

We are developing a new 3D Monte Carlo transport code 'IMPGYRO' for the analysis of heavy impurities in fusion edge plasmas. The code directly solves the 3D equations of motion for the test impurity ions to take into account their gyro motion. Most of the important processes, such as multi-step ionization and Coulomb scattering, are also included in the model. The results for the prompt redeposition rate of tungsten ions agree well with analytic results. In addition, 2D density profiles for tungsten ions of each charge state in a simple slab geometry have been calculated for given background plasma profiles typical of a detached plasma state. Although the code is still under development, these initial results show that it has potential as a useful tool, not only for the analysis of prompt redeposition very close to the wall, but also for the analysis of larger-scale impurity transport processes.

Hyodo, I.; Hirano, M.; Miyamoto, K.; Hoshino, K.; Hatayama, A.

2003-03-01
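Integrating the full equations of motion to retain the gyro orbit, as IMPGYRO does, is commonly done with the Boris scheme, whose rotation step conserves kinetic energy exactly. The sketch below is a generic Boris push in a uniform magnetic field with a hypothetical charge-to-mass ratio, not the IMPGYRO implementation:

```python
import numpy as np

Q_M = 1.0e6                        # charge-to-mass ratio (C/kg), hypothetical ion
B = np.array([0.0, 0.0, 2.0])      # uniform field (T), along z
DT = 1.0e-8                        # s, a small fraction of the gyro-period

def boris_push(v, dt=DT):
    """One Boris rotation step (no electric field): rotate v about B."""
    t = 0.5 * Q_M * dt * B         # half-step rotation vector
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = v + np.cross(v, t)
    return v + np.cross(v_prime, s)

v = np.array([1.0e4, 0.0, 0.0])    # m/s, perpendicular to B
speed0 = np.linalg.norm(v)
for _ in range(1000):              # about three gyro-periods at these parameters
    v = boris_push(v)

# the Boris rotation conserves kinetic energy to round-off
assert abs(np.linalg.norm(v) - speed0) < 1e-6 * speed0
```

In a real edge-plasma code the push would be interleaved with position updates, an electric-field half-kick, and the Monte Carlo ionization and Coulomb-scattering operators mentioned in the abstract.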

209

Monte Carlo simulations of charge transport in organic systems with true off-diagonal disorder.

In this work, a novel method to model off-diagonal disorder in organic materials has been developed. The off-diagonal disorder is taken directly from the geometry of the system, which includes both a distance and an orientational dependence on the constituent molecules, and does not rely on a parametric random distribution. The geometry of the system is generated by running molecular dynamics simulations on phenylene-vinylene oligomers packed into boxes. The effect of the kind of randomness generated in this way is then investigated by means of Monte Carlo simulations of the charge transport in these boxes and a comparison is made to the commonly used model of off-diagonal disorder, where only the distance dependence is accounted for. It is shown that this new refined way of treating the disorder has a significant impact on the charge transport, while still being compliant with previously published and confirmed results. PMID:22998284

Jakobsson, Mattias; Linares, Mathieu; Stafström, Sven

2012-09-21
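The idea of true off-diagonal disorder is that the hopping prefactor depends on both the separation and the mutual orientation of the molecules, not on distance alone. The following is a minimal sketch of that contrast; the exponential form and the cos²θ orientational factor are illustrative assumptions, not the paper's parametrization:

```python
import numpy as np

GAMMA = 2.0    # inverse localization length (1/nm), hypothetical
NU0 = 1.0      # attempt frequency (arbitrary units)

def transfer_rate(r_ij, cos_theta):
    """Hopping prefactor with distance AND orientation dependence:
    decays exponentially with separation and is largest when the
    conjugated backbones are parallel (|cos_theta| -> 1)."""
    orientational = cos_theta ** 2              # simple dipole-like factor
    return NU0 * np.exp(-2.0 * GAMMA * r_ij) * orientational

def rate_distance_only(r_ij):
    """Commonly used model: distance dependence only."""
    return NU0 * np.exp(-2.0 * GAMMA * r_ij)

r = 1.2  # nm
assert transfer_rate(r, 1.0) == rate_distance_only(r)   # parallel: rates agree
assert transfer_rate(r, 0.0) == 0.0                     # perpendicular: suppressed
assert transfer_rate(r, 0.5) < rate_distance_only(r)
```

In the paper the geometry (and hence both r and the orientation) comes from molecular dynamics snapshots of the packed oligomers rather than from a parametric random distribution; only that geometric input distinguishes the two disorder models compared here.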

210

In the present study, a number of brachytherapy sources and activation media were simulated using the MCNPX code and the results were analyzed based on the dose enhancement factor values. Furthermore, two new brachytherapy sources (¹³¹Cs and a hypothetical ¹⁷⁰Tm) were evaluated for their application in photon activation therapy (PAT). ¹²⁵I, ¹⁰³Pd, ¹³¹Cs and hypothetical ¹⁷⁰Tm brachytherapy sources were simulated in water and their dose rate constants and radial dose functions were compared with previously published data. The sources were then simulated in a soft tissue phantom in which Ag, I, Pt or Au activation media were uniformly distributed in the tumour volume. These simulations were performed using the MCNPX code, and the dose enhancement factor (DEF) was obtained for 7, 18 and 30 mg/ml concentrations of the activation media. Each combination of source, activation medium and concentration was evaluated in a separate simulation. The calculated dose rate constants and radial dose functions were in agreement with the published data for the aforementioned sources. The maximum DEF was found to be 5.58 for the combination of the ¹⁷⁰Tm source with a 30 mg/ml concentration of I. The DEFs for the ¹³¹Cs and ¹⁷⁰Tm sources for all four activation media were higher than those for the other sources and activation media. From this point of view, these two sources can be more useful in photon activation therapy with photon emitter sources. Furthermore, ¹³¹Cs and ¹⁷⁰Tm brachytherapy sources can be proposed as new options for use in the field of PAT. PMID:23934379

Bakhshabadi, Mahdi; Ghorbani, Mahdi; Meigooni, Ali Soleimani

2013-09-01

211

The purpose of this work is to revisit the impediments and characteristics of fast Monte Carlo techniques for applications in radiation therapy treatment planning using new methods of utilizing pregenerated electron tracks. The limitations of various techniques for the improvement of speed and accuracy of electron transport have been evaluated. A method is proposed that takes advantage of large available memory in current computer hardware for extensive generation of precalculated data. Primary tracks of electrons are generated in the middle of homogeneous materials (water, air, bone, lung) and with energies between 0.2 and 18 MeV using the EGSnrc code. Secondary electrons are not transported, but their position, energy, charge, and direction are saved and used as a primary particle. Based on medium type and incident electron energy, a track is selected from the precalculated set. The performance of the method is tested in various homogeneous and heterogeneous configurations and the results were generally within 2% compared to EGSnrc but with a 40-60 times speed improvement. In a second stage the authors studied the obstacles for further increased speed-ups in voxel geometries by including ray-tracing and particle fluence information in the pregenerated track information. The latter method leads to speed increases of about a factor of 500 over EGSnrc for voxel-based geometries. In both approaches, no physical calculation is carried out during the runtime phase after the pregenerated data has been stored even in the presence of heterogeneities. The precalculated data are generated for each particular material and this improves the performance of the precalculated Monte Carlo code both in terms of accuracy and speed. Precalculated Monte Carlo codes are accurate, fast, and physics independent and therefore applicable to different radiation types including heavy-charged particles.

Jabbari, Keyvan; Keall, Paul; Seuntjens, Jan [Medical Physics Unit, McGill University Health Center, Montreal, Quebec H3G 1A4 (Canada); Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California 94305-5847 (United States); Medical Physics Unit, McGill University Health Center, Montreal, Quebec H3G 1A4 (Canada)

2009-02-15
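The runtime phase described above amounts to a table lookup: tracks pregenerated per material and energy are selected, with no physics computed on the fly. A minimal sketch of that selection logic follows; the "tracks" are placeholder numbers and the energy-bin edges are invented, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(4)
MATERIALS = ("water", "air", "bone", "lung")
ENERGY_BINS = np.array([0.2, 0.5, 1.0, 2.0, 6.0, 18.0])   # MeV bin edges

# pregenerated library: (material, energy bin) -> stored tracks; here each
# "track" is just a random residual range standing in for the real track data
track_library = {
    (m, b): [rng.exponential(0.5 * ENERGY_BINS[b]) for _ in range(100)]
    for m in MATERIALS for b in range(len(ENERGY_BINS))
}

def pick_track(material, energy_mev):
    """Runtime phase: select a stored track by medium type and incident
    energy; no physical calculation is carried out here."""
    b = int(np.searchsorted(ENERGY_BINS, energy_mev))
    b = min(b, len(ENERGY_BINS) - 1)
    tracks = track_library[(material, b)]
    return tracks[rng.integers(len(tracks))]

t = pick_track("water", 5.0)
assert t >= 0.0
```

The memory-for-speed trade is visible even in this toy: the library is built once up front, and every simulated history afterwards costs only a hash lookup and an index draw.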

212

NASA Astrophysics Data System (ADS)

To a large extent, the flow and transport behaviour within a subsurface reservoir is governed by its permeability. Typically, permeability measurements of a subsurface reservoir are affordable at only a few spatial locations. Due to this lack of information, permeability fields are preferably described by stochastic models rather than deterministically. A stochastic method is needed to assess the propagation of the input uncertainty in permeability through the system of partial differential equations describing flow and transport to the output quantity of interest. Monte Carlo (MC) is an established method for quantifying uncertainty arising in subsurface flow and transport problems. Although robust and easy to implement, MC suffers from slow statistical convergence. To reduce the computational cost of MC, the multilevel Monte Carlo (MLMC) method was introduced. Instead of sampling a random output quantity of interest on the finest affordable grid, as in the case of MC, MLMC operates on a hierarchy of grids. If parts of the sampling process are successfully delegated to coarser grids where sampling is inexpensive, MLMC can dramatically outperform MC. MLMC has proven to accelerate MC for several applications including integration problems, stochastic ordinary differential equations in finance, and stochastic elliptic and hyperbolic partial differential equations. In this study, MLMC is combined with a reservoir simulator to assess uncertain two-phase (water/oil) flow and transport within a random permeability field. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. It is found that MLMC yields significant speed-ups with respect to MC while providing results of essentially equal accuracy. This finding holds true not only for one specific Gaussian logarithmic permeability model but for a range of correlation lengths and variances.

Müller, Florian; Jenny, Patrick; Meyer, Daniel

2014-05-01
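The MLMC idea can be sketched on a toy problem: take most samples on a coarse grid where they are cheap, and correct with a few samples of the fine-minus-coarse difference, using the same random input on both grids of each correction. The "solver" below is a stand-in with an artificial h-dependent bias, not a reservoir simulator:

```python
import numpy as np

rng = np.random.default_rng(5)

def solve(sample, h):
    """Toy 'flow solver': quantity of interest for one random input,
    discretized with grid spacing h (the h-term mimics discretization bias)."""
    return np.sin(sample) + h * sample

def mlmc(levels, n_samples):
    """Multilevel estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    with the SAME random inputs used on both grids of each correction."""
    est = 0.0
    for l, n in zip(levels, n_samples):
        h = 2.0 ** -l
        xs = rng.normal(0.0, 1.0, n)
        if l == levels[0]:
            est += solve(xs, h).mean()                       # coarsest level
        else:
            est += (solve(xs, h) - solve(xs, 2 * h)).mean()  # level correction
    return est

# few samples suffice on fine grids because corrections have small variance
estimate = mlmc(levels=[0, 1, 2, 3], n_samples=[40_000, 4_000, 400, 100])
# for X ~ N(0,1): E[sin(X)] = 0 and the h*X bias term also has mean zero
assert abs(estimate) < 0.1
```

Plain MC would need all 40,000 samples on the finest grid to reach the same bias; here only 100 fine-grid solves are performed, which is the source of the speed-up the abstract reports.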

213

Hybrid two-dimensional Monte-Carlo electron transport in self-consistent electromagnetic fields

The physics and numerics of the hybrid electron transport code ANTHEM are described. The need for the hybrid modeling of laser generated electron transport is outlined, and a general overview of the hybrid implementation in ANTHEM is provided. ANTHEM treats the background ions and electrons in a laser target as coupled fluid components moving relative to a fixed Eulerian mesh. The laser converts cold electrons to an additional hot electron component which evolves on the mesh as either a third coupled fluid or as a set of Monte Carlo PIC particles. The fluids and particles move in two-dimensions through electric and magnetic fields calculated via the Implicit Moment method. The hot electrons are coupled to the background thermal electrons by Coulomb drag, and both the hot and cold electrons undergo Rutherford scattering against the ion background. Subtleties of the implicit E- and B-field solutions, the coupled hydrodynamics, and large time step Monte Carlo particle scattering are discussed. Sample applications are presented.

Mason, R.J.; Cranfill, C.W.

1985-01-01

214

A Monte Carlo transport code study of the space radiation environment using FLUKA and ROOT

NASA Astrophysics Data System (ADS)

We report on the progress of a current study aimed at developing a state-of-the-art Monte-Carlo computer simulation of the space radiation environment using advanced computer software techniques recently available at CERN, the European Laboratory for Particle Physics in Geneva, Switzerland. By taking the next-generation computer software appearing at CERN and adapting it to known problems in the implementation of space exploration strategies, this research is identifying changes necessary to bring these two advanced technologies together. The radiation transport tool being developed is tailored to the problem of taking measured space radiation fluxes impinging on the geometry of any particular spacecraft or planetary habitat and simulating the evolution of that flux through an accurate model of the spacecraft material. The simulation uses the latest known results in low-energy and high-energy physics. The output is a prediction of the detailed nature of the radiation environment experienced in space as well as the thermal neutron albedo and secondary particle albedo created by the spacecraft material itself. Beyond doing the physics transport of the incident flux using a Monte Carlo code called FLUKA, our software tool will provide a self-contained stand-alone object-oriented analysis and visualization infrastructure. The latter is known as ROOT. We will also describe the method for defining spacecraft geometries by utilizing aerospace finite element models (FEMs).

Wilson, Thomas; Pinsky, Lawrence; Carminati, Federico; Brun, René; Ferrari, Alfredo; Sala, Paola; Empl, A.; MacGibbon, Jane

2001-02-01

215

Neoclassical electron transport calculation by using {delta}f Monte Carlo method

High electron temperature plasmas with a steep temperature gradient in the core are obtained in recent experiments in the Large Helical Device [A. Komori et al., Fusion Sci. Technol. 58, 1 (2010)]. Such plasmas are called core electron-root confinement (CERC) and have attracted much attention. In typical CERC plasmas, the radial electric field shows a transition from a small negative value (ion root) to a large positive value (electron root), and the radial electric field in helical plasmas is determined dominantly by the ambipolar condition of the neoclassical particle flux. To investigate the neoclassical transport of such plasmas precisely, the numerical neoclassical transport code FORTEC-3D [S. Satake et al., J. Plasma Fusion Res. 1, 002 (2006)], which solves the drift kinetic equation based on the {delta}f Monte Carlo method and has so far been applied to ion species, is extended to treat electron neoclassical transport. To check the validity of the new FORTEC-3D code, benchmark calculations are carried out against the GSRAKE [C. D. Beidler et al., Plasma Phys. Controlled Fusion 43, 1131 (2001)] and DCOM/NNW [A. Wakasa et al., Jpn. J. Appl. Phys. 46, 1157 (2007)] codes, which calculate neoclassical transport using certain approximations. The benchmark calculation shows good agreement among the FORTEC-3D, GSRAKE and DCOM/NNW codes for a low-temperature (T{sub e}(0)=1.0 keV) plasma. It is also confirmed that the finite-orbit-width effect included in FORTEC-3D has little effect on neoclassical transport even for low-collisionality plasmas, provided the temperature is low. However, for a higher temperature (5 keV at the core) plasma, significant differences arise among FORTEC-3D, GSRAKE, and DCOM/NNW. These results show the importance of evaluating electron neoclassical transport by rigorously solving the kinetic equation, including the effect of the finite radial drift, for high electron temperature plasmas.

Matsuoka, Seikichi [Graduate University for Advanced Studies (SOKENDAI), Toki 509-5292 (Japan); Satake, Shinsuke; Yokoyama, Masayuki [Graduate University for Advanced Studies (SOKENDAI), Toki 509-5292 (Japan); National Institute for Fusion Science, Toki 509-5292 (Japan); Wakasa, Arimitsu; Murakami, Sadayoshi [Department of Nuclear Engineering, Kyoto University, Kyoto 606-8501 (Japan)

2011-03-15

216

Dosimetric validation of Acuros XB with Monte Carlo methods for photon dose calculations

Purpose: The dosimetric accuracy of the recently released Acuros XB advanced dose calculation algorithm (Varian Medical Systems, Palo Alto, CA) is investigated for single radiation fields incident on homogeneous and heterogeneous geometries, and a comparison is made to the analytical anisotropic algorithm (AAA). Methods: Ion chamber measurements for the 6 and 18 MV beams within a range of field sizes (from 4.0x4.0 to 30.0x30.0 cm{sup 2}) are used to validate Acuros XB dose calculations within a unit density phantom. The dosimetric accuracy of Acuros XB in the presence of lung, low-density lung, air, and bone is determined using BEAMnrc/DOSXYZnrc calculations as a benchmark. Calculations using the AAA are included for reference to a current superposition/convolution standard. Results: Basic open field tests in a homogeneous phantom reveal an Acuros XB agreement with measurement to within {+-}1.9% in the inner field region for all field sizes and energies. Calculations on a heterogeneous interface phantom were found to agree with Monte Carlo calculations to within {+-}2.0%({sigma}{sub MC}=0.8%) in lung ({rho}=0.24 g cm{sup -3}) and within {+-}2.9%({sigma}{sub MC}=0.8%) in low-density lung ({rho}=0.1 g cm{sup -3}). In comparison, differences of up to 10.2% and 17.5% in lung and low-density lung were observed in the equivalent AAA calculations. Acuros XB dose calculations performed on a phantom containing an air cavity ({rho}=0.001 g cm{sup -3}) were found to be within the range of {+-}1.5% to {+-}4.5% of the BEAMnrc/DOSXYZnrc calculated benchmark ({sigma}{sub MC}=0.8%) in the tissue above and below the air cavity. A comparison of Acuros XB dose calculations performed on a lung CT dataset with a BEAMnrc/DOSXYZnrc benchmark shows agreement within {+-}2%/2mm and indicates that the remaining differences are primarily a result of differences in physical material assignments within a CT dataset. 
Conclusions: By considering the fundamental particle interactions in matter based on theoretical interaction cross sections, the Acuros XB algorithm is capable of modeling radiotherapy dose deposition with accuracy only previously achievable with Monte Carlo techniques.

Bush, K.; Gagne, I. M.; Zavgorodni, S.; Ansbacher, W.; Beckham, W. [Department of Medical Physics, British Columbia Cancer Agency-Vancouver Island Center, Victoria, British Columbia V8R 6V5 (Canada)

2011-04-15

217

3D electro-thermal Monte Carlo study of transport in confined silicon devices

NASA Astrophysics Data System (ADS)

The rapid proliferation of portable microelectronic devices and the continued shrinking of microprocessor dimensions have provided tremendous motivation for scientists and engineers to continue the down-scaling of these devices. For several decades, innovations have allowed components such as transistors to be physically reduced in size, allowing the famous Moore's law to hold true. As these transistors approach the atomic scale, however, further reduction becomes less practical. As new technologies overcome these limitations, they face new, unexpected problems, including the ability to accurately simulate and predict the behavior of these devices, and to manage the heat they generate. This work uses a 3D Monte Carlo (MC) simulator to investigate the electro-thermal behavior of quasi-one-dimensional electron gas (1DEG) multigate MOSFETs. In order to study these highly confined architectures, the inclusion of quantum correction becomes essential. To better capture the influence of carrier confinement, the electrostatically quantum-corrected full-band MC model has the added feature of being able to incorporate subband scattering; the scattering-rate selection introduces quantum correction into carrier movement. In addition to the quantum effects, scaling introduces thermal management issues due to the surge in power dissipation. Solving these problems will continue to bring improvements in battery life, performance, and size constraints of future devices. We have coupled our electron transport Monte Carlo simulation to Aksamija's phonon transport so that we may accurately and efficiently study carrier transport, heat generation, and other effects at the transistor level. This coupling utilizes anharmonic phonon decay and temperature-dependent scattering rates.
One immediate advantage of our coupled electro-thermal Monte Carlo simulator is its ability to provide an accurate description of the spatial variation of self-heating and its effect on non-equilibrium carrier dynamics, a key determinant of device performance. The dependence of short-channel effects and Joule heating on the lateral scaling of the cross-section is specifically explored in this work. Finally, this dissertation studies the basic tradeoffs between various n-channel multigate architectures with square cross-sectional lengths ranging from 30 nm down to 5 nm.

Mohamed, Mohamed Y.

218

NASA Astrophysics Data System (ADS)

This paper summarizes two improvements to a real production code obtained using vectorization and multitasking techniques. After a short description of the Monte Carlo algorithms employed in our neutron transport problems, we briefly describe the work done to obtain a vector code. Vectorization principles are presented and measured performances on the CRAY 1S, CYBER 205 and CRAY X-MP are compared in terms of vector lengths. The second part of this work is an adaptation to multitasking on the CRAY X-MP, using exclusively the standard multitasking tools available with FORTRAN under the COS 1.13 system. Two examples are presented. The goal of the first is to measure the overhead inherent in multitasking when tasks become too small, and to define a granularity threshold, that is to say a minimum size for a task. With the second example we propose a method, strongly X-MP oriented, for obtaining the best speedup factor on such a computer. In conclusion, we demonstrate that Monte Carlo algorithms are very well suited to future vector and parallel computers.
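
The batching idea behind vectorization can be illustrated with a small sketch (ours, in modern Python/NumPy rather than the paper's CRAY FORTRAN): restructuring a per-history loop into a single operation over a batch of histories, here for sampling exponential free-flight distances, the innermost kernel of many Monte Carlo transport codes.

```python
import numpy as np

def free_flights_scalar(n, sigma_t, rng):
    """Sample n free-flight distances one history at a time (scalar style)."""
    out = np.empty(n)
    for i in range(n):
        out[i] = -np.log(rng.random()) / sigma_t
    return out

def free_flights_vector(n, sigma_t, rng):
    """Sample the same distances as one array operation over a batch of
    histories -- the kind of restructuring that vector machines reward."""
    return -np.log(rng.random(n)) / sigma_t

rng = np.random.default_rng(0)
d = free_flights_vector(100000, sigma_t=2.0, rng=rng)
# The sample mean should approach the mean free path 1/sigma_t = 0.5
print(round(float(d.mean()), 3))
```

Both functions draw from the same random stream, so with identical seeds they produce identical histories; only the memory-access pattern changes.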

Chauvet, Yves

1985-07-01

219

A Monte Carlo based radiotherapy simulator

This paper presents a Monte Carlo based simulator of the radiotherapy treatment chain. The high energy simulation module (HESM) incorporates components for beam generation, irradiation set up, radiation transport modeling and dose distribution calculation. The beam is defined by means of particle charge, energy, direction and position. Comprehensive modeling of photon and electron interactions in the radiotherapy energy range has

K. Bliznakova; Z. Kolitsi; N. Pallikarakis

2004-01-01

220

Mechanisms of energy transport during ultrashort laser pulse (USLP) ablation are investigated in this paper. Nonequilibrium electron transport, material ionization, and density-change effects are studied using atomistic models, the molecular dynamics (MD) and Monte Carlo (MC) methods, in addition to the previously studied laser absorption, heat conduction, and stress wave propagation. The target material is treated as consisting of two subsystems: a valence-electron system and a lattice system. The MD method is applied to analyze the motion of atoms, while the MC method is applied to simulate electron dynamics and multiscattering events between particles. Early-time laser-energy absorption and redistribution as well as later-time material ablation and expansion processes are analyzed. The model is validated in terms of ablation depth, lattice/electron temperature distribution and evolution, and plume front velocity, through comparisons with experimental and theoretical results in the literature. It is generally believed that the hydrodynamic motion of the ablated material is negligible for USLPs, but this study shows that this holds only for its effect on laser-energy deposition. This study shows that accounting for hydrodynamic expansion and fast density change in both the electron and lattice systems is important for obtaining a reliable energy transport mechanism in the locally heated zone.

Hu Wenqian; Shin, Yung C.; King, Galen [School of Mechanical Engineering, Purdue University, West Lafayette, Indiana 47907 (United States)

2010-09-01

221

Cartesian Meshing Impacts for PWR Assemblies in Multigroup Monte Carlo and Sn Transport

NASA Astrophysics Data System (ADS)

Hybrid methods of neutron transport, for example using both Monte Carlo and deterministic transport to calculate quantities of interest such as flux and eigenvalue in a nuclear reactor, have greatly increased in use. Many 3D parallel Sn codes apply a Cartesian mesh, and thus for nuclear reactors the representation of curved fuels (cylinders, spheres, etc.) is impacted, both through deviation of mass and inexact geometry representation. For a PWR assembly eigenvalue problem, we explore the errors associated with this Cartesian discrete mesh representation, and perform an analysis to calculate a slope parameter that relates the pcm change to the percent areal/volumetric deviation (areal corresponding to 2D and volumetric to 3D). Our initial analysis demonstrates a linear relationship between pcm change and areal/volumetric deviation using Multigroup MCNP on a PWR assembly, compared to a reference MCNP calculation with exact combinatorial geometry. For the same multigroup problems, we also intend to characterize this linear relationship in discrete ordinates (3D PENTRAN) and discuss issues related to transport cross-comparison. In addition, we discuss auto-conversion techniques with our 3D Cartesian mesh generation tools to allow full generation of MCNP5 inputs (Cartesian mesh and Multigroup XS) from a basis PENTRAN Sn model.
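
The percent areal deviation driving that slope analysis is straightforward to compute; the following is an illustrative sketch (not the authors' tooling), with the pin radius and pitch chosen as typical PWR values rather than taken from the paper.

```python
import math

def areal_deviation(radius, pitch, n):
    """Percent deviation between the exact cross-sectional area of a
    cylindrical fuel pin (a circle in 2D) and its staircase representation
    on an n x n Cartesian mesh over a square pitch cell, assigning each
    mesh cell to 'fuel' when its center point lies inside the circle."""
    h = pitch / n
    count = 0
    for i in range(n):
        for j in range(n):
            x = (i + 0.5) * h - pitch / 2
            y = (j + 0.5) * h - pitch / 2
            if x * x + y * y <= radius * radius:
                count += 1
    mesh_area = count * h * h
    exact = math.pi * radius * radius
    return 100.0 * (mesh_area - exact) / exact

# Illustrative PWR-like pin: radius 0.41 cm in a 1.26 cm pitch cell;
# refinement shrinks the staircase error
for n in (10, 50, 200):
    print(n, round(areal_deviation(0.41, 1.26, n), 3))
```

Plotting the resulting eigenvalue change (in pcm) against this deviation for several mesh refinements is the kind of linear relationship the abstract describes.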

Manalo, K.; Chin, M.; Sjoden, G.

2014-06-01

222

MCNPX Monte Carlo simulations of particle transport in SiC semiconductor detectors of fast neutrons

NASA Astrophysics Data System (ADS)

The aim of this paper was to investigate particle transport properties of a fast neutron detector based on silicon carbide. The MCNPX (Monte Carlo N-Particle eXtended) code was used in our study because it allows seamless particle transport: not only can interacting neutrons be inspected, but secondary particles can also be banked for subsequent transport. Modelling of the fast-neutron response of a SiC detector was carried out for fast neutrons produced by a 239Pu-Be source with a mean energy of about 4.3 MeV. Using the MCNPX code, the following quantities have been calculated: secondary particle flux densities, reaction rates of elastic/inelastic scattering and other nuclear reactions, distribution of residual ions, deposited energy and energy distribution of pulses. The values of reaction rates calculated for different types of reactions and the resulting energy deposition values showed that the incident neutrons transfer part of their energy predominantly via elastic scattering on silicon and carbon atoms. Other fast-neutron induced reactions include inelastic scattering and nuclear reactions followed by production of α-particles and protons. Silicon and carbon recoil atoms, α-particles and protons are charged particles which contribute to the detector response. It was demonstrated that although the bare SiC material can register fast neutrons directly, its detection efficiency can be increased if it is covered by an appropriate conversion layer. Comparison of the simulation results with experimental data was successfully accomplished.

Sedláčková, K.; Zaťko, B.; Šagátová, A.; Pavlovič, M.; Nečas, V.; Stacho, M.

2014-05-01

223

NASA Astrophysics Data System (ADS)

MRED (Monte Carlo Radiative Energy Deposition) is Vanderbilt University's Geant4 application for simulating radiation events in semiconductors. Geant4 comprises the best available computational physics models for the transport of radiation through matter. In addition to the basic radiation transport physics contained in the Geant4 core, MRED has the capability to track energy loss in tetrahedral geometric objects, includes a cross section biasing and track weighting technique for variance reduction, and has additional features relevant to semiconductor device applications. The crucial element of predicting Single Event Upset (SEU) parameters using radiation transport software is the creation of a dosimetry model that accurately approximates the net collected charge at transistor contacts as a function of deposited energy. The dosimetry technique described here is the multiple sensitive volume (MSV) model. It is shown to be a reasonable approximation of the charge collection process, and its parameters can be calibrated to experimental measurements of SEU cross sections. The MSV model, within the framework of MRED, is examined for heavy ion and high-energy proton SEU measurements of a static random access memory.

Warren, Kevin; Reed, Robert; Weller, Robert; Mendenhall, Marcus; Sierawski, Brian; Schrimpf, Ronald

2011-06-01

224

NASA Astrophysics Data System (ADS)

We implemented a Markov Chain Monte Carlo (MCMC) technique within the USINE propagation package to estimate the probability-density functions for cosmic-ray transport and source parameters within a 1D diffusion model. From the measurement of the B/C and 3He/4He ratios as well as of radioactive cosmic-ray clocks, we calculate their probability density functions, with a special emphasis on the halo size L of the Galaxy and the local underdense bubble of size r_h. We also derive the mean, best-fit model parameters and 68% confidence intervals for the various parameters, as well as the envelopes of isotopic ratios. Additionally, we verify the compatibility of the primary fluxes with the transport parameters derived from the B/C analysis before deriving the source parameters. Finally, we investigate the impact of the input ingredients of the propagation model on the best-fitting values of the transport parameters (e.g., the fragmentation cross sections) in order to estimate the importance of the systematic uncertainties. We conclude that the size of the diffusive halo depends on the presence/absence of the local underdensity damping effect on radioactive nuclei. Moreover, we find that models based on fitting B/C are compatible with primary fluxes. The different spectral indices obtained for the propagated primary fluxes up to a few TeV/n can be naturally ascribed to transport effects only, implying universality of elemental source spectra. Finally, we emphasise that the systematic uncertainties found for the transport parameters are larger than the statistical ones, rendering a phenomenological interpretation of the current data difficult.

Putze, A.; Coste, B.; Derome, L.; Donato, F.; Maurin, D.

225

Monte Carlo simulations of phonon transport in nanoporous silicon and germanium

NASA Astrophysics Data System (ADS)

Heat conduction in nanoporous silicon and germanium thin films is studied using a statistical approach. The phonon Boltzmann transport equation is solved with a Monte Carlo technique in order to assess thermal conductivity. The sensitivity of this property to parameters such as the phonon mean free path and the characteristics of the pores (distribution, size, porosity) is discussed and compared to predictions from analytical models. Results point out that thermal properties might be tailored through the design of the porosity and, more specifically, by adjustment of the phonon-pore mean free path. Finally, an effective medium technique is used to extend our work to multilayered crystalline-nanoporous structures. Results show that, owing to pore scattering, a diffusive Fourier regime can be recovered even when the film thickness is below the bulk limit.
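
The role of the phonon-pore mean free path can be sketched with a toy Monte Carlo estimate (illustrative only; the function name and parameter values are ours, not the authors'): when intrinsic phonon-phonon scattering and pore scattering act as independent exponential processes, sampled free flights recover Matthiessen's rule for the combined mean free path.

```python
import random

def effective_mfp(lam_bulk, lam_pore, n=200000, seed=1):
    """Monte Carlo estimate of the combined phonon mean free path when
    intrinsic and pore scattering act independently: each flight ends at
    the nearer of two exponentially sampled events. Matthiessen's rule
    predicts 1/lam = 1/lam_bulk + 1/lam_pore."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        d_bulk = rng.expovariate(1.0 / lam_bulk)  # phonon-phonon event
        d_pore = rng.expovariate(1.0 / lam_pore)  # pore-boundary event
        total += min(d_bulk, d_pore)
    return total / n

lam = effective_mfp(lam_bulk=100e-9, lam_pore=50e-9)
# Matthiessen's rule gives 1/(1/100 + 1/50) nm ~ 33.3 nm
print(round(lam * 1e9, 1))
```

Shrinking the pore spacing (smaller lam_pore) drags the effective mean free path, and hence the conductivity, down toward the pore-limited value, which is the tailoring knob the abstract describes.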

Jean, V.; Fumeron, S.; Termentzidis, K.; Tutashkonko, S.; Lacroix, D.

2014-01-01

226

Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access

One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA (remote memory access) operations.

Romano, Paul K [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory; Forget, Benoit [MIT

2010-01-01

227

Purpose: To investigate the response of plastic scintillation detectors (PSDs) in a 6 MV photon beam of various field sizes using Monte Carlo simulations. Methods: Three PSDs were simulated: A BC-400 and a BCF-12, each attached to a plastic-core optical fiber, and a BC-400 attached to an air-core optical fiber. PSD response was calculated as the detector dose per unit water dose for field sizes ranging from 10×10 down to 0.5×0.5 cm² for both perpendicular and parallel orientations of the detectors to an incident beam. Similar calculations were performed for a CC01 compact chamber. The off-axis dose profiles were calculated in the 0.5×0.5 cm² photon beam and were compared to the dose profile calculated for the CC01 chamber and that calculated in water without any detector. The angular dependence of the PSDs' responses in a small photon beam was studied. Results: In the perpendicular orientation, the response of the BCF-12 PSD varied by only 0.5% as the field size decreased from 10×10 to 0.5×0.5 cm², while the response of BC-400 PSD attached to a plastic-core fiber varied by more than 3% at the smallest field size because of its longer sensitive region. In the parallel orientation, the response of both PSDs attached to a plastic-core fiber varied by less than 0.4% for the same range of field sizes. For the PSD attached to an air-core fiber, the response varied, at most, by 2% for both orientations. Conclusions: The responses of all the PSDs investigated in this work can have a variation of only 1%-2% irrespective of field size and orientation of the detector if the length of the sensitive region is not more than 2 mm long and the optical fiber stems are prevented from pointing directly to the incident source.

Wang, Lilie L. W.; Beddar, Sam [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)

2011-03-15


229

We have developed a "red blood cell (RBC)-photon simulator" to reveal optical propagation in prethrombus blood for various levels of RBC density and aggregation. The simulator investigates optical propagation in prethrombus blood and will be applied to detect it noninvasively for thrombosis prevention at an earlier stage. In our simulator, the Lambert-Beer law is employed to simulate absorption by RBC hemoglobin, while the Monte Carlo method is applied to simulate scattering through iterative calculations. One advantage of our simulator is that the concentrations and distributions of RBCs can be chosen arbitrarily to represent the prethrombus state, which conventional models cannot do. Using the simulator, we found that various levels of RBC density and aggregation have different effects on the propagation of near-infrared light in blood. The same effects were observed in in vitro experiments with 12 bovine blood samples, which were performed to evaluate the simulator. We measured RBC density using the clinical hematocrit index and RBC aggregation using activated whole blood clotting time. The experimental results correspond well to the simulator results. Therefore, we could show that our simulator exhibits the correct optical propagation for prethrombus blood and is applicable for prethrombus detection using multiple detectors. PMID:21342854
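
The absorption/scattering split described above, Beer-Lambert attenuation along Monte Carlo sampled scattering steps, can be sketched in one dimension (a toy model we wrote for illustration, not the RBC simulator itself; all parameter values are made up).

```python
import math
import random

def transmitted_fraction(mu_a, mu_s, thickness, n=20000, seed=2):
    """1D toy photon transport: path lengths between scattering events are
    sampled from the scattering coefficient mu_s, while absorption reduces
    the photon weight continuously along each step via the Beer-Lambert
    law, weight *= exp(-mu_a * step). Scattering is isotropic in 1D (the
    photon continues or reverses); histories whose weight falls below 1e-6
    are terminated as fully absorbed. Returns mean transmitted weight."""
    rng = random.Random(seed)
    total_weight = 0.0
    for _ in range(n):
        x, direction, weight = 0.0, 1, 1.0
        while 0.0 <= x < thickness and weight > 1e-6:
            step = rng.expovariate(mu_s)
            weight *= math.exp(-mu_a * step)
            x += direction * step
            direction = rng.choice((1, -1))
        if x >= thickness:  # tally only photons exiting the far side
            total_weight += weight
    return total_weight / n

print(transmitted_fraction(mu_a=0.1, mu_s=5.0, thickness=1.0))
```

Raising mu_a (more hemoglobin absorption) or mu_s (denser, more aggregated scatterers) lowers the transmitted signal, which is the qualitative dependence the simulator exploits.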

Oshima, Shiori; Sankai, Yoshiyuki

2011-05-01

230

NASA Astrophysics Data System (ADS)

Gel dosimetry has proved to be useful to determine absorbed dose distributions in radiotherapy, as well as to validate treatment plans. Gel dosimetry allows dose imaging and is particularly helpful for non-uniform dose distribution measurements, as may occur when multiple-field irradiation techniques are employed. In this work, we report gel-dosimetry measurements and Monte Carlo (PENELOPE®) calculations for the dose distribution inside a tissue-equivalent phantom exposed to a typical multiple-field irradiation. Irradiations were performed with a 10 MV photon beam from a Varian® Clinac 18 accelerator. The employed dosimeters consisted of layers of Fricke Xylenol Orange radiochromic gel. The method for absorbed dose imaging was based on analysis of visible light transmittance, usually detected by means of a CCD camera. With the aim of finding a simple method for light transmittance image acquisition, a commercial flatbed-like scanner was employed. The experimental and simulated dose distributions have been compared with those calculated with a commercially available treatment planning system, showing a reasonable agreement.

Valente, M.; Aon, E.; Brunetto, M.; Castellano, G.; Gallivanone, F.; Gambarini, G.

2007-09-01

231

This study examines variations of bone and mucosal doses with variable soft tissue and bone thicknesses, mimicking the oral or nasal cavity in skin radiation therapy. Monte Carlo simulations (EGSnrc-based codes) using the clinical kilovoltage (kVp) photon and megavoltage (MeV) electron beams, and the pencil-beam algorithm (Pinnacle³ treatment planning system) using the MeV electron beams were performed in dose calculations. Phase-space files for the 105 and 220 kVp beams (Gulmay D3225 x-ray machine), and the 4 and 6 MeV electron beams (Varian 21 EX linear accelerator) with a field size of 5 cm diameter were generated using the BEAMnrc code, and verified using measurements. Inhomogeneous phantoms containing uniform water, bone and air layers were irradiated by the kVp photon and MeV electron beams. Relative depth, bone and mucosal doses were calculated for the uniform water and bone layers, which were varied in thickness in the ranges of 0.5-2 cm and 0.2-1 cm. A uniform water layer of bolus with thickness equal to the depth of maximum dose (d_max) of the electron beams (0.7 cm for 4 MeV and 1.5 cm for 6 MeV) was added on top of the phantom to ensure that the maximum dose was at the phantom surface. From our Monte Carlo results, the 4 and 6 MeV electron beams were found to produce insignificant bone and mucosal dose (<1%) when the uniform water layer at the phantom surface was thicker than 1.5 cm. When considering the 0.5 cm thin uniform water and bone layers, the 4 MeV electron beam deposited less bone and mucosal dose than the 6 MeV beam. Moreover, it was found that the 105 kVp beam produced more than twice the dose to bone than the 220 kVp beam when the uniform water thickness at the phantom surface was small (0.5 cm). However, the difference in bone dose enhancement between the 105 and 220 kVp beams became smaller when the thicknesses of the uniform water and bone layers in the phantom increased.
Dose in the second bone layer interfacing with air was found to be higher for the 220 kVp beam than for the 105 kVp beam when the bone thickness was 1 cm. In this study, dose deviations of 18% and 17% for the bone and mucosal layers were found between our Monte Carlo simulation results and the pencil-beam algorithm, which overestimated the doses. Relative depth, bone and mucosal doses were studied by varying the beam nature, beam energy and the thicknesses of the bone and uniform water layers, using an inhomogeneous phantom to model the oral or nasal cavity. While the dose distribution in the pharynx region is unavailable due to the lack of a commercial treatment planning system commissioned for kVp beam planning in skin radiation therapy, our study provides essential insight for radiation staff to justify and estimate bone and mucosal dose. PMID:22642985

Chow, James C L; Jiang, Runqing

2012-06-21

232

NASA Astrophysics Data System (ADS)

Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate their accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings, where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.

Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.

2014-06-01

233

NASA Astrophysics Data System (ADS)

Assessment of parametric uncertainty for groundwater reactive transport models is challenging because the models are highly nonlinear with respect to their parameters due to nonlinear reaction equations and process coupling. The nonlinearity may yield parameter distributions that are non-Gaussian and have multiple modes. For such parameter distributions, the widely used nonlinear regression methods may not be able to accurately quantify predictive uncertainty. One solution to this problem is to use Markov Chain Monte Carlo (MCMC) techniques. Both the nonlinear regression and MCMC methods are used in this study for quantification of parametric uncertainty of a surface complexation model (SCM) developed to simulate hexavalent uranium [U(VI)] transport in column experiments. First, a brute force Monte Carlo (MC) simulation with hundreds of thousands of model executions is conducted to understand the surface of the objective function and the predictive uncertainty of uranium concentration. Subsequently, the Gauss-Marquardt-Levenberg method is applied to calibrate the model. The results show that, even with multiple initial guesses, the local optimization method has difficulty finding the global optimum because of the rough surface of the objective function and local optima due to model nonlinearity. Another problem of the nonlinear regression is the underestimation of predictive uncertainty, as both the linear and nonlinear confidence intervals are narrower than those obtained from the brute force MC simulation. Since the brute force MC simulation is computationally expensive, these challenges for parameter estimation and predictive uncertainty analysis are addressed using a computationally efficient MCMC technique, the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm.
The results obtained from DREAM, compared with those from the brute force Monte Carlo simulations, show that MCMC not only successfully infers the multi-modal posterior probability distribution but also provides good estimates of predictive uncertainty. The reason for the poor performance of the nonlinear regression methods is that the Gaussian marginal distributions assumed in the nonlinear regression deviate significantly from the marginal posterior probability distributions estimated by DREAM and the brute force MC simulations.
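
The advantage MCMC holds over local optimization on a multi-modal posterior can be seen with a minimal random-walk Metropolis sketch (DREAM itself uses adaptive differential-evolution proposals, which we do not reproduce; the bimodal toy target below is ours, not the SCM posterior).

```python
import math
import random

def log_post(theta):
    """Toy bimodal log-posterior standing in for the rough, multi-modal
    objective surfaces described above: two unit-width Gaussian modes at
    theta = -2 and theta = +2, computed in a numerically stable way."""
    a = -0.5 * (theta - 2.0) ** 2
    b = -0.5 * (theta + 2.0) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def metropolis(n, step, seed=3):
    """Plain random-walk Metropolis: propose a Gaussian jump, accept with
    probability min(1, posterior ratio). A gradient-free chain like this
    can hop between modes that trap a local optimizer."""
    rng = random.Random(seed)
    theta, samples = 0.0, []
    lp = log_post(theta)
    for _ in range(n):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

s = metropolis(50000, step=2.5)
# The chain should visit both modes, unlike a local optimizer started once
print(round(sum(s) / len(s), 2), min(s) < -1.0, max(s) > 1.0)
```

A histogram of `s` recovers both modes with roughly equal mass, which is exactly the multi-modal structure a Gaussian confidence interval from nonlinear regression cannot represent.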

Shi, X.; Ye, M.; Curtis, G. P.; Lu, D.; Meyer, P. D.; Yabusaki, S.; Wu, J.

2011-12-01

234

The Monte Carlo technique is applied to simulate the processes of the cascade relaxation of gaseous boron at an atomic density of 2.5 × 10²² m⁻³ ionized by photons with energies of 0.7–25 Ryd passing through a cylindrical interaction zone along its axis. The trajectories of electrons are simulated based on photoionization and electron-impact ionization cross sections calculated in the

S Brühl; A G Kochur

2012-01-01

235

Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculation of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement relates to the addition of parallelism

G M Greenman; M J OBrien; R J Procassini; K I Joy

2009-01-01

236

The BNM-LNHB (formerly BNM-LPRI, the French national standard laboratory for ionizing radiation) is equipped with a SATURNE 43 linear accelerator (GE Medical Systems) dedicated to establishing national references of absorbed dose to water for high-energy photon and electron beams. These standards are derived from a dose measurement with a graphite calorimeter and a transfer procedure to water using Fricke dosimeters. This method has already been used to obtain the reference of absorbed dose to water for cobalt-60 beams. The correction factors arising from the perturbations generated by the dosimeters were determined by Monte Carlo calculations. For these applications, the Monte Carlo code PENELOPE was used and user codes were specially developed. The first step consisted of simulating the electron and photon showers produced by primary electrons within the accelerator head to determine the characteristics of the resulting photon beams and absorbed dose distributions in a water phantom. These preliminary computations were described in a previous paper. The second step, described in this paper, deals with the calculation of the perturbation correction factors of the graphite calorimeter and of the Fricke dosimeters. To point out possible systematic biases, these correction factors were also calculated with another Monte Carlo code, EGS4, widely used for years in the field of dose metrology applications. Comparison of the results showed no significant bias. Where possible, experimental verifications confirmed the calculated values. PMID:11419629

Mazurier, J; Gouriou, J; Chauvenet, B; Barthe, J

2001-06-01

237

A new Monte Carlo program for simulating light transport through Port Wine Stain skin.

A new Monte Carlo program is presented for simulating light transport through clinically normal skin and skin containing Port Wine Stain (PWS) vessels. The program consists of an eight-layer mathematical skin model constructed from optical coefficients described previously. A simulation including diffuse illumination at the surface and subsequent light transport through the model is carried out using a radiative transfer theory ray-tracing technique. Total reflectance values over 39 wavelengths are scored by the addition of simulated light returning to the surface within a specified region and surface reflections (calculated using Fresnel's equations). These reflectance values are compared to measurements from individual participants, and characteristics of the model are adjusted until adequate agreement is produced between simulated and measured skin reflectance curves. The absorption and scattering coefficients of the epidermis are adjusted through changes in the simulated concentrations and mean diameters of epidermal melanosomes to reproduce non-lesional skin colour. Pseudo-cylindrical horizontal vessels are added to the skin model, and their simulated mean depths, diameters and number densities are adjusted to reproduce measured PWS skin colour. Accurate reproductions of colour measurement data are produced by the program, resulting in realistic predictions of melanin and PWS blood vessel parameters. Using a modest personal computer, the simulation currently requires an average of five and a half days to complete. PMID:24142045
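
The surface-reflection term the abstract computes "using Fresnel's equations" has a compact standard form; here is a sketch for unpolarized light (the refractive indices in the example are generic illustrative values, not the paper's skin model parameters).

```python
import math

def fresnel_unpolarized(n1, n2, theta_i):
    """Unpolarized Fresnel reflectance at a planar boundary between media
    of refractive indices n1 and n2, for angle of incidence theta_i in
    radians: the average of the s- and p-polarized reflectances."""
    sin_t = n1 / n2 * math.sin(theta_i)  # Snell's law
    if sin_t >= 1.0:                     # total internal reflection
        return 1.0
    theta_t = math.asin(sin_t)
    rs = ((n1 * math.cos(theta_i) - n2 * math.cos(theta_t)) /
          (n1 * math.cos(theta_i) + n2 * math.cos(theta_t))) ** 2
    rp = ((n1 * math.cos(theta_t) - n2 * math.cos(theta_i)) /
          (n1 * math.cos(theta_t) + n2 * math.cos(theta_i))) ** 2
    return 0.5 * (rs + rp)

# Normal incidence from air (n = 1.0) into tissue-like medium (n ~ 1.4):
# reduces to ((n1 - n2)/(n1 + n2))^2 = (0.4/2.4)^2 ~ 0.028
print(round(fresnel_unpolarized(1.0, 1.4, 0.0), 3))
```

In a light-transport simulation of this kind, this reflectance is added to the surface tally on entry and applied again to rays attempting to exit the skin from below.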

Lister, T; Wright, P A; Chappell, P H

2014-05-01

238

NASA Astrophysics Data System (ADS)

Interface roughness strongly influences the performance of germanium metal-oxide-semiconductor field effect transistors (MOSFETs). In this paper, a 2D full-band Monte Carlo simulator is used to study the impact of interface roughness scattering on electron and hole transport properties in the inversion layers of long- and short-channel Ge MOSFETs. The carrier effective mobility in the channel of Ge MOSFETs and the non-equilibrium transport properties are investigated. Results show that both electron and hole mobility are strongly influenced by interface roughness scattering. The output curves for 50 nm channel-length double-gate n- and p-type Ge MOSFETs show that their drive currents are significantly improved compared with those of Si n- and p-MOSFETs with a smooth interface between channel and gate dielectric. Drive current enhancements of 82% and 96% are obtained for the n- and p-MOSFETs, respectively, with a completely smooth interface. However, the enhancement decreases sharply as interface roughness increases. With a very rough interface, the drive currents of Ge MOSFETs are even less than those of Si MOSFETs. Moreover, significant velocity overshoot has also been found in Ge MOSFETs.

Du, Gang; Liu, Xiao-Yan; Xia, Zhi-Liang; Yang, Jing-Feng; Han, Ru-Qi

2010-05-01

239

Particle transport through binary stochastic mixtures has received considerable research attention in the last two decades. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that should be more accurate as a result of improved local material realization modeling. Zimmerman and Adams numerically confirmed these aspects of the Monte Carlo algorithms by comparing the reflection and transmission values computed using these algorithms to a standard suite of planar geometry binary stochastic mixture benchmark transport solutions. The benchmark transport problems are driven by an isotropic angular flux incident on one boundary of a binary Markovian statistical planar geometry medium. In a recent paper, we extended the benchmark comparisons of these Monte Carlo algorithms to include the scalar flux distributions produced. This comparison is important, because as demonstrated, an approximate model that gives accurate reflection and transmission probabilities can produce unphysical scalar flux distributions. Brantley and Palmer recently investigated the accuracy of the Levermore-Pomraning model using a new interior source binary stochastic medium benchmark problem suite. In this paper, we further investigate the accuracy of the Monte Carlo algorithms proposed by Zimmerman and Adams by comparing to the benchmark results from the interior source binary stochastic medium benchmark suite, including scalar flux distributions. Because the interior source scalar flux distributions are of an inherently different character than the distributions obtained for the incident angular flux benchmark problems, the present benchmark comparison extends the domain of problems for which the accuracy of these Monte Carlo algorithms has been investigated.
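Tracking through a binary Markovian mixture of this kind relies on segment lengths along a ray being exponentially distributed with each material's mean chord length. A rough illustration of generating one 1D realization (the mean chord lengths here are made-up values, not the benchmark's):

```python
import random

def sample_material_segments(slab_length, mean_chords=(0.5, 1.5), seed=1):
    """Lay down alternating material segments with exponentially
    distributed chord lengths until the slab is filled: one realization
    of a 1D binary Markovian mixture."""
    rng = random.Random(seed)
    x, mat = 0.0, rng.randint(0, 1)   # random starting material
    segments = []
    while x < slab_length:
        length = rng.expovariate(1.0 / mean_chords[mat])
        end = min(x + length, slab_length)
        segments.append((x, end, mat))
        x, mat = end, 1 - mat         # materials alternate at interfaces
    return segments

realization = sample_material_segments(10.0)
```

Averaged over many realizations, the volume fraction of each material converges to its mean chord length divided by the sum of the two.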

Brantley, P S

2009-06-30

240

The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously

Indrin J. Chetty; Bruce Curran; Joanna E. Cygler; John J. Demarco; Gary Ezzell; Bruce A. Faddegon; Iwan Kawrakow; Paul J. Keall; Helen Liu; C.-M. Charlie Ma; D. W. O. Rogers; Jan Seuntjens; Daryoush Sheikh-Bagheri; Jeffrey V. Siebers

2007-01-01

241

Event-by-event Monte Carlo simulation of radiation transport in vapor and liquid water

NASA Astrophysics Data System (ADS)

A Monte Carlo simulation is presented for radiation transport in water. This process is of utmost importance, with applications in oncology and cancer therapy, protection of people and the environment, waste management, radiation chemistry and some solid-state detectors. It is also a phenomenon of interest for microelectronics on satellites in orbit, which are subject to solar radiation, and in spacecraft design for deep-space missions receiving background radiation. The interaction of charged particles with the medium is primarily due to their electromagnetic field. Three types of interaction events are considered: elastic scattering, impact excitation and impact ionization. Secondary particles (electrons) can be generated by ionization. At each stage, along with the primary particle, we explicitly follow all secondary electrons (and subsequent generations). Theoretical, semi-empirical and experimental formulae with suitable corrections have been used in each case to model the cross sections governing the quantum mechanical interactions, thus determining stochastically the energy and direction of outgoing particles following an event. Monte Carlo sampling techniques have been applied to accurate probability distribution functions describing the primary particle track and all secondary particle-medium interactions. A simple account of the simulation code and a critical exposition of its underlying assumptions (often missing in the relevant literature) are also presented with reference to the model cross sections. Model predictions are in good agreement with existing computational data and experimental results. By relying heavily on a theoretical formulation, instead of merely fitting data, it is hoped that the model will be of value in a wider range of applications. Possible future directions that are the object of further research are pointed out.
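At each collision, an event-by-event code of this kind must choose among the competing channels (elastic scattering, excitation, ionization) in proportion to their cross sections. A minimal sketch of that discrete sampling step, with arbitrary placeholder cross-section values:

```python
import random

def sample_event(cross_sections, rng):
    """Pick an interaction channel with probability proportional to its
    cross section (discrete inverse-CDF sampling)."""
    total = sum(cross_sections.values())
    u = rng.uniform(0.0, total)
    cumulative = 0.0
    for channel, sigma in cross_sections.items():
        cumulative += sigma
        if u <= cumulative:
            return channel
    return channel  # guard against floating-point round-off

rng = random.Random(42)
sigmas = {"elastic": 3.0, "excitation": 1.0, "ionization": 6.0}  # arbitrary units
events = [sample_event(sigmas, rng) for _ in range(100_000)]
```

With these placeholder values, about 60% of sampled events are ionizations, matching the ratio of that channel's cross section to the total.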

Papamichael, Georgios Ioannis

242

NASA Astrophysics Data System (ADS)

Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing the large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing the network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups the particle histories on a single processor into batches for tally purposes; in doing so it avoids all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain-decomposed simulations. The analysis demonstrated that load imbalances in domain-decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than from insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters.
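The batching idea can be illustrated in miniature: each batch keeps only a local mean, and the spread of the batch means yields the statistical uncertainty with no communication until the end of the run. A schematic sketch (not OpenMC's actual implementation), scoring a dummy exponentially distributed tally:

```python
import math
import random

def batched_tally(num_batches, histories_per_batch, seed=7):
    """Accumulate a tally as per-batch means; the sample variance of the
    batch means gives the standard error of the overall estimate."""
    rng = random.Random(seed)
    batch_means = []
    for _ in range(num_batches):
        # each "history" here just scores an Exp(1) random value
        score = sum(rng.expovariate(1.0) for _ in range(histories_per_batch))
        batch_means.append(score / histories_per_batch)
    n = len(batch_means)
    mean = sum(batch_means) / n
    var = sum((m - mean) ** 2 for m in batch_means) / (n - 1)
    return mean, math.sqrt(var / n)  # estimate and its standard error

mean, sem = batched_tally(100, 1000)
```

Because only the final batch means are combined, a parallel implementation can defer the reduction of all tally data to a single step at the end of the simulation.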

Romano, Paul Kollath

243

Simulation of photon transport in a three-dimensional leaf: implications for photosynthesis

A model to evaluate photon transport within leaves and the implications for photosynthesis are investigated. A ray trac- ing model, Raytran, was used to produce absorption pro- files within a virtual dorsiventral plant leaf oriented in two positions (horizontal\\/vertical) and illuminated on one of its two faces (adaxial\\/abaxial). Together with chlorophyll pro- files, these absorption profiles feed a simple photosynthesis

S. L. Ustin; S. Jacquemoud; Y. Govaerts

2001-01-01

244

Some chemotherapy drugs contain a high-Z element in their structure that can be used for tumour dose enhancement in radiotherapy. In the present study, dose enhancement factors (DEFs) for cisplatin and titanocene dichloride agents in brachytherapy were quantified based on Monte Carlo simulation. Six photon-emitting brachytherapy sources were simulated, and their dose rate constants and radial dose functions were determined and compared with published data. The dose enhancement factor was obtained for 1, 3 and 5 % concentrations of cisplatin and titanocene dichloride chemotherapy agents in a tumour, in a soft tissue phantom. The results for the dose rate constant and radial dose function showed good agreement with published data. Our results show that, depending on the type of chemotherapy agent and brachytherapy source, DEF increases with increasing chemotherapy drug concentration. The maximum in-tumour averaged DEFs for cisplatin and titanocene dichloride are 4.13 and 1.48, respectively, reached with 5 % concentrations of the agents and the ¹²⁵I source. The dose enhancement factor is considerably higher for both chemotherapy agents with ¹²⁵I, ¹⁰³Pd and ¹⁶⁹Yb sources than with ¹⁹²Ir, ¹⁹⁸Au and ⁶⁰Co sources. At similar concentrations, dose enhancement for cisplatin is higher than for titanocene dichloride. Based on the results of this study, combining brachytherapy and chemotherapy with agents containing a high-Z element results in a higher radiation dose to the tumour. Therefore, concurrent use of chemotherapy and brachytherapy with high-atomic-number drugs can have the potential benefit of dose enhancement. However, more preclinical evaluations in this area are necessary before clinical application of this method. PMID:24706342

Yahya Abadi, Akram; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Knaup, Courtney

2014-06-01

245

This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random

Yuni K. Dewaraja; Michael Ljungberg; Amitava Majumdar; Abhijit Bose; Kenneth F. Koral

2002-01-01

246

A sophisticated simulation package has been developed permitting full tomographic acquisition of nuclear medicine data from physically realistic, non-uniform and asymmetric source and scattering objects. The simulation package is based on MCNP (Monte Carlo for neutron-photon transport), a Monte Carlo code developed at the Los Alamos Scientific Laboratory. The MCNP code has been extensively modified with features that allow direct

J. C. Yanch; A. B. Dobrzeniecki; C. Ramanathan; R. Behrman

1992-01-01

247

Monte Carlo simulation of gas Cerenkov detectors

Theoretical study of selected gamma-ray and electron diagnostics necessitates coupling Cerenkov radiation to electron/photon cascades. A Cerenkov production model and its incorporation into a general-geometry Monte Carlo coupled electron/photon transport code are discussed. A special optical-photon ray-trace is implemented using bulk optical properties assigned to each Monte Carlo zone. Good agreement exists between experimental and calculated Cerenkov data for a carbon-dioxide gas Cerenkov detector experiment. Cerenkov production and threshold data are presented for a typical carbon-dioxide gas detector that converts a 16.7 MeV photon source to Cerenkov light, which is collected by optics and detected by a photomultiplier.
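The Cerenkov threshold follows from the condition that the particle speed exceed the phase velocity of light in the medium, beta > 1/n. A quick sanity check for an electron, using an assumed refractive index of roughly the right magnitude for CO2 near atmospheric pressure:

```python
import math

ELECTRON_REST_MEV = 0.511  # electron rest energy, MeV

def cerenkov_threshold_mev(n):
    """Minimum electron kinetic energy (MeV) for Cerenkov emission in a
    medium of refractive index n, from beta > 1/n and T = (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - 1.0 / n**2)
    return (gamma - 1.0) * ELECTRON_REST_MEV

threshold = cerenkov_threshold_mev(1.00045)  # approx. CO2 at ~1 atm (assumed value)
```

For n = 1.00045 this gives a threshold of roughly 16.5 MeV, which is consistent with the 16.7 MeV photon source in the abstract producing electrons just above threshold; denser media such as water (n ≈ 1.33) have sub-MeV thresholds.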

Mack, J.M.; Jain, M.; Jordan, T.M.

1984-01-01

248

Improved Hybrid Monte Carlo/n-Moment Transport Equations Model for the Polar Wind

NASA Astrophysics Data System (ADS)

In many space plasma problems (e.g. terrestrial polar wind, solar wind, etc.), the plasma gradually evolves from dense collision-dominated into rarefied collisionless conditions. For decades, numerous attempts were made to address this type of problem using simulations based on one of two approaches. These approaches are: (1) the (fluid-like) Generalized Transport Equations, GTE, and (2) the particle-based Monte Carlo (MC) techniques. In contrast to the computationally intensive MC, the GTE approach can be considerably more efficient, but its validity is questionable outside the collision-dominated region, depending on the number of transport parameters considered. There have been several attempts to develop hybrid models that combine the strengths of both approaches. In particular, low-order GTE formulations were applied within the collision-dominated region, while an MC simulation was applied within the collisionless region and in the collisional-to-collisionless transition region. However, attention must be paid to assuring the consistency of the two approaches in the region where they are matched. Contrary to all previous studies, our model pays special attention to the 'matching' issue, and hence eliminates the discontinuities/inaccuracies associated with mismatching. As an example, we applied our technique to the Coulomb-Milne problem because of its relevance to the problem of space plasma flow from high- to low-density regions. We will compare the velocity distribution function and its moments (density, flow velocity, temperature, etc.) from the following models: (1) the pure MC model, (2) our hybrid model, and (3) previously published hybrid models. We will also consider a wide range of the test-to-background mass ratio.

Barakat, A. R.; Ji, J.; Schunk, R. W.

2013-12-01

249

penMesh: Monte Carlo radiation transport simulation in a triangle mesh geometry.

We have developed a general-purpose Monte Carlo simulation code, called penMesh, that combines the accuracy of the radiation transport physics subroutines from PENELOPE and the flexibility of a geometry based on triangle meshes. While the geometric models implemented in most general-purpose codes, such as PENELOPE's quadric geometry, impose some limitations on the shape of the objects that can be simulated, triangle meshes can be used to describe any free-form (arbitrary) object. Triangle meshes are extensively used in computer-aided design and computer graphics. We took advantage of the sophisticated tools already developed in these fields, such as an octree structure and an efficient ray-triangle intersection algorithm, to significantly accelerate the triangle mesh ray-tracing. A detailed description of the new simulation code and its ray-tracing algorithm is provided in this paper. Furthermore, we show how it can be readily used in medical imaging applications thanks to the detailed anatomical phantoms already available. In particular, we present a whole-body radiography simulation using a triangulated version of the anthropomorphic NCAT phantom. An example simulation of scatter fraction measurements using a standardized abdomen and lumbar spine phantom, and a benchmark of the triangle mesh and quadric geometries in the ray-tracing of a mathematical breast model, are also presented to show some of the capabilities of penMesh. PMID:19435677
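A standard choice for the efficient ray-triangle intersection test mentioned above is the Moller-Trumbore algorithm (the abstract does not say which algorithm penMesh uses, so this is an illustrative stand-in). A self-contained sketch:

```python
def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def _dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def _sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray-triangle intersection: returns the ray
    parameter t of the hit point, or None if the ray misses."""
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    p = _cross(direction, e2)
    det = _dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = _sub(origin, v0)
    u = _dot(s, p) * inv_det           # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = _cross(s, e1)
    v = _dot(direction, q) * inv_det   # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = _dot(e2, q) * inv_det
    return t if t > eps else None

t = ray_triangle((0.25, 0.25, 1.0), (0.0, 0.0, -1.0),
                 (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

Combined with an octree (or similar spatial index) to cull candidate triangles, this per-triangle test is what makes mesh ray-tracing competitive with quadric geometries.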

Badal, Andreu; Kyprianou, Iacovos; Banh, Diem Phuc; Badano, Aldo; Sempau, Josep

2009-12-01

250

In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities. (authors)
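The standard stochastic volume calculation used as the comparison baseline can be sketched as simple rejection sampling in an axis-aligned bounding box; here a unit sphere stands in for a CSG component:

```python
import random

def stochastic_volume(inside, box_min, box_max, n_samples, seed=0):
    """Estimate the volume of a region by sampling points uniformly in a
    bounding box and scoring the fraction that land inside."""
    rng = random.Random(seed)
    box_volume = 1.0
    for lo, hi in zip(box_min, box_max):
        box_volume *= hi - lo
    hits = 0
    for _ in range(n_samples):
        p = [rng.uniform(lo, hi) for lo, hi in zip(box_min, box_max)]
        if inside(p):
            hits += 1
    return box_volume * hits / n_samples

# Example component: unit sphere, true volume 4*pi/3 ~ 4.18879
unit_sphere = lambda p: p[0]**2 + p[1]**2 + p[2]**2 <= 1.0
vol = stochastic_volume(unit_sphere, (-1, -1, -1), (1, 1, 1), 200_000)
```

The statistical error of this estimator shrinks only as 1/√N, which is why a decomposition scheme that resolves most of the volume analytically, as the paper proposes, can be much cheaper for a given tolerance.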

Millman, D. L. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States)]; Griesheimer, D. P.; Nease, B. R. [Bechtel Marine Propulsion Corporation, Bettis Atomic Power Laboratory (United States)]; Snoeyink, J. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States)]

2012-07-01

251

Kinetic Monte Carlo (KMC) simulation of fission product silver transport through TRISO fuel particle

NASA Astrophysics Data System (ADS)

A mesoscale kinetic Monte Carlo (KMC) model developed to investigate the diffusion of silver through the pyrolytic carbon and silicon carbide containment layers of a TRISO fuel particle is described. The release of radioactive silver from TRISO particles has been studied for nearly three decades, yet the mechanisms governing silver transport are not fully understood. This model atomically resolves Ag, but provides a mesoscale medium of carbon and silicon carbide, which can include a variety of defects, including grain boundaries, reflective interfaces, cracks, and radiation-induced cavities, that can either accelerate silver diffusion or slow diffusion by acting as traps for silver. The key input parameters to the model (diffusion coefficients, trap binding energies, interface characteristics) are determined from available experimental data, or parametrically varied, until more precise values become available from lower length scale modeling or experiment. The predicted results, in terms of the time/temperature dependence of silver release during post-irradiation annealing and the variability of silver release from particle to particle, have been compared to available experimental data from the German HTR Fuel Program (Gontard and Nabielek [1]) and Minato and co-workers (Minato et al. [2]).
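The core of such a mesoscale KMC model is the residence-time step: choose an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. A generic sketch with two placeholder rates (think of a fast grain-boundary hop versus slow detrapping; the values are illustrative, not fitted):

```python
import random

def kmc_step(rates, rng):
    """One residence-time KMC step: returns (chosen event index, time increment)."""
    total = sum(rates)
    u = rng.random() * total
    cumulative = 0.0
    for i, r in enumerate(rates):
        cumulative += r
        if u <= cumulative:
            break
    dt = rng.expovariate(total)   # waiting time with mean 1/total
    return i, dt

rng = random.Random(3)
rates = [9.0, 1.0]               # illustrative: fast hop vs. slow detrap
counts, t = [0, 0], 0.0
for _ in range(20_000):
    event, dt = kmc_step(rates, rng)
    counts[event] += 1
    t += dt
```

Over many steps the fast event is chosen in proportion to its rate (here about 90% of the time), and the simulated clock advances at the total event rate, which is what lets KMC reach annealing timescales far beyond molecular dynamics.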

de Bellefon, G. M.; Wirth, B. D.

2011-06-01

252

Monte Carlo model of neutral-particle transport in diverted plasmas

The transport of neutral atoms and molecules in the edge and divertor regions of fusion experiments has been calculated using Monte-Carlo techniques. The deuterium, tritium, and helium atoms are produced by recombination in the plasma and at the walls. The relevant collision processes of charge exchange, ionization, and dissociation between the neutrals and the flowing plasma electrons and ions are included, along with wall reflection models. General two-dimensional wall and plasma geometries are treated in a flexible manner so that varied configurations can be easily studied. The algorithm uses a pseudo-collision method. Splitting with Russian roulette, suppression of absorption, and efficient scoring techniques are used to reduce the variance. The resulting code is sufficiently fast and compact to be incorporated into iterative treatments of plasma dynamics requiring numerous neutral profiles. The calculation yields the neutral gas densities, pressures, fluxes, ionization rates, momentum transfer rates, energy transfer rates, and wall sputtering rates. Applications have included modeling of proposed INTOR/FED poloidal divertor designs and other experimental devices.
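The pseudo-collision method used here (often called Woodcock or delta-tracking) samples flight distances with a constant majorant cross section and accepts a real collision with probability sigma(x)/sigma_max, so the tracking never needs explicit surface crossings. A 1D transmission sketch with illustrative, purely absorbing cross sections:

```python
import random

def sigma(x):
    """Spatially varying total cross section (illustrative values)."""
    return 0.5 if x < 1.0 else 1.5

def transmit(slab=2.0, sigma_max=1.5, n=100_000, seed=5):
    """Fraction of particles crossing the slab, via delta-tracking."""
    rng = random.Random(seed)
    through = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += rng.expovariate(sigma_max)   # flight to tentative collision
            if x >= slab:
                through += 1                   # escaped the slab
                break
            if rng.random() < sigma(x) / sigma_max:
                break                          # real collision: absorbed here
                                               # (otherwise a virtual collision; keep flying)
    return through / n

t = transmit()   # analytic transmission: exp(-(0.5*1 + 1.5*1)) = exp(-2) ~ 0.135
```

The rejected "virtual" collisions are what give the method its name; splitting and Russian roulette, as in the abstract, can be layered on top of this tracking loop unchanged.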

Heifetz, D.; Post, D.; Petravic, M.; Weisheit, J.; Bateman, G.

1981-11-01

253

Monte Carlo simulation of negative ion transport in the negative ion source (Camembert III)

NASA Astrophysics Data System (ADS)

The transport of negative hydrogen ions (H⁻) in a large hybrid multicusp H⁻ source, ``Camembert III,'' has been analyzed with a three-dimensional Monte Carlo simulation code. The realistic geometry and multicusp magnetic-field configuration are taken into account. Various important destruction processes of H⁻ and Coulomb collisions with the background plasma are also included in the model. Both volume- and surface-produced H⁻ ion trajectories are followed. For volume-produced H⁻ ions, most can reach the wall in the low-pressure case (1 mTorr), while in the high-pressure case (3 mTorr) most are destroyed by volume loss reactions before reaching the wall. This shows that the wall loss is significant at low pressure, as in the experiments. For surface-produced H⁻ ions, the influence of the birthplace on the H⁻ current is studied. Negative ions created on the sidewall can hardly reach the center of the source because they are trapped by the multicusp magnetic field. As a result, H⁻ ions created on the sidewall do not have a significant effect on the H⁻ current.

Sakurabayashi, T.; Hatayama, A.; Miyamoto, K.; Ogasawara, M.; Bacal, M.

2002-02-01

254

Non-unitary Quantum Monte Carlo method for transport of atomic states through solids

NASA Astrophysics Data System (ADS)

We present a new quantum trajectory Monte Carlo (QTMC) method describing the time development of the internal state of fast highly charged ions subject to collisions and to spontaneous radiative decay during transport through solids. Our method describes both the buildup of coherences and the decoherence of the open quantum system due to the interaction with its environment. The dynamics of the reduced density matrix is governed by a Lindblad master equation that can be solved in terms of QTMC sampling [1]. For systems involving a high-dimensional Hilbert space, the QTMC method is advantageous in terms of computer storage compared to a direct solution of the underlying Lindblad master equation. In practice, however, the standard Lindblad equation can be of limited value because it describes strictly trace-preserving time evolution of the reduced density matrix. We have developed a generalized non-unitary Lindblad form (and its QTMC implementation) for atomic systems in which only finite subspaces can be represented within any realistic basis size and the coupling to the complement cannot be neglected. [1] T. Minami et al., Phys. Rev. A 67, 022902 (2003).

Seliger, Marek; Minami, Tatsuya; Reinhold, Carlos O.; Burgdorfer, Joachim

2004-05-01

255

Optical properties of flowing blood were analyzed using a photon-cell interactive Monte Carlo (pciMC) model with the physical properties of the flowing red blood cells (RBCs) such as cell size, shape, refractive index, distribution, and orientation as the parameters. The scattering of light by flowing blood at the He-Ne laser wavelength of 632.8 nm was significantly affected by the shear rate. The light was scattered more in the direction of flow as the flow rate increased. Therefore, the light intensity transmitted forward in the direction perpendicular to flow axis decreased. The pciMC model can duplicate the changes in the photon propagation due to moving RBCs with various orientations. The resulting RBC's orientation that best simulated the experimental results was with their long axis perpendicular to the direction of blood flow. Moreover, the scattering probability was dependent on the orientation of the RBCs. Finally, the pciMC code was used to predict the hematocrit of flowing blood with accuracy of approximately 1.0 HCT%. The photon-cell interactive Monte Carlo (pciMC) model can provide optical properties of flowing blood and will facilitate the development of the non-invasive monitoring of blood in extra corporeal circulatory systems. PMID:22612146

Sakota, Daisuke; Takatani, Setsuo

2012-05-01

256

National Technical Information Service (NTIS)

Two photon monitors have been designed and installed in the positron accumulator ring (PAR) of the Advanced Photon Source. The photon monitors characterize the beam's transverse profile, bunch length, emittance, and energy spread in a nonintrusive manner....

W. Berg, B. Yang, A. Lumpkin, J. Jones

1996-01-01

257

Dependences of mucosal dose in the oral or nasal cavity on the beam energy, beam angle, multibeam configuration, and mucosal thickness were studied for small photon fields using Monte Carlo simulations (an EGSnrc-based code), which were validated by measurements. Cylindrical mucosa phantoms (mucosal thickness = 1, 2, and 3 mm) with and without bone and air inhomogeneities were irradiated by 6- and 18-MV photon beams (field size = 1 × 1 cm²) with gantry angles equal to 0°, 90°, and 180°, and multibeam configurations using 2, 4, and 8 photon beams in different orientations around the phantom. Doses along the central beam axis in the mucosal tissue were calculated. The mucosal surface doses were found to decrease slightly (by 1% for the 6-MV photon beam and 3% for the 18-MV beam) with an increase of mucosal thickness from 1 to 3 mm when the beam angle was 0°. The variation of mucosal surface dose with thickness became insignificant when the beam angle was changed to 180°, but the dose at the bone-mucosa interface was found to increase (by 28% for the 6-MV photon beam and 20% for the 18-MV beam) with the mucosal thickness. For different multibeam configurations, the dependence of mucosal dose on thickness became insignificant as the number of photon beams around the mucosal tissue was increased. The mucosal dose with bone thus varied with the beam energy, beam angle, multibeam configuration and mucosal thickness for a small segmental photon field. These dosimetric variations are important to consider in improving the treatment strategy, so that mucosal complications in head-and-neck intensity-modulated radiation therapy can be minimized. PMID:21993201

Chow, James C L; Owrangi, Amir M

2012-01-01

258

NASA Astrophysics Data System (ADS)

Monte Carlo calculations using the codes PENELOPE and GEANT4 have been performed to characterize the dosimetric properties of monoenergetic photon point sources in water. The dose rate in water has been calculated for energies of interest in brachytherapy, ranging between 10 keV and 2 MeV. A comparison of the results obtained using the two codes with the available data calculated with other Monte Carlo codes is carried out. A χ²-like statistical test is proposed for these comparisons. PENELOPE and GEANT4 show a reasonable agreement for all energies analyzed and distances to the source larger than 1 cm. Significant differences are found at distances from the source up to 1 cm. A similar situation occurs between PENELOPE and EGS4.
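A chi-square-like comparison of two Monte Carlo estimates divides the squared difference at each point by the combined statistical variance; a reduced value near one indicates agreement within uncertainties. A generic sketch (not necessarily the authors' exact statistic), with made-up dose values and uncertainties:

```python
def chi2_per_point(a, b, sigma_a, sigma_b):
    """Reduced chi-square between two sets of estimates with
    independent statistical uncertainties."""
    terms = [
        (x - y) ** 2 / (sa ** 2 + sb ** 2)
        for x, y, sa, sb in zip(a, b, sigma_a, sigma_b)
    ]
    return sum(terms) / len(terms)

# Two hypothetical dose-rate profiles agreeing within about one sigma
a = [1.00, 0.52, 0.26]
b = [1.01, 0.51, 0.27]
chi2 = chi2_per_point(a, b, [0.01] * 3, [0.01] * 3)
```

Values much larger than one flag regions, such as the first centimetre around the source here, where the codes disagree beyond their statistical noise.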

Almansa, Julio F.; Guerrero, Rafael; Al-Dweri, Feras M. O.; Anguiano, Marta; Lallena, Antonio M.

2007-05-01

259

Monte Carlo (MC) simulations and an analytical theory are presented to describe electronic excitation transport (EET) among static chromophores constrained to lie on the surfaces of spherical micelles. Both donor-trap (DT) and donor-donor (DD) EET are examined for two types of systems: probe molecules on the surfaces of isolated (low concentration) micelles, and probes on the surfaces of interacting (concentrated) micelles. The EET dynamics are described by the function ⟨G^s(t)⟩.

Finger, K.U.; Marcus, A.H.; Fayer, M.D. (Department of Chemistry, Stanford University, Stanford, California 94305 (United States))

1994-01-01

260

Acceleration of PET Monte Carlo simulation using the graphics hardware ray-tracing engine

GRAY (High Energy Photon Ray Tracer) is a Monte-Carlo ray-driven high energy photon transport engine for PET and SPECT applications that supports complex mesh based primitives for source distributions, phantom shapes, and detector geometries. Ray tracing is a technique used in computer graphics to render scenes with realistic light properties. We adapted this technique to accelerate solving the intersection test

Zhiguang Wang; Peter. D. Olcott; Craig S. Levin

2010-01-01

261

Accelerated Monte Carlo based dose calculations for brachytherapy planning using correlated sampling

Current brachytherapy dose calculations ignore applicator attenuation and tissue heterogeneities, assuming isolated sources embedded in unbounded medium. Conventional Monte Carlo (MC) dose calculations, while accurate, are too slow for practical treatment planning. This study evaluates the efficacy of correlated sampling in reducing the variance of MC photon transport simulation in typical brachytherapy geometries. Photon histories were constructed in the homogeneous

Håkan Hedtjärn; Gudrun Alm Carlsson; Jeffrey F. Williamson

2002-01-01

262

It might be assumed that use of a "high-quality" random number generator (RNG), producing a sequence of "pseudo-random" numbers with a "long" repetition period, is crucial for producing unbiased results in Monte Carlo particle transport simulations. While several theoretical and empirical tests have been devised to check the quality (randomness and period) of an RNG, for many applications it is not clear what level of RNG quality is required to produce unbiased results. This paper explores the issue of RNG quality in the context of parallel Monte Carlo transport simulations in order to determine how "good" is "good enough". This study employs the MERCURY Monte Carlo code, which incorporates the CNPRNG library for the generation of pseudo-random numbers via linear congruential generator (LCG) algorithms. The paper outlines the usage of random numbers during parallel MERCURY simulations, and then describes the source and criticality transport simulations which comprise the empirical basis of this study. A series of calculations for each test problem in which the quality of the RNG (period of the LCG) is varied provides the empirical basis for determining the minimum repetition period which may be employed without producing a bias in the mean integrated results.
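An LCG advances as x_{k+1} = (a*x_k + c) mod m, and the Hull-Dobell conditions guarantee the full period m. A toy generator small enough to exhibit its entire period (production parameters, such as those in CNPRNG, are of course far larger):

```python
def lcg(a, c, m, seed=0):
    """Linear congruential generator: yields successive states
    x = (a*x + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

# a=5, c=3, m=16 satisfies the Hull-Dobell full-period conditions:
# gcd(c, m) = 1; a - 1 divisible by every prime factor of m (here 2),
# and by 4 since m is divisible by 4.
gen = lcg(5, 3, 16)
period = [next(gen) for _ in range(16)]   # visits all 16 states once
```

Shrinking the period of a production LCG in exactly this way, and watching when the tally means begin to drift, is the kind of empirical experiment the paper describes.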

Procassini, R J; Beck, B R

2004-12-07

263

NASA Astrophysics Data System (ADS)

Multimotor transport is studied by Monte Carlo simulation with consideration of motor detachment from the filament. Our work shows that, in the case of low load, the velocity of a multi-motor system can decrease or increase with increasing motor number, depending on the single-motor force-velocity curve. The stall force and run length are greatly reduced compared to those in other models. Especially at low ATP concentrations, the stall force of multi-motor transport is even smaller than the single motor's stall force.

Wang, Zi-Qing; Wang, Guo-Dong; Shen, Wei-Bo

2010-10-01

264

ITS Version 4.0: Electron/photon Monte Carlo transport codes

The current publicly released version of the Integrated TIGER Series (ITS), Version 3.0, has been widely distributed both domestically and internationally, and feedback has been very positive. This feedback, as well as our own experience, has convinced us to upgrade the system in order to honor specific user requests for new features and to implement other new features that will improve the physical accuracy of the system and permit additional variance reduction. In this presentation we will focus on components of the upgrade that (1) improve the physical model, (2) provide new and extended capabilities for the three-dimensional combinatorial geometry (CG) of the ACCEPT codes, and (3) permit significant variance reduction in an important class of radiation effects applications.

Halbleib, J.A.; Kensek, R.P. [Sandia National Labs., Albuquerque, NM (United States); Seltzer, S.M. [National Inst. of Standards and Technology, Gaithersburg, MD (United States)

1995-07-01

265

A new method for generating discrete scattering cross sections to be used in charged particle transport calculations is investigated. The method of data generation is presented and compared to current methods for obtaining discrete cross sections. The new, more generalized approach allows greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data generated with the new method is verified through a comparison with discrete data obtained with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code package, Milagro. The implementation of this capability is verified using test problems with analytic solutions as well as a comparison of electron dose-depth profiles calculated with Milagro and an already-established electron transport code. An initial investigation of a preliminary integration of the discrete cross section generation method with the new charged particle transport capability in Milagro is also presented. (authors)

Walsh, J. A. [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, NW12-312, Albany St., Cambridge, MA 02139 (United States)]; Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97331 (United States)]; Urbatsch, T. J. [XTD-5: Air Force Systems, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)]

2013-07-01

266

NASA Astrophysics Data System (ADS)

Two multidimensional Monte Carlo simulation codes-(a) a neutral (H2,H) transport code and (b) a negative ion (H-) transport code-have been developed. This article focuses on recent simulation results from the neutral transport code for H- production in a large, hybrid negative ion source, ``Camembert III.'' Two-dimensional spatial profiles of vibrationally excited molecules H2(v) and of H- production are obtained for a given background plasma profile. Both H2(v) and H- ions are mainly produced near the filaments in the driver region. However, the H- source density has a double peak in its spatial structure, while the density profile of H2(v) is characterized by a ``mushroom'' structure with a single peak. These results indicate the large potential of the neutral transport code, not only for understanding the underlying physics, but also for designing ion sources, including the complicating effects of geometry, the spatial and velocity distributions of particles, and atomic and wall processes.

Hatayama, A.; Sakurabayashi, T.; Ishi, Y.; Makino, K.; Ogasawara, M.; Bacal, M.

2002-02-01

267

NASA Astrophysics Data System (ADS)

Conventional formulations of changes in cosmogenic nuclide production rates with snow cover are based on a mass-shielding approach, which neglects the role of neutron moderation by hydrogen. This approach can produce erroneous correction factors and add to the uncertainty of the calculated cosmogenic exposure ages. We use a Monte Carlo particle transport model to simulate fluxes of secondary cosmic-ray neutrons near the surface of the Earth and vary surface snow depth to show changes in neutron fluxes above rock or soil surface. To correspond with shielding factors for spallation and low-energy neutron capture, neutron fluxes are partitioned into high-energy, epithermal and thermal components. The results suggest that high-energy neutrons are attenuated by snow cover at a significantly higher rate (shorter attenuation length) than indicated by the commonly-used mass-shielding formulation. As thermal and epithermal neutrons derive from the moderation of high-energy neutrons, the presence of a strong moderator such as hydrogen in snow increases the thermal neutron flux both within the snow layer and above it. This means that low-energy production rates are affected by snow cover in a manner inconsistent with the mass-shielding approach and those formulations cannot be used to compute snow correction factors for nuclides produced by thermal neutrons. Additionally, as above-ground low-energy neutron fluxes vary with snow cover as a result of reduced diffusion from the ground, low-energy neutron fluxes are affected by snow even if the snow is at some distance from the site where measurements are made.
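For reference, the conventional mass-shielding formulation that this abstract critiques reduces to a single exponential in mass depth; a minimal sketch (the 160 g/cm² attenuation length is an assumed typical value for high-energy neutrons, and the abstract's point is precisely that this form breaks down for low-energy neutrons):

```python
import math

def snow_shielding_factor(swe_g_cm2, attenuation_length_g_cm2=160.0):
    """Conventional mass-shielding snow correction factor: pure exponential
    attenuation with mass depth (snow water equivalent, g/cm^2). The default
    attenuation length is an assumed typical value, not a fitted one."""
    return math.exp(-swe_g_cm2 / attenuation_length_g_cm2)

# 50 cm of snow at density 0.3 g/cm^3 corresponds to 15 g/cm^2 of mass depth
factor = snow_shielding_factor(50.0 * 0.3)
```

The Monte Carlo results in the abstract imply a shorter effective attenuation length for high-energy neutrons than this formula assumes, and no single exponential at all for the thermal and epithermal components.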

Zweck, Christopher; Zreda, Marek; Desilets, Darin

2013-10-01

268

Update on the Status of the FLUKA Monte Carlo Transport Code

NASA Technical Reports Server (NTRS)

The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. Here we review the progress achieved in the last year on the physics models. From the point of view of hadronic physics, most of the effort is still in the field of nucleus-nucleus interactions. The currently available version of FLUKA already includes the internal capability to simulate inelastic nuclear interactions beginning with lab kinetic energies of 100 MeV/A up to the highest accessible energies, by means of the DPMJET-II.5 event generator to handle the interactions above 5 GeV/A and rQMD for energies below that. The new developments concern, at high energy, the embedding of the DPMJET-III generator, which represents a major change with respect to the DPMJET-II structure. This will also allow us to achieve better consistency between the nucleus-nucleus sector and the original FLUKA model for hadron-nucleus collisions. Work is also in progress to implement a third event generator model based on the Master Boltzmann Equation approach, in order to extend the energy capability from 100 MeV/A down to the threshold for these reactions. In addition to these extended physics capabilities, the program's input and scoring capabilities are continually being upgraded. In particular we want to mention the upgrades in the geometry packages, now capable of reaching higher levels of abstraction. Work is also proceeding to provide direct import of the FLUKA output files into ROOT for analysis and to deploy a user-friendly GUI input interface.

Pinsky, L.; Anderson, V.; Empl, A.; Lee, K.; Smirnov, G.; Zapp, N.; Ferrari, A.; Tsoulou, K.; Roesler, S.; Vlachoudis, V.; Battisoni, G.; Ceruti, F.; Gadioli, M. V.; Garzelli, M.; Muraro, S.; Rancati, T.; Sala, P.; Ballarini, R.; Ottolenghi, A.; Parini, V.; Scannicchio, D.; Pelliccioni, M.; Wilson, T. L.

2004-01-01

269

NASA Astrophysics Data System (ADS)

Current developments in positron emission tomography focus on improving timing performance for scanners with time-of-flight (TOF) capability, and incorporating depth-of-interaction (DOI) information. Recent studies have shown that incorporating DOI correction in TOF detectors can improve timing resolution, and that DOI also becomes more important in long axial field-of-view scanners. We have previously reported the development of DOI-encoding detectors using phosphor-coated scintillation crystals; here we study the timing properties of those crystals to assess the feasibility of providing some level of DOI information without significantly degrading the timing performance. We used Monte Carlo simulations to provide a detailed understanding of light transport in phosphor-coated crystals which cannot be fully characterized experimentally. Our simulations used a custom reflectance model based on 3D crystal surface measurements. Lutetium oxyorthosilicate crystals were simulated with a phosphor coating in contact with the scintillator surfaces and an external diffuse reflector (teflon). Light output, energy resolution, and pulse shape showed excellent agreement with experimental data obtained on 3 × 3 × 10 mm³ crystals coupled to a photomultiplier tube. Scintillator intrinsic timing resolution was simulated with head-on and side-on configurations, confirming the trends observed experimentally. These results indicate that the model may be used to predict timing properties in phosphor-coated crystals and guide the coating for optimal DOI resolution/timing performance trade-off for a given crystal geometry. Simulation data suggested that a time stamp generated from early photoelectrons minimizes degradation of the timing resolution, thus making this method potentially more useful for TOF-DOI detectors than our initial experiments suggested. 
Finally, this approach could easily be extended to the study of timing properties in other scintillation crystals, with a range of treatments and materials attached to the surface.
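The claim that a time stamp generated from early photoelectrons limits timing degradation can be illustrated with a toy order-statistics model (assumed photoelectron yield and decay constant; optical transport and transit-time spread, which the paper's simulations do include, are ignored here):

```python
import random
import statistics

def event_timestamp(rng, n_pe=2000, decay_ns=40.0, k=5):
    """Photoelectron arrival times drawn from a single exponential decay
    (LSO-like decay constant, assumed values); the event time stamp is
    taken from the k-th earliest photoelectron."""
    times = sorted(rng.expovariate(1.0 / decay_ns) for _ in range(n_pe))
    return times[k - 1]

rng = random.Random(3)
# jitter (std dev) of the time stamp over many events, early vs. late trigger
jitter_early = statistics.stdev(event_timestamp(rng, k=5) for _ in range(2000))
jitter_late = statistics.stdev(event_timestamp(rng, k=200) for _ in range(2000))
```

Even in this stripped-down model, triggering on an early order statistic gives a markedly smaller event-to-event spread than a late one, consistent with the trend the abstract reports.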

Roncali, Emilie; Schmall, Jeffrey P.; Viswanath, Varsha; Berg, Eric; Cherry, Simon R.

2014-04-01


271

The accuracy of Single-Photon Emission Computed Tomography images is degraded by physical effects, namely photon attenuation, Compton scatter, and spatially varying collimator response. The 3D nature of these effects is usually neglected by the methods used to correct for them. To deal with the 3D nature of the problem, a 3D projector modeling the spread of photons in 3D

Delphine Lazaro; Vincent Breton; Irène Buvat

2004-01-01

272

NASA Astrophysics Data System (ADS)

The Monte Carlo technique is applied to simulate the processes of the cascade relaxation of gaseous boron at an atomic density of 2.5 × 10²² m⁻³ ionized by photons with energies of 0.7-25 Ryd passing through a cylindrical interaction zone along its axis. The trajectories of electrons are simulated based on photoionization and electron-impact ionization cross sections calculated in the one-electron configuration-average Pauli-Fock approximation. The numbers of electrons and photons leaving the interaction zone per initial photoionization, their energy spectra, the energy transferred to the medium, and the probabilities of final ion formation are shown to change noticeably as the incident photon energy is scanned through the boron atom ionization thresholds. These variations can be explained only if secondary electron-impact processes are considered. The density of secondary events decreases when going from the zone axis to its border, and the profiles of the density along the radial direction are found to be similar for all initial exciting photon energies.

Brühl, S.; Kochur, A. G.

2012-07-01

273

The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features, are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. 
The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and experimental verification of MC dose algorithms. As the MC method is an emerging technology, this report is not meant to be prescriptive. Rather, it is intended as a preliminary report to review the tenets of the MC method and to provide the framework upon which to build a comprehensive program for commissioning and routine quality assurance of MC-based treatment planning systems.
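As one concrete example of the statistical-uncertainty bookkeeping such a commissioning program must handle, the classic batch method can be sketched as follows (a toy illustration with synthetic per-history scores, not the report's prescribed estimator):

```python
import random
import statistics

def batch_mean_and_uncertainty(scores, n_batches=10):
    """Batch-method estimate of a Monte Carlo mean and its statistical
    uncertainty: group the per-history scores into batches and use the
    standard error of the batch means."""
    k = len(scores) // n_batches
    batch_means = [sum(scores[i * k:(i + 1) * k]) / k for i in range(n_batches)]
    mean = statistics.fmean(batch_means)
    sem = statistics.stdev(batch_means) / n_batches ** 0.5
    return mean, sem

# synthetic per-history 'dose' scores with known mean 1.0
rng = random.Random(0)
scores = [rng.expovariate(1.0) for _ in range(20000)]
mean, sigma = batch_mean_and_uncertainty(scores)
```

A history-by-history variance estimator is the more common alternative in modern codes; the batch form is shown here only because it is the shortest to state.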

Chetty, Indrin J.; Curran, Bruce; Cygler, Joanna E.; DeMarco, John J.; Ezzell, Gary; Faddegon, Bruce A.; Kawrakow, Iwan; Keall, Paul J.; Liu, Helen; Ma, C.-M. Charlie; Rogers, D. W. O.; Seuntjens, Jan; Sheikh-Bagheri, Daryoush; Siebers, Jeffrey V. [University of Michigan, Ann Arbor, Michigan 48109 and University of Nebraska Medical Center, Omaha, Nebraska 68198-7521 (United States) and University of Michigan, Ann Arbor, Michigan 48109 (United States) and Ottawa Hospital Regional Cancer Center, Ottawa, Ontario K1H 1C4 (Canada); University of California, Los Angeles, California 90095 (United States) and Mayo Clinic Scottsdale, Scottsdale, Arizona 85259 (United States) and University of California, San Francisco, California 94143 (United States); National Research Council of Canada, Ottawa, Ontario K1A 0R6 (Canada); Stanford University Cancer Center, Stanford, California 94305-5847 (United States); University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States); Carleton University, Ottawa, Ontario K1S 5B6 (Canada); McGill University, Montreal, Quebec H3G 1A4 (Canada); Regional Cancer Center, Erie, Pennsylvania 16505 (United States); Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

2007-12-15

274

Infrared Photon Stimulated Hydrogen Transport in Rutile TiO2

NASA Astrophysics Data System (ADS)

Measurements of the O-H and O-D vibrational lifetimes show that the room temperature proton diffusion rate in TiO2 can be enhanced by 9 orders of magnitude when stimulated by resonant infrared photons. We find that the local oscillatory motion of the proton quickly couples to a wag-mode-assisted classical transfer process along the c-channel with a jump rate of >1 THz and a barrier height of 0.3 eV. Such an increase in proton transport rate at moderate temperatures is significant for renewable energy applications ranging from hydrogen transport membranes to water splitting by photocatalysis.

Spahr, Erik; Luepke, Gunter; Wen, Lanlin; Stavola, Michael; Boatner, Lynn; Feldman, Leonard; Tolk, Norman

2010-03-01

275

NASA Astrophysics Data System (ADS)

We extend the input-output formalism of quantum optics to analyze few-photon transport in waveguides with an embedded qubit. We provide explicit analytical derivations for one- and two-photon scattering matrix elements based on operator equations in the Heisenberg picture.

Fan, Shanhui; Kocabaş, Şükrü Ekin; Shen, Jung-Tsung

2010-12-01


277

This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. The authors' parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel

Y. K. Dewaraja; M. Ljungberg; A. Majumdar; A. Bose; K. F. Koral

2000-01-01

278

This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random
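The even partitioning of photon histories across processors described above can be sketched serially (bookkeeping only; the real code distributes this with MPI ranks, and the simple prime-offset seeding below is a placeholder, not the Scalable Parallel Random Number Generator the abstract refers to):

```python
def partition_photons(n_photons, n_ranks):
    """Split photon histories as evenly as possible among ranks: every rank
    gets n_photons // n_ranks, and the first (n_photons % n_ranks) ranks
    each get one extra history."""
    base, extra = divmod(n_photons, n_ranks)
    return [base + (1 if rank < extra else 0) for rank in range(n_ranks)]

def rank_seed(base_seed, rank):
    """Toy per-rank seed offset so each rank draws a distinct random stream.
    7919 is an arbitrary prime; production codes use dedicated parallel RNGs."""
    return base_seed + 7919 * rank

counts = partition_photons(1_000_003, 8)
seeds = [rank_seed(42, r) for r in range(8)]
```

Because photon histories are independent, this embarrassingly parallel split needs interprocessor communication only for the final reduction of the tallies.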

Yuni K. Dewaraja; Michael Ljungberg; Amitava Majumdar; Abhijit Bose; Kenneth F. Koral

2002-01-01

279

NSDL National Science Digital Library

In this activity using an open space and a thick rope, students simulate the movement of photons from the Sun. The resource is part of the teacher's guide accompanying the video, NASA Why Files: The Case of the Mysterious Red Light. Lesson objectives supported by the video, additional resources, teaching tips and an answer sheet are included in the teacher's guide.

280

Monte Carlo (MC) is a well-known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications, including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations, and hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
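The telescoping structure of MLMC can be sketched on a scalar SDE (a generic illustration with geometric Brownian motion and Euler-Maruyama levels, not the paper's streamline flow solver; all parameter values are assumed):

```python
import math
import random

def gbm_endpoint(dws, s0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Euler-Maruyama endpoint of geometric Brownian motion driven by the
    given Brownian increments (one increment per time step)."""
    dt = T / len(dws)
    s = s0
    for dw in dws:
        s += mu * s * dt + sigma * s * dw
    return s

def mlmc_mean_endpoint(n_levels=4, n_samples=2000, seed=1):
    """MLMC estimate of E[S(T)] via the telescoping sum
    E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)], where each level correction is
    computed from coupled (shared-increment) fine/coarse paths."""
    rng = random.Random(seed)
    # level 0: plain Monte Carlo with a single time step (dt = T = 1)
    est = sum(gbm_endpoint([rng.gauss(0.0, 1.0)]) for _ in range(n_samples)) / n_samples
    for level in range(1, n_levels):
        n_fine = 2 ** level
        dt_fine = 1.0 / n_fine
        corr = 0.0
        for _ in range(n_samples):
            fine = [math.sqrt(dt_fine) * rng.gauss(0.0, 1.0) for _ in range(n_fine)]
            # the coarse path sees the pairwise sums of the fine increments
            coarse = [fine[2 * j] + fine[2 * j + 1] for j in range(n_fine // 2)]
            corr += gbm_endpoint(fine) - gbm_endpoint(coarse)
        est += corr / n_samples
    return est

estimate = mlmc_mean_endpoint()
```

The speedup comes from the coupling: the level corrections have small variance, so most samples can be spent on the cheap coarse level, exactly the mechanism the abstract exploits with coarse and fine transport grids.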

Müller, Florian, E-mail: florian.mueller@sam.math.ethz.ch; Jenny, Patrick, E-mail: jenny@ifd.mavt.ethz.ch; Meyer, Daniel W., E-mail: meyerda@ethz.ch

2013-10-01


282

NASA Astrophysics Data System (ADS)

We investigate the two-photon transport through a waveguide side coupling to a whispering-gallery-atom system. Using the Lehmann-Symanzik-Zimmermann reduction approach, we present the general formula for the two-photon processes including the two-photon scattering matrices, the wave functions, and the second order correlation functions of the outgoing photons. Based on the exact results of the second order correlation functions, we analyze the quantum statistics behaviors of the outgoing photons for two different cases: (a) the ideal case without the intermodal coupling in the whispering-gallery resonator; and (b) the case in the presence of the intermodal coupling which leads to more complex nonlinear behavior. In the ideal case, we show that the system consists of two independent scattering pathways, a free pathway by a cavity mode without atomic excitation, and a “Jaynes-Cummings” pathway described by the Jaynes-Cummings Hamiltonian of a single-mode cavity coupling to an atom. The presence of the free pathway leads to two-photon correlation properties that are distinctively different from the standard Jaynes-Cummings model, in both the strong and weak-coupling regime. In the presence of intermodal mixing, the system no longer exhibits a free resonant pathway. Instead, both the single-photon and the two-photon transport properties depend on the position of the atom. Thus, in the presence of intermodal mixing, one can in fact tune the photon correlation properties by changing the position of the atom. Our formalism can be used to treat resonator and cavity dissipation as well.
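For reference, the "Jaynes-Cummings" pathway named above is governed by the standard single-mode Jaynes-Cummings Hamiltonian (rotating-wave approximation, ħ = 1; this is the textbook form, not necessarily the paper's exact notation):

```latex
H_{\mathrm{JC}} = \omega_c\, a^{\dagger} a \;+\; \omega_a\, \sigma_{+}\sigma_{-} \;+\; g\,\bigl(a^{\dagger}\sigma_{-} + a\,\sigma_{+}\bigr)
```

Here \(\omega_c\) and \(\omega_a\) are the cavity-mode and atomic transition frequencies and \(g\) the coupling; the free pathway discussed in the abstract is the cavity mode with the atom left unexcited.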

Shi, T.; Fan, Shanhui

2013-06-01

283

NASA Astrophysics Data System (ADS)

Photon-counting detectors with energy-discrimination capabilities are able to reduce radiation dose and suppress noise compared with conventional detectors for X-ray imaging. These detectors are suitable for spectral X-ray imaging because they can measure the energy of each photon and provide spectral information. One potential application of photon-counting detectors with energy-discrimination capabilities is the quantification of breast composition by using dual-energy techniques. In this study, we implemented quantitative breast imaging with dual-energy techniques by using Monte Carlo simulations. An X-ray imaging system was simulated with a photon-counting detector based on cadmium zinc telluride and a micro-focus X-ray tube. In order to decompose three materials with two spectral measurements, we applied an additional constraint that the sum of the volumes of each material be equivalent to the volume of the mixture. Inverse fitting functions with the least-squares estimation were used to obtain fitting coefficients and calculate volume fractions for each material. The results showed that the degree of decomposition for the composition included in the mixtures varied with the type of composition and the inverse fitting function. High-order fitting functions increased the quantitative accuracy, but the uncertainty of the decomposed images was increased for high-order fitting functions. This study demonstrates that it is possible to quantify breast composition by using dual-energy techniques and photon-counting detectors without an additional exposure and that the decomposed images should be evaluated by considering both their uncertainties and quantitative accuracies.
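The three-material decomposition with a volume-conservation constraint can be sketched as a linear inversion (the attenuation numbers are purely illustrative, and an exact solve stands in for the paper's least-squares inverse fitting functions):

```python
import numpy as np

# Hypothetical linear attenuation coefficients (cm^-1) of three materials at
# the two measured energy bins -- illustrative numbers, not real tissue data.
MU = np.array([
    [0.80, 0.50, 1.20],   # low-energy bin
    [0.40, 0.28, 0.50],   # high-energy bin
    [1.00, 1.00, 1.00],   # volume conservation: the fractions sum to one
])

def decompose(meas_low, meas_high):
    """Recover three volume fractions from two spectral measurements plus the
    constraint that the material volumes sum to the mixture volume."""
    return np.linalg.solve(MU, np.array([meas_low, meas_high, 1.0]))

# forward-project a known mixture, then recover it
true_f = np.array([0.3, 0.6, 0.1])
low, high = MU[:2] @ true_f
fractions = decompose(low, high)
```

The third row is exactly the additional constraint described in the abstract: without it, two spectral measurements cannot determine three unknowns.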

Lee, Seungwan; Choi, Yu-Na; Kim, Hee-Joung

2014-01-01

284

Purpose: The purpose of this work was to evaluate the absorbed dose to an Al₂O₃ dosimeter at various depths of a water phantom in radiotherapy photon beams by Monte Carlo simulation and to evaluate the beam quality dependence. Methods: The simulations were done using EGSnrc. The cylindrical Al₂O₃ dosimeter (Ø4 mm × 1 mm) was placed at the central axis of the water phantom (Ø16 cm × 16 cm) at depths between 0.5 and 8 cm. The incident beams included monoenergetic photon beams ranging from 1 to 18 MeV, ⁶⁰Co γ beams, Varian 6 MV beams using phase space files based on a full simulation of the linac, and Varian beams between 4 and 24 MV using Mohan's spectra. The absorbed dose to the dosimeter and to the water at the corresponding position in the absence of the dosimeter, as well as the absorbed dose ratio factor f_md, was calculated. Results: The results show that f_md depends strongly on the photon energy at shallow depths. However, as the depth increases, the change in f_md becomes small; beyond the buildup region, the maximum discrepancy of f_md from the average value is not more than 1%. Conclusions: These simulation results support the use of Al₂O₃ dosimeters in radiotherapy photon beams and clearly indicate that more attention should be paid when using such a dosimeter in the buildup region of high-energy radiotherapy photon beams.

Chen Shaowen; Wang Xuetao; Chen Lixin; Tang Qiang; Liu Xiaowei [School of Physics Science and Engineering, Sun Yat-Sen University, Guangzhou 510275 (China) and School of Electron Engineering, Dongguan University of Technology, Dongguan 523808 (China); Guangdong Province Hospital of TCM, Guangzhou 510120 (China); Cancer Center of Sun Yat-Sen University, Guangzhou 510060 (China); School of Physics Science and Engineering, Sun Yat-Sen University, Guangzhou 510275 (China)

2009-10-15

285

Implementation of linearly-polarized photon scattering into the EGS4 code

A modification to the general-purpose Monte Carlo electron-photon transport code EGS4 [Nelson, Hirayama and Rogers, SLAC-265] was made in order to include linear polarization in the simulation of photon scattering. Both the Compton and Rayleigh scattering routines were modified to properly account for the electric-field vector of the photon. This vector is calculated at each photon scatter, and is passed

Y. Namito; S. Ban; H. Hirayama

1993-01-01

286

Enhanced photon-assisted spin transport in a quantum dot attached to ferromagnetic leads

NASA Astrophysics Data System (ADS)

Time-dependent transport in quantum dot systems (QDs) has received significant attention due to a variety of new quantum physical phenomena emerging on transient time scales [1]. In the present work [2] we investigate the real-time dynamics of spin-polarized current in a quantum dot coupled to ferromagnetic leads in both parallel and antiparallel alignments. While the external bias voltage is constant in time, a gate terminal, capacitively coupled to the quantum dot, introduces a periodic modulation of the dot level. Using the nonequilibrium Green's function technique we find that spin-polarized electrons can tunnel through the system via additional photon-assisted transmission channels. Owing to a Zeeman splitting of the dot level, it is possible to select a particular spin component to be photon-transferred from the left to the right terminal, with spin-dependent current peaks arising at different gate frequencies. The ferromagnetic electrodes enhance or suppress the spin transport depending upon the leads' magnetization alignment. The tunnel magnetoresistance also attains negative values due to a photon-assisted inversion of the spin-valve effect. [1] F. M. Souza, Phys. Rev. B 76, 205315 (2007). [2] F. M. Souza, T. L. Carrara, and E. Vernek, Phys. Rev. B 84, 115322 (2011).

Souza, Fabricio M.; Carrara, Thiago L.; Vernek, Edson

2012-02-01

287

Monte Carlo transport calculations and analysis for reactor pressure vessel neutron fluence

The application of Monte Carlo methods for reactor pressure vessel (RPV) neutron fluence calculations is examined. As many commercial nuclear light water reactors approach the end of their design lifetime, it is of great consequence that reactor operators and regulators be able to characterize the structural integrity of the RPV accurately for financial reasons, as well as safety reasons, due to the possibility of plant life extensions. The Monte Carlo method, which offers explicit three-dimensional geometric representation and continuous energy and angular simulation, is well suited for this task. A model of the Three Mile Island unit 1 reactor is presented for determination of RPV fluence; Monte Carlo (MCNP) and deterministic (DORT) results are compared for this application; and numerous issues related to performing these calculations are examined. Synthesized three-dimensional deterministic models are observed to produce results that are comparable to those of Monte Carlo methods, provided the two methods utilize the same cross-section libraries. Continuous energy Monte Carlo methods are shown to predict more (15 to 20%) high-energy neutrons in the RPV than deterministic methods.

Wagner, J.C.; Haghighat, A.; Petrovic, B.G. [Pennsylvania State Univ., University Park, PA (United States). Nuclear Engineering Dept.

1996-06-01

288

NASA Astrophysics Data System (ADS)

This paper deals with verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe electric power sodium cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of the initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.

Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi

2014-06-01

289

NASA Astrophysics Data System (ADS)

A Monte Carlo code for fast hydrogen atom transport and the generation of excessively Doppler-broadened line profiles, based on the collision model, is presented. Results for an initial monoenergetic atom beam and for a more realistic energy distribution of H atoms are reported. Line profiles obtained from the simulation are compared to our experimentally obtained data. The initial energy distribution of the atoms is approximately calculated from the measured line profiles, while the initial angular distribution is taken to be cosine. The Balmer alpha intensity was found to decay exponentially in the negative glow region, which concurs with the experimental results. This agreement between simulation and experiment supports the collision model for excessive line broadening.
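A minimal sketch of the profile-building step (monoenergetic atoms with isotropic directions for simplicity; the paper uses a measured energy distribution and a cosine angular distribution, so the numbers here are purely illustrative):

```python
import math
import random

C_M_S = 2.998e8          # speed of light (m/s)
LAMBDA0_M = 656.28e-9    # H-alpha rest wavelength (m)
M_H_KG = 1.6735e-27      # hydrogen atom mass (kg)

def doppler_shifts(n, energy_ev=10.0, seed=2):
    """Line-of-sight Doppler shifts for a monoenergetic population of fast H
    atoms with isotropic directions; histogramming these shifts yields the
    (flat-topped, in this idealized case) broadened line profile."""
    rng = random.Random(seed)
    v = math.sqrt(2.0 * energy_ev * 1.602e-19 / M_H_KG)
    # mu = cos(angle to the line of sight), uniform for isotropic directions
    return [LAMBDA0_M * v * rng.uniform(-1.0, 1.0) / C_M_S for _ in range(n)]

shifts = doppler_shifts(200_000)
max_shift = LAMBDA0_M * math.sqrt(2.0 * 10.0 * 1.602e-19 / M_H_KG) / C_M_S
```

Summing such histograms over a sampled energy distribution, and following the atoms through collisions as in the paper, produces the composite excessively broadened wing shapes.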

Cvetanović, N.; Obradović, B. M.; Kuraica, M. M.

2009-02-01

290

NASA Astrophysics Data System (ADS)

Adopting the model differential cross sections used by Reid (1979) and by Haddad et al. (1981), an investigation to assess the discrepancies observed in the transverse diffusion coefficient D_T and other transport properties is performed by Monte Carlo simulation. The results show that the values of ND_T vary drastically with the change of anisotropy in the scattering, contrary to the common assumption that ND_T is determined solely by the reduced field E/N for a given momentum transfer cross section. Cross sections so far derived from D_T/μ data may need to be reassessed if anisotropy in the scattering is considered.

Yamamoto, Kohji; Ikuta, Nobuaki

1994-03-01

291

NASA Astrophysics Data System (ADS)

We present a perturbative approach to derive the semiclassical equations of motion for the two-dimensional electron dynamics under the simultaneous presence of static electric and magnetic fields, where the quantized Hall conductance is known to be directly related to the topological properties of translationally invariant magnetic Bloch bands. In close analogy to this approach, we develop a perturbative theory of two-dimensional photonic transport in gyrotropic photonic crystals to mimic the physics of quantum Hall systems. We show that a suitable permittivity grading of a gyrotropic photonic crystal is able to simulate the simultaneous presence of analog electric and magnetic field forces for photons, and we rigorously derive the topology-related term in the equation for the electromagnetic energy velocity that is formally equivalent to the electronic case. A possible experimental configuration is proposed to observe a bulk photonic analog to the quantum Hall physics in graded gyromagnetic photonic crystals.

Esposito, Luca; Gerace, Dario

2013-07-01

292

It recently has been shown experimentally that the focusing provided by a longitudinal nonuniform high magnetic field can significantly improve electron beam dose profiles. This could permit precise targeting of tumors near critical areas and minimize the radiation dose to surrounding healthy tissue. The experimental results together with Monte Carlo simulations suggest that the magnetic confinement of electron radiotherapy beams

Yu Chen; Alex F. Bielajew; Dale W. Litzenberg; Jean M. Moran; Frederick D. Becchetti

2005-01-01

293

NASA Astrophysics Data System (ADS)

We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at the molecular scale to transport equations at the macroscopic scale. The method is applicable to steady-state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to make efficient use of a limited number of kinetic Monte Carlo simulations. In general, the stochastic kinetic Monte Carlo results do not obey mass conservation, so unphysical accumulation of mass could occur in the reactor. We have therefore developed a mass-balance correction method, based on a stoichiometry matrix and a least-squares problem reduced to a non-singular set of linear equations, that is applicable to any surface-catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation, in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interactions at high oxygen coverage. This reaction model is based on ab initio density functional theory calculations from the literature.
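The least-squares mass-balance idea can be illustrated with a minimal sketch: given noisy Monte Carlo rate estimates, apply the minimal-norm correction that makes a conservation constraint hold exactly. A single constraint row stands in here for the full stoichiometry matrix of the paper, and all numbers are illustrative.

```python
def mass_balance_correct(rates, s):
    """Minimal-norm correction of noisy rate estimates so that the
    conservation constraint sum(s[i] * r[i]) == 0 holds exactly.
    This is the least-squares projection onto the constraint's
    null space: r_corr = r - s * (s.r) / (s.s)."""
    dot_sr = sum(si * ri for si, ri in zip(s, rates))
    dot_ss = sum(si * si for si in s)
    return [ri - si * dot_sr / dot_ss for si, ri in zip(s, rates)]

# Noisy Monte Carlo rate estimates that should conserve mass but don't:
s = [1.0, -1.0, 2.0]      # constraint coefficients (one stoichiometry row)
r = [1.02, 0.97, -0.01]   # raw rates; s . r = 0.03, i.e. mass is not conserved
r_corr = mass_balance_correct(r, s)
residual = sum(si * ri for si, ri in zip(s, r_corr))  # ~0 after correction
```

With several constraints, the scalar projection generalizes to solving the small linear system S Sᵀ λ = S r and subtracting Sᵀ λ, which is the non-singular set of linear equations the abstract refers to.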

Schaefer, C.; Jansen, A. P. J.

2013-02-01

294

National Technical Information Service (NTIS)

Calculations of the electronic mobility and drift velocity have been carried out for bulk GaN and AlGaN-GaN heterojunctions based on a Monte Carlo approach. Very good agreement with available experiments has been obtained, and the calculations yielded a s...

R. P. Joshi

2004-01-01

295

Use of single scatter electron monte carlo transport for medical radiation sciences

The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.

Svatos, Michelle M. (Oakland, CA)

2001-01-01

296

Several investigators have pointed out that electron and neutron contamination from high-energy photon beams is clinically important. The aim of this study is to assess electron and neutron contamination production by various prostheses in a high-energy photon beam of a medical linac. A 15 MV Siemens PRIMUS linac was simulated with the MCNPX Monte Carlo (MC) code, and the resulting percentage depth dose (PDD) and dose profile values were compared with measured data. Electron and neutron contamination were calculated on the beam's central axis for Co-Cr-Mo, stainless steel, Ti-alloy, and Ti hip prostheses through MC simulations. The dose increase factor (DIF) was calculated as the ratio of the electron (neutron) dose at a point for a 10 × 10 cm² field in the presence of a prosthesis to that at the same point in its absence. DIF was estimated at different depths in a water phantom. Our MC-calculated PDD and dose profile data are in good agreement with the corresponding measured values. The maximum dose increase factors for electron contamination for Co-Cr-Mo, stainless steel, Ti-alloy, and Ti prostheses were 1.18, 1.16, 1.16, and 1.14, respectively. The corresponding values for neutron contamination were 184.55, 137.33, 40.66, and 43.17, respectively. Titanium-based prostheses are recommended for the orthopedic practice of hip joint replacement. When treatment planning for a patient with a hip prosthesis is performed for a high-energy photon beam, care should be taken to ensure that the prosthesis is not exposed to primary photons. PMID:24036859

Bahreyni Toossi, Mohammad Taghi; Behmadi, Marziyeh; Ghorbani, Mahdi; Gholamhosseinian, Hamid

2013-01-01

297

High-power beam transport through a hollow-core photonic bandgap fiber.

We investigate the use of a seven-cell hollow-core photonic bandgap fiber for transport of CW laser radiation from a single-mode, narrow-linewidth, high-power fiber laser amplifier. Over 90% of the amplifier output was coupled successfully and transmitted through the fiber in a near-Gaussian mode, with negligible backreflection into the source. 100 W of power was successfully transmitted continuously without damage and 160 W of power was transmitted briefly before the onset of thermal lensing in the coupling optics. PMID:24875992

Jones, D C; Bennett, C R; Smith, M A; Scott, A M

2014-06-01

298

For the evaluation of gamma-ray dose rates around the duct penetrations after shutdown of nuclear fusion reactor, the calculation method is proposed with an application of the Monte Carlo neutron and decay gamma-ray transport calculation. For the radioisotope production rates during operation, the Monte Carlo calculation is conducted by the modification of the nuclear data library replacing a prompt gamma-ray

Satoshi SATO; Hiromasa IIDA; Takeo NISHITANI

2002-01-01

299

Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes

NASA Technical Reports Server (NTRS)

The presentation outline includes motivation, the radiation transport codes considered (HZETRN, UPROP, FLUKA, GEANT4), the space radiation cases considered (solar particle events and galactic cosmic rays), results for slab geometry, results for spherical geometry, and a summary.

Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.

2010-01-01

300

The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions have been added to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application. PMID:23877204

Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian

2013-08-21

301

NASA Astrophysics Data System (ADS)

The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions have been added to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.

Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S.; Harrendorf, Marco A.; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian

2013-08-01

302

Result of Monte-Carlo simulation of electron-photon cascades in lead and layers of lead-scintillator

NASA Astrophysics Data System (ADS)

Results of Monte Carlo simulation of electromagnetic cascade development in lead and lead-scintillator sandwiches are analyzed. It is demonstrated that the structure function for the core approximation is not applicable when the primary energy is higher than 100 GeV. The simulation data show that introducing an inhomogeneous chamber structure results in a reduction in the number of secondary particles.

Wasilewski, A.; Krys, E.

1985-08-01

303

The conversion coefficients from photon fluence to ambient dose equivalent, H* (10) and effective doses were calculated for photons up to 10 GeV. A Monte Carlo code EGS4 was used for these calculations and secondary particle transports were considered. The calculated ambient dose equivalents were compared to the calculated effective doses. The comparison shows that the ambient dose equivalents at

Osamu SATO; Nobuaki YOSHIZAWA; Shunji TAKAGI; Satoshi IWAI; Takashi UEHARA; Yukio SAKAMOTO; Yasuhiro YAMAGUCHI; Shun-ichi TANAKA

1999-01-01

304

Monte Carlo investigations of the wavelength dependence of light transport through turbid media

NASA Astrophysics Data System (ADS)

Elastic-scattering spectroscopy examines the wavelength dependence of light that has passed through a small volume of tissue. Measurements are typically made by placing two optical fibers on the surface of the tissue to be examined. The analysis of these measurements to obtain quantitative information about scattering and absorption is important to many biomedical applications such as cancer diagnosis and measurement of bilirubin concentrations. For fiber separations large enough for the diffusion approximation to be valid, this analysis is straightforward. However, for clinical applications such as those listed above, the separation is too small for the diffusion approximation to be applicable. To gain insight into how changes in scattering and absorption affect the wavelength dependence of the elastic-scatter signal, Monte Carlo simulations have been used. First, it is shown that Monte Carlo simulations and elastic-scatter measurements of polystyrene spheres agree quite well. Monte Carlo simulations are then used to investigate how particle size and concentration affect the elastic-scatter signal. It is found that the concentration has very little effect on the wavelength dependence, but that the size of the scattering particles does affect it. In general, the signal decreases more rapidly as a function of wavelength for smaller particles.

Mourant, Judith R.

1997-12-01

305

Experimental benchmarks of the Monte Carlo code penelope

The physical algorithms implemented in the latest release of the general-purpose Monte Carlo code penelope for the simulation of coupled electron–photon transport are briefly described. We discuss the mixed (class II) scheme used to transport intermediate- and high-energy electrons and positrons and, in particular, the approximations adopted to account for the energy dependence of the interaction cross-sections. The reliability of

J. Sempau; J. M. Fernández-Varea; E. Acosta; F. Salvat

2003-01-01

306

NASA Astrophysics Data System (ADS)

A general approach for achieving consistency in the transport properties between direct simulation Monte Carlo (DSMC) and Navier-Stokes (CFD) solvers is presented for five-species air. Coefficients of species diffusion, viscosity, and thermal conductivity are considered. The transport coefficients that are modeled in CFD solvers are often obtained from expressions involving sets of collision integrals, which are computed from more realistic intermolecular potentials (i.e., ab initio calculations). In this work, the self-consistent effective binary diffusion and Gupta et al.-Yos transport models are considered. The DSMC transport coefficients are approximated from Chapman-Enskog theory, in which the collision integrals are computed using either the variable hard sphere (VHS) or the variable soft sphere (VSS) phenomenological collision cross-section model. The VHS and VSS parameters are then adjusted so that the DSMC transport coefficients achieve a best fit to the coefficients computed from the more realistic intermolecular potentials over a range of temperatures. The best-fit collision model parameters are determined for both collision-averaged and collision-specific pairing approaches using the Nelder-Mead simplex algorithm. A consistent treatment of diffusion, viscosity, and thermal conductivity is presented, and recommended sets of best-fit VHS and VSS collision model parameters are provided for a five-species air mixture.
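The parameter-fitting idea can be sketched in a much-simplified form. The VHS model implies a power-law viscosity mu(T) = mu_ref * (T/T_ref)**omega, so the temperature exponent omega can be recovered from reference viscosities by least squares in log space (the paper instead fits multiple parameters to collision-integral data with Nelder-Mead). The reference values below are synthetic, generated from an assumed omega.

```python
import math

def fit_vhs_omega(T, mu, T_ref, mu_ref):
    """Fit the VHS temperature exponent omega in
    mu(T) = mu_ref * (T / T_ref)**omega by ordinary least squares
    on the log-transformed data."""
    x = [math.log(t / T_ref) for t in T]
    y = [math.log(m / mu_ref) for m in mu]
    xbar = sum(x) / len(x)
    ybar = sum(y) / len(y)
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    den = sum((xi - xbar) ** 2 for xi in x)
    return num / den

# Synthetic "reference" viscosities generated with omega = 0.74
# (an illustrative, roughly N2-like value -- an assumption, not data
# from the paper):
T_ref, mu_ref, omega_true = 273.0, 1.656e-5, 0.74
T = [300.0, 500.0, 1000.0, 2000.0, 5000.0]
mu = [mu_ref * (t / T_ref) ** omega_true for t in T]
omega_fit = fit_vhs_omega(T, mu, T_ref, mu_ref)
```

The real calibration problem is multidimensional (diffusion, viscosity, and conductivity simultaneously, for every species pair), which is why a derivative-free optimizer such as Nelder-Mead is used instead of a closed-form fit.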

Stephani, K. A.; Goldstein, D. B.; Varghese, P. L.

2012-07-01

307

The generation of photocurrent in organic solar cells starts with a photon being absorbed in the active layer and creating an excited electron\\/hole pair (exciton). The exciton is mobile and dissociates into electron and hole at an interface between donor and acceptor material, unless it decays before it reaches the interface. If they do not recombine, the charge carriers migrate

Vincent Robbiano; Jutta Luettmer-Strathmann

2011-01-01

308

A Modified Treatment of Sources in Implicit Monte Carlo Radiation Transport

We describe a modification of the treatment of photon sources in the IMC algorithm. We describe this modified algorithm in the context of thermal emission in an infinite medium test problem at equilibrium and show that it completely eliminates statistical noise.

Gentile, N A; Trahan, T J

2011-03-22

309

Correlated two-photon transport in a one-dimensional waveguide side-coupled to a nonlinear cavity

NASA Astrophysics Data System (ADS)

We investigate the transport properties of two photons inside a one-dimensional waveguide side-coupled to a single-mode nonlinear cavity. The cavity is filled with a nonlinear Kerr medium. Based on the Laplace transform method, we present an analytic solution for the quantum states of the two transmitted and reflected photons, which are initially prepared in a Lorentzian wave packet. The solution reveals how quantum correlation between the two photons emerges after the scattering by the nonlinear cavity. In particular, we show that the output wave function of the two photons in position space can be localized in relative coordinates, which is a feature that might be interpreted as a two-photon bound state in this waveguide-cavity system.

Liao, Jie-Qiao; Law, C. K.

2010-11-01

310

Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of the modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd2O2S:Tb screens; (ii) investigate the impact of size distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler and novel geometrical-optics-based models for these parameters and compare them to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Q_sct), absorption efficiency (Q_abs), and the scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 μm. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd2O2S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size distribution (σ = 1%, 5%, or 10% of the mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.).
Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size and emission wavelength. For a phosphor screen structure with a distribution in grain sizes and a spectrum of emission, only the average trend of Mie theory is likely to be important. This average behavior is well predicted by the more sophisticated of the geometrical optics models (GODM+) and in approximate agreement for the simplest (GODM). The root-mean-square differences obtained between predicted MTF and experimental measurements, using all three models (GODM, GODM+, Mie), were within 0.03 for both Lanex screens in all cases. This is excellent agreement in view of the uncertainties in screen composition and optical properties. Conclusions: If Mie theory is used for calculating transport parameters for light scattering and absorption in powdered-phosphor screens, care should be taken to average out the fine structure in the parameter predictions. However, for visible emission wavelengths (λ < 1.0 μm) and grain radii (a > 0.5 μm), geometrical optics models for transport parameters are an alternative to Mie theory. These geometrical optics models are simpler and lead to no substantial loss in accuracy.

Poludniowski, Gavin G. [Joint Department of Physics, Division of Radiotherapy and Imaging, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Downs Road, Sutton, Surrey SM2 5PT, United Kingdom and Centre for Vision Speech and Signal Processing (CVSSP), Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, Surrey GU2 7XH (United Kingdom); Evans, Philip M. [Centre for Vision Speech and Signal Processing (CVSSP), Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, Surrey GU2 7XH (United Kingdom)

2013-04-15

311

NASA Astrophysics Data System (ADS)

Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrate a thorough understanding of the behavior of the detector system.

Preston, M. F.; Myers, L. S.; Annand, J. R. M.; Fissum, K. G.; Hansen, K.; Isaksson, L.; Jebali, R.; Lundin, M.

2014-04-01

312

Transport anisotropy of the pnictides studied via Monte Carlo simulations of the Spin-Fermion model

An undoped three-orbital spin-fermion model for the Fe-based superconductors is studied via Monte Carlo techniques in two-dimensional clusters. At low temperatures, the magnetic and one-particle spectral properties are in agreement with neutron and photoemission experiments. Our main results are the resistance versus temperature curves that display the same features observed in BaFe2As2 detwinned single crystals (under uniaxial stress), including a low-temperature anisotropy between the two directions followed by a peak at the magnetic ordering temperature, that qualitatively appears related to short-range spin order and concomitant Fermi surface orbital order.

Liang, Shuhua [ORNL; Alvarez, Gonzalo [ORNL; Sen, Cengiz [ORNL; Moreo, Adriana [ORNL; Dagotto, Elbio R [ORNL

2012-01-01

313

NASA Astrophysics Data System (ADS)

We explore theoretically single-photon transport in a single-mode waveguide that is coupled to a hybrid atom-optomechanical system in the strong optomechanical coupling regime. Using a full quantum real-space approach, transmission and reflection coefficients of the propagating single photon in the waveguide are obtained. The influence of atom-cavity detuning and of atomic dissipation on the transport is also studied. Intriguingly, the obtained spectral features can reveal the strong light-matter interaction in this hybrid system.

Jia, W. Z.; Wang, Z. D.

2013-12-01

314

Ion channels, as nature's solution to regulating biological environments, are particularly interesting to device engineers seeking to understand how natural molecular systems realize device-like functions, such as stochastic sensing of organic analytes. What's more, attaching molecular adaptors in desired orientations inside genetically engineered ion channels enhances the system's functionality as a biosensor. In general, a hierarchy of simulation methodologies is needed to study different aspects of a biological system like ion channels. Biology Monte Carlo (BioMOCA), a three-dimensional coarse-grained particle ion channel simulator, offers a powerful and general approach to studying ion channel permeation. BioMOCA is based on the Boltzmann Transport Monte Carlo (BTMC) and Particle-Particle-Particle-Mesh (P3M) methodologies developed at the University of Illinois at Urbana-Champaign. In this paper, we have employed BioMOCA to study two engineered mutations of α-HL, namely (M113F)6(M113C-D8RL2)1-β-CD and (M113N)6(T117C-D8RL3)1-β-CD. The channel conductance calculated by BioMOCA is slightly higher than experimental values. The permanent charge distributions and the geometrical shape of the channels give rise to selectivity towards anions and also to an asymmetry in the I-V curves, promoting rectification largely for cations.

Toghraee, Reza; Lee, Kyu-Il; Papke, David; Chiu, See-Wing; Jakobsson, Eric; Ravaioli, Umberto

2009-01-01

315

Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculation of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement is the addition of parallelism within the CG tracker via spatial domain decomposition. This permits calculation of problems with a large degree of geometric complexity, which are not possible through particle parallelism alone. In this method, the cells are decomposed across processors, and a particle is communicated to an adjacent processor when it tracks to an interprocessor boundary. Applications that demonstrate the efficacy of these new methods are presented.
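The spatial domain decomposition described above can be sketched with a toy 1-D model (this is an illustration, not Mercury's implementation): each rank owns an interval of the geometry, and a particle that tracks across an interprocessor boundary is placed on the adjacent domain's queue instead of being followed further locally. The domain count, step sizes, and dictionary-based particle record are all illustrative assumptions.

```python
# Toy 1-D sketch: domain i owns [i, i+1); a particle that tracks past
# its owner's boundary is "communicated" (queued) to the adjacent domain.
N_DOMAINS = 4

def owner(x):
    """Rank that owns position x (last rank owns everything beyond)."""
    return min(int(x), N_DOMAINS - 1)

queues = [[] for _ in range(N_DOMAINS)]
queues[0].append({"x": 0.2, "step": 0.5, "steps_left": 6})

processed = []
active = True
while active:
    active = False
    for rank, q in enumerate(queues):
        handoffs = []
        while q:
            p = q.pop()
            while p["steps_left"] > 0:
                p["x"] += p["step"]
                p["steps_left"] -= 1
                new_rank = owner(p["x"])
                if new_rank != rank:
                    handoffs.append((new_rank, p))  # crossed a boundary
                    break
            else:
                processed.append(p)  # history finished on this rank
        for new_rank, p in handoffs:
            queues[new_rank].append(p)  # "communicate" to the neighbor
            active = True

final_x = processed[0]["x"]  # 0.2 + 6 * 0.5 = 3.2, owned by domain 3
```

In the real code the queues live on different processors and the handoff is a message; the essential point is that geometry is distributed, so each rank only needs the cells it owns.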

Greenman, G M; O'Brien, M J; Procassini, R J; Joy, K I

2009-03-09

316

Delta f Monte Carlo Calculation Of Neoclassical Transport In Perturbed Tokamaks

Non-axisymmetric magnetic perturbations can fundamentally change neoclassical transport in tokamaks by distorting particle orbits on deformed or broken flux surfaces. This so-called non-ambipolar transport is highly complex, and a numerical simulation is ultimately required for its precise description and understanding. A new δf particle code (POCA) has been developed for this purpose using a modified pitch-angle collision operator that preserves momentum conservation. POCA was successfully benchmarked for neoclassical transport and momentum conservation in an axisymmetric configuration. The non-ambipolar particle flux is calculated in the non-axisymmetric case, and the results show a clear resonant nature of non-ambipolar transport and magnetic braking. The neoclassical toroidal viscosity (NTV) torque is calculated using anisotropic pressures and the magnetic field spectrum, and compared with the generalized NTV theory. The calculations indicate a clear δB² dependence of the NTV, and good agreement with theory on NTV torque profiles and amplitudes depending on collisionality.

Kimin Kim, Jong-Kyu Park, Gerrit Kramer and Allen H. Boozer

2012-04-11

317

NASA Astrophysics Data System (ADS)

This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescence (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone calcium content, and the shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered the uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing the overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in the calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001).
Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate the lead content of a human leg by up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of the lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.

Lodwick, Camille J.

318

A dedicated, efficient Monte Carlo (MC) accelerator head model for intensity-modulated stereotactic radiosurgery treatment planning is needed to afford a highly accurate simulation of tiny IMRT fields. A virtual source model (VSM) of a mini multi-leaf collimator (MLC) (the Elekta Beam Modulator (EBM)) is presented, allowing efficient generation of particles even for small fields. The VSM of the EBM is based on a previously published virtual photon energy fluence model (VEF) (Fippel et al 2003 Med. Phys. 30 301) commissioned with large-field measurements in air and in water. The original commissioning procedure of the VEF, based on large-field measurements only, leads to inaccuracies for small fields. In order to improve the VSM, it was necessary to change the VEF model by developing (1) a method to determine the primary photon source diameter, relevant for output factor calculations, (2) a model of the influence of the flattening filter on the secondary photon spectrum and (3) a more realistic primary photon spectrum. The VSM model is used to generate the source phase-space data above the mini-MLC. The particles are then transmitted through the mini-MLC by a passive filter function, which significantly speeds up generation of the phase-space data after the mini-MLC, used for calculation of the dose distribution in the patient. The improved VSM model was commissioned for 6 and 15 MV beams. The results of the MC simulation are in very good agreement with measurements. Less than 2% local difference between the MC simulation and diamond detector measurements of the output factors in water was achieved. The X, Y and Z profiles measured in water with an ion chamber (V = 0.125 cm³) and a diamond detector were used to validate the models. An overall agreement of 2%/2 mm in high-dose regions and 3%/2 mm in low-dose regions between measurement and MC simulation was achieved for field sizes from 0.8 × 0.8 cm² to 16 × 21 cm².
An IMRT plan film verification was performed for two cases: a 6 MV head-and-neck plan and a 15 MV prostate plan. The simulation is in agreement with film measurements within 2%/2 mm in high-dose regions (≥ 0.1 Gy = 5% of D_max) and 5%/2 mm in low-dose regions (< 0.1 Gy). PMID:17634643

Sikora, M; Dohm, O; Alber, M

2007-08-01

319

Carrier transport through a dry-etched InP-based two-dimensional photonic crystal

NASA Astrophysics Data System (ADS)

The electrical conduction across a two-dimensional photonic crystal (PhC) fabricated by Ar/Cl2 chemically assisted ion beam etching in n-doped InP is influenced by the surface potential of the hole sidewalls, modified by dry etching. Carrier transport across photonic crystal fields with different lattice parameters is investigated. For a given lattice period the PhC resistivity increases with the air fill factor and for a given air fill factor it increases as the lattice period is reduced. The measured current-voltage characteristics show clear ohmic behavior at lower voltages followed by current saturation at higher voltages. This behavior is confirmed by finite element ISE TCAD™ simulations. The observed current saturation is attributed to electric-field-induced saturation of the electron drift velocity. From the measured and simulated conductance for the different PhC fields we show that it is possible to determine the sidewall depletion region width and hence the surface potential. We find that at the hole sidewalls the etching induces a Fermi level pinning at about 0.12 eV below the conduction band edge, a value much lower than the bare InP surface potential. The results indicate that for n-InP the volume available for conduction in the etched PhCs approaches the geometrically defined volume as the doping is increased.

Berrier, A.; Mulot, M.; Malm, G.; Östling, M.; Anand, S.

2007-06-01

320

As a widely used numerical solution for the radiation transport equation (RTE), the discrete ordinates can predict the propagation of photons through biological tissues more accurately relative to the diffusion equation. The discrete ordinates reduce the RTE to a series of differential equations that can be solved by source iteration (SI). However, the tremendous time consumption of SI, which is partly caused by the expensive computation of each SI step, limits its applications. In this paper, we present a graphics processing unit (GPU) parallel accelerated SI method for discrete ordinates. Utilizing the calculation independence on the levels of the discrete ordinate equation and spatial element, the proposed method reduces the time cost of each SI step by parallel calculation. The photon reflection at the boundary was calculated based on the results of the last SI step to ensure the calculation independence on the level of the discrete ordinate equation. An element sweeping strategy was proposed to detect the calculation independence on the level of the spatial element. A GPU parallel frame called the compute unified device architecture was employed to carry out the parallel computation. The simulation experiments, which were carried out with a cylindrical phantom and a numerical mouse, indicated that the time cost of each SI step can be reduced by up to a factor of 228 by the proposed method with a GTX 260 graphics card. PMID:21772362
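
The source iteration scheme that the paper accelerates can be illustrated with a deliberately small serial sketch. The code below solves a toy 1-D, two-ordinate (S2) slab problem with isotropic scattering; all parameter values are illustrative assumptions, and this is not the paper's GPU implementation (which parallelizes each SI step over discrete ordinates and spatial elements).

```python
def source_iteration(nx=50, dx=0.1, sigma_t=1.0, sigma_s=0.5, q=1.0,
                     tol=1e-8, max_iter=500):
    """Serial SI for a 1-D slab with isotropic scattering, S2 quadrature."""
    mu = [0.57735, -0.57735]          # S2 ordinate directions
    w = [1.0, 1.0]                    # quadrature weights (sum to 2)
    phi = [0.0] * nx                  # scalar flux iterate
    for it in range(max_iter):
        phi_new = [0.0] * nx
        for m, mu_m in enumerate(mu):         # independent sweep per ordinate
            psi = 0.0                         # vacuum boundary condition
            cells = range(nx) if mu_m > 0 else range(nx - 1, -1, -1)
            for i in cells:                   # upwind sweep along mu_m
                src = 0.5 * (sigma_s * phi[i] + q)       # lagged scattering
                psi = (src * dx + abs(mu_m) * psi) / (abs(mu_m) + sigma_t * dx)
                phi_new[i] += w[m] * psi
        if max(abs(a - b) for a, b in zip(phi_new, phi)) < tol:
            return phi_new, it + 1
        phi = phi_new
    return phi, max_iter
```

Because the scattering source is lagged by one iteration, the sweep for each ordinate is independent of the others; this is precisely the independence that a GPU version can exploit.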

Peng, Kuan; Gao, Xinbo; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; He, Xiaowei; Wang, Xiaorei; Liang, Jimin; Tian, Jie

2011-07-20

321

Parallel FE Electron-Photon Transport Analysis on 2-D Unstructured Mesh

A novel solution method has been developed to solve the coupled electron-photon transport problem on an unstructured triangular mesh. Instead of tackling the first-order form of the linear Boltzmann equation, this approach is based on the second-order form in conjunction with the conventional multi-group discrete-ordinates approximation. The highly forward-peaked electron scattering is modeled with a multigroup Legendre expansion derived from the Goudsmit-Saunderson theory. The finite element method is used to treat the spatial dependence. The solution method is unique in that the space-direction dependence is solved simultaneously, eliminating the need for the conventional inner iterations, a method that is well suited for massively parallel computers.

Drumm, C.R.; Lorenz, J.

1999-03-02

322

Galerkin-based meshless methods for photon transport in the biological tissue.

As an important small animal imaging technique, optical imaging has attracted increasing attention in recent years. However, the photon propagation process is extremely complicated owing to the highly scattering nature of biological tissue. Furthermore, the light transport simulation in tissue has a significant influence on inverse source reconstruction. In this contribution, we present two Galerkin-based meshless methods (GBMM) to determine the light exitance on the surface of the diffusive tissue. The two methods are both based on moving least squares (MLS) approximation which requires only a series of nodes in the region of interest, so the complicated meshing task can be avoided compared with the finite element method (FEM). Moreover, MLS shape functions are further modified to satisfy the delta function property in one method, which can simplify the processing of boundary conditions in comparison with the other. Finally, the performance of the proposed methods is demonstrated with numerical and physical phantom experiments. PMID:19065170

Qin, Chenghu; Tian, Jie; Yang, Xin; Liu, Kai; Yan, Guorui; Feng, Jinchao; Lv, Yujie; Xu, Min

2008-12-01

323

Characterization of photonic bandgap fiber for high-power narrow-linewidth optical transport

NASA Astrophysics Data System (ADS)

An investigation of the use of hollow-core photonic bandgap (PBG) fiber to transport high-power narrow-linewidth light is performed. In conventional fiber the main limitation in this case is stimulated Brillouin scattering (SBS), but in PBG fiber the overlap between the optical intensity and the silica that hosts the acoustic phonons is reduced. In this paper we show that this should increase the SBS threshold to the multi-kW level even when including the non-linear interaction with the air in the core. A full model and experimental measurement of the SBS spectra are presented, including back-scatter into other optical modes besides the fundamental, and some of the issues of coupling high power into hollow-core fibers are discussed.
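
The threshold argument can be made concrete with the textbook SBS threshold estimate, P_th ≈ 21 · A_eff / (g_B,eff · L_eff). The sketch below is a back-of-envelope calculation under assumed, illustrative parameter values (fiber loss, Brillouin gain coefficient, acousto-optic overlap fraction); none of the numbers are taken from the paper.

```python
import math

def sbs_threshold_w(a_eff_m2, g_b, length_m, alpha_db_per_km=10.0, overlap=1.0):
    """Classical estimate P_th ~ 21 * A_eff / (g_B_eff * L_eff), in watts."""
    alpha = alpha_db_per_km * 1e-3 * math.log(10.0) / 10.0   # dB/km -> 1/m
    l_eff = (1.0 - math.exp(-alpha * length_m)) / alpha      # effective length
    return 21.0 * a_eff_m2 / (g_b * overlap * l_eff)

# Conventional solid-core fiber (assumed: g_B ~ 5e-11 m/W, A_eff ~ 80 um^2)
p_solid = sbs_threshold_w(80e-12, 5e-11, 100.0)
# Hollow-core PBG fiber: assume only ~1% of the optical power overlaps silica
p_hollow = sbs_threshold_w(80e-12, 5e-11, 100.0, overlap=0.01)
```

With these assumed numbers, reducing the overlap by two orders of magnitude raises the threshold by the same factor, from sub-watt to tens of watts over 100 m of fiber.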

Bennett, Charlotte R.; Jones, David C.; Smith, Mark A.; Scott, Andrew M.; Lyngsoe, Jens K.; Jakobsen, Christian

2014-03-01

324

The energy band memory server algorithm for parallel Monte Carlo transport calculations

NASA Astrophysics Data System (ADS)

An algorithm is developed to significantly reduce the on-node footprint of cross section memory in Monte Carlo particle tracking algorithms. The classic method of per-node replication of cross section data is replaced by a memory server model, in which the read-only lookup tables reside on a remote set of disjoint processors. The main particle tracking algorithm is then modified in such a way as to enable efficient use of the remotely stored data. Results of a prototype code on a Blue Gene/Q installation reveal that the penalty for remote storage is reasonable in the context of time scales for real-world applications, thus yielding a path forward for a broad range of applications that are memory bound using current techniques.
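
A minimal single-process sketch of the memory-server idea: the cross-section table is partitioned into contiguous energy bands, each owned by one "server", and trackers route lookups to the band owner instead of holding a full copy. Class and function names are hypothetical; a real implementation would route these lookups over MPI to disjoint server ranks.

```python
import bisect

class XSServer:
    """Owns one contiguous energy band of a read-only cross-section table."""
    def __init__(self, energies, values):
        self.energies = energies      # sorted energy grid for this band
        self.values = values          # cross sections on that grid

    def lookup(self, energy):
        # nearest-lower grid point within the band (clamped at the edges)
        i = max(bisect.bisect_right(self.energies, energy) - 1, 0)
        return self.values[i]

def make_banded_servers(grid, xs, n_servers):
    """Partition a table into n_servers contiguous, disjoint energy bands."""
    size = (len(grid) + n_servers - 1) // n_servers
    return [XSServer(grid[k:k + size], xs[k:k + size])
            for k in range(0, len(grid), size)]

def fetch(servers, energy):
    """Route a lookup to the server whose band covers `energy`."""
    chosen = servers[0]
    for s in servers:
        if s.energies[0] <= energy:   # last band starting at or below energy
            chosen = s
    return chosen.lookup(energy)
```

The tracking loop then calls `fetch` at each collision instead of indexing a local table, trading a communication round-trip for an n_servers-fold reduction in per-node table memory.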

Felker, Kyle G.; Siegel, Andrew R.; Smith, Kord S.; Romano, Paul K.; Forget, Benoit

2014-06-01

325

Background: Impaired serotonin transmission has been implicated in the pathophysiology of eating disorders. We investigated the in vivo availability of brain serotonin transporters and dopamine transporters in bulimia nervosa patients. Methods: Approximately 24 hours after injection of [123I]-2β-carbomethoxy-3β-(4-iodophenyl)tropane ([123I]β-CIT), single photon emission computed tomography scans were performed in 10 medication-free, female bulimic patients and 10 age-matched, healthy females. For quantification

Johannes Tauscher; Walter Pirker; Matthäus Willeit; Martina de Zwaan; Ursula Bailer; Alexander Neumeister; Susanne Asenbaum; Claudia Lennkh; Nicole Praschak-Rieder; Thomas Brücke; Siegfried Kasper

2001-01-01

326

NASA Astrophysics Data System (ADS)

Multi-carrier Monte Carlo simulation of charge transport is employed to test the suitability of the Meyer-Neldel rule (MNR) in extracting energetic disorder from homogeneous organic thin films in diode geometry. The studies validate the use of the MNR for extracting energetic disorder from homogeneous organic thin films.

Mohan, S. Raj; Singh, Manoranjan P.; Joshi, M. P.; Kukreja, L. M.

2013-02-01

327

We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in photon transport. On the one hand, this study provides insight into future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefiting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal's azimuthal position.

Liu Yongchun; Xiao Yunfeng; Li Beibei; Jiang Xuefeng; Li Yan; Gong Qihuang [State Key Lab for Mesoscopic Physics, School of Physics, Peking University, Beijing 100871 (China)

2011-07-15

328

NASA Astrophysics Data System (ADS)

Recently, a pump beam size dependence of thermal conductivity was observed in Si at cryogenic temperatures using time-domain thermal reflectance (TDTR). These observations were attributed to quasiballistic phonon transport, but the interpretation of the measurements has been semi-empirical. Here, we present a numerical study of the heat conduction that occurs in the full 3D geometry of a TDTR experiment, including an interface, using the Boltzmann transport equation. We identify the radial suppression function that describes the suppression in heat flux, compared to Fourier's law, that occurs due to quasiballistic transport and demonstrate good agreement with experimental data. We also discuss unresolved discrepancies that are important topics for future study.

Ding, D.; Chen, X.; Minnich, A. J.

2014-04-01

329

Implementation of the Doppler broadening of a Compton-scattered photon into the EGS4 code

A modification to the general-purpose Monte Carlo electron-photon transport code EGS4 [1] was made in order to include Doppler broadening of Compton-scattered photon energy due to electron pre-collision motion. The Compton-scattered photon energy is sampled from a cross section formula based on the Compton profile, and the Compton scattering is sustained if the energy imparted to the electron is less

Y. Namito; S. Ban; H. Hirayama

1994-01-01

330

δf Monte Carlo calculation of neoclassical transport in perturbed tokamaks

Non-axisymmetric magnetic perturbations can fundamentally change neoclassical transport in tokamaks by distorting particle orbits on deformed or broken flux surfaces. This so-called non-ambipolar transport is highly complex, and eventually a numerical simulation is required to achieve its precise description and understanding. A new δf particle orbit code (POCA) has been developed for this purpose using a modified pitch-angle collision operator preserving momentum conservation. POCA was successfully benchmarked for neoclassical transport and momentum conservation in the axisymmetric configuration. Non-ambipolar particle flux is calculated in the non-axisymmetric case, and the results show a clear resonant nature of non-ambipolar transport and magnetic braking. Neoclassical toroidal viscosity (NTV) torque is calculated using anisotropic pressures and magnetic field spectrum, and compared with the combined and 1/ν NTV theory. Calculations indicate a clear δB² scaling of NTV, and good agreement with the theory on NTV torque profiles and amplitudes depending on collisionality.

Kim, Kimin; Park, Jong-Kyu; Kramer, Gerrit J. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States); Boozer, Allen H. [Columbia University, New York, New York 10027 (United States)

2012-08-15

331

In this, the first of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, various aspects of the modelling effort are examined. In particular, the need to save on core memory causes one to use only specific realizations that have certain initial characteristics; in effect, these transport simulations are conditioned by these characteristics. Also, the need to independently estimate length scales for the generated fields is discussed. The statistical uniformity of the flow field is investigated by plotting the variance of the seepage velocity for vector components in the x, y, and z directions. Finally, specific features of the velocity field itself are illuminated in this first paper. In particular, these data give one the opportunity to investigate the effective hydraulic conductivity in a flow field which is approximately statistically uniform; comparisons are made with first- and second-order perturbation analyses. The mean cloud velocity is examined to ascertain whether it is identical to the mean seepage velocity of the model. Finally, the variance in the cloud centroid velocity is examined for the effect of source size and differing strengths of local transverse dispersion.

Naff, R. L.; Haley, D. F.; Sudicky, E. A.

1998-01-01

332

A simplified spherical harmonic method for coupled electron-photon transport calculations

In this thesis the author has developed a simplified spherical harmonic method (SP_N method) and associated efficient solution techniques for 2-D multigroup electron-photon transport calculations. The SP_N method has never before been applied to charged-particle transport. He has performed the first Fourier analysis of the source iteration scheme and the P_1 diffusion synthetic acceleration (DSA) scheme applied to the 2-D SP_N equations. The theoretical analyses indicate that the source iteration and P_1 DSA schemes are as effective for the 2-D SP_N equations as for the 1-D S_N equations. In addition, he has applied an angular multigrid acceleration scheme, and computationally demonstrated that it performs as well for the 2-D SP_N equations as for the 1-D S_N equations. It has previously been shown for 1-D S_N calculations that this scheme is much more effective than the DSA scheme when scattering is highly forward-peaked. The author has investigated the applicability of the SP_N approximation to two different physical classes of problems: satellite electronics shielding from geomagnetically trapped electrons, and electron beam problems.

Josef, J.A.

1997-12-01

333

Voxel2MCNP: software for handling voxel models for Monte Carlo radiation transport calculations.

Voxel2MCNP is a program that sets up radiation protection scenarios with voxel models and generates corresponding input files for the Monte Carlo code MCNPX. Its technology is based on object-oriented programming, and the development is platform-independent. It has a user-friendly graphical interface including a two- and three-dimensional viewer. A range of equipment models is implemented in the program. Various voxel model file formats are supported. Applications include calculation of counting efficiency of in vivo measurement scenarios and calculation of dose coefficients for internal and external radiation scenarios. Moreover, anthropometric parameters of voxel models, for instance chest wall thickness, can be determined. Voxel2MCNP offers several methods for voxel model manipulations including image registration techniques. The authors demonstrate the validity of the program results and provide references for previous successful implementations. The authors illustrate the reliability of calculated dose conversion factors and specific absorbed fractions. Voxel2MCNP is used on a regular basis to generate virtual radiation protection scenarios at Karlsruhe Institute of Technology while further improvements and developments are ongoing. PMID:22217596

Hegenbart, Lars; Pölz, Stefan; Benzler, Andreas; Urban, Manfred

2012-02-01

334

NASA Astrophysics Data System (ADS)

Space and ground level electronic equipment with semiconductor devices is always subjected to deleterious effects by radiation. The study of ion-solid interaction can show the radiation effects of scattering and stopping of high speed atomic particles when passing through matter. This study has been of theoretical interest and of practical importance in recent years, driven by the need to control material properties at the nanoscale. This paper attempts to present the calculations of the final 3D distribution of the ions and all kinetic phenomena associated with the ion's energy loss: target damage, sputtering, ionization, and phonon production of alpha (α) particles in Gallium Arsenide (GaAs) material. This calculation is simulated using the Monte Carlo code SRIM (Stopping and Range of Ions in Matter). The comparison of radiation tolerance between the conventional scale and nanoscale GaAs layer will be discussed as well. From the findings, it is observed that most of the damage formed in the GaAs layer is induced by the production of lattice defects in the form of vacancies, defect clusters and dislocations. However, when the GaAs layer is scaled down (nanoscaling), it is found that the GaAs layer can withstand higher radiation energy in terms of displacement damage.

Amir, Haider F. Abdul; Chee, Fuei Pien

2012-09-01

335

NASA Astrophysics Data System (ADS)

Hardware accelerators are currently becoming increasingly important in boosting high performance computing systems. In this study, we tested the performance of two accelerator models, the NVIDIA Tesla M2090 GPU and the Intel Xeon Phi 5110p coprocessor, using a new Monte Carlo photon transport package called ARCHER-CT that we have developed for fast CT imaging dose calculation. The package contains three code variants, ARCHER-CT_CPU, ARCHER-CT_GPU and ARCHER-CT_COP, to run in parallel on the multi-core CPU, GPU and coprocessor architectures respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms were included in the code to calculate absorbed dose to radiosensitive organs under specified scan protocols. The results from ARCHER agreed well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). It was found that all the code variants were significantly faster than the parallel MCNPX running on 12 MPI processes, and that the GPU and coprocessor performed equally well, being 2.89 to 4.49 and 3.01 to 3.23 times faster than the parallel ARCHER-CT_CPU running with 12 hyperthreads.

Liu, Tianyu; Xu, X. George; Carothers, Christopher D.

2014-06-01

336

In single photon emission computed tomography (SPECT), the collimator is a crucial element of the imaging chain and controls the noise-resolution tradeoff of the collected data. The current study is an evaluation of the effects of different thicknesses of a low-energy high-resolution (LEHR) collimator on tomographic spatial resolution in SPECT. In the present study, the SIMIND Monte Carlo program was used to simulate a SPECT system equipped with an LEHR collimator. A point source of 99mTc and an acrylic cylindrical Jaszczak phantom, with cold spheres and rods, and a human anthropomorphic torso phantom (4D-NCAT phantom) were used. Simulated planar images and reconstructed tomographic images were evaluated both qualitatively and quantitatively. Based on the tabulated calculated detector parameters, the contribution of Compton scattering and photoelectric reactions, and the peak-to-Compton (P/C) area in the energy spectra obtained by scanning the sources with 11 collimator thicknesses (ranging from 2.400 to 2.410 cm), we concluded that 2.405 cm is the proper thickness for the LEHR parallel-hole collimator. Image quality analysis by the structural similarity index (SSIM) algorithm, and also by visual inspection, showed that images of suitable quality were obtained with a collimator thickness of 2.405 cm. The projections and reconstructed images prepared with the 2.405 cm LEHR collimator thickness also compared favorably with those for the other collimator thicknesses in the performance parameter analysis.
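
For reference, the SSIM comparison mentioned above reduces, in its simplest global (single-window) form, to the standard luminance-contrast-structure ratio. The sketch below uses the usual default constants (K1 = 0.01, K2 = 0.03) and treats images as flat pixel lists; practical SSIM implementations average this quantity over local sliding windows.

```python
def ssim_global(x, y, dynamic_range=255.0):
    """Global (single-window) SSIM between two equal-length pixel lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1 = (0.01 * dynamic_range) ** 2      # stabilizing constant, K1 = 0.01
    c2 = (0.03 * dynamic_range) ** 2      # stabilizing constant, K2 = 0.03
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical images score exactly 1; any mismatch in mean, variance, or covariance pulls the score below 1, which is what makes the index useful for ranking reconstructions against a reference.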

Islamian, Jalil Pirayesh; Toossi, Mohammad Taghi Bahreyni; Momennezhad, Mahdi; Zakavi, Seyyed Rasoul; Sadeghi, Ramin; Ljungberg, Michael

2012-01-01

337

A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes

NASA Technical Reports Server (NTRS)

We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.

Schnittman, Jeremy David; Krolik, Julian H.

2013-01-01

338

NASA Astrophysics Data System (ADS)

We consider two phase flow and transport in heterogeneous porous media with uncertain permeability distribution. The resulting transport uncertainty is assessed by means of multilevel Monte Carlo (MLMC). In contrast to the Monte Carlo (MC) method, which operates on one specific numerical grid with one numerical solver, MLMC samples from a hierarchy of grids or numerical solvers. In this work, the MLMC performance resulting from a hierarchy consisting of a finite volume transport solver and a streamline-based solver is compared to a purely grid-based hierarchy. Unlike the established grid-based MLMC method, our solver-based MLMC method operates on the same numerical grid and therefore avoids difficulties related to the upscaling of permeability fields or boundary conditions on coarser grids. For a two dimensional test case with log-normal permeability distribution, both MLMC approaches are compared to a MC reference run. At equivalent accuracy, significant speedups of MLMC with respect to MC are achieved.
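
The solver-hierarchy idea can be sketched with a two-level MLMC estimator. The toy "solvers" below are quadratures of a random quantity of interest at two resolutions, standing in for transport solves on coarse and fine grids (or the streamline vs. finite-volume pair of the paper); all names and parameter values are illustrative.

```python
import math
import random

def solver(sample, n_steps):
    # Toy quantity of interest: left-endpoint quadrature of exp(sample * x)
    # on [0, 1]; larger n_steps means smaller discretization error.
    h = 1.0 / n_steps
    return sum(math.exp(sample * (i * h)) for i in range(n_steps)) * h

def mlmc_two_level(n_coarse, n_fine, seed=0):
    rng = random.Random(seed)
    # Level 0: many cheap coarse solves estimate the bulk of the expectation.
    e0 = sum(solver(rng.uniform(0.0, 1.0), 8) for _ in range(n_coarse)) / n_coarse
    # Level 1: few samples of the fine-minus-coarse correction; feeding the
    # SAME random input to both solvers keeps the correction variance small.
    corr = 0.0
    for _ in range(n_fine):
        s = rng.uniform(0.0, 1.0)
        corr += solver(s, 64) - solver(s, 8)
    return e0 + corr / n_fine
```

The speedup comes from the telescoping split: the expensive fine solver is only run for the small correction term, while the cheap coarse solver absorbs most of the sampling effort.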

Müller, Florian; Meyer, Daniel W.; Jenny, Patrick

2014-07-01

339

Monte Carlo modeling of the spatially dispersive carrier transport in P3HT and P3HT:PCBM blends

NASA Astrophysics Data System (ADS)

The presence of traps, arising from morphological or chemical defects, can be critical to the performance of organic semiconductor devices. Traps can reduce the charge carrier mobility, disturb the internal electrical field, drive recombination, and reduce the overall device efficiency as well as operational stability. In this work, we investigate the role of traps in determining charge transport properties of organic semiconductors and blends such as P3HT and P3HT:PCBM through Monte Carlo (MC) simulations in conjunction with time-of-flight (TOF) mobility measurements. We employ a Marcus theory description of individual hopping events based on the molecular reorganization energy (λ) for the MC simulations. Trap states are modeled as diffuse bands that reside at some energy away from the main transport band. This model is used to simulate TOF transients, and the results are compared to experimental data. As is expected from the Marcus theory equation, the mobility is seen to be maximum for an optimal value of λ. This optimal value is strongly field dependent, but is found to be independent of the trap density. In comparing MC simulations with TOF data, it is found that inclusion of traps results in a much better fit to the data and provides for a mechanism to simulate dispersive transport with a long tail resulting from trapping and detrapping of carriers before they exit the device. We present results for a range of trap densities and statistical distributions and discuss the implications on the operation of bulk heterojunction organic photovoltaic devices.
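
A hedged sketch of the hopping picture described above: a single carrier performs kinetic Monte Carlo hops with Marcus rates along a 1-D chain in which a fraction of sites are deep traps. The parameter values (λ, kBT, trap depth, field) are assumed for illustration and are not taken from the paper, which simulates full 3-D multi-carrier lattices.

```python
import math
import random

KB_T = 0.025     # eV, room temperature (assumed)
LAM = 0.2        # eV, molecular reorganization energy (assumed)

def marcus_rate(delta_e, lam=LAM, k0=1.0):
    """Marcus hopping rate for a site-energy change delta_e, in units of k0."""
    return k0 * math.exp(-(delta_e + lam) ** 2 / (4.0 * lam * KB_T))

def transit_time(n_sites=200, trap_depth=0.3, trap_frac=0.1,
                 field=0.05, seed=1):
    """Kinetic MC time for one carrier to cross a 1-D chain (arbitrary units)."""
    rng = random.Random(seed)
    energy = [-trap_depth if rng.random() < trap_frac else 0.0
              for _ in range(n_sites)]
    t, i = 0.0, 0
    while i < n_sites - 1:
        # the field lowers the energy of each forward hop by `field` (spacing = 1)
        kf = marcus_rate(energy[i + 1] - energy[i] - field)
        kb = marcus_rate(energy[i - 1] - energy[i] + field) if i > 0 else 0.0
        ktot = kf + kb
        t += rng.expovariate(ktot)                   # exponential waiting time
        i += 1 if rng.random() < kf / ktot else -1   # choose hop direction
    return t
```

Because escape rates out of deep traps are exponentially suppressed, a small trap fraction dominates the transit time, which is the origin of the long dispersive tail seen in the simulated TOF transients.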

Jiang, Xin

2009-10-01

340

Status of Monte Carlo at Los Alamos

At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.

Thompson, W.L.; Cashwell, E.D.

1980-01-01

341

This project developed a solution for verifying external photon beam radiotherapy. The solution is based on a calibration chain for deriving portal dose maps from acquired portal images, and a calculation framework for predicting portal dose maps. Quantitative comparison between acquired and predicted portal dose maps accomplishes both geometric (patient positioning with respect to the beam) and dosimetric (two-dimensional fluence distribution of the beam) verifications. A disagreement would indicate that beam delivery had not been according to plan. The solution addresses the clinical need for verifying radiotherapy both pretreatment (without the patient in the beam) and on treatment (with the patient in the beam). Medical linear accelerators mounted with electronic portal imaging devices (EPIDs) were used to acquire portal images. Two types of EPIDs were investigated: the amorphous silicon (a-Si) and the scanning liquid ion chamber (SLIC). The EGSnrc family of Monte Carlo codes was used to predict portal dose maps by computer simulation of radiation transport in the beam-phantom-EPID configuration. Monte Carlo simulations have been implemented on several levels of high throughput computing (HTC), including the grid, to reduce computation time. The solution has been tested across the entire clinical range of gantry angle, beam size (5 cm x 5 cm to 20 cm x 20 cm), and beam-patient and patient-EPID separations (4 to 38 cm). In these tests of known beam-phantom-EPID configurations, agreement between acquired and predicted portal dose profiles was consistently within 2% of the central axis value. This Monte Carlo portal dosimetry solution therefore achieved combined versatility, accuracy, and speed not readily achievable by other techniques.

Chin, P.W. [Department of Medical Physics, Velindre Cancer Centre, Velindre Road, Cardiff CF14 2TL (United Kingdom)]. E-mail: mary.chin@physics.org

2005-10-15

342

A sophisticated SPECT (single-photon-emission computed tomography) simulation package has been developed, permitting full tomographic acquisition of data from physically realistic nonuniform and asymmetric 3-D source objects. The package is based on the Los Alamos code MCNP (Monte Carlo for Neutron-Photon transport), which has been extensively modified by the authors to allow complete collimator and source modeling and direct manipulation of

J. C. Yanch; A. B. Dobrzeniecki

1993-01-01

343

Electron Transport in Silicon Nanocrystal Devices: From Memory Applications to Silicon Photonics

NASA Astrophysics Data System (ADS)

The push to integrate the realms of microelectronics and photonics on the silicon platform is currently lacking an efficient, electrically pumped silicon light source. One promising material system for photonics on the silicon platform is erbium-doped silicon nanoclusters (Er:Si-nc), which uses silicon nanoclusters to sensitize erbium ions in a SiO2 matrix. This medium can be pumped electrically, and this thesis focuses primarily on the electrical properties of Er:Si-nc films and their possible development as a silicon light source in the erbium emission band around 1.5 micrometers. Silicon nanocrystals can also be used as the floating gate in a flash memory device, and work is also presented examining charge transport in novel systems for flash memory applications. To explore silicon nanocrystals as a potential replacement for metallic floating gates in flash memory, the charging dynamics in silicon nanocrystal films are first studied using UHV-AFM. This approach uses a non-contact AFM tip to locally charge a layer of nanocrystals. Subsequent imaging allows the injected charge to be observed in real time as it moves through the layer. Simulation of this interaction allows the quantification of the charge in the layer, where we find that each nanocrystal is only singly charged after injection, while holes are retained in the film for hours. Work towards developing a dielectric stack with a voltage-tunable barrier is presented, with applications for flash memory and hyperspectral imaging. For hyperspectral imaging applications, film stacks containing various dielectrics are studied using I-V, TEM, and internal photoemission, with barrier tunability demonstrated in the Sc2O3/SiO2 system. To study Er:Si-nc as a potential lasing medium for silicon photonics, a theoretical approach is presented where Er:Si-nc is the gain medium in a silicon slot waveguide.
By accounting for the local density of optical states effect on the emitters, and carrier absorption due to electrical pumping, it is shown that a pulsed excitation method is needed to achieve gain in this system. A gain of up to 2 dB/cm is predicted for an electrically pumped gain medium 50 nm thick. To test these predictions Er:Si-nc LEDs were fabricated and studied. Reactive oxygen sputtering is found to produce more robust films, and the electrical excitation cross section found is two orders of magnitude larger than the optical cross section. The fabricated devices exhibited low lifetimes and low current densities, which prevented observation of gain, and the modeling is used to predict how the films must be improved to achieve gain and lasing in this system.

Miller, Gerald M.

344

Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy.

Henry Wang; Yunzhi Ma; Guillem Pratx; Lei Xing

2011-01-01

345

Experimentally measured scatter fractions and energy spectra as a test of Monte Carlo simulations

A method for the validation of Monte Carlo photon transport calculations is presented, with particular emphasis on the scatter component of such calculations. The method is based on a quantitative comparison of calculated and experimental scatter fractions. In addition, the method includes a qualitative comparison of point spread functions and energy spectra. An application of the method is demonstrated by

S. H. Manglos; C. E. Floyd; R. J. Jaszczak; K. L. Greer; C. C. Harris; R. E. Coleman

1987-01-01

346

A Monte Carlo simulation of low energy photoelectron scattering in Cs2Te

The quantitative description of low kinetic energy photoelectron emission in semiconductors is still an open question. In this article a model is developed to simulate the photoexcitation and transport of low kinetic energy electrons in Cs2Te. The statistical extension of the model, by Monte Carlo trajectory calculations, gives photon energy dependent quantum yields in agreement with experimental data. This is

Gabriele Ferrini; Paolo Michelato; Fulvio Parmigiani

1998-01-01

347

Overview of the MCU Monte Carlo Software Package

NASA Astrophysics Data System (ADS)

MCU (Monte Carlo Universal) is a project on development and practical use of a universal computer code for simulation of particle transport (neutrons, photons, electrons, positrons) in three-dimensional systems by means of the Monte Carlo method. This paper provides the information on the current state of the project. The developed libraries of constants are briefly described, and the potentialities of the MCU-5 package modules and the executable codes compiled from them are characterized. Examples of important problems of reactor physics solved with the code are presented.

Kalugin, M. A.; Oleynik, D. S.; Shkarovsky, D. A.

2014-06-01

348

REVIEW: Fifty years of Monte Carlo simulations for medical physics

NASA Astrophysics Data System (ADS)

Monte Carlo techniques have become ubiquitous in medical physics over the last 50 years with a doubling of papers on the subject every 5 years between the first PMB paper in 1967 and 2000 when the numbers levelled off. While recognizing the many other roles that Monte Carlo techniques have played in medical physics, this review emphasizes techniques for electron-photon transport simulations. The broad range of codes available is mentioned but there is special emphasis on the EGS4/EGSnrc code system which the author has helped develop for 25 years. The importance of the 1987 Erice Summer School on Monte Carlo techniques is highlighted. As an illustrative example of the role Monte Carlo techniques have played, the history of the correction for wall attenuation and scatter in an ion chamber is presented as it demonstrates the interplay between a specific problem and the development of tools to solve the problem which in turn leads to applications in other areas. This paper is dedicated to W Ralph Nelson and to the memory of Martin J Berger, two men who have left indelible marks on the field of Monte Carlo simulation of electron-photon transport.

Rogers, D. W. O.

2006-07-01

349

NASA Astrophysics Data System (ADS)

In this paper, we present experimental results on the scattering of 4 keV Li+ ions by a polycrystalline nickel surface. The incidence angle was 4°, and different values of the scattering angle are considered. Two simulation methods are used to calculate the angular distributions of the total path length in solids, the reflection coefficient, and the energetic scattering spectra of reflected particles. The first method is based on a Monte Carlo code (TRIM). The second is based on the solution of the Boltzmann equation in the transport theory framework and is valid for low incidence and scattering angles. In both cases, the binary-collision approximation is assumed and multiple scattering of incident particles is included. Comparison between simulated curves is done without any normalization and shows good agreement. It is important to note that comparison of simulated energy spectra with experimental ones allows the determination of the inelastic stopping power (dE/dx)_ine, which is difficult to evaluate otherwise in this energy range.

Khalal-Kouache, K.; Chami, A. C.; Boudjema, M.; Benoit-Cattin, P.; Benazeth, C.; Boudouma, Y.

2001-10-01

350

Purpose: Investigation of increased radiation dose deposition due to gold nanoparticles (GNPs) using a 3D computational cell model during x-ray radiotherapy. Methods: Two GNP simulation scenarios were set up in Geant4: a single 400 nm diameter gold cluster randomly positioned in the cytoplasm, and a 300 nm gold layer around the nucleus of the cell. Using an 80 kVp photon beam, the effect of GNP on the dose deposition in five modeled regions of the cell, including cytoplasm, membrane, and nucleus, was simulated. Two Geant4 physics lists were tested: the default Livermore and a custom-built Livermore/DNA hybrid physics list. 10^6 particles were simulated over the 840 cells in the simulation. Each cell was randomly placed with random orientation and a diameter varying between 9 and 13 μm. A mathematical algorithm was used to ensure that none of the 840 cells overlapped. The energy dependence of the GNP physical dose enhancement effect was calculated by simulating the dose deposition in the cells with two energy spectra, 80 kVp and 6 MV. The contribution from Auger electrons was investigated by comparing the two GNP simulation scenarios while activating and deactivating atomic de-excitation processes in Geant4. Results: The physical dose enhancement ratio (DER) of GNP was calculated using the Monte Carlo model. The model demonstrated that the DER depends on the amount of gold and the position of the gold cluster within the cell. Individual cell regions experienced a statistically significant (p < 0.05) change in absorbed dose (DER between 1 and 10) depending on the type of gold geometry used. The DER resulting from gold clusters attached to the cell nucleus had the more significant effect of the two cases (DER ≈ 55). The DER value calculated at 6 MV was shown to be at least an order of magnitude smaller than the DER values calculated for the 80 kVp spectrum.
Based on the simulations, when 80 kVp photons are used, Auger electrons have a statistically insignificant (p > 0.05) effect on the overall dose increase in the cell. The low energy of the Auger electrons produced prevents them from propagating more than 250-500 nm from the gold cluster and therefore has a negligible effect on the overall dose increase due to GNP. Conclusions: The results presented in the current work show that the primary dose enhancement is due to the production of additional photoelectrons.
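The dose enhancement ratio referred to in this abstract is, at its core, a ratio of absorbed doses tallied with and without the gold geometry. A minimal sketch (all numbers below are invented; the study's doses come from Geant4 tallies):

```python
# Dose enhancement ratio (DER) as a ratio of absorbed doses; the numbers
# below are hypothetical, while the study tallies real doses per cell region.
def dose_enhancement_ratio(dose_with_gnp, dose_without_gnp):
    """DER = absorbed dose with gold nanoparticles / dose without."""
    return dose_with_gnp / dose_without_gnp

# Hypothetical per-region mean doses in arbitrary units: (with, without)
regions = {
    "cytoplasm": (2.4, 1.2),
    "membrane":  (3.0, 1.5),
    "nucleus":   (55.0, 1.0),  # gold-layer-around-the-nucleus scenario
}

for name, (d_with, d_without) in regions.items():
    print(name, round(dose_enhancement_ratio(d_with, d_without), 2))
```
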

Douglass, Michael; Bezak, Eva; Penfold, Scott [School of Chemistry and Physics, University of Adelaide, North Terrace, Adelaide, South Australia 5000 (Australia); Department of Medical Physics, Royal Adelaide Hospital, North Terrace, Adelaide South Australia 5000 (Australia)

2013-07-15

351

Vectorizing and macrotasking Monte Carlo neutral particle algorithms

Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches to as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. On future machines it may be possible to take macrotasking to its limit, with each test flight, and each split test flight, being a separate task.
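The array-compaction technique described above can be sketched in a toy model: flights advance in bulk array operations, and terminated flights are squeezed out of the arrays so the vector loops stay dense. A hypothetical 1-D slab illustration (not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Schematic vectorized Monte Carlo: many test flights advance at once through
# a hypothetical 1-D slab (Sigma_t = 1, isotropic scatter), and the arrays are
# compacted whenever flights terminate so the vector loops stay dense.
n_flights = 10_000
slab_thickness = 2.0

x = np.zeros(n_flights)                  # positions of active flights
mu = rng.uniform(-1.0, 1.0, n_flights)   # direction cosines
transmitted = 0

while x.size > 0:
    # Sample every free-flight distance in one vectorized operation
    dist = -np.log(1.0 - rng.random(x.size))
    x = x + mu * dist
    escaped_right = x >= slab_thickness
    transmitted += int(escaped_right.sum())
    alive = (x > 0.0) & ~escaped_right
    x = x[alive]                             # compaction removes the "gaps"
    mu = rng.uniform(-1.0, 1.0, x.size)      # isotropic scatter for survivors

print(transmitted / n_flights)  # transmission fraction estimate
```
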

Heifetz, D.B.

1987-04-01

352

A directly modulated CATV/radio-on-fiber (ROF) transport system based on light injection and optoelectronic feedback techniques, and photonic crystal fiber (PCF) is proposed and demonstrated. Excellent performances of carrier-to-noise ratio (CNR), composite second order (CSO) and composite triple beat (CTB) were obtained for CATV band; as well as low bit error rate (BER) and third order intermodulation distortion to carrier ratio

Hai-Han Lu; Cheng-Ling Ying; Wen-I. Lin; Yao-Wei Chuang; Yu-Chieh Chi; Sha-Jye Tzeng

2007-01-01

353

NASA Astrophysics Data System (ADS)

A directly modulated CATV/radio-on-fiber (ROF) transport system based on light injection and optoelectronic feedback techniques and photonic crystal fiber (PCF) is proposed and demonstrated. Excellent carrier-to-noise ratio (CNR), composite second order (CSO), and composite triple beat (CTB) performance was obtained for the CATV band, and low bit error rate (BER) and third-order intermodulation-distortion-to-carrier ratio (IMD3/C) values were achieved for the ROF band. This demonstrates that such a CATV/ROF transport system is very attractive for fiber backbone applications.

Lu, Hai-Han; Ying, Cheng-Ling; Lin, Wen-I.; Chuang, Yao-Wei; Chi, Yu-Chieh; Tzeng, Sha-Jye

2007-05-01

354

NASA Astrophysics Data System (ADS)

A hybrid system containing an asymmetrical waveguide coupled to a whispering-gallery resonator embedded with a two-level atom is designed to investigate single-photon transport properties. The transmission and reflection amplitudes are obtained via the discrete coordinates approach. Numerical simulation demonstrates that a trifrequency photon attenuator is realized by controlling the couplings between the asymmetrical waveguide and the whispering-gallery resonator. The phase shift, group delay and dissipation effects of the transmitted single-photon are also discussed.

Zhou, Tao; Zang, Xiao-Fei; Xu, Dan-Hua

2014-04-01

355

The F_N basis function expansion solution to the Boltzmann transport equation in Cartesian geometry is summarized and evaluated for several heterogeneous slabs of interest. The resultant scalar and angular fluxes and the critical slab thickness (when applicable) are compared to Monte Carlo transport evaluations by MCNP. A correspondence is made between the one-group macroscopic cross section used in the F_N code and energy-independent synthetic MCNP microscopic cross sections. The F_N method produces results comparable to MCNP and requires fewer computer resources, but is limited to specific problem types.

Singleterry, R.C. Jr. [Argonne National Lab., Idaho Falls, ID (United States); Jahshan, S. [SNJ Consulting, Idaho Falls, ID (United States)

1996-04-01

356

A full band Monte-Carlo study of carrier transport properties of InAlN lattice matched to GaN

NASA Astrophysics Data System (ADS)

The growing importance of In0.18Al0.82N stems from the fact that it can be grown lattice matched to GaN and from its potential applications in a large number of electronic and optoelectronic devices. In this work we employed a full-band Monte-Carlo approach to study the carrier transport properties of this alloy. We have computed the temperature- and doping-dependent electron and hole mobilities and drift velocities. Furthermore, for both sets of transport coefficients we have developed a number of analytical expressions that can be easily incorporated in drift-diffusion-type simulation codes.

Shishehchi, Sara; Bertazzi, Francesco; Bellotti, Enrico

2013-03-01

357

Simultaneous 3 × 10-Gbps error-free photonic transmissions with clear eye-openings are demonstrated in the 1-μm, C-, and L-wavebands by using an ultrabroad-waveband photonic transport system comprising a 3.3-km-long holey fiber transmission line.

Naokatsu Yamamoto; Yu. Omigawa; Kouichi Akahane; Tetsuya Kawanishi; Hideyuki Sotobayashi

2010-01-01

358

NASA Astrophysics Data System (ADS)

Based on the quasiparticle model of the quark-gluon plasma (QGP), a color quantum path-integral Monte Carlo (PIMC) method for the calculation of thermodynamic properties and, closely related to the latter, a Wigner dynamics method for the calculation of transport properties of the QGP are formulated. The QGP partition function is presented in the form of a color path integral with a new relativistic measure instead of the Gaussian one traditionally used in the Feynman-Wiener path integral. A procedure for sampling color variables according to the SU(3) group Haar measure is developed for integration over the color variable. It is shown that the PIMC method is able to reproduce the lattice QCD equation of state at zero baryon chemical potential at realistic model parameters (i.e., quasiparticle masses and coupling constant) and also yields valuable insight into the internal structure of the QGP. Our results indicate that the QGP reveals quantum liquidlike (rather than gaslike) properties up to the highest considered temperature of 525 MeV. The pair distribution functions clearly reflect the existence of gluon-gluon bound states, i.e., glueballs, at temperatures just above the phase transition, while mesonlike quark-antiquark bound states are not found. The calculated self-diffusion coefficient agrees well with some estimates of the heavy-quark diffusion constant available from recent lattice data and also with an analysis of heavy-quark quenching in experiments on ultrarelativistic heavy-ion collisions, but appreciably exceeds other estimates. The lattice and heavy-quark-quenching results on heavy-quark diffusion are still rather diverse. The obtained results for the shear viscosity are in the range of those deduced from an analysis of the experimental elliptic flow in ultrarelativistic heavy-ion collisions, i.e., in terms of the viscosity-to-entropy ratio, 1/(4π) ≲ η/S < 2.5/(4π), in the temperature range from 170 to 440 MeV.

Filinov, V. S.; Ivanov, Yu. B.; Fortov, V. E.; Bonitz, M.; Levashov, P. R.

2013-03-01

359

The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to GPU is usually straightforward due to the 'embarrassingly parallel' nature of MC code. However, the situation is different for eigenvalue calculations, which proceed on a generation-by-generation basis, so thread coordination must be handled explicitly. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations for simple geometries on a multi-GPU system. The specifics of the algorithm design, including thread organization and memory management, are described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating-point format was used throughout the calculation. The results showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup was further increased by a factor of ≈2 on a dual-GPU system. The upper limit of device-level parallelism is analyzed, and a possible method to enhance thread-level parallelism is proposed.
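The generation-by-generation structure that complicates GPU threading can be sketched as a serial loop: each generation transports a batch of histories, estimates a generation k, and the unconverged early generations are discarded. A hypothetical 0-D multiplying medium (not the paper's CUDA code):

```python
import random

random.seed(1)

# Toy generation-by-generation k-eigenvalue loop (the serial structure that a
# GPU port must coordinate); the physics is a hypothetical 0-D multiplying
# medium, not a real reactor model.
P_FISSION, NU = 0.4, 2.5      # fission probability per history, yield/fission
N_SOURCE, N_GEN, N_SKIP = 5000, 30, 10

k_estimates = []
for gen in range(N_GEN):
    births = 0.0
    for _ in range(N_SOURCE):            # one history per GPU thread, ideally
        if random.random() < P_FISSION:  # otherwise the neutron is captured
            births += NU
    k_gen = births / N_SOURCE            # this generation's k estimate
    if gen >= N_SKIP:                    # discard unconverged generations
        k_estimates.append(k_gen)
    # a real code would resample the next source from the fission sites here

k_eff = sum(k_estimates) / len(k_estimates)
print(round(k_eff, 3))  # expected near P_FISSION * NU = 1.0
```
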

Liu, T.; Ding, A.; Ji, W.; Xu, X. G. [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Inst., Troy, NY 12180 (United States); Carothers, C. D. [Dept. of Computer Science, Rensselaer Polytechnic Inst. RPI (United States); Brown, F. B. [Los Alamos National Laboratory (LANL) (United States)

2012-07-01

360

NASA Astrophysics Data System (ADS)

Parametric uncertainty in groundwater modeling is commonly assessed using the first-order second-moment method, which yields linear confidence/prediction intervals. More advanced techniques are able to produce nonlinear confidence/prediction intervals that are more accurate than the linear intervals for nonlinear models. However, both methods rest on certain assumptions, such as normality of the model parameters. We developed a Markov chain Monte Carlo (MCMC) method to directly investigate the parametric distributions and confidence/prediction intervals. The MCMC results are used to evaluate the accuracy of the linear and nonlinear confidence/prediction intervals. The MCMC method is applied to nonlinear surface complexation models developed by Kohler et al. (1996) to simulate reactive transport of uranium (VI). The breakthrough data of Kohler et al. (1996), obtained from a series of column experiments, are used as the basis of the investigation. The calibrated parameters of the models are the equilibrium constants of the surface complexation reactions and the fractions of functional groups. A Morris-method sensitivity analysis shows that all of the parameters exhibit highly nonlinear effects on the simulation. The MCMC method is combined with a traditional optimization method to improve computational efficiency. The parameters of the surface complexation models are first calibrated using a global optimization technique, multi-start quasi-Newton BFGS, which employs an approximation to the Hessian. Parameter correlation is measured by the covariance matrix computed via the Fisher information matrix. Parameter ranges are necessary to improve convergence of the MCMC simulation, even when the adaptive Metropolis method is used. The MCMC results indicate that the parameters do not necessarily follow a normal distribution and that the nonlinear intervals are more accurate than the linear intervals for the nonlinear surface complexation models. Compared with the linear and nonlinear prediction intervals, the MCMC prediction intervals are more robust for simulating the breakthrough curves that were not used for parameter calibration and estimation of parameter distributions.
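The bounded Metropolis sampling described above can be sketched on a stand-in 1-D posterior (a hypothetical Gaussian target, not the surface complexation model):

```python
import math
import random

random.seed(0)

# Minimal bounded Metropolis sampler (a sketch of the MCMC machinery, with a
# hypothetical Gaussian stand-in for the posterior: mean 3.0, sd 0.5).
def log_post(theta):
    return -0.5 * ((theta - 3.0) / 0.5) ** 2

LO, HI = 0.0, 10.0           # parameter range, which aids convergence
theta, step = 5.0, 0.5
samples = []
for i in range(20000):
    prop = theta + random.gauss(0.0, step)
    if LO <= prop <= HI:     # proposals outside the bounds are rejected
        delta = log_post(prop) - log_post(theta)
        if delta >= 0 or random.random() < math.exp(delta):
            theta = prop
    if i >= 5000:            # burn-in discarded
        samples.append(theta)

mean = sum(samples) / len(samples)
print(round(mean, 2))  # should land near the stand-in posterior mean of 3.0
```
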

Miller, G. L.; Lu, D.; Ye, M.; Curtis, G. P.; Mendes, B. S.; Draper, D.

2010-12-01

361

This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
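One of the basic sampling techniques such a course covers is inverse-CDF sampling of the distance to the next collision, s = -ln(ξ)/Σ_t. A minimal sketch with a hypothetical cross section:

```python
import math
import random

random.seed(42)

# Inverse-CDF sampling of the free-flight distance, s = -ln(1 - xi) / Sigma_t,
# one of the basic random-sampling techniques such course notes cover.
SIGMA_T = 2.0  # hypothetical total macroscopic cross section (1/cm)

def sample_path_length():
    return -math.log(1.0 - random.random()) / SIGMA_T

n = 100_000
mean_path = sum(sample_path_length() for _ in range(n)) / n
print(round(mean_path, 3))  # should approach the mean free path 1/Sigma_t = 0.5
```
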

Brown, F.B.; Sutton, T.M.

1996-02-01

362

MCNP/X Transport in the Tabular Regime

The authors review the transport capabilities of the MCNP and MCNPX Monte Carlo codes in the energy regimes in which tabular transport data are available. Giving special attention to neutron tables, they emphasize the measures taken to improve the treatment of a variety of difficult aspects of the transport problem, including unresolved resonances, thermal issues, and the availability of suitable cross-section sets. They also briefly touch on the current situation in regard to photon, electron, and proton transport tables.

HUGHES, H. GRADY [Los Alamos National Laboratory]

2007-01-08

363

NASA Astrophysics Data System (ADS)

We show that Monte Carlo simulations of neutral particle transport in planar-geometry anisotropically scattering media, using the exponential transform with angular biasing as a variance reduction device, are governed by a new "Boltzmann Monte Carlo" (BMC) equation, which includes particle weight as an extra independent variable. The weight moments of the solution of the BMC equation determine the moments of the score and the mean number of collisions per history in the nonanalog Monte Carlo simulations. Therefore, the solution of the BMC equation predicts the variance of the score and the figure of merit in the simulation. Also, by (i) using an angular biasing function that is closely related to the "asymptotic" solution of the linear Boltzmann equation and (ii) requiring isotropic weight changes at collisions, we derive a new angular biasing scheme. Using the BMC equation, we propose a universal "safe" upper limit of the transform parameter, valid for any type of exponential transform. In numerical calculations, we demonstrate that the behavior of the Monte Carlo simulations and the performance predicted by deterministically solving the BMC equation agree well, and that the new angular biasing scheme is always advantageous.
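The weight bookkeeping at the heart of the exponential transform can be illustrated in one dimension: distances are sampled from a modified cross section Σ* = Σ_t(1 - p·μ), and each sample carries the ratio of the true to the biased density as its weight, keeping the estimate unbiased. A sketch with hypothetical parameters (not the paper's angular biasing scheme):

```python
import math
import random

random.seed(7)

# Sketch of the exponential transform: flight distances are drawn from a
# modified cross section Sigma* = Sigma_t * (1 - p * mu), and each sample
# carries a weight equal to the true density over the biased density, so the
# estimate remains unbiased (p, mu, Sigma_t are hypothetical values).
SIGMA_T = 1.0

def stretched_flight(mu, p):
    sigma_star = SIGMA_T * (1.0 - p * mu)  # biased sampling density parameter
    s = -math.log(1.0 - random.random()) / sigma_star
    w = (SIGMA_T * math.exp(-SIGMA_T * s)) / (sigma_star * math.exp(-sigma_star * s))
    return s, w

n, p, mu = 200_000, 0.6, 1.0  # forward-biased flights along mu = +1
total_w = total_ws = 0.0
for _ in range(n):
    s, w = stretched_flight(mu, p)
    total_w += w
    total_ws += w * s
# Mean weight should be ~1 and the weighted mean distance ~1/Sigma_t = 1
print(round(total_w / n, 3), round(total_ws / n, 3))
```
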

Ueki, Taro; Larsen, Edward W.

1998-09-01

364

Monte Carlo simulations in SPET and PET

Monte Carlo methods are extensively used in Nuclear Medicine to tackle a variety of problems that are difficult to study by an experimental or analytical approach. A review of the most recent tools allowing application of Monte Carlo methods in single photon emission tomography (SPET) and positron emission tomography (PET) is presented. To help potential Monte Carlo users choose

I. Buvat; I. Castiglioni

2002-01-01

365

NASA Astrophysics Data System (ADS)

A systematic review of methods and data for the Monte Carlo simulation of photon interactions is in progress: it concerns a wide set of theoretical modeling approaches and data libraries available for this purpose. Models and data libraries are assessed quantitatively with respect to an extensive collection of experimental measurements documented in the literature to determine their accuracy; this evaluation exploits rigorous statistical analysis methods. The computational performance of the associated modeling algorithms is evaluated as well. An overview of the assessment of photon interaction models and results of the experimental validation are presented.
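The statistical comparison of model predictions against measurements typically reduces to goodness-of-fit statistics such as chi-square; a minimal sketch with invented data (not the review's actual measurements):

```python
# Sketch of a chi-square comparison of model predictions against measurements
# (all values invented; the review compares real libraries to real data).
measured  = [10.2, 9.8, 10.5, 9.9, 10.1]   # hypothetical measured values
sigma     = [0.2, 0.2, 0.2, 0.2, 0.2]      # experimental uncertainties
predicted = [10.0, 10.0, 10.0, 10.0, 10.0] # hypothetical model predictions

chi2 = sum(((m - p) / s) ** 2 for m, p, s in zip(measured, predicted, sigma))
dof = len(measured)
print(round(chi2, 2), dof)  # chi2/dof near 1 indicates an adequate model
```
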

Batic, Matej; Begalli, Marcia; Han, Min Cheol; Hauf, Steffen; Hoff, Gabriela; Kim, Chan Hyeong; Kim, Han Sung; Grazia Pia, Maria; Saracco, Paolo; Weidenspointner, Georg

2014-06-01

366

We report on the recent design and fabrication of kagome-type hollow-core photonic crystal fibers for the purpose of high-power ultrashort pulse transportation. The fabricated seven-cell three-ring hypocycloid-shaped large-core fiber exhibits an up-to-date lowest attenuation (among all kagome fibers) of 40 dB/km over a broadband transmission centered at 1500 nm. We show that the large core size, low attenuation, broadband transmission, single-mode guidance, and low dispersion make it an ideal host for high-power laser beam transportation. By filling the fiber with helium gas, a 74 μJ, 850 fs, and 40 kHz repetition rate ultrashort pulse at 1550 nm has been faithfully delivered at the fiber output with little propagation pulse distortion. Compression of a 105 μJ laser pulse from 850 fs down to 300 fs has been achieved by operating the fiber in ambient air.

Wang, Y Y; Peng, Xiang; Alharbi, M; Dutin, C Fourcade; Bradley, T D; Gérôme, F; Mielke, Michael; Booth, Timothy; Benabid, F

2012-08-01

367

NASA Astrophysics Data System (ADS)

We present a semiconductor master equation technique to study the input/output characteristics of coherent photon transport in a semiconductor waveguide-cavity system containing a single quantum dot. We use this approach to investigate the effects of photon propagation and anharmonic cavity-QED for various dot-cavity interaction strengths, including weakly-coupled, intermediately-coupled, and strongly-coupled regimes. We demonstrate that for mean photon numbers much less than 0.1, the commonly adopted weak excitation (single quantum) approximation breaks down, even in the weak coupling regime. As a measure of the multiphoton correlations, we compute the Fano factor and the correlation error associated with making a semiclassical approximation. We also explore the role of electron-acoustic-phonon scattering and find that phonon-mediated scattering plays a qualitatively important role on the light propagation characteristics. As an application of the theory, we simulate a conditional phase gate at a phonon bath temperature of 20 K in the strong coupling regime.

Hughes, S.; Roy, C.

2012-01-01

368

The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

Cramer, S.N.

1984-01-01

369

NASA Astrophysics Data System (ADS)

The photon counting detector based on cadmium telluride (CdTe) or cadmium zinc telluride (CZT) is a promising imaging modality that provides many benefits compared to conventional scintillation detectors. By using a pinhole collimator with the photon counting detector, we were able to improve both the spatial resolution and the sensitivity. The purpose of this study was to evaluate the photon counting and conventional scintillation detectors in a pinhole single-photon emission computed tomography (SPECT) system. We designed five pinhole SPECT systems of two types: one type with a CdTe photon counting detector and the other with a conventional NaI(Tl) scintillation detector. We conducted simulation studies and evaluated imaging performance. The results demonstrated that the spatial resolution of the CdTe photon counting detector was 0.38 mm, with a sensitivity 1.40 times greater than that of a conventional NaI(Tl) scintillation detector for the same detector thickness. Also, the average scatter fractions of the CdTe photon counting and the conventional NaI(Tl) scintillation detectors were 1.93% and 2.44%, respectively. In conclusion, we successfully evaluated various pinhole SPECT systems for small animal imaging.
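The figures of merit quoted in this abstract (scatter fraction and relative sensitivity) are simple ratios over detected-event counts; a sketch with hypothetical counts chosen to reproduce numbers of the same magnitude:

```python
# Figures of merit of the kind quoted above, from hypothetical detected-event
# counts (the study derives them from simulated SPECT projections).
def scatter_fraction(scattered, total):
    """Scattered events as a percentage of all detected events."""
    return 100.0 * scattered / total

def relative_sensitivity(counts_a, counts_b):
    """Sensitivity of detector A relative to detector B at equal activity."""
    return counts_a / counts_b

print(round(scatter_fraction(193, 10000), 2))      # 1.93 (percent)
print(round(relative_sensitivity(1400, 1000), 2))  # 1.4
```
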

Lee, Young-Jin; Park, Su-Jin; Lee, Seung-Wan; Kim, Dae-Hong; Kim, Ye-Seul; Kim, Hee-Joung

2013-05-01

370

NASA Astrophysics Data System (ADS)

We study theoretically the possible origin of a double-peak fine structure of surface relief gratings (SRG) in azo-functionalized poly(etherimide) reported recently in experiments. To improve the statistics of the experimental data, additional measurements were performed. For the theoretical analysis we develop a stochastic Monte Carlo model for photoinduced mass transport in an azobenzene-functionalized polymer matrix. The long-sought-after transport of polymer chains from bright to dark places of the illumination pattern is demonstrated and characterized, and various scenarios for the intertwined processes of build-up of the density and SRG gratings are examined. The model predicts that for some azo-functionalized materials double-peak SRG maxima can develop in the permanent, quasi-permanent or transient regimes. Available experimental data are interpreted in terms of the model's predictions.

Pawlik, G.; Miniewicz, A.; Sobolewska, A.; Mitus, A. C.

2014-01-01

371

NASA Astrophysics Data System (ADS)

The limiting factors for scintigraphic clinical application are related to (i) biosource characteristics (pharmacokinetics of the drug distribution between organs), (ii) the detection chain (photon transport, scintillation, analog-to-digital signal conversion, etc.), and (iii) imaging (signal-to-noise ratio, spatial and energy resolution, linearity, etc.). In this work, using Monte Carlo time-resolved transport simulations on a mathematical phantom and on a small-field-of-view scintigraphic device, the trade-off between the aforementioned factors was preliminarily investigated.

Burgio, N.; Ciavola, C.; Santagata, A.; Iurlaro, G.; Montani, L.; Scafè, R.

2006-04-01

372

The role of plasma evolution and photon transport in optimizing future advanced lithography sources

Laser produced plasma (LPP) sources for extreme ultraviolet (EUV) photons are currently based on using small liquid tin droplets as targets, which has many advantages including generation of stable continuous targets at high repetition rate, a larger photon collection angle, and reduced contamination and damage to the optical mirror collection system from plasma debris and energetic particles. The ideal target generates a source of maximum EUV radiation output and collection in the 13.5 nm range with minimum atomic debris. Based on recent experimental results and our modeling predictions, the smallest efficient droplets have diameters in the range of 20–30 μm in LPP devices with the dual-beam technique. Such devices can produce EUV sources with conversion efficiency around 3% and with collected EUV power of 190 W or more that can satisfy current requirements for high volume manufacturing. One of the most important characteristics of these devices is the low amount of atomic debris produced, due to the small initial mass of the droplets and the significant vaporization rate during the pre-pulse stage. In this study, we analyzed in detail plasma evolution processes in LPP systems using small spherical tin targets to predict the optimum droplet size yielding maximum EUV output. We identified several important processes during laser-plasma interaction that can affect conditions for optimum EUV photon generation and collection. The importance and accurate description of modeling these physical processes increase with the decrease in target size and its simulation domain.

Sizyuk, Tatyana; Hassanein, Ahmed [Center for Materials Under Extreme Environment, School of Nuclear Engineering, Purdue University, West Lafayette, Indiana 47907 (United States)] [Center for Materials Under Extreme Environment, School of Nuclear Engineering, Purdue University, West Lafayette, Indiana 47907 (United States)

2013-08-28

373

Modelling plastic scintillator response to gamma rays using light transport incorporated FLUKA code.

The response function of the NE102 plastic scintillator to gamma rays has been simulated using a joint FLUKA+PHOTRACK Monte Carlo code. The multi-purpose particle transport code FLUKA handles the gamma transport, whilst the light transport code PHOTRACK simulates the transport of scintillation photons through the scintillator and light guide. The simulation results for the plastic scintillator, with and without light guides of different surface coverings, have been successfully verified against experiments.

Ranjbar Kohan, M; Etaati, G R; Ghal-Eh, N; Safari, M J; Afarideh, H; Asadi, E

2012-05-01

374

Single-photon emission tomographic (SPET) imaging with the radiotracer [123I]2β-carbomethoxy-3β-(4-iodophenyl)tropane ([123I]β-CIT) has been reported to be a useful in vivo measure of dopamine (DA) transporters. However, in addition to its high DA transporter affinity, β-CIT also binds with high affinity to serotonin (5-HT) transporters. 2β-Carboisopropoxy-3β-(4-iodophenyl)tropane (IPCIT) has been demonstrated by in vitro studies to have higher selectivity for the DA transporter. We

B. Ellen Scanley; Mohammed S. Al-Tikriti; Mitchell S. Gandelman; Marc Laruellel; Yolanda Zea-Ponce; Ronald M. Baldwin; Sami S. Zoghbi; Paul B. Hoffer; Dennis S. Charney; Shayoin Wang; John L. Neumeyer; Robert B. Innis

1995-01-01

375

Validation of a new deterministic transport code for SPECT simulation

The simulation of single photon emission computed tomography (SPECT) has traditionally been done using Monte Carlo methods. However, the hybrid deterministic transport code TITAN is being benchmarked for the simulation of SPECT. The TITAN code is referred to as “hybrid” because it uses a discrete ordinates method in the phantom and a simplified ray-tracing algorithm in the air outside of

K. K. Royston; A. Haghighat; C. Yi; A. Cebula; D. Gilland

2010-01-01

376

We analyze the dynamics of single-photon transport in a single-mode waveguide coupled to a micro-optical resonator by using a fully quantum-mechanical model. We examine the propagation of a single-photon Gaussian packet through the system under various coupling conditions. We review the theory of single-photon transport phenomena as applied to the system and we develop a discussion on the numerical technique we used to solve for dynamical behavior of the quantized field. To demonstrate our method and to establish robust single-photon results, we study the process of adiabatically lowering or raising the energy of a single photon trapped in an optical resonator under active tuning of the resonator. We show that our fully quantum-mechanical approach reproduces the semiclassical result in the appropriate limit and that the adiabatic invariant has the same form in each case. Finally, we explore the trapping of a single photon in a system of dynamically tuned, coupled optical cavities.

Hach, Edwin E. III [Department of Physics, Rochester Institute of Technology, 85 Lomb Memorial Drive, Rochester, New York 14623 (United States); Elshaari, Ali W.; Preble, Stefan F. [Microsystems Engineering, Rochester Institute of Technology, 77 Lomb Memorial Drive, Rochester, New York 14623 (United States)

2010-12-15

377

A new grid-based Boltzmann equation solver, Acuros™, was developed specifically for performing accurate and rapid radiotherapy dose calculations. In this study we benchmarked its performance against Monte Carlo for 6 and 18 MV photon beams in heterogeneous media. Acuros solves the coupled Boltzmann transport equations for neutral and charged particles on a locally adaptive Cartesian grid. The Acuros solver is

Oleg N. Vassiliev; Todd A. Wareing; John McGhee; Gregory Failla; Mohammad R. Salehpour; Firas Mourtada

2010-01-01

378

NASA Astrophysics Data System (ADS)

An undoped three-orbital spin-fermion model for the Fe-based superconductors is studied via Monte Carlo techniques in two-dimensional clusters. At low temperatures, the magnetic and one-particle spectral properties are in agreement with neutron and photoemission experiments. Our main results are the resistance versus temperature curves that display the same features observed in BaFe2As2 detwinned single crystals (under uniaxial stress), including a low-temperature anisotropy between the two directions followed by a peak at the magnetic ordering temperature, that qualitatively appears related to short-range spin order and concomitant Fermi surface orbital order.

Liang, Shuhua; Alvarez, Gonzalo; Şen, Cengiz; Moreo, Adriana; Dagotto, Elbio

2012-07-01

379

An undoped three-orbital spin-fermion model for the Fe-based superconductors is studied via Monte Carlo techniques in two-dimensional clusters. At low temperatures, the magnetic and one-particle spectral properties are in agreement with neutron and photoemission experiments. Our main results are the resistance versus temperature curves that display the same features observed in BaFe(2)As(2) detwinned single crystals (under uniaxial stress), including a low-temperature anisotropy between the two directions followed by a peak at the magnetic ordering temperature, that qualitatively appears related to short-range spin order and concomitant Fermi surface orbital order. PMID:23006104

Liang, Shuhua; Alvarez, Gonzalo; Şen, Cengiz; Moreo, Adriana; Dagotto, Elbio

2012-07-27

380

An undoped three-orbital spin-fermion model for the Fe-based superconductors is studied via Monte Carlo techniques in two-dimensional clusters. At low temperatures, the magnetic and one-particle spectral properties are in agreement with neutron and photoemission experiments. Our main results are the resistance versus temperature curves that display the same features observed in BaFe2As2 detwinned single crystals (under uniaxial stress), including a low-temperature anisotropy between the two directions followed by a peak at the magnetic ordering temperature, that qualitatively appears related to short-range spin order and concomitant Fermi surface orbital order.

Liang, Shuhua [ORNL]; Alvarez, Gonzalo [ORNL]; Sen, Cengiz [ORNL]; Moreo, Adriana [ORNL]; Dagotto, Elbio R [ORNL]

2012-01-01

381

Monte Carlo Simulations of Arterial Imaging with Optical Coherence Tomography.

National Technical Information Service (NTIS)

The laser-tissue interaction code LATIS is used to analyze photon scattering histories representative of optical coherence tomography (OCT) experiments performed at Lawrence Livermore National Laboratory. Monte Carlo photonics with Henyey-Greenstein aniso...
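The abstract above mentions Monte Carlo photonics with Henyey-Greenstein anisotropic scattering. As an illustration of how such codes typically sample a scattering event (our own sketch, not code from LATIS; the function name is ours), the standard inverse-CDF sampling of the scattering-angle cosine for anisotropy factor g can be written as:

```python
import math
import random

def sample_hg_cos_theta(g, rng=random.random):
    """Sample cos(theta) from the Henyey-Greenstein phase function
    with anisotropy factor g; g = 0 reduces to isotropic scattering."""
    xi = rng()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0  # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)
```

A quick sanity check is that the mean of the sampled cosines equals g, the first angular moment of the phase function (tissue-like media typically have g near 0.9).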

P. Amendt K. Estabrook M. Everett R. A. London D. Maitland G. Zimmerman B. Colston L. da Silva U. Sathyam

2000-01-01

382

Semiconductor nanoparticles have a wide absorption band and small reabsorption probability, which makes them great candidates for luminescent solar concentrators (LSCs). We use Monte-Carlo simulations of photon transport to predict the performance of LSCs based on “type-II” CdSe-CdTe quantum dots. These computations suggest that semiconductor-based LSCs can be highly efficient. The optimum performance is reached with a fairly long LSC

Boaz Ilan; David F. Kelley

2011-01-01

383

Monte Carlo Simulator to Study High Mass X-ray Binary System

We have developed a Monte Carlo simulator for astrophysical objects, which incorporates the transport of X-ray photons in photoionized plasma. We applied the code to X-ray spectra of high mass X-ray binaries, Vela X-1 and GX 301-2, obtained with Chandra HETGS. By utilizing the simulator, we have successfully reproduced many emission lines observed from Vela X-1. The ionization structure and

Shin Watanabe; Fumiaki Nagase; Tadayuki Takahashi; Masao Sako; Steve M. Kahn; Manabu Ishida; Yoshitaka Ishisaki; T. Kohmura; F. Paerels

2005-01-01

384

Monte Carlo modeling of light transport in multilayered tissue (MCML) is modified to incorporate objects of various shapes (sphere, ellipsoid, cylinder, or cuboid) with a refractive-index mismatched boundary. These geometries would be useful for modeling lymph nodes, tumors, blood vessels, capillaries, bones, the head, and other body parts. Mesh-based Monte Carlo (MMC) has also been used to compare the results from the MCML with embedded objects (MCML-EO). Our simulation assumes a realistic tissue model and can also handle the transmission/reflection at the object-tissue boundary due to the mismatch of the refractive index. Simulation of MCML-EO takes a few seconds, whereas MMC takes nearly an hour for the same geometry and optical properties. Contour plots of fluence distribution from MCML-EO and MMC correlate well. This study assists one in deciding which tool to use for modeling light propagation in biological tissue with objects of regular shapes embedded in it. For irregular inhomogeneity in the model (tissue), MMC has to be used. If the embedded objects (inhomogeneity) are of regular geometry (shapes), then MCML-EO is a better option, as simulations like Raman scattering, fluorescent imaging, and optical coherence tomography are currently possible only with MCML. PMID:24727908
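MCML-family codes handle a refractive-index mismatched boundary by applying Snell's law and the unpolarized Fresnel reflectance to decide whether a photon packet reflects or transmits. A minimal sketch of that reflectance calculation (our own simplification, not the MCML-EO source):

```python
import math

def fresnel_reflectance(n1, n2, cos_i):
    """Unpolarized Fresnel reflectance for a photon striking a boundary
    between media of refractive index n1 (incident side) and n2
    (transmitted side); cos_i is the cosine of the incidence angle."""
    sin_i = math.sqrt(max(0.0, 1.0 - cos_i * cos_i))
    sin_t = n1 * sin_i / n2          # Snell's law
    if sin_t >= 1.0:
        return 1.0                   # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (rs + rp)
```

In a Monte Carlo step the returned value is compared against a uniform random number to choose between specular reflection and refraction into the neighboring medium.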

Periyasamy, Vijitha; Pramanik, Manojit

2014-04-01

385

NASA Astrophysics Data System (ADS)

Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP) provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. A developing treatment modality called energy and intensity modulated electron radiotherapy (MERT) is promising, as it has the fundamental capabilities to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP.
The structure of the MERT planning toolkit software and optimization algorithms are demonstrated. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head and neck sarcoma cancer site using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed beam photon and electron treatment planning. Photon optimization treatment planning tools were included within the MERT planning toolkit for the purpose of mixed beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and the development of an inverse planning system for photon, electron or mixed beam radiotherapy (MBRT). The justification and validation of this work is found within the results of the planning studies, which have demonstrated dosimetric advantages to using MERT or MBRT in comparison to clinical treatment alternatives.
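The kernel-based optimization described above amounts to fitting nonnegative beam weights so that a weighted sum of pre-generated per-beam dose kernels matches a prescribed dose. A toy projected-gradient sketch of that idea (all names hypothetical; dose-volume constraints and the actual MMCTP algorithms are omitted):

```python
def optimize_beam_weights(kernels, prescription, n_iter=500, lr=0.01):
    """Toy sketch of kernel-based inverse planning: dose at voxel v is a
    weighted sum of per-beam dose kernels, and nonnegative beam weights
    are fit to the prescription by projected gradient descent on the
    squared dose error."""
    n_beams, n_vox = len(kernels), len(prescription)
    w = [0.0] * n_beams
    for _ in range(n_iter):
        # dose from the weights at the start of this sweep
        dose = [sum(w[b] * kernels[b][v] for b in range(n_beams))
                for v in range(n_vox)]
        for b in range(n_beams):
            grad = sum(2.0 * (dose[v] - prescription[v]) * kernels[b][v]
                       for v in range(n_vox))
            w[b] = max(0.0, w[b] - lr * grad)  # project onto w >= 0
    return w
```

Clinical systems solve much larger constrained problems, but the structure — precomputed Monte Carlo kernels, linear dose model, constrained weight optimization — is the same.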

Alexander, Andrew William

386

MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

Marcus, Ryan C. [Los Alamos National Laboratory]

2012-07-25

387

A new method is presented to decouple the parameters of the incident e- beam hitting the target of the linear accelerator, which consists essentially in optimizing the agreement between measurements and calculations when the difference filter, which is an additional filter inserted in the linac head to obtain uniform lateral dose-profile curves for the high energy photon beam, and flattening

B. DeSmedt; N. Reynaert; F. Flachet; M. Coghe; M. G. Thompson; L. Paelinck; G. Pittomvils; C. DeWagter; W. DeNeve; H. Thierens

2005-01-01

388

The control of the electron temperature and charged particle transport in negative hydrogen ion sources has a crucial role in the performance of the system. It is usually achieved by the use of a magnetic filter (a localized transverse magnetic field), which reduces the electron temperature and enhances the negative ion yield. There are several works in the literature on modeling of the magnetic filter effects based on fluid and kinetic modeling, which, however, suggest rather different mechanisms responsible for the electron cooling and particle transport through the filter. Here a kinetic modeling of the problem based on the particle-in-cell with Monte Carlo collisions method is presented. The charged particle transport across a magnetic filter is studied in hydrogen plasmas with and without including volume production of negative ions, in a one-dimensional Cartesian geometry. The simulation shows a classical (collisional) electron diffusion across the magnetic filter with reduction in the electron temperature, but no selective effect in electron energy is observed (Coulomb collisions are not considered). When a bias voltage is applied, the plasma is split into an upstream electropositive region and a downstream electronegative region. Different configurations with respect to bias voltage and magnetic field strength are examined and discussed. Although the bias voltage allows negative ion extraction, the results show that volume production of negative ions in the downstream region is not really enhanced by the magnetic filter.
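Particle-in-cell codes of the kind described above typically advance charged particles in electric and magnetic fields with the Boris scheme: a half electric kick, a magnetic rotation, and another half electric kick. A minimal velocity update, written by us as an illustration rather than taken from the paper's code:

```python
import math

def boris_push(v, e_field, b_field, q_over_m, dt):
    """One Boris step: half electric kick, exact-magnitude magnetic
    rotation, half electric kick. v, e_field, b_field are 3-vectors
    (lists); returns the updated velocity."""
    qmdt2 = 0.5 * q_over_m * dt
    v_minus = [v[i] + qmdt2 * e_field[i] for i in range(3)]
    t = [qmdt2 * b_field[i] for i in range(3)]
    t2 = sum(ti * ti for ti in t)
    s = [2.0 * ti / (1.0 + t2) for ti in t]
    cross = lambda a, b: [a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0]]
    v_prime = [v_minus[i] + c for i, c in enumerate(cross(v_minus, t))]
    v_plus = [v_minus[i] + c for i, c in enumerate(cross(v_prime, s))]
    return [v_plus[i] + qmdt2 * e_field[i] for i in range(3)]
```

The magnetic rotation conserves kinetic energy exactly, which is why the pusher is the standard choice for studying magnetized transport such as diffusion across a filter field.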

Kolev, St.; Hagelaar, G. J. M.; Boeuf, J. P. [Laboratoire Plasma et Conversion d'Energie (LAPLACE), Universite Paul Sabatier, Bt. 3R2, 118 Route de Narbonne, 31062 Toulouse Cedex 9 (France)]

2009-04-15

389

Thin membranes, under appropriate boundary conditions, can self-assemble into vesicles, nanoscale bubbles that encapsulate and hence protect or transport molecular payloads. In this paper, we review the types and applications of light fields interacting with vesicles. By encapsulating light-emitting molecules (e.g. dyes, fluorescent proteins, or quantum dots), vesicles can act as particles and imaging agents. Vesicle imaging can also be performed using second harmonic generation from the vesicle membrane, as well as by mass spectrometry. Light fields can also be employed to transport vesicles using optical tweezers (photon momentum) or to directly perturb the stability of vesicles and hence trigger the delivery of the encapsulated payload (photon energy).

Vasdekis, Andreas E.; Scott, E. A.; Roke, Sylvie; Hubbell, J. A.; Psaltis, D.

2013-04-03

390

The Monte Carlo method is a well established approach for the statistical solution of the Boltzmann transport equation in semiconductors [1, 2]. As device dimensions are reduced, it is important to account for hot electron effects, responsible for overshoot phenomena and reliability problems like breakdown due to impact ionization, defect generation, and injection into gate oxides. In some cases,

C. H. Lee; U. Ravaioli

391

NASA Astrophysics Data System (ADS)

The crucial problem for radiation shielding design at heavy ion accelerator facilities with beam energies of several GeV/n is the source term problem. Experimental data on double differential neutron yields from thick targets irradiated with high-energy uranium nuclei are lacking. At present there are not many Monte Carlo multipurpose codes that can work with primary high-energy uranium nuclei. These codes use different physical models for simulating nucleus-nucleus reactions. Therefore, verification of the codes with available experimental data is very important for selection of the most reliable code for practical tasks. This paper presents comparisons of the FLUKA, GEANT4 and SHIELD code simulations with experimental data on neutron production at 1 GeV/n 238U beam interaction with a thick Fe target.

Beskrovnaia, L.; Florko, B.; Paraipan, M.; Sobolevsky, N.; Timoshenko, G.

2008-09-01

392

Updated version of the DOT 4 one- and two-dimensional neutron/photon transport code

DOT 4 is designed to allow very large transport problems to be solved on a wide range of computers and memory arrangements. Unusual flexibility in both space-mesh and directional-quadrature specification is allowed. For example, the radial mesh in an R-Z problem can vary with axial position. The directional quadrature can vary with both space and energy group. Several features improve performance on both deep penetration and criticality problems. The program has been checked and used extensively.

Rhoades, W.A.; Childs, R.L.

1982-07-01

393

MCNP™ Monte Carlo: A precis of MCNP

MCNP™ is a general purpose three-dimensional time-dependent neutron, photon, and electron transport code. It is highly portable and user-oriented, and backed by stringent software quality assurance practices and extensive experimental benchmarks. The cross section database is based upon the best evaluations available. MCNP incorporates state-of-the-art analog and adaptive Monte Carlo techniques. The code is documented in a 600 page manual which is augmented by numerous Los Alamos technical reports which detail various aspects of the code. MCNP represents over a megahour of development and refinement over the past 50 years and an ongoing commitment to excellence.

Adams, K.J.

1996-06-01

394

A patch to the Los Alamos Monte Carlo code MCNP has been developed that automates the generation of source descriptions for photons from arbitrary mixtures and configurations of radioactive isotopes. Photon branching ratios for decay processes are obtained from national and international data bases and accessed directly from computer files. Code user input is generally confined to readily available information such as density, isotopic weight fractions, atomic numbers, etc. of isotopes and material compositions. The availability of this capability, in conjunction with the ''generalized source'' capability of MCNP Version 3A, makes possible the rapid and accurate description of photon sources from complex mixtures and configurations of radioactive materials, resulting in improved radiation transport predictive capabilities. This capability is combined with a first-principles calculation of photon spectrometer response functions for NaI, BGO, and HPGe for Eγ ≲ 1 MeV. 25 refs., 1 fig., 4 tabs.

Estes, G.P.; Schrandt, R.G.; Kriese, J.T.

1988-03-01

395

Monte Carlo modeling of the spatially dispersive carrier transport in P3HT and P3HT:PCBM blends

The presence of traps, arising from morphological or chemical defects, can be critical to the performance of organic semiconductor devices. Traps can reduce the charge carrier mobility, disturb the internal electrical field, drive recombination, and reduce the overall device efficiency as well as operational stability. In this work, we investigate the role of traps in determining charge transport properties of

Xin Jiang

2009-01-01

396

Two-photon transport in a waveguide coupled to a cavity in a two-level system

We study two-photon effects for a cavity quantum electrodynamics system where a waveguide is coupled to a cavity embedded in a two-level system. The wave function of two-photon scattering is exactly solved by using the Lehmann-Symanzik-Zimmermann reduction. Our results about quantum statistical properties of the outgoing photons explicitly exhibit the photon blockade effects in the strong-coupling regime. These results agree with the observations of recent experiments.

Shi, T.; Sun, C. P. [Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China)]; Fan, Shanhui [Ginzton Laboratory, Stanford University, Stanford, California 94305 (United States)]

2011-12-15

397

Two-photon transport in a waveguide coupled to a cavity in a two-level system

NASA Astrophysics Data System (ADS)

We study two-photon effects for a cavity quantum electrodynamics system where a waveguide is coupled to a cavity embedded in a two-level system. The wave function of two-photon scattering is exactly solved by using the Lehmann-Symanzik-Zimmermann reduction. Our results about quantum statistical properties of the outgoing photons explicitly exhibit the photon blockade effects in the strong-coupling regime. These results agree with the observations of recent experiments.

Shi, T.; Fan, Shanhui; Sun, C. P.

2011-12-01

398

Monte Carlo algorithms are developed to calculate the ensemble-average particle leakage through the boundaries of a 2-D binary stochastic material. The mixture is specified within a rectangular area and consists of a fixed number of disks of constant radius randomly embedded in a matrix material. The algorithms are extensions of the proposal of Zimmerman et al., using chord-length sampling to eliminate the need to explicitly model the geometry of the mixture. Two variations are considered. The first algorithm uses Chord-Length Sampling (CLS) for both material regions. The second algorithm employs Limited Chord Length Sampling (LCLS), only using chord-length sampling in the matrix material. Ensemble-average leakage results are computed for a range of material interaction coefficients and compared against benchmark results for both accuracy and efficiency. Both algorithms are exact for purely absorbing materials and provide decreasing accuracy as scattering is increased in the matrix material. The LCLS algorithm shows better accuracy than the CLS algorithm for all cases while maintaining an equivalent or better efficiency. Accuracy and efficiency problems with the CLS algorithm are due principally to assumptions made in determining the chord-length distribution within the disks.
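The chord-length sampling idea — drawing the length of each material segment from an assumed exponential chord distribution instead of modeling the disk geometry explicitly — can be illustrated with a 1-D, purely absorbing toy version. This is our own construction for illustration; the paper's 2-D CLS/LCLS algorithms are more involved:

```python
import math
import random

def sample_chord(mean_chord, rng=random.random):
    """Draw a chord length from the exponential (Markovian) distribution
    assumed by chord-length sampling: p(l) = exp(-l/lam)/lam."""
    return -mean_chord * math.log(1.0 - rng())

def transmit_rod(length, sigma_matrix, sigma_disk,
                 lam_matrix, lam_disk, n_hist=100000, rng=random.random):
    """1-D purely absorbing analogue of CLS: a particle alternates between
    matrix and disk segments whose lengths are drawn from exponential
    chord distributions, surviving each segment with probability
    exp(-sigma * segment). Returns the ensemble-average transmission."""
    transmitted = 0
    for _ in range(n_hist):
        x, in_disk, alive = 0.0, False, True
        while alive and x < length:
            lam = lam_disk if in_disk else lam_matrix
            sig = sigma_disk if in_disk else sigma_matrix
            seg = min(sample_chord(lam, rng), length - x)
            if rng() > math.exp(-sig * seg):  # absorbed in this segment
                alive = False
            x += seg
            in_disk = not in_disk
        if alive:
            transmitted += 1
    return transmitted / n_hist
```

A useful consistency check mirrors the paper's "exact for purely absorbing materials" remark: when both materials have the same absorption coefficient, transmission must reduce to exp(-sigma * length) no matter how the rod is partitioned into chords.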

T.J. Donovan; Y. Danon

2002-03-15

399

A standard timing benchmark for EGS4 Monte Carlo calculations.

A Fortran 77 Monte Carlo source code built from the EGS4 Monte Carlo code system has been used for timing benchmark purposes on 29 different computers. This code simulates the deposition of energy from an incident electron beam in a 3-D rectilinear geometry such as one would employ to model electron and photon transport through a series of CT slices. The benchmark forms a standalone system and does not require that the EGS4 system be installed. The Fortran source code may be ported to different architectures by modifying a few lines and only a moderate amount of CPU time is required ranging from about 5 h on PC/386/387 to a few seconds on a massively parallel supercomputer (a BBN TC2000 with 512 processors). PMID:1584121

Bielajew, A F; Rogers, D W

1992-01-01

400

Parallel Finite Element Electron-Photon Transport Analysis on 2-D Unstructured Mesh

A computer code has been developed to solve the linear Boltzmann transport equation on an unstructured mesh of triangles, from a Pro/E model. An arbitrary arrangement of distinct material regions is allowed. Energy dependence is handled by solving over an arbitrary number of discrete energy groups. Angular dependence is treated by Legendre-polynomial expansion of the particle cross sections and a discrete ordinates treatment of the particle fluence. The resulting linear system is solved in parallel with a preconditioned conjugate-gradients method. The solution method is unique in that the space-angle dependence is solved simultaneously, eliminating the need for the usual inner iterations. Electron cross sections are obtained from a Goudsmit-Saunderson modified version of the CEPXS code. A one-dimensional version of the code has also been developed for testing and development purposes.

Drumm, C.R.

1999-01-01

401

Expressions for the transport coefficients obtained from the Gross-Jackson and the Chapman–Enskog methods are used to derive explicit relations incorporating the internal energy of the molecules for pure polyatomic gases and for binary mixtures of gases. Various coefficients such as the binary diffusion, thermal conductivity, and viscosity coefficients and the thermal diffusion factor are calculated and a comparison with

D. Omeiri; D. E. Djafri

2010-01-01

402

FERMI@Elettra comprises two free electron lasers (FELs) that will generate short pulses (τ ≈ 25 to 200 fs) of highly coherent radiation in the XUV and soft X-ray region. The use of external laser seeding together with a harmonic upshift scheme to obtain short wavelengths will give FERMI@Elettra the capability to produce high-quality, longitudinally coherent photon pulses. This capability, together with the possibilities of temporal synchronization to external lasers and control of the output photon polarization, will open new experimental opportunities not possible with currently available FELs. Here we report on the predicted radiation coherence properties and important configuration details of the photon beam transport system. We discuss the several experimental stations that will be available during initial operations in 2011, and we give a scientific perspective on possible experiments that can exploit the critical parameters of this new light source.

Allaria, Enrico; Callegari, Carlo; Cocco, Daniele; Fawley, William M.; Kiskinova, Maya; Masciovecchio, Claudio; Parmigiani, Fulvio

2010-04-05

403

NASA Astrophysics Data System (ADS)

For large, highly detailed models, Monte Carlo simulations may spend a large fraction of their run-time performing simple point location and distance to surface calculations for every geometric component in a model. In such cases, the use of bounding boxes (axis-aligned boxes that bound each geometric component) can improve particle tracking efficiency and decrease overall simulation run time significantly. In this paper we present a robust and efficient algorithm for generating the numerically-optimal bounding box (optimal to within a user-specified tolerance) for an arbitrary Constructive Solid Geometry (CSG) object defined by quadratic surfaces. The new algorithm uses an iterative refinement to tighten an initial, conservatively large, bounding box into the numerically-optimal bounding box. At each stage of refinement, the algorithm subdivides the candidate bounding box into smaller boxes, which are classified as inside, outside, or intersecting the boundary of the component. In cases where the algorithm cannot unambiguously classify a box, the box is refined further. This process continues until the refinement near the component's extremal points reaches the user-selected tolerance level. This refinement/classification approach is more efficient and practical than methods that rely on computing actual boundary representations or sampling to determine the extent of an arbitrary CSG component. A complete description of the bounding box algorithm is presented, along with a proof that the algorithm is guaranteed to converge to within specified tolerance of the true optimal bounding box. The paper also provides a discussion of practical implementation details for the algorithm as well as numerical results highlighting performance and accuracy for several representative CSG components.
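The subdivide-and-classify refinement described in the abstract can be sketched for the simplest quadratic region, a disk, using interval arithmetic to classify boxes conservatively. This is our own illustrative reduction, not the authors' implementation, and it assumes the starting box actually intersects the object:

```python
def interval_sq(lo, hi):
    """Interval extension of x**2 over [lo, hi]."""
    if lo >= 0.0:
        return lo * lo, hi * hi
    if hi <= 0.0:
        return hi * hi, lo * lo
    return 0.0, max(lo * lo, hi * hi)

def classify_disk(box, r):
    """Classify an axis-aligned box against the disk x^2 + y^2 <= r^2
    as 'in', 'out', or 'boundary' (box straddles the circle)."""
    (x0, x1), (y0, y1) = box
    sx_lo, sx_hi = interval_sq(x0, x1)
    sy_lo, sy_hi = interval_sq(y0, y1)
    if sx_hi + sy_hi <= r * r:
        return 'in'
    if sx_lo + sy_lo > r * r:
        return 'out'
    return 'boundary'

def tight_bbox(classify, start_box, tol):
    """Shrink a conservative start_box toward the optimal bounding box by
    subdividing candidate boxes and discarding those classified 'out',
    until every surviving box is smaller than tol in each dimension."""
    boxes = [start_box]
    while max(hi - lo for b in boxes for lo, hi in b) > tol:
        new_boxes = []
        for (x0, x1), (y0, y1) in boxes:
            xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
            for sub in (((x0, xm), (y0, ym)), ((xm, x1), (y0, ym)),
                        ((x0, xm), (ym, y1)), ((xm, x1), (ym, y1))):
                if classify(sub) != 'out':
                    new_boxes.append(sub)
        boxes = new_boxes
    xs = [v for (xr, _) in boxes for v in xr]
    ys = [v for (_, yr) in boxes for v in yr]
    return (min(xs), max(xs)), (min(ys), max(ys))
```

Because the interval classification is conservative, the returned box always contains the true bounding box and overshoots it by at most one cell width per side, mirroring the tolerance guarantee the paper proves for general CSG components.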

Millman, David L.; Griesheimer, David P.; Nease, Brian R.; Snoeyink, Jack

2014-06-01

404

VERIFICATION OF THE SHIFT MONTE CARLO CODE

Shift is a new hybrid Monte Carlo/deterministic radiation transport code being developed at Oak Ridge National Laboratory. At its current stage of development, Shift includes a fully-functional parallel Monte Carlo capability for simulating eigenvalue and fixed-source multigroup transport problems. This paper focuses on recent efforts to verify Shift's Monte Carlo component using the two-dimensional and three-dimensional C5G7 NEA benchmark

Nicholas Sly; Mervin Brenden Mervin; Scott W Mosher; Thomas M Evans; G. Ivan Maldonado

2012-01-01

405

NASA Technical Reports Server (NTRS)

In my presentation, I will describe several approximation methods with different levels of complexity; they will be gradually applied to simple examples of horizontally inhomogeneous clouds. Understanding of photon horizontal transport and radiative smoothing can help to improve the accuracy of the methods. The accuracy of the methods will be compared with full Monte Carlo calculations. The specifics of Monte Carlo in cloudy atmospheres will also be discussed. A special emphasis will be put on the strong forward scattering peak in the phase functions.

Marshak, Alexander

2004-01-01

406

NASA Technical Reports Server (NTRS)

An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method as yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results and actual results obtained with a 600 MeV cyclotron are given.
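A creeping random search of the kind the abstract describes accepts a small random perturbation of the parameter vector only when it lowers the cost, with parameter constraints enforced by clamping to bounds. A minimal sketch (function and variable names are ours, not from the cited work):

```python
import random

def creeping_random_search(cost, x0, bounds, step, n_iter=2000, rng=None):
    """Sequential random perturbation: perturb the current parameters by a
    uniform random step, clamp each coordinate to its (lo, hi) bound, and
    accept the candidate only if the cost improves."""
    rng = rng or random.Random(0)
    x = list(x0)
    best = cost(x)
    for _ in range(n_iter):
        cand = [min(max(xi + rng.uniform(-step, step), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]
        c = cost(cand)
        if c < best:  # greedy acceptance; rejected moves are discarded
            x, best = cand, c
    return x, best
```

The clamping step is what guarantees the "no infeasible solutions" property the abstract mentions: every candidate the algorithm ever evaluates already satisfies the parameter constraints.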

Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.

1971-01-01

407

Analysis of single Monte Carlo methods for prediction of reflectance from turbid media

Starting from the radiative transport equation we derive the scaling relationships that enable a single Monte Carlo (MC) simulation to predict the spatially- and temporally-resolved reflectance from homogeneous semi-infinite media with arbitrary scattering and absorption coefficients. This derivation shows that a rigorous application of this single Monte Carlo (sMC) approach requires the rescaling to be done individually for each photon biography. We examine the accuracy of the sMC method when processing simulations on an individual photon basis and also demonstrate the use of adaptive binning and interpolation using non-unif
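The per-biography rescaling that the abstract derives can be illustrated schematically: record each photon's total path length and exit position from one baseline simulation run with scattering coefficient mu_s0 and zero absorption, then rescale lengths by mu_s0/mu_s and apply an absorption weight exp(-mu_a L'). The sketch below is our own simplification of the sMC idea; variable names are hypothetical:

```python
import math

def rescale_reflectance(biographies, mu_s0, mu_s, mu_a):
    """Single-Monte-Carlo (sMC) rescaling sketch. 'biographies' is a list
    of (path_length, exit_radius) pairs recorded from a baseline run with
    scattering coefficient mu_s0 and zero absorption. Each biography is
    rescaled individually: lengths scale by mu_s0/mu_s, and absorption
    enters as the weight exp(-mu_a * L'). Returns total diffuse
    reflectance and the weight-averaged exit radius."""
    scale = mu_s0 / mu_s
    total_w, r_moment = 0.0, 0.0
    for path_len, radius in biographies:
        w = math.exp(-mu_a * path_len * scale)
        total_w += w
        r_moment += w * radius * scale
    n = len(biographies)
    return total_w / n, (r_moment / total_w if total_w > 0 else 0.0)
```

The point the abstract makes is visible in the code: the weight and the spatial rescaling are applied to each photon biography separately, not to binned averages, which is what a rigorous sMC application requires.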