Development and validation of a Monte Carlo simulation of photon transport in an Anger camera
DANIEL J. DE VRIES; STEPHEN C. MOORE; ROBERT E. ZIMMERMAN; STEFAN P. MUELLER; BERNARD FRIEDLAND; RICHARD C. LANZA
1990-01-01
The geometric component of the point spread function (PSF) of a gamma camera collimator can be determined analytically, and the penetration component can be calculated readily by numerical ray-tracing. A Monte Carlo simulation of photon transport that includes collimator scatter was developed. The simulation was implemented on an array processor which propagates up to 1024 photons in parallel, allowing accurate ...
The 3-D Monte Carlo neutron and photon transport code MCMG and its algorithms
Deng, L.; Hu, Z.; Li, G.; Li, S.; Liu, Z. [Inst. of Applied Physics and Computational Mathematics, Beijing 100094 (China)
2012-07-01
The 3-D Monte Carlo neutron and photon transport parallel code MCMG has been developed. A new collision mechanism based on materials rather than individual nuclides is added to the code. Geometry cells and surfaces can be dynamically extended. Combined multigroup and continuous-energy cross-section transport is supported. The multigroup scattering expansion extends to P5, and up-scattering is considered. Various multigroup libraries can easily be equipped in the code. For a series of test models, MCMG reproduces both experimental results and MCNP results, and it runs a factor of 2-4 faster than MCNP. (authors)
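The P5 multigroup scattering referenced in this abstract is the standard truncated Legendre expansion of the group-to-group scattering cross section (written here in common textbook notation, not taken from the paper itself):

```latex
\sigma_{s}^{g' \to g}(\mu) \;\approx\; \sum_{l=0}^{5} \frac{2l+1}{2}\, \sigma_{s,l}^{g' \to g}\, P_l(\mu)
```

where \(\mu\) is the scattering cosine, \(P_l\) are Legendre polynomials, and "expansible to P5" means the series is truncated at \(l = 5\).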
Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
NASA Astrophysics Data System (ADS)
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical radiotherapy is hindered by its slow convergence and long computation time. The main task in MC dose calculation research is to speed up computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision and ultimately to reduce the dose calculation time for accurate radiotherapy on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC) developed by the FDS Team, a fast MC method for coupled electron-photon transport was presented, with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with only a slight reduction in accuracy; second, a variety of MC acceleration methods were applied, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying suitable variance reduction techniques to accelerate the convergence rate. The fast MC method was tested on many simple physical models and on clinical cases including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical dose verification for accurate radiotherapy. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.
TARTNP: a coupled neutron-photon Monte Carlo transport code [10 to 20 MeV; in LLL FORTRAN]
E. F. Plechaty; J. R. Kimlinger
1976-01-01
A Monte Carlo code was written that calculates the transport of neutrons, photons, and neutron-induced photons. The cross sections of these particles are derived from TARTNP's data base, the Evaluated Nuclear Data Library. The energy range of the neutron data in the Library is 10 MeV to 20 MeV; the photon energy range is 1 keV to 20 MeV. One ...
MCNP: a general Monte Carlo code for neutron and photon transport
Forster, R.A.; Godfrey, T.N.K.
1985-01-01
MCNP is a very general Monte Carlo neutron photon transport code system with approximately 250 person years of Group X-6 code development invested. It is extremely portable, user-oriented, and a true production code as it is used about 60 Cray hours per month by about 150 Los Alamos users. It has as its data base the best cross-section evaluations available. MCNP contains state-of-the-art traditional and adaptive Monte Carlo techniques to be applied to the solution of an ever-increasing number of problems. Excellent user-oriented documentation is available for all facets of the MCNP code system. Many useful and important variants of MCNP exist for special applications. The Radiation Shielding Information Center (RSIC) in Oak Ridge, Tennessee is the contact point for worldwide MCNP code and documentation distribution. A much improved MCNP Version 3A will be available in the fall of 1985, along with new and improved documentation. Future directions in MCNP development will change the meaning of MCNP to Monte Carlo N Particle where N particle varieties will be transported.
Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario ''Carlos Haya'', Avda. Carlos Haya s/n, E-29010 Malaga (Spain); Unidad de Radiofisica Hospitalaria, Hospital Xanit Internacional, Avda. de los Argonautas s/n, E-29630 Benalmadena (Malaga) (Spain); NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Hufelandstr. 55, D-45122 Essen (Germany); Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)
2010-07-15
Purpose: In this work, the authors describe an approach developed to drive the application of different variance-reduction techniques in the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of directional bremsstrahlung splitting, and azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ~3%/0.3 mm for the central-axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to examine simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed by other approaches common in this field. Conclusions: The new approach is competitive with those previously used for this kind of problem (phase-space-file generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
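Two of the techniques named in this abstract, splitting and Russian roulette, amount to the standard importance game played when a particle crosses between regions of different importance. The sketch below is a generic textbook version, not the paper's implementation; in their scheme the `imp_*` values would come from the ant-colony-generated importance map, which is an assumption here.

```python
import random

def importance_game(weight, imp_here, imp_next, rng=random.random):
    """Splitting / Russian roulette at a region boundary.

    Returns a list of statistical weights, one per surviving copy of the
    particle. The expected total weight equals `weight`, so the game is
    unbiased; it trades particle count against weight variance.
    """
    r = imp_next / imp_here
    if r >= 1.0:
        # Entering a more important region: split into n copies of weight w/n.
        n = max(int(round(r)), 1)
        return [weight / n] * n
    # Entering a less important region: Russian roulette with survival
    # probability r; a survivor's weight is boosted to w/r.
    if rng() < r:
        return [weight / r]
    return []
```

For example, a unit-weight particle entering a region four times as important is split into four copies of weight 0.25, while one entering a region four times less important survives only a quarter of the time.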
A method for photon beam Monte Carlo multileaf collimator particle transport
NASA Astrophysics Data System (ADS)
Siebers, Jeffrey V.; Keall, Paul J.; Kim, Jong Oh; Mohan, Radhe
2002-09-01
Monte Carlo (MC) algorithms are recognized as the most accurate methodology for patient dose assessment. For intensity-modulated radiation therapy (IMRT) delivered with dynamic multileaf collimators (DMLCs), accurate dose calculation, even with MC, is challenging. Accurate IMRT MC dose calculations require inclusion of the moving MLC in the MC simulation. Due to its complex geometry, full transport through the MLC can be time consuming. The aim of this work was to develop an MLC model for photon beam MC IMRT dose computations. The basis of the MC MLC model is that the complex MLC geometry can be separated into simple geometric regions, each of which readily lends itself to simplified radiation transport. For photons, only attenuation and first Compton scatter interactions are considered. The amount of attenuation material an individual particle encounters while traversing the entire MLC is determined by adding the individual amounts from each of the simplified geometric regions. Compton scatter is sampled based upon the total thickness traversed. Pair production and electron interactions (scattering and bremsstrahlung) within the MLC are ignored. The MLC model was tested for 6 MV and 18 MV photon beams by comparing it with measurements and MC simulations that incorporate the full physics and geometry for fields blocked by the MLC and with measurements for fields with the maximum possible tongue-and-groove and tongue-or-groove effects, for static test cases and for sliding windows of various widths. The MLC model predicts the field size dependence of the MLC leakage radiation within 0.1% of the open-field dose. The entrance dose and beam hardening behind a closed MLC are predicted within +/-1% or 1 mm. Dose undulations due to differences in inter- and intra-leaf leakage are also correctly predicted. The MC MLC model predicts leaf-edge tongue-and-groove dose effect within +/-1% or 1 mm for 95% of the points compared at 6 MV and 88% of the points compared at 18 MV. 
The dose through a static leaf tip is also predicted generally within +/-1% or 1 mm. Tests with sliding windows of various widths confirm the accuracy of the MLC model for dynamic delivery and indicate that accounting for a slight leaf position error (0.008 cm for our MLC) will improve the accuracy of the model. The MLC model developed is applicable to both dynamic MLC and segmental MLC IMRT beam delivery and will be useful for patient IMRT dose calculations, pre-treatment verification of IMRT delivery and IMRT portal dose transmission dosimetry.
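The core simplification in this MLC model, summing the attenuating path length over simple geometric regions and then deciding attenuation on the total, can be sketched as follows. The attenuation coefficient and region thicknesses are illustrative placeholders, not data from the paper.

```python
import math
import random

def mlc_transmission(region_thicknesses_cm, mu_per_cm, rng=random.random):
    """Sketch of the simplified MLC photon transport: accumulate the
    material thickness a ray crosses in each simple geometric region,
    then apply Beer-Lambert attenuation to the total.

    Returns (transmitted?, transmission_probability).
    """
    t_total = sum(region_thicknesses_cm)   # total leaf-material path length
    p = math.exp(-mu_per_cm * t_total)     # uncollided transmission
    return rng() < p, p

# Example: a ray crossing 1.0 cm and 2.0 cm of leaf material, with a
# placeholder attenuation coefficient of 0.5 /cm.
_, p = mlc_transmission([1.0, 2.0], 0.5)
```

In the paper's model, a photon that does interact is then given at most one Compton scatter sampled from the same total thickness; that step is omitted here.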
ITS Version 3.0: The Integrated TIGER Series of coupled electron/photon Monte Carlo transport codes
Halbleib, J.A.; Kensek, R.P.; Valdez, G.D.; Mehlhorn, T.A. [Sandia National Labs., Albuquerque, NM (United States); Seltzer, S.M.; Berger, M.J. [National Inst. of Standards and Technology, Gaithersburg, MD (United States). Ionizing Radiation Div.
1993-06-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields. It combines operational simplicity and physical accuracy in order to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Flexibility of construction permits tailoring of the codes to specific applications and extension of code capabilities to more complex applications through simple update procedures.
An OpenCL-based Monte Carlo dose calculation engine (oclMC) for coupled photon-electron transport
Tian, Zhen; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-01-01
The Monte Carlo (MC) method is recognized as the most accurate dose calculation method for radiotherapy. However, its extremely long computation time impedes clinical application. Recently, much effort has been made to realize fast MC dose calculation on GPUs. Nonetheless, most GPU-based MC dose engines were developed in the NVidia CUDA environment, which limits code portability to other platforms and hinders the introduction of GPU-based MC simulations into clinical practice. The objective of this paper is to develop a fast cross-platform MC dose engine, oclMC, in the OpenCL environment for external beam photon and electron radiotherapy in the MeV energy range. Coupled photon-electron MC simulation was implemented with analogue simulation for photon transport and a Class II condensed history scheme for electron transport. To test the accuracy and efficiency of our dose engine oclMC, we compared dose calculation results of oclMC and gDPM, our previously developed GPU-based MC code, for a 15 MeV electron ...
Su, L.; Du, X.; Liu, T.; Xu, X. G. [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States)
2013-07-01
An electron-photon coupled Monte Carlo code ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. The electron transport in media was modeled using the class-II condensed history method. The electron energy considered ranges from a few hundred keV to 30 MeV. Moller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the continuous slowing down approximation (CSDA). Photon transport was handled using the delta-tracking method. The photoelectric effect, Compton scattering, and pair production were modeled. Voxelized geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6x10^6 electron histories were simulated within 2 seconds. In comparison, the same case run with the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively, on a single CPU core. (authors)
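The delta-tracking (Woodcock) method mentioned for photon transport can be sketched in one dimension as follows. This is a generic illustration, not ARCHER's actual implementation; the cell layout and cross sections are made up.

```python
import math
import random

def delta_track(x0, cells, sigma_maj, rng=random.random):
    """Woodcock (delta) tracking in 1D.

    `cells` is a list of (x_upper_bound, sigma_total) pairs sorted by bound;
    `sigma_maj` must majorize every sigma_total. Flight distances are sampled
    against the constant majorant, so no surface crossings need be computed;
    each tentative collision is accepted as real with probability
    sigma/sigma_maj, otherwise it is a virtual collision and flight continues.
    Returns the real-collision position, or None if the particle escapes.
    """
    x = x0
    while True:
        x -= math.log(rng()) / sigma_maj   # exponential flight distance
        sigma = next((s for xb, s in cells if x < xb), None)
        if sigma is None:
            return None                    # left the geometry: escape
        if rng() < sigma / sigma_maj:
            return x                       # real collision site
```

In a homogeneous medium this reproduces the ordinary mean free path 1/sigma, which is a simple sanity check for the rejection step.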
Morgan C. White
2000-07-01
The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class 'u' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent.
Second, the ability to calculate radiation dose due to the neutron environment around a MEA is shown. An uncertainty of a factor of three in the MEA calculations is shown to be due to uncertainties in the geometry modeling. It is believed that the methodology is sound and that good agreement between simulation and experiment has been demonstrated.
Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali
2014-01-01
Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in the case of cross-talk, the surrounding ones) until it reaches a configurable receptor or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and against the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168
PHOTON TRANSPORT AND SHIELDING
Lin, Zi-wei
Keywords: lunar albedo, space radiation, radiation transport (deterministic or MC). Anisotropy of the albedo radiation environment on the lunar surface from GEANT4 simulations of radiation particles in the 1977 solar-minimum galactic-cosmic-ray environment ...
Vectorization of Monte Carlo particle transport
Burns, P.J.; Christon, M.; Schweitzer, R.; Lubeck, O.M.; Wasserman, H.J.; Simmons, M.L.; Pryor, D.V. (Colorado State Univ., Fort Collins, CO (USA). Computer Center; Los Alamos National Lab., NM (USA); Supercomputing Research Center, Bowie, MD (USA))
1989-01-01
Fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements of the vector and scalar implementations were modeled in a modified Amdahl's Law that accounts for additional data motion in the vector code. The performance and implementation strategy of the vector codes are related to architectural features of each machine. Speedups between fifteen and eighteen for Cyber 205/ETA-10 architectures, and about nine for CRAY X-MP/Y-MP architectures are observed. The best single processor execution time for the problem was 0.33 seconds on the ETA-10G, and 0.42 seconds on the CRAY Y-MP. 32 refs., 12 figs., 1 tab.
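The "modified Amdahl's Law" in this abstract adds a data-motion term to the usual speedup formula. The exact form used in the paper is not reproduced here, so the following is a guessed sketch in which the extra data motion appears as an additive fraction of the scalar runtime.

```python
def vector_speedup(f_vec, r_vec, data_motion_frac=0.0):
    """Amdahl-style single-processor vector speedup (sketch).

    f_vec:            fraction of scalar runtime that vectorizes
    r_vec:            vector/scalar speed ratio on that fraction
    data_motion_frac: additional vector-code data-motion time, expressed
                      as a fraction of the scalar runtime (the assumed
                      'modification' to plain Amdahl's Law)
    """
    return 1.0 / ((1.0 - f_vec) + f_vec / r_vec + data_motion_frac)
```

With illustrative numbers such as f_vec = 0.97, r_vec = 20, and 1% extra data motion, this gives a speedup near 11, i.e. in the range the abstract reports; the numbers themselves are not from the paper.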
Automated Monte Carlo biasing for photon-generated electrons near surfaces.
Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick
2009-09-01
This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
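The weight-window game driven by adjoint-flux estimates can be sketched generically as follows. The normalization constant, window width, and survival-weight choice below are conventional defaults, not details taken from the report.

```python
import random

def weight_window_game(weight, phi_adjoint, c_norm=1.0, width=5.0,
                       rng=random.random):
    """Roulette/split a particle against a weight window whose lower bound
    is inversely proportional to the local adjoint flux (importance), so
    that weight x importance stays roughly constant across the problem.

    Returns the list of surviving weights; the expected total weight is
    preserved, so the game is unbiased.
    """
    w_low = c_norm / phi_adjoint
    w_high = width * w_low
    if weight < w_low:                    # below window: Russian roulette
        w_surv = 0.5 * (w_low + w_high)
        return [w_surv] if rng() < weight / w_surv else []
    if weight > w_high:                   # above window: split
        n = int(weight / w_high) + 1
        return [weight / n] * n
    return [weight]                       # inside window: no action
```

A particle inside the window passes through untouched; an overweight particle is split into copies that land back inside the window, and an underweight one is rouletted up to the survival weight or killed.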
Investigation of variance reduction techniques for Monte Carlo photon dose calculation using XVMC
Iwan Kawrakow; Matthias Fippel
2000-01-01
Several variance reduction techniques, such as photon splitting, electron history repetition, Russian roulette and the use of quasi-random numbers are investigated and shown to significantly improve the efficiency of the recently developed XVMC Monte Carlo code for photon beams in radiation therapy. It is demonstrated that it is possible to further improve the efficiency by optimizing transport parameters such as
Photon transport in binary photonic lattices
NASA Astrophysics Data System (ADS)
Rodríguez-Lara, B. M.; Moya-Cessa, H.
2013-03-01
We present a review of the mathematical methods that are used to theoretically study classical propagation and quantum transport in arrays of coupled photonic waveguides. We focus on analyzing two types of binary photonic lattices: those where either self-energies or couplings alternate. For didactic reasons, we split the analysis into classical propagation and quantum transport, but all methods can be implemented, mutatis mutandis, in a given case. On the classical side, we use coupled mode theory and present an operator approach to the Floquet-Bloch theory in order to study the propagation of a classical electromagnetic field in two particular infinite binary lattices. On the quantum side, we study the transport of photons in equivalent finite and infinite binary lattices by coupled mode theory and linear algebra methods involving orthogonal polynomials. Curiously, the dynamics of finite size binary lattices can be expressed as the roots and functions of Fibonacci polynomials.
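The classical coupled-mode propagation for a binary lattice with alternating self-energies can be integrated numerically. The sketch below uses the standard coupled-mode equations i dE_n/dz = (-1)^n delta E_n + g (E_{n-1} + E_{n+1}) with made-up parameter values; it illustrates the type of system the review analyzes, not any specific example from it.

```python
def binary_lattice_step(E, delta, g, dz):
    """One RK4 step of the coupled-mode equations
        i dE_n/dz = (-1)^n * delta * E_n + g * (E_{n-1} + E_{n+1})
    for a finite binary photonic lattice (self-energies alternating between
    +delta and -delta, uniform coupling g). E is a list of complex mode
    amplitudes, one per waveguide.
    """
    def deriv(a):
        n_sites = len(a)
        out = []
        for n in range(n_sites):
            left = a[n - 1] if n > 0 else 0j
            right = a[n + 1] if n < n_sites - 1 else 0j
            out.append(-1j * (((-1) ** n) * delta * a[n] + g * (left + right)))
        return out

    k1 = deriv(E)
    k2 = deriv([e + 0.5 * dz * k for e, k in zip(E, k1)])
    k3 = deriv([e + 0.5 * dz * k for e, k in zip(E, k2)])
    k4 = deriv([e + dz * k for e, k in zip(E, k3)])
    return [e + dz / 6.0 * (a + 2 * b + 2 * c + d)
            for e, a, b, c, d in zip(E, k1, k2, k3, k4)]
```

Launching light into a single waveguide and iterating this map shows the discrete diffraction pattern; since the underlying evolution is unitary, the total power sum(|E_n|^2) is conserved up to the integrator's error, which is a useful correctness check.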
Parallel processing Monte Carlo radiation transport codes
McKinney, G.W.
1994-02-01
Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine.
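The average efficiency of such a distributed MC run can be sketched with a simple compute-versus-communication model; the rendezvous counts and timings below are hypothetical, not measurements from the MCNP/PVM study.

```python
def mc_parallel_efficiency(n_histories, t_per_history, n_procs,
                           n_rendezvous, t_comm_per_rendezvous):
    """Efficiency (= speedup / n_procs) of an embarrassingly parallel
    Monte Carlo run whose only non-parallel cost is periodic tally
    rendezvous communication among the workers."""
    t_serial = n_histories * t_per_history
    t_parallel = t_serial / n_procs + n_rendezvous * t_comm_per_rendezvous
    return t_serial / (n_procs * t_parallel)
```

For example, 1000 one-second histories on 10 workers with 10 one-second rendezvous steps gives an efficiency of about 0.91; with no communication at all the efficiency is exactly 1.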
NOTE: An efficient framework for photon Monte Carlo treatment planning
NASA Astrophysics Data System (ADS)
Fix, Michael K.; Manser, Peter; Frei, Daniel; Volken, Werner; Mini, Roberto; Born, Ernst J.
2007-09-01
Currently photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure where many user interactions are needed. This means automation is needed for usage in clinical routine. In addition, because of the long computing time in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed resulting in a very flexible framework. By this means appropriate MC transport methods are assigned to different geometric regions by still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one out of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse providing all necessary information by means of Dicom streams was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown. 
Thereby, comparisons are performed between MC calculated dose distributions and those calculated by a pencil beam or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows a widespread use for all kinds of investigations from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules keeping the system highly flexible and efficient. This work was presented in part at the First European Workshop on Monte Carlo Treatment Planning (EWG-MCTP) held in Gent, Belgium from 22 to 25 October 2006.
A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT
Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan [TAPIR, California Institute of Technology, MC 350-17, 1200 E California Blvd., Pasadena, CA 91125 (United States); Burrows, Adam; Dolence, Joshua C. [Department of Astrophysical Sciences, Princeton University, Peyton Hall, Ivy Lane, Princeton, NJ 08544 (United States); Loeffler, Frank; Schnetter, Erik, E-mail: abdik@tapir.caltech.edu [Center for Computation and Technology, Louisiana State University, 216 Johnston Hall, Baton Rouge, LA 70803 (United States)
2012-08-20
Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.
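The Implicit Monte Carlo linearization of Fleck and Cummings referenced here is usually written in terms of the Fleck factor (standard notation from the photon-transport literature, not taken from this paper):

```latex
f \;=\; \frac{1}{1 + \alpha \,\beta\, c \,\Delta t\, \sigma_P},
\qquad
\beta \;=\; \frac{4 a T^3}{c_v},
```

where \(\alpha \in [1/2, 1]\) is the implicitness parameter, \(\sigma_P\) the Planck-mean opacity, and \(c_v\) the heat capacity. Within a timestep, a fraction \(f\sigma_a\) of the absorption is treated as true absorption and \((1-f)\sigma_a\) as effective scattering that stands in for absorption and prompt reemission; this is what permits the larger timesteps noted in the abstract.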
Taro Ueki; J. Eduard Hoogenboom
2001-01-01
An exact perturbation analysis method in Monte Carlo radiation transport calculations is investigated, utilizing the coupling of forward and adjoint simulations. The vehicle chosen for this investigation is correlated coupling for time-independent neutron or photon transport problems, which has been applied to a material perturbation isolated from both the source and the detector. By initiating forward and adjoint simulation histories (trajectories) in opposite directions ...
Albright, N; Bergstrom, P M; Daly, T P; Descalle, M; Garrett, D; House, R K; Knapp, D K; May, S; Patterson, R W; Siantar, C L; Verhey, L; Walling, R S; Welczorek, D
1999-07-01
PEREGRINE is a 3D Monte Carlo dose calculation system designed to serve as a dose calculation engine for clinical radiation therapy treatment planning systems. Taking advantage of recent advances in low-cost computer hardware, modern multiprocessor architectures and optimized Monte Carlo transport algorithms, PEREGRINE performs mm-resolution Monte Carlo calculations in times that are reasonable for clinical use. PEREGRINE has been developed to simulate radiation therapy for several source types, including photons, electrons, neutrons and protons, for both teletherapy and brachytherapy. However the work described in this paper is limited to linear accelerator-based megavoltage photon therapy. Here we assess the accuracy, reliability, and added value of 3D Monte Carlo transport for photon therapy treatment planning. Comparisons with clinical measurements in homogeneous and heterogeneous phantoms demonstrate PEREGRINE's accuracy. Studies with variable tissue composition demonstrate the importance of material assignment on the overall dose distribution. Detailed analysis of Monte Carlo results provides new information for radiation research by expanding the set of observables.
Low energy photon dosimetry using Monte Carlo and convolution methods
NASA Astrophysics Data System (ADS)
Modrick, Joseph Michael
Low energy photon dosimetry was investigated using Monte Carlo and convolution methods. Photon energy deposition kernels describing the three-dimensional distribution of energy deposition about a primary photon interaction site were computed using EGS4 Monte Carlo. These photon energy deposition kernels were utilized as the convolution kernel in convolution/superposition dose calculations. A Monte Carlo benchmark describing the energy deposition about an isotropic photon point source model was developed. The effect of the inclusion of low energy photon interaction physics on the Monte Carlo and convolution calculations was investigated. A generalized convolution/superposition algorithm was developed to explicitly account for the orientation of the energy deposition kernel for an isotropic photon point source in a brachytherapy geometry. Energy deposition kernel calculations using the EGS4 'scatter sphere' code SCASPH were extended to low photon energy. Convolution/superposition dose calculations using these kernels for external beam geometries demonstrated agreement with measurements for low energy diagnostic x-ray beam spectra. The effect of the inclusion of Rayleigh scattering using atomic and molecular coherent scattering form factor data on the kernel calculations was shown to result in an angular distribution of energy deposition consistent with the angular distribution of photon scattering described by the form factor data. Convolution/superposition dose calculations using these kernels did not exhibit any effect of the angular distribution of the kernel. Monte Carlo calculations for an isotropic photon point source including the effects of Rayleigh scatter in a homogeneous medium did not demonstrate any effect of the angular distribution of Rayleigh scattering. Calculations in heterogeneous geometries also did not exhibit any effect of the angular distribution of Rayleigh scattering at low photon energy.
Convolution dose calculations using the generalized algorithm demonstrated agreement with the results of the Monte Carlo bench mark. The necessity of applying a correction factor when properly accounting for the orientation of the energy deposition kernel was also demonstrated. The generalized algorithm was also demonstrated to exhibit a discretization artifact from utilizing the discrete energy deposition kernel data.
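The convolution/superposition dose calculation described in this abstract can be sketched numerically: dose is the convolution of the primary-interaction energy release with an energy deposition kernel. The sketch below (array sizes and kernel values are invented for illustration; this is not the dissertation's code) uses FFT-based circular convolution for a spatially invariant kernel in a homogeneous medium.

```python
import numpy as np

def convolve_superposition(terma, kernel):
    """Dose as circular FFT convolution of TERMA with an energy
    deposition kernel (homogeneous medium, invariant kernel).
    Both arrays share the same 3-D shape; the kernel is stored
    zero-frequency-centred (origin at index [0, 0, 0])."""
    return np.fft.ifftn(np.fft.fftn(terma) * np.fft.fftn(kernel)).real

# Toy example: a unit point interaction spreads according to the kernel.
terma = np.zeros((16, 16, 16))
terma[8, 8, 8] = 1.0                     # unit TERMA in one voxel
kernel = np.zeros((16, 16, 16))
kernel[0, 0, 0] = 0.7                    # local (primary) deposition
for ax in range(3):                      # nearest-neighbour spread
    for s in (1, -1):
        idx = [0, 0, 0]
        idx[ax] = s
        kernel[tuple(idx)] = 0.05
dose = convolve_superposition(terma, kernel)
```

Because the kernel sums to one, total energy is conserved: the dose array sums to the TERMA total, with 0.7 deposited locally and 0.05 in each neighbouring voxel.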
MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
1989-01-01
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.
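One of the classic variance reduction features mentioned above is implicit capture (survival weighting): instead of killing a particle at an absorbing collision, its weight is multiplied by the survival probability. The sketch below is a generic illustration of that idea, not MCNP code; scattering is taken as straight-ahead to keep the model one-dimensional, and all coefficients are invented.

```python
import math, random

def transmission_estimate(mu_t, mu_a, slab, n, seed=1, implicit=True):
    """Estimate photon transmission through a 1-D slab of thickness
    `slab`, with total and absorption coefficients mu_t, mu_a per unit
    length.  Scattering is straight-ahead (a deliberate simplification).
    With implicit capture the photon never dies at a collision; its
    weight is multiplied by the survival probability instead."""
    rng = random.Random(seed)
    survive = 1.0 - mu_a / mu_t
    score = 0.0
    for _ in range(n):
        x, w = 0.0, 1.0
        while True:
            x += -math.log(1.0 - rng.random()) / mu_t  # flight to collision
            if x >= slab:
                score += w                    # escaped: tally weight
                break
            if implicit:
                w *= survive                  # weight reduction, no kill
            elif rng.random() > survive:
                break                         # analog: absorbed, history ends
    return score / n
```

With straight-ahead scattering the exact answer is exp(-mu_a * slab); both estimators converge to it, the implicit-capture one with lower variance because no history is lost to absorption.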
Monte Carlo simulation for the transport beamline
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy)]; Attili, A.; Marchetto, F.; Russo, G. [INFN, Sezione di Torino, Via P. Giuria 1, 10125 Torino (Italy)]; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic)]; Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy)]; Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy)]
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.
Monte Carlo radiation transport parallelism
Cox, L. J. (Lawrence J.); Post, S. E. (Susan E.)
2002-01-01
This talk summarizes the main aspects of the LANL ASCI Eolus project and its major unclassified code project, MCNP. The MCNP code provides state-of-the-art Monte Carlo radiation transport to approximately 3000 users world-wide. Almost all hardware platforms are supported because we strictly adhere to the FORTRAN-90/95 standard. For parallel processing, MCNP uses a mixture of OpenMP combined with either MPI or PVM (shared and distributed memory). This talk summarizes our experiences on various platforms using MPI with and without OpenMP. These platforms include PC-Windows, Intel-LINUX, BlueMountain, Frost, ASCI-Q and others.
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.
1999-02-09
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.
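The Cartesian-grid dose output described in this record implies an energy-deposition tally per voxel, converted to dose through the voxel mass. The sketch below is a generic voxel tally, not the patented method; the class name, grid, and material values are invented for illustration.

```python
import numpy as np

class CartesianDoseTally:
    """Accumulate energy deposits (MeV) on a uniform Cartesian grid of
    cubic voxels and convert to absorbed dose in gray (J/kg).
    A minimal sketch of the tally side of an all-particle MC code."""
    def __init__(self, shape, voxel_cm, density_g_cm3):
        self.edep = np.zeros(shape)                  # MeV per voxel
        self.mass_g = density_g_cm3 * voxel_cm ** 3  # voxel mass, grams
        self.voxel = voxel_cm
    def deposit(self, x, y, z, energy_mev):
        # Locate the voxel containing the deposition point (cm).
        i = tuple(int(c // self.voxel) for c in (x, y, z))
        self.edep[i] += energy_mev
    def dose_gray(self):
        mev_to_j = 1.602176634e-13
        return self.edep * mev_to_j / (self.mass_g * 1e-3)

# 1 MeV deposited in the central voxel of a 10 cm water cube.
tally = CartesianDoseTally((10, 10, 10), voxel_cm=1.0, density_g_cm3=1.0)
tally.deposit(5.5, 5.5, 5.5, 1.0)
dose = tally.dose_gray()
```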
Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport
McKinley, M S; Brooks III, E D; Daffin, F
2004-12-13
Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games, or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time-dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.
Benchmarking of Proton Transport in Super Monte Carlo Simulation Program
NASA Astrophysics Data System (ADS)
Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican
2014-06-01
The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy range can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamical 3D datasets and geometry models is supported by RVIS, a nuclear radiation virtual simulation and assessment system. Continuous-energy cross-section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronics fixed-source and criticality design-parameter calculations for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in our former version of SuperMC. Recently, proton transport has also been integrated into SuperMC in the energy region up to 10 GeV. The physical processes considered for proton transport include electromagnetic processes and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes.
In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat intermediate-energy nuclear reactions for protons. Some other hadronic models are also under development. The benchmarking of proton transport in SuperMC has been performed against Accelerator Driven subcritical System (ADS) benchmark data and a model released by the IAEA under its Coordinated Research Project (CRP). The incident proton energy is 1.0 GeV. The neutron flux and energy deposition were calculated. The results simulated using SuperMC and FLUKA are in agreement within the statistical uncertainty inherent in the Monte Carlo method. Proton transport in SuperMC has also been applied to the China Lead-Alloy cooled Reactor (CLEAR), designed by FDS Team, for the calculation of spallation reactions in the target.
Low variance methods for Monte Carlo simulation of phonon transport
Péraud, Jean-Philippe M. (Jean-Philippe Michel)
2011-01-01
Computational studies in kinetic transport are of great use in micro and nanotechnologies. In this work, we focus on Monte Carlo methods for phonon transport, intended for studies in microscale heat transfer. After reviewing ...
Fiber transport of spatially entangled photons.
Löffler, W; Euser, T G; Eliel, E R; Scharrer, M; Russell, P St J; Woerdman, J P
2011-06-17
Entanglement in the spatial degrees of freedom of photons is an interesting resource for quantum information. For practical distribution of such entangled photons, it is desirable to use an optical fiber, which in this case has to support multiple transverse modes. Here we report the use of a hollow-core photonic crystal fiber to transport spatially entangled qubits. PMID:21770558
Approximation for Horizontal Photon Transport in Cloud Remote Sensing Problems
NASA Technical Reports Server (NTRS)
Platnick, Steven
1999-01-01
The effect of horizontal photon transport within real-world clouds can be of consequence to remote sensing problems based on plane-parallel cloud models. An analytic approximation for the root-mean-square horizontal displacement of reflected and transmitted photons relative to the incident cloud-top location is derived from random walk theory. The resulting formula is a function of the average number of photon scatterings, the particle asymmetry parameter, and the single-scattering albedo. In turn, the average number of scatterings can be determined from efficient adding/doubling radiative transfer procedures. The approximation is applied to liquid water clouds for typical remote sensing solar spectral bands, involving both conservative and non-conservative scattering. Results compare well with Monte Carlo calculations. Though the emphasis is on horizontal photon transport in terrestrial clouds, the derived approximation is applicable to any multiple-scattering plane-parallel radiative transfer problem. The complete horizontal transport probability distribution can be described with an analytic distribution specified by the root-mean-square and average displacement values. However, it is shown empirically that the average displacement can be reasonably inferred from the root-mean-square value. An estimate for the horizontal transport distribution can then be made from the root-mean-square photon displacement alone.
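The random-walk scaling underlying the abstract's approximation is easy to verify numerically. For a planar isotropic walk with exponentially distributed step lengths of mean free path l, theory gives E[r^2] = 2*N*l^2 after N steps, so the RMS displacement is l*sqrt(2N). The sketch below is a generic check of that scaling, not the paper's derivation (which additionally folds in the asymmetry parameter and single-scattering albedo).

```python
import math, random

def rms_displacement(n_steps, mfp, n_walks=4000, seed=7):
    """RMS displacement of a 2-D isotropic random walk with
    exponentially distributed step lengths (mean = mfp)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_walks):
        x = y = 0.0
        for _ in range(n_steps):
            s = rng.expovariate(1.0 / mfp)          # free path
            phi = rng.uniform(0.0, 2.0 * math.pi)   # isotropic direction
            x += s * math.cos(phi)
            y += s * math.sin(phi)
        acc += x * x + y * y
    return math.sqrt(acc / n_walks)

# Theory: rms = mfp * sqrt(2 * n_steps), since E[s^2] = 2 * mfp^2
# for an exponential step-length distribution.
r = rms_displacement(25, 1.0)
```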
Photon beam description in PEREGRINE for Monte Carlo dose calculations
Cox, L. J., LLNL
1997-03-04
The goal of PEREGRINE is to provide the capability for accurate, fast Monte Carlo calculation of radiation therapy dose distributions, both for routine clinical use and for research into the efficacy of improved dose calculation. An accurate, efficient method of describing and sampling radiation sources is needed, and a simple, flexible solution is provided. The teletherapy source package for PEREGRINE, coupled with state-of-the-art Monte Carlo simulations of treatment heads, makes it possible to describe any teletherapy photon beam to the precision needed for highly accurate Monte Carlo dose calculations in complex clinical configurations that use standard patient modifiers such as collimator jaws, wedges, blocks, and/or multi-leaf collimators. Generic beam descriptions for a class of treatment machines can readily be adjusted to yield dose calculations that match specific clinical sites.
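Sampling a described photon beam ultimately means drawing energies from a tabulated spectrum. A minimal inverse-CDF sampler over discrete spectrum bins is sketched below; the spectrum numbers are invented for illustration and are not PEREGRINE data.

```python
import bisect, random

def make_spectrum_sampler(energies, weights, seed=3):
    """Inverse-CDF sampler for a tabulated (discrete) photon spectrum.
    `energies` are bin values (MeV), `weights` relative intensities."""
    rng = random.Random(seed)
    total = float(sum(weights))
    cdf, run = [], 0.0
    for w in weights:
        run += w / total
        cdf.append(run)
    def sample():
        i = bisect.bisect_left(cdf, rng.random())
        return energies[min(i, len(energies) - 1)]  # guard vs. rounding
    return sample

# Hypothetical 6 MV-like discrete spectrum (illustrative numbers only).
sample = make_spectrum_sampler([0.5, 1.0, 2.0, 4.0, 6.0],
                               [0.30, 0.35, 0.20, 0.10, 0.05])
draws = [sample() for _ in range(20000)]
mean_mev = sum(draws) / len(draws)
```

In production codes the same idea is usually implemented with the alias method, which gives O(1) sampling regardless of the number of bins.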
Monte Carlo benchmark calculations of energy deposition by electron/photon showers up to 1 GeV
T. A. Mehlhorn; J. A. Halbleib
1983-01-01
Over the past several years the TIGER series of coupled electron/photon Monte Carlo transport codes has been applied to a variety of problems involving nuclear and space radiations, electron accelerators, and radioactive sources. In particular, they have been used at Sandia to simulate the interaction of electron beams, generated by pulsed-power accelerators, with various target materials for weapons effect simulation,
Josep Sempau; Scott J Wilderman; Alex F Bielajew
2000-01-01
A new Monte Carlo (MC) algorithm, the 'dose planning method' (DPM), and its associated computer program for simulating the transport of electrons and photons in radiotherapy-class problems employing primary electron beams, is presented. DPM is intended to be a high-accuracy MC alternative to the current generation of treatment planning codes which rely on analytical algorithms based on an
Photon-electron transport simulation in CANDU reactor channels
NASA Astrophysics Data System (ADS)
Abdelbaky, Mamdooh Elsayed A.
The use of photon-electron transport simulation to calculate the distribution of energy deposition of ionizing radiation in a CANDU reactor channel is investigated. Evaluation of this energy deposition is essential for controlling the radiation-induced corrosion of the channel material. Two general-purpose Monte Carlo codes, SANDYL and EGS4, are utilized, together with a newly developed calculation model. The spatial distribution of energy deposition in the reactor channel coolant near the solid/coolant interface is characterized by the existence of a steep gradient in the vicinity of the interface. The best estimate of this gradient is obtained by optimizing the size of the Monte Carlo scoring regions in the coolant adjacent to the interface (i.e. fuel and pressure tube). It is demonstrated that the commonly used channel homogenization method tends to underestimate (by about 50%) the amount of energy deposition in the coolant near the fuel. The surface area of the solid surrounding the coolant proves to be an important parameter in predicting the distribution of energy deposition and therefore needs to be simulated in detail, particularly for the outer fuel ring. The distribution of energy deposition is calculated for a number of monoenergetic photon sources, as well as for a typical reactor photon source distribution. An energy deposition model is introduced to provide an alternative to the extensive Monte Carlo calculations of energy deposition in the CANDU reactor channel. The model is based on decoupling the photon-electron transport simulation using electron-energy-transfer functions. The decoupled model is validated against the EGS4 and SANDYL codes. A significant computational speedup (about seven times) is achieved for energy deposition calculations in the CANDU channel relative to Monte Carlo simulation. Application of the model in calculating the gamma-ray detector response function is also presented.
Fiber transport of spatially entangled photons
NASA Astrophysics Data System (ADS)
Löffler, W.; Eliel, E. R.; Woerdman, J. P.; Euser, T. G.; Scharrer, M.; Russell, P.
2012-03-01
High-dimensional entangled photon pairs are interesting for quantum information and cryptography: compared to the well-known 2D polarization case, the stronger non-local quantum correlations could improve noise resistance or security, and the larger amount of information per photon increases the available bandwidth. One implementation is to use entanglement in the spatial degree of freedom of twin photons created by spontaneous parametric down-conversion, which is equivalent to orbital angular momentum entanglement; this has been proven to be an excellent model system. The use of optical fiber technology for distribution of such photons has only very recently been practically demonstrated and is of fundamental and applied interest. It poses a big challenge compared to the established time and frequency domain methods: for spatially entangled photons, fiber transport requires the use of multimode fibers, and mode coupling and intermodal dispersion therein must be minimized so as not to destroy the spatial quantum correlations. We demonstrate that these shortcomings of conventional multimode fibers can be overcome by using a hollow-core photonic crystal fiber, which follows the paradigm of mimicking free-space transport as closely as possible, and we are able to confirm entanglement of the fiber-transported photons. Fiber transport of spatially entangled photons is still largely unexplored; we therefore discuss the main complications, the interplay of intermodal dispersion and mode mixing, the influence of external stress and core deformations, and consider the pros and cons of various fiber types.
Mengkuo Wang
2006-01-01
In particle transport computations, the Monte Carlo simulation method is a widely used algorithm. There are several Monte Carlo codes available that perform particle transport simulations. However, the geometry packages and geometric modeling capability of Monte Carlo codes are limited, as they cannot handle complicated geometries made up of complex surfaces. Previous research exists that takes advantage of the
A generic algorithm for Monte Carlo simulation of proton transport
NASA Astrophysics Data System (ADS)
Salvat, Francesc
2013-12-01
A mixed (class II) algorithm for Monte Carlo simulation of the transport of protons, and other heavy charged particles, in matter is presented. The emphasis is on the electromagnetic interactions (elastic and inelastic collisions) which are simulated using strategies similar to those employed in the electron-photon code PENELOPE. Elastic collisions are described in terms of numerical differential cross sections (DCSs) in the center-of-mass frame, calculated from the eikonal approximation with the Dirac-Hartree-Fock-Slater atomic potential. The polar scattering angle is sampled by employing an adaptive numerical algorithm which allows control of interpolation errors. The energy transferred to the recoiling target atoms (nuclear stopping) is consistently described by transformation to the laboratory frame. Inelastic collisions are simulated from DCSs based on the plane-wave Born approximation (PWBA), making use of the Sternheimer-Liljequist model of the generalized oscillator strength, with parameters adjusted to reproduce (1) the electronic stopping power read from the input file, and (2) the total cross sections for impact ionization of inner subshells. The latter were calculated from the PWBA including screening and Coulomb corrections. This approach provides quite a realistic description of the energy-loss distribution in single collisions, and of the emission of X-rays induced by proton impact. The simulation algorithm can be readily modified to include nuclear reactions, when the corresponding cross sections and emission probabilities are available, and bremsstrahlung emission.
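Sampling the polar scattering angle from a numerically tabulated DCS, as described above, is typically done by building a cumulative distribution on the mu = cos(theta) grid and inverting it. The sketch below uses trapezoidal CDF construction with linear inverse interpolation; the screened-Rutherford-like DCS and grid are invented for illustration and are not the paper's eikonal tables.

```python
import bisect, random

def make_angle_sampler(mu_grid, dcs, seed=5):
    """Sample mu = cos(theta) from a tabulated DCS: trapezoidal CDF
    on the grid, then linear inverse interpolation within a cell."""
    cdf = [0.0]
    for i in range(1, len(mu_grid)):
        cdf.append(cdf[-1] + 0.5 * (dcs[i] + dcs[i - 1])
                   * (mu_grid[i] - mu_grid[i - 1]))
    norm = cdf[-1]
    cdf = [c / norm for c in cdf]
    rng = random.Random(seed)
    def sample():
        u = rng.random()
        j = bisect.bisect_right(cdf, u)          # cdf[j-1] <= u < cdf[j]
        f = (u - cdf[j - 1]) / (cdf[j] - cdf[j - 1])
        return mu_grid[j - 1] + f * (mu_grid[j] - mu_grid[j - 1])
    return sample

# Hypothetical forward-peaked DCS ~ 1/(1 + eta - mu)^2
# (screened-Rutherford-like shape, screening parameter eta = 0.1).
eta = 0.1
mu_grid = [-1.0 + 2.0 * k / 200 for k in range(201)]
dcs = [1.0 / (1.0 + eta - m) ** 2 for m in mu_grid]
sample = make_angle_sampler(mu_grid, dcs)
mus = [sample() for _ in range(20000)]
```

For this DCS the exact mean of mu is about 0.78, strongly forward-peaked, which the sampler reproduces; an adaptive grid (as in the paper) would control the interpolation error near the forward peak.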
A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport
Bal, Guillaume, E-mail: gb2030@columbia.edu [Department of Applied Physics and Applied Mathematics, Columbia University, 200 S.W. Mudd Building, 500 W. 120th Street, New York, NY 10027 (United States); Davis, Anthony B., E-mail: Anthony.B.Davis@jpl.nasa.gov [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Mail Stop 169-237, Pasadena, CA 91109 (United States); Kavli Institute for Theoretical Physics, Kohn Hall, University of California, Santa Barbara, CA 93106-4030 (United States); Langmore, Ian, E-mail: ianlangmore@gmail.com [Department of Applied Physics and Applied Mathematics, Columbia University, 200 S.W. Mudd Building, 500 W. 120th Street, New York, NY 10027 (United States)
2011-08-20
Highlights: We introduce a variance reduction scheme for Monte Carlo (MC) transport. The primary application is atmospheric remote sensing. The technique first solves the adjoint problem using a deterministic solver. Next, the adjoint solution is used as an importance function for the MC solver. The adjoint problem is solved quickly since it ignores the volume. - Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.
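The core idea of using an approximate adjoint solution as an importance function can be stripped down to a one-dimensional toy: estimate a detector-like integral once with analog (uniform) sampling and once by sampling from an approximate importance density and weighting by f/p. Everything below (the integrand, the exponent 8 in the approximate density) is invented for illustration; it is not the paper's atmospheric solver.

```python
import math, random

def mc_estimate(n, sampler, score, seed=11):
    """Draw x ~ sampler(rng), tally score(x); return (mean, variance)."""
    rng = random.Random(seed)
    vals = [score(sampler(rng)) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, var

f = lambda x: math.exp(-10.0 * x)        # toy "detector response"
exact = (1.0 - math.exp(-10.0)) / 10.0   # = int_0^1 f(x) dx

# Analog: x uniform on [0, 1], score f(x) directly.
analog_mean, analog_var = mc_estimate(20000, lambda r: r.random(), f)

# Importance-guided: sample an approximate "adjoint" density
# p(x) = a exp(-a x) / (1 - exp(-a)) on [0, 1], a = 8 (deliberately
# not the true exponent 10), and weight each score by f(x) / p(x).
a = 8.0
c = 1.0 - math.exp(-a)
sample_adj = lambda r: -math.log(1.0 - c * r.random()) / a  # inverse CDF
guided_mean, guided_var = mc_estimate(
    20000, sample_adj, lambda x: f(x) * c / (a * math.exp(-a * x)))
```

Both estimators are unbiased, but the guided one has roughly two orders of magnitude lower variance here, mirroring the paper's point that even an approximate adjoint makes a good importance function.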
Photonic sensor applications in transportation security
NASA Astrophysics Data System (ADS)
Krohn, David A.
2007-09-01
There is a broad range of security sensing applications in transportation that can be facilitated by using fiber optic sensors and photonic sensor integrated wireless systems. Many of these vital assets are under constant threat of being attacked. It is important to realize that the threats are not just from terrorism but an aging and often neglected infrastructure. To specifically address transportation security, photonic sensors fall into two categories: fixed point monitoring and mobile tracking. In fixed point monitoring, the sensors monitor bridge and tunnel structural health and environment problems such as toxic gases in a tunnel. Mobile tracking sensors are being designed to track cargo such as shipboard cargo containers and trucks. Mobile tracking sensor systems have multifunctional sensor requirements including intrusion (tampering), biochemical, radiation and explosives detection. This paper will review the state of the art of photonic sensor technologies and their ability to meet the challenges of transportation security.
Kushner, Mark
A Monte Carlo model of xenon resonance radiation transport in a plasma display panel cell, with algorithms to treat the nonlocal nature of resonance radiation transport. Results from this model are used to follow UV photon transport; the UV radiation is produced by resonance transitions of Xe.
Shift: A Massively Parallel Monte Carlo Radiation Transport Package
Pandya, Tara M [ORNL; Johnson, Seth R [ORNL; Davidson, Gregory G [ORNL; Evans, Thomas M [ORNL; Hamilton, Steven P [ORNL
2015-01-01
This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.
Coupled proton/neutron transport calculations using the S sub N and Monte Carlo methods
Filippone, W.L. (Arizona Univ., Tucson, AZ (USA). Dept. of Nuclear and Energy Engineering); Little, R.C.; Morel, J.E.; MacFarlane, R.E.; Young, P.G. (Los Alamos National Lab., NM (USA))
1991-01-01
Coupled charged/neutral particle transport calculations are most often carried out using the Monte Carlo technique. For example, the ITS, EGS, and MCNP (Version 4) codes are used extensively for electron/photon transport calculations, while HETC models the transport of protons, neutrons and heavy ions. In recent years there has been considerable progress in deterministic models of electron transport, and many of these models are applicable to protons. However, even with these new models (and the well established models for neutron transport) deterministic coupled neutron/proton transport calculations have not been feasible for most problems of interest, due to a lack of coupled multigroup neutron/proton cross-section sets. Such cross-section sets are now being developed at Los Alamos. Using these cross sections we have carried out coupled proton/neutron transport calculations using both the S{sub N} and Monte Carlo methods. The S{sub N} calculations used a code called SMARTEPANTS (simulating many accumulative Rutherford trajectories, electron, proton and neutral transport solver), while the Monte Carlo calculations are done with the multigroup option of the MCNP code. Both SMARTEPANTS and MCNP require standard multigroup cross-section libraries. HETC, on the other hand, avoids the need for precalculated nuclear cross sections by modeling individual nucleon collisions as the transported neutrons and protons interact with nuclei. 21 refs., 1 fig.
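In a multigroup calculation of the kind described above, continuous energy is collapsed into groups, and a scattering event reduces to sampling the outgoing group from a group-to-group transfer matrix. The sketch below is generic; the 3-group matrix (down-scatter only) is invented and not a Los Alamos library.

```python
import random

# Hypothetical 3-group scattering matrix: row g = incident group,
# entry [g][gp] = probability of scattering from group g to group gp
# (down-scatter dominant, no up-scatter, rows sum to 1).
scatter = [[0.70, 0.25, 0.05],
           [0.00, 0.80, 0.20],
           [0.00, 0.00, 1.00]]

def next_group(g, rng):
    """Sample the outgoing energy group from row g of the matrix."""
    u, run = rng.random(), 0.0
    for gp, p in enumerate(scatter[g]):
        run += p
        if u < run:
            return gp
    return len(scatter[g]) - 1          # guard against float rounding

# Follow many histories from group 0 through two scatters and record
# the final group populations.
rng = random.Random(2)
counts = [0, 0, 0]
for _ in range(30000):
    g = 0
    for _ in range(2):
        g = next_group(g, rng)
    counts[g] += 1
```

After two scatters the exact distribution is the matrix squared applied to group 0, i.e. (0.49, 0.375, 0.135), which the tallies reproduce.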
MC++: Parallel, portable, Monte Carlo neutron transport in C++
Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States); Nolen, S.D. [Texas A&M Univ., College Station, TX (United States). Dept. of Nuclear Engineering
1997-02-01
We have developed an implicit Monte Carlo neutron transport code in C++ using the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and parallelism. Current capabilities of MC++ are discussed, along with future plans and physics and performance results on many different platforms.
MORSE Monte Carlo radiation transport code system
Emmett, M.B.
1983-02-01
This report is an addendum to the MORSE report, ORNL-4972, originally published in 1975. This addendum contains descriptions of several modifications to the MORSE Monte Carlo Code, replacement pages containing corrections, Part II of the report which was previously unpublished, and a new Table of Contents. The modifications include a Klein-Nishina estimator for gamma rays. Use of such an estimator required changing the cross-section routines to process pair production and Compton scattering cross sections directly from ENDF tapes, and writing a new version of subroutine RELCOL. Another modification is the use of free-form input for the SAMBO analysis data. This required changing subroutine SCORIN and adding a new subroutine RFRE. References are updated, and errors in the original report have been corrected. (WHK)
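A Klein-Nishina estimator scores the probability of a photon scattering directly toward a detector, which requires evaluating the Klein-Nishina differential cross section. The function below is the standard formula (per electron, in units of the classical electron radius squared), written as a self-contained sketch rather than MORSE code.

```python
import math

def klein_nishina(e_mev, theta):
    """Klein-Nishina dsigma/dOmega per electron, in units of r_e^2,
    for an incident photon of energy e_mev scattering by angle theta."""
    k = e_mev / 0.511                     # photon energy / (m_e c^2)
    ratio = 1.0 / (1.0 + k * (1.0 - math.cos(theta)))   # E'/E (Compton)
    return 0.5 * ratio * ratio * (ratio + 1.0 / ratio
                                  - math.sin(theta) ** 2)
```

Two sanity checks: at theta = 0 the value is exactly 1 (i.e. r_e^2), and in the low-energy limit the formula reduces to the Thomson cross section (1 + cos^2(theta))/2; at MeV energies the distribution becomes strongly forward-peaked.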
Efficient, automated Monte Carlo methods for radiation transport
Kong Rong; Ambrose, Martin [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Spanier, Jerome [Claremont Graduate University, 150 E. 10th Street, Claremont, CA 91711 (United States); Beckman Laser Institute and Medical Clinic, University of California, 1002 Health Science Road E., Irvine, CA 92612 (United States)], E-mail: jspanier@uci.edu
2008-11-20
Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k+1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. If still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed.
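The stage-to-stage learning idea can be illustrated with a toy two-stage estimator: stage 1 samples uniformly and learns a piecewise-constant importance density from the observed integrand; stage 2 samples from that density with importance weights. This is a deliberately simplified illustration of "information from stage k alters the sampling in stage k+1", not the paper's correlated-sampling algorithm; all bin counts and the test integrand are invented.

```python
import math, random

def staged_estimate(f, n_per_stage=5000, bins=20, seed=13):
    """Two-stage adaptive MC for I = int_0^1 f(x) dx."""
    rng = random.Random(seed)
    # Stage 1: uniform sampling; accumulate |f| mass per bin.
    mass = [0.0] * bins
    s1 = 0.0
    for _ in range(n_per_stage):
        x = rng.random()
        fx = f(x)
        s1 += fx
        mass[min(int(x * bins), bins - 1)] += abs(fx)
    stage1 = s1 / n_per_stage
    # Learned density, floored so no bin has zero probability.
    total = sum(mass)
    probs = [(m / total) * 0.9 + 0.1 / bins for m in mass]
    # Stage 2: sample a bin, then uniformly within it; weight f / pdf.
    s2 = 0.0
    for _ in range(n_per_stage):
        u, run, b = rng.random(), 0.0, bins - 1
        for i, p in enumerate(probs):
            run += p
            if u < run:
                b = i
                break
        x = (b + rng.random()) / bins
        s2 += f(x) / (probs[b] * bins)    # piecewise-constant pdf value
    return stage1, s2 / n_per_stage

f_peaked = lambda x: 1.0 / (0.01 + x)    # sharply peaked near x = 0
exact = math.log(101.0)                  # int_0^1 f = ln(1.01/0.01)
crude, adaptive = staged_estimate(f_peaked)
```

Both stages are unbiased; for this peaked integrand the learned density cuts the variance of stage 2 well below the crude stage-1 estimate. The paper's stages additionally correlate successive solutions, which is what yields geometric convergence.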
Disorder-Enhanced Transport in Photonic Quasicrystals
NASA Astrophysics Data System (ADS)
Levi, Liad; Rechtsman, Mikael; Freedman, Barak; Schwartz, Tal; Manela, Ofer; Segev, Mordechai
2011-06-01
Quasicrystals are aperiodic structures with rotational symmetries forbidden to conventional periodic crystals; examples of quasicrystals can be found in aluminum alloys, polymers, and even ancient Islamic art. Here, we present direct experimental observation of disorder-enhanced wave transport in quasicrystals, which contrasts directly with the characteristic suppression of transport by disorder. Our experiments are carried out in photonic quasicrystals, where we find that increasing disorder leads to enhanced expansion of the beam propagating through the medium. By further increasing the disorder, we observe that the beam progresses through a regime of diffusive-like transport until it finally transitions to Anderson localization and the suppression of transport. We study this fundamental phenomenon and elucidate its origins by relating it to the basic properties of quasicrystalline media in the presence of disorder.
Thermoelectric transport perpendicular to thin-film heterostructures calculated using the Monte Carlo technique
The Monte Carlo technique is used to calculate the electrical as well as thermoelectric transport properties of thin-film heterostructures. The transition between ballistic thermionic transport and fully diffusive thermoelectric transport is also described.
Karim Karoui, Mohamed [Faculte des Sciences de Monastir, Avenue de l'environnement 5019 Monastir -Tunisia (Tunisia); Kharrati, Hedi [Ecole Superieure des Sciences et Techniques de la Sante de Monastir, Avenue Avicenne 5000 Monastir (Tunisia)
2013-07-15
Purpose: This paper presents the results of a series of calculations to determine buildup factors for ordinary concrete, baryte concrete, lead, steel, and iron in broad-beam geometry for photon energies from 0.125 to 25.125 MeV at 0.250 MeV intervals. Methods: The Monte Carlo N-Particle (MCNP) radiation transport computer code was used to determine the buildup factors for the studied shielding materials. Results: Broad-beam transmission of the primary beam was computed from the buildup factor data for nine published megavoltage photon beam spectra ranging from 4 to 25 MV in nominal energy, representing linacs made by the three major manufacturers. The first tenth-value layer and the equilibrium tenth-value layer were calculated from the broad-beam transmission for these nine primary megavoltage photon beam spectra. Conclusions: The results, compared with published data, show the ability of these buildup factor data to predict shielding transmission curves for the primary radiation beam. The buildup factor data can therefore be combined with primary, scatter, and leakage x-ray spectra to compute broad-beam transmission for barriers in radiotherapy x-ray shielding facilities.
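Once buildup factors are tabulated, broad-beam transmission and tenth-value layers follow by simple numerical root-finding. The sketch below uses a Berger-form buildup factor with made-up coefficients (`mu_per_cm`, `a`, `b` are illustrative assumptions, not the paper's fitted data):

```python
import math

def broad_beam_transmission(x_cm, mu_per_cm, a, b):
    """Broad-beam transmission T(x) = B(x) * exp(-mu*x), with a
    Berger-form buildup factor B = 1 + a*mu*x*exp(b*mu*x).
    Coefficients here are illustrative, not fitted shielding data."""
    mx = mu_per_cm * x_cm
    buildup = 1.0 + a * mx * math.exp(b * mx)
    return buildup * math.exp(-mx)

def tenth_value_layer(mu_per_cm, a, b, lo=0.0, hi=500.0):
    """First tenth-value layer: thickness where T drops to 0.1 (bisection)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if broad_beam_transmission(mid, mu_per_cm, a, b) > 0.1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, `tenth_value_layer(0.05, 1.0, 0.05)` returns the thickness at which this illustrative broad beam is attenuated to 10%.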
Neutron streaming Monte Carlo radiation transport code MORSE-CG
Halley, A.M.; Miller, W.H.
1986-11-01
Calculations have been performed using the Monte Carlo code MORSE-CG to determine the neutron streaming through various straight and stepped gaps between radiation-shield sectors in the conceptual tokamak fusion power plant design STARFIRE. This design calls for "pie-shaped" radiation shields with gaps between segments. It is apparent that some type of offset, or stepped-gap, configuration will be necessary to reduce neutron streaming through these gaps. To evaluate this streaming problem, a MORSE-to-MORSE coupling technique was used, consisting of two separate transport calculations which together defined the entire transport problem. The results quantify the effectiveness of various gap configurations in reducing radiation streaming.
Response of thermoluminescent dosimeters to photons simulated with the Monte Carlo method
NASA Astrophysics Data System (ADS)
Moralles, M.; Guimarães, C. C.; Okuno, E.
2005-06-01
Personal monitors composed of thermoluminescent dosimeters (TLDs) made of natural fluorite (CaF2:NaCl) and lithium fluoride (Harshaw TLD-100) were exposed to gamma and X rays of different qualities. The GEANT4 radiation transport Monte Carlo toolkit was employed to calculate the energy deposition profile with depth in the TLDs. X-ray spectra of the ISO/4037-1 narrow-spectrum series, with peak voltage (kVp) values in the range 20-300 kV, were obtained by simulating a Philips MG-450 X-ray tube with the recommended filters. A realistic photon distribution of a 60Co radiotherapy source was taken from Monte Carlo simulation results found in the literature. Comparison between simulated and experimental results revealed that the attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account, while this effect is negligible for lithium fluoride. Differences between results obtained by heating the dosimeter from the irradiated side and from the opposite side allowed determination of the light attenuation coefficient for CaF2:NaCl (mass proportion 60:40) as 2.2 mm⁻¹.
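The light-attenuation correction described for the fluorite dosimeter can be sketched numerically. The exponential dose-deposition profile, chip thickness, and dose attenuation coefficient below are illustrative assumptions; only the light attenuation coefficient k = 2.2 mm⁻¹ comes from the abstract.

```python
import math

def tl_light_yield(k_per_mm, thickness_mm, mu_per_mm, from_front=True, n=10000):
    """Relative light collected in TLD readout (midpoint-rule integral).

    The trapped-charge density is taken proportional to the deposited
    dose, modeled here as exp(-mu*z) with depth z (an assumption); light
    emitted at depth z is attenuated by exp(-k*d), where d is the
    distance to the readout face."""
    dz = thickness_mm / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        dose = math.exp(-mu_per_mm * z)
        d = z if from_front else (thickness_mm - z)
        total += dose * math.exp(-k_per_mm * d) * dz
    return total
```

With a front-loaded dose profile, reading from the irradiated side collects more light than reading from the opposite side, which is why the two headings disagree and the coefficient can be extracted from their difference.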
grmonty: A MONTE CARLO CODE FOR RELATIVISTIC RADIATIVE TRANSPORT
Dolence, Joshua C.; Gammie, Charles F.; Leung, Po Kin [Astronomy Department, University of Illinois, Urbana, IL 61801 (United States); Moscibrodzka, Monika [Physics Department, University of Illinois, Urbana, IL 61801 (United States)], E-mail: dolence2@astro.illinois.edu
2009-10-01
We describe a Monte Carlo radiative transport code intended for calculating spectra of hot, optically thin plasmas in full general relativity. The version we describe here is designed to model hot accretion flows in the Kerr metric and therefore incorporates synchrotron emission and absorption, and Compton scattering. The code can be readily generalized, however, to account for other radiative processes and an arbitrary spacetime. We describe a suite of test problems, and demonstrate the expected N^{-1/2} convergence rate, where N is the number of Monte Carlo samples. Finally, we illustrate the capabilities of the code with a model calculation, a spectrum of the slowly accreting black hole Sgr A* based on data provided by a numerical general relativistic MHD model of the accreting plasma.
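The quoted N^{-1/2} convergence rate is easy to check empirically on any Monte Carlo estimate: quadrupling the sample size should roughly halve the RMS error. A minimal sketch with an assumed toy integrand:

```python
import random

def rms_error(n_samples, n_batches, seed):
    """RMS error of a plain MC estimate of E[U^2] = 1/3, U ~ Uniform(0,1),
    measured over independent batches of fixed sample size."""
    rng = random.Random(seed)
    exact = 1.0 / 3.0
    sq_err = 0.0
    for _ in range(n_batches):
        s = sum(rng.random() ** 2 for _ in range(n_samples))
        sq_err += (s / n_samples - exact) ** 2
    return (sq_err / n_batches) ** 0.5
```

Quadrupling N from 500 to 2000 should shrink the RMS error by a factor close to 2, the signature of N^{-1/2} convergence.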
Deterministic photon transport calculations in general geometry for external beam radiation therapy.
Williams, M L; Ilas, D; Sajo, E; Jones, D B; Watkins, K E
2003-12-01
A deterministic method is described for performing three-dimensional (3D) photon transport calculations of a LINAC head and phantom/patient geometry to obtain dose distributions for therapy planning. The space-, energy-, and direction-dependent photon flux density is obtained by numerically solving the Boltzmann equation in general 3D geometry using the method of characteristics. The deterministic transport calculations use ray-tracing routines similar to those found in Monte Carlo (MC) codes. A special treatment is developed to better represent the impact of scattering from accelerator head components. Equations are presented for computing the water kerma distribution due to the uncollided and collided photon flux density fields in the patient region. Kerma results obtained from the deterministic computation are compared to Monte Carlo values for a variety of source spectra and field sizes. The agreement for kerma values in the beam is usually within the MC uncertainties. It is concluded that the deterministic method is a rigorous, first-principles approach that could provide a superior alternative to Monte Carlo calculations for some types of problems. However, additional development is needed to provide capability for 3D electron transport calculations. PMID:14713085
Controlling photon transport in the single-photon weak-coupling regime of cavity optomechanics
NASA Astrophysics Data System (ADS)
Zhang, Wen-Zhao; Cheng, Jiong; Liu, Jing-Yi; Zhou, Ling
2015-06-01
We study the photon-statistics properties of few-photon transport in an optomechanical system where an optomechanical cavity couples to two empty cavities. By analytically deriving the one- and two-photon currents in terms of a zero-time-delay second-order correlation function, we show that photon blockade can be achieved in both the single-photon strong-coupling regime and the single-photon weak-coupling regime, due to the nonlinear interaction and multipath interference. Furthermore, our system can be applied as a quantum optical diode, a single-photon source, and a quantum optical capacitor. It is shown that these photon-transport control devices based on photon antibunching do not require the stringent single-photon strong-coupling condition. Our results provide a promising platform for the coherent manipulation of optomechanics, with potential applications in quantum information processing and quantum circuit realization.
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce.
Pratx, Guillem; Xing, Lei
2011-12-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258× speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
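The Map/Reduce split described above — map tasks generating independent photon histories, a reduce task merging tallies — can be mimicked in a few lines. The toy "physics" (a fixed absorption probability) and the seed-per-node scheme are assumptions for illustration, not the MC321/Hadoop code:

```python
import random
from functools import reduce

def map_task(seed, n_photons=5000, absorb_prob=0.3):
    """Map task: simulate n photon histories with an independent RNG
    stream and return a partial tally (absorbed, total). The per-photon
    absorption model is a toy stand-in for the real transport."""
    rng = random.Random(seed)
    absorbed = sum(1 for _ in range(n_photons) if rng.random() < absorb_prob)
    return (absorbed, n_photons)

def reduce_task(t1, t2):
    """Reduce task: merge two partial tallies."""
    return (t1[0] + t2[0], t1[1] + t2[1])

# 8 "nodes", each with its own seed, standing in for a job's input splits
partials = [map_task(seed) for seed in range(8)]
absorbed, total = reduce(reduce_task, partials)
absorbed_fraction = absorbed / total
```

Because the reduce step is a commutative, associative merge, partial tallies can arrive in any order — the property that makes the real job tolerant of node loss and task re-execution.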
NASA Astrophysics Data System (ADS)
Capitani, G. P.; De Sanctis, E.; Di Giacomo, P.; Guaraldo, C.; Lucherini, V.; Polli, E.; Reolon, A. R.; Bellini, V.
1982-12-01
A Monte Carlo computing program has been used to calculate the photon spectrum and emittance from positron annihilation and bremsstrahlung. The positron energy spread, the energy loss, and the multiple scattering in the annihilation target have been taken into account. Moreover, the positron emittance, the positron incidence angle, and the finite angular acceptance of the photon collimation channel have been explicitly considered. Experimental results, obtained with a pair spectrometer and a beam profile monitor, are in good agreement with the calculations. In particular, it is shown that the positron emittance has a crucial influence on the shape of the photon spectrum and on the absolute value of the photon flux.
Monte Carlo Neutrino Transport in Post-Merger Disks
NASA Astrophysics Data System (ADS)
Richers, Sherwood; Kasen, Daniel; O'Connor, Evan; Fernandez, Rodrigo; Ott, Christian
2015-04-01
The merger of two neutron stars, or of a neutron star and a black hole, is the prime candidate model for short-duration gamma-ray bursts and the production of r-process elements. Neutrinos can carry away energy and change the ratio of neutrons to protons, in turn affecting the appearance and dynamics of the burst and the types of elements formed from the outflow. We simulate Monte Carlo transport of neutrinos through the accretion disk surrounding the post-merger black hole or hyper-massive neutron star to explore the influence of neutrinos on the disk composition and temperature profile.
Adaptively Learning an Importance Function Using Transport Constrained Monte Carlo
Booth, T.E.
1998-06-22
It is well known that a Monte Carlo estimate can be obtained with zero variance if an exact importance function for the estimate is known. There are many ways one might iteratively seek an ever more exact importance function. This paper describes a method that obtains ever more exact importance functions which empirically produce an error that drops exponentially with computer time. The method described herein constrains the importance function to satisfy the (adjoint) Boltzmann transport equation. This constraint is provided by using the known form of the solution, usually referred to as the Case eigenfunction solution.
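The zero-variance premise can be demonstrated on a one-line integral: sampling from a density proportional to the contribution makes every weighted score identical. This toy example is for illustration only and involves none of the Case-eigenfunction machinery:

```python
import math
import random

def analogue_and_zero_variance(n=4000, seed=11):
    """Estimate I = E[g(U)] with g(x) = x, U ~ Uniform(0,1)  (I = 1/2).

    Sampling instead from the exact-importance density q*(x) = 2x
    (proportional to g times the original density) makes every weighted
    score g(x)/q*(x) = 1/2 exactly, so the variance is identically zero."""
    rng = random.Random(seed)
    analogue = [rng.random() for _ in range(n)]     # analogue score g(x) = x
    zv = []
    for _ in range(n):
        x = math.sqrt(rng.random())                 # inverse CDF of q*(x) = 2x
        zv.append(x / (2.0 * x))                    # weighted score, always 1/2
    mean_a = sum(analogue) / n
    mean_z = sum(zv) / n
    var_z = sum((s - mean_z) ** 2 for s in zv) / n
    return mean_a, mean_z, var_z
```

The analogue estimate fluctuates around 1/2; the importance-sampled estimate is 1/2 with zero sample variance, the limit the iterative learning aims at.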
Current status of the PSG Monte Carlo neutron transport code
Leppaenen, J. [VTT Technical Research Centre of Finland, Laempoemiehenkuja 3, Espoo, FI-02044 VTT (Finland)
2006-07-01
PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX-fuelled VENUS-2 reactor dosimetry benchmark. (authors)
Monte Carlo discretization of general relativistic radiation transport
Burkhard Zink
2012-12-11
An indirect, hybrid Monte Carlo discretization of general relativistic kinetic theory suitable for the development of numerical schemes for radiation transport is presented. The discretization is based on surface flux estimators obtained from a local decomposition of the distribution function, and can handle optically thick regions by means of formal solutions within each cell. Furthermore, the scheme is designed for parallel implementation, and it admits the use of adaptive techniques by virtue of leaving all probability density functions unspecified. Some considerations for numerical uses of the scheme are discussed.
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
NASA Astrophysics Data System (ADS)
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
Monte Carlo simulation of electron transport in degenerate and inhomogeneous semiconductors
An algorithm for including the exclusion principle in Monte Carlo simulations is described; it has significant advantages in implementing the scattering rate. The ensemble Monte Carlo (MC) simulation is accepted as a powerful numerical technique.
Control of single-photon transport in a one-dimensional waveguide by another single photon
Wei-Bin Yan; Heng Fan
2014-08-10
We study controllable single-photon transport in a one-dimensional (1D) waveguide with a nonlinear dispersion relation, coupled to a three-level emitter in a cascade configuration. An extra cavity field is introduced to drive one of the level transitions of the emitter. In the resonant case, when the extra cavity contains no photons the input single photon is reflected, and when the cavity contains one photon the input single photon is fully transmitted. In the off-resonant case, the single-photon transport can also be controlled by the parameters of the cavity. Therefore, we have shown that single-photon transport can be controlled by an extra cavity field.
Composition PDF/photon Monte Carlo modeling of moderately sooting turbulent jet flames
Mehta, R.S.; Haworth, D.C.; Modest, M.F. [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA 16802 (United States)
2010-05-15
A comprehensive model for luminous turbulent flames is presented. The model features detailed chemistry, radiation and soot models and state-of-the-art closures for turbulence-chemistry interactions and turbulence-radiation interactions. A transported probability density function (PDF) method is used to capture the effects of turbulent fluctuations in composition and temperature. The PDF method is extended to include soot formation. Spectral gas and soot radiation is modeled using a (particle-based) photon Monte Carlo method coupled with the PDF method, thereby capturing both emission and absorption turbulence-radiation interactions. An important element of this work is that the gas-phase chemistry and soot models that have been thoroughly validated across a wide range of laminar flames are used in turbulent flame simulations without modification. Six turbulent jet flames are simulated with Reynolds numbers varying from 6700 to 15,000, two fuel types (pure ethylene, 90% methane-10% ethylene blend) and different oxygen concentrations in the oxidizer stream (from 21% O{sub 2} to 55% O{sub 2}). All simulations are carried out with a single set of physical and numerical parameters (model constants). Uniformly good agreement between measured and computed mean temperatures, mean soot volume fractions and (where available) radiative fluxes is found across all flames. This demonstrates that with the combination of a systematic approach and state-of-the-art physical models and numerical algorithms, it is possible to simulate a broad range of luminous turbulent flames with a single model. (author)
Electron transport in magnetrons by a posteriori Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Costin, C.; Minea, T. M.; Popa, G.
2014-02-01
Electron transport across magnetic barriers is crucial in all magnetized plasmas. It governs not only the plasma parameters in the volume, but also the fluxes of charged particles towards the electrodes and walls. It is particularly important in high-power impulse magnetron sputtering (HiPIMS) reactors, influencing the quality of the deposited thin films, since this type of discharge is characterized by an increased ionization fraction of the sputtered material. Transport coefficients of electron clouds released both from the cathode and from several locations in the discharge volume are calculated for a HiPIMS discharge with pre-ionization operated in argon at 0.67 Pa and for very short pulses (a few µs) using the a posteriori Monte Carlo simulation technique. For this type of discharge, electron transport is characterized by strong temporal and spatial dependence. Both the drift velocity and the diffusion coefficient depend on the release position of the electron cloud. They exhibit minimum values at the centre of the race-track for the secondary electrons released from the cathode. The diffusion coefficient of the same electrons increases by a factor of 2 to 4 when the cathode voltage is doubled, in the first 1.5 µs of the pulse. These parameters are discussed with respect to empirical Bohm diffusion.
bhlight: General Relativistic Radiation Magnetohydrodynamics with Monte Carlo Transport
NASA Astrophysics Data System (ADS)
Ryan, B. R.; Dolence, J. C.; Gammie, C. F.
2015-07-01
We present bhlight, a numerical scheme for solving the equations of general relativistic radiation magnetohydrodynamics using a direct Monte Carlo solution of the frequency-dependent radiative transport equation. bhlight is designed to evolve black hole accretion flows at intermediate accretion rate, in the regime between the classical radiatively efficient disk and the radiatively inefficient accretion flow (RIAF), in which global radiative effects play a sub-dominant but non-negligible role in disk dynamics. We describe the governing equations, numerical method, idiosyncrasies of our implementation, and a suite of test and convergence results. We also describe example applications to radiative Bondi accretion and to a slowly accreting Kerr black hole in axisymmetry.
Scoring methods for implicit Monte Carlo radiation transport
Edwards, A.L.
1981-01-01
Analytical and numerical tests were made of a number of possible methods for scoring the energy exchange between radiation and matter in the implicit Monte Carlo (IMC) radiation transport scheme of Fleck and Cummings. The interactions considered were effective absorption, elastic scattering, and Compton scattering. The scoring methods tested were limited to simple combinations of analogue, linear expected value, and exponential expected value scoring. Only two scoring methods were found that produced the same results as a pure analogue method. These are a combination of exponential expected value absorption and deposition and analogue Compton scattering of the particle, with either linear expected value Compton deposition or analogue Compton deposition. In both methods, the collision distance is based on the total scattering cross section.
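The contrast between analogue and expected-value (implicit) absorption scoring can be seen in a one-dimensional toy slab. The forward-only geometry and all parameters here are assumptions for illustration, not Fleck and Cummings' IMC scheme; the comparison shows the two estimators agreeing in mean while the expected-value scores carry less variance:

```python
import math
import random

def absorbed_fraction(n, tau, p_abs, implicit, seed):
    """Fraction of photon energy absorbed in a forward-only slab of
    optical thickness tau, with absorption probability p_abs per collision.

    Analogue: the full particle weight is scored when an absorption is
    sampled. Implicit (expected-value) capture: a fraction p_abs of the
    weight is deposited at every collision and the particle survives
    with reduced weight. Both are unbiased; the analytic answer for
    this toy geometry is 1 - exp(-p_abs * tau)."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n):
        pos, w, dep = 0.0, 1.0, 0.0
        while True:
            pos += -math.log(1.0 - rng.random())   # free path ~ Exp(1)
            if pos >= tau:
                break                              # photon escapes the slab
            if implicit:
                dep += w * p_abs                   # expected-value deposit
                w *= 1.0 - p_abs
            elif rng.random() < p_abs:
                dep = w                            # analogue absorption
                break
        scores.append(dep)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / n
    return mean, var
```

Both modes reproduce the analytic absorbed fraction, but the expected-value scores are far less dispersed per history, which is the usual motivation for non-analogue scoring.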
Monte Carlo simulation of charge transport in amorphous chalcogenides
NASA Astrophysics Data System (ADS)
Piccinini, E.; Buscemi, F.; Rudan, M.; Brunetti, R.; Jacoboni, C.
2009-11-01
The I(V) characteristics of amorphous GST devices show a peculiar S-shaped behavior, that is, a swift rise of the current together with a voltage snap-back. This type of characteristic has led to growing research interest in view of the future application of such materials to the manufacturing of phase-change memory devices. In this work we adopt a generalization of the variable-range hopping theory to simulate charge transport in a layer of amorphous Ge2Sb2Te5 sandwiched between two planar metallic electrodes. The numerical implementation of a current-driven Monte Carlo code allows one both to provide a complete microscopic particle picture of electrical conduction in the device and to better analyze the mechanisms governing the snap-back effect.
Uncertainty associated with Monte Carlo radiation transport in radionuclide metrology
NASA Astrophysics Data System (ADS)
Bochud, François O.; Laedermann, Jean-Pascal; Sima, Octavian
2015-06-01
In radionuclide metrology, Monte Carlo (MC) simulation is widely used to compute parameters associated with primary measurements or calibration factors. Although MC methods are used to estimate uncertainties, the uncertainty associated with radiation transport in MC calculations is usually difficult to estimate. Counting statistics is the most obvious component of MC uncertainty and has to be checked carefully, particularly when variance reduction is used. However, in most cases fluctuations associated with counting statistics can be reduced using sufficient computing power. Cross-section data have intrinsic uncertainties that induce correlations when apparently independent codes are compared. Their effect on the uncertainty of the estimated parameter is difficult to determine and varies widely from case to case. Finally, the most significant uncertainty component for radionuclide applications is usually that associated with the detector geometry. Recent 2D and 3D x-ray imaging tools may be utilized, but comparison with experimental data as well as adjustments of parameters are usually inevitable.
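The counting-statistics component mentioned above is routinely estimated from the scores themselves; comparing the naive standard error with a batch-means estimate is a common sanity check. The batching scheme and the stand-in scores below are a generic sketch, not tied to any particular code:

```python
import random

def tally_with_uncertainty(samples, n_batches=20):
    """Tally mean with two statistical-uncertainty estimates: the
    standard error from individual scores and from batch means.
    Agreement between the two is a routine check on MC counting
    statistics (it can fail when scores are correlated)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    se_naive = (var / n) ** 0.5
    m = n // n_batches
    bmeans = [sum(samples[i * m:(i + 1) * m]) / m for i in range(n_batches)]
    bmean = sum(bmeans) / n_batches
    bvar = sum((b - bmean) ** 2 for b in bmeans) / (n_batches - 1)
    se_batch = (bvar / n_batches) ** 0.5
    return mean, se_naive, se_batch

rng = random.Random(5)
scores = [rng.random() for _ in range(10000)]   # stand-in MC scores
mean, se_naive, se_batch = tally_with_uncertainty(scores)
```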
Pozzi, Sara A [ORNL; Downar, Thomas J [ORNL; Padovani, Enrico [Nuclear Engineering Department Politecnico di Milano, Milan, Italy; Clarke, Shaun D [ORNL
2006-01-01
This work illustrates a methodology based on photon interrogation and coincidence counting for determining the characteristics of fissile material. The feasibility of the proposed methods was demonstrated using a Monte Carlo code system to simulate the full statistics of the neutron and photon field generated by the photon interrogation of fissile and non-fissile materials. Time correlation functions between detectors were simulated for photon beam-on and photon beam-off operation. In the latter case, the correlation signal is obtained via delayed neutrons from photofission, which induce further fission chains in the nuclear material. An analysis methodology was demonstrated based on features selected from the simulated correlation functions and on the use of artificial neural networks. We show that the methodology can reliably differentiate between highly enriched uranium and plutonium. Furthermore, the mass of the material can be determined with a relative error of about 12%. Keywords: MCNP, MCNP-PoliMi, Artificial neural network, Correlation measurement, Photofission
Photon-mediated electron transport in hybrid circuit-QED
Neill Lambert; Christian Flindt; Franco Nori
2013-07-25
We investigate photon-mediated transport processes in a hybrid circuit-QED structure consisting of two double quantum dots coupled to a common microwave cavity. Under suitable resonance conditions, electron transport in one double quantum dot is facilitated by the transport in the other dot via photon-mediated processes through the cavity. We calculate the average current in the quantum dots, the mean cavity photon occupation, and the current cross-correlations with both a full numerical simulation and a recursive perturbation scheme that allows us to include the influence of the cavity order-by-order in the couplings between the cavity and the quantum dot systems. We can then clearly identify the photon-mediated transport processes.
The role of photonics and electronics for Terabit optical transport
William Shieh; A. Al Amin; Qi Yang
2010-01-01
The growth of the Internet has relentlessly driven bandwidth demand. In this presentation, we discuss the role of photonics and electronics in the Terabit optical transport poised to emerge within the next decade.
Phonon transport analysis of semiconductor nanocomposites using monte carlo simulations
NASA Astrophysics Data System (ADS)
Malladi, Mayank
Nanocomposites are composite materials which incorporate nanosized particles, platelets or fibers. The addition of nanosized phases into the bulk matrix can lead to significantly different material properties compared to their macrocomposite counterparts. For nanocomposites, thermal conductivity is one of the most important physical properties. Manipulation and control of thermal conductivity in nanocomposites have impacted a variety of applications. In particular, it has been shown that the phonon thermal conductivity can be reduced significantly in nanocomposites due to the increase in phonon interface scattering while the electrical conductivity can be maintained. This extraordinary property of nanocomposites has been used to enhance the energy conversion efficiency of the thermoelectric devices which is proportional to the ratio of electrical to thermal conductivity. This thesis investigates phonon transport and thermal conductivity in Si/Ge semiconductor nanocomposites through numerical analysis. The Boltzmann transport equation (BTE) is adopted for description of phonon thermal transport in the nanocomposites. The BTE employs the particle-like nature of phonons to model heat transfer which accounts for both ballistic and diffusive transport phenomenon. Due to the implementation complexity and computational cost involved, the phonon BTE is difficult to solve in its most generic form. Gray media (frequency independent phonons) is often assumed in the numerical solution of BTE using conventional methods such as finite volume and discrete ordinates methods. This thesis solves the BTE using Monte Carlo (MC) simulation technique which is more convenient and efficient when non-gray media (frequency dependent phonons) is considered. In the MC simulation, phonons are displaced inside the computational domain under the various boundary conditions and scattering effects. 
In this work, under the relaxation-time approximation, thermal transport in the nanocomposites is computed using both gray-media and non-gray-media approaches. The non-gray-media simulations take into consideration the dispersion and polarization effects of phonon transport. The effects of volume fraction, size, shape, and distribution of the nanowire fillers on heat flow, and hence on thermal conductivity, are studied. In addition, the computational performance of the gray and non-gray media approaches is compared.
Photon transport in a dissipative chain of nonlinear cavities
NASA Astrophysics Data System (ADS)
Biella, Alberto; Mazza, Leonardo; Carusotto, Iacopo; Rossini, Davide; Fazio, Rosario
2015-05-01
By means of numerical simulations and the input-output formalism, we study photon transport through a chain of coupled nonlinear optical cavities subject to uniform dissipation. Photons are injected from one end of the chain by means of a coherent source. The propagation through the array of cavities is sensitive to the interplay between the photon hopping strength and the local nonlinearity in each cavity. We characterize photon transport by studying the populations and the photon correlations as a function of the cavity position. When complemented with input-output theory, these quantities provide direct information about photon transmission through the system. The position of single-photon and multiphoton resonances directly reflects the structure of the many-body energy levels. This shows how a study of transport along a coupled cavity array can provide rich information about the strongly correlated (many-body) states of light, even in the presence of dissipation. The numerical algorithm we use, based on the time-evolving block decimation scheme adapted to mixed states, allows us to simulate large arrays (up to 60 cavities). The scaling of photon transmission with the number of cavities does depend on the structure of the many-body photon states inside the array.
SIMIND Monte Carlo simulation of a single photon emission CT
Bahreyni Toossi, M. T.; Islamian, J. Pirayesh; Momennezhad, M.; Ljungberg, M.; Naseri, S. H.
2010-01-01
In this study, we simulated a Siemens E.CAM SPECT system using the SIMIND Monte Carlo program to acquire its experimental characterization in terms of energy resolution, sensitivity, spatial resolution and imaging of phantoms using 99mTc. The experimental and simulation data for SPECT imaging were acquired from a point source and a Jaszczak phantom. Verification of the simulation was done by comparing two sets of images and related data obtained from the actual and simulated systems. Image quality was assessed by comparing image contrast and resolution. Simulated and measured energy spectra (with or without a collimator) and spatial resolution from point sources in air were compared. The resulting energy spectra show similar peaks for the 99mTc gamma energy at 140 keV. The FWHM was calculated as 14.01 keV for the simulation and 13.80 keV for the experimental data, corresponding to energy resolutions of 10.01% and 9.86%, respectively, compared to the specified 9.9% for both systems. Sensitivities of the real and virtual gamma cameras were calculated as 85.11 and 85.39 cps/MBq, respectively. The energy spectra of both simulated and real gamma cameras were matched. Images obtained from the Jaszczak phantom, experimentally and by simulation, showed similarity in contrast and resolution. SIMIND Monte Carlo could successfully simulate the Siemens E.CAM gamma camera. The results validate the use of the simulated system for further investigation, including modification, planning, and developing a SPECT system to improve the quality of images. PMID:20177569
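The energy resolution figures quoted above follow directly from the FWHM-to-photopeak ratio; a one-line check (the function name is ours, not part of SIMIND):

```python
def energy_resolution(fwhm_kev, photopeak_kev=140.0):
    """Percent energy resolution at the photopeak: 100 * FWHM / E_peak."""
    return 100.0 * fwhm_kev / photopeak_kev

print(round(energy_resolution(14.01), 2))  # simulated spectrum -> 10.01
print(round(energy_resolution(13.80), 2))  # measured spectrum  -> 9.86
```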
Berg, Eric; Roncali, Emilie; Cherry, Simon R.
2015-01-01
Achieving excellent timing resolution in gamma ray detectors is crucial in several applications such as medical imaging with time-of-flight positron emission tomography (TOF-PET). Although many factors impact the overall system timing resolution, the statistical nature of scintillation light, including photon production and transport in the crystal to the photodetector, is typically the limiting factor for modern scintillation detectors. In this study, we investigated the impact of surface treatment, in particular, roughening select areas of otherwise polished crystals, on light transport and timing resolution. A custom Monte Carlo photon tracking tool was used to gain insight into changes in light collection and timing resolution that were observed experimentally: select roughening configurations increased the light collection up to 25% and improved timing resolution by 15% compared to crystals with all polished surfaces. Simulations showed that partial surface roughening caused a greater number of photons to be reflected towards the photodetector and increased the initial rate of photoelectron production. This study provides a simple method to improve timing resolution and light collection in scintillator-based gamma ray detectors, a topic of high importance in the field of TOF-PET. Additionally, we demonstrated the utility of our Monte Carlo simulation tool to accurately predict the effect of altering crystal surfaces on light collection and timing resolution. PMID:26114040
Resonance fluorescence near a photonic band edge: Dressed-state Monte Carlo wave-function approach
NASA Astrophysics Data System (ADS)
Quang, Tran; John, Sajeev
1997-11-01
We introduce a dressed-state Monte Carlo wave-function technique to describe resonance fluorescence in a broad class of non-Markovian reservoirs with strong atom-reservoir interaction. The method recaptures photon localization effects which are beyond the Born and Markovian approximations, and describes the influence of the driving field on the atom-reservoir interaction. Using this approach, we predict a number of fundamentally new features in resonance fluorescence near the edge of a photonic band gap. In particular, the atomic population exhibits inversion for moderate applied field intensity. For a low external field intensity, the atomic system retains a long-time memory of its initial state.
Robust light transport in non-Hermitian photonic lattices
Stefano Longhi; Davide Gatti; Giuseppe Della Valle
2015-03-30
Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport that is rather insensitive to disorder in the structure. The idea of non-Hermitian robust transport is exemplified in the simplest case of an 'imaginary' gauge field for photons using an engineered coupled-resonator optical waveguide (CROW) structure.
Monte Carlo simulation of MOSFET detectors for high-energy photon beams using the PENELOPE code
Vanessa Panettieri; Maria Amor Duch; Núria Jornet; Mercè Ginjaume; Pablo Carrasco; Andreu Badal; Xavier Ortega; Montserrat Ribas
2007-01-01
The aim of this work was the Monte Carlo (MC) simulation of the response of commercially available dosimeters based on metal oxide semiconductor field effect transistors (MOSFETs) for radiotherapeutic photon beams using the PENELOPE code. The studied Thomson & Nielsen TN-502-RD MOSFETs have a very small sensitive area of 0.04 mm2 and a thickness of 0.5 µm, which is placed on a
Direct photon emission from hadronic sources: Hydrodynamics vs. Transport theory
Bjoern Baeuchle; Marcus Bleicher
2009-03-08
Direct photon emission in heavy-ion collisions is calculated within the relativistic microscopic transport model UrQMD. We compare the results from the pure transport calculation to a hybrid-model calculation, where the high-density part of the evolution is replaced by an ideal 3-dimensional fluid-dynamic calculation. The effects of viscosity, present in the transport model but neglected in ideal fluid dynamics, are examined. We study the contribution of different production channels and non-thermal collisions to the spectrum of direct photons. Detailed comparisons to the measurements by the WA98 collaboration are undertaken.
Status of the MORSE multigroup Monte Carlo radiation transport code
Emmett, M.B.
1993-06-01
There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the most well-known and has undergone extensive use for many years. MORSE-SGC was originally developed in about 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having much larger storage capacity, that aspect of SGC has become unnecessary. Both versions use data from multigroup cross-section libraries, although in somewhat different formats. MORSE-SGC is the version of MORSE that is part of the SCALE system, but it can also be run stand-alone. Both CGA and SGC use the Multiple Array System (MARS) geometry package. In the last six months the main focus of the work on these two versions has been on making them operational on workstations, in particular, the IBM RISC 6000 family. A new version of SCALE for workstations is being released to the Radiation Shielding Information Center (RSIC). MORSE-CGA, Version 2.0, is also being released to RSIC. Both SGC and CGA have undergone other revisions recently. This paper reports on the current status of the MORSE code system.
A Residual Monte Carlo Method for Spatially Discrete, Angularly Continuous Radiation Transport
Wollaeger, Ryan T. [Los Alamos National Laboratory; Densmore, Jeffery D. [Los Alamos National Laboratory
2012-06-19
Residual Monte Carlo provides exponential convergence of statistical error with respect to the number of particle histories. In the past, residual Monte Carlo has been applied to a variety of angularly discrete radiation-transport problems. Here, we apply residual Monte Carlo to spatially discrete, angularly continuous transport. By maintaining angular continuity, our method avoids the deficiencies of angular discretizations, such as ray effects. For planar geometry and step differencing, we use the corresponding integral transport equation to calculate an angularly independent residual from the scalar flux in each stage of residual Monte Carlo. We then demonstrate that the resulting residual Monte Carlo method does indeed converge exponentially to within machine precision of the exact step differenced solution.
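The per-stage geometric error reduction of residual Monte Carlo can be illustrated on a toy discrete problem phi = H phi + q, a stand-in for the step-differenced integral transport equation (the 2x2 operator and the sampling scheme below are illustrative assumptions, not the authors' method). Each stage estimates only the correction to the current iterate, so the absolute statistical error shrinks along with the residual:

```python
import random

def estimate_correction(H, q, x, n_walks, rng):
    """MC estimate of d solving d = H*d + r, where r = q + H*x - x is the
    residual of the current iterate x, via Neumann-series random walks with
    uniform transitions and survival probability 0.5."""
    n = len(q)
    r = [q[i] + sum(H[i][j] * x[j] for j in range(n)) - x[i] for i in range(n)]
    p_absorb = 0.5
    est = [0.0] * n
    for i in range(n):
        acc = 0.0
        for _ in range(n_walks):
            state, weight = i, 1.0
            while True:
                acc += weight * r[state]                 # collision estimator
                if rng.random() < p_absorb:
                    break
                nxt = rng.randrange(n)                   # uniform transition
                weight *= H[state][nxt] * n / (1.0 - p_absorb)
                state = nxt
        est[i] = acc / n_walks
    return est

def residual_mc(H, q, stages=5, n_walks=400, seed=0):
    """Stagewise residual MC: the statistical error of each stage scales with
    the residual magnitude, which itself shrinks stage after stage."""
    rng = random.Random(seed)
    x = [0.0] * len(q)
    for _ in range(stages):
        d = estimate_correction(H, q, x, n_walks, rng)
        x = [xi + di for xi, di in zip(x, d)]
    return x
```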
Leon E. Smith; Christopher J. Gesh; Richard T. Pagh; Erin A. Miller; Mark W. Shaver; Eric D. Ashbaker; Michael T. Batdorf; J. Edward Ellis; William R. Kaye; Ronald J. McConn; George H. Meriwether; Jennifer J. Ressler; Andrei B. Valsan; Todd A. Wareing
2008-01-01
Simulation is often used to predict the response of gamma-ray spectrometers in technology viability and comparative studies for homeland and national security scenarios. Candidate radiation transport methods generally fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are the most heavily used in the detection community and are particularly effective for calculating pulse-height spectra
Memory Bottlenecks and Memory Contention in Multi-Core Monte Carlo Transport Codes
NASA Astrophysics Data System (ADS)
Tramm, John R.; Siegel, Andrew R.
2014-06-01
We have extracted a kernel that executes only the most computationally expensive steps of the Monte Carlo particle transport algorithm - the calculation of macroscopic cross sections - in an effort to expose bottlenecks within multi-core, shared memory architectures.
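The expensive step the kernel isolates, the macroscopic cross-section lookup, amounts to an energy-grid binary search plus interpolation accumulated over nuclides; its irregular memory accesses are what stress the memory subsystem. A schematic sketch (data layout and names are illustrative assumptions, not the kernel's actual code):

```python
import bisect

def macro_xs(energy, energy_grid, micro_xs, number_densities):
    """Macroscopic cross section at `energy`: binary-search the energy grid,
    linearly interpolate each nuclide's microscopic cross section, and
    accumulate N_i * sigma_i(E) over nuclides."""
    i = bisect.bisect_right(energy_grid, energy) - 1
    i = max(0, min(i, len(energy_grid) - 2))   # clamp to a valid interval
    f = (energy - energy_grid[i]) / (energy_grid[i + 1] - energy_grid[i])
    total = 0.0
    for dens, xs in zip(number_densities, micro_xs):
        total += dens * (xs[i] + f * (xs[i + 1] - xs[i]))
    return total
```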
Peraud, Jean-Philippe Michel
We present a Monte Carlo method for obtaining solutions of the Boltzmann equation to describe phonon transport in micro- and nanoscale devices. The proposed method can resolve arbitrarily small signals (e.g., temperature ...
Photon transport in a dissipative chain of nonlinear cavities
Alberto Biella; Leonardo Mazza; Iacopo Carusotto; Davide Rossini; Rosario Fazio
2015-03-03
We analyze a chain of coupled nonlinear optical cavities driven by a coherent source of light localized at one end and subject to uniform dissipation. We characterize photon transport by studying the populations and the photon correlations as a function of position. When complemented with input-output theory, these quantities provide direct information about photon transmission through the system. The position of single- and multi-photon resonances directly reflects the structure of the many-body energy levels. This shows how a study of transport along a coupled cavity array can provide rich information about the strongly correlated (many-body) states of light even in the presence of dissipation. By means of a numerical algorithm based on the time-evolving block decimation scheme adapted to mixed states, we are able to simulate arrays of up to sixty cavities.
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.
A Monte Carlo study of radiation transport through multileaf collimators.
Kim, J O; Siebers, J V; Keall, P J; Arnfield, M R; Mohan, R
2001-12-01
Due to the significant increase in the number of monitor units used to deliver a dynamic IMRT treatment, the total MLC leakage (transmission plus scatter) can exceed 10% of the maximum in-field dose. To avoid dosimetric errors, this leakage must be accurately accounted for in the dose calculation and in the conversion of optimized intensity patterns to the MLC trajectories used for treatment delivery. In this study, we characterized the leaf end transmission and leakage radiation for Varian 80- and 120-leaf MLCs using Monte Carlo simulations. The complex geometry of the MLC, including the rounded leaf end, leaf edges (tongue-and-groove and offset notch), mounting slots, and holes, was modeled using MCNP4b. Studies were undertaken to determine the leakage as a function of field size, the components of the leakage, electron contamination, beam hardening, and leaf tip effects. The leakage radiation with the MLC configured to fully block the field was determined. Dose for 6 and 18 MV beams was calculated at 5 cm depth in a water phantom located at 95 cm SSD, and normalized to the dose for an open field. Dose components were scored separately for radiation transmitted through and scattered from the MLC. For the 80-leaf MLC at 6 MV, the average leakage dose is 1.6%, 1.7%, 1.8%, and 1.9% for 5 x 5, 10 x 10, 15 x 15, and 20 x 20 cm2 fields, respectively. For the 120-leaf MLC at 6 MV, the average leakage dose is 1.6%, 1.6%, 1.7%, and 1.9% for the same field sizes. Measured leakage values for the 120-leaf MLC agreed with calculated values to within 0.1% of the open field dose. The increased leakage with field size is attributed to MLC-scattered radiation. The fractional electron contamination for a blocked MLC field is greater than that for an open field. The MLC attenuation significantly affects the photon spectrum, resulting in an increase in percent depth dose at 6 MV; however, little effect is observed at 18 MV.
Both phantom scatter and the finite source size contribute to the leaf tip profile observed in phantom. The results of this paper can be applied to fluence-to-trajectory and trajectory-to-fluence calculations for IMRT. PMID:11797953
Hayakawa, Carole K.; Spanier, Jerome; Venugopalan, Vasan
2014-01-01
We examine the relative error of Monte Carlo simulations of radiative transport that employ two commonly used estimators that account for absorption differently, either discretely, at interaction points, or continuously, between interaction points. We provide a rigorous derivation of these discrete and continuous absorption weighting estimators within a stochastic model that we show to be equivalent to an analytic model, based on the radiative transport equation (RTE). We establish that both absorption weighting estimators are unbiased and, therefore, converge to the solution of the RTE. An analysis of spatially resolved reflectance predictions provided by these two estimators reveals no advantage to either in cases of highly scattering and highly anisotropic media. However, for moderate to highly absorbing media or isotropically scattering media, the discrete estimator provides smaller errors at proximal source locations while the continuous estimator provides smaller errors at distal locations. The origin of these differing variance characteristics can be understood through examination of the distribution of exiting photon weights. PMID:24562029
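The two estimators compared above can be sketched for a 1-D slab with isotropic scattering (a minimal illustration under assumed optical properties, not the authors' RTE solver): discrete absorption weighting multiplies the photon weight by the single-scattering albedo at each collision, while continuous absorption weighting samples scattering events only and attenuates the weight exponentially along each path segment.

```python
import math
import random

def transmittance(mu_a, mu_s, thickness, n_photons, mode, seed=2):
    """Slab transmittance with discrete ('daw') or continuous ('caw')
    absorption weighting; isotropic scattering, normal incidence."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    total = 0.0
    for _ in range(n_photons):
        x, mu, w = 0.0, 1.0, 1.0
        while w > 1e-4:                                   # crude weight cutoff
            if mode == "daw":
                s = -math.log(1.0 - rng.random()) / mu_t  # any interaction
            else:
                s = -math.log(1.0 - rng.random()) / mu_s  # scattering only
            x_new = x + mu * s
            if x_new > thickness:                         # escapes the far face
                if mode == "caw":
                    w *= math.exp(-mu_a * (thickness - x) / mu)
                total += w
                break
            if x_new < 0.0:                               # escapes the entry face
                break
            x = x_new
            if mode == "daw":
                w *= mu_s / mu_t                          # albedo at collision
            else:
                w *= math.exp(-mu_a * s)                  # attenuate along path
            mu = 2.0 * rng.random() - 1.0                 # isotropic scattering
    return total / n_photons
```

Both estimators are unbiased, so their estimates agree within statistical error; what differs is the variance, i.e. how the exiting photon weights are distributed, exactly as the abstract discusses.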
NASA Astrophysics Data System (ADS)
Singh, Vishwanath P.; Medhat, M. E.; Badiger, N. M.
2015-01-01
Geant4 Monte Carlo code simulations were used to resolve experimental and theoretical complications in the calculation of mass energy-absorption coefficients of elements, air, and compounds. The mass energy-absorption coefficients for nuclear track detectors were computed for the first time using the Geant4 Monte Carlo code for energies of 1 keV-20 MeV. Very good agreement of the simulated mass energy-absorption coefficients for carbon, nitrogen, silicon, sodium iodide and nuclear track detectors was observed on comparison with the values reported in the literature. Kerma relative to air for energies of 1 keV-20 MeV and energy-absorption buildup factors for energies of 50 keV-10 MeV up to 10 mfp penetration depth of the selected nuclear track detectors were also calculated to evaluate the absorption of the gamma photons. Geant4 simulation can be utilized for estimation of mass energy-absorption coefficients in elements and composite materials.
Response of thermoluminescent dosimeters to photons simulated with the Monte Carlo method
M. Moralles; C. C. Guimarães; E. Okuno
2005-01-01
Personal monitors composed of thermoluminescent dosimeters (TLDs) made of natural fluorite (CaF2:NaCl) and lithium fluoride (Harshaw TLD-100) were exposed to gamma and X rays of different qualities. The GEANT4 radiation transport Monte Carlo toolkit was employed to calculate the energy depth deposition profile in the TLDs. X-ray spectra of the ISO 4037-1 narrow-spectrum series, with peak voltage (kVp) values in the
Taylor, Michael, E-mail: michael.taylor@rmit.edu.au [School of Applied Sciences, College of Science, Engineering and Health, RMIT University, Melbourne, Victoria (Australia); Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia); Dunn, Leon; Kron, Tomas; Height, Felicity; Franich, Rick [School of Applied Sciences, College of Science, Engineering and Health, RMIT University, Melbourne, Victoria (Australia); Physical Sciences, Peter MacCallum Cancer Centre, East Melbourne, Victoria (Australia)
2012-04-01
Prediction of dose distributions in close proximity to interfaces is difficult. In the context of radiotherapy of lung tumors, this may affect the minimum dose received by lesions and is particularly important when prescribing dose to covering isodoses. The objective of this work is to quantify underdosage in key regions around a hypothetical target using Monte Carlo dose calculation methods, and to develop a factor for clinical estimation of such underdosage. A systematic set of calculations is undertaken using 2 Monte Carlo radiation transport codes (EGSnrc and GEANT4). Discrepancies in dose are determined for a number of parameters, including beam energy, tumor size, field size, and distance from the chest wall. Calculations were performed for 1-mm³ regions at the proximal, distal, and lateral aspects of a spherical tumor for 6-MV and 15-MV photon beams. The simulations indicate regions of tumor underdose at the tumor-lung interface. Results are presented as ratios of the dose at key peripheral regions to the dose at the center of the tumor, a point at which the treatment planning system (TPS) predicts the dose more reliably. Comparison with TPS data (pencil-beam convolution) indicates such underdosage would not have been predicted accurately in the clinic. We define a dose reduction factor (DRF) as the average of the dose in the periphery in the 6 cardinal directions divided by the central dose in the target, the mean of which is 0.97 and 0.95 for a 6-MV and a 15-MV beam, respectively. The DRF can assist clinicians in estimating the magnitude of potential discrepancies between prescribed and delivered dose distributions as a function of tumor size and location. Calculation for a systematic set of 'generic' tumors allows application to many classes of patient case, and is particularly useful for interpreting clinical trial data.
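The dose reduction factor defined above is simply the mean of the six cardinal peripheral doses over the central dose; a sketch with hypothetical dose values (illustrative numbers, not the paper's data):

```python
def dose_reduction_factor(peripheral_doses, central_dose):
    """DRF as defined above: mean of the doses at the six cardinal peripheral
    points divided by the dose at the centre of the target."""
    assert len(peripheral_doses) == 6
    return sum(peripheral_doses) / (6.0 * central_dose)

# Hypothetical peripheral and central doses in Gy (illustrative only):
print(round(dose_reduction_factor([1.94, 1.95, 1.96, 1.97, 1.98, 1.84], 2.0), 3))  # -> 0.97
```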
Ahmed E. Hassan; John H. Cushman; Jacques W. Delleur
1997-01-01
A Monte Carlo simulation of flow and transport is employed to study tracer migration in porous media with evolving scales of heterogeneity (fractal media). Transport is studied with both conservative and reactive chemicals in media that possess physical as well as chemical heterogeneity. Linear kinetic equations are assumed to relate the sorbed phase and the aqueous phase concentrations. The fluctuating
Highly Confined Photon Transport in Subwavelength Metallic Slot Waveguides
Atwater, Harry
We report experimental realization of subwavelength slot waveguides that exhibit both micrometer-scale propagation and subwavelength confinement. Here, we propose that metallic slot waveguides (i.e., waveguides consisting of a dielectric core
Photon transport enhanced by transverse Anderson localization in disordered superlattices
NASA Astrophysics Data System (ADS)
Hsieh, P.; Chung, C.; McMillan, J. F.; Tsai, M.; Lu, M.; Panoiu, N. C.; Wong, C. W.
2015-03-01
Controlling the flow of light at subwavelength scales provides access to functionalities such as negative or zero index of refraction, transformation optics, cloaking, metamaterials and slow light, but diffraction effects severely restrict our ability to control light on such scales. Here we report the photon transport and collimation enhanced by transverse Anderson localization in chip-scale dispersion-engineered anisotropic media. We demonstrate a photonic crystal superlattice structure in which diffraction is nearly completely arrested by cascaded resonant tunnelling through transverse guided resonances. By modifying the geometry of more than 4,000 scatterers in the superlattices we add structural disorder controllably and uncover the mechanism of disorder-induced transverse localization. Arrested spatial divergence is captured in the power-law scaling, along with exponential asymmetric mode profiles and enhanced collimation bandwidths for increasing disorder. With increasing disorder, we observe the crossover from cascaded guided resonances into the transverse localization regime, beyond both the ballistic and diffusive transport of photons.
Photon mediated transport and crystallization in optically driven Rydberg gases
NASA Astrophysics Data System (ADS)
Otterbach, Johannes; Lauer, Achim; Muth, Dominik; Fleischhauer, Michael
2012-06-01
We show that excitations in a gas of atoms driven to Rydberg states by near-resonant laser radiation in a two-photon coupling scheme experience a photon mediated transport. Thus even if the center-of-mass motion of the atoms can be neglected, this results in a kinetic Hamiltonian for the Rydberg excitations. The corresponding mass is identical to that of the dark-state polaritons of the optical coupling scheme. The kinetic energy competes with the Rydberg dipole-dipole interactions and can prevent the formation of quasi-crystal structures. Using DMRG simulations we calculate the Luttinger parameter for a one-dimensional gas of resonantly driven Rydberg atoms taking into account the photon mediated transport and derive conditions under which quasi-crystallization can be observed.
NASA Astrophysics Data System (ADS)
Palta, Jatinder Raj
A versatile computer program, MORSE, based on neutron and photon transport theory, has been utilized to investigate radiation therapy treatment planning quantities and techniques. A multi-energy-group representation of the transport equation provides a concise approach for applying Monte Carlo numerical techniques to multiple radiation therapy treatment planning problems. A general three-dimensional geometry is used to simulate radiation therapy treatment planning problems in configurations of an actual clinical setting. Central axis total and scattered dose distributions for homogeneous and inhomogeneous water phantoms are calculated, and the correction factors for lung and bone inhomogeneities are also evaluated. Results show that Monte Carlo calculations based on multi-energy-group transport theory predict depth dose distributions that are in good agreement with available experimental data. Improved correction factors based on the concepts of lung-air ratio and bone-air ratio are proposed in lieu of the presently used correction factors that are based on the tissue-air-ratio power law method for inhomogeneity corrections. Central axis depth dose distributions for a bremsstrahlung spectrum from a linear accelerator are also calculated to exhibit the versatility of the computer program in handling multiple radiation therapy problems. A novel approach is undertaken to study the dosimetric properties of brachytherapy sources. Dose rate constants for various radionuclides are calculated from the numerically generated dose rate versus source energy curves. Dose rates can also be generated for any point brachytherapy source with any arbitrary energy spectrum at various radial distances from this family of curves.
Hogan, Robin
A Sensitivity Study of the Effect of Horizontal Photon Transport on the Radiative Forcing of Contrails
NASA Astrophysics Data System (ADS)
Wang, Mengkuo
In particle transport computations, the Monte Carlo simulation method is a widely used algorithm. Several Monte Carlo codes are available that perform particle transport simulations; however, the geometry packages and geometric modeling capability of Monte Carlo codes are limited, as they cannot handle complicated geometries made up of complex surfaces. Previous research exists that takes advantage of the modeling capabilities of CAD software. The two major approaches are the converter approach and the CAD-engine-based approach. By carefully analyzing the strategies and algorithms of these two approaches, the CAD-engine-based approach has been identified as the more promising approach. Though currently the performance of this approach is not satisfactory, there is room for improvement. The development and implementation of an improved CAD-based approach is the focus of this thesis. Algorithms to accelerate the CAD-engine-based approach are studied. The major acceleration algorithm is the Oriented Bounding Box algorithm, which is used in computer graphics. The difference in application between computer graphics and particle transport has been considered and the algorithm has been modified for particle transport. The major work of this thesis has been the development of the MCNPX/CGM code and the testing, benchmarking and implementation of the acceleration algorithms. MCNPX is a Monte Carlo code and CGM is a CAD geometry engine. A facet representation of the geometry provided the least slowdown of the Monte Carlo code. The CAD model generates the facet representation. The Oriented Bounding Box algorithm was the fastest acceleration technique adopted for this work. The slowdown of MCNPX/CGM relative to MCNPX was reduced to a factor of 3 when the facet model is used. MCNPX/CGM has been successfully validated against test problems in medical physics and a fusion energy device.
MCNPX/CGM gives exactly the same results as the standard MCNPX when an MCNPX geometry model is available. For the case of the complicated fusion device, the stellarator, the MCNPX/CGM results closely match a one-dimensional model calculation performed by the ARIES team.
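The Oriented Bounding Box acceleration mentioned above rests on a cheap ray-box rejection test: in the box's local frame an OBB test reduces to the axis-aligned "slab" test sketched below (a generic illustration of the technique, not the MCNPX/CGM implementation):

```python
def ray_box_entry(origin, direction, box_min, box_max):
    """Slab test: entry distance of a ray into an axis-aligned bounding box,
    or None on a miss.  An oriented bounding box reduces to this test once
    the ray is transformed into the box's local frame."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:                 # ray parallel to this pair of slabs
            if o < lo or o > hi:
                return None
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:                 # slab intervals do not overlap
            return None
    return t_near
```

Particles whose rays miss a facet group's bounding box skip the expensive facet intersection tests entirely, which is where the speedup comes from.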
Glaser, Adam K.; Kanick, Stephen C.; Zhang, Rongxiao; Arce, Pedro; Pogue, Brian W.
2013-01-01
We describe a tissue optics plug-in that interfaces with the GEANT4/GAMOS Monte Carlo (MC) architecture, providing a means of simulating radiation-induced light transport in biological media for the first time. Specifically, we focus on the simulation of light transport due to the Čerenkov effect (light emission from charged particles traveling faster than the local speed of light in a given medium), a phenomenon which requires accurate modeling of both the high-energy particle and the subsequent optical photon transport, a dynamic coupled process that is not well described by any current MC framework. The results of validation simulations show excellent agreement with currently employed biomedical optics MC codes [i.e., Monte Carlo for Multi-Layered media (MCML), Mesh-based Monte Carlo (MMC), and diffusion theory], and examples relevant to recent studies into the detection of Čerenkov light from an external radiation beam or radionuclide are presented. While the work presented within this paper focuses on radiation-induced light transport, the core features and robust flexibility of the modified plug-in package also make it extensible to more conventional biomedical optics simulations. The plug-in, user guide, example files, as well as the necessary files to reproduce the validation simulations described within this paper are available online at http://www.dartmouth.edu/optmed/research-projects/monte-carlo-software. PMID:23667790
Boas, David A.
2010-01-01
We report a parallel Monte Carlo algorithm accelerated by graphics processing units (GPU) for modeling time-resolved photon migration in arbitrary 3D turbid media. By taking advantage of the massively parallel threads and low memory latency, this algorithm allows many photons to be simulated simultaneously in a GPU. To further improve the computational efficiency, we explored two parallel random number generators (RNG), including a floating-point-only RNG based on a chaotic lattice. An efficient scheme for boundary reflection was implemented, along with the functions for time-resolved imaging. For a homogeneous semi-infinite medium, good agreement was observed between the simulation output and the analytical solution from the diffusion theory. The code was implemented with the CUDA programming language, and benchmarked under various parameters, such as thread number, selection of RNG and memory access pattern. With a low-cost graphics card, this algorithm has demonstrated an acceleration ratio above 300 when using 1792 parallel threads over conventional CPU computation. The acceleration ratio drops to 75 when using atomic operations. These results render the GPU-based Monte Carlo simulation a practical solution for data analysis in a wide range of diffuse optical imaging applications, such as human brain or small-animal imaging. PMID:19997242
Input-Output Formalism for Few-Photon Transport: A Systematic Treatment Beyond Two Photons
Shanshan Xu; Shanhui Fan
2015-02-21
We provide a systematic treatment of $N$-photon transport in a waveguide coupled to a local system, using the input-output formalism. The main result of the paper is a general connection between the $N$-photon S matrix and the Green functions of the local system. We also show that the computation can be significantly simplified, by exploiting the connectedness structure of both the S matrix and the Green function, and by computing the Green function using an effective Hamiltonian that involves only the degrees of freedom of the local system. We illustrate our formalism by computing $N$-photon transport through a cavity containing a medium with Kerr nonlinearity, with $N$ up to 3.
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate minimum weight shield configurations meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
Landry, Guillaume; Reniers, Brigitte; Pignol, Jean-Philippe; Beaulieu, Luc; Verhaegen, Frank [Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Department of Radiation Oncology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario M4N 3M5 (Canada); Departement de Radio-Oncologie et Centre de Recherche en Cancerologie, Universite Laval, CHUQ Pavillon L'Hotel-Dieu de Quebec, Quebec G1R 2J6 (Canada) and Departement de Physique, de Genie Physique et d'Optique, Universite Laval, Quebec G1K 7P4 (Canada); Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands) and Department of Oncology, McGill University, Montreal General Hospital, Montreal, Quebec H3G 1A4 (Canada)
2011-03-15
Purpose: The goal of this work is to compare D{sub m,m} (radiation transported in medium; dose scored in medium) and D{sub w,m} (radiation transported in medium; dose scored in water) obtained from Monte Carlo (MC) simulations for a subset of human tissues of interest in low energy photon brachytherapy. Using low dose rate seeds and an electronic brachytherapy source (EBS), the authors quantify the large cavity theory conversion factors required. The authors also assess whether applying large cavity theory utilizing the sources' initial photon spectra and average photon energy induces errors related to spatial spectral variations. First, ideal spherical geometries were investigated, followed by clinical brachytherapy LDR seed implants for breast and prostate cancer patients. Methods: Two types of dose calculations are performed with the GEANT4 MC code. (1) For several human tissues, dose profiles are obtained in spherical geometries centered on four types of low energy brachytherapy sources: {sup 125}I, {sup 103}Pd, and {sup 131}Cs seeds, as well as an EBS operating at 50 kV. Ratios of D{sub w,m} over D{sub m,m} are evaluated in the 0-6 cm range. In addition to mean tissue composition, compositions corresponding to one standard deviation from the mean are also studied. (2) Four clinical breast (using {sup 103}Pd) and prostate (using {sup 125}I) brachytherapy seed implants are considered. MC dose calculations are performed based on postimplant CT scans using prostate and breast tissue compositions. PTV D{sub 90} values are compared for D{sub w,m} and D{sub m,m}. Results: (1) Differences (D{sub w,m}/D{sub m,m}-1) of -3% to 70% are observed for the investigated tissues. For a given tissue, D{sub w,m}/D{sub m,m} is similar for all sources within 4% and does not vary more than 2% with distance due to very moderate spectral shifts. Variations of tissue composition about the assumed mean composition influence the conversion factors up to 38%. 
(2) The ratio of D{sub 90(w,m)} over D{sub 90(m,m)} for clinical implants matches D{sub w,m}/D{sub m,m} at 1 cm from the single point sources. Conclusions: Given the small variation with distance, using conversion factors based on the emitted photon spectrum (or its mean energy) of a given source introduces minimal error. The large differences observed between scoring schemes underline the need for guidelines on choice of media for dose reporting. Providing such guidelines is beyond the scope of this work.
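The large cavity theory conversion described above reduces to an energy-fluence-weighted ratio of mass-energy absorption coefficients. A minimal sketch, with purely illustrative spectrum and coefficient values (not tabulated NIST data):

```python
def large_cavity_factor(spectrum, mu_en_w, mu_en_m):
    """Dw,m / Dm,m under large cavity theory: the energy-fluence-weighted ratio
    of mass-energy absorption coefficients, water over medium.
    spectrum: {energy_keV: relative energy fluence}; mu_en_*: {energy_keV: cm^2/g}."""
    num = sum(f * mu_en_w[e] for e, f in spectrum.items())
    den = sum(f * mu_en_m[e] for e, f in spectrum.items())
    return num / den

# Illustrative numbers only (roughly 125I-like lines; coefficients made up):
spectrum = {27: 0.40, 31: 0.35, 35: 0.25}
mu_en_w  = {27: 0.40, 31: 0.28, 35: 0.21}   # "water"
mu_en_m  = {27: 0.35, 31: 0.24, 35: 0.18}   # "soft tissue"
ratio = large_cavity_factor(spectrum, mu_en_w, mu_en_m)
```

The paper's observation that the factor varies little with distance corresponds to the spectrum entering this weighted average changing only moderately with depth.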
NASA Astrophysics Data System (ADS)
Peterson, J. R.; Jernigan, J. G.; Kahn, S. M.; Rasmussen, A. P.; Peng, E.; Ahmad, Z.; Bankert, J.; Chang, C.; Claver, C.; Gilmore, D. K.; Grace, E.; Hannel, M.; Hodge, M.; Lorenz, S.; Lupu, A.; Meert, A.; Nagarajan, S.; Todd, N.; Winans, A.; Young, M.
2015-05-01
We present a comprehensive methodology for the simulation of astronomical images from optical survey telescopes. We use a photon Monte Carlo approach to construct images by sampling photons from models of astronomical source populations, and then simulating those photons through the system as they interact with the atmosphere, telescope, and camera. We demonstrate that all physical effects for optical light that determine the shapes, locations, and brightnesses of individual stars and galaxies can be accurately represented in this formalism. By using large scale grid computing, modern processors, and an efficient implementation that can produce 400,000 photons s⁻¹, we demonstrate that even very large optical surveys can now be simulated. We demonstrate that we are able to (1) construct kilometer scale phase screens necessary for wide-field telescopes, (2) reproduce atmospheric point-spread function moments using a fast novel hybrid geometric/Fourier technique for non-diffraction limited telescopes, (3) accurately reproduce the expected spot diagrams for complex aspheric optical designs, and (4) recover system effective area predicted from analytic photometry integrals. This new code, the Photon Simulator (PhoSim), is publicly available. We have implemented the Large Synoptic Survey Telescope design, and it can be extended to other telescopes. We expect that because of the comprehensive physics implemented in PhoSim, it will be used by the community to plan future observations, interpret detailed existing observations, and quantify systematics related to various astronomical measurements. Future development and validation by comparisons with real data will continue to improve the fidelity and usability of the code.
Carlo Jacoboni; Lino Reggiani
1983-01-01
This review presents in a comprehensive and tutorial form the basic principles of the Monte Carlo method, as applied to the solution of transport problems in semiconductors. Sufficient details of a typical Monte Carlo simulation have been given to allow the interested reader to create his own Monte Carlo program, and the method has been briefly compared with alternative theoretical
Filippone, W.L.; Baker, R.S. [Arizona Univ., Tucson, AZ (United States)
1990-12-31
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S{sub N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S{sub N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S{sub N} is well suited for by itself. The fully coupled Monte Carlo/S{sub N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S{sub N} calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S{sub N} region. The Monte Carlo and S{sub N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S{sub N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S{sub N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating S{sub N} calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.
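The interface-flux iteration underlying this coupling can be illustrated with a toy response-matrix fixed-point solve (the matrices, vectors, and names below are stand-ins, not TWODANT's actual data structures):

```python
import numpy as np

def couple_regions(R_mc, R_sn, q_mc, q_sn, tol=1e-10, max_iter=500):
    """Iterate the common interface fluxes between a Monte Carlo region and an
    S_N region, each summarized by a response matrix R: outgoing = R @ incoming + q,
    where q collects the region's volumetric-source contribution at the interface."""
    psi_to_sn = np.zeros_like(q_mc)   # flux leaving the MC region toward S_N
    psi_to_mc = np.zeros_like(q_sn)   # flux leaving the S_N region toward MC
    for _ in range(max_iter):
        new_to_sn = R_mc @ psi_to_mc + q_mc
        new_to_mc = R_sn @ psi_to_sn + q_sn
        if (np.abs(new_to_sn - psi_to_sn).max() < tol and
                np.abs(new_to_mc - psi_to_mc).max() < tol):
            return new_to_sn, new_to_mc
        psi_to_sn, psi_to_mc = new_to_sn, new_to_mc
    return psi_to_sn, psi_to_mc

# Two angular bins per interface; iteration contracts since ||R|| < 1:
R_mc = np.array([[0.20, 0.10], [0.05, 0.30]])
R_sn = np.array([[0.25, 0.00], [0.10, 0.20]])
out_sn, out_mc = couple_regions(R_mc, R_sn, np.array([1.0, 0.5]), np.array([0.2, 0.8]))
```

In the real method the response matrices are themselves estimated by the special-purpose Monte Carlo routines; here they are fixed numbers so the fixed-point structure is visible.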
Photonic transport control by spin-optical disordered metasurface
Veksler, Dekel; Ozeri, Dror; Shitrit, Nir; Kleiner, Vladimir; Hasman, Erez
2014-01-01
Photonic metasurfaces are ultrathin electromagnetic wave-molding metamaterials providing the missing link for the integration of nanophotonic chips with nanoelectronic circuits. An extra twist in this field originates from spin-optical metasurfaces providing the photon spin (polarization helicity) as an additional degree of freedom in light-matter interactions at the nanoscale. Here we report on a generic concept to control the photonic transport by disordered (random) metasurfaces with a custom-tailored geometric phase. This approach combines the peculiarity of random patterns to support extraordinary information capacity within the intrinsic limit of speckle noise, and the optical spin control in the geometric phase mechanism, simply implemented in two-dimensional structured matter. By manipulating the local orientations of anisotropic optical nanoantennas, we observe spin-dependent near-field and free-space open channels, generating state-of-the-art multiplexing and interconnects. Spin-optical disordered m...
Monte Carlo simulation and measurements of clinical photon beams using LiF:Mg,Cu,P+PTFE
C. Azorín-Vega; T. Rivera-Montalvo; J. Azorín-Nieto; L. Villaseñor-Navarro; P. Luján-Castilla; H. Vega-Carrillo
2010-01-01
Thermoluminescent response of LiF:Mg,Cu,P+PTFE under clinical photon irradiation was obtained. Thermoluminescent dosimeters (TLDs) were irradiated to determine the entrance surface dose (ESD) in a solid water phantom when using standard clinical adult treatment protocols. A Monte Carlo simulation of photon interaction with matter was performed and the absorbed dose determined. The ESD calculated by the MCNPX code was greater than those determined by direct
Inverse Monte Carlo: a unified reconstruction algorithm for SPECT
Carey E. Floyd; R. E. Coleman; R. J. Jaszczak
1985-01-01
Inverse Monte Carlo (IMOC) is presented as a unified reconstruction algorithm for Emission Computed Tomography (ECT) providing simultaneous compensation for scatter, attenuation, and the variation of collimator resolution with depth. The technique of inverse Monte Carlo is used to find an inverse solution to the photon transport equation (an integral equation for photon flux from a specified source) for a
LDRD project 151362: low energy electron-photon transport.
Kensek, Ronald Patrick; Hjalmarson, Harold Paul; Magyar, Rudolph J.; Bondi, Robert James; Crawford, Martin James
2013-09-01
At sufficiently high energies, the wavelengths of electrons and photons are short enough to interact with only one atom at a time, leading to the popular "independent-atom approximation". We attempted to incorporate atomic structure in the generation of cross sections (which embody the modeled physics) to improve transport at lower energies. We document our successes and failures. This was a three-year LDRD project. The core team consisted of a radiation-transport expert, a solid-state physicist, and two DFT experts.
Eugene D. Brooks; Michael Scott McKinley; Frank Daffin; Abraham Szoeke
2005-01-01
The equations of radiation transport for thermal photons are notoriously difficult to solve in thick media without resorting to asymptotic approximations such as the diffusion limit. One source of this difficulty is that in thick, absorbing media, thermal emission and absorption are almost completely balanced. A new formulation for thermal radiation transport, called the difference formulation, was recently introduced in
Densmore, Jeffery D., E-mail: jdd@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States); Thompson, Kelly G., E-mail: kgt@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States); Urbatsch, Todd J., E-mail: tmonster@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States)
2012-08-15
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations in optically thick media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many smaller Monte Carlo steps, thus improving the efficiency of the simulation. In this paper, we present an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold, as optical thickness is typically a decreasing function of frequency. Above this threshold we employ standard Monte Carlo, which results in a hybrid transport-diffusion scheme. With a set of frequency-dependent test problems, we confirm the accuracy and increased efficiency of our new DDMC method.
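The frequency-threshold idea above (discrete diffusion below a threshold, standard Monte Carlo above it) can be sketched as a per-group scheme selection. The names and the optical-thickness criterion here are illustrative, not the authors' actual implementation:

```python
def choose_transport_scheme(group_opacities, cell_width, tau_threshold=5.0):
    """Hybrid DDMC/IMC scheme selection per frequency group: groups whose
    optical thickness (opacity times cell width) exceeds the threshold are
    handled by discrete diffusion, the rest by standard Monte Carlo."""
    scheme = {}
    for group, sigma in group_opacities.items():
        tau = sigma * cell_width          # optical thickness of one cell
        scheme[group] = "DDMC" if tau >= tau_threshold else "MC"
    return scheme

# Opacity typically decreases with frequency, so the lowest groups go to DDMC:
opac = {0: 50.0, 1: 12.0, 2: 3.0, 3: 0.4}   # 1/cm, made-up values
scheme = choose_transport_scheme(opac, cell_width=1.0)
```

Because opacity is usually a decreasing function of frequency, the DDMC groups form a contiguous low-frequency block, which is what makes a single frequency-integrated diffusion equation workable there.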
Bishop, Martin J.; Plank, Gernot
2014-01-01
Light scattering during optical imaging of electrical activation within the heart is known to significantly distort the optically-recorded action potential (AP) upstroke, as well as affecting the magnitude of the measured response of ventricular tissue to strong electric shocks. Modeling approaches based on the photon diffusion equation have recently been instrumental in quantifying and helping to understand the origin of the resulting distortion. However, they are unable to faithfully represent regions of non-scattering media, such as small cavities within the myocardium which are filled with perfusate during experiments. Stochastic Monte Carlo (MC) approaches allow simulation and tracking of individual photon “packets” as they propagate through tissue with differing scattering properties. Here, we present a novel application of the MC method of photon scattering simulation, applied for the first time to the simulation of cardiac optical mapping signals within unstructured, tetrahedral, finite element computational ventricular models. The method faithfully allows simulation of optical signals over highly-detailed, anatomically-complex MR-based models, including representations of fine-scale anatomy and intramural cavities. We show that the optical action potential upstroke is prolonged closer to large subepicardial vessels than farther away from them, at times having a distinct “humped” morphology. Furthermore, we uncover a novel mechanism by which photon scattering effects around vessel cavities interact with “virtual-electrode” regions of strongly de-/hyper-polarized tissue surrounding cavities during shocks, significantly reducing the apparent optically-measured epicardial polarization. We therefore demonstrate the importance of this novel optical mapping simulation approach along with highly anatomically-detailed models to fully investigate electrophysiological phenomena driven by fine-scale structural heterogeneity. PMID:25309442
Monte Carlo-based revised values of dose rate constants at discrete photon energies
Selvam, T. Palani; Shrivastava, Vandana; Chourasiya, Ghanashyam; Babu, D. Appala Raju
2014-01-01
Absorbed dose rate to water at 0.2 cm and 1 cm due to a point isotropic photon source as a function of photon energy is calculated using the EDKnrc user-code of the EGSnrc Monte Carlo system. This code system utilizes the widely used XCOM photon cross-section dataset for the calculation of absorbed dose to water. Using the above dose rates, dose rate constants are calculated. The air-kerma strength Sk needed for deriving the dose rate constant is based on the mass-energy absorption coefficient compilations of Hubbell and Seltzer published in 1995. A comparison of absorbed dose rates in water at the above distances to the published values reflects the differences in the photon cross-section datasets in the low-energy region (the difference is up to 2% in dose rate values at 1 cm in the energy range 30–50 keV and up to 4% at 0.2 cm at 30 keV). A maximum difference of about 8% is observed in the dose rate value at 0.2 cm at 1.75 MeV when compared to the published value. Sk calculations based on the compilation of Hubbell and Seltzer show a difference of up to 2.5% in the low-energy region (20–50 keV) when compared to the published values. The deviations observed in the values of dose rate and Sk affect the values of dose rate constants by up to 3%. PMID:24600166
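The dose rate constant at the center of this comparison is a simple ratio; a sketch of the definition and of the percent-difference comparison used against published values (the numbers are illustrative only, not the paper's tabulated results):

```python
def dose_rate_constant(dose_rate_water_1cm, S_K):
    """Dose rate constant: absorbed dose rate to water at 1 cm on the transverse
    axis of the point source, per unit air-kerma strength S_K
    (conventionally cGy h^-1 U^-1)."""
    return dose_rate_water_1cm / S_K

def percent_difference(value, reference):
    """Relative deviation, as used when comparing against published constants."""
    return 100.0 * (value - reference) / reference

# Illustrative numbers only:
lam = dose_rate_constant(1.036, 1.0)
dev = percent_difference(lam, 1.012)   # within the ~3% spread the abstract notes
```

The abstract's point is that a few-percent shift in either the scored dose rate or S_K propagates directly into the constant, since it is just their quotient.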
shield — universal Monte Carlo hadron transport code: scope and applications
A. V Dementyev; N. M Sobolevsky
1999-01-01
shield is a transport code for simulation of hadron cascades in complex extended targets of arbitrary geometric configuration and chemical composition in the energy range up to 1 TeV. Transport of nucleons, pions, kaons, antinucleons, and muons is considered. Recently the transport of ions (arbitrary A,Z nuclei) was included. Hadron–nucleus and nucleus–nucleus interactions inside the target are simulated in an exclusive approach
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
Frambati, S.; Frignani, M. [Ansaldo Nucleare S.p.A., Corso F.M. Perrone 25, 1616 Genova (Italy)
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)
Alex Doronin; Igor Meglinski
2011-01-01
The advantages of using the Monte Carlo method for the simulation of radiative transfer in complex turbid random media like biological tissues are well recognized. However, in most practical applications the wave nature of the probing optical radiation is ignored, and its propagation is considered in terms of neutral particles, so-called photon packets. Nevertheless, when the interference, polarization or coherent effects of scattering
Monte Carlo particle simulation and finite-element techniques for tandem mirror transport
Rognlien, T.D.; Cohen, B.I.; Matsuda, Y.; Stewart, J.J. Jr.
1985-12-01
A description is given of numerical methods used in the study of axial transport in tandem mirrors owing to Coulomb collisions and rf diffusion. The methods are Monte Carlo particle simulations and direct solution to the Fokker-Planck equations by finite-element expansion. 11 refs.
BioMOCA—a Boltzmann transport Monte Carlo model for ion channel simulation
T. A. van der Straaten; G. Kathawala; A. Trellakis; R. S. Eisenberg; U. Ravaioli
2005-01-01
With the recent availability of high-resolution structural information for several key ion channel proteins and large-scale computational resources, Molecular Dynamics has become an increasingly popular tool for ion channel simulation. However, the CPU requirements for simulating ion transport on time scales relevant to conduction still exceed the resources presently available. To address this problem, we have developed Biology Monte Carlo
Improvements to the Integrated TIGER Series Monte Carlo radiation transport codes
L. Montgomery Smith; Reuben D. Hochstedler
1997-01-01
This paper describes two areas in which the usability of the Integrated TIGER Series (ITS) Monte Carlo radiation transport codes has been improved for use in the design and analysis of tests conducted with the DECADE nuclear weapons effects simulator. The first area is in improving the speed of execution. By benchmarking and profiling the member codes of the ITS
Marco Saraniti; Stephen M. Goodnick
2000-01-01
We present a fullband cellular automaton (CA) code for simulation of electron and hole transport in Si and GaAs. In this implementation, the entire Brillouin zone is discretized using a nonuniform mesh in k-space, and a transition table is generated between all initial and final states on the mesh, greatly simplifying the final state selection of the conventional Monte Carlo
Monte Carlo study of conservative transport in heterogeneous dual-porosity media
Hai Huang; Ahmed E. Hassan; Bill X. Hu
2003-01-01
In this study, a Monte Carlo simulation method is applied to study groundwater flow and solute transport in heterogeneous, dual-porosity media. Both the hydraulic conductivity and the interregional mass diffusion rate are assumed to be spatial random variables, and their random distributions are generated through a Fast Fourier Transform (FFT) technique. A block-centered finite difference (FD) method is used to
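The FFT-based generation of a stationary random field mentioned above can be sketched by spectral filtering of white noise. The spectrum form below is only illustrative of the technique (it approximates an exponential-like covariance), not the authors' exact model:

```python
import numpy as np

def gaussian_random_field(n, corr_len, var, seed=0):
    """Generate a 2-D stationary Gaussian field (e.g. log hydraulic conductivity)
    by filtering white noise in Fourier space, then rescaling to the target variance."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k2 = kx**2 + ky**2
    # Power spectrum decaying with |k|; corr_len sets the decay scale.
    spectrum = 1.0 / (1.0 + (2 * np.pi * corr_len) ** 2 * k2) ** 1.5
    noise = rng.normal(size=(n, n))
    field = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(spectrum)))
    field *= np.sqrt(var) / field.std()       # rescale to the requested variance
    return field

Y = gaussian_random_field(64, corr_len=5.0, var=1.0)
K = np.exp(Y)                                 # one hydraulic-conductivity realization
```

Each Monte Carlo realization of the flow/transport problem would start from a fresh field like `K`, so the ensemble statistics reflect the assumed spatial covariance.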
Monte Carlo study of charge transport in slantingly coupled arrays of small tunnel junctions
Yoshinao Mizugaki; Hiroshi Shimada
2005-01-01
We present a numerical investigation on the charge transport in capacitively coupled arrays of small tunnel junctions in the case of slanted coupling. Current-voltage characteristics of the arrays having various coupling capacitance (Cc) are simulated using the Monte Carlo method. In contrast to the case of straight coupled arrays, slantingly coupled arrays exhibit the current mirror effect using a coupling
Seif, F.; Bayatiani, M. R.
2015-01-01
Background: Megavoltage beams used in radiotherapy are contaminated with secondary electrons. Different parts of the linac head and the air above the patient act as sources of this contamination, which can increase damage to skin and subcutaneous tissue during radiotherapy. Monte Carlo simulation is an accurate method for dose calculation in medical dosimetry and has an important role in the optimization of linac head materials. The aim of this study was to calculate the electron contamination of a Varian linac. Materials and Method: The 6 MV photon beam of a Varian (2100 C/D) linac was simulated with the Monte Carlo code MCNPX, based on the manufacturer's instructions. The validation was done by comparing the calculated depth doses and profiles of the simulation with dosimetry measurements in a water phantom (error less than 2%). The percentage depth doses (PDDs), profiles, and contamination electron energy spectra were calculated for different therapeutic field sizes (5×5 to 40×40 cm²) for both linacs. Results: The dose from electron contamination was observed to rise with increasing field size. The contribution of the secondary contamination electrons to the surface dose ranged from 6% for a 5×5 cm² field to 27% for a 40×40 cm² field. Conclusion: Based on the results, the effect of electron contamination on patient surface dose cannot be ignored, so knowledge of the electron contamination is important in clinical dosimetry. It must be calculated for each machine and considered in treatment planning systems. PMID:25973409
Update on the Status of the FLUKA Monte Carlo Transport Code*
NASA Technical Reports Server (NTRS)
Ferrari, A.; Lorenzo-Sentis, M.; Roesler, S.; Smirnov, G.; Sommerer, F.; Theis, C.; Vlachoudis, V.; Carboni, M.; Mostacci, A.; Pelliccioni, M.
2006-01-01
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. We review the progress achieved since the last CHEP Conference on the physics models, some technical improvements to the code and some recent applications. From the point of view of the physics, improvements have been made with the extension of PEANUT to higher energies for p, n, pi, pbar/nbar and for nbars down to the lowest energies, the addition of the online capability to evolve radioactive products and get subsequent dose rates, and upgrading of the treatment of EM interactions with the elimination of the need to separately prepare preprocessed files. A new coherent photon scattering model, an updated treatment of the photoelectric effect, an improved pair production model, and new photon cross sections from the LLNL Cullen database have been implemented. In the field of nucleus-nucleus interactions, the electromagnetic dissociation of heavy ions has been added, along with the extension of the interaction models for some nuclide pairs to energies below 100 MeV/A using the BME approach, as well as the development of an improved QMD model for intermediate energies. Both DPMJET 2.53 and 3 remain available along with rQMD 2.4 for heavy ion interactions above 100 MeV/A. Technical improvements include the ability to use parentheses in setting up the combinatorial geometry, the introduction of pre-processor directives in the input stream, a new random number generator with full 64 bit randomness, and new routines for mathematical special functions (adapted from SLATEC). Finally, work is progressing on the deployment of a user-friendly GUI input interface as well as a CAD-like geometry creation and visualization tool.
On the application front, FLUKA has been used to extensively evaluate the potential space radiation effects on astronauts for future deep space missions, the activation dose for beam target areas, dose calculations for radiation therapy as well as being adapted for use in the simulation of events in the ALICE detector at the LHC.
Response matrix Monte Carlo based on a general geometry local calculation for electron transport
Ballinger, C.T.; Rathkopf, J.A. (Lawrence Livermore National Lab., CA (USA)); Martin, W.R. (Michigan Univ., Ann Arbor, MI (USA). Dept. of Nuclear Engineering)
1991-01-01
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs.
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported as well.
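Cloud-moment estimation from particle positions, and a finite-difference macrodispersivity estimate from the growth of the second moment, can be sketched as follows (our names, simplified to one dimension):

```python
import numpy as np

def cloud_moments(x, mass):
    """First and second central spatial moments of a tracer cloud from
    particle positions x and particle masses; the variance measures spreading."""
    m0 = mass.sum()
    mean = (mass * x).sum() / m0
    var = (mass * (x - mean) ** 2).sum() / m0
    return mean, var

def macrodispersivity(var_t1, var_t2, mean_t1, mean_t2):
    """Large-time longitudinal macrodispersivity estimate,
    A = (1/2) d(sigma^2)/d(x_mean), by finite differences of cloud moments
    between two observation times."""
    return 0.5 * (var_t2 - var_t1) / (mean_t2 - mean_t1)

x = np.array([0.0, 1.0, 2.0])
m = np.ones(3)
mean, var = cloud_moments(x, m)
```

In a Monte Carlo ensemble, these estimates would be averaged over realizations, with the spread across realizations giving the standard errors the abstract refers to.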
A virtual photon energy fluence model for Monte Carlo dose calculation.
Fippel, Matthias; Haryanto, Freddy; Dohm, Oliver; Nüsslin, Fridtjof; Kriesen, Stephan
2003-03-01
The presented virtual energy fluence (VEF) model of the patient-independent part of medical linear accelerator heads consists of two Gaussian-shaped photon sources and one uniform electron source. The planar photon sources are located close to the bremsstrahlung target (primary source) and to the flattening filter (secondary source), respectively. The electron contamination source is located in the plane defining the lower end of the filter. The standard deviations or widths and the relative weights of each source are free parameters. Five other parameters correct for fluence variations, i.e., the horn or central depression effect. If these parameters and the field widths in the X and Y directions are given, the corresponding energy fluence distribution can be calculated analytically and compared to measured dose distributions in air. This provides a method of fitting the free parameters using the measurements for various square and rectangular fields and a fixed number of monitor units. The next step in generating the whole set of base data is to calculate monoenergetic central axis depth dose distributions in water, which are used to derive the energy spectrum by deconvolving the measured depth dose curves. This spectrum is also corrected to take the off-axis softening into account. The VEF model is implemented, together with geometry modules for the patient-specific part of the treatment head (jaws, multileaf collimator), into the XVMC dose calculation engine. The implementation into other Monte Carlo codes is possible based on the information in this paper. Experiments are performed to verify the model by comparing measured and calculated dose distributions and output factors in water. It is demonstrated that open photon beams of linear accelerators from two different vendors are accurately simulated using the VEF model. The commissioning procedure of the VEF model is clinically feasible because it is based on standard measurements in air and water.
It is also useful for IMRT applications because a full Monte Carlo simulation of the treatment head would be too time-consuming for many small fields. PMID:12674229
Transport level in disordered organics: An analytic model and Monte-Carlo simulations
NASA Astrophysics Data System (ADS)
Nikitenko, V. R.; Strikhanov, M. N.
2014-02-01
The transport level concept is known as a promising tool that greatly simplifies the analytic description of hopping transport in organic materials. To date, however, quantitative modeling of the mobility and diffusion coefficient using this concept has been extremely rare. In this work, Monte Carlo modeling of the transport level and related quantities is carried out within the Gaussian disorder model. The methodology of this modeling is discussed, and the physical essence of the various approaches to the transport level is clarified. It is shown that an analytic model, which considers the transport level as the average energy of states from which a carrier can be released by energetically upward and downward jumps with equal probability, is applicable for quantitative modeling of the temperature dependence of the mobility and of the coefficient of field-stimulated diffusion. Simple analytic expressions for these transport coefficients are obtained.
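A toy illustration of the kind of hopping Monte Carlo described (the 1-D lattice, all parameter values, and the rate normalization are illustrative inventions, not the authors' setup): a single carrier hops in a Gaussian density of states with Miller-Abrahams rates, and its time-weighted mean energy relaxes toward the Gaussian-disorder-model equilibrium value of about -sigma**2/kT.

```python
import numpy as np

def simulate_hopping(n_sites=400, sigma=1.0, kT=0.5, n_jumps=50000, seed=0):
    """Toy 1-D kinetic Monte Carlo of carrier hopping in a Gaussian
    density of states with Miller-Abrahams rates. Returns the
    time-weighted mean carrier energy, which in the Gaussian disorder
    model relaxes toward roughly -sigma**2/kT (finite-size effects make
    this only approximate here)."""
    rng = np.random.default_rng(seed)
    energies = rng.normal(0.0, sigma, n_sites)
    pos = int(np.argmax(energies))      # start high in the DOS
    e_time = 0.0
    t_total = 0.0
    for jump in range(n_jumps):
        left, right = (pos - 1) % n_sites, (pos + 1) % n_sites
        # Miller-Abrahams rate: penalized only for energetically upward jumps.
        rates = np.array([np.exp(-max(energies[nb] - energies[pos], 0.0) / kT)
                          for nb in (left, right)])
        dwell = 1.0 / rates.sum()       # kinetic-MC residence time
        if jump > n_jumps // 10:        # discard the relaxation transient
            e_time += energies[pos] * dwell
            t_total += dwell
        pos = (left, right)[rng.choice(2, p=rates / rates.sum())]
    return e_time / t_total

mean_e = simulate_hopping()
print(mean_e)  # negative: the carrier equilibrates deep in the DOS
```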
NASA Astrophysics Data System (ADS)
Griesheimer, D. P.; Gill, D. F.; Nease, B. R.; Sutton, T. M.; Stedry, M. H.; Dobreff, P. S.; Carpenter, D. C.; Trumbull, T. H.; Caro, E.; Joo, H.; Millman, D. L.
2014-06-01
MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information. 
Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each tally. The tally system has been optimized to maintain a high level of efficiency, even as the number of edit regions becomes very large.
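The phase-filter idea above can be sketched in a few lines (the class and field names here are invented for illustration, not MC21's actual API): a tally scores a particle's contribution only if every filter predicate on its phase point passes.

```python
from dataclasses import dataclass, field

@dataclass
class Particle:
    cell: int
    energy: float  # MeV
    weight: float

@dataclass
class Tally:
    # Predicates on the particle phase point that control scoring.
    filters: list = field(default_factory=list)
    score: float = 0.0

    def try_score(self, particle, contribution):
        if all(f(particle) for f in self.filters):
            self.score += contribution

# Only particles in cell 3 with energy below 1 MeV may score this tally.
tally = Tally(filters=[lambda p: p.cell == 3, lambda p: p.energy < 1.0])
tally.try_score(Particle(cell=3, energy=0.5, weight=1.0), 1.0)   # scores
tally.try_score(Particle(cell=1, energy=0.5, weight=1.0), 1.0)   # filtered out
print(tally.score)  # 1.0
```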
Transport in open spin chains: A Monte Carlo wave-function approach
Mathias Michel; Ortwin Hess; Hannu Wichterich; Jochen Gemmer
2008-03-07
We investigate energy transport in several two-level atom or spin-1/2 models by a direct coupling to heat baths of different temperatures. The analysis is carried out on the basis of a recently derived quantum master equation which describes the nonequilibrium properties of internally weakly coupled systems appropriately. For the computation of the stationary state of the dynamical equations, we employ a Monte Carlo wave-function approach. The analysis directly indicates normal diffusive or ballistic transport in finite models and hints toward an extrapolation of the transport behavior of infinite models.
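A minimal sketch of the quantum-jump (Monte Carlo wave-function) unravelling for the simplest possible case, a single two-level system coupled to a thermal bath with no coherent drive (parameters are illustrative; the paper's coupled spin chains are far richer). Without coherent dynamics the trajectory reduces to a telegraph process, and the long-time excited-state fraction should approach nbar / (2*nbar + 1).

```python
import numpy as np

def mcwf_two_level(gamma=1.0, nbar=1.0, dt=0.01, n_steps=200000, seed=1):
    """Quantum-jump trajectory for a two-level system in a thermal bath.
    Jump operators: sigma_minus at rate gamma*(nbar+1), sigma_plus at
    rate gamma*nbar. Returns the fraction of time spent excited."""
    rng = np.random.default_rng(seed)
    excited = False
    time_excited = 0.0
    for _ in range(n_steps):
        rate = gamma * (nbar + 1.0) if excited else gamma * nbar
        if rng.random() < rate * dt:   # first-order jump probability per step
            excited = not excited
        if excited:
            time_excited += dt
    return time_excited / (n_steps * dt)

frac = mcwf_two_level()
print(frac)  # close to 1/3 for nbar = 1
```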
Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.
2008-10-31
Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.
Monte Carlo study of conservative transport in heterogeneous dual-porosity media
NASA Astrophysics Data System (ADS)
Huang, Hai; Hassan, Ahmed E.; Hu, Bill X.
2003-05-01
In this study, a Monte Carlo simulation method is applied to study groundwater flow and solute transport in heterogeneous, dual-porosity media. Both the hydraulic conductivity and the interregional mass diffusion rate are assumed to be spatial random variables, and their random distributions are generated through a Fast Fourier Transform (FFT) technique. A block-centered finite difference (FD) method is used to solve the flow equation. Based on the generated flow fields, a random walk particle-tracking algorithm is invoked to study the solute transport. The mass diffusion between the mobile and immobile water regions is simulated by a two-state, homogeneous, continuous-time Markov chain. The Monte Carlo simulation results are compared to those obtained through the first-order, Eulerian perturbation method. The comparison shows that the first-order analytical method is robust for predicting mean concentration in mildly heterogeneous dual-porosity media. However, large deviations are observed between the analytical and Monte Carlo results for transport in moderately to highly heterogeneous media. The Monte Carlo method is also used to study the variance of the solute flux through a control plane.
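The mobile/immobile particle tracking with Markov-chain state switching can be sketched as follows (rates, time step, and transport parameters are invented for illustration): mobile particles advect and disperse, immobile particles stay put, and first-order transition probabilities switch particles between the two states each step.

```python
import numpy as np

def track_particles(n=2000, t_end=50.0, dt=0.1, v=1.0, D=0.1,
                    k_mi=0.2, k_im=0.1, seed=0):
    """Random-walk particle tracking with a two-state (mobile/immobile)
    continuous-time Markov chain, discretized with first-order switching
    probabilities. k_mi: mobile->immobile rate; k_im: immobile->mobile."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    mobile = np.ones(n, dtype=bool)
    t = 0.0
    while t < t_end:
        # Advection + Gaussian dispersion step for mobile particles only.
        x[mobile] += v * dt + np.sqrt(2 * D * dt) * rng.normal(size=mobile.sum())
        # Markov-chain state switching.
        switch_to_im = mobile & (rng.random(n) < k_mi * dt)
        switch_to_mo = ~mobile & (rng.random(n) < k_im * dt)
        mobile ^= switch_to_im | switch_to_mo
        t += dt
    return x

x = track_particles()
print(x.mean())  # retarded relative to v*t_end because of immobile periods
```

The equilibrium mobile fraction is k_im / (k_mi + k_im), so the plume's mean position lags well behind the purely mobile prediction v*t_end.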
Bouchard, Hugo; Bielajew, Alex
2015-07-01
To establish a theoretical framework for generalizing Monte Carlo transport algorithms by adding external electromagnetic fields to the Boltzmann radiation transport equation in a rigorous and consistent fashion. Using first principles, the Boltzmann radiation transport equation is modified by adding a term describing the variation of the particle distribution due to the Lorentz force. The implications of this new equation are evaluated by investigating the validity of Fano's theorem. Additionally, Lewis' approach to multiple scattering theory in infinite homogeneous media is redefined to account for the presence of external electromagnetic fields. The equation is modified and yields a description consistent with the deterministic laws of motion as well as probabilistic methods of solution. The time-independent Boltzmann radiation transport equation is generalized to account for the electromagnetic forces in an additional operator similar to the interaction term. Fano's and Lewis' approaches are stated in this new equation. Fano's theorem is found not to apply in the presence of electromagnetic fields. Lewis' theory for electron multiple scattering and moments, accounting for the coupling between the Lorentz force and multiple elastic scattering, is found. However, further investigation is required to develop useful algorithms for Monte Carlo and deterministic transport methods. To test the accuracy of Monte Carlo transport algorithms in the presence of electromagnetic fields, the Fano cavity test, as currently defined, cannot be applied. Therefore, new tests must be designed for this specific application. A multiple scattering theory that accurately couples the Lorentz force with elastic scattering could improve Monte Carlo efficiency. The present study proposes a new theoretical framework to develop such algorithms. PMID:26061045
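The structure of the generalized equation can be written schematically; the notation below is ours and only sketches the form described (the usual streaming and collision terms plus an operator for the Lorentz force acting on the momentum variable), not the authors' exact expression.

```latex
% Time-independent Boltzmann transport equation with an external
% electromagnetic-field term (schematic; notation is ours):
%   streaming      +  Lorentz-force operator  +  removal  =  in-scattering + source
\hat{\boldsymbol{\Omega}}\cdot\nabla\psi
\;+\; \nabla_{\mathbf{p}}\cdot\!\big[\mathbf{F}_{L}\,\psi\big]
\;+\; \sigma_t\,\psi
\;=\; \int \sigma_s(\mathbf{p}'\!\to\mathbf{p})\,\psi(\mathbf{r},\mathbf{p}')\,\mathrm{d}\mathbf{p}'
\;+\; S,
\qquad
\mathbf{F}_{L} = q\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right).
```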
NASA Astrophysics Data System (ADS)
Sarria, D.; Blelly, P.-L.; Forme, F.
2015-05-01
Terrestrial gamma ray flashes are natural bursts of X and gamma rays, correlated to thunderstorms, that are likely to be produced at an altitude of about 10 to 20 km. After the emission, the flux of gamma rays is filtered and altered by the atmosphere and a small part of it may be detected by a satellite on low Earth orbit (RHESSI or Fermi, for example). Thus, only a residual part of the initial burst can be measured and most of the flux is made of scattered primary photons and of secondary emitted electrons, positrons, and photons. Trying to get information on the initial flux from the measurement is a very complex inverse problem, which can only be tackled by the use of a numerical model solving the transport of these high-energy particles. For this purpose, we developed a numerical Monte Carlo model which solves the transport in the atmosphere of both relativistic electrons/positrons and X/gamma rays. It makes it possible to track the photons, electrons, and positrons in the whole Earth environment (considering the atmosphere and the magnetic field) to get information on what affects the transport of the particles from the source region to the altitude of the satellite. We first present the MC-PEPTITA model, and then we validate it by comparison with a benchmark GEANT4 simulation with similar settings. Then, we show the results of a simulation close to Fermi event number 091214 in order to discuss some important properties of the photons and electrons/positrons that are reaching satellite altitude.
Time series analysis of Monte Carlo neutron transport calculations
NASA Astrophysics Data System (ADS)
Nease, Brian Robert
A time series based approach is applied to the Monte Carlo (MC) fission source distribution to calculate the non-fundamental mode eigenvalues of the system. The approach applies Principal Oscillation Patterns (POPs) to the fission source distribution, transforming the problem into a simple autoregressive order one (AR(1)) process. Proof is provided that the stationary MC process is linear to first order approximation, which is a requirement for the application of POPs. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern k-eigenvalue MC codes calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. The strength of this approach is contrasted against the Fission Matrix method (FMM) in terms of accuracy versus computer memory constraints. Multi-dimensional problems are considered since the approach has strong potential for use in reactor analysis, and the implementation of the method into production codes is discussed. Lastly, the appearance of complex eigenvalues is investigated and solutions are provided.
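The core statistical step can be illustrated on synthetic data (a deliberately simplified stand-in for the POPs-projected fission source): generate an AR(1) series whose coefficient plays the role of the eigenvalue ratio k1/k0, then recover that coefficient from the lag-1 autocorrelation.

```python
import numpy as np

def ar1_eigenvalue_ratio(ratio=0.6, n=200000, seed=2):
    """If a projected fission-source component behaves as
    y[t+1] = a*y[t] + noise with a = k1/k0, the lag-1 autocorrelation
    of the series recovers the eigenvalue ratio."""
    rng = np.random.default_rng(seed)
    y = np.empty(n)
    y[0] = 0.0
    noise = rng.normal(size=n)
    for t in range(1, n):
        y[t] = ratio * y[t - 1] + noise[t]
    # Lag-1 autocorrelation estimate.
    y0 = y - y.mean()
    return np.dot(y0[:-1], y0[1:]) / np.dot(y0, y0)

a_hat = ar1_eigenvalue_ratio()
print(a_hat)  # close to 0.6, the assumed k1/k0
```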
A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems
Keady, K P; Brantley, P
2010-03-04
Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. 
Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model) for deep penetration problems such as those examined in this paper. In this research, we investigate the application of a variant of the hybrid Monte Carlo-deterministic method proposed by Cooper and Larsen to global deep penetration problems involving binary stochastic media. To our knowledge, hybrid Monte Carlo-deterministic methods have not previously been applied to problems involving a stochastic medium. We investigate two approaches for computing the approximate deterministic estimate of the forward scalar flux distribution used to automatically generate the weight windows. The first approach uses the atomic mix approximation to the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. The second approach uses the Levermore-Pomraning model for the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. In both cases, we use Monte Carlo Algorithm B with weight windows automatically generated from the approximate forward scalar flux distribution to obtain the solution of the transport problem.
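The weight-window generation step can be caricatured in a few lines. The inverse-flux normalization follows the general Cooper-Larsen idea of distributing particles evenly; the bound factor beta, the function name, and the region structure are invented for illustration.

```python
import numpy as np

def weight_windows_from_flux(phi, beta=2.0):
    """Weight-window centres proportional to 1/phi (normalized so the
    source region has centre 1), with lower/upper bounds a factor beta
    about the centre. A schematic sketch, not the authors' exact recipe."""
    phi = np.asarray(phi, dtype=float)
    centers = phi[0] / phi          # particles gain weight deep in the problem
    return centers / beta, centers * beta

# Approximate deterministic flux estimate: roughly an order of magnitude
# attenuation per region (illustrative numbers).
phi = np.array([1.0, 1e-1, 1e-2, 1e-3])
lo, hi = weight_windows_from_flux(phi)
print(lo, hi)
```

Particles drifting into deep regions are split down to the local window, while light particles in shallow regions are rouletted up, evening out the population across the domain.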
Araki, Fujio; Hanyu, Yuji; Fukuoka, Miyoko; Matsumoto, Kenji; Okumura, Masahiko; Oguchi, Hiroshi [Department of Radiological Technology, Kumamoto University School of Health Sciences, 4-24-1, Kuhonji, Kumamoto, 862-0976 (Japan); Division of Radiation Oncology, Tokyo Women's Medical University Hospital, Tokyo, 162-8666 (Japan); Department of Central Radiology, Kinki University Hospital, Osaka, 589-8511 (Japan); Department of Central Radiology, Shinshu University Hospital, Matsumoto, 390-8621 (Japan)
2009-07-15
The purpose of this study is to calculate correction factors for plastic water (PW) and plastic water diagnostic-therapy (PWDT) phantoms in clinical photon and electron beam dosimetry using the EGSnrc Monte Carlo code system. A water-to-plastic ionization conversion factor k_pl for PW and PWDT was computed for several commonly used Farmer-type ionization chambers with different wall materials in the range of 4-18 MV photon beams. For electron beams, a depth-scaling factor c_pl and a chamber-dependent fluence correction factor h_pl for both phantoms were also calculated in combination with NACP-02 and Roos plane-parallel ionization chambers in the range of 4-18 MeV. The h_pl values for the plane-parallel chambers were evaluated from the electron fluence correction factor φ_pl^w and wall correction factors P_wall,w and P_wall,pl for a combination of water or plastic materials. The calculated k_pl and h_pl values were verified by comparison with the measured values. A set of k_pl values computed for the Farmer-type chambers was equal to unity within 0.5% for PW and PWDT in photon beams. The k_pl values also agreed within their combined uncertainty with the measured data. For electron beams, the c_pl values computed for PW and PWDT were from 0.998 to 1.000 and from 0.992 to 0.997, respectively, in the range of 4-18 MeV. The φ_pl^w values for PW and PWDT were from 0.998 to 1.001 and from 1.004 to 1.001, respectively, at a reference depth in the range of 4-18 MeV. The difference in P_wall between water and plastic materials for the plane-parallel chambers was 0.8% at a maximum. Finally, h_pl values evaluated for plastic materials were equal to unity within 0.6% for NACP-02 and Roos chambers. The h_pl values also agreed within their combined uncertainty with the measured data. 
The absorbed dose to water from ionization chamber measurements in PW and PWDT plastic materials corresponds to that in water within 1%. Both phantoms can thus be used as a substitute for water for photon and electron dosimetry.
Data decomposition of Monte Carlo particle transport simulations via tally servers
Romano, Paul K., E-mail: paul.k.romano@gmail.com [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)]; Siegel, Andrew R., E-mail: siegala@mcs.anl.gov [Argonne National Laboratory, Theory and Computing Sciences, 9700 S Cass Ave., Argonne, IL 60439 (United States)]; Forget, Benoit, E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)]; Smith, Kord, E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)]
2013-11-01
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
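The tracking-processor/tally-server split can be mimicked on a single machine with a thread and a queue (a sketch of the data flow only; the actual implementation communicates across compute nodes rather than threads): tracking code streams tally contributions to a server that owns the tally memory, instead of storing them locally.

```python
import queue
import threading

def tally_server(q, totals):
    """Receive (tally_id, contribution) messages and accumulate them,
    mimicking a dedicated tally server."""
    while True:
        msg = q.get()
        if msg is None:            # shutdown sentinel
            break
        tally_id, value = msg
        totals[tally_id] = totals.get(tally_id, 0.0) + value

q = queue.Queue()
totals = {}
server = threading.Thread(target=tally_server, args=(q, totals))
server.start()

# A tracking "processor" streams tally contributions as particles move,
# never holding the full tally array itself.
for history in range(1000):
    q.put(("flux_cell_7", 0.001))
q.put(None)
server.join()
print(totals["flux_cell_7"])
```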
Implicit Monte Carlo methods and non-equilibrium Marshak wave radiative transport
Lynch, J.E.
1985-01-01
Two enhancements to the Fleck implicit Monte Carlo method for radiative transport are described, for use in transparent and opaque media respectively. The first introduces a spectral mean cross section, which applies to pseudoscattering in transparent regions with a high frequency incident spectrum. The second provides a simple Monte Carlo random walk method for opaque regions, without the need for a supplementary diffusion equation formulation. A time-dependent transport Marshak wave problem of radiative transfer, in which a non-equilibrium condition exists between the radiation and material energy fields, is then solved. The results are compared to published benchmark solutions and to new discrete-ordinates (S_N) results, for both spatially integrated radiation-material energies versus time and spatially dependent temperature profiles. Multigroup opacities, which are independent of both temperature and frequency, are used in addition to a material specific heat which is proportional to the cube of the temperature. 7 refs., 4 figs.
Minimizing the cost of splitting in Monte Carlo radiation transport simulation
Juzaitis, R.J.
1980-10-01
A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally, as well as the time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S_n (discrete ordinates) solution technique, allowing for the prediction of computer cost (formulated as the product of sample variance and time per particle history, σ²_s·τ_p) associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed (e.g., deep penetration calculations).
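The quantity being optimized (variance times time per history) can be exercised in a toy deep-penetration model, invented here for illustration: a particle survives each half of a slab with probability exp(-tau_half), and survivors of the first half are split at the mid-plane surface into equal-weight copies.

```python
import math
import random

def estimate_transmission(n_hist, n_split, tau_half=3.0, seed=3):
    """Toy transmission estimate with geometric splitting at a mid-plane
    surface. Returns (mean, per-history score variance); multiplying the
    variance by the measured time per history would give the cost figure
    discussed above."""
    rng = random.Random(seed)
    p = math.exp(-tau_half)
    total = total_sq = 0.0
    for _ in range(n_hist):
        score = 0.0
        if rng.random() < p:                 # reaches the splitting surface
            for _ in range(n_split):         # split into n_split tracks
                if rng.random() < p:         # each crosses the second half
                    score += 1.0 / n_split
        total += score
        total_sq += score * score
    mean = total / n_hist
    var = total_sq / n_hist - mean * mean
    return mean, var

mean, var = estimate_transmission(n_hist=200000, n_split=20)
print(mean)  # close to exp(-6)
```

Sweeping n_split trades lower variance against more tracking work per history, which is exactly the trade-off the deterministic cost analysis predicts.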
penMesh - Monte Carlo Radiation Transport Simulation in a Triangle Mesh Geometry
Andreu Badal; Iacovos S. Kyprianou; Diem Phuc Banh; Aldo Badano; Josep Sempau
2009-01-01
We have developed a general-purpose Monte Carlo simulation code, called penMesh, that combines the accuracy of the radiation transport physics subroutines from PENELOPE and the flexibility of a geometry based on triangle meshes. While the geometric models implemented in most general-purpose codes-such as PENELOPE's quadric geometry-impose some limitations in the shape of the objects that can be simulated, triangle meshes
Development of A Monte Carlo Radiation Transport Code System For HEDS: Status Update
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Gabriel, Tony A.; Miller, Thomas M.
2003-01-01
Modifications of the Monte Carlo radiation transport code HETC are underway to extend the code to include transport of energetic heavy ions, such as are found in the galactic cosmic ray spectrum in space. The new HETC code will be available for use in radiation shielding applications associated with missions, such as the proposed manned mission to Mars. In this work the current status of code modification is described. Methods used to develop the required nuclear reaction models, including total, elastic and nuclear breakup processes, and their associated databases are also presented. Finally, plans for future work on the extended HETC code system and for its validation are described.
Time-implicit Monte-Carlo collision algorithm for particle-in-cell electron transport models
Cranfill, C.W.; Brackbill, J.U.; Goldman, S.R.
1985-01-01
A time-implicit Monte-Carlo collision algorithm has been developed to allow particle-in-cell electron transport models to be applied to arbitrarily collisional systems. The algorithm is formulated for electrons moving in response to electric and magnetic accelerations and subject to collisional drag and scattering due to a background plasma. The correct fluid or streaming transport results are obtained in the respective limits of strongly- or weakly-collisional systems, and reasonable behavior is produced even for time steps greatly exceeding the magnetic-gyration and collisional-scattering times.
A portable, parallel, object-oriented Monte Carlo neutron transport code in C++
Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States)]; Nolen, S.D. [Texas A and M Univ., College Station, TX (United States); Los Alamos National Lab., NM (United States)]
1997-05-01
We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.
The role of plasma evolution and photon transport in optimizing future advanced lithography sources
Harilal, S. S.
Sources of photons for advanced lithography are currently based on small liquid tin droplets as targets, which offer many advantages, including the generation of stable continuous targets at high repetition rates and a larger photon collection angle.
NASA Astrophysics Data System (ADS)
Sheikh-Bagheri, Daryoush
1999-12-01
BEAM is a general purpose EGS4 user code for simulating radiotherapy sources (Rogers et al., Med. Phys. 22, 503-524, 1995). The BEAM code is optimized by first minimizing unnecessary electron transport (a factor of 3 improvement in efficiency). The efficiency of the uniform bremsstrahlung splitting (UBS) technique is assessed and found to be 4 times more efficient. The Russian Roulette technique used in conjunction with UBS is substantially modified to make simulations an additional 2 times more efficient. Finally, a novel and robust technique, called selective bremsstrahlung splitting (SBS), is developed and shown to improve the efficiency of photon beam simulations by an additional factor of 3-4, depending on the end-point considered. The optimized BEAM code is benchmarked by comparing calculated and measured ionization distributions in water from the 10 and 20 MV photon beams of the NRCC linac. Unlike previous calculations, the incident e- energy is known independently to 1%, the entire extra-focal radiation is simulated, and e- contamination is accounted for. Both beams use clinical jaws, whose dimensions are accurately measured, and which are set for a 10 x 10 cm2 field at 110 cm. At both energies, the calculated and the measured values of ionization on the central axis in the buildup region agree within 1% of maximum dose. The agreement is well within statistics elsewhere on the central axis. Ionization profiles match within 1% of maximum dose, except at the geometrical edges of the field, where the disagreement is up to 5% of dose maximum. Causes for this discrepancy are discussed. The benchmarked BEAM code is then used to simulate beams from the major commercial medical linear accelerators. The off-axis factors are matched within statistical uncertainties, for most of the beams at the 1σ level and for all at the 2σ level. The calculated and measured depth-dose data agree within 1% (local dose), at about 1% (1σ level) statistics, at all depths past the depth of maximum dose for almost all beams. The calculated photon spectra and average energy distributions are compared to those published by Mohan et al. and decomposed into direct and scattered photon components.
A Macro-Monte Carlo method for the simulation of diffuse light transport in tissue
Finlay, Jarod C.; Zhu, Timothy C
2015-01-01
The Monte Carlo (MC) method of calculating light distributions in turbid media such as tissue has become the gold standard, especially in complex geometries and heterogeneous tissue. The utility of the MC method, however, is limited by its computational intensity. In an effort to reduce the time needed for MC calculations, we have adapted a macro-Monte Carlo (MMC) method (Neuenschwander et al. 1995, Phys. Med. Biol. 40, 543-574) to the solution of tissue optics problems. Traditional MC routines trace individual photons step-by-step through the tissue. Instead, the MMC approach relies on a data set consisting of spheres in which the light absorbed in each voxel is pre-calculated using a traditional MC routine. At each MMC step, the pre-calculated absorbed light dose in the appropriate sphere, aligned to the current position and direction of the photon, is recorded in the dose matrix. The position and direction of the photon exiting the sphere are chosen from the exit distribution of the pre-calculated sphere, and the process is repeated. By choosing the size of the pre-calculated sphere appropriately, arbitrarily complex boundary geometries can be simulated. We compare the accuracy and calculation time of the MMC method with a traditional MC algorithm for a variety of tissue optical properties and geometries. We find that the MMC algorithm can increase the speed of calculation by as much as two orders of magnitude, depending on the optical properties being simulated, without a significant loss in accuracy.
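The sphere-lookup stepping can be sketched as follows. The "pre-calculated" sphere data below are faked analytically and the geometry is a 1-D caricature (depth and direction cosine only); in the actual method the lookup comes from a conventional MC run inside the sphere and the transport is fully 3-D.

```python
import numpy as np

def build_sphere_lookup(mu_a=0.2, radius=0.5):
    """Stand-in for the pre-calculated sphere data set: an absorbed
    fraction and a tabulated exit-angle distribution for one macro step
    (illustrative numbers, not real tabulated data)."""
    absorbed_frac = 1.0 - np.exp(-mu_a * radius)
    exit_cos = np.linspace(-1.0, 1.0, 201)      # tabulated exit cos(theta)
    pdf = np.exp(3.0 * exit_cos)                # forward-peaked toy distribution
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return absorbed_frac, exit_cos, cdf

def macro_step(z, mu, rng, lookup, radius=0.5):
    """One macro step: jump a full sphere radius along the current
    direction cosine, report the pre-tabulated absorbed fraction, and
    draw the exit direction from the tabulated CDF."""
    absorbed_frac, exit_cos, cdf = lookup
    z_new = z + radius * mu
    mu_new = np.interp(rng.random(), cdf, exit_cos)
    return z_new, mu_new, absorbed_frac

rng = np.random.default_rng(5)
lookup = build_sphere_lookup()
z, mu, w, dose = 0.0, 1.0, 1.0, 0.0
for _ in range(20):
    z, mu, frac = macro_step(z, mu, rng, lookup)
    dose += w * frac       # deposit the pre-calculated dose for this step
    w *= 1.0 - frac        # attenuate the photon weight
print(z, dose)
```

Energy is conserved by construction: the deposited dose plus the remaining photon weight always sums to the launched weight.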
Light transport and lasing in complex photonic structures
NASA Astrophysics Data System (ADS)
Liew, Seng Fatt
Complex photonic structures refer to composite optical materials with dielectric constant varying on length scales comparable to optical wavelengths. Light propagation in such heterogeneous composites differs greatly from that in homogeneous media due to scattering of light in all directions. Interference of these scattered light waves gives rise to many fascinating phenomena, and the field has become a fast-growing research area, both for its fundamental physics and for its practical applications. In this thesis, we have investigated the optical properties of photonic structures with different degrees of order, ranging from periodic to random. The first part of this thesis consists of numerical studies of the photonic band gap (PBG) effect in structures from 1D to 3D. From these studies, we have observed that the PBG effect in a 1D photonic crystal is robust against uncorrelated disorder due to preservation of long-range positional order. However, in higher dimensions, short-range positional order alone is sufficient to form PBGs in 2D and 3D photonic amorphous structures (PASs). We have identified several parameters, including dielectric filling fraction and degree of order, that can be tuned to create a broad isotropic PBG. The largest PBG is produced by the dielectric networks due to local uniformity in their dielectric constant distribution. In addition, we also show that deterministic aperiodic structures (DASs) such as the golden-angle spiral and topological defect structures can support a wide PBG, and their optical resonances contain unexpected features compared to those in photonic crystals. Another growing research field based on complex photonic structures is the study of structural color in animals and plants. Previous studies have shown that non-iridescent color can be generated from PASs via single or double scattering. For a better understanding of the coloration mechanisms, we have measured the wavelength-dependent scattering length from the biomimetic samples.
Our theoretical modeling and analysis explains why single scattering of light is dominant over multiple scattering in similar biological structures and is responsible for color generation. In collaboration with evolutionary biologists, we examine how closely related species and populations of butterflies have evolved their structural color. We have used artificial selection on a lab model butterfly to evolve violet color from an ultra-violet brown color. The same coloration mechanism is found in other blue/violet species that have evolved their color in nature, which implies the same evolution path for their nanostructure. While the absorption of light is ubiquitous in nature and in applications, the question remains of how absorption modifies transmission in random media. Therefore, we numerically study the effects of optical absorption on the highest transmission states in a two-dimensional disordered waveguide. Our results show that strong absorption turns the highest transmission channel in random media from diffusive to ballistic-like transport. Finally, we have demonstrated lasing mode selection in a nearly circular semiconductor microdisk laser by shaping the spatial profile of the pump beam. Despite strong mode overlap, selective pumping suppresses the competing lasing modes by either increasing their thresholds or reducing their power slopes. As a result, we can switch both the lasing frequency and the output direction. This powerful technique can have potential application as an on-chip tunable light source.
Wright, Tracy; Lye, Jessica E; Ramanathan, Ganesan; Harty, Peter D; Oliver, Chris; Webb, David V; Butler, Duncan J
2015-01-21
The Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) has established a method for ionisation chamber calibrations using megavoltage photon reference beams. The new method will reduce the calibration uncertainty compared to a (60)Co calibration combined with the TRS-398 energy correction factor. The calibration method employs a graphite calorimeter and a Monte Carlo (MC) conversion factor to convert the absolute dose to graphite to absorbed dose to water. EGSnrc is used to model the linac head and doses in the calorimeter and water phantom. The linac model is validated by comparing measured and modelled PDDs and profiles. The relative standard uncertainties in the calibration factors at the ARPANSA beam qualities were found to be 0.47% at 6 MV, 0.51% at 10 MV and 0.46% for the 18 MV beam. A comparison with the Bureau International des Poids et Mesures (BIPM) as part of the key comparison BIPM.RI(I)-K6 gave results of 0.9965(55), 0.9924(60) and 0.9932(59) for the 6, 10 and 18 MV beams, respectively, with all beams within 1σ of the participant average. The measured kQ values for an NE2571 Farmer chamber were found to be lower than those in TRS-398 but are consistent with published measured and modelled values. Users can expect a shift in the calibration factor at user energies of an NE2571 chamber between 0.4-1.1% across the range of calibration energies compared to the current calibration method. PMID:25565406
Monte Carlo Method for Electron Transport Simulation in SF6-CO2 Gas Mixtures
NASA Astrophysics Data System (ADS)
Xiao, Deng-Ming; Xu, Xin; Yang, Jing-lin
2004-02-01
The Monte Carlo method is used for the simulation of the electron transport of SF6-CO2 gas mixtures in a uniform electric field. The electron swarm behavior of SF6-CO2 gas mixtures is calculated and analyzed over the E/N range of 272.83-364.51 Td (1 Td = 10⁻²¹ V·m²) and compared with the experimental results. The result of the Monte Carlo simulation shows that the present set of cross sections of SF6 and CO2, revised according to the experimental results, gives values of swarm parameters such as ionization and electron attachment coefficients, drift velocity and longitudinal diffusion coefficient which are in excellent agreement with the respective measurement results over a relatively wide range of E/N.
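Swarm simulations of this kind typically sample electron free flights with the standard null-collision technique. The snippet below shows that sampling step in generic form; the rates are placeholders, not the SF6-CO2 cross-section set used in the paper.

```python
import math
import random

def sample_collision(nu_of_e, nu_max, energy, rng):
    """Null-collision free-flight sampling.

    Draw the flight time from the constant upper-bound collision rate
    nu_max, then accept the event as a real collision with probability
    nu(E)/nu_max; otherwise it is a fictitious (null) collision that
    leaves the electron's velocity unchanged."""
    t = -math.log(rng.random()) / nu_max       # exponential flight time
    is_real = rng.random() < nu_of_e(energy) / nu_max
    return t, is_real
```

The mean flight time is 1/nu_max and the accepted fraction is nu(E)/nu_max, so the scheme reproduces the true collision rate without solving for the energy-dependent flight time directly.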
A bone composition model for Monte Carlo x-ray transport simulations
Zhou Hu; Keall, Paul J.; Graves, Edward E. [Department of Radiation Oncology and Department of Molecular Imaging Program at Stanford, Stanford University, Stanford, California 94305 (United States)
2009-03-15
In the megavoltage energy range, although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single-bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in mass attenuation coefficients of the bones is several times greater, and the expected error from applying this type of model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of other elements. To test this model using the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from this formula and the radiation-tissue interaction cross-section data were calculated. The total energy released per unit mass of primary photons (terma) and Monte Carlo calculations performed using this model and the single-bone model were compared, which demonstrated that at kilovoltage energies the discrepancy could be more than 100% in bone dose and 30% in soft tissue dose. Percentage terma computed with the model agrees with that calculated from the published compositions to within 2.2% for the kV spectra and 1.5% for the MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams as well as for dosimetry of pediatric or animal subjects whose bone composition may differ substantially from that of adult human bones.
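The model's key step, fitting the calcium (and phosphorus) mass fraction as a polynomial function of bone density, can be reproduced with any least-squares fit. The density and composition pairs below are invented placeholders, not the literature values the authors used.

```python
import numpy as np

# Hypothetical (density, calcium mass fraction) reference points;
# the published model fits Ca and P fractions against bone density.
density = np.array([1.18, 1.33, 1.46, 1.61, 1.75, 1.92])   # g/cm^3
ca_frac = np.array([0.076, 0.132, 0.176, 0.215, 0.248, 0.282])

coeffs = np.polyfit(density, ca_frac, 2)   # quadratic in density

def calcium_fraction(rho):
    """Calcium mass fraction of a model bone of density rho (g/cm^3)."""
    return float(np.polyval(coeffs, rho))
```

Given the fitted polynomial, any intermediate bone density maps to a composition, which is how a continuum of CT-derived bone densities can be assigned cross-section data.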
Photon energy-modulated radiotherapy: Monte Carlo simulation and treatment planning study
Park, Jong Min; Kim, Jung-in; Heon Choi, Chang; Chie, Eui Kyu; Kim, Il Han; Ye, Sung-Joon [Interdisciplinary Program in Radiation Applied Life Science, Seoul National University, Seoul 110-744; Department of Radiation Oncology, Seoul National University Hospital, Seoul 110-744; Department of Intelligent Convergence Systems, Seoul National University, Seoul 151-742 (Korea, Republic of)]
2012-03-15
Purpose: To demonstrate the feasibility of photon energy-modulated radiotherapy during beam-on time. Methods: A cylindrical device made of aluminum was conceptually proposed as an energy modulator. The frame of the device was connected with 20 tubes through which mercury could be injected or drained to adjust the thickness of mercury along the beam axis. In Monte Carlo (MC) simulations, the flattening filter of a 6 or 10 MV linac was replaced with the device. The thickness of mercury inside the device varied from 0 to 40 mm at field sizes of 5 × 5 cm² (FS5), 10 × 10 cm² (FS10), and 20 × 20 cm² (FS20). At least 5 billion histories were followed for each simulation to create phase space files at 100 cm source-to-surface distance (SSD). In-water beam data were acquired by additional MC simulations using the above phase space files. A treatment planning system (TPS) was commissioned to generate a virtual machine using the MC-generated beam data. Intensity-modulated radiation therapy (IMRT) plans for six clinical cases were generated using conventional 6 MV, 6 MV flattening-filter-free, and energy-modulated photon beams of the virtual machine. Results: As the thickness of mercury increased, percentage depth doses (PDDs) of the modulated 6 and 10 MV beams beyond the depth of dose maximum increased continuously. The PDD increase at depths of 10 and 20 cm for modulated 6 MV was 4.8% and 5.2% at FS5, 3.9% and 5.0% at FS10, and 3.2% and 4.9% at FS20 as the mercury thickness increased from 0 to 20 mm. The corresponding increases for modulated 10 MV were 4.5% and 5.0% at FS5, 3.8% and 4.7% at FS10, and 4.1% and 4.8% at FS20 as the mercury thickness increased from 0 to 25 mm. The outputs of modulated 6 MV with 20 mm mercury and of modulated 10 MV with 25 mm mercury were reduced to 30% and 56% of the conventional linac, respectively.
The energy-modulated IMRT plans delivered lower integral doses than the 6 MV IMRT or 6 MV flattening-filter-free plans for tumors located in the periphery, while maintaining similar target coverage, homogeneity, and conformity. Conclusions: The MC study of the designed energy modulator demonstrated the feasibility of energy-modulated photon beams available during beam-on time. The planning study showed an advantage of energy- and intensity-modulated radiotherapy in terms of integral dose without sacrificing IMRT plan quality.
A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX
Jabbari, Keyvan; Seuntjens, Jan
2014-01-01
An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for generation of tracks. A set of data including the track of the particle was produced in each particular material (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code is evaluated against MCNPX as a reference code. While the analytical pencil beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated from MCNPX by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX. With the fast MC code developed in this work, it takes less than 2 minutes to calculate dose for 10⁶ particles on an Intel Core 2 Duo 2.66 GHz desktop computer. PMID:25190994
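A minimal sketch of the pre-generated-track idea: tracks are sampled once per material in an offline stage, then only replayed (ray-traced onto the dose grid) at run time. Everything below, including the track format, is invented for illustration and is not the MCNPX data layout.

```python
import random

def build_track_library(n_tracks, n_steps, rng):
    """Offline stage: pre-generate tracks as lists of
    (step_length, fractional_energy_loss) pairs for one material."""
    return [[(rng.uniform(0.1, 1.0), rng.uniform(0.01, 0.05))
             for _ in range(n_steps)]
            for _ in range(n_tracks)]

def deposit(track, energy, dose_grid, voxel_size):
    """Run-time stage: replay one stored track onto a 1D dose grid.
    No physics is sampled here, which is where the speedup comes from."""
    z = 0.0
    for step, frac in track:
        z += step
        idx = min(int(z / voxel_size), len(dose_grid) - 1)
        dose_grid[idx] += energy * frac
        energy *= 1.0 - frac
    return energy   # residual energy leaving the grid
```

Energy is conserved by construction: the deposited dose plus the residual energy always equals the incident energy, which is a convenient invariant to test against a reference code.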
Photon dose kernels dataset for nuclear medicine dosimetry, using the GATE Monte Carlo toolkit
Panagiotis Papadimitroulas; George Loudos; Panagiotis Georgoulias; George C. Kagadis
2011-01-01
Photon dose point kernels (DPKs) were generated using the GATE toolkit for different media and for radionuclides of interest in nuclear medicine. In the present work the primary photon contribution of different isotopes in different media is calculated, since this dataset is not available in the literature according to our knowledge. The generated dataset consists of photon DPKs for some
Jacques, Steven L.
2014-01-01
The generation of photoacoustic signals for imaging objects embedded within tissues is dependent on how well light can penetrate to and deposit energy within an optically absorbing object, such as a blood vessel. This report couples a 3D Monte Carlo simulation of light transport to stress wave generation to predict the acoustic signals received by a detector at the tissue surface. The Monte Carlo simulation allows modeling of optically heterogeneous tissues, and a simple MATLAB™ acoustic algorithm predicts signals reaching a surface detector. An example simulation considers a skin with a pigmented epidermis, a dermis with a background blood perfusion, and a 500-µm-dia. blood vessel centered at a 1-mm depth in the skin. The simulation yields acoustic signals received by a surface detector, which are generated by a pulsed 532-nm laser exposure before and after inserting the blood vessel. A MATLAB™ version of the acoustic algorithm and a link to the 3D Monte Carlo website are provided. PMID:25426426
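The coupling step described here, turning a light-transport result into an acoustic source, reduces at each voxel to p0 = Γ·μa·Φ. The sketch below uses a crude Beer-Lambert fluence in place of the 3D Monte Carlo output; all optical values are illustrative, not from the report.

```python
import math

def p0_profile(depths_cm, mu_a_bg, mu_a_vessel, vessel_depth, vessel_radius,
               gruneisen=0.2, surface_fluence=1.0):
    """Initial photoacoustic pressure p0(z) = Gamma * mu_a(z) * Phi(z)
    along a depth axis containing one absorbing vessel. Phi here is a
    crude Beer-Lambert estimate; the paper obtains it from 3D MC."""
    p0 = []
    for z in depths_cm:
        inside = abs(z - vessel_depth) <= vessel_radius
        mu_a = mu_a_vessel if inside else mu_a_bg
        phi = surface_fluence * math.exp(-mu_a_bg * z)  # crude attenuation
        p0.append(gruneisen * mu_a * phi)
    return p0
```

Because the vessel's μa greatly exceeds the background, p0 spikes at the vessel depth, which is the contrast mechanism the acoustic detector picks up.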
Monte Carlo Simulation of Electron Transport in 4H- and 6H-SiC
NASA Astrophysics Data System (ADS)
Sun, C. C.; You, A. H.; Wong, E. K.
2010-07-01
A Monte Carlo (MC) simulation of electron transport properties in the high-electric-field region of 4H- and 6H-SiC is presented. The MC model includes two non-parabolic conduction bands. Based on the material parameters, the electron scattering rates, including polar optical phonon scattering, optical phonon scattering and acoustic phonon scattering, are evaluated. The electron drift velocity, energy and free flight time are simulated as a function of applied electric field at an impurity concentration of 1×10¹⁸ cm⁻³ at room temperature. The simulated dependence of drift velocity on electric field is in good agreement with experimental results found in the literature. The saturation velocities for both polytypes are close, but the scattering rates are much more pronounced for 6H-SiC. Our simulation model clearly shows complete electron transport properties in 4H- and 6H-SiC.
Lateral electron transport in monolayers of short chains at interfaces: A Monte Carlo study.
George, Christopher B. [Northwestern Univ., Evanston, IL (United States); Szleifer, Igal [Northwestern Univ., Evanston, IL (United States); Ratner, Mark A. [Northwestern Univ., Evanston, IL (United States)
2010-10-01
Using Monte Carlo simulations, we study lateral electronic diffusion in dense monolayers composed of a mixture of redox-active and redox-passive chains tethered to a surface. Two charge transport mechanisms are considered: the physical diffusion of electroactive chains and electron hopping between redox-active sites. Results indicate that by varying the monolayer density, the mole fraction of electroactive chains, and the electron hopping range, the dominant charge transport mechanism can be changed. For high-density monolayers in a semi-crystalline phase, electron diffusion proceeds via electron hopping almost exclusively, leading to static percolation behavior. In fluid monolayers, the diffusion of chains may contribute more to the overall electronic diffusion, reducing the observed static percolation effects.
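The static-percolation limit described for semi-crystalline monolayers can be mimicked with a toy hopping walk on a frozen 1D lattice; lattice size, active-site fraction, and hop range below are arbitrary illustration values, not parameters from the study.

```python
import random

def hop_walk(n_sites, frac_active, hop_range, n_hops, rng):
    """Electron hopping on a frozen 1D lattice of redox sites.

    Chains do not diffuse (the semi-crystalline limit), so the electron
    can only hop between redox-active sites within hop_range; if its
    cluster has no reachable active site, transport stalls, which is
    the static-percolation behavior seen in the simulations."""
    active = [rng.random() < frac_active for _ in range(n_sites)]
    active[0] = True              # start the walker on an active site
    pos = 0
    for _ in range(n_hops):
        targets = [j for j in range(max(0, pos - hop_range),
                                    min(n_sites, pos + hop_range + 1))
                   if j != pos and active[j]]
        if not targets:
            break                 # isolated cluster: percolation blocked
        pos = rng.choice(targets)
    return pos
```

Sweeping frac_active in a model like this reveals a percolation threshold below which the walker's displacement saturates, the signature the abstract attributes to high-density monolayers.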
Dynamic Monte-Carlo modeling of hydrogen isotope reactive diffusive transport in porous graphite
NASA Astrophysics Data System (ADS)
Schneider, R.; Rai, A.; Mutzke, A.; Warrier, M.; Salonen, E.; Nordlund, K.
2007-08-01
An equal mixture of deuterium and tritium will be the fuel used in a fusion reactor. It is important to study the recycling and mixing of these hydrogen isotopes in graphite from several points of view: (i) the impact on the ratio of deuterium to tritium in a reactor, (ii) the continued use of graphite as a first-wall and divertor material, and (iii) reactions with carbon atoms and the transport of hydrocarbons, which provide insight into chemical erosion. Dynamic Monte-Carlo techniques are used to study the reactive-diffusive transport of hydrogen isotopes and interstitial carbon atoms in a 3-D porous graphite structure irradiated with hydrogen and deuterium; the results are compared with published experimental results for hydrogen re-emission and isotope exchange.
Monte Carlo Neutrino Transport Through Remnant Disks from Neutron Star Mergers
Richers, S; O'Connor, Evan; Fernandez, Rodrigo; Ott, Christian
2015-01-01
We present Sedonu, a new open source, steady-state, special relativistic Monte Carlo (MC) neutrino transport code, available at bitbucket.org/srichers/sedonu. The code calculates the energy- and angle-dependent neutrino distribution function on fluid backgrounds of any number of spatial dimensions, calculates the rates of change of fluid internal energy and electron fraction, and solves for the equilibrium fluid temperature and electron fraction. We apply this method to snapshots from two dimensional simulations of accretion disks left behind by binary neutron star mergers, varying the input physics and comparing to the results obtained with a leakage scheme for the case of a central black hole and a central hypermassive neutron star. Neutrinos are guided away from the densest regions of the disk and escape preferentially around 45 degrees from the equatorial plane. Neutrino heating is strengthened by MC transport a few scale heights above the disk midplane near the innermost stable circular orbit, potentiall...
Comparison of generalized transport and Monte-Carlo models of the escape of a minor species
NASA Technical Reports Server (NTRS)
Demars, H. G.; Barakat, A. R.; Schunk, R. W.
1993-01-01
The steady-state diffusion of a minor species through a static background species is studied using a Monte Carlo model and a generalized 16-moment transport model. The two models are in excellent agreement in the collision-dominated region and in the 'transition region'. In the 'collisionless' region the 16-moment solution contains two singularities, and physical meaning cannot be assigned to the solution in their vicinity. In all regions, agreement between the models is best for the distribution function and for the lower-order moments, and is poorer for higher-order moments. Moments of order higher than the heat flow, and hence beyond the level of description provided by the transport model, have a noticeable effect on the shape of the distribution functions in the collisionless region.
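The comparison hinges on computing velocity-space moments of the simulated distribution. The helper below computes the low-order central moments (drift, temperature-like, heat-flow-like) from Monte Carlo samples; the Maxwellian test distribution is a stand-in, not data from the study.

```python
import random

def central_moments(samples):
    """Drift (mean), temperature-like variance, and heat-flow-like
    third central moment of a 1D velocity sample: the low-order
    moments that a 16-moment transport description evolves."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((v - mean) ** 2 for v in samples) / n
    q = sum((v - mean) ** 3 for v in samples) / n
    return mean, var, q
```

For a drifting Maxwellian the third central moment vanishes; a persistently nonzero value signals the non-Maxwellian tails that develop in the collisionless region, where the two models diverge.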
The Implementation of Photon Polarization into the Mercury Transport Code
Windsor, Ethan
2014-06-04
direction, and a polarization ellipticity. These new variables are tracked throughout each particle’s history. They impact and are impacted by interactions with the medium. The determination of how these variables affect the photon’s interactions...
Chow, James C. L.; Leung, Michael K. K.; Lindsay, Patricia E.; Jaffray, David A. [Radiation Medicine Program, Princess Margaret Hospital, University of Toronto; Department of Radiation Oncology, Department of Medical Biophysics, and Department of Radiation Physics, Ontario Cancer Institute, University of Toronto, Toronto, Ontario (Canada); Department of Physics, Ryerson University, Toronto, Ontario (Canada)]
2010-10-15
Purpose: The impact of photon beam energy and tissue heterogeneities on dose distributions and dosimetric characteristics such as point dose, mean dose, and maximum dose was investigated in the context of small-animal irradiation using Monte Carlo simulations based on the EGSnrc code. Methods: Three Monte Carlo mouse phantoms, namely, heterogeneous, homogeneous, and bone homogeneous were generated based on the same mouse computed tomography image set. These phantoms were generated by overriding the tissue type of none of the voxels (heterogeneous), all voxels (homogeneous), and only the bone voxels (bone homogeneous) to that of soft tissue. Phase space files of the 100 and 225 kVp photon beams based on a small-animal irradiator (XRad225Cx, Precision X-Ray Inc., North Branford, CT) were generated using BEAMnrc. A 360 deg. photon arc was simulated and three-dimensional (3D) dose calculations were carried out using the DOSXYZnrc code through DOSCTP in the above three phantoms. For comparison, the 3D dose distributions, dose profiles, mean, maximum, and point doses at different locations such as the isocenter, lung, rib, and spine were determined in the three phantoms. Results: The dose gradient resulting from the 225 kVp arc was found to be steeper than for the 100 kVp arc. The mean dose was found to be 1.29 and 1.14 times higher for the heterogeneous phantom when compared to the mean dose in the homogeneous phantom using the 100 and 225 kVp photon arcs, respectively. The bone doses (rib and spine) in the heterogeneous mouse phantom were about five (100 kVp) and three (225 kVp) times higher when compared to the homogeneous phantom. However, the lung dose did not vary significantly between the heterogeneous, homogeneous, and bone homogeneous phantom for the 225 kVp compared to the 100 kVp photon beams. Conclusions: A significant bone dose enhancement was found when the 100 and 225 kVp photon beams were used in small-animal irradiation. 
This dosimetric effect, due to the presence of the bone heterogeneity, was more significant than that due to the lung heterogeneity. Hence, for kV photon energies of the range used in small-animal irradiation, the increase of the mean and bone dose due to the photoelectric effect could be a dosimetric concern.
Jabbari, Keyvan; Keall, Paul; Seuntjens, Jan
2009-02-01
The purpose of this work is to revisit the impediments and characteristics of fast Monte Carlo techniques for applications in radiation therapy treatment planning using new methods of utilizing pregenerated electron tracks. The limitations of various techniques for the improvement of speed and accuracy of electron transport have been evaluated. A method is proposed that takes advantage of large available memory in current computer hardware for extensive generation of precalculated data. Primary tracks of electrons are generated in the middle of homogeneous materials (water, air, bone, lung) and with energies between 0.2 and 18 MeV using the EGSnrc code. Secondary electrons are not transported, but their position, energy, charge, and direction are saved and used as a primary particle. Based on medium type and incident electron energy, a track is selected from the precalculated set. The performance of the method is tested in various homogeneous and heterogeneous configurations and the results were generally within 2% compared to EGSnrc but with a 40-60 times speed improvement. In a second stage the authors studied the obstacles for further increased speed-ups in voxel geometries by including ray-tracing and particle fluence information in the pregenerated track information. The latter method leads to speed increases of about a factor of 500 over EGSnrc for voxel-based geometries. In both approaches, no physical calculation is carried out during the runtime phase after the pregenerated data has been stored even in the presence of heterogeneities. The precalculated data are generated for each particular material and this improves the performance of the precalculated Monte Carlo code both in terms of accuracy and speed. Precalculated Monte Carlo codes are accurate, fast, and physics independent and therefore applicable to different radiation types including heavy-charged particles. PMID:19291992
NASA Technical Reports Server (NTRS)
Norman, Ryan B.; Badavi, Francis F.; Blattnig, Steve R.; Atwell, William
2011-01-01
A deterministic suite of radiation transport codes, developed at NASA Langley Research Center (LaRC), which describe the transport of electrons, photons, protons, and heavy ions in condensed media is used to simulate exposures from spectral distributions typical of electrons, protons and carbon-oxygen-sulfur (C-O-S) trapped heavy ions in the Jovian radiation environment. The particle transport suite consists of a coupled electron and photon deterministic transport algorithm (CEPTRN) and a coupled light particle and heavy ion deterministic transport algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means for the spacecraft design community to rapidly perform numerous repetitive calculations essential for electron, proton and heavy ion radiation exposure assessments in complex space structures. In this paper, the radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the 105-day at-Europa mission fluence energy spectra provided by JPL are used to produce the corresponding dose-depth curve in silicon behind an aluminum shield of 100 mils (~0.7 g/cm²). The transport suite can also accept ray-traced thickness files from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point. In that regard, using a low-fidelity CAD model of the Galileo probe, the transport suite was verified by comparing with Monte Carlo (MC) simulations for orbits JOI-J35 of the Galileo extended mission (1996-2001).
For the upcoming EJSM mission with a potential launch date of 2020, the transport suite is used to compute the traditional aluminum-silicon dose-depth calculation as a standard shield-target combination output, as well as the shielding response of high-charge (Z) shields such as tantalum (Ta). Finally, a shield optimization algorithm is used to guide the instrument designer in the choice and analysis of graded-Z shields.
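As a zeroth-order sanity check on any dose-depth output of this kind, narrow-beam exponential attenuation gives the expected monotone falloff behind a shield. The μ/ρ value below is a placeholder for illustration, not a coefficient from the LaRC suite.

```python
import math

def depth_dose(depths_g_cm2, mu_over_rho=0.02, entrance_dose=1.0):
    """Narrow-beam exponential attenuation: dose(d) = D0 * exp(-(mu/rho)*d),
    with depth measured as areal density (g/cm^2) so the same curve
    applies to any shield material sharing the same mu/rho."""
    return [entrance_dose * math.exp(-mu_over_rho * d) for d in depths_g_cm2]
```

Working in areal density (g/cm²) is what lets an aluminum-equivalent depth-dose curve be compared against high-Z shields such as tantalum on a common axis.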
NASA Astrophysics Data System (ADS)
Badavi, Francis F.; Blattnig, Steve R.; Atwell, William; Nealy, John E.; Norman, Ryan B.
2011-02-01
A Langley Research Center (LaRC) developed deterministic suite of radiation transport codes describing the propagation of electrons, photons, protons and heavy ions in condensed media is used to simulate the exposure from the spectral distributions of these particles in the Jovian radiation environment. Based on measurements by the Galileo probe (1995-2003) heavy ion counter (HIC), the choice of trapped heavy ions is limited to carbon, oxygen and sulfur (C-O-S). The deterministic particle transport suite consists of a coupled electron-photon algorithm (CEPTRN) and a coupled light-particle and heavy ion algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide the spacecraft design community with a means to rapidly perform the numerous repetitive calculations essential for electron, photon, proton and heavy ion exposure assessment in complex space structures. In this paper, the reference radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron and proton spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM) the JPL-provided Europa mission fluence spectrum is used to produce the corresponding depth-dose curve in silicon behind a default aluminum shield of 100 mils (~0.7 g/cm2). The transport suite can also accept a ray-traced thickness file from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point within the interior of the vehicle. 
In that regard, using a low-fidelity CAD model of the Galileo probe generated by the authors, the transport suite was verified against Monte Carlo (MC) simulations for orbits JOI-J35 of the Galileo extended mission. For the upcoming EJSM mission with an expected launch date of 2020, the transport suite is used to compute the depth-dose profile for the traditional aluminum-silicon standard shield-target combination, as well as to simulate the shielding response of a high atomic number (Z) material such as tantalum (Ta). Finally, a shield optimization algorithm is discussed which can guide instrument designers and fabrication personnel in the selection and analysis of graded-Z shields.
NASA Astrophysics Data System (ADS)
Hassan, Ahmed E.; Cushman, John H.; Delleur, Jacques W.
1997-11-01
A Monte Carlo simulation of flow and transport is employed to study tracer migration in porous media with evolving scales of heterogeneity (fractal media). Transport is studied with both conservative and reactive chemicals in media that possess physical as well as chemical heterogeneity. Linear kinetic equations are assumed to relate the sorbed phase and the aqueous phase concentrations. The fluctuating log conductivity possesses the power law spectrum of a fractional Brownian motion (fBm). Chemical heterogeneity is represented as spatially varying reaction rates that also are assumed to obey fBm statistics and may be correlated to the conductivity field. The model is based on a finite difference approximation to the flow problem and a random walk particle-tracking approach for solving the solute transport equation. The model is used to make comparisons with the nonlocal transport equations recently developed by Deng et al. [1993], and Hu et al. [1995, 1997]. The results presented herein support these nonlocal models for a wide range of heterogeneous systems. However, the infinite integral scale associated with the fractal conductivity has a significant effect on the prediction of the nonlocal theories. This suggests that integral scale should play a role in stochastic Eulerian perturbation theories. The importance of the local-scale dispersion depends to a great extent on the magnitude of the local dispersivities. The effect of neglecting local dispersion decreases with the decrease in local dispersivity.
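The random walk particle-tracking approach mentioned above can be sketched in a few lines. This is an illustrative 1-D sketch with made-up velocity and dispersion values, not the cited study's code:

```python
import numpy as np

def track_particles(n_particles, n_steps, v, D, dt, rng):
    """1-D random walk particle tracking: an advective step v*dt plus a
    dispersive step drawn from N(0, 2*D*dt), the standard random-walk scheme."""
    x = np.zeros(n_particles)
    for _ in range(n_steps):
        x += v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)
    return x

rng = np.random.default_rng(0)
# Illustrative values: v = 1 m/d, D = 0.01 m^2/d, total time t = 10 d
x = track_particles(10_000, 200, v=1.0, D=0.01, dt=0.05, rng=rng)
# The plume mean should approach v*t and its variance 2*D*t
```

In a full simulation the velocity at each particle position would come from the finite difference flow solution rather than being a constant.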
Multi-layer diffusion approximation for photon transport in biological tissue
Hollmann, Joseph
2009-06-02
A thesis by Joseph Hollmann, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, August 2007. Major subject: Biomedical Engineering.
Babich, L. P., E-mail: babich@elph.vniief.ru; Donskoy, E. N.; Kutsyk, I. M. [All-Russian Research Institute of Experimental Physics, Russian Federal Nuclear Center (Russian Federation)
2008-07-15
Monte Carlo simulations of transport of the bremsstrahlung produced by relativistic runaway electron avalanches are performed for altitudes up to the orbit altitudes where terrestrial gamma-ray flashes (TGFs) have been detected aboard satellites. The photon flux per runaway electron and angular distribution of photons on a hemisphere of radius similar to that of the satellite orbits are calculated as functions of the source altitude z. The calculations yield general results, which are recommended for use in TGF data analysis. The altitude z and polar angle are determined for which the calculated bremsstrahlung spectra and mean photon energies agree with TGF measurements. The correlation of TGFs with variations of the vertical dipole moment of a thundercloud is analyzed. We show that, in agreement with observations, the detected TGFs can be produced in the fields of thunderclouds with charges much smaller than 100 C and that TGFs are not necessarily correlated with the occurrence of blue jets and red sprites.
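The dependence of escaping photon flux on source altitude z can be illustrated with a toy exponential-atmosphere attenuation model. The scale height, sea-level density, and mass attenuation coefficient below are rough illustrative values, not those used in the cited Monte Carlo simulations, and scattering buildup is ignored:

```python
import numpy as np

def column_mass(z_km, scale_height_km=7.2, rho0_g_cm3=1.225e-3):
    """Vertical column mass (g/cm^2) above altitude z in an exponential atmosphere."""
    H_cm = scale_height_km * 1e5
    return rho0_g_cm3 * H_cm * np.exp(-(z_km * 1e5) / H_cm)

def transmission(z_km, mu_over_rho_cm2_g=0.05):
    """Fraction of vertically emitted photons reaching space (no scattering buildup)."""
    return np.exp(-mu_over_rho_cm2_g * column_mass(z_km))
```

As expected, raising the source altitude sharply increases the transmitted fraction, which is why the bremsstrahlung spectra detected in orbit constrain z.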
Schach Von Wittenau, Alexis E. (Livermore, CA)
2003-01-01
A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
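A sampling efficiency such as the 80% quoted above is simply the acceptance fraction of a rejection sampler. A minimal sketch, using a hypothetical triangular distribution rather than the accelerator phase-space model itself:

```python
import numpy as np

def rejection_sample(pdf, x_lo, x_hi, pdf_max, n, rng):
    """Draw n samples from pdf on [x_lo, x_hi] by rejection against a flat
    envelope; also return the overall sampling efficiency (accept fraction)."""
    accepted = []
    n_trials = 0
    while len(accepted) < n:
        x = rng.uniform(x_lo, x_hi)
        n_trials += 1
        if rng.uniform(0.0, pdf_max) < pdf(x):
            accepted.append(x)
    return np.array(accepted), n / n_trials

rng = np.random.default_rng(1)
# Triangular pdf p(x) = 2x on [0, 1]; the envelope area is 2, so efficiency -> 0.5
samples, eff = rejection_sample(lambda x: 2.0 * x, 0.0, 1.0, 2.0, 20_000, rng)
```

A tighter envelope raises the efficiency, which is the design goal behind the compact phase-space representation.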
Igor V. Meglinsky; Stephen J. Matcher
1999-01-01
The present paper is concerned with the simulation, by random sampling, of the multiple scattering of photons, for the purpose of modeling near-IR radiation transport in complex multi-layer, highly scattering media that represent the structure of human skin in a simplistic manner. A direct-weight Monte Carlo algorithm imitates the transport of photons between source and detector areas by letting the photons
Monte Carlo modeling of transport in PbSe nanocrystal films
Carbone, I., E-mail: icarbone@ucsc.edu; Carter, S. A. [University of California, Santa Cruz, California 95060 (United States); Zimanyi, G. T. [University of California, Davis, California 95616 (United States)
2013-11-21
A Monte Carlo hopping model was developed to simulate electron and hole transport in nanocrystalline PbSe films. Transport is carried out as a series of thermally activated hopping events between neighboring sites on a cubic lattice. Each site, representing an individual nanocrystal, is assigned a size-dependent electronic structure, and the effects of particle size, charging, interparticle coupling, and energetic disorder on electron and hole mobilities were investigated. Results of simulated field-effect measurements confirm that electron mobilities and conductivities at constant carrier densities increase with particle diameter by an order of magnitude up to 5 nm and begin to decrease above 6 nm. We find that as particle size increases, fewer hops are required to traverse the same distance and that site energy disorder significantly inhibits transport in films composed of smaller nanoparticles. The dip in mobilities and conductivities at larger particle sizes can be explained by a decrease in tunneling amplitudes and by charging penalties that are incurred more frequently when carriers are confined to fewer, larger nanoparticles. Using a nearly identical set of parameter values as the electron simulations, hole mobility simulations reproduce measured mobilities that increase monotonically with particle size over two orders of magnitude.
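A thermally activated hopping step of the kind described above can be sketched with a Miller-Abrahams-type rate and a kinetic Monte Carlo selection. The attempt frequency and temperature below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def hop_rate(dE, nu0=1e12, kT=0.025):
    """Miller-Abrahams-type rate (s^-1): uphill hops (dE > 0, in eV) are
    Boltzmann suppressed; downhill hops occur at the attempt frequency nu0."""
    return nu0 * np.exp(-dE / kT) if dE > 0.0 else nu0

def choose_hop(rates, rng):
    """Kinetic Monte Carlo selection: pick hop i with probability rate_i / total."""
    cum = np.cumsum(np.asarray(rates, dtype=float))
    return int(np.searchsorted(cum, rng.uniform(0.0, cum[-1])))
```

In a film simulation, dE would include the size-dependent site energies, disorder, and charging penalties discussed above.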
Adjoint-based deviational Monte Carlo methods for phonon transport calculations
NASA Astrophysics Data System (ADS)
Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.
2015-06-01
In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.
Cartesian Meshing Impacts for PWR Assemblies in Multigroup Monte Carlo and Sn Transport
NASA Astrophysics Data System (ADS)
Manalo, K.; Chin, M.; Sjoden, G.
2014-06-01
Hybrid methods of neutron transport have increased greatly in use, for example, in applications using both Monte Carlo and deterministic transport to calculate quantities of interest, such as flux and eigenvalue in a nuclear reactor. Many 3D parallel Sn codes apply a Cartesian mesh, and thus for nuclear reactors the representation of curved fuel shapes (cylinders, spheres, etc.) is impacted in terms of proper fuel inventory (both deviation of mass and exact geometry representation). For a PWR assembly eigenvalue problem, we explore the errors associated with this Cartesian discrete mesh representation and perform an analysis to calculate a slope parameter that relates the pcm to the percent areal/volumetric deviation (areal corresponds to 2D and volumetric to 3D, respectively). Our initial analysis demonstrates a linear relationship between pcm change and areal/volumetric deviation using Multigroup MCNP on a PWR assembly compared to a reference exact combinatorial MCNP geometry calculation. For the same multigroup problems, we also intend to characterize this linear relationship in discrete ordinates (3D PENTRAN) and discuss issues related to transport cross-comparison. In addition, we discuss auto-conversion techniques with our 3D Cartesian mesh generation tools to allow for full generation of MCNP5 inputs (Cartesian mesh and Multigroup XS) from a basis PENTRAN Sn model.
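A slope parameter of the kind described can be estimated by a least-squares fit through the origin. A sketch with synthetic numbers, not the assembly data of the study:

```python
import numpy as np

def pcm_slope(deviation_pct, delta_k):
    """Least-squares slope (pcm per percent deviation) through the origin,
    relating eigenvalue change to areal/volumetric mesh deviation."""
    d = np.asarray(deviation_pct, dtype=float)
    pcm = np.asarray(delta_k, dtype=float) * 1e5  # convert delta-k to pcm
    return float(d @ pcm / (d @ d))

# Synthetic example: a perfectly linear 100 pcm-per-percent relationship
slope = pcm_slope([0.5, 1.0, 2.0], [5e-4, 1e-3, 2e-3])
```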
Warren, Kevin; Reed, Robert; Weller, Robert; Mendenhall, Marcus; Sierawski, Brian; Schrimpf, Ronald [ISDE, Vanderbilt University, 1025 16th Avenue South, Nashville, TN 37212 (United States)
2011-06-01
MRED (Monte Carlo Radiative Energy Deposition) is Vanderbilt University's Geant4 application for simulating radiation events in semiconductors. Geant4 comprises the best available computational physics models for the transport of radiation through matter. In addition to basic radiation transport physics contained in the Geant4 core, MRED has the capability to track energy loss in tetrahedral geometric objects, includes a cross section biasing and track weighting technique for variance reduction, and additional features relevant to semiconductor device applications. The crucial element of predicting Single Event Upset (SEU) parameters using radiation transport software is the creation of a dosimetry model that accurately approximates the net collected charge at transistor contacts as a function of deposited energy. The dosimetry technique described here is the multiple sensitive volume (MSV) model. It is shown to be a reasonable approximation of the charge collection process and its parameters can be calibrated to experimental measurements of SEU cross sections. The MSV model, within the framework of MRED, is examined for heavy ion and high-energy proton SEU measurements of a static random access memory.
Transport of Neutrons and Photons Through Iron and Water Layers
Michal Kostál; Frantisek Cvachovec; Bohumil Osmera; Wolfgang Hansen; Klaus Noack
2009-01-01
The neutron and photon spectra were measured after iron and water plates placed at the horizontal channel of the Dresden University reactor AK-2. The measurements have been performed with the multiparameter spectrometer [1] with a stilbene cylindrical crystal, 10 × 10 mm or 45 × 45 mm; the neutron and photon spectra have been measured simultaneously. The calculations were performed
Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access
Romano, Paul K [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory; Forget, Benoit [MIT
2010-01-01
One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, the authors investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of remote memory access (RMA) operations.
Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.
2008-01-01
The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: simulation system for emission tomography (SimSET) and GEANT4 application for emission tomography (GATE). Specifically, we retained the strength of GATE in describing complex collimator-detector configurations to meet the anticipated needs for studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE in simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations, and, more importantly, a significant computational speedup (up to ~10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data was also observed. In conclusion, the authors have successfully integrated the SimSET photon history generator in GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques. PMID:18697552
Photon storage and transport by surface acoustic waves in III-V quantum wells
Harris Turk
2002-01-01
Photon storage and transport devices use the interaction between a SAW (surface acoustic wave) and a quantum well to confine and transport optically excited carriers within the quantum well. The principal interaction is between the piezoelectric field of the SAW and ionized excitons, whose charges segregate into parallel lines of charge that move synchronously with the field. The optically excited
Two-photon fluorescence correlation microscopy reveals the two-phase nature of transport in tumors
George Alexandrakis; Edward B Brown; Ricky T Tong; Trevor D McKee; Robert B Campbell; Yves Boucher; Rakesh K Jain
2004-01-01
Transport parameters determine the access of drugs to tumors. However, technical difficulties preclude the measurement of these parameters deep inside living tissues. To this end, we adapted and further optimized two-photon fluorescence correlation microscopy (TPFCM) for in vivo measurement of transport parameters in tumors. TPFCM extends the detectable range of diffusion coefficients in tumors by one order of magnitude, and
F. Capasso; T. P. Pearsall; K. K. Thornber
1981-01-01
Some important consequences of the uncertainty principle on Monte Carlo simulations of very high field transport are discussed. It is shown that recent values of the phonon scattering rates reported for GaAs by Shichijo and Hess lead to an unrealistically high collisional broadening (0.3-0.6 eV) of the electronic states, thus rendering questionable any attempt to relate transport properties to the band
Radiation-induced "zero-resistance state" and the photon-assisted transport.
Shi, Junren; Xie, X C
2003-08-22
We demonstrate that the radiation-induced "zero-resistance state" observed in a two-dimensional electron gas is a result of the nontrivial structure of the density of states of the systems and the photon-assisted transport. A toy model of a quantum tunneling junction with oscillatory density of states in leads catches most of the important features of the experiments. We present a generalized Kubo-Greenwood conductivity formula for the photon-assisted transport in a general system and show essentially the same nature of the transport anomaly in a uniform system. PMID:14525265
S. R. Hanna; A. G. Russell; J. G. Wilkinson; J. Vukovich; D. A. Hansen
2005-01-01
A Monte Carlo (MC) probabilistic approach is used to estimate uncertainties in the emissions outputs of the Biogenics Emission Inventory System Version 3 (BEIS3) model and subsequent ozone outputs of three Chemical Transport Models (CTMs) due to uncertainties in many key BEIS3 biogenics emissions model parameters and inputs. BEIS3 was developed by the Environmental Protection Agency to estimate emissions of
Kevin M. Warren; Andrew L. Sternberg; Robert A. Weller; Mark P. Baze; Lloyd W. Massengill; Robert A. Reed; Marcus H. Mendenhall; Ronald D. Schrimpf
2008-01-01
A Monte Carlo radiation transport code is coupled with SPICE circuit-level simulation to identify regions of single-event-upset vulnerability in an SEU-hardened flip-flop, as well as to predict single-event-upset cross sections and on-orbit soft error rates under static and dynamic operating conditions.
J. A. Jr. Fleck; E. H. Canfield
1984-01-01
An unconditionally stable Monte Carlo method for solving the frequency dependent equations of nonlinear radiation transport has been described previously. One of the central features of this method is the replacement of a portion of the absorption and reemission of radiation by a scattering process. While the inclusion of this scattering process assures the accuracy and stability of solutions regardless
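The replacement of part of the absorption and reemission by an effective scattering process is commonly expressed through a Fleck factor, as in implicit Monte Carlo schemes. A minimal sketch with illustrative parameter values; the cited method's exact formulation may differ:

```python
def fleck_factor(sigma_a, c, dt, beta, alpha=1.0):
    """Fleck factor f = 1/(1 + alpha*beta*c*dt*sigma_a): the fraction of
    absorption treated as true absorption within a time step."""
    return 1.0 / (1.0 + alpha * beta * c * dt * sigma_a)

def effective_cross_sections(sigma_a, c, dt, beta, alpha=1.0):
    """Split sigma_a into effective absorption f*sigma_a and effective
    (pseudo-)scattering (1 - f)*sigma_a."""
    f = fleck_factor(sigma_a, c, dt, beta, alpha)
    return f * sigma_a, (1.0 - f) * sigma_a
```

For large beta*c*dt*sigma_a, most of the absorption is converted to effective scattering, which is what stabilizes the time integration.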
NASA Astrophysics Data System (ADS)
Shi, X.; Ye, M.; Curtis, G. P.; Lu, D.; Meyer, P. D.; Yabusaki, S.; Wu, J.
2011-12-01
Assessment of parametric uncertainty for groundwater reactive transport models is challenging because the models are highly nonlinear with respect to their parameters, owing to nonlinear reaction equations and process coupling. The nonlinearity may yield parameter distributions that are non-Gaussian and have multiple modes. For such parameter distributions, the widely used nonlinear regression methods may not be able to accurately quantify predictive uncertainty. One solution to this problem is to use Markov chain Monte Carlo (MCMC) techniques. Both the nonlinear regression and MCMC methods are used in this study for quantification of parametric uncertainty of a surface complexation model (SCM) developed to simulate hexavalent uranium [U(VI)] transport in column experiments. First, a brute force Monte Carlo (MC) simulation with hundreds of thousands of model executions is conducted to characterize the surface of the objective function and the predictive uncertainty of uranium concentration. Subsequently, the Gauss-Marquardt-Levenberg method is applied to calibrate the model. It shows that, even with multiple initial guesses, the local optimization method has difficulty finding the global optimum because of the rough surface of the objective function and local optima due to model nonlinearity. Another problem with the nonlinear regression is the underestimation of predictive uncertainty, as both the linear and nonlinear confidence intervals are narrower than those obtained from the naïve MC simulation. Since the naïve MC simulation is computationally expensive, the above challenges for parameter estimation and predictive uncertainty analysis are addressed using a computationally efficient MCMC technique, the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. 
The results obtained from running DREAM, compared with those from the brute force Monte Carlo simulations, show that MCMC not only successfully infers the multimodal posterior probability distribution but also provides good estimates of predictive uncertainty. The reason for the poor performance of the nonlinear regression methods is that the Gaussian marginal distributions assumed in the nonlinear regression deviate significantly from the marginal posterior probability distributions estimated by DREAM and the brute force MC simulations.
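The ability of MCMC to recover a multimodal posterior, where Gaussian-based regression fails, can be illustrated with a plain random-walk Metropolis sampler on a two-mode toy target (a stand-in for DREAM, which uses differential-evolution proposals):

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step, rng):
    """Random-walk Metropolis: a minimal MCMC sampler returning the chain."""
    chain = np.empty(n_steps)
    x, lp = x0, log_post(x0)
    for i in range(n_steps):
        xp = x + step * rng.standard_normal()
        lpp = log_post(xp)
        if lpp - lp > np.log(max(rng.uniform(), 1e-300)):
            x, lp = xp, lpp
        chain[i] = x
    return chain

# Toy bimodal "posterior": equal mixture of N(-2, 0.5^2) and N(+2, 0.5^2)
def log_post(x):
    return np.logaddexp(-0.5 * ((x + 2.0) / 0.5) ** 2,
                        -0.5 * ((x - 2.0) / 0.5) ** 2)

rng = np.random.default_rng(3)
chain = metropolis(log_post, 0.0, 50_000, 2.0, rng)
```

With a proposal step wide enough to jump between modes, the chain visits both, whereas a single-Gaussian fit centered on one mode would badly understate the uncertainty.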
Daskalov, G.M.; Baker, R.S.; Little, R.C.; Rogers, D.W.O.; Williamson, J.F.
2000-02-01
The DANTSYS discrete ordinates computer code system is applied to quantitative estimation of water kerma rate distributions in the vicinity of discrete photon sources with energies in the 20- to 800-keV range in two-dimensional cylindrical r-z geometry. Unencapsulated sources immersed in cylindrical water phantoms of 40-cm diameter and 40-cm height are modeled in either homogeneous phantoms or shielded by Ti, Fe, and Pb filters with thicknesses of 1 and 2 mean free paths. The obtained dose results are compared with corresponding photon Monte Carlo simulations. A 210-group photon cross-section library for applications in this energy range is developed and applied, together with a general-purpose 42-group library developed at Los Alamos National Laboratory, for DANTSYS calculations. The accuracy of DANTSYS with the 42-group library relative to Monte Carlo exhibits large pointwise fluctuations from −42 to +84%. The major cause for the observed discrepancies is determined to be the inadequacy of the weighting function used for the 42-group library derivation. DANTSYS simulations with a finer 210-group library show excellent accuracy on and off the source transverse plane relative to Monte Carlo kerma calculations, varying from −4.9 to +3.7%. The P3 Legendre polynomial expansion of the angular scattering function is shown to be sufficient for accurate calculations. The results demonstrate that DANTSYS is capable of calculating photon doses in very good agreement with Monte Carlo and that the multigroup cross-section library and efficient techniques for mitigation of ray effects are critical for accurate discrete ordinates implementation.
Single Photon Transport through an Atomic Chain Coupled to a One-dimensional Nanophotonic Waveguide
Zeyang Liao; Xiaodong Zeng; Shi-Yao Zhu; M. Suhail Zubairy
2015-05-25
We study the dynamics of a single photon pulse traveling through a linear atomic chain coupled to a one-dimensional (1D) single-mode photonic waveguide. We derive a time-dependent dynamical theory for this collective many-body system which allows us to study the real-time evolution of the photon transport and the atomic excitations. Our analytical result is consistent with previous numerical calculations when there is only one atom. For an atomic chain, the collective interaction between the atoms mediated by the waveguide mode can significantly change the dynamics of the system. The reflectivity of a photon can be tuned by changing the ratio of the coupling strength to the photon linewidth or by changing the number of atoms in the chain. The reflectivity of a single photon pulse with finite bandwidth can even approach 100%. The spectrum of the reflected and transmitted photon can also be significantly different from the single-atom case. Many interesting physical phenomena can occur in this system, such as photonic band-gap effects, quantum entanglement generation, Fano-like interference, and superradiant effects. For engineering, this system may serve as a single-photon frequency filter or single-photon modulator and may find important applications in quantum information.
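In the single-atom, lossless limit mentioned above, the reflectivity of a monochromatic photon is the familiar Lorentzian R(δ) = (Γ/2)² / (δ² + (Γ/2)²), which reaches 100% on resonance. A minimal sketch, with detuning and linewidth in arbitrary common units:

```python
def reflectivity(delta, gamma):
    """Single-atom reflectivity in a lossless 1D waveguide: a Lorentzian in
    the detuning delta with full width gamma (the waveguide decay rate)."""
    return (gamma / 2.0) ** 2 / (delta ** 2 + (gamma / 2.0) ** 2)
```

A finite-bandwidth pulse averages this lineshape over its spectrum, which is why its reflectivity only approaches, rather than equals, 100%.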
Oxygen transport properties estimation by classical trajectory-direct simulation Monte Carlo
NASA Astrophysics Data System (ADS)
Bruno, Domenico; Frezzotti, Aldo; Ghiroldi, Gian Pietro
2015-05-01
Coupling direct simulation Monte Carlo (DSMC) simulations with classical trajectory calculations is a powerful tool to improve predictive capabilities of computational dilute gas dynamics. The considerable increase in computational effort outlined in early applications of the method can be compensated by running simulations on massively parallel computers. In particular, Graphics Processing Unit acceleration has been found quite effective in reducing computing time of classical trajectory (CT)-DSMC simulations. The aim of the present work is to study dilute molecular oxygen flows by modeling binary collisions, in the rigid rotor approximation, through an accurate Potential Energy Surface (PES), obtained by molecular beams scattering. The PES accuracy is assessed by calculating molecular oxygen transport properties by different equilibrium and non-equilibrium CT-DSMC based simulations that provide close values of the transport properties. Comparisons with available experimental data are presented and discussed in the temperature range 300-900 K, where vibrational degrees of freedom are expected to play a limited (but not always negligible) role.
Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons
NASA Astrophysics Data System (ADS)
Mei, S.; Maurer, L. N.; Aksamija, Z.; Knezevic, I.
2014-10-01
We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011); S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/m K. This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.
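The stochastic solution outlined above rests on two standard ingredients: independent scattering rates combined by Matthiessen's rule, and exponentially distributed free-flight times. A generic sketch; the rates are placeholders, not the paper's phonon scattering rates:

```python
import numpy as np

def total_rate(rates):
    """Matthiessen's rule: independent scattering mechanisms add their rates."""
    return float(sum(rates))

def sample_free_time(rates, rng):
    """Time to the next scattering event is exponential with the total rate."""
    return rng.exponential(1.0 / total_rate(rates))

def choose_mechanism(rates, rng):
    """The terminating mechanism i is chosen with probability rate_i / total."""
    cum = np.cumsum(np.asarray(rates, dtype=float))
    return int(np.searchsorted(cum, rng.uniform(0.0, cum[-1])))
```

In the full simulation, the rates themselves depend on phonon branch, frequency, and (for edge scattering) ribbon geometry.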
Shi, C. Y.; Xu, X. George; Stabin, Michael G. [Department of Radiation Oncology, University of Texas Health Science Center, San Antonio, Texas 78229 (United States); Nuclear Engineering and Engineering Physics Program, Rensselaer Polytechnic Institute, Room 1-11, NES Building, Tibbits Avenue, Troy, New York 12180 (United States); Department of Radiology and Radiological Sciences, Vanderbilt University, Nashville, Tennessee 37232-2675 (United States)
2008-07-15
Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.
Event-by-event Monte Carlo simulation of radiation transport in vapor and liquid water
NASA Astrophysics Data System (ADS)
Papamichael, Georgios Ioannis
A Monte Carlo simulation is presented for radiation transport in water. This process is of utmost importance, having applications in oncology and cancer therapy, in protecting people and the environment, in waste management, in radiation chemistry, and in some solid-state detectors. It is also a phenomenon of interest in microelectronics on satellites in orbit that are subject to solar radiation, and in spacecraft design for deep-space missions receiving background radiation. The interaction of charged particles with the medium is primarily due to their electromagnetic field. Three types of interaction events are considered: elastic scattering, impact excitation, and impact ionization. Secondary particles (electrons) can be generated by ionization. At each stage, along with the primary particle, we explicitly follow all secondary electrons (and subsequent generations). Theoretical, semi-empirical, and experimental formulae with suitable corrections have been used in each case to model the cross sections governing the quantum mechanical interactions, thus determining stochastically the energy and direction of outgoing particles following an event. Monte Carlo sampling techniques have been applied to accurate probability distribution functions describing the primary particle track and all secondary particle-medium interactions. A simple account of the simulation code and a critical exposition of its underlying assumptions (often missing in the relevant literature) are also presented with reference to the model cross sections. Model predictions are in good agreement with existing computational data and experimental results. By relying heavily on a theoretical formulation, instead of merely fitting data, it is hoped that the model will be of value in a wider range of applications. Possible future directions that are the object of further research are pointed out.
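The stochastic choice among the three interaction events (elastic scattering, impact excitation, impact ionization) is made in proportion to their cross sections. A generic sketch with made-up cross-section values, not the simulation's actual data:

```python
import numpy as np

def sample_interaction(cross_sections, rng):
    """Select the event type with probability sigma_i / sigma_total."""
    names = list(cross_sections)
    sigma = np.array([cross_sections[k] for k in names], dtype=float)
    u = rng.uniform(0.0, sigma.sum())
    return names[int(np.searchsorted(np.cumsum(sigma), u))]
```

In the event-by-event scheme, each ionization event would additionally push a secondary electron onto the stack to be transported in the same way.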
NASA Astrophysics Data System (ADS)
Dolguikh, Maxim V.
A Monte Carlo method for the simulation of hole dynamics in the degenerate valence subbands of cubic semiconductors is developed. All possible intra- and inter-subband scattering rates are calculated theoretically for Ge, Si, and GaAs. A far-infrared laser concept based on intersubband transitions of holes in p-type, periodically delta-doped semiconductor films is studied using numerical Monte Carlo simulation of hot-hole dynamics. The considered device consists of monocrystalline pure Ge layers periodically interleaved with delta-doped layers and operates with vertical or in-plane hole transport in the presence of a perpendicular in-plane magnetic field. Population inversion on intersubband transitions arises due to light-hole accumulation in crossed E ⊥ B fields, as in the bulk p-Ge laser. However, the considered structure achieves spatial separation of the hole accumulation regions from the doped layers, which reduces ionized-impurity and carrier-carrier scattering for the majority of light holes. This allows a remarkable increase of the gain in comparison with bulk p-Ge lasers. Population inversion and gain sufficient for laser operation are expected up to 77 K. Test structures grown by chemical vapor deposition demonstrate the feasibility of producing the device with sufficient active thickness to allow quasioptical electrodynamic cavity solutions. The same device structure is considered in GaAs. The case of Si is much more complicated due to the strong anisotropy of the valence band. The primary new result for Si is the first consideration of the anisotropy of optical-phonon scattering for hot holes.
Keith M. Stantz; Bo Liu; Robert A. Kruger
2007-01-01
Purpose: The purpose of this study is to evaluate the influence of photon propagation on the NIR spectral features associated with photoacoustic imaging. Introduction: Photoacoustic CT spectroscopy (PCT-S) has the potential to identify molecular properties of tumors while overcoming the limited depth resolution associated with optical imaging modalities (e.g., OCT and DOT). Photoacoustics is based on the fact that biological
GPU-accelerated object-oriented Monte Carlo modeling of photon migration in turbid media
Alex Doronin; Igor Meglinski
2010-01-01
Due to the recent intense developments in lasers and optical technologies a number of novel revolutionary imaging and photonic-based diagnostic modalities have arisen. Utilizing various features of light these techniques provide new practical solutions in a range of biomedical, environmental and industrial applications. Conceptual engineering design of new optical diagnostic systems requires a clear understanding of the light-tissue interaction and
A new high speed solution for the evaluation of Monte Carlo radiation transport computations
Alexander S. Pasciak; John R. Ford
2006-01-01
Advancements in parallel and cluster computing have made many complex Monte Carlo simulations possible in the past several years. Unfortunately, cluster computers are large, expensive, and still not fast enough to make the Monte Carlo technique useful for calculations requiring a near real-time evaluation period. For Monte Carlo simulations, a small computational unit called a Field Programmable Gate Array (FPGA)
Jet transport and photon bremsstrahlung via longitudinal and transverse scattering
Guang-You Qin; Abhijit Majumder
2015-04-27
We study the effect of multiple scatterings on the propagation of hard partons and the production of jet-bremsstrahlung photons inside a dense medium in the framework of deep-inelastic scattering off a large nucleus. We include the momentum exchanges in both longitudinal and transverse directions between the hard partons and the constituents of the medium. Keeping up to the second order in a momentum gradient expansion, we derive the spectrum for the photon emission from a hard quark jet when traversing dense nuclear matter. Our calculation demonstrates that the photon bremsstrahlung process is influenced not only by the transverse momentum diffusion of the propagating hard parton, but also by the longitudinal drag and diffusion of the parton momentum. A notable outcome is that the longitudinal drag tends to reduce the amount of stimulated emission from the hard parton.
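The competing roles of longitudinal drag and momentum diffusion described above can be pictured with a toy Langevin update of the parton momentum. This is a heuristic illustration, not the authors' light-cone field-theory derivation; the transport coefficients (drag e_hat, longitudinal diffusion e2_hat, transverse diffusion qhat) and their values are illustrative assumptions.

```python
import random

def evolve_parton(pz, dt, n_steps, e_hat=0.3, e2_hat=0.5, qhat=1.0):
    """Toy Langevin update for a hard parton traversing a dense medium:
    longitudinal drag e_hat (GeV/fm), longitudinal diffusion e2_hat
    (GeV^2/fm) and transverse momentum diffusion qhat (GeV^2/fm); dt in fm.
    Coefficient values are illustrative, not taken from the paper."""
    px = py = 0.0
    for _ in range(n_steps):
        pz += -e_hat * dt + random.gauss(0.0, (e2_hat * dt) ** 0.5)
        px += random.gauss(0.0, (0.5 * qhat * dt) ** 0.5)
        py += random.gauss(0.0, (0.5 * qhat * dt) ** 0.5)
    return px, py, pz
```

Averaged over many trajectories, the drag term systematically lowers the longitudinal momentum while the diffusion terms broaden its distribution, which is the qualitative interplay the paper's photon-emission spectrum is sensitive to.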
Habib, B; Poumarede, B; Tola, F; Barthe, J
2010-01-01
The aim of the present study is to demonstrate the potential of accelerated dose calculations, using the fast Monte Carlo (MC) code referred to as PENFAST, rather than the conventional MC code PENELOPE, without losing accuracy in the computed dose. For this purpose, experimental measurements of dose distributions in homogeneous and inhomogeneous phantoms were compared with simulated results using both PENELOPE and PENFAST. The simulations and experiments were performed using a Saturne 43 linac operated at 12 MV (photons) and at 18 MeV (electrons). Pre-calculated phase space files (PSFs) were used as input data to both the PENELOPE and PENFAST dose simulations. Since depth-dose and dose-profile comparisons between simulations and measurements in water were found to be in good agreement (within ±1% or 1 mm), the PSF calculation is considered to have been validated. In addition, measured dose distributions were compared to simulated results in a set of clinically relevant, inhomogeneous phantoms, consisting of lung and bone heterogeneities in a water tank. In general, the PENFAST results agree within 1% or 1 mm with those produced by PENELOPE, and within 2% or 2 mm with measured values. Our study thus provides a pre-clinical validation of the PENFAST code. It also demonstrates that PENFAST provides accurate results for both photon and electron beams, equivalent to those obtained with PENELOPE. CPU time comparisons between both MC codes show that PENFAST is generally about 9-21 times faster than PENELOPE. PMID:19342258
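Agreement criteria of the "1%/1 mm" kind combine a dose-difference check with a distance-to-agreement search. A minimal 1-D version of such a test is sketched below; it is a simplification for illustration, not the comparison machinery actually used in the PENFAST/PENELOPE study, and it is not the full clinical gamma index.

```python
def passes_dd_dta(depths, dose_ref, dose_eval, dd=0.01, dta=0.1):
    """Pass a point if its dose matches the reference within dd (fraction
    of the reference maximum), or if a reference point with a matching
    dose lies within dta (cm). Simplified 1-D dose-difference /
    distance-to-agreement test."""
    dmax = max(dose_ref)
    results = []
    for z, d_eval, d_ref in zip(depths, dose_eval, dose_ref):
        if abs(d_eval - d_ref) <= dd * dmax:
            results.append(True)
            continue
        # Distance-to-agreement: look for a nearby reference depth
        # whose dose agrees within the dose tolerance.
        ok = any(abs(z - zr) <= dta and abs(d_eval - dr) <= dd * dmax
                 for zr, dr in zip(depths, dose_ref))
        results.append(ok)
    return results
```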
Wulff, J; Heverhagen, J T; Zink, K
2008-06-01
This paper presents a detailed investigation into the calculation of perturbation and beam quality correction factors for ionization chambers in high-energy photon beams with the use of Monte Carlo simulations. For a model of the NE2571 Farmer-type chamber, all separate perturbation factors as found in the current dosimetry protocols were calculated in a fixed order and compared to the currently available data. Furthermore, the NE2571 Farmer-type and a model of the PTW31010 thimble chamber were used to calculate the beam quality correction factor k_Q. The calculations of k_Q showed good agreement with the published values in the current dosimetry protocols AAPM TG-51 and IAEA TRS-398 and with a large set of published measurements. Still, some of the individually calculated perturbation factors deviate from the commonly used ones; in particular, the replacement correction factor p_repl deviates by more than 0.5%. The influence of various sources of uncertainty in the simulations is investigated for the NE2571 model. Constructive details of the chamber stem have a negligible influence on the calculated values. A comparison between a full linear accelerator source and a simple collimated point source with linear accelerator photon spectra yields comparable results. As expected, the calculation of the overall beam quality correction factor is sensitive to the mean ionization energy of graphite used. The measurement setup (source-surface distance versus source-axis distance) had no influence on the calculated values. PMID:18460747
NASA Astrophysics Data System (ADS)
Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn
2011-07-01
A new concept for the design of flattening filters applied in the generation of 6 and 15 MV photon beams by clinical linear accelerators is evaluated by Monte Carlo simulation. The beam head of the Siemens Primus accelerator has been taken as the starting point for the study of the conceived beam head modifications. The direction-selective filter (DSF) system developed in this work is midway between the classical flattening filter (FF) by which homogeneous transversal dose profiles have been established, and the flattening filter-free (FFF) design, by which advantages such as increased dose rate and reduced production of leakage photons and photoneutrons per Gy in the irradiated region have been achieved, whereas dose profile flatness was abandoned. The DSF concept is based on the selective attenuation of bremsstrahlung photons depending on their direction of emission from the bremsstrahlung target, accomplished by means of newly designed small conical filters arranged close to the target. This results in the capture of large-angle scattered Compton photons from the filter in the primary collimator. Beam flatness has been obtained up to any field cross section which does not exceed a circle of 15 cm diameter at 100 cm focal distance, such as 10 × 10 cm2, 4 × 14.5 cm2 or less. This flatness offers simplicity of dosimetric verifications, online controls and plausibility estimates of the dose to the target volume. The concept can be utilized when the application of small- and medium-sized homogeneous fields is sufficient, e.g. in the treatment of prostate, brain, salivary gland, larynx and pharynx as well as pediatric tumors and for cranial or extracranial stereotactic treatments. Significant dose rate enhancement has been achieved compared with the FF system, with enhancement factors 1.67 (DSF) and 2.08 (FFF) for 6 MV, and 2.54 (DSF) and 3.96 (FFF) for 15 MV. 
Shortening the delivery time per fraction matters with regard to workflow in a radiotherapy department, patient comfort, reduction of errors due to patient movement, and a slight, probably just noticeable improvement of the treatment outcome for radiobiological reasons. In comparison with the FF system, the number of head leakage photons per Gy in the irradiated region has been reduced at 15 MV by factors 1/2.54 (DSF) and 1/3.96 (FFF), and the source strength of photoneutrons was reduced by factors 1/2.81 (DSF) and 1/3.49 (FFF).
A Monte-Carlo Model of Neutral-Particle Transport in Diverted Plasmas
NASA Astrophysics Data System (ADS)
Heifetz, D.; Post, D.; Petravic, M.; Weisheit, J.; Bateman, G.
1982-05-01
The transport of neutral atoms and molecules in the edge and divertor regions of fusion experiments has been calculated using Monte-Carlo techniques. The deuterium, tritium, and helium atoms are produced by recombination at the walls. The relevant collision processes of charge exchange, ionization, and dissociation between the neutrals and the flowing plasma electrons and ions are included, along with wall-reflection models. General two-dimensional wall and plasma geometries are treated in a flexible manner so that varied configurations can be easily studied. The algorithm uses a pseudocollision method. Splitting with Russian roulette, suppression of absorption, and efficient scoring techniques are used to reduce the variance. The resulting code is sufficiently fast and compact to be incorporated into iterative treatments of plasma dynamics requiring numerous neutral profiles. The calculation yields the neutral gas densities, pressures, fluxes, ionization rates, momentum-transfer rates, energy-transfer rates, and wall-sputtering rates. Applications have included modeling of proposed INTOR/FED poloidal divertor designs and other experimental devices.
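The variance-reduction devices this abstract mentions — splitting combined with Russian roulette — are commonly implemented as a weight window. The sketch below shows the generic technique; the bounds and survival weight are arbitrary illustrative values, not those of the code described.

```python
import random

def apply_weight_window(weight, w_low=0.25, w_high=4.0, w_survive=1.0):
    """Variance reduction as used in particle-transport MC: a particle
    below the lower weight bound plays Russian roulette (killed with
    probability 1 - weight/w_survive, otherwise promoted to w_survive);
    a particle above the upper bound is split into lighter copies.
    Returns the list of surviving particle weights (possibly empty)."""
    if weight < w_low:
        if random.random() < weight / w_survive:
            return [w_survive]  # survives roulette with boosted weight
        return []               # killed; expected weight is conserved
    if weight > w_high:
        n = int(weight / w_survive)
        return [weight / n] * n  # split: total weight exactly conserved
    return [weight]
```

Both branches preserve the expected total weight, which is why such tricks reduce variance without biasing the tallies.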
Enhancing quantum transport in a photonic network using controllable decoherence
Devon N. Biggerstaff; René Heilmann; Aidan A. Zecevik; Markus Gräfe; Matthew A. Broome; Alessandro Fedrizzi; Stefan Nolte; Alexander Szameit; Andrew G. White; Ivan Kassal
2015-04-23
Transport phenomena on a quantum scale appear in a variety of systems, ranging from photosynthetic complexes to engineered quantum devices. It has been predicted that the efficiency of quantum transport can be enhanced through dynamic interaction between the system and a noisy environment. We report the first experimental demonstration of such environment-assisted quantum transport, using an engineered network of laser-written waveguides, with relative energies and inter-waveguide couplings tailored to yield the desired Hamiltonian. Controllable decoherence is simulated via broadening the bandwidth of the input illumination, yielding a significant increase in transport efficiency relative to the narrowband case. We show integrated optics to be suitable for simulating specific target Hamiltonians as well as open quantum systems with controllable loss and decoherence.
NASA Astrophysics Data System (ADS)
Medhat, M. E.
2015-02-01
The main goal of this work is to test the applicability of Geant4 electromagnetic models for studying the mass attenuation coefficients of different types of composite materials at photon energies of 59.5, 80, 356, 661.6, 1173.2 and 1332.5 keV. The simulated mass attenuation coefficients were compared with experimental and theoretical data for the same samples, and good agreement was observed. The results indicate that this procedure can be followed to determine gamma-ray attenuation data at several energies in different materials.
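The quantity being validated here is the mass attenuation coefficient μ/ρ in the narrow-beam law I/I₀ = exp(−(μ/ρ)ρt). A minimal numerical check of that relation, with an illustrative μ/ρ value (roughly that of water near 662 keV; consult tabulations such as NIST XCOM for authoritative data):

```python
import math

def transmission(mu_over_rho, density, thickness):
    """Narrow-beam photon transmission I/I0 = exp(-(mu/rho) * rho * t),
    with mu/rho in cm^2/g, density in g/cm^3 and thickness in cm."""
    return math.exp(-mu_over_rho * density * thickness)

# Illustrative value, roughly water near 662 keV (assumption, not from
# the paper): mu/rho ~ 0.086 cm^2/g.
frac_through_10cm_water = transmission(0.086, 1.0, 10.0)
```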
Franco Venturi; R. Kent Smith; ENRICO C. SANGIORGI; Mark R. Pinto; Bruno Riccò
1989-01-01
An efficient self-consistent device simulator coupling the Poisson equation and Monte Carlo transport, suitable for general silicon devices, including those with regions of high doping/carrier densities, is discussed. Key features include an original iteration scheme and an almost complete vectorization of the program. The simulator has been used to characterize nonequilibrium effects in deep submicron nMOSFETs. Substantial overshoot effects are noticeable
Massimo V. Fischetti; Steven E. Laux
1988-01-01
The physics of electron transport in Si and GaAs is investigated with use of a Monte Carlo technique which improves the "state-of-the-art" treatment of high-energy carrier dynamics. (1) The semiconductor is modeled beyond the effective-mass approximation by using the band structure obtained from empirical-pseudopotential calculations. (2) The electron-phonon, electron-impurity, and electron-electron scattering rates are computed in a way consistent with
Kinetic Monte Carlo Model of Charge Transport in Hematite (α-Fe2O3)
Kerisit, Sebastien N.; Rosso, Kevin M.
2007-09-28
The mobility of electrons injected into iron oxide minerals via abiotic and biotic electron-transfer processes is one of the key factors that control the reductive dissolution of such minerals. Building upon our previous work on the computational modeling of elementary electron transfer reactions in iron oxide minerals using ab initio electronic structure calculations and parameterized molecular dynamics simulations, we have developed and implemented a kinetic Monte Carlo model of charge transport in hematite that integrates previous findings. The model aims to simulate the interplay between electron transfer processes for extended periods of time in lattices of increasing complexity. The electron transfer reactions considered here involve the II/III valence interchange between nearest-neighbor iron atoms via a small polaron hopping mechanism. The temperature dependence and anisotropic behavior of the electrical conductivity as predicted by our model are in good agreement with experimental data on hematite single crystals. In addition, we characterize the effect of electron polaron concentration and that of a range of defects on the electron mobility. Interaction potentials between electron polarons and fixed defects (iron substitution by divalent, tetravalent, and isovalent ions and iron and oxygen vacancies) are determined from atomistic simulations, based on the same model used to derive the electron transfer parameters, and show little deviation from the Coulombic interaction energy. Integration of the interaction potentials in the kinetic Monte Carlo simulations allows the electron polaron diffusion coefficient and density and residence time around defect sites to be determined as a function of polaron concentration in the presence of repulsive and attractive defects. 
The decrease in diffusion coefficient with polaron concentration follows a logarithmic function up to the highest concentration considered, i.e., ~2% of iron(III) sites, whereas the presence of repulsive defects has a linear effect on the electron polaron diffusion. Attractive defects are found to significantly affect electron polaron diffusion at low polaron to defect ratios due to trapping on nanosecond to microsecond time scales. This work indicates that electrons can diffuse away from the initial site of interfacial electron transfer at a rate that is consistent with measured electrical conductivities but that the presence of certain kinds of defects will severely limit the mobility of donated electrons.
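The hopping dynamics described above are typically advanced with the residence-time (BKL) kinetic Monte Carlo algorithm: pick a hop with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. The sketch below shows that generic step; the rate list is an input, and the paper's ab initio-derived electron-transfer rates are not reproduced here.

```python
import math
import random

def kmc_step(rates):
    """One residence-time (BKL) kinetic Monte Carlo step: choose hop i
    with probability k_i / k_tot, then advance the clock by an
    exponentially distributed waiting time with mean 1 / k_tot."""
    k_tot = sum(rates)
    r = random.uniform(0.0, k_tot)
    acc = 0.0
    chosen = len(rates) - 1  # fallback for floating-point round-off
    for i, k in enumerate(rates):
        acc += k
        if r <= acc:
            chosen = i
            break
    dt = -math.log(1.0 - random.random()) / k_tot
    return chosen, dt
```

Iterating this step over a lattice of Fe sites, with rates modified near defects by an interaction potential, is the general shape of the simulation the abstract describes.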
Dissipation in few-photon waveguide transport [Invited]
Rephaeli, Eden; Fan, Shanhui
Landon, Colin Donald
2014-01-01
We present a deviational Monte Carlo method for solving the Boltzmann equation for phonon transport subject to the linearized ab initio 3-phonon scattering operator. Phonon dispersion relations and transition rates are ...
Pasciak, Alexander Samuel
2009-05-15
There are two principal techniques for performing Monte Carlo electron transport computations. The first, and least common, is the full track-structure method. This method individually models all physical electron interactions ...
Walsh, J. A. [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, NW12-312, Albany St., Cambridge, MA 02139 (United States)]; Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97331 (United States)]; Urbatsch, T. J. [XTD-5: Air Force Systems, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)]
2013-07-01
A new method for generating discrete scattering cross sections to be used in charged particle transport calculations is investigated. The method of data generation is presented and compared to current methods for obtaining discrete cross sections. The new, more generalized approach allows greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data generated with the new method is verified through a comparison with discrete data obtained with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code package, Milagro. The implementation of this capability is verified using test problems with analytic solutions as well as a comparison of electron dose-depth profiles calculated with Milagro and an already-established electron transport code. An initial investigation of a preliminary integration of the discrete cross section generation method with the new charged particle transport capability in Milagro is also presented. (authors)
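Whatever model the discrete cross sections are derived from, sampling from a tabulated set generally reduces to building a normalized cumulative table and inverting it with a uniform random number. The sketch below shows that generic step, not the authors' particular generation method.

```python
import bisect

def build_cdf(discrete_xs):
    """Normalized cumulative table from tabulated (discrete) cross
    sections, one entry per scattering bin."""
    cdf, running = [], 0.0
    for xs in discrete_xs:
        running += xs
        cdf.append(running)
    return [c / running for c in cdf]

def sample_bin(cdf, xi):
    """Invert the table: map a uniform xi in [0, 1) to a bin index
    via binary search."""
    return bisect.bisect_right(cdf, xi)
```

In a transport loop, `xi` comes from the random-number stream and the returned index selects the outgoing angle or energy group for the collision.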
Direct photon emission in Heavy Ion Collisions from Microscopic Transport Theory and Fluid Dynamics
Bjoern Baeuchle; Marcus Bleicher
2010-03-29
Direct photon emission in heavy-ion collisions is calculated within a relativistic micro+macro hybrid model and compared to the microscopic transport model UrQMD. In the hybrid approach, the high-density part of the collision is calculated with an ideal 3+1-dimensional hydrodynamic calculation, while the early (pre-equilibrium) and late (rescattering) phases are calculated with the transport model. Different scenarios for the transition from the macroscopic to the transport description and their effects are studied. The calculations are compared to measurements by the WA98 collaboration, and predictions for the future CBM experiment are made.
Efficient transportation of nano-sized particles along slotted photonic crystal waveguide.
Lin, Pin-Tso; Lee, Po-Tsung
2012-01-30
We design a slotted photonic crystal waveguide (S-PhCW) and show numerically that it can efficiently transport polystyrene particles with diameters as small as 50 nm in a 100 nm slot. Excellent optical confinement and the slow-light effect provided by the photonic crystal structure greatly enhance the optical force exerted on the particle. The S-PhCW can thus transport the particle with an optical propulsion force as strong as 5.3 pN/W, over 10 times stronger than that generated by a slotted strip waveguide (S-SW). In addition, the vertical optical attraction force induced in the S-PhCW is over 2 times stronger than that of the S-SW. Therefore, the S-PhCW transports particles not only efficiently but also stably. We anticipate this waveguide structure will be beneficial for future lab-on-chip development. PMID:22330556
Julio F. Almansa; Rafael Guerrero; Feras M. O. Al-Dweri; Marta Anguiano; Antonio M. Lallena
2007-01-01
Monte Carlo calculations using the codes PENELOPE and GEANT4 have been performed to characterize the dosimetric properties of monoenergetic photon point sources in water. The dose rate in water has been calculated for energies of interest in brachytherapy, ranging between 10keV and 2MeV. A comparison of the results obtained using the two codes with the available data calculated with other
Update on the Status of the FLUKA Monte Carlo Transport Code
NASA Technical Reports Server (NTRS)
Pinsky, L.; Anderson, V.; Empl, A.; Lee, K.; Smirnov, G.; Zapp, N; Ferrari, A.; Tsoulou, K.; Roesler, S.; Vlachoudis, V.; Battisoni, G.; Ceruti, F.; Gadioli, M. V.; Garzelli, M.; Muraro, S.; Rancati, T.; Sala, P.; Ballarini, R.; Ottolenghi, A.; Parini, V.; Scannicchio, D.; Pelliccioni, M.; Wilson, T. L.
2004-01-01
The FLUKA Monte Carlo transport code is a well-known simulation tool in high-energy physics. FLUKA is a dynamic tool in the sense that it is continually updated and improved by the authors. Here we review the progress achieved in the last year on the physics models. From the point of view of hadronic physics, most of the effort is still in the field of nucleus-nucleus interactions. The currently available version of FLUKA already includes the internal capability to simulate inelastic nuclear interactions beginning with lab kinetic energies of 100 MeV/A up to the highest accessible energies, by means of the DPMJET-II.5 event generator to handle the interactions above 5 GeV/A and rQMD for energies below that. The new developments concern, at high energy, the embedding of the DPMJET-III generator, which represents a major change with respect to the DPMJET-II structure. This will also make it possible to achieve better consistency between the nucleus-nucleus sector and the original FLUKA model for hadron-nucleus collisions. Work is also in progress to implement a third event generator model based on the Master Boltzmann Equation approach, in order to extend the energy capability from 100 MeV/A down to the threshold for these reactions. In addition to these extended physics capabilities, the program's input and scoring capabilities are continually being upgraded. In particular, we want to mention the upgrades in the geometry packages, now capable of reaching higher levels of abstraction. Work is also proceeding to provide direct import of the FLUKA output files into ROOT for analysis and to deploy a user-friendly GUI input interface.
NASA Astrophysics Data System (ADS)
Roncali, Emilie; Schmall, Jeffrey P.; Viswanath, Varsha; Berg, Eric; Cherry, Simon R.
2014-04-01
Current developments in positron emission tomography focus on improving timing performance for scanners with time-of-flight (TOF) capability, and incorporating depth-of-interaction (DOI) information. Recent studies have shown that incorporating DOI correction in TOF detectors can improve timing resolution, and that DOI also becomes more important in long axial field-of-view scanners. We have previously reported the development of DOI-encoding detectors using phosphor-coated scintillation crystals; here we study the timing properties of those crystals to assess the feasibility of providing some level of DOI information without significantly degrading the timing performance. We used Monte Carlo simulations to provide a detailed understanding of light transport in phosphor-coated crystals which cannot be fully characterized experimentally. Our simulations used a custom reflectance model based on 3D crystal surface measurements. Lutetium oxyorthosilicate crystals were simulated with a phosphor coating in contact with the scintillator surfaces and an external diffuse reflector (teflon). Light output, energy resolution, and pulse shape showed excellent agreement with experimental data obtained on 3 × 3 × 10 mm³ crystals coupled to a photomultiplier tube. Scintillator intrinsic timing resolution was simulated with head-on and side-on configurations, confirming the trends observed experimentally. These results indicate that the model may be used to predict timing properties in phosphor-coated crystals and guide the coating for optimal DOI resolution/timing performance trade-off for a given crystal geometry. Simulation data suggested that a time stamp generated from early photoelectrons minimizes degradation of the timing resolution, thus making this method potentially more useful for TOF-DOI detectors than our initial experiments suggested.
Finally, this approach could easily be extended to the study of timing properties in other scintillation crystals, with a range of treatments and materials attached to the surface.
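The observation that a time stamp generated from early photoelectrons preserves timing resolution can be illustrated with the order statistics of scintillation emission times. In this toy model (a single exponential decay with an LSO-like 40 ns constant, no transit-time spread or optical transport — all assumptions, not the paper's detailed simulation), the jitter of the k-th photoelectron time stamp grows with k:

```python
import random

def stamp_time(decay_time, n_pe, k):
    """Time stamp of one pulse: the k-th earliest of n_pe photoelectron
    times, each drawn from an exponential scintillation decay."""
    times = sorted(random.expovariate(1.0 / decay_time) for _ in range(n_pe))
    return times[k - 1]

def timing_jitter(decay_time, n_pe, k, trials=500):
    """Standard deviation of the time stamp over repeated pulses."""
    stamps = [stamp_time(decay_time, n_pe, k) for _ in range(trials)]
    mean = sum(stamps) / trials
    return (sum((t - mean) ** 2 for t in stamps) / trials) ** 0.5
```

Comparing a trigger on the 1st versus the 200th photoelectron (for, say, 400 photoelectrons per pulse) shows the early trigger has markedly smaller spread, consistent with the trend the abstract reports.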
Wang, Lilie L. W.; Beddar, Sam [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)
2011-03-15
Purpose: To investigate the response of plastic scintillation detectors (PSDs) in a 6 MV photon beam of various field sizes using Monte Carlo simulations. Methods: Three PSDs were simulated: a BC-400 and a BCF-12, each attached to a plastic-core optical fiber, and a BC-400 attached to an air-core optical fiber. PSD response was calculated as the detector dose per unit water dose for field sizes ranging from 10 × 10 down to 0.5 × 0.5 cm² for both perpendicular and parallel orientations of the detectors to an incident beam. Similar calculations were performed for a CC01 compact chamber. The off-axis dose profiles were calculated in the 0.5 × 0.5 cm² photon beam and were compared to the dose profile calculated for the CC01 chamber and that calculated in water without any detector. The angular dependence of the PSDs' responses in a small photon beam was studied. Results: In the perpendicular orientation, the response of the BCF-12 PSD varied by only 0.5% as the field size decreased from 10 × 10 to 0.5 × 0.5 cm², while the response of the BC-400 PSD attached to a plastic-core fiber varied by more than 3% at the smallest field size because of its longer sensitive region. In the parallel orientation, the response of both PSDs attached to a plastic-core fiber varied by less than 0.4% for the same range of field sizes. For the PSD attached to an air-core fiber, the response varied, at most, by 2% for both orientations. Conclusions: The responses of all the PSDs investigated in this work vary by only 1%-2% irrespective of field size and orientation of the detector if the length of the sensitive region is not more than 2 mm and the optical fiber stems are prevented from pointing directly at the incident source.
Górka, B; Nilsson, B; Fernández-Varea, J M; Svensson, R; Brahme, A
2006-08-01
A new dosimeter, based on chemical vapour deposited (CVD) diamond as the active detector material, is being developed for dosimetry in radiotherapeutic beams. CVD-diamond is a very interesting material, since its atomic composition is close to that of human tissue and in principle it can be designed to introduce negligible perturbations to the radiation field and the dose distribution in the phantom due to its small size. However, non-tissue-equivalent structural components, such as electrodes, wires and encapsulation, need to be carefully selected as they may induce severe fluence perturbation and angular dependence, resulting in erroneous dose readings. By introducing metallic electrodes on the diamond crystals, interface phenomena between high- and low-atomic-number materials are created. Depending on the direction of the radiation field, an increased or decreased detector signal may be obtained. The small dimensions of the CVD-diamond layer and electrodes (around 100 microm and smaller) imply a higher sensitivity to the lack of charged-particle equilibrium and may cause severe interface phenomena. In the present study, we investigate the variation of energy deposition in the diamond detector for different photon-beam qualities, electrode materials and geometric configurations using the Monte Carlo code PENELOPE. The prototype detector was produced from a 50 microm thick CVD-diamond layer with 0.2 microm thick silver electrodes on both sides. The mean absorbed dose to the detector's active volume was modified in the presence of the electrodes by 1.7%, 2.1%, 1.5%, 0.6% and 0.9% for 1.25 MeV monoenergetic photons, a complete (i.e. shielded) (60)Co photon source spectrum and 6, 18 and 50 MV bremsstrahlung spectra, respectively. The shift in mean absorbed dose increases with increasing atomic number and thickness of the electrodes, and diminishes with increasing thickness of the diamond layer. 
From a dosimetric point of view, graphite would be an almost perfect electrode material. This study shows that, for the considered therapeutic beam qualities, the perturbation of the detector signal due to charge-collecting graphite electrodes of thicknesses between 0.1 and 700 µm is negligible within the calculation uncertainty of 0.2%. PMID:16861769
Mazurier, J; Gouriou, J; Chauvenet, B; Barthe, J
2001-06-01
The BNM-LNHB (formerly BNM-LPRI, the French national standard laboratory for ionizing radiation) is equipped with a SATURNE 43 linear accelerator (GE Medical Systems) dedicated to establishing national references of absorbed dose to water for high-energy photon and electron beams. These standards are derived from a dose measurement with a graphite calorimeter and a transfer procedure to water using Fricke dosimeters. This method has already been used to obtain the reference of absorbed dose to water for cobalt-60 beams. The correction factors arising from the perturbations generated by the dosimeters were determined by Monte Carlo calculations. For these applications, the Monte Carlo code PENELOPE was used and user codes were specially developed. The first step consisted of simulating the electron and photon showers produced by primary electrons within the accelerator head to determine the characteristics of the resulting photon beams and absorbed dose distributions in a water phantom. These preliminary computations were described in a previous paper. The second step, described in this paper, deals with the calculation of the perturbation correction factors of the graphite calorimeter and of Fricke dosimeters. To point out possible systematic biases, these correction factors were calculated with another Monte Carlo code, EGS4, widely used for years in the field of dose metrology applications. Comparison of the results showed no significant bias. Where possible, experimental verifications confirmed the calculated values. PMID:11419629
Griesheimer, D. P. [Bettis Atomic Power Laboratory, P.O. Box 79, West Mifflin, PA 15122 (United States); Stedry, M. H. [Knolls Atomic Power Laboratory, P.O. Box 1072, Schenectady, NY 12301 (United States)]
2013-07-01
A rigorous treatment of energy deposition in a Monte Carlo transport calculation, including coupled transport of all secondary and tertiary radiations, increases the computational cost of a simulation dramatically, making fully-coupled heating impractical for many large calculations, such as 3-D analysis of nuclear reactor cores. However, in some cases, the added benefit from a full-fidelity energy-deposition treatment is negligible, especially considering the increased simulation run time. In this paper we present a generalized framework for the in-line calculation of energy deposition during steady-state Monte Carlo transport simulations. This framework gives users the ability to select among several energy-deposition approximations with varying levels of fidelity. The paper describes the computational framework, along with derivations of four energy-deposition treatments. Each treatment uses a unique set of self-consistent approximations, which ensure that energy balance is preserved over the entire problem. By providing several energy-deposition treatments, each with different approximations for neglecting the energy transport of certain secondary radiations, the proposed framework provides users the flexibility to choose between accuracy and computational efficiency. Numerical results are presented, comparing heating results among the four energy-deposition treatments for a simple reactor/compound shielding problem. The results illustrate the limitations and computational expense of each of the four energy-deposition treatments. (authors)
Monte Carlo Study of Fetal Dosimetry Parameters for 6 MV Photon Beam
Atarod, Maryam; Shokrani, Parvaneh
2013-01-01
Because of the adverse effects of ionizing radiation on fetuses, fetal dose should be estimated prior to radiotherapy of pregnant patients. Fetal dose has been studied by several authors at different depths in phantoms with various abdomen thicknesses (ATs). In this study, the effect of maternal AT and depth on fetal dosimetry was investigated using peripheral dose (PD) distribution evaluations. A BEAMnrc model of an Oncor linac, including out-of-beam components, was used for dose calculations at the out-of-field border. A 6 MV photon beam was used to irradiate a chest phantom. Measurements were made using EBT2 radiochromic film in an RW3 phantom serving as the abdomen. The following were measured for different ATs: depth PD profiles at two distances from the field edge, and in-plane PD profiles at two depths. The results of this study show that PD is depth dependent near the field edge. Increasing the AT changes neither the depth of maximum PD nor its distribution as a function of distance from the field edge. It is concluded that the maximum fetal dose can be estimated using a flat phantom, i.e., without taking the AT into account. Furthermore, an in-plane profile measured at any depth can represent the dose variation as a function of distance. However, in order to estimate the maximum PD, the in-plane profile should be measured at the out-of-field depth of maximum dose. PMID:24083135
NASA Astrophysics Data System (ADS)
Bergmann, Ryan
Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to implement a continuous energy Monte Carlo neutron transport algorithm efficiently on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in both criticality and fixed source modes. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access.
The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the reaction types as contiguous as possible and removes completed histories from the transport cycle. The sort reduces the amount of divergence in GPU "thread blocks," keeps the SIMD units as full as possible, and eliminates spending memory bandwidth on checking whether a neutron in the batch has been terminated. Using a remapping vector means the data access pattern is irregular, but this is mitigated by using large batch sizes, where the GPU can effectively eliminate the high cost of irregular global memory access. WARP modifies the standard unionized energy grid implementation to reduce memory traffic. Instead of storing a matrix of pointers indexed by reaction type and energy, WARP stores three matrices. The first contains cross section values, the second contains pointers to angular distributions, and the third contains pointers to energy distributions. This linked-list type of layout increases memory usage, but lowers the number of data loads needed to determine a reaction by eliminating a pointer load to find a cross section value. Optimized, high-performance GPU code libraries are also used by WARP wherever possible. The CUDA performance primitives (CUDPP) library is used to perform the parallel reductions, sorts and sums, the CURAND library is used to seed the linear congruential random number generators, and the OptiX ray tracing framework is used for geometry representation. OptiX is a highly optimized library developed by NVIDIA that automatically builds hierarchical acceleration structures around user-input geometry so only surfaces along a ray line need to be queried in ray tracing. WARP also performs material and cell number queries with OptiX by using a point-in-polygon-like algorithm.
WARP has shown that GPUs are an effective platform for performing Monte Carlo neutron transport with continuous energy cross sections. Currently, WARP is the most detailed and feature-rich program in existence for performing continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs, but compared to production codes like Serpent and MCNP, WARP ha
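The remapping-vector bookkeeping described above can be illustrated outside of CUDA. Below is a minimal NumPy sketch, not WARP code; the reaction codes and batch size are invented for illustration. It sorts a vector of indices by reaction type so that live histories with the same reaction stay contiguous and terminated histories drop out of the batch:

```python
import numpy as np

# Hypothetical reaction codes: 0 = terminated, 1 = scatter, 2 = fission, 3 = capture
rng = np.random.default_rng(0)
reaction = rng.integers(0, 4, size=16)       # per-history reaction type for this iteration
remap = np.argsort(reaction, kind="stable")  # the remapping vector (indices into particle data)

# Terminated histories (code 0) sort to the front; the live batch is the tail.
first_live = np.searchsorted(reaction[remap], 1)
live = remap[first_live:]

# Threads would walk 'live' in order, so equal reaction types are contiguous and
# no memory bandwidth is spent re-checking dead histories.
```

In WARP the sort is a parallel radix sort on the GPU, but the idea is the same: threads index through the remapping vector rather than through the raw, unsorted particle arrays.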
Tian, Zhen; Graves, Yan Jiang; Jia, Xun; Jiang, Steve B
2014-11-01
Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations the particle carried a weight corresponding to the PSL it came from. Dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for the open fields tested was improved on average from 70.56 to 99.36% for 2%/2 mm criteria and from 32.22 to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan.
The passing rate of the γ-index test within the 10% isodose line of the prescription dose was improved from 92.73 to 99.70% and from 82.16 to 96.73% for 2%/2 mm and 1%/1 mm criteria, respectively. Real clinical data measured from Varian, Siemens, and Elekta linear accelerators were also used to validate our commissioning method and a similar level of accuracy was achieved. PMID:25295381
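The commissioning step described above reduces to a regularized linear least-squares fit: measured dose ≈ weighted sum of pre-computed per-PSL doses. The sketch below is not gDPM code; the abstract describes an augmented Lagrangian solver with symmetry and smoothness terms, while here a plain Tikhonov smoothness penalty and synthetic dose data stand in, but the structure of the problem is the same:

```python
import numpy as np

rng = np.random.default_rng(1)
n_psl, n_pts = 8, 40
D = rng.random((n_pts, n_psl))         # pre-computed dose in water: one column per PSL
w_true = np.linspace(0.5, 1.5, n_psl)  # beam-specific PSL weights to be recovered
d_meas = D @ w_true                    # stand-in for the measured dose

# Smoothness regularization: penalize differences between neighboring PSL weights
L = np.diff(np.eye(n_psl), axis=0)
lam = 1e-3
A = np.vstack([D, np.sqrt(lam) * L])
b = np.concatenate([d_meas, np.zeros(n_psl - 1)])
w_fit, *_ = np.linalg.lstsq(A, b, rcond=None)

# The commissioned beam reproduces the measured dose
residual = np.max(np.abs(D @ w_fit - d_meas))
```

With a small penalty weight the fitted PSL weights reproduce the measurement almost exactly; the regularization matters in practice because real measurements are noisy and the system can be ill-conditioned.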
Franke, B. C. [Sandia National Laboratories, Albuquerque, NM 87185 (United States); Prinja, A. K. [Department of Chemical and Nuclear Engineering, University of New Mexico, Albuquerque, NM 87131 (United States)
2013-07-01
The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)
Quantum transport of strongly interacting photons in a one-dimensional nonlinear waveguide
Mohammad Hafezi; Darrick Chang; Vladimir Gritsev; Eugene Demler; Mikhail Lukin
2009-11-25
We present a theoretical technique for solving the quantum transport problem of a few photons through a one-dimensional, strongly nonlinear waveguide. We specifically consider the situation where the evolution of the optical field is governed by the quantum nonlinear Schrödinger equation (NLSE). Although this kind of nonlinearity is quite general, we focus on a realistic implementation involving cold atoms loaded in a hollow-core optical fiber, where the atomic system provides a tunable nonlinearity that can be large even at a single-photon level. In particular, we show that when the interaction between photons is effectively repulsive, the transmission of multi-photon components of the field is suppressed. This leads to anti-bunching of the transmitted light and indicates that the system acts as a single-photon switch. On the other hand, in the case of attractive interaction, the system can exhibit either anti-bunching or bunching, which is in stark contrast to semiclassical calculations. We show that the bunching behavior is related to the resonant excitation of bound states of photons inside the system.
Müller, Florian, E-mail: florian.mueller@sam.math.ethz.ch; Jenny, Patrick, E-mail: jenny@ifd.mavt.ethz.ch; Meyer, Daniel W., E-mail: meyerda@ethz.ch
2013-10-01
Monte Carlo (MC) is a well known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications, including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
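The MLMC idea, writing the fine-level expectation as a coarse-level expectation plus level corrections that need only a few samples each, can be sketched with a toy solver. This is not the streamline solver of the study; the "solver" below is just midpoint quadrature of a random integrand, with the grid spacing playing the role of the level:

```python
import numpy as np

rng = np.random.default_rng(2)

def solve(xi, h):
    """Stand-in 'flow solver': midpoint quadrature of a random integrand on grid spacing h."""
    x = np.arange(0.0, 1.0, h) + h / 2.0
    return np.sum(np.sin(xi * x)) * h   # O(h^2) discretization bias

def mlmc(levels, samples):
    est = 0.0
    for level, n in zip(range(levels), samples):
        h = 0.5 ** (level + 2)
        xi = rng.normal(1.0, 0.1, size=n)   # random input (e.g. permeability parameter)
        fine = np.array([solve(x, h) for x in xi])
        if level == 0:
            est += fine.mean()
        else:  # correction term: SAME samples on fine and coarse grids, so low variance
            coarse = np.array([solve(x, 2 * h) for x in xi])
            est += (fine - coarse).mean()
    return est

# Many cheap coarse samples, few expensive fine corrections
q = mlmc(levels=3, samples=[4000, 400, 40])
```

Because the fine/coarse difference has small variance, the expensive levels need far fewer samples than a single-level MC run at the finest grid, which is the source of the speedup reported in the study.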
Pasciak, Alexander Samuel
2007-04-25
useful for calculations requiring a near real-time evaluation period. For Monte Carlo simulations, a small computational unit called a Field Programmable Gate Array (FPGA) is capable of bringing the power of a large cluster computer into any personal...
Hughes, S
2011-01-01
The input/output characteristics of coherent photon transport through a semiconductor cavity system containing a single quantum dot are presented. The nonlinear quantum optics formalism uses a master equation approach and focuses on a waveguide-cavity system containing a semiconductor quantum dot; our general technique also applies to studying coherent reflection from a micropillar cavity. We investigate the effects of light propagation and show the need for quantized multiphoton effects for various dot-cavity systems, including the weakly-coupled, intermediately-coupled, and strongly-coupled regimes. We demonstrate that for mean photon numbers much less than 0.1, the commonly adopted weak excitation (single quantum) approximation breaks down, even in the weak coupling regime. As a measure of the photon correlations, we compute the Fano factor and the error associated with making a semiclassical approximation. We also investigate the role of electron-acoustic-phonon scattering and show that phonon-mediated scatt...
Watson, Craig A.
FA-119 On-Farm Transport of Ornamental Fish. Tina C. Crosby, Jeffrey E. Hill, Carlos V. Martinez and Craig A. Watson. Transport of fish will affect survival and overall quality of the fish (see UF/IFAS Circular 919, Stress: Its Role in Fish Disease). Fish should be moved quickly and efficiently to minimize stress, the risk
Yuni K. Dewaraja; Michael Ljungberg; Amitava Majumdar; Abhijit Bose; Kenneth F. Koral
2002-01-01
This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random
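The parallelization strategy just described, equal partitioning of photon histories across processors with independent random streams and a final reduction of the tallies, can be mimicked without MPI. The sketch below emulates the ranks with a plain loop; the slab-transmission "detector" is invented for illustration and SIMIND's physics is far richer:

```python
import numpy as np

def simulate_rank(rank, n_photons, seed_base=1234):
    """Each emulated 'processor' gets an independent random stream and an equal share of photons."""
    rng = np.random.default_rng([seed_base, rank])   # independent per-rank stream
    # Toy tally: count photons whose sampled path length exceeds a slab depth of 2 mfp
    path = rng.exponential(scale=1.0, size=n_photons)
    return int(np.sum(path > 2.0))                   # transmitted photons

n_total, n_ranks = 100_000, 8
share = n_total // n_ranks                           # equal partitioning of histories
tallies = [simulate_rank(r, share) for r in range(n_ranks)]

# The MPI_Reduce step: sum per-rank tallies into the final estimate
transmitted = sum(tallies)
fraction = transmitted / n_total
```

In an actual MPI implementation each call to `simulate_rank` would run on its own process and the final sum would be an `MPI_Reduce`; the essential points, disjoint workloads and statistically independent random streams per rank, are the same.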
Wulff, Jörg; Heverhagen, Johannes T; Karle, Heiko; Zink, Klemens
2010-01-01
Current dosimetry protocols require geometrical reference conditions for the determination of absorbed dose in external radiotherapy. Whenever these geometrical conditions cannot be maintained, the application of additional corrections becomes necessary in principle. The current DIN6800-2 protocol includes a corresponding factor k(NR), but numerical values are lacking and no definite information about the magnitude of this correction is available yet. This study presents Monte Carlo based calculations within the 6 MV-X photon field of a linear accelerator for a commonly used ion chamber (PTW31010), employing the EGSnrc code system. The linear accelerator model was matched to measurements, showing good agreement, and is used as a realistic source. The individual perturbation correction factors as well as the resulting correction factor k(NR) were calculated as a function of depth for three field sizes, as a function of central axis distance for the largest field, and within the build-up region. The behaviour of the ion chamber was further investigated for an idealized hypothetical field boundary. Within the field of the linear accelerator, where charged particle equilibrium is achieved, the factor k(NR) was generally below approximately 0.5%. In the build-up region a depth dependent correction of up to 2% was calculated when positioning the chamber according to DIN6800-2. Minimizing the depth dependence of the corrections in the build-up region led to a slightly different positioning of the ion chamber than currently recommended. In regions of the hypothetical field boundary with missing charged particle equilibrium and high dose gradients, the ion chamber response changed by up to approximately 40%, caused by the comparatively large volume (0.125 cm(3)) of the investigated chamber. PMID:20211423
Yahya Abadi, Akram; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Knaup, Courtney
2014-06-01
Some chemotherapy drugs contain a high Z element in their structure that can be used for tumour dose enhancement in radiotherapy. In the present study, dose enhancement factors (DEFs) by cisplatin and titanocene dichloride agents in brachytherapy were quantified based on Monte Carlo simulation. Six photon emitting brachytherapy sources were simulated and their dose rate constant and radial dose function were determined and compared with published data. Dose enhancement factor was obtained for 1, 3 and 5 % concentrations of cisplatin and titanocene dichloride chemotherapy agents in a tumour, in soft tissue phantom. The results of the dose rate constant and radial dose function showed good agreement with published data. Our results have shown that depending on the type of chemotherapy agent and brachytherapy source, DEF increases with increasing chemotherapy drug concentration. The maximum in-tumour averaged DEF for cisplatin and titanocene dichloride are 4.13 and 1.48, respectively, reached with 5 % concentrations of the agents, and (125)I source. Dose enhancement factor is considerably higher for both chemotherapy agents with (125)I, (103)Pd and (169)Yb sources, compared to (192)Ir, (198)Au and (60)Co sources. At similar concentrations, dose enhancement for cisplatin is higher compared with titanocene dichloride. Based on the results of this study, combination of brachytherapy and chemotherapy with agents containing a high Z element resulted in higher radiation dose to the tumour. Therefore, concurrent use of chemotherapy and brachytherapy with high atomic number drugs can have the potential benefits of dose enhancement. However, more preclinical evaluations in this area are necessary before clinical application of this method. PMID:24706342
K. C. Wu; K. F. Seefeldt; M. J. Solomon; J. W. Halloran
2005-01-01
A general, quantitative relationship between the photon-transport mean free path (l*) and resin sensitivity (DP) in multiple-scattering alumina/monomer suspensions formulated for ceramic stereolithography is presented and experimentally demonstrated. A Mie-theory-based computational method with structure factor contributions to determine l* was developed. Planar-source diffuse transmittance experiments were performed on monodisperse and bimodal polystyrene/water and alumina/monomer systems to validate this computational tool.
NASA Astrophysics Data System (ADS)
Misawa, Taichi; Okanaga, Takuya; Mohamad, Aizuddin; Sakai, Tadashi; Awano, Yuji
2015-05-01
We developed a novel Monte Carlo simulation model to investigate the line width dependence of the transport properties of multi-layered graphene nanoribbon (GNR) interconnects with edge roughness. We reported that the line width dependence of carrier mobility decreases significantly as the magnitude of the edge roughness gets smaller, which agrees well with experiments. We also discussed the influence of the inelasticity of edge roughness scatterings, inter-layer tunneling, and line width dependent band structures on the line width of the GNR interconnects.
NASA Astrophysics Data System (ADS)
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe electric power sodium cooled reactor. Nuclear characteristics are calculated at beginning of cycle of an initial core and at beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.
Chow, J [Princess Margaret Cancer Center, Toronto, ON, (Canada); Owrangi, A [University of Michigan Health System, Ann Arbor, MI (United States)
2014-06-01
Purpose: This study compared the dependence of depth dose on bone heterogeneity of unflattened photon beams to that of flattened beams. Monte Carlo simulations (the EGSnrc-based codes) were used to calculate depth doses in a phantom with a bone layer in the buildup region of the 6 MV photon beams. Methods: A heterogeneous phantom containing a bone layer 2 cm thick at a depth of 1 cm in water was irradiated by the unflattened and flattened 6 MV photon beams (field size = 10×10 cm²). Phase-space files of the photon beams based on the Varian TrueBeam linac were generated by the Geant4 and BEAMnrc codes, and verified by measurements. Depth doses were calculated using the DOSXYZnrc code with beam angles set to 0° and 30°. For dosimetric comparison, the above simulations were repeated in a water phantom using the same beam geometry with the bone layer replaced by water. Results: Our results showed that the beam output of the unflattened photon beams was about 2.1 times larger than that of the flattened beams in water. Comparing the water phantom to the bone phantom, larger doses were found in water above and below the bone layer for both the unflattened and flattened photon beams. When both beams were turned 30°, the deviation of depth dose between the bone and water phantoms became larger compared to that with beam angle equal to 0°. The dose ratio of the unflattened and flattened photon beams showed that the unflattened beam has a larger depth dose in the buildup region compared to the flattened beam. Conclusion: Although the unflattened photon beam had different beam output and quality compared to the flattened one, dose enhancements due to bone scatter were found to be similar. However, we discovered that the depth dose deviation due to the presence of bone was sensitive to beam obliquity.
ITS Version 4.0: Electron/photon Monte Carlo transport codes
Halbleib, J.A.; Kensek, R.P. [Sandia National Labs., Albuquerque, NM (United States); Seltzer, S.M. [National Inst. of Standards and Technology, Gaithersburg, MD (United States)]
1995-07-01
The current publicly released version of the Integrated TIGER Series (ITS), Version 3.0, has been widely distributed both domestically and internationally, and feedback has been very positive. This feedback, as well as our own experience, has convinced us to upgrade the system in order to honor specific user requests for new features and to implement other new features that will improve the physical accuracy of the system and permit additional variance reduction. In this presentation we will focus on components of the upgrade that (1) improve the physical model, (2) provide new and extended capabilities for the three-dimensional combinatorial geometry (CG) of the ACCEPT codes, and (3) permit significant variance reduction in an important class of radiation effects applications.
NASA Astrophysics Data System (ADS)
Tattersall, W. J.; Cocks, D. G.; Boyle, G. J.; Buckman, S. J.; White, R. D.
2015-04-01
We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly nonequilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 381 (2002), 10.1016/S0009-2614(02)01177-6], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and we develop a modified method that does not require that approximation. We also present an enhanced MC technique that has been designed to improve the accuracy and flexibility of simulations in spatially varying electric fields. All of the results are found to be in excellent agreement with an independent multiterm Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems.
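The key modification for structured media is weighting the scattering angle by the static structure factor S(q), which suppresses small momentum-transfer (forward) scattering. A hedged Python sketch of this via rejection sampling follows; the S(q) form below is a toy stand-in, not the Percus-Yevick solution used in the paper:

```python
import numpy as np

def structure_factor(q):
    """Toy stand-in for a Percus-Yevick S(q): -> 0 as q -> 0, -> 1 at large q."""
    return q * q / (1.0 + q * q)

def sample_cos_theta(k, rng):
    """Rejection-sample cos(theta) with coherent-scattering weight S(q) (here S <= 1)."""
    while True:
        mu = rng.uniform(-1.0, 1.0)               # isotropic proposal
        q = 2.0 * k * np.sqrt((1.0 - mu) / 2.0)   # elastic momentum transfer for cos(theta) = mu
        if rng.random() < structure_factor(q):
            return mu

rng = np.random.default_rng(5)
mus = np.array([sample_cos_theta(1.0, rng) for _ in range(20_000)])
mean_mu = mus.mean()   # negative: forward scattering is coherently suppressed
```

Embedding this angular sampling step in an otherwise standard dilute-gas MC loop is, in spirit, the extension the paper makes rigorous, including the altered anisotropy it induces in the transport coefficients.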
NASA Astrophysics Data System (ADS)
Ishmael Parsai, E.; Pearson, David; Kvale, Thomas
2007-08-01
An Elekta SL-25 medical linear accelerator (Elekta Oncology Systems, Crawley, UK) has been modelled using Monte Carlo simulations with the photon flattening filter removed. It is hypothesized that intensity modulated radiation therapy (IMRT) treatments may be carried out after the removal of this component, despite its criticality to standard treatments. Measurements using a scanning water phantom were also performed after the flattening filter had been removed. Both simulated and measured beam profiles showed that dose on the central axis increased, with the Monte Carlo simulations showing an increase by a factor of 2.35 for 6 MV and 4.18 for 10 MV beams. A further consequence of removing the flattening filter was the softening of the photon energy spectrum, leading to a steeper reduction in dose at depths greater than the depth of maximum dose. A comparison of points at the field edge showed that dose was reduced at these points by as much as 5.8% for larger fields. In conclusion, the greater photon fluence is expected to result in shorter treatment times, while the reduction in dose outside of the treatment field is strongly suggestive of more accurate dose delivery to the target.
Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes
NASA Technical Reports Server (NTRS)
Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.
2010-01-01
The presentation outline includes motivation, the main physics in the radiation transport codes considered (HZETRN, UPROP, FLUKA, GEANT4), the space radiation cases considered (solar particle events and galactic cosmic rays), results for slab geometry, results for spherical geometry, and a summary.
A Hybrid (Monte-Carlo/Deterministic) Approach for Multi-Dimensional Radiation Transport
Bal, Guillaume
With an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved. Keywords: sampling; variance reduction; 3D rendering; remote sensing.
Satoshi SATO; Hiromasa IIDA; Takeo NISHITANI
2002-01-01
For the evaluation of gamma-ray dose rates around duct penetrations after shutdown of a nuclear fusion reactor, a calculation method is proposed based on Monte Carlo neutron and decay gamma-ray transport calculations. For the radioisotope production rates during operation, the Monte Carlo calculation is conducted with a modified nuclear data library, replacing a prompt gamma-ray
NASA Astrophysics Data System (ADS)
Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S.; Harrendorf, Marco A.; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian
2013-08-01
The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results, have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.
Use of single scatter electron monte carlo transport for medical radiation sciences
Svatos, Michelle M. (Oakland, CA)
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
NASA Astrophysics Data System (ADS)
Almansa, Julio F.; Guerrero, Rafael; Al-Dweri, Feras M. O.; Anguiano, Marta; Lallena, Antonio M.
2007-05-01
Monte Carlo calculations using the codes PENELOPE and GEANT4 have been performed to characterize the dosimetric properties of monoenergetic photon point sources in water. The dose rate in water has been calculated for energies of interest in brachytherapy, ranging between 10 keV and 2 MeV. A comparison of the results obtained using the two codes with the available data calculated with other Monte Carlo codes is carried out. A χ²-like statistical test is proposed for these comparisons. PENELOPE and GEANT4 show a reasonable agreement for all energies analyzed and distances to the source larger than 1 cm. Significant differences are found at distances from the source up to 1 cm. A similar situation occurs between PENELOPE and EGS4.
Program EPICP: Electron photon interaction code, photon test module. Version 94.2
Cullen, D.E.
1994-09-01
The computer code EPICP performs Monte Carlo photon transport calculations in a simple one-zone cylindrical detector. Results include deposition within the detector, transmission, reflection and lateral leakage from the detector, as well as events and energy deposition as a function of the depth into the detector. EPICP is part of the EPIC (Electron Photon Interaction Code) system. EPICP is designed to perform both normal transport calculations and diagnostic calculations involving only photons, with the objective of developing optimum algorithms for later use in EPIC. The EPIC system includes other modules designed for the same purpose: electron and positron transport (EPICE), neutron transport (EPICN), charged particle transport (EPICC), geometry (EPICG), and source sampling (EPICS). This is a modular system that, once optimized, can be linked together to consider a wide variety of particles, geometries, sources, etc. By design EPICP only considers photon transport. In particular it does not consider electron transport, so that later EPICP and EPICE can be used to quantitatively evaluate the importance of electron transport when starting from photon sources. In this report I will merely mention where we expect results obtained considering only photon transport to differ significantly from those obtained using coupled electron-photon transport.
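The one-zone transport-and-tally scheme described above can be sketched in a few lines. The following is a minimal analog illustration with made-up cross sections and a 1-D slab rather than a cylinder; it is not EPICP itself, only the generic pattern of free-flight sampling plus absorption/leakage tallies:

```python
import math
import random

# Minimal sketch (illustrative constants, not EPICP): analog photon
# transport in a one-zone slab of thickness T, tallying transmission,
# reflection, and in-zone deposition.
SIGMA_T = 0.5      # total interaction coefficient, 1/cm (hypothetical)
P_ABSORB = 0.3     # probability that a collision is an absorption
T = 10.0           # slab thickness, cm

def run(n_photons, seed=1):
    rng = random.Random(seed)
    tallies = {"transmitted": 0, "reflected": 0, "deposited": 0}
    for _ in range(n_photons):
        x, mu = 0.0, 1.0          # start on the left face, moving right
        while True:
            d = -math.log(rng.random()) / SIGMA_T   # free-flight length
            x += mu * d
            if x >= T:
                tallies["transmitted"] += 1
                break
            if x <= 0.0:
                tallies["reflected"] += 1
                break
            if rng.random() < P_ABSORB:             # absorbed in the zone
                tallies["deposited"] += 1
                break
            mu = rng.uniform(-1.0, 1.0)             # isotropic scatter
    return tallies

print(run(10000))
```

Binning the absorption positions would additionally give deposition as a function of depth, as the abstract describes.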
NASA Astrophysics Data System (ADS)
Stephani, K. A.; Goldstein, D. B.; Varghese, P. L.
2012-07-01
A general approach for achieving consistency in the transport properties between direct simulation Monte Carlo (DSMC) and Navier-Stokes (CFD) solvers is presented for five-species air. Coefficients of species diffusion, viscosity, and thermal conductivity are considered. The transport coefficients that are modeled in CFD solvers are often obtained from expressions involving sets of collision integrals, which are derived from more realistic intermolecular potentials (i.e., ab initio calculations). In this work, the self-consistent effective binary diffusion and Gupta et al.-Yos transport models are considered. The DSMC transport coefficients are approximated from Chapman-Enskog theory, in which the collision integrals are computed using either the variable hard sphere (VHS) or the variable soft sphere (VSS) (phenomenological) collision cross section model. The VHS and VSS parameters are then used to adjust the DSMC transport coefficients in order to achieve a best fit to the coefficients computed from more realistic intermolecular potentials over a range of temperatures. The best-fit collision model parameters are determined for both collision-averaged and collision-specific pairing approaches using the Nelder-Mead simplex algorithm. A consistent treatment of the diffusion, viscosity, and thermal conductivities is presented, and recommended sets of best-fit VHS and VSS collision model parameters are provided for a five-species air mixture.
Gardner, R.P.; Verghese, K.; Prettyman, T.H.; Mickael, M.
1988-01-01
In Monte Carlo simulation, one is interested in the particle tracking process in two major ways. The first of these is in simply keeping track of where the particle is in relation to the various homogeneous zones that are used to simulate the spatial domain of interest through which the particle moves randomly in phase space. The second of these, which is becoming more and more important, is in forcing particles either in a preferred general direction (direction biasing) or to exactly intersect a particular zone. This latter process is essential in the statistical estimation variance reduction process in which the result of interest (e.g., particle detection) is forced to occur at each particle interaction point. These processes have many factors in common when treated mathematically. This paper describes experience in developing and using mathematical algorithms for these general tracking and forcing processes in the Monte Carlo modeling of nuclear well-logging tools and industrial tomographic devices.
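The statistical-estimation idea described above (scoring the expected contribution to the result of interest at every interaction point, rather than waiting for rare analog arrivals) can be sketched for a hypothetical 1-D slab transmission problem. The constants and geometry below are illustrative, not the authors' well-logging tools:

```python
import math
import random

# Sketch of the expectation (statistical estimation) variance reduction:
# at the source and at every collision, the expected uncollided
# contribution to a detector plane at x = T is scored; actual analog
# crossings are then NOT scored, to avoid double counting.
SIGMA_T = 1.0   # total cross section, 1/cm (illustrative)
C = 0.7         # scattering probability per collision (implicit capture)
T = 5.0         # detector plane position, cm

def estimate_transmission(n_histories, seed=2):
    rng = random.Random(seed)
    score = 0.0
    for _ in range(n_histories):
        x, mu, w = 0.0, 1.0, 1.0
        score += w * math.exp(-SIGMA_T * T)   # source's uncollided estimate
        while w > 1e-6:
            x += mu * (-math.log(rng.random()) / SIGMA_T)
            if not 0.0 < x < T:
                break                      # leaked; already scored in expectation
            w *= C                         # implicit capture: survive with reduced weight
            mu = rng.uniform(-1.0, 1.0)    # isotropic scatter
            if mu > 0.0:                   # expected uncollided flight to detector
                score += w * math.exp(-SIGMA_T * (T - x) / mu)
    return score / n_histories

print(estimate_transmission(5000))
```

Every history contributes at least exp(-Σt T) to the tally, which is why such forcing schemes give usable answers for deep-penetration problems where analog scoring would see almost no events.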
NASA Astrophysics Data System (ADS)
Boyle, G.; Tattersall, W.; Robson, R. E.; White, Ron; Dujko, S.; Petrovic, Z. Lj.; Brunger, M. J.; Sullivan, J. P.; Buckman, S. J.; Garcia, G.
2013-09-01
An accurate quantitative understanding of the behavior of positrons in gaseous and soft-condensed systems is important for many technological applications as well as for fundamental physics research. Optimizing Positron Emission Tomography (PET) technology and understanding the associated radiation damage requires knowledge of how positrons interact with matter prior to annihilation. Modeling techniques developed for electrons can also be employed to model positrons, and these techniques can also be extended to account for the structural properties of the medium. Two complementary approaches have been implemented in the present work: kinetic theory and Monte Carlo simulations. Kinetic theory is based on the multi-term Boltzmann equation, which has recently been modified to include the positron-specific interaction processes of annihilation and positronium formation. Simultaneously, a Monte Carlo simulation code has been developed that can likewise incorporate positron-specific processes. Funding support from ARC (CoE and DP schemes).
Exponentially-convergent Monte Carlo for the One-dimensional Transport Equation
Peterson, Jacob Ross
2014-04-23
The thesis surveys production Monte Carlo codes, including one developed at the Institute for Nuclear Research of the Russian Academy of Science, Moscow, Russia; MONK/MCBEND, developed by Serco in the United Kingdom; Geant4, a worldwide collaboration initially developed at CERN; and OpenMC. Contents include: 1.2 Variance Reduction Techniques; 1.3 Previous Work on Exponentially-Convergent Monte Carlo; 1.4 Motivation for Current Work; 2. Algorithm Descriptions.
Cascaded two-photon spectroscopy of Yb atoms with a transportable effusive atomic beam apparatus
Song, Minsoo; Yoon, Tai Hyun [Department of Physics, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713 (Korea, Republic of)
2013-02-15
We present a transportable effusive atomic beam apparatus for cascaded two-photon spectroscopy of the dipole-forbidden transition (6s² ¹S₀ ↔ 6s7s ¹S₀) of Yb atoms. An ohmic-heating effusive oven is designed to have a reservoir volume of 1.6 cm³ and a highly collimated atomic beam (collimation angle of 30 mrad). The new atomic beam apparatus allows us to detect the spontaneous cascaded two-photon emission from the 6s7s ¹S₀ state via the intercombination 6s6p ³P₁ state with a high signal-to-noise ratio even at a temperature of 340 °C. This is made possible in our apparatus by the enhanced atomic beam flux and superior detection solid angle.
Monte Carlo simulation of light transport in dark-field confocal photoacoustic microscopy
Zhixing Xie; Lihong V. Wang; Hao F. Zhang
2009-01-01
A modified Monte Carlo (MC) convolution method, extending MC simulation by integration, is developed for finite photon beams of arbitrary shape with translational or rotational invariance; it is proven consistent with the conventional convolution extension of MC simulation for a normally incident finite beam. The method is applied to analyze the positions of the fluence foci and the ratios of fluence at the focus
Poludniowski, Gavin G. [Joint Department of Physics, Division of Radiotherapy and Imaging, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Downs Road, Sutton, Surrey SM2 5PT, United Kingdom and Centre for Vision Speech and Signal Processing (CVSSP), Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, Surrey GU2 7XH (United Kingdom); Evans, Philip M. [Centre for Vision Speech and Signal Processing (CVSSP), Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, Surrey GU2 7XH (United Kingdom)
2013-04-15
Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd₂O₂S:Tb screens; (ii) investigate the impact of size distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler and novel geometrical optics-based models for these parameters and compare them to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Q_sct), absorption efficiency (Q_abs), and the scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 μm. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd₂O₂S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size distribution (σ = 1%, 5%, or 10% of mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.).
Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size and emission wavelength. For a phosphor screen structure with a distribution in grain sizes and a spectrum of emission, only the average trend of Mie theory is likely to be important. This average behavior is well predicted by the more sophisticated of the geometrical optics models (GODM+) and in approximate agreement for the simplest (GODM). The root-mean-square differences obtained between predicted MTF and experimental measurements, using all three models (GODM, GODM+, Mie), were within 0.03 for both Lanex screens in all cases. This is excellent agreement in view of the uncertainties in screen composition and optical properties. Conclusions: If Mie theory is used for calculating transport parameters for light scattering and absorption in powdered-phosphor screens, care should be taken to average out the fine structure in the parameter predictions. However, for visible emission wavelengths (λ < 1.0 μm) and grain radii (a > 0.5 μm), geometrical optics models for transport parameters are an alternative to Mie theory. These geometrical optics models are simpler and lead to no substantial loss in accuracy.
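A BTE Monte Carlo light-transport code consumes the anisotropy parameter g through a scattering phase function. A common choice is the Henyey-Greenstein function, sampled below; this is an assumption for illustration, since the phsphr package may use a different phase function:

```python
import math
import random

# Sample cos(theta) for a scattering event from the Henyey-Greenstein
# phase function with anisotropy g; the mean of the samples equals g.
def sample_hg_cosine(g, rng):
    if abs(g) < 1e-6:
        return rng.uniform(-1.0, 1.0)          # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - frac * frac) / (2.0 * g)

rng = random.Random(3)
g = 0.75   # forward-peaked scattering (illustrative value)
mean_mu = sum(sample_hg_cosine(g, rng) for _ in range(200000)) / 200000
print(round(mean_mu, 2))   # sample mean cosine should be close to g
```

This is how a single scalar g from Mie theory or a geometrical optics model feeds an entire angular distribution inside the transport loop.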
NASA Astrophysics Data System (ADS)
Kumar, Sudhir; Deshpande, Deepak D.; Nahum, Alan E.
2015-01-01
The relationships between D, K and Kcol are of fundamental importance in radiation dosimetry. These relationships are critically influenced by secondary electron transport, which makes Monte Carlo (MC) simulation indispensable; we have used the MC codes DOSRZnrc and FLURZnrc. Computations of the ratios D/K and D/Kcol in three materials (water, aluminum and copper) for large field sizes with energies from 50 keV to 25 MeV (including 6–15 MV) are presented. Beyond the depth of maximum dose, D/K is almost always less than or equal to unity and D/Kcol greater than unity, and these ratios are virtually constant with increasing depth. The difference between K and Kcol increases with energy and with the atomic number of the irradiated material. D/K in ‘sub-equilibrium’ small megavoltage photon fields decreases rapidly with decreasing field size. A simple analytical expression for X̄, the distance ‘upstream’ from a given voxel to the mean origin of the secondary electrons depositing their energy in this voxel, is proposed: X̄_emp ≈ 0.5 R_csda(Ē₀), where Ē₀ is the mean initial secondary electron energy. These X̄_emp agree well with ‘exact’ MC-derived values for photon energies from 5–25 MeV for water and aluminum. An analytical expression for D/K is also presented and evaluated for 50 keV–25 MeV photons in the three materials, showing close agreement with the MC-derived values.
Kumar, Sudhir; Deshpande, Deepak D; Nahum, Alan E
2015-01-21
The relationships between D, K and Kcol are of fundamental importance in radiation dosimetry. These relationships are critically influenced by secondary electron transport, which makes Monte Carlo (MC) simulation indispensable; we have used the MC codes DOSRZnrc and FLURZnrc. Computations of the ratios D/K and D/Kcol in three materials (water, aluminum and copper) for large field sizes with energies from 50 keV to 25 MeV (including 6-15 MV) are presented. Beyond the depth of maximum dose, D/K is almost always less than or equal to unity and D/Kcol greater than unity, and these ratios are virtually constant with increasing depth. The difference between K and Kcol increases with energy and with the atomic number of the irradiated material. D/K in 'sub-equilibrium' small megavoltage photon fields decreases rapidly with decreasing field size. A simple analytical expression for X̄, the distance 'upstream' from a given voxel to the mean origin of the secondary electrons depositing their energy in this voxel, is proposed: X̄_emp ≈ 0.5 R_csda(Ē₀), where Ē₀ is the mean initial secondary electron energy. These X̄_emp agree well with 'exact' MC-derived values for photon energies from 5-25 MeV for water and aluminum. An analytical expression for D/K is also presented and evaluated for 50 keV-25 MeV photons in the three materials, showing close agreement with the MC-derived values. PMID:25548933
Greenman, G M; O'Brien, M J; Procassini, R J; Joy, K I
2009-03-09
Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculation of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement relates to the addition of parallelism within the CG tracker via spatial domain decomposition. This permits calculation of problems with a large degree of geometric complexity, which are not possible through particle parallelism alone. In this method, the cells are decomposed across processors and a particle is communicated to an adjacent processor when it tracks to an interprocessor boundary. Applications that demonstrate the efficacy of these new methods are presented.
Delta f Monte Carlo Calculation Of Neoclassical Transport In Perturbed Tokamaks
Kim, Kimin; Park, Jong-Kyu; Kramer, Gerrit; Boozer, Allen H.
2012-04-11
Non-axisymmetric magnetic perturbations can fundamentally change neoclassical transport in tokamaks by distorting particle orbits on deformed or broken flux surfaces. This so-called non-ambipolar transport is highly complex, and ultimately a numerical simulation is required to achieve a precise description and understanding of it. A new δf particle code (POCA) has been developed for this purpose using a modified pitch-angle collision operator preserving momentum conservation. POCA was successfully benchmarked for neoclassical transport and momentum conservation in an axisymmetric configuration. Non-ambipolar particle flux is calculated in the non-axisymmetric case, and the results show a clear resonant nature of non-ambipolar transport and magnetic braking. Neoclassical toroidal viscosity (NTV) torque is calculated using anisotropic pressures and the magnetic field spectrum, and compared with the generalized NTV theory. Calculations indicate a clear δB² dependence of NTV, and good agreement with theory on NTV torque profiles and amplitudes depending on collisionality.
Duncan, James S.
PNNL-SA-33487, for Radiation Physics, Particle Transport Simulation and Applications, 23-26 October 2000. Radiation and Health Technology, Pacific Northwest National Laboratory, Richland, WA 99352, USA. Abstract: ... patients into Monte Carlo dosimetry calculations, efforts to further improve the effectiveness of radiation ...
Lian, C P L; Othman, M A R; Cutajar, D; Butson, M; Guatelli, S; Rosenfeld, A B
2011-06-01
Skin dose is often the quantity of interest for radiological protection, as the skin is the organ that receives maximum dose during kilovoltage X-ray irradiations. The purpose of this study was to simulate the energy response and the depth dose water equivalence of the MOSkin radiation detector (Centre for Medical Radiation Physics (CMRP), University of Wollongong, Australia), a MOSFET-based radiation sensor with a novel packaging design, at clinical kilovoltage photon energies typically used for superficial/orthovoltage therapy and X-ray CT imaging. Monte Carlo simulations by means of the Geant4 toolkit were employed to investigate the energy response of the CMRP MOSkin dosimeter on the surface of the phantom, and at various depths ranging from 0 to 6 cm in a 30 × 30 × 20 cm water phantom. By varying the thickness of the tissue-equivalent packaging, and by adding thin metallic foils to the existing design, the dose enhancement effect of the MOSkin dosimeter at low photon energies was successfully quantified. For a 5 mm diameter photon source, it was found that the MOSkin was water equivalent to within 3% at shallow depths less than 15 mm. It is recommended that for depths larger than 15 mm, the appropriate depth dose water equivalent correction factors be applied to the MOSkin at the relevant depths if this detector is to be used for depth dose assessments. This study has shown that the Geant4 Monte Carlo toolkit is useful for characterising the surface energy response and depth dose behaviour of the MOSkin. PMID:21559885
Monte Carlo at Work, by Gary D. Doolen and John Hendricks. Every second nearly 10,000,000,000 "random" numbers are being generated on computers around the world for Monte Carlo solutions to problems ... hundreds of full-time careers invested in the fine art of generating Monte Carlo solutions -- a livelihood ...
NASA Astrophysics Data System (ADS)
Ding, D.; Chen, X.; Minnich, A. J.
2014-04-01
Recently, a pump beam size dependence of thermal conductivity was observed in Si at cryogenic temperatures using time-domain thermal reflectance (TDTR). These observations were attributed to quasiballistic phonon transport, but the interpretation of the measurements has been semi-empirical. Here, we present a numerical study of the heat conduction that occurs in the full 3D geometry of a TDTR experiment, including an interface, using the Boltzmann transport equation. We identify the radial suppression function that describes the suppression in heat flux, compared to Fourier's law, that occurs due to quasiballistic transport and demonstrate good agreement with experimental data. We also discuss unresolved discrepancies that are important topics for future study.
Ding, D.; Chen, X.; Minnich, A. J., E-mail: aminnich@caltech.edu [Division of Engineering and Applied Science, California Institute of Technology, Pasadena, California 91125 (United States)
2014-04-07
Recently, a pump beam size dependence of thermal conductivity was observed in Si at cryogenic temperatures using time-domain thermal reflectance (TDTR). These observations were attributed to quasiballistic phonon transport, but the interpretation of the measurements has been semi-empirical. Here, we present a numerical study of the heat conduction that occurs in the full 3D geometry of a TDTR experiment, including an interface, using the Boltzmann transport equation. We identify the radial suppression function that describes the suppression in heat flux, compared to Fourier's law, that occurs due to quasiballistic transport and demonstrate good agreement with experimental data. We also discuss unresolved discrepancies that are important topics for future study.
Correlated two-photon transport in a one-dimensional waveguide side-coupled to a nonlinear cavity
Liao Jieqiao; Law, C. K. [Department of Physics and Institute of Theoretical Physics, Chinese University of Hong Kong, Shatin, Hong Kong Special Administrative Region (Hong Kong)
2010-11-15
We investigate the transport properties of two photons inside a one-dimensional waveguide side-coupled to a single-mode nonlinear cavity. The cavity is filled with a nonlinear Kerr medium. Based on the Laplace transform method, we present an analytic solution for the quantum states of the two transmitted and reflected photons, which are initially prepared in a Lorentzian wave packet. The solution reveals how quantum correlation between the two photons emerges after the scattering by the nonlinear cavity. In particular, we show that the output wave function of the two photons in position space can be localized in relative coordinates, which is a feature that might be interpreted as a two-photon bound state in this waveguide-cavity system.
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the first of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, various aspects of the modelling effort are examined. In particular, the need to save on core memory causes one to use only specific realizations that have certain initial characteristics; in effect, these transport simulations are conditioned by these characteristics. Also, the need to independently estimate length scales for the generated fields is discussed. The statistical uniformity of the flow field is investigated by plotting the variance of the seepage velocity for vector components in the x, y, and z directions. Specific features of the velocity field itself are also illuminated in this first paper. In particular, these data give one the opportunity to investigate the effective hydraulic conductivity in a flow field which is approximately statistically uniform; comparisons are made with first- and second-order perturbation analyses. The mean cloud velocity is examined to ascertain whether it is identical to the mean seepage velocity of the model. Finally, the variance in the cloud centroid velocity is examined for the effect of source size and differing strengths of local transverse dispersion.
Core-scale solute transport model selection using Monte Carlo analysis
Bwalya Malama; Kristopher L. Kuhlman; Scott C. James
2013-01-01
Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with the conservative tracers tritium and sodium-22, and the retarding solute uranium-232. The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile
A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy David; Krolik, Julian H.
2013-01-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
A MONTE CARLO CODE FOR RELATIVISTIC RADIATION TRANSPORT AROUND KERR BLACK HOLES
Schnittman, Jeremy D. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Krolik, Julian H., E-mail: jeremy.schnittman@nasa.gov, E-mail: jhk@pha.jhu.edu [Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States)
2013-11-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
NASA Astrophysics Data System (ADS)
Hennad, A.; Yousfi, M.
2011-01-01
The ion swarm transport coefficients, such as reduced mobility, diffusion coefficients and reaction rates, of the systems Ar⁺/Cl₂, Ar⁺/N₂, N₂⁺/Cl₂ and N₂⁺/Ar have been determined from a Monte Carlo simulation using calculated elastic and experimentally estimated inelastic collision cross sections. The elastic momentum transfer cross sections have been determined from a semi-classical JWKB approximation based on a rigid-core interaction potential model. The inelastic cross sections have been fitted using measured reaction coefficients, for instance ion conversion reaction coefficients. The cross section sets are then fitted using either the measured reduced mobility, when available in the literature, or the zero-field mobility calculated from Satoh's relation. From the sets of elastic and inelastic collision cross sections thus obtained for the Ar⁺/Cl₂, Ar⁺/N₂, N₂⁺/Cl₂ and N₂⁺/Ar systems, the ion transport and reaction coefficients are then calculated in pure gases and also in binary and ternary mixtures involving Cl₂, Ar and N₂ over a wide range of reduced electric field. These ion data are very useful for the modelling and simulation of non-equilibrium low-pressure electrical discharges used, more particularly, for etching of III-V compounds for photonic crystal applications.
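As a schematic of how such a simulation turns collision sampling into transport coefficients, the sketch below uses a deliberately simplified model (constant collision frequency, full momentum loss at each collision, illustrative parameters, not the paper's fitted cross-section sets). In this limit the drift velocity tends to (q/m)E/ν and the mobility follows as w/E:

```python
import math
import random

# Toy ion-swarm Monte Carlo: the ion accelerates freely in a uniform
# field E between exponentially distributed free-flight times, and each
# collision resets its velocity to zero (BGK-like relaxation).
Q_M = 4.8e7   # ion charge-to-mass ratio, C/kg (illustrative)
NU = 1.0e9    # collision frequency, 1/s (illustrative)
E = 1.0e4     # electric field along z, V/m

def drift_velocity(n_flights, seed=4):
    rng = random.Random(seed)
    a = Q_M * E               # acceleration along the field
    z, t = 0.0, 0.0
    for _ in range(n_flights):
        dt = -math.log(rng.random()) / NU   # free-flight time
        z += 0.5 * a * dt * dt              # ballistic displacement from rest
        t += dt                             # velocity resets at the collision
    return z / t

w = drift_velocity(200000)
print(w, w / E)   # drift velocity (m/s) and mobility (m^2/(V s))
```

Realistic codes replace the constant ν by energy-dependent elastic and inelastic cross sections and tally diffusion and reaction rates from the same trajectories.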
NASA Astrophysics Data System (ADS)
Khledi, Navid; Arbabi, Azim; Sardari, Dariush; Mohammadi, Mohammad; Ameri, Ahmad
2015-02-01
Depending on the location and depth of the tumor, electron or photon beams might be used for treatment. Electron beams have some advantages over photon beams for the treatment of shallow tumors, sparing the normal tissues beyond the tumor. On the other hand, photon beams are used for treating deep targets. Both of these beams have some limitations, for example the dependence of the penumbra on depth, and the lack of lateral equilibrium for small electron beam fields. First, we simulated the conventional head configuration of the Varian 2300 for 16 MeV electrons, and the results were approved by benchmarking the Percent Depth Dose (PDD) and profile of the simulation against measurement. In the next step, a perforated lead (Pb) sheet of 1 mm thickness was placed at the top of the applicator holder tray. This layer produces bremsstrahlung x-rays, while a portion of the electrons passes through the holes; as a result, we have a simultaneous mixed electron and photon beam. To make the irradiation field uniform, a layer of steel was placed after the Pb layer. The simulation was performed for 10 × 10 and 4 × 4 cm² field sizes. This study showed the advantages of mixing the electron and photon beams: reduction of the pure electron penumbra's dependence on depth, especially for small fields, and also a decrease of the dramatic changes of the PDD curve with irradiation field size.
Superposition dose calculation incorporating Monte Carlo generated electron track kernels.
Keall, P J; Hoban, P W
1996-04-01
The superposition/convolution method and the transport of pregenerated Monte Carlo electron track data have been combined into the Super-Monte Carlo (SMC) method, an accurate 3-D x-ray dose calculation algorithm. The primary dose (dose due to electrons ejected by primary photons) is calculated by transporting pregenerated (in water) Monte Carlo electron tracks from each primary photon interaction site, weighted by the terma for that site. The length of each electron step is scaled by the inverse of the density of the medium at the beginning of the step. Because the density scaling of the electron tracks is performed for each individual transport step, the limitations of the macroscopic scaling of kernels (in the superposition algorithm) are overcome. This time-consuming step-by-step transport is only performed for the primary dose calculation, where current superposition methods are most lacking. The scattered dose (dose due to electrons set in motion by scattered photons) is calculated by superposition. In both a water-lung-water phantom and a two lung-block phantom, SMC dose distributions are more consistent with Monte Carlo generated dose distributions than are superposition dose distributions, especially for small fields and high energies: for an 18-MV, 5 × 5 cm² beam, the central-axis dose discrepancy from Monte Carlo is reduced from 4.5% using superposition to 1.5% using SMC. The computation time for this technique is approximately 2 h (depending on the simulation history), 20 times slower than superposition but 15 times faster than a full Monte Carlo simulation (on our platform). PMID:9157258
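The per-step inverse-density scaling described above can be sketched as follows. The track data and helper names are hypothetical, for illustration only, not the authors' SMC implementation:

```python
# Sketch of step-by-step density scaling: each pregenerated in-water
# electron step is shortened or stretched by the inverse of the local
# density at the start of the step.
WATER_DENSITY = 1.0   # g/cm^3, medium in which the tracks were generated

def transport_track(steps, start, density_of):
    """steps: list of (dx, dy, dz) displacements generated in water.
    density_of(x, y, z) -> local density in g/cm^3.
    Returns the endpoint of each density-scaled step."""
    x, y, z = start
    points = []
    for dx, dy, dz in steps:
        scale = WATER_DENSITY / density_of(x, y, z)  # inverse-density scaling
        x, y, z = x + dx * scale, y + dy * scale, z + dz * scale
        points.append((x, y, z))
    return points

# Toy example: straight 0.1 cm steps entering lung-like material
# (0.25 g/cm^3) at z = 0; steps starting beyond the interface stretch 4x.
track = [(0.0, 0.0, 0.1)] * 4
pts = transport_track(track, (0.0, 0.0, -0.25),
                      lambda x, y, z: 0.25 if z >= 0.0 else 1.0)
print(pts)
```

Because the density is looked up per step rather than once per kernel, the same pregenerated track bends its range correctly across a water/lung interface, which is exactly the limitation of macroscopic kernel scaling the abstract points out.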
Characterization of photonic bandgap fiber for high-power narrow-linewidth optical transport
NASA Astrophysics Data System (ADS)
Bennett, Charlotte R.; Jones, David C.; Smith, Mark A.; Scott, Andrew M.; Lyngsoe, Jens K.; Jakobsen, Christian
2014-03-01
An investigation of the use of hollow-core photonic bandgap (PBG) fiber to transport high-power narrow-linewidth light is performed. In conventional fiber the main limitation in this case is stimulated Brillouin scattering (SBS), but in PBG fiber the overlap between the optical intensity and the silica that hosts the acoustic phonons is reduced. In this paper we show that this should increase the SBS threshold to the multi-kW level even when including the non-linear interaction with the air in the core. A full model and experimental measurement of the SBS spectra are presented, including back-scatter into optical modes other than the fundamental, and some of the issues of coupling high power into hollow-core fibers are discussed.
NASA Astrophysics Data System (ADS)
Wu, K. C.; Seefeldt, K. F.; Solomon, M. J.; Halloran, J. W.
2005-07-01
A general, quantitative relationship between the photon-transport mean free path (l*) and resin sensitivity (DP) in multiple-scattering alumina/monomer suspensions formulated for ceramic stereolithography is presented and experimentally demonstrated. A Mie-theory-based computational method with structure factor contributions to determine l* was developed. Planar-source diffuse transmittance experiments were performed on monodisperse and bimodal polystyrene/water and alumina/monomer systems to validate this computational tool. The experimental data support the application of this l* calculation method to concentrated suspensions composed of nonaggregating particles of moderately aspherical shape and log-normal size distribution. The values of DP are shown to be approximately five times that of l* in the tested ceramic stereolithography suspensions.
NASA Astrophysics Data System (ADS)
Peng, Kuan; Gao, Xinbo; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; He, Xiaowei; Wang, Xiaorei; Liang, Jimin; Tian, Jie
2011-07-01
As a widely used numerical solution for the radiation transport equation (RTE), the discrete-ordinates method can predict the propagation of photons through biological tissues more accurately than the diffusion equation. The discrete ordinates reduce the RTE to a series of differential equations that can be solved by source iteration (SI). However, the tremendous time consumption of SI, which is partly caused by the expensive computation of each SI step, limits its applications. In this paper, we present a graphics processing unit (GPU) parallel accelerated SI method for discrete ordinates. Utilizing the calculation independence on the levels of the discrete ordinate equation and spatial element, the proposed method reduces the time cost of each SI step by parallel calculation. The photon reflection at the boundary was calculated based on the results of the last SI step to ensure the calculation independence on the level of the discrete ordinate equation. An element sweeping strategy was proposed to detect the calculation independence on the level of the spatial element. A GPU parallel frame called the compute unified device architecture was employed to carry out the parallel computation. The simulation experiments, which were carried out with a cylindrical phantom and numerical mouse, indicated that the time cost of each SI step can be reduced by up to a factor of 228 by the proposed method with a GTX 260 graphics card.
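The structure that the GPU version parallelizes, one sweep per ordinate per SI step against a scattering source frozen from the previous iterate, can be shown with a minimal serial 1-D sketch. The diamond-difference spatial scheme, S2 quadrature, and vacuum boundaries are assumptions for illustration, not the paper's setup:

```python
import numpy as np

def source_iteration(nx=50, dx=0.1, sigma_t=1.0, sigma_s=0.5, q=1.0,
                     tol=1e-8, max_iter=500):
    """Minimal 1-D S2 source iteration. Each SI step freezes the
    scattering source from the previous flux, then sweeps every
    ordinate over every cell -- the two levels of independence that
    a GPU implementation can exploit."""
    mus = np.array([-0.5773503, 0.5773503])   # S2 ordinates
    wts = np.array([1.0, 1.0])                # quadrature weights
    phi = np.zeros(nx)
    for _ in range(max_iter):
        src = 0.5 * (sigma_s * phi + q)       # frozen isotropic source
        phi_new = np.zeros(nx)
        for mu, w in zip(mus, wts):
            psi_in = 0.0                      # vacuum inflow
            cells = range(nx) if mu > 0 else range(nx - 1, -1, -1)
            for i in cells:
                a = abs(mu) / dx
                # diamond-difference cell-average angular flux
                psi_avg = (src[i] + 2.0 * a * psi_in) / (sigma_t + 2.0 * a)
                psi_in = 2.0 * psi_avg - psi_in   # outflow feeds next cell
                phi_new[i] += w * psi_avg
        if np.max(np.abs(phi_new - phi)) < tol:
            return phi_new
        phi = phi_new
    return phi
```

The inner loops over `mus` and `cells` are exactly the work distributed across GPU threads in the paper; the convergence test is the per-step synchronization point.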
Francis F. Badavi; Steve R. Blattnig; William Atwell; John E. Nealy; Ryan B. Norman
2011-01-01
A Langley research center (LaRC) developed deterministic suite of radiation transport codes describing the propagation of electron, photon, proton and heavy ion in condensed media is used to simulate the exposure from the spectral distribution of the aforementioned particles in the Jovian radiation environment. Based on the measurements by the Galileo probe (1995–2003) heavy ion counter (HIC), the choice of
Liu Yongchun; Xiao Yunfeng; Li Beibei; Jiang Xuefeng; Li Yan; Gong Qihuang [State Key Lab for Mesoscopic Physics, School of Physics, Peking University, Beijing 100871 (China)
2011-07-15
We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in photon transport. On the one hand, this study provides insight into future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefiting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal's azimuthal position.
Status of Monte Carlo at Los Alamos
Thompson, W.L.; Cashwell, E.D.
1980-01-01
At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month, accounting for nearly 200 hours of CDC-7600 time.
The TORT three-dimensional discrete ordinates neutron/photon transport code (TORT version 3)
Rhoades, W.A.; Simpson, D.B.
1997-10-01
TORT calculates the flux or fluence of neutrons and/or photons throughout three-dimensional systems due to particles incident upon the system's external boundaries, due to fixed internal sources, or due to sources generated by interaction with the system materials. The transport process is represented by the Boltzmann transport equation. The method of discrete ordinates is used to treat the directional variable, and a multigroup formulation treats the energy dependence. Anisotropic scattering is treated using a Legendre expansion. Various methods are used to treat spatial dependence, including nodal and characteristic procedures that have been especially adapted to resist numerical distortion. A method of body overlay assists in material zone specification, or the specification can be generated by an external code supplied by the user. Several special features are designed to concentrate machine resources where they are most needed. The directional quadrature and Legendre expansion can vary with energy group. A discontinuous mesh capability has been shown to reduce the size of large problems by a factor of roughly three in some cases. The emphasis in this code is a robust, adaptable application of time-tested methods, together with a few well-tested extensions.
Rivard, Mark J.; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Department of Radiation Oncology, ERESA, Hospital General Universitario, E-46014 Valencia (Spain); Department of Radiation Oncology, La Fe University Hospital, E-46009 Valencia (Spain); Department of Atomic, Molecular, and Nuclear Physics, University of Valencia, E-46100 Burjassot, Spain and IFIC, CSIC-University of Valencia, E-46100 Burjassot (Spain)
2010-02-15
Purpose: For a given radionuclide, there are several photon spectrum choices available to dosimetry investigators for simulating the radiation emissions from brachytherapy sources. This study examines the dosimetric influence of selecting the spectra for ¹⁹²Ir, ¹²⁵I, and ¹⁰³Pd on the final estimations of kerma and dose. Methods: For ¹⁹²Ir, ¹²⁵I, and ¹⁰³Pd, the authors considered from two to five published spectra. Spherical sources approximating common brachytherapy sources were assessed. Kerma and dose results from GEANT4, MCNP5, and PENELOPE-2008 were compared for water and air. The dosimetric influence of ¹⁹²Ir, ¹²⁵I, and ¹⁰³Pd spectral choice was determined. Results: For the spectra considered, there were no statistically significant differences between kerma or dose results based on Monte Carlo code choice when using the same spectrum. Water-kerma differences of about 2%, 2%, and 0.7% were observed due to spectrum choice for ¹⁹²Ir, ¹²⁵I, and ¹⁰³Pd, respectively (independent of radial distance), when accounting for photon yield per Bq. Similar differences were observed for air-kerma rate. However, their ratio (as used in the dose-rate constant) did not significantly change when the various photon spectra were selected because the differences compensated each other when dividing dose rate by air-kerma strength. Conclusions: Given the standardization of radionuclide data available from the National Nuclear Data Center (NNDC) and the rigorous infrastructure for performing and maintaining the data set evaluations, NNDC spectra are suggested for brachytherapy simulations in medical physics applications.
NASA Astrophysics Data System (ADS)
Mølmer, Klaus; Bay, Søren
1999-01-01
Quang and John have, in a recent paper [Phys. Rev. A 56, 4273 (1997)], proposed a dressed-state Monte Carlo wave-function (MCWF) approach to a laser-driven two-level atom coupled to a structured radiation reservoir. In this Comment, we argue that this approach is at variance with the underlying physical idea of the MCWF technique, that it is at variance with basic formal requirements for the MCWF technique to apply, and that it produces spurious and unphysical quantitative results for wide ranges of parameters.
Preston, M F; Annand, J R M; Fissum, K G; Hansen, K; Isaksson, L; Jebali, R; Lundin, M
2013-01-01
Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrates a thorough understanding of the behavior of the detector system.
NASA Astrophysics Data System (ADS)
Preston, M. F.; Myers, L. S.; Annand, J. R. M.; Fissum, K. G.; Hansen, K.; Isaksson, L.; Jebali, R.; Lundin, M.
2014-04-01
Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrates a thorough understanding of the behavior of the detector system.
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure
Henry Wang; Yunzhi Ma; Guillem Pratx; Lei Xing
2011-01-01
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy.
Marc H. Lauterbach; Jörg Lehmann; Ulf F. Rosenow
1999-01-01
The Monte Carlo electron transport code EGS4 was benchmark-tested against early experimental results obtained by Freyberger. These consist of absolute depth-ionization and depth-dose curves measured with a pencil beam of sharply defined energy, nominally 4, 10, and 20 MeV electrons extracted from a Betatron. The Freyberger precision measurements were made with a wide plane-parallel ionization chamber
Chin, P.W. [Department of Medical Physics, Velindre Cancer Centre, Velindre Road, Cardiff CF14 2TL (United Kingdom)]. E-mail: mary.chin@physics.org
2005-10-15
This project developed a solution for verifying external photon beam radiotherapy. The solution is based on a calibration chain for deriving portal dose maps from acquired portal images, and a calculation framework for predicting portal dose maps. Quantitative comparison between acquired and predicted portal dose maps accomplishes both geometric (patient positioning with respect to the beam) and dosimetric (two-dimensional fluence distribution of the beam) verifications. A disagreement would indicate that beam delivery had not been according to plan. The solution addresses the clinical need for verifying radiotherapy both pretreatment (without the patient in the beam) and on treatment (with the patient in the beam). Medical linear accelerators mounted with electronic portal imaging devices (EPIDs) were used to acquire portal images. Two types of EPIDs were investigated: the amorphous silicon (a-Si) and the scanning liquid ion chamber (SLIC). The EGSnrc family of Monte Carlo codes was used to predict portal dose maps by computer simulation of radiation transport in the beam-phantom-EPID configuration. Monte Carlo simulations have been implemented on several levels of high throughput computing (HTC), including the grid, to reduce computation time. The solution has been tested across the entire clinical range of gantry angle, beam size (5 cm × 5 cm to 20 cm × 20 cm), and beam-patient and patient-EPID separations (4 to 38 cm). In these tests of known beam-phantom-EPID configurations, agreement between acquired and predicted portal dose profiles was consistently within 2% of the central axis value. This Monte Carlo portal dosimetry solution therefore achieved combined versatility, accuracy, and speed not readily achievable by other techniques.
A full band Monte-Carlo study of carrier transport properties of InAlN lattice matched to GaN
NASA Astrophysics Data System (ADS)
Shishehchi, Sara; Bertazzi, Francesco; Bellotti, Enrico
2013-03-01
The growing importance of In0.18Al0.82N stems from the fact that it can be grown lattice-matched to GaN and from its potential applications in a large number of electronic and optoelectronic devices. In this work we employed a full band Monte-Carlo approach to study the carrier transport properties of this alloy. We have computed the temperature- and doping-dependent electron and hole mobilities and drift velocities. Furthermore, for both sets of transport coefficients we have developed a number of analytical expressions that can be easily incorporated in drift-diffusion type simulation codes.
Kinetic Monte Carlo model of charge transport in hematite (α-Fe₂O₃)
Kerisit, Sebastien; Rosso, Kevin M. [Chemical and Materials Sciences Division, Pacific Northwest National Laboratory, Richland, Washington 99354 (United States)
2007-09-28
The mobility of electrons injected into iron oxide minerals via abiotic and biotic electron transfer processes is one of the key factors that control the reductive dissolution of such minerals. Building upon our previous work on the computational modeling of elementary electron transfer reactions in iron oxide minerals using ab initio electronic structure calculations and parametrized molecular dynamics simulations, we have developed and implemented a kinetic Monte Carlo model of charge transport in hematite that integrates previous findings. The model aims to simulate the interplay between electron transfer processes for extended periods of time in lattices of increasing complexity. The electron transfer reactions considered here involve the II/III valence interchange between nearest-neighbor iron atoms via a small polaron hopping mechanism. The temperature dependence and anisotropic behavior of the electrical conductivity as predicted by our model are in good agreement with experimental data on hematite single crystals. In addition, we characterize the effect of electron polaron concentration and that of a range of defects on the electron mobility. Interaction potentials between electron polarons and fixed defects (iron substitution by divalent, tetravalent, and isovalent ions and iron and oxygen vacancies) are determined from atomistic simulations, based on the same model used to derive the electron transfer parameters, and show little deviation from the Coulombic interaction energy. Integration of the interaction potentials in the kinetic Monte Carlo simulations allows the electron polaron diffusion coefficient and density and residence time around defect sites to be determined as a function of polaron concentration in the presence of repulsive and attractive defects. 
The decrease in diffusion coefficient with polaron concentration follows a logarithmic function up to the highest concentration considered, i.e., approximately 2% of iron(III) sites, whereas the presence of repulsive defects has a linear effect on the electron polaron diffusion. Attractive defects are found to significantly affect electron polaron diffusion at low polaron-to-defect ratios due to trapping on nanosecond to microsecond time scales. This work indicates that electrons can diffuse away from the initial site of interfacial electron transfer at a rate that is consistent with measured electrical conductivities, but that the presence of certain kinds of defects will severely limit the mobility of donated electrons.
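The core move of such a kinetic Monte Carlo model, drawing a waiting time from the total hop rate and then choosing a hop in proportion to its rate, can be sketched for a single polaron on a 1-D lattice. The `rates_fn` interface below is a hypothetical stand-in for the ab initio-parametrized transfer rates of the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def kmc_hop_1d(rates_fn, n_steps, start=0):
    """Residence-time (BKL) kinetic Monte Carlo for one small polaron
    hopping between nearest-neighbour sites on a 1-D lattice.
    rates_fn(site) -> (left_rate, right_rate); site-dependent rates
    are how defect interaction potentials would enter."""
    site, t = start, 0.0
    for _ in range(n_steps):
        k_left, k_right = rates_fn(site)
        k_tot = k_left + k_right
        t += rng.exponential(1.0 / k_tot)                 # waiting time
        site += 1 if rng.random() < k_right / k_tot else -1
    return site, t
```

Running an ensemble of such walks and forming ⟨x²⟩/2t gives the diffusion coefficient; an attractive defect appears as a site whose outgoing rates are strongly suppressed, producing the nanosecond-to-microsecond trapping described above.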
NASA Astrophysics Data System (ADS)
Fujii, Hiroyuki; Okawa, Shinpei; Nadamoto, Ken; Okada, Eiji; Yamada, Yukio; Hoshi, Yoko; Watanabe, Masao
2015-03-01
Accurate modeling and efficient calculation of photon migration in biological tissues is required for determining the optical properties of living tissues by in vivo experiments. This study develops a calculation scheme of photon migration for determination of the optical properties of the rat cerebral cortex (ca. 0.2 cm thick) based on the three-dimensional time-dependent radiative transport equation assuming a homogeneous object. It is shown that the time-resolved profiles calculated by the developed scheme agree with the profiles measured by in vivo experiments using near infrared light. Also, an efficient calculation method is tested using the delta-Eddington approximation of the scattering phase function.
Transport calculations for a 14.8 MeV neutron beam in a water phantom
NASA Astrophysics Data System (ADS)
Goetsch, S. J.
A coupled neutron/photon Monte Carlo radiation transport code (MORSE-CG) was used to calculate neutron and photon doses in a water phantom irradiated by 14.8 MeV neutrons from a gas target neutron source. The source-collimator-phantom geometry was carefully simulated. Results of calculations utilizing two different statistical estimators (next-collision and track-length) are presented.
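The two estimator types compared above can be contrasted in a deliberately simplified setting: a monoenergetic pencil beam on a purely absorbing slab (the real MORSE-CG problem also scatters, and its next-collision estimator is the expected-value form). Both estimators below target the same volume-averaged scalar flux:

```python
import random

def slab_flux(sigma_t=1.0, thickness=2.0, n=20000, seed=7):
    """Estimate the volume-averaged scalar flux in a purely absorbing
    slab (unit cross-sectional area) two ways:
      track-length estimator: score path length travelled in the slab;
      collision-type estimator: score 1/sigma_t at each collision."""
    rng = random.Random(seed)
    track = coll = 0.0
    for _ in range(n):
        d = rng.expovariate(sigma_t)      # distance to first collision
        if d < thickness:
            track += d                    # path inside the slab
            coll += 1.0 / sigma_t         # score at the collision site
        else:
            track += thickness            # streamed through uncollided
    volume = thickness * 1.0
    return track / (n * volume), coll / (n * volume)
```

Both converge to the analytic value (1 − e^(−σ_t T))/(σ_t T), but with different variances, which is why comparing the two estimators on the same problem is informative.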
NASA Astrophysics Data System (ADS)
Lodwick, Camille J.
This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescent (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001).
Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate lead content of a human leg up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.
NASA Astrophysics Data System (ADS)
Miller, G. L.; Lu, D.; Ye, M.; Curtis, G. P.; Mendes, B. S.; Draper, D.
2010-12-01
Parametric uncertainty in groundwater modeling is commonly assessed using the first-order-second-moment method, which yields the linear confidence/prediction intervals. More advanced techniques are able to produce the nonlinear confidence/prediction intervals that are more accurate than the linear intervals for nonlinear models. However, both methods are restricted to certain assumptions such as normality in model parameters. We developed a Markov Chain Monte Carlo (MCMC) method to directly investigate the parametric distributions and confidence/prediction intervals. The MCMC results are used to evaluate accuracy of the linear and nonlinear confidence/prediction intervals. The MCMC method is applied to nonlinear surface complexation models developed by Kohler et al. (1996) to simulate reactive transport of uranium (VI). The breakthrough data of Kohler et al. (1996) obtained from a series of column experiments are used as the basis of the investigation. The calibrated parameters of the models are the equilibrium constants of the surface complexation reactions and fractions of functional groups. The Morris method sensitivity analysis shows that all of the parameters exhibit highly nonlinear effects on the simulation. The MCMC method is combined with a traditional optimization method to improve computational efficiency. The parameters of the surface complexation models are first calibrated using a global optimization technique, multi-start quasi-Newton BFGS, which employs an approximation to the Hessian. The parameter correlation is measured by the covariance matrix computed via the Fisher information matrix. Parameter ranges are necessary to improve convergence of the MCMC simulation, even when the adaptive Metropolis method is used. The MCMC results indicate that the parameters do not necessarily follow a normal distribution and that the nonlinear intervals are more accurate than the linear intervals for the nonlinear surface complexation models.
In comparison with the linear and nonlinear prediction intervals, the MCMC prediction intervals are more robust in simulating the breakthrough curves that were not used for the parameter calibration and estimation of parameter distributions.
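The essential contrast drawn above, reading intervals off MCMC sample quantiles rather than a linearized normal approximation, can be shown with a plain Metropolis sampler on a skewed toy posterior (the study's adaptive step tuning and multi-parameter setup are omitted; the Exp(1) target is purely illustrative):

```python
import numpy as np

def metropolis(log_post, x0, step, n, seed=0):
    """Plain Metropolis sampler with a Gaussian random-walk proposal.
    Credible intervals come straight from the sample quantiles, with
    no normality assumption on the parameter."""
    rng = np.random.default_rng(seed)
    x, lp = float(x0), log_post(float(x0))
    out = np.empty(n)
    for i in range(n):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Skewed posterior (Exp(1)): a linear +/- 2-sigma interval would
# wrongly extend below zero, but the quantile interval cannot.
samples = metropolis(lambda x: -x if x > 0 else -np.inf, 1.0, 1.0, 20000)
lo, hi = np.quantile(samples[5000:], [0.025, 0.975])
```

The first 5000 samples are discarded as burn-in; `lo` stays strictly positive because the posterior assigns zero probability below zero, which is exactly the behavior a linearized interval misses.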
NASA Astrophysics Data System (ADS)
Filinov, V. S.; Ivanov, Yu. B.; Fortov, V. E.; Bonitz, M.; Levashov, P. R.
2013-03-01
Based on the quasiparticle model of the quark-gluon plasma (QGP), a color quantum path-integral Monte-Carlo (PIMC) method for the calculation of thermodynamic properties and, closely related to the latter, a Wigner dynamics method for calculation of transport properties of the QGP are formulated. The QGP partition function is presented in the form of a color path integral with a new relativistic measure instead of the Gaussian one traditionally used in the Feynman-Wiener path integral. A procedure of sampling color variables according to the SU(3) group Haar measure is developed for integration over the color variable. It is shown that the PIMC method is able to reproduce the lattice QCD equation of state at zero baryon chemical potential at realistic model parameters (i.e., quasiparticle masses and coupling constant) and also yields valuable insight into the internal structure of the QGP. Our results indicate that the QGP reveals quantum liquidlike (rather than gaslike) properties up to the highest considered temperature of 525 MeV. The pair distribution functions clearly reflect the existence of gluon-gluon bound states, i.e., glueballs, at temperatures just above the phase transition, while mesonlike qq̄ bound states are not found. The calculated self-diffusion coefficient agrees well with some estimates of the heavy-quark diffusion constant available from recent lattice data and also with an analysis of heavy-quark quenching in experiments on ultrarelativistic heavy-ion collisions, but appreciably exceeds other estimates. The lattice and heavy-quark-quenching results on the heavy-quark diffusion are still rather diverse. The obtained results for the shear viscosity are in the range of those deduced from an analysis of the experimental elliptic flow in ultrarelativistic heavy-ion collisions, i.e., in terms of the viscosity-to-entropy ratio, 1/(4π) ≲ η/S < 2.5/(4π), in the temperature range from 170 to 440 MeV.
Liu, T.; Ding, A.; Ji, W.; Xu, X. G. [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Inst., Troy, NY 12180 (United States); Carothers, C. D. [Dept. of Computer Science, Rensselaer Polytechnic Inst. RPI (United States); Brown, F. B. [Los Alamos National Laboratory (LANL) (United States)
2012-07-01
The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPUs), one of the latest parallel computing techniques under development. The method of porting a regular transport code to GPU is usually very straightforward due to the 'embarrassingly parallel' nature of MC code. However, the situation becomes different for eigenvalue calculation in that it will be performed on a generation-by-generation basis and the thread coordination should be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculation under simple geometries on a multi-GPU system. The specifics of algorithm design, including thread organization and memory management, were described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating point format was used throughout the calculation. The result showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of approximately 2 on a dual GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed. (authors)
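The generation-by-generation structure that forces explicit thread coordination can be illustrated with a serial toy: a k-eigenvalue estimate for an infinite homogeneous medium (no geometry or leakage, so the analog answer is simply νΣf/Σa). This is a sketch of the iteration pattern only, not the paper's code:

```python
import random

def k_infinite_medium(nu_sigma_f=0.6, sigma_a=0.5, sigma_t=1.0,
                      n_per_gen=5000, n_gen=30, n_inactive=10, seed=3):
    """Generation-by-generation MC k-eigenvalue estimate for an
    infinite homogeneous medium. The end-of-generation tally is the
    synchronization barrier that a GPU implementation must handle."""
    rng = random.Random(seed)
    k_hist = []
    for _ in range(n_gen):
        fission = 0
        for _ in range(n_per_gen):
            while True:               # follow one neutron to absorption
                if rng.random() < sigma_a / sigma_t:
                    m = nu_sigma_f / sigma_a   # expected fission yield
                    fission += int(m) + (rng.random() < m - int(m))
                    break
                # otherwise scattered; direction/energy irrelevant here
        k_hist.append(fission / n_per_gen)     # per-generation k
    active = k_hist[n_inactive:]               # discard inactive gens
    return sum(active) / len(active)
```

Each neutron history inside a generation is independent (the embarrassingly parallel part); the per-generation tally and restart is where the GPU threads must be coordinated.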
NASA Astrophysics Data System (ADS)
Ueki, Taro; Larsen, Edward W.
1998-09-01
We show that Monte Carlo simulations of neutral particle transport in planar-geometry anisotropically scattering media, using the exponential transform with angular biasing as a variance reduction device, are governed by a new "Boltzmann Monte Carlo" (BMC) equation, which includes particle weight as an extra independent variable. The weight moments of the solution of the BMC equation determine the moments of the score and the mean number of collisions per history in the nonanalog Monte Carlo simulations. Therefore, the solution of the BMC equation predicts the variance of the score and the figure of merit in the simulation. Also, by (i) using an angular biasing function that is closely related to the "asymptotic" solution of the linear Boltzmann equation and (ii) requiring isotropic weight changes at collisions, we derive a new angular biasing scheme. Using the BMC equation, we propose a universal "safe" upper limit of the transform parameter, valid for any type of exponential transform. In numerical calculations, we demonstrate that the behavior of the Monte Carlo simulations and the performance predicted by deterministically solving the BMC equation agree well, and that the new angular biasing scheme is always advantageous.
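The weight mechanics of the exponential transform, which the BMC equation analyzes, can be sketched for a single free flight. The specific parameterization σ*(μ) = σ_t(1 − pμ) below is the standard path-stretching form, used here as an illustrative assumption rather than the paper's exact biasing:

```python
import math, random

def stretched_flight(sigma_t, mu, p, rng):
    """Exponential transform for one free flight: sample the distance
    from the biased cross-section sigma_t*(1 - p*mu) (paths stretched
    when moving in the preferred direction, mu > 0) and return the
    weight factor that keeps the estimate unbiased. Requires p*mu < 1;
    the BMC analysis above is what bounds the 'safe' choice of p."""
    sigma_b = sigma_t * (1.0 - p * mu)            # biased cross-section
    d = -math.log(1.0 - rng.random()) / sigma_b   # biased distance
    # weight factor = true pdf / biased pdf at the sampled distance
    w = (sigma_t / sigma_b) * math.exp((sigma_b - sigma_t) * d)
    return d, w
```

Because the weight is the exact likelihood ratio of the two exponential densities, its mean over many flights is 1; the spread of `w` about 1, which grows with p, is what the safe upper limit on the transform parameter controls.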
NASA Astrophysics Data System (ADS)
Gao, Li; Zhang, Yihui; Malyarchuk, Viktor; Jia, Lin; Jang, Kyung-In; Chad Webb, R.; Fu, Haoran; Shi, Yan; Zhou, Guoyan; Shi, Luke; Shah, Deesha; Huang, Xian; Xu, Baoxing; Yu, Cunjiang; Huang, Yonggang; Rogers, John A.
2014-09-01
Characterization of temperature and thermal transport properties of the skin can yield important information of relevance to both clinical medicine and basic research in skin physiology. Here we introduce an ultrathin, compliant skin-like, or ‘epidermal’, photonic device that combines colorimetric temperature indicators with wireless stretchable electronics for thermal measurements when softly laminated on the skin surface. The sensors exploit thermochromic liquid crystals patterned into large-scale, pixelated arrays on thin elastomeric substrates; the electronics provide means for controlled, local heating by radio frequency signals. Algorithms for extracting patterns of colour recorded from these devices with a digital camera and computational tools for relating the results to underlying thermal processes near the skin surface lend quantitative value to the resulting data. Application examples include non-invasive spatial mapping of skin temperature with milli-Kelvin precision (±50 mK) and sub-millimetre spatial resolution. Demonstrations in reactive hyperaemia assessments of blood flow and hydration analysis establish relevance to cardiovascular health and skin care, respectively.
Kim, Sung Jin; Kim, Sung Kyu
2015-01-01
Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms provided by our planning system, pencil beam (PB), collapsed cone (CC), and Monte Carlo (MC), were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated using the PB, CC, and MC algorithms. Planning target volume and organs at risk (OAR) delineation was performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3 to 0.5 cm computed tomography slices taken under normal respiration conditions. Four intensity-modulated radiation therapy plans were calculated according to each algorithm for each patient. The plans were conducted on the Oncentra MasterPlan and CMS Monaco treatment planning systems, for 6 MV. The plans were compared in terms of the dose distribution in target, OAR volumes, and...
Yu-Tai Li; J.-W. Shi; Ci-Ling Pan; C.-H. Chiu; W.-S. Liu; Nan-Wei Chen; C.-K. Sun; J.-I. Chyi
2007-01-01
We demonstrate a novel photonic transmitter, which is composed of a low-temperature-grown GaAs (LTG-GaAs)-based separated-transport-recombination photodiode and a micromachined slot antenna. Under femtosecond optical pulse illumination, this device radiates strong electrical pulses (4.5-mW peak power) without the use of a Si-lens. It can be observed in the Fourier transform infrared spectrometer spectrum of radiated pulses that a significant resonance, with
F. Capasso; T. P. Pearsall; K. K. Thornber; R. E. Nahory; M. A. Pollack; G. B. Bachelet; J. R. Chelikowsky
1982-01-01
Recent theoretical work by Shichijo, Hess, and Stillman on a Monte Carlo simulation of high-field transport and impact ionization in GaAs is examined. The failure of that calculation to reproduce the experimentally well-documented orientation dependence of impact ionization can be directly related to the use of a phonon scattering rate that is unrealistically high. It is shown that such high
NASA Astrophysics Data System (ADS)
Bartesaghi, G.; Gambarini, G.; Negri, A.; Carrara, M.; Burian, J.; Viererbl, L.
2010-04-01
Presently there are no standard protocols for dosimetry in neutron beams for boron neutron capture therapy (BNCT) treatments. Because of the high radiation intensity and the simultaneous presence of radiation components with different linear energy transfer, and therefore different biological weighting factors, treatment planning in epithermal neutron fields for BNCT is usually performed by means of Monte Carlo calculations; experimental measurements are required in order to characterize the neutron source and to validate the treatment planning. In this work Monte Carlo simulations in two kinds of tissue-equivalent phantoms are described. The neutron transport has been studied, together with the distribution of the boron dose; simulation results are compared with data taken with Fricke gel dosimeters in the form of layers, showing good agreement.
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
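A toy analog history loop touching several of the ingredients listed in this abstract (free-flight sampling, collision physics, tallies) might look like the following; the 1-D rod geometry and cross sections are illustrative, not taken from the lecture notes.

```python
import math
import random

def rod_transport(n_histories, length=2.0, sigma_t=1.0, p_absorb=0.3, seed=7):
    """Analog Monte Carlo transport in a 1-D rod [0, length].

    Particles start at x=0 moving right; flight distances are sampled from
    an exponential with total cross section sigma_t; at each collision the
    particle is absorbed with probability p_absorb, otherwise it scatters
    isotropically (direction +1 or -1 with equal probability).
    """
    rng = random.Random(seed)
    tally = {"leak_left": 0, "leak_right": 0, "absorbed": 0}
    for _ in range(n_histories):
        x, mu = 0.0, +1
        while True:
            x += mu * (-math.log(rng.random()) / sigma_t)   # free flight
            if x < 0.0:
                tally["leak_left"] += 1
                break
            if x > length:
                tally["leak_right"] += 1
                break
            if rng.random() < p_absorb:                     # collision physics
                tally["absorbed"] += 1
                break
            mu = +1 if rng.random() < 0.5 else -1           # isotropic in 1-D

    return tally

counts = rod_transport(50_000)
```

Every history ends in exactly one tally bin, so the three counts partition the histories; production codes such as RACER add energy groups, 3-D combinatorial geometry, and variance reduction on top of this same skeleton.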
NASA Astrophysics Data System (ADS)
Sundaresan, Sasi; Jayasekera, Thushari; Ahmed, Shaikh
2014-03-01
Monte Carlo based statistical approach to solve Boltzmann Transport Equation (BTE) has become a norm to investigate heat transport in semiconductors at sub-micron regime, owing to its ability to characterize realistically sized device geometries qualitatively. One weakness of this technique is that the approach predominantly uses empirically fitted phonon dispersion relation as input to determine the properties of phonons and predict the thermal conductivity for a specified material geometry. The empirically fitted dispersion relations assume harmonic approximation, thereby failing to account for thermal expansion, effects of strain on spring stiffness, and accurate phonon-phonon interactions. To account for the anharmonic contributions in the calculation of thermal conductivity, in this work, we employ a coupled molecular mechanics-Monte Carlo (MM-MC) approach. The atomistically-resolved non-deterministic approach adopted in this work is found to produce satisfactory results on heat transport and thermal conductivity in both ballistic and diffusive regimes for III-N nanostructures. Supported by the U.S. National Science Foundation Grant No. CCF-1218839.
Koh, Wonshill
2013-02-22
the propagation of light in turbid media with scattering particles because of its effectiveness and accuracy in approaching photon transport in turbid media. The existing Monte Carlo model developed at the Optical Imaging Lab at Texas A&M University has been...
Modelling plastic scintillator response to gamma rays using light transport incorporated FLUKA code.
Ranjbar Kohan, M; Etaati, G R; Ghal-Eh, N; Safari, M J; Afarideh, H; Asadi, E
2012-05-01
The response function of NE102 plastic scintillator to gamma rays has been simulated using a joint FLUKA+PHOTRACK Monte Carlo code. The multi-purpose particle transport code, FLUKA, has been responsible for gamma transport whilst the light transport code, PHOTRACK, has simulated the transport of scintillation photons through scintillator and lightguide. The simulation results of plastic scintillator with/without light guides of different surface coverings have been successfully verified with experiments. PMID:22341953
Analysis of photon beam exit dose using photon point kernels
M. K. Woo
1994-01-01
The Monte Carlo method is used to analyse the dose fall-off at the exit surface of a megavoltage photon beam. The convolution/superposition method of dose calculation using Monte-Carlo-generated homogeneous photon kernels is shown to be in error for exit dose calculation. Instead, photon kernels that incorporate modelling of the exit surface were generated, also using Monte Carlo, to analyse the
Fang Yuan; Badal, Andreu; Allec, Nicholas; Karim, Karim S.; Badano, Aldo [Division of Imaging and Applied Mathematics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993-0002 (United States) and Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L3G1 (Canada)]
2012-01-15
Purpose: The authors describe a detailed Monte Carlo (MC) method for the coupled transport of ionizing particles and charge carriers in amorphous selenium (a-Se) semiconductor x-ray detectors, and model the effect of statistical variations on the detected signal. Methods: A detailed transport code was developed for modeling the signal formation process in semiconductor x-ray detectors. The charge transport routines include three-dimensional spatial and temporal models of electron-hole pair transport taking into account recombination and trapping. Many electron-hole pairs are created simultaneously in bursts from energy deposition events. Carrier transport processes include drift due to external field and Coulombic interactions, and diffusion due to Brownian motion. Results: Pulse-height spectra (PHS) have been simulated with different transport conditions for a range of monoenergetic incident x-ray energies and mammography radiation beam qualities. Two methods for calculating Swank factors from simulated PHS are shown, one using the entire PHS distribution, and the other using the photopeak. The latter ignores contributions from Compton scattering and K-fluorescence. Comparisons differ by approximately 2% between experimental measurements and simulations. Conclusions: The a-Se x-ray detector PHS responses simulated in this work include three-dimensional spatial and temporal transport of electron-hole pairs. These PHS were used to calculate the Swank factor and compare it with experimental measurements. The Swank factor was shown to be a function of x-ray energy and applied electric field. Trapping and recombination models are all shown to affect the Swank factor.
Wagner, John C. [ORNL]; Peplow, Douglas E. [ORNL]; Mosher, Scott W. [ORNL]
2014-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
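The weight-window mechanics that CADIS-type importance functions ultimately drive are simple to state: split heavy particles, roulette light ones, conserving expected weight. A minimal sketch follows (the window bounds and survival weight are hypothetical values, not parameters from MAVRIC or ADVANTG):

```python
import random

def apply_weight_window(w, w_low, w_high, w_survive, rng):
    """Return the list of post-window particle weights for one particle.

    Above the window: split into enough copies that each falls back inside.
    Below it: Russian roulette with survival probability w / w_survive,
    restoring survivors to weight w_survive.  Both branches preserve the
    expected total weight.
    """
    if w > w_high:
        n = int(w / w_high) + 1
        return [w / n] * n                       # split: same total weight
    if w < w_low:
        if rng.random() < w / w_survive:
            return [w_survive]                   # survives roulette, boosted
        return []                                # killed
    return [w]                                   # inside the window: untouched

rng = random.Random(3)
# Expected-weight conservation check for the roulette branch:
trials = 100_000
mean_total = sum(sum(apply_weight_window(0.01, 0.25, 1.0, 0.5, rng))
                 for _ in range(trials)) / trials
split_weights = apply_weight_window(4.0, 0.25, 1.0, 0.5, rng)
```

FW-CADIS's contribution is choosing space- and energy-dependent window bounds from a forward-weighted adjoint solution so that particle density, and hence statistical precision, is roughly uniform over the tally regions.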
Wang, Y Y; Peng, Xiang; Alharbi, M; Dutin, C Fourcade; Bradley, T D; Gérôme, F; Mielke, Michael; Booth, Timothy; Benabid, F
2012-08-01
We report on the recent design and fabrication of kagome-type hollow-core photonic crystal fibers for the purpose of high-power ultrashort pulse transportation. The fabricated seven-cell three-ring hypocycloid-shaped large core fiber exhibits the lowest attenuation to date among all kagome fibers, 40 dB/km, over a broadband transmission centered at 1500 nm. We show that the large core size, low attenuation, broadband transmission, single-mode guidance, and low dispersion make it an ideal host for high-power laser beam transportation. By filling the fiber with helium gas, a 74 μJ, 850 fs, 40 kHz repetition rate ultrashort pulse at 1550 nm has been faithfully delivered at the fiber output with little propagation pulse distortion. Compression of a 105 μJ laser pulse from 850 fs down to 300 fs has been achieved by operating the fiber in ambient air. PMID:22859102
Fang, Yuan, E-mail: yuan.fang@fda.hhs.gov [Division of Imaging and Applied Mathematics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993-0002 (United States) and Department of Electrical and Computer Engineering, The University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada)]; Karim, Karim S. [Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada)]; Badano, Aldo [Division of Imaging and Applied Mathematics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993-0002 (United States)]
2014-01-15
Purpose: The authors describe the modification to a previously developed Monte Carlo model of a semiconductor direct x-ray detector required for studying the effect of burst and recombination algorithms on detector performance. This work provides insight into the effect of different charge generation models for a-Se detectors on Swank noise and recombination fraction. Methods: The proposed burst and recombination models are implemented in the Monte Carlo simulation package, ARTEMIS, developed by Fang et al. [“Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se,” Med. Phys. 39(1), 308–319 (2012)]. The burst model generates a cloud of electron-hole pairs based on electron velocity, energy deposition, and material parameters distributed within a spherical uniform volume (SUV) or on a spherical surface area (SSA). A simple first-hit (FH) algorithm and a more detailed but computationally expensive nearest-neighbor (NN) recombination algorithm are also described and compared. Results: Simulated recombination fractions for a single electron-hole pair show good agreement with the Onsager model for a wide range of electric field, thermalization distance, and temperature. The recombination fraction and Swank noise exhibit a dependence on the burst model for generation of many electron-hole pairs from a single x ray. The Swank noise decreased for the SSA compared to the SUV model at 4 V/μm, while the recombination fraction decreased for the SSA compared to the SUV model at 30 V/μm. The NN and FH recombination results were comparable. Conclusions: Results obtained with the ARTEMIS Monte Carlo transport model incorporating drift and diffusion are validated with the Onsager model for a single electron-hole pair as a function of electric field, thermalization distance, and temperature. For x-ray interactions, the authors demonstrate that the choice of burst model can affect the simulation results for the generation of many electron-hole pairs. The SSA model is more sensitive to the effect of electric field than the SUV model, and the NN and FH recombination algorithms did not significantly affect the simulation results.
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
S. Hughes; C. Roy
2012-01-12
We present a semiconductor master equation technique to study the input/output characteristics of coherent photon transport in a semiconductor waveguide-cavity system containing a single quantum dot. We use this approach to investigate the effects of photon propagation and anharmonic cavity-QED for various dot-cavity interaction strengths, including weakly-coupled, intermediately-coupled, and strongly-coupled regimes. We demonstrate that for mean photon numbers much less than 0.1, the commonly adopted weak excitation (single quantum) approximation breaks down, even in the weak coupling regime. As a measure of the anharmonic multiphoton correlations, we compute the Fano factor and the correlation error associated with making a semiclassical approximation. We also explore the role of electron-acoustic-phonon scattering and find that phonon-mediated scattering plays a qualitatively important role on the light propagation characteristics. As an application of the theory, we simulate a conditional phase gate at a phonon bath temperature of 20 K in the strong coupling regime.
NASA Astrophysics Data System (ADS)
Hughes, S.; Roy, C.
2012-01-01
We present a semiconductor master equation technique to study the input/output characteristics of coherent photon transport in a semiconductor waveguide-cavity system containing a single quantum dot. We use this approach to investigate the effects of photon propagation and anharmonic cavity-QED for various dot-cavity interaction strengths, including weakly-coupled, intermediately-coupled, and strongly-coupled regimes. We demonstrate that for mean photon numbers much less than 0.1, the commonly adopted weak excitation (single quantum) approximation breaks down, even in the weak coupling regime. As a measure of the multiphoton correlations, we compute the Fano factor and the correlation error associated with making a semiclassical approximation. We also explore the role of electron-acoustic-phonon scattering and find that phonon-mediated scattering plays a qualitatively important role on the light propagation characteristics. As an application of the theory, we simulate a conditional phase gate at a phonon bath temperature of 20 K in the strong coupling regime.
Randolph Schwarz; Leland L. Carter; Alysia Schwarz
2005-08-23
Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant was used to enhance the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry.
A finite element-spherical harmonics radiation transport model for photon migration in turbid media
E. D. Aydin; C. R. E. de Oliveira; A. J. H. Goddard
2004-01-01
In this paper, we solve the steady-state form of the Boltzmann transport equation in homogeneous and heterogeneous tissue-like media with a finite element-spherical harmonics (FE-PN) radiation transport method. We compare FE-transport and diffusion solutions in terms of the ratio of the absorption coefficient to the reduced scattering coefficient, μa/μs′, and the anisotropy factor g. Two different scattering phase function formulas are employed to
Zandbergen, Sander R; de Dood, Michiel J A
2010-01-29
Photonic graphene is a two-dimensional photonic crystal structure that is analogous to graphene. We use 5 mm diameter Al2O3 rods placed on a triangular lattice with a lattice constant a=8 mm to create an isolated conical singularity in the photonic band structure at a microwave frequency of 17.6 GHz. At this frequency, the measured transmission of microwaves through a perfectly ordered structure enters a pseudodiffusive regime where the transmission scales inversely with the thickness L of the crystal (L/a ≳ 5). The transmission depends critically on the configuration of the edges: distinct oscillations with an amplitude comparable to the transmission are observed for structures terminated with zigzag edges, while these oscillations are absent for samples with a straight edge configuration. PMID:20366713
Sulkanen, M.E.; Gisler, G.R.
1989-01-01
This present study constitutes the first attempt to include, in a particle-in-cell code, the effects of radiation losses, photon production and transport, and charged-particle production by photons scattering in an intense background magnetic field. We discuss the physics and numerical issues that had to be addressed in including these effects in the ISIS code. Then we present a test simulation of the propagation of a pulse of high-energy photons across an intense magnetic field using this modified version of ISIS. This simulation demonstrates dissipation of the photon pulse with charged-particle production, emission of secondary synchrotron and curvature photons and the concomitant momentum dissipation of the charged particles, and subsequent production of lower-energy pairs. 5 refs.
Košťál, Michal; Cvachovec, František; Milčák, Ján; Mravec, Filip
2013-05-01
The paper is intended to show the effect of a biological shielding simulator on fast neutron and photon transport in its vicinity. The fast neutron and photon fluxes were measured by means of scintillation spectroscopy using a 45×45 mm and a 10×10 mm cylindrical stilbene detector. The neutron spectrum was measured in the range of 0.6-10 MeV and the photon spectrum in 0.2-9 MeV. The results of the experiment are compared with calculations. The calculations were performed with various nuclear data libraries. PMID:23434890
Burns, T.J.
1994-03-01
An Xwindow application capable of importing geometric information directly from two Computer Aided Design (CAD) based formats for use in radiation transport and shielding analyses is being developed at ORNL. The application permits the user to graphically view the geometric models imported from the two formats for verification and debugging. Previous models, specifically formatted for the radiation transport and shielding codes can also be imported. Required extensions to the existing combinatorial geometry analysis routines are discussed. Examples illustrating the various options and features which will be implemented in the application are presented. The use of the application as a visualization tool for the output of the radiation transport codes is also discussed.
Periyasamy, Vijitha; Pramanik, Manojit
2014-04-01
Monte Carlo modeling of light transport in multilayered tissue (MCML) is modified to incorporate objects of various shapes (sphere, ellipsoid, cylinder, or cuboid) with a refractive-index mismatched boundary. These geometries would be useful for modeling lymph nodes, tumors, blood vessels, capillaries, bones, the head, and other body parts. Mesh-based Monte Carlo (MMC) has also been used to compare the results from the MCML with embedded objects (MCML-EO). Our simulation assumes a realistic tissue model and can also handle the transmission/reflection at the object-tissue boundary due to the mismatch of the refractive index. Simulation of MCML-EO takes a few seconds, whereas MMC takes nearly an hour for the same geometry and optical properties. Contour plots of fluence distribution from MCML-EO and MMC correlate well. This study helps one decide which tool to use for modeling light propagation in biological tissue with objects of regular shapes embedded in it. For irregular inhomogeneity in the model (tissue), MMC has to be used. If the embedded objects (inhomogeneity) are of regular geometry (shapes), then MCML-EO is a better option, as simulations like Raman scattering, fluorescent imaging, and optical coherence tomography are currently possible only with MCML. PMID:24727908
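The MCML-style weight bookkeeping underlying such simulations is worth seeing concretely: each photon packet deposits a fraction μa/μt of its weight per interaction and is terminated by Russian roulette, so in an infinite homogeneous medium the deposited weight averages to exactly the launched weight. A stripped-down sketch (coefficients illustrative; no layers, geometry, or embedded objects):

```python
import random

def absorbed_fraction(n_photons, mu_a=0.1, mu_s=0.9, seed=11):
    """Photon-packet weight deposition in an infinite homogeneous medium.

    At every interaction a packet deposits w * mu_a/mu_t and keeps the rest;
    once its weight drops below a threshold, roulette (survival chance 1/m,
    survivors boosted m-fold) terminates it without biasing the expected
    deposited energy, which must average to 1 per launched packet.
    """
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    m = 10                          # roulette boost factor
    deposited = 0.0
    for _ in range(n_photons):
        w = 1.0
        while w > 0.0:
            deposited += w * mu_a / mu_t    # partial absorption
            w *= mu_s / mu_t                # surviving (scattered) weight
            if w < 1e-3:                    # roulette below threshold
                if rng.random() < 1.0 / m:
                    w *= m
                else:
                    w = 0.0
    return deposited / n_photons

frac = absorbed_fraction(20_000)
```

MCML-EO's extension is purely geometric: before each step the code checks for intersection with an embedded sphere, ellipsoid, cylinder, or cuboid and applies Fresnel transmission/reflection at the refractive-index mismatch, while this weight loop stays the same.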
NASA Astrophysics Data System (ADS)
Belhadji, Youcef; Bouazza, Benyounes; Moulahcene, Fateh; Massoum, Nordine
2015-05-01
In a comparative framework, an ensemble Monte Carlo was used to elaborate the electron transport characteristics in two different silicon carbide (SiC) polytypes, 3C-SiC and 4H-SiC. The simulation was performed using a three-valley band structure model; the valleys are spherical and nonparabolic. The aim of this work is to follow the trajectory of 20,000 electrons under high field (from 50 kV to 600 kV) and high temperature (from 200 K to 700 K). We note that this model has already been used in other studies of many zincblende or wurtzite semiconductors. The obtained results, compared with results found in many previous studies, show a notable drift velocity overshoot. This overshoot appears in the subpicosecond transient regime and depends directly on the applied electric field and lattice temperature.
Wang Haifeng [Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY 14853 (United States)], E-mail: hw98@cornell.edu; Popov, Pavel P.; Pope, Stephen B. [Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY 14853 (United States)
2010-03-01
We study a class of methods for the numerical solution of the system of stochastic differential equations (SDEs) that arises in the modeling of turbulent combustion, specifically in the Monte Carlo particle method for the solution of the model equations for the composition probability density function (PDF) and the filtered density function (FDF). This system consists of an SDE for particle position and a random differential equation for particle composition. The numerical methods considered advance the solution in time with (weak) second-order accuracy with respect to the time step size. The four primary contributions of the paper are: (i) establishing that the coefficients in the particle equations can be frozen at the mid-time (while preserving second-order accuracy), (ii) examining the performance of three existing schemes for integrating the SDEs, (iii) developing and evaluating different splitting schemes (which treat particle motion, reaction and mixing on different sub-steps), and (iv) developing the method of manufactured solutions (MMS) to assess the convergence of Monte Carlo particle methods. Tests using MMS confirm the second-order accuracy of the schemes. In general, the use of frozen coefficients reduces the numerical errors. Otherwise no significant differences are observed in the performance of the different SDE schemes and splitting schemes.
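Contribution (i), freezing coefficients at the mid-time while retaining weak second-order accuracy, can be sketched on a scalar test SDE. The Ornstein-Uhlenbeck process below is a stand-in of mine, not the paper's composition-PDF system: the drift is evaluated at a predicted mid-time state before the full step is taken.

```python
import math
import random

def simulate_ou(theta=1.0, sigma=1.0, dt=0.01, n_steps=400_000, seed=5):
    """Midpoint (frozen-coefficient) scheme for dX = -theta*X dt + sigma dW.

    The drift is frozen at the predicted mid-time state
    X_mid = X + a(X)*dt/2, then the full step uses a(X_mid); with additive
    noise this is weakly second-order accurate in dt.  Returns the long-run
    sample variance, whose exact continuous-time value is sigma^2/(2*theta).
    """
    rng = random.Random(seed)
    a = lambda x: -theta * x
    x = 0.0
    samples = []
    for step in range(n_steps):
        x_mid = x + a(x) * dt / 2.0                  # predictor to mid-time
        x = x + a(x_mid) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if step > n_steps // 10:                     # discard burn-in
            samples.append(x)
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

var_est = simulate_ou()   # exact continuous-time stationary variance is 0.5 here
```

The paper's full scheme additionally splits particle motion, mixing, and reaction onto sub-steps; the mid-time freezing shown here is the piece that lets those coefficients be evaluated once per step without degrading the weak order.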
Application of MINERVA Monte Carlo simulations to targeted radionuclide therapy.
Descalle, Marie-Anne; Hartmann Siantar, Christine L; Dauffy, Lucile; Nigg, David W; Wemple, Charles A; Yuan, Aina; DeNardo, Gerald L
2003-02-01
Recent clinical results have demonstrated the promise of targeted radionuclide therapy for advanced cancer. As the success of this emerging form of radiation therapy grows, accurate treatment planning and radiation dose simulations are likely to become increasingly important. To address this need, we have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA system. The goal of the MINERVA dose calculation system is to provide 3-D Monte Carlo simulation-based dosimetry for radiation therapy, focusing on experimental and emerging applications. For molecular targeted radionuclide therapy applications, MINERVA calculates patient-specific radiation dose estimates using computed tomography to describe the patient anatomy, combined with a user-defined 3-D radiation source. This paper describes the validation of the 3-D Monte Carlo transport methods to be used in MINERVA for molecular targeted radionuclide dosimetry. It reports comparisons of MINERVA dose simulations with published absorbed fraction data for distributed, monoenergetic photon and electron sources, and for radioisotope photon emission. MINERVA simulations are generally within 2% of EGS4 results and 10% of MCNP results, but differ by up to 40% from the recommendations given in MIRD Pamphlets 3 and 8 for identical medium composition and density. For several representative source and target organs in the abdomen and thorax, specific absorbed fractions calculated with the MINERVA system are generally within 5% of those published in the revised MIRD Pamphlet 5 for 100 keV photons. However, results differ by up to 23% for the adrenal glands, the smallest of our target organs. Finally, we show examples of Monte Carlo simulations in a patient-like geometry for a source of uniform activity located in the kidney. PMID:12667310
Photon-electron energy deposition in CANDU reactor channels: Simulation and modelling
NASA Astrophysics Data System (ADS)
Abdelbaky, M. E. A.; Hussein, E. M. A.; McCracken, D. R.
1996-06-01
Simulation of photon-electron transport in a CANDU reactor fuel channel using the Monte Carlo method for calculating the energy deposition in the coolant is studied. The geometry of the CANDU fuel channel is very complex, so methods that make such simulations more practicable, without adversely affecting the results, are introduced. In this regard, the use of simplifying assumptions and simplified geometrical models on the performance of two different Monte Carlo codes has been compared. An ETRAN-based code (SANDYL) and the code EGS4 produced comparable results, although the former performs faster in accounting for low energy electrons. A simplified computational model is also introduced. This model is based on decoupling photon-electron transport simulations by the use of electron-energy-transfer functions. The results obtained using the model are successfully validated using the EGS4 and SANDYL codes. A significant computational speedup (about a factor of seven compared to Monte Carlo simulations) is achieved with this model.
N. L. Maidana; L. Brualla; V. R. Vanin; J. R. B. Oliveira; M. A. Rizzutto; E. do Nascimento; J. M. Fernández-Varea
2010-01-01
The triple- and quadruple-escape peaks of 6.128 MeV photons from the ¹⁹F(p,αγ)¹⁶O nuclear reaction were observed in an HPGe detector. The experimental peak areas, measured in spectra projected with a restriction function that allows quantitative comparison of data from different multiplicities, are in reasonably good agreement with those predicted by Monte Carlo simulations done with the general-purpose radiation-transport code penelope. The
NASA Astrophysics Data System (ADS)
Lin, Shi-Zeng; Ayala-Valenzuela, Oscar; McDonald, Ross D.; Bulaevskii, Lev N.; Holesinger, Terry G.; Ronning, Filip; Weisse-Bernstein, Nina R.; Williamson, Todd L.; Mueller, Alexander H.; Hoffbauer, Mark A.; Rabin, Michael W.; Graf, Matthias J.
2013-05-01
The fabrication of high-quality thin superconducting films is essential for single-photon detectors. Their device performance is crucially affected by their material parameters, thus requiring reliable and nondestructive characterization methods after the fabrication and patterning processes. Important material parameters to know are the resistivity, superconducting transition temperature, relaxation time of quasiparticles, and uniformity of patterned wires. In this work, we characterize micropatterned thin NbN films by using transport measurements in magnetic fields. We show that from the instability of vortex motion at high currents in the flux-flow state of the I-V characteristic, the inelastic lifetime of quasiparticles can be determined to be about 2 ns. Additionally, from the depinning transition of vortices at low currents, as a function of magnetic field, the size distribution of grains can be extracted. This size distribution is found to be in agreement with the film morphology obtained from scanning electron microscopy and high-resolution transmission electron microscopy images.
Hach, Edwin E; Preble, Stefan F
2010-01-01
We analyze the dynamics of single photon transport in a single-mode waveguide coupled to a micro-optical resonator using a fully quantum mechanical model. We examine the propagation of a single-photon Gaussian packet through the system under various coupling conditions. We review the theory of single photon transport phenomena as applied to the system and we develop a discussion on the numerical technique we used to solve for dynamical behavior of the quantized field. To demonstrate our method and to establish robust single photon results, we study the process of adiabatically lowering or raising the energy of a single photon trapped in an optical resonator under active tuning of the resonator. We show that our fully quantum mechanical approach reproduces the semi-classical result in the appropriate limit and that the adiabatic invariant has the same form in each case. Finally, we explore the trapping of a single photon in a system of dynamically tuned, coupled optical cavities.
NASA Astrophysics Data System (ADS)
Alexander, Andrew William
Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP) provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. Energy- and intensity-modulated electron radiotherapy (MERT) is a promising developing treatment modality with the fundamental capability to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform-independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP.
The structure of the MERT planning toolkit software and optimization algorithms are demonstrated. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head and neck sarcoma cancer site using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed beam photon and electron treatment planning. Photon optimization treatment planning tools were included within the MERT planning toolkit for the purpose of mixed beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and the development of an inverse planning system for photon, electron or mixed beam radiotherapy (MBRT). The justification and validation of this work is found within the results of the planning studies, which have demonstrated dosimetric advantages to using MERT or MBRT in comparison to clinical treatment alternatives.
High-speed DC transport of emergent monopoles in spinor photonic fluids.
Terças, H; Solnyshkov, D D; Malpuech, G
2014-07-18
We investigate the spin dynamics of half-solitons in quantum fluids of interacting photons (exciton polaritons). Half-solitons, which behave as emergent monopoles, can be accelerated by the presence of effective magnetic fields. We study the generation of dc magnetic currents in a gas of half-solitons. At low densities, the current is suppressed due to the dipolar oscillations. At moderate densities, a magnetic current is recovered as a consequence of the collisions between the carriers. We show a deviation from Ohm's law due to the competition between dipoles and monopoles. PMID:25083658
Coherent photon transport from spontaneous emission in one-dimensional waveguides
J. T. Shen; Shanhui Fan
2005-01-01
A two-level system coupled to a one-dimensional continuum is investigated. By using a real-space model Hamiltonian, we show that spontaneous emission can coherently interfere with the continuum modes and gives interesting transport properties. The technique is applied to various related problems with different configurations, and analytical solutions are given.
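The well-known transmission coefficient that follows from this real-space analysis is a Lorentzian dip, with complete reflection on resonance. A minimal numerical sketch of that result (the names `delta` for detuning and `gamma` for the waveguide decay rate are illustrative, not taken from the paper):

```python
def transmission(delta, gamma):
    """Single-photon transmission past a two-level system coupled to a
    1D waveguide: T = delta^2 / (delta^2 + gamma^2), a Lorentzian dip
    that vanishes on resonance (complete reflection)."""
    return delta ** 2 / (delta ** 2 + gamma ** 2)

# on resonance the photon is fully reflected; far off resonance it passes
print(transmission(0.0, 1.0))   # 0.0
print(transmission(1.0, 1.0))   # 0.5
```

The two-level system thus acts as a perfect narrow-band mirror, which is what makes the coherent interference of spontaneous emission with the continuum modes interesting for transport.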
Bergstrom, Paul M. (Livermore, CA); Daly, Thomas P. (Livermore, CA); Moses, Edward I. (Livermore, CA); Patterson, Jr., Ralph W. (Livermore, CA); Schach von Wittenau, Alexis E. (Livermore, CA); Garrett, Dewey N. (Livermore, CA); House, Ronald K. (Tracy, CA); Hartmann-Siantar, Christine L. (Livermore, CA); Cox, Lawrence J. (Los Alamos, NM); Fujino, Donald H. (San Leandro, CA)
2000-01-01
A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And, a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module, multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And, a dose calculation module, coupled to the common volume calculation module and the radiation transport module, for calculating radiation doses received by the target mass within the dosel volumes.
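The common-volume and dosel-mass steps described above can be sketched directly for axis-aligned boxes. This is an illustrative toy (the box representation and function names are hypothetical, not the patented implementation):

```python
def overlap_1d(a0, a1, b0, b1):
    """Length of the overlap of the intervals [a0, a1] and [b0, b1]."""
    return max(0.0, min(a1, b1) - max(a0, b0))

def common_volume(box_a, box_b):
    """Overlap volume of two axis-aligned boxes ((x0,x1),(y0,y1),(z0,z1))."""
    v = 1.0
    for (a0, a1), (b0, b1) in zip(box_a, box_b):
        v *= overlap_1d(a0, a1, b0, b1)
    return v

def dosel_mass(dosel, voxels, densities):
    """Sum of incremental dosel masses: common volume times voxel density."""
    return sum(common_volume(v, dosel) * rho
               for v, rho in zip(voxels, densities))

# two 1 cm^3 voxels at 1 g/cm^3; a dosel spanning half of each
voxels = [((0, 1), (0, 1), (0, 1)), ((1, 2), (0, 1), (0, 1))]
densities = [1.0, 1.0]
dosel = ((0.5, 1.5), (0, 1), (0, 1))
mass = dosel_mass(dosel, voxels, densities)   # 1.0 g
```

Dividing the energy deposited inside a dosel by this mass then yields the dose, as in the fifth step of the method.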
Martino, G; Capasso, M; Nasuti, M; Bonanni, L; Onofrj, M; Thomas, A
2015-04-01
Akinetic crisis (AC) is akin to neuroleptic malignant syndrome (NMS) and is the most severe and possibly lethal complication of parkinsonism. Diagnosis is today based only on clinical assessments, yet is often marred by concomitant precipitating factors. Our purpose is to show that AC and NMS can be reliably detected by FP-CIT single-photon emission computerized tomography (SPECT) performed during the crisis. Prospective cohort evaluation in 6 patients. In 5 patients, affected by Parkinson disease or Lewy body dementia, the crisis was categorized as AC. One was diagnosed as having NMS because of exposure to risperidone. In all patients, FP-CIT SPECT was performed in the acute phase. SPECT was repeated 3 to 6 months after the acute event in 5 patients. Visual assessments and semiquantitative evaluations of binding potentials (BPs) were used. To exclude the interference of emergency treatments, FP-CIT BP was also evaluated in 4 patients currently treated with apomorphine. During AC or NMS, BP values in caudate and putamen were reduced by 95% to 80%, to noise level, with a nearly complete loss of striatal dopamine transporter binding, corresponding to the "burst striatum" pattern. The follow-up re-evaluation in surviving patients showed a recovery of values to the range expected for parkinsonisms of the same disease duration. No binding effects of apomorphine were observed. By showing the outstanding binding reduction, presynaptic dopamine transporter ligands can provide instrumental evidence of AC in parkinsonism and NMS. PMID:25837755
SU-E-T-180: Fano Cavity Test of Proton Transport in Monte Carlo Codes Running On GPU and Xeon Phi
Sterpin, E; Sorriaux, J; Souris, K; Lee, J; Vynckier, S [Universite catholique de Louvain, Brussels, Brussels (Belgium); Schuemann, J; Paganetti, H [Massachusetts General Hospital, Boston, MA (United States); Jia, X; Jiang, S [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)
2014-06-01
Purpose: In proton dose calculation, clinically compatible speeds are now achieved with Monte Carlo codes (MC) that combine 1) adequate simplifications in the physics of transport and 2) the use of hardware architectures enabling massive parallel computing (like GPUs). However, the uncertainties related to the transport algorithms used in these codes must be kept minimal. Such algorithms can be checked with the so-called “Fano cavity test”. We implemented the test in two codes that run on specific hardware: gPMC on an nVidia GPU and MCsquare on an Intel Xeon Phi (60 cores). Methods: gPMC and MCsquare are designed for transporting protons in CT geometries. Both codes use the method of fictitious interaction to sample the step-length for each transport step. The considered geometry is a water cavity (2×2×0.2 cm³, 0.001 g/cm³) in a 10×10×50 cm³ water phantom (1 g/cm³). CPE in the cavity is established by generating protons over the phantom volume with a uniform momentum (energy E) and a uniform intensity per unit mass I. Assuming no nuclear reactions and no generation of other secondaries, the computed cavity dose should equal IE, according to Fano's theorem. Both codes were tested for initial proton energies of 50, 100, and 200 MeV. Results: For all energies, gPMC and MCsquare are within 0.3% and 0.2% of the theoretical value IE, respectively (0.1% standard deviation). Single-precision computations (instead of double) increased the error by about 0.1% in MCsquare. Conclusion: Despite the simplifications in the physics of transport, both gPMC and MCsquare successfully pass the Fano test. This ensures optimal accuracy of the codes for clinical applications within the uncertainties on the underlying physical models. It also opens the path to other applications of these codes, like the simulation of ion chamber response.
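The invariance that the Fano test exploits can be reproduced in a deliberately minimal 1D toy model (hypothetical geometry and stopping power, straight-ahead continuous slowing down, not the gPMC/MCsquare physics): particles born uniformly per unit mass with a single direction deposit a cavity dose per unit intensity equal to E, regardless of the density perturbation:

```python
import random

def fano_test_1d(n=200_000, E=1.0, S0=2.0, seed=1):
    """Toy 1D Fano test: particles born uniformly per unit MASS with one
    direction (+x) and energy E slow down continuously with a constant
    MASS stopping power S0 (dE/dx = S0*rho).  Under charged-particle
    equilibrium, cavity dose / intensity = E, whatever the densities."""
    # phantom [0,10] cm at 1 g/cm^3 with a low-density cavity in the middle
    regions = [(0.0, 4.0, 1.0), (4.0, 4.5, 0.001), (4.5, 10.0, 1.0)]
    masses = [(b - a) * rho for a, b, rho in regions]
    total_mass = sum(masses)
    rng = random.Random(seed)
    edep = 0.0                                   # energy deposited in cavity
    for _ in range(n):
        u = rng.random() * total_mass            # sample source by mass
        for (a, b, rho), m in zip(regions, masses):
            if u < m:
                x = a + (b - a) * rng.random()
                break
            u -= m
        e = E
        for a, b, rho in regions:                # straight-ahead transport
            if x >= b:
                continue
            de = min(e, S0 * rho * (b - max(x, a)))
            if (a, b) == (4.0, 4.5):             # tally the cavity
                edep += de
            e -= de
            if e <= 0.0:
                break
    dose = edep / masses[1]                      # cavity dose
    intensity = n / total_mass                   # particles per unit mass
    return dose / intensity                      # should be ~E
```

Even with the cavity a thousand times less dense than the phantom, the ratio returned is E to within Monte Carlo noise; a transport algorithm that breaks this invariance fails the test.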
Khromova, A N; Arfelli, F; Menk, R H; Besch, H J; Plothow-Besch, H; 10.1109/NSSMIC.2004.1466758
2010-01-01
In this work we present a novel 3D Monte Carlo photon transport program for the simulation of multiple refractive scattering, based on the refractive properties of X-rays in highly scattering media like lung tissue. Multiple scattering not only reduces the quality of the image, but also contains information on the internal structure of the object. This information can be exploited utilizing imaging modalities such as Diffraction Enhanced Imaging (DEI). To study the effect of multiple scattering, a Monte Carlo program was developed that simulates multiple refractive scattering of X-ray photons on monodisperse PMMA (poly-methyl-methacrylate) microspheres representing alveoli in lung tissue. Finally, the results of the Monte Carlo program were compared to measurements taken at the SYRMEP beamline at Elettra (Trieste, Italy) on special phantoms, showing good agreement between both data sets.
Updated version of the DOT 4 one- and two-dimensional neutron/photon transport code
Rhoades, W.A.; Childs, R.L.
1982-07-01
DOT 4 is designed to allow very large transport problems to be solved on a wide range of computers and memory arrangements. Unusual flexibility in both space-mesh and directional-quadrature specification is allowed. For example, the radial mesh in an R-Z problem can vary with axial position. The directional quadrature can vary with both space and energy group. Several features improve performance on both deep penetration and criticality problems. The program has been checked and used extensively.
NASA Astrophysics Data System (ADS)
Millman, David L.; Griesheimer, David P.; Nease, Brian R.; Snoeyink, Jack
2014-06-01
For large, highly detailed models, Monte Carlo simulations may spend a large fraction of their run-time performing simple point location and distance to surface calculations for every geometric component in a model. In such cases, the use of bounding boxes (axis-aligned boxes that bound each geometric component) can improve particle tracking efficiency and decrease overall simulation run time significantly. In this paper we present a robust and efficient algorithm for generating the numerically-optimal bounding box (optimal to within a user-specified tolerance) for an arbitrary Constructive Solid Geometry (CSG) object defined by quadratic surfaces. The new algorithm uses an iterative refinement to tighten an initial, conservatively large, bounding box into the numerically-optimal bounding box. At each stage of refinement, the algorithm subdivides the candidate bounding box into smaller boxes, which are classified as inside, outside, or intersecting the boundary of the component. In cases where the algorithm cannot unambiguously classify a box, the box is refined further. This process continues until the refinement near the component's extremal points reaches the user-selected tolerance level. This refinement/classification approach is more efficient and practical than methods that rely on computing actual boundary representations or sampling to determine the extent of an arbitrary CSG component. A complete description of the bounding box algorithm is presented, along with a proof that the algorithm is guaranteed to converge to within specified tolerance of the true optimal bounding box. The paper also provides a discussion of practical implementation details for the algorithm as well as numerical results highlighting performance and accuracy for several representative CSG components.
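The subdivide/classify loop can be sketched in 2D for a disc (a quadratic boundary). This is a simplified illustration of the idea, not the paper's algorithm: it assumes the initial box straddles the surface, and only boundary-straddling boxes are refined until they are narrower than the tolerance:

```python
import math

def classify(box, center, r):
    """Classify an axis-aligned box against the disc |p - center| <= r."""
    (x0, x1), (y0, y1) = box
    cx, cy = center
    nx = min(max(cx, x0), x1)                        # nearest box point
    ny = min(max(cy, y0), y1)
    fx = x0 if abs(cx - x0) > abs(cx - x1) else x1   # farthest corner
    fy = y0 if abs(cy - y0) > abs(cy - y1) else y1
    if math.hypot(fx - cx, fy - cy) <= r:
        return "inside"
    if math.hypot(nx - cx, ny - cy) > r:
        return "outside"
    return "boundary"

def optimal_bbox(center, r, initial, tol=1e-3):
    """Tighten `initial` (assumed to straddle the disc) to the disc's
    bounding box, to within tol, by refining only boundary boxes."""
    inside, boundary = [], [initial]
    while boundary and boundary[0][0][1] - boundary[0][0][0] > tol:
        nxt = []
        for (x0, x1), (y0, y1) in boundary:
            xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
            for sub in (((x0, xm), (y0, ym)), ((xm, x1), (y0, ym)),
                        ((x0, xm), (ym, y1)), ((xm, x1), (ym, y1))):
                c = classify(sub, center, r)
                if c == "inside":
                    inside.append(sub)       # keep, never refine again
                elif c == "boundary":
                    nxt.append(sub)          # refine on the next pass
        boundary = nxt
    boxes = inside + boundary
    xs = [v for (xr, _) in boxes for v in xr]
    ys = [v for (_, yr) in boxes for v in yr]
    return (min(xs), max(xs)), (min(ys), max(ys))
```

Because outside boxes are discarded and inside boxes are frozen, work concentrates on the boundary near the extremal points, which is what makes this approach cheaper than sampling or computing a boundary representation.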
Configuration of the electron transport algorithm of PENELOPE to simulate ion chambers
J. Sempau; P. Andreo
2006-01-01
The stability of the electron transport algorithm implemented in the Monte Carlo code PENELOPE with respect to variations of its step length is analysed in the context of the simulation of ion chambers used in photon and electron dosimetry. More precisely, the degree of violation of the Fano theorem is quantified (to the 0.1% level) as a function of the
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.
1971-01-01
An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method as yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained with a 600 MeV cyclotron are given.
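The core of a creeping random search is very small: perturb the current parameter vector, clip to the parameter constraints, and accept only improving moves. A minimal sketch with an invented quadratic objective standing in for the beam-resolution figure of merit (all names here are illustrative):

```python
import random

def creeping_random_search(f, x0, lo, hi, step=0.1, iters=2000, seed=0):
    """Minimise f by a creeping random search (sequential random
    perturbation): perturb the current point, clip each parameter to its
    constraint interval [lo, hi], and keep only improving moves."""
    rng = random.Random(seed)
    x, best = list(x0), f(x0)
    for _ in range(iters):
        cand = [min(max(xi + rng.gauss(0.0, step), l), h)
                for xi, l, h in zip(x, lo, hi)]
        v = f(cand)
        if v < best:                  # "creep": accept improvements only
            x, best = cand, v
    return x, best

# toy objective: a quadratic bowl with its optimum at (1, 2)
bowl = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
x, v = creeping_random_search(bowl, [0.0, 0.0], [-5.0, -5.0], [5.0, 5.0])
```

Clipping the candidate to the constraint box is what guarantees that no infeasible parameter set is ever evaluated, mirroring the paper's handling of parameter constraints.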
NASA Technical Reports Server (NTRS)
Stephens, D. L. Jr; Townsend, L. W.; Miller, J.; Zeitlin, C.; Heilbronn, L.
2002-01-01
Deep-space manned flight as a reality depends on a viable solution to the radiation problem. Both acute and chronic radiation health threats are known to exist, with solar particle events as an example of the former and galactic cosmic rays (GCR) of the latter. In this experiment, iron ions of 1A GeV are used to simulate GCR and to determine the secondary radiation field created as the GCR-like particles interact with a thick target. A NASA-prepared food pantry locker was subjected to the iron beam and the secondary fluence recorded. A modified version of the Monte Carlo heavy ion transport code developed by Zeitlin at LBNL is compared with the experimental fluence. The foodstuff is modeled as mixed nuts as defined by the 71st edition of the Chemical Rubber Company (CRC) Handbook of Physics and Chemistry. The results indicate a good agreement between the experimental data and the model. The agreement between model and experiment is determined using a linear fit to ordered pairs of data. The intercept is forced to zero. The slope fit is 0.825 and the R² value is 0.429 over the resolved fluence region. The removal of an outlier, Z=14, gives values of 0.888 and 0.705 for slope and R², respectively. © 2002 COSPAR. Published by Elsevier Science Ltd. All rights reserved.
Giden, I. H., E-mail: igiden@etu.edu.tr; Yilmaz, D.; Turduev, M.; Kurt, H. [Nanophotonics Research Laboratory, Department of Electrical and Electronics Engineering, TOBB University of Economics and Technology, Ankara 06560 (Turkey); Çolak, E. [Electrical and Electronics Engineering Department, Ankara University, Gölbasi, Ankara 06830 (Turkey); Ozbay, E. [Department of Electrical and Electronics Engineering, Bilkent University, Bilkent, Ankara 06800 (Turkey)
2014-01-20
To provide asymmetric propagation of light, we propose a graded index photonic crystal (GRIN PC) based waveguide configuration that is formed by introducing line and point defects as well as intentional perturbations inside the structure. The designed system utilizes isotropic materials and is purely reciprocal, linear, and time-independent, since neither magneto-optical materials are used nor time-reversal symmetry is broken. The numerical results show that the proposed scheme based on spatial-inversion symmetry breaking has different forward (with a peak value of 49.8%) and backward transmissions (4.11% at most) as well as relatively small round-trip transmission (at most 7.11%) in a large operational bandwidth of 52.6 nm. The signal contrast ratio of the designed configuration is above 0.80 in the telecom wavelengths of 1523.5–1576.1 nm. An experimental measurement is also conducted in the microwave regime: A strong asymmetric propagation characteristic is observed within the frequency interval of 12.8–13.3 GHz. The numerical and experimental results confirm the asymmetric transmission behavior of the proposed GRIN PC waveguide.
M. Petersheim; J. Hesser; F. Wenz
Photon beams in the energy range of keV are more cost effective and require less shielding effort compared to MeV photons or particle beams. The absent build-up effect for low-energetic X-rays, however, is challenging for percutaneous radiotherapy. In this work, we study a possible improvement of the TCP-NTCP ratio (tumor control vs. normal tissue complication) by geometrically superimposing different collimated beamlets.
NASA Technical Reports Server (NTRS)
Marshak, Alexander
2004-01-01
In my presentation, I will describe several approximation methods with different levels of complexity; they will be gradually applied to simple examples of horizontally inhomogeneous clouds. Understanding of photon horizontal transport and radiative smoothing can help to improve the accuracy of the methods. The accuracy of the methods will be compared with full Monte Carlo calculations. The specifics of Monte Carlo in cloudy atmospheres will also be discussed. A special emphasis will be put on the strong forward-scattering peak in the phase functions.
C. Bousis; D. Emfietzoglou; P. Hadjidoukas; H. Nikjoo
2008-01-01
Monte Carlo transport calculations of dose point kernels (DPKs) and depth dose profiles (DDPs) in both the vapor and liquid phases of water are presented for electrons with initial energy between 10 keV and 1 MeV. The results are obtained by the MC4 code using three different implementations of the condensed-history technique for inelastic collisions, namely the continuous slowing down
Estes, G.P.; Schrandt, R.G.; Kriese, J.T.
1988-03-01
A patch to the Los Alamos Monte Carlo code MCNP has been developed that automates the generation of source descriptions for photons from arbitrary mixtures and configurations of radioactive isotopes. Photon branching ratios for decay processes are obtained from national and international data bases and accessed directly from computer files. Code user input is generally confined to readily available information such as density, isotopic weight fractions, atomic numbers, etc. of isotopes and material compositions. The availability of this capability in conjunction with the ''generalized source'' capability of MCNP Version 3A makes possible the rapid and accurate description of photon sources from complex mixtures and configurations of radioactive materials, resulting in improved radiation transport predictive capabilities. This capability is combined with a first-principles calculation of photon spectrometer response functions for NaI, BGO, and HPGe for Eγ ≲ 1 MeV. 25 refs., 1 fig., 4 tabs.
Photon-aided and photon-inhibited tunneling of photons
NASA Astrophysics Data System (ADS)
Liu, Xuele; Agarwal, G. S.
2013-06-01
In light of the interest in the transport of single photons in arrays of waveguides, fiber couplers, photonic crystals, etc., we consider the quantum mechanical process of the tunneling of photons through evanescently or otherwise coupled structures. We specifically examine the issue of tunneling between two structures when one structure already contains few photons. We demonstrate the possibility of both photon-aided and photon-inhibited tunneling of photons. The bosonic nature of photons enhances the tunneling probability. We also show how the multiphoton tunneling probability can be either enhanced or inhibited due to the presence of photons. We find similar results for higher-order tunneling. Finally, we show that the presence of a squeezed field changes the nature of tunneling considerably.
Analysis of single Monte Carlo methods for prediction of reflectance from turbid media.
Martinelli, Michele; Gardner, Adam; Cuccia, David; Hayakawa, Carole; Spanier, Jerome; Venugopalan, Vasan
2011-09-26
Starting from the radiative transport equation we derive the scaling relationships that enable a single Monte Carlo (MC) simulation to predict the spatially- and temporally-resolved reflectance from homogeneous semi-infinite media with arbitrary scattering and absorption coefficients. This derivation shows that a rigorous application of this single Monte Carlo (sMC) approach requires the rescaling to be done individually for each photon biography. We examine the accuracy of the sMC method when processing simulations on an individual photon basis and also demonstrate the use of adaptive binning and interpolation using non-uniform rational B-splines (NURBS) to achieve order of magnitude reductions in the relative error as compared to the use of uniform binning and linear interpolation. This improved implementation for sMC simulation serves as a fast and accurate solver to address both forward and inverse problems and is available for use at http://www.virtualphotonics.org/. PMID:21996904
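The per-biography rescaling idea can be illustrated in a deliberately minimal 1D setting (this sketches only the absorption rescaling, not the full spatial rescaling derived in the paper, and the geometry is invented for illustration): simulate once with no absorption, store each photon's total path length, then rescale every biography by a Beer–Lambert factor to predict the total reflectance for any absorption coefficient:

```python
import math, random

def biographies(n, mus=1.0, lmax=40.0, seed=3):
    """One baseline simulation with NO absorption in a 1D semi-infinite
    scattering medium (scattering coefficient mus): return the total path
    length of every photon that escapes back through the surface."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n):
        z, mu, L = 0.0, 1.0, 0.0          # depth, direction (+-1), path length
        while L < lmax:                   # truncate rare very long histories
            s = rng.expovariate(mus)      # free path to the next scatter
            if mu < 0.0 and s > z:        # crosses the surface: escapes
                paths.append(L + z)
                break
            z += mu * s
            L += s
            mu = rng.choice((-1.0, 1.0))  # isotropic scattering in 1D
    return paths

def reflectance(paths, n, mua):
    """Rescale each stored biography with exp(-mua*L) to predict the
    reflectance of the same medium with absorption coefficient mua."""
    return sum(math.exp(-mua * L) for L in paths) / n
```

One stored set of biographies thus serves every absorption coefficient; the exactness rests on factoring the transport kernel as exp(-(μs+μa)s) = exp(-μs s)·exp(-μa s), which is why the rescaling must be applied per photon biography rather than to ensemble averages.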
Stephen K. O’Leary; Brian E. Foutz; Michael S. Shur; Lester F. Eastman
2010-01-01
Using a semi-classical three-valley Monte Carlo simulation approach, we analyze the steady-state and transient electron transport that occurs within bulk wurtzite InN using a revised set of material parameters, this revised set of parameters taking into account recently observed InN phenomenology. In particular, we examine how the steady-state and transient electron transport that occurs within bulk wurtzite InN changes in
BOMAB phantom manufacturing quality assurance study using Monte Carlo computations
Mallett, M.W.
1994-01-01
Monte Carlo calculations have been performed to assess the importance of and quantify quality assurance protocols in the manufacturing of the Bottle-Manikin-Absorption (BOMAB) phantom for calibrating in vivo measurement systems. The parameters characterizing the BOMAB phantom that were examined included height, fill volume, fill material density, wall thickness, and source concentration. Transport simulation was performed for monoenergetic photon sources of 0.200, 0.662, and 1.460 MeV. A linear response was observed in the photon current exiting the exterior surface of the BOMAB phantom due to variations in these parameters. Sensitivity studies were also performed for an in vivo system in operation at the Pacific Northwest Laboratories in Richland, WA. Variations in detector current for this in vivo system are reported for changes in the BOMAB phantom parameters studied here. Physical justifications for the observed results are also discussed.
Stanford University
Chapter 2: Monte Carlo Integration. This chapter gives an introduction to Monte Carlo integration useful in computer graphics. Good references on Monte Carlo methods include Kalos & Whitlock [1986] for Monte Carlo applications to neutron transport problems; Lewis & Miller [1984] is a good source
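The basic estimator behind Monte Carlo integration is short enough to state in full: average the integrand at uniform random points and scale by the interval length, with the sample variance giving a standard-error estimate. A minimal sketch (function and parameter names are illustrative):

```python
import math, random

def mc_integrate(f, a, b, n=100_000, seed=7):
    """Crude Monte Carlo estimate of the integral of f over [a, b],
    returned together with its estimated standard error."""
    rng = random.Random(seed)
    samples = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return (b - a) * mean, (b - a) * math.sqrt(var / n)

est, err = mc_integrate(math.sin, 0.0, math.pi)   # exact value is 2
```

The error shrinks as 1/√n regardless of dimension, which is why the same estimator underlies both the graphics and the neutron-transport applications cited above.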
Multigroup cross section generation via Monte Carlo methods
Everett Lee Redmond II
1997-01-01
Monte Carlo methods of performing radiation transport calculations are heavily used in many different applications. However, despite their prevalence, Monte Carlo codes do not eliminate the need for other methods of analysis like discrete ordinates transport codes or even diffusion theory codes. For example, current Monte Carlo codes are not capable of performing transient analysis or continuous-energy adjoint calculations.
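The group-collapse step that such a generation scheme must perform is the flux-weighted average of the continuous cross section over each group, σ_g = ∫_g σ(E)φ(E)dE / ∫_g φ(E)dE. A sketch with deterministic quadrature standing in for the Monte Carlo flux tallies (the 1/E weighting spectrum and the function names are assumed for illustration):

```python
def collapse(sigma, phi, boundaries, n=10_000):
    """Flux-weighted group constants sigma_g = <sigma*phi>_g / <phi>_g,
    integrated over each group with the trapezoid rule."""
    groups = []
    for lo, hi in zip(boundaries[:-1], boundaries[1:]):
        de = (hi - lo) / n
        num = den = 0.0
        for i in range(n + 1):
            e = lo + i * de
            w = 0.5 if i in (0, n) else 1.0   # trapezoid end weights
            num += w * sigma(e) * phi(e) * de
            den += w * phi(e) * de
        groups.append(num / den)
    return groups

# collapse a 1/sqrt(E) cross section over one group [1, 10]
# with an assumed 1/E weighting spectrum
g = collapse(lambda e: e ** -0.5, lambda e: 1.0 / e, [1.0, 10.0])
```

In a Monte Carlo generation scheme the numerator and denominator would be accumulated as track-length tallies rather than quadrature sums, but the collapse formula is the same.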
Joseph Kushner; Daekeun Kim; Peter T C So; Daniel Blankschtein; Robert S Langer
2007-01-01
Visualization of transdermal permeant pathways is necessary to substantiate model-based conclusions drawn using permeability data. The aim of this investigation was to visualize the transdermal delivery of sulforhodamine B (SRB), a fluorescent hydrophilic permeant, and of rhodamine B hexyl ester (RBHE), a fluorescent hydrophobic permeant, using dual-channel two-photon microscopy (TPM) to better understand the transport pathways and the mechanisms of
The Monte Carlo code MCSHAPE: Main features and recent developments
NASA Astrophysics Data System (ADS)
Scot, Viviana; Fernandez, Jorge E.
2015-06-01
MCSHAPE is a general purpose Monte Carlo code developed at the University of Bologna to simulate the diffusion of X- and gamma-ray photons with the special feature of describing the full evolution of the photon polarization state along the interactions with the target. The prevailing photon-matter interactions in the energy range 1-1000 keV, Compton and Rayleigh scattering and photoelectric effect, are considered. All the parameters that characterize the photon transport can be suitably defined: (i) the source intensity, (ii) its full polarization state as a function of energy, (iii) the number of collisions, and (iv) the energy interval and resolution of the simulation. It is possible to visualize the results for selected groups of interactions. MCSHAPE simulates the propagation in heterogeneous media of polarized photons (from synchrotron sources) or of partially polarized sources (from X-ray tubes). In this paper, the main features of MCSHAPE are illustrated with some examples and a comparison with experimental data.
Islam, M. Anwarul; Akramuzzaman, M. M.; Zakaria, G. A.
2012-01-01
Manufacturing of miniaturized high-activity 192Ir sources has become a market preference in modern brachytherapy. The smaller dimensions of the sources are suitable for smaller-diameter applicators as well as for interstitial implants. Presently, miniaturized 60Co HDR sources have been made available with dimensions identical to those of 192Ir sources. 60Co sources have the advantage of a longer half-life compared with 192Ir sources. High-dose-rate brachytherapy sources with a longer half-life are a pragmatic solution for developing countries from an economic point of view. This study aims to compare the TG-43U1 dosimetric parameters for the new BEBIG 60Co HDR and new microSelectron 192Ir HDR sources. Dosimetric parameters are calculated using an EGSnrc-based Monte Carlo simulation code in accordance with the AAPM TG-43 formalism for the microSelectron HDR 192Ir v2 and new BEBIG 60Co HDR sources. The air-kerma strengths per unit source activity, calculated in dry air, are 9.698×10⁻⁸ ± 0.55% U Bq⁻¹ and 3.039×10⁻⁷ ± 0.41% U Bq⁻¹ for the two sources, respectively. The calculated dose rate constants per unit air-kerma strength in water medium are 1.116 ± 0.12% cGy h⁻¹ U⁻¹ and 1.097 ± 0.12% cGy h⁻¹ U⁻¹, respectively, for the two sources. The values of the radial dose function for distances up to 1 cm and beyond 22 cm for the BEBIG 60Co HDR source are higher than those of the other source. The anisotropy values increase sharply toward the longitudinal sides of the BEBIG 60Co source, and the rise is comparatively sharper than that of the other source. Tissue dependence of the absorbed dose has been investigated with a vacuum phantom for breast, compact bone, blood, lung, thyroid, soft tissue, testis, and muscle. No significant variation is noted at 5 cm of radial distance in this regard when comparing the two sources, except for lung tissue. The true dose rates are calculated considering photon as well as electron transport, using appropriate cut-off energies.
No significant dosimetric advantages or disadvantages are found when comparing the two sources. PMID:23293454
NASA Astrophysics Data System (ADS)
Brualla, L.; Mayorga, P. A.; Flühs, A.; Lallena, A. M.; Sempau, J.; Sauerwein, W.
2012-11-01
Retinoblastoma is the most common eye tumour in childhood. According to the available long-term data, the best outcome regarding tumour control and visual function has been reached by external beam radiotherapy. The benefits of the treatment are, however, jeopardized by a high incidence of radiation-induced secondary malignancies and the fact that irradiated bones grow asymmetrically. In order to better exploit the advantages of external beam radiotherapy, it is necessary to improve current techniques by reducing the irradiated volume and minimizing the dose to the facial bones. To this end, dose measurements and simulated data in a water phantom are essential. A Varian Clinac 2100 C/D operating at 6 MV is used in conjunction with a dedicated collimator for the retinoblastoma treatment. This collimator conforms a ‘D’-shaped off-axis field whose irradiated area can be either 5.2 or 3.1 cm2. Depth dose distributions and lateral profiles were experimentally measured. Experimental results were compared, using the gamma test, with Monte Carlo simulations run with the penelope code and with calculations performed with the analytical anisotropic algorithm implemented in the Eclipse treatment planning system. penelope simulations agree reasonably well with the experimental data, with discrepancies in the dose profiles below 3 mm distance-to-agreement and 3% dose difference. Discrepancies between the results found with the analytical anisotropic algorithm and the experimental data reach 3 mm and 6%. Although the discrepancies between the results obtained with the analytical anisotropic algorithm and the experimental data are notable, it is possible to consider this algorithm for routine treatment planning of retinoblastoma patients, provided the limitations of the algorithm are known and taken into account by the medical physicist and the clinician. Monte Carlo simulation is essential for knowing these limitations, and is required for optimizing the treatment technique and the dedicated collimator.
Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor
NASA Astrophysics Data System (ADS)
Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert
2009-10-01
Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general-purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA ® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectories of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer.
Program summary:
Program title: Phoogle-C/Phoogle-G
Catalogue identifier: AEEB_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 51 264
No. of bytes in distributed program, including test data, etc.: 2 238 805
Distribution format: tar.gz
Programming language: C++
Computer: Designed for Intel PCs. Phoogle-G requires an NVIDIA graphics card with support for CUDA 1.1
Operating system: Windows XP
Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures
RAM: 1 GB
Classification: 21.1
External routines: Charles Karney random number library; Microsoft Foundation Class library; NVIDIA CUDA library [1]
Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the paths of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. Generally, parallel computing can be expensive, but recent advances in consumer-grade graphics cards have opened the possibility of high-performance desktop parallel computing.
Solution method: In this pair of programs we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media, to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer-grade graphics card from NVIDIA.
Restrictions: The graphics-card implementation uses single-precision floating-point numbers for all calculations. Only photon transport from an isotropic point source is supported. The graphics-card version has no user interface; the simulation parameters must be set in the source code. The desktop version has a simple user interface, but some properties can only be accessed through an ActiveX client (such as Matlab).
Additional comments: The random number library used has an LGPL (http://www.gnu.org/copyleft/lesser.html) licence.
Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium.
References:
[1] http://www.nvidia.com/object/cuda_home.html
[2] S. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
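The Prahl-style photon random walk referenced above follows a hop-drop-spin cycle: sample a free path, deposit or survive by the albedo, then re-scatter. A minimal single-threaded Python sketch of that cycle (isotropic scattering instead of Henyey-Greenstein, for brevity; not the Phoogle code itself):

```python
import math, random

def simulate_photon(mu_a, mu_s, rng=random.random):
    """Follow one photon from an isotropic point source through an infinite,
    isotropically scattering medium; return the radius (cm) at which it is
    absorbed. mu_a, mu_s: absorption/scattering coefficients (1/cm)."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    x = y = z = 0.0
    while True:
        # spin: sample an isotropic direction (also serves as launch direction)
        cos_t = 2.0 * rng() - 1.0
        phi = 2.0 * math.pi * rng()
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        ux, uy, uz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
        # hop: sample the free path from the Beer-Lambert law
        s = -math.log(rng()) / mu_t
        x, y, z = x + s * ux, y + s * uy, z + s * uz
        # drop: absorb with probability 1 - albedo, else scatter again
        if rng() >= albedo:
            return math.sqrt(x * x + y * y + z * z)
```

Because each photon history is independent, exactly this loop body is what maps onto one GPU thread in implementations like the one described above.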
Zourari, K.; Pantelis, E.; Moutsatsos, A.; Sakelliou, L.; Georgiou, E.; Karaiskos, P.; Papagiannis, P. [Medical Physics Laboratory, Medical School, University of Athens, 75 Mikras Asias, 115 27 Athens (Greece); Department of Physics, Nuclear and Particle Physics Section, University of Athens, Ilisia, 157 71 Athens (Greece); Medical Physics Laboratory, Medical School, University of Athens, 75 Mikras Asias, 115 27 Athens (Greece)
2013-01-15
Purpose: To compare TG43-based and Acuros deterministic radiation transport-based calculations of the BrachyVision treatment planning system (TPS) with corresponding Monte Carlo (MC) simulation results in heterogeneous patient geometries, in order to validate Acuros and quantify the accuracy improvement it marks relative to TG43. Methods: Dosimetric comparisons in the form of isodose lines, percentage dose difference maps, and dose volume histogram results were performed for two voxelized mathematical models resembling an esophageal and a breast brachytherapy patient, as well as an actual breast brachytherapy patient model. The mathematical models were converted to digital imaging and communications in medicine (DICOM) image series for input to the TPS. The MCNP5 v.1.40 general-purpose simulation code input files for each model were prepared using information derived from the corresponding DICOM RT exports from the TPS. Results: Comparisons of MC and TG43 results in all models showed significant differences, as reported previously in the literature and expected from the inability of the TG43 based algorithm to account for heterogeneities and model specific scatter conditions. A close agreement was observed between MC and Acuros results in all models except for a limited number of points that lay in the penumbra of perfectly shaped structures in the esophageal model, or at distances very close to the catheters in all models. Conclusions: Acuros marks a significant dosimetry improvement relative to TG43. The assessment of the clinical significance of this accuracy improvement requires further work. Mathematical patient equivalent models and models prepared from actual patient CT series are useful complementary tools in the methodology outlined in this series of works for the benchmarking of any advanced dose calculation algorithm beyond TG43.
TOPICAL REVIEW: Dose calculations for external photon beams in radiotherapy
NASA Astrophysics Data System (ADS)
Ahnesjö, Anders; Mania Aspradakis, Maria
1999-11-01
Dose calculation methods for photon beams are reviewed in the context of radiation therapy treatment planning. Following introductory summaries on photon beam characteristics and clinical requirements on dose calculations, calculation methods are described in order of increasing explicitness of particle transport. The simplest are dose ratio factorizations limited to point dose estimates useful for checking other more general, but also more complex, approaches. Some methods incorporate detailed modelling of scatter dose through differentiation of measured data combined with various integration techniques. State-of-the-art methods based on point or pencil kernels, which are derived through Monte Carlo simulations, to characterize secondary particle transport are presented in some detail. Explicit particle transport methods, such as Monte Carlo, are briefly summarized. The extensive literature on beam characterization and handling of treatment head scatter is reviewed in the context of providing phase space data for kernel based and/or direct Monte Carlo dose calculations. Finally, a brief overview of inverse methods for optimization and dose reconstruction is provided.
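The point-kernel idea the review describes can be reduced to a convolution of the energy released by primary photons (TERMA) with a Monte Carlo derived deposition kernel. A 1-D toy sketch of that convolution; the attenuation coefficient and kernel shape are invented for illustration (real systems use 3-D polyenergetic superposition with density scaling):

```python
import numpy as np

# 1-D toy: dose = TERMA (primary energy released) convolved with a
# normalized point-deposition kernel.
depth = np.arange(0.0, 30.0, 0.1)          # cm, 300 samples
mu = 0.05                                  # 1/cm, toy attenuation coefficient
terma = np.exp(-mu * depth)                # exponentially attenuated primaries
kx = np.arange(-5.0, 5.0, 0.1)             # kernel support, cm
kernel = np.exp(-np.abs(kx) / 0.5)         # toy scatter-spread kernel
kernel /= kernel.sum()                     # conserve energy
dose = np.convolve(terma, kernel, mode="same")
```

Away from the edges, the normalized kernel redistributes but conserves energy, so the computed dose closely tracks the TERMA curve; the interesting physics lives in the kernel's shape.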
Parallelizing Monte Carlo with PMC
Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.
1994-11-01
PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.
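PMC's requirement of independent, reproducible random sequences per processor can be illustrated with NumPy's seed-spawning mechanism; this is a modern stand-in for PMC's own scheme, which the abstract does not detail:

```python
import numpy as np

def worker_streams(master_seed, n_workers):
    """One independent, reproducible random stream per worker: spawn
    non-overlapping child seeds from a single master seed."""
    master = np.random.SeedSequence(master_seed)
    return [np.random.default_rng(child) for child in master.spawn(n_workers)]

# every run with the same master seed reproduces the same per-worker draws
streams = worker_streams(12345, 4)
draws = [rng.random(3) for rng in streams]
```

Deriving all streams from one master seed is what makes a parallel run reproducible regardless of how work is distributed across processors.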
V. D. Rusov; V. A. Tarasov; D. A. Litvinov; S. V. Iaroshenko
2004-01-01
The electron energy spectra, not connected with β-decay, of 235U and 239Pu films irradiated by thermal neutrons, obtained by a Monte Carlo method, are presented in this work. The modelling was performed with the computer code MCNP4C (Monte Carlo Neutron Photon transport code system), which allows computer experiments on the joint transport of neutrons and photons
NASA Astrophysics Data System (ADS)
Dragovitsch, Peter; Linn, Stephan L.; Burbank, Mimi
1994-01-01
The Table of Contents for the book is as follows: * Preface * Heavy Fragment Production for Hadronic Cascade Codes * Monte Carlo Simulations of Space Radiation Environments * Merging Parton Showers with Higher Order QCD Monte Carlos * An Order-αs Two-Photon Background Study for the Intermediate Mass Higgs Boson * GEANT Simulation of Hall C Detector at CEBAF * Monte Carlo Simulations in Radioecology: Chernobyl Experience * UNIMOD2: Monte Carlo Code for Simulation of High Energy Physics Experiments; Some Special Features * Geometrical Efficiency Analysis for the Gamma-Neutron and Gamma-Proton Reactions * GISMO: An Object-Oriented Approach to Particle Transport and Detector Modeling * Role of MPP Granularity in Optimizing Monte Carlo Programming * Status and Future Trends of the GEANT System * The Binary Sectioning Geometry for Monte Carlo Detector Simulation * A Combined HETC-FLUKA Intranuclear Cascade Event Generator * The HARP Nucleon Polarimeter * Simulation and Data Analysis Software for CLAS * TRAP -- An Optical Ray Tracing Program * Solutions of Inverse and Optimization Problems in High Energy and Nuclear Physics Using Inverse Monte Carlo * FLUKA: Hadronic Benchmarks and Applications * Electron-Photon Transport: Always so Good as We Think? Experience with FLUKA * Simulation of Nuclear Effects in High Energy Hadron-Nucleus Collisions * Monte Carlo Simulations of Medium Energy Detectors at COSY Jülich * Complex-Valued Monte Carlo Method and Path Integrals in the Quantum Theory of Localization in Disordered Systems of Scatterers * Radiation Levels at the SSCL Experimental Halls as Obtained Using the CLOR89 Code System * Overview of Matrix Element Methods in Event Generation * Fast Electromagnetic Showers * GEANT Simulation of the RMC Detector at TRIUMF and Neutrino Beams for KAON * Event Display for the CLAS Detector * Monte Carlo Simulation of High Energy Electrons in Toroidal Geometry * GEANT 3.14 vs.
EGS4: A Comparison Using the DØ Uranium/Liquid Argon Calorimeter Geometry * Simulations with EGS4/PRESTA for Thin Si Sampling Calorimeter * SIBERIA -- Monte Carlo Code for Simulation of Hadron-Nuclei Interactions * CALOR89 Predictions for the Hanging File Test Configurations * Estimation of the Multiple Coulomb Scattering Error for Various Numbers of Radiation Lengths * Monte Carlo Generator for Nuclear Fragmentation Induced by Pion Capture * Calculation and Randomization of Hadron-Nucleus Reaction Cross Section * Developments in GEANT Physics * Status of the MC++ Event Generator Toolkit * Theoretical Overview of QCD Event Generators * Random Numbers? * Simulation of the GEM LKr Barrel Calorimeter Using CALOR89 * Recent Improvement of the EGS4 Code, Implementation of Linearly Polarized Photon Scattering * Interior-Flux Simulation in Enclosures with Electron-Emitting Walls * Some Recent Developments in Global Determinations of Parton Distributions * Summary of the Workshop on Simulating Accelerator Radiation Environments * Simulating the SDC Radiation Background and Activation * Applications of Cluster Monte Carlo Method to Lattice Spin Models * PDFLIB: A Library of All Available Parton Density Functions of the Nucleon, the Pion and the Photon and the Corresponding αs Calculations * DTUJET92: Sampling Hadron Production at Supercolliders * A New Model for Hadronic Interactions at Intermediate Energies for the FLUKA Code * Matrix Generator of Pseudo-Random Numbers * The OPAL Monte Carlo Production System * Monte Carlo Simulation of the Microstrip Gas Counter * Inner Detector Simulations in ATLAS * Simulation and Reconstruction in H1 Liquid Argon Calorimetry * Polarization Decomposition of Fluxes and Kinematics in ep Reactions * Towards Object-Oriented GEANT -- ProdiG Project * Parallel Processing of AMY Detector Simulation on Fujitsu AP1000 * Enigma: An Event Generator for Electron-Photon- or Pion-Induced Events in the ~1 GeV Region * SSCSIM: Development and Use by
the Fermilab SDC Group * The GEANT-CALOR Interface
Zeinali-Rafsanjani, B; Mosleh-Shirazi, M A; Faghihi, R; Karbasi, S; Mosalaei, A
2015-01-01
To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduced run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies, including chest-wall irradiation using kilovoltage beams. PMID:26170553
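The first and second half-value layers quoted above can be extracted from a measured transmission curve by log-linear interpolation. A sketch with synthetic two-component (beam-hardening) data; the attenuation coefficients are invented for illustration:

```python
import numpy as np

def hvls(thickness_mm, transmission):
    """First and second half-value layers (mm) from a measured transmission
    curve (monotonically decreasing), via log-linear interpolation."""
    logt = np.log(transmission)
    # np.interp needs increasing x, so reverse the decreasing arrays
    t50 = np.interp(np.log(0.5), logt[::-1], thickness_mm[::-1])
    t25 = np.interp(np.log(0.25), logt[::-1], thickness_mm[::-1])
    return t50, t25 - t50

# synthetic two-component beam: the soft component dies out first,
# so the second HVL exceeds the first (beam hardening), as in the abstract
th = np.linspace(0.0, 40.0, 401)
tr = 0.6 * np.exp(-0.2 * th) + 0.4 * np.exp(-0.05 * th)
hvl1, hvl2 = hvls(th, tr)
```

The second HVL exceeding the first (10.3 vs 3.8 mm Al above) is the signature of a polyenergetic beam hardening as it penetrates the filter.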
NASA Astrophysics Data System (ADS)
Cranmer-Sargison, G.; Weston, S.; Evans, J. A.; Sidhu, N. P.; Thwaites, D. I.
2012-08-01
The goal of this work was to examine the use of simplified diode detector models within a recently proposed Monte Carlo (MC) based small-field dosimetry formalism, and to investigate the influence that electron source parameterization has on MC calculated correction factors. BEAMnrc was used to model Varian 6 MV jaw-collimated square field sizes down to 0.5 cm. The IBA stereotactic field diode (SFD), PTW T60016 (shielded) and PTW T60017 (un-shielded) diodes were modelled in DOSRZnrc, and isocentric output ratios (OR_{det,MC}^{f_clin}) were calculated at depths of d = 1.5, 5.0 and 10.0 cm. Simplified detector models were then tested by evaluating the percent difference in OR_{det,MC}^{f_clin} between the simplified and complete detector models. The influence of active volume dimension on simulated output ratio and response factor was also investigated. The sensitivity of each MC calculated replacement correction factor (k_{Q_clin,Q_msr}^{f_clin,f_msr}) to electron source parameterization was then examined.
Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas
2011-01-01
Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. In order to further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in the bone and lung regions. The average differences of depth dose profiles between MC and AXB, AAA, or CCC were within 1.1, 4.4, and 2.2%, respectively, for all fields and energies. More specifically, the differences in the bone region were up to 1.1, 6.4, and 1.6%, and in the lung region up to 0.9, 11.6, and 4.5% for AXB, AAA, and CCC, respectively. AXB was also found to give better dose predictions than AAA and CCC at tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%/2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes at 6 MV and in the lung region for most field sizes at both energies.
The difference between AXB and CCC was generally small (over 90% passed), except in the lung region for the 18 MV 10 × 10 cm2 field (26% passed) and in the bone region for the 5 × 5 and 10 × 10 cm2 fields (64% passed). With the criterion relaxed to 5%/2 mm, the pass rates were over 90% for both AAA and CCC relative to AXB for all energies and fields, with the exception of the AAA 18 MV 2.5 × 2.5 cm2 field, which still did not pass. Conclusions: In heterogeneous media, the AXB dose prediction ability appears to be comparable to MC and superior to current clinical convolution methods. The dose differences between AXB and AAA or CCC are mainly in the bone, lung, and interface regions. The spatial distributions of these differences depend on the field sizes and energies. PMID:21776802
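The gamma index used in these comparisons combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A brute-force 1-D sketch of the concept (the study itself used 3-D analysis):

```python
import numpy as np

def gamma_1d(x_mm, dose_ref, dose_eval, dd=0.02, dta_mm=2.0):
    """Global 1-D gamma index (after Low et al.): for each reference point,
    the minimum over evaluated points of the combined dose-difference /
    distance-to-agreement metric. dd is a fraction of the reference maximum."""
    dnorm = dd * dose_ref.max()
    gam = np.empty_like(dose_ref, dtype=float)
    for i, (xi, di) in enumerate(zip(x_mm, dose_ref)):
        dose_term = (dose_eval - di) / dnorm
        dist_term = (x_mm - xi) / dta_mm
        gam[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gam
```

A point passes when gamma ≤ 1; the pass rates quoted above are the fraction of voxels meeting that condition under the stated criteria.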
Monte Carlo-based dose-rate tables for the Amersham CDCS.J and 3M model 6500 137Cs tubes
Jeffrey F. Williamson
1998-01-01
Purpose: (1) To present reference-quality dose-rate distributions for the Amersham CDCS.J-type 137Cs intracavitary source (hitherto unavailable in the literature) and updated tables for the 3M model 6500/6D6C source. (2) To assess the accuracy of the widely used 1D pathlength (Sievert integral) algorithm for lightly filtered 137Cs tube sources. Methods and Materials: A Monte Carlo photon-transport code is used to calculate the
Klaus Tatsch; Johannes Schwarz; P. David Mozley; Rainer Linke; Oliver Pogarell; Wolfgang H. Oertel; Ruth S. Fieber; Klaus Hahn; Hank F. Kung
1997-01-01
IPT [N-(3-iodopropen-2-yl)-2β-carbomethoxy-3β-(4-chlorophenyl)tropane] is a new cocaine analogue which allows the presynaptic dopamine transporters to be imaged with single-photon emission tomography (SPET) as early as 1-2 h post injection. In the present study, [123I]IPT SPET was performed in patients with Parkinson's disease (PD) to analyse the relationship between specific dopamine transporter binding and clinical features of the disease. Twenty-six PD patients
Monte Carlo Application ToolKit (MCATK)
NASA Astrophysics Data System (ADS)
Adams, Terry; Nolen, Steve; Sweezy, Jeremy; Zukaitis, Anthony; Campbell, Joann; Goorley, Tim; Greene, Simon; Aulwes, Rob
2014-06-01
The Monte Carlo Application ToolKit (MCATK) is a component-based software library designed to build specialized applications and to provide new functionality for existing general-purpose Monte Carlo radiation transport codes. We describe MCATK and its capabilities, along with some verification and validation results.
Multivariate Monte Carlo Model Fitting
NASA Astrophysics Data System (ADS)
Peterson, J. R.; Jernigan, J. G.; Kahn, S. M.
2000-05-01
We present a new method for analyzing multi-dimensional data. The method uses a Monte Carlo model of the astrophysical source and instrument response to simulate photons and then iteratively analyzes the data: the simulated photons are compared directly with the measured data using a new multivariate generalization of the Cramér-von Mises and Kolmogorov-Smirnov statistics. Techniques for model fitting, error estimation, and deconvolution using this method are discussed. Examples of this approach using Chandra observations of X-ray clusters of galaxies and XMM-Newton Reflection Grating Spectrometer data are presented.
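The one-dimensional ancestor of the statistic described here is the two-sample Kolmogorov-Smirnov distance between empirical CDFs; the paper's contribution is its multivariate generalization. A sketch of the 1-D building block:

```python
import numpy as np

def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the empirical CDFs of samples a and b."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.abs(cdf_a - cdf_b).max())
```

Comparing simulated and observed photon lists directly with such a statistic avoids binning the data, which is the appeal of this event-based approach.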
Monte Carlo Analysis of Pion Contribution to Absorbed Dose from Galactic Cosmic Rays
NASA Technical Reports Server (NTRS)
Aghara, S.K.; Blattnig, S.R.; Norbury, J.W.; Singleterry, R.C.
2009-01-01
Accurate knowledge of the physics of interaction, particle production and transport is necessary to estimate the radiation damage to equipment used on spacecraft and the biological effects of space radiation. For long duration astronaut missions, both on the International Space Station and the planned manned missions to Moon and Mars, the shielding strategy must include a comprehensive knowledge of the secondary radiation environment. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. Galactic cosmic rays (GCR) comprised of protons and heavier nuclei have energies from a few MeV per nucleon to the ZeV region, with the spectra reaching flux maxima in the hundreds of MeV range. Therefore, the MeV - GeV region is most important for space radiation. Coincidentally, the pion production energy threshold is about 280 MeV. The question naturally arises as to how important these particles are with respect to space radiation problems. The space radiation transport code, HZETRN (High charge (Z) and Energy TRaNsport), currently used by NASA, performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. In this paper, we present results from the Monte Carlo code MCNPX (Monte Carlo N-Particle eXtended), showing the effect of leptons and mesons when they are produced and transported in a GCR environment.
Su, Lin; Yang, Youming; Bednarz, Bryan; Sterpin, Edmond; Du, Xining; Liu, Tianyu; Ji, Wei; Xu, X. George
2014-01-01
Purpose: Using the graphical processing units (GPU) hardware technology, an extremely fast Monte Carlo (MC) code ARCHERRT is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head & neck. Methods: To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHERRT. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHERRT and the general-purpose code, GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of the CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. Results: For the water phantom, the depth dose curve and dose profiles from ARCHERRT agree well with DOSXYZnrc. For clinical cases, results from ARCHERRT are compared with those from GEANT4 and good agreement is observed. Gamma index test is performed for voxels whose dose is greater than 10% of maximum dose. For 2%/2mm criteria, the passing rates for the prostate, lung case, and head & neck cases are 99.7%, 98.5%, and 97.2%, respectively. 
Due to the specific architecture of the GPU, the modified Woodcock tracking algorithm performed worse than the original one. ARCHERRT achieves a fast speed for PSF-based dose calculations. With a single M2090 card, the simulations took about 60, 50, and 80 s for the three cases, respectively, with 1% statistical error in the PTV. Using the latest K40 card, the simulations are 1.7-1.8 times faster. More impressively, six M2090 cards could finish the simulations in 8.9-13.4 s. For comparison, the same simulations on an Intel E5-2620 (12 hyperthreads) took about 500-800 s. Conclusions: ARCHERRT was developed successfully to perform fast and accurate MC dose calculation for radiotherapy using PSFs and patient CT phantoms. PMID:24989378
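Woodcock (delta) tracking, the algorithm compared above, removes the need to compute voxel-boundary crossings by flying particles with a majorant cross section and rejecting "virtual" collisions. A scalar Python sketch of the idea (the GPU versions differ in memory layout and parallelization, not in this logic):

```python
import math, random

def woodcock_distance(cells, mu_max, rng=random.random):
    """Sample a collision site along a ray through a 1-D voxel row with
    Woodcock (delta-scattering) tracking: fly with the majorant cross section
    mu_max, then accept a collision with probability mu_local / mu_max.
    cells: list of (length_cm, mu_cm) pairs. Returns the collision distance,
    or None if the ray escapes the row."""
    pos = 0.0
    total = sum(length for length, _ in cells)
    while True:
        pos += -math.log(rng()) / mu_max        # tentative flight
        if pos >= total:
            return None                         # escaped the row
        s, mu = 0.0, 0.0                        # find local cross section
        for length, m in cells:
            s += length
            if pos < s:
                mu = m
                break
        if rng() < mu / mu_max:                 # real collision
            return pos
        # otherwise a virtual collision: keep flying with the same majorant
```

The sampled free-path distribution is independent of the majorant's value (only the rejection rate changes), which is why the technique is exact and maps well onto SIMD hardware.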
Nuclear data processing for energy release and deposition calculations in the MC21 Monte Carlo code
Trumbull, T. H. [Knolls Atomic Power Laboratory, PO Box 1072, Schenectady, NY 12301 (United States)
2013-07-01
With the recent emphasis on performing multiphysics calculations using Monte Carlo transport codes such as MC21, the need for accurate estimates of the energy deposition, and of the subsequent heating, has increased. However, the availability and quality of the data necessary to enable accurate neutron and photon energy deposition calculations can be an issue. A comprehensive method for handling the nuclear data required for energy deposition calculations in MC21 has been developed using the NDEX nuclear data processing system and leveraging the capabilities of NJOY. The method provides a collection of data to the MC21 Monte Carlo code supporting the computation of a wide variety of energy release and deposition tallies while also allowing calculations with different levels of fidelity to be performed. Detailed discussions of the usage of the various components of the energy release data are provided to demonstrate novel methods of borrowing photon production data, correcting for negative energy release quantities, and adjusting Q values when necessary to preserve energy balance. Since energy deposition within a reactor is a result of both neutron and photon interactions with materials, a discussion of the photon energy deposition data processing is also provided. (authors)
Monte Carlo methods Sequential Monte Carlo
Doucet, Arnaud
Lecture slides: A. Doucet, "Sequential Monte Carlo", Carcans, Sept. 2011 (85 slides). The opening "Generic Problem" slide considers a sequence of probability distributions to be approximated sequentially.
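A bootstrap particle filter is the canonical sequential Monte Carlo example such lectures build on: propagate particles through the transition kernel, reweight by the observation density, resample. A minimal sketch for a linear-Gaussian toy model (the model and parameters are illustrative, not taken from the slides):

```python
import numpy as np

def bootstrap_filter(obs, n_particles=1000, sigma_x=1.0, sigma_y=0.5, seed=0):
    """Minimal bootstrap particle filter for the toy state-space model
    x_t = x_{t-1} + N(0, sigma_x^2), y_t = x_t + N(0, sigma_y^2).
    Returns the filtering means E[x_t | y_1..t]."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_x, n_particles)   # particles from the prior
    means = []
    for y in obs:
        # importance weights from the Gaussian observation density
        logw = -0.5 * ((y - x) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))
        # resample, then propagate through the transition kernel
        x = rng.choice(x, size=n_particles, p=w)
        x = x + rng.normal(0.0, sigma_x, n_particles)
    return means
```

Resampling at every step is the defining simplification of the bootstrap filter; adaptive resampling and better proposals are the usual refinements.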
NASA Astrophysics Data System (ADS)
Maidana, N. L.; Brualla, L.; Vanin, V. R.; Oliveira, J. R. B.; Rizzutto, M. A.; do Nascimento, E.; Fernández-Varea, J. M.
2010-04-01
The triple- and quadruple-escape peaks of 6.128 MeV photons from the 19F(p,αγ)16O nuclear reaction were observed in an HPGe detector. The experimental peak areas, measured in spectra projected with a restriction function that allows quantitative comparison of data from different multiplicities, are in reasonably good agreement with those predicted by Monte Carlo simulations done with the general-purpose radiation-transport code PENELOPE. The behaviour of the escape intensities was simulated for some gamma-ray energies and detector dimensions; the results obtained can be extended to other energies using an empirical function and statistical properties related to the phenomenon.
SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output
Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.
2011-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297
Discrete Diffusion Monte Carlo for grey Implicit Monte Carlo simulations.
Densmore, J. D. (Jeffery D.); Urbatsch, T. J. (Todd J.); Evans, T. M. (Thomas M.); Buksas, M. W. (Michael W.)
2005-01-01
Discrete Diffusion Monte Carlo (DDMC) is a hybrid transport-diffusion method for Monte Carlo simulations in diffusive media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Thus, DDMC produces accurate solutions while increasing the efficiency of the Monte Carlo calculation. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for grey Implicit Monte Carlo calculations. First, we employ a diffusion equation that is discretized in space but continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. In addition, we treat particles incident on an optically thick region using the asymptotic diffusion-limit boundary condition. This interface technique can produce accurate solutions even if the incident particles are distributed anisotropically in angle. Finally, we develop a method for estimating radiation momentum deposition during the DDMC simulation. With a set of numerical examples, we demonstrate the accuracy and efficiency of our improved DDMC method.
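The cell-to-cell hopping that DDMC substitutes for many individual diffusive flights can be illustrated with a deliberately simplified 1-D sketch. This is only a toy: the equal left/right hop probabilities, the flat per-event absorption probability, and the reflecting boundaries are illustrative assumptions, whereas a real DDMC code derives the leakage and absorption probabilities from the discretized diffusion equation in each cell.

```python
import random

def ddmc_step(cell, n_cells, p_absorb=0.05):
    """One toy DDMC-style event: absorb in the current cell, or hop to a
    neighbouring cell (reflecting boundaries). Returns the new cell index,
    or None if the particle was absorbed."""
    if random.random() < p_absorb:
        return None  # particle absorbed in this cell
    if random.random() < 0.5:  # hop left or right with equal probability
        return max(cell - 1, 0)
    return min(cell + 1, n_cells - 1)

def transport(n_particles=2000, n_cells=10, seed=1):
    """Run all particles from a source in the central cell until absorption
    and tally where each one is absorbed."""
    random.seed(seed)
    absorbed = [0] * n_cells
    for _ in range(n_particles):
        cell = n_cells // 2  # source in the central cell
        while cell is not None:
            prev = cell
            cell = ddmc_step(prev, n_cells)
        absorbed[prev] += 1
    return absorbed
```

Each particle costs only a handful of cell hops, which is the efficiency gain the abstract describes: in an optically thick cell a standard Monte Carlo history would instead consist of very many tiny flights.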
1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO
T. EVANS; ET AL
2000-08-01
We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
Omigawa, Yu; Yamamoto, Naokatsu; Kanno, Atsushi; Kawanishi, Tetsuya; Kurata, Yasuaki; Sotobayashi, Hideyuki
2012-07-01
Polarization division multiplexing (PDM) and wavelength division multiplexing (WDM) are essential techniques for enhancing the capacity of photonic networks and facilitating the efficient use of optical frequency resources. 2 PDM × 2 WDM × 10 Gbps error-free simultaneous transmissions in the 1.0-µm waveband and C-waveband are successfully demonstrated for the first time using an ultra-broadband photonic transport system over a 14.4-km-long holey fiber transmission line. PMID:22772181
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Burns, Kimberly A.
2009-08-01
The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples.
NASA Technical Reports Server (NTRS)
1976-01-01
The program called CTRANS is described which was designed to perform radiative transfer computations in an atmosphere with horizontal inhomogeneities (clouds). Since the atmosphere-ground system was to be richly detailed, the Monte Carlo method was employed. This means that results are obtained through direct modeling of the physical process of radiative transport. The effects of atmospheric or ground albedo pattern detail are essentially built up from their impact upon the transport of individual photons. The CTRANS program actually tracks the photons backwards through the atmosphere, initiating them at a receiver and following them backwards along their path to the Sun. The pattern of incident photons generated through backwards tracking automatically reflects the importance to the receiver of each region of the sky. Further, through backwards tracking, the impact of the finite field of view of the receiver and variations in its response over the field of view can be directly simulated.
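The backwards-tracking idea, initiating photons at the receiver and following them toward the Sun, can be illustrated for the direct (unscattered) beam alone with a toy 2-D cloud grid. The grid values, the one-cell-per-layer geometry, and the nearest-cell lookup are illustrative assumptions; CTRANS of course also samples full scattered paths rather than a single straight ray.

```python
import math

# Toy 2-D extinction field: one row per altitude layer (top to bottom),
# one column per horizontal cell; values are vertical optical depths per
# cell (made-up numbers standing in for a horizontally inhomogeneous cloud).
TAU_GRID = [
    [0.0, 0.0, 0.0, 0.0],
    [0.1, 0.8, 0.8, 0.1],
    [0.1, 0.1, 0.1, 0.1],
]

def backward_direct_transmission(receiver_col, sun_zenith_deg):
    """Trace one ray backwards from a ground receiver toward the Sun,
    accumulating slant optical depth layer by layer (the horizontal offset
    per layer is tan(zenith) in units of one cell width)."""
    mu = math.cos(math.radians(sun_zenith_deg))
    shift = math.tan(math.radians(sun_zenith_deg))
    tau = 0.0
    col = float(receiver_col)
    for layer in reversed(TAU_GRID):  # walk upwards from the ground
        c = min(max(int(round(col)), 0), len(layer) - 1)
        tau += layer[c] / mu  # convert vertical optical depth to slant path
        col += shift
    return math.exp(-tau)
```

Because the ray starts at the receiver, only the cloud cells that actually lie between the receiver and the Sun contribute, which is the importance-weighting property the abstract describes.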
New model for dwelling dose calculation using Monte Carlo integration.
Allam, K A
2009-02-01
A new methodology and computer model using Monte Carlo simulation for indoor dose calculation are developed. A room model of six rectangular slabs of finite thickness, with a door or window in each slab, was used. A point-kernel photon transport model with self-absorption correction was applied for the dose calculations. New software was designed and programmed in the Pascal programming language and evaluated for a standard room design. The calculated dose due to natural radionuclides in the concrete walls differs from the average model results by 0.21% for (238)U, 12.3% for (232)Th and 13.9% for (40)K; the variability of the specific dose rate with changing position, density and composition of the walls was also studied. The new model has more flexibility for realistic dose calculation of any room structure and tailing, which is not given in the published models. PMID:19287012
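A point-kernel estimate of this kind reduces to a Monte Carlo volume integral over each wall slab. A minimal sketch for a single slab follows; the slab dimensions, source strength, and attenuation coefficient are made-up illustrative numbers, and build-up factors are ignored.

```python
import math
import random

def mc_wall_dose(mu_wall=0.2, sv=1.0, n=20000, seed=7):
    """Monte Carlo integration of the point kernel sv*exp(-mu*d)/(4*pi*r^2)
    over one rectangular wall slab, with self-absorption along the chord the
    ray travels inside the slab. Detector at the origin; the slab occupies
    x in [2.0, 2.1], y and z in [-2, 2] (arbitrary toy geometry, metres)."""
    rng = random.Random(seed)
    volume = 0.1 * 4.0 * 4.0  # slab volume for the MC volume-integral weight
    total = 0.0
    for _ in range(n):
        x = rng.uniform(2.0, 2.1)
        y = rng.uniform(-2.0, 2.0)
        z = rng.uniform(-2.0, 2.0)
        r = math.sqrt(x * x + y * y + z * z)
        depth = r * (x - 2.0) / x  # chord length inside the slab (by similar triangles)
        total += sv * math.exp(-mu_wall * depth) / (4.0 * math.pi * r * r)
    return volume * total / n
```

Summing this estimate over six slabs (plus the door/window cut-outs) gives the room dose; setting `mu_wall` to zero recovers the unattenuated point-kernel integral, so the self-absorption correction always lowers the result.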
Monte Carlo simulation of coherent effects in multiple scattering
Igor V. Meglinski; Vladimir L. Kuzmin; Dmitry Y. Churmakov
2005-01-01
Using a combination of the stochastic Monte Carlo technique and the iteration procedure of the solution to the Bethe-Salpeter equation, it has been shown that the simulation of the optical path of a photon packet undergoing an nth scattering event directly corresponds to the nth-order ladder diagram contribution. In this paper, the Monte Carlo technique is generalized for the simulation
McSKY: A hybrid Monte-Carlo line-beam code for shielded gamma skyshine calculations
Shultis, J.K.; Faw, R.E.; Stedry, M.H. [Kansas State Univ., Manhattan, KS (United States). Dept. of Nuclear Engineering; Hall, W. [Kansas State Univ., Manhattan, KS (United States)
1994-07-01
McSKY evaluates skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygon cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte-Carlo algorithm to evaluate transport through the source shields and an integral line-beam source to describe photon transport through the atmosphere. The source energy must be between 0.02 and 100 MeV. For heavily shielded sources with energies above 20 MeV, McSKY results must be used cautiously, especially at detector locations near the source.
Analytical model of the binary multileaf collimator of tomotherapy for Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Sterpin, E.; Salvat, F.; Olivera, G. H.; Vynckier, S.
2008-02-01
Helical Tomotherapy (HT) delivers intensity-modulated radiotherapy by means of many configurations of the binary multi-leaf collimator (MLC). The aim of the present study was to devise a method, which we call the 'transfer function' (TF) method, to perform the transport of particles through the MLC much faster than the time-consuming Monte Carlo (MC) simulation and with no significant loss of accuracy. The TF method consists of calculating, for each photon in the phase-space file, the attenuation factor for each leaf (up to three) that the photon passes, assuming straight propagation through closed leaves, and storing these factors in a modified phase-space file. To account for the transport through the MLC in a given configuration, the weight of a photon is simply multiplied by the attenuation factors of the leaves that are intersected by the photon ray and are closed. The TF method was combined with the PENELOPE MC code, and validated with measurements for the three static field sizes available (40×5, 40×2.5 and 40×1 cm2) and for some MLC patterns. The TF method allows a large reduction in computation time without introducing appreciable deviations from the result of full MC simulations.
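The weight update at the core of the TF method is just a product of precomputed per-leaf attenuation factors. A minimal sketch follows; the data layout, a list of (leaf index, attenuation factor) pairs stored with each phase-space photon, is a hypothetical stand-in for the modified phase-space file described above.

```python
import math

def leaf_attenuation(mu, path_length):
    """Attenuation factor for straight propagation through one closed leaf
    (exponential attenuation along the intersected chord)."""
    return math.exp(-mu * path_length)

def apply_tf(photon_weight, intersected_leaves, leaf_open):
    """Multiply a phase-space photon's weight by the precomputed attenuation
    factors of every closed leaf its ray intersects.

    intersected_leaves: list of (leaf_index, attenuation_factor) pairs,
    assumed to have been stored with the photon at phase-space time.
    leaf_open: per-leaf booleans for the current MLC configuration.
    """
    w = photon_weight
    for idx, att in intersected_leaves:
        if not leaf_open[idx]:
            w *= att
    return w
```

Because the factors are computed once per photon and reused for every MLC configuration, transporting a new leaf pattern costs only a few multiplications per photon instead of a fresh MC simulation through the collimator.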
NASA Astrophysics Data System (ADS)
Bousis, C.; Emfietzoglou, D.; Hadjidoukas, P.; Nikjoo, H.
2008-07-01
Monte Carlo transport calculations of dose point kernels (DPKs) and depth dose profiles (DDPs) in both the vapor and liquid phases of water are presented for electrons with initial energy between 10 keV and 1 MeV. The results are obtained by the MC4 code using three different implementations of the condensed-history technique for inelastic collisions, namely the continuous slowing down approximation, mixed simulation with δ-ray transport, and the addition of straggling distributions for soft collisions derived from accurate relativistic Born cross sections. In all schemes, elastic collisions are simulated individually based on single-scattering cross sections. Electron transport below 10 keV is performed in an event-by-event mode. Differences in inelastic interactions between the vapor and liquid phases are treated explicitly using our recently developed dielectric response function, which is supplemented by relativistic corrections and the transverse contribution. On the whole, the interaction coefficients used agree to better than ~5% with NIST/ICRU values. It is shown that condensed-phase effects in both DPKs and DDPs practically vanish above 100 keV. The effect of δ-rays, although it decreases with energy, is sizeable, leading to more diffused distributions, especially for DPKs. The addition of straggling for soft collisions is practically inconsequential above a few hundred keV. An extensive benchmarking with other condensed-history codes is provided.
Muzafarova, S. A., E-mail: samusu@rambler.ru; Mirsagatov, S. A., E-mail: mirsagatov@rambler.ru; Dzhamalov, F. N. [Academy of Sciences of the Republic of Uzbekistan, Physicotechnical Institute, Researh-and-Production Association Sun Physics (Uzbekistan)
2009-02-15
Effect of irradiation with {gamma}-ray photons on the mechanism of charge transport in an n-CdS/p-CdTe heterostructure is considered. It is shown that the forward current-voltage characteristic of an n-CdS/p-CdTe heterostructure before and after irradiation is described by two exponential dependences: I = I{sub 01}exp(qV/C{sub 01}kT) and I = I{sub 02}exp(qV/C{sub 02}kT). It is found that, in the first portion of the current-voltage characteristic, the current is limited by thermoelectronic emission while, in the second portion, the current is limited by recombination of nonequilibrium charge carriers in the electrically neutral portion of a CdTe{sub 1-x}S{sub x} alloy at the n-CdS/p-CdTe heteroboundary. Anomalous dose dependences of parameters of the n-CdS/p-CdTe heterosystem are attributed to a variation in the degree of compensation of local centers at the CdS-CdTe{sub 1-x}S{sub x} interface and in the CdTe{sub 1-x}S{sub x} layers in relation to the dose of irradiation with {gamma}-ray photons.
NASA Astrophysics Data System (ADS)
Androsenko, P. A.; Kolganov, K. M.; Mogulyan, V. G.
2012-12-01
An approach to estimating the uncertainty of initial data in calculations by the Monte Carlo method is considered. The relative geometrical position of parts of the analyzed system is assumed to be unknown. The influence of different approximations in the description of the geometrical shape of system objects is studied. The effect of unknown location and approximate shape description of solid radioactive waste in the container on the magnitude of dose fields is considered for photon transport problems.
Abramov, B M; Borodin, Yu A; Bulychjov, S A; Dukhovskoy, I A; Krutenkova, A P; Kulikov, V V; Martemianov, M A; Matsyuk, M A; Turdakina, E N; Khanov, A I; Mashnik, S G
2015-01-01
Momentum spectra of hydrogen isotopes have been measured at 3.5 deg from C12 fragmentation on a Be target. Momentum spectra cover both the region of fragmentation maximum and the cumulative region. Differential cross sections span five orders of magnitude. The data are compared to predictions of four Monte Carlo codes: QMD, LAQGSM, BC, and INCL++. There are large differences between the data and predictions of some models in the high momentum region. The INCL++ code gives the best and almost perfect description of the data.
Monte Carlo simulation for radiative kaon decays
C. Gatti
2005-07-26
For high-precision measurements of K decays, the presence of radiated photons cannot be neglected. The Monte Carlo simulations must include radiative corrections in order to compute event counts and efficiencies correctly. In this paper we briefly describe a method for simulating such decays.
Monte Carlo analysis of CLAS data
L. Del Debbio; A. Guffanti; A. Piccione
2008-06-30
We present a fit of the virtual-photon scattering asymmetry of polarized Deep Inelastic Scattering which combines a Monte Carlo technique with the use of a redundant parametrization based on Neural Networks. We apply the result to the analysis of CLAS data on a polarized proton target.
De Biase, Pablo M; Markosyan, Suren; Noskov, Sergei
2015-02-01
The transport of ions and solutes by biological pores is central to cellular processes and has a variety of applications in modern biotechnology. The time scales involved in polymer transport across a nanopore are beyond the reach of conventional MD simulations. Moreover, experimental studies lack sufficient resolution to provide details on the molecular underpinning of the transport mechanisms. BROMOC, the code presented herein, performs Brownian dynamics simulations, both serial and parallel, up to several milliseconds long. BROMOC can be used to model large biological systems. The IMC-MACRO software allows for the development of effective potentials for solute-ion interactions based on radial distribution functions from all-atom MD. The BROMOC Suite also provides a versatile set of tools for a wide variety of preprocessing and post-simulation analysis. We illustrate a potential application with ion and ssDNA transport in the MspA nanopore. PMID:25503688
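The elementary move in a Brownian-dynamics code of this kind is an overdamped Euler-Maruyama step. A minimal 1-D sketch follows; the time step, diffusion coefficient, and force-free test case are illustrative choices, and BROMOC itself works in 3-D with effective solute-ion potentials rather than a user-supplied force function.

```python
import math
import random

def bd_step(x, force, dt=0.01, diff=1.0, kT=1.0, rng=random):
    """One overdamped Euler-Maruyama step: drift from the systematic force
    plus a Gaussian random displacement with variance 2*D*dt."""
    drift = (diff / kT) * force(x) * dt
    noise = math.sqrt(2.0 * diff * dt) * rng.gauss(0.0, 1.0)
    return x + drift + noise

def free_diffusion_msd(n_walkers=2000, n_steps=50, dt=0.01, seed=3):
    """Mean-square displacement of force-free walkers; should approach
    2*D*t with t = n_steps*dt (here 2*1.0*0.5 = 1.0)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x = bd_step(x, lambda _x: 0.0, dt=dt, rng=rng)
        total += x * x
    return total / n_walkers
```

Because solvent degrees of freedom are replaced by the friction and noise terms, time steps can be orders of magnitude larger than in all-atom MD, which is how millisecond trajectories become reachable.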
The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code
T. M. Sutton; F. B. Brown; F. G. Bischoff; D. B. MacMillan; C. L. Ellis; J. T. Ward; C. T. Ballinger; D. J. Kelly; L. Schindler
1999-01-01
This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various
Dietz, D.
1985-04-01
The physics of the three-dimensional, Monte Carlo, general combinatorial geometry High Energy Transport Code (HETC)/Light Heavy Ion (LHI)/Spallation Gamma Source (SGS) radiation transport code system, used to transport neutrons, protons, pions, muons, photons, deuterons, tritons, He3's, alpha particles, and other light heavy nuclei with 5 < or = A < or = 10 through complex materials and geometries, is discussed at an introductory level. The term system is used because HETC/LHI/SGS is really a collection of three and one-half separate codes linked together: (1) HET, which transports high-energy neutrons (approx. > 15 MeV), protons, pions, and muons; (2) SGS, which calculates the energies of the gamma-ray photons emitted in the final de-excitation of the excited residual nucleus remaining after a non-elastic nucleon-nucleus or pion-nucleus collision in HET (note SGS is not itself a transport code); (3) MORSE, which transports low-energy neutrons (approximately < 15 MeV) and does all photon transport; and (3 1/2) LHI, an addition to HET to allow the partial transport of high-energy light heavy projectiles with 2 < or = A < or = 10. The MORSE code is not discussed.
Dirac tensor with heavy photon
Bytev, V. V.; Kuraev, E. A., E-mail: kuraev@theor.jinr.ru [Joint Institute for Nuclear Research, Bogoliubov Laboratory of Theoretical Physics (Russian Federation); Scherbakova, E. S., E-mail: scherbak@mail.desy.de [Hamburg University (Germany)
2013-03-15
For the large-angle hard-photon emission by initial leptons in the process of high-energy annihilation of e{sup +}e{sup -} to hadrons, the Dirac tensor is obtained by taking the lowest-order radiative corrections into account. The case of large-angle emission of two hard photons by initial leptons is considered. In the final result, the kinematic case of collinear emission of hard photons and soft virtual and real photons is included; it can be used for the construction of Monte-Carlo generators.
Yukito Iba
2001-01-01
``Extended Ensemble Monte Carlo'' is a generic term that indicates a set of algorithms, which are now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chain, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo) and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here, we give a cross-disciplinary
Graduiertenschule Hybrid Monte Carlo
Heermann, Dieter W.
Lecture notes from the Graduiertenschule course "Hybrid Monte Carlo", SS 2005, D. W. Heermann, Universität Heidelberg. The notes contrast conventional Monte-Carlo (MC) calculations of condensed matter systems, such as an N-particle system, with molecular-dynamics calculations, which sample a different probability distribution than Monte-Carlo calculations, and describe how the Hybrid Monte-Carlo (HMC) method combines the two approaches.
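For reference, the basic HMC update is a leapfrog trajectory followed by a Metropolis accept/reject on the change in total energy. The sketch below applies it to a standard-normal target (U(q) = q²/2, so dU/dq = q); the step size, trajectory length, and 1-D target are illustrative choices, not anything from the course notes.

```python
import math
import random

def hmc_sample(n=2000, eps=0.2, L=10, seed=0):
    """Hybrid Monte Carlo for a standard-normal target. Each iteration draws
    a fresh momentum, integrates Hamiltonian dynamics with the leapfrog
    scheme, and accepts or rejects on the total-energy change."""
    random.seed(seed)
    q = 0.0
    samples = []
    for _ in range(n):
        p = random.gauss(0.0, 1.0)
        q_new, p_new = q, p
        p_new -= 0.5 * eps * q_new          # initial half momentum step
        for _ in range(L - 1):
            q_new += eps * p_new            # full position step
            p_new -= eps * q_new            # full momentum step
        q_new += eps * p_new
        p_new -= 0.5 * eps * q_new          # final half momentum step
        h_old = 0.5 * q * q + 0.5 * p * p
        h_new = 0.5 * q_new * q_new + 0.5 * p_new * p_new
        if random.random() < math.exp(min(0.0, h_old - h_new)):
            q = q_new                       # Metropolis accept
        samples.append(q)
    return samples
```

The leapfrog integrator is reversible and volume-preserving, so the accept/reject step exactly corrects its discretization error; that combination of a dynamics-driven proposal with a Monte-Carlo acceptance test is what "hybrid" refers to.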
Burns, T.D. Jr.
1996-05-01
The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as providing a method for neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power-density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP), and the BeamPort Shell Particle Transport Model (BSPT). The CASP Model incorporates the S({alpha}, {beta}) thermal treatment, and is run as a criticality problem, yielding the system eigenvalue (k{sub eff}), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT Model. The BSPT Model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate characteristics necessary for assessing the BNCT potential of the given filter arrangement. Examples of the characteristics to be calculated are: neutron fluxes, neutron currents, fast neutron KERMAs and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.
National photonics skills standards for technicians
Darrell M. Hull
1995-01-01
Photonics is defined as the generation, manipulation, transport, detection, and use of light information and energy whose quantum unit is the photon. The range of applications of photonics extends from energy generation to detection to communication and information processing. Photonics is at the heart of today's communication systems, from the laser that generates the digital information transported along a fiber-
NASA Astrophysics Data System (ADS)
Üpping, J.; Bielawny, A.; Lee, S.; Knez, M.; Carius, R.; Wehrspohn, R. B.
2009-08-01
The progress of 3D photonic intermediate reflectors for micromorph silicon tandem cells towards a first prototype cell is presented. Intermediate reflectors enhance the absorption of spectrally-selected light in the top cell and decrease the current mismatch between both junctions. A numerical method to predict filter properties for optimal current matching is presented. Our device is an inverted opal structure made of ZnO and fabricated using self-organized nanoparticles and atomic layer deposition for conformal coating. In particular, the influence of ZnO-doping and replicated cracks during drying of the opal is discussed with respect to conductivity and optical properties. A first prototype is compared to a state-of-the-art reference cell.
National Photonics Skills Standard for Technicians.
ERIC Educational Resources Information Center
Center for Occupational Research and Development, Inc., Waco, TX.
This document defines "photonics" as the generation, manipulation, transport, detection, and use of light information and energy whose quantum unit is the photon. The range of applications of photonics extends from energy generation to detection to communication and information processing. Photonics is at the heart of today's communication…
Application of Monte Carlo methods in tomotherapy and radiation biophysics
NASA Astrophysics Data System (ADS)
Hsiao, Ya-Yun
Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The MC-calculated percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University).
Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published experimental and theoretical studies for 60Co gamma-rays and low-energy x-rays. The reported studies provide new information about the potential biological consequences of diagnostic x-rays and selected gamma-emitting radioisotopes used in brachytherapy for the treatment of cancer. The proposed methodology is computationally efficient and may also be useful in proton therapy, space applications or internal dosimetry.
Burke, D.L.
1982-10-01
Studies of photon-photon collisions are reviewed with particular emphasis on new results reported to this conference. These include results on light meson spectroscopy and deep inelastic eγ scattering. Considerable work has now been accumulated on resonance production by γγ collisions. Preliminary high-statistics studies of the photon structure function F{sub 2}{sup γ}(x,Q{sup 2}) are given and comments are made on the problems that remain to be solved.
Ex Post Facto Monte Carlo Variance Reduction
Booth, Thomas E. [Los Alamos National Laboratory (United States)
2004-11-15
The variance in Monte Carlo particle transport calculations is often dominated by a few particles whose importance increases manyfold on a single transport step. This paper describes a novel variance reduction method that uses a large importance change as a trigger to resample the offending transport step. That is, the method is employed only after (ex post facto) a random walk attempts a transport step that would otherwise introduce a large variance in the calculation. Improvements in two Monte Carlo transport calculations are demonstrated empirically using an ex post facto method. First, the method is shown to reduce the variance in a penetration problem with a cross-section window. Second, the method empirically appears to modify a point detector estimator from an infinite variance estimator to a finite variance estimator.
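The trigger-and-resample idea can be sketched generically. The fixed splitting count, the deterministic trigger threshold, and the step-function interface below are hypothetical simplifications of the method described above, meant only to show the control flow: a step is kept as-is unless its importance jump exceeds the trigger, in which case it is replayed as several lower-weight copies.

```python
def resample_step(step_fn, state, trigger=10.0, n_split=8):
    """Ex-post-facto style resampling of one transport step (toy sketch).

    step_fn(state) -> (new_state, importance_ratio). If the attempted step
    raises the particle's importance by more than `trigger`, the step is
    replayed n_split times with weight 1/n_split each instead of being
    kept once with full weight. Returns a list of (state, weight) pairs.
    """
    new_state, ratio = step_fn(state)
    if ratio <= trigger:
        return [(new_state, 1.0)]  # ordinary step: keep as-is
    # Large importance jump detected after the fact: resample the offending
    # step as n_split lower-weight copies to tame the variance.
    return [(step_fn(state)[0], 1.0 / n_split) for _ in range(n_split)]
```

Total statistical weight is conserved (n_split copies at weight 1/n_split), so the estimator stays unbiased while the rare high-importance outcome is averaged over several samples instead of riding on one particle.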
NASA Technical Reports Server (NTRS)
Berger, M. J.; Seltzer, S. M.; Maeda, K.
1972-01-01
The penetration, diffusion and slowing down of electrons in a semi-infinite air medium has been studied by the Monte Carlo method. The results are applicable to the atmosphere at altitudes up to 300 km. Most of the results pertain to monoenergetic electron beams injected into the atmosphere at a height of 300 km, either vertically downwards or with a pitch-angle distribution isotropic over the downward hemisphere. Some results were also obtained for various initial pitch angles between 0 deg and 90 deg. Information has been generated concerning the following topics: (1) the backscattering of electrons from the atmosphere, expressed in terms of backscattering coefficients, angular distributions and energy spectra of reflected electrons, for incident energies T(o) between 2 keV and 2 MeV; (2) energy deposition by electrons as a function of the altitude, down to 80 km, for T(o) between 2 keV and 2 MeV; (3) the corresponding energy deposition by electron-produced bremsstrahlung, down to 30 km; (4) the evolution of the electron flux spectrum as a function of the atmospheric depth, for T(o) between 2 keV and 20 keV. Energy deposition results are given for incident electron beams with exponential and power-exponential spectra.
NASA Astrophysics Data System (ADS)
Jarry, Geneviève; Verhaegen, Frank
2007-04-01
Electronic portal imagers have promising dosimetric applications in external beam radiation therapy. In this study a patient dose computation algorithm based on Monte Carlo (MC) simulations and on portal images is developed and validated. The patient exit fluence from primary photons is obtained from the portal image after correction for scattered radiation. The scattered radiation at the portal imager and the spectral energy distribution of the primary photons are estimated from MC simulations at the treatment planning stage. The patient exit fluence and the spectral energy distribution of the primary photons are then used to ray-trace the photons from the portal image towards the source through the CT geometry of the patient. Photon weights which reflect the probability of a photon being transmitted are computed during this step. A dedicated MC code is used to transport back these photons from the source through the patient CT geometry to obtain patient dose. Only Compton interactions are considered. This code also produces a reconstructed portal image which is used as a verification tool to ensure that the dose reconstruction is reliable. The dose reconstruction algorithm is compared against MC dose calculation (MCDC) predictions and against measurements in phantom. The reconstructed absolute absorbed doses and the MCDC predictions in homogeneous and heterogeneous phantoms agree within 3% for simple open fields. Comparison with film-measured relative dose distributions for IMRT fields yields agreement within 3 mm, 5%. This novel dose reconstruction algorithm allows for daily patient-specific dosimetry and verification of patient movement.
NASA Technical Reports Server (NTRS)
Platnick, S.
1999-01-01
Photon transport in a multiple scattering medium is critically dependent on scattering statistics, in particular the average number of scatterings. A superposition technique is derived to accurately determine the average number of scatterings encountered by reflected and transmitted photons within arbitrary layers in plane-parallel, vertically inhomogeneous clouds. As expected, the resulting scattering number profiles are highly dependent on cloud particle absorption and solar/viewing geometry. The technique uses efficient adding and doubling radiative transfer procedures, avoiding traditional time-intensive Monte Carlo methods. Derived superposition formulae are applied to a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Cloud remote sensing techniques that use solar reflectance or transmittance measurements generally assume a homogeneous plane-parallel cloud structure. The scales over which this assumption is relevant, in both the vertical and horizontal, can be obtained from the superposition calculations. Though the emphasis is on photon transport in clouds, the derived technique is applicable to any scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers in the atmosphere.
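The adding/doubling machinery that the superposition technique builds on can be illustrated in its simplest scalar (two-stream) form: multiple reflections between two layers sum as a geometric series. This is a sketch of the principle only — the actual radiative transfer procedure operates on angularly resolved reflection and transmission matrices:

```python
def add_layers(R1, T1, R2, T2):
    """Combine two plane-parallel layers with the scalar two-stream adding
    equations; the factor 1/(1 - R1*R2) sums the infinite series of
    inter-layer reflections."""
    denom = 1.0 - R1 * R2
    R = R1 + T1 * R2 * T1 / denom
    T = T1 * T2 / denom
    return R, T

def double_layer(R, T, n):
    """Doubling: repeatedly add a homogeneous layer to itself, building an
    optically thick slab in log2 steps."""
    for _ in range(n):
        R, T = add_layers(R, T, R, T)
    return R, T
```

For conservative (non-absorbing) layers the combined reflectance and transmittance still sum to one, which is a convenient sanity check on the algebra.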
Validation of GATE Monte Carlo simulations of the GE Advance/Discovery LS PET scanners
C. Ross Schmidtlein; Assen S. Kirov; Sadek A. Nehmeh; Yusuf E. Erdi; John L. Humm; Howard I. Amols; Luc M. Bidautb; Alex Ganin; Charles W. Stearns; David L. McDaniel; Klaus A. Hamacher
2006-01-01
The recently developed GATE (GEANT4 application for tomographic emission) Monte Carlo package, designed to simulate positron emission tomography (PET) and single photon emission computed tomography (SPECT) scanners, provides the ability to model and account for the effects of photon noncollinearity, off-axis detector penetration, detector size and response, positron range, photon scatter, and patient motion on the resolution and quality of
Monte Carlo Simulation of Secondary Fluorescence using a New Graphical Interface for PENELOPE
NASA Astrophysics Data System (ADS)
Pinard, P. T.; Demers, H.; Llovet, X.; Gauvin, R.; Salvat, F.
2011-12-01
Secondary fluorescence is not a negligible factor in the chemical concentration measurement of many minerals (quartz, olivine, etc.) using the electron probe microanalysis (EPMA) technique (Llovet and Galán, 2003). The importance of this phenomenon depends on the chemical species present in the mineral but also, in the case of heterogeneous samples, on their location relative to the measurement position. Monte Carlo codes are useful tools for selecting the optimal measurement conditions as well as for correcting the results afterwards for phenomena such as secondary fluorescence. PENELOPE (Salvat et al., 2011) is a Fortran Monte Carlo code for simulation of coupled electron-photon transport in matter that allows a detailed interpretation of experimental results of electron spectroscopy and microscopy. PENEPMA is a dedicated main program of PENELOPE designed to perform simulations with the same parameters as in actual EPMA measurements. Complex geometries can be defined to emulate the internal structure of a sample. Photon interactions are simulated in chronological succession, therefore allowing the calculation of secondary fluorescence. These features, combined with the use of the most reliable physical interaction models, make PENEPMA a unique Monte Carlo code for EPMA analysis. However, the original version of PENEPMA had a steep learning curve, as it required the user to manually create several input files to run a single simulation. To facilitate the use of the code, a graphical interface was recently developed. Written in the cross-platform programming language Python, it simplifies the setup of simulations and the analysis of the results. It also includes optimized simulation parameters which increase the efficiency of the simulations (i.e., reduce the computation time) by a factor of up to 8. In this communication, we describe the structure and capabilities of this graphical interface.
It not only eases the definition of the problem, but also provides more extensive information than the original PENEPMA code. Several calculation results are made available to the users to evaluate the effect of secondary fluorescence generated by either characteristic or continuum x-rays. For instance, the contribution of secondary fluorescence to the total intensity is calculated for each x-ray line and detector. To evaluate the extent of secondary fluorescence generated by neighboring phases, depth and radial distributions are recorded, or line scans can be performed over a phase boundary or across a particle. X. Llovet and G. Galán (2003). Correction for secondary X-ray fluorescence near grain boundaries in electron microprobe analysis: application to thermobarometry of spinel lherzolites. American Mineralogist, 88, 121-130. F. Salvat, J.M. Fernández-Varea and J. Sempau (2011). PENELOPE-2011: A Code System for Monte Carlo Simulation of Electron and Photon Transport. OECD Nuclear Energy Agency, Issy-les-Moulineaux, France.
Improvement of Photon Buildup Factors for Radiological Assessment
F.G. Schirmers
2006-07-01
Slant-path buildup factors for photons between 1 keV and 10 MeV for nine radiation shielding materials (air, aluminum, concrete, iron, lead, leaded glass, polyethylene, stainless steel, and water) are calculated with the most recent cross-section data available using Monte Carlo and discrete ordinates methods. Discrete ordinates calculations use a 244-group energy structure that is based on previous research at Los Alamos National Laboratory (LANL), but extended with the results of this thesis, and its focused studies on low-energy photon transport and the effects of group widths in multigroup calculations. Buildup factor calculations in discrete ordinates benefit from coupled photon/electron cross sections to account for secondary photon effects. Also, ambient dose equivalent (herein referred to as dose) buildup factors were analyzed at lower energies where corresponding response functions do not exist in literature. The results of these studies are directly applicable to radiation safety at LANL, where the dose modeling tool Pandemonium is used to estimate worker dose in plutonium handling facilities. Buildup factors determined in this thesis will be used to enhance the code's modeling capabilities, but should be of interest to the radiation shielding community.
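The buildup-factor concept behind these calculations — the ratio of the total response behind a shield to the uncollided response — can be illustrated with a toy one-dimensional Monte Carlo estimate. This sketch assumes isotropic, energy-conserving scattering and computes a number (not dose) buildup factor, far simpler than the coupled photon/electron, multigroup treatment in the thesis:

```python
import math, random

def buildup_factor(mu_total, scatter_prob, thickness, n=20000, seed=1):
    """Toy 1-D analog Monte Carlo: (all transmitted photons) divided by the
    analytic unscattered transmission n*exp(-mu_total*thickness)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n):
        x, mu_dir = 0.0, 1.0                 # depth and direction cosine
        while True:
            x += mu_dir * -math.log(1.0 - rng.random()) / mu_total  # free flight
            if x >= thickness:
                transmitted += 1             # crossed the exit plane
                break
            if x < 0 or rng.random() > scatter_prob:
                break                        # escaped backward or absorbed
            mu_dir = rng.uniform(-1.0, 1.0)  # isotropic re-scatter
    uncollided = n * math.exp(-mu_total * thickness)
    return transmitted / uncollided
```

Because scattered photons add to the transmitted count, the estimate always exceeds one for a scattering medium.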
SANDYL radiation transport code at the Naval Research Laboratory
NASA Astrophysics Data System (ADS)
Langworthy, J. B.
1984-07-01
SANDYL is a Monte Carlo photon and electron transport code used mainly to calculate dose in any material. This report is a history of the translation and modification of the local version, which is being run on the Texas Instruments Advanced Scientific Computer, and the report details how this version differs from the original versions. Together with the original documentation, it will serve as a user's manual. Use of codes like SANDYL, or of equivalent but more expensive experimental information, is necessary in radiation vulnerability studies and hardness design work on Navy satellites. The special utility of SANDYL lies in its ability to handle rather general geometries while retaining a close connection with physical mechanisms.
A Theory of Supply Chains
Daganzo, Carlos F.
A Theory of Supply Chains, Carlos F. Daganzo, Institute of Transportation Studies. Preface: This work was stimulated by a comment made ... (numerical methods being two notable examples) ... I suspected that traffic flow theory might shed some light
Diffractive production of isolated photons at HERA
Peter Bussey; for the ZEUS Collaboration
2015-07-14
The ZEUS detector at HERA has been used to measure the photoproduction of isolated photons in diffractive events. Cross sections are evaluated in ranges of the photon transverse energy and pseudorapidity and of the jet transverse energy and pseudorapidity, and as functions of the fractions of the incoming-photon energy and of the colourless-exchange ("Pomeron") energy that are imparted to the photon-jet final state. Comparison is made to predictions from the RAPGAP Monte Carlo simulation.
Dibyendu Roy
2010-07-13
We propose a novel scheme of realizing an optical diode at the few-photon level. The system consists of a one-dimensional waveguide coupled asymmetrically to a two-level system. The two or multi-photon transport in this system is strongly correlated. We derive exactly the single and two-photon current and show that the two-photon current is asymmetric for the asymmetric coupling. Thus the system serves as an optical diode which allows transmission of photons in one direction much more efficiently than the opposite.
CAD-based Monte Carlo Program for Integrated Simulation of Nuclear System SuperMC
NASA Astrophysics Data System (ADS)
Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin
2014-06-01
Monte Carlo (MC) method has distinct advantages for simulating complicated nuclear systems and is envisioned as a routine method for nuclear design and analysis in the future. High-fidelity simulation with the MC method, coupled with multi-physical phenomenon simulation, has a significant impact on the safety, economy and sustainability of nuclear systems. However, great challenges to current MC methods and codes prevent their application in real engineering projects. SuperMC is a CAD-based Monte Carlo program for integrated simulation of nuclear systems developed by the FDS Team, China, making use of the hybrid MC-deterministic method and advanced computer technologies. The design aim, architecture and main methodology of SuperMC are presented in this paper. SuperMC2.1, the latest version for neutron, photon and coupled neutron/photon transport calculation, has been developed and validated by using a series of benchmarking cases such as the fusion reactor ITER model and the fast reactor BN-600 model. SuperMC is still in its evolution process toward a general and routine tool for nuclear systems.
DETERMINING UNCERTAINTY IN PHYSICAL PARAMETER MEASUREMENTS BY MONTE CARLO SIMULATION
A statistical approach, often called Monte Carlo Simulation, has been used to examine propagation of error with measurement of several parameters important in predicting environmental transport of chemicals. These parameters are vapor pressure, water solubility, octanol-water par...
Variance Reduction Techniques for Implicit Monte Carlo Simulations
Landman, Jacob Taylor
2013-09-19
The Implicit Monte Carlo (IMC) method is widely used for simulating thermal radiative transfer and solving the radiation transport equation. During an IMC run a grid network is constructed and particles are sourced into the problem to simulate...
Monte Carlo Simulations of Thermal Conductivity in Nanoporous Si Membranes
Keywords: Boltzmann transport equation; Monte Carlo; nanoporous silicon; nanomesh; thermoelectrics. Introduction: ... candidates for thermoelectric materials, as they can provide extremely low thermal conductivity, relatively high thermoelectric power factors, and the structural stability that other low-dimensional systems ...
Combinatorial geometry domain decomposition strategies for Monte Carlo simulations
Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)
2013-07-01
Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which has been developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)
Development and validation of MCNPX-based Monte Carlo treatment plan verification system
Jabbari, Iraj; Monadi, Shahram
2015-01-01
A Monte Carlo treatment plan verification (MCTPV) system was developed for clinical treatment plan verification (TPV), especially for conformal and intensity-modulated radiotherapy (IMRT) plans. In MCTPV, the MCNPX code was used for particle transport through the accelerator head and the patient body. MCTPV has an interface with the TiGRT planning system and reads the information needed for the Monte Carlo calculation, transferred in digital imaging and communications in medicine-radiation therapy (DICOM-RT) format. In MCTPV several methods were applied in order to reduce the simulation time. The relative dose distribution of a clinical prostate conformal plan calculated by MCTPV was compared with that of the TiGRT planning system. The results confirmed correct implementation of the beam configurations and patient information in this system. For quantitative evaluation of MCTPV, a two-dimensional (2D) diode array (MapCHECK2) and gamma index analysis were used. The gamma passing rate (3%/3 mm) of an IMRT plan was found to be 98.5% for all beams. Also, comparison of the measured and Monte Carlo calculated doses at several points inside an inhomogeneous phantom for 6- and 18-MV photon beams showed good agreement (within 1.5%). The accuracy and timing results of MCTPV showed that it could be used very efficiently for additional assessment of complicated plans such as IMRT plans.
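The gamma-index analysis used for quantitative evaluation can be sketched as a brute-force global 2-D gamma computation. This is our own simplified illustration, not the MapCHECK2 software; a 3%/3 mm criterion corresponds to dose_tol=0.03 and dist_mm=3.0:

```python
import numpy as np

def gamma_pass_rate(ref, evald, spacing_mm, dose_tol=0.03, dist_mm=3.0):
    """Global 2-D gamma analysis: for each reference point, a point passes if
    min over evaluated points of sqrt((dD/D_tol)^2 + (dr/d_tol)^2) <= 1."""
    ny, nx = ref.shape
    norm = dose_tol * ref.max()                 # global dose normalization
    ys, xs = np.mgrid[0:ny, 0:nx]               # evaluated-grid coordinates
    pass_count = 0
    for iy in range(ny):
        for ix in range(nx):
            dr2 = ((ys - iy) ** 2 + (xs - ix) ** 2) * spacing_mm ** 2
            dd2 = (evald - ref[iy, ix]) ** 2
            gamma2 = dr2 / dist_mm ** 2 + dd2 / norm ** 2
            pass_count += gamma2.min() <= 1.0   # gamma <= 1 means pass
    return pass_count / (ny * nx)
```

Identical distributions pass trivially at every point; clinical implementations add dose thresholds and interpolation between grid points.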
Hybrid Radiosity\\/Monte Carlo Methods
Peter Shirley
1994-01-01
this document said that absorb and reemit was asymptotically equivalent to the photon tracking model. [Figure 4: Zones with small areas have their radiance recalculated more accurately in a postprocess.] ... iteration (each ray carries approximately the same amount of power). The other is that, unlike in [7], the zone with the most power is not
Monte Carlo simulations of the Galileo energetic particle detector
I Jun; J. M Ratliff; H. B Garrett; R. W McEntire
2002-01-01
Monte Carlo radiation transport studies have been performed for the Galileo spacecraft energetic particle detector (EPD) in order to study its response to energetic electrons and protons. Three-dimensional Monte Carlo radiation transport codes, MCNP version 4B (for electrons) and MCNPX version 2.2.3 (for protons), were used throughout the study. The results are presented in the form of “geometric factors” for
The new VARSKIN 4 photon skin dosimetry model.
Hamby, D M; Lodwick, C J; Palmer, T S; Reese, S R; Higley, K A; Caffrey, J A; Sherbini, S; Saba, M; Bush-Goddard, S P
2013-01-01
A new photon skin dosimetry model, described here, was developed as the basis for the enhanced VARSKIN 4 thin-tissue dosimetry code. The model employs a point-kernel method that accounts for charged-particle build-up, photon attenuation and off-axis scatter. Early comparisons of the new model against Monte Carlo particle transport simulations show that VARSKIN 4 is highly accurate for very small sources on the skin surface, although accuracy at shallow depths is compromised for radiation sources that are on clothing or otherwise elevated from the skin surface. Comparison results are provided for a one-dimensional point source, a two-dimensional disc source and three-dimensional sphere, cylinder and slab sources. For very small source dimensions and sources in contact with the skin, comparisons reveal that the model is highly predictive. With larger source dimensions, air gaps, or the addition of clothing between the source and skin, however, VARSKIN 4 yields over-predictions of dose by as much as a factor of 2 to 3. These cursory Monte Carlo comparisons confirm that significant accuracy improvements beyond the previous version were achieved for all geometries. Improvements were obtained while retaining the VARSKIN characteristic user convenience and rapid performance. PMID:23070483
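The point-kernel method can be illustrated for the two-dimensional disc source mentioned above: the response on the disc axis is an attenuated inverse-square kernel integrated over source annuli. This sketch omits the charged-particle build-up and off-axis scatter factors that VARSKIN 4 includes; names and units are illustrative:

```python
import math

def disc_dose(radius_cm, depth_cm, mu_cm, n_rings=2000):
    """Point-kernel sketch: integrate exp(-mu*d)/(4*pi*d^2) over a uniform
    disc source, evaluated on the disc axis at the given depth (result is
    per unit source activity per unit area)."""
    total = 0.0
    dr = radius_cm / n_rings
    for i in range(n_rings):
        r = (i + 0.5) * dr                          # annulus mid-radius
        d = math.hypot(r, depth_cm)                 # source point to target
        kernel = math.exp(-mu_cm * d) / (4.0 * math.pi * d * d)
        total += kernel * 2.0 * math.pi * r * dr    # annulus area weight
    return total
```

As the disc radius shrinks, the result converges to the bare point kernel times the disc area, which is a useful check of the quadrature.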
Rapid Monte Carlo simulation of detector DQE(f)
Star-Lack, Josh; Sun, Mingshan; Meyer, Andre; Morf, Daniel; Constantin, Dragos; Fahrig, Rebecca; Abel, Eric
2014-01-01
Purpose: Performance optimization of indirect x-ray detectors requires proper characterization of both ionizing (gamma) and optical photon transport in a heterogeneous medium. As the tool of choice for modeling detector physics, Monte Carlo methods have failed to gain traction as a design utility, due mostly to excessive simulation times and a lack of convenient simulation packages. The most important figure-of-merit in assessing detector performance is the detective quantum efficiency (DQE), for which most of the computational burden has traditionally been associated with the determination of the noise power spectrum (NPS) from an ensemble of flood images, each conventionally having 10^7 to 10^9 detected gamma photons. In this work, the authors show that the idealized conditions inherent in a numerical simulation allow for a dramatic reduction in the number of gamma and optical photons required to accurately predict the NPS. Methods: The authors derived an expression for the mean squared error (MSE) of a simulated NPS when computed using the International Electrotechnical Commission-recommended technique based on taking the 2D Fourier transform of flood images. It is shown that the MSE is inversely proportional to the number of flood images, and is independent of the input fluence provided that the input fluence is above a minimal value that avoids biasing the estimate. The authors then propose to further lower the input fluence so that each event creates a point-spread function rather than a flood field. The authors use this finding as the foundation for a novel algorithm in which the characteristic MTF(f), NPS(f), and DQE(f) curves are simultaneously generated from the results of a single run. The authors also investigate lowering the number of optical photons used in a scintillator simulation to further increase efficiency.
Simulation results are compared with measurements performed on a Varian AS1000 portal imager, and with a previously published simulation performed using clinical fluence levels. Results: On the order of only 10-100 gamma photons per flood image were required to be detected to avoid biasing the NPS estimate. This allowed for a factor of 10^7 reduction in fluence compared to clinical levels with no loss of accuracy. An optimal signal-to-noise ratio (SNR) was achieved by increasing the number of flood images from a typical value of 100 up to 500, thereby illustrating the importance of flood image quantity over the number of gammas per flood. For the point-spread ensemble technique, an additional 2x reduction in the number of incident gammas was realized. As a result, when modeling gamma transport in a thick pixelated array, the simulation time was reduced from 2.5 × 10^6 CPU min if using clinical fluence levels to 3.1 CPU min if using optimized fluence levels while also producing a higher SNR. The AS1000 DQE(f) simulation entailing both optical and radiative transport matched experimental results to within 11%, and required 14.5 min to complete on a single CPU. Conclusions: The authors demonstrate the feasibility of accurately modeling x-ray detector DQE(f) with completion times on the order of several minutes using a single CPU. Convenience of simulation can be achieved using GEANT4 which offers both gamma and optical photon transport capabilities. PMID:24593734
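The flood-image NPS estimate at the center of the authors' argument can be sketched as follows: the squared 2-D Fourier transform magnitude of mean-subtracted flood images, averaged over the ensemble. Normalization conventions vary; this is a minimal illustration, not the full IEC-prescribed procedure with detrending and region-of-interest overlap:

```python
import numpy as np

def nps_2d(floods, pixel_pitch=1.0):
    """Ensemble NPS sketch: average |FFT2(flood - mean)|^2 over flood images.
    The variance of this estimator falls as 1/N_floods, the scaling that
    motivates using many low-fluence floods."""
    floods = np.asarray(floods, float)
    n, ny, nx = floods.shape
    demeaned = floods - floods.mean(axis=(1, 2), keepdims=True)
    spectra = np.abs(np.fft.fft2(demeaned)) ** 2
    return spectra.mean(axis=0) * pixel_pitch ** 2 / (nx * ny)
```

For spatially uncorrelated (white) noise the spectrum is flat, and by Parseval's theorem its mean equals the per-image pixel variance, a convenient check.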
Monte Carlo simulation of coherent effects in mulitple scattering
Igor V. Meglinski; V. L. Kuzmin; Dmitry Y. Churmakov; Douglas A. Greenhalgh
2004-01-01
Based on the collation of the stochastic Monte Carlo technique and the iterative solution of the Bethe-Salpeter equation, it is shown that simulation of the optical path of photons undergoing an n-th scattering event agrees directly with the n-th-order ladder-diagram calculation approach. Within the framework of this correspondence, the Monte Carlo technique is generalised for simulation of coherent
Menut, Laurent
Bayesian Monte Carlo analysis applied to regional-scale inverse emission modeling for reactive ... The inversion method is based on Bayesian Monte Carlo analysis applied to a regional-scale chemistry-transport ... are attributed to individual Monte Carlo simulations by comparing them with observations from the AIRPARIF ...
Monte Carlo simulation of solid-state thermionic energy conversion devices based on non-planar structures
Non-planar structures are analyzed using a Monte Carlo electron transport model. Compared to the planar structures, ... a chance to pass over the barrier in a triangular region. 2. Monte Carlo Algorithms: We used a simplified ...
A deterministic electrons and protons transport suite for the study of the Jovian system
NASA Astrophysics Data System (ADS)
Badavi, Francis; Nealy, John; Norman, Ryan
A deterministic suite of transport codes developed at Langley Research Center (LaRC) for describing the transport of electrons, photons, and protons in condensed media is used to simulate the effects and exposures from spectral distributions typical of electrons and protons trapped in planetary magnetic fields. The suite comprises a coupled electron/photon deterministic transport procedure (CEPTRN) and a light/heavy-ion deterministic transport procedure (HZETRN). The primary purpose for the development of the transport suite is to provide a means for rapidly performing the numerous repetitive calculations essential for electron/proton radiation exposure assessments of complex space structures. Several favorable comparisons have been made with statistically oriented Monte Carlo calculations for typical space-environment spectra, which indicate that transport accuracy has not been compromised at the expense of computational speed. For this presentation, the radiation environments of the Galilean satellites Io, Europa, Ganymede, and Callisto are used as representative boundary conditions to show the capabilities of the transport suite. The Jovian radiation environment is simulated using the 2003 Jet Propulsion Laboratory (JPL) GIRE model. For a limited number of candidate shielding materials, the GIRE-produced electron/proton environments are used as boundary conditions to the CEPTRN and HZETRN transport suite to evaluate the particle flux and dose due to electrons and protons at various distances from the planet as a function of latitude, longitude, and altitude.
Monte Carlo Extension of Quasi-Monte Carlo
Owen, Art
Monte Carlo Extension of Quasi-Monte Carlo. Art B. Owen, Department of Statistics, Stanford University, Stanford CA 94305, U.S.A. ABSTRACT: This paper surveys recent research on using Monte Carlo techniques to improve quasi-Monte Carlo techniques. Randomized quasi-Monte Carlo methods provide a basis for error
NASA Astrophysics Data System (ADS)
Lin, Yi-Chun; Liu, Yuan-Hao; Nievaart, Sander; Chen, Yen-Fu; Wu, Shu-Wei; Chou, Wen-Tsae; Jiang, Shiang-Huei
2011-10-01
High-energy photon (over 10 MeV) and neutron beams adopted in radiobiology and radiotherapy always produce mixed neutron/gamma-ray fields. Mg(Ar) ionization chambers are commonly applied to determine the gamma-ray dose because of their neutron-insensitive characteristic. Nowadays, many perturbation corrections for accurate dose estimation and many treatment planning systems are based on the Monte Carlo technique. The Monte Carlo codes EGSnrc, FLUKA, GEANT4, MCNP5, and MCNPX were used to evaluate energy-dependent response functions of the Exradin M2 Mg(Ar) ionization chamber to a parallel photon beam with mono-energies from 20 keV to 20 MeV. For the sake of validation, measurements were carefully performed in well-defined (a) primary M-100 X-ray calibration field, (b) primary 60Co calibration beam, (c) 6-MV, and (d) 10-MV therapeutic beams in hospital. At the energy region below 100 keV, MCNP5 and MCNPX both had lower responses than the other codes. For energies above 1 MeV, the MCNP ITS-mode results closely resembled those of the other three codes, and the differences were within 5%. Compared to the measured currents, MCNP5 and MCNPX using ITS-mode agreed very well with the 60Co and 10-MV beams. But at the X-ray energy region, the deviations reached 17%. This work gives a better insight into the performance of different Monte Carlo codes in photon-electron transport calculations. Regarding applications in mixed-field dosimetry such as BNCT, MCNP with ITS-mode is recognized as the most suitable tool by this work.
SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output
Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.
2013-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:23640136
Modeling focusing Gaussian beams in a turbid medium with Monte Carlo simulations.
Hokr, Brett H; Bixler, Joel N; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J; Yakovlev, Vladislav V; Scully, Marlan O
2015-04-01
Monte Carlo techniques are the gold standard for studying light propagation in turbid media. Traditional Monte Carlo techniques are unable to include wave effects, such as diffraction; thus, these methods are unsuitable for exploring focusing geometries where a significant ballistic component remains at the focal plane. Here, a method is presented for accurately simulating photon propagation at the focal plane, in the context of a traditional Monte Carlo simulation. This is accomplished by propagating ballistic photons along trajectories predicted by Gaussian optics until they undergo an initial scattering event, after which, they are propagated through the medium by a traditional Monte Carlo technique. Solving a known problem by building upon an existing Monte Carlo implementation allows this method to be easily implemented in a wide variety of existing Monte Carlo simulations, greatly improving the accuracy of those models for studying dynamics in a focusing geometry. PMID:25968708
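The hybrid launch step described above — ballistic propagation along the Gaussian-optics trajectory up to the first scattering event, after which standard Monte Carlo takes over — can be sketched as follows, assuming a ray aimed at the focal point and a Beer-Lambert free-path distribution (geometry and names are our simplification):

```python
import math, random

def first_scatter_position(entry_xy, focus_z, mu_s, rng):
    """Carry a photon ballistically from its entry point (x, y, 0) along the
    geometric ray toward the focal point (0, 0, focus_z), placing the first
    scattering event at a free path drawn from p(s) = mu_s * exp(-mu_s*s).
    Returns the (x, y, z) handed off to the conventional Monte Carlo loop."""
    x0, y0 = entry_xy
    dx, dy, dz = -x0, -y0, focus_z                  # direction toward focus
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    ux, uy, uz = dx / norm, dy / norm, dz / norm
    s = -math.log(1.0 - rng.random()) / mu_s        # sampled free path
    return (x0 + s * ux, y0 + s * uy, s * uz)
```

A full implementation would also weight each launch by the Gaussian beam's transverse intensity profile and handle photons whose sampled path reaches the focal plane unscattered.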
Picosecond response of photoexcited GaAs in a uniform electric field by Monte Carlo dynamics
Wysin, G.M.; Smith, D.L.; Redondo, A.
1988-12-15
The transient electrical response of GaAs photoexcited by a subpicosecond pulse, in the presence of a uniform biasing electric field, has been studied with use of a Monte Carlo calculation. Noninteracting electron transport is considered, using the three-valley model for the conduction band. Scattering from acoustic, optical, and intervalley phonons is included. The valence-band dispersion relations and valence- to conduction-band momentum matrix elements needed to treat the optical absorption were obtained from a full-zone k·p calculation. The optical absorption has been given a realistic treatment by including an effective energy linewidth resulting from the combination of the Fourier transform of the driving pulse, electron-phonon scattering, and the effect of the applied electric field. The average electron velocity is found to overshoot its steady-state value only if the electric field is larger than a critical value which increases with the photon energy. For example, these calculations indicate that at 5.0 kV/cm overshoot occurs for a photon energy of 1.5 eV but not for 1.7 eV. Velocity overshoot is seen to occur when the steady-state average electron energy (for the given applied field) is larger than the average electron energy of the initial photoexcited distribution. The regime of applied field and photon energy necessary for overshoot is mapped out.
Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling
Peplow, Douglas E. [ORNL; Miller, Thomas Martin [ORNL; Patton, Bruce W [ORNL; Wagner, John C [ORNL
2013-01-01
The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.
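Importance functions from deterministic calculations are typically used to bias sampling while preserving an unbiased estimate. The following is a minimal sketch of importance-weighted source biasing in the general CADIS style, not the authors' multiple-importance scheme; names and values are illustrative:

```python
import random

def biased_source_sample(strengths, importances, rng):
    """Sample a source bin i with probability proportional to
    strength_i * importance_i, and return (i, weight) where the weight
    is the true-to-biased pdf ratio, keeping any tally unbiased."""
    rates = [q * imp for q, imp in zip(strengths, importances)]
    R, Q = sum(rates), sum(strengths)
    r = rng.random() * R
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r <= acc:
            return i, strengths[i] * R / (Q * rate)   # w = p_true / p_biased
    return len(rates) - 1, strengths[-1] * R / (Q * rates[-1])
```

Because the weight is the exact pdf ratio, the mean weight over many samples is one: the biasing redistributes variance toward important regions without shifting the expectation.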
NASA Astrophysics Data System (ADS)
Slaba, Tony C.; Blattnig, Steve R.; Reddell, Brandon; Bahadori, Amir; Norman, Ryan B.; Badavi, Francis F.
2013-07-01
Recent work has indicated that pion production and the associated electromagnetic (EM) cascade may be an important contribution to the total astronaut exposure in space. Recent extensions to the deterministic space radiation transport code, HZETRN, allow the production and transport of pions, muons, electrons, positrons, and photons. In this paper, the extended code is compared to the Monte Carlo codes, Geant4, PHITS, and FLUKA, in slab geometries exposed to galactic cosmic ray (GCR) boundary conditions. While improvements in the HZETRN transport formalism for the new particles are needed, it is shown that reasonable agreement on dose is found at larger shielding thicknesses commonly found on the International Space Station (ISS). Finally, the extended code is compared to ISS data on a minute-by-minute basis over a seven day period in 2001. The impact of pion/EM production on exposure estimates and validation results is clearly shown. The Badhwar-O'Neill (BO) 2004 and 2010 models are used to generate the GCR boundary condition at each time-step allowing the impact of environmental model improvements on validation results to be quantified as well. It is found that the updated BO2010 model noticeably reduces overall exposure estimates from the BO2004 model, and the additional production mechanisms in HZETRN provide some compensation. It is shown that the overestimates provided by the BO2004 GCR model in previous validation studies led to deflated uncertainty estimates for environmental, physics, and transport models, and allowed an important physical interaction (π/EM) to be overlooked in model development. Despite the additional π/EM production mechanisms in HZETRN, a systematic under-prediction of total dose is observed in comparison to Monte Carlo results and measured data.
Roy, Dibyendu [Department of Physics, University of California-San Diego, La Jolla, California 92093-0319 (United States)
2010-04-15
We propose a scheme of realizing an optical diode at the few-photon level. The system consists of a one-dimensional waveguide coupled asymmetrically to a two-level system. The two or multiphoton transport in this system is strongly correlated. We derive exactly the single and two-photon current and show that the two-photon current is asymmetric for the asymmetric coupling. Thus the system serves as an optical diode which allows transmission of photons in one direction much more efficiently than the opposite.
Thomas C. Henderson; Brandt Erickson; Travis Longoria; Edward Grant; Kyle Luthy; Leonardo Mattos; Matt Craver
2005-01-01
Biswas et al. (1) introduced a probabilistic approach to inference with limited information in sensor networks. They represented the sensor network as a Bayesian network and performed approximate inference using Markov Chain Monte Carlo (MCMC). The goal is to robustly answer queries even under noisy or partial information scenarios. We propose an alternative method based on simple Monte Carlo
A comparison of speeds of personal computers using an x-ray scattering Monte Carlo benchmark
I. A. M. Al-Affan
1996-01-01
In recent years, personal computers (PCs) have become more popular and competitive with other computers, in terms of price and memory, for running Monte Carlo (MC) programmes. A few years ago, some work was performed to test the speed of computers, mainly other than PCs, using the MC code EGS4. In the present work a Monte Carlo neutron and photon code, MCNP
Viviana Fanti; Roberto Marzeddu; Callisto Pili; Paolo Randaccio; Sabyasachi Siddhanta; Jenny Spiga; Artur Szostak
2009-01-01
This work describes a fast Monte Carlo Machine for dose calculation in radiotherapy treatment planning on FPGA based hardware. When performing Monte Carlo simulations of the radiation dose delivered to the human body, the Compton interaction is simulated. The inputs to the system are the energy and the normalized direction vectors of the incoming photon. The energy and the direction
Monte-Carlo Tests Diplomarbeit
Monte-Carlo Tests, Diplomarbeit (diploma thesis) by Wiebke Werft, Mathematisches Institut der Heinrich-Heine-Universität. Contents (translated from German) include: sufficiency and completeness; Monte-Carlo tests; formulation of the test problem; definition of the Monte-Carlo test.
Numerical reproducibility for implicit Monte Carlo simulations
Cleveland, M.; Brunner, T.; Gentile, N. [Lawrence Livermore National Laboratory, P. O. Box 808, Livermore CA 94550 (United States)
2013-07-01
We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. In [1], a way of eliminating this roundoff error using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary-precision approaches required a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step. (authors)
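The order-dependence of double-precision sums, and the integer-tally remedy described in this abstract, can be sketched in a few lines. This is a toy illustration with hypothetical values, not the authors' implementation:

```python
import random

random.seed(2)
# Hypothetical per-history tally contributions spanning many magnitudes.
values = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(-8, 8)
          for _ in range(10000)]

# Double-precision addition is not associative: summing in a different
# order (as a parallel reduction would) can change the low-order bits.
forward = sum(values)
shuffled = list(values)
random.shuffle(shuffled)
reordered = sum(shuffled)
print(abs(forward - reordered))  # roundoff difference, typically non-zero

# Integer tally: round each contribution to a fixed precision and
# accumulate as exact integers. Integer addition is associative, so
# every summation order yields the identical tally, at the cost of
# the rounding accuracy loss the abstract notes.
SCALE = 10 ** 6
tally_a = sum(round(v * SCALE) for v in values)
tally_b = sum(round(v * SCALE) for v in shuffled)
assert tally_a == tally_b  # bitwise-reproducible regardless of order
```

The integer tallies agree exactly for any summation order, which is the property parallel Monte Carlo tallies need for reproducibility.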
Bohm, Tim D.; Micka, John A.; De Werd, Larry A. [Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States)
2007-04-15
The determination of the air kerma strength of a brachytherapy seed is necessary for effective treatment planning. Well-type ionization chambers are used on site at therapy clinics to determine the air kerma strength of seeds. In this work, an improved well-type ionization chamber for low energy, low dose rate brachytherapy sources is designed using Monte Carlo transport calculations to aid in the design process. The design improvements are the elimination of the air density induced over-response effect seen in other air-communicating chambers for low energy photon sources, and a larger signal strength (response or current) for {sup 103}Pd and {sup 125}I based seeds. A prototype well chamber based on the Monte Carlo aided design but using graphite coated acrylic walls rather than the design basis air equivalent plastic (C-552) walls was constructed and experimentally evaluated. The prototype chamber produced an 85% stronger signal when measuring a commonly used {sup 103}Pd seed and a 26% stronger signal when measuring a commonly used {sup 125}I seed when compared to another commonly used well chamber. The normalized P{sub TP} corrected chamber response is, at most, 1.3% and 2.4% over unity for air densities/pressures corresponding to an elevation of 3048 m (10 000 feet) above sea level for the commonly used {sup 103}Pd and {sup 125}I based seeds respectively. Comparing calculated and measured chamber responses for common {sup 103}Pd and {sup 125}I based brachytherapy seeds show agreement within 0.6% and 0.2%, respectively. We conclude that Monte Carlo transport calculations accurately model the response of this new well chamber and in general can be used to predict the response of well chambers. The prototype chamber built in this work responds as predicted by the Monte Carlo calculations.
Bohm, Tim D; Micka, John A; DeWerd, Larry A
2007-04-01
The determination of the air kerma strength of a brachytherapy seed is necessary for effective treatment planning. Well-type ionization chambers are used on site at therapy clinics to determine the air kerma strength of seeds. In this work, an improved well-type ionization chamber for low energy, low dose rate brachytherapy sources is designed using Monte Carlo transport calculations to aid in the design process. The design improvements are the elimination of the air density induced over-response effect seen in other air-communicating chambers for low energy photon sources, and a larger signal strength (response or current) for 103Pd and 125I based seeds. A prototype well chamber based on the Monte Carlo aided design but using graphite coated acrylic walls rather than the design basis air equivalent plastic (C-552) walls was constructed and experimentally evaluated. The prototype chamber produced an 85% stronger signal when measuring a commonly used 103Pd seed and a 26% stronger signal when measuring a commonly used 125I seed when compared to another commonly used well chamber. The normalized PTP corrected chamber response is, at most, 1.3% and 2.4% over unity for air densities/pressures corresponding to an elevation of 3048 m (10000 feet) above sea level for the commonly used 103Pd and 125I based seeds respectively. Comparing calculated and measured chamber responses for common 103Pd and 125I based brachytherapy seeds show agreement within 0.6% and 0.2%, respectively. We conclude that Monte Carlo transport calculations accurately model the response of this new well chamber and in general can be used to predict the response of well chambers. The prototype chamber built in this work responds as predicted by the Monte Carlo calculations. PMID:17500459
Lloyd, S. A. M.; Ansbacher, W. [Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6 (Canada); Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6 (Canada) and Department of Medical Physics, British Columbia Cancer Agency-Vancouver Island Centre, Victoria, British Columbia V8R 6V5 (Canada)
2013-01-15
Purpose: Acuros external beam (Acuros XB) is a novel dose calculation algorithm implemented through the ECLIPSE treatment planning system. The algorithm finds a deterministic solution to the linear Boltzmann transport equation, the same equation commonly solved stochastically by Monte Carlo methods. This work is an evaluation of Acuros XB, by comparison with Monte Carlo, for dose calculation applications involving high-density materials. Existing non-Monte Carlo clinical dose calculation algorithms, such as the analytic anisotropic algorithm (AAA), do not accurately model dose perturbations due to increased electron scatter within high-density volumes. Methods: Acuros XB, AAA, and EGSnrc based Monte Carlo are used to calculate dose distributions from 18 MV and 6 MV photon beams delivered to a cubic water phantom containing a rectangular high density (4.0-8.0 g/cm{sup 3}) volume at its center. The algorithms are also used to recalculate a clinical prostate treatment plan involving a unilateral hip prosthesis, originally evaluated using AAA. These results are compared graphically and numerically using gamma-index analysis. Radio-chromic film measurements are presented to augment Monte Carlo and Acuros XB dose perturbation data. Results: Using a 2% and 1 mm gamma-analysis, between 91.3% and 96.8% of Acuros XB dose voxels containing greater than 50% the normalized dose were in agreement with Monte Carlo data for virtual phantoms involving 18 MV and 6 MV photons, stainless steel and titanium alloy implants and for on-axis and oblique field delivery. A similar gamma-analysis of AAA against Monte Carlo data showed between 80.8% and 87.3% agreement. Comparing Acuros XB and AAA evaluations of a clinical prostate patient plan involving a unilateral hip prosthesis, Acuros XB showed good overall agreement with Monte Carlo while AAA underestimated dose on the upstream medial surface of the prosthesis due to electron scatter from the high-density material. 
Film measurements support the dose perturbations demonstrated by Monte Carlo and Acuros XB data. Conclusions: Acuros XB is shown to perform as well as Monte Carlo methods and better than existing clinical algorithms for dose calculations involving high-density volumes.
APS undulator and wiggler sources: Monte-Carlo simulation
Xu, S.L.; Lai, B.; Viccaro, P.J.
1992-02-01
Standard insertion devices will be provided to each sector by the Advanced Photon Source. It is important to define the radiation characteristics of these general purpose devices. In this document, results of Monte-Carlo simulations are presented. These results, based on the SHADOW program, include the APS Undulator A (UA), Wiggler A (WA), and Wiggler B (WB).
Monte Carlo Radiation Analysis of a Spacecraft Radioisotope Power System
NASA Technical Reports Server (NTRS)
Wallace, M.
1994-01-01
A Monte Carlo statistical computer analysis was used to create neutron and photon radiation predictions for the General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS RTG). The GPHS RTG is being used on several NASA planetary missions. Analytical results were validated using measured health physics data.
Photonic topological insulators
NASA Astrophysics Data System (ADS)
Khanikaev, Alexander B.; Hossein Mousavi, S.; Tse, Wang-Kong; Kargarian, Mehdi; MacDonald, Allan H.; Shvets, Gennady
2013-03-01
Recent progress in understanding the topological properties of condensed matter has led to the discovery of time-reversal-invariant topological insulators. A remarkable and useful property of these materials is that they support unidirectional spin-polarized propagation at their surfaces. Unfortunately topological insulators are rare among solid-state materials. Using suitably designed electromagnetic media (metamaterials) we theoretically demonstrate a photonic analogue of a topological insulator. We show that metacrystals—superlattices of metamaterials with judiciously designed properties—provide a platform for designing topologically non-trivial photonic states, similar to those that have been identified for condensed-matter topological insulators. The interfaces of the metacrystals support helical edge states that exhibit spin-polarized one-way propagation of photons, robust against disorder. Our results demonstrate the possibility of attaining one-way photon transport without application of external magnetic fields or breaking of time-reversal symmetry. Such spin-polarized one-way transport enables exotic spin-cloaked photon sources that do not obscure each other.
Noritaka Usami; Wugen Pan; Takeshi Tayagaki; Sai Tak Chu; Jensen Li; Tianhua Feng; Yusuke Hoshi; Takanori Kiguchi
2012-01-01
We propose a novel solar cell structure with photonic nanocrystals coupled to quantum dots (QDs) for advanced management of photons and carriers. The photonic nanocrystals at the surface create an extra interaction between the photons and the QDs, which promotes light trapping. Photo-generated carriers can be efficiently transported by preparing vertically aligned QDs with electronic coupling. Implementation of the proposed
Monte Carlo methods Monte Carlo Principle and MCMC
Doucet, Arnaud
Monte Carlo methods: Monte Carlo Principle and MCMC. A. Doucet, Carcans, Sept. 2011 (MLSS lecture slides, 91 pages). Overview of the lectures: 1. Monte Carlo principles; 2. Markov chain Monte Carlo.
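The slides themselves are not reproduced here; as a minimal sketch of the MCMC principle they cover, a random-walk Metropolis sampler targeting a standard normal density (names, step size, and sample counts are illustrative):

```python
import math
import random

def metropolis(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and
    accept with probability min(1, p(x') / p(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        delta = log_target(proposal) - log_target(x)
        if delta >= 0.0 or rng.random() < math.exp(delta):
            x = proposal  # accept; otherwise keep the current state
        samples.append(x)
    return samples

# Target: standard normal, log-density known only up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=50000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # close to 0 and 1
```

Only the ratio of target densities enters the acceptance test, which is why MCMC works with unnormalized densities.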
Method Monte Carlo in optical diagnostics of skin and skin tissues
Igor V. Meglinski
2003-01-01
A novel Monte Carlo (MC) technique for photon migration through 3D media with the spatially varying optical properties is presented. The employed MC technique combines the statistical weighting variance reduction and real photon paths tracing schemes. The overview of the results of applications of the developed MC technique in optical/near-infrared reflectance spectroscopy, confocal microscopy, fluorescence spectroscopy, OCT, Doppler flowmetry and
A novel Monte Carlo method for the optical diagnostics of skin
Igor V. Meglinski; Dmitry Y. Churmakov
2003-01-01
A novel Monte Carlo (MC) technique for photon migration through 3D media with the spatially varying optical properties is presented. The employed MC technique combines the statistical weighting variance reduction and real photon paths tracing schemes. The overview of the results of applications of the developed MC technique in optical/near-infrared reflectance spectroscopy, confocal microscopy, fluorescence spectroscopy, OCT, Diffusing Wave Spectroscopy
Monte-Carlo simulations of proton aurora
S. a. Synnes; F. Søraas; J. P. Hansen
1998-01-01
The spreading of a proton beam in the upper atmosphere is calculated based on Monte-Carlo simulations. The transport of the atoms is modelled in a magnetic field with dipole strength. Neutralisation, ionisation and excitation mechanisms of the incoming particles are included from collision cross-sections of protons and hydrogen with an effective N2 atmosphere. Assuming an isotropic pitch angle distribution for the incoming protons, their spreading
Applications of Maxent to quantum Monte Carlo
Silver, R.N.; Sivia, D.S.; Gubernatis, J.E. (Los Alamos National Lab., NM (USA)); Jarrell, M. (Ohio State Univ., Columbus, OH (USA). Dept. of Physics)
1990-01-01
We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the 'Analytical Continuation Problem.' For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey 'Kondo Universality.' 24 refs., 7 figs.
NSDL National Science Digital Library
David Joiner
Monte Carlo modeling refers to the solution of mathematical problems with the use of random numbers. This can include both function integration and the modeling of stochastic phenomena using random processes.
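The function-integration side of this description can be made concrete with a short sketch (a generic illustration, not code from the library entry):

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    """Estimate the integral of f over [a, b] by averaging f at n
    uniformly random points; the error shrinks as O(1/sqrt(n))."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0, 100000)
print(estimate)  # close to 0.3333
```

The same estimator generalizes to high dimensions unchanged, which is where Monte Carlo integration beats grid-based quadrature.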
Monte Carlo variance reduction
NASA Technical Reports Server (NTRS)
Byrn, N. R.
1980-01-01
Computer program incorporates technique that reduces variance of forward Monte Carlo method for given amount of computer time in determining radiation environment in complex organic and inorganic systems exposed to significant amounts of radiation.
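The program itself is not listed in this record; a generic illustration of one forward Monte Carlo variance-reduction idea is importance sampling, sketched below (the rare-event example and all numbers are hypothetical, not the program's method):

```python
import math
import random

rng = random.Random(1)
N = 100000  # histories per estimator

# Rare-event estimate: P(X > 3) for X ~ N(0, 1), true value ~1.35e-3.
# Analog estimator: almost every sample scores zero, so the variance
# (and hence the statistical error) is large for a fixed N.
analog = sum(1.0 for _ in range(N) if rng.gauss(0.0, 1.0) > 3.0) / N

# Importance sampling: draw from N(3, 1), which concentrates samples
# in the region of interest, and correct each score by the likelihood
# ratio w(x) = phi(x) / phi(x - 3) = exp(-3x + 4.5).
weighted = 0.0
for _ in range(N):
    x = rng.gauss(3.0, 1.0)
    if x > 3.0:
        weighted += math.exp(-3.0 * x + 4.5)
importance = weighted / N

print(analog, importance)  # same expectation; far lower variance for the latter
```

Both estimators are unbiased; the biased sampling distribution plus the weight correction is what buys the variance reduction for a fixed amount of computer time.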
Direct Photons from a Hybrid Approach -- Exploring the parameter space
Bjoern Baeuchle; Marcus Bleicher
2010-08-11
Direct photon spectra are calculated within a transport+hydrodynamics hybrid approach, in which the high-density part of the transport evolution has been replaced by a 3+1-dimensional hydrodynamic calculation. We study the effects of changing the parameters of the two interfaces between the transport- and hydrodynamic descriptions on the resulting direct photon spectra.
Morton, G A
1968-01-01
The fundamentals of photon counting using photomultipliers are described, including criteria for selecting suitable photomultipliers, some of the precautions that must be taken in using these devices, and methods of calculating the counting errors that may occur under various conditions of measurement. Problems of determining the time distribution of photons and, in particular, the coincident emission of photons which may be encountered in lasers and other stimulated emission sources are also discussed. The question of photon counting with photoconductors is reviewed, and it is shown that it is extremely difficult, if not impossible, to achieve photon counting with simple photoconductors. However, carrier multiplication with photoconductive multipliers should eventually make possible photon counting with photoconductors. Photoconductive multipliers in one form or another have high quantum efficiency and wide spectral response, and will almost inevitably replace photomultipliers for photon counting. PMID:20062394
A Monte Carlo algorithm for degenerate plasmas
Turrell, A.E., E-mail: a.turrell09@imperial.ac.uk; Sherlock, M.; Rose, S.J.
2013-09-15
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
NASA Astrophysics Data System (ADS)
Nikolopoulos, Dimitrios; Kandarakis, Ioannis; Tsantilas, Xenophon; Valais, Ioannis; Cavouras, Dionisios; Louizi, Anna
2006-12-01
The radiation detection efficiency of four scintillators employed, or designed to be employed, in positron emission imaging (PET) was evaluated as a function of the crystal thickness by applying Monte Carlo Methods. The scintillators studied were Lu2SiO5 (LSO), LuAlO3 (LuAP), Gd2SiO5 (GSO) and YAlO3 (YAP). Crystal thicknesses ranged from 0 to 50 mm. The study was performed via a previously generated photon transport Monte Carlo code. All photon track and energy histories were recorded and the energy transferred or absorbed in the scintillator medium was calculated together with the energy redistributed and retransported as secondary characteristic fluorescence radiation. Various parameters were calculated, e.g. the fraction of the incident photon energy absorbed, transmitted or redistributed as fluorescence radiation, the scatter to primary ratio, the photon and energy distribution within each scintillator block, etc. As being most significant, the fraction of the incident photon energy absorbed was found to increase with increasing crystal thickness, tending to form a plateau above the 30 mm thickness. For LSO, LuAP, GSO and YAP scintillators, respectively, this fraction had the value of 44.8, 36.9 and 45.7% at the 10 mm thickness and 96.4, 93.2 and 96.9% at the 50 mm thickness. Within the plateau area approximately (57-59)%, (59-63)%, (52-63)% and (58-61)% of this fraction was due to scattered and reabsorbed radiation for the LSO, GSO, YAP and LuAP scintillators, respectively. In all cases, a negligible fraction (<0.1%) of the absorbed energy was found to escape the crystal as fluorescence radiation.
Influence of refractive index matching on the photon diffuse reflectance
D Y Churmakov; I V Meglinski; D A Greenhalgh
2002-01-01
Photon migration in a randomly inhomogeneous, highly scattering and absorbing semi-infinite medium with a plane boundary is considered by a Monte Carlo (MC) technique. The employed MC technique combines the statistical weight scheme and real photon paths simulation, allowing the exclusion of the energy conservation problem. The internal reflection of the scattered radiation on the medium interface is taken into
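A toy version of the statistical-weight scheme described above (1-D, isotropic re-scattering; the coefficients and simplifications are ours, not the authors'): rather than terminating a photon at an absorption event, its weight is attenuated by the single-scattering albedo while the geometric path is traced, and weight escaping the surface is scored as diffuse reflectance.

```python
import math
import random

def diffuse_reflectance(mu_a, mu_s, n_photons=5000, seed=0):
    """1-D semi-infinite medium toy model: photons start at the surface
    heading inward; each step samples an exponential free path, the
    weight is attenuated by the albedo mu_s / (mu_a + mu_s), and the
    direction is re-sampled isotropically. Weight escaping through the
    surface (depth < 0) is scored as diffuse reflectance."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    score = 0.0
    for _ in range(n_photons):
        depth, direction, weight = 0.0, 1.0, 1.0
        while weight > 1e-3:  # terminate negligible-weight photons
            depth += direction * (-math.log(1.0 - rng.random()) / mu_t)
            if depth < 0.0:   # escaped through the surface
                score += weight
                break
            weight *= albedo  # absorb a fraction, keep tracing the path
            direction = 1.0 if rng.random() < 0.5 else -1.0
    return score / n_photons

R = diffuse_reflectance(mu_a=0.1, mu_s=10.0)  # illustrative coefficients
print(R)
```

Real codes track 3-D positions, anisotropic phase functions, and boundary reflection; the weight bookkeeping shown here is the common core of the statistical-weight scheme.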
Monte Carlo simulation of Tabata’s electron backscattering experiments
NASA Astrophysics Data System (ADS)
Kirihara, Y.; Namito, Y.; Iwase, H.; Hirayama, H.
2010-08-01
Electron backscattering coefficients, η, obtained from several targets in the MeV range were calculated by using electron-photon Monte Carlo transport calculation codes, i.e., EGS5 and ITS 3.0. These calculated values were compared with those obtained from the electron backscattering experiment performed by Tabata using an ionization chamber [15]. We found that Tabata's estimation of the multiplication factor of the ionization chamber, f, had a non-negligible error. Then, we calculated the ionization chamber output, I, which is a product of η and f. The ratios of I between the experimental and the calculated values were within 1.5 and 1.3 for the EGS5 code and the ITS 3.0 code, respectively. The ratios of η between the experimental and the calculated values were within 2.4 and 1.5 for the EGS5 code and the ITS 3.0 code, respectively. The differences between the experimental and the calculated values of I and η are large for low-Z targets (Be and C). Here, the ratios obtained by using the ITS 3.0 code are closer to unity than those obtained by using the EGS5 code. The reason for this is that the calculated value obtained by using the ITS 3.0 code is underestimated for low-Z targets; this underestimation can, in turn, be attributed to the use of the default value of the number of steps in the electron transport algorithm in the ITS 3.0 code.
NASA Astrophysics Data System (ADS)
Aspuru-Guzik, Alán; Walther, Philip
2012-04-01
Quantum simulators are controllable quantum systems that can be used to mimic other quantum systems. They have the potential to enable the tackling of problems that are intractable on conventional computers. The photonic quantum technology available today is reaching the stage where significant advantages arise for the simulation of interesting problems in quantum chemistry, quantum biology and solid-state physics. In addition, photonic quantum systems also offer the unique benefit of being mobile over free space and in waveguide structures, which opens new perspectives to the field by enabling the natural investigation of quantum transport phenomena. Here, we review recent progress in the field of photonic quantum simulation, which should break the ground towards the realization of versatile quantum simulators.
Monte Carlo and experimental dosimetry of an {sup 125}I brachytherapy seed
Dolan, James; Li Zuofeng; Williamson, Jeffrey F. [Department of Radiation Oncology, Beth Israel Medical Center, New York, New York 10003 (United States); Department of Radiation Oncology, Shand Hospital, Jacksonville, Florida 32209 (United States); Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)
2006-12-15
We have performed a comprehensive dosimetric characterization of the Oncura{sup TM} model 6711 {sup 125}I seed using both experimental [LiF thermoluminescent dosimetry (TLD)] and theoretical (Monte Carlo photon transport) methods. In addition to determining the dosimetric parameters of the 6711, this report quantified: (1) the angular dependence of LiF TLD energy response functions for both point and volume detectors in water, poly(methylmethacrylate), and solid water media; and (2) the contribution of underlying geometric uncertainties to the overall uncertainty of Monte Carlo derived dosimetric parameters according to the National Institute of Standards and Technology Report 1297 methodology. The theoretical value for the dose rate constant in water was 0.942 cGy U{sup -1} h{sup -1}{+-}1.76% [combined standard uncertainty (CSU) with coverage factor k=1] and the experimental value was 0.971 cGy U{sup -1} h{sup -1}{+-}6.1%. Agreement between experimental and theoretical radial dose function values was well within the k=1 CSU, while agreement between experimental and theoretical anisotropy function values was within the k=1 CSU only after incorporating the use of polar angle-dependent energy response functions. The angular dependence of the relative energy response was found to have a complex and significant dependence on measurement medium and internal geometry of the source.
Cox, L. J., LLNL
1997-06-16
Lawrence Livermore National Laboratory (LLNL) has developed an all-particle Monte Carlo radiotherapy dose calculation code--PEREGRINE--for use in clinical radiation oncology. For PEREGRINE, we have assembled high-energy evaluated nuclear databases; created radiation source characterization and sampling algorithms; and simulated and characterized clinical beams for treatment with photons, neutrons and protons. Spectra are available for the Harper Hospital (Detroit, U.S.A.) Be(d,n) neutron therapy beam, the National Accelerator Centre (NAC, Faure, S.A.) Be(p,n) neutron therapy beam and many of the operating modes of the Loma Linda University Medical Center (LLUMC, Loma Linda, USA) proton treatment center. These beam descriptions are being used in PEREGRINE for Monte Carlo dose calculations on clinical configurations for comparisons to measurements. The methods of defining and sampling the beam phase space characterizations are discussed. We show calculations using these clinical beams compared to measurements in homogeneous water phantoms. The state of PEREGRINE's high energy neutron and proton transport database, PCSL, is reviewed and the remaining issues involving nuclear data needs for PEREGRINE are addressed.
NASA Astrophysics Data System (ADS)
Smekens, F.; Létang, J. M.; Noblet, C.; Chiavassa, S.; Delpon, G.; Freud, N.; Rit, S.; Sarrut, D.
2014-12-01
We propose the split exponential track length estimator (seTLE), a new kerma-based method combining the exponential variant of the TLE and a splitting strategy to speed up Monte Carlo (MC) dose computation for low energy photon beams. The splitting strategy is applied to both the primary and the secondary emitted photons, triggered by either the MC events generator for primaries or the photon interactions generator for secondaries. Split photons are replaced by virtual particles for fast dose calculation using the exponential TLE. Virtual particles are propagated by ray-tracing in voxelized volumes and by conventional MC navigation elsewhere. Hence, the contribution of volumes such as collimators, treatment couch and holding devices can be taken into account in the dose calculation. We evaluated and analysed the seTLE method for two realistic small animal radiotherapy treatment plans. The effect of the kerma approximation, i.e. the complete deactivation of electron transport, was investigated. The efficiency of seTLE against splitting multiplicities was also studied. A benchmark with analog MC and TLE was carried out in terms of dose convergence and efficiency. The results showed that the deactivation of electrons impacts the dose at the water/bone interface in high dose regions. The maximum and mean dose differences, normalized to the dose at the isocenter, were 14% and 2%, respectively. Optimal splitting multiplicities were found to be around 300. In all situations, discrepancies in integral dose were below 0.5% and 99.8% of the voxels fulfilled a 1%/0.3 mm gamma index criterion. Efficiency gains of seTLE varied from 3.2 × 10^5 to 7.7 × 10^5 compared to analog MC and from 13 to 15 compared to conventional TLE. In conclusion, seTLE provides results similar to the TLE while increasing the efficiency by a factor between 13 and 15, which makes it particularly well-suited to typical small animal radiation therapy applications.
G. P. Estes; R. G. Schrandt; J. T. Kriese
1988-01-01
A patch to the Los Alamos Monte Carlo code MCNP has been developed that automates the generation of source descriptions for photons from arbitrary mixtures and configurations of radioactive isotopes. Photon branching ratios for decay processes are obtained from national and international data bases and accessed directly from computer files. Code user input is generally confined to readily available information
Hard photon production and matrix-element parton-shower merging
Hoeche, Stefan [Institut fuer Theoretische Physik, Universitaet Zuerich, CH-8057 Zuerich (Switzerland); Schumann, Steffen [Institut fuer Theoretische Physik, Universitaet Heidelberg, D-69120, Heidelberg (Germany); Siegert, Frank [Institute for Particle Physics Phenomenology, Durham University, Durham DH1 3LE (United Kingdom); Department of Physics and Astronomy, University College London, London WC13 6BT (United Kingdom)
2010-02-01
We present a Monte Carlo approach to prompt-photon production, where photons and QCD partons are treated democratically. The photon fragmentation function is modeled by an interleaved QCD+QED parton shower. This known technique is improved by including higher-order real-emission matrix elements. To this end, we extend a recently proposed algorithm for merging matrix elements and truncated parton showers. We exemplify the quality of the Monte Carlo predictions by comparing them to measurements of the photon fragmentation function at LEP and to measurements of prompt photon and diphoton production from the Tevatron experiments.
Radiation Transport Calculations and Simulations
Fasso, Alberto; /SLAC; Ferrari, A.; /CERN
2011-06-30
This article is an introduction to the Monte Carlo method as used in particle transport. After a description at an elementary level of the mathematical basis of the method, the Boltzmann equation and its physical meaning are presented, followed by Monte Carlo integration and random sampling, and by a general description of the main aspects and components of a typical Monte Carlo particle transport code. In particular, the most common biasing techniques are described, as well as the concepts of estimator and detector. After a discussion of the different types of errors, the issue of Quality Assurance is briefly considered.
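Two of the building blocks named above, Monte Carlo integration and random sampling, can be illustrated in a few lines. The forms below are the generic textbook ones (sample-mean integration and inverse-transform sampling of an exponential free path), not code from any particular transport package.

```python
import math
import random

def mc_integral(f, a, b, n, rng=random.random):
    """Estimate the integral of f on [a, b] as (b - a) times the sample mean."""
    total = sum(f(a + (b - a) * rng()) for _ in range(n))
    return (b - a) * total / n

def sample_free_path(mu, rng=random.random):
    """Distance to the next interaction, p(s) = mu*exp(-mu*s), by inversion."""
    return -math.log(1.0 - rng()) / mu

random.seed(0)
est = mc_integral(math.sin, 0.0, math.pi, 100_000)   # exact value is 2
# with mu = 0.5/cm the mean free path should approach 1/mu = 2 cm
mean_path = sum(sample_free_path(0.5) for _ in range(100_000)) / 100_000
```

The statistical error of both estimates shrinks as 1/sqrt(n), which is why the biasing (variance reduction) techniques discussed in the article matter so much in practice.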
Velchik, M.G.
1987-01-01
Recently, there has been renewed interest in the detection and treatment of osteoporosis. This paper is a review of the merits and limitations of the various noninvasive modalities currently available for the measurement of bone mineral density, with special emphasis placed upon the nuclear medicine techniques of single-photon and dual-photon absorptiometry. Clinicians should come away with an understanding of the relative advantages and disadvantages of photon absorptiometry and its optimal clinical application. 49 references.
C. Bairactaris; N. Demakopoulos; G. Tripsianis; C. Sioka; D. Farmakiotis; K. Vadikolias; I. Heliopoulos; P. Georgoulias; I. Tsougos; I. Papanastasiou; C. Piperidou
2009-01-01
To assess the impact of I-123 ioflupane single photon emission computed tomography (SPECT) imaging on classifying patients with striatal dopaminergic deficits. Sixty-one patients with an initial diagnosis of parkinsonism or uncertain tremor disorder were screened and followed-up for one year. All patients were re-examined by two neurologists at our centre and were classified as having neurodegenerative or non-neurodegenerative disorders. Patients
Enrico Allaria; Carlo Callegari; Daniele Cocco; William M. Fawley; Maya Kiskinova; Claudio Masciovecchio; Fulvio Parmigiani
2010-01-01
FERMI@Elettra comprises two free electron lasers (FELs) that will generate short pulses (tau~25-200 fs) of highly coherent radiation in the XUV and soft x-ray region. The use of external laser seeding together with a harmonic upshift scheme to obtain short wavelengths will give FERMI@Elettra the capability of producing high-quality, longitudinally coherent photon pulses. This capability, together with the possibilities of
PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation
NASA Astrophysics Data System (ADS)
España, S; Herraiz, J L; Vicente, E; Vaquero, J J; Desco, M; Udias, J M
2009-03-01
Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.
NASA Astrophysics Data System (ADS)
Leow, Shin Woei; Corrado, Carley; Osborn, Melissa; Carter, Sue A.
2013-09-01
Luminescent solar concentrators (LSCs) have the ability to receive light from a wide range of angles, concentrating the captured light onto small photo active areas. This enables greater incorporation of LSCs into building designs as windows, skylights and wall claddings in addition to rooftop installations of current solar panels. Using relatively cheap luminescent dyes and acrylic waveguides to effect light concentration onto smaller photovoltaic (PV) cells, there is potential for this technology to approach grid price parity. We employ a panel design in which the front facing PV cells collect both direct and concentrated light, ensuring a gain factor greater than one. This also allows for flexibility in determining the placement and percentage coverage of PV cells during the design process to balance reabsorption losses against the power output and level of light concentration desired. To aid in design optimization, a Monte-Carlo ray tracing program was developed to study the transport of photons and loss mechanisms in LSC panels. The program imports measured absorption/emission spectra and transmission coefficients as simulation parameters, with interactions of photons in the panel determined by comparing calculated probabilities with random number generators. LSC panels with multiple dyes or layers can also be simulated. Analysis of the results reveals optimal panel dimensions and PV cell layouts for maximum power output for a given dye concentration, absorption/emission spectrum and quantum efficiency.
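The core decision logic described above, choosing a photon's fate by comparing a probability against a uniform random number, can be sketched as below. All probabilities, outcome labels and the function name are illustrative placeholders, not values from the authors' program.

```python
import random

def photon_fate(p_absorb_dye, quantum_yield, p_escape_cone, rng=random.random):
    """Decide one dye-interaction outcome for a photon in an LSC waveguide."""
    if rng() < p_absorb_dye:              # photon absorbed by the dye?
        if rng() < quantum_yield:         # re-emitted rather than quenched?
            if rng() < p_escape_cone:     # emitted into the escape cone?
                return "reemitted-escaped"
            return "reemitted-trapped"    # guided toward a PV cell
        return "lost"                     # non-radiative loss
    return "transmitted"                  # passes through unabsorbed

random.seed(1)
fates = [photon_fate(0.7, 0.95, 0.26) for _ in range(10_000)]
trapped_fraction = fates.count("reemitted-trapped") / len(fates)
```

Repeating such draws over many photons, with the probabilities taken from the measured spectra, yields the loss budget and optimal geometry the abstract refers to.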
Monte Carlo evaluation of DNA fragmentation spectra induced by different radiation qualities.
Alloni, D; Campa, A; Belli, M; Esposito, G; Mariotti, L; Liotta, M; Friedland, W; Paretzke, H; Ottolenghi, A
2011-02-01
The PARTRAC code has been continuously developed over the last several years. It is a Monte Carlo code based on an event-by-event description of the interactions taking place between the ionising radiation and liquid water, and in its present version it simulates the transport of photons, electrons, protons, helium and heavier ions. This is combined with an atom-by-atom representation of the biological target, i.e. the DNA target model of a diploid human fibroblast in its interphase (genome of 6 Gigabase pairs). DNA damage is produced by the events of energy deposition, either directly, if they occur in the volume occupied by the sugar-phosphate backbone, or indirectly, if this volume is reached by radiation-induced radicals. This requires the determination of the probabilities of occurrence of DNA damage, for which experimental data are essential. However, after the adjustment of the relevant parameters through the comparison of the simulation data with the DNA fragmentation induced by photon irradiation, the code has been used without further parameter adjustments, and the comparison with the fragmentation induced by charged particle beams has validated the code. In this paper, the results obtained for the DNA fragmentation induced by gamma rays and by charged particle beams of various LET are shown, with particular attention to the production of very small fragments that are not detected in experiments. PMID:21084331
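How a fragment-size spectrum follows from simulated double-strand break (DSB) positions can be shown with a minimal generic illustration (not PARTRAC code): two adjacent breaks bound one fragment, so clustered breaks produce the very small fragments discussed above.

```python
def fragment_lengths(genome_bp, dsb_positions):
    """Fragment sizes (bp) given DSB positions along a linear DNA segment."""
    edges = [0] + sorted(dsb_positions) + [genome_bp]
    # each consecutive pair of edges bounds one fragment
    return [b - a for a, b in zip(edges, edges[1:])]

# e.g. breaks at 10 and 40 bp on a 100 bp segment give fragments of 10, 30, 60 bp
sizes = fragment_lengths(100, [40, 10])
```

Histogramming such lengths over many simulated tracks gives the fragmentation spectrum that is compared against experiment, including the sub-detection-threshold fragments.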
NASA Astrophysics Data System (ADS)
Davidson, S.; Cui, J.; Followill, D.; Ibbott, G.; Deasy, J.
2008-02-01
The Dose Planning Method (DPM) is one of several 'fast' Monte Carlo (MC) computer codes designed to produce an accurate dose calculation for advanced clinical applications. We have developed a flexible machine modeling process and validation tests for open-field and IMRT calculations. To complement the DPM code, a practical and versatile source model has been developed, whose parameters are derived from a standard set of planning system commissioning measurements. The primary photon spectrum and the spectrum resulting from the flattening filter are modeled by a Fatigue function, cut-off by a multiplying Fermi function, which effectively regularizes the difficult energy spectrum determination process. Commonly-used functions are applied to represent the off-axis softening, increasing primary fluence with increasing angle ('the horn effect'), and electron contamination. The patient dependent aspect of the MC dose calculation utilizes the multi-leaf collimator (MLC) leaf sequence file exported from the treatment planning system DICOM output, coupled with the source model, to derive the particle transport. This model has been commissioned for Varian 2100C 6 MV and 18 MV photon beams using percent depth dose, dose profiles, and output factors. A 3-D conformal plan and an IMRT plan delivered to an anthropomorphic thorax phantom were used to benchmark the model. The calculated results were compared to Pinnacle v7.6c results and measurements made using radiochromic film and thermoluminescent detectors (TLD).
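The spectrum parameterization described above can be sketched as follows. The abstract names a "Fatigue function" cut off by a multiplying Fermi function; here we assume the fatigue-life (Birnbaum-Saunders) density as the fatigue term, and all parameter values are illustrative, not the commissioned ones.

```python
import math

def fatigue_pdf(e, alpha, beta):
    """Fatigue-life (Birnbaum-Saunders) density in photon energy e (MeV)."""
    z = (math.sqrt(e / beta) - math.sqrt(beta / e)) / alpha
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (math.sqrt(e / beta) + math.sqrt(beta / e)) / (2.0 * alpha * e) * phi

def fermi_cutoff(e, e_max, width):
    """Fermi function forcing the spectrum smoothly to zero near e_max."""
    return 1.0 / (1.0 + math.exp((e - e_max) / width))

def spectrum(e, alpha=0.8, beta=1.5, e_max=6.0, width=0.1):
    # smooth peaked spectrum, regularized by the multiplicative cut-off
    return fatigue_pdf(e, alpha, beta) * fermi_cutoff(e, e_max, width)
```

The multiplicative cut-off is what regularizes the otherwise ill-posed spectrum fit: the fatigue term controls the peak shape while the Fermi term pins the endpoint, so the two sets of parameters are only weakly coupled.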
Papadimitroulas, Panagiotis; Loudos, George; Nikiforidis, George C.; Kagadis, George C. [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 265 04 (Greece) and Department of Medical Instruments Technology, Technological Educational institute of Athens, Ag. Spyridonos Street, Egaleo GR 122 10, Athens (Greece); Department of Medical Instruments Technology, Technological Educational institute of Athens, Ag. Spyridonos Street, Egaleo GR 122 10, Athens (Greece); Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 265 04 (Greece)
2012-08-15
Purpose: GATE is a Monte Carlo simulation toolkit based on the Geant4 package, widely used for many medical physics applications, including SPECT and PET image simulation and more recently CT image simulation and patient dosimetry. The purpose of the current study was to calculate dose point kernels (DPKs) using GATE, compare them against reference data, and finally produce a complete dataset of the total DPKs for the most commonly used radionuclides in nuclear medicine. Methods: Patient-specific absorbed dose calculations can be carried out using Monte Carlo simulations. The latest version of GATE extends its applications to Radiotherapy and Dosimetry. Comparison of the proposed method for the generation of DPKs was performed for (a) monoenergetic electron sources, with energies ranging from 10 keV to 10 MeV, (b) beta emitting isotopes, e.g., {sup 177}Lu, {sup 90}Y, and {sup 32}P, and (c) gamma emitting isotopes, e.g., {sup 111}In, {sup 131}I, {sup 125}I, and {sup 99m}Tc. Point isotropic sources were simulated at the center of a sphere phantom, and the absorbed dose was stored in concentric spherical shells around the source. Evaluation was performed with already published studies for different Monte Carlo codes namely MCNP, EGS, FLUKA, ETRAN, GEPTS, and PENELOPE. A complete dataset of total DPKs was generated for water (equivalent to soft tissue), bone, and lung. This dataset takes into account all the major components of radiation interactions for the selected isotopes, including the absorbed dose from emitted electrons, photons, and all secondary particles generated from the electromagnetic interactions. Results: GATE comparison provided reliable results in all cases (monoenergetic electrons, beta emitting isotopes, and photon emitting isotopes). The observed differences between GATE and other codes are less than 10% and comparable to the discrepancies observed among other packages. 
The produced DPKs are in very good agreement with the already published data, which allowed us to produce a unique DPKs dataset using GATE. The dataset contains the total DPKs for {sup 67}Ga, {sup 68}Ga, {sup 90}Y, {sup 99m}Tc, {sup 111}In, {sup 123}I, {sup 124}I, {sup 125}I, {sup 131}I, {sup 153}Sm, {sup 177}Lu, {sup 186}Re, and {sup 188}Re generated in water, bone, and lung. Conclusions: In this study, the authors have checked GATE's reliability for absorbed dose calculation when transporting different kinds of particles, which indicates its robustness for dosimetry applications. A novel dataset of DPKs is provided, which can be applied in patient-specific dosimetry using analytical point kernel convolution algorithms.
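The scoring geometry described above, an isotropic point source with absorbed dose tallied in concentric spherical shells, can be sketched with a deliberately crude model (not GATE itself): interaction distances are drawn from an exponential law and each "photon" deposits one unit of energy at its first interaction point. All names and coefficients are illustrative.

```python
import math
import random

def score_dpk(n_photons, mu, shell_cm, n_shells, seed=0):
    """Tally energy per unit volume in concentric shells around a point source."""
    rng = random.Random(seed)
    edep = [0.0] * n_shells
    for _ in range(n_photons):
        # radial distance to first interaction (cm), exponential law
        r = -math.log(1.0 - rng.random()) / mu
        shell = int(r / shell_cm)
        if shell < n_shells:
            edep[shell] += 1.0            # one unit of deposited energy
    kernel = []
    for i in range(n_shells):
        r_in, r_out = i * shell_cm, (i + 1) * shell_cm
        vol = 4.0 / 3.0 * math.pi * (r_out**3 - r_in**3)
        kernel.append(edep[i] / vol)      # energy per cm^3 in this shell
    return kernel

k = score_dpk(100_000, mu=0.2, shell_cm=0.5, n_shells=10)
```

A full DPK calculation replaces the single-interaction deposition with complete electron-photon transport, but the shell-volume normalization that turns tallies into a kernel is the same.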
Bahram Jalali; Sasan Fathpour
2006-01-01
After dominating the electronics industry for decades, silicon is on the verge of becoming the material of choice for the photonics industry: the traditional stronghold of III-V semiconductors. Stimulated by a series of recent breakthroughs and propelled by increasing investments by governments and the private sector, silicon photonics is now the most active discipline within the field of integrated optics.
NSDL National Science Digital Library
Funded through a three-year grant from the Advanced Technological Education (ATE) program of the National Science Foundation (NSF), Project PHOTON2 builds on the highly successful "Alliance" model developed through the previous Project PHOTON. In both projects, educators from several geographic locations (four to six regions nationally) are brought together to facilitate photonics technology education at their institutions that is intelligently developed and seamlessly articulated. The "Alliances" consist of four to six participants per region, including high school and two- and four-year college science, technology, engineering, and math instructors, as well as their institutions' career and admissions counselors. On this site, visitors will find curriculum materials, information about the PHOTON2 laboratory kit and careers in photonics, links to external tutorials and applets, and societies and organizations. Visitors can also find out more about the project, its team, newsletter, conference papers, workshop, and a distance learning course for educators.