Monte Carlo electron/photon transport
Mack, J.M.; Morel, J.E.; Hughes, H.G.
1985-01-01
A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs.
Adjoint electron-photon transport Monte Carlo calculations with ITS
Lorence, L.J.; Kensek, R.P.; Halbleib, J.A.; Morel, J.E.
1995-02-01
A general adjoint coupled electron-photon Monte Carlo code for solving the Boltzmann-Fokker-Planck equation has recently been created. It is a modified version of ITS 3.0, a coupled electron-photon Monte Carlo code that has worldwide distribution. The applicability of the new code to radiation-interaction problems of the type found in space environments is demonstrated.
The 3-D Monte Carlo neutron and photon transport code MCMG and its algorithms
Deng, L.; Hu, Z.; Li, G.; Li, S.; Liu, Z.
2012-07-01
The 3-D Monte Carlo neutron and photon transport parallel code MCMG has been developed. A new collision mechanism based on materials rather than individual nuclides has been added to the code. Geometry cells and surfaces can be dynamically extended. A combination of multigroup and continuous cross-section transport has been developed. The multigroup scattering is expandable to P5, and upscattering is considered. Various multigroup libraries can easily be equipped in the code. Results agree with experiments and with the MCNP code for a series of models. MCMG is a factor of 2-4 faster than the MCNP code. (authors)
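The material-based collision mechanism described above amounts to sampling flights with a material-level macroscopic cross section rather than summing per-nuclide data at every step. A minimal sketch of that idea, using hypothetical two-nuclide data (not taken from MCMG):

```python
import math
import random

def macroscopic_xs(number_densities, micro_xs):
    """Material-level macroscopic cross section: Sigma = sum_i N_i * sigma_i.
    Densities in atoms/(barn*cm) and cross sections in barns give Sigma in 1/cm."""
    return sum(n * s for n, s in zip(number_densities, micro_xs))

def sample_path_length(sigma_t, rng):
    """Distance to the next collision, d = -ln(xi) / Sigma_t."""
    return -math.log(rng.random()) / sigma_t

rng = random.Random(42)
# hypothetical material: two nuclides, N = 0.02 and 0.04, sigma = 5.0 and 2.5 barns
sigma_t = macroscopic_xs([0.02, 0.04], [5.0, 2.5])       # 0.2 cm^-1
paths = [sample_path_length(sigma_t, rng) for _ in range(100_000)]
mean_free_path = sum(paths) / len(paths)                 # should approach 1/0.2 = 5 cm
print(round(mean_free_path, 1))
```

Collapsing nuclide data into one material-level total once, instead of summing over nuclides at every flight, is one way such a scheme can save time; the abstract does not detail MCMG's exact implementation.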
Monte Carlo photon transport on vector and parallel supercomputers: Final report
Martin, W.R.; Nowak, P.F.
1987-09-30
The vectorized Monte Carlo photon transport code VPHOT has been developed for the Cray-1, Cray X-MP, and Cray-2 computers. The effort in the current project was devoted to multitasking the VPHOT code and implementing it on the Cray X-MP and Cray-2 parallel-vector supercomputers, examining the robustness of the vectorized algorithm under changes in the physics of the test problems, and evaluating the efficiency of alternative algorithms, such as the ''stack-driven'' algorithm of Bobrowicz, for possible incorporation into VPHOT. These tasks are discussed in this paper. 4 refs.
TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code
Cullen, D.E.
1997-11-22
TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
MCNP: a general Monte Carlo code for neutron and photon transport
Forster, R.A.; Godfrey, T.N.K.
1985-01-01
MCNP is a very general Monte Carlo neutron-photon transport code system with approximately 250 person-years of Group X-6 code development invested. It is extremely portable, user-oriented, and a true production code, as it is used about 60 Cray hours per month by about 150 Los Alamos users. It has as its database the best cross-section evaluations available. MCNP contains state-of-the-art traditional and adaptive Monte Carlo techniques to be applied to the solution of an ever-increasing number of problems. Excellent user-oriented documentation is available for all facets of the MCNP code system. Many useful and important variants of MCNP exist for special applications. The Radiation Shielding Information Center (RSIC) in Oak Ridge, Tennessee is the contact point for worldwide MCNP code and documentation distribution. A much improved MCNP Version 3A will be available in the fall of 1985, along with new and improved documentation. Future directions in MCNP development will change the meaning of MCNP to Monte Carlo N-Particle, where N particle varieties will be transported.
ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2008-04-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through dynamic memory allocation to reduce the need for users to modify and recompile the code.
COMET-PE as an Alternative to Monte Carlo for Photon and Electron Transport
NASA Astrophysics Data System (ADS)
Hayward, Robert M.; Rahnema, Farzad
2014-06-01
Monte Carlo methods are a central component of radiotherapy treatment planning, shielding design, detector modeling, and other applications. Long calculation times, however, can limit the usefulness of these purely stochastic methods. The coarse mesh method for photon and electron transport (COMET-PE) provides an attractive alternative. By combining stochastic pre-computation with a deterministic solver, COMET-PE achieves accuracy comparable to Monte Carlo methods in only a fraction of the time. The method's implementation has been extended to 3D, and in this work, it is validated by comparison to DOSXYZnrc using a photon radiotherapy benchmark. The comparison demonstrates excellent agreement; of the voxels that received more than 10% of the maximum dose, over 97.3% pass a 2% / 2mm acceptance test and over 99.7% pass a 3% / 3mm test. Furthermore, the method is over an order of magnitude faster than DOSXYZnrc and is able to take advantage of both distributed-memory and shared-memory parallel architectures for increased performance.
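The 2%/2 mm and 3%/3 mm criteria quoted above combine a dose tolerance with a distance-to-agreement search. As a much-simplified illustration (dose difference only, no spatial search; the voxel data are made up), a pass rate over voxels above a low-dose threshold can be computed like this:

```python
def pass_rate(ref_dose, calc_dose, dose_tol, dose_threshold):
    """Fraction of voxels whose dose agrees with the reference within
    dose_tol (relative to the maximum dose), counting only voxels that
    received more than dose_threshold of the maximum dose."""
    dmax = max(ref_dose)
    pairs = [(r, c) for r, c in zip(ref_dose, calc_dose) if r > dose_threshold * dmax]
    passed = sum(1 for r, c in pairs if abs(c - r) <= dose_tol * dmax)
    return passed / len(pairs)

ref_dose  = [1.00, 0.80, 0.50, 0.20, 0.05]   # reference (e.g., from DOSXYZnrc)
calc_dose = [1.01, 0.79, 0.55, 0.21, 0.50]   # code under test
print(pass_rate(ref_dose, calc_dose, dose_tol=0.02, dose_threshold=0.10))  # 0.75
```

A real gamma test also searches neighboring voxels within the distance criterion before declaring a failure, so it passes at least as many voxels as this plain dose-difference check.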
Space applications of the MITS electron-photon Monte Carlo transport code system
Kensek, R.P.; Lorence, L.J.; Halbleib, J.A.; Morel, J.E.
1996-07-01
The MITS multigroup/continuous-energy electron-photon Monte Carlo transport code system has matured to the point that it is capable of addressing more realistic three-dimensional adjoint applications. It is first employed to efficiently predict point doses as a function of source energy for simple three-dimensional experimental geometries exposed to simulated uniform isotropic planar sources of monoenergetic electrons up to 4.0 MeV. Results are in very good agreement with experimental data. It is then used to efficiently simulate dose to a detector in a subsystem of a GPS satellite due to its natural electron environment, employing a relatively complex model of the satellite. The capability for survivability analysis of space systems is demonstrated, and results are obtained with and without variance reduction.
Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario ''Carlos Haya'', Avda. Carlos Haya s/n, E-29010 Malaga (Spain); Unidad de Radiofisica Hospitalaria, Hospital Xanit Internacional, Avda. de los Argonautas s/n, E-29630 Benalmadena (Malaga) (Spain); NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Hufelandstr. 55, D-45122 Essen (Germany); Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)
2010-07-15
Purpose: In this work, the authors describe an approach developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of directional bremsstrahlung splitting, and azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within approximately 3%/0.3 mm for the central-axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed within other approaches common in this field. Conclusions: The new approach is competitive with those previously used for this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
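Two of the techniques listed, splitting and Russian roulette, are weight-based games that must conserve expected particle weight. A minimal, generic sketch of that pair (the importance ratio is illustrative; this is not the authors' ant-colony-driven implementation):

```python
import random

def split_or_roulette(weight, ratio, rng):
    """Particle crossing an importance boundary with ratio = I_new / I_old.
    ratio > 1: split into ~ratio copies, each of weight w/ratio.
    ratio < 1: Russian roulette, surviving with probability ratio at weight w/ratio.
    In both cases the expected total weight equals the input weight."""
    if ratio >= 1.0:
        n = int(ratio)
        if rng.random() < ratio - n:      # handle the fractional part of the ratio
            n += 1
        return [weight / ratio] * n
    if rng.random() < ratio:              # roulette: survive with boosted weight ...
        return [weight / ratio]
    return []                             # ... or be killed

rng = random.Random(1)
trials = 200_000
avg = sum(sum(split_or_roulette(1.0, 2.5, rng)) for _ in range(trials)) / trials
print(round(avg, 2))  # expected total weight stays 1.0
```

Because both games are weight-conserving on average, they change only the variance of a tally, not its expectation, which is what makes them safe to steer with an importance map.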
Parallel Monte Carlo Electron and Photon Transport Simulation Code (PMCEPT code)
NASA Astrophysics Data System (ADS)
Kum, Oyeon
2004-11-01
Simulations for customized cancer radiation treatment planning for each patient are very useful for both patient and doctor. These simulations can be used to find the most effective treatment with the least possible dose to the patient. Such a system, so-called ``Doctor by Information Technology'', will be useful for providing high-quality medical services everywhere. However, the large amount of computing time required by the well-known general-purpose Monte Carlo (MC) codes has prevented their use for routine dose-distribution calculations in customized radiation treatment planning. The optimal solution for providing ``accurate'' dose distributions within an ``acceptable'' time limit is to develop a parallel simulation algorithm on a Beowulf PC cluster, because it is the most accurate, efficient, and economical. I developed a parallel MC electron and photon transport simulation code based on the standard MPI message-passing interface. This algorithm solves the main difficulty of parallel MC simulation (overlapping random-number series in the different processors) by using multiple random-number seeds. The parallel results agreed well with the serial ones, and the parallel efficiency approached 100%, as expected.
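The multiple-seed strategy mentioned in the abstract can be sketched without MPI: give each rank its own generator seeded from a distinct value, so the per-rank streams do not repeat one another. The seed formula and rank count here are illustrative, and modern practice prefers counter-based or jump-ahead generators with guaranteed disjoint streams:

```python
import random

def make_rank_rng(rank, base_seed=12345):
    """One independent pseudo-random generator per (simulated) MPI rank.
    Mixing the rank into the seed keeps the per-rank streams distinct."""
    return random.Random(base_seed + 0x9E3779B9 * (rank + 1))

n_ranks = 4
streams = [make_rank_rng(r) for r in range(n_ranks)]
draws = [tuple(s.random() for _ in range(5)) for s in streams]
print(len(set(draws)))  # 4 -- every rank produced a different sequence
```

Each rank then runs its share of the histories with its own stream, and only the tallies are combined at the end, which is why the parallel efficiency can approach 100%.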
Development of a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport
NASA Astrophysics Data System (ADS)
Jia, Xun; Gu, Xuejun; Sempau, Josep; Choi, Dongju; Majumdar, Amitava; Jiang, Steve B.
2010-06-01
Monte Carlo simulation is the most accurate method for absorbed dose calculations in radiotherapy. Its efficiency still requires improvement for routine clinical applications, especially for online adaptive radiotherapy. In this paper, we report our recent development on a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport. We have implemented the dose planning method (DPM) Monte Carlo dose calculation package (Sempau et al 2000 Phys. Med. Biol. 45 2263-91) on the GPU architecture under the CUDA platform. The implementation has been tested with respect to the original sequential DPM code on the CPU in phantoms with water-lung-water or water-bone-water slab geometry. A 20 MeV mono-energetic electron point source or a 6 MV photon point source is used in our validation. The results demonstrate adequate accuracy of our GPU implementation for both electron and photon beams in the radiotherapy energy range. Speed-up factors of about 5.0-6.6 times have been observed, using an NVIDIA Tesla C1060 GPU card against a 2.27 GHz Intel Xeon CPU processor.
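What makes this workload map well onto a GPU is that particle histories are independent: one thread per history, with no communication until the tally. The slab-transmission toy below (pure Python, not DPM or CUDA code) shows the per-history loop that a GPU thread grid would execute in parallel; the cross section and thickness are made up:

```python
import math
import random

def transmit_fraction(sigma_t, thickness, n_histories, rng):
    """Fraction of photons that cross a purely absorbing slab.
    Each history is independent -- the structure a GPU exploits by
    assigning one thread per history."""
    transmitted = sum(
        1 for _ in range(n_histories)
        if -math.log(rng.random()) / sigma_t > thickness   # free flight past the slab
    )
    return transmitted / n_histories

rng = random.Random(7)
est = transmit_fraction(sigma_t=0.5, thickness=2.0, n_histories=200_000, rng=rng)
# analytic answer is exp(-sigma_t * thickness) = exp(-1) ~ 0.368
print(round(est, 2))
```

Real coupled electron-photon transport adds scattering, secondary particles, and divergent branch behavior per thread, which is why the observed GPU speed-ups (about 5-6x here) are far below the raw core-count ratio.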
Badal, Andreu; Badano, Aldo [Division of Imaging and Applied Mathematics, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, Maryland 20993-0002 (United States)
2009-11-15
Purpose: Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
The MOSSO Program for Monte Carlo simulation of electron and photon transport
Konyukov, V.V.; Krainyukov, V.I.; Maev, G.A.; Nosyrev, V.I.; Trufanov, A.I.
1995-03-01
A model was developed to simulate the transport of electrons, positrons, and γ-photons of energies ranging from 0.001 to 100 MeV in laminate multicomponent structures. The software environment created for this program facilitates the following: modeling the radiation source, preparing the structure to be investigated (materials and geometry), controlling the simulation of particle penetration through the layers, and analyzing the absorbed doses and the spectra of primary knocked-off atoms.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2004-06-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
The MC21 Monte Carlo Transport Code
Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H
2007-01-09
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2005-09-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
Kirk, B.L.
1985-12-01
The ITS (Integrated Tiger Series) Monte Carlo code package developed at Sandia National Laboratories and distributed as CCC-467/ITS by the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory (ORNL) consists of eight codes - the standard codes, TIGER, CYLTRAN, ACCEPT; the P-codes, TIGERP, CYLTRANP, ACCEPTP; and the M-codes ACCEPTM, CYLTRANM. The codes have been adapted to run on the IBM 3081, VAX 11/780, CDC-7600, and Cray 1 with the use of the update emulator UPEML. This manual should serve as a guide to a user running the codes on IBM computers having 370 architecture. The cases listed were tested on the IBM 3033, under the MVS operating system using the VS Fortran Level 1.3.1 compiler.
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons, and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
Improved geometry representations for Monte Carlo radiation transport.
Martin, Matthew Ryan (Cornell University)
2004-08-01
ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.
Recent advances in the Mercury Monte Carlo particle transport code
Brantley, P. S.; Dawson, S. A.; McKinley, M. S.; O'Brien, M. J.; Stevens, D. E.; Beck, B. R.; Jurgenson, E. D.; Ebbers, C. A.; Hall, J. M.
2013-07-01
We review recent physics and computational science advances in the Mercury Monte Carlo particle transport code under development at Lawrence Livermore National Laboratory. We describe recent efforts to enable a nuclear resonance fluorescence capability in the Mercury photon transport. We also describe recent work to implement a probability of extinction capability into Mercury. We review the results of current parallel scaling and threading efforts that enable the code to run on millions of MPI processes. (authors)
Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.
2000-03-01
The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.
Implementation of a Monte Carlo method to model photon conversion for solar cells
C. del Cañizo; I. Tobías; J. Perezbedmar; A. C. Pan; A. Luque
2008-01-01
A physical model describing different photon conversion mechanisms is presented in the context of photovoltaic applications. To solve the resulting system of equations, a Monte Carlo ray-tracing model is implemented, which takes into account the coupling of the photon transport phenomena to the non-linear rate equations describing luminescence. It also separates the generation of rays from the two very different
NOTE: An efficient framework for photon Monte Carlo treatment planning
NASA Astrophysics Data System (ADS)
Fix, Michael K.; Manser, Peter; Frei, Daniel; Volken, Werner; Mini, Roberto; Born, Ernst J.
2007-09-01
Currently photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure where many user interactions are needed. This means automation is needed for usage in clinical routine. In addition, because of the long computing time in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed resulting in a very flexible framework. By this means appropriate MC transport methods are assigned to different geometric regions by still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one out of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse providing all necessary information by means of Dicom streams was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown. 
Thereby, comparisons are performed between MC calculated dose distributions and those calculated by a pencil beam or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows a widespread use for all kinds of investigations from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules keeping the system highly flexible and efficient. This work was presented in part at the First European Workshop on Monte Carlo Treatment Planning (EWG-MCTP) held in Gent, Belgium from 22 to 25 October 2006.
Precise Monte Carlo Simulation of Single-Photon Detectors
Mario Stipčević; Daniel J. Gauthier
2014-11-13
We demonstrate the importance and utility of Monte Carlo simulation of single-photon detectors. Devising an optimal simulation is strongly influenced by the particular application because of the complexity of modern, avalanche-diode-based single-photon detectors. Using a simple yet very demanding example of random number generation via detection of Poissonian photons exiting a beam splitter, we present a Monte Carlo simulation that faithfully reproduces the serial autocorrelation of random bits as a function of detection frequency over four orders of magnitude of the incident photon flux. We conjecture that this simulation approach can be easily modified for use in many other applications.
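The serial-autocorrelation effect reproduced above can be illustrated with a toy model: Poissonian arrivals, a 50/50 splitter choosing which of two detectors a photon hits, and a per-detector dead time. At high flux the detector that just fired is still blind, so consecutive bits become anticorrelated. All parameter values below are illustrative, not taken from the paper:

```python
import random

def simulate_bits(rate_hz, dead_time_s, n_bits, rng):
    """Random bits from Poissonian photons at a 50/50 beam splitter.
    Each of the two detectors is blind for dead_time_s after it fires."""
    bits, t = [], 0.0
    blocked = [0.0, 0.0]                      # per-detector recovery times
    while len(bits) < n_bits:
        t += rng.expovariate(rate_hz)         # next Poissonian arrival
        d = 0 if rng.random() < 0.5 else 1    # beam splitter picks a detector
        if t >= blocked[d]:                   # detector is live: record a bit
            bits.append(d)
            blocked[d] = t + dead_time_s
    return bits

def lag1_autocorr(bits):
    """Serial (lag-1) autocorrelation of a bit sequence."""
    n, mean = len(bits), sum(bits) / len(bits)
    num = sum((bits[i] - mean) * (bits[i + 1] - mean) for i in range(n - 1))
    den = sum((b - mean) ** 2 for b in bits)
    return num / den

rng = random.Random(3)
# flux chosen so that rate * dead_time = 2: a strong dead-time regime
bits = simulate_bits(rate_hz=2e7, dead_time_s=100e-9, n_bits=50_000, rng=rng)
print(lag1_autocorr(bits) < 0)  # anticorrelated bits at high flux
```

At low flux the same model yields near-zero autocorrelation, which is how a simulation like this can trace the detection-frequency dependence the abstract describes.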
Applications of the Monte Carlo radiation transport toolkit at LLNL
NASA Astrophysics Data System (ADS)
Sale, Kenneth E.; Bergstrom, Paul M., Jr.; Buck, Richard M.; Cullen, Dermot; Fujino, D.; Hartmann-Siantar, Christine
1999-09-01
Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards, and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e., money) can be realized. In addition, it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful, more problems can be accurately modeled. Second, as computing power becomes cheaper, Monte Carlo methods become accessible more widely. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions.
Parallel and Portable Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.
1997-08-01
We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
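Monte Carlo k-eigenvalue calculations like MC++'s follow a generation-to-generation scheme: fission sites from one generation source the next, and k is the ratio of successive generation weights. The deterministic skeleton of that iteration is power iteration, sketched here on a made-up 2x2 fission matrix (not MC++'s multigroup data):

```python
def power_iteration(fission_matrix, iters=200):
    """Dominant eigenvalue (k) by power iteration: apply the operator,
    measure the generation-to-generation growth ratio, renormalize."""
    vec = [1.0] * len(fission_matrix)
    k = 1.0
    for _ in range(iters):
        nxt = [sum(m * v for m, v in zip(row, vec)) for row in fission_matrix]
        k = sum(nxt) / sum(vec)          # growth of total source weight
        vec = [x / k for x in nxt]       # renormalize the source shape
    return k

# hypothetical fission matrix with eigenvalues 1.1 and 0.6
k = power_iteration([[0.9, 0.3], [0.2, 0.8]])
print(round(k, 3))  # 1.1
```

In the Monte Carlo version the matrix-vector product is replaced by transporting sampled histories, so the iterates are noisy, but the fixed point being sought is the same dominant eigenpair.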
MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
1989-01-01
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, William P. (Tracy, CA); Hartmann-Siantar, Christine L. (San Ramon, CA); Rathkopf, James A. (Livermore, CA)
1999-01-01
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.
1999-02-09
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.
Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport
McKinley, M S; Brooks III, E D; Daffin, F
2004-12-13
Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
• Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
• Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
• Global particle find: if particles are on the wrong processor, globally route them to the correct processor based on the particle coordinate and the background domain decomposition.
• Supporting algorithms: visualizing constructive solid geometry, sourcing particles, detecting completion of particle streaming communication, and spatial redecomposition.
These are among the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms on up to 2 million MPI processes on the Sequoia supercomputer.
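For a rectilinear decomposition, the "global particle find" step reduces to mapping a particle coordinate onto the rank that owns the enclosing sub-domain. The following is a minimal sketch under assumed conventions (sorted cut planes, x-fastest rank numbering); the names are illustrative, not taken from the dissertation:

```python
import bisect

def owner_rank(x, y, z, cuts_x, cuts_y, cuts_z):
    """Return the rank owning the point (x, y, z).

    cuts_* are sorted lists of domain cut planes: sub-domain (i, j, k)
    covers [cuts_x[i], cuts_x[i+1]) and so on, and ranks are numbered
    x-fastest: rank = (k * ny + j) * nx + i.
    """
    i = bisect.bisect_right(cuts_x, x) - 1
    j = bisect.bisect_right(cuts_y, y) - 1
    k = bisect.bisect_right(cuts_z, z) - 1
    nx, ny, nz = len(cuts_x) - 1, len(cuts_y) - 1, len(cuts_z) - 1
    if not (0 <= i < nx and 0 <= j < ny and 0 <= k < nz):
        raise ValueError("particle outside the global domain")
    return (k * ny + j) * nx + i
```

Each processor can evaluate such a lookup locally for any stray particle and send it directly to its owner, avoiding a global search.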
Benchmarking of Proton Transport in Super Monte Carlo Simulation Program
NASA Astrophysics Data System (ADS)
Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican
2014-06-01
The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy range can be handled. Bi-directional automatic conversion between general CAD models and fully formed SuperMC input files is supported by MCAM, a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamic 3D data sets and geometry models is supported by RVIS, a nuclear radiation virtual simulation and assessment system. Continuous-energy cross-section data from the hybrid evaluated nuclear data library HENDL are used to support the simulation. Neutronic fixed-source and criticality design-parameter calculations for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in our former version of SuperMC. Recently, proton transport has also been integrated into SuperMC for energies up to 10 GeV. The physical processes considered for proton transport include electromagnetic processes and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes.
In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat intermediate-energy nuclear reactions of protons. Other hadronic models are also under development. The benchmarking of proton transport in SuperMC was performed against the Accelerator Driven subcritical System (ADS) benchmark data and model released by the IAEA under its Coordinated Research Project (CRP). The incident proton energy is 1.0 GeV. The neutron flux and energy deposition were calculated. The results simulated using SuperMC and FLUKA agree within the statistical uncertainty inherent in the Monte Carlo method. Proton transport in SuperMC has also been applied to the China Lead-Alloy cooled Reactor (CLEAR), designed by the FDS Team, for the calculation of spallation reactions in the target.
Monte Carlo method for photon heating using temperature-dependent optical properties.
Slade, Adam Broadbent; Aguilar, Guillermo
2015-02-01
The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems in which the temperature varies greatly, such as laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system with temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. Validation of the simulation was done by comparison to established Monte Carlo simulations using constant properties, and by comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties vary with temperature. The difference in results between the variable-property and constant-property methods for the representative system of laser-heated silicon can become larger than 100 K. This simulation returns more accurate results of optical irradiation absorption in a material that undergoes a large change in temperature. This increased accuracy in simulated results leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes. PMID:25488656
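The feedback loop described above can be sketched in a 1-D toy form. This is not the authors' code: the linear mu_a(T) and the heat update (a temperature rise proportional to absorbed energy, with no conduction) are invented for illustration. What it does show is the segmented photon step, which re-samples the free path at each voxel boundary using the local, temperature-dependent absorption coefficient (valid because the exponential path-length law is memoryless):

```python
import math
import random

def mu_a(temp):
    """Hypothetical linear temperature dependence of absorption (mm^-1)."""
    return 0.5 + 0.002 * (temp - 20.0)

def heat_loop(n_photons=20000, n_voxels=10, dz=1.0, steps=3, seed=2):
    """Launch photons with the *current* local mu_a, tally absorption per
    voxel, then raise voxel temperatures before the next pass."""
    rng = random.Random(seed)
    temps = [20.0] * n_voxels
    for _ in range(steps):
        absorbed = [0.0] * n_voxels
        for _ in range(n_photons):
            z = 0.0
            while True:
                v = int(z / dz)
                if v >= n_voxels:
                    break                              # photon exits the slab
                # segmented step: sample a free path with this voxel's mu_a;
                # on crossing a boundary, re-sample with the next voxel's value
                s = -math.log(rng.random()) / mu_a(temps[v])
                if z + s < (v + 1) * dz:
                    absorbed[v] += 1.0                 # absorbed in voxel v
                    break
                z = (v + 1) * dz                       # cross the boundary
        for v in range(n_voxels):
            temps[v] += 25.0 * absorbed[v] / n_photons # toy temperature rise
    return temps
```

Because mu_a grows with temperature here, the front voxels absorb a little more on each pass, which is the coupling a constant-property simulation misses.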
Photon beam description in PEREGRINE for Monte Carlo dose calculations
Cox, L. J., LLNL
1997-03-04
The goal of PEREGRINE is to provide the capability for accurate, fast Monte Carlo calculation of radiation therapy dose distributions, both for routine clinical use and for research into the efficacy of improved dose calculation. An accurate, efficient method of describing and sampling radiation sources is needed, and a simple, flexible solution is provided. The teletherapy source package for PEREGRINE, coupled with state-of-the-art Monte Carlo simulations of treatment heads, makes it possible to describe any teletherapy photon beam to the precision needed for highly accurate Monte Carlo dose calculations in complex clinical configurations that use standard patient modifiers such as collimator jaws, wedges, blocks, and/or multi-leaf collimators. Generic beam descriptions for a class of treatment machines can readily be adjusted to yield dose calculations that match specific clinical sites.
Vertical Photon Transport in Cloud Remote Sensing Problems
NASA Technical Reports Server (NTRS)
Platnick, S.
1999-01-01
Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting based on the maximum penetration of reflected photons proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is derived to accurately determine both weightings, avoiding time-consuming Monte Carlo methods. Superposition calculations are made for a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Effective radius retrievals from modeled vertically inhomogeneous liquid water clouds are then made using the standard near-infrared bands, and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.
Monte Carlo Estimate to Improve Photon Energy Spectrum Reconstruction
NASA Astrophysics Data System (ADS)
Sawchuk, S.
Improvements to planning radiation treatment for cancer patients and quality control of medical linear accelerators (linacs) can be achieved with explicit knowledge of the photon energy spectrum. Monte Carlo (MC) simulations of linac treatment heads and experimental attenuation analysis are among the most popular ways of obtaining these spectra. Attenuation methods, which combine measurements under narrow beam geometry with the associated calculation techniques to reconstruct the spectrum from the acquired data, are very practical in a clinical setting and can also serve to validate MC simulations. A novel reconstruction method [1], which has been modified [2], uses Simpson's rule (SR) to approximate and discretize (1)
Fiber transport of spatially entangled photons
NASA Astrophysics Data System (ADS)
Löffler, W.; Eliel, E. R.; Woerdman, J. P.; Euser, T. G.; Scharrer, M.; Russell, P.
2012-03-01
High-dimensional entangled photon pairs are interesting for quantum information and cryptography: compared to the well-known 2D polarization case, the stronger non-local quantum correlations could improve noise resistance or security, and the larger amount of information per photon increases the available bandwidth. One implementation is to use entanglement in the spatial degree of freedom of twin photons created by spontaneous parametric down-conversion, which is equivalent to orbital angular momentum entanglement; this has proven to be an excellent model system. The use of optical fiber technology for distribution of such photons has only very recently been practically demonstrated and is of fundamental and applied interest. It poses a big challenge compared to the established time and frequency domain methods: for spatially entangled photons, fiber transport requires the use of multimode fibers, and mode coupling and intermodal dispersion therein must be minimized so as not to destroy the spatial quantum correlations. We demonstrate that these shortcomings of conventional multimode fibers can be overcome by using a hollow-core photonic crystal fiber, which follows the paradigm of mimicking free-space transport as well as possible, and we are able to confirm entanglement of the fiber-transported photons. Fiber transport of spatially entangled photons is still largely unexplored; therefore we discuss the main complications, the interplay of intermodal dispersion and mode mixing, the influence of external stress and core deformations, and consider the pros and cons of various fiber types.
Status of Vectorized Monte Carlo for Particle Transport Analysis
William R. Martin; Forrest B. Brown
1987-01-01
The conventional particle transport Monte Carlo algorithm is ill suited for modern vector supercomputers because the random nature of the particle transport process in the history-based algorithm inhibits construction of vectors. An alternative, event-based algorithm is suitable for vectorization and has been used recently to achieve impressive gains in performance on vector supercomputers. This review
Juste, B; Miro, R; Campayo, J M; Diez, S; Verdu, G
2008-01-01
The present work is centered on reconstructing, by means of a scatter analysis method, the primary beam photon spectrum of a linear accelerator. This technique is based on irradiating the isocenter of a rectangular block of methacrylate placed at 100 cm distance from the surface and measuring scattered particles around the plastic at several specific positions with different scatter angles. The MCNP5 Monte Carlo code has been used to simulate the particle transport of mono-energetic beams to register the scatter measurement after contact with the attenuator. Measured ionization values allow calculating the spectrum as the sum of mono-energetic individual energy bins using the Schiff bremsstrahlung model. The measurements have been made on an Elekta Precise linac using a 6 MeV photon beam. Relative depth and profile dose curves calculated in a water phantom using the reconstructed spectrum agree with experimentally measured dose data to within 3%. PMID:19163410
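The bin-wise reconstruction described above (the spectrum expressed as a sum of mono-energetic bins fitted to scatter measurements) is, at its core, a small linear inverse problem. The sketch below is an illustrative stand-in, not the authors' Schiff-model procedure: the response matrix is synthetic, and a real fit would also enforce non-negative weights. R[i][j] plays the role of the simulated detector reading at position i per unit fluence in mono-energetic bin j:

```python
def solve_bins(R, m):
    """Recover energy-bin weights w from measurements m with R w = m,
    via the normal equations (R^T R) w = R^T m and Gaussian elimination
    with partial pivoting."""
    n = len(R[0])
    A = [[sum(R[i][a] * R[i][b] for i in range(len(R))) for b in range(n)]
         for a in range(n)]
    b = [sum(R[i][a] * m[i] for i in range(len(R))) for a in range(n)]
    for col in range(n):                       # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):             # back substitution
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w
```

With more measurement positions than energy bins, the least-squares structure averages out measurement noise, which is why attenuation methods use several scatter angles rather than one per bin.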
Resonance fluorescence near a photonic band edge: Dressed-state Monte Carlo wave-function approach
John, Sajeev
Received 2 June 1997. We introduce a dressed-state Monte Carlo wave-function (MCWF) technique for treating resonance fluorescence near a photonic band edge.
COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT
W. R. MARTIN; F. B. BROWN
2001-03-01
Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo method (IMC) developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an ''exact'' method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.
Shift: A Massively Parallel Monte Carlo Radiation Transport Package
Pandya, Tara M [ORNL; Johnson, Seth R [ORNL; Davidson, Gregory G [ORNL; Evans, Thomas M [ORNL; Hamilton, Steven P [ORNL
2015-01-01
This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.
ITS: the integrated TIGER series of electron/photon transport codes-Version 3.0
John A. Halbleib; Ronald P. Kensek; Greg D. Valdez; Stephen M. Seltzer; Martin J. Berger
1992-01-01
The ITS system is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Version 3.0 is a major upgrade of the system with important improvements in the physical model. Improvements in the Monte Carlo codes
Computational methods of electron/photon transport
Mack, J.M.
1983-01-01
A review of computational methods simulating the non-plasma transport of electrons and their attendant cascades is presented. Remarks are mainly restricted to linearized formalisms at electron energies above 1 keV. The effectiveness of various methods is discussed, including moments, point-kernel, invariant imbedding, discrete-ordinates, and Monte Carlo. Future research directions and the potential impact on various aspects of science and engineering are indicated.
Electromagnetic energy transport in finite photonic structures.
de Dios-Leyva, M; Duque, C A; Drake-Pérez, J C
2014-06-01
We have derived, for oblique propagation, an equation relating the averaged energy flux density to energy fluxes arising in the process of scattering by a lossless finite photonic structure. The latter fluxes include those associated with the dispersion relation of the structure, reflection, and interference between the incident and reflected waves. We have also derived an explicit relation between the energy flux density and the group velocity, which provides a simple and systematic procedure for studying theoretically and experimentally the properties of energy transport through a wide variety of finite photonic structures. Such a relation may be regarded as a generalization of the corresponding one for infinite periodic systems to finite photonic structures. A finite, N-period photonic crystal was used to illustrate the usefulness of our results. PMID:24921471
Monte Carlo simulation of neutron transport in intense neutron fields
W. K. Matthes
2008-01-01
Common Monte Carlo (MC) codes for neutron transport are usually applied to neutron fields of low density under the assumption that the isotopic composition of the structural materials will not be changed in neutron reactions. This assumption is no longer valid in intense neutron fields, where an appreciable number of nuclei of the structural material may get transformed into other
Cheung, Joel Y.C.; Yu, K.N. [Gamma Knife Centre, Canossa Hospital, 1 Old Peak Road, Hong Kong (China); Department of Physics and Materials Science, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong, Hong Kong (China)
2006-01-15
In the algorithm of Leksell GAMMAPLAN (the treatment planning software of the Leksell Gamma Knife), scattered photons from the collimator system are presumed to have negligible effects on the Gamma Knife dosimetry. In this study, we used the EGS4 Monte Carlo (MC) technique to study the scattered photons coming out of the single beam channel of the Leksell Gamma Knife. The PRESTA (Parameter Reduced Electron-Step Transport Algorithm) version of the EGS4 (Electron Gamma Shower version 4) MC computer code was employed. We simulated the single beam channel of the Leksell Gamma Knife with the full geometry. Primary photons were sampled from within the 60Co source and radiated isotropically into a solid angle of 4π. The percentages of scattered photons among all photons reaching the phantom space using different collimators were calculated, with an average value of 15%. However, this significant amount of scattered photons contributes negligible effects to single beam dose profiles for the different collimators. Output spectra were calculated for the four different collimators. Increasing the efficiency of simulation by decreasing the semiaperture angle of the beam channel or the solid angle of the initial directions of primary photons will underestimate the scattered component of the photon fluence. The generated backscattered photons from within the 60Co source and the beam channel also contribute to the output spectra.
Monte-Carlo Study of Axonal Transport in a Neuron
NASA Astrophysics Data System (ADS)
Shrestha, Uttam; Yu, Clare; Jia, Zhiyuan; Erickson, Robert; Gross, Steven
2011-03-01
A living cell has an infrastructure much like that of a city. A key component is the transportation system that consists of roads (filaments) and molecular motors (proteins) that haul cargo along these roads. We will present a Monte Carlo simulation of intracellular transport inside an axon in which motor proteins carry cargos along microtubules and are able to switch from one microtubule to another. The breakdown of intracellular transport in neurons has been associated with neurodegenerative diseases such as Alzheimer's, Lou Gehrig's disease (ALS), and Huntington's disease. This work was supported by NIGMS grant number 5R01GM79156.
Ivannikov, A I; Tikunov, D D; Borysheva, N B; Trompier, F; Skvortsov, V G; Stepanenko, V F; Hoshi, M
2004-01-01
The experimental energy dependence of the electron paramagnetic resonance (EPR) radiation-induced signal at irradiation by photons in the energy range of 13 keV-1.25 MeV was analysed in terms of the absorbed dose in human tooth enamel. The latter was calculated using a Monte Carlo simulation of the photon and electron transport. The dependence of the calculated absorbed dose on the sample thickness was analysed. The EPR signal per unit absorbed dose in enamel was verified to have no energy dependence in the range of 37 keV-1.25 MeV. At 13 and 20 keV the EPR signal dose response was reduced by 8%, probably due to sample powdering. Dose-depth profiles in enamel samples irradiated by 1.25 MeV photons in polymethylmethacrylate and aluminium build-up materials were calculated. It was concluded that secondary electron equilibrium conditions are better fulfilled for irradiation in aluminium, which makes this material preferable for calibration. PMID:15103060
Optics and photonics used in road transportation
NASA Astrophysics Data System (ADS)
Gingras, Denis J.
1998-09-01
Photonics is ideal for precise, remote and contactless measurements in harsh conditions. Thanks to major breakthroughs in the technologies involved, optical sensing is becoming more compact, robust and affordable. The purpose of this paper is to provide an overview on the capabilities of photonics applied to road transportation problems. In particular we will consider four types of situations: (1) measurements for traffic analysis and surveillance, (2) measurements for road infrastructures diagnosis and quality assessment, (3) photonics in smart driving and intelligent vehicles and (4) measurements for other purposes (safety, inventories, tolls etc.). These topics will be discussed and illustrated by using the results of different projects that have been carried out at INO over the last few years. We will look at different challenges we had to face such as performing sensitive optical measurements in various outdoor illumination conditions and performing fast and accurate measurements without interfering with normal road traffic flow.
Photon-induced carrier transport in high efficiency midinfrared quantum cascade lasers
Mátyás, Alpár; Jirauschek, Christian; 10.1063/1.3608116
2011-01-01
A midinfrared quantum cascade laser with high wall-plug efficiency is analyzed by means of an ensemble Monte Carlo method. Both the carrier transport and the cavity field dynamics are included in the simulation, offering a self-consistent approach for analyzing and optimizing the laser operation. It is shown that at low temperatures, photon emission and absorption can govern the carrier transport in such devices. Furthermore, we find that photon-induced scattering can strongly affect the kinetic electron distributions within the subbands. Our results are validated against available experimental data.
Response of thermoluminescent dosimeters to photons simulated with the Monte Carlo method
NASA Astrophysics Data System (ADS)
Moralles, M.; Guimarães, C. C.; Okuno, E.
2005-06-01
Personal monitors composed of thermoluminescent dosimeters (TLDs) made of natural fluorite (CaF2:NaCl) and lithium fluoride (Harshaw TLD-100) were exposed to gamma and X rays of different qualities. The GEANT4 radiation transport Monte Carlo toolkit was employed to calculate the energy depth deposition profile in the TLDs. X-ray spectra of the ISO/4037-1 narrow-spectrum series, with peak voltage (kVp) values in the range 20-300 kV, were obtained by simulating a Philips MG-450 X-ray tube with the recommended filters. A realistic photon distribution of a 60Co radiotherapy source was taken from results of Monte Carlo simulations found in the literature. Comparison between simulated and experimental results revealed that the attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account, while this effect is negligible for lithium fluoride. Differences between results obtained by heating the dosimeter from the irradiated side and from the opposite side allowed the determination of the light attenuation coefficient for CaF2:NaCl (mass proportion 60:40) as 2.2 mm^-1.
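The light-attenuation determination described above (comparing readouts with the dosimeter heated from the irradiated face and from the opposite face) can be sketched numerically. All numbers and names below are illustrative, not the paper's data; the deposition profile is assumed to decrease with depth, so the front-to-back signal ratio grows monotonically with the attenuation coefficient and a simple bisection can invert it:

```python
import math

def readout_signal(profile, dz, mu, from_front=True):
    """TL light collected when heating from one face: each slice's
    contribution is attenuated by exp(-mu * distance to that face)."""
    n = len(profile)
    total = 0.0
    for i, dep in enumerate(profile):
        depth = (i + 0.5) * dz if from_front else (n - i - 0.5) * dz
        total += dep * math.exp(-mu * depth)
    return total

def fit_mu(profile, dz, ratio, lo=0.0, hi=10.0):
    """Bisect for the attenuation coefficient reproducing the measured
    front/back readout ratio (profile assumed decreasing with depth)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        r = (readout_signal(profile, dz, mid, True)
             / readout_signal(profile, dz, mid, False))
        if r < ratio:
            lo = mid      # too little attenuation: ratio still too small
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The two-sided readout is what makes the coefficient identifiable at all: a single readout confounds the deposition profile with the optical attenuation, while the ratio cancels the overall deposition scale.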
Specific absorbed fractions of electrons and photons for Rad-HUMAN phantom using Monte Carlo method
NASA Astrophysics Data System (ADS)
Wang, Wen; Cheng, Meng-Yun; Long, Peng-Cheng; Hu, Li-Qin
2015-07-01
The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs of photons and electrons was calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female created from the color photographic images of the Chinese Visible Human (CVH) data set by the FDS Team. The model represents most Chinese adult female anatomical characteristics and can be taken as an individual phantom to investigate the difference in internal dose with respect to Caucasians. In this study, the emission of mono-energetic photons and electrons with energies from 10 keV to 4 MeV was calculated using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The SAFs from the Rad-HUMAN phantom show similar trends but are larger than those from the other two models. The differences are due to racial and anatomical differences in organ mass and inter-organ distance. The SAFs based on the Rad-HUMAN phantom provide accurate and reliable data for internal radiation dose calculations for Chinese females. Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03040000), the National Natural Science Foundation of China (910266004, 11305205, 11305203) and the National Special Program for ITER (2014GB112001)
Monte Carlo Neutrino Transport in Post-Merger Disks
NASA Astrophysics Data System (ADS)
Richers, Sherwood Andrew; Kasen, Daniel; O'Connor, Evan; Fernandez, Rodrigo; Ott, Christian
2015-08-01
Mergers of two neutron stars, or of a neutron star and a black hole, are the prime candidate models for short-duration gamma ray bursts and production of r-process elements. Neutrinos can carry away energy and change the ratio of neutrons to protons, in turn affecting the appearance and dynamics of the burst and the types of elements formed from the outflow. We simulate Monte Carlo transport of neutrinos through the accretion disk surrounding the post-merger black hole and/or hypermassive neutron star to explore the influence of neutrinos on the disk composition and temperature profile, and discover faster thermal and composition evolution by a factor of a few when using Monte Carlo as opposed to neutrino leakage. Additionally, we demonstrate smaller (maximum 20%) differences when employing a simplified set of neutrino interactions commonly used in dynamical simulations.
Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce.
Pratx, Guillem; Xing, Lei
2011-12-01
Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
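The Map/Reduce split described above can be mimicked in plain Python. This is a toy stand-in, not the actual MC321 physics or the Hadoop port: each "map task" runs independent photon histories with its own seed in a 1-D medium using implicit-capture weights, emitting (depth bin, absorbed weight) pairs, and the "reduce task" sums the absorbed weight per bin, exactly the key-value pattern the paper exploits for fault tolerance:

```python
import math
import random

def map_task(seed, n_photons, mu_a=0.3, mu_s=0.7):
    """One map task: simulate independent photon histories and emit
    (depth_bin, absorbed_weight) key-value pairs."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_photons):
        z, w = 0.0, 1.0
        while w > 1e-3:                                   # terminate low weight
            z += -math.log(rng.random()) / (mu_a + mu_s)  # sample free path
            dep = w * mu_a / (mu_a + mu_s)                # implicit capture
            pairs.append((int(z), dep))
            w -= dep                                      # survivor weight
    return pairs

def reduce_task(all_pairs):
    """Reduce task: sum the absorbed weight scored in each depth bin."""
    tally = {}
    for depth_bin, dep in all_pairs:
        tally[depth_bin] = tally.get(depth_bin, 0.0) + dep
    return tally

# four "map tasks" with independent seeds feed one "reduce"
pairs = [p for seed in range(4) for p in map_task(seed, 500)]
tally = reduce_task(pairs)
```

Because histories are independent and the reduce step is a commutative sum, losing a map task only means re-running it with its recorded seed, which is the resilience property the cloud implementation relies on.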
Current status of the PSG Monte Carlo neutron transport code
Leppaenen, J.
2006-07-01
PSG is a new Monte Carlo neutron transport code developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel-assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX-fuelled VENUS-2 reactor dosimetry benchmark. (authors)
Controlling photon transport in the single-photon weak-coupling regime of cavity optomechanics
Wen-Zhao Zhang; Jiong Cheng; Jing-Yi Liu; Ling Zhou
2015-07-01
We study the photon statistics properties of few-photon transport in an optomechanical system where an optomechanical cavity couples to two empty cavities. By analytically deriving the one- and two-photon currents in terms of a zero-time-delayed second-order correlation function, we show that a photon blockade can be achieved in both the single-photon strong-coupling regime and the single-photon weak-coupling regime due to the nonlinear interaction and multipath interference. Furthermore, our system can be applied as a quantum optical diode, a single-photon source, and a quantum optical capacitor. It is shown that these photon-transport control devices based on photon antibunching do not require the stringent single-photon strong-coupling condition. Our results provide a promising platform for the coherent manipulation of optomechanics, with potential applications in quantum information processing and quantum circuit realization.
Monte Carlo solution of a semi-discrete transport equation
Urbatsch, T.J.; Morel, J.E.; Gulick, J.C.
1999-09-01
The authors present the S∞ method, a hybrid neutron transport method in which Monte Carlo particles traverse discrete space. The goal of any deterministic/stochastic hybrid method is to couple selected characteristics of each method in the hope of producing a better method. The S∞ method has the features of the lumped, linear-discontinuous (LLD) spatial discretization, yet it has no ray effects because of the continuous angular variable. They derive the S∞ method for the steady-state, mono-energetic transport equation in one-dimensional slab geometry with isotropic scattering and an isotropic internal source. They demonstrate the viability of the S∞ method by comparing their results favorably to analytic and deterministic results.
Comparison of dose calculation algorithms with Monte Carlo methods for photon arcs.
Chow, James C L; Wong, Eugene; Chen, Jeff Z; Van Dyk, Jake
2003-10-01
The objective of this study is to seek an accurate and efficient method to calculate the dose distribution of a photon arc. The algorithms tested include Monte Carlo, pencil beam kernel (PK), and collapsed cone convolution (CCC). For the Monte Carlo dose calculation, EGS4/DOSXYZ was used. The SRCXYZ source code associated with DOSXYZ was modified so that the gantry angle of a photon beam would be sampled uniformly within the arc range about an isocenter to simulate a photon arc. Specifically, photon beams (6/18 MV, 4 × 4 and 10 × 10 cm²) described by a phase space file generated by BEAM (MCPHS), or by two point sources with different photon energy spectra (MCDIV), were used. These methods were used to calculate three-dimensional (3-D) distributions in a PMMA phantom, a cylindrical water phantom, and a phantom with lung inhomogeneity. A commercial treatment planning system was also used to calculate dose distributions in these phantoms using equivalent tissue air ratio (ETAR), PK and CCC algorithms for inhomogeneity corrections. Dose distributions for a photon arc in these phantoms were measured using an RK ion chamber and radiographic films. For homogeneous phantoms, the measured results agreed well (approximately 2% error) with predictions by the Monte Carlo simulations (MCPHS and MCDIV) and the treatment planning system for the 180 degrees and 360 degrees photon arcs. For the dose distribution in the phantom with lung inhomogeneity with a 90 degrees photon arc, the Monte Carlo calculations agreed with the measurements within 2%, while the treatment planning system using ETAR, PK and CCC underestimated or overestimated the dose inside the lung inhomogeneity by 6% to 12%. PMID:14596305
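The arc-simulation strategy above (sampling the gantry angle uniformly over the arc range for each history) can be sketched as follows. The function, the 2-D geometry, and the source-to-axis distance are illustrative assumptions, not the modified SRCXYZ source itself:

```python
import math
import random

def sample_arc_source(arc_start_deg, arc_stop_deg, sad=100.0, rng=random):
    """Draw a gantry angle uniformly over the arc about the isocenter and
    return the source position and the beam central-axis direction (2-D)."""
    theta = math.radians(rng.uniform(arc_start_deg, arc_stop_deg))
    src = (sad * math.sin(theta), sad * math.cos(theta))  # source on the arc
    norm = math.hypot(src[0], src[1])
    direction = (-src[0] / norm, -src[1] / norm)          # aims at isocenter
    return src, direction

rng = random.Random(0)
src, d = sample_arc_source(0.0, 180.0, rng=rng)
```

Averaging histories over such uniformly sampled angles converges to the arc dose without simulating each static gantry position separately, which is why the phase-space (MCPHS) and point-source (MCDIV) variants can share the same arc logic.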
Characterization of a novel micro-irradiator using Monte Carlo radiation transport simulations
NASA Astrophysics Data System (ADS)
Rodriguez, Manuel; Jeraj, Robert
2008-06-01
Small animals are highly valuable resources for radiobiology research. While rodents have been widely used for decades, zebrafish embryos have recently become a very popular research model. However, unlike rodents, zebrafish embryos lack appropriate irradiation tools and methodologies. Therefore, the main purpose of this work is to use Monte Carlo radiation transport simulations to characterize dosimetric parameters, determine dosimetric sensitivity and help with the design of a new micro-irradiator capable of delivering irradiation fields as small as 1.0 mm in diameter. The system is based on a miniature x-ray source enclosed in a brass collimator of 3 cm diameter and 3 cm length. A pinhole of 1.0 mm diameter along the central axis of the collimator is used to produce a narrow photon beam. The MCNP5 Monte Carlo code is used to study the beam energy spectrum, percentage depth dose curves, penumbra and effective field size, dose rate and radiation levels at 50 cm from the source. The results obtained from Monte Carlo simulations show that a beam produced by the miniature x-ray source and the collimator system is adequate to totally or partially irradiate zebrafish embryos, cell cultures and other small specimens used in radiobiology research.
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
NASA Astrophysics Data System (ADS)
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
Modeling photon transport in transabdominal fetal oximetry
NASA Astrophysics Data System (ADS)
Jacques, Steven L.; Ramanujam, Nirmala; Vishnoi, Gargi; Choe, Regine; Chance, Britton
2000-07-01
The possibility of optical oximetry of the blood in the fetal brain, measured across the maternal abdomen just prior to birth, is under investigation. Such measurements could detect fetal distress prior to birth and aid in the clinical decision regarding Cesarean section. This paper uses a perturbation method to model photon transport through an 8-cm-diameter fetal brain located at a constant 2.5 cm below a curved maternal abdominal surface with an air/tissue boundary. In the simulation, a near-infrared light source delivers light to the abdomen and a detector is positioned up to 10 cm from the source along the arc of the abdominal surface. The light transport [W/cm² fluence rate per W incident power] collected at the 10 cm position is Tm = 2.2 × 10⁻⁶ cm⁻² if the fetal brain has the same optical properties as the mother and Tf = 1.0 × 10⁻⁶ cm⁻² for an optically perturbing fetal brain with typical brain optical properties. The perturbation P = (Tf − Tm)/Tm is −53% due to the fetal brain. The model illustrates the challenge and feasibility of transabdominal oximetry of the fetal brain.
Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy
Henzen, D.; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Vetterli, D.; Chatelain, C.; Fix, M. K.; Neuenschwander, H.; Stampanoni, M. F. M.
2014-02-15
Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. Results: For 15 × 34, 5 × 5, and 2 × 2 cm² fields, differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies. For the two-dimensional dose comparisons, the differences between calculations and measurements are generally within 2% of the maximal dose value or 2 mm DTA. Conclusions: The results of the dose comparisons suggest that the developed beam model is suitable to accurately reconstruct photon MLC shaped electron beams for a Clinac 23EX and a TrueBeam linac. Hence, in future work the beam model will be utilized to investigate the possibilities of MERT using the photon MLC to shape electron beams.
Kharrati, Hedi; Agrebi, Amel; Karoui, Mohamed Karim [Ecole Superieure des Sciences et Techniques de la Sante de Monastir, Avenue Avicenne, 5000 Monastir (Tunisia); Faculte des Sciences de Monastir, 5000 Monastir (Tunisia)
2012-10-15
Purpose: A simulation of buildup factors for ordinary concrete, steel, lead, plate glass, lead glass, and gypsum wallboard in broad beam geometry for photon energies from 10 keV to 150 keV at 5 keV intervals is presented. Methods: The Monte Carlo N-particle radiation transport computer code has been used to determine the buildup factors for the studied shielding materials. Results: An example illustrating the use of the obtained buildup factor data in computing the broad beam transmission for tube potentials of 70, 100, 120, and 140 kVp is given. The half value layer, the tenth value layer, and the equilibrium tenth value layer are calculated from the broad beam transmission for these tube potentials. Conclusions: The obtained values, compared with those calculated from the published data, show the ability of these data to predict shielding transmission curves. Therefore, the buildup factor data can be combined with primary, scatter, and leakage x-ray spectra to provide a computationally based solution to broad beam transmission for barriers in shielding x-ray facilities.
Electron transport in magnetrons by a posteriori Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Costin, C.; Minea, T. M.; Popa, G.
2014-02-01
Electron transport across magnetic barriers is crucial in all magnetized plasmas. It governs not only the plasma parameters in the volume, but also the fluxes of charged particles towards the electrodes and walls. It is particularly important in high-power impulse magnetron sputtering (HiPIMS) reactors, influencing the quality of the deposited thin films, since this type of discharge is characterized by an increased ionization fraction of the sputtered material. Transport coefficients of electron clouds released both from the cathode and from several locations in the discharge volume are calculated for a HiPIMS discharge with pre-ionization operated in argon at 0.67 Pa and for very short pulses (a few µs) using the a posteriori Monte Carlo simulation technique. For this type of discharge electron transport is characterized by strong temporal and spatial dependence. Both drift velocity and diffusion coefficient depend on the releasing position of the electron cloud. They exhibit minimum values at the centre of the race-track for the secondary electrons released from the cathode. The diffusion coefficient of the same electrons increases from 2 to 4 times when the cathode voltage is doubled, in the first 1.5 µs of the pulse. These parameters are discussed with respect to empirical Bohm diffusion.
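Transport coefficients of the kind discussed here can, in an a posteriori analysis, be estimated from snapshots of a simulated electron cloud via v = d⟨x⟩/dt and D = (1/2) dVar(x)/dt. Below is a minimal sketch assuming a 1-D Gaussian toy cloud with illustrative numbers, not the authors' HiPIMS model:

```python
import random
import statistics

def transport_coefficients(x_t1, x_t2, dt):
    """Estimate drift velocity and diffusion coefficient of an electron
    cloud from two position snapshots separated by dt:
    v = d<x>/dt,  D = (1/2) dVar(x)/dt."""
    v = (statistics.fmean(x_t2) - statistics.fmean(x_t1)) / dt
    D = (statistics.variance(x_t2) - statistics.variance(x_t1)) / (2.0 * dt)
    return v, D

# Toy cloud: drift at 1e5 m/s plus diffusive broadening over dt = 1 microsecond.
rng = random.Random(1)
x_t1 = [rng.gauss(0.0, 1e-3) for _ in range(20000)]
x_t2 = [x + 0.1 + rng.gauss(0.0, 1e-3) for x in x_t1]
v, D = transport_coefficients(x_t1, x_t2, 1e-6)
```

The same two-snapshot estimator applied per release position is one way the spatial dependence of v and D noted in the abstract could be mapped.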
Hocine, Nora; Donadille, Laurent; Huet, Christelle; Itié, Christian; Clairand, Isabelle
2011-03-01
This paper describes the results of the simulation of a radiophotoluminescent (RPL) dosemeter with the Monte Carlo transport code MCNPX. The aim of this study is to calculate with MCNPX the response of the RPL dosemeter in terms of the dose equivalents Hp(0.07) and Hp(10), using X-ray photon radiation qualities of the N series together with the S-Cs and S-Co nuclide radiation qualities specified in ISO 4037-1. The deviation of the theoretical responses of the RPL dosemeter proved to be lower than 5% with respect to reference values and lower than 10% with respect to experimental results. This good correlation validates the model over the energy range studied. PMID:21335330
Acceleration of a Monte Carlo radiation transport code
Hochstedler, R.D.; Smith, L.M. [The University of Tennessee Space Institute, B. H. Goethert Parkway, MS 21, Tullahoma, Tennessee 37388-8897 (United States)
1996-03-01
Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.
Neutron and Photon Transport in Sea-Going Cargo Containers
Pruet, J; Descalle, M; Hall, J; Pohl, B; Prussin, S G
2005-02-09
Factors affecting sensing of small quantities of fissionable material in large sea-going cargo containers by neutron interrogation and detection of β-delayed photons are explored. The propagation of variable-energy neutrons in cargos, subsequent fission of hidden nuclear material and production of the β-delayed photons, and the propagation of these photons to an external detector are considered explicitly. Detailed results of Monte Carlo simulations of these stages in representative cargos are presented. Analytical models are developed both as a basis for a quantitative understanding of the interrogation process and as a tool to allow ready extrapolation of the results to cases not specifically considered here.
A deterministic computational model for the two dimensional electron and photon transport
NASA Astrophysics Data System (ADS)
Badavi, Francis F.; Nealy, John E.
2014-12-01
A deterministic (non-statistical) two-dimensional (2D) computational model describing the transport of electrons and photons typical of the space radiation environment in various shield media is described. The 2D formalism is cast into a code which is an extension of a previously developed one-dimensional (1D) deterministic electron and photon transport code. The goal of both the 1D and 2D codes is to satisfy engineering design applications (i.e. rapid analysis) while maintaining an accurate physics-based representation of electron and photon transport in the space environment. Both the 1D and 2D transport codes have utilized established theoretical representations to describe the relevant collisional and radiative interactions and transport processes. In the 2D version, the shield material specifications are made more general as having the pertinent cross sections. In the 2D model, the computational field is specified in terms of a distance of traverse z along an axial direction as well as a variable distribution of deflection (i.e. polar) angles θ, where −π/2 ≤ θ ≤ π/2. In the transport formalism, a combined mean-free-path and average-trajectory approach is used. For candidate shielding materials, using the trapped electron radiation environments at low Earth orbit (LEO), geosynchronous orbit (GEO) and the Jupiter moon Europa, verification of the 2D formalism vs. the 1D code and an existing Monte Carlo code is presented.
Generational Structure of Photon Transport in Water and Lead
NASA Astrophysics Data System (ADS)
Chen, Heng-Gung
A generational structure was utilized in the present work to study the build-up process of secondary photons in water and in lead using the EGS4 Monte Carlo simulation code. A broad, parallel photon beam normally incident on a semi-infinite medium was assumed for energies from 0.5 MeV to 5 MeV. A number of quantities for photons of each generation, as well as for the total photon fluence, were calculated with 5 × 10⁶ histories in each case and fitted to empirical formulas. These are the average photon energy, the average energy absorbed (transferred) per photon interaction, the effective interaction coefficient, the effective scattering coefficient, the photon fluence, the energy fluence build-up factor, and the dose build-up factor. In addition, the photon energy spectrum of each generation, as well as that of the total photon fluence, is scored at various depths up to 13 mean free paths of the primary photons. Energy fluence build-up factors are compared to the values calculated by Goldstein and Wilkins; most of the data are in agreement except for the 0.5 MeV and 1 MeV cases in lead, for which more significant differences were found.
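The generational bookkeeping can be sketched with a toy forward-only slab walk in which each photon carries a scatter-generation index; primaries escape as generation 0, once-scattered photons as generation 1, and so on. All parameters (slab thickness, survival probability, forward-only geometry) are illustrative assumptions, not those of the EGS4 study:

```python
import math
import random

def generational_transmission(slab_mfp=2.0, survive=0.7, max_gen=5,
                              n=5000, seed=2):
    """Tally slab transmission by scatter generation: counts[g] is the
    number of photons escaping after exactly g scatters (toy 1-D model)."""
    rng = random.Random(seed)
    counts = [0] * (max_gen + 1)
    for _ in range(n):
        depth, gen = 0.0, 0
        while gen <= max_gen:
            depth += -math.log(1.0 - rng.random())   # flight, in mfp units
            if depth >= slab_mfp:
                counts[gen] += 1                     # escapes as generation g
                break
            if rng.random() > survive:               # absorbed at collision
                break
            gen += 1                                 # scattered onward
    return counts

counts = generational_transmission()
buildup = sum(counts) / counts[0]          # total / unscattered transmission
```

Summing the per-generation tallies and dividing by the generation-0 (unscattered) tally gives a crude fluence build-up factor, the quantity the generational decomposition above is designed to resolve.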
NASA Astrophysics Data System (ADS)
Lin, Yuting; McMahon, Stephen J.; Scarpelli, Matthew; Paganetti, Harald; Schuemann, Jan
2014-12-01
Gold nanoparticles (GNPs) have shown potential to be used as a radiosensitizer for radiation therapy. Despite extensive research activity to study GNP radiosensitization using photon beams, only a few studies have been carried out using proton beams. In this work Monte Carlo simulations were used to assess the dose enhancement of GNPs for proton therapy. The enhancement effect was compared between a clinical proton spectrum, a clinical 6 MV photon spectrum, and a kilovoltage photon source similar to those used in many radiobiology lab settings. We showed that the mechanism by which GNPs can lead to dose enhancements in radiation therapy differs when comparing photon and proton radiation. The GNP dose enhancement using protons can be up to 14 and is independent of proton energy, while the dose enhancement is highly dependent on the photon energy used. For the same amount of energy absorbed in the GNP, interactions with protons, kVp photons and MV photons produce similar doses within several nanometers of the GNP surface, and differences are below 15% for the first 10 nm. However, secondary electrons produced by kilovoltage photons have the longest range in water as compared to protons and MV photons, e.g. they cause a dose enhancement 20 times higher than the one caused by protons 10 μm away from the GNP surface. We conclude that GNPs have the potential to enhance radiation therapy depending on the type of radiation source. Proton therapy can be enhanced significantly only if the GNPs are in close proximity to the biological target.
Monte Carlo simulation of electron transport in narrow gap heterostructures
NASA Astrophysics Data System (ADS)
Thobel, Jean-Luc; Bonno, Olivier; Dessenne, Francois; Boutry, Herve
2002-11-01
A Monte Carlo method is proposed for the study of in-plane electron transport in narrow gap heterostructures. Special attention is paid to the consequences of the strong nonparabolicity of the conduction band. The electron states are calculated within the framework of envelope function theory, which leads to a Schrodinger equation with an energy-dependent effective mass. This equation is solved in a numerically efficient way by including a standard eigenvalue solver in an iterative method. The mixing between conduction and valence band states is taken into account, at an approximate level, through a "Bloch overlap factor," defined by analogy with the case of three-dimensional transport. This model was applied to a typical AlSb/InAs single well structure, and realistic results were obtained. The important role played by the Bloch overlap factor is demonstrated. When it is neglected, the mobility is strongly underestimated. A more sophisticated double well structure was also investigated. It is intended to reduce impact ionization, thanks to transfer toward the thinner well. This transfer is found to depend strongly on the potential profile.
Modelling 6 MV photon beams of a stereotactic radiosurgery system for Monte Carlo treatment planning
NASA Astrophysics Data System (ADS)
Deng, Jun; Guerrero, Thomas; Ma, C.-M.; Nath, Ravinder
2004-05-01
The goal of this work is to build a multiple source model to represent the 6 MV photon beams from a Cyberknife stereotactic radiosurgery system for Monte Carlo treatment planning dose calculations. To achieve this goal, the 6 MV photon beams have been characterized and modelled using the EGS4/BEAM Monte Carlo system. A dual source model has been used to reconstruct the particle phase space at a plane immediately above the secondary collimator. The proposed model consists of two circular planar sources for the primary photons and the scattered photons, respectively. The dose contribution of the contaminant electrons was found to be of the order of 10⁻³ of the total maximum dose and therefore has been omitted in the source model. Various comparisons have been made to verify the dual source model against the full phase space simulated using the EGS4/BEAM system. The agreement in percent depth dose (PDD) curves and dose profiles between the phase space and the source model was generally within 2%/1 mm for various collimators (5 to 60 mm in diameter) at 80 to 100 cm source-to-surface distances (SSD). Excellent agreement (within 1%/1 mm) was also found between the dose distributions in heterogeneous lung and bone geometry calculated using the original phase space and those calculated using the source model. These results demonstrated the accuracy of the dual source model for Monte Carlo treatment planning dose calculations for the Cyberknife system.
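Sampling from a two-component planar source model of this kind can be sketched as follows; the weights and disc radii are illustrative placeholders, not the commissioned Cyberknife values:

```python
import math
import random

def sample_dual_source(w_primary=0.9, r_primary=0.05, r_scatter=1.5,
                       rng=random):
    """Choose the primary or scattered-photon planar source by weight,
    then sample a position uniformly over that circular source."""
    primary = rng.random() < w_primary
    r_max = r_primary if primary else r_scatter
    r = r_max * math.sqrt(rng.random())      # uniform in area, not in radius
    phi = 2.0 * math.pi * rng.random()
    return primary, (r * math.cos(phi), r * math.sin(phi))

rng = random.Random(4)
samples = [sample_dual_source(rng=rng) for _ in range(5000)]
frac_primary = sum(1 for p, _ in samples if p) / len(samples)
```

The sqrt on the radial deviate is what makes positions uniform over the disc area; sampling r directly would overweight the centre. A full model would additionally draw energy and direction from per-source distributions fitted to the phase space.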
Monte Carlo impurity transport modeling in the DIII-D transport
Evans, T.E. [General Atomics, San Diego, CA (United States); Finkenthal, D.F. [Palomar College, San Marcos, CA (United States)
1998-04-01
A description of the carbon transport and sputtering physics contained in the Monte Carlo Impurity (MCI) transport code is given. Examples of statistically significant carbon transport pathways are examined using MCI's unique tracking visualizer, and a mechanism for enhanced carbon accumulation on the high field side of the divertor chamber is discussed. Comparisons between carbon emissions calculated with MCI and those measured in the DIII-D tokamak are described. Good qualitative agreement is found between 2D carbon emission patterns calculated with MCI and experimentally measured carbon patterns. While uncertainties in the sputtering physics, atomic data, and transport models have made quantitative comparisons with experiments more difficult, recent results using a physics-based model for physical and chemical sputtering have yielded simulations with about 50% of the total carbon radiation measured in the divertor. These results and plans for future improvement in the physics models and atomic data are discussed.
Photon-mediated electron transport in hybrid circuit-QED
Neill Lambert; Christian Flindt; Franco Nori
2013-07-25
We investigate photon-mediated transport processes in a hybrid circuit-QED structure consisting of two double quantum dots coupled to a common microwave cavity. Under suitable resonance conditions, electron transport in one double quantum dot is facilitated by the transport in the other dot via photon-mediated processes through the cavity. We calculate the average current in the quantum dots, the mean cavity photon occupation, and the current cross-correlations with both a full numerical simulation and a recursive perturbation scheme that allows us to include the influence of the cavity order-by-order in the couplings between the cavity and the quantum dot systems. We can then clearly identify the photon-mediated transport processes.
A Fano cavity test for Monte Carlo proton transport algorithms
Sterpin, Edmond; Sorriaux, Jefferson; Souris, Kevin; Vynckier, Stefaan; Bouchard, Hugo
2014-01-15
Purpose: In the scope of reference dosimetry of radiotherapy beams, Monte Carlo (MC) simulations are widely used to compute ionization chamber dose response accurately. Uncertainties related to the transport algorithm can be verified by performing self-consistency tests, i.e., the so-called "Fano cavity test." The Fano cavity test is based on the Fano theorem, which states that under charged particle equilibrium conditions, the charged particle fluence is independent of the mass density of the media as long as the cross sections are uniform. Such tests have not yet been performed for MC codes simulating proton transport. The objectives of this study are to design a new Fano cavity test for proton MC and to implement the methodology in two MC codes: Geant4 and PENELOPE extended to protons (PENH). Methods: The new Fano test is designed to evaluate the accuracy of proton transport. Virtual particles with an energy of E₀ and a mass macroscopic cross section of Σ/ρ are transported, having the ability to generate protons with kinetic energy E₀ and to be restored after each interaction, thus providing proton equilibrium. To perform the test, the authors use a simplified simulation model and rigorously demonstrate that the computed cavity dose per incident fluence must equal (Σ/ρ)E₀, as expected in classic Fano tests. The implementation of the test is performed in Geant4 and PENH. The geometry used for testing is a 10 × 10 cm² parallel virtual field and a cavity (2 × 2 × 0.2 cm³ in size) in a water phantom with dimensions large enough to ensure proton equilibrium. Results: For conservative user-defined simulation parameters (leading to small step sizes), both Geant4 and PENH pass the Fano cavity test within 0.1%. However, differences of 0.6% and 0.7% were observed for PENH and Geant4, respectively, using larger step sizes.
For PENH, the difference is attributed to the random-hinge method, which introduces an artificial energy straggling if the step size is not small enough. Conclusions: Using conservative user-defined simulation parameters, both PENH and Geant4 pass the Fano cavity test for proton transport. Our methodology is applicable to any kind of charged particle, provided that the MC code under consideration is able to track that particle.
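The pass/fail logic of the Fano self-consistency check described above can be sketched in a few lines; the cross-section and energy values below are illustrative assumptions, not data from this study.

```python
# Hedged sketch of the Fano cavity pass/fail criterion: the MC-computed
# cavity dose per incident fluence must equal (Sigma * E0) / rho within a
# tolerance. All numerical values here are illustrative, not the paper's.

def fano_expected_dose(sigma_over_rho, e0):
    """Theoretical cavity dose per incident fluence: (Sigma/rho) * E0."""
    return sigma_over_rho * e0

def fano_test_passes(mc_dose, sigma_over_rho, e0, tol=1e-3):
    """True if the MC result agrees with theory within tol (0.1% here)."""
    expected = fano_expected_dose(sigma_over_rho, e0)
    return abs(mc_dose - expected) / expected <= tol

expected = fano_expected_dose(sigma_over_rho=0.02, e0=100.0)
assert fano_test_passes(expected * 1.0005, 0.02, 100.0)      # 0.05% off: pass
assert not fano_test_passes(expected * 1.006, 0.02, 100.0)   # 0.6% off: fail
```

The 0.6% case mirrors the step-size-dependent deviation reported in the abstract: the same code can pass or fail the test depending only on transport parameters.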
Photon transport in a dissipative chain of nonlinear cavities
NASA Astrophysics Data System (ADS)
Biella, Alberto; Mazza, Leonardo; Carusotto, Iacopo; Rossini, Davide; Fazio, Rosario
2015-05-01
By means of numerical simulations and the input-output formalism, we study photon transport through a chain of coupled nonlinear optical cavities subject to uniform dissipation. Photons are injected from one end of the chain by means of a coherent source. The propagation through the array of cavities is sensitive to the interplay between the photon hopping strength and the local nonlinearity in each cavity. We characterize photon transport by studying the populations and the photon correlations as a function of the cavity position. When complemented with input-output theory, these quantities provide direct information about photon transmission through the system. The position of single-photon and multiphoton resonances directly reflects the structure of the many-body energy levels. This shows how a study of transport along a coupled cavity array can provide rich information about the strongly correlated (many-body) states of light even in presence of dissipation. The numerical algorithm we use, based on the time-evolving block decimation scheme adapted to mixed states, allows us to simulate large arrays (up to 60 cavities). The scaling of photon transmission with the number of cavities does depend on the structure of the many-body photon states inside the array.
Berg, Eric; Roncali, Emilie; Cherry, Simon R
2015-06-01
Achieving excellent timing resolution in gamma ray detectors is crucial in several applications such as medical imaging with time-of-flight positron emission tomography (TOF-PET). Although many factors impact the overall system timing resolution, the statistical nature of scintillation light, including photon production and transport in the crystal to the photodetector, is typically the limiting factor for modern scintillation detectors. In this study, we investigated the impact of surface treatment, in particular, roughening select areas of otherwise polished crystals, on light transport and timing resolution. A custom Monte Carlo photon tracking tool was used to gain insight into changes in light collection and timing resolution that were observed experimentally: select roughening configurations increased the light collection up to 25% and improved timing resolution by 15% compared to crystals with all polished surfaces. Simulations showed that partial surface roughening caused a greater number of photons to be reflected towards the photodetector and increased the initial rate of photoelectron production. This study provides a simple method to improve timing resolution and light collection in scintillator-based gamma ray detectors, a topic of high importance in the field of TOF-PET. Additionally, we demonstrated the utility of our Monte Carlo simulation tool to accurately predict the effect of altering crystal surfaces on light collection and timing resolution. PMID:26114040
A high-order photon Monte Carlo method for radiative transfer in direct numerical simulation
Wu, Y.; Modest, M.F.; Haworth, D.C. E-mail: dch12@psu.edu
2007-05-01
A high-order photon Monte Carlo method is developed to solve the radiative transfer equation. The statistical and discretization errors of the computed radiative heat flux and radiation source term are isolated and quantified. Up to sixth-order spatial accuracy is demonstrated for the radiative heat flux, and up to fourth-order accuracy for the radiation source term. This demonstrates the compatibility of the method with high-fidelity direct numerical simulation (DNS) for chemically reacting flows. The method is applied to address radiative heat transfer in a one-dimensional laminar premixed flame and a statistically one-dimensional turbulent premixed flame. Modifications of the flame structure with radiation are noted in both cases, and the effects of turbulence/radiation interactions on the local reaction zone structure are revealed for the turbulent flame. Computational issues in using a photon Monte Carlo method for DNS of turbulent reacting flows are discussed.
NASA Astrophysics Data System (ADS)
Siebers, J. V.; Keall, P. J.; Nahum, A. E.; Mohan, R.
2000-04-01
Current clinical experience in radiation therapy is based upon dose computations that report the absorbed dose to water, even though the patient is not made of water but of many different types of tissue. While Monte Carlo dose calculation algorithms have the potential for higher dose accuracy, they usually transport particles in and compute the absorbed dose to the patient media such as soft tissue, lung or bone. Therefore, for dose calculation algorithm comparisons, or to report dose to water or tissue contained within a bone matrix for example, a method to convert dose to the medium to dose to water is required. This conversion has been developed here by applying Bragg-Gray cavity theory. The dose ratio for 6 and 18 MV photon beams was determined by computing the average stopping power ratio for the primary electron spectrum in the transport media. For soft tissue, the difference between dose to medium and dose to water is approximately 1.0%, while for cortical bone the dose difference exceeds 10%. The variation in the dose ratio as a function of depth and position in the field indicates that for photon beams a single correction factor can be used for each particular material throughout the field for a given photon beam energy. The only exception to this would be for the clinically non-relevant dose to air. Pre-computed energy spectra for {sup 60}Co to 24 MV are used to compute the dose ratios for these photon beams and to determine an effective energy for evaluation of the dose ratio.
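The Bragg-Gray conversion described above reduces to multiplying the dose to medium by the mean water-to-medium stopping-power ratio. A minimal sketch follows; the ratio values are illustrative assumptions chosen only to match the magnitudes quoted in the abstract (about 1% for soft tissue, over 10% for cortical bone), not tabulated data.

```python
# Hedged sketch of dose-to-medium -> dose-to-water conversion via
# Bragg-Gray cavity theory: D_water = D_medium * s_(water,medium), where
# s is the stopping-power ratio averaged over the primary electron
# spectrum. The ratios below are illustrative placeholders.

STOPPING_POWER_RATIO_WATER_MEDIUM = {
    "soft_tissue": 1.010,    # ~1% difference, per the abstract
    "cortical_bone": 1.110,  # >10% difference, per the abstract
}

def dose_to_water(dose_to_medium, medium):
    """Convert dose scored in 'medium' to dose to water."""
    return dose_to_medium * STOPPING_POWER_RATIO_WATER_MEDIUM[medium]

assert abs(dose_to_water(2.0, "soft_tissue") - 2.02) < 1e-9
```

Because the abstract notes that a single correction factor per material suffices across the field for a given beam energy, a lookup table keyed by medium is a reasonable structure for such a conversion.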
Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn
2011-09-01
The varying low-energy contribution to the photon spectra at points within and around radiotherapy photon fields is associated with variations in the responses of non-water-equivalent dosimeters and in the water-to-material dose conversion factors for tissues such as the red bone marrow. In addition, the presence of low-energy photons in the photon spectrum enhances the RBE in general and in particular for the induction of second malignancies. The present study discusses the general rules valid for the low-energy spectral component of radiotherapeutic photon beams at points within and in the periphery of the treatment field, taking as an example the Siemens Primus linear accelerator at 6 MV and 15 MV. The photon spectra at these points and their typical variations due to the target system, attenuation, single and multiple Compton scattering, are described by the Monte Carlo method, using the code BEAMnrc/EGSnrc. A survey of the role of low-energy photons in the spectra within and around radiotherapy fields is presented. In addition to the spectra, some data compression has proven useful to support the overview of the behaviour of the low-energy component. A characteristic indicator of the presence of low-energy photons is the dose fraction attributable to photons with energies not exceeding 200 keV, termed P{sub D}(200 keV). Its values are calculated for different depths and lateral positions within a water phantom. For a pencil beam of 6 or 15 MV primary photons in water, the radial distribution of P{sub D}(200 keV) is bell-shaped, with a wide-ranging exponential tail of half-value 6 to 7 cm. The P{sub D}(200 keV) value obtained on the central axis of a photon field shows an approximately proportional increase with field size. Out-of-field P{sub D}(200 keV) values are up to an order of magnitude higher than on the central axis at the same irradiation depth. The 2D pattern of P{sub D}(200 keV) for a radiotherapy field visualizes the regions, e.g. at the field margin, where changes of detector responses and dose conversion factors, as well as increases of the RBE, have to be anticipated. The parameter P{sub D}(200 keV) can also serve as guidance supporting the selection of a calibration geometry suitable for radiation dosimeters to be used in small radiation fields. PMID:21530198
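The P_D(200 keV) indicator described above is a simple dose fraction, which can be sketched as follows; the toy spectrum is an illustrative assumption, not BEAMnrc/EGSnrc output.

```python
# Hedged sketch of the P_D(200 keV) indicator: the fraction of the total
# dose contributed by photons with energies not exceeding 200 keV,
# evaluated here from a binned spectrum of per-bin dose contributions.
# The toy spectrum below is an illustrative assumption.

def dose_fraction_below(spectrum, cutoff_kev=200.0):
    """spectrum: iterable of (energy_keV, dose_contribution) bins."""
    total = sum(d for _, d in spectrum)
    low = sum(d for e, d in spectrum if e <= cutoff_kev)
    return low / total

toy_spectrum = [(50.0, 1.0), (150.0, 2.0), (500.0, 3.0), (2000.0, 4.0)]
assert abs(dose_fraction_below(toy_spectrum) - 0.3) < 1e-12  # (1+2)/10
```

The same function applied bin-by-bin across a 2D dose grid would reproduce the kind of spatial P_D(200 keV) map the abstract describes for in-field and out-of-field points.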
Dissipationless electron transport in photon-dressed nanostructures.
Kibis, O V
2011-09-01
It is shown that the electron coupling to photons in field-dressed nanostructures can result in the ground electron-photon state with a nonzero electric current. Since the current is associated with the ground state, it flows without the Joule heating of the nanostructure and is nondissipative. Such a dissipationless electron transport can be realized in strongly coupled electron-photon systems with the broken time-reversal symmetry--particularly, in quantum rings and chiral nanostructures dressed by circularly polarized photons. PMID:21981519
Status of the MORSE multigroup Monte Carlo radiation transport code
Emmett, M.B.
1993-06-01
There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the most well-known and has undergone extensive use for many years. MORSE-SGC was originally developed in about 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having much larger storage capacity, that aspect of SGC has become unnecessary. Both versions use data from multigroup cross-section libraries, although in somewhat different formats. MORSE-SGC is the version of MORSE that is part of the SCALE system, but it can also be run stand-alone. Both CGA and SGC use the Multiple Array System (MARS) geometry package. In the last six months the main focus of the work on these two versions has been on making them operational on workstations, in particular, the IBM RISC 6000 family. A new version of SCALE for workstations is being released to the Radiation Shielding Information Center (RSIC). MORSE-CGA, Version 2.0, is also being released to RSIC. Both SGC and CGA have undergone other revisions recently. This paper reports on the current status of the MORSE code system.
Identifying key surface parameters for optical photon transport in GEANT4/GATE simulations.
Nilsson, Jenny; Cuplov, Vesna; Isaksson, Mats
2015-09-01
For a scintillator used for spectrometry, the generation, transport and detection of optical photons have a great impact on the energy spectrum resolution. A complete Monte Carlo model of a scintillator includes a coupled ionizing particle and optical photon transport, which can be simulated with the GEANT4 code. The GEANT4 surface parameters control the physics processes an optical photon undergoes when reaching the surface of a volume. In this work the impact of each surface parameter on the optical transport was studied by looking at the optical spectrum: the number of detected optical photons per ionizing source particle from a large plastic scintillator, i.e. the output signal. All simulations were performed using GATE v6.2 (GEANT4 Application for Tomographic Emission). The surface parameter finish (polished, ground, front-painted or back-painted) showed the greatest impact on the optical spectrum, whereas the surface parameter sigma-alpha (σ{sub α}), which controls the surface roughness, had a relatively small impact. It was also shown how the surface parameters reflectivity and reflectivity types (specular spike, specular lobe, Lambertian and backscatter) changed the optical spectrum depending on the probability for reflection and the combination of reflectivity types. A change in the optical spectrum will ultimately have an impact on a simulated energy spectrum. By studying the optical spectra presented in this work, a GEANT4 user can predict the shift in an optical spectrum caused by the alteration of a specific surface parameter. PMID:26046519
An automated variance reduction method for global Monte Carlo neutral particle transport problems
NASA Astrophysics Data System (ADS)
Cooper, Marc Andrew
A method to automatically reduce the variance in global neutral particle Monte Carlo problems by using a weight window derived from a deterministic forward solution is presented. This method reduces a global measure of the variance of desired tallies and increases its associated figure of merit. Global deep penetration neutron transport problems present difficulties for analog Monte Carlo. When the scalar flux decreases by many orders of magnitude, so does the number of Monte Carlo particles. This can result in large statistical errors. In conjunction with survival biasing, a weight window is employed which uses splitting and Russian roulette to restrict the symbolic weights of Monte Carlo particles. By establishing a connection between the scalar flux and the weight window, two important concepts are demonstrated. First, such a weight window can be constructed from a deterministic solution of a forward transport problem. Also, the weight window will distribute Monte Carlo particles in such a way to minimize a measure of the global variance. For Implicit Monte Carlo solutions of radiative transfer problems, an inefficient distribution of Monte Carlo particles can result in large statistical errors in front of the Marshak wave and at its leading edge. Again, the global Monte Carlo method is used, which employs a time-dependent weight window derived from a forward deterministic solution. Here, the algorithm is modified to enhance the number of Monte Carlo particles in the wavefront. Simulations show that use of this time-dependent weight window significantly improves the Monte Carlo calculation.
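The weight-window mechanics described in the abstract can be sketched as below: particles above the window are split, particles below it play Russian roulette, and in-window particles pass unchanged. The bounds here are arbitrary illustrative values, not derived from a deterministic forward solution as in the thesis.

```python
# Hedged sketch of weight-window enforcement with splitting and Russian
# roulette. Window bounds and weights are illustrative assumptions.
import math
import random

def apply_weight_window(weight, w_low, w_high, w_survival, rng=random.random):
    """Return the list of surviving particle weights (possibly empty)."""
    if weight > w_high:                      # split: conserves weight exactly
        n = math.ceil(weight / w_high)
        return [weight / n] * n
    if weight < w_low:                       # roulette: conserves on average
        if rng() < weight / w_survival:
            return [w_survival]
        return []
    return [weight]

# A weight of 5.0 with upper bound 2.0 splits into three particles of
# weight 5/3, each back inside the window; total weight is preserved.
assert abs(sum(apply_weight_window(5.0, 0.5, 2.0, 1.0)) - 5.0) < 1e-12
assert apply_weight_window(1.0, 0.5, 2.0, 1.0) == [1.0]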
Walsh, Jonathan A. (Jonathan Alan)
2014-01-01
This thesis presents the development and analysis of computational methods for efficiently accessing and utilizing nuclear data in Monte Carlo neutron transport code simulations. Using the OpenMC code, profiling studies ...
Northum, Jeremy Dell
2011-08-08
The purpose of this study was to determine how well the Monte Carlo transport code FLUKA can simulate a tissue-equivalent proportional counter (TEPC) and produce the expected delta ray events when exposed to high energy ...
Domain decomposition for Monte Carlo particle transport simulations of nuclear reactors
Horelik, Nicholas E. (Nicholas Edward)
2015-01-01
Monte Carlo (MC) neutral particle transport methods have long been considered the gold-standard for nuclear simulations, but high computational cost has limited their use significantly. However, as we move towards ...
Yu, Peter K.N.
Study of scattered photons from the collimator system of Leksell Gamma Knife using the EGS4 Monte Carlo Code
Cheung, Joel Y. C. (Gamma Knife Centre, Canossa Hospital, 1 Old Peak Road, Hong Kong); Yu, K. N.
… scattered photons from the collimator system are presumed to have negligible effects
PERFORMANCE MEASUREMENT OF MONTE CARLO PHOTON TRANSPORT ON PARALLEL MACHINES
Majumdar, Amit
Versions of the algorithm were developed for several parallel architectures: the shared-memory Tera Multi-Threaded Architecture (MTA), the distributed-memory Cray T3E, and the 8-way SMP IBM SP with Power3 processors. The first version, for the Tera MTA, uses directives specific to that architecture.
Soft Photons from transport and hydrodynamics at FAIR energies
Andreas Grimm; Bjørn Bäuchle
2012-11-11
Direct photon spectra from uranium-uranium collisions at FAIR energies (E(lab) = 35 AGeV) are calculated within the hadronic Ultra-relativistic Quantum Molecular Dynamics transport model. In this microscopic model, one can optionally include a macroscopic intermediate hydrodynamic phase. The hot and dense stage of the collision is then modeled by a hydrodynamical calculation. Photon emission from transport-hydro hybrid calculations is examined for purely hadronic matter and matter that has a cross-over phase transition and a critical end point to deconfined and chirally restored matter at high temperatures. We find the photon spectra in both scenarios to be dominated by Bremsstrahlung. Comparing flow of photons in both cases suggests a way to distinguish these two scenarios.
Photon transport in a dissipative chain of nonlinear cavities
Alberto Biella; Leonardo Mazza; Iacopo Carusotto; Davide Rossini; Rosario Fazio
2015-03-03
We analyze a chain of coupled nonlinear optical cavities driven by a coherent source of light localized at one end and subject to uniform dissipation. We characterize photon transport by studying the populations and the photon correlations as a function of position. When complemented with input-output theory, these quantities provide direct information about photon transmission through the system. The position of single- and multi-photon resonances directly reflect the structure of the many-body energy levels. This shows how a study of transport along a coupled cavity array can provide rich information about the strongly correlated (many-body) states of light even in presence of dissipation. By means of a numerical algorithm based on the time-evolving block decimation scheme adapted to mixed states, we are able to simulate arrays up to sixty cavities.
Robust light transport in non-Hermitian photonic lattices
Longhi, Stefano; Gatti, Davide; Valle, Giuseppe Della
2015-01-01
Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure. PMID:26314932
Robust light transport in non-Hermitian photonic lattices
NASA Astrophysics Data System (ADS)
Longhi, Stefano; Gatti, Davide; Valle, Giuseppe Della
2015-08-01
Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport insensitive to disorder or imperfections in the structure. Non-Hermitian asymmetric transport can occur in tight-binding lattices with an imaginary gauge field via a non-Hermitian delocalization transition, and in periodically-driven superlattices. The possibility to observe non-Hermitian delocalization is suggested using an engineered coupled-resonator optical waveguide (CROW) structure.
FZ2MC: A Tool for Monte Carlo Transport Code Geometry Manipulation
Hackel, B M; Nielsen Jr., D E; Procassini, R J
2009-02-25
The process of creating and validating combinatorial geometry representations of complex systems for use in Monte Carlo transport simulations can be both time consuming and error prone. To simplify this process, a tool has been developed which employs extensions of the Form-Z commercial solid modeling tool. The resultant FZ2MC (Form-Z to Monte Carlo) tool permits users to create, modify and validate Monte Carlo geometry and material composition input data. Plugin modules that export this data to an input file, as well as parse data from existing input files, have been developed for several Monte Carlo codes. The FZ2MC tool is envisioned as a 'universal' tool for the manipulation of Monte Carlo geometry and material data. To this end, collaboration on the development of plug-in modules for additional Monte Carlo codes is desired.
Analysis of EBR-II neutron and photon physics by multidimensional transport-theory techniques
Jacqmin, R.P.; Finck, P.J. [Argonne National Lab., IL (United States)]; Palmiotti, G. [CEA Centre d'Etudes de Cadarache, 13 - Saint-Paul-lez-Durance (France)]
1994-03-01
This paper contains a review of the challenges specific to the EBR-II core physics, a description of the methods and techniques which have been developed for addressing these challenges, and the results of some validation studies relative to power-distribution calculations. Numerical tests have shown that the VARIANT nodal code yields eigenvalue and power predictions as accurate as finite difference and discrete ordinates transport codes, at a small fraction of the cost. Comparisons with continuous-energy Monte Carlo results have proven that the errors introduced by the use of the diffusion-theory approximation in the collapsing procedure to obtain broad-group cross sections, kerma factors, and photon-production matrices, have a small impact on the EBR-II neutron/photon power distribution.
NASA Astrophysics Data System (ADS)
Söderberg, Jonas; Alm Carlsson, Gudrun; Ahnesjö, Anders
2003-10-01
When dedicated software is lacking, treatment planning for fast neutron therapy is sometimes performed using dose calculation algorithms designed for photon beam therapy. In this work Monte Carlo derived neutron pencil kernels in water were parametrized using the photon dose algorithm implemented in the Nucletron TMS (treatment management system) treatment planning system. A rectangular fast-neutron fluence spectrum with energies 0-40 MeV (resembling a polyethylene-filtered p(41)+Be spectrum) was used. Central axis depth doses and lateral dose distributions were calculated and compared with the corresponding dose distributions from Monte Carlo calculations for homogeneous water and heterogeneous slab phantoms. All absorbed doses were normalized to the reference dose at 10 cm depth for a field of radius 5.6 cm in a 30 × 40 × 20 cm{sup 3} water test phantom. Agreement to within 7% was found in both the lateral and the depth dose distributions. The deviations could be explained as due to differences in size between the test phantom and that used in deriving the pencil kernel (radius 200 cm, thickness 50 cm). In the heterogeneous phantom, the TMS, with a directly applied neutron pencil kernel, and Monte Carlo calculated absorbed doses agree approximately for muscle but show large deviations for media such as adipose or bone. For the latter media, agreement was substantially improved by correcting the absorbed doses calculated in TMS with the neutron kerma factor ratio and the stopping power ratio between tissue and water. The multipurpose Monte Carlo code FLUKA was used both in calculating the pencil kernel and in direct calculations of absorbed dose in the phantom.
SHIELD-HIT12A - a Monte Carlo particle transport program for ion therapy research
NASA Astrophysics Data System (ADS)
Bassler, N.; Hansen, D. C.; Lühr, A.; Thomsen, B.; Petersen, J. B.; Sobolevsky, N.
2014-03-01
Purpose: The Monte Carlo (MC) code SHIELD-HIT simulates the transport of ions through matter. Since SHIELD-HIT08 we have added numerous features that improve speed, usability and the underlying physics, and thereby the user experience. The "-A" fork of SHIELD-HIT also aims to attach SHIELD-HIT to a heavy ion dose optimization algorithm to provide MC-optimized treatment plans that include radiobiology. Methods: SHIELD-HIT12A is written in FORTRAN and carefully retains platform independence. A powerful scoring engine is implemented, scoring relevant quantities such as dose and track-averaged LET. It supports native formats compatible with the heavy ion treatment planning system TRiP. Stopping power files follow the ICRU standard and are generated using the libdEdx library, which allows the user to choose from a multitude of stopping power tables. Results: SHIELD-HIT12A runs on Linux and Windows platforms. We experienced that new users quickly learn to use SHIELD-HIT12A and set up new geometries. Contrary to previous versions of SHIELD-HIT, the 12A distribution comes with easy-to-use example files and an English manual. A new implementation of Vavilov straggling resulted in a massive reduction of computation time. Scheduled for later release are CT import and photon-electron transport. Conclusions: SHIELD-HIT12A is an interesting alternative ion transport engine. Apart from being a flexible particle therapy research tool, it can also serve as a back end for a MC ion treatment planning system. More information about SHIELD-HIT12A and a demo version can be found on http://www.shieldhit.org.
Taylor, Michael; Dunn, Leon; Kron, Tomas; Height, Felicity; Franich, Rick
2012-04-01
Prediction of dose distributions in close proximity to interfaces is difficult. In the context of radiotherapy of lung tumors, this may affect the minimum dose received by lesions and is particularly important when prescribing dose to covering isodoses. The objective of this work is to quantify underdosage in key regions around a hypothetical target using Monte Carlo dose calculation methods, and to develop a factor for clinical estimation of such underdosage. A systematic set of calculations are undertaken using 2 Monte Carlo radiation transport codes (EGSnrc and GEANT4). Discrepancies in dose are determined for a number of parameters, including beam energy, tumor size, field size, and distance from chest wall. Calculations were performed for 1-mm{sup 3} regions at proximal, distal, and lateral aspects of a spherical tumor, determined for a 6-MV and a 15-MV photon beam. The simulations indicate regions of tumor underdose at the tumor-lung interface. Results are presented as ratios of the dose at key peripheral regions to the dose at the center of the tumor, a point at which the treatment planning system (TPS) predicts the dose more reliably. Comparison with TPS data (pencil-beam convolution) indicates such underdosage would not have been predicted accurately in the clinic. We define a dose reduction factor (DRF) as the average of the dose in the periphery in the 6 cardinal directions divided by the central dose in the target, the mean of which is 0.97 and 0.95 for a 6-MV and 15-MV beam, respectively. The DRF can assist clinicians in the estimation of the magnitude of potential discrepancies between prescribed and delivered dose distributions as a function of tumor size and location. Calculation for a systematic set of 'generic' tumors allows application to many classes of patient case, and is particularly useful for interpreting clinical trial data.
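The dose reduction factor (DRF) defined above is a simple ratio, sketched below; the dose values are illustrative, not the EGSnrc/GEANT4 simulation data from the study.

```python
# Hedged sketch of the dose reduction factor (DRF) defined in the abstract:
# the mean peripheral dose over the six cardinal directions divided by the
# dose at the tumor center. Dose values below are illustrative assumptions.

def dose_reduction_factor(peripheral_doses, central_dose):
    """DRF = mean(peripheral doses, 6 cardinal directions) / central dose."""
    if len(peripheral_doses) != 6:
        raise ValueError("expected doses for the six cardinal directions")
    return sum(peripheral_doses) / 6.0 / central_dose

# Illustrative 6 MV-like case with mean peripheral dose 3% below center,
# consistent in magnitude with the mean DRF of 0.97 quoted in the abstract.
drf = dose_reduction_factor([0.96, 0.97, 0.98, 0.96, 0.98, 0.97], 1.0)
assert abs(drf - 0.97) < 1e-9
```

Normalizing to the central dose makes the factor usable clinically, since the abstract notes the treatment planning system predicts the tumor-center dose reliably while underestimating the interface underdosage.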
Patni, H K; Nadar, M Y; Akar, D K; Bhati, S; Sarkar, P K
2011-11-01
The adult reference male and female computational voxel phantoms recommended by ICRP are adapted into the Monte Carlo transport code FLUKA. The FLUKA code is then utilised for computation of dose conversion coefficients (DCCs) expressed in absorbed dose per air kerma free-in-air for colon, lungs, stomach wall, breast, gonads, urinary bladder, oesophagus, liver and thyroid due to a broad parallel beam of mono-energetic photons impinging in anterior-posterior and posterior-anterior directions in the energy range of 15 keV-10 MeV. The computed DCCs of colon, lungs, stomach wall and breast are found to be in good agreement with the results published in ICRP publication 110. The present work thus validates the use of FLUKA code in computation of organ DCCs for photons using ICRP adult voxel phantoms. Further, the DCCs for gonads, urinary bladder, oesophagus, liver and thyroid are evaluated and compared with results published in ICRP 74 in the above-mentioned energy range and geometries. Significant differences in DCCs are observed for breast, testis and thyroid above 1 MeV, and for most of the organs at energies below 60 keV in comparison with the results published in ICRP 74. The DCCs of female voxel phantom were found to be higher in comparison with male phantom for almost all organs in both the geometries. PMID:21147784
Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons
Knezevic, Irena
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
Photon transport enhanced by transverse Anderson localization in disordered superlattices
NASA Astrophysics Data System (ADS)
Hsieh, P.; Chung, C.; McMillan, J. F.; Tsai, M.; Lu, M.; Panoiu, N. C.; Wong, C. W.
2015-03-01
Controlling the flow of light at subwavelength scales provides access to functionalities such as negative or zero index of refraction, transformation optics, cloaking, metamaterials and slow light, but diffraction effects severely restrict our ability to control light on such scales. Here we report the photon transport and collimation enhanced by transverse Anderson localization in chip-scale dispersion-engineered anisotropic media. We demonstrate a photonic crystal superlattice structure in which diffraction is nearly completely arrested by cascaded resonant tunnelling through transverse guided resonances. By modifying the geometry of more than 4,000 scatterers in the superlattices we add structural disorder controllably and uncover the mechanism of disorder-induced transverse localization. Arrested spatial divergence is captured in the power-law scaling, along with exponential asymmetric mode profiles and enhanced collimation bandwidths for increasing disorder. With increasing disorder, we observe the crossover from cascaded guided resonances into the transverse localization regime, beyond both the ballistic and diffusive transport of photons.
Robust light transport in non-Hermitian photonic lattices
Stefano Longhi; Davide Gatti; Giuseppe Della Valle
2015-07-24
Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by the quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport that is rather insensitive to disorder or imperfections in the structure. Non-Hermitian transport in two lattice models is considered: a tight-binding lattice with an imaginary gauge field (Hatano-Nelson model), and a non-Hermitian driven binary lattice. In the former case, transport in spite of disorder is ensured by a mobility edge that arises because of a non-Hermitian delocalization transition. The possibility of observing non-Hermitian delocalization induced by a synthetic 'imaginary' gauge field is suggested using an engineered coupled-resonator optical waveguide (CROW) structure.
NASA Astrophysics Data System (ADS)
Boas, David A.; Culver, J. P.; Stott, J. J.; Dunn, A. K.
2002-02-01
We describe a novel Monte Carlo code for photon migration through 3D media with spatially varying optical properties. The code is validated against analytic solutions of the photon diffusion equation for semi-infinite homogeneous media. The code is also cross-validated for photon migration through a slab with an absorbing heterogeneity. A demonstration of the utility of the code is provided by showing time-resolved photon migration through a human head. This code, known as ‘tMCimg’, is available on the web and can serve as a resource for solving the forward problem for complex 3D structural data obtained by MRI or CT.
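The core of a photon-migration code like the one described above is a simple random walk: exponential "hops" between interaction sites, with absorption or scattering sampled at each site. The following is a minimal, hedged Python sketch of that cycle in an infinite homogeneous medium, not the actual tMCimg implementation; the function names and the simplification of ignoring directions (valid only for the total path length) are illustrative assumptions.

```python
import math
import random

def simulate_photon(mu_a, mu_s, rng):
    """Random-walk one photon in an infinite homogeneous medium.

    mu_a, mu_s: absorption and scattering coefficients (1/cm).
    Returns the total path length travelled before absorption: each free
    path is exponentially distributed with rate mu_t = mu_a + mu_s, and at
    each interaction the photon is absorbed with probability mu_a / mu_t.
    (Direction changes do not affect the total path length, so scattering
    angles are omitted in this sketch.)
    """
    mu_t = mu_a + mu_s
    path = 0.0
    while True:
        path += -math.log(rng.random()) / mu_t  # free path ~ Exp(mu_t)
        if rng.random() < mu_a / mu_t:          # absorbed at this site
            return path

def mean_path(mu_a, mu_s, n=20000, seed=1):
    """Monte Carlo estimate of the mean path length, which should
    converge to 1/mu_a (a geometric number of Exp(mu_t) hops)."""
    rng = random.Random(seed)
    return sum(simulate_photon(mu_a, mu_s, rng) for _ in range(n)) / n
```

A quick sanity check of the kind used to validate such codes against analytic solutions: for mu_a = 0.1 cm⁻¹ the mean path before absorption is 10 cm regardless of mu_s.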
Fang, Qianqian; Boas, David A
2009-10-26
We report a parallel Monte Carlo algorithm accelerated by graphics processing units (GPU) for modeling time-resolved photon migration in arbitrary 3D turbid media. By taking advantage of the massively parallel threads and low memory latency, this algorithm allows many photons to be simulated simultaneously on a GPU. To further improve the computational efficiency, we explored two parallel random number generators (RNG), including a floating-point-only RNG based on a chaotic lattice. An efficient scheme for boundary reflection was implemented, along with functions for time-resolved imaging. For a homogeneous semi-infinite medium, good agreement was observed between the simulation output and the analytical solution from diffusion theory. The code was implemented in the CUDA programming language and benchmarked under various parameters, such as thread number, selection of RNG and memory access pattern. With a low-cost graphics card, this algorithm has demonstrated an acceleration ratio above 300 when using 1792 parallel threads over conventional CPU computation. The acceleration ratio drops to 75 when using atomic operations. These results render the GPU-based Monte Carlo simulation a practical solution for data analysis in a wide range of diffuse optical imaging applications, such as human brain or small-animal imaging. PMID:19997242
Monte Carlo photon beam modeling and commissioning for radiotherapy dose calculation algorithm.
Toutaoui, A; Ait chikh, S; Khelassi-Toutaoui, N; Hattali, B
2014-11-01
The aim of the present work was a Monte Carlo verification of the Multi-grid superposition (MGS) dose calculation algorithm implemented in the CMS XiO (Elekta) treatment planning system and used to calculate the dose distribution produced by photon beams generated by the Siemens Primus linear accelerator (linac). The BEAMnrc/DOSXYZnrc (EGSnrc package) Monte Carlo model of the linac head was used as a benchmark. In the first part of the work, BEAMnrc was used for the commissioning of a 6 MV photon beam and to optimize the linac description to fit the experimental data. In the second part, the MGS dose distributions were compared with DOSXYZnrc using relative dose error comparison and γ-index analysis (2%/2 mm, 3%/3 mm) in different dosimetric test cases. Results show good agreement between simulated and calculated dose in homogeneous media for square and rectangular symmetric fields. The γ-index analysis confirmed that for most cases the MGS model and EGSnrc doses agree within 3% or 3 mm. PMID:24947967
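The γ-index comparison used above combines a dose-difference criterion with a distance-to-agreement search. As a hedged illustration of the metric itself (a minimal 1-D global version, not the clinical 2-D/3-D implementation used in the study; the function name and signature are assumptions):

```python
import math

def gamma_index(positions, ref, test, dose_tol, dist_tol):
    """Minimal 1-D gamma index.

    positions: sample positions (mm), shared by both profiles.
    ref, test: reference and evaluated dose values at those positions.
    dose_tol:  dose-difference criterion, in dose units (e.g. 3% of Dmax).
    dist_tol:  distance-to-agreement criterion (e.g. 3 mm).
    For each reference point, search the test profile for the point that
    minimizes the combined metric; gamma <= 1 means the point passes.
    """
    gammas = []
    for x_r, d_r in zip(positions, ref):
        g2 = min(((x_t - x_r) / dist_tol) ** 2 +
                 ((d_t - d_r) / dose_tol) ** 2
                 for x_t, d_t in zip(positions, test))
        gammas.append(math.sqrt(g2))
    return gammas
```

Identical profiles give γ = 0 everywhere; a point whose dose error equals the dose tolerance, with no closer agreement available within the DTA radius, gives γ = 1.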
Monte Carlo-based energy response studies of diode dosimeters in radiotherapy photon beams.
Arun, C; Palani Selvam, T; Dinkar, Verma; Munshi, Prabhat; Kalra, Manjit Singh
2013-01-01
This study presents Monte Carlo-calculated absolute and normalized (relative to a (60)Co beam) sensitivity values for a variety of commercially available silicon diode dosimeters in radiotherapy photon beams in the energy range (60)Co-24 MV. These values were obtained at 5 cm depth along the central axis of a water-equivalent phantom for a 10 cm × 10 cm field size. The Monte Carlo calculations were based on the EGSnrc code system. The diode dosimeters considered in the calculations have different buildup materials such as aluminum, brass, copper, and stainless steel + epoxy. The calculated normalized sensitivity values of the diode dosimeters were then compared to previously published measured values for photon beams at (60)Co-20 MV. The comparison showed reasonable agreement for some diode dosimeters and deviations of 5-17 % (17 % for the 3.4 mm brass buildup case for a 10 MV beam) for others. The larger deviations indicate that the geometric models of those diode dosimeters were too simple. The effect of wall materials on the absorbed dose to the diode was studied and the results are presented. Spencer-Attix and Bragg-Gray stopping power ratios (SPRs) of water-to-diode were calculated at 5 cm depth in water. The Bragg-Gray SPRs of water-to-diode compare well with Spencer-Attix SPRs for Δ = 100 keV and above at all beam qualities. PMID:23180010
Damien Querlioz; Huu-Nha Nguyen; Jérôme Saint-Martin; Arnaud Bournel; Sylvie Galdin-Retailleau; Philippe Dollfus
2009-01-01
In this paper, we review and extend our recent works based on the Monte Carlo method to solve the Wigner-Boltzmann transport equation and model semiconductor nanodevices. After presenting the different possible approaches to quantum mechanical modelling, the formalism and the theoretical framework are described, together with the particle Monte Carlo implementation using a technique fully compatible with semiclassical simulation. Examples
Glaser, Adam K.; Kanick, Stephen C.; Zhang, Rongxiao; Arce, Pedro; Pogue, Brian W.
2013-01-01
We describe a tissue optics plug-in that interfaces with the GEANT4/GAMOS Monte Carlo (MC) architecture, providing a means of simulating radiation-induced light transport in biological media for the first time. Specifically, we focus on the simulation of light transport due to the Čerenkov effect (light emission from charged particles traveling faster than the local speed of light in a given medium), a phenomenon which requires accurate modeling of both the high-energy particle and the subsequent optical photon transport, a dynamic coupled process that is not well-described by any current MC framework. The results of validation simulations show excellent agreement with currently employed biomedical optics MC codes [i.e., Monte Carlo for Multi-Layered media (MCML), Mesh-based Monte Carlo (MMC), and diffusion theory], and examples relevant to recent studies into detection of Čerenkov light from an external radiation beam or radionuclide are presented. While the work presented within this paper focuses on radiation-induced light transport, the core features and robust flexibility of the modified package also make the plug-in extensible to more conventional biomedical optics simulations. The plug-in, user guide, example files, as well as the files necessary to reproduce the validation simulations described within this paper are available online at http://www.dartmouth.edu/optmed/research-projects/monte-carlo-software. PMID:23667790
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
Iandola, F N; O'Brien, M J; Procassini, R J
2010-11-29
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
Detailed calculation of inner-shell impact ionization to use in photon transport codes
NASA Astrophysics Data System (ADS)
Fernandez, Jorge E.; Scot, Viviana; Verardi, Luca; Salvat, Francesc
2014-02-01
Secondary electrons can modify the intensity of the XRF characteristic lines by means of a mechanism known as inner-shell impact ionization (ISII). The ad-hoc code KERNEL (which calls the PENELOPE package) has been used to characterize the electron correction in terms of angular, spatial and energy distributions. It is demonstrated that the angular distribution of the characteristic photons due to ISII can be safely considered as isotropic, and that the source of photons from electron interactions is well represented as a point source. The energy dependence of the correction is described using an analytical model in the energy range 1-150 keV, for all the emission lines (K, L and M) of the elements with atomic numbers Z=11-92. A new photon kernel comprising the ISII correction is introduced, suitable for adoption in photon transport codes (deterministic or Monte Carlo) with minimal effort. The impact of the correction is discussed for the most intense K (Kα1, Kα2, Kβ1) and L (Lα1, Lα2) lines.
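The finding that the ISII source can be treated as an isotropic point source maps directly onto the standard direction-sampling step of any photon transport code. A hedged Python sketch of that step (not KERNEL/PENELOPE code; the function name is an assumption):

```python
import math
import random

def isotropic_direction(rng):
    """Sample a unit vector uniformly on the sphere.

    cos(theta) is drawn uniformly on [-1, 1] and the azimuth phi uniformly
    on [0, 2*pi), the standard recipe for emitting photons from an
    isotropic point source in Monte Carlo transport codes.
    """
    mu = 2.0 * rng.random() - 1.0          # cos(theta)
    phi = 2.0 * math.pi * rng.random()
    s = math.sqrt(1.0 - mu * mu)           # sin(theta)
    return (s * math.cos(phi), s * math.sin(phi), mu)
```

Sampled vectors have unit norm by construction, and each Cartesian component averages to zero over many draws, consistent with isotropy.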
NASA Astrophysics Data System (ADS)
Peterson, J. R.; Jernigan, J. G.; Kahn, S. M.; Rasmussen, A. P.; Peng, E.; Ahmad, Z.; Bankert, J.; Chang, C.; Claver, C.; Gilmore, D. K.; Grace, E.; Hannel, M.; Hodge, M.; Lorenz, S.; Lupu, A.; Meert, A.; Nagarajan, S.; Todd, N.; Winans, A.; Young, M.
2015-05-01
We present a comprehensive methodology for the simulation of astronomical images from optical survey telescopes. We use a photon Monte Carlo approach to construct images by sampling photons from models of astronomical source populations, and then simulating those photons through the system as they interact with the atmosphere, telescope, and camera. We demonstrate that all physical effects for optical light that determine the shapes, locations, and brightnesses of individual stars and galaxies can be accurately represented in this formalism. By using large scale grid computing, modern processors, and an efficient implementation that can produce 400,000 photons s⁻¹, we demonstrate that even very large optical surveys can now be simulated. We demonstrate that we are able to (1) construct kilometer scale phase screens necessary for wide-field telescopes, (2) reproduce atmospheric point-spread function moments using a fast novel hybrid geometric/Fourier technique for non-diffraction limited telescopes, (3) accurately reproduce the expected spot diagrams for complex aspheric optical designs, and (4) recover system effective area predicted from analytic photometry integrals. This new code, the Photon Simulator (PhoSim), is publicly available. We have implemented the Large Synoptic Survey Telescope design, and it can be extended to other telescopes. We expect that because of the comprehensive physics implemented in PhoSim, it will be used by the community to plan future observations, interpret detailed existing observations, and quantify systematics related to various astronomical measurements. Future development and validation by comparisons with real data will continue to improve the fidelity and usability of the code.
Photon-Inhibited Topological Transport in Quantum Well Heterostructures
NASA Astrophysics Data System (ADS)
Farrell, Aaron; Pereg-Barnea, T.
2015-09-01
Here we provide a picture of transport in quantum well heterostructures with a periodic driving field in terms of a probabilistic occupation of the topologically protected edge states in the system. This is done by generalizing methods from the field of photon-assisted tunneling. We show that the time-dependent field dresses the underlying Hamiltonian of the heterostructure and splits the system into sidebands. Each of these sidebands is occupied with a certain probability which depends on the drive frequency and strength. This leads to a reduction in the topological transport signatures of the system because of the probability to absorb or emit a photon. Therefore when the voltage is tuned to the bulk gap the conductance is smaller than the expected 2e²/h. We refer to this as photon-inhibited topological transport. Nevertheless, the edge modes reveal their topological origin in the robustness of the edge conductance to disorder and changes in model parameters. In this work the analogy with photon-assisted tunneling allows us to interpret the calculated conductivity and explain the sum rule observed by Kundu and Seradjeh.
Carlo Jacoboni; Lino Reggiani
1983-01-01
This review presents in a comprehensive and tutorial form the basic principles of the Monte Carlo method, as applied to the solution of transport problems in semiconductors. Sufficient details of a typical Monte Carlo simulation have been given to allow the interested reader to create his own Monte Carlo program, and the method has been briefly compared with alternative theoretical
Experimental investigation of a fast Monte Carlo photon beam dose calculation algorithm.
Fippel, M; Laub, W; Huber, B; Nüsslin, F
1999-12-01
An experimental verification of the recently developed XVMC code, a fast Monte Carlo algorithm to calculate dose distributions of photon beams in treatment planning, is presented. The treatment head is modelled by a point source with an energy distribution (primary photons) and an additional head-scatter contribution. Utility software is presented that allows the determination of the parameters of this model from a single measured depth dose curve in water. The simple beam model is considered a starting point for the more complex models planned for future versions of the code. This paper is mainly focused on the influence of the different techniques on variance reduction and material property determination for dose distributions. It is demonstrated that XVMC and the simple beam model reproduce measured (by a diamond detector) relative dose distributions with an accuracy of better than ±2% in various homogeneous and inhomogeneous phantoms. Furthermore, relative dose distributions in solid state phantoms have been measured by film. Also for these cases, measured and calculated dose distributions agree within experimental uncertainty. The short calculation time (depending on voxel resolution, statistical accuracy, field size and energy, a span of 1 min to 1 h using a present-day personal computer) and an interface to a commercial planning system will allow the implementation of the code for routine treatment planning of clinical electron and photon beams. PMID:10616153
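A point-source-plus-spectrum head model like the one above reduces, at simulation time, to repeatedly sampling primary-photon energies from the fitted spectrum. The sketch below shows inverse-CDF sampling from a hypothetical tabulated spectrum; XVMC actually fits an analytic spectrum during commissioning, so the bin values, function name, and signature here are illustrative assumptions only.

```python
import bisect
import random
from itertools import accumulate

def make_spectrum_sampler(energies, weights, seed=0):
    """Build an inverse-CDF sampler for a discrete energy spectrum.

    energies: bin energies (MeV); weights: relative fluence per bin.
    Returns a zero-argument function that draws one photon energy.
    """
    total = float(sum(weights))
    cdf = list(accumulate(w / total for w in weights))  # normalized CDF
    rng = random.Random(seed)

    def sample():
        # find the first bin whose cumulative weight reaches r
        i = bisect.bisect_left(cdf, rng.random())
        return energies[min(i, len(energies) - 1)]  # clamp for rounding

    return sample
```

Over many draws, each energy appears with frequency proportional to its weight, which is the property the commissioning fit relies on.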
Monte Carlo simulation of MOSFET detectors for high-energy photon beams using the PENELOPE code.
Panettieri, Vanessa; Duch, Maria Amor; Jornet, Núria; Ginjaume, Mercè; Carrasco, Pablo; Badal, Andreu; Ortega, Xavier; Ribas, Montserrat
2007-01-01
The aim of this work was the Monte Carlo (MC) simulation of the response of commercially available dosimeters based on metal oxide semiconductor field effect transistors (MOSFETs) for radiotherapeutic photon beams using the PENELOPE code. The studied Thomson & Nielsen TN-502-RD MOSFETs have a very small sensitive area of 0.04 mm² and a thickness of 0.5 μm, which is placed on a flat kapton base and covered by a rounded layer of black epoxy resin. The influence of different metallic and Plastic Water build-up caps, together with the orientation of the detector, has been investigated for the specific application of MOSFET detectors to entrance in vivo dosimetry. Additionally, the energy dependence of MOSFET detectors for different high-energy photon beams (with energy >1.25 MeV) has been calculated. Calculations were carried out for simulated 6 MV and 18 MV x-ray beams generated by a Varian Clinac 1800 linear accelerator, a Co-60 photon beam from a Theratron 780 unit, and monoenergetic photon beams ranging from 2 MeV to 10 MeV. The results of the validation of the simulated photon beams show that the average difference between MC results and reference data is negligible, within 0.3%. MC simulated results of the effect of the build-up caps on the MOSFET response are in good agreement with experimental measurements, within the uncertainties. In particular, for the 18 MV photon beam the response of the detectors under a tungsten cap is 48% higher than for a 2 cm Plastic Water cap and approximately 26% higher when a brass cap is used. This effect is demonstrated to be caused by positron production in the build-up caps of higher atomic number. This work also shows that the MOSFET detectors produce a higher signal when their rounded side is facing the beam (up to 6%) and that there is a significant variation (up to 50%) in the response of the MOSFET for photon energies in the studied energy range. All the results have shown that the PENELOPE code system can successfully reproduce the response of a detector with such a small active area. PMID:17183143
Importance Sampling and Adjoint Hybrid Methods in Monte Carlo Transport with Reflecting Boundaries
Bal, Guillaume
in the atmosphere and the ocean [4, 12, 14], neutron transport [6, 16], as well as the propagation of seismic waves encountered in remote sensing. Photons scatter and are absorbed with prescribed probabilities, and importance sampling is used to reduce the variance of each shot. See [16, 13] or the review of more recent work (on neutron transport) in [9].
Bishop, Martin J.; Plank, Gernot
2014-01-01
Light scattering during optical imaging of electrical activation within the heart is known to significantly distort the optically-recorded action potential (AP) upstroke, as well as affecting the magnitude of the measured response of ventricular tissue to strong electric shocks. Modeling approaches based on the photon diffusion equation have recently been instrumental in quantifying and helping to understand the origin of the resulting distortion. However, they are unable to faithfully represent regions of non-scattering media, such as small cavities within the myocardium which are filled with perfusate during experiments. Stochastic Monte Carlo (MC) approaches allow simulation and tracking of individual photon "packets" as they propagate through tissue with differing scattering properties. Here, we present a novel application of the MC method of photon scattering simulation, applied for the first time to the simulation of cardiac optical mapping signals within unstructured, tetrahedral, finite element computational ventricular models. The method faithfully allows simulation of optical signals over highly-detailed, anatomically-complex MR-based models, including representations of fine-scale anatomy and intramural cavities. We show that the optical action potential upstroke is more prolonged close to large subepicardial vessels than further away from them, at times having a distinct "humped" morphology. Furthermore, we uncover a novel mechanism by which photon scattering effects around vessel cavities interact with "virtual-electrode" regions of strong de-/hyper-polarized tissue surrounding cavities during shocks, significantly reducing the apparent optically-measured epicardial polarization. We therefore demonstrate the importance of this novel optical mapping simulation approach, along with highly anatomically-detailed models, to fully investigate electrophysiological phenomena driven by fine-scale structural heterogeneity. PMID:25309442
Advanced Monte Carlo methods for thermal radiation transport
NASA Astrophysics Data System (ADS)
Wollaber, Allan B.
During the past 35 years, the Implicit Monte Carlo (IMC) method proposed by Fleck and Cummings has been the standard Monte Carlo approach to solving the thermal radiative transfer (TRT) equations. However, the IMC equations are known to have accuracy limitations that can produce unphysical solutions. In this thesis, we explicitly provide the IMC equations with a Monte Carlo interpretation by including particle weight as one of its arguments. We also develop and test a stability theory for the 1-D, gray IMC equations applied to a nonlinear problem. We demonstrate that the worst case occurs for 0-D problems, and we extend the results to a stability algorithm that may be used for general linearizations of the TRT equations. We derive gray Quasidiffusion equations that may be deterministically solved in conjunction with IMC to obtain an inexpensive, accurate estimate of the temperature at the end of the time step. We then define an average temperature T* to evaluate the temperature-dependent problem data in IMC, and we demonstrate that using T* is more accurate than using the (traditional) beginning-of-time-step temperature. We also propose an accuracy enhancement to the IMC equations: the use of a time-dependent "Fleck factor". This Fleck factor can be considered an automatic tuning of the traditionally defined user parameter alpha, which generally provides more accurate solutions at an increased cost relative to traditional IMC. We also introduce a global weight window that is proportional to the forward scalar intensity calculated by the Quasidiffusion method. This weight window improves the efficiency of the IMC calculation while conserving energy. All of the proposed enhancements are tested in 1-D gray and frequency-dependent problems. These enhancements do not unconditionally eliminate the unphysical behavior that can be seen in IMC calculations. However, for fixed spatial and temporal grids, they suppress it and clearly work to make the solution more accurate. Overall, the work presented represents first steps along several paths that can be taken to improve the Monte Carlo simulation of TRT problems.
Densmore, Jeffery D., E-mail: jdd@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States); Thompson, Kelly G., E-mail: kgt@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States); Urbatsch, Todd J., E-mail: tmonster@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States)
2012-08-15
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations in optically thick media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many smaller Monte Carlo steps, thus improving the efficiency of the simulation. In this paper, we present an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold, as optical thickness is typically a decreasing function of frequency. Above this threshold we employ standard Monte Carlo, which results in a hybrid transport-diffusion scheme. With a set of frequency-dependent test problems, we confirm the accuracy and increased efficiency of our new DDMC method.
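The key DDMC idea above is that in optically thick cells a particle takes one discrete cell-to-cell step in place of many small Monte Carlo steps. The toy sketch below conveys only that flavor with an equal-probability lattice hop; real DDMC derives asymmetric leakage probabilities from the discretized diffusion equation, so the function name and the symmetric-hop simplification are assumptions for illustration.

```python
import random

def discrete_diffusion_exit_steps(n_cells, start, rng):
    """Toy 1-D discrete-diffusion walk.

    The particle hops one whole cell left or right with equal probability
    (the homogeneous special case) until it leaks out of either face of a
    slab of n_cells cells.  Returns the number of discrete steps taken;
    each such step stands in for many small Monte Carlo steps.
    """
    cell, steps = start, 0
    while 0 <= cell < n_cells:
        cell += 1 if rng.random() < 0.5 else -1
        steps += 1
    return steps
```

For this symmetric walk the expected number of steps to leak from cell k of an n-cell slab is (k+1)(n-k), e.g. 25 discrete steps from the middle of a 9-cell slab, independent of how many collisions each discrete step replaces.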
LDRD project 151362 : low energy electron-photon transport.
Kensek, Ronald Patrick; Hjalmarson, Harold Paul; Magyar, Rudolph J.; Bondi, Robert James; Crawford, Martin James
2013-09-01
At sufficiently high energies, the wavelengths of electrons and photons are short enough to interact with only one atom at a time, leading to the popular "independent-atom approximation". We attempted to incorporate atomic structure into the generation of cross sections (which embody the modeled physics) to improve transport at lower energies. We document our successes and failures. This was a three-year LDRD project. The core team consisted of a radiation-transport expert, a solid-state physicist, and two DFT experts.
Multidimensional electron-photon transport with standard discrete ordinates codes
Drumm, C.R.
1995-12-31
A method is described for generating electron cross sections that are compatible with standard discrete ordinates codes without modification. There are many advantages of using an established discrete ordinates solver, e.g. immediately available adjoint capability. Coupled electron-photon transport capability is needed for many applications, including the modeling of the response of electronics components to space and man-made radiation environments. The cross sections have been successfully used in the DORT, TWODANT and TORT discrete ordinates codes. The cross sections are shown to provide accurate and efficient solutions to certain multidimensional electron-photon transport problems.
Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV
Miller, S.G.
1988-08-01
Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend their ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
Frambati, S.; Frignani, M. [Ansaldo Nucleare S.p.A., Corso F.M. Perrone 25, 1616 Genova (Italy)
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)
Hariharan Subramanian; Prabhakar Pradhan; Young L. Kim; Yang Liu; Xu Li; Vadim Backman
2005-09-01
Constructive interference among coherent waves traveling time-reversed paths in a random medium gives rise to the enhancement of light scattering observed in directions close to backscattering. This phenomenon is known as enhanced backscattering (EBS). According to diffusion theory, the angular width of an EBS cone is proportional to the ratio of the wavelength of light λ to the transport mean free path ls* of the random medium. In biological media, a large ls* ~ 0.5-2 mm >> λ results in an extremely small (~0.001°) angular width of the EBS cone, making the experimental observation of such narrow peaks difficult. Recently, the feasibility of observing EBS under low spatial coherence illumination (spatial coherence length Lsc << ls*), termed low-coherence EBS (LEBS), has been demonstrated. We developed a photon random walk model of LEBS using Monte Carlo simulation to elucidate the mechanism accounting for the unprecedented broadening of LEBS peaks. Typically, the exit angles of the scattered photons are not considered in modeling EBS in the diffusion regime. We show that small exit angles are highly sensitive to low-order scattering, which is crucial for accurate modeling of LEBS. Our results show that the predictions of the model are in excellent agreement with experimental data.
MCML—Monte Carlo modeling of light transport in multi-layered tissues
Lihong Wang; Steven L. Jacques; Liqiong Zheng
1995-01-01
A Monte Carlo model of steady-state light transport in multi-layered tissues (MCML) has been coded in ANSI Standard C; therefore, the program can be used on various computers. Dynamic data allocation is used for MCML, hence the number of tissue layers and grid elements of the grid system can be varied by users at run time. The coordinates of the
Kinetic Monte Carlo (KMC) simulation of fission product silver transport through TRISO fuel particle
G. M. de Bellefon; B. D. Wirth
2011-01-01
A mesoscale kinetic Monte Carlo (KMC) model developed to investigate the diffusion of silver through the pyrolytic carbon and silicon carbide containment layers of a TRISO fuel particle is described. The release of radioactive silver from TRISO particles has been studied for nearly three decades, yet the mechanisms governing silver transport are not fully understood. This model atomically resolves Ag,
A Photon Transport Problem with a Time-Dependent Point Source
Mottram, Nigel
A. Belleni-Morante (2007). We consider a time-dependent problem of photon transport in an interstellar cloud with a point photon source modelled by a Dirac functional. The existence of a unique distributional solution
Photon transport in a time-dependent 3D region
Ceragioli, Francesca
Aldo Belleni-Morante, Roberto Monaco (roberto.monaco@polito.it), Sandra Pieraccini (sandra.pieraccini@polito.it). Abstract: Photon transport is studied ... assuming that the time behaviour of V(t) is unknown and the photon distribution function is measured at a location far ...
Update On the Status of the FLUKA Monte Carlo Transport Code*
NASA Technical Reports Server (NTRS)
Ferrari, A.; Lorenzo-Sentis, M.; Roesler, S.; Smirnov, G.; Sommerer, F.; Theis, C.; Vlachoudis, V.; Carboni, M.; Mostacci, A.; Pelliccioni, M.
2006-01-01
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. We review the progress achieved since the last CHEP Conference on the physics models, some technical improvements to the code, and some recent applications. On the physics side, improvements include the extension of PEANUT to higher energies for p, n, pi, and pbar/nbar and down to the lowest energies for nbars; the addition of an online capability to evolve radioactive products and obtain subsequent dose rates; and an upgraded treatment of EM interactions that eliminates the need to separately prepare preprocessed files. A new coherent photon scattering model, an updated treatment of the photo-electric effect, an improved pair production model, and new photon cross sections from the LLNL Cullen database have been implemented. In the field of nucleus-nucleus interactions, the electromagnetic dissociation of heavy ions has been added, along with the extension of the interaction models for some nuclide pairs to energies below 100 MeV/A using the BME approach, as well as the development of an improved QMD model for intermediate energies. Both DPMJET 2.53 and 3 remain available, along with rQMD 2.4, for heavy-ion interactions above 100 MeV/A. Technical improvements include the ability to use parentheses in setting up the combinatorial geometry, the introduction of pre-processor directives in the input stream, a new random number generator with full 64-bit randomness, and new routines for mathematical special functions (adapted from SLATEC). Finally, work is progressing on the deployment of a user-friendly GUI input interface as well as a CAD-like geometry creation and visualization tool.
On the application front, FLUKA has been used extensively to evaluate potential space radiation effects on astronauts for future deep space missions, activation doses for beam target areas, and dose calculations for radiation therapy, as well as being adapted for use in the simulation of events in the ALICE detector at the LHC.
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported as well.
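The cloud-moment quantities discussed above follow standard definitions: the first moment gives the centroid, the second central moment the spread, and an apparent longitudinal macrodispersivity can be estimated from successive snapshots as A_L = (1/2) dσ²/dx̄. A minimal sketch (the moment formulas are standard; the finite-difference estimator is an illustrative choice, not necessarily the authors' technique):

```python
import numpy as np

def cloud_moments(x, w=None):
    """Zeroth moment (total mass), centroid, and second central moment of a
    tracer cloud, from particle positions x with optional mass weights w."""
    w = np.ones_like(x) if w is None else w
    m0 = w.sum()
    mean = (w * x).sum() / m0
    var = (w * (x - mean) ** 2).sum() / m0
    return m0, mean, var

def macrodispersivity(means, variances):
    """Apparent longitudinal macrodispersivity A_L = (1/2) d(sigma^2)/d(xbar),
    estimated by finite differences between successive cloud snapshots."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    return 0.5 * np.diff(variances) / np.diff(means)
```

For a cloud obeying the classical advection-dispersion equation (x̄ = vt, σ² = 2Dt), this estimator recovers the constant A_L = D/v.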
Monte Carlo path sampling approach to modeling aeolian sediment transport
NASA Astrophysics Data System (ADS)
Hardin, E. J.; Mitasova, H.; Mitas, L.
2011-12-01
Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, the research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the saltation flux feeds back on the wind that drives it, and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders of magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear, especially over complex landscapes.
Current computational approaches have limitations as well: single-grain models are mathematically simple but computationally intractable even with modern computing power, whereas cellular automata-based approaches are computationally efficient but evolve the system according to rules that are abstractions of the governing physics. This work presents the Green function solution to the continuity equations that govern sediment transport. The Green function solution is implemented using a path sampling approach whereby sand mass is represented as an ensemble of particles that evolve stochastically according to the Green function. In this approach, particle density is a particle representation that is equivalent to the field representation of elevation. Because aeolian transport is nonlinear, particles must be propagated according to their updated field representation with each iteration. This is achieved using a particle-in-cell technique. The path sampling approach offers a number of advantages. The integral form of the Green function solution makes it robust to discontinuities in complex terrains. Furthermore, this approach is spatially distributed, which can help elucidate the role of complex landscapes in aeolian transport. Finally, path sampling is highly parallelizable, making it ideal for execution on modern clusters and graphics processing units.
Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.
2008-10-31
Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.
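As a toy illustration of the deterministic side, a one-dimensional slab discrete-ordinates solver with source iteration fits in a few dozen lines. This sketch is far simpler than the framework described above (no ray-tracing hybridization, no energy group structure, illustrative parameters throughout), but it shows the space/angle discretization of the linear Boltzmann equation that such codes build on:

```python
import numpy as np

def sn_slab(width=4.0, cells=200, sigma_t=1.0, sigma_s=0.5, q=1.0, n_mu=8,
            tol=1e-8, max_it=500):
    """One-dimensional slab discrete-ordinates (S_N) sketch: diamond-difference
    spatial scheme, Gauss-Legendre angular quadrature, and source iteration on
    the scattering term. Vacuum boundaries, flat isotropic source q. All
    parameters are illustrative. Returns (scalar flux per cell, total leakage)."""
    mu, wt = np.polynomial.legendre.leggauss(n_mu)
    dx = width / cells
    phi = np.zeros(cells)
    leak = 0.0
    for _ in range(max_it):
        phi_new = np.zeros(cells)
        leak = 0.0
        src = 0.5 * (sigma_s * phi + q)         # isotropic angular emission density
        for m in range(n_mu):
            am, w = abs(mu[m]), wt[m]
            order = range(cells) if mu[m] > 0 else range(cells - 1, -1, -1)
            psi_in = 0.0                         # vacuum: no incoming angular flux
            for i in order:
                # diamond-difference cell solve for the outgoing edge flux
                psi_out = ((src[i] + psi_in * (am / dx - 0.5 * sigma_t))
                           / (am / dx + 0.5 * sigma_t))
                phi_new[i] += w * 0.5 * (psi_in + psi_out)   # cell-average flux
                psi_in = psi_out
            leak += w * am * psi_in              # outgoing current at the far face
        done = np.max(np.abs(phi_new - phi)) < tol * np.max(phi_new)
        phi = phi_new
        if done:
            break
    return phi, leak
```

At convergence the discrete scheme satisfies particle balance exactly: source = absorption + leakage.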
NASA Astrophysics Data System (ADS)
Griesheimer, D. P.; Gill, D. F.; Nease, B. R.; Sutton, T. M.; Stedry, M. H.; Dobreff, P. S.; Carpenter, D. C.; Trumbull, T. H.; Caro, E.; Joo, H.; Millman, D. L.
2014-06-01
MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10^-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information.
Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each tally. The tally system has been optimized to maintain a high level of efficiency, even as the number of edit regions becomes very large.
Monte Carlo simulation of kilovolt electron transport in solids
NASA Astrophysics Data System (ADS)
Martínez, J. D.; Mayol, R.; Salvat, F.
1990-03-01
A Monte Carlo procedure to simulate the penetration and energy loss of low-energy electron beams through solids is presented. Elastic collisions are described by using the method of partial waves for the screened Coulomb field of the nucleus. The atomic charge density is approximated by an analytical expression with parameters determined from the Dirac-Hartree-Fock-Slater self-consistent density obtained under Wigner-Seitz boundary conditions in order to account for solid-state effects; exchange effects are also accounted for by an energy-dependent local correction. Elastic differential cross sections are then easily computed by combining the WKB and Born approximations to evaluate the phase shifts. Inelastic collisions are treated on the basis of a generalized oscillator strength model which gives inelastic mean free paths and stopping powers in good agreement with experimental data. This scattering model is accurate in the energy range from a few hundred eV up to about 50 keV. The reliability of the simulation method is analyzed by comparing simulation results and experimental data from backscattering and transmission measurements.
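As a much simpler stand-in for the partial-wave elastic cross sections used in this work, the screened Rutherford cross section is commonly used in illustrative simulations, because its polar deflection cosine can be sampled by direct CDF inversion. A sketch (the screening parameter η below is illustrative, not derived from the paper's atomic model):

```python
import random

def sample_sr_costheta(eta, rng):
    """Sample the polar deflection cosine from the screened Rutherford
    elastic cross section  dsigma/dOmega ~ (1 - cos(theta) + 2*eta)^-2
    by direct inversion of its CDF. eta > 0 is the screening parameter."""
    xi = rng.random()
    return 1.0 - 2.0 * eta * xi / (1.0 + eta - xi)
```

Small η corresponds to weak screening and strongly forward-peaked scattering; as η grows, the distribution approaches isotropy.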
Agyingi, Ephraim O; Mobit, Paul N; Sandison, George A
2006-01-01
A Monte Carlo study of the energy response of an aluminium oxide (Al2O3) detector in kilovoltage and megavoltage photon beams relative to 60Co gamma rays has been performed using EGSnrc Monte Carlo simulations. The sensitive volume of the Al2O3 detector was simulated as a disc of diameter 2.85 mm and thickness 1 mm. The phantom material was water and the irradiation depth chosen was 2.0 cm in kilovoltage photon beams and 5.0 cm in megavoltage photon beams. The results show that the energy response of the Al2O3 detector is constant within 3% for photon beam energies in the energy range of 60Co gamma rays to 25 MV X rays. However, the Al2O3 detector shows an enhanced energy response for kilovoltage photon beams, which in the case of 50 kV X rays is 3.2 times higher than that for 60Co gamma rays. There is essentially no difference in the energy responses of LiF and Al2O3 detectors irradiated in megavoltage photon beams when these Al2O3 results are compared with literature data for LiF thermoluminescence detectors. However, the Al2O3 detector has a much higher enhanced response compared with LiF detectors in kilovoltage X-ray beams, more than twice as much for the case of 50 kV X rays. PMID:16046555
A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems
Keady, K P; Brantley, P
2010-03-04
Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation, in which the transport problem is solved using ensemble-averaged material properties. The most widely used deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling.
Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model) for deep penetration problems such as those examined in this paper. In this research, we investigate the application of a variant of the hybrid Monte Carlo-deterministic method proposed by Cooper and Larsen to global deep penetration problems involving binary stochastic media. To our knowledge, hybrid Monte Carlo-deterministic methods have not previously been applied to problems involving a stochastic medium. We investigate two approaches for computing the approximate deterministic estimate of the forward scalar flux distribution used to automatically generate the weight windows. The first approach uses the atomic mix approximation to the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. The second approach uses the Levermore-Pomraning model for the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. In both cases, we use Monte Carlo Algorithm B with weight windows automatically generated from the approximate forward scalar flux distribution to obtain the solution of the transport problem.
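The local material realization sampling at the heart of such algorithms can be illustrated for a Markovian binary mixture, where chord lengths in each material are exponentially distributed. The sketch below is not Algorithm A or B themselves: it uses purely absorbing materials and illustrative parameters, computing the ensemble-averaged uncollided transmission and contrasting it with the atomic-mix answer, which Jensen's inequality guarantees to be smaller:

```python
import math, random

def transmission_realization(L, lam, sigma, rng):
    """Uncollided transmission through one realization of a Markovian binary
    slab: material i occupies exponentially distributed chords of mean lam[i]
    and has total cross section sigma[i]. Purely absorbing, normal incidence."""
    tau, x = 0.0, 0.0
    # stationary starting material; chords are memoryless, so no mid-chord correction
    i = 0 if rng.random() < lam[0] / (lam[0] + lam[1]) else 1
    while x < L:
        seg = min(-lam[i] * math.log(1.0 - rng.random()), L - x)
        tau += sigma[i] * seg
        x += seg
        i = 1 - i                      # alternate materials at each interface
    return math.exp(-tau)

def ensemble_transmission(L, lam, sigma, n=5000, seed=2):
    rng = random.Random(seed)
    return sum(transmission_realization(L, lam, sigma, rng) for _ in range(n)) / n

def atomic_mix_transmission(L, lam, sigma):
    p0 = lam[0] / (lam[0] + lam[1])    # volume fraction of material 0
    return math.exp(-L * (p0 * sigma[0] + (1 - p0) * sigma[1]))
```

Because E[exp(-tau)] >= exp(-E[tau]), the ensemble average always exceeds the atomic-mix transmission, which is why atomic mix systematically underpredicts deep penetration in stochastic media.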
Araki, Fujio; Hanyu, Yuji; Fukuoka, Miyoko; Matsumoto, Kenji; Okumura, Masahiko; Oguchi, Hiroshi
2009-07-01
The purpose of this study is to calculate correction factors for plastic water (PW) and plastic water diagnostic-therapy (PWDT) phantoms in clinical photon and electron beam dosimetry using the EGSnrc Monte Carlo code system. A water-to-plastic ionization conversion factor k_pl for PW and PWDT was computed for several commonly used Farmer-type ionization chambers with different wall materials in the range of 4-18 MV photon beams. For electron beams, a depth-scaling factor c_pl and a chamber-dependent fluence correction factor h_pl for both phantoms were also calculated in combination with NACP-02 and Roos plane-parallel ionization chambers in the range of 4-18 MeV. The h_pl values for the plane-parallel chambers were evaluated from the electron fluence correction factor phi_pl^w and wall correction factors P_wall,w and P_wall,pl for a combination of water or plastic materials. The calculated k_pl and h_pl values were verified by comparison with the measured values. A set of k_pl values computed for the Farmer-type chambers was equal to unity within 0.5% for PW and PWDT in photon beams. The k_pl values also agreed within their combined uncertainty with the measured data. For electron beams, the c_pl values computed for PW and PWDT were from 0.998 to 1.000 and from 0.992 to 0.997, respectively, in the range of 4-18 MeV. The phi_pl^w values for PW and PWDT were from 0.998 to 1.001 and from 1.004 to 1.001, respectively, at a reference depth in the range of 4-18 MeV. The difference in P_wall between water and plastic materials for the plane-parallel chambers was 0.8% at a maximum. Finally, h_pl values evaluated for plastic materials were equal to unity within 0.6% for NACP-02 and Roos chambers. The h_pl values also agreed within their combined uncertainty with the measured data. The absorbed dose to water from ionization chamber measurements in PW and PWDT plastic materials corresponds to that in water within 1%.
Both phantoms can thus be used as a substitute for water for photon and electron dosimetry. PMID:19673198
Monte Carlo simulation of small electron fields collimated by the integrated photon MLC
NASA Astrophysics Data System (ADS)
Mihaljevic, Josip; Soukup, Martin; Dohm, Oliver; Alber, Markus
2011-02-01
In this study, a Monte Carlo (MC)-based beam model for an ELEKTA linear accelerator was established. The beam model is based on the EGSnrc Monte Carlo code, whereby electron beams with nominal energies of 10, 12 and 15 MeV were considered. For collimation of the electron beam, only the integrated photon multi-leaf-collimators (MLCs) were used. No additional secondary or tertiary add-ons like applicators, cutouts or dedicated electron MLCs were included. The source parameters of the initial electron beam were derived semi-automatically from measurements of depth-dose curves and lateral profiles in a water phantom. A routine to determine the initial electron energy spectra was developed which fits a Gaussian spectrum to the most prominent features of depth-dose curves. The comparisons of calculated and measured depth-dose curves demonstrated agreement within 1%/1 mm. The source divergence angle of initial electrons was fitted to lateral dose profiles beyond the range of electrons, where the imparted dose is mainly due to bremsstrahlung produced in the scattering foils. For accurate modelling of narrow beam segments, the influence of air density on dose calculation was studied. The air density for simulations was adjusted to local values (433 m above sea level) and compared with the standard air supplied by the ICRU data set. The results indicate that the air density is an influential parameter for dose calculations. Furthermore, the default value of the BEAMnrc parameter 'skin depth' for the boundary crossing algorithm was found to be inadequate for the modelling of small electron fields. A higher value for this parameter eliminated discrepancies, namely too-broad dose profiles and an increased dose along the central axis. The beam model was validated with measurements, whereby an agreement mostly within 3%/3 mm was found.
A Novel Implementation of Massively Parallel Three Dimensional Monte Carlo Radiation Transport
NASA Astrophysics Data System (ADS)
Robinson, P. B.; Peterson, J. D. L.
2005-12-01
The goal of our summer project was to implement the difference formulation for radiation transport into Cosmos++, a multidimensional, massively parallel, magnetohydrodynamics code for astrophysical applications (Peter Anninos - AX). The difference formulation is a new method for Symbolic Implicit Monte Carlo thermal transport (Brooks and Szöke - PAT). Formerly, simultaneous implementation of fully implicit Monte Carlo radiation transport in multiple dimensions on multiple processors had not been convincingly demonstrated. We found that a combination of the difference formulation and the inherent structure of Cosmos++ makes such an implementation both accurate and straightforward. We developed a "nearly nearest neighbor physics" technique to allow each processor to work independently, even with a fully implicit code. This technique, coupled with the increased accuracy of an implicit Monte Carlo solution and the efficiency of parallel computing systems, allows us to demonstrate the possibility of massively parallel thermal transport. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48
Data decomposition of Monte Carlo particle transport simulations via tally servers
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit; Smith, Kord
2013-11-01
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
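The decomposition into tracking processors and tally servers can be mimicked in miniature with threads and a queue (the real implementation uses MPI ranks on distributed memory; everything below, including the batch size used to amortize communication, is an illustrative toy rather than the OpenMC algorithm):

```python
import queue, random, threading

def run_tally_server_demo(n_trackers=4, n_particles=1000, n_bins=16, seed=3):
    """Toy tally-server decomposition: tracking workers simulate particles and
    ship (bin, score) batches to a single server thread that owns the tally
    array, so trackers never touch tally memory directly."""
    q = queue.Queue()
    tally = [0.0] * n_bins

    def server():
        while True:
            batch = q.get()
            if batch is None:          # shutdown sentinel
                return
            for b, s in batch:
                tally[b] += s          # only the server mutates the tally

    def tracker(rank):
        rng = random.Random(seed + rank)
        batch = []
        for _ in range(n_particles):
            batch.append((rng.randrange(n_bins), 1.0))   # fake score event
            if len(batch) >= 64:       # batch to amortize communication cost
                q.put(batch)
                batch = []
        if batch:
            q.put(batch)

    srv = threading.Thread(target=server)
    srv.start()
    workers = [threading.Thread(target=tracker, args=(r,)) for r in range(n_trackers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    q.put(None)
    srv.join()
    return tally
```

Since every score eventually reaches the server, the total tallied weight equals the number of simulated particles regardless of thread interleaving.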
Hypersensitive Transport in Photonic Crystals with Accidental Spatial Degeneracies
Makri, Eleana; Chabanov, Andrey; Vitebskiy, Ilya; Kottos, Tsampikos
2015-01-01
A localized defect mode in a photonic-layered structure develops nodal points. Placing a thin metallic layer at such a nodal point results in the phenomenon of induced transparency. We demonstrate that if this nodal point is not a point of symmetry, then even a tiny alteration of the permittivity in the vicinity of the defect suppresses the localized mode along with the resonant transmission; the layered structure becomes highly reflective within a broad frequency range. Applications of this hypersensitive transport for optical limiting and switching are discussed.
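The resonant transmission through such a layered structure with a defect can be reproduced with the standard 2x2 characteristic-matrix method. The sketch below uses an illustrative mirror-symmetric quarter-wave stack (made-up refractive indices, no metallic layer, so it is not the authors' structure); the symmetry creates a half-wave defect at the center that supports a localized mode with unit transmittance at the design wavelength:

```python
import cmath, math

def transmittance(stack, wavelength, n_in=1.0, n_out=1.0):
    """Normal-incidence transmittance of a layered stack via 2x2 characteristic
    matrices. stack = [(refractive index, thickness), ...], with thickness in
    the same length unit as wavelength. Lossless, non-magnetic layers."""
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0
    for n, d in stack:
        delta = 2 * math.pi * n * d / wavelength       # layer phase thickness
        c, s = cmath.cos(delta), cmath.sin(delta)
        a11, a12, a21, a22 = c, 1j * s / n, 1j * n * s, c
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)
    B = m11 + m12 * n_out
    C = m21 + m22 * n_out
    t = 2 * n_in / (n_in * B + C)
    return (n_out / n_in) * abs(t) ** 2

def defect_stack(n_h=2.3, n_l=1.5, lam0=600.0, m=8):
    """Mirror-symmetric quarter-wave stack (HL)^m (LH)^m: the two adjacent L
    layers at the center form a half-wave defect resonant at lam0 [nm]."""
    H = (n_h, lam0 / (4 * n_h))
    L = (n_l, lam0 / (4 * n_l))
    return [H, L] * m + [L, H] * m
```

At the design wavelength the layer matrices multiply out to the identity, giving unit transmittance, while nearby in-gap wavelengths are strongly reflected.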
Almberg, Sigrun Saur; Frengen, Jomar; Kylling, Arve; Lindmo, Tore
2012-01-15
Purpose: To individually benchmark the incident electron parameters in a Monte Carlo model of an Elekta linear accelerator operating at 6 and 15 MV. The main objective is to establish a simplified but still precise benchmarking procedure that allows accurate dose calculations of advanced treatment techniques. Methods: The EGSnrc Monte Carlo user codes BEAMnrc and DOSXYZnrc are used for photon beam simulations and dose calculations, respectively. A 5 × 5 cm^2 field is used to determine both the incident electron energy and the electron radial intensity. First, the electron energy is adjusted to match the calculated depth dose to the measured one. Second, the electron radial intensity is adjusted to make the calculated dose profile in the penumbrae region match the penumbrae measured by GafChromic EBT film. Finally, the mean angular spread of the incident electron beam is determined by matching calculated and measured cross-field profiles of large fields. The beam parameters are verified for various field sizes and shapes. Results: The penumbrae measurements revealed a non-circular electron radial intensity distribution for the 6 MV beam, while a circular electron radial intensity distribution could best describe the 15 MV beam. These electron radial intensity distributions, given as the standard deviation of a Gaussian distribution, were found to be 0.25 mm (in-plane) and 1.0 mm (cross-plane) for the 6 MV beam and 0.5 mm (both in-plane and cross-plane) for the 15 MV beam. Introducing a small mean angular spread of the incident electron beam has a considerable impact on the lateral dose profiles of large fields. The mean angular spread was found to be 0.7° and 0.5° for the 6 and 15 MV beams, respectively. Conclusions: The incident electron beam parameters in a Monte Carlo model of a linear accelerator could be precisely and independently determined by the benchmarking procedure proposed.
As the dose distribution in the penumbra region is insensitive to moderate changes in electron energy and angular spread, accurate penumbra measurements are feasible for benchmarking the electron radial intensity distribution. This parameter is particularly important for accurate dosimetry of MLC-shaped fields and small fields.
Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Meglinski, Igor
2012-09-01
In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used for generalization of the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute-unified-device-architecture-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, with results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests in a range of 4 to 35 s was achieved using single-precision computing, while double-precision computing for floating-point arithmetic operations provides higher accuracy.
PMID:23085901
MONTE CARLO PARTICLE TRANSPORT IN MEDIA WITH EXPONENTIALLY VARYING TIME-DEPENDENT CROSS-SECTIONS
F. BROWN; W. MARTIN
2001-02-01
A probability density function (PDF) and random sampling procedure for the distance to collision were derived for the case of exponentially varying cross-sections. Numerical testing indicates that both are correct. This new sampling procedure has direct application in a new method for Monte Carlo radiation transport, and may be generally useful for analyzing physical problems where the material cross-sections change very rapidly in an exponential manner.
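A sketch consistent with the derivation described: if the total cross section varies along the flight path as sigma(s) = sigma0*exp(lam*s), the optical depth is tau(s) = sigma0*(exp(lam*s) - 1)/lam, and inverting the CDF 1 - exp(-tau(s)) gives the sampled distance. For lam < 0 the total optical depth to infinity is finite and the particle may escape without colliding (variable names here are illustrative, not from the paper):

```python
import math, random

def sample_collision_distance(sigma0, lam, rng):
    """Sample the distance to collision when the total cross section varies as
    sigma(s) = sigma0 * exp(lam * s) along the flight path, by inverting the
    optical-depth CDF. Returns math.inf when a decaying (lam < 0) cross
    section lets the particle escape without colliding."""
    tau = -math.log(1.0 - rng.random())   # sampled optical depth, Exp(1)
    if lam == 0.0:
        return tau / sigma0               # constant cross section limit
    arg = 1.0 + lam * tau / sigma0
    if arg <= 0.0:                        # tau exceeds the total optical depth
        return math.inf
    return math.log(arg) / lam
```

For lam = 0 this reduces to the familiar exponential free-path sampling, and for lam > 0 the growing cross section shortens the sampled flights.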
A bounce-averaged Monte Carlo collision operator and ripple transport in a tokamak
Albert, J.M.; Boozer, A.H.
1986-09-01
A bounce-averaged Monte Carlo operator is presented that simulates bounce-averaged perturbative Lorentz pitch angle scattering of particles in toroidal plasmas, in particular a tokamak. In conjunction with bounce-averaged expressions for the deterministic motion, this operator allows a quick and inexpensive simulation on time scales long compared to a bounce time. An analytically tractable model of transport due to toroidal magnetic field ripple is described.
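For context, the non-bounce-averaged building block of such operators is the Monte Carlo Lorentz pitch-angle scattering step in the Boozer-Kuo-Petravic form, lambda' = lambda*(1 - nu*dt) +/- sqrt((1 - lambda^2)*nu*dt) with a random sign. A sketch (parameters illustrative; this is the underlying operator, not the bounce-averaged version of the paper) showing relaxation of an initially anisotropic ensemble toward isotropy, where <lambda> = 0 and <lambda^2> = 1/3:

```python
import math, random

def lorentz_step(pitch, nu_dt, rng):
    """One Monte Carlo Lorentz pitch-angle scattering step (Boozer-Kuo-Petravic
    form). pitch = v_parallel / v, and nu_dt = nu * dt must satisfy nu_dt << 1."""
    sign = 1.0 if rng.random() < 0.5 else -1.0
    new = pitch * (1.0 - nu_dt) + sign * math.sqrt(max(0.0, (1.0 - pitch * pitch) * nu_dt))
    return max(-1.0, min(1.0, new))       # clamp round-off excursions

def relax(n_particles=2000, n_steps=300, nu_dt=0.02, seed=4):
    """Scatter an ensemble started at pitch 0.9 for several collision times;
    returns the ensemble mean and variance of the pitch."""
    rng = random.Random(seed)
    pitches = [0.9] * n_particles
    for _ in range(n_steps):
        pitches = [lorentz_step(p, nu_dt, rng) for p in pitches]
    mean = sum(pitches) / n_particles
    var = sum(p * p for p in pitches) / n_particles - mean * mean
    return mean, var
```

The drift term damps the mean pitch as exp(-nu*t) while the random kick pumps the second moment toward its isotropic value of 1/3, which is the stationary point of the update.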
NASA Astrophysics Data System (ADS)
Doronin, Alex; Meglinski, Igor
2011-03-01
The advantages of using the Monte Carlo method for the simulation of radiative transfer in complex turbid random media, such as biological tissues, are well recognized. However, in most practical applications the wave nature of the probing optical radiation is ignored, and its propagation is considered in terms of neutral particles, so-called photon packets. Nevertheless, when interference, polarization or coherent effects of the scattering of optical/laser radiation constitute the fundamental principle of a particular optical technique, the wave nature of optical radiation must be considered and taken into account. In the current report we present the state of the art and the development prospects for applying the Monte Carlo method while taking into account the wave properties of optical radiation. We also introduce a novel Object-Oriented Programming (OOP) paradigm, accelerated by a Graphics Processing Unit, that provides an opportunity to escalate the performance of a standard Monte Carlo simulation by up to 100 times.
James C L Chow; Runqing Jiang
2012-01-01
This study examines variations of bone and mucosal doses with variable soft tissue and bone thicknesses, mimicking the oral or nasal cavity in skin radiation therapy. Dose calculations were performed with Monte Carlo simulations (EGSnrc-based codes) using clinical kilovoltage (kVp) photon and megavoltage (MeV) electron beams, and with the pencil-beam algorithm (Pinnacle3 treatment planning system) using the MeV electron beams.
Multidimensional electron-photon transport with standard discrete ordinates codes
Drumm, C.R.
1997-04-01
A method is described for generating electron cross sections that are compatible with standard discrete ordinates codes without modification. There are many advantages to using an established discrete ordinates solver, e.g. an immediately available adjoint capability. Coupled electron-photon transport capability is needed for many applications, including modeling the response of electronics components to space and man-made radiation environments. The cross sections have been successfully used in the DORT, TWODANT and TORT discrete ordinates codes. The cross sections are shown to provide accurate and efficient solutions to certain multidimensional electron-photon transport problems. The key to the method is a simultaneous solution of the continuous-slowing-down (CSD) portion and elastic-scattering portion of the scattering source by the Goudsmit-Saunderson theory. The resulting multigroup-Legendre cross sections are much smaller than the true scattering cross sections that they represent. Under certain conditions, the cross sections are guaranteed positive and converge with a low-order Legendre expansion.
Boltzmann equation and Monte Carlo studies of electron transport in resistive plate chambers
NASA Astrophysics Data System (ADS)
Bošnjaković, D.; Petrović, Z. Lj; White, R. D.; Dujko, S.
2014-10-01
A multi-term theory for solving the Boltzmann equation and a Monte Carlo simulation technique are used to investigate electron transport in Resistive Plate Chambers (RPCs), which are used for timing and triggering purposes in many high energy physics experiments at CERN and elsewhere. Using cross sections for electron scattering in C2H2F4, iso-C4H10 and SF6 as input to our Boltzmann and Monte Carlo codes, we have calculated electron transport data as a function of reduced electric field E/N in the various C2H2F4/iso-C4H10/SF6 gas mixtures used in RPCs in the ALICE, CMS and ATLAS experiments. Emphasis is placed upon the explicit and implicit effects of non-conservative collisions (e.g. electron attachment and/or ionization) on the drift and diffusion. Among many interesting and atypical phenomena induced by the explicit effects of non-conservative collisions, we note the existence of negative differential conductivity (NDC) in the bulk drift velocity component, with no indication of any NDC in the flux component, in the ALICE timing RPC system. We systematically study the origin and mechanisms of such phenomena, as well as the possible physical implications of their explicit inclusion in models of RPCs. Spatially resolved electron transport properties are calculated using a Monte Carlo simulation technique in order to understand these phenomena.
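The distinction between bulk and flux transport coefficients under non-conservative collisions can be illustrated with a toy swarm: if attachment preferentially removes particles at the trailing edge, the centre of mass drifts even though the mean instantaneous velocity is zero. A hypothetical sketch, unrelated to the actual RPC gas mixtures studied in the paper:

```python
import random

def bulk_drift_of_diffusing_swarm(n=5000, steps=100, seed=1):
    """Toy 1D swarm: pure diffusion (flux drift velocity <v> = 0) plus
    'attachment' that removes the trailing 2% of the swarm each step.
    Preferential loss at the back drags the centre of mass forward, so
    the bulk drift velocity d<x>/dt is positive while the flux component
    stays zero, a purely non-conservative effect."""
    rng = random.Random(seed)
    xs = [0.0] * n
    for _ in range(steps):
        xs = [x + rng.gauss(0.0, 1.0) for x in xs]  # symmetric diffusive steps
        xs.sort()
        xs = xs[len(xs) // 50:]                     # attach (remove) trailing 2%
    return sum(xs) / len(xs) / steps                # bulk drift velocity
```

Running this gives a strictly positive bulk drift velocity from a zero-flux-drift swarm, which is the kind of explicit non-conservative effect the abstract discusses (the real effect in RPC mixtures arises from energy-dependent attachment rather than a positional cut).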
Development of A Monte Carlo Radiation Transport Code System For HEDS: Status Update
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Gabriel, Tony A.; Miller, Thomas M.
2003-01-01
Modifications of the Monte Carlo radiation transport code HETC are underway to extend the code to include transport of energetic heavy ions, such as are found in the galactic cosmic ray spectrum in space. The new HETC code will be available for use in radiation shielding applications associated with missions, such as the proposed manned mission to Mars. In this work the current status of code modification is described. Methods used to develop the required nuclear reaction models, including total, elastic and nuclear breakup processes, and their associated databases are also presented. Finally, plans for future work on the extended HETC code system and for its validation are described.
NASA Astrophysics Data System (ADS)
Petoukhova, A. L.; van Wingerden, K.; Wiggenraad, R. G. J.; van de Vaart, P. J. M.; van Egmond, J.; Franken, E. M.; van Santvoort, J. P. C.
2010-08-01
This study presents data for verification of the iPlan RT Monte Carlo (MC) dose algorithm (BrainLAB, Feldkirchen, Germany). MC calculations were compared with pencil beam (PB) calculations and with verification measurements in phantoms containing lung-equivalent material, air cavities or bone-equivalent material, mimicking the head-and-neck and thorax regions, and in an Alderson anthropomorphic phantom. The dosimetric accuracy of the MC simulation of the micro-multileaf collimator (MLC) was tested in a homogeneous phantom. All measurements were performed using an ionization chamber and Kodak EDR2 films with Novalis 6 MV photon beams. Dose distributions measured with film and calculated with MC in the homogeneous phantom are in excellent agreement for oval, C and squiggle-shaped fields and for a clinical IMRT plan. For a field with completely closed MLC, MC is much closer to the experimental result than the PB calculations. For fields larger than the dimensions of the inhomogeneities, the MC calculations show excellent agreement (within 3%/1 mm) with the experimental data. MC calculations in the anthropomorphic phantom show good agreement with measurements for conformal beam plans and reasonable agreement for dynamic conformal arc and IMRT plans. For 6 head-and-neck and 15 lung patients a comparison of the MC plan with the PB plan was performed. Our results demonstrate that MC is able to accurately predict the dose in the presence of inhomogeneities typical of the head-and-neck and thorax regions with reasonable calculation times (5-20 min). Lateral electron transport was well reproduced in the MC calculations. We are planning to implement MC calculations for head-and-neck and lung cancer patients.
A Deterministic-Monte Carlo Hybrid Method for Time-Dependent Neutron Transport Problems
Justin Pounders; Farzad Rahnema
2001-10-01
A new deterministic-Monte Carlo hybrid solution technique is derived for the time-dependent transport equation. This approach is based on dividing the time domain into a number of coarse intervals and expanding the transport solution in a series of polynomials within each interval. The solution within each interval can be represented in terms of arbitrary source terms by using precomputed response functions. In the current work, the time-dependent response function computations are performed using the Monte Carlo method, while the global time-step march is performed deterministically. This work extends previous work by coupling the time-dependent expansions to space- and angle-dependent expansions to fully characterize the 1D transport response/solution. More generally, this approach represents an incremental extension of the steady-state coarse-mesh transport method that is based on global-local decompositions of large neutron transport problems. A homogeneous slab problem is discussed as an example of the new developments.
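The global-local idea, precomputing interval response functions (by Monte Carlo in the paper) and then marching deterministically across coarse time steps, can be sketched as repeated application of a response matrix. In this sketch the matrix entries are simply assumed known rather than computed stochastically:

```python
def march(response, state0, n_steps):
    """Deterministic global time-step march using a precomputed response
    matrix. In the hybrid method the entries of `response` would come
    from Monte Carlo response-function calculations; here the matrix is
    a known placeholder (an illustrative assumption)."""
    state = list(state0)
    for _ in range(n_steps):
        # apply the interval response: state_{n+1} = R @ state_n
        state = [sum(r * s for r, s in zip(row, state)) for row in response]
    return state
```

The expensive stochastic work is confined to building `response` once; the time march itself is then cheap linear algebra, which is the division of labor the abstract describes.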
Verhaegen, Frank
2002-05-21
High atomic number (Z) heterogeneities in tissue exposed to photons with energies of up to about 1 MeV can cause significant dose perturbations in their immediate vicinity. The recently released Monte Carlo (MC) code EGSnrc (Kawrakow 2000a Med. Phys. 27 485-98) was used to investigate the dose perturbation of high-Z heterogeneities in tissue in kilovolt (kV) and 60Co photon beams. Simulations were performed of measurements with a dedicated thin-window parallel-plate ion chamber near a high-Z interface in a 60Co photon beam (Nilsson et al 1992 Med. Phys. 19 1413-21). Good agreement was obtained between simulations and measurements for a detailed set of experiments in which the thickness of the ion chamber window, the thickness of the air gap between ion chamber and heterogeneity, the depth of the ion chamber in polystyrene and the material of the interface were varied. The EGSnrc code offers several improvements in the electron and photon production and transport algorithms over the older EGS4/PRESTA code (Nelson et al 1985 Stanford Linear Accelerator Center Report SLAC-265; Bielajew and Rogers 1987 Nucl. Instrum. Methods Phys. Res. B 18 165-81). The influence of the new EGSnrc features was investigated for simulations of a planar slab of a high-Z medium embedded in water and exposed to kV or 60Co photons. It was found that using the new electron transport algorithm in EGSnrc, including relativistic spin effects in elastic scattering, significantly affects the calculation of the dose distribution near high-Z interfaces. The simulations were found to be independent of the maximum fractional electron energy loss per step (ESTEPE), which was often a cause for concern in older EGS4 simulations. Concerning the new features of the photon transport algorithm, sampling of the photoelectron angular distribution was found to have a significant effect, whereas the effect of binding energies in Compton scattering was found to be negligible.
A slight dose artefact very close to high-Z interfaces exposed to kilovolt x-rays was discovered when atomic relaxation processes following excitation were omitted. PMID:12069087
A portable, parallel, object-oriented Monte Carlo neutron transport code in C++
Lee, S.R.; Cummings, J.C.; Nolen, S.D.
1997-05-01
We have developed a multigroup Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to, and runs in parallel on, a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.
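A k-eigenvalue Monte Carlo iteration of the kind MC++ performs can be illustrated in a deliberately simplified one-group, infinite-medium setting. All names and parameters below are illustrative, not the MC++ API:

```python
import random

def k_infinity_mc(nu=2.5, p_fission=0.4, generations=50, n0=5000, seed=2):
    """Toy one-group, infinite-medium k-eigenvalue power iteration.
    Each absorbed neutron causes fission with probability p_fission,
    releasing nu neutrons on average, so analytically k_inf = nu * p_fission.
    The source population is renormalized to n0 every generation, as in
    production eigenvalue codes."""
    rng = random.Random(seed)
    k_sum = 0.0
    for _ in range(generations):
        births = 0
        for _ in range(n0):
            if rng.random() < p_fission:
                # integer sampling of nu that preserves its mean
                births += int(nu) + (1 if rng.random() < nu - int(nu) else 0)
        k_sum += births / n0
    return k_sum / generations
```

With nu * p_fission = 1.0 the sampled k_inf fluctuates around unity; averaging over generations is the Monte Carlo analogue of deterministic power iteration.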
NASA Astrophysics Data System (ADS)
Sheikh-Bagheri, Daryoush
1999-12-01
BEAM is a general purpose EGS4 user code for simulating radiotherapy sources (Rogers et al. Med. Phys. 22, 503-524, 1995). The BEAM code is optimized by first minimizing unnecessary electron transport (a factor of 3 improvement in efficiency). The uniform bremsstrahlung splitting (UBS) technique is assessed and found to be 4 times more efficient. The Russian Roulette technique used in conjunction with UBS is substantially modified to make simulations an additional 2 times more efficient. Finally, a novel and robust technique, called selective bremsstrahlung splitting (SBS), is developed and shown to improve the efficiency of photon beam simulations by an additional factor of 3-4, depending on the end-point considered. The optimized BEAM code is benchmarked by comparing calculated and measured ionization distributions in water from the 10 and 20 MV photon beams of the NRCC linac. Unlike previous calculations, the incident e- energy is known independently to 1%, the entire extra-focal radiation is simulated and e- contamination is accounted for. Both beams use clinical jaws, whose dimensions are accurately measured, and which are set for a 10 × 10 cm² field at 110 cm. At both energies, the calculated and the measured values of ionization on the central axis in the buildup region agree within 1% of maximum dose. The agreement is well within statistics elsewhere on the central axis. Ionization profiles match within 1% of maximum dose, except at the geometrical edges of the field, where the disagreement is up to 5% of dose maximum. Causes for this discrepancy are discussed. The benchmarked BEAM code is then used to simulate beams from the major commercial medical linear accelerators. The off-axis factors are matched within statistical uncertainties, for most of the beams at the 1σ level and for all at the 2σ level. The calculated and measured depth-dose data agree within 1% (local dose), at about 1% (1σ level) statistics, at all depths past the depth of maximum dose for almost all beams. The calculated photon spectra and average energy distributions are compared to those published by Mohan et al. and decomposed into direct and scattered photon components.
Wagner, J.C.; Haghighat, A.; Petrovic, B.G.; Hanshaw, H.L.
1995-12-31
Monte Carlo calculations of pressure vessel (PV) neutron fluence have been performed to benchmark discrete ordinates (S_N) transport methods. These calculations, along with measured data at the ex-vessel cavity dosimeter, provide a means to examine various uncertainties associated with the S_N transport calculations. For the purpose of the PV fluence calculations, synthesized 3-D deterministic models are shown to produce results that are quite comparable to results from Monte Carlo methods, provided the two methods utilize the same multigroup cross section libraries. Differences between continuous-energy Monte Carlo and multigroup S_N calculations are analyzed and discussed.
Multigroup Boltzmann-Fokker-Planck electron-photon transport capability in MCNP™
Adams, K.J.; Hart, M.
1995-07-01
The MCNP code system has a robust multigroup transport capability which includes a multigroup Boltzmann-Fokker-Planck (MGBFP) transport algorithm to perform coupled electron-photon or other coupled charged and neutral particle transport in either a forward or adjoint mode. This paper will discuss this capability and compare code results with other transport codes.
Effective source term in the diffusion equation for photon transport in turbid media
Fantini, Sergio
The diffusion equation is used to describe photon transport in turbid media. We have performed a series of spectroscopy experiments on a number of uniform turbid media with different optical properties (absorption coefficient ...)
Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields.
Czarnecki, D; Zink, K
2013-04-21
The application of small photon fields in modern radiotherapy requires the determination of total scatter factors S_cp or field factors Ω(f_clin, f_msr; Q_clin, Q_msr) with high precision. Both quantities require knowledge of the field-size- and detector-dependent correction factor k(f_clin, f_msr; Q_clin, Q_msr). The aim of this study is the determination of the correction factor k(f_clin, f_msr; Q_clin, Q_msr) for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the mentioned correction factor for different small square field sizes. In addition, the mean water-to-air stopping power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor k(f_clin, f_msr; Q_clin, Q_msr); this perturbation may contribute up to 50% to the correction factor. Field-dependent changes in stopping-power ratios are negligible. The magnitude of k(f_clin, f_msr; Q_clin, Q_msr) is of the order of 1.2 at a field size of 1 × 1 cm² for the large-volume ion chamber PTW31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW31014 and PTW31016. For the diode detectors included in this study (PTW60016, PTW60017), the correction factor deviates by no more than 2% from unity for field sizes between 10 × 10 and 1 × 1 cm², but below this field size there is a steep decrease of k(f_clin, f_msr; Q_clin, Q_msr) below unity, i.e. a strong overestimation of dose.
Besides the field size and detector dependence, the results reveal a clear dependence of the correction factor on the accelerator geometry for field sizes below 1 × 1 cm², i.e. on the beam spot size of the primary electrons hitting the target. This effect is especially pronounced for the ionization chambers. In conclusion, comparing all detectors, the unshielded diode PTW60017 is highly recommended for small field dosimetry, since its correction factor k(f_clin, f_msr; Q_clin, Q_msr) is closest to unity in small fields and largely independent of the electron beam spot size. PMID:23514734
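In the small-field formalism underlying this abstract, the field factor is obtained from the ratio of detector readings in the clinical and machine-specific reference fields, multiplied by the correction factor. A one-line sketch of that arithmetic, with illustrative numbers:

```python
def field_factor(m_clin, m_msr, k_corr):
    """Field (total scatter) factor from detector readings:
    Omega = (M_clin / M_msr) * k, where k corrects the detector's
    field-size-dependent response relative to dose to water."""
    return (m_clin / m_msr) * k_corr
```

For example, a reading ratio of 0.5 combined with a correction factor of 1.05 yields a field factor of 0.525; without k the raw reading ratio would underestimate the true output by 5%.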
Wright, Tracy; Lye, Jessica E; Ramanathan, Ganesan; Harty, Peter D; Oliver, Chris; Webb, David V; Butler, Duncan J
2015-01-21
The Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) has established a method for ionisation chamber calibrations using megavoltage photon reference beams. The new method will reduce the calibration uncertainty compared to a (60)Co calibration combined with the TRS-398 energy correction factor. The calibration method employs a graphite calorimeter and a Monte Carlo (MC) conversion factor to convert the absolute dose to graphite to absorbed dose to water. EGSnrc is used to model the linac head and doses in the calorimeter and water phantom. The linac model is validated by comparing measured and modelled PDDs and profiles. The relative standard uncertainties in the calibration factors at the ARPANSA beam qualities were found to be 0.47% at 6 MV, 0.51% at 10 MV and 0.46% for the 18 MV beam. A comparison with the Bureau International des Poids et Mesures (BIPM) as part of the key comparison BIPM.RI(I)-K6 gave results of 0.9965(55), 0.9924(60) and 0.9932(59) for the 6, 10 and 18 MV beams, respectively, with all beams within 1σ of the participant average. The measured k_Q values for an NE2571 Farmer chamber were found to be lower than those in TRS-398 but are consistent with published measured and modelled values. Users can expect a shift in the calibration factor at user energies of an NE2571 chamber of between 0.4-1.1% across the range of calibration energies compared to the current calibration method. PMID:25565406
NASA Astrophysics Data System (ADS)
Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.
2013-03-01
The γ-index test has been commonly adopted to quantify the degree of agreement between a reference dose distribution and an evaluation dose distribution. Monte Carlo (MC) simulation has been widely used for radiotherapy dose calculation for both clinical and research purposes. The goal of this work is to investigate, both theoretically and experimentally, the impact of MC statistical fluctuation on the γ-index test when the fluctuation exists in the reference, the evaluation, or both dose distributions. To first-order approximation, we theoretically demonstrate in a simplified model that the statistical fluctuation tends to overestimate γ-index values when it exists in the reference dose distribution and underestimate γ-index values when it exists in the evaluation dose distribution, provided the original γ-index is relatively large compared with the statistical fluctuation. Our numerical experiments using realistic clinical photon radiation therapy cases show that (1) when performing a γ-index test between an MC reference dose and a non-MC evaluation dose, the average γ-index is overestimated and the gamma passing rate decreases with increasing statistical noise level in the reference dose; (2) when performing a γ-index test between a non-MC reference dose and an MC evaluation dose, the average γ-index is underestimated when the distributions are within the clinically relevant range, and the gamma passing rate increases with increasing statistical noise level in the evaluation dose; (3) when performing a γ-index test between an MC reference dose and an MC evaluation dose, the gamma passing rate is overestimated due to the statistical noise in the evaluation dose and underestimated due to the statistical noise in the reference dose. We conclude that the γ-index test should be used with caution when comparing dose distributions computed with MC simulation.
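For a 1D dose profile, the γ-index is the minimum combined dose-difference/distance-to-agreement metric over the evaluation grid (the Low et al. formulation). A straightforward sketch using a global dose tolerance; grid handling and interpolation refinements of clinical implementations are omitted:

```python
import math

def gamma_index(ref, eva, spacing, dose_tol, dist_tol):
    """1D gamma index of an evaluation dose profile against a reference
    profile. Doses are absolute, dose_tol in the same units; spacing
    and dist_tol in mm. A point passes the test when gamma <= 1."""
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(eva):
            dx = (i - j) * spacing
            # squared combined dose-difference / distance-to-agreement metric
            g2 = (dx / dist_tol) ** 2 + ((de - dr) / dose_tol) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas
```

Adding zero-mean noise to `ref` or `eva` shifts these minima in opposite directions, which is a quick way to reproduce the asymmetry the abstract analyzes.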
Ion beam transport in tissue-like media using the Monte Carlo code SHIELD-HIT.
Gudowska, Irena; Sobolevsky, Nikolai; Andreo, Pedro; Belkić, Dževad; Brahme, Anders
2004-05-21
The development of the Monte Carlo code SHIELD-HIT (heavy ion transport) for the simulation of the transport of protons and heavier ions in tissue-like media is described. The code SHIELD-HIT, a spin-off of SHIELD (available as RSICC CCC-667), extends the transport of hadron cascades from standard targets to that of ions in arbitrary tissue-like materials, taking into account ionization energy-loss straggling and multiple Coulomb scattering effects. The consistency of the results obtained with SHIELD-HIT has been verified against experimental data and other existing Monte Carlo codes (PTRAN, PETRA), as well as with deterministic models for ion transport, comparing depth distributions of energy deposition by protons, ¹²C and ²⁰Ne ions impinging on water. The SHIELD-HIT code yields distributions consistent with a proper treatment of nuclear inelastic collisions. Energy depositions up to and well beyond the Bragg peak due to nuclear fragmentations are well predicted. Satisfactory agreement is also found with experimental determinations of the number of fragments of a given type, as a function of depth in water, produced by ¹²C and ¹⁴N ions of 670 MeV u⁻¹, although less favourable agreement is observed for heavier projectiles such as ¹⁶O ions of the same energy. The calculated neutron spectra, differential in energy and angle, produced in a mimic of a Martian rock by irradiation with ¹²C ions of 290 MeV u⁻¹ also show good agreement with experimental data. It is concluded that a careful analysis of stopping power data for different tissues is necessary for radiation therapy applications, since an incorrect estimation of the position of the Bragg peak might lead to a significant deviation from the prescribed dose in small target volumes. The results presented in this study indicate the usefulness of the SHIELD-HIT code for Monte Carlo simulations in the field of light ion radiation therapy. PMID:15214534
Vos, Willem L.
X-ray Diffraction of Photonic Colloidal Single Crystals. Measurements of Bragg peaks of photonic colloidal single crystals by synchrotron small-angle X-ray scattering (SAXS) find that charge-stabilized colloids form face-centered cubic crystals at all densities up to 60 vol%
Light transport and lasing in complex photonic structures
NASA Astrophysics Data System (ADS)
Liew, Seng Fatt
Complex photonic structures refer to composite optical materials with a dielectric constant varying on length scales comparable to optical wavelengths. Light propagation in such heterogeneous composites differs greatly from that in homogeneous media due to scattering of light in all directions. Interference of these scattered light waves gives rise to many fascinating phenomena, and this has been a fast-growing research area, both for its fundamental physics and for its practical applications. In this thesis, we have investigated the optical properties of photonic structures with different degrees of order, ranging from periodic to random. The first part of this thesis consists of numerical studies of the photonic band gap (PBG) effect in structures from 1D to 3D. From these studies, we have observed that the PBG effect in a 1D photonic crystal is robust against uncorrelated disorder due to the preservation of long-range positional order. However, in higher dimensions, short-range positional order alone is sufficient to form PBGs in 2D and 3D photonic amorphous structures (PASs). We have identified several parameters, including dielectric filling fraction and degree of order, that can be tuned to create a broad isotropic PBG. The largest PBG is produced by dielectric networks due to local uniformity in their dielectric constant distribution. In addition, we also show that deterministic aperiodic structures (DASs) such as the golden-angle spiral and topological defect structures can support a wide PBG, and their optical resonances contain unexpected features compared to those in photonic crystals. Another growing research field based on complex photonic structures is the study of structural color in animals and plants. Previous studies have shown that non-iridescent color can be generated from PASs via single or double scattering. For a better understanding of the coloration mechanisms, we have measured the wavelength-dependent scattering length from biomimetic samples.
Our theoretical modeling and analysis explains why single scattering of light is dominant over multiple scattering in similar biological structures and is responsible for color generation. In collaboration with evolutionary biologists, we examine how closely related species and populations of butterflies have evolved their structural color. We have used artificial selection on a lab model butterfly to evolve violet color from an ultraviolet brown color. The same coloration mechanism is found in other blue/violet species that have evolved their color in nature, which implies the same evolutionary path for their nanostructures. While the absorption of light is ubiquitous in nature and in applications, the question remains how absorption modifies the transmission in random media. Therefore, we numerically study the effects of optical absorption on the highest-transmission states in a two-dimensional disordered waveguide. Our results show that strong absorption turns the highest-transmission channel in random media from diffusive to ballistic-like transport. Finally, we have demonstrated lasing mode selection in a nearly circular semiconductor microdisk laser by shaping the spatial profile of the pump beam. Despite strong mode overlap, selective pumping suppresses the competing lasing modes by either increasing their thresholds or reducing their power slopes. As a result, we can switch both the lasing frequency and the output direction. This powerful technique has potential application as an on-chip tunable light source.
A bone composition model for Monte Carlo x-ray transport simulations
Zhou Hu; Keall, Paul J.; Graves, Edward E.
2009-03-15
In the megavoltage energy range, although the mass attenuation coefficients of different bones do not vary by more than 10%, it has been estimated that a simple tissue model containing a single bone composition could cause errors of up to 10% in the calculated dose distribution. In the kilovoltage energy range, the variation in the mass attenuation coefficients of the bones is several times greater, and the expected error from applying this type of model could be as high as several hundred percent. Based on the observation that the calcium and phosphorus compositions of bones are strongly correlated with bone density, the authors propose an analytical formulation of bone composition for Monte Carlo computations. Elemental compositions and densities of homogeneous adult human bones from the literature were used as references, from which the calcium and phosphorus compositions were fitted as polynomial functions of bone density and assigned to model bones together with the averaged compositions of the other elements. To test this model using the Monte Carlo package DOSXYZnrc, a series of discrete model bones was generated from this formula and the radiation-tissue interaction cross-section data were calculated. The total energy released per unit mass of primary photons (terma) and Monte Carlo dose calculations performed using this model and the single-bone model were compared, demonstrating that at kilovoltage energies the discrepancy could be more than 100% in bone dose and 30% in soft tissue dose. Percentage terma computed with the model agrees with that calculated from the published compositions to within 2.2% for the kV spectra and 1.5% for the MV spectra studied. This new bone model for Monte Carlo dose calculation may be of particular importance for dosimetry of kilovoltage radiation beams, as well as for dosimetry of pediatric or animal subjects whose bone composition may differ substantially from that of adult human bones.
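The paper's idea, expressing the Ca and P weight fractions as polynomial functions of bone density with the remainder shared by the light elements, can be sketched as follows. The linear coefficients here are placeholders chosen only to give plausible fractions, NOT the fitted values from the paper:

```python
def bone_composition(density, coeff_ca=(-0.30, 0.34), coeff_p=(-0.14, 0.16)):
    """Illustrative sketch of a density-parameterized bone composition:
    Ca and P weight fractions as (here linear) polynomials in mass
    density (g/cm^3); the remaining mass is lumped into 'other'
    (H, C, N, O, ...), which a full model would subdivide."""
    w_ca = coeff_ca[0] + coeff_ca[1] * density
    w_p = coeff_p[0] + coeff_p[1] * density
    w_rest = 1.0 - w_ca - w_p   # remainder shared by the light elements
    return {"Ca": w_ca, "P": w_p, "other": w_rest}
```

Given such a formula, a Monte Carlo preprocessor can generate a discrete material (composition plus cross-section data) for each density bin of a CT-derived phantom instead of assigning one fixed bone mixture everywhere.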
Photon energy-modulated radiotherapy: Monte Carlo simulation and treatment planning study
Park, Jong Min; Kim, Jung-in; Heon Choi, Chang; Chie, Eui Kyu; Kim, Il Han; Ye, Sung-Joon
2012-03-15
Purpose: To demonstrate the feasibility of photon energy-modulated radiotherapy during beam-on time. Methods: A cylindrical device made of aluminum was conceptually proposed as an energy modulator. The frame of the device was connected with 20 tubes through which mercury could be injected or drained to adjust the thickness of mercury along the beam axis. In Monte Carlo (MC) simulations, the flattening filter of a 6 or 10 MV linac was replaced with the device. The thickness of mercury inside the device was varied from 0 to 40 mm at field sizes of 5 × 5 cm² (FS5), 10 × 10 cm² (FS10), and 20 × 20 cm² (FS20). At least 5 billion histories were followed for each simulation to create phase space files at 100 cm source-to-surface distance (SSD). In-water beam data were acquired by additional MC simulations using the above phase space files. A treatment planning system (TPS) was commissioned to generate a virtual machine using the MC-generated beam data. Intensity-modulated radiation therapy (IMRT) plans for six clinical cases were generated using conventional 6 MV, 6 MV flattening-filter-free, and energy-modulated photon beams of the virtual machine. Results: With increasing thickness of mercury, the percentage depth doses (PDDs) of modulated 6 and 10 MV beams beyond the depth of dose maximum continuously increased. The increase in PDD at depths of 10 and 20 cm for modulated 6 MV was 4.8% and 5.2% at FS5, 3.9% and 5.0% at FS10, and 3.2%-4.9% at FS20 as the thickness of mercury increased from 0 to 20 mm. The corresponding increase for modulated 10 MV was 4.5% and 5.0% at FS5, 3.8% and 4.7% at FS10, and 4.1% and 4.8% at FS20 as the thickness of mercury increased from 0 to 25 mm. The outputs of modulated 6 MV with 20 mm mercury and modulated 10 MV with 25 mm mercury were reduced to 30% and 56% of the conventional linac output, respectively.
The energy-modulated IMRT plans had lower integral doses than the 6 MV IMRT or 6 MV flattening-filter-free plans for tumors located in the periphery, while maintaining similar quality of target coverage, homogeneity, and conformity. Conclusions: The MC study of the designed energy modulator demonstrated the feasibility of energy-modulated photon beams available during beam-on time. The planning study showed an advantage of energy- and intensity-modulated radiotherapy in terms of integral dose without sacrificing any quality of the IMRT plan.
Monte Carlo Simulation Model of Energetic Proton Transport through Self-Generated Alfvén Waves
Afanasiev, A.; Vainio, R.
2013-08-15
A new Monte Carlo simulation model for the transport of energetic protons through self-generated Alfvén waves is presented. The key point of the model is that, unlike previous ones, it employs the full form (i.e., includes the dependence on the pitch-angle cosine) of the resonance condition governing the scattering of particles off Alfvén waves, the process that approximates the wave-particle interactions in the framework of quasilinear theory. This allows us to model the wave-particle interactions in weak turbulence more adequately, in particular, to implement anisotropic particle scattering instead of the isotropic scattering on which the previous Monte Carlo models were based. The developed model is applied to study the transport of flare-accelerated protons in an open magnetic flux tube. Simulation results for the transport of monoenergetic protons through the spectrum of Alfvén waves reveal that anisotropic scattering leads to spatially more distributed wave growth than isotropic scattering. This result can have important implications for diffusive shock acceleration, e.g., it can affect the scattering mean free path of the accelerated particles in, and the size of, the foreshock region.
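The contrast between the isotropic scattering of earlier models and the anisotropic, pitch-angle-dependent scattering described above can be illustrated with a minimal sketch. The specific μ-dependence of the scattering frequency and the diffusion-style step size below are illustrative assumptions, not the paper's actual resonance condition:

```python
import math
import random

def isotropic_scatter(mu, dt, nu0):
    # Isotropic model: with probability nu0*dt the pitch-angle cosine is
    # redrawn uniformly on [-1, 1], erasing all memory of the old direction.
    if random.random() < nu0 * dt:
        return random.uniform(-1.0, 1.0)
    return mu

def anisotropic_scatter(mu, dt, nu0):
    # Anisotropic model (illustrative): the scattering frequency depends on
    # |mu| (assumed form), and each event is a small-angle kick whose size
    # follows a pitch-angle diffusion coefficient D ~ nu*(1 - mu^2)/2.
    nu = nu0 * abs(mu)
    dmu = math.sqrt(max(2.0 * nu * dt * (1.0 - mu * mu), 0.0))
    mu_new = mu + random.choice((-1.0, 1.0)) * dmu
    return max(-1.0, min(1.0, mu_new))
```

Under the anisotropic rule, particles near μ = 0 scatter rarely, so they stream farther before isotropizing, which is one way to picture the more spatially distributed wave growth reported above.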
Particle, momentum and energy conserving Monte Carlo model for ion transport
NASA Astrophysics Data System (ADS)
Christiansen, Jes
2005-10-01
A Monte Carlo model based on guiding centre drift motion and Coulomb collisions has been developed to study collisional ion transport in an axisymmetric tokamak equilibrium. The model features momentum conservation for test particles colliding with the background plasma. Calculations have shown an 8% particle loss rate during an energy confinement time. The Monte Carlo model has been extended with a recycling scheme in order to conserve particles and plasma energy. Recycling of lost particles occurs as neutrals from either a limiter or the SOL. Conservation of energy is enforced by an ad hoc prescription which assigns the lost particle energy E to the next particle in increments of E/100000 per step of motion. This prescription is meant to simulate energy gained by the electrons from the axial electric field. Extensive calculations are made to study the resulting density and temperature profiles, which are accumulated from the Monte Carlo test particle motions. The profiles will be compared with the assumed profiles of the background plasma to establish self-consistency.
A Modified Monte Carlo Method for Carrier Transport in Germanium, Free of Isotropic Rates
NASA Astrophysics Data System (ADS)
Sundqvist, Kyle
2010-03-01
We present a new method for carrier transport simulation, relevant for high-purity germanium <100> at a temperature of 40 mK. In this system, the scattering of electrons and holes is dominated by spontaneous phonon emission. Free carriers are always out of equilibrium with the lattice. We must also properly account for directional effects due to band structure, but there are many cautions in the literature about treating germanium in particular. These objections arise because the germanium electron system is anisotropic to an extreme degree, while standard Monte Carlo algorithms maintain a reliance on isotropic, integrated rates. We re-examine Fermi's Golden Rule to produce a Monte Carlo method free of isotropic rates. Traditional Monte Carlo codes implement particle scattering based on an isotropically averaged rate, followed by a separate selection of the particle's final state via a momentum-dependent probability. In our method, the kernel of Fermi's Golden Rule produces analytical, bivariate rates which allow for the simultaneous choice of scattering event and final state. Energy and momentum are automatically conserved. We compare our results to experimental data.
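One generic way to choose the scattering event and the final state in a single draw, rather than via an isotropically averaged rate followed by final-state selection, is a null-collision (majorant) scheme. The sketch below is a hedged illustration of that general idea, not the bivariate-rate method of the paper; `k_mu` is an assumed azimuthally symmetric rate density in the final-direction cosine:

```python
import random

def null_collision_step(k_mu, k_max, rng):
    # Jointly sample (time to next scatter, final-direction cosine mu).
    # Event times are proposed from the constant majorant total rate 2*k_max
    # (k_max bounds the rate density k_mu on [-1, 1]), directions are proposed
    # uniformly, and a proposal is accepted with probability k_mu(mu)/k_max.
    # Rejected proposals are "null" collisions that leave the state unchanged,
    # so accepted events need no separate final-state selection step.
    t = 0.0
    while True:
        t += rng.expovariate(2.0 * k_max)
        mu = rng.uniform(-1.0, 1.0)
        if rng.random() < k_mu(mu) / k_max:
            return t, mu
```

For a rate density k_mu(μ) = 1 + μ, the accepted directions follow p(μ) = (1 + μ)/2, with mean μ of 1/3, and the accepted event times are exponential with the true total rate of 2, illustrating that the joint draw reproduces both marginals correctly.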
Monte Carlo simulations of electron transport for electron beam-induced deposition of nanostructures
NASA Astrophysics Data System (ADS)
Salvat-Pujol, Francesc; Jeschke, Harald O.; Valenti, Roser
2013-03-01
Tungsten hexacarbonyl, W(CO)6, is a particularly interesting precursor molecule for electron beam-induced deposition of nanoparticles, since it yields deposits whose electronic properties can be tuned from metallic to insulating. However, the growth of tungsten nanostructures poses experimental difficulties: the metal content of the nanostructure is variable. Furthermore, fluctuations in the tungsten content of the deposits seem to trigger the growth of the nanostructure. Monte Carlo simulations of electron transport have been carried out with the radiation-transport code PENELOPE in order to study the charge and energy deposition of the electron beam in the deposit and in the substrate. These simulations allow us to examine the conditions under which nanostructure growth takes place and to highlight the relevant parameters in the process.
Monte Carlo Neutrino Transport Through Remnant Disks from Neutron Star Mergers
Richers, S; O'Connor, Evan; Fernandez, Rodrigo; Ott, Christian
2015-01-01
We present Sedonu, a new open-source, steady-state, special relativistic Monte Carlo (MC) neutrino transport code, available at bitbucket.org/srichers/sedonu. The code calculates the energy- and angle-dependent neutrino distribution function on fluid backgrounds of any number of spatial dimensions, calculates the rates of change of fluid internal energy and electron fraction, and solves for the equilibrium fluid temperature and electron fraction. We apply this method to snapshots from two-dimensional simulations of accretion disks left behind by binary neutron star mergers, varying the input physics and comparing to the results obtained with a leakage scheme for the case of a central black hole and a central hypermassive neutron star. Neutrinos are guided away from the densest regions of the disk and escape preferentially around 45 degrees from the equatorial plane. Neutrino heating is strengthened by MC transport a few scale heights above the disk midplane near the innermost stable circular orbit, potentially...
Monte Carlo Simulation of Electron Transport in 4H- and 6H-SiC
Sun, C. C.; You, A. H.; Wong, E. K.
2010-07-07
The Monte Carlo (MC) simulation of electron transport properties in the high electric field region of 4H- and 6H-SiC is presented. The MC model includes two non-parabolic conduction bands. Based on the material parameters, the electron scattering rates, including polar optical phonon scattering, optical phonon scattering, and acoustic phonon scattering, are evaluated. The electron drift velocity, energy, and free flight time are simulated as a function of applied electric field at an impurity concentration of 1×10¹⁸ cm⁻³ at room temperature. The simulated dependence of drift velocity on electric field is in good agreement with experimental results found in the literature. The saturation velocities for both polytypes are close, but the scattering rates are much more pronounced for 6H-SiC. Our simulation model clearly shows complete electron transport properties in 4H- and 6H-SiC.
Chow, James C. L.; Leung, Michael K. K.; Lindsay, Patricia E.; Jaffray, David A.
2010-10-15
Purpose: The impact of photon beam energy and tissue heterogeneities on dose distributions and dosimetric characteristics such as point dose, mean dose, and maximum dose was investigated in the context of small-animal irradiation using Monte Carlo simulations based on the EGSnrc code. Methods: Three Monte Carlo mouse phantoms, namely heterogeneous, homogeneous, and bone homogeneous, were generated from the same mouse computed tomography image set by overriding the tissue type of none of the voxels (heterogeneous), all voxels (homogeneous), or only the bone voxels (bone homogeneous) to that of soft tissue. Phase space files of the 100 and 225 kVp photon beams based on a small-animal irradiator (XRad225Cx, Precision X-Ray Inc., North Branford, CT) were generated using BEAMnrc. A 360° photon arc was simulated, and three-dimensional (3D) dose calculations were carried out using the DOSXYZnrc code through DOSCTP in the above three phantoms. For comparison, the 3D dose distributions, dose profiles, mean, maximum, and point doses at different locations such as the isocenter, lung, rib, and spine were determined in the three phantoms. Results: The dose gradient resulting from the 225 kVp arc was found to be steeper than for the 100 kVp arc. The mean dose was found to be 1.29 and 1.14 times higher for the heterogeneous phantom when compared to the mean dose in the homogeneous phantom using the 100 and 225 kVp photon arcs, respectively. The bone doses (rib and spine) in the heterogeneous mouse phantom were about five (100 kVp) and three (225 kVp) times higher when compared to the homogeneous phantom. However, the lung dose did not vary significantly among the heterogeneous, homogeneous, and bone homogeneous phantoms for either the 225 kVp or the 100 kVp photon beams. Conclusions: A significant bone dose enhancement was found when the 100 and 225 kVp photon beams were used in small-animal irradiation.
This dosimetric effect, due to the presence of the bone heterogeneity, was more significant than that due to the lung heterogeneity. Hence, for kV photon energies in the range used in small-animal irradiation, the increase of the mean and bone dose due to the photoelectric effect could be a dosimetric concern.
NASA Astrophysics Data System (ADS)
Müller, Florian; Jenny, Patrick; Meyer, Daniel
2014-05-01
To a large extent, the flow and transport behaviour within a subsurface reservoir is governed by its permeability. Typically, permeability measurements of a subsurface reservoir are affordable at only a few spatial locations. Due to this lack of information, permeability fields are preferably described by stochastic models rather than deterministically. A stochastic method is needed to assess how the input uncertainty in permeability propagates through the system of partial differential equations describing flow and transport to the output quantity of interest. Monte Carlo (MC) is an established method for quantifying uncertainty arising in subsurface flow and transport problems. Although robust and easy to implement, MC suffers from slow statistical convergence. To reduce the computational cost of MC, the multilevel Monte Carlo (MLMC) method was introduced. Instead of sampling a random output quantity of interest on the finest affordable grid, as in the case of MC, MLMC operates on a hierarchy of grids. If parts of the sampling process are successfully delegated to coarser grids, where sampling is inexpensive, MLMC can dramatically outperform MC. MLMC has been shown to accelerate MC for several applications, including integration problems, stochastic ordinary differential equations in finance, and stochastic elliptic and hyperbolic partial differential equations. In this study, MLMC is combined with a reservoir simulator to assess uncertain two-phase (water/oil) flow and transport within a random permeability field. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. It is found that MLMC yields significant speed-ups with respect to MC while providing results of essentially equal accuracy. This finding holds true not only for one specific Gaussian logarithmic permeability model but for a range of correlation lengths and variances.
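The telescoping idea behind MLMC, E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], can be sketched on a toy problem. The midpoint-rule integral below is only a stand-in for a flow solve on grid level l, and the function names and sample counts are illustrative assumptions:

```python
import random

def level_estimate(sample_fn, level, n_samples, rng):
    # Average of the coupled difference (P_l - P_{l-1}) over n_samples draws;
    # at level 0 the estimator is just P_0. The SAME random input is fed to
    # the fine and coarse levels, which is what makes the correction variance
    # small and lets most samples live on cheap coarse grids.
    total = 0.0
    for _ in range(n_samples):
        omega = rng.random()  # one random "permeability" realization
        fine = sample_fn(omega, level)
        coarse = sample_fn(omega, level - 1) if level > 0 else 0.0
        total += fine - coarse
    return total / n_samples

def mlmc(sample_fn, max_level, samples_per_level, seed=0):
    # Telescoping MLMC estimator: sum the level-0 mean and the corrections.
    rng = random.Random(seed)
    return sum(level_estimate(sample_fn, l, samples_per_level[l], rng)
               for l in range(max_level + 1))

def toy_quantity(omega, level):
    # Midpoint-rule integral of x -> omega * x**2 on [0, 1] with 2**level
    # cells, standing in for a PDE solve on grid level `level`.
    n = 2 ** level
    h = 1.0 / n
    return omega * sum(((i + 0.5) * h) ** 2 for i in range(n)) * h
```

With omega uniform on [0, 1), the exact answer is E[omega]/3 = 1/6; the estimator approaches it while spending most of its samples on the cheap level-0 grid.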
An implicit Monte Carlo method for simulation of impurity transport in divertor plasma
Suzuki, Akiko; Hayashi, Nobuhiko; Hatayama, Akiyoshi
1997-02-01
A new "implicit" Monte Carlo (IMC) method has been developed to simulate ionization and recombination processes of impurity ions in divertor plasmas. The IMC method takes into account many ionization and recombination processes during a time step Δt. The time step is not limited by the condition Δt ≪ τ_min (τ_min: the minimum characteristic time of atomic processes), which must be adopted in conventional Monte Carlo methods. We incorporate this method into a one-dimensional impurity transport model. In this transport calculation, impurity ions are followed with a time step about 10 times larger than that used in conventional methods. The average charge state of impurities, ⟨Z⟩, and the radiative cooling rate, L(T_e), are calculated at the electron temperature T_e in divertor plasmas. These results are compared with those obtained from the simple noncoronal model. 10 refs., 7 figs.
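The implicit scheme itself is not spelled out in the abstract; as a hedged illustration of how many ionization/recombination events of a single impurity ion can be followed within one large transport step Δt, an event-chain (Gillespie-style) substep loop can be sketched. The rate functions and charge-state bounds are assumptions for the example:

```python
import random

def evolve_charge_state(z, dt, ion_rate, rec_rate, zmax, rng):
    # Advance one impurity ion's charge state z over a large transport step
    # dt, allowing many atomic events per step. Waiting times between events
    # are exponential in the total rate, so dt need not resolve the fastest
    # atomic timescale explicitly.
    t = 0.0
    while True:
        up = ion_rate(z) if z < zmax else 0.0      # ionization z -> z+1
        down = rec_rate(z) if z > 0 else 0.0       # recombination z -> z-1
        total = up + down
        if total <= 0.0:
            return z
        t += rng.expovariate(total)                # time to next atomic event
        if t > dt:
            return z
        z += 1 if rng.random() < up / total else -1
```

Averaging the final charge states of many such ions would give an estimate of ⟨Z⟩ at the local electron temperature, under whatever rate coefficients are supplied.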
Schach Von Wittenau, Alexis E. (Livermore, CA)
2003-01-01
A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
Monte-Carlo simulation of complex vapor-transport systems for RIB applications
NASA Astrophysics Data System (ADS)
Zhang, Y.; Alton, G. D.
2005-12-01
In order to minimize decay losses of short-lived radioactive species at ISOL-based RIB facilities, effusive-flow particle transit times between target and ion source must be as short as practically achievable. A Monte-Carlo code has been developed for simulating the effusive flow of neutral particles through vapor-transport systems, independent of the materials of construction. The code provides the average distance traveled and the transit-time information associated with the passage of individual particles through a system. It offers a cost-effective and accurate means of arriving at vapor-transport system designs. In this report, the code is described, along with results obtained by using it to evaluate several prototype vapor-transport systems under specular-reflection, cosine, and isotropic particle re-emission models (about the normal to the surface) following adsorption. Simulation results obtained with an isotropic distribution are in close agreement with experimental measurements of the properties of prototype vapor-transport systems fabricated at the Holifield Radioactive Ion Beam Facility.
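Of the re-emission models mentioned, the cosine (Lambert) law about the surface normal admits a simple direct-sampling rule, cos θ = √u with u uniform on [0, 1). A minimal sketch of one emission draw:

```python
import math
import random

def cosine_emission(rng):
    # Sample a unit direction from the cosine (Lambert) law about the surface
    # normal: p(theta) proportional to cos(theta)*sin(theta), uniform azimuth.
    cos_t = math.sqrt(rng.random())            # inverse-CDF: cos(theta) = sqrt(u)
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = 2.0 * math.pi * rng.random()
    # Direction in the local frame with the surface normal along +z.
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
```

A quick check on the sampler: the mean of cos θ under the cosine law is 2/3, so the average z component of many draws should approach that value.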
NASA Technical Reports Server (NTRS)
Norman, Ryan B.; Badavi, Francis F.; Blattnig, Steve R.; Atwell, William
2011-01-01
A deterministic suite of radiation transport codes, developed at NASA Langley Research Center (LaRC), which describe the transport of electrons, photons, protons, and heavy ions in condensed media, is used to simulate exposures from spectral distributions typical of electrons, protons, and carbon-oxygen-sulfur (C-O-S) trapped heavy ions in the Jovian radiation environment. The particle transport suite consists of a coupled electron and photon deterministic transport algorithm (CEPTRN) and a coupled light particle and heavy ion deterministic transport algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means for the spacecraft design community to rapidly perform the numerous repetitive calculations essential for electron, proton, and heavy ion radiation exposure assessments in complex space structures. In this paper, the radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the fluence energy spectra provided by JPL for 105 days at Europa are used to produce the corresponding dose-depth curve in silicon behind an aluminum shield of 100 mils (≈0.7 g/cm²). The transport suite can also accept ray-traced thickness files from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point. In that regard, using a low-fidelity CAD model of the Galileo probe, the transport suite was verified by comparison with Monte Carlo (MC) simulations for orbits JOI-J35 of the Galileo extended mission (1996-2001).
For the upcoming EJSM mission with a potential launch date of 2020, the transport suite is used to compute the traditional aluminum-silicon dose-depth calculation as a standard shield-target combination output, as well as the shielding response of high-charge (Z) shields such as tantalum (Ta). Finally, a shield optimization algorithm is used to guide the instrument designer in the choice of graded-Z shield analysis.
Neoclassical electron transport calculation by using δf Monte Carlo method
Matsuoka, Seikichi [Graduate University for Advanced Studies (SOKENDAI), Toki 509-5292 (Japan); Satake, Shinsuke; Yokoyama, Masayuki [Graduate University for Advanced Studies (SOKENDAI), Toki 509-5292 (Japan); National Institute for Fusion Science, Toki 509-5292 (Japan); Wakasa, Arimitsu; Murakami, Sadayoshi [Department of Nuclear Engineering, Kyoto University, Kyoto 606-8501 (Japan)
2011-03-15
High electron temperature plasmas with a steep temperature gradient in the core are obtained in recent experiments in the Large Helical Device [A. Komori et al., Fusion Sci. Technol. 58, 1 (2010)]. Such plasmas are called core electron-root confinement (CERC) and have attracted much attention. In typical CERC plasmas, the radial electric field shows a transition from a small negative value (ion root) to a large positive value (electron root), and the radial electric field in helical plasmas is determined dominantly by the ambipolar condition of the neoclassical particle flux. To investigate the neoclassical transport of such plasmas precisely, the numerical neoclassical transport code FORTEC-3D [S. Satake et al., J. Plasma Fusion Res. 1, 002 (2006)], which solves the drift kinetic equation based on the δf Monte Carlo method and has so far been applied to ion species, is extended to treat electron neoclassical transport. To check the validity of the new FORTEC-3D code, benchmark calculations are carried out against the GSRAKE [C. D. Beidler et al., Plasma Phys. Controlled Fusion 43, 1131 (2001)] and DCOM/NNW [A. Wakasa et al., Jpn. J. Appl. Phys. 46, 1157 (2007)] codes, which calculate neoclassical transport using certain approximations. The benchmark shows good agreement among the FORTEC-3D, GSRAKE, and DCOM/NNW codes for a low temperature (T_e(0) = 1.0 keV) plasma. It is also confirmed that the finite-orbit-width effect included in FORTEC-3D has little effect on neoclassical transport, even at low collisionality, if the plasma is at low temperature. However, for a higher temperature (5 keV at the core) plasma, significant differences arise among FORTEC-3D, GSRAKE, and DCOM/NNW. These results show the importance of evaluating electron neoclassical transport by solving the kinetic equation rigorously, including the effect of finite radial drift, for high electron temperature plasmas.
Randeniya, S. D.; Taddei, P. J.; Newhauser, W. D.; Yepes, P.
2010-01-01
Monte Carlo simulations of an ocular treatment beam-line consisting of a nozzle and a water phantom were carried out using MCNPX, GEANT4, and FLUKA to compare the dosimetric accuracy and the simulation efficiency of the codes. Simulated central-axis percent depth-dose profiles and cross-field dose profiles were compared with experimentally measured data. Simulation speed was evaluated by comparing the number of proton histories simulated per second using each code. The results indicate that all the Monte Carlo transport codes calculate sufficiently accurate proton dose distributions in the eye and that the FLUKA transport code has the highest simulation efficiency. PMID:20865141
Wang, Lilie L. W.; Klein, David; Beddar, A. Sam
2010-10-15
Purpose: Using Monte Carlo simulations, the authors investigated the energy and angular dependence of the response of plastic scintillation detectors (PSDs) in photon beams. Methods: Three PSDs were modeled in this study: a plastic scintillator (BC-400) and a scintillating fiber (BCF-12), both attached to a plastic-core optical fiber stem, and a plastic scintillator (BC-400) attached to an air-core optical fiber stem with a silica tube coated with silver. The authors then calculated, with low statistical uncertainty, the energy and angular dependences of the PSDs' responses in a water phantom. For energy dependence, the response of the detectors is calculated as the detector dose per unit water dose. The perturbation caused by the optical fiber stem connected to the PSD to guide the optical light to a photodetector was studied in simulations using different optical fiber materials. Results: For the energy dependence of the PSDs in photon beams, the PSDs with plastic-core fiber have excellent energy independence, within about 0.5%, at photon energies ranging from 300 keV (monoenergetic) to 18 MV (linac beam). The PSD with an air-core optical fiber with a silica tube also has good energy independence, within 1%, in the same photon energy range. For the angular dependence, the relative response of all three modeled PSDs is within 2% for all angles in a 6 MV photon beam. This is also true in a 300 keV monoenergetic photon beam for the PSDs with plastic-core fiber. For the PSD with an air-core fiber with a silica tube in the 300 keV beam, the relative response varies within 1% for most angles, except when the fiber stem points directly at the radiation source, in which case the PSD may over-respond by more than 10%. Conclusions: At the ±1% level, no beam energy correction is necessary for the response of any of the three PSDs modeled in this study over photon energies ranging from 200 keV (monoenergetic) to 18 MV (linac beam).
The PSD would be even closer to water equivalent if there is a silica tube around the sensitive volume. The angular dependence of the response of the three PSDs in a 6 MV photon beam is not of concern at the 2% level.
3D electro-thermal Monte Carlo study of transport in confined silicon devices
NASA Astrophysics Data System (ADS)
Mohamed, Mohamed Y.
The simultaneous explosion of portable microelectronics devices and the rapid shrinking of microprocessor size have provided a tremendous motivation to scientists and engineers to continue the down-scaling of these devices. For several decades, innovations have allowed components such as transistors to be physically reduced in size, allowing the famous Moore's law to hold true. As these transistors approach the atomic scale, however, further reduction becomes less probable and practical. As new technologies overcome these limitations, they face new, unexpected problems, including the ability to accurately simulate and predict the behavior of these devices, and to manage the heat they generate. This work uses a 3D Monte Carlo (MC) simulator to investigate the electro-thermal behavior of quasi-one-dimensional electron gas (1DEG) multigate MOSFETs. In order to study these highly confined architectures, the inclusion of quantum correction becomes essential. To better capture the influence of carrier confinement, the electrostatically quantum-corrected full-band MC model has the added feature of being able to incorporate subband scattering. The scattering rate selection introduces quantum correction into carrier movement. In addition to the quantum effects, scaling introduces thermal management issues due to the surge in power dissipation. Solving these problems will continue to bring improvements in battery life, performance, and size constraints of future devices. We have coupled our electron transport Monte Carlo simulation to Aksamija's phonon transport so that we may accurately and efficiently study carrier transport, heat generation, and other effects at the transistor level. This coupling utilizes anharmonic phonon decay and temperature dependent scattering rates. 
One immediate advantage of our coupled electro-thermal Monte Carlo simulator is its ability to provide an accurate description of the spatial variation of self-heating and its effect on non-equilibrium carrier dynamics, a key determinant of device performance. The dependence of short-channel effects and Joule heating on the lateral scaling of the cross-section is specifically explored in this work. Finally, this dissertation studies the basic tradeoffs among various n-channel multigate architectures with square cross-sectional side lengths ranging from 30 nm to 5 nm.
Palma, Bianey Atriana; Sánchez, Ana Ureba; Salguero, Francisco Javier; Arráns, Rafael; Sánchez, Carlos Míguez; Zurita, Amadeo Walls; Hermida, María Isabel Romero; Leal, Antonio
2012-03-01
The purpose of this study was to present a Monte-Carlo (MC)-based optimization procedure to improve conventional treatment plans for accelerated partial breast irradiation (APBI) using modulated electron beams alone or combined with modulated photon beams, to be delivered by a single collimation device, i.e. a photon multi-leaf collimator (xMLC) already installed in a standard hospital. Five left-sided breast cases were retrospectively planned using modulated photon and/or electron beams with an in-house treatment planning system (TPS) called CARMEN, which is based on MC simulations. For comparison, the same cases were also planned by a PINNACLE TPS using conventional inverse intensity modulated radiation therapy (IMRT). Normal tissue complication probability for pericarditis, pneumonitis and breast fibrosis was calculated. CARMEN plans showed acceptable planning target volume (PTV) coverage similar to conventional IMRT plans, with 90% of the PTV volume covered by the prescribed dose (D(p)). The volume of heart and ipsilateral lung receiving 5% D(p) and 15% D(p), respectively, was 3.2-3.6 times lower for CARMEN plans. The volume of ipsilateral breast receiving 50% D(p) and 100% D(p) was an average of 1.4-1.7 times lower for CARMEN plans. Skin and whole-body low-dose volume was also reduced. Modulated photon and/or electron beams planned by the CARMEN TPS improve APBI treatments by increasing normal tissue sparing while maintaining the same PTV coverage achieved by other techniques. The use of the xMLC, already installed in the linac, to collimate photon and electron beams favors the clinical implementation of APBI with the highest efficiency. PMID:22330241
Monte Carlo modeling of transport in PbSe nanocrystal films
Carbone, I., E-mail: icarbone@ucsc.edu; Carter, S. A. [University of California, Santa Cruz, California 95060 (United States); Zimanyi, G. T. [University of California, Davis, California 95616 (United States)
2013-11-21
A Monte Carlo hopping model was developed to simulate electron and hole transport in nanocrystalline PbSe films. Transport is carried out as a series of thermally activated hopping events between neighboring sites on a cubic lattice. Each site, representing an individual nanocrystal, is assigned a size-dependent electronic structure, and the effects of particle size, charging, interparticle coupling, and energetic disorder on electron and hole mobilities were investigated. Results of simulated field-effect measurements confirm that electron mobilities and conductivities at constant carrier densities increase with particle diameter by an order of magnitude up to 5 nm and begin to decrease above 6 nm. We find that as particle size increases, fewer hops are required to traverse the same distance and that site energy disorder significantly inhibits transport in films composed of smaller nanoparticles. The dip in mobilities and conductivities at larger particle sizes can be explained by a decrease in tunneling amplitudes and by charging penalties that are incurred more frequently when carriers are confined to fewer, larger nanoparticles. Using a nearly identical set of parameter values as in the electron simulations, hole mobility simulations reproduce measured mobilities that increase monotonically with particle size over two orders of magnitude.
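The thermally activated hopping step described above can be sketched as a kinetic Monte Carlo move: compute rates to neighboring sites from the site-energy differences, then select the hop and its waiting time stochastically. The Miller-Abrahams-like rate form and all parameter values are illustrative assumptions, not the paper's calibrated model:

```python
import math
import random

def hop_rates(e_site, neighbor_energies, kT, nu0=1.0):
    # Thermally activated rates to each neighbor: uphill hops (dE > 0) are
    # Boltzmann-suppressed, downhill hops proceed at the attempt rate nu0
    # (a Miller-Abrahams-like form, assumed here for illustration).
    rates = []
    for e_n in neighbor_energies:
        de = e_n - e_site
        rates.append(nu0 * math.exp(-de / kT) if de > 0 else nu0)
    return rates

def choose_hop(rates, rng):
    # Kinetic Monte Carlo selection: pick neighbor j with probability
    # rates[j] / sum(rates); the waiting time is exponential in the total rate.
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for j, rate in enumerate(rates):
        acc += rate
        if r < acc:
            return j, rng.expovariate(total)
    return len(rates) - 1, rng.expovariate(total)
```

With site energies drawn from a disorder distribution, repeated application of these two functions walks a carrier across the lattice; strong disorder suppresses uphill rates and hence the mobility, in line with the trend reported for small nanoparticles.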
Hu, Wenqian; Shin, Yung C.; King, Galen
2010-09-01
Mechanisms of energy transport during ultrashort laser pulse (USLP) ablation are investigated in this paper. Nonequilibrium electron transport, material ionization, and density-change effects are studied using atomistic models, namely the molecular dynamics (MD) and Monte Carlo (MC) methods, in addition to the previously studied laser absorption, heat conduction, and stress wave propagation. The target material is treated as consisting of two subsystems: the valence-electron system and the lattice system. The MD method is applied to analyze the motion of atoms, while the MC method is applied to simulate electron dynamics and multiscattering events between particles. Early-time laser-energy absorption and redistribution, as well as later-time material ablation and expansion processes, are analyzed. The model is validated in terms of ablation depth, lattice/electron temperature distribution and evolution, and plume front velocity through comparisons with experimental and theoretical results in the literature. It is generally believed that the hydrodynamic motion of the ablated material is negligible for USLPs, but this study shows that this holds only for its effect on laser-energy deposition. The consideration of hydrodynamic expansion and fast density change in both the electron and lattice systems is shown to be important for obtaining a reliable energy transport mechanism in the locally heated zone.
Cartesian Meshing Impacts for PWR Assemblies in Multigroup Monte Carlo and Sn Transport
NASA Astrophysics Data System (ADS)
Manalo, K.; Chin, M.; Sjoden, G.
2014-06-01
Hybrid methods of neutron transport have increased greatly in use, for example, applications using both Monte Carlo and deterministic transport to calculate quantities of interest, such as flux and eigenvalue in a nuclear reactor. Many 3D parallel Sn codes apply a Cartesian mesh, and thus for nuclear reactors the representation of curved fuels (cylinder, sphere, etc.) is impacted in terms of proper fuel inventory (both deviation of mass and exact geometry representation). For a PWR assembly eigenvalue problem, we explore the errors associated with this Cartesian discrete mesh representation and perform an analysis to calculate a slope parameter that relates the pcm change to the percent areal/volumetric deviation (areal corresponds to 2D and volumetric to 3D, respectively). Our initial analysis demonstrates a linear relationship between pcm change and areal/volumetric deviation using multigroup MCNP on a PWR assembly compared to a reference exact combinatorial MCNP geometry calculation. For the same multigroup problems, we also intend to characterize this linear relationship in discrete ordinates (3D PENTRAN) and discuss issues related to transport cross-comparison. In addition, we discuss auto-conversion techniques with our 3D Cartesian mesh generation tools that allow full generation of MCNP5 inputs (Cartesian mesh and multigroup XS) from a basis PENTRAN Sn model.
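The areal deviation driving the reported pcm relationship can be illustrated by meshing a circular fuel pin on a Cartesian grid and comparing the staircased area with the true one. The in/out cell-center test and the pin dimensions below are assumptions for the sketch, not the paper's meshing procedure:

```python
import math

def areal_deviation(radius, pitch, n_cells):
    # Relative deviation of a Cartesian (cell-centered, in/out) representation
    # of a circular fuel pin from the true circle area. A cell is counted as
    # fuel if its center lies inside the circle; the pin is centered in a
    # square cell of side `pitch` divided into n_cells x n_cells mesh cells.
    h = pitch / n_cells
    inside = 0
    for i in range(n_cells):
        for j in range(n_cells):
            x = (i + 0.5) * h - pitch / 2.0
            y = (j + 0.5) * h - pitch / 2.0
            if x * x + y * y <= radius * radius:
                inside += 1
    mesh_area = inside * h * h
    true_area = math.pi * radius * radius
    return (mesh_area - true_area) / true_area
```

Sweeping `n_cells` gives the percent areal deviation as a function of mesh refinement, which is the x-axis of the pcm-versus-deviation relationship the abstract describes.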
Adjoint-based deviational Monte Carlo methods for phonon transport calculations
NASA Astrophysics Data System (ADS)
Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.
2015-06-01
In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation to accelerate deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or via formulations that make more efficient use of computational resources, such as those providing high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.
Domain Decomposition of a Constructive Solid Geometry Monte Carlo Transport Code
O'Brien, M J; Joy, K I; Procassini, R J; Greenman, G M
2008-12-07
Domain decomposition has been implemented in a Constructive Solid Geometry (CSG) Monte Carlo neutron transport code. Previous methods to parallelize a CSG code relied entirely on particle parallelism; in our approach we distribute the geometry as well as the particles across processors. This enables calculations whose geometric description is too large to fit in the memory of a single processor and must therefore be distributed. In addition to enabling very large calculations, we show that domain decomposition can speed up calculations compared to particle parallelism alone. We also show results of a calculation of the proposed Laser Inertial-Confinement Fusion-Fission Energy (LIFE) facility, which has 5.6 million CSG parts.
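The particle hand-off that domain decomposition requires can be sketched without MPI; the slab layout, step length, and routing function below are illustrative assumptions, not the code's actual implementation:

```python
from collections import defaultdict

# Toy sketch (no MPI): domains are 1D slabs [i, i+1); each "rank" owns one
# slab and tracks only particles inside it. A particle that crosses a slab
# boundary is buffered for the rank owning its new position, mimicking the
# particle hand-off in a domain-decomposed Monte Carlo code.
NUM_DOMAINS = 4

def owner(x):
    """Rank owning position x in [0, NUM_DOMAINS)."""
    return min(int(x), NUM_DOMAINS - 1)

def transport_step(particles_by_rank, step=0.7):
    """Advance every particle; route boundary crossers to their new owner."""
    outboxes = defaultdict(list)
    for rank, particles in particles_by_rank.items():
        for x in particles:
            x_new = x + step                      # straight-line flight
            if 0.0 <= x_new < NUM_DOMAINS:
                outboxes[owner(x_new)].append(x_new)
            # else: particle leaked out of the global geometry
    return dict(outboxes)

banks = {0: [0.1, 0.5], 1: [1.9], 3: [3.8]}       # particle at 3.8 will leak
banks = transport_step(banks)
```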
Core-scale solute transport model selection using Monte Carlo analysis
Malama, Bwalya; James, Scott C
2013-01-01
Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (H-3) and sodium-22, and the retarding solute uranium-232. The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single- and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows ...
Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access
Romano, Paul K [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory; Forget, Benoit [MIT
2010-01-01
One forthcoming challenge in the area of high-performance computing is the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.
Monte Carlo simulation study of spin transport in multilayer graphene with Bernal stacking
NASA Astrophysics Data System (ADS)
Misra, Soumya; Ghosh, Bahniman; Nandal, Vikas; Dubey, Lalit
2012-07-01
In this work, we model spin transport in multilayer graphene (MLG) stacks with Bernal (ABA) stacking using semi-classical Monte Carlo simulations and compare the results to bilayer graphene. Both the D'yakonov-Perel and Elliott-Yafet mechanisms for spin relaxation are considered for modeling purposes. Varying the number of layers alters the band structure of the MLG. We study the role of the band structure in determining the spin relaxation lengths of the different multilayer graphene stacks. We observe that as the number of layers increases, the spin relaxation length increases up to a maximum value at 16 layers and then stays the same irrespective of the number of layers. We explain this trend in terms of the changing band structure, which affects the scattering rates of the spin carriers.
Improved cache performance in Monte Carlo transport calculations using energy banding
NASA Astrophysics Data System (ADS)
Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.
2014-04-01
We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
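The core idea, processing particles one energy band at a time so that only that band's slice of the cross-section table must stay cache-resident, can be sketched as follows (band layout and cross sections are toy values, not real nuclear data):

```python
import numpy as np

# Sketch of energy banding: particles are banked by energy band, and each
# band is processed in turn so only that band's slice of the cross-section
# table needs to stay cache-hot. Band edges and sigma_t values are toys.
band_edges = np.array([0.0, 1.0, 10.0, 100.0])   # MeV, defining 3 bands
xs_table = np.array([4.0, 2.5, 1.0])             # one sigma_t per band (toy)

energies = np.array([0.3, 5.0, 42.0, 0.8, 20.0])
bands = np.digitize(energies, band_edges) - 1    # band index for each particle

total = 0.0
for b in range(len(xs_table)):
    in_band = energies[bands == b]               # process one band at a time
    sigma = xs_table[b]                          # only this slice is needed now
    total += sigma * len(in_band)                # stand-in for the tracking work
```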
NASA Astrophysics Data System (ADS)
Goodman, M. S.; Denis, G.; Hall, M.; Karpovsky, A.; Wilson, R.; Gabriel, T. A.; Bishop, B. L.
1980-12-01
Monte Carlo techniques are used to study the characteristics of a proposed electron/photon detector based on the total absorption of electromagnetic showers in liquid argon. The energy range studied was 50 MeV to 2 GeV. Results are presented on the energy and angular resolution predicted for the devices, along with detailed predictions of the transverse and longitudinal shower distributions. Comparisons are made with other photon detectors, and possible applications are discussed.
NASA Astrophysics Data System (ADS)
Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.
2014-06-01
Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate their accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings, where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.
Precise Monte Carlo Simulation of Single-Photon Detectors
Stipcevic, Mario; Gauthier, Daniel J.
Single-photon detectors, or "single-photon counters," are widely used in the laboratory research environment and in medical equipment.
A Monte Carlo tool for combined photon and proton treatment planning verification
NASA Astrophysics Data System (ADS)
Seco, J.; Jiang, H.; Herrup, D.; Kooy, H.; Paganetti, H.
2007-06-01
Photons and protons are usually used independently to treat cancer. However, at MGH patients can be treated with both photons and protons since both modalities are available on site. A combined therapy can be advantageous in cancer therapy due to the skin-sparing ability of photons and the sharp Bragg peak fall-off of protons beyond the tumor. In the present work, we demonstrate how to implement a combined 3D MC toolkit for photon and proton (ph-pr) therapy, which can be used for verification of the treatment plan. The commissioning of a MC system for combined ph-pr involves initially the development of a MC model of both the photon and proton treatment heads. The MC dose tool was evaluated on a head and neck patient treated with combined photon and proton beams. The combined ph-pr dose agreed with measurements in a solid water phantom to within 3%/3 mm. Comparison with the commercial planning system's pencil beam prediction agrees within 3% (except for air cavities and bone regions).
Unified single-photon and single-electron counting statistics: From cavity QED to electron transport
Lambert, Neill; Chen, Yueh-Nan; Nori, Franco
2010-12-15
A key ingredient of cavity QED is the coupling between the discrete energy levels of an atom and photons in a single-mode cavity. The addition of periodic ultrashort laser pulses allows one to use such a system as a source of single photons--a vital ingredient in quantum information and optical computing schemes. Here we analyze and time-adjust the photon-counting statistics of such a single-photon source and show that the photon statistics can be described by a simple transport-like nonequilibrium model. We then show that there is a one-to-one correspondence of this model to that of nonequilibrium transport of electrons through a double quantum dot nanostructure, unifying the fields of photon-counting statistics and electron-transport statistics. This correspondence empowers us to adapt several tools previously used for detecting quantum behavior in electron-transport systems (e.g., super-Poissonian shot noise and an extension of the Leggett-Garg inequality) to single-photon-source experiments.
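One counting-statistics diagnostic shared by both fields is the Fano factor F = Var(n)/⟨n⟩, with F > 1 signaling the super-Poissonian shot noise mentioned above and F < 1 the sub-Poissonian statistics of a good single-photon source. A self-contained sketch with toy count records (not data from the paper):

```python
def fano(counts):
    """Fano factor F = Var(n)/<n>: 1 for Poissonian counts,
    < 1 sub-Poissonian, > 1 super-Poissonian."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean

# Ideal pulsed single-photon source: exactly one photon per pulse window.
single_photon = [1] * 1000
# "Bunched" toy source: photons arrive four at a time in a quarter of the
# windows, giving super-Poissonian shot noise.
bunched = [0, 0, 0, 4] * 250

f_single = fano(single_photon)   # 0.0: perfectly sub-Poissonian
f_bunched = fano(bunched)        # 3.0: super-Poissonian
```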
NASA Astrophysics Data System (ADS)
Shi, X.; Ye, M.; Curtis, G. P.; Lu, D.; Meyer, P. D.; Yabusaki, S.; Wu, J.
2011-12-01
Assessment of parametric uncertainty for groundwater reactive transport models is challenging, because the models are highly nonlinear with respect to their parameters due to nonlinear reaction equations and process coupling. The nonlinearity may yield parameter distributions that are non-Gaussian and have multiple modes. For such parameter distributions, the widely used nonlinear regression methods may not be able to accurately quantify predictive uncertainty. One solution to this problem is to use Markov Chain Monte Carlo (MCMC) techniques. Both the nonlinear regression and MCMC methods are used in this study for quantification of parametric uncertainty of a surface complexation model (SCM), developed to simulate hexavalent uranium [U(VI)] transport in column experiments. First, a brute force Monte Carlo (MC) simulation with hundreds of thousands of model executions is conducted to understand the surface of the objective function and the predictive uncertainty of uranium concentration. Subsequently, the Gauss-Marquardt-Levenberg method is applied to calibrate the model. It shows that, even with multiple initial guesses, the local optimization method has difficulty finding the global optimum because of the rough surface of the objective function and local optima due to model nonlinearity. Another problem of the nonlinear regression is the underestimation of predictive uncertainty, as both the linear and nonlinear confidence intervals are narrower than those obtained from the brute force MC simulation. Since the brute force MC simulation is computationally expensive, the above challenges for parameter estimation and predictive uncertainty analysis are addressed using a computationally efficient MCMC technique, the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm.
The results obtained from running DREAM, compared with those from the brute force Monte Carlo simulations, show that MCMC not only successfully infers the multimodal posterior probability distribution, but also provides good estimates of predictive uncertainty. The nonlinear regression methods perform poorly because the Gaussian marginal distributions they assume deviate significantly from the marginal posterior probability distributions estimated by DREAM and the brute force MC simulations.
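The advantage of MCMC over Gaussian-based regression for multimodal posteriors can be illustrated with a minimal random-walk Metropolis sampler on a toy bimodal density (this is a generic sketch, not the DREAM algorithm or the actual SCM posterior):

```python
import math
import random

random.seed(0)

# Toy bimodal "posterior": equal-weight mixture of unit Gaussians at -2, +2.
# A Gaussian fit around either local optimum would miss half the mass;
# a well-mixed MCMC chain visits both modes.
def log_post(x):
    return math.log(0.5 * math.exp(-0.5 * (x + 2.0) ** 2)
                    + 0.5 * math.exp(-0.5 * (x - 2.0) ** 2))

x, chain = 0.0, []
for _ in range(20000):
    prop = x + random.gauss(0.0, 2.0)                 # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(x):
        x = prop                                      # Metropolis accept
    chain.append(x)

left = sum(1 for s in chain if s < 0) / len(chain)    # mass in the left mode
```

With good mixing, roughly half the chain sits in each mode, which is exactly the multimodal structure a single Gaussian approximation cannot represent.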
Single photon transport along a one-dimensional waveguide with a side manipulated cavity QED system.
Yan, Cong-Hua; Wei, Lian-Fu
2015-04-20
An external mirror coupled to a cavity with a two-level atom inside is put forward to control photon transport along a one-dimensional waveguide. Using a full quantum theory of photon transport in real space, it is shown that the Rabi splittings of the photonic transmission spectra can be controlled by the cavity-mirror coupling; the splittings can still be observed even when the cavity-atom system works in the weak-coupling regime, and the transmission probability of the resonant photon can be modulated from 0 to 100%. Additionally, our numerical results show that the appearance of Fano resonance is related to the strength of the cavity-mirror coupling and the dissipations of the system. An experimental demonstration of the proposal with current photonic crystal waveguide techniques is suggested. PMID:25969078
K. S. Leuenroth; T. E. Evans; D. F. Finkenthal
1999-01-01
Several sources of atomic data are available for simulation studies in the Monte Carlo Impurity (MCI) code. Various types of atomic data are used in MCI to determine the effects of forces acting on impurities and the ionization state of the impurity. Thus, atomic data have a significant impact on MCI's impurity transport simulations. Using the same DIII-D background plasma solution
Anisotropy of Electrical Transport in Pnictide Superconductors Studied Using Monte Carlo
Tennessee, University of
2012-07-26
An undoped three-orbital spin-fermion model for the Fe-based superconductors is studied; experiments have observed magnetic moments at room temperature (T) via Fe x-ray emission spectroscopy [6,7].
Chow, James C L; Owrangi, Amir M
2012-01-01
This study evaluated the dosimetric impact of surface dose reduction due to the loss of backscatter from the bone interface in kilovoltage (kV) X-ray radiation therapy. Monte Carlo simulation was carried out using the EGSnrc code. An inhomogeneous phantom containing a thin water layer (0.5-5 mm) on top of a bone (thickness = 1 cm) was irradiated by a clinical 105 kVp photon beam produced by a Gulmay D3225 X-ray machine. Field sizes of 2, 5, and 10 cm diameter and a source-to-surface distance of 20 cm were used. Surface doses for different phantom configurations were calculated using the DOSXYZnrc code. Photon energy spectra at the phantom surface and bone were determined from the phase-space files at the particle scoring planes, which included the multiple crossers. For comparison, all Monte Carlo simulations were repeated in a phantom with the bone replaced by water. Surface dose reduction was found when bone was underneath the water layer. When the water thickness was 1 mm for the circular field of 5 cm diameter, a surface dose reduction of 6.3% was found. The dose reduction decreased to 4.7% and 3.4% as the water thickness increased to 3 and 5 mm, respectively. This shows that the impact of the surface dose uncertainty decreased as the water thickness over the bone increased. This result was supported by the decrease in the relative intensity of lower-energy photons in the energy spectrum when the water layer was over the bone, compared to without the bone. Overall, the surface dose reduction decreased from 7.8% to 1.1% as the water thickness increased from 0.5 to 5 mm for circular fields with diameters ranging from 2 to 10 cm. This decrease in surface dose results in an overestimation of the prescribed dose at the patient's surface, and might be a concern when using kV photon beams to treat skin tumors in sites such as the forehead, chest wall, and kneecap. PMID:22955657
Hui, Y.Y.; Chang, Y.-R.; Lee, H.-Y.; Chang, H.-C. [Institute of Atomic and Molecular Sciences, Academia Sinica, Taipei 106, Taiwan (China); Lim, T.-S. [Department of Physics, Tunghai University, Taichung 407, Taiwan (China); Fann Wunshain [Institute of Atomic and Molecular Sciences, Academia Sinica, Taipei 106, Taiwan (China); Department of Physics, National Taiwan University, Taipei 106, Taiwan (China)
2009-01-05
The number of negatively charged nitrogen-vacancy centers (N-V)⁻ in fluorescent nanodiamond (FND) has been determined by photon correlation spectroscopy and Monte Carlo simulations at the single particle level. By taking account of the random dipole orientation of the multiple (N-V)⁻ fluorophores and simulating the probability distribution of their effective numbers (N_e), we found that the actual number (N_a) of the fluorophores is in linear correlation with N_e, with correction factors of 1.8 and 1.2 in measurements using linearly and circularly polarized light, respectively. We determined N_a = 8 ± 1 for 28 nm FND particles prepared by 3 MeV proton irradiation.
Lorence, L.J. Jr.
1991-01-01
CEPXS/ONELD is the only discrete ordinates code capable of modelling the fully-coupled electron-photon cascade at high energies. Quantities that are related to the particle flux such as dose and charge deposition can readily be obtained. This deterministic code is much faster than comparable Monte Carlo codes. The unique adjoint transport capability of CEPXS/ONELD also enables response functions to be readily calculated. Version 2.0 of the CEPXS/ONELD code package has been designed to allow users who are not expert in discrete ordinates methods to fully exploit the code's capabilities. 14 refs., 15 figs.
Shi, C. Y.; Xu, X. George; Stabin, Michael G.
2008-07-15
Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females.
Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons
Mei, S.; Knezevic, I.; Maurer, L. N.; Aksamija, Z.
2014-10-28
We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011); S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/m K. This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.
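A basic building block of such a phonon Monte Carlo simulation is drawing exponential free-flight times from the combined scattering rate via Matthiessen's rule; the rates below are illustrative toys, not the paper's full-dispersion values:

```python
import math
import random

random.seed(3)

# Toy per-mechanism scattering rates (1/s); Matthiessen's rule sums them.
rates = {"edge": 2.0e9, "three_phonon": 5.0e9, "isotope": 1.0e9}
total_rate = sum(rates.values())          # 8.0e9 1/s

def free_flight_time():
    """Sample an exponential free-flight time with mean 1/total_rate."""
    return -math.log(random.random()) / total_rate

times = [free_flight_time() for _ in range(200000)]
mean_t = sum(times) / len(times)          # should approach 1/8e9 = 125 ps
```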
Ulmer, W; Pyyry, J; Kaissl, W
2005-04-21
Based on previous publications on a triple Gaussian analytical pencil beam model and on Monte Carlo calculations using the Monte Carlo codes GEANT-Fluka (versions 95, 98, 2002) and BEAMnrc/EGSnrc, a three-dimensional (3D) superposition/convolution algorithm for photon beams (6 MV, 18 MV) is presented. Tissue heterogeneity is taken into account by the electron density information of CT images. A clinical beam consists of a superposition of divergent pencil beams. A slab geometry was used as a phantom model to test computed results by measurements. An essential result is the existence of further dose build-up and build-down effects in the domain of density discontinuities. These effects have increasing magnitude for field sizes ≤ 5.5 cm² and densities ≤ 0.25 g cm⁻³, in particular with regard to field sizes considered in stereotaxy. They could be confirmed by measurements (mean standard deviation 2%). A practical impact is the dose distribution at transitions from bone to soft tissue, lung, or cavities. PMID:15815095
Brantley, P S
2009-06-30
Particle transport through binary stochastic mixtures has received considerable research attention in the last two decades. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that should be more accurate as a result of improved local material realization modeling. Zimmerman and Adams numerically confirmed these aspects of the Monte Carlo algorithms by comparing the reflection and transmission values computed using these algorithms to a standard suite of planar geometry binary stochastic mixture benchmark transport solutions. The benchmark transport problems are driven by an isotropic angular flux incident on one boundary of a binary Markovian statistical planar geometry medium. In a recent paper, we extended the benchmark comparisons of these Monte Carlo algorithms to include the scalar flux distributions produced. This comparison is important, because as demonstrated, an approximate model that gives accurate reflection and transmission probabilities can produce unphysical scalar flux distributions. Brantley and Palmer recently investigated the accuracy of the Levermore-Pomraning model using a new interior source binary stochastic medium benchmark problem suite. In this paper, we further investigate the accuracy of the Monte Carlo algorithms proposed by Zimmerman and Adams by comparing to the benchmark results from the interior source binary stochastic medium benchmark suite, including scalar flux distributions. Because the interior source scalar flux distributions are of an inherently different character than the distributions obtained for the incident angular flux benchmark problems, the present benchmark comparison extends the domain of problems for which the accuracy of these Monte Carlo algorithms has been investigated.
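The planar Markovian mixtures underlying these benchmarks can be sampled realization by realization: alternate the two materials with exponentially distributed chord lengths, then score transmission. A toy sketch with assumed parameters (not the benchmark suite's values):

```python
import math
import random

random.seed(7)

# Toy binary Markovian mixture in planar geometry.
chord = {0: 1.0, 1: 0.5}       # mean chord lengths (cm)
sigma_t = {0: 0.2, 1: 2.0}     # total cross sections (1/cm)
slab = 5.0                     # slab thickness (cm)

def optical_depth():
    """Optical depth of one sampled material realization of the slab."""
    x, mat, tau = 0.0, random.randint(0, 1), 0.0
    while x < slab:
        # Exponentially distributed chord, truncated at the slab boundary.
        seg = min(-chord[mat] * math.log(random.random()), slab - x)
        tau += sigma_t[mat] * seg
        x += seg
        mat = 1 - mat          # Markovian interface: switch material
    return tau

# Ensemble-averaged uncollided transmission over many realizations.
trans = sum(math.exp(-optical_depth()) for _ in range(20000)) / 20000
```

Because transmission is averaged over realizations rather than over an atomically mixed medium, it exceeds the atomic-mix estimate exp(-⟨σ⟩L), which is the essential feature the Levermore-Pomraning model approximates.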
Single Photon Transport through an Atomic Chain Coupled to a One-dimensional Nanophotonic Waveguide
Zeyang Liao; Xiaodong Zeng; Shi-Yao Zhu; M. Suhail Zubairy
2015-05-25
We study the dynamics of a single photon pulse traveling through a linear atomic chain coupled to a one-dimensional (1D) single mode photonic waveguide. We derive a time-dependent dynamical theory for this collective many-body system which allows us to study the real time evolution of the photon transport and the atomic excitations. Our analytical result is consistent with previous numerical calculations when there is only one atom. For an atomic chain, the collective interaction between the atoms mediated by the waveguide mode can significantly change the dynamics of the system. The reflectivity of a photon can be tuned by changing the ratio of coupling strength and the photon linewidth or by changing the number of atoms in the chain. The reflectivity of a single photon pulse with finite bandwidth can even approach $100\\%$. The spectrum of the reflected and transmitted photon can also be significantly different from the single-atom case. Many interesting physical phenomena can occur in this system such as the photonic bandgap effects, quantum entanglement generation, Fano-like interference, and superradiant effects. For engineering, this system may serve as a single photon frequency filter, single photon modulator, and may find important applications in quantum information.
Single-photon transport through an atomic chain coupled to a one-dimensional nanophotonic waveguide
NASA Astrophysics Data System (ADS)
Liao, Zeyang; Zeng, Xiaodong; Zhu, Shi-Yao; Zubairy, M. Suhail
2015-08-01
We study the dynamics of a single-photon pulse traveling through a linear atomic chain coupled to a one-dimensional (1D) single mode photonic waveguide. We derive a time-dependent dynamical theory for this collective many-body system which allows us to study the real time evolution of the photon transport and the atomic excitations. Our analytical result is consistent with previous numerical calculations when there is only one atom. For an atomic chain, the collective interaction between the atoms mediated by the waveguide mode can significantly change the dynamics of the system. The reflectivity of a photon can be tuned by changing the ratio of coupling strength and the photon linewidth or by changing the number of atoms in the chain. The reflectivity of a single-photon pulse with finite bandwidth can even approach 100 % . The spectrum of the reflected and transmitted photon can also be significantly different from the single-atom case. Many interesting physical phenomena can occur in this system such as the photonic band-gap effects, quantum entanglement generation, Fano-like interference, and superradiant effects. For engineering, this system may serve as a single-photon frequency filter, single-photon modulation, and may find important applications in quantum information.
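For reference, the single-atom limit these papers recover has a standard closed form in real-space scattering theory (lossless case; the notation is assumed here rather than taken from the paper):

```latex
% Single two-level atom in a 1D waveguide, lossless case:
% \delta is the detuning from atomic resonance and \Gamma the
% waveguide-induced decay rate (notation assumed).
t(\delta) = \frac{\delta}{\delta + i\Gamma/2}, \qquad
R(\delta) = |r(\delta)|^2 = \frac{(\Gamma/2)^2}{\delta^2 + (\Gamma/2)^2}
% On resonance (\delta = 0) the photon is fully reflected, R = 1,
% consistent with the tunable 0-100% reflectivity discussed above.
```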
Ali, Fawaz; Waller, Ed
2014-10-01
There are numerous scenarios where radioactive particulates can be displaced by external forces. For example, the detonation of a radiological dispersal device in an urban environment will result in the release of radioactive particulates that in turn can be resuspended into the breathing space by external forces such as wind flow in the vicinity of the detonation. A need exists to quantify the internal (due to inhalation) and external radiation doses that are delivered to bystanders; however, current state-of-the-art codes are unable to calculate accurately radiation doses that arise from the resuspension of radioactive particulates in complex topographies. To address this gap, a coupled computational fluid dynamics and Monte Carlo radiation transport approach has been developed. With the aid of particulate injections, the computational fluid dynamics simulation models characterize the resuspension of particulates in a complex urban geometry due to air-flow. The spatial and temporal distributions of these particulates are then used by the Monte Carlo radiation transport simulation to calculate the radiation doses delivered to various points within the simulated domain. A particular resuspension scenario has been modeled using this coupled framework, and the calculated internal (due to inhalation) and external radiation doses have been deemed reasonable. GAMBIT and FLUENT comprise the software suite used to perform the Computational Fluid Dynamics simulations, and Monte Carlo N-Particle eXtended is used to perform the Monte Carlo Radiation Transport simulations. PMID:25162421
NASA Astrophysics Data System (ADS)
Romano, Paul Kollath
Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes; in doing so, it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency.
The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, implemented, and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs)
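The nearest-neighbor fission bank idea can be illustrated with a short sketch: if each rank matches the running prefix sum of the sites it produced against an ideal uniform partition of the globally ordered bank, sites only need to move between adjacent ranks when the counts are near-uniform. This is a simplified, hypothetical illustration of the load-balancing bookkeeping, not OpenMC's actual implementation.

```python
def fission_bank_exchanges(counts):
    """Toy sketch of the nearest-neighbor fission bank balancing step.

    Given the number of fission sites each rank produced this generation,
    return how many sites each rank should receive from its left neighbor
    (negative values mean it sends that many sites to the left) so that
    every rank ends up owning a contiguous, near-equal slice of the
    globally ordered fission bank.
    """
    total = sum(counts)
    n = len(counts)
    prefix = 0  # global index of this rank's first site before balancing
    recv_from_left = []
    for i, c in enumerate(counts):
        ideal_start = (i * total) // n  # target start of rank i's slice
        recv_from_left.append(prefix - ideal_start)
        prefix += c
    return recv_from_left
```

With balanced counts nothing moves; with a surplus on one rank, only its immediate neighbor is involved, which is the source of the reduced communication cost.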
Hartmann Siantar, C L; Walling, R S; Daly, T P; Faddegon, B; Albright, N; Bergstrom, P; Bielajew, A F; Chuang, C; Garrett, D; House, R K; Knapp, D; Wieczorek, D J; Verhey, L J
2001-07-01
PEREGRINE is a three-dimensional Monte Carlo dose calculation system written specifically for radiotherapy. This paper describes the implementation and overall dosimetric accuracy of PEREGRINE physics algorithms, beam model, and beam commissioning procedure. Particle-interaction data, tracking geometries, scoring, variance reduction, and statistical analysis are described. The BEAM code system is used to model the treatment-independent accelerator head, resulting in the identification of primary and scattered photon sources and an electron contaminant source. The magnitude of the electron source is increased to improve agreement with measurements in the buildup region in the largest fields. Published measurements provide an estimate of backscatter on monitor chamber response. Commissioning consists of selecting the electron beam energy, determining the scale factor that defines dose per monitor unit, and describing treatment-dependent beam modifiers. We compare calculations with measurements in a water phantom for open fields, wedges, blocks, and a multileaf collimator for 6 and 18 MV Varian Clinac 2100C photon beams. All calculations are reported as dose per monitor unit. Aside from backscatter estimates, no additional, field-specific normalization is included in comparisons with measurements. Maximum discrepancies were less than either 2% of the maximum dose or 1.2 mm in isodose position for all field sizes and beam modifiers. PMID:11488562
Chibani, Omar; Ma, C M Charlie
2007-04-01
Significant discrepancies between Monte Carlo dose calculations and measurements for the Varian 18 MV photon beam with a large field size (40 × 40 cm²) were reported by different investigators. In this work, we investigated these discrepancies based on a new geometry model ("New Model") of the Varian 21EX linac using the GEPTS Monte Carlo code. Some geometric parameters used in previous investigations (Old Model) were inaccurate, as suggested by Chibani in his AAPM presentation (2004) and later confirmed by the manufacturer. The entrance and exit radii of the primary collimator of the New Model are 2 mm larger than previously thought. In addition to the corrected dimensions of the primary collimator, the New Model includes approximate models for the lead shield and the mirror frame between the monitor chamber and the Y jaws. A detailed analysis of the phase space data shows the effects of these corrections on the beam characteristics. The individual contributions from the linac components to the photon and electron fluences are calculated. The main source of discrepancy between measurements and calculations based on the Old Model is the underestimated electron contamination. The photon and electron fluences at the isocenter are 5.3% and 36% larger in the New Model in comparison with the Old Model. The flattening filter and the lead shield (plus the mirror frame) contribute 48.7% and 13% of the total electron contamination at the isocenter, respectively. For both open and filtered (2 mm Pb) fields, the calculated (New Model) and measured dose distributions are within 1% for depths larger than 1 cm. To solve the residual problem of large differences at shallow depths (8% at 0.25 cm depth), the detailed geometry of an IC-10 ionization chamber was simulated and the dose in the air cavity was calculated for different positions on the central axis, including at the surface, where half of the chamber is outside the phantom.
The calculated and measured chamber responses are within 3% even at the zero depth. PMID:17500452
Araki, Fujio
2012-11-21
The purpose of this study was to investigate the perturbation correction factors and inhomogeneity correction factors (ICFs) for a thin-walled cylindrical ion chamber in a heterogeneous phantom including solid water, lung and bone plastic materials. The perturbation factors due to the replacement of the air cavity, non-water equivalence of the wall and the stem, non-air equivalence of the central electrode and the overall perturbation factor, P(Q), for a cylindrical chamber, in the heterogeneous phantom were calculated with the EGSnrc/Cavity Monte Carlo code for 6 and 15 MV photon beams. The PTW31010 (0.125 cm³) chamber was modeled with Monte Carlo simulations, and was used for measurements and calculations of percentage depth ionization (PDI) or percentage depth dose (PDD). ICFs were calculated from the ratio of the product of the stopping power ratios (SPRs) and P(Q) of lung or bone to solid water. Finally, the measured PDIs were converted to PDDs by using ICFs and were compared with those calculated by the Monte Carlo method. The perturbation effect for the ion chamber in lung material is insignificant at 5 × 5 and 10 × 10 cm² fields, but the effect needs to be considered under conditions of lateral electron disequilibrium with a 3 × 3 cm² field. ICFs in lung varied up to 2% and 4% depending on the field size for 6 and 15 MV, respectively. For bone material, the perturbation effects due to the chamber wall and the stem were more significant at up to 3.5% and 1.6% for 6 MV, respectively. ICFs for bone material were approximately 0.945 and 0.940 for 6 and 15 MV, respectively. The converted PDDs by using ICFs were in good agreement with Monte Carlo calculated PDDs. The chamber perturbation correction and SPRs should strictly be considered for ion chamber dosimetry in heterogeneous media. This is more important for small field dosimetry in lung and bone materials. PMID:23103477
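The conversion described in the abstract can be written compactly. The sketch below merely encodes the stated definitions (the ICF as the ratio of the product of SPR and overall perturbation factor in the medium to that in solid water, and PDD = PDI × ICF); all numerical values are placeholders, not data from the paper.

```python
def inhomogeneity_correction_factor(spr_med, pq_med, spr_ref, pq_ref):
    """ICF = (SPR * P_Q) in lung or bone relative to solid water,
    following the definition given in the abstract. Inputs are the
    stopping power ratio and overall perturbation factor in the medium
    and in the reference (solid water) material."""
    return (spr_med * pq_med) / (spr_ref * pq_ref)


def pdi_to_pdd(pdi, icf):
    """Convert a measured percentage depth ionization to percentage depth dose."""
    return pdi * icf
```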
Bakhshabadi, Mahdi; Ghorbani, Mahdi; Meigooni, Ali Soleimani
2013-09-01
In the present study, a number of brachytherapy sources and activation media were simulated using the MCNPX code and the results were analyzed based on the dose enhancement factor values. Furthermore, two new brachytherapy sources (¹³¹Cs and a hypothetical ¹⁷⁰Tm) were evaluated for their application in photon activation therapy (PAT). ¹²⁵I, ¹⁰³Pd, ¹³¹Cs and hypothetical ¹⁷⁰Tm brachytherapy sources were simulated in water and their dose rate constants and radial dose functions were compared with previously published data. The sources were then simulated in a soft tissue phantom which was composed of Ag, I, Pt or Au as activation media uniformly distributed in the tumour volume. These simulations were performed using the MCNPX code, and the dose enhancement factor (DEF) was obtained for 7, 18 and 30 mg/ml concentrations of the activation media. Each combination of source, activation medium, and concentration was evaluated in a separate simulation. The calculated dose rate constants and radial dose functions were in agreement with the published data for the aforementioned sources. The maximum DEF was found to be 5.58 for a combination of the ¹⁷⁰Tm source with a 30 mg/ml concentration of I. The DEFs for the ¹³¹Cs and ¹⁷⁰Tm sources for all four activation media were higher than those for the other sources and activation media. From this point of view, these two sources can be more useful in photon activation therapy with photon emitter sources. Furthermore, ¹³¹Cs and ¹⁷⁰Tm brachytherapy sources can be proposed as new options for use in the field of PAT. PMID:23934379
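The DEF bookkeeping over the (source, medium, concentration) grid is just a ratio of tumour doses with and without the activation medium. A minimal sketch, with keys and dose values purely illustrative (only the 5.58 maximum is quoted from the abstract):

```python
def dose_enhancement_factors(dose_with, dose_without):
    """DEF per (source, medium, concentration) combination: ratio of the
    tumour dose scored with the activation medium present to the dose
    scored for the same source without it. Keys are (source, medium,
    concentration) tuples; dose_without is keyed by source alone."""
    return {key: d / dose_without[key[0]] for key, d in dose_with.items()}
```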
Jet transport and photon bremsstrahlung via longitudinal and transverse scattering
NASA Astrophysics Data System (ADS)
Qin, Guang-You; Majumder, Abhijit
2015-04-01
We study the effect of multiple scatterings on the propagation of hard partons and the production of jet-bremsstrahlung photons inside a dense medium in the framework of deep-inelastic scattering off a large nucleus. We include the momentum exchanges in both longitudinal and transverse directions between the hard partons and the constituents of the medium. Keeping up to the second order in a momentum gradient expansion, we derive the spectrum for the photon emission from a hard quark jet when traversing dense nuclear matter. Our calculation demonstrates that the photon bremsstrahlung process is influenced not only by the transverse momentum diffusion of the propagating hard parton, but also by the longitudinal drag and diffusion of the parton momentum. A notable outcome is that the longitudinal drag tends to reduce the amount of stimulated emission from the hard parton.
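The interplay of longitudinal drag and diffusion with transverse diffusion can be visualized with a toy Langevin update for the parton momentum. Here ê (drag), ê₂ (longitudinal diffusion), and q̂ (transverse diffusion) play the role of the usual per-unit-length transport coefficients; the update rule and all parameter values are illustrative, not taken from the paper's derivation.

```python
import math
import random


def evolve_parton(p, ehat, e2hat, qhat, dt, steps, rng):
    """Toy Langevin sketch of a hard parton in a dense medium: drag lowers
    the longitudinal momentum pz by ehat*dt per step, while Gaussian kicks
    of variance e2hat*dt (longitudinal) and qhat*dt (transverse, split over
    two components) diffuse the momentum. Units are illustrative."""
    px, py, pz = p
    for _ in range(steps):
        pz += -ehat * dt + rng.gauss(0.0, math.sqrt(e2hat * dt))
        px += rng.gauss(0.0, math.sqrt(0.5 * qhat * dt))
        py += rng.gauss(0.0, math.sqrt(0.5 * qhat * dt))
    return px, py, pz
```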
Perturbative and iterative methods for photon transport in one-dimensional waveguides
NASA Astrophysics Data System (ADS)
Obi, Kenechukwu C.; Shen, Jung-Tsung
2015-05-01
The problems of photon transport in one-dimensional waveguides have recently attracted great attention. We consider the case of single photons scattering off a Λ-type three-level quantum emitter, and discuss the perturbative treatments of the scattering processes in terms of the Born approximation within the Lippmann-Schwinger formalism. We show that the iterative Born series of the scattering amplitudes converges to the exact results obtained by other approaches. The generalization of our work provides a foundational basis for efficient computational schemes for photon scattering problems in one-dimensional waveguides.
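The convergence of an iterated Born series can be demonstrated on a scalar caricature of the Lippmann-Schwinger equation, T = V + V G₀ T. This toy model (not the waveguide problem itself) converges geometrically to V/(1 − V G₀) whenever |V G₀| < 1:

```python
def born_series(v, g0, n_terms):
    """Iterate t <- v + v*g0*t, i.e. sum the Born series
    v + v*(g0*v) + v*(g0*v)**2 + ... for a scalar toy model of
    T = V + V*G0*T. Converges geometrically when |v*g0| < 1."""
    t = 0.0
    for _ in range(n_terms):
        t = v + v * g0 * t
    return t
```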
Vinke, Ruud; Olcott, Peter D; Cates, Joshua W; Levin, Craig S
2014-10-21
In this work, a method is presented that can calculate the lower bound of the timing resolution for large scintillation crystals with non-negligible photon transport. Hereby, the timing resolution bound can directly be calculated from Monte Carlo generated arrival times of the scintillation photons. This method extends timing resolution bound calculations based on analytical equations, as crystal geometries can be evaluated that do not have closed form solutions of arrival time distributions. The timing resolution bounds are calculated for an exemplary 3 mm × 3 mm × 20 mm LYSO crystal geometry, with scintillation centers exponentially spread along the crystal length as well as with scintillation centers at fixed distances from the photosensor. Pulse shape simulations further show that analog photosensors intrinsically operate near the timing resolution bound, which can be attributed to the finite single photoelectron pulse rise time. PMID:25255807
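A Cramér-Rao-style bound of this kind can be estimated numerically from simulated arrival times: estimate the arrival-time density, compute its Fisher information with respect to a time shift, and bound the timing standard deviation by 1/√(N·I). The histogram-based sketch below is only a rough stand-in for the authors' method, with all parameters illustrative.

```python
import numpy as np


def timing_bound(arrival_times, n_detected, bins=50):
    """Lower bound (std. dev.) on timing resolution from Monte Carlo photon
    arrival times, via the Fisher information of the arrival-time density
    for a pure time shift: I = integral of p'(t)**2 / p(t) dt. A kernel
    density estimate would be smoother; a histogram keeps the sketch short."""
    hist, edges = np.histogram(arrival_times, bins=bins, density=True)
    dt = edges[1] - edges[0]
    dp = np.gradient(hist, dt)          # numerical derivative of the density
    nz = hist > 0                       # avoid division by empty bins
    fisher = np.sum(dp[nz] ** 2 / hist[nz]) * dt
    return 1.0 / np.sqrt(n_detected * fisher)
```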
NASA Astrophysics Data System (ADS)
Majaron, Boris; Milanič, Matija; Premru, Jan
2015-01-01
In three-dimensional (3-D) modeling of light transport in heterogeneous biological structures using the Monte Carlo (MC) approach, space is commonly discretized into optically homogeneous voxels by a rectangular spatial grid. Any round or oblique boundaries between neighboring tissues thus become serrated, which raises legitimate concerns about the realism of modeling results with regard to reflection and refraction of light on such boundaries. We analyze the related effects by systematic comparison with an augmented 3-D MC code, in which analytically defined tissue boundaries are treated in a rigorous manner. At specific locations within our test geometries, energy deposition predicted by the two models can vary by 10%. Even highly relevant integral quantities, such as linear density of the energy absorbed by modeled blood vessels, differ by up to 30%. Most notably, the values predicted by the customary model vary strongly and quite erratically with the spatial discretization step and upon minor repositioning of the computational grid. Meanwhile, the augmented model shows no such unphysical behavior. Artifacts of the former approach do not converge toward zero with ever finer spatial discretization, confirming that it suffers from inherent deficiencies due to inaccurate treatment of reflection and refraction at round tissue boundaries.
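The rigorous treatment of reflection and refraction at an analytically defined boundary comes down to Snell's law plus the Fresnel equations evaluated at the true surface normal. A minimal, generic sketch of the reflect-or-refract decision in a photon Monte Carlo (not the authors' code):

```python
import math


def fresnel_reflectance(n1, n2, cos_i):
    """Unpolarized Fresnel reflectance for a photon hitting a smooth
    boundary between refractive indices n1 -> n2, with cos_i the cosine of
    the incidence angle. In a photon MC step, draw a uniform random number
    and reflect if it falls below this value, otherwise refract."""
    sin_t2 = (n1 / n2) ** 2 * (1.0 - cos_i ** 2)  # Snell's law, squared
    if sin_t2 >= 1.0:
        return 1.0  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t2)
    r_s = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    r_p = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (r_s + r_p)  # average of s- and p-polarized reflectances
```

The voxelized approach effectively replaces the true surface normal with an axis-aligned one, which is exactly where the serration artifacts discussed above originate.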
Monte Carlo model of neutral-particle transport in diverted plasmas
Heifetz, D.; Post, D.; Petravic, M.; Weisheit, J.; Bateman, G.
1981-11-01
The transport of neutral atoms and molecules in the edge and divertor regions of fusion experiments has been calculated using Monte-Carlo techniques. The deuterium, tritium, and helium atoms are produced by recombination in the plasma and at the walls. The relevant collision processes of charge exchange, ionization, and dissociation between the neutrals and the flowing plasma electrons and ions are included, along with wall reflection models. General two-dimensional wall and plasma geometries are treated in a flexible manner so that varied configurations can be easily studied. The algorithm uses a pseudo-collision method. Splitting with Russian roulette, suppression of absorption, and efficient scoring techniques are used to reduce the variance. The resulting code is sufficiently fast and compact to be incorporated into iterative treatments of plasma dynamics requiring numerous neutral profiles. The calculation yields the neutral gas densities, pressures, fluxes, ionization rates, momentum transfer rates, energy transfer rates, and wall sputtering rates. Applications have included modeling of proposed INTOR/FED poloidal divertor designs and other experimental devices.
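The pseudo-collision method mentioned above (also known as delta or Woodcock tracking) samples flight distances with a constant majorant cross section and then rejects "pseudo" collisions with probability set by the local cross section, so tracking never requires ray-geometry intersections. A one-dimensional sketch with assumed units, not the code described in the abstract:

```python
import math
import random


def delta_track(sigma_t, sigma_maj, x0, rng):
    """Pseudo-collision (delta/Woodcock) tracking in 1-D: sample flight
    distances from the constant majorant cross section sigma_maj, then
    accept the collision as real with probability sigma_t(x)/sigma_maj.
    Rejections are pseudo-collisions that leave the flight unchanged.
    Requires sigma_t(x) <= sigma_maj everywhere."""
    x = x0
    while True:
        x += -math.log(rng.random()) / sigma_maj  # exponential flight length
        if rng.random() < sigma_t(x) / sigma_maj:
            return x  # real collision site
```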
penMesh--Monte Carlo radiation transport simulation in a triangle mesh geometry.
Badal, Andreu; Kyprianou, Iacovos; Banh, Diem Phuc; Badano, Aldo; Sempau, Josep
2009-12-01
We have developed a general-purpose Monte Carlo simulation code, called penMesh, that combines the accuracy of the radiation transport physics subroutines from PENELOPE and the flexibility of a geometry based on triangle meshes. While the geometric models implemented in most general-purpose codes--such as PENELOPE's quadric geometry--impose some limitations in the shape of the objects that can be simulated, triangle meshes can be used to describe any free-form (arbitrary) object. Triangle meshes are extensively used in computer-aided design and computer graphics. We took advantage of the sophisticated tools already developed in these fields, such as an octree structure and an efficient ray-triangle intersection algorithm, to significantly accelerate the triangle mesh ray-tracing. A detailed description of the new simulation code and its ray-tracing algorithm is provided in this paper. Furthermore, we show how it can be readily used in medical imaging applications thanks to the detailed anatomical phantoms already available. In particular, we present a whole body radiography simulation using a triangulated version of the anthropomorphic NCAT phantom. An example simulation of scatter fraction measurements using a standardized abdomen and lumbar spine phantom, and a benchmark of the triangle mesh and quadric geometries in the ray-tracing of a mathematical breast model, are also presented to show some of the capabilities of penMesh. PMID:19435677
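Ray-triangle intersection is the primitive that the octree accelerates. The standard Möller-Trumbore test is sketched below as a generic reference implementation, not penMesh's actual routine:

```python
def ray_triangle_intersect(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection. Returns the distance t
    along ray direction d from origin orig to the hit point, or None if
    the ray misses the triangle (v0, v1, v2)."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None  # ray parallel to triangle plane
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(d, q) * inv          # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None
```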
Millman, D. L. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States); Griesheimer, D. P.; Nease, B. R. [Bechtel Marine Propulsion Corporation, Bettis Atomic Power Laboratory (United States); Snoeyink, J. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States)]
2012-07-01
In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities. (authors)
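The stochastic fallback the authors mention can be sketched with plain hit-or-miss sampling: draw uniform points in a bounding box and scale the hit fraction by the box volume. This generic sketch stands in for the stochastic leaf-level step only; the paper's recursive decomposition and error-bound machinery are not reproduced here.

```python
import random


def stochastic_volume(inside, bbox, n, rng):
    """Hit-or-miss volume estimate for a CSG component: sample n points
    uniformly in the axis-aligned bounding box bbox = ((x0,x1),(y0,y1),
    (z0,z1)) and count how many satisfy the inside() predicate. The
    estimate is unbiased with relative error shrinking as 1/sqrt(n)."""
    (x0, x1), (y0, y1), (z0, z1) = bbox
    box_vol = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = sum(
        inside((rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(z0, z1)))
        for _ in range(n)
    )
    return box_vol * hits / n
```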
Core-scale solute transport model selection using Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Malama, Bwalya; Kuhlman, Kristopher L.; James, Scott C.
2013-06-01
Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (³H) and sodium-22 (²²Na), and the retarding solute uranium-232 (²³²U). The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single-porosity and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows single-porosity and double-porosity models are structurally deficient, yielding late-time residual bias that grows with time. On the other hand, the multirate model yields unbiased predictions consistent with the late-time -5/2 slope diagnostic of multirate mass transfer. The analysis indicates the multirate model is better suited to describing core-scale solute breakthrough in the Culebra Dolomite than the other two models.
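The residual-bias diagnostic used above reduces to averaging signed residuals over the late-time tail of the breakthrough curve. A minimal stand-in for that bookkeeping (not the paper's null-space Monte Carlo machinery):

```python
import numpy as np


def residual_bias(observed, predicted, t, t_late):
    """Mean signed residual (prediction bias) over the late-time tail
    t >= t_late; a bias that grows as t_late increases flags a
    structurally deficient transport model."""
    late = t >= t_late
    return float(np.mean(observed[late] - predicted[late]))
```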
Kinetic Monte Carlo (KMC) simulation of fission product silver transport through TRISO fuel particle
NASA Astrophysics Data System (ADS)
de Bellefon, G. M.; Wirth, B. D.
2011-06-01
A mesoscale kinetic Monte Carlo (KMC) model developed to investigate the diffusion of silver through the pyrolytic carbon and silicon carbide containment layers of a TRISO fuel particle is described. The release of radioactive silver from TRISO particles has been studied for nearly three decades, yet the mechanisms governing silver transport are not fully understood. This model atomically resolves Ag, but provides a mesoscale medium of carbon and silicon carbide, which can include a variety of defects including grain boundaries, reflective interfaces, cracks, and radiation-induced cavities that can either accelerate silver diffusion or slow diffusion by acting as traps for silver. The key input parameters to the model (diffusion coefficients, trap binding energies, interface characteristics) are determined from available experimental data, or parametrically varied, until more precise values become available from lower length scale modeling or experiment. The predicted results, in terms of the time/temperature dependence of silver release during post-irradiation annealing and the variability of silver release from particle to particle have been compared to available experimental data from the German HTR Fuel Program (Gontard and Nabielek [1]) and Minato and co-workers (Minato et al. [2]).
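The core of any KMC model of this kind is the residence-time (BKL/Gillespie) step: choose an event with probability proportional to its rate and advance the clock by an exponentially distributed waiting time. A generic sketch of that step, with rates purely illustrative:

```python
import bisect
import math
import random


def kmc_step(rates, rng):
    """One residence-time KMC step: given the rates of all events possible
    from the current state (e.g. silver hops, trap escapes), return the
    index of the chosen event and the elapsed waiting time."""
    cum = []
    total = 0.0
    for r in rates:
        total += r
        cum.append(total)  # cumulative rates for inverse-CDF selection
    u = rng.random() * total
    event = bisect.bisect_left(cum, u)
    dt = -math.log(rng.random()) / total  # exponential waiting time
    return event, dt
```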
Budanec, M; Knežević, Z; Bokulić, T; Mrcela, I; Vrtar, M; Vekić, B; Kusić, Z
2008-12-01
This work studied the percent depth doses of ⁶⁰Co photon beams in the buildup region of a plastic phantom by LiF TLD measurements and by Monte Carlo calculations. An agreement within ±1.5% was found between PDDs measured by TLD and calculated by the Monte Carlo method with the TLD in a plastic phantom. The dose in the plastic phantom was scored in voxels, with thickness scaled by physical and electron density. PDDs calculated with electron-density scaling showed the better match with the Monte Carlo-calculated TLD PDDs; the difference is within ±1.5% in the buildup region for square and rectangular field sizes. PMID:18541436
Massimo V. Fischetti; Steven E. Laux
1988-01-01
The physics of electron transport in Si and GaAs is investigated with use of a Monte Carlo technique which improves the "state-of-the-art" treatment of high-energy carrier dynamics. (1) The semiconductor is modeled beyond the effective-mass approximation by using the band structure obtained from empirical-pseudopotential calculations. (2) The electron-phonon, electron-impurity, and electron-electron scattering rates are computed in a way consistent with
Enhancing quantum transport in a photonic network using controllable decoherence
Devon N. Biggerstaff; René Heilmann; Aidan A. Zecevik; Markus Gräfe; Matthew A. Broome; Alessandro Fedrizzi; Stefan Nolte; Alexander Szameit; Andrew G. White; Ivan Kassal
2015-04-23
Transport phenomena on a quantum scale appear in a variety of systems, ranging from photosynthetic complexes to engineered quantum devices. It has been predicted that the efficiency of quantum transport can be enhanced through dynamic interaction between the system and a noisy environment. We report the first experimental demonstration of such environment-assisted quantum transport, using an engineered network of laser-written waveguides, with relative energies and inter-waveguide couplings tailored to yield the desired Hamiltonian. Controllable decoherence is simulated via broadening the bandwidth of the input illumination, yielding a significant increase in transport efficiency relative to the narrowband case. We show integrated optics to be suitable for simulating specific target Hamiltonians as well as open quantum systems with controllable loss and decoherence.
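The environment-assisted effect demonstrated above can be reproduced generically with a small Haken-Strobl (pure-dephasing) master equation: when site energies are detuned, coherent transfer is suppressed, and dephasing opens an incoherent transfer channel. The sketch below is a toy two-site model with illustrative parameters, unrelated to the experiment's actual waveguide Hamiltonian.

```python
import numpy as np


def dephasing_transport(h, gamma, rho0, t_final, steps=2000):
    """Toy Haken-Strobl model: Liouville-von Neumann evolution under
    Hamiltonian h plus pure on-site dephasing at rate gamma, integrated
    with fixed-step RK4. Returns the final density matrix."""
    n = h.shape[0]
    projs = [np.diag([1.0 if i == j else 0.0 for j in range(n)]).astype(complex)
             for i in range(n)]

    def drho(rho):
        out = -1j * (h @ rho - rho @ h)
        for p in projs:  # Lindblad dissipator for each site projector
            out = out + gamma * (p @ rho @ p - 0.5 * (p @ rho + rho @ p))
        return out

    rho = rho0.astype(complex)
    dt = t_final / steps
    for _ in range(steps):
        k1 = drho(rho)
        k2 = drho(rho + 0.5 * dt * k1)
        k3 = drho(rho + 0.5 * dt * k2)
        k4 = drho(rho + dt * k3)
        rho = rho + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return rho
```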
Dosimetric validation of Acuros XB with Monte Carlo methods for photon dose calculations
Bush, K.; Gagne, I. M.; Zavgorodni, S.; Ansbacher, W.; Beckham, W. [Department of Medical Physics, British Columbia Cancer Agency-Vancouver Island Center, Victoria, British Columbia V8R 6V5 (Canada)
2011-04-15
Purpose: The dosimetric accuracy of the recently released Acuros XB advanced dose calculation algorithm (Varian Medical Systems, Palo Alto, CA) is investigated for single radiation fields incident on homogeneous and heterogeneous geometries, and a comparison is made to the analytical anisotropic algorithm (AAA). Methods: Ion chamber measurements for the 6 and 18 MV beams within a range of field sizes (from 4.0 × 4.0 to 30.0 × 30.0 cm²) are used to validate Acuros XB dose calculations within a unit density phantom. The dosimetric accuracy of Acuros XB in the presence of lung, low-density lung, air, and bone is determined using BEAMnrc/DOSXYZnrc calculations as a benchmark. Calculations using the AAA are included for reference to a current superposition/convolution standard. Results: Basic open field tests in a homogeneous phantom reveal an Acuros XB agreement with measurement to within ±1.9% in the inner field region for all field sizes and energies. Calculations on a heterogeneous interface phantom were found to agree with Monte Carlo calculations to within ±2.0% (σ_MC = 0.8%) in lung (ρ = 0.24 g cm⁻³) and within ±2.9% (σ_MC = 0.8%) in low-density lung (ρ = 0.1 g cm⁻³). In comparison, differences of up to 10.2% and 17.5% in lung and low-density lung were observed in the equivalent AAA calculations. Acuros XB dose calculations performed on a phantom containing an air cavity (ρ = 0.001 g cm⁻³) were found to be within the range of ±1.5% to ±4.5% of the BEAMnrc/DOSXYZnrc calculated benchmark (σ_MC = 0.8%) in the tissue above and below the air cavity. A comparison of Acuros XB dose calculations performed on a lung CT dataset with a BEAMnrc/DOSXYZnrc benchmark shows agreement within ±2%/2 mm and indicates that the remaining differences are primarily a result of differences in physical material assignments within a CT dataset.
Conclusions: By considering the fundamental particle interactions in matter based on theoretical interaction cross sections, the Acuros XB algorithm is capable of modeling radiotherapy dose deposition with accuracy only previously achievable with Monte Carlo techniques.
NASA Astrophysics Data System (ADS)
Wang, Zi-Qing; Wang, Guo-Dong; Shen, Wei-Bo
2010-10-01
Multimotor transport is studied by Monte Carlo simulation, taking into account motor detachment from the filament. Our work shows that, at low load, the velocity of a multi-motor system can decrease or increase with increasing motor number, depending on the single-motor force-velocity curve. The stall force and run length are greatly reduced compared with other models. In particular, at low ATP concentrations, the stall force of multi-motor transport is even smaller than that of a single motor.
NASA Astrophysics Data System (ADS)
Tubiana, Jerome; Kass, Alex J.; Newman, Maya Y.; Levitz, David
2015-07-01
Detecting pre-cancer in epithelial tissues such as the cervix is a challenging task in low-resource settings. In an effort to achieve a low-cost cervical cancer screening and diagnostic method for use in low-resource settings, mobile colposcopes that use a smartphone as their engine have been developed. Designing image analysis software suited for this task requires proper modeling of light propagation from the abnormalities inside the tissue to the camera of the smartphone. Different simulation methods have been developed in the past, either by solving light diffusion equations or by running Monte Carlo simulations. Several algorithms exist for the latter, including MCML and the recently developed MCX. For imaging purposes, the observable parameter of interest is the reflectance profile of a tissue under a specific pattern of illumination and optical setup. Extensions of the MCX algorithm to simulate this observable under these conditions were developed. These extensions were validated against MCML and diffusion theory for the simple case of contact measurements, and reflectance profiles under colposcopy imaging geometry were also simulated. To validate this model, the diffuse reflectance profiles of tissue phantoms were measured with a spectrometer under several illumination and optical settings for various homogeneous tissue phantoms. The measured reflectance profiles showed a non-trivial deviation across the spectrum. Measurements of an added-absorber experiment on a series of phantoms showed that absorption of dye scales linearly when fit to both MCX and diffusion models. More work is needed to integrate a pupil into the experiment.
Landon, Colin Donald
2014-01-01
We present a deviational Monte Carlo method for solving the Boltzmann equation for phonon transport subject to the linearized ab initio 3-phonon scattering operator. Phonon dispersion relations and transition rates are ...
Walsh, J. A. [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, NW12-312 Albany, St. Cambridge, MA 02139 (United States); Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97331 (United States); Urbatsch, T. J. [XTD-5: Air Force Systems, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2013-07-01
A new method for generating discrete scattering cross sections to be used in charged particle transport calculations is investigated. The method of data generation is presented and compared to current methods for obtaining discrete cross sections. The new, more generalized approach allows greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data generated with the new method is verified through a comparison with discrete data obtained with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code package, Milagro. The implementation of this capability is verified using test problems with analytic solutions as well as a comparison of electron dose-depth profiles calculated with Milagro and an already-established electron transport code. An initial investigation of a preliminary integration of the discrete cross section generation method with the new charged particle transport capability in Milagro is also presented. (authors)
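One generic way to obtain discrete scattering data from a continuous model is to collapse the angular distribution into a set of discrete cosines and weights, placing each discrete cosine at the probability-weighted mean of its bin so the first angular moment is preserved. This is only an illustrative discretization, not the specific generation method investigated in the paper:

```python
import numpy as np


def discretize_angular_xs(pdf, n_bins, n_quad=2000):
    """Collapse a continuous angular distribution pdf(mu) on [-1, 1] into
    n_bins discrete (mu_i, w_i) pairs. Each discrete cosine is the bin's
    probability-weighted mean, so the first angular moment of the discrete
    set matches that of the continuous distribution (up to quadrature error).
    Uses a midpoint grid so no quadrature point sits on a bin edge."""
    dx = 2.0 / n_quad
    mu = -1.0 + (np.arange(n_quad) + 0.5) * dx
    p = pdf(mu)
    edges = np.linspace(-1.0, 1.0, n_bins + 1)
    mus, ws = [], []
    for a, b in zip(edges[:-1], edges[1:]):
        m = (mu > a) & (mu < b)
        w = float(p[m].sum() * dx)                     # bin probability
        if w > 0.0:
            mus.append(float((mu[m] * p[m]).sum() * dx / w))  # bin mean cosine
            ws.append(w)
    ws = np.asarray(ws)
    return np.asarray(mus), ws / ws.sum()
```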
The role of plasma evolution and photon transport in optimizing future advanced lithography sources
Harilal, S. S.
In this study, we analyzed in detail plasma evolution processes in laser-produced-plasma (LPP) systems, including thermal conduction in the material and in the plasma, vaporization, and the hydrodynamic evolution of the target vapor.
Few-Photon Transport in Nonlinear Cavity Arrays: Probing Signatures of Strongly Correlated States
Changhyoup Lee; Changsuk Noh; Nikolaos Schetakis; Dimitris G. Angelakis
2014-12-29
We consider the quantum transport of multi-photon Fock states in a one-dimensional coupled QED cavity array (CQCA) as a scheme to probe the underlying strongly correlated (many-body) states of the system. To this end, we employ the Lippmann-Schwinger formalism, within which an analytical form of the scattering matrix can be found. The latter is evaluated explicitly for the two-photon, two-site case, from which we calculate the scattering probabilities and auto-correlation functions. The results indicate that the underlying many-body structure of the CQCAs can be directly inferred from the probe spectrum and output photon statistics. Our method goes beyond the usual case of using coherent light fields to drive the system and paves the way for CQCAs to serve as a quantum simulator of aspects of quantum transport in strongly correlated bosonic systems.
Looe, Hui Khee; Harder, Dietrich; Poppe, Björn
2015-08-21
The purpose of the present study is to understand the mechanism underlying the perturbation of the field of the secondary electrons, which occurs in the presence of a detector in water as the surrounding medium. By means of 'reverse' Monte Carlo simulation, the points of origin of the secondary electrons contributing to the detector's signal are identified and associated with the detector's mass density, electron density and atomic composition. The spatial pattern of the origin of these secondary electrons, in addition to the formation of the detector signal by components from all parts of its sensitive volume, determines the shape of the lateral dose response function, i.e. of the convolution kernel K(x,y) linking the lateral profile of the absorbed dose in the undisturbed surrounding medium with the associated profile of the detector's signal. The shape of the convolution kernel is shown to vary essentially with the electron density of the detector's material, and to be attributable to the relative contribution by the signal-generating secondary electrons originating within the detector's volume to the total detector signal. Finally, the representation of the over- or underresponse of a photon detector by this density-dependent convolution kernel will be applied to provide a new analytical expression for the associated volume effect correction factor.
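The convolution relation described above, with the detector signal profile obtained by folding the lateral dose profile with a kernel K(x,y), can be illustrated in one dimension. The sketch below is illustrative only: the Gaussian stand-in kernel, its width, and the grid spacing are assumptions, not the density-dependent kernel derived in the study.

```python
import numpy as np

def detector_signal_profile(dose_profile, kernel, dx):
    """Convolve a lateral dose profile D(x) with a detector response
    kernel K(x) to obtain the measured signal profile M(x) = (K * D)(x).

    Both arrays are sampled on a uniform grid with spacing dx; the kernel
    is normalized to unit area so that the signal equals the dose for a
    spatially uniform field."""
    kernel = kernel / (kernel.sum() * dx)          # enforce unit area
    return np.convolve(dose_profile, kernel, mode="same") * dx

# Illustration: a 10 mm wide field profile blurred by a Gaussian kernel
# (an assumed stand-in for K(x); the real kernel depends on the
# detector's electron density and geometry).
x = np.arange(-20.0, 20.0, 0.1)                    # lateral position, mm
dose = np.where(np.abs(x) <= 5.0, 1.0, 0.0)        # idealized dose profile
sigma = 1.5                                        # mm, assumed kernel width
k = np.exp(-0.5 * (x / sigma) ** 2)
signal = detector_signal_profile(dose, k, dx=0.1)
# The blurred profile is reduced near the field edge and raised in the
# penumbra relative to the true dose, i.e. the detector volume effect.
```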
Wang, Lilie L. W.; Beddar, Sam [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)]
2011-03-15
Purpose: To investigate the response of plastic scintillation detectors (PSDs) in a 6 MV photon beam of various field sizes using Monte Carlo simulations. Methods: Three PSDs were simulated: a BC-400 and a BCF-12, each attached to a plastic-core optical fiber, and a BC-400 attached to an air-core optical fiber. PSD response was calculated as the detector dose per unit water dose for field sizes ranging from 10 × 10 down to 0.5 × 0.5 cm² for both perpendicular and parallel orientations of the detectors to an incident beam. Similar calculations were performed for a CC01 compact chamber. The off-axis dose profiles were calculated in the 0.5 × 0.5 cm² photon beam and were compared to the dose profile calculated for the CC01 chamber and that calculated in water without any detector. The angular dependence of the PSDs' responses in a small photon beam was studied. Results: In the perpendicular orientation, the response of the BCF-12 PSD varied by only 0.5% as the field size decreased from 10 × 10 to 0.5 × 0.5 cm², while the response of the BC-400 PSD attached to a plastic-core fiber varied by more than 3% at the smallest field size because of its longer sensitive region. In the parallel orientation, the response of both PSDs attached to a plastic-core fiber varied by less than 0.4% for the same range of field sizes. For the PSD attached to an air-core fiber, the response varied by, at most, 2% for both orientations. Conclusions: The responses of all the PSDs investigated in this work vary by only 1%-2% irrespective of field size and detector orientation, provided that the sensitive region is no more than 2 mm long and the optical fiber stems are prevented from pointing directly at the incident source.
Chow, James C L; Jiang, Runqing
2012-06-21
This study examines variations of bone and mucosal doses with variable soft tissue and bone thicknesses, mimicking the oral or nasal cavity in skin radiation therapy. Monte Carlo simulations (EGSnrc-based codes) using the clinical kilovoltage (kVp) photon and megavoltage (MeV) electron beams, and the pencil-beam algorithm (Pinnacle³ treatment planning system) using the MeV electron beams, were performed for the dose calculations. Phase-space files for the 105 and 220 kVp beams (Gulmay D3225 x-ray machine), and the 4 and 6 MeV electron beams (Varian 21 EX linear accelerator) with a field size of 5 cm diameter, were generated using the BEAMnrc code and verified against measurements. Inhomogeneous phantoms containing uniform water, bone and air layers were irradiated by the kVp photon and MeV electron beams. Relative depth, bone and mucosal doses were calculated for the uniform water and bone layers, whose thicknesses were varied in the ranges of 0.5-2 cm and 0.2-1 cm, respectively. A uniform water layer of bolus with thickness equal to the depth of maximum dose (dmax) of the electron beams (0.7 cm for 4 MeV and 1.5 cm for 6 MeV) was added on top of the phantom to ensure that the maximum dose was at the phantom surface. From our Monte Carlo results, the 4 and 6 MeV electron beams were found to produce insignificant bone and mucosal dose (<1%) when the uniform water layer at the phantom surface was thicker than 1.5 cm. When considering the 0.5 cm thin uniform water and bone layers, the 4 MeV electron beam deposited less bone and mucosal dose than the 6 MeV beam. Moreover, the 105 kVp beam was found to produce more than twice the bone dose of the 220 kVp beam when the uniform water thickness at the phantom surface was small (0.5 cm). However, the difference in bone dose enhancement between the 105 and 220 kVp beams became smaller as the thicknesses of the uniform water and bone layers in the phantom increased. 
Dose in the second bone layer interfacing with air was found to be higher for the 220 kVp beam than for the 105 kVp beam when the bone thickness was 1 cm. In this study, dose deviations of 18% and 17% for the bone and mucosal layers were found between our Monte Carlo results and the pencil-beam algorithm, which overestimated the doses. Relative depth, bone and mucosal doses were studied by varying the beam nature, beam energy and the thicknesses of the bone and uniform water layers, using an inhomogeneous phantom to model the oral or nasal cavity. While the dose distribution in the pharynx region is unavailable, owing to the lack of a commercial treatment planning system commissioned for kVp beam planning in skin radiation therapy, our study provides essential insight to help radiation staff justify and estimate bone and mucosal dose.
Monte Carlo simulations of electron transport in coupled Si quantum wells
NASA Astrophysics Data System (ADS)
Crow, G. C.; Abram, R. A.
1999-12-01
Coupled wide and narrow tensile-strained Si X2-valley quantum wells could provide the basis for an SiGe transistor operating in a velocity modulation mode. Ideally, charge would be rapidly switched between high- (wide) and low- (narrow) mobility channels under the action of an applied gate bias, with little or no modulation of the total channel charge density. With this application in mind, the Monte Carlo technique has been used to simulate in-plane electron transport for a back-doped Si/Si0.85Ge0.15 double-well structure. The equilibrium band profile from the Schottky or oxide top gate to the SiGe virtual substrate is such that electrons are confined to the narrow well when the gate is unbiased. The calculated in-plane mobility is sensitive to the transverse field applied across the quantum well structure - the maximum:minimum mobility ratio is 13:1 at 77 K. At 77 K, a lower mobility in the narrow channel (4000 cm² V⁻¹ s⁻¹) is mainly due to surface roughness and Coulomb scattering by supply layer impurities. The simulations also predict that mobility modulation would be far less effective at 300 K; scattering by acoustic and inelastic g phonons (LO, TA, LA modes), and the consequent distribution across the available subbands (≤9) of the entire well structure, reduce confinement to either the wide or narrow well, and hence the predicted maximum:minimum mobility ratio is less than 2:1.
Update on the Status of the FLUKA Monte Carlo Transport Code
NASA Technical Reports Server (NTRS)
Pinsky, L.; Anderson, V.; Empl, A.; Lee, K.; Smirnov, G.; Zapp, N.; Ferrari, A.; Tsoulou, K.; Roesler, S.; Vlachoudis, V.; Battisoni, G.; Ceruti, F.; Gadioli, M. V.; Garzelli, M.; Muraro, S.; Rancati, T.; Sala, P.; Ballarini, R.; Ottolenghi, A.; Parini, V.; Scannicchio, D.; Pelliccioni, M.; Wilson, T. L.
2004-01-01
The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. Here we review the progress achieved in the last year on the physics models. From the point of view of hadronic physics, most of the effort is still in the field of nucleus-nucleus interactions. The currently available version of FLUKA already includes the internal capability to simulate inelastic nuclear interactions beginning with lab kinetic energies of 100 MeV/A up to the highest accessible energies, by means of the DPMJET-II.5 event generator to handle the interactions above 5 GeV/A and rQMD for energies below that. The new developments concern, at high energy, the embedding of the DPMJET-III generator, which represents a major change with respect to the DPMJET-II structure. This will also allow better consistency to be achieved between the nucleus-nucleus sector and the original FLUKA model for hadron-nucleus collisions. Work is also in progress to implement a third event generator model based on the Master Boltzmann Equation approach, in order to extend the energy range from 100 MeV/A down to the threshold for these reactions. In addition to these extended physics capabilities, the program's input and scoring capabilities are continually being upgraded. In particular, we want to mention the upgrades in the geometry packages, now capable of reaching higher levels of abstraction. Work is also proceeding to provide direct import of the FLUKA output files into ROOT for analysis and to deploy a user-friendly GUI input interface.
Suppression of population transport and control of exciton distributions by entangled photons
Schlawin, Frank; Dorfman, Konstantin E.; Fingerhut, Benjamin P.; Mukamel, Shaul
2013-01-01
Entangled photons provide an important tool for secure quantum communication, computing and lithography. Low intensity requirements for multi-photon processes make them ideally suited for minimizing damage in imaging applications. Here we show how their unique temporal and spectral features may be used in nonlinear spectroscopy to reveal properties of multiexcitons in chromophore aggregates. Simulations demonstrate that they provide unique control tools for two-exciton states in the bacterial reaction centre of Blastochloris viridis. Population transport in the intermediate single-exciton manifold may be suppressed by the absorption of photon pairs with short entanglement time, thus allowing the manipulation of the distribution of two-exciton states. The quantum nature of the light is essential for achieving this degree of control, which cannot be reproduced by stochastic or chirped light. Classical light is fundamentally limited by the frequency-time uncertainty, whereas entangled photons have independent temporal and spectral characteristics not subject to this uncertainty.
Direct photon emission in Heavy Ion Collisions from Microscopic Transport Theory and Fluid Dynamics
Bjoern Baeuchle; Marcus Bleicher
2010-03-29
Direct photon emission in heavy-ion collisions is calculated within a relativistic micro+macro hybrid model and compared to the microscopic transport model UrQMD. In the hybrid approach, the high-density part of the collision is calculated by an ideal 3+1-dimensional hydrodynamic calculation, while the early (pre-equilibrium) and late (rescattering) phases are calculated with the transport model. Different scenarios of the transition from the macroscopic description to the transport model description, and their effects, are studied. The calculations are compared to measurements by the WA98 collaboration, and predictions for the future CBM experiment are made.
Tian, Zhen; Graves, Yan Jiang; Jia, Xun; Jiang, Steve B
2014-11-01
Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL it came from. The dose in water for each PSL was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights so as to minimize the difference between the calculated and measured doses. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for the open fields tested was improved on average from 70.56 to 99.36% for 2%/2 mm criteria and from 32.22 to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. 
The passing rate of the γ-index test within the 10% isodose line of the prescription dose was improved from 92.73 to 99.70% and from 82.16 to 96.73% for the 2%/2 mm and 1%/1 mm criteria, respectively. Real clinical data measured from Varian, Siemens, and Elekta linear accelerators were also used to validate our commissioning method, and a similar level of accuracy was achieved.
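The commissioning step described above, adjusting PSL weights so that the weighted sum of pre-computed PSL doses matches measurement, can be sketched as a small regularized least-squares problem. This is a generic illustration only: the projected-gradient solver, the first-difference smoothness operator, and the synthetic data are assumptions standing in for the paper's augmented Lagrangian formulation and symmetry regularization.

```python
import numpy as np

def commission_psl_weights(psl_doses, measured, beta=1e-3, iters=5000, lr=None):
    """Fit per-PSL weights w >= 0 so that the weighted sum of precomputed
    PSL dose vectors matches a measured dose vector:

        minimize ||A w - d||^2 + beta ||L w||^2   subject to w >= 0,

    where A holds one dose vector per PSL column and L is a first-
    difference operator penalizing non-smooth weight profiles."""
    A = np.asarray(psl_doses, dtype=float)
    d = np.asarray(measured, dtype=float)
    n = A.shape[1]
    L = (np.eye(n) - np.eye(n, k=1))[:-1]          # first differences
    H = A.T @ A + beta * L.T @ L                   # Hessian of the objective
    if lr is None:
        lr = 1.0 / np.linalg.norm(H, 2)            # stable step size
    w = np.ones(n)
    for _ in range(iters):
        grad = H @ w - A.T @ d
        w = np.maximum(w - lr * grad, 0.0)         # project onto w >= 0
    return w

# Synthetic check: recover smooth weights from noise-free "measurements".
rng = np.random.default_rng(0)
A_demo = rng.random((60, 8))                       # 8 PSLs, 60 dose points
w_demo = np.linspace(1.0, 2.0, 8)                  # smooth "true" weights
d_demo = A_demo @ w_demo
w_fit = commission_psl_weights(A_demo, d_demo)
```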
Griesheimer, D. P. [Bettis Atomic Power Laboratory, P.O. Box 79, West Mifflin, PA 15122 (United States)]; Stedry, M. H. [Knolls Atomic Power Laboratory, P.O. Box 1072, Schenectady, NY 12301 (United States)]
2013-07-01
A rigorous treatment of energy deposition in a Monte Carlo transport calculation, including coupled transport of all secondary and tertiary radiations, increases the computational cost of a simulation dramatically, making fully-coupled heating impractical for many large calculations, such as 3-D analysis of nuclear reactor cores. However, in some cases, the added benefit from a full-fidelity energy-deposition treatment is negligible, especially considering the increased simulation run time. In this paper we present a generalized framework for the in-line calculation of energy deposition during steady-state Monte Carlo transport simulations. This framework gives users the ability to select among several energy-deposition approximations with varying levels of fidelity. The paper describes the computational framework, along with derivations of four energy-deposition treatments. Each treatment uses a unique set of self-consistent approximations, which ensure that energy balance is preserved over the entire problem. By providing several energy-deposition treatments, each with different approximations for neglecting the energy transport of certain secondary radiations, the proposed framework provides users the flexibility to choose between accuracy and computational efficiency. Numerical results are presented, comparing heating results among the four energy-deposition treatments for a simple reactor/compound shielding problem. The results illustrate the limitations and computational expense of each of the four energy-deposition treatments. (authors)
NASA Astrophysics Data System (ADS)
Bergmann, Ryan
Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for ``Weaving All the Random Particles,'' is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to implement a continuous energy Monte Carlo neutron transport algorithm efficiently on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in both criticality and fixed-source modes. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. 
The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the reaction types as contiguous as possible and removes completed histories from the transport cycle. The sort reduces the amount of divergence in GPU ``thread blocks,'' keeps the SIMD units as full as possible, and avoids spending memory bandwidth on checking whether a neutron in the batch has been terminated. Using a remapping vector means the data access pattern is irregular, but this is mitigated by using large batch sizes, for which the GPU can effectively hide the high cost of irregular global memory access. WARP modifies the standard unionized energy grid implementation to reduce memory traffic. Instead of storing a matrix of pointers indexed by reaction type and energy, WARP stores three matrices. The first contains cross section values, the second contains pointers to angular distributions, and the third contains pointers to energy distributions. This linked-list type of layout increases memory usage but lowers the number of data loads needed to determine a reaction, by eliminating a pointer load to find a cross section value. Optimized, high-performance GPU code libraries are also used by WARP wherever possible. The CUDA performance primitives (CUDPP) library is used to perform the parallel reductions, sorts and sums; the CURAND library is used to seed the linear congruential random number generators; and the OptiX ray tracing framework is used for geometry representation. OptiX is a highly optimized library developed by NVIDIA that automatically builds hierarchical acceleration structures around user-input geometry, so only surfaces along a ray line need to be queried in ray tracing. WARP also performs material and cell number queries with OptiX by using a point-in-polygon-like algorithm. 
WARP has shown that GPUs are an effective platform for performing Monte Carlo neutron transport with continuous energy cross sections. Currently, WARP is the most detailed and feature-rich program in existence for performing continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs, but compared to production codes like Serpent and MCNP, WARP ha
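The remapping-vector idea described in this record, sorting index pairs by reaction type each iteration so that same-type events are contiguous and completed histories drop out of the active batch, can be sketched in a few lines. This is a CPU-side illustration with assumed reaction-type codes; `np.argsort` stands in for the parallel radix sort used on the GPU.

```python
import numpy as np

def remap_by_reaction(reaction_types):
    """Return (remap, active_count): particle indices sorted by reaction
    type, with terminated histories (type 0) sorted to the front and
    excluded from the active count.  The particle data itself is never
    moved; threads follow the index vector instead."""
    remap = np.argsort(reaction_types, kind="stable")
    active_count = int(np.count_nonzero(reaction_types))
    return remap, active_count

# One iteration's events: 0 = terminated, 1 = scatter, 2 = fission.
rxn = np.array([2, 0, 1, 1, 0, 2, 1, 0])
remap, n_active = remap_by_reaction(rxn)
active = remap[len(rxn) - n_active:]               # indices still in flight
# 'active' lists all scatter events first, then all fission events, so
# each contiguous run executes the same physics kernel without divergence.
```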
Franke, B. C.; Prinja, A. K.
2013-07-01
The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)
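The spectral-moment idea behind SGM and SCM, representing a random output as a polynomial expansion in the input randomness and reading moments off the expansion coefficients, can be illustrated non-intrusively for a single Gaussian input. The sketch below mirrors the non-intrusive collocation flavor rather than the intrusive SGM system of coupled moment equations; the quadrature order padding and the test function are assumptions for illustration.

```python
import math
import numpy as np

def pce_moments(f, order):
    """Estimate the mean and variance of f(xi), xi ~ N(0, 1), by
    projecting f onto probabilists' Hermite polynomials He_k with
    Gauss-Hermite quadrature.  Uses E[He_j He_k] = k! delta_jk, so
    mean = c_0 and var = sum_k k! c_k^2 for k >= 1."""
    x, w = np.polynomial.hermite.hermgauss(order + 8)
    xi = np.sqrt(2.0) * x                          # rescale nodes to N(0,1)
    w = w / np.sqrt(np.pi)                         # weights now sum to 1
    He = [np.ones_like(xi), xi]                    # He_0, He_1
    for k in range(1, order):                      # He_{k+1} = x He_k - k He_{k-1}
        He.append(xi * He[k] - k * He[k - 1])
    fx = f(xi)
    c = [np.sum(w * fx * He[k]) / math.factorial(k) for k in range(order + 1)]
    mean = c[0]
    var = sum(math.factorial(k) * c[k] ** 2 for k in range(1, order + 1))
    return mean, var

# f(xi) = xi^2 + 2 xi + 3 has exact mean 4 and variance 6 under N(0,1),
# recovered exactly by an order-2 expansion (no Monte Carlo noise).
mean, var = pce_moments(lambda t: t ** 2 + 2.0 * t + 3.0, order=2)
```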
NASA Astrophysics Data System (ADS)
Leuenroth, K. S.; Evans, T. E.; Finkenthal, D. F.
1999-11-01
Several sources of atomic data are available for simulation studies in the Monte Carlo Impurity (MCI) code. Various types of atomic data are used in MCI to determine the effects of forces acting on impurities and the ionization state of the impurity. Thus, atomic data has a significant impact on MCI's impurity transport simulations. Using the same DIII-D background plasma solution and flux surface geometry, we compare results from MCI's carbon impurity simulations using first ADPAK and then ADAS atomic data. This allows us to examine regions where significant transport differences occur in order to better understand which transport mechanisms are most sensitive to differences in these two data sources. Finally, by comparing carbon impurity distributions resulting from each atomic data source with experimental measurements in the DIII-D divertor, we can assess which atomic data gives a better match to the experimental results.
NASA Astrophysics Data System (ADS)
Matheson, Bruce; Hueser, Joseph
2008-08-01
The Direct Simulation Monte Carlo (DSMC) Analysis Code (DAC), as released by NASA, is a general-purpose, gas dynamic, transport analysis suite of codes. These codes have been acquired by Ball Aerospace & Technologies Corp. (BATC) through a software usage agreement and have been modified to perform a more detailed analysis of contaminant molecular transport for spacecraft and spacecraft instruments over mission lifetimes, typically 5 to 7 years. This transport model takes advantage of the proven algorithms within DAC to handle complex surface geometries and time-dependent gas dynamics. Additions to the code include diffusion of contaminants through solid surfaces, temperature- and coverage-dependent adsorption/desorption for the contaminants being modeled, and input data for molecular diameters, molecular weights, and diffusion parameters for the common contaminants found in spacecraft materials and coatings.
Künzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar
2009-12-21
The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogenous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near tissue equivalent radiochromic films which were placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (gamma) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to inplane and crossplane profiles. The average gamma value was 0.21 for all energies of simple test cases. For depth dose curves in asymmetric beams similar gamma results as for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement to measurement values with a dose difference of 0.1% +/- 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV energy 10 x 10 cm(2) field at the first density interface from tissue to lung equivalent material. Small fields (2 x 2 cm(2)) have their largest discrepancy in the re-build-up at the second interface (from lung to tissue equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. 
Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the heterogeneous phantom. For the clinical test cases, the average dose discrepancy was 0.5% +/- 1.1%. Relative dose investigations of the transverse plane for clinical beam arrangements were performed with a 2D gamma-evaluation procedure. For 3% dose difference and 3 mm DTA criteria, the average value for gamma(>1) was 4.7% +/- 3.7%, the average gamma(1%) value was 1.19 +/- 0.16 and the mean 2D gamma-value was 0.44 +/- 0.07 in the heterogeneous phantom. The iPlan MC algorithm leads to accurate dosimetric results under clinical test conditions. PMID:19934489
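The gamma evaluation used above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion: a point passes if some nearby point of the evaluated distribution agrees within both tolerances. A minimal 1-D numpy sketch of this standard metric (an illustrative implementation, not the one used in the study):

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.02, dta=2.0):
    """1-D gamma index: for each reference point, minimise the combined
    dose-difference / distance-to-agreement metric over the evaluated curve.
    dd is the dose criterion as a fraction of the max reference dose; dta in mm."""
    d_norm = dd * d_ref.max()
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        # Capital-Gamma surface over all evaluated points; gamma is its minimum
        g = np.sqrt(((x_eval - xr) / dta) ** 2 + ((d_eval - dr) / d_norm) ** 2)
        gammas.append(g.min())
    return np.array(gammas)

x = np.linspace(0, 100, 201)        # positions in mm
d = 100 * np.exp(-0.005 * x)        # toy depth-dose curve (illustrative)
g = gamma_1d(x, d, x, d)            # identical curves pass with gamma = 0
```

A criterion of "2%/2 mm" as in the abstract corresponds to `dd=0.02, dta=2.0`; gamma values below 1 indicate agreement within tolerance.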
Pasciak, Alexander Samuel
2007-04-25
Advancements in parallel and cluster computing have made many complex Monte Carlo simulations possible in the past several years. Unfortunately, cluster computers are large, expensive, and still not fast enough to make the ...
Adaptive δf Monte Carlo Method for Simulation of RF-heating and Transport in Fusion Plasmas
Höök, J.; Hellsten, T.
2009-11-26
Determining the distribution function of the plasma species is essential for modeling heating and transport of fusion plasmas. Characteristic of RF-heating is the creation of particle distributions with a high-energy tail. In the high-energy region the deviation from a Maxwellian distribution is large, while in the low-energy region the distribution is close to a Maxwellian due to the velocity dependence of the collision frequency. Because of geometry and orbit topology, Monte Carlo methods are frequently used. To avoid simulating the thermal part, δf methods are beneficial. Here we present a new δf Monte Carlo method with an adaptive scheme for reducing the total variance and sources, suitable for calculating the distribution function for RF-heating.
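The δf idea can be illustrated in one velocity dimension: split f = f_M + δf, treat the Maxwellian bulk analytically, and use weighted markers only for the deviation, so the thermal part contributes no variance. The distribution, proposal, and parameters below are toy choices for illustration, not the adaptive scheme of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D velocity distribution: Maxwellian bulk plus a weak hot tail,
# loosely mimicking an RF-heated species (all parameters illustrative)
T_bulk, T_tail, frac_tail = 1.0, 10.0, 0.01

def maxwellian(v, T):
    return np.exp(-v**2 / (2 * T)) / np.sqrt(2 * np.pi * T)

def f_total(v):
    return (1 - frac_tail) * maxwellian(v, T_bulk) + frac_tail * maxwellian(v, T_tail)

# Markers are drawn from a broad proposal g that covers the tail;
# each marker carries weight w = delta_f / g, which stays small everywhere.
N = 100_000
v = rng.normal(0.0, np.sqrt(T_tail), N)
g = maxwellian(v, T_tail)
w = (f_total(v) - maxwellian(v, T_bulk)) / g

# Second moment <v^2>: exact Maxwellian contribution + Monte Carlo correction
moment = T_bulk + np.mean(w * v**2)
# Exact answer: 0.99 * 1.0 + 0.01 * 10.0 = 1.09
```

Because only δf is sampled, the marker weights are tiny even though the bulk dominates the density, which is exactly why δf schemes win over full-f sampling for near-Maxwellian problems.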
NASA Astrophysics Data System (ADS)
Shen, Min
A method for Monte Carlo simulation of 2D spin-polarized electron transport in III-V semiconductor heterojunction Field Effect Transistors (FETs) is presented. In the simulation, the electron dynamics in coordinate and in-plane momentum space are treated semiclassically. The density matrix description of the spin is incorporated in the Monte Carlo method to account for the spin polarization dynamics. The spin-orbit interaction in the spin FET leads to both coherent evolution and dephasing of the electron spin polarization. Spin-independent scattering mechanisms, including optical phonons, acoustic phonons and ionized impurities, are implemented in the simulation. These scattering processes produce spin dephasing through the spin-orbit interaction. The electric field is determined self-consistently from the charge distribution resulting from the electron motion. Utilizing the above model, we study the in-plane transport of spin-polarized electrons in III-V semiconductor quantum wells (QWs). Monte Carlo simulations have been carried out for temperatures in the range 77-300 K. The above model is also applied to study the spin-polarized transport properties of a 2D electron gas in a semiconductor spin-FET structure. The specific symmetry of the spin-orbit terms (Rashba and Dresselhaus) leads to strong anisotropy of spin dynamics in the low-field regime. Coherent spin evolution and spin dephasing are investigated for different orientations of the device channel relative to the crystallographic axes. Efforts have been made to suppress spin dephasing while conserving the coherent oscillation of spin polarization required for spin-FET design. Results derived from this study provide useful information to assist in optimization of spin-FET performance. As an important step forward, the Monte Carlo model developed above is further extended to incorporate spin injection through the Schottky barrier, which is a very important topic in spintronics.
In this extended model, the spin polarization and the energy distribution function of the injection rate for electrons are determined by the thermionic emission and tunneling processes. With the extended Monte Carlo model, the effect of the width of a step-doping region near the Schottky contact on spin injection is investigated. Useful information is provided for barrier design to achieve efficient spin injection and a high spin-polarized current in spintronic devices.
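The dephasing mechanism underlying such simulations (precession about a momentum-dependent spin-orbit field whose direction is randomized by scattering) can be sketched with a toy D'yakonov-Perel model. The precession angle, ensemble size, and scattering statistics below are illustrative stand-ins, not the density-matrix treatment of the thesis:

```python
import numpy as np

rng = np.random.default_rng(6)

def rotate(s, axis, alpha):
    """Rodrigues rotation of spin vectors s (N x 3) about unit axes (N x 3)."""
    c, sn = np.cos(alpha), np.sin(alpha)
    dot = np.sum(s * axis, axis=1, keepdims=True)
    return s * c + np.cross(axis, s) * sn + axis * dot * (1 - c)

n = 2000
spins = np.zeros((n, 3))
spins[:, 2] = 1.0                      # ensemble initially polarised along z

history = []
for step in range(400):
    # random in-plane momentum direction -> random in-plane spin-orbit axis
    phi = rng.uniform(0, 2 * np.pi, n)
    axis = np.stack([np.cos(phi), np.sin(phi), np.zeros(n)], axis=1)
    spins = rotate(spins, axis, alpha=0.2)   # free precession between collisions
    history.append(spins[:, 2].mean())       # ensemble polarisation decays
```

Each electron stays fully polarised; the ensemble average decays because scattering decorrelates the precession axes, which is the essential content of spin dephasing through the spin-orbit interaction.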
Yahya Abadi, Akram; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Knaup, Courtney
2014-06-01
Some chemotherapy drugs contain a high-Z element in their structure that can be used for tumour dose enhancement in radiotherapy. In the present study, dose enhancement factors (DEFs) for cisplatin and titanocene dichloride agents in brachytherapy were quantified based on Monte Carlo simulation. Six photon-emitting brachytherapy sources were simulated, and their dose rate constants and radial dose functions were determined and compared with published data. DEFs were obtained for 1, 3 and 5% concentrations of cisplatin and titanocene dichloride chemotherapy agents in a tumour embedded in a soft tissue phantom. The results for the dose rate constant and radial dose function showed good agreement with published data. Our results have shown that, depending on the type of chemotherapy agent and brachytherapy source, DEF increases with increasing chemotherapy drug concentration. The maximum in-tumour averaged DEFs for cisplatin and titanocene dichloride are 4.13 and 1.48, respectively, reached with 5% concentrations of the agents and the (125)I source. The DEF is considerably higher for both chemotherapy agents with (125)I, (103)Pd and (169)Yb sources, compared to (192)Ir, (198)Au and (60)Co sources. At similar concentrations, dose enhancement for cisplatin is higher than for titanocene dichloride. Based on the results of this study, combining brachytherapy and chemotherapy with agents containing a high-Z element results in a higher radiation dose to the tumour. Therefore, concurrent use of chemotherapy and brachytherapy with high-atomic-number drugs has potential dose-enhancement benefits. However, more preclinical evaluations in this area are necessary before clinical application of this method. PMID:24706342
O'Brien, M J; Procassini, R J; Joy, K I
2009-03-09
Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the tally results as the simulation runs.
Hill, Jeffrey E.
FA-119: On-Farm Transport of Ornamental Fish. Tina C. Crosby, Jeffrey E. Hill, Carlos V. Martinez. Stress during handling and transport of fish will affect survival and overall quality of the fish (see UF/IFAS Circular 919). Stress can affect the immune system of the fish, increase susceptibility to disease, and may lead to illness.
Enhanced photon-assisted spin transport in a quantum dot attached to ferromagnetic leads
NASA Astrophysics Data System (ADS)
Souza, Fabrício M.; Carrara, Thiago L.; Vernek, E.
2011-09-01
We investigate the real-time dynamics of spin-polarized current in a quantum dot coupled to ferromagnetic leads in both parallel and antiparallel alignments. While an external bias voltage is held constant in time, a gate terminal, capacitively coupled to the quantum dot, introduces a periodic modulation of the dot level. Using the nonequilibrium Green's function technique, we find that spin-polarized electrons can tunnel through the system via additional photon-assisted transmission channels. Owing to a Zeeman splitting of the dot level, it is possible to select a particular spin component to be transferred, with photon assistance, from the left to the right terminal, with spin-dependent current peaks arising at different gate frequencies. The ferromagnetic electrodes enhance or suppress the spin transport depending upon the leads' magnetization alignment. The tunnel magnetoresistance also attains negative values due to a photon-assisted inversion of the spin-valve effect.
Photon Induced Transport in Graphene-Boron Nitride-Graphene Heterostructures
NASA Astrophysics Data System (ADS)
Nair, Nityan; Gabor, Nathaniel; Ma, Qiong; Watanabe, Kenji; Taniguchi, Takashi; Fang, Wenjing; Kong, Jing; Jarillo-Herrero, Pablo
2013-03-01
Monolayer graphene (MLG), an atomically thin sheet of hexagonally arranged carbon atoms, is a zero-bandgap conductor that exhibits strong electron-electron interactions and broadband optical absorption. By combining MLG and hexagonal boron nitride into ultrathin vertical stacks, experiments have demonstrated improved mobility, Coulomb drag, and field-effect tunneling across few-layer boron nitride barriers. Here, we report on the photon-induced transport of charge carriers through a graphene-boron nitride-graphene heterostructure. The dependence of the generated photocurrent on photon energy and interlayer bias voltage is studied; the photocurrent depends strongly on both parameters and shows several interesting features. We consider several processes that may explain the rich dependence of photoconductance on applied bias voltage and photon energy.
Hughes, S
2011-01-01
The input/output characteristics of coherent photon transport through a semiconductor cavity system containing a single quantum dot are presented. The nonlinear quantum optics formalism uses a master equation approach and focuses on a waveguide-cavity system containing a semiconductor quantum dot; our general technique also applies to studying coherent reflection from a micropillar cavity. We investigate the effects of light propagation and show the need for quantized multiphoton effects for various dot-cavity systems, including the weakly-coupled, intermediately-coupled, and strongly-coupled regimes. We demonstrate that for mean photon numbers much less than 0.1, the commonly adopted weak excitation (single quantum) approximation breaks down, even in the weak coupling regime. As a measure of the photon correlations, we compute the Fano factor and the error associated with making a semiclassical approximation. We also investigate the role of electron-acoustic-phonon scattering and show that phonon-mediated scatt...
Chow, J; Owrangi, A
2014-06-01
Purpose: This study compared the dependence of depth dose on bone heterogeneity for unflattened 6 MV photon beams with that for flattened beams. Monte Carlo simulations (EGSnrc-based codes) were used to calculate depth doses in a phantom with a bone layer in the buildup region of the 6 MV photon beams. Methods: A heterogeneous phantom containing a 2 cm thick bone layer at a depth of 1 cm in water was irradiated by unflattened and flattened 6 MV photon beams (field size = 10 × 10 cm²). Phase-space files of the photon beams, based on the Varian TrueBeam linac, were generated by the Geant4 and BEAMnrc codes and verified by measurements. Depth doses were calculated using the DOSXYZnrc code with beam angles set to 0° and 30°. For dosimetric comparison, the above simulations were repeated in a water phantom using the same beam geometry with the bone layer replaced by water. Results: Our results showed that the beam output of the unflattened photon beams was about 2.1 times larger than that of the flattened beams in water. Comparing the water phantom to the bone phantom, larger doses were found in water above and below the bone layer for both the unflattened and flattened photon beams. When both beams were turned to 30°, the deviation of depth dose between the bone and water phantoms became larger than at a beam angle of 0°. The dose ratio of the unflattened to flattened photon beams showed that the unflattened beam has a larger depth dose in the buildup region than the flattened beam. Conclusion: Although the unflattened photon beam had a different beam output and quality from the flattened beam, the dose enhancements due to bone scatter were found to be similar. However, the depth dose deviation due to the presence of bone was sensitive to beam obliquity.
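At the level of primary (narrow-beam) transmission, the effect of a bone layer is simple exponential attenuation, which a few lines of analog Monte Carlo reproduce. The attenuation coefficients below are illustrative round numbers, not NIST data, and this sketch ignores scatter and electron transport entirely (which is what full codes like DOSXYZnrc handle):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy layered phantom (cm): 1 cm water, 2 cm bone, 7 cm water below;
# each layer is (thickness, mu per cm) with illustrative coefficients
layers = [(1.0, 0.05), (2.0, 0.09), (7.0, 0.05)]

def transmitted_fraction(n_photons):
    """Analog MC: sample an exponential free path against each layer's mu
    and count photons whose first interaction lies beyond the phantom."""
    alive = np.ones(n_photons, dtype=bool)
    for thickness, mu in layers:
        path = rng.exponential(1.0 / mu, n_photons)  # free path in this layer
        alive &= path > thickness                    # first interaction -> removed
    return alive.mean()

mc = transmitted_fraction(200_000)
exact = np.exp(-sum(t * mu for t, mu in layers))     # Beer-Lambert check
```

Sampling a fresh exponential free path per layer is valid because of the memoryless property of the exponential distribution; the Monte Carlo estimate converges to the analytic Beer-Lambert value.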
Status of JAERI’s Monte Carlo Code MVP for Neutron and Photon Transport Problems
NASA Astrophysics Data System (ADS)
Mori, T.; Okumura, K.; Nagaya, Y.
The special features of MVP are (1) vectorization and parallelization, (2) multiple lattice capability and a statistical geometry model, (3) the probability table method for the unresolved resonance region, (4) calculation at arbitrary temperatures, (5) depletion calculation, (6) perturbation calculation for the eigenvalue (keff) problem, and so on.
Photon-transport properties of luminescent solar concentrators - Analysis and optimization
NASA Astrophysics Data System (ADS)
Roncali, J.; Garnier, F.
1984-08-01
The principle of a luminescent solar concentrator (LSC) is analyzed with an emphasis on the photon-transport yield. A mathematical model is developed which takes into account the loss factors related to photon transport in the LSC matrix. The relations obtained show that, whereas the optical efficiency decreases with LSC size, the concentration ratio can be optimized with regard to the geometry, the input surface, and the thickness of the LSC. The experimental analysis, carried out on two types of fluorescent PMMA, confirms the effects of these geometrical parameters on LSC performance. A concentration ratio of 22 has been obtained experimentally with monochromatic irradiation, and a flux gain of 9.5 has been determined under real conditions.
Wang, Lu; Yorke, Ellen; Desobry, Gregory; Chui, Chen-Shou
2002-01-01
Many lung cancer patients who undergo radiation therapy are treated with higher energy photons (15-18 MV) to obtain deeper penetration and better dose uniformity. However, the longer range of the higher energy recoil electrons in the low-density medium may cause lateral electronic disequilibrium and degrade the target coverage. To compare the dose homogeneity achieved with lower versus higher energy photon beams, we performed a dosimetric study of 6 and 15 MV three-dimensional (3D) conformal treatment plans for lung cancer using an accurate, patient-specific dose-calculation method based on a Monte Carlo technique. A 6 and 15 MV 3D conformal treatment plan was generated for each of two patients with target volumes exceeding 200 cm(3) on an in-house treatment planning system in routine clinical use. Each plan employed four conformally shaped photon beams. Each dose distribution was recalculated with the Monte Carlo method, utilizing the same beam geometry and patient-specific computed tomography (CT) images. Treatment plans using the two energies were compared in terms of their isodose distributions and dose-volume histograms (DVHs). The 15 MV dose distributions and DVHs generated by the clinical treatment planning calculations were as good as, or slightly better than, those generated for 6 MV beams. However, the Monte Carlo dose calculation predicted increased penumbra width with increased photon energy resulting in decreased lateral dose homogeneity for the 15 MV plans. Monte Carlo calculations showed that all target coverage indicators were significantly worse for 15 MV than for 6 MV; particularly the portion of the planning target volume (PTV) receiving at least 95% of the prescription dose (V(95)) dropped dramatically for the 15 MV plan in comparison to the 6 MV. Spinal cord and lung doses were clinically equivalent for the two energies. 
In treatment planning of tumors that abut lung tissue, lower energy (6 MV) photon beams should be preferred over higher energies (15-18 MV) because of the significant loss of lateral dose equilibrium for high-energy beams in the low-density medium. Any gains in radial dose uniformity across steep density gradients for higher energy beams must be weighed carefully against the lateral beam degradation due to penumbra widening. PMID:11818004
NASA Astrophysics Data System (ADS)
Schaefer, C.; Jansen, A. P. J.
2013-02-01
We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to efficiently use a limited amount of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections that is based on a stoichiometry matrix and a least-squares problem that is reduced to a non-singular set of linear equations that is applicable to any surface catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from literature.
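The mass-balance correction described above reduces to a small least-squares problem: find the smallest change to the stochastic rate estimates that restores exact conservation. A minimal numpy sketch with an illustrative constraint matrix (not the paper's actual stoichiometry):

```python
import numpy as np

# Toy conservation constraints: each row of E encodes one linear
# conservation law that the reaction-rate vector r must satisfy (E @ r = 0).
# Both E and the rates below are illustrative placeholders.
E = np.array([[1.0, 1.0, -2.0],
              [0.0, 1.0, -1.0]])

r_noisy = np.array([0.98, 1.03, 1.00])   # stochastic KMC rate estimates

# Smallest correction (in the least-squares sense) restoring conservation:
# minimise ||dr|| subject to E @ (r_noisy + dr) = 0.  For an underdetermined
# consistent system, lstsq returns exactly this minimum-norm solution.
dr, *_ = np.linalg.lstsq(E, -E @ r_noisy, rcond=None)
r_balanced = r_noisy + dr
```

After the projection the rates satisfy the conservation laws to machine precision while staying as close as possible to the raw stochastic estimates, which is the spirit of the paper's stoichiometry-matrix correction.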
Son, Sang-Kil; Santra, Robin (DOI: 10.1103/PhysRevA.85.063415)
2013-01-01
When atoms and molecules are irradiated by an x-ray free-electron laser (XFEL), they are highly ionized via a sequence of one-photon ionization and relaxation processes. To describe the ionization dynamics during XFEL pulses, a rate equation model has been employed. Even though this model is straightforward for the case of light atoms, it generates a huge number of coupled rate equations for heavy atoms like xenon, which are not trivial to solve directly. Here, we employ the Monte Carlo method to address this problem and we investigate ionization dynamics of xenon atoms induced by XFEL pulses at a photon energy of 4500 eV. Charge-state distributions, photoelectron and Auger electron spectra, and fluorescence spectra are presented for x-ray fluences of up to 10¹³ photons/μm². With the photon energy of 4500 eV, xenon atoms can be ionized up to +44 through multiphoton absorption characterized by sequential one-photon single-electron interactions.
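The Monte Carlo treatment of such sequential one-photon ionization amounts to event-by-event sampling of a continuous-time Markov chain, in the style of Gillespie's algorithm. The sketch below uses a toy charge-dependent rate and a flat pulse; the cross-section scaling, pulse model, and charge cap are illustrative values, not xenon data:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_charge(fluence, sigma=1.0, pulse=1.0, q_max=5):
    """Gillespie-style sampling of sequential one-photon ionisation.
    Each absorption raises the charge by one; the rate falls with charge
    as a stand-in for increasing binding energy (toy model)."""
    t, q = 0.0, 0
    while q < q_max:
        rate = fluence / pulse * sigma / (1 + q)  # ionisation rate at charge q
        t += rng.exponential(1.0 / rate)          # waiting time to next event
        if t > pulse:                             # pulse is over
            break
        q += 1
    return q

charges = [simulate_charge(fluence=5.0) for _ in range(20_000)]
mean_q = np.mean(charges)
```

Sampling trajectories instead of solving the full coupled rate-equation system sidesteps the combinatorial explosion of electronic configurations that the abstract describes for heavy atoms.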
Tattersall, W J; Boyle, G J; White, R D
2015-01-01
We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly non-equilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 381 (2002)], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and develop a modified method that does not require that approximation. We also present an enhanced MC technique designed to improve the accuracy and flexibility of simulations in spatially varying electric fields. All of the results are found to be in excellent agreement with an independent multi-term Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems.
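The structure-factor re-weighting of coherent scattering can be sketched by rejection sampling: propose angles from the dilute-gas cross-section (here taken isotropic) and accept with probability proportional to S(q), with q = 2k sin(θ/2). The S(q) below is an illustrative form that suppresses small-q forward scattering, not the Percus-Yevick structure factor:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_angle(k, n=100_000):
    """Rejection-sample polar scattering angles for a toy isotropic gas-phase
    cross-section re-weighted by a static structure factor S(q)."""
    def S(q):
        return 1.0 - np.exp(-(q / k) ** 2)      # toy structure factor, S(0) = 0

    angles = np.empty(0)
    while angles.size < n:
        theta = np.arccos(rng.uniform(-1, 1, n))  # isotropic in solid angle
        q = 2 * k * np.sin(theta / 2)             # momentum transfer
        keep = rng.uniform(0, 1, n) < S(q)        # S(q) <= 1: valid envelope
        angles = np.concatenate([angles, theta[keep]])
    return angles[:n]

angles = sample_angle(k=1.0)
```

Because S(q) vanishes at q → 0, forward scattering is suppressed and the sampled angular distribution is pushed toward backscattering, which is the qualitative effect coherent scattering in a structured medium has on the dilute-gas anisotropy.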
Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes
NASA Technical Reports Server (NTRS)
Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.
2010-01-01
The presentation outline includes motivation, the radiation transport codes being considered (HZETRN, UPROP, FLUKA, Geant4), the space radiation cases being considered (solar particle events and galactic cosmic rays), results for slab geometry, results for spherical geometry, and a summary.
Use of single scatter electron monte carlo transport for medical radiation sciences
Svatos, Michelle M. (Oakland, CA)
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
Fan Shanhui; Kocabas, Suekrue Ekin [Ginzton Laboratory, Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States); Shen, Jung-Tsung [Department of Electrical and Systems Engineering, Washington University, St. Louis, Missouri 63130 (United States)
2010-12-15
We extend the input-output formalism of quantum optics to analyze few-photon transport in waveguides with an embedded qubit. We provide explicit analytical derivations for one- and two-photon scattering matrix elements based on operator equations in the Heisenberg picture.
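For the one-photon sector, the scattering matrix of a two-level emitter in a lossless 1-D waveguide has the well-known closed form t(δ) = δ/(δ + iΓ/2), r(δ) = -iΓ/2/(δ + iΓ/2), with δ the photon detuning and Γ the waveguide decay rate. A quick numerical check of its flux conservation (this is the standard one-photon result, not the two-photon matrix elements derived in the paper):

```python
import numpy as np

def transmission(delta, gamma):
    """Single-photon transmission amplitude for a two-level emitter
    coupled to a lossless 1-D waveguide."""
    return delta / (delta + 1j * gamma / 2)

def reflection(delta, gamma):
    """Corresponding reflection amplitude; |t|^2 + |r|^2 = 1 for a lossless line."""
    return -1j * (gamma / 2) / (delta + 1j * gamma / 2)

delta = np.linspace(-10, 10, 2001)
t = transmission(delta, gamma=1.0)
r = reflection(delta, gamma=1.0)
```

On resonance (δ = 0) the photon is perfectly reflected, the signature interference effect that makes a single qubit act as a mirror in these waveguide systems.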
Bauer, Thilo; Jäger, Christof M; Jordan, Meredith J T; Clark, Timothy
2015-07-28
We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect of diatomic differential overlap Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism like hopping- or band-transport in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to source-drain voltage. The field-effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves. PMID:26233114
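A Metropolis-like acceptance criterion of the kind described can be illustrated with a toy lattice walker: an external field enters only through the acceptance rule, yet a net drift emerges, mimicking how a bias voltage drives carriers in such a model. All parameters are illustrative; this is not the NDDO Hamiltonian or electrode model of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def metropolis_walk(n_moves, field=0.2, kT=1.0):
    """Metropolis transport sketch: a carrier on a 1-D lattice attempts +/-1
    hops; a uniform field tilts the energy landscape, so moves with the field
    are always accepted and moves against it only sometimes."""
    x = 0
    for _ in range(n_moves):
        step = rng.choice([-1, 1])
        dE = -field * step                         # moving with the field lowers energy
        if dE <= 0 or rng.uniform() < np.exp(-dE / kT):
            x += step                              # accept the move
    return x

drift = np.mean([metropolis_walk(2000) for _ in range(200)])
```

As in the paper, there is no physical time axis; the number of accepted moves plays the role of pseudo-time, and the mean displacement per move is the analogue of a current.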
Chow, James C.L.; Owrangi, Amir M.
2012-07-01
Dependences of mucosal dose in the oral or nasal cavity on the beam energy, beam angle, multibeam configuration, and mucosal thickness were studied for small photon fields using Monte Carlo simulations (EGSnrc-based code), which were validated by measurements. Cylindrical mucosa phantoms (mucosal thickness = 1, 2, and 3 mm), with and without bone and air inhomogeneities, were irradiated by 6- and 18-MV photon beams (field size = 1 × 1 cm²) with gantry angles equal to 0°, 90°, and 180°, and with multibeam configurations using 2, 4, and 8 photon beams in different orientations around the phantom. Doses along the central beam axis in the mucosal tissue were calculated. The mucosal surface doses were found to decrease slightly (1% for the 6-MV photon beam and 3% for the 18-MV beam) with an increase of mucosal thickness from 1 to 3 mm when the beam angle was 0°. The variation of mucosal surface dose with thickness became insignificant when the beam angle was changed to 180°, but the dose at the bone-mucosa interface was found to increase (28% for the 6-MV photon beam and 20% for the 18-MV beam) with mucosal thickness. For different multibeam configurations, the dependence of mucosal dose on thickness became insignificant as the number of photon beams around the mucosal tissue was increased. The mucosal dose with bone thus varied with beam energy, beam angle, multibeam configuration, and mucosal thickness for a small segmental photon field. These dosimetric variations are important to consider when refining the treatment strategy, so that mucosal complications in head-and-neck intensity-modulated radiation therapy can be minimized.
Chetty, Indrin J.; Curran, Bruce; Cygler, Joanna E.; DeMarco, John J.; Ezzell, Gary; Faddegon, Bruce A.; Kawrakow, Iwan; Keall, Paul J.; Liu, Helen; Ma, C.-M. Charlie; Rogers, D. W. O.; Seuntjens, Jan; Sheikh-Bagheri, Daryoush; Siebers, Jeffrey V.
2007-12-15
The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features, are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. 
The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and experimental verification of MC dose algorithms. As the MC method is an emerging technology, this report is not meant to be prescriptive. Rather, it is intended as a preliminary report to review the tenets of the MC method and to provide the framework upon which to build a comprehensive program for commissioning and routine quality assurance of MC-based treatment planning systems.
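Among the components the report highlights, statistical uncertainty is the most readily illustrated: MC dose tallies are routinely accompanied by a batch-means estimate of their standard error. A minimal sketch, in which the "tally" is a placeholder random variable rather than a real dose calculation:

```python
import numpy as np

rng = np.random.default_rng(7)

# Batch-means estimate of the statistical uncertainty of an MC tally:
# split the histories into independent batches, average each batch,
# and use the spread of batch means as the uncertainty of the grand mean.
n_batches, per_batch = 20, 50_000
batch_means = [rng.exponential(1.0, per_batch).mean() for _ in range(n_batches)]

dose = np.mean(batch_means)                                 # tally estimate
sigma = np.std(batch_means, ddof=1) / np.sqrt(n_batches)    # 1-sigma of the mean
rel_unc = sigma / dose                                      # relative uncertainty
```

A commissioning physicist would check that the reported relative uncertainty scales as 1/sqrt(N) with the number of histories, one of the basic sanity tests for an MC planning system.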
Spatio-temporal visualization of light transport in complex photonic structures
Pattelli, Lorenzo; Burresi, Matteo; Wiersma, Diederik S
2015-01-01
Spatio-temporal imaging of light propagation is of significant importance in photonics, since it provides the most direct tool for studying the interaction between light and its host environment. Sub-ps time resolution is needed to appreciate fine and complex structural features such as those occurring in disordered and heterogeneous structures, which are responsible for a rich transport physics that still needs to be fully explored. Thanks to a newly developed wide-field imaging system, we present a spatio-temporal study of light transport in various disordered media, revealing properties that could not be properly assessed with standard techniques. By extending our investigation to an almost transparent membrane - a configuration that has hardly been possible to characterize to date - we unveil the peculiar physics exhibited by such thin scattering systems, with transport features that go beyond mainstream diffusion modeling despite the occurrence of multiple scattering.
Orazio Muscato; Vincenza Di Stefano
2011-01-01
Purpose – The purpose of this paper is to set up a consistent off-equilibrium thermodynamic theory to deal with the self-heating of electronic nano-devices. Design/methodology/approach – From the Bloch-Boltzmann-Peierls kinetic equations for the coupled system formed by electrons and phonons, an extended hydrodynamic model (HM) has been obtained on the basis of the maximum entropy principle. An electrothermal Monte Carlo
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2)-O(10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
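The arithmetic at the heart of CADIS is compact enough to sketch: an approximate adjoint flux yields both a biased source and consistent weight-window targets. The per-cell arrays below are hypothetical and energy dependence is omitted; this is an illustration of the construction, not the MAVRIC or ADVANTG implementation:

```python
def cadis_parameters(source, adjoint_flux):
    """Sketch of the CADIS construction on a per-cell basis.

    source[i]       : forward source strength in cell i (toy values)
    adjoint_flux[i] : adjoint (importance) flux in cell i, e.g. from a
                      fast-running discrete-ordinates calculation
    Returns the biased source pdf and the weight-window target weights,
    chosen so particles are born at their window centers (the
    'consistent' part of CADIS).
    """
    # estimated detector response R = sum_i q_i * phi_adj_i
    response = sum(q * a for q, a in zip(source, adjoint_flux))
    # biased source: sample cells in proportion to importance-weighted strength
    biased_pdf = [q * a / response for q, a in zip(source, adjoint_flux)]
    # birth weight in cell i: w_i = R / phi_adj_i, so b_i * w_i recovers q_i
    weights = [response / a for a in adjoint_flux]
    return biased_pdf, weights

q = [1.0, 2.0, 1.0]          # toy source strengths
phi_adj = [0.1, 0.5, 2.0]    # toy importances; cells near the detector dominate
b, w = cadis_parameters(q, phi_adj)
```

Because b_i · w_i = q_i, the biased game remains fair: every cell's expected contribution to the response is preserved while sampling effort shifts toward important cells.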
NASA Astrophysics Data System (ADS)
Bentabet, A.
The purpose of this work is to show the hidden effect of the first transport cross-section in Monte Carlo studies of electron beams impinging on solid targets. Our study is based on our model of the differential elastic cross-section [A. Bentabet, Z. Chaoui, A. Aydin and A. Azbouche, Vacuum 85 (2010) 156], which leaves a single free parameter that is adjusted to the total elastic cross-section on one hand and to the transport cross-section on the other. We believe this work will be useful to those who develop semi-empirical or analytical models of elastic cross-sections. The present work deals with the total elastic cross-section, the first transport cross-section, the polar diffusion angle and the backscattering coefficient for low-energy electrons at normal incidence on Al, Si, Ag and Au solid targets; the results are compared with those in the literature and discussed.
Enhanced photon-assisted spin transport in a quantum dot attached to ferromagnetic leads
NASA Astrophysics Data System (ADS)
Souza, Fabricio M.; Carrara, Thiago L.; Vernek, Edson
2012-02-01
Time-dependent transport in quantum dot systems (QDs) has received significant attention due to a variety of new quantum physical phenomena emerging on transient time scales [1]. In the present work [2] we investigate the real-time dynamics of spin-polarized current in a quantum dot coupled to ferromagnetic leads in both parallel and antiparallel alignments. While an external bias voltage is held constant in time, a gate terminal, capacitively coupled to the quantum dot, introduces a periodic modulation of the dot level. Using the nonequilibrium Green's function technique we find that spin-polarized electrons can tunnel through the system via additional photon-assisted transmission channels. Owing to a Zeeman splitting of the dot level, it is possible to select a particular spin component to be photon-transferred from the left to the right terminal, with spin-dependent current peaks arising at different gate frequencies. The ferromagnetic electrodes enhance or suppress the spin transport depending upon the leads' magnetization alignment. The tunnel magnetoresistance also attains negative values due to a photon-assisted inversion of the spin-valve effect. [1] F. M. Souza, Phys. Rev. B 76, 205315 (2007). [2] F. M. Souza, T. L. Carrara, and E. Vernek, Phys. Rev. B 84, 115322 (2011).
Araki, Fujio
2012-07-01
The purpose of this study was to calculate the replacement correction factor, P(repl) (the product P(gr)P(fl) in the AAPM's notation, or the product p(cav)p(dis) in the IAEA's notation), at a reference depth, d(ref), for cylindrical chamber cavities in clinical photon and electron beams by Monte Carlo simulation. P(repl) was calculated for cavities with a combination of various diameters and lengths. P(repl) values calculated in photon and electron beams were typically higher than those recommended by the TG-51 and TRS-398 dosimetry protocols. P(repl) values for a Farmer chamber cavity were higher by 0.3 to 0.2% and by 0.7 to 0.4%, respectively, than data of TG-51 and TRS-398, at photon energies of (60)Co to 18 MV. Similarly, the P(repl) values for electron beams were higher by 1.5 to 1.1% than data for both protocols, in a range of 6-18 MeV. The P(repl) values depended upon the cavity diameter and length, especially for lower electron energies. We found that P(repl) values of cylindrical chamber cavities for photon and electron beams were significantly different from those recommended by TG-51 and TRS-398. PMID:22528140
Yong-Chun Liu; Yun-Feng Xiao; Bei-Bei Li; Xue-Feng Jiang; Yan Li; Qihuang Gong
2011-06-08
We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system, and find that it plays a significant role in photon transport. On one hand, this study provides new insight for future solid-state cavity quantum electrodynamics aimed at strong-coupling physics. On the other hand, benefiting from this Rayleigh scattering, novel photon transport phenomena such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to [B. Dayan et al., Science 319, 1062 (2008)], the photon turnstiles proposed here are highly immune to the nanocrystal's azimuthal position.
A Modified Treatment of Sources in Implicit Monte Carlo Radiation Transport
Gentile, N A; Trahan, T J
2011-03-22
We describe a modification of the treatment of photon sources in the IMC algorithm. We describe this modified algorithm in the context of thermal emission in an infinite medium test problem at equilibrium and show that it completely eliminates statistical noise.
Cascaded two-photon spectroscopy of Yb atoms with a transportable effusive atomic beam apparatus
NASA Astrophysics Data System (ADS)
Song, Minsoo; Yoon, Tai Hyun
2013-02-01
We present a transportable effusive atomic beam apparatus for cascaded two-photon spectroscopy of the dipole-forbidden transition (6s² ¹S₀ ↔ 6s7s ¹S₀) of Yb atoms. An ohmic-heating effusive oven is designed to have a reservoir volume of 1.6 cm³ and a high degree of atomic beam collimation, with a collimation angle of 30 mrad. The new atomic beam apparatus allows us to detect the spontaneously emitted cascaded two photons from the 6s7s ¹S₀ state via the intercombination 6s6p ³P₁ state with a high signal-to-noise ratio even at a temperature of 340 °C. This is made possible in our apparatus by the enhanced atomic beam flux and superior detection solid angle.
Topological spin transport of photons: the optical Magnus Effect and Berry Phase
K. Yu. Bliokh; Yu. P. Bliokh
2004-02-21
The paper develops a modified geometrical optics (GO) of smoothly inhomogeneous isotropic medium, which takes into account two topological phenomena: Berry phase and the optical Magnus effect. By using the analogy between a quasi-classical motion of a quantum particle with a spin and GO of an electromagnetic wave in smoothly inhomogeneous media, we have introduced the standard gauge potential associated with the degeneracy in the wave momentum space. This potential corresponds to the Dirac-monopole-like field (Berry curvature), which causes the topological spin (polarization) transport of photons. The deviations of waves of right-hand and left-hand helicity occur in the opposite directions and orthogonally to the principal direction of motion. This produces a spin current directed across the principal motion. The situation is similar to the anomalous Hall effect for electrons. In addition, a simple scheme of the experiment allowing one to observe the topological spin splitting of photons has been suggested.
Kumar, Sudhir; Deshpande, Deepak D; Nahum, Alan E
2015-01-21
The relationships between D, K and Kcol are of fundamental importance in radiation dosimetry. These relationships are critically influenced by secondary electron transport, which makes Monte-Carlo (MC) simulation indispensable; we have used the MC codes DOSRZnrc and FLURZnrc. Computations of the ratios D/K and D/Kcol in three materials (water, aluminum and copper) for large field sizes with energies from 50 keV to 25 MeV (including 6-15 MV) are presented. Beyond the depth of maximum dose D/K is almost always less than or equal to unity and D/Kcol greater than unity, and these ratios are virtually constant with increasing depth. The difference between K and Kcol increases with energy and with the atomic number of the irradiated materials. D/K in 'sub-equilibrium' small megavoltage photon fields decreases rapidly with decreasing field size. A simple analytical expression for X̄, the distance 'upstream' from a given voxel to the mean origin of the secondary electrons depositing their energy in this voxel, is proposed: X̄_emp ≈ 0.5 R_csda(Ē₀), where Ē₀ is the mean initial secondary electron energy. These X̄_emp agree well with 'exact' MC-derived values for photon energies from 5-25 MeV for water and aluminum. An analytical expression for D/K is also presented and evaluated for 50 keV-25 MeV photons in the three materials, showing close agreement with the MC-derived values. PMID:25548933
NASA Astrophysics Data System (ADS)
Esposito, Luca; Gerace, Dario
2013-07-01
We present a perturbative approach to derive the semiclassical equations of motion for the two-dimensional electron dynamics under the simultaneous presence of static electric and magnetic fields, where the quantized Hall conductance is known to be directly related to the topological properties of translationally invariant magnetic Bloch bands. In close analogy to this approach, we develop a perturbative theory of two-dimensional photonic transport in gyrotropic photonic crystals to mimic the physics of quantum Hall systems. We show that a suitable permittivity grading of a gyrotropic photonic crystal is able to simulate the simultaneous presence of analog electric and magnetic field forces for photons, and we rigorously derive the topology-related term in the equation for the electromagnetic energy velocity that is formally equivalent to the electronic case. A possible experimental configuration is proposed to observe a bulk photonic analog to the quantum Hall physics in graded gyromagnetic photonic crystals.
Monte Carlo simulation studies of spin transport in graphene armchair nanoribbons
NASA Astrophysics Data System (ADS)
Salimath, Akshay Kumar; Ghosh, Bahniman
2014-10-01
The research in the area of spintronics is gaining momentum due to the promise that spintronics-based devices have shown. Since the spin degree of freedom of an electron is used to store and process information, spintronics can provide numerous advantages over conventional electronics by enabling new functionalities. In this article, we study spin relaxation in graphene nanoribbons (GNRs) of armchair type by employing a semiclassical Monte Carlo approach. D'yakonov-Perel' relaxation due to structural inversion asymmetry (Rashba spin-orbit coupling) and Elliott-Yafet (EY) relaxation cause spin dephasing in armchair graphene nanoribbons. We investigate spin relaxation in α-, β- and γ-armchair GNRs with varying width and temperature.
Greenman, G M; O'Brien, M J; Procassini, R J; Joy, K I
2009-03-09
Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculation of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement adds parallelism to the CG tracker via spatial domain decomposition. This permits calculation of problems with a large degree of geometric complexity, which are not possible through particle parallelism alone. In this method, the cells are decomposed across processors and a particle is communicated to an adjacent processor when it tracks to an interprocessor boundary. Applications that demonstrate the efficacy of these new methods are presented.
Delta f Monte Carlo Calculation Of Neoclassical Transport In Perturbed Tokamaks
Kim, Kimin; Park, Jong-Kyu; Kramer, Gerrit; Boozer, Allen H.
2012-04-11
Non-axisymmetric magnetic perturbations can fundamentally change neoclassical transport in tokamaks by distorting particle orbits on deformed or broken flux surfaces. This so-called non-ambipolar transport is highly complex, and eventually a numerical simulation is required to achieve its precise description and understanding. A new δf particle code (POCA) has been developed for this purpose using a modified pitch-angle collision operator preserving momentum conservation. POCA was successfully benchmarked for neoclassical transport and momentum conservation in the axisymmetric configuration. Non-ambipolar particle flux is calculated in the non-axisymmetric case, and results show a clear resonant nature of non-ambipolar transport and magnetic braking. Neoclassical toroidal viscosity (NTV) torque is calculated using anisotropic pressures and the magnetic field spectrum, and compared with the generalized NTV theory. Calculations indicate a clear δB² dependence of NTV, and good agreement with theory on NTV torque profiles and amplitudes depending on collisionality.
Adjoint-based deviational Monte Carlo methods for phonon transport calculations
Peraud, Jean-Philippe Michel
In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized ...
Monte Carlo Study of Thermal Transport of Frequency and Direction Dependent Reflecting
Walker, D. Greg
the carbon and boron nitride nanotubes. Through phonon transport simulations we provide theoretical ... Starr [1] found that copper/cuprous oxide systems showed thermal as well as electrical rectification
Accelerator structure and beam transport system for the KEK photon factory injector
NASA Astrophysics Data System (ADS)
Sato, Isamu
1980-11-01
The injector is a 2.5 GeV electron linac which serves multiple purposes, being not only the injector for the various storage rings of the Photon Factory but also for the next planned project, the TRISTAN RING, and also as an intense electron or γ-ray source for research on phenomena in widely diverse scientific fields. The accelerator structure and beam transport system for the linac were designed with the greatest care in order to avoid beam blow-up difficulties, and also to be as suitable as possible to enable the economical mass production of the accelerator guides and focusing magnets.
Line photon transport in a non-homogeneous plasma using radiative coupling coefficients
NASA Astrophysics Data System (ADS)
Florido, R.; Gil, J. M.; Rodríguez, R.; Rubiano, J. G.; Martel, P.; Mínguez, E.
2006-06-01
We present a steady-state collisional-radiative (CR) model for the calculation of level populations in non-homogeneous plasmas with planar geometry. The line photon transport is taken into account following an angle- and frequency-averaged escape probability model. Several models using the same approach can be found in the literature, but the main difference between our model and those is that the details of the geometry are treated exactly in the definition of the coupling coefficients, and a local profile is taken into account in each plasma cell.
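The angle- and frequency-averaged escape probability used in such CR models can itself be estimated with a small Monte Carlo experiment. The sketch below assumes a uniform slab, complete redistribution and a Doppler line profile; it illustrates the concept only and is not the coupling-coefficient treatment of the paper:

```python
import math, random

def escape_probability(tau0, n_samples=50_000, seed=2):
    """Monte Carlo estimate of the angle- and frequency-averaged
    single-flight escape probability from a uniform slab.

    tau0 is the line-centre optical depth across the full slab.  A
    Doppler (Gaussian) line profile is assumed, photons are emitted
    isotropically and uniformly in depth, and emission frequencies are
    drawn from the line profile itself (complete redistribution).
    """
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_samples):
        # frequency offset in Doppler widths, drawn from the profile exp(-x^2)
        x = rng.gauss(0.0, 1.0 / math.sqrt(2.0))
        profile = math.exp(-x * x)        # normalized to 1 at line centre
        z = rng.random()                  # fractional emission depth in the slab
        mu = 0.0
        while mu == 0.0:                  # direction cosine, isotropic emission
            mu = rng.uniform(-1.0, 1.0)
        # optical depth to the escape surface the photon is heading toward
        tau = tau0 * profile * (z if mu < 0.0 else 1.0 - z) / abs(mu)
        if rng.random() < math.exp(-tau):
            escaped += 1
    return escaped / n_samples

p_thin = escape_probability(0.01)    # nearly transparent: most photons escape
p_thick = escape_probability(50.0)   # thick at line centre: mainly wing escape
```

For an optically thin slab the estimate approaches unity, while for a thick slab only line-wing photons and near-surface emission escape.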
Taskaev, Sergey Yur'evich
inflow into cooling channels of the beam absorber. At near-threshold mode the neutrons generated have ...
Optimization of the target of an accelerator-driven neutron source through Monte Carlo numerical simulation of neutron and gamma transport by the PRIZMA code
Kandiev, Ya.; Kashaeva, E.; Malyshkin, G.
The energy band memory server algorithm for parallel Monte Carlo transport calculations
NASA Astrophysics Data System (ADS)
Felker, Kyle G.; Siegel, Andrew R.; Smith, Kord S.; Romano, Paul K.; Forget, Benoit
2014-06-01
An algorithm is developed to significantly reduce the on-node footprint of cross section memory in Monte Carlo particle tracking algorithms. The classic method of per-node replication of cross section data is replaced by a memory server model, in which the read-only lookup tables reside on a remote set of disjoint processors. The main particle tracking algorithm is then modified to enable efficient use of the remotely stored data. Results of a prototype code on a Blue Gene/Q installation reveal that the penalty for remote storage is reasonable in the context of time scales for real-world applications, thus yielding a path forward for a broad range of applications that are memory bound using current techniques.
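The division of labor can be mimicked in serial code: "server" objects own disjoint energy bands of a read-only cross-section table, and the tracking loop routes each lookup to the owner. Class names, grids and values below are illustrative stand-ins; in the actual algorithm the lookup is a message to a remote rank:

```python
import bisect

class BandServer:
    """Owns the read-only cross-section table for one energy band.

    In the memory-server model each such object would live on a remote
    set of processors; `lookup` here stands in for the message exchange.
    The grids and values are illustrative, not real nuclear data.
    """
    def __init__(self, energies, values):
        self.energies = energies          # ascending energy grid for this band
        self.values = values              # cross-section values on that grid

    def lookup(self, energy):
        """Linear interpolation of the cross-section at `energy`."""
        i = bisect.bisect_right(self.energies, energy) - 1
        i = max(0, min(i, len(self.energies) - 2))
        e0, e1 = self.energies[i], self.energies[i + 1]
        f = (energy - e0) / (e1 - e0)
        return (1.0 - f) * self.values[i] + f * self.values[i + 1]

class MemoryServer:
    """Routes each lookup to the server owning the requested energy band."""
    def __init__(self, band_lower_edges, servers):
        self.edges = band_lower_edges     # ascending lower edge of each band
        self.servers = servers            # one BandServer per band

    def xs(self, energy):
        band = bisect.bisect_right(self.edges, energy) - 1
        band = max(0, min(band, len(self.servers) - 1))
        return self.servers[band].lookup(energy)

# two bands: [0, 1) owned by one server, [1, 10) by another (toy numbers)
servers = [BandServer([0.0, 1.0], [2.0, 4.0]),
           BandServer([1.0, 10.0], [4.0, 1.0])]
table = MemoryServer([0.0, 1.0], servers)
```

The point of the decomposition is that each server holds only its band's table, so the per-node memory footprint shrinks with the number of bands.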
Remarkable Moments in the History of Neutron Transport Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Lux, I.
Ladies and Gentlemen, Dear Colleagues, It is my utmost pleasure to talk to the distinguished audience of this outstanding conference. Forgive me if I begin my talk with something personal. Quite some time ago, when I took my first steps in the wonderland of Monte Carlo, I often dreamed about visiting the two capitals of this art: Novosibirsk in Russia and Los Alamos in the USA. I have had the opportunity to see quite a few places in the world, but somehow neither of these two. Now we have all gathered in Lisbon, and you have brought here, in your heads and in your papers, all that is worth knowing in this field of science; strangely enough, Lisbon lies almost exactly at the midpoint between Novosibirsk and Los Alamos. I would like to call your attention to the symbolic meaning of this.
A Hybrid (Monte-Carlo/Deterministic) Approach for Multi-Dimensional Radiation Transport
Bal, Guillaume
..., a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane), as well as the propagation of seismic waves in the solid Earth [11]. In this paper, we focus on the solution of the forward problem for a detector mounted on an airplane or a satellite. The integro-differential transport equation (1) may be solved ...
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the first of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, various aspects of the modelling effort are examined. In particular, the need to save on core memory causes one to use only specific realizations that have certain initial characteristics; in effect, these transport simulations are conditioned by these characteristics. Also, the need to independently estimate length scales for the generated fields is discussed. The statistical uniformity of the flow field is investigated by plotting the variance of the seepage velocity for vector components in the x, y, and z directions. Finally, specific features of the velocity field itself are illuminated in this first paper. In particular, these data give one the opportunity to investigate the effective hydraulic conductivity in a flow field which is approximately statistically uniform; comparisons are made with first- and second-order perturbation analyses. The mean cloud velocity is examined to ascertain whether it is identical to the mean seepage velocity of the model. Finally, the variance in the cloud centroid velocity is examined for the effect of source size and differing strengths of local transverse dispersion.
Shen, Jung-Tsung; Fan, Shanhui
2007-04-13
We show that two-photon transport is strongly correlated in a one-dimensional waveguide coupled to a two-level system. The exact S matrix is constructed using a generalized Bethe-ansatz technique. We show that the scattering eigenstates of this system include a two-photon bound state that passes through the two-level system as a composite single particle. Also, the two-level system can induce effective attractive or repulsive interactions in space for photons. This general procedure can be applied to the Anderson model as well. PMID:17501344
A MONTE CARLO CODE FOR RELATIVISTIC RADIATION TRANSPORT AROUND KERR BLACK HOLES
Schnittman, Jeremy D.; Krolik, Julian H. E-mail: jhk@pha.jhu.edu
2013-11-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
Branching and path-deviation of positive streamers resulting from statistical photon transport
NASA Astrophysics Data System (ADS)
Xiong, Zhongmin; Kushner, Mark J.
2014-12-01
The branching and change in direction of propagation (path-deviation) of positive streamers in molecular gases such as air likely require a statistical process which perturbs the head of the streamer and produces an asymmetry in its space charge density. In this paper, the mechanisms for path-deviation and branching of atmospheric pressure positive streamer discharges in dry air are numerically investigated from the viewpoint of statistical photon transport and photoionization. A statistical photon transport model, based on randomly selected emitting angles and mean-free-path for absorption, was developed and embedded into a fluid-based plasma transport model. The hybrid model was applied to simulations of positive streamer coaxial discharges in dry air at atmospheric pressure. The results show that secondary streamers, often spatially isolated, are triggered by the random photoionization and interact with the thin space charge layer (SCL) of the primary streamer. This interaction may be partly responsible for path-deviation and streamer branching. The general process consists of random remote photo-electron production which initiates a back-traveling electron avalanche, collision of this secondary avalanche with the primary streamer and the subsequent perturbation to its SCL. When the SCL is deformed from a symmetric to an asymmetric shape, the streamer can experience an abrupt change in the direction of propagation. If the SCL is sufficiently perturbed and essentially broken, local maxima in the SCL can develop into new streamers, leading to streamer branching. During the propagation of positive streamers, this mechanism can take place repetitively in time and space, thus producing multi-level branching and more than two branches within one level.
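The two random ingredients of the statistical photon model described above — an isotropically sampled emission angle and a flight distance drawn from the mean-free-path distribution — are straightforward to sample. A minimal illustrative sketch (function name and numbers are ours, not the paper's):

```python
import math, random

def transport_photon(origin, mfp, rng):
    """One statistical-photon flight: sample an isotropic emission
    direction and an exponentially distributed path length with mean
    free path `mfp`, and return the absorption/photoionization point.
    """
    mu = rng.uniform(-1.0, 1.0)               # cosine of the polar angle
    phi = rng.uniform(0.0, 2.0 * math.pi)     # azimuthal emission angle
    s = rng.expovariate(1.0 / mfp)            # flight distance ~ Exp(mean = mfp)
    sin_theta = math.sqrt(1.0 - mu * mu)
    x0, y0, z0 = origin
    return (x0 + s * sin_theta * math.cos(phi),
            y0 + s * sin_theta * math.sin(phi),
            z0 + s * mu)

rng = random.Random(7)
pts = [transport_photon((0.0, 0.0, 0.0), mfp=0.1, rng=rng) for _ in range(20_000)]
mean_r = sum(math.dist(p, (0.0, 0.0, 0.0)) for p in pts) / len(pts)  # ~ mfp
```

The mean distance of the returned points from the emission site converges to the mean free path, as expected for exponential attenuation; in the hybrid model such remote absorption points seed the secondary, back-traveling avalanches.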
Transport map-accelerated Markov chain Monte Carlo for Bayesian parameter inference
NASA Astrophysics Data System (ADS)
Marzouk, Y.; Parno, M.
2014-12-01
We introduce a new framework for efficient posterior sampling in Bayesian inference, using a combination of optimal transport maps and the Metropolis-Hastings rule. The core idea is to use transport maps to transform typical Metropolis proposal mechanisms (e.g., random walks, Langevin methods, Hessian-preconditioned Langevin methods) into non-Gaussian proposal distributions that can more effectively explore the target density. Our approach adaptively constructs a lower triangular transport map—i.e., a Knothe-Rosenblatt re-arrangement—using information from previous MCMC states, via the solution of an optimization problem. Crucially, this optimization problem is convex regardless of the form of the target distribution. It is solved efficiently using Newton or quasi-Newton methods, but the formulation is such that these methods require no derivative information from the target probability distribution; the target distribution is instead represented via samples. Sequential updates using the alternating direction method of multipliers enable efficient and parallelizable adaptation of the map even for large numbers of samples. We show that this approach uses inexact or truncated maps to produce an adaptive MCMC algorithm that is ergodic for the exact target distribution. Numerical demonstrations on a range of parameter inference problems involving both ordinary and partial differential equations show multiple order-of-magnitude speedups over standard MCMC techniques, measured by the number of effectively independent samples produced per model evaluation and per unit of wallclock time.
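A heavily simplified, purely linear analogue of this idea can be written in a few lines: precondition a random-walk Metropolis proposal with a fixed lower-triangular map (here the Cholesky factor of an assumed covariance) so that proposals respect the target's correlation structure. The real method adapts a nonlinear Knothe-Rosenblatt map from MCMC samples and proves ergodicity for inexact maps; everything below (target, step size, map) is a toy:

```python
import math, random

def log_target(x, y):
    """Log-density of a correlated 2-D Gaussian (toy posterior, rho = 0.9)."""
    rho = 0.9
    det = 1.0 - rho * rho
    return -0.5 * (x * x - 2.0 * rho * x * y + y * y) / det

def chol2(cov):
    """Cholesky factor of a 2x2 covariance given as (a, b, d) = [[a, b], [b, d]]."""
    a, b, d = cov
    l11 = math.sqrt(a)
    l21 = b / l11
    l22 = math.sqrt(d - l21 * l21)
    return l11, l21, l22

def map_preconditioned_rwm(n_steps, step, cov, seed=3):
    """Random-walk Metropolis whose Gaussian increment is pushed through a
    fixed lower-triangular linear map (the Cholesky factor of `cov`), a
    linear stand-in for the adaptive maps of the method.  The map is the
    same for forward and reverse moves, so the plain MH ratio applies."""
    rng = random.Random(seed)
    l11, l21, l22 = chol2(cov)
    x = y = 0.0
    logp = log_target(x, y)
    samples = []
    for _ in range(n_steps):
        e1, e2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        xp = x + step * l11 * e1
        yp = y + step * (l21 * e1 + l22 * e2)
        logp_prop = log_target(xp, yp)
        if math.log(rng.random() + 1e-300) < logp_prop - logp:
            x, y, logp = xp, yp, logp_prop
        samples.append((x, y))
    return samples

# map matched to the target's correlation; (1.0, 0.0, 1.0) would be plain RWM
mapped = map_preconditioned_rwm(20_000, 0.7, (1.0, 0.9, 1.0))
xs = [s[0] for s in mapped[2000:]]
ys = [s[1] for s in mapped[2000:]]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
vx = sum((a - mx) ** 2 for a in xs) / n
vy = sum((b - my) ** 2 for b in ys) / n
corr = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (n * math.sqrt(vx * vy))
```

With the matched map, proposals already point along the target's correlated ridge, which is the mechanism behind the speedups reported above; the nonlinear maps of the paper extend this to non-Gaussian targets.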
Ghita, G.; Sjoden, G. E.; Baciak, J. E.
2006-07-01
This paper discusses several topics related to our efforts to accomplish research leveraging 3-D radiation transport models to ultimately yield a complete mosaic of the radiation spectrum from a fissile source. Here, we effectively characterize and construct SNM neutron source terms, and demonstrate that the BUGLE-96 multigroup library is applicable to our general deterministic transport neutron detection scenarios. We also investigated our initial design of a moderated He-3 detector array. In performing our computations, we demonstrate how the PENTRAN parallel 3-D Sn code is quite efficient at parallel computation with several domain decomposition strategies, achieving a parallel (Amdahl) fraction of 0.96 on up to 16 dedicated processors while yielding our adjoint Sn transport results. Finally, we have established a procedure for analyzing He-3 response in a graded detector-moderator array, and are moving closer in our efforts to attribute a neutron spectrum from the resulting neutron responses in our graded moderator He-3 detector array, simulated entirely via computational methods for SNM sources of high interest. (authors)
Voxel2MCNP: software for handling voxel models for Monte Carlo radiation transport calculations.
Hegenbart, Lars; Pölz, Stefan; Benzler, Andreas; Urban, Manfred
2012-02-01
Voxel2MCNP is a program that sets up radiation protection scenarios with voxel models and generates corresponding input files for the Monte Carlo code MCNPX. Its technology is based on object-oriented programming, and the development is platform-independent. It has a user-friendly graphical interface including a two- and three-dimensional viewer. A range of equipment models is implemented in the program. Various voxel model file formats are supported. Applications include calculation of counting efficiency of in vivo measurement scenarios and calculation of dose coefficients for internal and external radiation scenarios. Moreover, anthropometric parameters of voxel models, for instance chest wall thickness, can be determined. Voxel2MCNP offers several methods for voxel model manipulations including image registration techniques. The authors demonstrate the validity of the program results and provide references for previous successful implementations. The authors illustrate the reliability of calculated dose conversion factors and specific absorbed fractions. Voxel2MCNP is used on a regular basis to generate virtual radiation protection scenarios at Karlsruhe Institute of Technology while further improvements and developments are ongoing. PMID:22217596
Gregory W. Faris; George Alexandrakis; David R. Busch; Michael S. Patterson
2001-01-01
We examine the ability to recover the optical properties of a two-layer turbid medium using multi-distance frequency domain reflectance measurements and a hybrid Monte Carlo-- diffusion model. Frequency domain measurements are performed on two-layer liquid tissue phantoms simulating skin on muscle and skin on fat. Particular care to systematic effects in the photomultiplier is required for the measurements at short
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Xu, X. George; Carothers, Christopher D.
2014-06-01
Hardware accelerators are currently becoming increasingly important in boosting high performance computing systems. In this study, we tested the performance of two accelerator models, the NVIDIA Tesla M2090 GPU and the Intel Xeon Phi 5110p coprocessor, using a new Monte Carlo photon transport package called ARCHER-CT that we have developed for fast CT imaging dose calculation. The package contains three code variants, ARCHER-CT{sub CPU}, ARCHER-CT{sub GPU} and ARCHER-CT{sub COP}, to run in parallel on the multi-core CPU, GPU and coprocessor architectures respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms were included in the code to calculate absorbed dose to radiosensitive organs under specified scan protocols. The results from ARCHER agreed well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). It was found that all the code variants were significantly faster than the parallel MCNPX running on 12 MPI processes, and that the GPU and coprocessor performed equally well, being 2.89-4.49 and 3.01-3.23 times faster than the parallel ARCHER-CT{sub CPU} running with 12 hyperthreads.
Self-Adjoint Angular Flux Equation for Coupled Electron-Photon Transport
Liscum-Powell, J.L.; Lorence, L.J. Jr.; Morel, J.E.; Prinja, A.K.
1999-07-08
Recently, Morel and McGhee described an alternate second-order form of the transport equation called the self-adjoint angular flux (SAAF) equation that has the angular flux as its unknown. The SAAF formulation has all the advantages of the traditional even- and odd-parity self-adjoint equations, with the added advantages that it yields the full angular flux when it is numerically solved, it is significantly easier to implement reflective and reflective-like boundary conditions, and in the appropriate form it can be solved in void regions. The SAAF equation has the disadvantage that the angular domain is the full unit sphere and, like the even- and odd-parity form, S{sub n} source iteration cannot be implemented using the standard sweeping algorithm. Also, problems arise in pure scattering media. Morel and McGhee demonstrated the efficacy of the SAAF formulation for neutral particle transport. Here we apply the SAAF formulation to coupled electron-photon transport problems using multigroup cross-sections from the CEPXS code and S{sub n} discretization.
Coupled Deterministic/Monte Carlo Simulation of Radiation Transport and Detector Response
Gesh, Christopher J.; Meriwether, George H.; Pagh, Richard T.; Smith, Leon E.
2005-09-01
The analysis of radiation sensor systems used to detect and identify nuclear and radiological weapons materials requires detailed radiation transport calculations. Two basic steps are required to solve radiation detection scenario analysis (RDSA) problems. First, the radiation field produced by the source must be calculated. Second, the response that the radiation field produces in a detector must be determined. RDSA problems are characterized by complex geometries, the presence of shielding materials, and large amounts of scattering (or absorption/re-emission). In this paper, we will discuss the use of the Attila code [2] for RDSA.
Fan, Shanhui
Theory of single-photon transport in a single-mode waveguide. I. Coupling to a cavity containing a two-level atom
Shen, Jung-Tsung; Fan, Shanhui (Ginzton Laboratory, Stanford University)
Single-photon transport in a single-mode waveguide, coupled to a cavity embedded with a two-level atom, is analyzed.
Fan, Shanhui
Theory of single-photon transport in a single-mode waveguide. II. Coupling to a whispering-gallery resonator containing a two-level atom
Shen, Jung-Tsung; Fan, Shanhui (Ginzton Laboratory, Stanford University)
Single-photon transport in a single-mode waveguide coupled to a whispering-gallery resonator interacting with a two-level atom is analyzed, including single-photon transport properties such as the transmission.
Galerkin-based meshless methods for photon transport in the biological tissue.
Qin, Chenghu; Tian, Jie; Yang, Xin; Liu, Kai; Yan, Guorui; Feng, Jinchao; Lv, Yujie; Xu, Min
2008-12-01
As an important small-animal imaging technique, optical imaging has attracted increasing attention in recent years. However, the photon propagation process is extremely complicated owing to the highly scattering nature of biological tissue. Furthermore, the light transport simulation in tissue has a significant influence on inverse source reconstruction. In this contribution, we present two Galerkin-based meshless methods (GBMM) to determine the light exitance on the surface of the diffusive tissue. The two methods are both based on moving least squares (MLS) approximation, which requires only a series of nodes in the region of interest, so the complicated meshing task of the finite element method (FEM) can be avoided. Moreover, in one method the MLS shape functions are further modified to satisfy the delta function property, which simplifies the treatment of boundary conditions in comparison with the other. Finally, the performance of the proposed methods is demonstrated with numerical and physical phantom experiments. PMID:19065170
Carrier transport through a dry-etched InP-based two-dimensional photonic crystal
Berrier, A.; Mulot, M.; Malm, G.; Oestling, M.; Anand, S.
2007-06-15
The electrical conduction across a two-dimensional photonic crystal (PhC) fabricated by Ar/Cl{sub 2} chemically assisted ion beam etching in n-doped InP is influenced by the surface potential of the hole sidewalls, modified by dry etching. Carrier transport across photonic crystal fields with different lattice parameters is investigated. For a given lattice period the PhC resistivity increases with the air fill factor and for a given air fill factor it increases as the lattice period is reduced. The measured current-voltage characteristics show clear ohmic behavior at lower voltages followed by current saturation at higher voltages. This behavior is confirmed by finite element ISE TCAD{sup TM} simulations. The observed current saturation is attributed to electric-field-induced saturation of the electron drift velocity. From the measured and simulated conductance for the different PhC fields we show that it is possible to determine the sidewall depletion region width and hence the surface potential. We find that at the hole sidewalls the etching induces a Fermi level pinning at about 0.12 eV below the conduction band edge, a value much lower than the bare InP surface potential. The results indicate that for n-InP the volume available for conduction in the etched PhCs approaches the geometrically defined volume as the doping is increased.
M. F. Preston; L. S. Myers; J. R. M. Annand; K. G. Fissum; K. Hansen; L. Isaksson; R. Jebali; M. Lundin
2013-11-22
Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrates a thorough understanding of the behavior of the detector system.
NASA Astrophysics Data System (ADS)
Persano Adorno, Dominique; Pizzolato, Nicola; Fazio, Claudio
2015-09-01
Within the context of higher education for science or engineering undergraduates, we present an inquiry-driven learning path aimed at developing a more meaningful conceptual understanding of the electron dynamics in semiconductors in the presence of applied electric fields. The electron transport in a nondegenerate n-type indium phosphide bulk semiconductor is modelled using a multivalley Monte Carlo approach. The main characteristics of the electron dynamics are explored under different values of the driving electric field, lattice temperature and impurity density. Simulation results are presented by following a question-driven path of exploration, starting from the validation of the model and moving up to reasoned inquiries about the observed characteristics of electron dynamics. Our inquiry-driven learning path, based on numerical simulations, represents a viable example of how to integrate a traditional lecture-based teaching approach with effective learning strategies, providing science or engineering undergraduates with practical opportunities to enhance their comprehension of the physics governing the electron dynamics in semiconductors. Finally, we present a general discussion about the advantages and disadvantages of using an inquiry-based teaching approach within a learning environment based on semiconductor simulations.
Glaser, A; Zhang, R; Gladstone, D; Pogue, B
2014-06-01
Purpose: A number of recent studies have proposed that light emitted by the Cherenkov effect may be used for a number of radiation therapy dosimetry applications. Here we investigate for the first time the fundamental nature and accuracy of the technique, using a theoretical and Monte Carlo-based analysis. Methods: Using the GEANT4 architecture for medically-oriented simulations (GAMOS) and BEAMnrc for phase space file generation, the light yield, material variability, field size and energy dependence, and overall agreement between the Cherenkov light emission and dose deposition for electron, proton, and flattened, unflattened, and parallel opposed x-ray photon beams were explored. Results: Due to the exponential attenuation of x-ray photons, Cherenkov light emission and dose deposition were identical for monoenergetic pencil beams. However, polyenergetic beams exhibited errors with depth due to beam hardening, with the error being inversely related to beam energy. For finite field sizes, the error with depth was inversely proportional to field size, and lateral errors in the umbra were greater for larger field sizes. For opposed beams, the technique was most accurate due to an averaging out of beam hardening in a single beam. The technique was found to be not suitable for measuring electron beams, except for relative dosimetry of a plane at a single depth. Due to a lack of light emission, the technique was found to be unsuitable for proton beams. Conclusions: The results from this exploratory study suggest that optical dosimetry by the Cherenkov effect may be most applicable to near monoenergetic x-ray photon beams (e.g. Co-60), dynamic IMRT and VMAT plans, as well as narrow beams used for SRT and SRS. For electron beams, the technique would be best suited for superficial dosimetry, and for protons the technique is not applicable due to a lack of light emission. NIH R01CA109558 and R21EB017559.
Liu Yongchun; Xiao Yunfeng; Li Beibei; Jiang Xuefeng; Li Yan; Gong Qihuang [State Key Lab for Mesoscopic Physics, School of Physics, Peking University, Beijing 100871 (China)
2011-07-15
We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in the photon transport. On the one hand, this study provides insight for future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefiting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal's azimuthal position.
The TORT three-dimensional discrete ordinates neutron/photon transport code (TORT version 3)
Rhoades, W.A.; Simpson, D.B.
1997-10-01
TORT calculates the flux or fluence of neutrons and/or photons throughout three-dimensional systems due to particles incident upon the system's external boundaries, due to fixed internal sources, or due to sources generated by interaction with the system materials. The transport process is represented by the Boltzmann transport equation. The method of discrete ordinates is used to treat the directional variable, and a multigroup formulation treats the energy dependence. Anisotropic scattering is treated using a Legendre expansion. Various methods are used to treat spatial dependence, including nodal and characteristic procedures that have been especially adapted to resist numerical distortion. A method of body overlay assists in material zone specification, or the specification can be generated by an external code supplied by the user. Several special features are designed to concentrate machine resources where they are most needed. The directional quadrature and Legendre expansion can vary with energy group. A discontinuous mesh capability has been shown to reduce the size of large problems by a factor of roughly three in some cases. The emphasis in this code is a robust, adaptable application of time-tested methods, together with a few well-tested extensions.
NASA Astrophysics Data System (ADS)
Milas, Peker; Gamari, Ben; Parrot, Louis; Buckman, Richard; Goldner, Lori
2011-11-01
Fluorescence resonance energy transfer (FRET) is a powerful experimental technique for understanding the structural fluctuations and transformations of RNA, DNA and proteins. Molecular dynamics (MD) simulations provide a window into the nature of these fluctuations on a faster time scale inaccessible to experiment. We use Monte Carlo methods to model and compare FRET data from dye-labeled RNA with what might be predicted from the MD simulation. With a few notable exceptions, the contribution of fluorophore and linker dynamics to these FRET measurements has not been investigated. We include the dynamics of the ground state dyes and linkers along with an explicit water solvent in our study of a 16mer double-stranded RNA. Cyanine dyes are attached at either the 3' or 5' ends with a three carbon linker, providing a basis for contrasting the dynamics of similar but not identical molecular structures.
NASA Astrophysics Data System (ADS)
Goldner, Lori
2012-02-01
Fluorescence resonance energy transfer (FRET) is a powerful technique for understanding the structural fluctuations and transformations of RNA, DNA and proteins. Molecular dynamics (MD) simulations provide a window into the nature of these fluctuations on a different, faster, time scale. We use Monte Carlo methods to model and compare FRET data from dye-labeled RNA with what might be predicted from the MD simulation. With a few notable exceptions, the contribution of fluorophore and linker dynamics to these FRET measurements has not been investigated. We include the dynamics of the ground state dyes and linkers in our study of a 16mer double-stranded RNA. Water is included explicitly in the simulation. Cyanine dyes are attached at either the 3' or 5' ends with a three-carbon linker, and differences in labeling schemes are discussed. Work done in collaboration with Peker Milas, Benjamin D. Gamari, and Louis Parrot.
NASA Astrophysics Data System (ADS)
Chishti, Sabiq; Ghosh, Bahniman; Bishnoi, Bhupesh
2015-02-01
We have analyzed the spin transport behaviour of four II–VI semiconductor nanowires by simulating spin-polarized transport using a semi-classical Monte-Carlo approach. The different scattering mechanisms considered are acoustic phonon scattering, surface roughness scattering, polar optical phonon scattering, and spin flip scattering. The II–VI materials used in our study are CdS, CdSe, ZnO and ZnS. The spin transport behaviour is first studied by varying the temperature (4–500 K) at a fixed diameter of 10 nm and also by varying the diameter (8–12 nm) at a fixed temperature of 300 K. For II–VI compounds, the dominant spin relaxation mechanisms are D'yakonov-Perel' and Elliott-Yafet; both have been employed in the first-order model to simulate the spin transport. The dependence of the spin relaxation length (SRL) on the diameter and temperature has been analyzed.
MCNP/X TRANSPORT IN THE TABULAR REGIME
HUGHES, H. GRADY
2007-01-08
The authors review the transport capabilities of the MCNP and MCNPX Monte Carlo codes in the energy regimes in which tabular transport data are available. Giving special attention to neutron tables, they emphasize the measures taken to improve the treatment of a variety of difficult aspects of the transport problem, including unresolved resonances, thermal issues, and the availability of suitable cross-section sets. They also briefly touch on the current situation in regard to photon, electron, and proton transport tables.
MCNP/X Transport in the Tabular Regime
Hughes, H. Grady
2007-03-19
We review the transport capabilities of the MCNP and MCNPX Monte Carlo codes in the energy regimes in which tabular transport data are available. Giving special attention to neutron tables, we emphasize the measures taken to improve the treatment of a variety of difficult aspects of the transport problem, including unresolved resonances, thermal issues, and the availability of suitable cross-section sets. We also briefly touch on the current situation in regard to photon, electron, and proton transport tables.
Angular biasing in implicit Monte-Carlo
Zimmerman, G.B.
1994-10-20
Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two-dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low-weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even smaller weight photons at the polar regions of the capsule, where small mass zones are most sensitive to statistical noise.
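The unbiasedness of such a scheme rests on weight compensation: photons emitted into the oversampled cone toward the capsule carry the weight p_analog/p_biased. A minimal one-dimensional sketch (our illustration; the cone boundary and bias probability are assumed parameters, not values from the paper):

```python
import random

def sample_biased_mu(mu_min, bias, rng):
    """Sample a direction cosine mu with angular biasing toward the cone
    mu >= mu_min (the capsule), returning (mu, weight).

    With probability `bias`, mu is drawn uniformly in [mu_min, 1] (many
    low-weight photons aimed at the capsule); otherwise mu is drawn
    isotropically in [-1, 1]. The weight p_analog(mu)/p_biased(mu) keeps
    every estimator unbiased.
    """
    if rng.random() < bias:
        mu = mu_min + (1.0 - mu_min) * rng.random()
    else:
        mu = -1.0 + 2.0 * rng.random()
    p_analog = 0.5                             # isotropic density on [-1, 1]
    p_biased = (1.0 - bias) * 0.5
    if mu >= mu_min:
        p_biased += bias / (1.0 - mu_min)      # extra density inside the cone
    return mu, p_analog / p_biased
```

Averaged over many histories the weights reproduce the analog expectation, while far more (low-weight) photons strike the capsule, so its small mass zones see much lower statistical noise.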
Sikora, M; Dohm, O; Alber, M
2007-08-01
A dedicated, efficient Monte Carlo (MC) accelerator head model for intensity modulated stereotactic radiosurgery treatment planning is needed to afford a highly accurate simulation of tiny IMRT fields. A virtual source model (VSM) of a mini multi-leaf collimator (MLC) (the Elekta Beam Modulator (EBM)) is presented, allowing efficient generation of particles even for small fields. The VSM of the EBM is based on a previously published virtual photon energy fluence model (VEF) (Fippel et al 2003 Med. Phys. 30 301) commissioned with large field measurements in air and in water. The original commissioning procedure of the VEF, based on large field measurements only, leads to inaccuracies for small fields. In order to improve the VSM, it was necessary to change the VEF model by developing (1) a method to determine the primary photon source diameter, relevant for output factor calculations, (2) a model of the influence of the flattening filter on the secondary photon spectrum and (3) a more realistic primary photon spectrum. The VSM model is used to generate the source phase space data above the mini-MLC. Later the particles are transmitted through the mini-MLC by a passive filter function which significantly speeds up the time of generation of the phase space data after the mini-MLC, used for calculation of the dose distribution in the patient. The improved VSM model was commissioned for 6 and 15 MV beams. The results of MC simulation are in very good agreement with measurements. Less than 2% of local difference between the MC simulation and the diamond detector measurement of the output factors in water was achieved. The X, Y and Z profiles measured in water with an ion chamber (V = 0.125 cm(3)) and a diamond detector were used to validate the models. An overall agreement of 2%/2 mm for high dose regions and 3%/2 mm in low dose regions between measurement and MC simulation for field sizes from 0.8 x 0.8 cm(2) to 16 x 21 cm(2) was achieved. 
An IMRT plan film verification was performed for two cases: a 6 MV head-and-neck plan and a 15 MV prostate plan. The simulation is in agreement with film measurements within 2%/2 mm in the high dose regions (> or = 0.1 Gy = 5% D(max)) and 5%/2 mm in low dose regions (<0.1 Gy). PMID:17634643
Study of water transport phenomena on cathode of PEMFCs using Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Soontrapa, Karn
This dissertation deals with the development of a three-dimensional computational model of water transport phenomena in the cathode catalyst layer (CCL) of PEMFCs. The catalyst layer in the numerical simulation was developed using an optimized sphere packing algorithm. The optimization technique, named the adaptive random search technique (ARSET), was employed in this packing algorithm. The ARSET algorithm generates the initial locations of the spheres and allows them to move in random directions with a variable moving distance, randomly selected from the sampling range, based on the Lennard-Jones potential of the current and new configurations. The solid fraction values obtained from this developed algorithm are in the range of 0.631 to 0.6384, while the actual processing time can be significantly reduced by 8% to 36% based on the number of spheres. The initial random number sampling range was investigated and the appropriate sampling range value is equal to 0.5. This numerically developed cathode catalyst layer has been used to simulate the diffusion processes of protons, in the form of hydronium, and oxygen molecules through the cathode catalyst layer. The movements of hydroniums and oxygen molecules are controlled by random vectors, and all of these moves have to obey the Lennard-Jones potential energy constraint. A chemical reaction between these two species will happen when they share the same neighborhood, and results in the creation of water molecules. Like hydroniums and oxygen molecules, these newly-formed water molecules also diffuse through the cathode catalyst layer. It is important to investigate and study the distribution of hydronium, oxygen, and water molecules during the diffusion process in order to understand the lifetime of the cathode catalyst layer. The effect of fuel flow rate on the water distribution has also been studied by varying the hydronium and oxygen molecule input.
Based on the results of these simulations, the hydronium:oxygen input ratio of 3:2 has been found to be the best choice for this study. To study the effect of metal impurity and gas contamination on the cathode catalyst layer, the cathode catalyst layer structure is modified by adding the metal impurities, and the gas contamination is introduced with the oxygen input. In this study, gas contamination has very little effect on the electrochemical reaction inside the cathode catalyst layer because this simulation is transient in nature and the percentage of the gas contamination is small, in the range of 0.0005% to 0.0015% for CO and 0.028% to 0.04% for CO2. Metal impurities seem to have more effect on the performance of PEMFC because they not only change the structure of the developed cathode catalyst layer but also affect the movement of fuel and water product. Aluminum has the worst effect on the cathode catalyst layer structure because it yields the lowest amount of newly formed water and the largest amount of trapped water product compared to iron of the same impurity percentage. The iron impurity shows some positive effect on the lifetime of the cathode catalyst layer. At 0.75 wt% iron impurity, the amount of newly formed water is 6.59% lower than in the pure carbon catalyst layer case, but the amount of trapped water product is 11.64% lower than in the pure catalyst layer. The lifetime of the impure cathode catalyst layer is longer than that of the pure one because the amount of water still trapped inside the pure cathode catalyst layer is higher than that of the impure one. Even though the impure cathode catalyst layer has a longer lifetime, it sacrifices electrical power output because less electrochemical reaction occurs inside the impure catalyst layer.
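The ARSET packing step described above can be sketched as a downhill random search over sphere centers scored by the Lennard-Jones potential (a hypothetical minimal reconstruction; the move rule, step-length sampling, and acceptance criterion here are our reading of the description, not the dissertation's code):

```python
import math
import random

def lj_potential(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential; minimum of -epsilon at r = 2**(1/6)*sigma."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def total_energy(centers):
    """Sum of Lennard-Jones pair energies over all sphere-center pairs."""
    e = 0.0
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            e += lj_potential(math.dist(centers[i], centers[j]))
    return e

def arset_step(centers, step_max, rng):
    """One ARSET-style trial move: displace a random sphere by a random vector
    whose length is drawn from (0, step_max]; keep the move only if the total
    energy does not increase. Returns the (possibly updated) total energy."""
    e_old = total_energy(centers)
    i = rng.randrange(len(centers))
    old = centers[i]
    theta = rng.uniform(0.0, 2.0 * math.pi)          # random direction (uniform
    phi = math.acos(rng.uniform(-1.0, 1.0))          # on the sphere)
    d = step_max * rng.random()                      # variable moving distance
    centers[i] = (old[0] + d * math.sin(phi) * math.cos(theta),
                  old[1] + d * math.sin(phi) * math.sin(theta),
                  old[2] + d * math.cos(phi))
    e_new = total_energy(centers)
    if e_new > e_old:
        centers[i] = old                             # reject uphill moves
        return e_old
    return e_new
```

Repeated calls drive the configuration toward a denser, lower-energy packing; the sampling range for the step length plays the role of the 0.5 value discussed above.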
Overview of the MCU Monte Carlo Software Package
NASA Astrophysics Data System (ADS)
Kalugin, M. A.; Oleynik, D. S.; Shkarovsky, D. A.
2014-06-01
MCU (Monte Carlo Universal) is a project on development and practical use of a universal computer code for simulation of particle transport (neutrons, photons, electrons, positrons) in three-dimensional systems by means of the Monte Carlo method. This paper provides the information on the current state of the project. The developed libraries of constants are briefly described, and the potentialities of the MCU-5 package modules and the executable codes compiled from them are characterized. Examples of important problems of reactor physics solved with the code are presented.
Vectorizing and macrotasking Monte Carlo neutral particle algorithms
Heifetz, D.B.
1987-04-01
Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches to as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. In future machines it may be possible to take macrotasking to its limit, with each test flight, and each split test flight, being a separate task.
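The gap problem and the compaction remedy can be illustrated with a small scalar analogue of those vector loops (our sketch; the compaction threshold and data layout are assumptions, not the paper's scheme):

```python
import random

def transport_flights(n_flights, absorb_prob, rng, compact_threshold=0.5):
    """Advance all active flights in a dense inner loop. Completed flights
    leave gaps (None lanes) that would impede vectorization, so survivors
    are gathered into a contiguous array once gaps dominate.
    Returns the number of steps each flight survived."""
    steps = [0] * n_flights
    lanes = list(range(n_flights))          # flight index per vector lane
    n_alive = n_flights
    while n_alive:
        for i, idx in enumerate(lanes):
            if idx is None:                 # a gap left by a completed flight
                continue
            steps[idx] += 1
            if rng.random() < absorb_prob:  # the isolated conditional branch
                lanes[i] = None
                n_alive -= 1
        if n_alive < compact_threshold * len(lanes):
            lanes = [idx for idx in lanes if idx is not None]  # gather survivors
    return steps
```

Compacting only when the survivor fraction drops below the threshold amortizes the cost of the gather against the wasted work of looping over dead lanes.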
NASA Astrophysics Data System (ADS)
Fujii, Hiroyuki; Okawa, Shinpei; Nadamoto, Ken; Okada, Eiji; Yamada, Yukio; Hoshi, Yoko; Watanabe, Masao
2015-03-01
Accurate modeling and efficient calculation of photon migration in biological tissues is required for determination of the optical properties of living tissues by in vivo experiments. This study develops a calculation scheme of photon migration for determination of the optical properties of the rat cerebral cortex (ca 0.2 cm thick) based on the three-dimensional time-dependent radiative transport equation, assuming a homogeneous object. It is shown that the time-resolved profiles calculated by the developed scheme agree with the profiles measured by in vivo experiments using near-infrared light. Also, an efficient calculation method is tested using the delta-Eddington approximation of the scattering phase function.
Islamian, Jalil Pirayesh; Toossi, Mohammad Taghi Bahreyni; Momennezhad, Mahdi; Zakavi, Seyyed Rasoul; Sadeghi, Ramin; Ljungberg, Michael
2012-05-01
In single photon emission computed tomography (SPECT), the collimator is a crucial element of the imaging chain and controls the noise-resolution tradeoff of the collected data. The current study is an evaluation of the effects of different thicknesses of a low-energy high-resolution (LEHR) collimator on tomographic spatial resolution in SPECT. In the present study, the SIMIND Monte Carlo program was used to simulate a SPECT system equipped with an LEHR collimator. A point source of (99m)Tc and an acrylic cylindrical Jaszczak phantom, with cold spheres and rods, and a human anthropomorphic torso phantom (4D-NCAT phantom) were used. Simulated planar images and reconstructed tomographic images were evaluated both qualitatively and quantitatively. Based on the tabulated calculated detector parameters, the contributions of Compton scattering and photoelectric reactions, and the peak-to-Compton (P/C) area in the energy spectra obtained from scanning the sources with 11 collimator thicknesses (ranging from 2.400 to 2.410 cm), we concluded that 2.405 cm is the proper LEHR parallel-hole collimator thickness. Image quality analyses by the structural similarity index (SSIM) algorithm and by visual inspection showed that images of suitable quality were obtained with a collimator thickness of 2.405 cm. The projections and reconstructed images prepared with the 2.405 cm LEHR collimator thickness also showed suitable quality and performance-parameter analysis results compared with the other collimator thicknesses. PMID:23372440
Nadar, M Y; Akar, D K; Rao, D D; Kulkarni, M S; Pradeepkumar, K S
2014-12-01
Assessment of intake due to long-lived actinides by inhalation pathway is carried out by lung monitoring of the radiation workers inside totally shielded steel room using sensitive detection systems such as Phoswich and an array of HPGe detectors. In this paper, uncertainties in the lung activity estimation due to positional errors, chest wall thickness (CWT) and detector background variation are evaluated. First, calibration factors (CFs) of Phoswich and an array of three HPGe detectors are estimated by incorporating ICRP male thorax voxel phantom and detectors in Monte Carlo code 'FLUKA'. CFs are estimated for the uniform source distribution in lungs of the phantom for various photon energies. The variation in the CFs for positional errors of ±0.5, 1 and 1.5 cm in horizontal and vertical direction along the chest are studied. The positional errors are also evaluated by resizing the voxel phantom. Combined uncertainties are estimated at different energies using the uncertainties due to CWT, detector positioning, detector background variation of an uncontaminated adult person and counting statistics in the form of scattering factors (SFs). SFs are found to decrease with increase in energy. With HPGe array, highest SF of 1.84 is found at 18 keV. It reduces to 1.36 at 238 keV. PMID:25468992
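Scattering factors act as geometric standard deviations, so independent components (CWT, detector positioning, background variation, counting statistics) are conventionally combined multiplicatively in log space. A sketch of that combination rule (our illustration of the standard convention, not code from the paper):

```python
import math

def combined_scattering_factor(sfs):
    """Combine independent scattering factors (geometric standard deviations):
    SF_total = exp(sqrt(sum_i (ln SF_i)^2)). Each component SF must be >= 1."""
    return math.exp(math.sqrt(sum(math.log(sf) ** 2 for sf in sfs)))
```

A single component passes through unchanged, and the combined factor is always at least as large as the largest component, which is consistent with the reported decrease of the SFs with energy as the counting-statistics component shrinks.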
NASA Astrophysics Data System (ADS)
Jaradat, Adnan Khalaf
The x ray leakage from the housing of a therapy x ray source is regulated to be <0.1% of the useful beam exposure at a distance of 1 m from the source. The x ray leakage in the backward direction has been measured from linacs operating at 4, 6, 10, 15, and 18 MV using a 100 cm3 ionization chamber and track-etch detectors. The leakage was measured at nine different positions over the rear wall using a 3 x 3 matrix with a 1 m separation between adjacent positions. In general, the leakage was less than the canonical value, but the exact value depends on energy, gantry angle, and measurement position. Leakage at 10 MV for some positions exceeded 0.1%. Electrons with energy greater than about 9 MeV have the ability to produce neutrons. Neutron leakage has been measured around the head of electron accelerators at a distance of 1 m from the target at 0°, 46°, 90°, 135°, and 180° azimuthal angles, for electron energies of 9, 12, 15, 16, 18, and 20 MeV and 10, 15, and 18 MV x ray photon beams, using a BD-PND neutron bubble detector and track-etch detectors. The highest neutron dose equivalent per unit electron dose was at 0° for all electron energies. The neutron leakage from photon beams was the highest among all the machines. Intensity modulated radiation therapy (IMRT) delivery consists of a summation of small beamlets having different weights that make up each field. A linear accelerator room designed exclusively for IMRT use would require different, probably lower, tenth value layers (TVL) for determining the required wall thicknesses for the primary barriers. The first, second, and third TVL of 60Co gamma rays and photons from 4, 6, 10, 15, and 18 MV x ray beams by concrete have been determined and modeled using a Monte Carlo technique (MCNP version 4C2) for cone beams of half-opening angles of 0°, 3°, 6°, 9°, 12°, and 14°.
NASA Astrophysics Data System (ADS)
Gao, Li; Zhang, Yihui; Malyarchuk, Viktor; Jia, Lin; Jang, Kyung-In; Webb, R. Chad; Fu, Haoran; Shi, Yan; Zhou, Guoyan; Shi, Luke; Shah, Deesha; Huang, Xian; Xu, Baoxing; Yu, Cunjiang; Huang, Yonggang; Rogers, John A.
2014-09-01
Characterization of temperature and thermal transport properties of the skin can yield important information of relevance to both clinical medicine and basic research in skin physiology. Here we introduce an ultrathin, compliant skin-like, or ‘epidermal’, photonic device that combines colorimetric temperature indicators with wireless stretchable electronics for thermal measurements when softly laminated on the skin surface. The sensors exploit thermochromic liquid crystals patterned into large-scale, pixelated arrays on thin elastomeric substrates; the electronics provide means for controlled, local heating by radio frequency signals. Algorithms for extracting patterns of colour recorded from these devices with a digital camera and computational tools for relating the results to underlying thermal processes near the skin surface lend quantitative value to the resulting data. Application examples include non-invasive spatial mapping of skin temperature with milli-Kelvin precision (±50 mK) and sub-millimetre spatial resolution. Demonstrations in reactive hyperaemia assessments of blood flow and hydration analysis establish relevance to cardiovascular health and skin care, respectively.
NASA Astrophysics Data System (ADS)
Bartesaghi, G.; Gambarini, G.; Negri, A.; Carrara, M.; Burian, J.; Viererbl, L.
2010-04-01
Presently there are no standard protocols for dosimetry in neutron beams for boron neutron capture therapy (BNCT) treatments. Because of the high radiation intensity and the simultaneous presence of radiation components having different linear energy transfer, and therefore different biological weighting factors, treatment planning in epithermal neutron fields for BNCT is usually performed by means of Monte Carlo calculations; experimental measurements are required in order to characterize the neutron source and to validate the treatment planning. In this work, Monte Carlo simulations in two kinds of tissue-equivalent phantoms are described. The neutron transport has been studied, together with the distribution of the boron dose; simulation results are compared with data taken with Fricke gel dosimeters in the form of layers, showing good agreement.
Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning
NASA Astrophysics Data System (ADS)
Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.
2008-02-01
Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT), especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies, as well as the tracks of secondary electrons, are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation, and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and with the CyberKnife treatment planning system (TPS) for lung, head and neck, and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). Differences of more than 10% in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while differences are negligible for the head and neck and liver cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by a factor of up to 62 (46 on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
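The abstract mentions the Russian roulette technique for scattered and bremsstrahlung photons. A generic weight-based sketch of the idea (parameter names and cutoff values are illustrative, not those of the MCSIM implementation):

```python
import random

def russian_roulette(weight, w_cutoff=0.01, survival_weight=0.1, rng=random):
    """Kill low-weight particles with probability 1 - weight/survival_weight;
    survivors carry the boosted weight. The game is unbiased because the
    expected returned weight equals the input weight."""
    if weight >= w_cutoff:
        return weight
    if rng.random() < weight / survival_weight:
        return survival_weight   # survivor carries the boosted weight
    return 0.0                   # terminated

random.seed(1)
weights = [russian_roulette(0.005) for _ in range(100000)]
mean = sum(weights) / len(weights)
print(f"mean surviving weight ~ {mean:.4f} (unbiased estimate of 0.005)")
```

Roulette trades a small variance increase on the killed histories for a large reduction in time spent tracking unimportant low-weight photons.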
NASA Astrophysics Data System (ADS)
Rodriguez, M.; Sempau, J.; Brualla, L.
2012-05-01
A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named ‘splitting-roulette’, was implemented in the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented in any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two splitting modes: simple splitting and ‘selective splitting’. Selective splitting is a new splitting mode, based on the angular distribution of bremsstrahlung photons, implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45.
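Simple splitting, one of the two modes splitting-roulette can use, is easy to state: one particle of weight w becomes n copies of weight w/n, leaving every tally expectation unchanged while multiplying the number of tracks that sample the geometry. A minimal sketch (not PENELOPE's implementation):

```python
def split_particle(weight, n_split):
    """Simple splitting: replace one particle of weight w by n_split copies
    of weight w / n_split. Total weight, and hence the expectation of any
    tally, is unchanged; variance drops because more tracks are followed."""
    return [weight / n_split] * n_split

copies = split_particle(1.0, 4)
print(copies, sum(copies))  # four quarter-weight copies, total weight 1.0
```

Selective splitting, the paper's new mode, additionally biases how many copies are made according to the bremsstrahlung angular distribution, so that splitting effort goes to photons headed toward the region of interest.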
T. D. Jr
1996-01-01
The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as providing a method for neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this
Douglass, Michael; Bezak, Eva; Penfold, Scott [School of Chemistry and Physics, University of Adelaide, North Terrace, Adelaide, South Australia 5000 (Australia); Department of Medical Physics, Royal Adelaide Hospital, North Terrace, Adelaide South Australia 5000 (Australia)
2013-07-15
Purpose: Investigation of increased radiation dose deposition due to gold nanoparticles (GNPs) using a 3D computational cell model during x-ray radiotherapy. Methods: Two GNP simulation scenarios were set up in Geant4: a single 400 nm diameter gold cluster randomly positioned in the cytoplasm, and a 300 nm gold layer around the nucleus of the cell. Using an 80 kVp photon beam, the effect of GNPs on the dose deposition in five modeled regions of the cell, including cytoplasm, membrane, and nucleus, was simulated. Two Geant4 physics lists were tested: the default Livermore and a custom-built Livermore/DNA hybrid physics list. 10⁶ particles were simulated in 840 cells. Each cell was randomly placed with random orientation and a diameter varying between 9 and 13 µm. A mathematical algorithm was used to ensure that none of the 840 cells overlapped. The energy dependence of the GNP physical dose enhancement effect was calculated by simulating the dose deposition in the cells with two energy spectra, 80 kVp and 6 MV. The contribution from Auger electrons was investigated by comparing the two GNP simulation scenarios while activating and deactivating atomic de-excitation processes in Geant4. Results: The physical dose enhancement ratio (DER) of GNPs was calculated using the Monte Carlo model. The model demonstrated that the DER depends on the amount of gold and the position of the gold cluster within the cell. Individual cell regions experienced a statistically significant (p < 0.05) change in absorbed dose (DER between 1 and 10) depending on the type of gold geometry used. The DER resulting from gold clusters attached to the cell nucleus had the more significant effect of the two cases (DER ≈ 55). The DER value calculated at 6 MV was shown to be at least an order of magnitude smaller than the DER values calculated for the 80 kVp spectrum.
Based on the simulations, when 80 kVp photons are used, Auger electrons have a statistically insignificant (p > 0.05) effect on the overall dose increase in the cell. The low energy of the Auger electrons produced prevents them from propagating more than 250-500 nm from the gold cluster, and they therefore have a negligible effect on the overall dose increase due to GNPs. Conclusions: The results presented in the current work show that the primary dose enhancement is due to the production of additional photoelectrons.
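The dose enhancement ratio used throughout this study is simply the ratio of the dose tallied with the gold present to the dose without it, evaluated per cell region. A trivial sketch with invented tallies (not the study's data):

```python
def dose_enhancement_ratio(dose_with_gnp, dose_without_gnp):
    """Physical dose enhancement ratio (DER) for one cell region."""
    return dose_with_gnp / dose_without_gnp

# Hypothetical per-region energy depositions (arbitrary units),
# invented purely to illustrate the calculation:
regions = {"nucleus": (5.5e-2, 1.0e-3), "cytoplasm": (3.2e-3, 1.6e-3)}
for name, (with_gnp, without) in regions.items():
    print(f"{name}: DER = {dose_enhancement_ratio(with_gnp, without):.1f}")
```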
Jung-Tsung Shen; Shanhui Fan
2009-01-26
The single-photon transport in a single-mode waveguide, coupled to a cavity embedded with a two-level atom, is analyzed. The single-photon transmission and reflection amplitudes, as well as the cavity and atom excitation amplitudes, are solved exactly via a real-space approach. It is shown that the dissipation of the cavity and that of the atom affect the transport properties of the photons, and the relative phase between the excitation amplitudes of the cavity mode and the atom, in distinct ways.
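For intuition about such real-space solutions, the closely related textbook case of a two-level emitter coupled directly to a one-dimensional waveguide (without the cavity or the dissipation channels treated in this paper) has the closed-form transmission amplitude t(δ) = δ/(δ + iΓ/2), where δ is the detuning and Γ the waveguide-induced decay rate:

```python
def transmission(delta, gamma):
    """Single-photon transmission probability T = |t|^2 for a two-level
    emitter directly coupled to a 1D waveguide (textbook real-space result,
    simpler than the cavity-mediated system of the paper):
        t(delta) = delta / (delta + 1j * gamma / 2)."""
    t = delta / (delta + 1j * gamma / 2)
    return abs(t) ** 2

print(transmission(0.0, 1.0))            # on resonance: complete reflection
print(round(transmission(0.5, 1.0), 3))  # T = d^2 / (d^2 + gamma^2/4) = 0.5
```

The complete on-resonance reflection is the interference effect that the cavity and atom dissipation in the paper degrade in their distinct ways.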
Bednarz, Bryan; Xu, X. George
2008-07-15
A Monte Carlo-based procedure to assess fetal doses from 6-MV external photon beam radiation treatments has been developed to improve upon existing techniques based on AAPM Task Group Report 36, published in 1995 [M. Stovall et al., Med. Phys. 22, 63-82 (1995)]. Anatomically realistic models of the pregnant patient representing 3-, 6-, and 9-month gestational stages were implemented in the MCNPX code, together with a detailed accelerator model capable of simulating scattered and leakage radiation from the accelerator head. Absorbed doses to the fetus were calculated for six different treatment plans for sites above the fetus and one treatment plan for fibrosarcoma in the knee. For treatment plans above the fetus, the fetal doses tended to increase with increasing stage of gestation, owing to the decreasing distance between the fetal body and the field edge. For the treatment field below the fetus, the absorbed doses tended to decrease with increasing gestational stage, owing to the increasing size of the fetus and the relatively constant distance between the field edge and the fetal body at each stage. The absorbed doses to the fetus for all treatment plans ranged from a maximum of 30.9 cGy to the 9-month fetus down to 1.53 cGy to the 3-month fetus. The study demonstrates the feasibility of accurately determining the absorbed organ doses in the mother and fetus as part of treatment planning and, eventually, risk management.
Nadar, M Y; Akar, D K; Patni, H K; Singh, I S; Mishra, L; Rao, D D; Pradeepkumar, K S
2014-12-01
In the case of internal contamination with long-lived actinides via the inhalation or injection pathway, a major portion of the activity is deposited in the skeleton and liver over a period of time. In this study, calibration factors (CFs) of a Phoswich and an array of HPGe detectors are estimated using skull and knee voxel phantoms. These phantoms are generated from the International Commission on Radiological Protection reference male voxel phantom. The phantoms, as well as a 20 cm diameter Phoswich having a 1.2 cm thick NaI(Tl) primary and a 5 cm thick CsI(Tl) secondary detector, and an array of three HPGe detectors (each of 7 cm diameter and 2.5 cm thickness), are incorporated in the Monte Carlo code 'FLUKA'. Biokinetic models of Pu, Am, U and Th are solved using default parameters to identify the parts of the skeleton where activity accumulates after an inhalation intake of 1 Bq. Accordingly, CFs are evaluated for a uniform source distribution in the trabecular bone and bone marrow (TBBM), the cortical bone (CB), and both TBBM and CB regions, for photon energies of 18, 60, 63, 74, 93, 185 and 238 keV, covering the emissions of ²³⁹Pu, ²⁴¹Am, ²³⁸U, ²³⁵U and ²³²Th. The CFs are also evaluated for a non-uniform distribution of activity in the TBBM and CB regions, and the variation in the CFs for sources distributed in different regions of the bones is studied. The assessment of the skeletal activity of actinides from skull and knee activity measurements is discussed along with the associated errors. PMID:24435911
Johnson, J.O.
2000-10-23
The Department of Energy (DOE) has given the Spallation Neutron Source (SNS) project approval to begin Title I design of the proposed facility to be built at Oak Ridge National Laboratory (ORNL), and construction is scheduled to commence in FY01. The SNS will initially consist of an accelerator system capable of delivering an approximately 0.5 µs pulse of 1 GeV protons, at a 60 Hz frequency, with 1 MW of beam power, into a single target station. The SNS will eventually be upgraded to a 2 MW facility with two target stations (a 60 Hz station and a 10 Hz station). The radiation transport analysis, which includes the neutronic, shielding, activation, and safety analyses, is critical to the design of an intense high-energy accelerator facility like the proposed SNS, and the Monte Carlo method is the cornerstone of the radiation transport analyses.
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2014-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
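FW-CADIS ultimately drives a weight-window game in the forward Monte Carlo run: the deterministic adjoint solution sets a space- and energy-dependent window, and particles are split above it and rouletted below it. A generic sketch of the window check; the parameterization below is illustrative and not SCALE/MAVRIC's actual one:

```python
import random

def apply_weight_window(weight, w_lower, rng=random, survival_ratio=2.5):
    """Weight-window check in the style used by CADIS-type variance
    reduction (illustrative parameterization). Below the window the
    particle plays Russian roulette; above it the particle is split.
    Returns the list of post-game particle weights."""
    w_upper = w_lower * survival_ratio
    if weight < w_lower:                       # roulette
        w_survive = w_upper / 2
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []
    if weight > w_upper:                       # split into n equal copies
        n = int(weight / w_upper) + 1
        return [weight / n] * n
    return [weight]                            # inside the window: unchanged

random.seed(0)
print(apply_weight_window(10.0, 1.0))  # a heavy particle is split
print(apply_weight_window(1.5, 1.0))   # inside the window: unchanged
```

FW-CADIS's contribution is choosing w_lower everywhere so that the surviving particle density, not the flux, is roughly uniform over the tally regions.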
Fang Yuan; Badal, Andreu; Allec, Nicholas; Karim, Karim S.; Badano, Aldo
2012-01-15
Purpose: The authors describe a detailed Monte Carlo (MC) method for the coupled transport of ionizing particles and charge carriers in amorphous selenium (a-Se) semiconductor x-ray detectors, and model the effect of statistical variations on the detected signal. Methods: A detailed transport code was developed for modeling the signal formation process in semiconductor x-ray detectors. The charge transport routines include three-dimensional spatial and temporal models of electron-hole pair transport taking into account recombination and trapping. Many electron-hole pairs are created simultaneously in bursts from energy deposition events. Carrier transport processes include drift due to the external field and Coulombic interactions, and diffusion due to Brownian motion. Results: Pulse-height spectra (PHS) have been simulated with different transport conditions for a range of monoenergetic incident x-ray energies and mammography radiation beam qualities. Two methods for calculating Swank factors from simulated PHS are shown, one using the entire PHS distribution and the other using only the photopeak; the latter ignores contributions from Compton scattering and K-fluorescence. Simulations agree with experimental measurements to within approximately 2%. Conclusions: The a-Se x-ray detector PHS responses simulated in this work include three-dimensional spatial and temporal transport of electron-hole pairs. These PHS were used to calculate the Swank factor and compare it with experimental measurements. The Swank factor was shown to be a function of x-ray energy and applied electric field. Trapping and recombination models are all shown to affect the Swank factor.
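The Swank factor computed from a pulse-height spectrum is the standard moment ratio I = M1²/(M0·M2), where Mn is the n-th moment of the PHS. A small sketch on a toy spectrum:

```python
def swank_factor(counts, energies):
    """Swank information factor from a pulse-height spectrum:
    I = M1^2 / (M0 * M2), with Mn the n-th moment of the PHS.
    I = 1 for a delta-function response; spectral broadening lowers it."""
    m0 = sum(counts)
    m1 = sum(c * e for c, e in zip(counts, energies))
    m2 = sum(c * e * e for c, e in zip(counts, energies))
    return m1 * m1 / (m0 * m2)

# Toy three-bin spectra (invented, not the paper's simulated PHS):
print(swank_factor([0, 10, 0], [10.0, 20.0, 30.0]))           # -> 1.0
print(round(swank_factor([1, 8, 1], [10.0, 20.0, 30.0]), 4))  # -> 0.9524
```

Computing I from the full distribution versus the photopeak alone, as the abstract describes, just means changing which bins enter the three moments.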
Review of Fast Monte Carlo Codes for Dose Calculation in Radiation Therapy Treatment Planning
Jabbari, Keyvan
2011-01-01
An important requirement in radiation therapy is a fast and accurate treatment planning system. This system, using computed tomography (CT) data and the direction and characteristics of the beam, calculates the dose at all points of the patient's volume. The two main factors in a treatment planning system are accuracy and speed, and various generations of treatment planning systems have been developed accordingly. This article is a review of fast Monte Carlo treatment planning algorithms, which are accurate and fast at the same time. Monte Carlo techniques are based on the transport of each individual particle (e.g., photon or electron) in the tissue, using the physics of the interaction of the particles with matter; other techniques transport the particles as a group. For a typical dose calculation in radiation therapy the code has to transport several million particles, which takes a few hours; Monte Carlo techniques are therefore accurate, but slow for clinical use. In recent years, with the development of ‘fast’ Monte Carlo systems, one is able to perform dose calculation in a reasonable time for clinical use. The acceptable time for dose calculation is in the range of one minute. There is currently a growing interest in fast Monte Carlo treatment planning systems, and there are many commercial treatment planning systems that perform dose calculation in radiation therapy based on the Monte Carlo technique. PMID:22606661
Majumdar, Amit
developed. The first version is for the Tera Multi-Threaded Architecture (MTA) and uses Tera-specific ... The architectures targeted are the shared-memory Tera MTA, the distributed-memory Cray T3E, and the 8-way SMP IBM SP.
Unidirectional Transport in Electronic and Photonic Weyl Materials by Dirac Mass Engineering
Ren Bi; Zhong Wang
2015-08-20
Weyl semimetals and photonic Weyl materials characterized by "magnetic monopoles" in the momentum space have been discovered in recent experiments. In this paper we study the topological effects of a Dirac mass, which couples two Weyl points with opposite chirality. We propose to create unidirectionally propagating modes along the vortex line of the complex Dirac mass in electronic and photonic Weyl materials. This approach is feasible and controllable, especially in photonic Weyl materials, in which the Dirac mass can be readily designed and manipulated. For (electronic) Weyl semimetals with interaction, we show in a lattice model that the desired Dirac mass can be spontaneously generated in a first-order transition.
High-energy photon transport modeling for oil-well logging
Johnson, Erik D., Ph. D. Massachusetts Institute of Technology
2009-01-01
Nuclear oil well logging tools utilizing radioisotope sources of photons are used ubiquitously in oilfields throughout the world. Because of safety and security concerns, there is renewed interest in shifting to ...
Wang, Yu
2015-01-01
Photoelectric hot carrier generation in metal-semiconductor junctions allows for optical-to-electrical energy conversion at photon energies below the bandgap of the semiconductor, which opens new opportunities in optical ...
NASA Astrophysics Data System (ADS)
Hai, Lian; Tan, Lei; Feng, Jin-Shan; Xu, Wen-Bin; Wang, Bin
2014-02-01
We discuss the effects of dissipation on single-photon transport in a system of coupled cavity arrays, with the two nearest cavities nonlocally coupled to a two-level atom. The single-photon transmission amplitude is solved exactly by employing the quasi-boson picture. We investigate two different situations, of local and nonlocal coupling, respectively. Comparing the dissipative case with the nondissipative one reveals that the dissipation of the system deepens the middle dip and lowers the peak of the single-photon transmission amplitude, broadening the line width of the transport spectrum. It should be noted that the influence of cavity dissipation on the single-photon transport spectrum is asymmetric. Comparing the nonlocal coupling with the local one shows that the enhancement of the middle dip of the single-photon transmission amplitude is mostly caused by the atom dissipation, and that the reduced peak is mainly caused by the cavity dissipation, in both the nonlocal and local coupling cases. In the nonlocal coupling case, as the coupling strength becomes stronger, the cavity dissipation has a greater effect on the single-photon transport spectrum while the effect of the atom dissipation becomes weak enough to be ignored.
NASA Astrophysics Data System (ADS)
Hissoiny, Sami
Dose calculation is a central part of treatment planning. The dose calculation must be 1) accurate, so that medical physicists and radiation oncologists can make decisions based on results close to reality, and 2) fast enough to allow routine use of dose calculation. The compromise between these two opposing factors gave rise to several dose calculation algorithms, from the most approximate and fast to the most accurate and slow. The most accurate of these algorithms is the Monte Carlo method, since it is based on basic physical principles. Since 2007, a new computing platform has gained popularity in the scientific computing community: the graphics processing unit (GPU). The hardware existed before 2007, and certain scientific computations were already carried out on the GPU; 2007, however, marks the arrival of the CUDA programming language, which makes it possible to program the GPU without regard to graphics contexts. The GPU is a massively parallel computing platform, well suited to data-parallel algorithms. This thesis aims to determine how to maximize the use of a GPU to speed up the execution of a Monte Carlo simulation for radiotherapy dose calculation. To answer this question, the GPUMCD platform was developed. GPUMCD implements a coupled photon-electron Monte Carlo simulation carried out entirely on the GPU. The first objective of this thesis is to evaluate this method for a calculation in external radiotherapy. Simple monoenergetic sources and layered phantoms are used, and a comparison with the EGSnrc platform and DPM is carried out. GPUMCD is within a gamma criterion of 2%-2mm of EGSnrc while being at least 1200x faster than EGSnrc and 250x faster than DPM. The second objective consists in the evaluation of the platform for brachytherapy calculation.
Complex sources based on the geometry and the energy spectrum of real sources are used inside a TG-43 reference geometry. Differences of less than 4% are found compared to the BrachyDose platform as well as the TG-43 consensus data. The third objective aims at the use of GPUMCD for dose calculation within an MRI-Linac environment. To this end, the effect of the magnetic field on charged particles has been added to the simulation. It was shown that GPUMCD is within a gamma criterion of 2%-2mm of two experiments aiming at highlighting the influence of the magnetic field on the dose distribution. The results suggest that the GPU is an interesting computing platform for dose calculation through Monte Carlo simulation and that the software platform GPUMCD makes it possible to achieve fast and accurate results.
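The 2%-2mm gamma criterion used to validate GPUMCD combines a dose-difference test with a distance-to-agreement test. A minimal one-dimensional sketch of the gamma index (real evaluations are 3D and interpolate the evaluated distribution; the profile below is invented):

```python
import math

def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose,
                dose_tol=0.02, dist_tol=2.0):
    """1D gamma index of one reference point against an evaluated dose
    profile (sketch of the 2%/2 mm test). Positions in mm; doses
    normalised so dose_tol is a fraction of the reference dose.
    gamma <= 1 means the reference point passes."""
    best = float("inf")
    for x, d in zip(eval_pos, eval_dose):
        g = math.sqrt(((x - ref_pos) / dist_tol) ** 2 +
                      ((d - ref_dose) / dose_tol) ** 2)
        best = min(best, g)
    return best

# Invented evaluated profile (position in mm, normalised dose):
eval_pos = [0.0, 1.0, 2.0, 3.0]
eval_dose = [1.00, 0.98, 0.95, 0.90]
print(round(gamma_index(1.0, 0.99, eval_pos, eval_dose), 3))  # -> 0.5
```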
Relevance of accurate Monte Carlo modeling in nuclear medical imaging
Zaidi, H
1999-01-01
Monte Carlo techniques have become popular in different areas of medical physics with the advent of powerful computing systems. In particular, they have been extensively applied to simulate processes involving random behavior and to quantify physical parameters that are difficult or even impossible to calculate by experimental measurements. Recent nuclear medical imaging innovations such as single-photon emission computed tomography (SPECT), positron emission tomography (PET), and multiple emission tomography (MET) are ideal candidates for Monte Carlo modeling techniques because of the stochastic nature of the radiation emission, transport and detection processes. Factors which have contributed to their wider use include improved models of radiation transport processes, the practicality of application with the development of acceleration schemes, and the improved speed of computers. This paper presents the derivation and methodological basis for this approach and critically reviews its areas of application in nuclear imaging. An ...
S. Hughes; C. Roy
2012-01-12
We present a semiconductor master equation technique to study the input/output characteristics of coherent photon transport in a semiconductor waveguide-cavity system containing a single quantum dot. We use this approach to investigate the effects of photon propagation and anharmonic cavity-QED for various dot-cavity interaction strengths, including the weakly-coupled, intermediately-coupled, and strongly-coupled regimes. We demonstrate that for mean photon numbers much less than 0.1, the commonly adopted weak-excitation (single-quantum) approximation breaks down, even in the weak coupling regime. As a measure of the anharmonic multiphoton correlations, we compute the Fano factor and the correlation error associated with making a semiclassical approximation. We also explore the role of electron-acoustic-phonon scattering and find that phonon-mediated scattering plays a qualitatively important role in the light propagation characteristics. As an application of the theory, we simulate a conditional phase gate at a phonon bath temperature of 20 K in the strong coupling regime.
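The Fano factor used above is the variance-to-mean ratio of the photon number, F = Var(n)/⟨n⟩: F = 1 for Poissonian light, F < 1 for sub-Poissonian (nonclassical) statistics. A toy sketch on invented count records:

```python
def fano_factor(photon_counts):
    """Fano factor F = Var(n) / <n>. F = 1 for Poissonian statistics;
    F < 1 signals sub-Poissonian (nonclassical) light."""
    n = len(photon_counts)
    mean = sum(photon_counts) / n
    var = sum((c - mean) ** 2 for c in photon_counts) / n
    return var / mean

# Illustrative count records (invented, not data from the paper):
print(fano_factor([2, 3, 2, 3]))  # variance 0.25, mean 2.5 -> F = 0.1
```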
NASA Astrophysics Data System (ADS)
Ma, Tzuhsuan; Khanikaev, Alexander B.; Mousavi, S. Hossein; Shvets, Gennady
2015-03-01
The wave nature of radiation prevents its reflection-free propagation around sharp corners. We demonstrate that a simple photonic structure based on a periodic array of metallic cylinders attached to one of the two confining metal plates can emulate spin-orbit interaction through bianisotropy. Such a metawaveguide behaves as a photonic topological insulator with a complete topological band gap. An interface between two such structures with opposite signs of the bianisotropy supports topologically protected surface waves, which can be guided without reflections along sharp bends of the interface.
NASA Astrophysics Data System (ADS)
Smith, John Lewis
A hybrid scheme is used to model the vapor-phase deposition of thin films at the feature scale. The transport of the chemical species to the substrate surface is modeled with a collisionless Direct Simulation Monte Carlo (DSMC) method. The Level Set Method is used to model the growth of the thin film on the substrate. Convergence criteria for these methods were not found in the literature. The governing equations of the Level Set Method are, in general, non-linear partial differential equations, and the coupling of the DSMC method with the Level Set Method results in a set of non-Gaussian stochastic non-linear partial differential equations. Developing general convergence criteria proved exceedingly difficult, and only qualitative results are presented to support our convergence criteria. Simulation results are in qualitative agreement with experiments and other results from the literature.
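The Level Set Method advances the film surface as the zero contour of a function φ moving at the local growth speed. A minimal 1D explicit upwind step for φ_t + F|φ_x| = 0 with F ≥ 0 (a generic sketch, not the thesis's scheme, which couples F to the DSMC flux):

```python
def advance_level_set(phi, speed, dx, dt):
    """One explicit Godunov-upwind step of the 1D level-set equation
    phi_t + F * |phi_x| = 0 with constant F >= 0 (film growth at normal
    speed F). The interface is the zero crossing of phi."""
    n = len(phi)
    new = phi[:]
    for i in range(n):
        dminus = (phi[i] - phi[i - 1]) / dx if i > 0 else 0.0
        dplus = (phi[i + 1] - phi[i]) / dx if i < n - 1 else 0.0
        # Godunov upwinding for F >= 0
        grad = max(max(dminus, 0.0) ** 2, min(dplus, 0.0) ** 2) ** 0.5
        new[i] = phi[i] - dt * speed * grad
    return new

phi = [float(i) - 2.0 for i in range(5)]   # signed distance to x = 2
print(advance_level_set(phi, 1.0, 1.0, 0.5))
```

After the step, the zero crossing sits between the grid values -0.5 and 0.5, i.e. the front has advanced from x = 2 to x = 2.5, as expected for unit speed over dt = 0.5.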
Topographic Heterogeneity in Transdermal Transport Revealed by High-Speed Two-Photon Microscopy
So, Peter
Keywords: cadaver skin, fluorescent probe, high-speed two-photon microscopy, oleic acid, skin sample size. High-speed two-photon imaging of fluorescent probe spatial distributions at consecutive sites in excised human cadaver skin enabled noninvasive elucidation of oleic acid action and revealed the variability of transdermal transport.
Budny, Robert
Abstract: This is the user's manual for DEGAS 2, a Monte Carlo code for the study of neutral atom and molecular transport in confined plasmas. It is intended to provide an introduction to DEGAS 2 for the user; links to the typeset source code of the relevant pre-processors are provided.
Fast Monte Carlo for radiation therapy: the PEREGRINE Project
Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.
1997-11-11
The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient-specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently-used algorithms reveals significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.
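The core tallying idea, following particles through a transport mesh and accumulating energy deposits per voxel, can be illustrated with a deliberately minimal 1-D sketch. This is not PEREGRINE's algorithm: the medium is homogeneous, only the first interaction deposits energy, and the attenuation coefficient and mesh are invented for illustration.

```python
import math
import random

def tally_depth_dose(n_histories, mu=0.07, voxel_cm=1.0, n_voxels=30, seed=1):
    """Toy 1-D analogue of a Cartesian-mesh dose tally: each photon enters at
    depth 0, travels an exponentially distributed free path (attenuation
    coefficient mu, 1/cm), and deposits unit energy in the voxel containing
    its first interaction (no scattering, homogeneous medium)."""
    rng = random.Random(seed)
    dose = [0.0] * n_voxels
    for _ in range(n_histories):
        depth = -math.log(1.0 - rng.random()) / mu   # sampled free path (cm)
        voxel = int(depth / voxel_cm)
        if voxel < n_voxels:                         # else the photon escapes
            dose[voxel] += 1.0
    return dose
```

The per-voxel relative statistical uncertainty falls roughly as one over the square root of the histories tallied, which is why history counts in the hundreds of millions are needed for percent-level voxel standard deviations.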
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the feasibility of Monte Carlo neutron transport using OpenCL, with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Diffusion-drift model of the transport of charge carriers and photons in injection lasers
NASA Astrophysics Data System (ADS)
Konoplev, B. G.; Ryndin, E. A.; Denisenko, M. A.
2015-06-01
A mathematical model is proposed that can be used for numerical analysis of the dynamics of processes in injection lasers with allowance for their structural features; nonuniform spatial distributions of electrons, holes, and photons; and various mechanisms of radiative recombination. Results of numerical modeling of double-heterostructure injection lasers performed using the proposed model are compared to the results obtained using the equations of laser kinetics. Boundaries of applicability of these models are considered.
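For comparison, the "equations of laser kinetics" mentioned above are commonly written as single-mode rate equations for the carrier and photon densities. The sketch below is a zero-dimensional stand-in for the paper's spatially resolved diffusion-drift model, and all parameter values are illustrative placeholders, not those of the modeled devices.

```python
def laser_kinetics_step(N, S, dt, I=2e-3, q=1.602e-19, V=1e-16,
                        tau_n=2e-9, tau_p=2e-12, g0=1e-12, N_tr=1e24, beta=1e-4):
    """One explicit Euler step of the standard single-mode laser rate
    equations for carrier density N and photon density S (both 1/m^3)."""
    g = g0 * (N - N_tr)                        # linear gain about transparency
    dN = I / (q * V) - N / tau_n - g * S       # pumping - recombination - stimulated emission
    dS = g * S - S / tau_p + beta * N / tau_n  # gain - cavity loss + spontaneous emission
    return N + dt * dN, S + dt * dS
```

Starting from an empty cavity, the carrier density rises first and photons appear only via the spontaneous-emission term, which is the qualitative turn-on behavior such kinetics equations capture.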
The role of plasma evolution and photon transport in optimizing future advanced lithography sources
Sizyuk, Tatyana; Hassanein, Ahmed [Center for Materials Under Extreme Environment, School of Nuclear Engineering, Purdue University, West Lafayette, Indiana 47907 (United States)] [Center for Materials Under Extreme Environment, School of Nuclear Engineering, Purdue University, West Lafayette, Indiana 47907 (United States)
2013-08-28
Laser-produced plasma (LPP) sources for extreme ultraviolet (EUV) photons are currently based on small liquid tin droplets as targets, which have many advantages, including generation of stable continuous targets at high repetition rate, a larger photon collection angle, and reduced contamination and damage to the optical mirror collection system from plasma debris and energetic particles. The ideal target generates a source of maximum EUV radiation output and collection in the 13.5 nm range with minimum atomic debris. Based on recent experimental results and our modeling predictions, the smallest efficient droplets have diameters in the range of 20–30 μm in LPP devices with the dual-beam technique. Such devices can produce EUV sources with conversion efficiency around 3% and with collected EUV power of 190 W or more, which can satisfy current requirements for high-volume manufacturing. One of the most important characteristics of these devices is the low amount of atomic debris produced, due to the small initial mass of the droplets and the significant vaporization rate during the pre-pulse stage. In this study, we analyzed in detail plasma evolution processes in LPP systems using small spherical tin targets to predict the optimum droplet size yielding maximum EUV output. We identified several important processes during laser-plasma interaction that can affect conditions for optimum EUV photon generation and collection. The importance of accurately modeling these physical processes increases as the target size and its simulation domain decrease.
NASA Astrophysics Data System (ADS)
Härtle, R.; Cohen, G.; Reichman, D. R.; Millis, A. J.
2015-08-01
We give a detailed comparison of the hierarchical quantum master equation (HQME) method to a continuous-time quantum Monte Carlo (CT-QMC) approach, assessing the usability of these numerically exact schemes as impurity solvers in practical nonequilibrium calculations. We review the main characteristics of the methods and discuss the scaling of the associated numerical effort. We substantiate our discussion with explicit numerical results for the nonequilibrium transport properties of a single-site Anderson impurity. The numerical effort of the HQME scheme scales linearly with the simulation time but increases (at worst exponentially) with decreasing temperature. In contrast, CT-QMC is less restricted by temperature at short times, but in general the cost of going to longer times is also exponential. After establishing the numerical exactness of the HQME scheme, we use it to elucidate the influence of different ways to induce transport through the impurity on the initial dynamics, discuss the phenomenon of coherent current oscillations, known as current ringing, and explain the nonmonotonic temperature dependence of the steady-state magnetization as a result of competing broadening effects. We also elucidate the pronounced nonlinear magnetization dynamics, which appears on intermediate time scales in the presence of an asymmetric coupling to the electrodes.
Photonic Nanojet Scanning Microscopy
Poon, Andrew Wing On
Photonic Nanojet Scanning Microscopy, Final Year Project (2004-2005); project member: LEE Yi. Overview topics: the photonic nanojet, photonic nanojet measurement, and a photonic nanojet scanning microscope compared with conventional AFM tip scanning.
Bergstrom, Paul M. (Livermore, CA); Daly, Thomas P. (Livermore, CA); Moses, Edward I. (Livermore, CA); Patterson, Jr., Ralph W. (Livermore, CA); Schach von Wittenau, Alexis E. (Livermore, CA); Garrett, Dewey N. (Livermore, CA); House, Ronald K. (Tracy, CA); Hartmann-Siantar, Christine L. (Livermore, CA); Cox, Lawrence J. (Los Alamos, NM); Fujino, Donald H. (San Leandro, CA)
2000-01-01
A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And, a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module, multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And, a dose calculation module, coupled to the common volume calculation module and the radiation transport module, for calculating radiation doses received by the target mass within the dosel volumes.
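The common-volume and dosel-mass modules described in the claims reduce to axis-aligned box intersection plus density-weighted sums. A minimal sketch, with a box representation and function names of our own choosing:

```python
def overlap_volume(a, b):
    """Common volume of two axis-aligned boxes, each given as
    ((x0, x1), (y0, y1), (z0, z1)); zero when they do not intersect."""
    v = 1.0
    for (a0, a1), (b0, b1) in zip(a, b):
        lo, hi = max(a0, b0), min(a1, b1)
        if hi <= lo:
            return 0.0
        v *= hi - lo
    return v

def dosel_mass(dosel, voxels, densities):
    """Dosel mass as the sum over voxels of (common volume x voxel density),
    mirroring the patent's common-volume and dosel-mass modules."""
    return sum(overlap_volume(dosel, vox) * rho
               for vox, rho in zip(voxels, densities))
```

Dividing the energy deposited inside a dosel by the mass obtained this way yields the absorbed dose for that dosel, which is the quantity the dose calculation module reports.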
NASA Astrophysics Data System (ADS)
Wang, Ping; Hu, Linlin; Yang, Yintang; Shan, Xuefei; Song, Jiuxu; Guo, Lixin; Zhang, Zhiyong
2015-01-01
Transient characteristics of wurtzite Zn1-xMgxO are investigated using a three-valley Ensemble Monte Carlo model, verified by the agreement between the simulated low-field mobility and reported experimental results. The electronic structures are obtained by first-principles calculations with density functional theory. The results show that the peak electron drift velocities of Zn1-xMgxO (x = 11.1%, 16.7%, 19.4%, 25%) at 3000 kV/cm are 3.735 × 10⁷, 2.133 × 10⁷, 1.889 × 10⁷, and 1.295 × 10⁷ cm/s, respectively. With the increase of Mg concentration, a higher electric field is required for the onset of velocity overshoot. When the applied field exceeds 2000 kV/cm and 2500 kV/cm, a phenomenon of velocity undershoot is observed in Zn0.889Mg0.111O and Zn0.833Mg0.167O, respectively, while it is not observed for Zn0.806Mg0.194O and Zn0.75Mg0.25O even at 3000 kV/cm, which is especially important for high-frequency devices.
Computational radiology and imaging with the MCNP Monte Carlo code
Estes, G.P.; Taylor, W.M.
1995-05-01
MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.
High-speed DC transport of emergent monopoles in spinor photonic fluids.
Terças, H; Solnyshkov, D D; Malpuech, G
2014-07-18
We investigate the spin dynamics of half-solitons in quantum fluids of interacting photons (exciton polaritons). Half-solitons, which behave as emergent monopoles, can be accelerated by the presence of effective magnetic fields. We study the generation of dc magnetic currents in a gas of half-solitons. At low densities, the current is suppressed due to the dipolar oscillations. At moderate densities, a magnetic current is recovered as a consequence of the collisions between the carriers. We show a deviation from Ohm's law due to the competition between dipoles and monopoles. PMID:25083658
SU-E-T-180: Fano Cavity Test of Proton Transport in Monte Carlo Codes Running On GPU and Xeon Phi
Sterpin, E; Sorriaux, J; Souris, K; Lee, J; Vynckier, S; Schuemann, J; Paganetti, H; Jia, X; Jiang, S
2014-06-01
Purpose: In proton dose calculation, clinically compatible speeds are now achieved with Monte Carlo codes (MC) that combine 1) adequate simplifications in the physics of transport and 2) the use of hardware architectures enabling massive parallel computing (like GPUs). However, the uncertainties related to the transport algorithms used in these codes must be kept minimal. Such algorithms can be checked with the so-called “Fano cavity test”. We implemented the test in two codes that run on specific hardware: gPMC on an nVidia GPU and MCsquare on an Intel Xeon Phi (60 cores). Methods: gPMC and MCsquare are designed for transporting protons in CT geometries. Both codes use the method of fictitious interaction to sample the step length for each transport step. The considered geometry is a water cavity (2×2×0.2 cm³, 0.001 g/cm³) in a 10×10×50 cm³ water phantom (1 g/cm³). CPE in the cavity is established by generating protons over the phantom volume with a uniform momentum (energy E) and a uniform intensity per unit mass I. Assuming no nuclear reactions and no generation of other secondaries, the computed cavity dose should equal IE, according to Fano's theorem. Both codes were tested for initial proton energies of 50, 100, and 200 MeV. Results: For all energies, gPMC and MCsquare are within 0.3% and 0.2% of the theoretical value IE, respectively (0.1% standard deviation). Single-precision computations (instead of double) increased the error by about 0.1% in MCsquare. Conclusion: Despite the simplifications in the physics of transport, both gPMC and MCsquare successfully pass the Fano test. This ensures optimal accuracy of the codes for clinical applications within the uncertainties of the underlying physical models. It also opens the path to other applications of these codes, like the simulation of ion chamber response.
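The acceptance criterion itself is simple to state in code: under charged-particle equilibrium the tallied cavity dose must equal I*E to within the chosen tolerance. The sketch below encodes only the check, not the transport codes under test, and the function name and tolerance default are our own.

```python
def fano_check(cavity_dose, intensity_per_mass, energy, tol=0.003):
    """Fano-cavity acceptance check: by Fano's theorem the cavity dose under
    CPE must equal I*E regardless of the cavity density; returns the relative
    error and whether it is within tolerance (e.g. 0.3%)."""
    expected = intensity_per_mass * energy
    rel_err = (cavity_dose - expected) / expected
    return rel_err, abs(rel_err) <= tol
```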
Monte Carlo simulation of amorphous selenium imaging detectors
NASA Astrophysics Data System (ADS)
Fang, Yuan; Badal, Andreu; Allec, Nicholas; Karim, Karim S.; Badano, Aldo
2010-04-01
We present a Monte Carlo (MC) simulation method for studying the signal formation process in amorphous Selenium (a-Se) imaging detectors for design validation and optimization of direct imaging systems. The assumptions and limitations of the proposed and previous models are examined. The PENELOPE subroutines for MC simulation of radiation transport are used to model incident x-ray photon and secondary electron interactions in the photoconductor. Our simulation model takes into account applied electric field, atomic properties of the photoconductor material, carrier trapping by impurities, and bimolecular recombination between drifting carriers. The particle interaction cross-sections for photons and electrons are generated for Se over the energy range of medical imaging applications. Since inelastic collisions of secondary electrons lead to the creation of electron-hole pairs in the photoconductor, the electron inelastic collision stopping power is compared for PENELOPE's Generalized Oscillator Strength model with the established EEDL and NIST ESTAR databases. Sample simulated particle tracks for photons and electrons in Se are presented, along with the energy deposition map. The PENEASY general-purpose main program is extended with custom transport subroutines to take into account generation and transport of electron-hole pairs in an electromagnetic field. The charge transport routines consider trapping and recombination, and the energy required to create a detectable electron-hole pair can be estimated from simulations. This modular simulation model is designed to model complete image formation.
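The trapping and bimolecular-recombination bookkeeping described above can be sketched as a zero-dimensional explicit update for the drifting carrier densities. The mobilities, lifetimes, field, and recombination coefficient below are placeholders chosen only to be of plausible order for a photoconductor, not fitted a-Se constants, and the full simulation is of course spatially resolved.

```python
def drift_step(n_e, n_h, dt, mu_e=0.003, mu_h=0.12, E=1e5,
               tau_trap_e=1e-5, tau_trap_h=1e-4, k_r=1e-12):
    """One explicit step for electron/hole densities (1/cm^3) in a biased
    photoconductor: first-order trapping by impurities plus bimolecular
    recombination k_r*n_e*n_h; mobilities in cm^2/(V s), field E in V/cm."""
    recomb = k_r * n_e * n_h                       # bimolecular recombination rate
    n_e_new = n_e + dt * (-n_e / tau_trap_e - recomb)
    n_h_new = n_h + dt * (-n_h / tau_trap_h - recomb)
    drift_e = mu_e * E * dt                        # electron drift distance (cm)
    drift_h = mu_h * E * dt                        # hole drift distance (cm)
    return n_e_new, n_h_new, drift_e, drift_h
```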
NASA Astrophysics Data System (ADS)
Millman, David L.; Griesheimer, David P.; Nease, Brian R.; Snoeyink, Jack
2014-06-01
For large, highly detailed models, Monte Carlo simulations may spend a large fraction of their run time performing simple point-location and distance-to-surface calculations for every geometric component in a model. In such cases, the use of bounding boxes (axis-aligned boxes that bound each geometric component) can improve particle tracking efficiency and decrease overall simulation run time significantly. In this paper we present a robust and efficient algorithm for generating the numerically optimal bounding box (optimal to within a user-specified tolerance) for an arbitrary Constructive Solid Geometry (CSG) object defined by quadratic surfaces. The new algorithm uses iterative refinement to tighten an initial, conservatively large bounding box into the numerically optimal bounding box. At each stage of refinement, the algorithm subdivides the candidate bounding box into smaller boxes, which are classified as inside, outside, or intersecting the boundary of the component. In cases where the algorithm cannot unambiguously classify a box, the box is refined further. This process continues until the refinement near the component's extremal points reaches the user-selected tolerance level. This refinement/classification approach is more efficient and practical than methods that rely on computing actual boundary representations or sampling to determine the extent of an arbitrary CSG component. A complete description of the bounding box algorithm is presented, along with a proof that the algorithm is guaranteed to converge to within the specified tolerance of the true optimal bounding box. The paper also provides a discussion of practical implementation details for the algorithm as well as numerical results highlighting performance and accuracy for several representative CSG components.
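A 2-D toy version of the subdivide-and-classify loop is easy to write using interval arithmetic to classify boxes against the disc x² + y² ≤ r². The paper's algorithm handles general quadric CSG components; this sketch only illustrates the refinement idea, and all names are ours.

```python
def sq_range(lo, hi):
    """Exact range of x**2 over the interval [lo, hi] (interval arithmetic
    for a single square term of a quadric)."""
    m = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return m, max(lo * lo, hi * hi)

def tight_bbox_circle(r, tol=1e-2):
    """Refine a conservatively large starting box around the disc
    x^2 + y^2 <= r^2: discard boxes proven outside, keep boxes proven inside,
    and split boundary-intersecting boxes until they shrink below tol; the
    surviving boxes' extremes give the near-optimal bounding box."""
    boxes = [(-2.0 * r, 2.0 * r, -2.0 * r, 2.0 * r)]
    rr = r * r
    while True:
        nxt, done = [], True
        for (x0, x1, y0, y1) in boxes:
            fx_lo, fx_hi = sq_range(x0, x1)
            fy_lo, fy_hi = sq_range(y0, y1)
            if fx_lo + fy_lo > rr:                 # provably outside: discard
                continue
            if fx_hi + fy_hi <= rr:                # provably inside: keep whole
                nxt.append((x0, x1, y0, y1))
            elif max(x1 - x0, y1 - y0) > tol:      # ambiguous: subdivide in 4
                done = False
                xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
                nxt += [(x0, xm, y0, ym), (xm, x1, y0, ym),
                        (x0, xm, ym, y1), (xm, x1, ym, y1)]
            else:                                  # ambiguous but small enough
                nxt.append((x0, x1, y0, y1))
        boxes = nxt
        if done:
            return (min(b[0] for b in boxes), max(b[1] for b in boxes),
                    min(b[2] for b in boxes), max(b[3] for b in boxes))
```

Because only boundary-intersecting boxes are ever split, the work concentrates near the component surface, which is what makes this cheaper than sampling or computing an explicit boundary representation.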
Martino, G; Capasso, M; Nasuti, M; Bonanni, L; Onofrj, M; Thomas, A
2015-04-01
Akinetic crisis (AC) is akin to neuroleptic malignant syndrome (NMS) and is the most severe and possibly lethal complication of parkinsonism. Diagnosis is today based only on clinical assessments, yet is often marred by concomitant precipitating factors. Our purpose is to show that AC and NMS can be reliably evidenced by FP/CIT single-photon emission computerized tomography (SPECT) performed during the crisis. Prospective cohort evaluation in 6 patients. In 5 patients, affected by Parkinson disease or Lewy body dementia, the crisis was categorized as AC. One was diagnosed as having NMS because of exposure to risperidone. In all patients, FP/CIT SPECT was performed in the acute phase. SPECT was repeated 3 to 6 months after the acute event in 5 patients. Visual assessments and semiquantitative evaluations of binding potentials (BPs) were used. To exclude the interference of emergency treatments, FP/CIT BP was also evaluated in 4 patients currently treated with apomorphine. During AC or NMS, BP values in the caudate and putamen were reduced by 95% to 80%, to noise level, with a nearly complete loss of striatal dopamine transporter binding, corresponding to the "burst striatum" pattern. The follow-up re-evaluation in surviving patients showed a recovery of values to the range expected for parkinsonisms of the same disease duration. No binding effects of apomorphine were observed. By showing the outstanding binding reduction, presynaptic dopamine transporter ligands can provide instrumental evidence of AC in parkinsonism and NMS. PMID:25837755
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.
1971-01-01
An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method as yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results and actual results obtained with a 600 MeV cyclotron are given.
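A creeping random search of the kind described, perturb the current point by a small random step, clip to the parameter constraints, and keep only improvements, can be sketched generically. The objective, bounds, and settings below are stand-ins, not the beam-transport model of the paper.

```python
import random

def creeping_random_search(f, x0, lower, upper, step=0.1, n_iter=2000, seed=0):
    """Minimal creeping random search (sequential random perturbation):
    perturb the current best point uniformly within +/- step, clip each
    coordinate to its [lower, upper] constraint, and accept only moves
    that improve the objective f."""
    rng = random.Random(seed)
    x = list(x0)
    best = f(x)
    for _ in range(n_iter):
        cand = [min(upper[i], max(lower[i], x[i] + rng.uniform(-step, step)))
                for i in range(len(x))]
        val = f(cand)
        if val < best:          # keep the perturbation only if it improves
            x, best = cand, val
    return x, best
```

Because candidates are clipped to the bounds before evaluation, infeasible solutions never arise, which mirrors how the parameter constraints in the paper rule out infeasible beam settings.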
Updated version of the DOT 4 one- and two-dimensional neutron/photon transport code
Rhoades, W.A.; Childs, R.L.
1982-07-01
DOT 4 is designed to allow very large transport problems to be solved on a wide range of computers and memory arrangements. Unusual flexibility in both space-mesh and directional-quadrature specification is allowed. For example, the radial mesh in an R-Z problem can vary with axial position. The directional quadrature can vary with both space and energy group. Several features improve performance on both deep penetration and criticality problems. The program has been checked and used extensively.
Transport properties of disordered photonic crystals around a Dirac-like point.
Wang, Xiao; Jiang, Haitao; Li, Yuan; Yan, Chao; Deng, Fusheng; Sun, Yong; Li, Yunhui; Shi, Yunlong; Chen, Hong
2015-02-23
At the Dirac-like point at the Brillouin zone center, the photonic crystals (PhCs) can mimic a zero-index medium. In the band structure, an additional flat band of longitudinal mode will intersect the Dirac cone. This longitudinal mode can be excited in PhCs with finite sizes at the Dirac-like point. By introducing positional shift in the PhCs, we study the dependence of the longitudinal mode on the disorder. At the Dirac-like point, the transmission peak induced by the longitudinal mode decreases as the random degree increases. However, at a frequency slightly above the Dirac-like point, in which the longitudinal mode is absent, the transmission is insensitive to the disorder because the effective index is still near zero and the effective wavelength in the PhC is very large. PMID:25836546
Independent pixel and Monte Carlo estimates of stratocumulus albedo
NASA Technical Reports Server (NTRS)
Cahalan, Robert F.; Ridgway, William; Wiscombe, Warren J.; Gollmer, Steven; HARSHVARDHAN
1994-01-01
Monte Carlo radiative transfer methods are employed here to estimate the plane-parallel albedo bias for marine stratocumulus clouds. This is the bias in estimates of the mesoscale-average albedo, which arises from the assumption that cloud liquid water is uniformly distributed. The authors compare such estimates with those based on a more realistic distribution generated from a fractal model of marine stratocumulus clouds belonging to the class of 'bounded cascade' models. In this model the cloud top and base are fixed, so that all variations in cloud shape are ignored. The model generates random variations in liquid water along a single horizontal direction, forming fractal cloud streets while conserving the total liquid water in the cloud field. The model reproduces the mean, variance, and skewness of the vertically integrated cloud liquid water, as well as its observed wavenumber spectrum, which is approximately a power law. The Monte Carlo method keeps track of the three-dimensional paths solar photons take through the cloud field, using a vectorized implementation of a direct technique. The simplifications in the cloud field studied here allow the computations to be accelerated. The Monte Carlo results are compared to those of the independent pixel approximation, which neglects net horizontal photon transport. Differences between the Monte Carlo and independent pixel estimates of the mesoscale-average albedo are on the order of 1% for conservative scattering, while the plane-parallel bias itself is an order of magnitude larger. As cloud absorption increases, the independent pixel approximation agrees even more closely with the Monte Carlo estimates. This result holds for a wide range of sun angles and aspect ratios. Thus, horizontal photon transport can be safely neglected in estimates of the area-average flux for such cloud models. 
This result relies on the rapid falloff of the wavenumber spectrum of stratocumulus, which ensures that the smaller-scale variability, where the radiative transfer is more three-dimensional, contributes less to the plane-parallel albedo bias than the larger scales, which are more variable. The lack of significant three-dimensional effects also relies on the assumption of a relatively simple geometry. Even with these assumptions, the independent pixel approximation is accurate only for fluxes averaged over large horizontal areas, many photon mean free paths in diameter, and not for local radiance values, which depend strongly on the interaction between neighboring cloud elements.
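The plane-parallel albedo bias is, at heart, a Jensen-inequality effect: because albedo is a concave function of optical depth, the albedo of the mean liquid water exceeds the mean of the independent-pixel albedos. The toy bounded-cascade sketch below illustrates this; the cascade parameters and the two-stream-like albedo form are illustrative choices, not the paper's exact model.

```python
import random

def albedo(tau, g=7.7):
    """Simple concave, two-stream-like albedo approximation for a
    conservatively scattering layer (illustrative form a = tau/(tau + g))."""
    return tau / (tau + g)

def plane_parallel_bias(f=0.5, c=0.8, levels=10, seed=4):
    """Bounded-cascade toy model: repeatedly split each column's liquid water
    into unequal halves with shrinking variance factor f*c**k (total water is
    conserved at every step), then return albedo(mean tau) minus the mean of
    the independent-pixel albedos, i.e. the plane-parallel albedo bias."""
    rng = random.Random(seed)
    cols = [10.0]                        # start: one column, mean optical depth 10
    for k in range(levels):
        frac = f * c ** k
        nxt = []
        for w in cols:
            s = 1 if rng.random() < 0.5 else -1
            nxt += [w * (1 + s * frac), w * (1 - s * frac)]
        cols = nxt
    mean_tau = sum(cols) / len(cols)     # conserved by construction
    ipa = sum(albedo(t) for t in cols) / len(cols)
    return albedo(mean_tau) - ipa        # > 0 by concavity of albedo(tau)
```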
Monte Carlo techniques for neutron capture therapy
Wheeler, F.J. (Idaho National Engineering Lab., Idaho Falls (United States))
1991-01-01
At the Idaho National Engineering Laboratory, the current emphasis in neutron capture therapy (NCT) research is on treatment of glioblastoma multiforme using an administered ¹⁰B-containing drug followed by irradiation in an epithermal neutron beam. In appropriate subjects, the brain tumor selectively takes up the ¹⁰B, and the thermal-neutron flux generated in the tissue causes destruction of tumor cells via ¹⁰B(n,α)⁷Li reactions. Unlike applications of conventional photon therapy, where simple methods can be used to calculate absorbed dose, NCT requires a rigorous three-dimensional solution of the Boltzmann transport equation for each unique application. This paper outlines new methods developed for the rtt-MC Monte Carlo code to address the unique requirements of NCT.
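The ¹⁰B(n,α)⁷Li dose component scales linearly with thermal flux and boron loading, which a back-of-envelope sketch makes concrete. The thermal capture cross-section (≈3840 b) and charged-particle energy per capture (≈2.33 MeV) used below are textbook approximations, and the function is our illustration, not part of rtt-MC, which solves the full transport problem.

```python
def boron_dose_rate(phi, ppm_b10, rho=1.0, sigma_b=3840e-24, e_mev=2.33):
    """Local dose rate (Gy/s) from 10B(n,alpha)7Li captures: thermal flux phi
    (n/cm^2/s) times the 10B number density (from a mass loading in ppm)
    times the thermal capture cross-section (cm^2) times the charged-particle
    energy released per capture; rho is tissue density in g/cm^3."""
    n_avogadro = 6.022e23
    n_b10 = ppm_b10 * 1e-6 * rho * n_avogadro / 10.0   # 10B atoms per cm^3
    mev_to_joule = 1.602e-13
    energy_density = phi * n_b10 * sigma_b * e_mev * mev_to_joule  # J/(cm^3 s)
    return energy_density * 1000.0 / rho               # J/g -> Gy/s
```

For a thermal flux of 1e9 n/cm²/s and 30 ppm ¹⁰B, this gives a dose rate on the order of millgrays per second, a plausible order of magnitude for NCT irradiations.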
Scott, Alison J D; Nahum, Alan E; Fenwick, John D
2009-07-01
The accuracy with which Monte Carlo models of photon beams generated by linear accelerators (linacs) can describe small-field dose distributions depends on the modeled width of the electron beam profile incident on the linac target. It is known that the electron focal spot width affects penumbra and cross-field profiles; here, the authors explore the extent to which source occlusion reduces linac output for smaller fields and larger spot sizes. A BEAMnrc Monte Carlo linac model has been used to investigate the variation in penumbra widths and small-field output factors with electron spot size. A formalism is developed separating head scatter factors into source occlusion and flattening filter factors. Differences between head scatter factors defined in terms of in-air energy fluence, collision kerma, and terma are explored using Monte Carlo calculations. Estimates of changes in kerma-based source occlusion and flattening filter factors with field size and focal spot width are obtained by calculating doses deposited in a narrow 2 mm wide virtual "milliphantom" geometry. The impact of focal spot size on phantom scatter is also explored. Modeled electron spot sizes of 0.4-0.7 mm FWHM generate acceptable matches to measured penumbra widths. However, the 0.5 cm field output factor is quite sensitive to electron spot width, the measured output only being matched by calculations for a 0.7 mm spot width. Because the spectra of the unscattered primary (Ψ_π) and head-scattered (Ψ_σ) photon energy fluences differ, miniphantom-based collision kerma measurements do not scale precisely with the total in-air energy fluence Ψ = Ψ_π + Ψ_σ but with Ψ_π + 1.2Ψ_σ. For most field sizes, on-axis collision kerma is independent of the focal spot size; but for a 0.5 cm field size and 1.0 mm spot width, it is reduced by around 7%, mostly due to source occlusion. 
The phantom scatter factor of the 0.5 cm field also shows some spot size dependence, decreasing by 6% (relative) as spot size is increased from 0.1 to 1.0 mm. The dependence of small-field source occlusion and output factors on the focal spot size makes this a significant factor in Monte Carlo modeling of small (< 1 cm) fields. Changes in penumbra width with spot size are not sufficiently large to accurately pinpoint spot widths. Consequently, while Monte Carlo models based exclusively on large-field data can quite accurately predict small-field profiles and PDDs, in the absence of experimental methods of determining incident electron beam profiles it will remain necessary to measure small-field output factors, fine-tuning modeled spot sizes to ensure good matching between the Monte Carlo and the measured output factors. PMID:19673212
NASA Astrophysics Data System (ADS)
Lauterbach, Marc H.; Lehmann, Jörg; Rosenow, Ulf F.
1999-05-01
The Monte Carlo electron transport code EGS4 was benchmarked against early experimental results obtained by Freyberger. These consist of absolute depth ionization and depth dose curves measured with a pencil beam of electrons, with a sharp energy definition of nominally 4, 10 and 20 MeV, extracted from a betatron. The Freyberger precision measurements were made with a wide plane-parallel ionization chamber in a slab phantom for the materials PMMA, C, Al, Cu, and Pb. The bremsstrahlung and, in some experiments, the forward and backward directed contributions had been determined separately. The pencil-beam/wide-chamber geometry is equivalent, with respect to the measurement of depth ionization and depth dose curves, to the more common wide-beam/point-detector geometry. However, it requires the simulation of merely one pencil beam position, and practically all particle histories contribute to the ionization in the wide air cavity; a considerable amount of computing time is thus saved. We applied a prototype of the new electron transport code PRESTA II. The results of our simulation generally agree very well, even in absolute terms, with experiment. Small deviations are found at lower energies and for high-Z materials. For low energies they may in part be explained by contamination with bremsstrahlung from the beam guide system and an overestimation of bremsstrahlung production in the experiment. The simulation of a gas-filled chamber within a high-Z absorber block seems to produce small errors in the calculation of ionization for electrons of approximately 4 MeV. Larger deviations for high-Z materials are attributed to the use of screened Rutherford cross sections, which leads to an underprediction of ionization from backscattered particles. Backscatter was found to be sufficiently accurate in the simulation of the PMMA absorber.
Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas
2010-01-01
A patient dose distribution was calculated by a 3D multi-group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by no more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations. PMID:20160682
Michael A. King; Weishi Xia; Daniel J. deVries; Tin-Su Pan; Benard J. Villegas; Seth Dahlberg; Benjamin M. W. Tsui; Michael H. Ljungberg; Hugh T. Morgan
1996-01-01
Background: Significant hepatobiliary accumulation of technetium-99m-labeled cardiac perfusion agents has been shown to cause alterations in the apparent localization of the agents in the cardiac walls. A Monte Carlo study was conducted to investigate the hypothesis that the cardiac count changes are due to the inconsistencies in the projection data input to reconstruction, and that correction of the causes of
NASA Astrophysics Data System (ADS)
Stephens, D. L.; Townsend, L. W.; Miller, J.; Zeitlin, C.; Heilbronn, L.
Deep-space manned flight as a reality depends on a viable solution to the radiation problem. Both acute and chronic radiation health threats are known to exist, with solar particle events as an example of the former and galactic cosmic rays (GCR) of the latter. In this experiment, iron ions of 1A GeV are used to simulate GCR and to determine the secondary radiation field created as the GCR-like particles interact with a thick target. A NASA-prepared food pantry locker was subjected to the iron beam and the secondary fluence recorded. A modified version of the Monte Carlo heavy ion transport code developed by Zeitlin at LBNL is compared with experimental fluence. The foodstuff is modeled as mixed nuts as defined by the 71st edition of the Chemical Rubber Company (CRC) Handbook of Physics and Chemistry. The results indicate a good agreement between the experimental data and the model. The agreement between model and experiment is determined using a linear fit to ordered pairs of data. The intercept is forced to zero. The slope fit is 0.825 and the R2 value is 0.429 over the resolved fluence region. The removal of an outlier, Z=14, gives values of 0.888 and 0.705 for slope and R2 respectively.
Ali, F; Waker, A J; Waller, E J
2014-10-01
Tissue-equivalent proportional counters (TEPC) can potentially be used as a portable and personal dosemeter in mixed neutron and gamma-ray fields, but what hinders this use is their typically large physical size. To formulate compact TEPC designs, the use of a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency and dose mean lineal energies and dose distributions calculated from each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values and for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities. PMID:24162375
Chen, Jinsong; Hubbard, Susan; Rubin, Yoram; Murray, Christopher J.; Roden, Eric E.; Majer, Ernest L.
2004-12-22
The paper demonstrates the use of ground-penetrating radar (GPR) tomographic data for estimating extractable Fe(II) and Fe(III) concentrations using a Markov chain Monte Carlo (MCMC) approach, based on data collected at the DOE South Oyster Bacterial Transport Site in Virginia. Analysis of multidimensional data including physical, geophysical, geochemical, and hydrogeological measurements collected at the site shows that GPR attenuation and lithofacies are most informative for the estimation. A statistical model is developed for integrating the GPR attenuation and lithofacies data. In the model, lithofacies is considered as a spatially correlated random variable and petrophysical models for linking GPR attenuation to geochemical parameters were derived from data at and near boreholes. Extractable Fe(II) and Fe(III) concentrations at each pixel between boreholes are estimated by conditioning to the co-located GPR data and the lithofacies measurements along boreholes through spatial correlation. Cross-validation results show that geophysical data, constrained by lithofacies, provided information about extractable Fe(II) and Fe(III) concentration in a minimally invasive manner and with a resolution unparalleled by other geochemical characterization methods. The developed model is effective and flexible, and should be applicable for estimating other geochemical parameters at other sites.
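The record above conditions geochemical estimates on geophysical data via Markov chain Monte Carlo. The core of such schemes is typically a random-walk Metropolis sampler; the following is a minimal generic sketch (function name, target density, and step size are hypothetical illustrations, not the authors' model):

```python
import math
import random

def metropolis(log_post, x0, step=0.5, n=5000, seed=3):
    """Random-walk Metropolis: propose a Gaussian step and accept it with
    probability min(1, posterior ratio); returns the chain of samples."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # uphill proposals are always accepted; avoids exp() overflow
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Standard-normal target: the chain mean should settle near zero.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0)
posterior_mean = sum(chain) / len(chain)
```

In the paper's setting the log-posterior would combine the lithofacies prior with the petrophysical likelihood linking GPR attenuation to Fe(II)/Fe(III) concentrations.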
NASA Technical Reports Server (NTRS)
Stephens, D. L. Jr; Townsend, L. W.; Miller, J.; Zeitlin, C.; Heilbronn, L.
2002-01-01
Deep-space manned flight as a reality depends on a viable solution to the radiation problem. Both acute and chronic radiation health threats are known to exist, with solar particle events as an example of the former and galactic cosmic rays (GCR) of the latter. In this experiment, iron ions of 1A GeV are used to simulate GCR and to determine the secondary radiation field created as the GCR-like particles interact with a thick target. A NASA-prepared food pantry locker was subjected to the iron beam and the secondary fluence recorded. A modified version of the Monte Carlo heavy ion transport code developed by Zeitlin at LBNL is compared with experimental fluence. The foodstuff is modeled as mixed nuts as defined by the 71st edition of the Chemical Rubber Company (CRC) Handbook of Physics and Chemistry. The results indicate a good agreement between the experimental data and the model. The agreement between model and experiment is determined using a linear fit to ordered pairs of data. The intercept is forced to zero. The slope fit is 0.825 and the R2 value is 0.429 over the resolved fluence region. The removal of an outlier, Z=14, gives values of 0.888 and 0.705 for slope and R2 respectively. © 2002 COSPAR. Published by Elsevier Science Ltd. All rights reserved.
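The model-versus-experiment comparison above uses a linear fit with the intercept forced to zero. A minimal sketch of such a fit (data and function name hypothetical; note that R² conventions for zero-intercept fits vary, and here it is computed against the mean of y):

```python
import numpy as np

def zero_intercept_fit(x, y):
    """Least-squares slope m for y = m*x (intercept forced to zero),
    with R^2 computed against the mean of y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    m = np.sum(x * y) / np.sum(x * x)   # closed-form zero-intercept slope
    ss_res = np.sum((y - m * x) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return m, 1.0 - ss_res / ss_tot

# Perfectly proportional data recovers slope 2 and R^2 = 1.
m, r2 = zero_intercept_fit([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```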
Giden, I. H.; Yilmaz, D.; Turduev, M.; Kurt, H.; Çolak, E.; Ozbay, E.
2014-01-20
To provide asymmetric propagation of light, we propose a graded index photonic crystal (GRIN PC) based waveguide configuration that is formed by introducing line and point defects as well as intentional perturbations inside the structure. The designed system utilizes isotropic materials and is purely reciprocal, linear, and time-independent, since neither magneto-optical materials are used nor time-reversal symmetry is broken. The numerical results show that the proposed scheme based on spatial-inversion symmetry breaking has different forward (with a peak value of 49.8%) and backward transmissions (4.11% at most) as well as relatively small round-trip transmission (at most 7.11%) in a large operational bandwidth of 52.6 nm. The signal contrast ratio of the designed configuration is above 0.80 in the telecom wavelengths of 1523.5–1576.1 nm. An experimental measurement is also conducted in the microwave regime: a strong asymmetric propagation characteristic is observed within the frequency interval of 12.8–13.3 GHz. The numerical and experimental results confirm the asymmetric transmission behavior of the proposed GRIN PC waveguide.
Karney, Charles
This is the user's manual for DEGAS 2, a Monte Carlo code for the study of neutral atom and molecular transport in confined plasmas. It is intended to cover all aspects of DEGAS 2 from the user's perspective. (Daren Stotler and Charles Karney, User's Guide for DEGAS 2, CVS Revision 1.2; charles.karney.info/biblio/stotler01.html)
NASA Astrophysics Data System (ADS)
Zhang, Xiaofeng
2012-03-01
Image formation in fluorescence diffuse optical tomography is critically dependent on construction of the Jacobian matrix. For clinical and preclinical applications, because of the highly heterogeneous characteristics of the medium, Monte Carlo methods are frequently adopted to construct the Jacobian. Conventional adjoint Monte Carlo methods typically compute the Jacobian by multiplying the photon density fields radiated from the source at the excitation wavelength and from the detector at the emission wavelength. Nonetheless, this approach assumes that the source and the detector in Green's function are reciprocal, which is invalid in general. This assumption is particularly questionable in small animal imaging, where the mean free path length of photons is typically only one order of magnitude smaller than the representative dimension of the medium. We propose a new method that does not rely on the reciprocity of the source and the detector, by tracing photon propagation entirely from the source to the detector. This method relies on perturbation Monte Carlo theory to account for the differences in optical properties of the medium at the excitation and the emission wavelengths. Compared to the adjoint methods, the proposed method more faithfully reflects the physical process of photon transport in diffusive media and is more efficient in constructing the Jacobian matrix for densely sampled configurations.
Parallel Finite Element Electron-Photon Transport Analysis on 2-D Unstructured Mesh
Drumm, C.R.
1999-01-01
A computer code has been developed to solve the linear Boltzmann transport equation on an unstructured mesh of triangles generated from a Pro/E model. An arbitrary arrangement of distinct material regions is allowed. Energy dependence is handled by solving over an arbitrary number of discrete energy groups. Angular dependence is treated by a Legendre-polynomial expansion of the particle cross sections and a discrete-ordinates treatment of the particle fluence. The resulting linear system is solved in parallel with a preconditioned conjugate-gradients method. The solution method is unique in that the space-angle dependence is solved simultaneously, eliminating the need for the usual inner iterations. Electron cross sections are obtained from a Goudsmit-Saunderson modified version of the CEPXS code. A one-dimensional version of the code has also been developed for testing and development purposes.
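The preconditioned conjugate-gradients solve mentioned in this abstract can be illustrated generically. The sketch below uses a simple Jacobi (diagonal) preconditioner on a small symmetric positive-definite system; the production code's preconditioner and matrix structure are not specified in the abstract, so everything here is a hypothetical stand-in:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for a symmetric positive-definite
    matrix A; M_inv applies the preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    z = M_inv(r)               # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system with a Jacobi (diagonal) preconditioner.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))
```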
NASA Astrophysics Data System (ADS)
Leyva, A.; Piñera, I.; Montaño, L. M.; Abreu, Y.; Cruz, C. M.
2008-08-01
During the earliest tests of a free-air ionization chamber, a poor response to the X-rays emitted by several sources was observed. Monte Carlo simulation of X-ray transport in matter was therefore employed to evaluate the chamber's behavior as an X-ray detector. The dependence of the deposited photon energy on depth, and its integral value over the whole active volume, were calculated. The results reveal that the geometry of the designed device can feasibly be optimized.
Leyva, A.; Pinera, I.; Abreu, Y.; Cruz, C. M.; Montano, L. M.
2008-08-11
During the earliest tests of a free-air ionization chamber, a poor response to the X-rays emitted by several sources was observed. Monte Carlo simulation of X-ray transport in matter was therefore employed to evaluate the chamber's behavior as an X-ray detector. The dependence of the deposited photon energy on depth, and its integral value over the whole active volume, were calculated. The results reveal that the geometry of the designed device can feasibly be optimized.
Sandborg, M; Dance, D R; Persliden, J; Carlsson, G A
1994-03-01
A Monte Carlo computer program has been developed for the simulation of X-ray photon transport in diagnostic X-ray examinations. The simulation takes account of the incident photon energy spectrum and includes a phantom (representing the patient), an anti-scatter grid and an image receptor. The primary objective for developing the program was to study and optimise the design of anti-scatter grids. The program estimates image quality in terms of contrast and signal-to-noise ratio, and radiation risk in terms of mean absorbed dose in the patient. It therefore serves as a tool for the optimisation of the radiographic procedure. A description is given of the program and the variance-reduction techniques used. The computational method was validated by comparison with measurements and other Monte Carlo simulations. PMID:8062549
NASA Astrophysics Data System (ADS)
Brualla, L.; Mayorga, P. A.; Flühs, A.; Lallena, A. M.; Sempau, J.; Sauerwein, W.
2012-11-01
Retinoblastoma is the most common eye tumour in childhood. According to the available long-term data, the best outcome regarding tumour control and visual function has been reached by external beam radiotherapy. The benefits of the treatment are, however, jeopardized by a high incidence of radiation-induced secondary malignancies and the fact that irradiated bones grow asymmetrically. In order to better exploit the advantages of external beam radiotherapy, it is necessary to improve current techniques by reducing the irradiated volume and minimizing the dose to the facial bones. To this end, dose measurements and simulated data in a water phantom are essential. A Varian Clinac 2100 C/D operating at 6 MV is used in conjunction with a dedicated collimator for the retinoblastoma treatment. This collimator conforms a ‘D’-shaped off-axis field whose irradiated area can be either 5.2 or 3.1 cm². Depth dose distributions and lateral profiles were experimentally measured. Experimental results were compared with Monte Carlo simulations run with the PENELOPE code and with calculations performed with the analytical anisotropic algorithm implemented in the Eclipse treatment planning system, using the gamma test. PENELOPE simulations agree reasonably well with the experimental data, with discrepancies in the dose profiles less than 3 mm of distance to agreement and 3% of dose. Discrepancies between the results found with the analytical anisotropic algorithm and the experimental data reach 3 mm and 6%. Although the discrepancies between the results obtained with the analytical anisotropic algorithm and the experimental data are notable, it is possible to consider this algorithm for routine treatment planning of retinoblastoma patients, provided the limitations of the algorithm are known and taken into account by the medical physicist and the clinician. Monte Carlo simulation is essential for knowing these limitations.
Monte Carlo simulation is required for optimizing the treatment technique and the dedicated collimator.
Bita, Ion
2006-01-01
Effects of breaking various symmetries on optical properties in ordered materials have been studied. Photonic crystals lacking space-inversion and time-reversal symmetries were shown to display nonreciprocal dispersion ...
Differential pencil beam dose computation model for photons.
Mohan, R; Chui, C; Lidofsky, L
1986-01-01
Differential pencil beam (DPB) is defined as the dose distribution relative to the position of the first collision, per unit collision density, for a monoenergetic pencil beam of photons in an infinite homogeneous medium of unit density. We have generated DPB dose distribution tables for a number of photon energies in water using the Monte Carlo method. The three-dimensional (3D) nature of the transport of photons and electrons is automatically incorporated in DPB dose distributions. Dose is computed by evaluating 3D integrals of DPB dose. The DPB dose computation model has been applied to calculate dose distributions for 60Co and accelerator beams. Calculations for the latter are performed using energy spectra generated with the Monte Carlo program. To predict dose distributions near the beam boundaries defined by the collimation system as well as blocks, we utilize the angular distribution of incident photons. Inhomogeneities are taken into account by attenuating the primary photon fluence exponentially utilizing the average total linear attenuation coefficient of intervening tissue, by multiplying photon fluence by the linear attenuation coefficient to yield the number of collisions in the scattering volume, and by scaling the path between the scattering volume element and the computation point by an effective density. PMID:3951411
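The superposition described in this abstract (first-collision density combined with a differential-pencil-beam kernel) can be sketched in one dimension. The exponential kernel and all parameter values below are hypothetical stand-ins for the tabulated Monte Carlo DPB distributions the paper uses:

```python
import numpy as np

def dose_from_dpb(depths, mu, psi0, dpb_kernel):
    """1-D sketch of the DPB model: the primary fluence attenuates
    exponentially, the first-collision density is mu * fluence, and dose
    is the superposition of a DPB kernel over all collision sites."""
    fluence = psi0 * np.exp(-mu * depths)     # primary photon fluence
    collisions = mu * fluence                 # first-collision density
    dz = depths[1] - depths[0]
    # dose(z) = sum over z' of collisions(z') * kernel(|z - z'|) * dz
    dist = np.abs(depths[:, None] - depths[None, :])
    return (dpb_kernel(dist) * collisions[None, :]).sum(axis=1) * dz

depths = np.linspace(0.0, 10.0, 101)
dose = dose_from_dpb(depths, mu=0.05, psi0=1.0,
                     dpb_kernel=lambda d: 0.5 * np.exp(-d))
```

In the paper the integral is three-dimensional, inhomogeneities are handled by density scaling of the path between collision site and computation point, and the kernels come from Monte Carlo tables rather than an analytic form.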
Nori, Franco
2013-01-01
Single-photon scattering on a two-level system is treated using the scattering formalism based on the Lippmann-Schwinger equation; transmission and reflection coefficients are calculated for single incident photons, including the coupling to a higher transverse mode.
Monte Carlo methods on advanced computer architectures
Martin, W.R. [Univ. of Michigan, Ann Arbor, MI (United States)
1991-12-31
Monte Carlo methods describe a wide class of computational methods that utilize random numbers to perform a statistical simulation of a physical problem, which itself need not be a stochastic process. For example, Monte Carlo can be used to evaluate definite integrals, which are not stochastic processes, or may be used to simulate the transport of electrons in a space vehicle, which is a stochastic process. The name Monte Carlo came about during the Manhattan Project to describe the new mathematical methods being developed which had some similarity to the games of chance played in the casinos of Monte Carlo. Particle transport Monte Carlo is just one application of Monte Carlo methods, and will be the subject of this review paper. Other applications of Monte Carlo, such as reliability studies, classical queueing theory, molecular structure, the study of phase transitions, or quantum chromodynamics calculations for basic research in particle physics, are not included in this review. The reference by Kalos is an introduction to general Monte Carlo methods, and references to other applications of Monte Carlo can be found in this excellent book. For the remainder of this paper, the term Monte Carlo will be synonymous with particle transport Monte Carlo, unless otherwise noted. 60 refs., 14 figs., 4 tabs.
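As a concrete instance of the non-stochastic use of Monte Carlo mentioned above, a definite integral can be estimated by averaging the integrand at uniformly sampled points. This is a textbook sketch, not code from the review:

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=1):
    """Estimate the definite integral of f over [a, b] by averaging f at
    uniformly sampled points and scaling by the interval width."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Integral of x^2 over [0, 1] is 1/3; the error shrinks like 1/sqrt(n).
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)
```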
NASA Astrophysics Data System (ADS)
Allaria, E.; Callegari, C.; Cocco, D.; Fawley, W. M.; Kiskinova, M.; Masciovecchio, C.; Parmigiani, F.
2010-07-01
FERMI@Elettra comprises two free electron lasers (FELs) that will generate short pulses (τ ~ 25-200 fs) of highly coherent radiation in the XUV and soft x-ray region. The use of external laser seeding together with a harmonic upshift scheme to obtain short wavelengths will give FERMI@Elettra the capability of producing high-quality, longitudinally coherent photon pulses. This capability, together with the possibilities of temporal synchronization to external lasers and control of the output photon polarization, will open up new experimental opportunities that are not possible with currently available FELs. Here, we report on the predicted radiation coherence properties and important configuration details of the photon beam transport system. We discuss the several experimental stations that will be available during initial operations in 2011, and we give a scientific perspective on possible experiments that can exploit the critical parameters of this new light source.
Allaria, Enrico; Callegari, Carlo; Cocco, Daniele; Fawley, William M.; Kiskinova, Maya; Masciovecchio, Claudio; Parmigiani, Fulvio
2010-04-05
FERMI@Elettra comprises two free electron lasers (FELs) that will generate short pulses (τ ~ 25 to 200 fs) of highly coherent radiation in the XUV and soft X-ray region. The use of external laser seeding together with a harmonic upshift scheme to obtain short wavelengths will give FERMI@Elettra the capability to produce high-quality, longitudinally coherent photon pulses. This capability, together with the possibilities of temporal synchronization to external lasers and control of the output photon polarization, will open new experimental opportunities not possible with currently available FELs. Here we report on the predicted radiation coherence properties and important configuration details of the photon beam transport system. We discuss the several experimental stations that will be available during initial operations in 2011, and we give a scientific perspective on possible experiments that can exploit the critical parameters of this new light source.
Lin, Shih-Hsien; Chen, Kao Chin; Lee, Sheng-Yu; Chiu, Nan Tsing; Lee, I Hui; Chen, Po See; Yeh, Tzung Lieh; Lu, Ru-Band; Chen, Chia-Chieh; Liao, Mei-Hsiu; Yang, Yen Kuang
2015-03-30
One of the consequences of heroin dependency is a huge expenditure on drugs. This underlying economic expense may be a grave burden for heroin users and may lead to criminal behavior, which is a huge cost to society. The neuropsychological mechanism related to heroin purchase remains unclear. Based on recent findings and the established dopamine hypothesis of addiction, we speculated that expenditure on heroin and central dopamine activity may be associated. A total of 21 heroin users were enrolled in this study. The annual expenditure on heroin was assessed, and the availability of the dopamine transporter (DAT) was measured by single-photon emission computed tomography (SPECT) using [(99m)Tc]TRODAT-1. Parametric and nonparametric correlation analyses indicated that annual expenditure on heroin was significantly and negatively correlated with the availability of striatal DAT. After adjustment for potential confounders, the predictive power of DAT availability remained significant. Striatal dopamine function may be associated with opioid purchasing behavior among heroin users, and the cycle of spiraling dysfunction in the dopamine reward system could play a role in this association. PMID:25659472
Lischinski, Dani
Photon maps and photon tracing: light propagation is simulated by shooting photons from the light sources; the incidences along each photon's path are stored; surface properties are implemented statistically via Russian roulette. For each incidence the photon map keeps the incidence point (in 3D) and the surface normal at that point.
Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor
NASA Astrophysics Data System (ADS)
Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert
2009-10-01
Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer.

Program summary: Program title: Phoogle-C/Phoogle-G. Catalogue identifier: AEEB_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 51 264. No. of bytes in distributed program, including test data, etc.: 2 238 805. Distribution format: tar.gz. Programming language: C++. Computer: Designed for Intel PCs; Phoogle-G requires an NVIDIA graphics card with support for CUDA 1.1. Operating system: Windows XP. Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures. RAM: 1 GB. Classification: 21.1. External routines: Charles Karney random number library; Microsoft Foundation Class library; NVIDIA CUDA library [1]. Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures.
Generally, parallel computing can be expensive, but recent advances in consumer grade graphics cards have opened the possibility of high-performance desktop parallel-computing. Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer grade graphics card from NVIDIA. Restrictions: The graphics card implementation uses single precision floating point numbers for all calculations. Only photon transport from an isotropic point-source is supported. The graphics-card version has no user interface. The simulation parameters must be set in the source code. The desktop version has a simple user interface; however some properties can only be accessed through an ActiveX client (such as Matlab). Additional comments: The random number library used has a LGPL ( http://www.gnu.org/copyleft/lesser.html) licence. Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium. References:http://www.nvidia.com/object/cuda_home.html. S. Prahl, M. Keijzer, Sl. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
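A drastically simplified, position-free sketch of the weight-deposition scheme such photon-transport codes use may help fix ideas. This is not the Prahl et al. algorithm or the Phoogle implementation, and all coefficients are hypothetical: at each interaction a photon deposits the absorbed fraction μa/μt of its remaining weight and continues until nearly extinct.

```python
import math
import random

def mean_absorbed_weight(mu_a, mu_s, n_photons=1000, seed=2):
    """Each photon takes exponentially distributed free paths and deposits
    the fraction mu_a / mu_t of its remaining weight at every interaction;
    in an infinite medium essentially all weight is eventually absorbed."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    absorb_frac = mu_a / mu_t
    total = 0.0
    for _ in range(n_photons):
        weight = 1.0
        while weight > 1e-4:                          # terminate faint photons
            _step = -math.log(1.0 - rng.random()) / mu_t  # free path ~ Exp(mu_t)
            total += weight * absorb_frac                 # deposit absorbed part
            weight *= 1.0 - absorb_frac                   # survivor continues
    return total / n_photons

mean_absorbed = mean_absorbed_weight(mu_a=0.1, mu_s=10.0)
```

Because every photon's random walk is independent, exactly this loop parallelizes naturally across GPU threads, which is the point the abstract makes.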
Zourari, K.; Pantelis, E.; Moutsatsos, A.; Sakelliou, L.; Georgiou, E.; Karaiskos, P.; Papagiannis, P.
2013-01-15
Purpose: To compare TG43-based and Acuros deterministic radiation transport-based calculations of the BrachyVision treatment planning system (TPS) with corresponding Monte Carlo (MC) simulation results in heterogeneous patient geometries, in order to validate Acuros and quantify the accuracy improvement it marks relative to TG43. Methods: Dosimetric comparisons in the form of isodose lines, percentage dose difference maps, and dose volume histogram results were performed for two voxelized mathematical models resembling an esophageal and a breast brachytherapy patient, as well as an actual breast brachytherapy patient model. The mathematical models were converted to digital imaging and communications in medicine (DICOM) image series for input to the TPS. The MCNP5 v.1.40 general-purpose simulation code input files for each model were prepared using information derived from the corresponding DICOM RT exports from the TPS. Results: Comparisons of MC and TG43 results in all models showed significant differences, as reported previously in the literature and expected from the inability of the TG43 based algorithm to account for heterogeneities and model specific scatter conditions. A close agreement was observed between MC and Acuros results in all models except for a limited number of points that lay in the penumbra of perfectly shaped structures in the esophageal model, or at distances very close to the catheters in all models. Conclusions: Acuros marks a significant dosimetry improvement relative to TG43. The assessment of the clinical significance of this accuracy improvement requires further work. Mathematical patient equivalent models and models prepared from actual patient CT series are useful complementary tools in the methodology outlined in this series of works for the benchmarking of any advanced dose calculation algorithm beyond TG43.
NASA Astrophysics Data System (ADS)
Dragovitsch, Peter; Linn, Stephan L.; Burbank, Mimi
1994-01-01
The Table of Contents for the book is as follows: * Preface * Heavy Fragment Production for Hadronic Cascade Codes * Monte Carlo Simulations of Space Radiation Environments * Merging Parton Showers with Higher Order QCD Monte Carlos * An Order-αs Two-Photon Background Study for the Intermediate Mass Higgs Boson * GEANT Simulation of Hall C Detector at CEBAF * Monte Carlo Simulations in Radioecology: Chernobyl Experience * UNIMOD2: Monte Carlo Code for Simulation of High Energy Physics Experiments; Some Special Features * Geometrical Efficiency Analysis for the Gamma-Neutron and Gamma-Proton Reactions * GISMO: An Object-Oriented Approach to Particle Transport and Detector Modeling * Role of MPP Granularity in Optimizing Monte Carlo Programming * Status and Future Trends of the GEANT System * The Binary Sectioning Geometry for Monte Carlo Detector Simulation * A Combined HETC-FLUKA Intranuclear Cascade Event Generator * The HARP Nucleon Polarimeter * Simulation and Data Analysis Software for CLAS * TRAP -- An Optical Ray Tracing Program * Solutions of Inverse and Optimization Problems in High Energy and Nuclear Physics Using Inverse Monte Carlo * FLUKA: Hadronic Benchmarks and Applications * Electron-Photon Transport: Always so Good as We Think? Experience with FLUKA * Simulation of Nuclear Effects in High Energy Hadron-Nucleus Collisions * Monte Carlo Simulations of Medium Energy Detectors at COSY Jülich * Complex-Valued Monte Carlo Method and Path Integrals in the Quantum Theory of Localization in Disordered Systems of Scatterers * Radiation Levels at the SSCL Experimental Halls as Obtained Using the CLOR89 Code System * Overview of Matrix Element Methods in Event Generation * Fast Electromagnetic Showers * GEANT Simulation of the RMC Detector at TRIUMF and Neutrino Beams for KAON * Event Display for the CLAS Detector * Monte Carlo Simulation of High Energy Electrons in Toroidal Geometry * GEANT 3.14 vs.
EGS4: A Comparison Using the DØ Uranium/Liquid Argon Calorimeter Geometry * Simulations with EGS4/PRESTA for Thin Si Sampling Calorimeter * SIBERIA -- Monte Carlo Code for Simulation of Hadron-Nuclei Interactions * CALOR89 Predictions for the Hanging File Test Configurations * Estimation of the Multiple Coulomb Scattering Error for Various Numbers of Radiation Lengths * Monte Carlo Generator for Nuclear Fragmentation Induced by Pion Capture * Calculation and Randomization of Hadron-Nucleus Reaction Cross Section * Developments in GEANT Physics * Status of the MC++ Event Generator Toolkit * Theoretical Overview of QCD Event Generators * Random Numbers? * Simulation of the GEM LKr Barrel Calorimeter Using CALOR89 * Recent Improvement of the EGS4 Code, Implementation of Linearly Polarized Photon Scattering * Interior-Flux Simulation in Enclosures with Electron-Emitting Walls * Some Recent Developments in Global Determinations of Parton Distributions * Summary of the Workshop on Simulating Accelerator Radiation Environments * Simulating the SDC Radiation Background and Activation * Applications of Cluster Monte Carlo Method to Lattice Spin Models * PDFLIB: A Library of All Available Parton Density Functions of the Nucleon, the Pion and the Photon and the Corresponding ?s Calculations * DTUJET92: Sampling Hadron Production at Supercolliders * A New Model for Hadronic Interactions at Intermediate Energies for the FLUKA Code * Matrix Generator of Pseudo-Random Numbers * The OPAL Monte Carlo Production System * Monte Carlo Simulation of the Microstrip Gas Counter * Inner Detector Simulations in ATLAS * Simulation and Reconstruction in H1 Liquid Argon Calorimetry * Polarization Decomposition of Fluxes and Kinematics in ep Reactions * Towards Object-Oriented GEANT -- ProdiG Project * Parallel Processing of AMY Detector Simulation on Fujitsu AP1000 * Enigma: An Event Generator for Electron-Photon- or Pion-Induced Events in the ~1 GeV Region * SSCSIM: Development and Use by 
the Fermilab SDC Group * The GEANT-CALOR Interface
Zeinali-Rafsanjani, B; Mosleh-Shirazi, M A; Faghihi, R; Karbasi, S; Mosalaei, A
2015-01-01
To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, the half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduction in run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for use in treatment planning and radiobiological modeling studies of superficial therapies, including chest-wall irradiation using kilovoltage beams. PMID:26170553
Fan, Shanhui
2010-01-01
analytical derivations for one- and two-photon scattering matrix elements based on operator equations in the Schrödinger picture and apply the Lippmann-Schwinger formalism to calculate the reflection to perform logic operations [12] or form a diode [13]. Exact solutions of one- and two-photon scattering have
NASA Astrophysics Data System (ADS)
Kim, Sung Jin; Kim, Sung Kyu; Kim, Dong Ho
2015-07-01
Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms, pencil beam (PB), collapsed cone (CC), and Monte-Carlo (MC), provided by our planning system were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated by using the PB, CC, and MC algorithms. Planning treatment volume (PTV) and organs at risk (OARs) delineations were performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3-0.5 cm computed tomography (CT) slices taken under normal respiration conditions. Intensity-modulated radiation therapy (IMRT) plans were calculated with the three algorithms for each patient. The plans were conducted on the Oncentra MasterPlan (PB and CC) and CMS Monaco (MC) treatment planning systems for 6 MV. The plans were compared in terms of the dose distribution in the target, the OAR volumes, and the monitor units (MUs). Furthermore, absolute dosimetry was measured using a three-dimensional diode array detector (ArcCHECK) to evaluate the dose differences in a homogeneous phantom. Comparing the dose distributions planned by using the PB, CC, and MC algorithms, the PB algorithm provided adequate coverage of the PTV. The MUs calculated using the PB algorithm were less than those calculated using the CC and MC algorithms. The MC algorithm showed the highest accuracy in terms of the absolute dosimetry. Differences were found when comparing the calculation algorithms. The PB algorithm estimated higher doses for the target than the CC and MC algorithms; indeed, it overestimated the dose compared with the CC and MC calculations. The MC algorithm showed better accuracy than the other algorithms.
Monte Carlo Application ToolKit (MCATK)
NASA Astrophysics Data System (ADS)
Adams, Terry; Nolen, Steve; Sweezy, Jeremy; Zukaitis, Anthony; Campbell, Joann; Goorley, Tim; Greene, Simon; Aulwes, Rob
2014-06-01
The Monte Carlo Application ToolKit (MCATK) is a component-based software library designed to build specialized applications and to provide new functionality for existing general purpose Monte Carlo radiation transport codes. We describe MCATK and its capabilities and present some verification and validation results.
Modeling multileaf collimators with the PEREGRINE Monte Carlo
Albright, N; Fujino, D H; J Wieczorek
1999-03-01
Multileaf collimators (MLCs) are becoming increasingly important for beam shaping and intensity modulated radiation therapy (IMRT). Their unique design can introduce subtle effects in the patient/phantom dose distribution. The PEREGRINE 3D Monte Carlo dose calculation system predicts dose by implementing a full Monte Carlo simulation of the beam delivery and patient/phantom system. As such, it provides a powerful tool to explore dosimetric effects of MLC designs. We have installed a new MLC modeling package into PEREGRINE. This package simulates full photon and electron transport in the MLC and includes tongue-and-groove construction and curved or straight leaf ends in the leaf shape geometry. We tested the accuracy of the PEREGRINE MLC package by comparing PEREGRINE predictions with ion chamber, diode, and photographic film measurements taken with a Varian 2100C using 6 and 18 MV photon beams. Profile and depth dose measurements were made for the MLC configured into annulus and comb patterns. In all cases, PEREGRINE modeled these measurements to within experimental uncertainties. Our results demonstrate PEREGRINE's accuracy for modeling MLC characteristics, and suggest that PEREGRINE would be an ideal tool to explore issues such as (1) underdosing between leaves due to the ''tongue-and-groove'' effect when doses from multiple MLC patterns are added together, (2) radiation leakage in the bullnose region, and (3) dose under a single leaf due to scatter in the patient.
Monte Carlo analysis of pion contribution to absorbed dose from Galactic cosmic rays
NASA Astrophysics Data System (ADS)
Aghara, S. K.; Blattnig, S. R.; Norbury, J. W.; Singleterry, R. C.
2009-04-01
Accurate knowledge of the physics of interaction, particle production and transport is necessary to estimate the radiation damage to equipment used on spacecraft and the biological effects of space radiation. For long duration astronaut missions, both on the International Space Station and the planned manned missions to the Moon and Mars, the shielding strategy must include a comprehensive knowledge of the secondary radiation environment. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. Galactic cosmic rays (GCR), composed of protons and heavier nuclei, have energies from a few MeV per nucleon to the ZeV region, with the spectra reaching flux maxima in the hundreds of MeV range. Therefore, the MeV-GeV region is most important for space radiation. Coincidentally, the pion production energy threshold is about 280 MeV. The question naturally arises as to how important these particles are with respect to space radiation problems. The space radiation transport code HZETRN (High charge (Z) and Energy TRaNsport), currently used by NASA, performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. In this paper, we present results from the Monte Carlo code MCNPX (Monte Carlo N-Particle eXtended), showing the effect of leptons and mesons when they are produced and transported in a GCR environment.
Monte Carlo Analysis of Pion Contribution to Absorbed Dose from Galactic Cosmic Rays
NASA Technical Reports Server (NTRS)
Aghara, S.K.; Battnig, S.R.; Norbury, J.W.; Singleterry, R.C.
2009-01-01
Accurate knowledge of the physics of interaction, particle production and transport is necessary to estimate the radiation damage to equipment used on spacecraft and the biological effects of space radiation. For long duration astronaut missions, both on the International Space Station and the planned manned missions to the Moon and Mars, the shielding strategy must include a comprehensive knowledge of the secondary radiation environment. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. Galactic cosmic rays (GCR), composed of protons and heavier nuclei, have energies from a few MeV per nucleon to the ZeV region, with the spectra reaching flux maxima in the hundreds of MeV range. Therefore, the MeV-GeV region is most important for space radiation. Coincidentally, the pion production energy threshold is about 280 MeV. The question naturally arises as to how important these particles are with respect to space radiation problems. The space radiation transport code HZETRN (High charge (Z) and Energy TRaNsport), currently used by NASA, performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. In this paper, we present results from the Monte Carlo code MCNPX (Monte Carlo N-Particle eXtended), showing the effect of leptons and mesons when they are produced and transported in a GCR environment.
Jukka Hiltunen; Kari K. Åkerman; Jyrki T. Kuikka; Kim A. Bergström; Christer Halldin; Tuomo Nikula; Pirkko Räsänen; Jari Tiihonen; Marko Vauhkonen; Jari Karhu; Jukka Kupila; Esko Länsimies; Lars Farde
1997-01-01
Iodine-123 labelled 2β-carbomethoxy-3β-(4-iodophenyl) (nor-β-CIT) is an analogue of β-CIT, which has high affinity for the serotonin transporter. Initial single-photon emission tomography (SPET) studies with [123I]nor-β-CIT were performed in five healthy volunteers. In addition, its metabolism in plasma was investigated with gradient high performance liquid chromatography. [123I]nor-β-CIT was prepared by a method which gave a specific radioactivity of more than
Nuclear data processing for energy release and deposition calculations in the MC21 Monte Carlo code
Trumbull, T. H. [Knolls Atomic Power Laboratory, PO Box 1072, Schenectady, NY 12301 (United States)
2013-07-01
With the recent emphasis on performing multiphysics calculations using Monte Carlo transport codes such as MC21, the need for accurate estimates of the energy deposition, and of the subsequent heating, has increased. However, the availability and quality of the data necessary to enable accurate neutron and photon energy deposition calculations can be an issue. A comprehensive method for handling the nuclear data required for energy deposition calculations in MC21 has been developed using the NDEX nuclear data processing system and leveraging the capabilities of NJOY. The method provides a collection of data to the MC21 Monte Carlo code supporting the computation of a wide variety of energy release and deposition tallies while also allowing calculations with different levels of fidelity to be performed. Detailed discussions of the usage of the various components of the energy release data are provided to demonstrate novel methods of borrowing photon production data, correcting for negative energy release quantities, and adjusting Q values when necessary to preserve energy balance. Since energy deposition within a reactor is a result of both neutron and photon interactions with materials, a discussion of the photon energy deposition data processing is also provided. (authors)
1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO
T. EVANS; ET AL
2000-08-01
We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
NASA Astrophysics Data System (ADS)
Vasdekis, A. E.; Scott, E. A.; Roke, S.; Hubbell, J. A.; Psaltis, D.
2013-07-01
Amphiphiles, under appropriate conditions, can self-assemble into nanoscale thin membrane vessels (vesicles) that encapsulate and hence protect and transport molecular payloads. Vesicles assemble naturally within cells but can also be artificially synthesized. In this article, we review the mechanisms and applications of light-field interactions with vesicles. By being associated with light-emitting entities (e.g., dyes, fluorescent proteins, or quantum dots), vesicles can act as imaging agents in addition to cargo carriers. Vesicles can also be optically probed on the basis of their nonlinear response, typically from the vesicle membrane. Light fields can be employed to transport vesicles by using optical tweezers (photon momentum) or can directly perturb the stability of vesicles and hence trigger the delivery of the encapsulated payload (photon energy). We conclude with emerging vesicle applications in biology and photochemical microreactors.
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angularly biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
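The efficiency gain from angular biasing comes from sampling emission directions from a distribution skewed toward the capsule while multiplying each particle's weight by the ratio of the true to the biased density, so the tally mean is unchanged and only the variance drops. A minimal sketch of that weight bookkeeping (toy 1-D angular geometry with a hypothetical linear biasing pdf, not the ICF code described above):

```python
import random

def analog_mu():
    """Analog (isotropic) direction cosine: p(mu) = 1/2 on [-1, 1]."""
    return 2.0 * random.random() - 1.0

def biased_mu(a=0.9):
    """Sample the biased pdf p_b(mu) = (1 + a*mu)/2 by rejection.

    Returns (mu, weight), where weight = p_true/p_b = 1/(1 + a*mu)
    restores an unbiased estimate."""
    while True:
        mu = 2.0 * random.random() - 1.0
        if random.random() * (1.0 + a) <= 1.0 + a * mu:  # accept with prob ∝ (1 + a*mu)
            return mu, 1.0 / (1.0 + a * mu)

def capsule_fraction(n, mu0=0.8, biased=False):
    """Estimate the fraction of emitted photons with mu > mu0 (the 'capsule' cone)."""
    total = 0.0
    for _ in range(n):
        if biased:
            mu, w = biased_mu()
        else:
            mu, w = analog_mu(), 1.0
        if mu > mu0:
            total += w
    return total / n

random.seed(1)
exact = (1.0 - 0.8) / 2.0   # isotropic fraction with mu > 0.8
print(exact, capsule_fraction(200000, biased=True))
```

Because the weight exactly cancels the bias, both estimators converge to the same answer; the biased one simply puts more samples into the cone that matters, which is the mechanism behind the quoted 50X speedup.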
NASA Technical Reports Server (NTRS)
1976-01-01
The program CTRANS is described, which was designed to perform radiative transfer computations in an atmosphere with horizontal inhomogeneities (clouds). Since the atmosphere-ground system was to be richly detailed, the Monte Carlo method was employed. This means that results are obtained through direct modeling of the physical process of radiative transport. The effects of atmospheric or ground albedo pattern detail are essentially built up from their impact upon the transport of individual photons. The CTRANS program actually tracks the photons backwards through the atmosphere, initiating them at a receiver and following them backwards along their path to the Sun. The pattern of incident photons generated through backwards tracking automatically reflects the importance to the receiver of each region of the sky. Further, through backwards tracking, the impact of the finite field of view of the receiver and variations in its response over the field of view can be directly simulated.
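The backward idea can be shown in a few lines: start photons at the receiver, sample directions over its field of view, and attenuate each along its outgoing slant path. This is a plane-parallel, absorption-only sketch with made-up parameters, not CTRANS itself:

```python
import math
import random

def backward_direct(n, tau=0.3, mu_min=0.95):
    """Backward MC: launch photons AT the receiver, uniformly over its
    field of view (direction cosines mu in [mu_min, 1]), and attenuate
    each along its slant path out of an atmosphere of vertical optical
    depth tau.  The tally is the mean direct transmission seen by the
    receiver over its field of view -- by reciprocity, the same number
    a forward Sun-to-receiver simulation would give."""
    total = 0.0
    for _ in range(n):
        mu = mu_min + (1.0 - mu_min) * random.random()  # direction within the FOV
        total += math.exp(-tau / mu)                    # survival along the slant path
    return total / n
```

Sampling only directions inside the field of view is what makes the backward walk efficient: every history contributes to the receiver, instead of most forward histories missing it.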
Burns, Kimberly A.
2009-08-01
The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples.
Core Calculation of 1 MWatt PUSPATI TRIGA Reactor (RTP) using Monte Carlo MVP Code System
Karim, Julia Abdul
2008-05-20
The Monte Carlo MVP code system was adopted for the Reaktor TRIGA PUSPATI (RTP) core calculation. The code was first developed by a group of researchers at the Japan Atomic Energy Agency (JAEA) in 1994. MVP is a general multi-purpose Monte Carlo code for neutron and photon transport calculations, able to simulate problems accurately. The calculation is based on the continuous-energy method. The code can adopt an accurate physics model, geometry description and variance reduction techniques, and runs faster than the conventional scalar method; it can achieve higher computational speed by several factors on a vector supercomputer. In this calculation, the RTP core was modeled as closely as possible to the real core, and results for keff, flux, fission densities and other quantities were obtained.
Core Calculation of 1 MWatt PUSPATI TRIGA Reactor (RTP) using Monte Carlo MVP Code System
NASA Astrophysics Data System (ADS)
Karim, Julia Abdul
2008-05-01
The Monte Carlo MVP code system was adopted for the Reaktor TRIGA PUSPATI (RTP) core calculation. The code was first developed by a group of researchers at the Japan Atomic Energy Agency (JAEA) in 1994. MVP is a general multi-purpose Monte Carlo code for neutron and photon transport calculations, able to simulate problems accurately. The calculation is based on the continuous-energy method. The code can adopt an accurate physics model, geometry description and variance reduction techniques, and runs faster than the conventional scalar method; it can achieve higher computational speed by several factors on a vector supercomputer. In this calculation, the RTP core was modeled as closely as possible to the real core, and results for keff, flux, fission densities and other quantities were obtained.
New model for dwelling dose calculation using Monte Carlo integration.
Allam, K A
2009-02-01
A new methodology and computer model using Monte Carlo simulation for indoor dose calculation are developed. A room model of six rectangular slabs of finite thickness, with a door or window in each slab, was used. A point-kernel photon transport model with self-absorption correction was applied for the dose calculations. New software was designed and programmed using the Pascal programming language, and evaluated for a standard room design. The calculated dose due to natural radionuclides in the concrete walls differs from the average model results by 0.21% for (238)U, 12.3% for (232)Th and 13.9% for (40)K; the variability of the specific dose rate with changing position, density and composition of the walls was also studied. The new model has more flexibility for realistic dose calculation of any room structure and tailing, which is not given in the published models. PMID:19287012
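A point-kernel model evaluated by Monte Carlo integration can be sketched as follows. The geometry (a single wall slab), source strength and attenuation coefficient are placeholders, buildup is omitted, and self-absorption is applied only along the straight ray through the wall, so this illustrates the method rather than the published six-slab model:

```python
import math
import random

def slab_dose_rate(n, sv=1.0, mu=0.02, lx=4.0, ly=3.0, t=0.2, d=2.0):
    """Monte Carlo integration of the point kernel over one rectangular
    slab (lx x ly, thickness t) for a receptor a distance d in front of
    its centre.  sv is the volumetric source term and mu the attenuation
    coefficient of the wall material (units arbitrary for this sketch)."""
    vol = lx * ly * t
    total = 0.0
    for _ in range(n):
        # Uniformly sample a source point inside the slab
        x = (random.random() - 0.5) * lx
        y = (random.random() - 0.5) * ly
        z = random.random() * t                  # depth into the wall
        r = math.sqrt(x * x + y * y + (d + z) ** 2)
        # In-wall path length is the fraction z/(d+z) of the ray;
        # point kernel: source * self-absorption / (4 pi r^2)
        total += sv * math.exp(-mu * r * z / (d + z)) / (4.0 * math.pi * r * r)
    return total / n * vol
```

The Monte Carlo average of the kernel over random source points, scaled by the slab volume, converges to the volume integral; summing six such slabs (with door/window cut-outs) gives the room model the abstract describes.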
Yuan Jiankui; Jette, David; Chen Weimin [ICT Radiotherapy, Livingston, New Jersey 07039 (United States); Department of Medical Physics, Rush University, Chicago, Illinois 60612 (United States)]
2008-09-15
A photon transport algorithm for fully three-dimensional radiotherapy treatment planning has been developed based on the discrete ordinates (S{sub N}) solution of the Boltzmann equation. The algorithm is characterized by orthogonal adaptive meshes, which place additional points where large gradients occur and a procedure to evaluate the collided flux using the representation of spherical harmonic expansion instead of the summation of the volume-weighted contribution from discrete angles. The Boltzmann equation was solved in the form of S{sub N} spatial, energy, and angular discretization with mitigation of ray effects by the first-collision source method. Unlike existing S{sub N} codes, which were designed for general purpose for multiparticle transport in areas such as nuclear engineering, our code is optimized for medical radiation transport. To validate the algorithm, several examples were employed to calculate the photon flux distribution. Numerical results show good agreement with the Monte Carlo calculations using EGSnrc.
Breakthroughs in Photonics 2009
Keller, Ursula
Breakthroughs in Photonics 2009: Coherent Photon Sources * Ultrafast Photonics * Nonlinear Photonics * Terahertz Photonics * Nano-Photonics * Silicon Photonics * Photonics Materials * Bio-Photonics * Magneto-Photonics * Photovoltaics and Sensors * Integrated Photonics Systems * Photo
B. M. Abramov; P. N. Alexeev; Yu. A. Borodin; S. A. Bulychjov; I. A. Dukhovskoy; A. P. Krutenkova; V. V. Kulikov; M. A. Martemianov; M. A. Matsyuk; E. N. Turdakina; A. I. Khanov; S. G. Mashnik
2015-02-05
Momentum spectra of hydrogen isotopes have been measured at 3.5 deg from C12 fragmentation on a Be target. Momentum spectra cover both the region of fragmentation maximum and the cumulative region. Differential cross sections span five orders of magnitude. The data are compared to predictions of four Monte Carlo codes: QMD, LAQGSM, BC, and INCL++. There are large differences between the data and predictions of some models in the high momentum region. The INCL++ code gives the best and almost perfect description of the data.
NASA Astrophysics Data System (ADS)
Javadi, M.; Abdi, Y.
2015-08-01
Monte Carlo continuous-time random walk simulation is used to study the effects of confinement on electron transport in porous TiO2. In this work, we have introduced a columnar structure instead of the thick layer of porous TiO2 used as the anode in conventional dye solar cells. Our simulation results show that the electron diffusion coefficient in the proposed columnar structure is significantly higher than the diffusion coefficient in the conventional structure. It is shown that electron diffusion in the columnar structure depends both on the cross-sectional area of the columns and on the porosity of the structure. Also, we demonstrate that such enhanced electron diffusion can be realized in columnar photo-electrodes with a cross-sectional area of ~1 μm2 and a porosity of 55%, by a simple and low-cost fabrication process. Our results open up a promising approach to achieving solar cells with higher efficiencies by engineering the photo-electrode structure.
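Continuous-time random walk transport of this kind is commonly implemented as hops on a lattice with heavy-tailed trap-release times. A generic 1-D sketch (the lattice constant, tail exponent and Pareto waiting-time law are illustrative assumptions, not the authors' parameters):

```python
import random

def ctrw_msd(n_walkers, n_hops, alpha=0.7, hop=1.0):
    """Continuous-time random walk on a 1-D lattice: each hop moves +-hop
    after a waiting time drawn from the heavy-tailed law t = u**(-1/alpha)
    (Pareto with exponent alpha < 1, mimicking release from an exponential
    distribution of trap depths).  Returns (mean elapsed time,
    mean-square displacement) over the ensemble."""
    t_sum = msd = 0.0
    for _ in range(n_walkers):
        x = t = 0.0
        for _ in range(n_hops):
            u = 1.0 - random.random()              # u in (0, 1]
            t += u ** (-1.0 / alpha)               # trap-release waiting time
            x += hop if random.random() < 0.5 else -hop
        t_sum += t
        msd += x * x
    return t_sum / n_walkers, msd / n_walkers
```

An effective diffusion coefficient can be read off as <x²>/(2t); confinement of the kind studied above would enter by restricting hops to the sites of the columnar geometry, which changes how <x²> grows per unit time.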
Cuplov, Vesna; Buvat, Iréne; Pain, Frédéric; Jan, Sébastien
2014-02-01
The Geant4 Application for Emission Tomography (GATE) is an advanced open-source software package dedicated to Monte-Carlo (MC) simulations in medical imaging involving photon transport (positron emission tomography, single photon emission computed tomography, computed tomography) and in particle therapy. In this work, we extend GATE to support simulations of optical imaging, such as bioluminescence or fluorescence imaging, and validate it in simple geometries against MCML, the standard MC simulation tool for multilayered media in biomedical optics. A full simulation set-up for molecular optical imaging (bioluminescence and fluorescence) is implemented in GATE, and images of the light distribution emitted from a phantom demonstrate the relevance of using GATE for optical imaging simulations. PMID:24522804
Exact and efficient solution of the radiative transport equation for the semi-infinite medium
Liemert, André; Kienle, Alwin
2013-01-01
An accurate and efficient solution of the radiative transport equation is proposed for modeling the propagation of photons in the three-dimensional anisotropically scattering half-space medium. The exact refractive index mismatched boundary condition is considered and arbitrary rotationally invariant scattering functions can be applied. The obtained equations are verified with Monte Carlo simulations in the steady-state, temporal frequency, and time domains resulting in an excellent agreement. PMID:23774820
Progressive Photon Mapping
Kazhdan, Michael
Toshiya Hachisuka (UC San Diego), Shinji Ogaki (The University of Nottingham), Henrik Wann Jensen (UC San Diego). Figure 1 compares path tracing, bidirectional path tracing, Metropolis light transport, photon mapping and progressive photon mapping: a glass lamp illuminates a wall and generates a complex …
Schauer, Petr
2007-01-01
The new extended Monte Carlo (MC) simulation method for photon transport in S(T)EM back-scattered electron (BSE) scintillation detection systems of various shapes is presented in this paper. The method makes use of the random generation of photon emission from a scintillator luminescent centre and describes the trajectory of photons and the efficiency of their transport toward the photocathode of the photomultiplier tube. The paper explains a new algorithm for determining the position of interaction of the photon with the surface of the single-crystal scintillator or of the light guide with nearly arbitrary shapes. Some examples of the utilization of the simulation method are also included, and conclusions are drawn for very simple edge-guided signal (EGS) scintillation detection systems. The computer-optimized design of the BSE scintillation detector for the S 4000 Hitachi SEM was chosen to demonstrate the capability of this MC simulation method. PMID:17957744
Gilchrist, James F.
Transport on the Current Injection Efficiency of InGaAsN Quantum-Well Lasers. Jeng-Ya Yeh, Luke J. Mawst, and Nelson Tansu. Abstract: A theoretical and experimental study demonstrates that the current injection efficiency of quantum-well (QW) lasers …
Monte Carlo Modeling of Hot Electron Transport in Bulk AlAs, AlGaAs and GaAs at Room Temperature
NASA Astrophysics Data System (ADS)
Arabshahi, H.; Khalvati, M. R.; Rokn-Abadi, M. Rezaee
The results of an ensemble Monte Carlo simulation of the electron drift velocity response to the applied field in bulk AlAs, AlGaAs and GaAs are presented. All dominant scattering mechanisms in the structures considered have been taken into account. For all materials, it is found that electron velocity overshoot only occurs when the electric field is increased to a value above a certain critical field, unique to each material. This critical field is strongly dependent on the material parameters. Transient velocity overshoot has also been simulated, with the sudden application of fields up to 1600 kV/m, appropriate to the gate-drain fields expected within an operational field effect transistor. The electron drift velocity relaxes to the saturation value of ~10^5 m/s within 4 ps, for all crystal structures. The steady-state and transient velocity overshoot characteristics are in fair agreement with other recent calculations.
Dirac tensor with heavy photon
Bytev, V. V.; Kuraev, E. A., E-mail: kuraev@theor.jinr.ru [Joint Institute for Nuclear Research, Bogoliubov Laboratory of Theoretical Physics (Russian Federation); Scherbakova, E. S., E-mail: scherbak@mail.desy.de [Hamburg University (Germany)
2013-03-15
For the large-angle hard-photon emission by initial leptons in the process of high-energy annihilation of e{sup +}e{sup -} to hadrons, the Dirac tensor is obtained by taking the lowest-order radiative corrections into account. The case of large-angle emission of two hard photons by initial leptons is considered. In the final result, the kinematic case of collinear emission of hard photons and soft virtual and real photons is included; it can be used for the construction of Monte-Carlo generators.
A Monte-Carlo maplet for the study of the optical properties of biological tissues
NASA Astrophysics Data System (ADS)
Yip, Man Ho; Carvalho, M. J.
2007-12-01
Monte-Carlo simulations are commonly used to study complex physical processes in various fields of physics. In this paper we present a Maple program intended for Monte-Carlo simulations of photon transport in biological tissues. The program has been designed so that the input data and output display can be handled by a maplet (an easy and user-friendly graphical interface), named the MonteCarloMaplet. A thorough explanation of the programming steps and how to use the maplet is given. Results obtained with the Maple program are compared with corresponding results available in the literature.
Program summary:
Program title: MonteCarloMaplet
Catalogue identifier: ADZU_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZU_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3251
No. of bytes in distributed program, including test data, etc.: 296 465
Distribution format: tar.gz
Programming language: Maple 10
Computer: Acer Aspire 5610 (any running Maple 10)
Operating system: Windows XP professional (any running Maple 10)
Classification: 3.1, 5
Nature of problem: Simulate the transport of radiation in biological tissues.
Solution method: The Maple program follows the steps of the C program of L. Wang et al. [L. Wang, S.L. Jacques, L. Zheng, Computer Methods and Programs in Biomedicine 47 (1995) 131-146]; the Maple library routine for random number generation is used [Maple 10 User Manual, © Maplesoft, a division of Waterloo Maple Inc., 2005].
Restrictions: Running time increases rapidly with the number of photons used in the simulation.
Unusual features: A maplet (graphical user interface) has been programmed for data input and output. Note that the Monte-Carlo simulation was programmed with Maple 10. If attempting to run the simulation with an earlier version of Maple, appropriate modifications (regarding typesetting fonts) are required; once these are made, the worksheet runs without problems, though some of the windows of the maplet may still appear distorted.
Running time: Depends essentially on the number of photons used in the simulation. Elapsed times for particular runs are reported in the main text.
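The core loop of the Wang et al. scheme that such tissue-optics programs follow fits in a few lines: step length s = -ln(ξ)/μt, partial weight deposition μa/μt at each interaction, rescattering, and a termination rule. The sketch below strips it to one spatial dimension with isotropic scattering and a plain weight cutoff in place of Russian roulette, so it illustrates the bookkeeping, not the full MCML physics:

```python
import math
import random

def absorbed_fraction(n, mua=0.1, mus=0.9):
    """Weighted photon walk after the MCML scheme, reduced to 1-D depth:
    step length s = -ln(xi)/mu_t, deposit a fraction mu_a/mu_t of the
    weight at each interaction site, rescatter isotropically in the depth
    cosine, and kill photons that re-cross the surface (z < 0).  Returns
    the absorbed fraction in a semi-infinite medium; the remainder
    escapes back through the surface as diffuse reflectance."""
    mut = mua + mus
    absorbed = 0.0
    for _ in range(n):
        z, mu, w = 0.0, 1.0, 1.0                # launch at the surface, heading inward
        while w > 1e-4:                          # cutoff stands in for roulette
            z += mu * (-math.log(1.0 - random.random()) / mut)
            if z < 0.0:                          # escaped through the surface
                break
            dw = w * mua / mut                   # partial absorption at this site
            absorbed += dw
            w -= dw
            mu = 2.0 * random.random() - 1.0     # isotropic rescattering (g = 0)
    return absorbed / n
```

The real code replaces the isotropic rescattering with Henyey-Greenstein sampling and handles layer boundaries and Fresnel reflection, but the weight arithmetic is exactly this.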
NSDL National Science Digital Library
Bryant, Joyce
A unit designed to increase students' knowledge and understanding of diesel and gasoline engines, providing an introduction for students interested in more specialized training in the automobile field and in its scientific principles through math, science, and chemistry. It will also help students realize the importance of transportation and meet their needs in math through problem solving, by working with materials from their own world and developing skills and techniques through hands-on experience. Includes more than 20 problems to solve.
NASA Astrophysics Data System (ADS)
Zhang, X.; Wen, J.; Sun, C.
Coupled thermal and carrier transport (electron/hole generation, recombination, diffusion and drift) in laser photoetching of a GaAs thin film is investigated. A new volumetric heating mechanism originating from SRH (Shockley-Read-Hall) non-radiative recombination and photon recycling is proposed and modeled based on recent experimental findings. Both volumetric SRH heating and Joule heating are found to be important in the carrier transport, as well as in the etching process. SRH heating and Joule heating are primarily confined within the space-charge region, which extends about 20 nm from the GaAs surface. The surface temperature rises rapidly as the laser intensity exceeds 10^5 W/m^2; below this intensity, the thermal effect is negligible. The etch rate is found to depend on the competition between photovoltaic and photothermal effects on the surface potential. At high laser intensity, the etch rate is increased by more than 100%, due to SRH and Joule heating.
Treating electron transport in MCNP{sup trademark}
Hughes, H.G.
1996-12-31
The transport of electrons and other charged particles is fundamentally different from that of neutrons and photons. A neutron in aluminum, slowing down from 0.5 MeV to 0.0625 MeV, will have about 30 collisions; a photon will have fewer than ten. An electron undergoing the same energy loss will experience 10^5 individual interactions. This great increase in computational complexity makes a single-collision Monte Carlo approach to electron transport infeasible for many situations of practical interest. Considerable theoretical work has been done to develop a variety of analytic and semi-analytic multiple-scattering theories for the transport of charged particles. The theories used in the algorithms in MCNP are the Goudsmit-Saunderson theory for angular deflections, the Landau theory of energy-loss fluctuations, and the Blunck-Leisegang enhancements of the Landau theory. In order to follow an electron through a significant energy loss, it is necessary to break the electron's path into many steps. These steps are chosen to be long enough to encompass many collisions (so that the multiple-scattering theories are valid) but short enough that the mean energy loss in any one step is small (for the approximations in the multiple-scattering theories). The energy loss and angular deflection of the electron during each step can then be sampled from probability distributions based on the appropriate multiple-scattering theories. This subsumption of the effects of many individual collisions into single steps that are sampled probabilistically constitutes the "condensed history" Monte Carlo method. This method is exemplified in the ETRAN series of electron/photon transport codes. The ETRAN codes are also the basis for the Integrated TIGER Series, a system of general-purpose, application-oriented electron/photon transport codes. The electron physics in MCNP is similar to that of the Integrated TIGER Series.
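The stepping logic described above can be caricatured in a few lines of Python. The Gaussian energy-loss straggling and angular kick below are crude stand-ins for the Landau/Blunck-Leisegang and Goudsmit-Saunderson distributions, chosen only to show how a condensed-history loop is organized; the step fraction, spreads, and cutoff are invented:

```python
import random

def condensed_history(e0, e_cut=0.01, frac=0.08, seed=2):
    """Follow one electron from e0 (MeV) down to e_cut using condensed-history
    steps.  Each step lumps many collisions into one sampled energy loss
    (~frac of the current energy, with Gaussian straggling as a stand-in for
    Landau/Blunck-Leisegang) and one net angular deflection (Gaussian kick as
    a stand-in for Goudsmit-Saunderson)."""
    rng = random.Random(seed)
    e, theta, steps = e0, 0.0, 0
    while e > e_cut:
        mean_loss = frac * e                        # step sized for ~8% loss
        loss = max(0.0, rng.gauss(mean_loss, 0.2 * mean_loss))
        e -= loss                                   # energy-loss straggling
        theta += rng.gauss(0.0, 0.05)               # multiple-scattering kick
        steps += 1
    return steps, theta
```

The point of the sketch is the bookkeeping: a few dozen steps replace the ~10^5 individual interactions a single-collision treatment would require.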
Monte Carlo tools to supplement experimental microdosimetric spectra.
Chiriotti, S; Moro, D; Conte, V; Colautti, P; D'Agostino, E; Sterpin, E; Vynckier, S
2014-10-01
Tissue-equivalent proportional counters (TEPCs) are widely used in experimental microdosimetry for characterising radiation quality in radiation protection and radiation therapy environments. Generally, TEPCs are filled with tissue-equivalent gas mixtures at low gas pressure to simulate tissue site sizes similar to the cell nucleus (1 or 2 µm). Simulations of the TEPC response with Monte Carlo (MC) codes can be used to supplement experimental measurements. Most general-purpose MC codes currently available resort to the condensed-history approach to model electron transport and do not transport low-energy electrons (<1 keV), which can lead to systematic errors, especially in thin layers and at gas-condensed-medium interfaces. In this work, experimental microdosimetric spectra of (60)Co and (137)Cs radiation at different simulated sizes (from 1.0 to 3.0 µm) in pure propane are compared with spectra simulated with two general-purpose codes, FLUKA and PENELOPE, both of which include a detailed simulation of electron-photon transport in arbitrary materials, including gases. PMID:24132390
Chang, Jui-Yung; Wang, Liping
2015-01-01
Coupled surface plasmon/phonon polaritons and hyperbolic modes are known to enhance radiative transport across nanometer vacuum gaps but usually require identical materials. It becomes crucial to achieve strong near-field energy transfer between dissimilar materials for applications like near-field thermophotovoltaic and thermal rectification. In this work, we theoretically demonstrate extraordinary near-field radiative transport between a nanostructured metamaterial emitter and a graphene-covered planar receiver. Strong near-field coupling with two orders of magnitude enhancement in the spectral heat flux is achieved at the gap distance of 20 nm. By carefully selecting the graphene chemical potential and doping levels of silicon nanohole emitter and silicon plate receiver, the total near-field radiative heat flux can reach about 500 times higher than the far-field blackbody limit between 400 K and 300 K. The physical mechanisms are elucidated by the near-field surface plasmon coupling with fluctuational elec...
Application of Monte Carlo methods in tomotherapy and radiation biophysics
NASA Astrophysics Data System (ADS)
Hsiao, Ya-Yun
Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University).
Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published experimental and theoretical studies for 60Co gamma-rays and low-energy x-rays. The reported studies provide new information about the potential biological consequences of diagnostic x-rays and selected gamma-emitting radioisotopes used in brachytherapy for the treatment of cancer. The proposed methodology is computationally efficient and may also be useful in proton therapy, space applications or internal dosimetry.
NASA Astrophysics Data System (ADS)
Madsen, J. R.; Akabani, G.
2014-05-01
The present state of modeling radiation-induced effects at the cellular level does not account for the microscopic inhomogeneity of the nucleus arising from its non-aqueous contents (e.g. proteins, DNA), approximating the entire cellular nucleus as a homogeneous medium of water. Charged-particle track-structure calculations using this approximation therefore neglect roughly 30% of the molecular variation within the nucleus. To truly understand what happens when biological matter is irradiated, charged-particle track-structure calculations need detailed knowledge of the secondary electron cascade, resulting from interactions not only with the primary biological component, water, but also with the non-aqueous contents, down to very low energies. This paper presents our work on a generic approach for calculating low-energy interaction cross-sections between incident charged particles and individual molecules. The purpose of our work is to develop a self-consistent computational method for predicting molecule-specific interaction cross-sections, such as those of the component molecules of DNA and proteins (i.e. nucleotides and amino acids), in the very low-energy regime. These results would then be applied in a track-structure code and thereby relax the homogeneous-water approximation. The present methodology, inspired by seeking a combination of the accuracy of quantum mechanics and the scalability, robustness, and flexibility of Monte Carlo methods, begins with the calculation of a solution to the many-body Schrödinger equation and proceeds to use Monte Carlo methods to calculate the perturbations in the internal electron field to determine the interaction processes, such as ionization and excitation.
As a test of our model, the approach is applied to a water molecule in the same method as it would be applied to a nucleotide or amino acid and compared with the low-energy cross-sections from the GEANT4-DNA physics package of the Geant4 simulation toolkit for the energy ranges of 7 eV to 1 keV.
Burke, D.L.
1982-10-01
Studies of photon-photon collisions are reviewed with particular emphasis on new results reported to this conference. These include results on light meson spectroscopy and deep inelastic eγ scattering. Considerable work has now been accumulated on resonance production in γγ collisions. Preliminary high-statistics studies of the photon structure function F_2^γ(x, Q^2) are given, and comments are made on the problems that remain to be solved.
Heinisch, Howard L.; Singh, Bachu N.
2003-03-01
Within the last decade molecular dynamics simulations of displacement cascades have revealed that glissile clusters of self-interstitial crowdions are formed directly in cascades. Also, under various conditions, a crowdion cluster can change its Burgers vector and glide along a different close-packed direction. In order to incorporate the migration properties of crowdion clusters into analytical rate theory models, it is necessary to describe the reaction kinetics of defects that migrate one-dimensionally with occasional changes in their Burgers vector. To meet this requirement, atomic-scale kinetic Monte Carlo (KMC) simulations have been used to study the defect reaction kinetics of one-dimensionally migrating crowdion clusters as a function of the frequency of direction changes, specifically to determine the sink strengths for such one-dimensionally migrating defects. The KMC experiments are used to guide the development of analytical expressions for use in reaction rate theories and especially to test their validity. Excellent agreement is found between the results of KMC experiments and the analytical expressions derived for the transition from one-dimensional to three-dimensional reaction kinetics. Furthermore, KMC simulations have been performed to investigate the significant role of crowdion clusters in the formation and stability of void lattices. The necessity for both one-dimensional migration and Burgers vectors changes for achieving a stable void lattice is demonstrated.
NASA Technical Reports Server (NTRS)
Berger, M. J.; Seltzer, S. M.; Maeda, K.
1972-01-01
The penetration, diffusion and slowing down of electrons in a semi-infinite air medium have been studied by the Monte Carlo method. The results are applicable to the atmosphere at altitudes up to 300 km. Most of the results pertain to monoenergetic electron beams injected into the atmosphere at a height of 300 km, either vertically downwards or with a pitch-angle distribution isotropic over the downward hemisphere. Some results were also obtained for various initial pitch angles between 0 deg and 90 deg. Information has been generated concerning the following topics: (1) the backscattering of electrons from the atmosphere, expressed in terms of backscattering coefficients, angular distributions and energy spectra of reflected electrons, for incident energies T(o) between 2 keV and 2 MeV; (2) energy deposition by electrons as a function of altitude, down to 80 km, for T(o) between 2 keV and 2 MeV; (3) the corresponding energy deposition by electron-produced bremsstrahlung, down to 30 km; (4) the evolution of the electron flux spectrum as a function of atmospheric depth, for T(o) between 2 keV and 20 keV. Energy deposition results are given for incident electron beams with exponential and power-exponential spectra.
National Photonics Skills Standard for Technicians.
ERIC Educational Resources Information Center
Center for Occupational Research and Development, Inc., Waco, TX.
This document defines "photonics" as the generation, manipulation, transport, detection, and use of light information and energy whose quantum unit is the photon. The range of applications of photonics extends from energy generation to detection to communication and information processing. Photonics is at the heart of today's communication…
COMET-PE: an incident fluence response expansion transport method for radiotherapy calculations.
Hayward, Robert M; Rahnema, Farzad
2013-05-21
Accurate dose calculation is a central component of radiotherapy treatment planning. A new method of dose calculation has been developed based on transport theory and validated by comparison to Monte Carlo methods. The coarse mesh transport method has been extended to allow coupled photon-electron transport in 3D. The method combines stochastic pre-computation with a deterministic solver to achieve high accuracy and precision. To enhance the method for radiotherapy calculations, a new angular basis was derived, and an analytical source treatment was developed. Validation was performed by comparison to DOSXYZnrc using a heterogeneous interface phantom composed of water, aluminum, and lung. Calculations of both kinetic energy released per unit mass and dose were compared. Good agreement was found with a maximum error and root mean square relative error of less than 1.5% for all cases. The results show that the new method achieves an accuracy comparable to Monte Carlo. PMID:23603734
Inhomogeneity Effects on Dose Deposition for Photon and Electron Beams
NASA Astrophysics Data System (ADS)
Yu, Xinsheng
1989-03-01
A long-standing problem in radiation therapy has been to correct dose distributions for the presence of inhomogeneities. The availability of CT and MRI imaging for treatment planning has led to many new algorithms for making such corrections. Unfortunately, each of these methods shows a limited range of validity outside of which errors exceeding 10% may occur due to the assumptions made in the algorithm. For valid assumptions to be made, the physical processes involved in the perturbation effects of inhomogeneities on radiation dose deposition must be identified and understood. The work presented in this thesis aims to achieve this goal. Inhomogeneity effects on photon dose deposition have been studied by means of experimental measurements and theoretical simulations. The results indicate that changes in atomic number can result in large changes in dose by perturbing the transport of the secondary electrons. Electron transport theory was then studied with emphasis on electron multiple scattering. The small-angle approximation in the Fermi-Eyges theory and the assumption of semi-infinite slab geometry in current electron dose calculation algorithms were found to cause inaccurate prediction of dose in the vicinity of local inhomogeneities. Using the concept of mean path, a new multiray model has been derived, which is sensitive to local inhomogeneities and gives good agreement with Monte Carlo simulations. Based on the understanding of both photon and electron transport, a new photon-electron cascade model is proposed for calculating photon dose deposition. The model explicitly includes the transport of the secondary charged particles and is applicable in the presence of inhomogeneities with different electron densities and atomic numbers.
Brodsky, S.J.
1988-07-01
Highlights of the VIIIth International Workshop on Photon-Photon Collisions are reviewed. New experimental and theoretical results were reported in virtually every area of γγ physics, particularly in exotic resonance production and tests of quantum chromodynamics, where asymptotic freedom and factorization theorems provide predictions for both inclusive and exclusive γγ reactions at high momentum transfer. 73 refs., 12 figs.
Va`vra, J.
1995-10-01
J. Seguinot and T. Ypsilantis have recently described the theory and history of Ring Imaging Cherenkov (RICH) detectors. In this paper, I will expand on these excellent review papers by covering the various photon detector designs in greater detail, and by including discussion of mistakes made, and detector problems encountered, along the way. Photon detectors are among the most difficult devices used in physics experiments, because they must achieve high efficiency both for photon transport and for the detection of single photo-electrons. For gaseous devices, this requires the correct choice of gas gain in order to prevent breakdown and wire aging, together with the use of low-noise electronics having the maximum possible amplification. In addition, the detector must be constructed of materials that resist corrosion by the photosensitive substances used, the enclosure must be tightly sealed in order to prevent oxygen leaks, etc. The most critical step is the selection of the photocathode material. Typically, a choice must be made between a solid (CsI) or gaseous photocathode (TMAE, TEA). A conservative approach favors a gaseous photocathode, since it is continuously being replaced by flushing, and permits the photon detectors to be easily serviced (the air-sensitive photocathode can be removed at any time). In addition, it can be argued that we now know how to handle TMAE, which, as is generally accepted, is the best photocathode material available as far as quantum efficiency is concerned. However, it is a very fragile molecule, and therefore its use may result in relatively fast wire aging. A possible alternative is TEA, which, in the early days, was rejected because it requires expensive CaF2 windows, which could easily be contaminated and thus lose their UV transmission in the region of 8.3 eV.
Monte Carlo learning/biasing experiment with intelligent random numbers
Booth, T.E.
1985-01-01
A Monte Carlo learning and biasing technique is described that does its learning and biasing in the random number space rather than the physical phase-space. The technique is probably applicable to all linear Monte Carlo problems, but no proof is provided here. Instead, the technique is illustrated with a simple Monte Carlo transport problem. Problems encountered, problems solved, and speculations about future progress are discussed. 12 refs.
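The essential bookkeeping of biasing in random-number space, drawing the driving uniform variate from a skewed density and multiplying the score by the likelihood ratio, can be shown on a toy slab-penetration problem. This is not the learning algorithm of the paper, only the weighting it builds on; the density q(u) = a·u^(a-1) and all parameter values are arbitrary choices for the example:

```python
import math
import random

def transmit_prob(mu, t, n, a=0.2, seed=3):
    """Estimate exp(-mu*t), the chance a photon crosses a slab of thickness t
    without colliding, by biasing the driving random number u toward small
    values (density q(u) = a*u**(a-1), a < 1) and weighting each score by the
    likelihood ratio 1/q(u)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = (1.0 - rng.random()) ** (1.0 / a)   # inverse-CDF sample of q
        weight = 1.0 / (a * u ** (a - 1.0))     # likelihood ratio 1/q(u)
        s = -math.log(u) / mu                   # sampled flight distance
        if s > t:                               # crossed the slab uncollided
            total += weight
    return total / n
```

The transmission event is u < exp(-mu*t), and the weight cancels q exactly, so the biased estimator stays unbiased while the rare event is sampled far more often than it would be with unbiased random numbers.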
Validation of GATE Monte Carlo simulations of the GE Advance\\/Discovery LS PET scanners
C. Ross Schmidtlein; Assen S. Kirov; Sadek A. Nehmeh; Yusuf E. Erdi; John L. Humm; Howard I. Amols; Luc M. Bidautb; Alex Ganin; Charles W. Stearns; David L. McDaniel; Klaus A. Hamacher
2006-01-01
The recently developed GATE (GEANT4 application for tomographic emission) Monte Carlo package, designed to simulate positron emission tomography (PET) and single photon emission computed tomography (SPECT) scanners, provides the ability to model and account for the effects of photon noncollinearity, off-axis detector penetration, detector size and response, positron range, photon scatter, and patient motion on the resolution and quality of
Prompt-photon production in DIS
Matthew Forrest
2009-09-22
Prompt-photon cross sections in deep inelastic ep scattering were measured with the ZEUS detector at HERA using an integrated luminosity of 320 pb^-1. Measurements of differential cross sections are presented for inclusive prompt-photon production as functions of Q^2, x, E_T and eta. Perturbative QCD predictions and Monte Carlo predictions are compared to the measurements.
NASA Technical Reports Server (NTRS)
Platnick, S.
1999-01-01
Photon transport in a multiple scattering medium is critically dependent on scattering statistics, in particular the average number of scatterings. A superposition technique is derived to accurately determine the average number of scatterings encountered by reflected and transmitted photons within arbitrary layers in plane-parallel, vertically inhomogeneous clouds. As expected, the resulting scattering number profiles are highly dependent on cloud particle absorption and solar/viewing geometry. The technique uses efficient adding and doubling radiative transfer procedures, avoiding traditional time-intensive Monte Carlo methods. Derived superposition formulae are applied to a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Cloud remote sensing techniques that use solar reflectance or transmittance measurements generally assume a homogeneous plane-parallel cloud structure. The scales over which this assumption is relevant, in both the vertical and horizontal, can be obtained from the superposition calculations. Though the emphasis is on photon transport in clouds, the derived technique is applicable to any scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers in the atmosphere.
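The scattering-number statistics discussed above can be reproduced with the kind of brute-force Monte Carlo tally the superposition technique is checked against. This sketch assumes isotropic scattering and normal incidence, tracks photons in optical-depth coordinates, and uses invented slab parameters:

```python
import math
import random

def mean_scatterings(tau, omega0, n, seed=4):
    """Tally the average number of scattering events experienced by photons
    reflected from and transmitted through a plane-parallel slab of optical
    depth tau with single-scattering albedo omega0 (isotropic scattering,
    normal incidence)."""
    rng = random.Random(seed)
    refl, trans = [], []
    for _ in range(n):
        t, u, k = 0.0, 1.0, 0               # optical depth, cosine, count
        while True:
            t += u * (-math.log(1.0 - rng.random()))   # free flight
            if t < 0.0:
                refl.append(k)              # escaped back out the top
                break
            if t > tau:
                trans.append(k)             # escaped out the bottom
                break
            if rng.random() > omega0:       # absorbed at the collision site
                break
            k += 1                          # scattered: pick a new direction
            u = 2.0 * rng.random() - 1.0
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return avg(refl), avg(trans)
```

As the abstract notes, reflected photons must scatter at least once, while the transmitted population includes the unscattered direct beam, so its mean scattering number is lower.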
Structure of Competitive Transit Networks Carlos F. Daganzo
Kammen, Daniel M.
Structure of Competitive Transit Networks. Carlos F. Daganzo, Institute of Transportation Studies, Working Paper UCB-ITS-VWP-2009-6, August 2009. Examines the network shapes and operating characteristics that allow a transit system to deliver a level of service
Quirk, Thomas, J., IV (University of New Mexico)
2004-08-01
The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterparts, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross section (which assumes a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening that the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be sampled accurately and at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
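As context for the correction described above, the free-electron Klein-Nishina baseline (the model that the incoherent scattering function and the impulse approximation correct) can be sampled by simple rejection. The uniform cos(theta) proposal below is a generic textbook scheme, not the Brusa et al. parameterization used in ITS:

```python
import random

def sample_compton(e_mev, rng):
    """Sample (cos_theta, scattered energy) for a photon of energy e_mev (MeV)
    from the free-electron Klein-Nishina distribution, by rejection against a
    uniform cos(theta) proposal (the angular shape is bounded above by 2)."""
    alpha = e_mev / 0.511                       # energy in electron rest masses
    while True:
        mu = 2.0 * rng.random() - 1.0           # proposed cos(theta)
        r = 1.0 / (1.0 + alpha * (1.0 - mu))    # E'/E at this angle (Compton)
        f = r + r**3 - r * r * (1.0 - mu * mu)  # Klein-Nishina shape, <= 2
        if 2.0 * rng.random() < f:
            return mu, e_mev * r
```

At 511 keV the accepted sample is noticeably forward peaked, and the scattered energy is confined to the kinematic range [E/(1+2α), E]; Doppler broadening, which the IA restores, would smear this sharp Compton line.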
NASA Astrophysics Data System (ADS)
Uehara, S.; Hoshi, M.; Yamamoto, O.; Sawada, S.; Kobayashi, K.; Maesawa, H.; Furusawa, Y.; Hieda, K.; Yamada, T.
Calculations of the percentage of incident energy fluence escaping for soft X-rays were undertaken using a Monte Carlo code which simulates the photon transport exactly. The calculations were improved by taking into account correction factors for photon angular distributions and multiple scattering processes. A reduction of coherent backscatter and an enhancement of fluorescence backscatter at lower energies were observed for a 3 mm thick Si slab, comparable to the previous analytical calculations by Greening and Randle. The total escape percentages of incident energy lost from the Fricke solution (1 × 2 cm area and 1 cm thick) were determined to be 0.61% for 8.856 keV and 12.3% for 13.55 keV. Using this calculated percentage of the measured energy fluence as a correction factor, the absorbed dose in the Fricke solution, and accordingly the ferrous sulphate G-values, were determined with high accuracy.
Variance Reduction Techniques for Implicit Monte Carlo Simulations
Landman, Jacob Taylor
2013-09-19
The Implicit Monte Carlo (IMC) method is widely used for simulating thermal radiative transfer and solving the radiation transport equation. During an IMC run a grid network is constructed and particles are sourced into the problem to simulate...
Combinatorial geometry domain decomposition strategies for Monte Carlo simulations
Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)
2013-07-01
Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which has been developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)
DETERMINING UNCERTAINTY IN PHYSICAL PARAMETER MEASUREMENTS BY MONTE CARLO SIMULATION
A statistical approach, often called Monte Carlo simulation, has been used to examine the propagation of error in the measurement of several parameters important in predicting the environmental transport of chemicals. These parameters are vapor pressure, water solubility, octanol-water par...
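The approach can be sketched in a few lines of Python; the parameter values and the derived ratio below are hypothetical, chosen only to show the propagation machinery (sample each measured parameter from its error distribution, recompute the derived quantity, and read off the spread):

```python
import math
import random

def propagate(n=20000, seed=6):
    """Propagate measurement uncertainty through a derived quantity by Monte
    Carlo: a hypothetical partition ratio vp/sw formed from a vapor pressure
    and a water solubility, each sampled from an assumed Gaussian error."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        vp = rng.gauss(100.0, 5.0)      # vapor pressure, Pa (5% std dev)
        sw = rng.gauss(50.0, 5.0)       # water solubility, mol/m^3 (10% std)
        out.append(vp / sw)             # derived quantity for this draw
    mean = sum(out) / n
    var = sum((x - mean) ** 2 for x in out) / (n - 1)
    return mean, math.sqrt(var)
```

For this toy case the Monte Carlo spread (~11% relative) matches the usual quadrature rule sqrt(0.05^2 + 0.10^2), but the method needs no linearization and works for arbitrarily nonlinear derived quantities.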
Diffractive production of isolated photons at HERA
Peter Bussey; for the ZEUS Collaboration
2015-07-14
The ZEUS detector at HERA has been used to measure the photoproduction of isolated photons in diffractive events. Cross sections are evaluated in ranges of the photon transverse energy and pseudorapidity, and as functions of the fractions of the incoming photon energy and of the colourless-exchange ("Pomeron") energy that are imparted to a photon-jet final state. Comparison is made to predictions from the RAPGAP Monte Carlo simulation.
Flexible scalable photonic manufacturing method
NASA Astrophysics Data System (ADS)
Skunes, Timothy A.; Case, Steven K.
2003-06-01
A process for flexible, scalable photonic manufacturing is described. Optical components are actively pre-aligned and secured to precision mounts. In a subsequent operation, the mounted optical components are passively placed onto a substrate known as an Optical Circuit Board (OCB). The passive placement may be either manual for low-volume applications or performed with a pick-and-place robot for high-volume applications. Mating registration features on the component mounts and the OCB facilitate accurate optical alignment. New photonic circuits may be created by changing the layout of the OCB. Predicted yield data from Monte Carlo tolerance simulations for two fiber-optic photonic circuits are presented.
CAD-based Monte Carlo Program for Integrated Simulation of Nuclear System SuperMC
NASA Astrophysics Data System (ADS)
Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin
2014-06-01
Monte Carlo (MC) methods have distinct advantages for simulating complicated nuclear systems and are envisioned as a routine method for nuclear design and analysis in the future. High-fidelity simulation with the MC method coupled with multi-physics simulation has significant impact on the safety, economy and sustainability of nuclear systems. However, great challenges to current MC methods and codes prevent their application in real engineering projects. SuperMC is a CAD-based Monte Carlo program for integrated simulation of nuclear systems developed by the FDS Team, China, making use of hybrid MC-deterministic methods and advanced computer technologies. The design aim, architecture and main methodology of SuperMC are presented in this paper. SuperMC2.1, the latest version for neutron, photon and coupled neutron-photon transport calculation, has been developed and validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. SuperMC is still evolving toward a general and routine tool for nuclear system simulation.
Texas Transportation Hall of Honor: Inductees. Texas is recognized as having one of the finest multimodal transportation systems in the world. The existence of this system has been key
Dibyendu Roy
2010-07-13
We propose a novel scheme for realizing an optical diode at the few-photon level. The system consists of a one-dimensional waveguide coupled asymmetrically to a two-level system. Two- and multi-photon transport in this system is strongly correlated. We derive the single- and two-photon currents exactly and show that the two-photon current is asymmetric for asymmetric coupling. The system thus serves as an optical diode which transmits photons in one direction much more efficiently than in the opposite one.
GMC: a GPU implementation of a Monte Carlo dose calculation based on Geant4.
Jahnke, Lennart; Fleckenstein, Jens; Wenz, Frederik; Hesser, Jürgen
2012-03-01
We present a GPU implementation called GMC (GPU Monte Carlo) of the low-energy (<100 GeV) electromagnetic part of the Geant4 Monte Carlo code using the NVIDIA® CUDA programming interface. The classes for electron and photon interactions as well as a new parallel particle transport engine were implemented. Particles are processed not in a history-by-history manner but by an interaction-by-interaction method: every history is divided into steps that are then calculated in parallel by different kernels. The geometry package is currently limited to voxelized geometries. A modified parallel Mersenne twister was used to generate random numbers, and a random-number repetition method on the GPU was introduced. All phantom results showed very good agreement between GPU and CPU simulation, with gamma indices of >97.5% for a 2%/2 mm gamma criterion. The mean acceleration on one GTX 580 for all cases, compared to Geant4 on one CPU core, was 4860. The mean number of histories per millisecond on the GPU for all cases was 658, leading to a total simulation time for one intensity-modulated radiation therapy dose distribution of 349 s. In conclusion, Geant4-based Monte Carlo dose calculations were significantly accelerated on the GPU. PMID:22330587
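The interaction-by-interaction processing order can be contrasted with history-by-history in plain Python. Each sweep over the live particles below corresponds to one kernel launch; the physics (forward flights with a fixed collision survival probability in a slab) is deliberately trivial and invented for the example:

```python
import math
import random

def batched_transmission(n_photons, mu, thickness, albedo=0.5, seed=7):
    """Event-based transport: advance every live photon by exactly one
    interaction per sweep (the order GPU codes favor, one kernel per step)
    instead of finishing each history before starting the next."""
    rng = random.Random(seed)
    pos = [0.0] * n_photons
    alive = list(range(n_photons))
    transmitted = 0
    while alive:                            # one "kernel launch" per sweep
        survivors = []
        for i in alive:                     # on a GPU this loop is parallel
            pos[i] += -math.log(1.0 - rng.random()) / mu
            if pos[i] >= thickness:
                transmitted += 1            # escaped through the far face
            elif rng.random() < albedo:     # survived the collision
                survivors.append(i)
        alive = survivors
    return transmitted / n_photons
```

Grouping particles by interaction step keeps the work in each sweep uniform, which is what lets the GPU threads stay in lockstep; the physics result is identical to a history-by-history loop.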
Development and validation of MCNPX-based Monte Carlo treatment plan verification system
Jabbari, Iraj; Monadi, Shahram
2015-01-01
A Monte Carlo treatment plan verification (MCTPV) system was developed for clinical treatment plan verification (TPV), especially for conformal and intensity-modulated radiotherapy (IMRT) plans. In MCTPV, the MCNPX code was used for particle transport through the accelerator head and the patient body. MCTPV interfaces with the TiGRT planning system and reads the information needed for the Monte Carlo calculation, transferred in Digital Imaging and Communications in Medicine-Radiation Therapy (DICOM-RT) format. Several methods were applied in MCTPV to reduce the simulation time. The relative dose distribution of a clinical prostate conformal plan calculated by MCTPV was compared with that of the TiGRT planning system; the results confirmed correct implementation of the beam configuration and patient information in the system. For quantitative evaluation of MCTPV, a two-dimensional (2D) diode array (MapCHECK2) and gamma index analysis were used. The gamma passing rate (3%/3 mm) of an IMRT plan was found to be 98.5% over all beams. Comparison of the measured and Monte Carlo calculated doses at several points inside an inhomogeneous phantom for 6- and 18-MV photon beams also showed good agreement (within 1.5%). The accuracy and timing results show that MCTPV can be used very efficiently for additional assessment of complicated plans such as IMRT plans. PMID:26170554
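The 3%/3 mm gamma analysis used for the quantitative evaluation above follows the standard gamma-index formulation (Low et al.): each reference point passes if some evaluated point lies within unit combined dose-difference/distance-to-agreement radius. The following 1-D sketch illustrates the criterion only; it is not the MCTPV or MapCHECK2 implementation:

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing, dd=0.03, dta=3.0):
    """1-D global gamma analysis.

    ref, evl  : dose profiles on the same grid
    spacing   : grid spacing in mm
    dd        : dose-difference criterion as a fraction of the reference max
    dta       : distance-to-agreement criterion in mm
    Returns the percentage of reference points with gamma <= 1.
    """
    x = np.arange(len(ref)) * spacing          # positions in mm
    dmax = ref.max()
    gammas = []
    for xi, di in zip(x, ref):
        dist2 = ((x - xi) / dta) ** 2          # normalized distance term
        dose2 = ((evl - di) / (dd * dmax)) ** 2  # normalized dose term
        gammas.append(np.sqrt(dist2 + dose2).min())
    return 100.0 * np.mean(np.array(gammas) <= 1.0)
```

Identical profiles yield a 100% passing rate by construction, while a uniformly scaled profile fails wherever neither the dose nor the distance criterion can absorb the discrepancy.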
Rapid Monte Carlo simulation of detector DQE(f)
Star-Lack, Josh Sun, Mingshan; Abel, Eric; Meyer, Andre; Morf, Daniel; Constantin, Dragos; Fahrig, Rebecca
2014-03-15
Purpose: Performance optimization of indirect x-ray detectors requires proper characterization of both ionizing (gamma) and optical photon transport in a heterogeneous medium. As the tool of choice for modeling detector physics, Monte Carlo methods have failed to gain traction as a design utility, due mostly to excessive simulation times and a lack of convenient simulation packages. The most important figure-of-merit in assessing detector performance is the detective quantum efficiency (DQE), for which most of the computational burden has traditionally been associated with the determination of the noise power spectrum (NPS) from an ensemble of flood images, each conventionally having 10⁷ to 10⁹ detected gamma photons. In this work, the authors show that the idealized conditions inherent in a numerical simulation allow for a dramatic reduction in the number of gamma and optical photons required to accurately predict the NPS. Methods: The authors derived an expression for the mean squared error (MSE) of a simulated NPS when computed using the International Electrotechnical Commission-recommended technique based on taking the 2D Fourier transform of flood images. It is shown that the MSE is inversely proportional to the number of flood images, and is independent of the input fluence provided that the input fluence is above a minimal value that avoids biasing the estimate. The authors then propose to further lower the input fluence so that each event creates a point-spread function rather than a flood field. The authors use this finding as the foundation for a novel algorithm in which the characteristic MTF(f), NPS(f), and DQE(f) curves are simultaneously generated from the results of a single run. The authors also investigate lowering the number of optical photons used in a scintillator simulation to further increase efficiency.
Simulation results are compared with measurements performed on a Varian AS1000 portal imager, and with a previously published simulation performed using clinical fluence levels. Results: On the order of only 10–100 gamma photons per flood image were required to be detected to avoid biasing the NPS estimate. This allowed for a factor of 10⁷ reduction in fluence compared to clinical levels with no loss of accuracy. An optimal signal-to-noise ratio (SNR) was achieved by increasing the number of flood images from a typical value of 100 up to 500, thereby illustrating the importance of flood image quantity over the number of gammas per flood. For the point-spread ensemble technique, an additional 2× reduction in the number of incident gammas was realized. As a result, when modeling gamma transport in a thick pixelated array, the simulation time was reduced from 2.5 × 10⁶ CPU min if using clinical fluence levels to 3.1 CPU min if using optimized fluence levels while also producing a higher SNR. The AS1000 DQE(f) simulation entailing both optical and radiative transport matched experimental results to within 11%, and required 14.5 min to complete on a single CPU. Conclusions: The authors demonstrate the feasibility of accurately modeling x-ray detector DQE(f) with completion times on the order of several minutes using a single CPU. Convenience of simulation can be achieved using GEANT4, which offers both gamma and optical photon transport capabilities. PMID:24593734
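The IEC-style NPS estimate that the authors analyze, 2-D Fourier transforms of mean-subtracted flood images averaged over the ensemble, can be sketched as below. Normalization conventions vary between implementations, so this is an illustrative sketch rather than the authors' exact procedure; for white noise it reduces to a flat spectrum at the pixel variance times the pixel area, and its variance falls as 1 over the number of flood images, consistent with the MSE result in the abstract:

```python
import numpy as np

def nps_2d(floods, pixel_pitch):
    """Periodogram-average estimate of the 2-D noise power spectrum.

    floods      : array of shape (n_images, ny, nx) of flood-field images
    pixel_pitch : detector pixel pitch (e.g. in mm)
    """
    floods = np.asarray(floods, dtype=float)
    n_img, ny, nx = floods.shape
    nps = np.zeros((ny, nx))
    for img in floods:
        dev = img - img.mean()                 # remove the mean (DC) signal
        nps += np.abs(np.fft.fft2(dev)) ** 2   # accumulate periodograms
    # normalize by ensemble size and pixel count, scale by pixel area
    return nps * pixel_pitch**2 / (n_img * nx * ny)
```

Averaging over more flood images tightens the per-frequency estimate without changing its expected value, which is exactly why image quantity matters more than photons per flood.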
NASA Astrophysics Data System (ADS)
Wiersma, Diederik S.
2013-03-01
What do lotus flowers have in common with human bones, liquid crystals with colloidal suspensions, and white beetles with the beautiful stones of the Taj Mahal? The answer is they all feature disordered structures that strongly scatter light, in which light waves entering the material are scattered several times before exiting in random directions. These randomly distributed rays interfere with each other, leading to interesting, and sometimes unexpected, physical phenomena. This Review describes the physics behind the optical properties of disordered structures and how knowledge of multiple light scattering can be used to develop new applications. The field of disordered photonics has grown immensely over the past decade, ranging from investigations into fundamental topics such as Anderson localization and other transport phenomena, to applications in imaging, random lasing and solar energy.
Petrokokkinos, L.; Zourari, K.; Pantelis, E.; Moutsatsos, A.; Karaiskos, P.; Sakelliou, L.; Seimenis, I.; Georgiou, E.; Papagiannis, P.
2011-04-15
Purpose: The aim of this work is the dosimetric validation of a deterministic radiation transport based treatment planning system (BRACHYVISION v. 8.8, referred to as TPS in the following) for multiple ¹⁹²Ir source dwell position brachytherapy applications employing a shielded applicator in homogeneous water geometries. Methods: TPS calculations for an irradiation plan employing seven VS2000 ¹⁹²Ir high dose rate (HDR) source dwell positions and a partially shielded applicator (GM11004380) were compared to corresponding Monte Carlo (MC) simulation results, as well as experimental results obtained using the VIP polymer gel-magnetic resonance imaging three-dimensional dosimetry method with a custom made phantom. Results: TPS and MC dose distributions were found to agree, mainly within ±2%. Considerable differences between TPS and MC results (greater than 2%) were observed at points in the penumbra of the shields (i.e., close to the edges of the "shielded" segment of the geometries). These differences were experimentally verified and therefore attributed to the TPS. Apart from these regions, experimental and TPS dose distributions were found in agreement within 2 mm distance-to-agreement and 5% dose difference criteria. As shown in this work, these results mark a significant improvement relative to dosimetry algorithms that disregard the presence of the shielded applicator, since the use of the latter leads to dosimetry errors on the order of 20%-30% at the edge of the "unshielded" segment of the geometry and even 2%-6% at points corresponding to the potential location of the target volume in clinical applications using the applicator (points in the unshielded segment at short distances from the applicator).
Conclusions: The results of this work attest to the capability of the TPS to accurately account for the scatter conditions and the increased attenuation involved in HDR brachytherapy applications employing multiple source dwell positions and partially shielded applicators.
ERIC Educational Resources Information Center
ROESSEL, ROBERT A., JR.
The first section of this book covers the historical and cultural background of the San Carlos Apache Indians, as well as an historical sketch of the development of their formal educational system. The second section is devoted to the problems of teachers of the Indian children in Globe and San Carlos, Arizona. It is divided into three parts--(1)…
NASA Astrophysics Data System (ADS)
Lin, Yi-Chun; Liu, Yuan-Hao; Nievaart, Sander; Chen, Yen-Fu; Wu, Shu-Wei; Chou, Wen-Tsae; Jiang, Shiang-Huei
2011-10-01
High energy photon (over 10 MeV) and neutron beams adopted in radiobiology and radiotherapy always produce mixed neutron/gamma-ray fields. Mg(Ar) ionization chambers are commonly applied to determine the gamma-ray dose because of their neutron-insensitive characteristic. Nowadays, many perturbation corrections for accurate dose estimation and many treatment planning systems are based on the Monte Carlo technique. The Monte Carlo codes EGSnrc, FLUKA, GEANT4, MCNP5, and MCNPX were used to evaluate the energy-dependent response functions of the Exradin M2 Mg(Ar) ionization chamber to a parallel photon beam with mono-energies from 20 keV to 20 MeV. For the sake of validation, measurements were carefully performed in well-defined (a) primary M-100 X-ray calibration field, (b) primary ⁶⁰Co calibration beam, (c) 6-MV, and (d) 10-MV therapeutic beams in hospital. In the energy region below 100 keV, MCNP5 and MCNPX both had lower responses than the other codes. For energies above 1 MeV, the MCNP ITS-mode closely resembled the other three codes and the differences were within 5%. Compared to the measured currents, MCNP5 and MCNPX using ITS-mode showed excellent agreement for the ⁶⁰Co and 10-MV beams, but in the X-ray energy region the deviations reached 17%. This work provides better insight into the performance of different Monte Carlo codes in photon-electron transport calculations. Regarding applications in mixed-field dosimetry such as BNCT, MCNP with ITS-mode is recognized by this work as the most suitable tool.
Accelerating mesh-based Monte Carlo method on modern CPU architectures
Fang, Qianqian; Kaeli, David R.
2012-01-01
In this report, we discuss the use of contemporary ray-tracing techniques to accelerate 3D mesh-based Monte Carlo photon transport simulations. Single Instruction Multiple Data (SIMD) based computation and branch-less design are exploited to accelerate ray-tetrahedron intersection tests and yield a 2-fold speed-up for ray-tracing calculations on a multi-core CPU. As part of this work, we have also studied SIMD-accelerated random number generators and math functions. The combination of these techniques achieved an overall improvement of 22% in simulation speed as compared to using a non-SIMD implementation. We applied this new method to analyze a complex numerical phantom and both the phantom data and the improved code are available as open-source software at http://mcx.sourceforge.net/mmc/. PMID:23243572
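The ray-tetrahedron intersection test that dominates mesh-based photon transport can be expressed as four simultaneous ray-triangle tests, one per tetrahedron face, which is what makes it amenable to SIMD. The sketch below uses the well-known Möller-Trumbore ray-triangle algorithm with NumPy vectorization standing in for explicit SIMD intrinsics; it is an illustration of the technique, not the MMC implementation:

```python
import numpy as np

# The four faces of a tetrahedron as vertex-index triples
_FACES = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])

def ray_tet_exit(origin, direction, verts, eps=1e-12):
    """Smallest positive distance at which a ray leaves a tetrahedron.

    All four faces are tested at once with the Moller-Trumbore
    ray-triangle algorithm, vectorized over the face axis.
    verts : (4, 3) array of tetrahedron vertex coordinates.
    """
    v0 = verts[_FACES[:, 0]]
    e1 = verts[_FACES[:, 1]] - v0          # first edge of each face
    e2 = verts[_FACES[:, 2]] - v0          # second edge of each face
    p = np.cross(direction, e2)
    det = np.einsum('ij,ij->i', e1, p)
    det = np.where(np.abs(det) < eps, np.nan, det)  # parallel -> no hit
    s = origin - v0
    u = np.einsum('ij,ij->i', s, p) / det
    q = np.cross(s, e1)
    v = np.einsum('ij,j->i', q, direction) / det
    t = np.einsum('ij,ij->i', e2, q) / det
    hit = (u >= -eps) & (v >= -eps) & (u + v <= 1 + eps) & (t > eps)
    return np.where(hit, t, np.inf).min()
```

For a photon inside the unit tetrahedron at (0.1, 0.1, 0.1) travelling along +x, the ray exits through the slanted face x + y + z = 1 at a distance of 0.7, which the branch-free formulation recovers without any per-face conditionals.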
Transport and acceleration of particles in astrophysical plasmas.
NASA Astrophysics Data System (ADS)
Gieseler, U. D. J.
The acceleration of high energy non-thermal cosmic rays depends crucially on their transport in configuration and momentum space. For photons, this transport is described by a Boltzmann equation. The problem of the production of high energy photons in a hot plasma disk is formulated and the corresponding integro-differential equation is solved. Solutions are obtained allowing for an arbitrary anisotropy of the source function, with no restriction on the optical depth of the disk. In addition to the spectral index, the method of solution allows one to determine the spatial and angular dependence of the emergent radiation. In the case of an optically thin disk, this radiation is strongly collimated along the disk surface. The acceleration of charged particles is determined by the external electromagnetic fields. Supernova remnants are expected to contain stochastic magnetic fields, which in some regions are directed perpendicular to the shock normal. Recent analytical results show that the resulting anomalous transport leads to a steeper cosmic-ray spectrum than pure diffusive transport. A Monte Carlo method is used to examine the transport and the acceleration. The results are discussed and compared with analytical treatments, and the region of validity of those treatments is determined.