
1

AMOS: an effective tool for adjoint Monte Carlo photon transport

NASA Astrophysics Data System (ADS)

In order to expand the photon version of the Monte Carlo transport program AMOS to an adjoint photon version, AMOS Pt, the theory of adjoint radiation transport is reviewed and evaluated in this regard. All relevant photon interactions (photoelectric effect, coherent scattering, incoherent scattering, and pair production) are taken into account as given in EPDL97. To simulate pair production and to realise physical source terms with discrete energy levels, an energy point detector is used. To demonstrate the capability of AMOS Pt, a simple air-over-ground problem is simulated with both the forward and the adjoint programs. The results are compared and show complete agreement.

Gabler, Dorothea; Henniger, Jürgen; Reichelt, Uwe

2006-10-01

2

Condensed history Monte Carlo methods for photon transport problems

We study methods for accelerating Monte Carlo simulations that retain most of the accuracy of conventional Monte Carlo algorithms. These methods – called Condensed History (CH) methods – have been very successfully used to model the transport of ionizing radiation in turbid systems. Our primary objective is to determine whether or not such methods might apply equally well to the transport of photons in biological tissue. In an attempt to unify the derivations, we invoke results obtained first by Lewis, Goudsmit and Saunderson and later improved by Larsen and Tolar. We outline how two of the most promising of the CH models – one based on satisfying certain similarity relations and the second making use of a scattering phase function that permits only discrete directional changes – can be developed using these approaches. The main idea is to exploit the connection between the space-angle moments of the radiance and the angular moments of the scattering phase function. We compare the results obtained when the two CH models studied are used to simulate an idealized tissue transport problem. The numerical results support our findings based on the theoretical derivations and suggest that CH models should play a useful role in modeling light-tissue interactions. PMID:18548128

Bhan, Katherine; Spanier, Jerome

2007-01-01
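
The link the abstract draws between space-angle moments of the radiance and angular moments of the scattering phase function can be illustrated numerically. A minimal sketch, assuming the Henyey-Greenstein phase function commonly used in tissue optics (chosen here purely for illustration; the paper's treatment is more general): its n-th Legendre moment equals g**n, which the quadrature below reproduces.

```python
import math

def hg_phase(mu, g):
    """Henyey-Greenstein phase function, normalized over mu in [-1, 1]."""
    return 0.5 * (1.0 - g * g) / (1.0 + g * g - 2.0 * g * mu) ** 1.5

def legendre(n, x):
    """Legendre polynomial P_n(x) via the Bonnet recurrence."""
    p_prev, p_curr = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_curr = p_curr, ((2 * k + 1) * x * p_curr - k * p_prev) / (k + 1)
    return p_curr

def phase_moment(n, g, steps=20000):
    """n-th angular (Legendre) moment of the HG phase function, computed by
    composite Simpson integration; analytically this equals g**n."""
    h = 2.0 / steps
    total = 0.0
    for i in range(steps):
        a = -1.0 + i * h
        m = a + 0.5 * h
        b = a + h
        fa = hg_phase(a, g) * legendre(n, a)
        fm = hg_phase(m, g) * legendre(n, m)
        fb = hg_phase(b, g) * legendre(n, b)
        total += h / 6.0 * (fa + 4.0 * fm + fb)
    return total
```

The zeroth moment is the normalization, and higher moments decay geometrically, which is what makes low-order truncations of the phase function viable in condensed-history models.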

3

A Monte Carlo method using octree structure in photon and electron transport

Most of the early Monte Carlo calculations in medical physics were used to calculate absorbed dose distributions and detector responses and efficiencies. More recently, data acquisition in Single Photon Emission CT (SPECT) has been simulated by Monte Carlo methods to evaluate scattered photons generated in the human body and the collimator. Monte Carlo simulations of SPECT data acquisition are generally based on the transport of photons only, because the photons being simulated are of low energy and bremsstrahlung production by the generated electrons is therefore negligible. Since transport calculations for photons alone are much simpler than those including electrons, high-speed simulation is possible in a simple object composed of one medium. Here, object description is key to performing photon and/or electron transport with a Monte Carlo method efficiently. The authors propose a new description method using an octree representation of an object. Even when the boundaries of each medium are represented accurately, high-speed photon transport calculations can be accomplished because the number of cells is far smaller than in a voxel-based approach, which represents an object as a union of equal-sized voxels. The Monte Carlo code using the octree representation first establishes the simulation geometry by reading an octree string, produced before the simulation by forming an octree structure from a set of serial sections of the object; it then transports photons in this geometry. With this code, users need only prepare a set of serial sections of the object in which they want to simulate photon trajectories; the simulation then proceeds automatically, using the suboptimal geometry simplified by the octree representation without constructing an optimal geometry by hand.

Ogawa, K.; Maeda, S. [Hosei Univ., Tokyo (Japan)]

1995-12-01
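
The octree-based geometry lookup described above can be sketched in a few lines. The node layout here is hypothetical (a leaf stores a medium id; an internal node stores 8 children ordered by octant bits), not the authors' octree-string format.

```python
# Minimal octree point lookup. A leaf is a medium id (int); an internal
# node is a list of 8 children ordered by octant bits (x | y<<1 | z<<2).

def locate(node, point, lo=(0.0, 0.0, 0.0), hi=(1.0, 1.0, 1.0)):
    """Descend the octree until a leaf gives the medium containing `point`."""
    while isinstance(node, list):              # internal node: 8 children
        mid = [(a + b) / 2.0 for a, b in zip(lo, hi)]
        idx = 0
        lo, hi = list(lo), list(hi)
        for axis in range(3):
            if point[axis] >= mid[axis]:
                idx |= 1 << axis               # upper half along this axis
                lo[axis] = mid[axis]
            else:
                hi[axis] = mid[axis]
        node = node[idx]
    return node

# Unit cube: seven top-level octants of air (0); the octant at the origin
# is subdivided again into tissue (1) with a small bone (2) corner.
inner = [2, 1, 1, 1, 1, 1, 1, 1]
root = [inner, 0, 0, 0, 0, 0, 0, 0]
```

Here `locate(root, (0.1, 0.1, 0.1))` descends two levels and returns 2; the tree encodes the same region map as a 4×4×4 voxel grid with far fewer cells, which is the speed argument the abstract makes.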

4

A GPU implementation of EGSnrc's Monte Carlo photon transport for imaging applications.

EGSnrc is a well-known Monte Carlo simulation package for coupled electron-photon transport that is widely used in medical physics applications. This paper proposes a parallel implementation of the photon transport mechanism of EGSnrc for graphics processing units (GPUs) using NVIDIA's Compute Unified Device Architecture (CUDA). The implementation is specifically designed for imaging applications in the diagnostic energy range and does not model electrons. No approximations or simplifications of the original EGSnrc code were made other than using single floating-point precision instead of double precision and a different random number generator. To avoid performance penalties due to the random nature of the Monte Carlo method, the simulation was divided into smaller steps that could easily be performed in a parallel fashion suitable for GPUs. Speedups of 20 to 40 times for 64³ to 256³ voxels were observed while the accuracy of the simulation was preserved. A detailed analysis of the differences between the CUDA simulation and the original EGSnrc was conducted. The two simulations were found to produce equivalent results for scattered photons, and an overall systematic deviation of less than 0.08% was observed for primary photons. PMID:22025188

Lippuner, Jonas; Elbakri, Idris A

2011-11-21

5

A GPU implementation of EGSnrc's Monte Carlo photon transport for imaging applications

NASA Astrophysics Data System (ADS)

EGSnrc is a well-known Monte Carlo simulation package for coupled electron-photon transport that is widely used in medical physics applications. This paper proposes a parallel implementation of the photon transport mechanism of EGSnrc for graphics processing units (GPUs) using NVIDIA's Compute Unified Device Architecture (CUDA). The implementation is specifically designed for imaging applications in the diagnostic energy range and does not model electrons. No approximations or simplifications of the original EGSnrc code were made other than using single floating-point precision instead of double precision and a different random number generator. To avoid performance penalties due to the random nature of the Monte Carlo method, the simulation was divided into smaller steps that could easily be performed in a parallel fashion suitable for GPUs. Speedups of 20 to 40 times for 64³ to 256³ voxels were observed while the accuracy of the simulation was preserved. A detailed analysis of the differences between the CUDA simulation and the original EGSnrc was conducted. The two simulations were found to produce equivalent results for scattered photons, and an overall systematic deviation of less than 0.08% was observed for primary photons.

Lippuner, Jonas; Elbakri, Idris A.

2011-11-01

6

Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy

NASA Astrophysics Data System (ADS)

The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical radiotherapy is hindered by its slow convergence and long computation times. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to increase the calculation speed of the MC method for electron-photon transport with high precision, and ultimately to reduce the dose calculation time for accurate radiotherapy on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for coupled electron-photon transport was developed with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with only a slight reduction in accuracy; second, a variety of MC acceleration methods were applied, for example, reusing information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying appropriate variance reduction techniques to accelerate the convergence rate. The fast MC method was tested with a number of simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. The method will later be integrated into the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.

Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui

2014-06-01
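
One of the "proper variance reduction techniques" mentioned above is Russian roulette, which terminates low-weight histories without biasing the tally mean. A minimal sketch (the threshold and survival probability are illustrative values, not SuperMC parameters):

```python
import random

def russian_roulette(weight, threshold=0.01, survival=0.1, rng=random):
    """Russian roulette: a history whose weight falls below `threshold` is
    killed with probability 1 - survival; a survivor is reweighted by
    1/survival so that the expected weight is unchanged (unbiased)."""
    if weight >= threshold:
        return weight              # heavy enough: continue unchanged
    if rng.random() < survival:
        return weight / survival   # survivor carries the killed weight
    return 0.0                     # history terminated
```

Averaged over many histories, the returned weight equals the input weight, so the estimator stays unbiased while time is no longer spent following negligible contributions.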

7

TART97: a coupled neutron-photon, 3-D, combinatorial geometry Monte Carlo transport code

TART97 is a coupled neutron-photon, three-dimensional, combinatorial geometry, time-dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly fast; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

Cullen, D.E.

1997-11-22

8

MCNP: a general Monte Carlo code for neutron and photon transport

MCNP is a very general Monte Carlo neutron-photon transport code system with approximately 250 person-years of Group X-6 code development invested. It is extremely portable, user-oriented, and a true production code, used about 60 Cray hours per month by about 150 Los Alamos users. Its data base comprises the best cross-section evaluations available. MCNP contains state-of-the-art traditional and adaptive Monte Carlo techniques to be applied to the solution of an ever-increasing number of problems. Excellent user-oriented documentation is available for all facets of the MCNP code system. Many useful and important variants of MCNP exist for special applications. The Radiation Shielding Information Center (RSIC) in Oak Ridge, Tennessee, is the contact point for worldwide MCNP code and documentation distribution. A much-improved MCNP Version 3A will be available in the fall of 1985, along with new and improved documentation. Future directions in MCNP development will change the meaning of MCNP to Monte Carlo N-Particle, where N particle varieties will be transported.

Forster, R.A.; Godfrey, T.N.K.

1985-01-01

9

ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

2008-04-01

10

COMET-PE as an Alternative to Monte Carlo for Photon and Electron Transport

NASA Astrophysics Data System (ADS)

Monte Carlo methods are a central component of radiotherapy treatment planning, shielding design, detector modeling, and other applications. Long calculation times, however, can limit the usefulness of these purely stochastic methods. The coarse mesh method for photon and electron transport (COMET-PE) provides an attractive alternative. By combining stochastic pre-computation with a deterministic solver, COMET-PE achieves accuracy comparable to Monte Carlo methods in only a fraction of the time. The method's implementation has been extended to 3D, and in this work, it is validated by comparison to DOSXYZnrc using a photon radiotherapy benchmark. The comparison demonstrates excellent agreement; of the voxels that received more than 10% of the maximum dose, over 97.3% pass a 2% / 2mm acceptance test and over 99.7% pass a 3% / 3mm test. Furthermore, the method is over an order of magnitude faster than DOSXYZnrc and is able to take advantage of both distributed-memory and shared-memory parallel architectures for increased performance.

Hayward, Robert M.; Rahnema, Farzad

2014-06-01
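
The 2%/2 mm style acceptance test used in the validation above can be sketched for a one-dimensional dose profile. This is a simplified dose-difference / distance-to-agreement check, not the full gamma-index evaluation; the 10%-of-maximum cutoff mirrors the abstract.

```python
def pass_rate(test, ref, spacing_mm, dose_tol=0.02, dta_mm=2.0, cutoff=0.10):
    """Fraction of evaluated points passing a dose-difference OR
    distance-to-agreement criterion on a 1-D profile. A point passes if
    some reference point within dta_mm agrees within dose_tol of the
    maximum dose; points below `cutoff` of the maximum are excluded."""
    dmax = max(ref)
    reach = int(dta_mm / spacing_mm)       # search radius in grid points
    passed = total = 0
    for i, d in enumerate(test):
        if ref[i] < cutoff * dmax:
            continue                       # below 10% of max: excluded
        total += 1
        lo, hi = max(0, i - reach), min(len(ref), i + reach + 1)
        if any(abs(d - ref[j]) <= dose_tol * dmax for j in range(lo, hi)):
            passed += 1
    return passed / total
```

Identical profiles give a pass rate of 1.0, and a grossly rescaled profile fails most evaluated points, which is the sanity behavior one expects from such a test.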

11

Purpose: In this work, the authors describe an approach developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of directional bremsstrahlung splitting, and azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ≈3%/0.3 mm for the central-axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to examine simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed by other approaches common in this field. Conclusions: The new approach is competitive with those previously used for this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.

Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario "Carlos Haya", Avda. Carlos Haya s/n, E-29010 Malaga (Spain); Unidad de Radiofisica Hospitalaria, Hospital Xanit Internacional, Avda. de los Argonautas s/n, E-29630 Benalmadena (Malaga) (Spain); NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Hufelandstr. 55, D-45122 Essen (Germany); Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)

2010-07-15
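
The splitting/roulette machinery that the importance map controls can be sketched generically. Below is the standard geometry-splitting rule at a region boundary (the ant-colony control layer described above is not reproduced); in both branches the expected total weight is conserved.

```python
import random

def boundary_splitting(weight, imp_from, imp_to, rng=random):
    """Splitting / Russian roulette at a region boundary, driven by an
    importance map: entering a more important region splits the particle
    into ~ratio copies, entering a less important one plays roulette.
    Returns the list of particle weights that continue tracking."""
    ratio = imp_to / imp_from
    if ratio >= 1.0:                      # split into ~ratio copies
        n = int(ratio)
        if rng.random() < ratio - n:      # Bernoulli trial for the fraction
            n += 1
        return [weight / ratio] * n
    if rng.random() < ratio:              # roulette: survive w.p. ratio
        return [weight / ratio]
    return []
```

Because the expected summed weight equals the incoming weight in both branches, the game reshapes the population of histories (more where the importance map says it matters) without biasing any tally.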

12

A method for photon beam Monte Carlo multileaf collimator particle transport

NASA Astrophysics Data System (ADS)

Monte Carlo (MC) algorithms are recognized as the most accurate methodology for patient dose assessment. For intensity-modulated radiation therapy (IMRT) delivered with dynamic multileaf collimators (DMLCs), accurate dose calculation, even with MC, is challenging. Accurate IMRT MC dose calculations require inclusion of the moving MLC in the MC simulation. Due to its complex geometry, full transport through the MLC can be time consuming. The aim of this work was to develop an MLC model for photon beam MC IMRT dose computations. The basis of the MC MLC model is that the complex MLC geometry can be separated into simple geometric regions, each of which readily lends itself to simplified radiation transport. For photons, only attenuation and first Compton scatter interactions are considered. The amount of attenuation material an individual particle encounters while traversing the entire MLC is determined by adding the individual amounts from each of the simplified geometric regions. Compton scatter is sampled based upon the total thickness traversed. Pair production and electron interactions (scattering and bremsstrahlung) within the MLC are ignored. The MLC model was tested for 6 MV and 18 MV photon beams by comparing it with measurements and MC simulations that incorporate the full physics and geometry for fields blocked by the MLC and with measurements for fields with the maximum possible tongue-and-groove and tongue-or-groove effects, for static test cases and for sliding windows of various widths. The MLC model predicts the field size dependence of the MLC leakage radiation within 0.1% of the open-field dose. The entrance dose and beam hardening behind a closed MLC are predicted within +/-1% or 1 mm. Dose undulations due to differences in inter- and intra-leaf leakage are also correctly predicted. The MC MLC model predicts leaf-edge tongue-and-groove dose effect within +/-1% or 1 mm for 95% of the points compared at 6 MV and 88% of the points compared at 18 MV. 
The dose through a static leaf tip is also predicted generally within +/-1% or 1 mm. Tests with sliding windows of various widths confirm the accuracy of the MLC model for dynamic delivery and indicate that accounting for a slight leaf position error (0.008 cm for our MLC) will improve the accuracy of the model. The MLC model developed is applicable to both dynamic MLC and segmental MLC IMRT beam delivery and will be useful for patient IMRT dose calculations, pre-treatment verification of IMRT delivery and IMRT portal dose transmission dosimetry.

Siebers, Jeffrey V.; Keall, Paul J.; Kim, Jong Oh; Mohan, Radhe

2002-09-01
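
The central simplification above, accumulating the attenuation path lengths from each simple geometric region into a single exponential factor for the primary fluence, can be sketched directly. The attenuation coefficients below are illustrative placeholders, not MLC material data.

```python
import math

def primary_transmission(segments):
    """Transmission of the unscattered (primary) photon fluence through a
    sequence of (mu_per_cm, thickness_cm) regions: the amounts of
    attenuating material from each simplified region are summed into one
    optical depth, then applied as a single exponential."""
    optical_depth = sum(mu * t for mu, t in segments)
    return math.exp(-optical_depth)

# A ray crossing leaf material, an air gap, then leaf material again:
leaf_path = [(0.5, 2.0), (0.0, 1.0), (0.5, 3.0)]
```

Because optical depths add, decomposing the leaf into simple regions and summing mu·t yields exactly the same primary transmission as tracing the full geometry, which is what makes the simplified transport fast.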

13

A method for photon beam Monte Carlo multileaf collimator particle transport.

Monte Carlo (MC) algorithms are recognized as the most accurate methodology for patient dose assessment. For intensity-modulated radiation therapy (IMRT) delivered with dynamic multileaf collimators (DMLCs), accurate dose calculation, even with MC, is challenging. Accurate IMRT MC dose calculations require inclusion of the moving MLC in the MC simulation. Due to its complex geometry, full transport through the MLC can be time consuming. The aim of this work was to develop an MLC model for photon beam MC IMRT dose computations. The basis of the MC MLC model is that the complex MLC geometry can be separated into simple geometric regions, each of which readily lends itself to simplified radiation transport. For photons, only attenuation and first Compton scatter interactions are considered. The amount of attenuation material an individual particle encounters while traversing the entire MLC is determined by adding the individual amounts from each of the simplified geometric regions. Compton scatter is sampled based upon the total thickness traversed. Pair production and electron interactions (scattering and bremsstrahlung) within the MLC are ignored. The MLC model was tested for 6 MV and 18 MV photon beams by comparing it with measurements and MC simulations that incorporate the full physics and geometry for fields blocked by the MLC and with measurements for fields with the maximum possible tongue-and-groove and tongue-or-groove effects, for static test cases and for sliding windows of various widths. The MLC model predicts the field size dependence of the MLC leakage radiation within 0.1% of the open-field dose. The entrance dose and beam hardening behind a closed MLC are predicted within ±1% or 1 mm. Dose undulations due to differences in inter- and intra-leaf leakage are also correctly predicted. The MC MLC model predicts leaf-edge tongue-and-groove dose effect within ±1% or 1 mm for 95% of the points compared at 6 MV and 88% of the points compared at 18 MV.
The dose through a static leaf tip is also predicted generally within ±1% or 1 mm. Tests with sliding windows of various widths confirm the accuracy of the MLC model for dynamic delivery and indicate that accounting for a slight leaf position error (0.008 cm for our MLC) will improve the accuracy of the model. The MLC model developed is applicable to both dynamic MLC and segmental MLC IMRT beam delivery and will be useful for patient IMRT dose calculations, pre-treatment verification of IMRT delivery and IMRT portal dose transmission dosimetry. PMID:12361220

Siebers, Jeffrey V; Keall, Paul J; Kim, Jong Oh; Mohan, Radhe

2002-09-01

14

Parallel Monte Carlo Electron and Photon Transport Simulation Code (PMCEPT code)

NASA Astrophysics Data System (ADS)

Simulations for customized cancer radiation treatment planning for each patient are very useful for both patient and doctor. These simulations can be used to find the most effective treatment with the least possible dose to the patient. Such a system, so-called "Doctor by Information Technology", would help provide high-quality medical services everywhere. However, the large amount of computing time required by the well-known general-purpose Monte Carlo (MC) codes has prevented their use for routine dose distribution calculations in customized radiation treatment planning. The optimal solution for providing an "accurate" dose distribution within an "acceptable" time limit is to develop a parallel simulation algorithm on a Beowulf PC cluster, because it is the most accurate, efficient, and economical approach. I developed a parallel MC electron and photon transport simulation code based on the standard MPI message-passing interface. This algorithm solved the main difficulty of parallel MC simulation (overlapping random number series in different processors) by using multiple random number seeds. The parallel results agreed well with the serial ones, and the parallel efficiency approached 100%, as expected.

Kum, Oyeon

2004-11-01
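
The seeding strategy described above, one distinct random stream per processor so that sequences do not overlap, can be sketched with a toy MPI-style reduction. This is a sketch only; production parallel RNGs (e.g. counter-based generators) give stronger independence guarantees than simple seed offsets.

```python
import random

def worker_stream(base_seed, rank):
    """One independent random stream per worker rank. Distinct integer
    seeds keep the per-processor sequences from repeating each other,
    which is the failure mode the abstract describes."""
    return random.Random(base_seed * (2 ** 32) + rank)

def estimate_pi(rng, n):
    """Toy per-worker MC kernel: estimate pi by throwing n darts at the
    unit quarter-circle."""
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return 4.0 * hits / n

# Combine the independent workers exactly as an MPI reduce would.
estimates = [estimate_pi(worker_stream(2024, rank), 50000) for rank in range(4)]
pi_hat = sum(estimates) / len(estimates)
```

Because each stream is independent, averaging the per-worker estimates is statistically equivalent to one long serial run, which is why the parallel and serial results agree and the efficiency can approach 100%.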

15

ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

2004-06-01

16

Monte Carlo photon benchmark problems

Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. These are compared to both the COG Monte Carlo computer code and either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems.

Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.

1991-01-01

17

An OpenCL-based Monte Carlo dose calculation engine (oclMC) for coupled photon-electron transport

The Monte Carlo (MC) method has been recognized as the most accurate dose calculation method for radiotherapy. However, its extremely long computation time impedes clinical application. Recently, many efforts have been made to achieve fast MC dose calculation on GPUs. Nonetheless, most GPU-based MC dose engines were developed in the NVIDIA CUDA environment. This limits code portability to other platforms, hindering the introduction of GPU-based MC simulations into clinical practice. The objective of this paper is to develop a fast cross-platform MC dose engine, oclMC, using the OpenCL environment for external-beam photon and electron radiotherapy in the MeV energy range. Coupled photon-electron MC simulation was implemented with analogue simulation for photon transport and a Class II condensed history scheme for electron transport. To test the accuracy and efficiency of our dose engine oclMC, we compared dose calculation results of oclMC and gDPM, our previously developed GPU-based MC code, for a 15 MeV electron ...

Tian, Zhen; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

2015-01-01

18

NASA Astrophysics Data System (ADS)

Minimizing the differences between dose distributions calculated at the treatment planning stage and those delivered to the patient is an essential requirement for successful radiotherapy. Accurate calculation of dose distributions in the treatment planning process is important and can be done only by using a Monte Carlo calculation of particle transport. In this paper, we perform a further validation of our previously developed parallel Monte Carlo electron and photon transport (PMCEPT) code [Kum and Lee, J. Korean Phys. Soc. 47, 716 (2005) and Kim and Kum, J. Korean Phys. Soc. 49, 1640 (2006)] for applications to clinical radiation problems. A linear accelerator, Siemens' Primus 6 MV, was modeled and commissioned. A thorough validation includes both small fields, closely related to intensity-modulated radiation treatment (IMRT), and large fields. Two-dimensional comparisons with film measurements were also performed. The PMCEPT results, in general, agreed well with the measured data, within a maximum error of about 2%. Moreover, considering the experimental uncertainties, the PMCEPT results can provide a gold standard of dose distributions for radiotherapy. The computing time was also much shorter than that needed for measurements, although it remains a bottleneck for direct application to the daily routine treatment planning procedure.

Kum, Oyeon; Han, Youngyih; Jeong, Hae Sun

2012-05-01

19

The Forschungszentrum Karlsruhe operates a partial body counter, which is designed for the in vivo measurement of low-energy photon emitters in the human body. Recently, a numerical procedure has been developed which allows for the calculation of individual calibration factors for this partial body counter. The procedure is based on a Monte Carlo simulation of the radiation transport from the contaminated organ or tissue within the body to the detectors using the MCNP5 code. For simulation of the human body, the MEET Man dataset of the Institute of Biomedical Techniques of the University Karlsruhe has been applied. The derived calibration factors were compared with the respective values measured using some physical phantoms such as the Lawrence Livermore National Laboratory torso phantom and the bone phantoms of the New York University Medical Center and the US Transuranium and Uranium Registry. PMID:17261536

Doerfel, H; Heide, B

2007-01-01

20

An electron-photon coupled Monte Carlo code, ARCHER (Accelerated Radiation-transport Computations in Heterogeneous Environments), is being developed at Rensselaer Polytechnic Institute as a software test bed for emerging heterogeneous high-performance computers that utilize accelerators such as GPUs. In this paper, the preliminary results of code development and testing are presented. The electron transport in media was modeled using the class-II condensed history method. The electron energies considered range from a few hundred keV to 30 MeV. Moller scattering and bremsstrahlung processes above a preset energy were explicitly modeled. Energy loss below that threshold was accounted for using the continuous slowing down approximation (CSDA). Photon transport was handled with the delta tracking method. The photoelectric effect, Compton scattering, and pair production were modeled. Voxelized geometry was supported. A serial ARCHER-CPU was first written in C++. The code was then ported to the GPU platform using CUDA C. The hardware involved a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla M2090 GPUs. ARCHER was tested for a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6×10⁶ electron histories were simulated within 2 seconds. In comparison, the same case required 1645 seconds with EGSnrc and 9213 seconds with MCNPX, each running on a single CPU core. (authors)

Su, L.; Du, X.; Liu, T.; Xu, X. G. [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States)

2013-07-01
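
The delta tracking used for photon transport above can be sketched in one dimension: free flights are sampled with a majorant cross-section, and a tentative collision at x is accepted with probability mu(x)/mu_max, so voxel boundaries never have to be ray-traced. This is a generic Woodcock-tracking sketch, not ARCHER's implementation.

```python
import math
import random

def woodcock_collision(mu_of_x, mu_max, x0, rng):
    """Delta (Woodcock) tracking along a ray: sample exponential free
    flights with the majorant cross-section mu_max; accept a collision at
    x with probability mu(x)/mu_max, otherwise it is a 'virtual' collision
    and the flight continues. Returns the real collision site."""
    x = x0
    while True:
        x += -math.log(1.0 - rng.random()) / mu_max   # majorant free flight
        if rng.random() < mu_of_x(x) / mu_max:
            return x                                  # real collision site
```

The rejection step exactly compensates for using the (too large) majorant cross-section, so the sampled collision sites follow the true heterogeneous collision density.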

21

The fundamental motivation for the research presented in this dissertation was the need to development a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class ''u'' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. 
Second, the ability to calculate radiation dose due to the neutron environment around an MEA is shown. An uncertainty of a factor of three in the MEA calculations is shown to be due to uncertainties in the geometry modeling. It is believed that the methodology is sound and that good agreement between simulation and experiment has been demonstrated.

Morgan C. White

2000-07-01

22

Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor, or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization. PMID:24600168

Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali

2014-01-01

23

ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

2005-09-01

24

The ITS (Integrated Tiger Series) Monte Carlo code package developed at Sandia National Laboratories and distributed as CCC-467/ITS by the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory (ORNL) consists of eight codes - the standard codes, TIGER, CYLTRAN, ACCEPT; the P-codes, TIGERP, CYLTRANP, ACCEPTP; and the M-codes, ACCEPTM and CYLTRANM. The codes have been adapted to run on the IBM 3081, VAX 11/780, CDC-7600, and Cray 1 with the use of the update emulator UPEML. This manual should serve as a guide to a user running the codes on IBM computers having 370 architecture. The cases listed were tested on the IBM 3033, under the MVS operating system using the VS Fortran Level 1.3.1 compiler.

Kirk, B.L.

1985-12-01

25

THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory

2007-01-10

26

Improved geometry representations for Monte Carlo radiation transport.

ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.

Martin, Matthew Ryan (Cornell University)

2004-08-01

27

Treatment of Compton scattering of linearly polarized photons in Monte Carlo codes

NASA Astrophysics Data System (ADS)

The basic formalism of Compton scattering of linearly polarized photons is reviewed, and some simple prescriptions to deal with the transport of polarized photons in Monte Carlo simulation codes are given. Fortran routines, based on the described method, have been included in MCNP, a widely used code for neutron, photon and electron transport. Since this improved version of the code can be of general use, its implementation and the procedures to employ it are discussed.
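The azimuthal dependence at the heart of this treatment - the polarized Klein-Nishina cross section varying as cos²φ about the polarization vector - can be sampled by simple rejection. The sketch below is an illustrative stand-in, not the Fortran routines of the paper; the function name and interface are assumptions:

```python
import math
import random

def sample_compton_phi(cos_theta, k_ratio, rng=random.random):
    """Rejection-sample the azimuthal angle phi (measured from the photon's
    linear-polarization vector) for Compton scattering, using the polarized
    Klein-Nishina angular factor
        f(phi) ~ k'/k + k/k' - 2*sin(theta)**2 * cos(phi)**2,
    where k_ratio = k'/k is the scattered-to-incident energy ratio."""
    a = k_ratio + 1.0 / k_ratio        # k'/k + k/k', always >= 2
    b = 2.0 * (1.0 - cos_theta ** 2)   # 2*sin(theta)**2, always <= 2
    while True:                        # f(phi) <= a, so accept with prob f/a
        phi = 2.0 * math.pi * rng()
        if rng() * a <= a - b * math.cos(phi) ** 2:
            return phi
```

Because a ≥ 2 ≥ b, the acceptance ratio is always well defined; for 90-degree scattering the sampled φ clusters perpendicular to the polarization vector, as expected.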

Matt, Giorgio; Feroci, Marco; Rapisarda, Massimo; Costa, Enrico

1996-10-01

28

The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.

Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.

2000-03-01

29

Coupled electron-photon radiation transport

Massively-parallel computers allow detailed 3D radiation transport simulations to be performed to analyze the response of complex systems to radiation. This has recently been demonstrated with the coupled electron-photon Monte Carlo code, ITS. To enable such calculations, the combinatorial geometry capability of ITS was improved. For greater geometrical flexibility, a version of ITS is under development that can track particles in CAD geometries. Deterministic radiation transport codes that utilize an unstructured spatial mesh are also being devised. For electron transport, the authors are investigating second-order forms of the transport equations which, when discretized, yield symmetric positive definite matrices. A novel parallelization strategy, simultaneously solving for spatial and angular unknowns, has been applied to the even- and odd-parity forms of the transport equation on a 2D unstructured spatial mesh. Another second-order form, the self-adjoint angular flux transport equation, also shows promise for electron transport.

Lorence, L.; Kensek, R.P.; Valdez, G.D.; Drumm, C.R.; Fan, W.C.; Powell, J.L.

2000-01-17

30

Recent advances in the Mercury Monte Carlo particle transport code

We review recent physics and computational science advances in the Mercury Monte Carlo particle transport code under development at Lawrence Livermore National Laboratory. We describe recent efforts to enable a nuclear resonance fluorescence capability in the Mercury photon transport. We also describe recent work to implement a probability of extinction capability into Mercury. We review the results of current parallel scaling and threading efforts that enable the code to run on millions of MPI processes. (authors)

Brantley, P. S.; Dawson, S. A.; McKinley, M. S.; O'Brien, M. J.; Stevens, D. E.; Beck, B. R.; Jurgenson, E. D.; Ebbers, C. A.; Hall, J. M. [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, CA 94551 (United States)

2013-07-01

31

Automated Monte Carlo biasing for photon-generated electrons near surfaces.

This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
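The weight-window game this report automates can be sketched generically: particles below the window play Russian roulette, particles above it are split. The code below is a minimal illustration of the technique itself, not Sandia's implementation; the window bounds and survival-weight choice are assumptions:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random.random):
    """Apply a weight-window check to one particle. Returns the list of
    surviving particle weights (possibly empty):
      - below the window: Russian roulette toward the survival weight,
      - above the window: split into particles inside the window,
      - inside the window: unchanged.
    The expected total weight is conserved, keeping the game unbiased."""
    w_survive = 0.5 * (w_low + w_high)   # one common survival-weight choice
    if weight < w_low:
        # roulette: survive with probability weight / w_survive
        if rng() < weight / w_survive:
            return [w_survive]
        return []                        # killed
    if weight > w_high:
        # split into n particles whose weights land inside the window
        n = min(int(weight / w_high) + 1, 10)   # cap splits, as codes do
        return [weight / n] * n
    return [weight]
```

In adjoint-informed schemes like the one described, w_low and w_high would be set per region/energy from an adjoint-flux estimate rather than chosen by hand.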

Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick

2009-09-01

32

Photon transport in binary photonic lattices

NASA Astrophysics Data System (ADS)

We present a review of the mathematical methods that are used to theoretically study classical propagation and quantum transport in arrays of coupled photonic waveguides. We focus on analyzing two types of binary photonic lattices: those where either self-energies or couplings alternate. For didactic reasons, we split the analysis into classical propagation and quantum transport, but all methods can be implemented, mutatis mutandis, in a given case. On the classical side, we use coupled mode theory and present an operator approach to the Floquet-Bloch theory in order to study the propagation of a classical electromagnetic field in two particular infinite binary lattices. On the quantum side, we study the transport of photons in equivalent finite and infinite binary lattices by coupled mode theory and linear algebra methods involving orthogonal polynomials. Curiously, the dynamics of finite-size binary lattices can be expressed in terms of the roots and functions of Fibonacci polynomials.
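The coupled-mode description used here can be made concrete with a small integrator. The sketch below, assuming an alternating self-energy ±δ and a uniform coupling C, is our own minimal RK4 illustration of the coupled-mode equations for a finite binary array, not code from the review:

```python
def propagate_binary_lattice(n_sites, delta, coupling, z_max, steps, source=0):
    """RK4 integration of the coupled-mode equations
        i dE_n/dz = (-1)^n * delta * E_n + C * (E_{n-1} + E_{n+1})
    for a finite binary waveguide array with alternating self-energies.
    Light is injected into a single waveguide ('source'); returns the
    complex mode amplitudes at propagation distance z_max."""
    def deriv(e):
        d = [0j] * n_sites
        for n in range(n_sites):
            beta = delta if n % 2 == 0 else -delta
            left = e[n - 1] if n > 0 else 0j
            right = e[n + 1] if n < n_sites - 1 else 0j
            d[n] = -1j * (beta * e[n] + coupling * (left + right))
        return d

    e = [0j] * n_sites
    e[source] = 1.0 + 0j
    h = z_max / steps
    for _ in range(steps):
        k1 = deriv(e)
        k2 = deriv([e[i] + 0.5 * h * k1[i] for i in range(n_sites)])
        k3 = deriv([e[i] + 0.5 * h * k2[i] for i in range(n_sites)])
        k4 = deriv([e[i] + h * k3[i] for i in range(n_sites)])
        e = [e[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6.0
             for i in range(n_sites)]
    return e
```

Since the evolution is unitary, the total power Σ|E_n|² is conserved (up to the small RK4 discretization error), which makes a convenient correctness check.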

Rodríguez-Lara, B. M.; Moya-Cessa, H.

2013-03-01

33

NOTE: An efficient framework for photon Monte Carlo treatment planning

NASA Astrophysics Data System (ADS)

Currently photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure where many user interactions are needed. Automation is therefore needed for use in clinical routine. In addition, because of the long computing time in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed resulting in a very flexible framework. By this means appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one out of three full MC transport codes can be selected independently. Additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse providing all necessary information by means of Dicom streams was used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown.
Thereby, comparisons are performed between MC calculated dose distributions and those calculated by a pencil beam or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows a widespread use for all kinds of investigations from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules keeping the system highly flexible and efficient. This work was presented in part at the First European Workshop on Monte Carlo Treatment Planning (EWG-MCTP) held in Gent, Belgium from 22 to 25 October 2006.

Fix, Michael K.; Manser, Peter; Frei, Daniel; Volken, Werner; Mini, Roberto; Born, Ernst J.

2007-09-01

34

Precise Monte Carlo Simulation of Single-Photon Detectors

We demonstrate the importance and utility of Monte Carlo simulation of single-photon detectors. Devising an optimal simulation is strongly influenced by the particular application because of the complexity of modern, avalanche-diode-based single-photon detectors. Using a simple yet very demanding example of random number generation via detection of Poissonian photons exiting a beam splitter, we present a Monte Carlo simulation that faithfully reproduces the serial autocorrelation of random bits as a function of detection frequency over four orders of magnitude of the incident photon flux. We conjecture that this simulation approach can be easily modified for use in many other applications.
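The serial-autocorrelation figure of merit used in this work is easy to reproduce for a toy model. The sketch below is our own simplified stand-in, not the authors' simulation: it ignores afterpulsing and the other detector effects the paper models, and the dead time here only thins the stream, so the bit values stay independent and the autocorrelation should be consistent with zero - a useful baseline:

```python
import math
import random

def serial_autocorrelation(bits, lag=1):
    """Lag-k serial autocorrelation coefficient of a 0/1 bit sequence."""
    n = len(bits) - lag
    mean = sum(bits) / len(bits)
    var = sum((b - mean) ** 2 for b in bits) / len(bits)
    if var == 0.0:
        return 0.0
    cov = sum((bits[i] - mean) * (bits[i + lag] - mean) for i in range(n)) / n
    return cov / var

def beamsplitter_bits(n_photons, rate, dead_time, rng=random.random):
    """Toy model: Poissonian photons hit a 50/50 beam splitter; a click in
    arm 0 or 1 yields bit 0 or 1. A shared dead time discards closely
    spaced photons, thinning the resulting bit stream."""
    bits, t, t_last = [], 0.0, -1e30
    for _ in range(n_photons):
        t += -math.log(1.0 - rng()) / rate   # exponential inter-arrival time
        if t - t_last < dead_time:
            continue                          # photon lost during dead time
        t_last = t
        bits.append(0 if rng() < 0.5 else 1)
    return bits
```

Adding detector physics that couples the bit value to the arrival history (e.g. afterpulsing) is what produces the nonzero, flux-dependent autocorrelation studied in the paper.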

Mario Stipčević; Daniel J. Gauthier

2014-11-13

35

TART. Coupled Neutron & Photon MC Transport

TART is a three-dimensional, data-dependent Monte Carlo transport program. The program calculates the transport of neutrons, photons, and neutron-induced photons through zones described by algebraic functions. The zones and elements to be included are user-specified. Any one of 21 different output tallies (methods of calculating particle transport) may be selected for each zone. A spectral reflection tally, which calculates reflections from planes and quadratic surfaces, saves considerable time and effort for some classes of problems. The neutron and photon energy deposition output tally is included in all TART calculations. The neutron and gamma-ray production cross sections are specified from 10E-9 MeV to 20 MeV. The gamma-ray interaction cross sections are specified from 10E-4 MeV to 30 MeV. The three cross section libraries are provided in binary form. Variance reduction methods included are splitting and Russian roulette at zone boundaries. Each zone in the problem can be assigned a weight.
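The boundary splitting/roulette mentioned here is a standard game driven by user-assigned zone importances. The sketch below illustrates the generic technique, not TART's actual routine; the convention (higher importance in the destination zone triggers splitting) is an assumption for the example:

```python
import random

def cross_boundary(weight, imp_from, imp_to, rng=random.random):
    """Geometry splitting / Russian roulette at a zone boundary, driven by
    user-assigned zone importances. Returns the list of weights of the
    particles continuing in the new zone. Expected total weight is
    conserved, which keeps the game unbiased."""
    ratio = imp_to / imp_from
    if ratio > 1.0:
        # entering a more important zone: split into ~ratio particles
        n = int(ratio)
        if rng() < ratio - n:        # handle non-integer ratios on average
            n += 1
        return [weight / ratio] * n
    # entering a less important zone: roulette with survival prob = ratio
    if rng() < ratio:
        return [weight / ratio]
    return []
```

Splitting increases the sample count where it matters while roulette trims unimportant histories; both leave every tally's expectation unchanged.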

Plechaty, E.F. [Lawrence Livermore National Lab., CA (United States)

1988-10-06

36

Enhanced Electron-Photon Transport in MCNP6

NASA Astrophysics Data System (ADS)

Recently a variety of improved and enhanced methods for low-energy photon/electron transport have been developed for the Monte Carlo particle transport code MCNP6. Aspects of this development include a significant reworking of the MCNP coding to allow for consideration of much more detail in atomic relaxation processes, new algorithms for reading and processing the Evaluated-Nuclear-Data-File photon, electron, and relaxation data capable of supporting such detailed models, and extension of the electron/photon transport energy range below the traditional 1-kilovolt limit in MCNP, with the goal of performing transport of electrons and photons down to energies in the few-electron-volt range. In this paper we provide an overview of these developments.

Hughes, H. Grady

2014-06-01

37

A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT

Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.
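The core of the Fleck-Cummings scheme this work generalizes is the Fleck factor, which converts part of the absorption-reemission within a time step into effective scattering. A minimal sketch in the usual IMC notation (our illustration, not the authors' neutrino code):

```python
def fleck_factor(sigma_planck, c, dt, beta, alpha=1.0):
    """Fleck factor f = 1 / (1 + alpha*beta*c*dt*sigma_planck) of Implicit
    Monte Carlo (Fleck & Cummings). Within a time step dt, a fraction f of
    absorption events is treated as true absorption and the remaining
    (1 - f) as effective scattering, which is what permits the large
    implicit time steps. Here beta = 4*a*T**3 / (rho*c_v) is the ratio of
    radiation to material heat capacity and alpha in [0.5, 1] is the
    time-centering parameter."""
    return 1.0 / (1.0 + alpha * beta * c * dt * sigma_planck)

def effective_opacities(sigma_planck, f):
    """Split the absorption opacity into effective absorption/scattering."""
    return f * sigma_planck, (1.0 - f) * sigma_planck
```

As dt → 0 the factor tends to 1 (the explicit limit); for large dt it tends to 0, so nearly all absorption becomes effective scattering and the time step remains stable.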

Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan [TAPIR, California Institute of Technology, MC 350-17, 1200 E California Blvd., Pasadena, CA 91125 (United States); Burrows, Adam; Dolence, Joshua C. [Department of Astrophysical Sciences, Princeton University, Peyton Hall, Ivy Lane, Princeton, NJ 08544 (United States); Loeffler, Frank; Schnetter, Erik, E-mail: abdik@tapir.caltech.edu [Center for Computation and Technology, Louisiana State University, 216 Johnston Hall, Baton Rouge, LA 70803 (United States)

2012-08-20

38

PEREGRINE is a 3D Monte Carlo dose calculation system designed to serve as a dose calculation engine for clinical radiation therapy treatment planning systems. Taking advantage of recent advances in low-cost computer hardware, modern multiprocessor architectures and optimized Monte Carlo transport algorithms, PEREGRINE performs mm-resolution Monte Carlo calculations in times that are reasonable for clinical use. PEREGRINE has been developed to simulate radiation therapy for several source types, including photons, electrons, neutrons and protons, for both teletherapy and brachytherapy. However the work described in this paper is limited to linear accelerator-based megavoltage photon therapy. Here we assess the accuracy, reliability, and added value of 3D Monte Carlo transport for photon therapy treatment planning. Comparisons with clinical measurements in homogeneous and heterogeneous phantoms demonstrate PEREGRINE's accuracy. Studies with variable tissue composition demonstrate the importance of material assignment on the overall dose distribution. Detailed analysis of Monte Carlo results provides new information for radiation research by expanding the set of observables.

Albright, N; Bergstrom, P M; Daly, T P; Descalle, M; Garrett, D; House, R K; Knapp, D K; May, S; Patterson, R W; Siantar, C L; Verhey, L; Walling, R S; Welczorek, D

1999-07-01

39

MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations

The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.

Forster, R.A.; Little, R.C.; Briesmeister, J.F.

1989-01-01

40

Shield weight optimization using Monte Carlo transport calculations

NASA Technical Reports Server (NTRS)

Outlines are given of the theory used in the FASTER-3 Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries. The code has the additional capability of calculating the minimum weight layered unit shield configuration which will meet a specified dose rate constraint. It includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have a specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. Results are presented for sample problems involving primary neutron and both primary and secondary photon transport in a spherical reactor shield configuration. These results include the optimization of the shield configuration.

Jordan, T. M.; Wohl, M. L.

1972-01-01

41

Monte Carlo simulation for the transport beamline

In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.

Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy)]; Attili, A.; Marchetto, F.; Russo, G. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy)]; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic)]; Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy)]; Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy)]

2013-07-26

42

Monte Carlo radiation transport parallelism

This talk summarizes the main aspects of the LANL ASCI Eolus project and its major unclassified code project, MCNP. The MCNP code provides state-of-the-art Monte Carlo radiation transport to approximately 3000 users world-wide. Almost all hardware platforms are supported because we strictly adhere to the FORTRAN-90/95 standard. For parallel processing, MCNP uses a mixture of OpenMP combined with either MPI or PVM (shared and distributed memory). This talk summarizes our experiences on various platforms using MPI with and without OpenMP. These platforms include PC-Windows, Intel-LINUX, BlueMountain, Frost, ASCI-Q and others.

Cox, L. J. (Lawrence J.); Post, S. E. (Susan E.)

2002-01-01

43

Calculation of radiation therapy dose using all particle Monte Carlo transport

The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.

Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.

1999-02-09

44

Calculation of radiation therapy dose using all particle Monte Carlo transport

The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.

Chandler, William P. (Tracy, CA); Hartmann-Siantar, Christine L. (San Ramon, CA); Rathkopf, James A. (Livermore, CA)

1999-01-01

45

Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport

Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.

McKinley, M S; Brooks III, E D; Daffin, F

2004-12-13

46

Monte Carlo method for photon heating using temperature-dependent optical properties.

The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature will greatly vary, such as in the case of laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system that allows temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. Validation of the simulation was done using comparisons to established Monte Carlo simulations using constant properties, and a comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties can vary with temperature. The difference in results between variable-property and constant-property methods for the representative system of laser-heated silicon can become larger than 100 K. This simulation will return more accurate results of optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy in simulated results leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes. PMID:25488656
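The feedback loop described above can be illustrated with a much-simplified 1-D sketch (ours, not the authors' code): absorption coefficients are looked up from the current cell temperatures, the sampled photon optical depth is walked cell by cell (the segmented-step idea), and deposited energy updates the temperatures for the next iteration. Scattering and heat conduction are omitted for brevity; all names and parameters are assumptions:

```python
import math
import random

def photon_heating_step(temps, mu_a_of_T, dx, n_photons, e_photon,
                        heat_capacity, rng=random.random):
    """One feedback iteration: launch photons into a slab discretized into
    cells, sample absorption using the CURRENT cell temperatures, deposit
    the absorbed energy, then return the updated cell temperatures.
    mu_a_of_T maps a temperature to an absorption coefficient."""
    absorbed = [0.0] * len(temps)
    for _ in range(n_photons):
        tau = -math.log(1.0 - rng())   # sampled optical depth to absorption
        cell = 0
        while cell < len(temps):
            step_tau = mu_a_of_T(temps[cell]) * dx
            if tau <= step_tau:
                absorbed[cell] += e_photon   # absorbed in this cell
                break
            tau -= step_tau                  # segment the step cell by cell
            cell += 1
        # photons that exhaust all cells simply exit the slab
    return [temps[i] + absorbed[i] / heat_capacity
            for i in range(len(temps))]
```

With a constant mu_a the deposition profile reduces to the Beer-Lambert law, which mirrors the validation check described in the abstract; with a temperature-dependent mu_a the profile shifts as the slab heats from one iteration to the next.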

Slade, Adam Broadbent; Aguilar, Guillermo

2015-02-01

47

Photon beam description in PEREGRINE for Monte Carlo dose calculations

The goal of PEREGRINE is to provide the capability for accurate, fast Monte Carlo calculation of radiation therapy dose distributions for routine clinical use and for research into the efficacy of improved dose calculation. An accurate, efficient method of describing and sampling radiation sources is needed, and a simple, flexible solution is provided. The teletherapy source package for PEREGRINE, coupled with state-of-the-art Monte Carlo simulations of treatment heads, makes it possible to describe any teletherapy photon beam to the precision needed for highly accurate Monte Carlo dose calculations in complex clinical configurations that use standard patient modifiers such as collimator jaws, wedges, blocks, and/or multi-leaf collimators. Generic beam descriptions for a class of treatment machines can readily be adjusted to yield dose calculations to match specific clinical sites.

Cox, L. J., LLNL

1997-03-04

48

Vertical Photon Transport in Cloud Remote Sensing Problems

NASA Technical Reports Server (NTRS)

Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting based on the maximum penetration of reflected photons proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is derived to accurately determine both weightings, avoiding time consuming Monte Carlo methods. Superposition calculations are made for a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Effective radius retrievals from modeled vertically inhomogeneous liquid water clouds are then made using the standard near-infrared bands, and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.

Platnick, S.

1999-01-01

49

Benchmarking of Proton Transport in Super Monte Carlo Simulation Program

NASA Astrophysics Data System (ADS)

The Monte Carlo (MC) method has traditionally been applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy scale can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamic 3D data sets and geometry models is supported by RVIS, a nuclear radiation virtual simulation and assessment system. Continuous-energy cross section data from the hybrid evaluated nuclear data library HENDL are utilized to support the simulation. Neutronics calculations of fixed-source and criticality design parameters for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in the former version of SuperMC. Recently, proton transport has also been integrated into SuperMC for energies up to 10 GeV. The physical processes considered for proton transport include electromagnetic and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes.
In hadronic physics, the Bertini intra-nuclear cascade model with excitons, the pre-equilibrium model, the nucleus explosion model, the fission model, and the evaporation model are incorporated to treat intermediate-energy nuclear reactions for protons. Other hadronic models are also under development. The benchmarking of proton transport in SuperMC has been performed against the Accelerator Driven subcritical System (ADS) benchmark model and data released by the IAEA under its Coordinated Research Project (CRP). The incident proton energy is 1.0 GeV. The neutron flux and energy deposition were calculated. The results simulated using SuperMC and FLUKA agree within the statistical uncertainty inherent in the Monte Carlo method. Proton transport in SuperMC has also been applied to the China Lead-Alloy cooled Reactor (CLEAR), designed by the FDS Team, for the calculation of spallation reactions in the target.

Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican

2014-06-01

50

Approximation for Horizontal Photon Transport in Cloud Remote Sensing Problems

NASA Technical Reports Server (NTRS)

The effect of horizontal photon transport within real-world clouds can be of consequence to remote sensing problems based on plane-parallel cloud models. An analytic approximation for the root-mean-square horizontal displacement of reflected and transmitted photons relative to the incident cloud-top location is derived from random walk theory. The resulting formula is a function of the average number of photon scatterings, the particle asymmetry parameter, and the single-scattering albedo. In turn, the average number of scatterings can be determined from efficient adding/doubling radiative transfer procedures. The approximation is applied to liquid water clouds for typical remote sensing solar spectral bands, involving both conservative and non-conservative scattering. Results compare well with Monte Carlo calculations. Though the emphasis is on horizontal photon transport in terrestrial clouds, the derived approximation is applicable to any multiple scattering plane-parallel radiative transfer problem. The complete horizontal transport probability distribution can be described with an analytic distribution specified by the root-mean-square and average displacement values. However, it is shown empirically that the average displacement can be reasonably inferred from the root-mean-square value. An estimate for the horizontal transport distribution can then be made from the root-mean-square photon displacement alone.
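The random-walk scaling underlying such an approximation can be illustrated directly. The sketch below is not the paper's derived formula; it is a generic isotropic-scattering random walk (an assumption made for illustration) showing that the rms horizontal displacement grows as the square root of the number of scatterings.

```python
import numpy as np

rng = np.random.default_rng(0)

def rms_displacement(n_scatter, mfp, n_walks=2000):
    """Monte Carlo rms horizontal (x, y) displacement after n_scatter
    isotropic scatterings with exponential free paths of mean mfp."""
    r2 = 0.0
    for _ in range(n_walks):
        x = y = 0.0
        for _ in range(n_scatter):
            s = rng.exponential(mfp)             # free-path length
            phi = rng.uniform(0.0, 2.0 * np.pi)  # azimuth
            mu = rng.uniform(-1.0, 1.0)          # isotropic polar cosine
            sin_t = np.sqrt(1.0 - mu * mu)
            x += s * sin_t * np.cos(phi)
            y += s * sin_t * np.sin(phi)
        r2 += x * x + y * y
    return np.sqrt(r2 / n_walks)

# Random-walk theory predicts rms growth proportional to sqrt(N):
r10 = rms_displacement(10, 1.0)
r40 = rms_displacement(40, 1.0)
ratio = r40 / r10    # expected near sqrt(40 / 10) = 2
```

Asymmetric (forward-peaked) phase functions and absorption, which the paper's formula accounts for, would rescale the effective step but not this square-root scaling.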

Platnick, Steven

1999-01-01

51

This paper presents the findings of an investigation into the Monte Carlo simulation of superficial cancer treatments of an internal canthus site using both kilovoltage photons and megavoltage electrons. The EGSnrc system of codes for the Monte Carlo simulation of the transport of electrons and photons through a phantom representative of either a water phantom or treatment site in a

B. E. Currie

2009-01-01

52

Investigation of variance reduction techniques for Monte Carlo photon dose calculation using XVMC

NASA Astrophysics Data System (ADS)

Several variance reduction techniques, such as photon splitting, electron history repetition, Russian roulette and the use of quasi-random numbers are investigated and shown to significantly improve the efficiency of the recently developed XVMC Monte Carlo code for photon beams in radiation therapy. It is demonstrated that it is possible to further improve the efficiency by optimizing transport parameters such as electron energy cut-off, maximum electron energy step size, photon energy cut-off and a cut-off for the kerma approximation, without loss of calculation accuracy. These methods increase the efficiency by a factor of up to 10 compared with the initial XVMC ray-tracing technique, or a factor of 50 to 80 compared with EGS4/PRESTA. Therefore, a common treatment plan (6 MV photons, 10 × 10 cm² field size, 5 mm voxel resolution, 1% statistical uncertainty) can be calculated within 7 min using a single-CPU 500 MHz personal computer. If the requirement on the statistical uncertainty is relaxed to 2%, the calculation time will be less than 2 min. In addition, a technique is presented which allows for the quantitative comparison of Monte Carlo calculated dose distributions and the separation of systematic and statistical errors. Employing this technique it is shown that XVMC calculations agree with EGSnrc at a sub-per cent level for simulations in the energy and material range of interest for radiation therapy.
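Of the techniques listed, Russian roulette is the simplest to sketch. The snippet below is a generic textbook version (the threshold and survival probability are arbitrary illustrative choices, not XVMC's parameters); the weight boost applied to survivors is what keeps the estimator unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)

def roulette(weight, threshold=0.01, survival=0.1):
    """Russian roulette: photons below `threshold` are killed with
    probability 1 - survival; survivors get their weight boosted by
    1 / survival so the expected weight is unchanged (no bias)."""
    if weight >= threshold:
        return weight
    if rng.random() < survival:
        return weight / survival
    return 0.0

# Unbiasedness check: roulette preserves the mean weight in expectation,
# while removing most low-weight photons from further tracking.
w0 = 0.004
outcomes = np.array([roulette(w0) for _ in range(200000)])
mean_w = outcomes.mean()             # close to w0
killed = (outcomes == 0.0).mean()    # close to 1 - survival = 0.9
```

Photon splitting is the mirror-image operation: a high-weight photon is split into several copies with proportionally reduced weights, again leaving the expectation unchanged.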

Kawrakow, Iwan; Fippel, Matthias

2000-08-01

53

Neutron transport calculations using Quasi-Monte Carlo methods

This paper examines the use of quasirandom sequences of points in place of pseudorandom points in Monte Carlo neutron transport calculations. For two simple demonstration problems, the root mean square error, computed over a set of repeated runs, is found to be significantly less when quasirandom sequences are used (the "quasi-Monte Carlo method") than when a standard Monte Carlo calculation is performed using only pseudorandom points.
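The idea can be reproduced in a few lines. The sketch below is illustrative and not the paper's test problems: it compares pseudorandom points with the base-2 van der Corput quasirandom sequence on a trivial one-dimensional integral, where the quasirandom error is typically far smaller at equal sample count.

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    seq = np.empty(n)
    for i in range(n):
        q, denom, x = i + 1, 1.0, 0.0
        while q > 0:
            q, r = divmod(q, base)   # peel off base-b digits of the index
            denom *= base
            x += r / denom           # mirror the digits about the radix point
        seq[i] = x
    return seq

# Estimate the integral of exp(-x) over [0, 1] (exact value: 1 - 1/e)
# with both kinds of point sets.
exact = 1.0 - np.exp(-1.0)
n = 4096
rng = np.random.default_rng(0)
err_pseudo = abs(np.mean(np.exp(-rng.random(n))) - exact)
err_quasi = abs(np.mean(np.exp(-van_der_corput(n))) - exact)
```

Pseudorandom error decays like n^(-1/2), while the low-discrepancy sequence achieves close to n^(-1) for smooth integrands, which is the effect reported in the abstract.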

Moskowitz, B.S.

1997-07-01

54

Transport of photons produced by lightning in clouds

NASA Technical Reports Server (NTRS)

The optical effects of the light produced by lightning are of interest to atmospheric scientists for a number of reasons. Two techniques used to explain the nature of these effects are mentioned: Monte Carlo simulation, and an equivalent medium approach. In the Monte Carlo approach, paths of individual photons are simulated; a photon is said to be scattered if it escapes the cloud, otherwise it is absorbed. In the equivalent medium approach, the cloud is replaced by a single obstacle whose properties are specified by bulk parameters obtained by methods due to Twersky. Herein, Boltzmann transport theory is used to obtain photon intensities. The photons are treated like a Lorentz gas. Only elastic scattering is considered and gravitational effects are neglected. Water droplets comprising a cuboidal cloud are assumed to be spherical and homogeneous. Furthermore, it is assumed that the distribution of droplets in the cloud is uniform and that scattering by air molecules is negligible. The time dependence and five-dimensional nature of this problem make it particularly difficult; neither analytic nor numerical solutions are known.

Solakiewicz, Richard

1991-01-01

55

Thread Divergence and Photon Transport on the GPU (U). LA-UR-13-27057

NASA Astrophysics Data System (ADS)

Monte Carlo methods are commonly used to solve particle transport problems numerically. A major disadvantage of Monte Carlo methods is the time required to obtain accurate solutions. Graphics Processing Units (GPUs) have seen increasing use as accelerators for improving performance in high-performance computing. Extracting the best performance from GPUs requires careful attention to code execution and data movement. In particular, performance can be reduced if threads diverge due to branching, and Monte Carlo codes are susceptible to branching penalties. We explore different schemes to reduce thread divergence in photon transport and report on our performance findings.
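One common divergence-reduction scheme, sketched here in NumPy as an assumption about the kind of approach explored (this abstract does not describe the authors' actual schemes), is to classify all photons by their next event first and then process each event class as one uniform batch, so that no batch mixes different code paths:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
pos = np.zeros(n)                 # photon depths
weight = np.ones(n)               # photon weights
alive = np.ones(n, dtype=bool)

# Instead of branching per photon (divergent within a GPU warp), classify
# all photons first, then run each event class as one uniform batch.
for _ in range(20):
    u = rng.random(n)
    scatter = alive & (u < 0.7)   # event class 1: scatter and continue
    absorb = alive & (u >= 0.7)   # event class 2: terminate
    pos[scatter] += rng.exponential(1.0, scatter.sum())
    weight[absorb] = 0.0
    alive = scatter
fraction_alive = alive.mean()     # expected near 0.7**20
```

On a GPU the same reorganization is typically realized by sorting or compacting the particle arrays by event type, so each warp executes a single branch.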

Aulwes, Rob T.; Zukaitis, Anthony

2014-06-01

56

Photonic sensor applications in transportation security

NASA Astrophysics Data System (ADS)

There is a broad range of security sensing applications in transportation that can be facilitated by using fiber optic sensors and photonic sensor integrated wireless systems. Many of these vital assets are under constant threat of being attacked. It is important to realize that the threats are not just from terrorism but an aging and often neglected infrastructure. To specifically address transportation security, photonic sensors fall into two categories: fixed point monitoring and mobile tracking. In fixed point monitoring, the sensors monitor bridge and tunnel structural health and environment problems such as toxic gases in a tunnel. Mobile tracking sensors are being designed to track cargo such as shipboard cargo containers and trucks. Mobile tracking sensor systems have multifunctional sensor requirements including intrusion (tampering), biochemical, radiation and explosives detection. This paper will review the state of the art of photonic sensor technologies and their ability to meet the challenges of transportation security.

Krohn, David A.

2007-09-01

57

ITS: the integrated TIGER series of electron/photon transport codes - Version 3.0

The ITS system is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Version 3.0 is a major upgrade of the system with important improvements in the physical model. Improvements in the Monte Carlo codes

John A. Halbleib; Ronald P. Kensek; Greg D. Valdez; Stephen M. Seltzer; Martin J. Berger

1992-01-01

58

A generic algorithm for Monte Carlo simulation of proton transport

NASA Astrophysics Data System (ADS)

A mixed (class II) algorithm for Monte Carlo simulation of the transport of protons, and other heavy charged particles, in matter is presented. The emphasis is on the electromagnetic interactions (elastic and inelastic collisions) which are simulated using strategies similar to those employed in the electron-photon code PENELOPE. Elastic collisions are described in terms of numerical differential cross sections (DCSs) in the center-of-mass frame, calculated from the eikonal approximation with the Dirac-Hartree-Fock-Slater atomic potential. The polar scattering angle is sampled by employing an adaptive numerical algorithm which allows control of interpolation errors. The energy transferred to the recoiling target atoms (nuclear stopping) is consistently described by transformation to the laboratory frame. Inelastic collisions are simulated from DCSs based on the plane-wave Born approximation (PWBA), making use of the Sternheimer-Liljequist model of the generalized oscillator strength, with parameters adjusted to reproduce (1) the electronic stopping power read from the input file, and (2) the total cross sections for impact ionization of inner subshells. The latter were calculated from the PWBA including screening and Coulomb corrections. This approach provides quite a realistic description of the energy-loss distribution in single collisions, and of the emission of X-rays induced by proton impact. The simulation algorithm can be readily modified to include nuclear reactions, when the corresponding cross sections and emission probabilities are available, and bremsstrahlung emission.
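The general strategy of sampling a polar angle from a tabulated DCS can be sketched with a plain inverse-CDF table lookup. The adaptive, error-controlled interpolation described in the abstract is not reproduced here, and the DCS shape below is a made-up screened-Rutherford-like placeholder, not PENELOPE data.

```python
import numpy as np

# Tabulated (made-up) forward-peaked DCS on a cosine grid; eta is a
# hypothetical screening parameter, not a value from the paper.
mu_grid = np.linspace(-1.0, 1.0, 2001)
eta = 0.05
dcs = 1.0 / (1.0 + eta - mu_grid) ** 2

# Build the cumulative distribution by trapezoidal integration, then
# sample the polar scattering cosine by inverting it with interpolation.
cdf = np.concatenate(([0.0], np.cumsum(
    np.diff(mu_grid) * 0.5 * (dcs[1:] + dcs[:-1]))))
cdf /= cdf[-1]

def sample_mu(n, rng):
    """Inverse-CDF sampling: map uniform deviates through the tabulated CDF."""
    return np.interp(rng.random(n), cdf, mu_grid)

rng = np.random.default_rng(0)
mu = sample_mu(100000, rng)
mean_mu = mu.mean()    # strongly forward-peaked, so close to +1
```

A production code refines the grid adaptively where the interpolated CDF deviates from the exact one, which is the error control the abstract refers to.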

Salvat, Francesc

2013-12-01

59

A generalized form of coupled photon transport equations that can handle correlated light beams with distinct frequencies is introduced. The derivation is based on the principle of energy conservation. For a single frequency, the current formulation reduces to a standard photon transport equation, and for fluorescence and phosphorescence, the diffusion models derived from the proposed photon transport model match for homogeneous media. The generalized photon transport model is extended to handle wideband inputs in the frequency domain. PMID:23381285
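For reference, the standard photon transport (radiative transfer) equation that the single-frequency case reduces to can be written, in common notation (an assumption about the paper's notation, with radiance L, absorption and scattering coefficients mu_a and mu_s, phase function p, and source q), as:

```latex
\frac{1}{c}\frac{\partial L(\mathbf{r},\hat{\mathbf{s}},t)}{\partial t}
+ \hat{\mathbf{s}}\cdot\nabla L(\mathbf{r},\hat{\mathbf{s}},t)
= -(\mu_a+\mu_s)\,L(\mathbf{r},\hat{\mathbf{s}},t)
+ \mu_s \int_{4\pi} p(\hat{\mathbf{s}},\hat{\mathbf{s}}')\,
  L(\mathbf{r},\hat{\mathbf{s}}',t)\,d\Omega'
+ q(\mathbf{r},\hat{\mathbf{s}},t)
```

The generalized model couples several such equations, one per frequency, through exchange terms constrained by energy conservation.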

Handapangoda, Chintha C; Premaratne, Malin; Nahavandi, Saeid

2012-08-15

60

Purpose: This paper presents the results of a series of calculations to determine buildup factors for ordinary concrete, baryte concrete, lead, steel, and iron in broad-beam geometry for photon energies from 0.125 to 25.125 MeV at 0.250 MeV intervals. Methods: The Monte Carlo N-Particle (MCNP) radiation transport code was used to determine the buildup factors for the studied shielding materials. Results: The computation of the primary broad beams using the buildup factor data was done for nine published megavoltage photon beam spectra ranging from 4 to 25 MV in nominal energy, representing linacs made by the three major manufacturers. The first tenth-value layer and the equilibrium tenth-value layer are calculated from the broad-beam transmission for these nine primary megavoltage photon beam spectra. Conclusions: The results, compared with published data, show the ability of these buildup factor data to predict shielding transmission curves for the primary radiation beam. Therefore, the buildup factor data can be combined with primary, scatter, and leakage x-ray spectra to compute broad-beam transmission for barriers in radiotherapy x-ray shielding facilities.
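The relationship between buildup factors and broad-beam transmission can be sketched numerically. The coefficients below are invented placeholders, not the paper's computed data for concrete, lead, steel, or iron; the sketch only shows how tenth-value layers follow from the broad-beam transmission T(x) = B(x) * exp(-mu * x).

```python
import numpy as np

mu = 0.5    # illustrative linear attenuation coefficient [1/cm]

def buildup(x):
    """Hypothetical Taylor-form buildup factor (coefficients invented)."""
    A, a1, a2 = 10.0, -0.1, 0.05
    return A * np.exp(-a1 * mu * x) + (1.0 - A) * np.exp(-a2 * mu * x)

def transmission(x):
    """Broad-beam transmission: buildup factor times narrow-beam attenuation."""
    return buildup(x) * np.exp(-mu * x)

def thickness_for(level, lo=0.0, hi=200.0):
    """Shield thickness where transmission drops to `level` (bisection)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if transmission(mid) > level else (lo, mid)
    return 0.5 * (lo + hi)

tvl1 = thickness_for(0.1)            # first tenth-value layer
tvl2 = thickness_for(0.01) - tvl1    # second TVL, nearer the equilibrium TVL
```

The first TVL exceeds later ones because the buildup factor is still growing in the first layers; deep in the shield the transmission curve settles to a constant logarithmic slope, giving the equilibrium TVL.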

Karim Karoui, Mohamed [Faculte des Sciences de Monastir, Avenue de l'environnement, 5019 Monastir (Tunisia)]; Kharrati, Hedi [Ecole Superieure des Sciences et Techniques de la Sante de Monastir, Avenue Avicenne, 5000 Monastir (Tunisia)]

2013-07-15

61

MORSE Monte Carlo radiation transport code system

This report is an addendum to the MORSE report, ORNL-4972, originally published in 1975. This addendum contains descriptions of several modifications to the MORSE Monte Carlo code, replacement pages containing corrections, Part II of the report, which was previously unpublished, and a new Table of Contents. The modifications include a Klein-Nishina estimator for gamma rays. Use of such an estimator required changing the cross section routines to process pair production and Compton scattering cross sections directly from ENDF tapes and writing a new version of subroutine RELCOL. Another modification is the use of free-form input for the SAMBO analysis data. This required changing subroutine SCORIN and adding the new subroutine RFRE. References are updated, and errors in the original report have been corrected. (WHK)

Emmett, M.B.

1983-02-01

62

Perturbation Monte Carlo methods to solve inverse photon migration problems in heterogeneous tissues

We introduce a novel and efficient method to provide solutions to inverse photon migration problems in heterogeneous turbid media. The method extracts derivative information from a single Monte Carlo simulation to permit the rapid determination of rates of change in the detected photon signal with respect to perturbations in background tissue optical properties. We then feed this derivative information

Carole K. Hayakawa; Jerome Spanier; Frédéric Bevilacqua; Andrew K. Dunn; Joon S. You; Bruce J. Tromberg; Vasan Venugopalan

2001-01-01

63

Efficient, Automated Monte Carlo Methods for Radiation Transport

Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms, based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed. PMID:23226872

Kong, Rong; Ambrose, Martin; Spanier, Jerome

2012-01-01

64

Monte-Carlo Study of Axonal Transport in a Neuron

NASA Astrophysics Data System (ADS)

A living cell has an infrastructure much like that of a city. A key component is the transportation system that consists of roads (filaments) and molecular motors (proteins) that haul cargo along these roads. We will present a Monte Carlo simulation of intracellular transport inside an axon in which motor proteins carry cargos along microtubules and are able to switch from one microtubule to another. The breakdown of intracellular transport in neurons has been associated with neurodegenerative diseases such as Alzheimer's, Lou Gehig's disease (ALS), and Huntingdon's disease.

Shrestha, Uttam; Yu, Clare; Jia, Zhiyuan; Erickson, Robert; Gross, Steven

2011-03-01

65

Specific Absorbed Fractions of Electrons and Photons for Rad-HUMAN Phantom Using Monte Carlo Method

The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of photon and electron SAFs was calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female created from the color photographic images of the Chinese Visible Human (CVH) data set. The model represents most anatomical characteristics of the Chinese adult female and can be taken as an individual phantom to investigate differences in internal dose with respect to Caucasians. In this study, the emission of mono-energetic photons and electrons with energies from 10 keV to 4 MeV was calculated using the Monte Carlo particle transport code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The results showed that the SAFs from Rad-HUMAN have similar trends to, but are larger than, those from the other two models. The differences were due to the racial and anatomical differences in o...

Wang, Wen; Long, Peng-cheng; Hu, Li-qin

2014-01-01

66

Neutron streaming Monte Carlo radiation transport code MORSE-CG

Calculations have been performed using the Monte Carlo code, MORSE-CG, to determine the neutron streaming through various straight and stepped gaps between radiation shield sectors in the conceptual tokamak fusion power plant design STARFIRE. This design calls for ''pie-shaped'' radiation shields with gaps between segments. It is apparent that some type of offset, or stepped gap, configuration will be necessary to reduce neutron streaming through these gaps. To evaluate this streaming problem, a MORSE-to-MORSE coupling technique was used, consisting of two separate transport calculations, which together defined the entire transport problem. The results define the effectiveness of various gap configurations to eliminate radiation streaming.

Halley, A.M.; Miller, W.H.

1986-11-01

67

Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce

NASA Astrophysics Data System (ADS)

Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes.
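The map/reduce split described above can be sketched with Python built-ins in place of Hadoop. The physics here is reduced to a trivial absorption-depth tally; MC321's actual photon model is not reproduced.

```python
import numpy as np
from functools import reduce

def map_task(seed, n_photons=50000, mu_t=1.0):
    """Map task: simulate independent photon histories (reduced here to a
    simple absorption-depth draw) and return a partial depth tally."""
    rng = np.random.default_rng(seed)
    depths = rng.exponential(1.0 / mu_t, n_photons)
    tally, _ = np.histogram(depths, bins=50, range=(0.0, 5.0))
    return tally

def reduce_task(a, b):
    """Reduce task: merge partial absorption tallies by summation."""
    return a + b

# Each map task is independent (on Hadoop these would run on separate
# nodes, keyed by seed); the reduce step merges the partial scores.
partials = [map_task(seed) for seed in range(8)]
total = reduce(reduce_task, partials)
```

Because the map tasks share no state, losing a node only means re-running its map task, which is the fault tolerance the abstract reports.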

Pratx, Guillem; Xing, Lei

2011-12-01

70

The Implementation of Photon Polarization into the Mercury Transport Code

Polarization effects have been ignored in most photon transport codes to date, but new technology has created a need for portable, massively parallel, versatile transport codes that include the effects of polarization. In this project, the effects...

Windsor, Ethan

2014-06-04

71

Engineering nanoscale phonon and photon transport for direct energy conversion

Nanostructures have a profound impact on the transport of heat and energy by electrons, phonons, and photons. In this paper, we will discuss some of the nanoscale heat transfer effects on phonon and photon transport and their implications for thermoelectric and thermophotovoltaic energy conversion technologies. For example, low thermal conductivity materials with good electrical properties are required in solid-state refrigerators

G. Chen; A. Narayanaswamy; C. Dames

2004-01-01

72

Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there ...

Romano, Paul K. (Paul Kollath)

2013-01-01

73

Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems

NASA Astrophysics Data System (ADS)

This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow, and to determine their feasibility on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA, with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique, several modern preconditioning strategies are investigated. Compared with conventional Krylov methods, MCSA demonstrated improved iterative performance, converging in fewer iterations than GMRES with the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems, based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, FANM showed better iterative performance than the Newton-Krylov method, converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain-decomposed algorithm to parallelize MCSA, aimed at leveraging leadership-class computing facilities, was developed by utilizing parallel strategies from the radiation transport community. The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm.
It was found that for the current implementation of MCSA, both weak and strong scaling improved on that observed for production implementations of Krylov methods.
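The Monte Carlo core of such accelerators can be illustrated on a toy linear system. The sketch below is a generic Neumann-series random-walk solver for x = Hx + b, a standard building block of MCSA-type methods, not the author's implementation.

```python
import numpy as np

# Toy contraction: solve x = H x + b, i.e. (I - H) x = b, with a
# Neumann-series Monte Carlo estimator (spectral radius of H is < 1).
H = np.array([[0.1, 0.3],
              [0.2, 0.2]])
b = np.array([1.0, 2.0])
exact = np.linalg.solve(np.eye(2) - H, b)

def mc_solve(H, b, n_walks=5000, max_len=20, seed=0):
    """Estimate x = sum_k H^k b with random walks over matrix indices,
    using uniform transition probabilities and importance weights."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    p = 1.0 / n                # uniform transition probability
    for i in range(n):
        acc = 0.0
        for _ in range(n_walks):
            state, weight, est = i, 1.0, b[i]
            for _ in range(max_len):
                nxt = rng.integers(n)               # next column index
                weight *= H[state, nxt] / p         # importance weight
                state = nxt
                est += weight * b[state]            # add H^k b contribution
            acc += est
        x[i] = acc / n_walks
    return x

x_mc = mc_solve(H, b)    # close to `exact`
```

In MCSA this stochastic estimate is used as the correction inside a deterministic fixed-point iteration, rather than as the solution itself.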

Slattery, Stuart R.

74

Current status of the PSG Monte Carlo neutron transport code

PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX-fuelled VENUS-2 reactor dosimetry benchmark. (authors)

Leppaenen, J. [VTT Technical Research Centre of Finland, Laempoemiehenkuja 3, Espoo, FI-02044 VTT (Finland)

2006-07-01

75

Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy

Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. Results: For 15 × 34, 5 × 5, and 2 × 2 cm² fields, differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies.
For the two-dimensional dose comparisons, the differences between calculations and measurements are generally within 2% of the maximal dose value or 2 mm DTA. Conclusions: The results of the dose comparisons suggest that the developed beam model is suitable for accurately reconstructing photon MLC shaped electron beams for a Clinac 23EX and a TrueBeam linac. Hence, in future work the beam model will be utilized to investigate the possibilities of MERT using the photon MLC to shape electron beams.

Henzen, D., E-mail: henzen@ams.unibe.ch; Manser, P.; Frei, D.; Volken, W.; Born, E. J.; Vetterli, D.; Chatelain, C.; Fix, M. K. [Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, CH-3010 Berne (Switzerland)]; Neuenschwander, H. [Clinic for Radiation-Oncology, Lindenhofspital Bern, CH-3012 Berne (Switzerland)]; Stampanoni, M. F. M. [Institute for Biomedical Engineering, ETH Zürich and Paul Scherrer Institut, CH-5234 Villigen (Switzerland)]

2014-02-15

76

Characterization of a novel micro-irradiator using Monte Carlo radiation transport simulations

NASA Astrophysics Data System (ADS)

Small animals are highly valuable resources for radiobiology research. While rodents have been widely used for decades, zebrafish embryos have recently become a very popular research model. However, unlike rodents, zebrafish embryos lack appropriate irradiation tools and methodologies. Therefore, the main purpose of this work is to use Monte Carlo radiation transport simulations to characterize dosimetric parameters, determine dosimetric sensitivity, and help with the design of a new micro-irradiator capable of delivering irradiation fields as small as 1.0 mm in diameter. The system is based on a miniature x-ray source enclosed in a brass collimator of 3 cm diameter and 3 cm length. A pinhole of 1.0 mm diameter along the central axis of the collimator is used to produce a narrow photon beam. The MCNP5 Monte Carlo code is used to study the beam energy spectrum, percentage depth dose curves, penumbra and effective field size, dose rate, and radiation levels at 50 cm from the source. The results obtained from the Monte Carlo simulations show that the beam produced by the miniature x-ray source and collimator system is adequate to totally or partially irradiate zebrafish embryos, cell cultures and other small specimens used in radiobiology research.

Rodriguez, Manuel; Jeraj, Robert

2008-06-01

77

NASA Astrophysics Data System (ADS)

Purpose: The purpose of this study is to evaluate the influence of photon propagation on the NIR spectral features associated with photoacoustic imaging. Introduction: Photoacoustic CT spectroscopy (PCT-S) has the potential to identify molecular properties of tumors while overcoming the limited depth resolution associated with optical imaging modalities (e.g., OCT and DOT). Photoacoustics is based on the fact that biological tissue generates high-frequency acoustic signals due to volumetric expansion when irradiated by pulsed light. The amplitude of the acoustic signal is proportional to the optical absorption properties of the tissue, which vary with wavelength depending on the molecular makeup of the tissue. Obtaining quantifiable information necessitates modeling and correcting for photon and acoustic propagation in tumors. Material and Methods: A Monte Carlo (MC) algorithm based on MCML (Monte Carlo for Multi-Layered media) has been developed to simulate photon propagation within objects comprised of a series of complex 3D surfaces (Mcml3D). This code has been used to simulate and correct for the optical attenuation of photons in blood, and for subcutaneous tumors with homogeneous and radially heterogeneous vascular distributions. Results: The NIR spectra for oxygenated and deoxygenated blood as determined from Monte Carlo simulated photoacoustic data matched measured data, improving oxygen saturation calculations. Subcutaneous tumors with a homogeneous and a radially heterogeneous distribution of blood revealed large variations in photon absorption as a function of the scanner projection angle. For select voxels near the periphery of the tumor, this angular profile appeared similar between the two different tumors. Conclusions: A Monte Carlo code has been successfully developed and used to correct for photon propagation effects in blood phantoms, restoring the integrity of the NIR spectra associated with oxygenated and deoxygenated blood.
This code can be used to simulate the influence of intra-tumor heterogeneity on the molecular identification via NIR spectroscopy.

Stantz, Keith M.; Liu, Bo; Kruger, Robert A.

2007-02-01
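
The MCML-style photon transport underlying the code above follows a standard pattern: step lengths sampled from an exponential distribution, implicit-capture weighting, and Russian roulette termination. A minimal sketch for an infinite homogeneous medium is given below; the function name and parameters are illustrative, geometry and direction tracking (which Mcml3D handles over complex 3D surfaces) are omitted, and this is not the authors' actual code.

```python
import math
import random

def propagate(mu_a, mu_s, n_photons=10_000, w_min=1e-4, seed=1):
    """MCML-style random walk in an infinite homogeneous medium: exponential
    free paths, implicit capture, and Russian roulette. Returns the mean
    absorbed weight per photon, which should converge to 1.0 since nothing
    escapes an infinite medium."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    absorbed = 0.0
    for _ in range(n_photons):
        w = 1.0
        while True:
            step = -math.log(1.0 - rng.random()) / mu_t  # free path ~ Exp(mu_t)
            dep = w * mu_a / mu_t                        # implicit capture: deposit a fraction
            absorbed += dep
            w -= dep
            if w < w_min:                                # Russian roulette on low weights
                if rng.random() < 0.1:
                    w /= 0.1                             # survivor carries boosted weight
                else:
                    break                                # killed; expectation preserved
    return absorbed / n_photons
```

The `step` variable shows where position tracking would occur; a full code would also sample a Henyey-Greenstein scattering angle at each collision.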

78

Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method

NASA Astrophysics Data System (ADS)

We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.

Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.

2013-12-01

79

Monte Carlo simulation of electron transport in degenerate and inhomogeneous semiconductors

The ensemble Monte Carlo (MC) simulation is accepted as a powerful numerical technique for studying electron transport in degenerate and inhomogeneous semiconductors. The algorithm described has significant advantages for implementing the scattering rates while accounting for the exclusion principle in Monte Carlo simulations.

80

A deterministic computational model for the two dimensional electron and photon transport

NASA Astrophysics Data System (ADS)

A deterministic (non-statistical) two dimensional (2D) computational model describing the transport of electrons and photons typical of the space radiation environment in various shield media is described. The 2D formalism is cast into a code which is an extension of a previously developed one dimensional (1D) deterministic electron and photon transport code. The goal of both the 1D and 2D codes is to satisfy engineering design applications (i.e. rapid analysis) while maintaining an accurate physics based representation of electron and photon transport in the space environment. Both 1D and 2D transport codes have utilized established theoretical representations to describe the relevant collisional and radiative interactions and transport processes. In the 2D version, the shield material specifications are made more general, requiring only the pertinent cross sections. In the 2D model, the computational field is specified in terms of a distance of traverse z along an axial direction as well as a variable distribution of deflection (i.e. polar) angles θ, where −π/2 ≤ θ ≤ π/2. In the 2D transport formalism, a combined mean-free-path and average trajectory approach is used. For candidate shielding materials, using the trapped electron radiation environments at low Earth orbit (LEO), geosynchronous orbit (GEO) and the Jupiter moon Europa, verification of the 2D formalism against the 1D code and an existing Monte Carlo code is presented.

Badavi, Francis F.; Nealy, John E.

2014-12-01

81

Robust light transport in non-Hermitian photonic lattices

Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by the quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time-reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time-reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode...

Longhi, Stefano; Della Valle, Giuseppe

2015-01-01

82

Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).

Smith, L.M.; Hochstedler, R.D. [Univ of Tennessee Space Inst., Tullahoma, TN (United States). Dept. of Electrical Engineering]

1997-02-01
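
One of the acceleration techniques named above, replacing linear search with a binary version, is the classic cross-section-lookup optimization. A minimal Python sketch follows; the energy grid and cross-section values are illustrative, not ITS's actual FORTRAN data structures.

```python
import bisect

def macro_xs_linear(energy, grid, xs):
    """Linear scan for the energy bin: O(n), the pattern profiling flags."""
    i = 0
    while i < len(grid) - 2 and grid[i + 1] <= energy:
        i += 1
    return xs[i]

def macro_xs_binary(energy, grid, xs):
    """Binary search via bisect: O(log n), identical result by construction."""
    i = bisect.bisect_right(grid, energy) - 1
    return xs[min(max(i, 0), len(grid) - 2)]

grid = [0.01, 0.1, 1.0, 10.0, 100.0]   # MeV bin boundaries (illustrative)
xs = [4.2, 1.8, 0.9, 0.4]              # one cross section per bin (illustrative)
```

Because both lookups return bit-identical results, the optimization preserves the "identical or statistically similar results" property the abstract reports.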

83

Acceleration of a Monte Carlo radiation transport code

Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.

Hochstedler, R.D.; Smith, L.M. [The University of Tennessee Space Institute, B. H. Goethert Parkway, MS 21, Tullahoma, Tennessee 37388-8897 (United States)

1996-03-01

84

A Monte Carlo study on neutron and electron contamination of an unflattened 18-MV photon beam.

Recent studies on flattening filter (FF) free beams have shown increased dose rate and less out-of-field dose for unflattened photon beams. On the other hand, changes in the contamination electron and neutron spectra produced through photon (E>10 MV) interactions with linac components have not been completely studied for FF free beams. The objective of this study was to investigate the effect of removing the FF on contamination electron and neutron spectra for an 18-MV photon beam using the Monte Carlo (MC) method. The 18-MV photon beam of an Elekta SL-25 linac was simulated using the MCNPX MC code. The photon, electron and neutron spectra at a distance of 100 cm from the target and on the central axis of the beam were scored for 10 x 10 and 30 x 30 cm(2) fields. Our results showed an increase in contamination electron fluence (normalized to photon fluence) of up to 1.6 times for the FF free beam, which increases skin dose for patients. A neutron fluence reduction of 54% was observed for unflattened beams. Our study confirmed previous measurement results, which showed neutron dose reduction for unflattened beams. This feature can lead to less neutron dose for patients treated with unflattened high-energy photon beams. PMID:18760613

Mesbahi, Asghar

2009-01-01

85

Parallelization of a Monte Carlo particle transport simulation code

NASA Astrophysics Data System (ADS)

We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with more accurate physical models, and improve statistics, since more particle tracks can be simulated in less time.

Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

2010-05-01
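
The parallelization pattern described above, independent per-worker random-number streams whose tallies are combined at the end, can be sketched as follows. The toy tally, seeds, and function names are illustrative (this is not MC4), and the plain `map()` stands in for the MPI fan-out.

```python
import math
import random

def worker(args):
    """Simulate one worker's share of histories with its own RNG stream."""
    seed, n = args
    rng = random.Random(seed)   # independent stream per worker (SPRNG/DCMT stand-in)
    # toy tally: sum of exponentially distributed path lengths (mean 1.0)
    return sum(-math.log(1.0 - rng.random()) for _ in range(n))

def run(n_workers=4, n_per_worker=50_000):
    """Decompose the histories over workers and combine the partial tallies.
    Replace map() with multiprocessing.Pool().map() to fan out across cores."""
    tasks = [(1000 + i, n_per_worker) for i in range(n_workers)]
    total = sum(map(worker, tasks))
    return total / (n_workers * n_per_worker)   # mean path length, expected ~1.0
```

Because each worker owns a distinctly seeded generator, the combined result is reproducible regardless of how the tasks are scheduled, which is the property that makes the serial and parallel codes statistically equivalent.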

86

Robust light transport in non-Hermitian photonic lattices

Combating the effects of disorder on light transport in micro- and nano-integrated photonic devices is of major importance from both fundamental and applied viewpoints. In ordinary waveguides, imperfections and disorder cause unwanted back-reflections, which hinder large-scale optical integration. Topological photonic structures, a new class of optical systems inspired by the quantum Hall effect and topological insulators, can realize robust transport via topologically-protected unidirectional edge modes. Such waveguides are realized by the introduction of synthetic gauge fields for photons in a two-dimensional structure, which break time-reversal symmetry and enable one-way guiding at the edge of the medium. Here we suggest a different route toward robust transport of light in lower-dimensional (1D) photonic lattices, in which time-reversal symmetry is broken because of the non-Hermitian nature of transport. While a forward propagating mode in the lattice is amplified, the corresponding backward propagating mode is damped, thus resulting in an asymmetric transport that is rather insensitive to disorder in the structure. The idea of non-Hermitian robust transport is exemplified in the simplest case of an 'imaginary' gauge field for photons using an engineered coupled-resonator optical waveguide (CROW) structure.

Stefano Longhi; Davide Gatti; Giuseppe Della Valle

2015-03-30

87

Phonon transport analysis of semiconductor nanocomposites using monte carlo simulations

NASA Astrophysics Data System (ADS)

Nanocomposites are composite materials which incorporate nanosized particles, platelets or fibers. The addition of nanosized phases into the bulk matrix can lead to significantly different material properties compared to their macrocomposite counterparts. For nanocomposites, thermal conductivity is one of the most important physical properties. Manipulation and control of thermal conductivity in nanocomposites have impacted a variety of applications. In particular, it has been shown that the phonon thermal conductivity can be reduced significantly in nanocomposites due to the increase in phonon interface scattering, while the electrical conductivity can be maintained. This extraordinary property of nanocomposites has been used to enhance the energy conversion efficiency of thermoelectric devices, which is proportional to the ratio of electrical to thermal conductivity. This thesis investigates phonon transport and thermal conductivity in Si/Ge semiconductor nanocomposites through numerical analysis. The Boltzmann transport equation (BTE) is adopted for the description of phonon thermal transport in the nanocomposites. The BTE employs the particle-like nature of phonons to model heat transfer, accounting for both ballistic and diffusive transport phenomena. Due to the implementation complexity and computational cost involved, the phonon BTE is difficult to solve in its most generic form. A gray medium (frequency-independent phonons) is often assumed in the numerical solution of the BTE using conventional methods such as finite volume and discrete ordinates methods. This thesis solves the BTE using the Monte Carlo (MC) simulation technique, which is more convenient and efficient when a non-gray medium (frequency-dependent phonons) is considered. In the MC simulation, phonons are displaced inside the computational domain under the various boundary conditions and scattering effects.
In this work, under the relaxation time approximation, thermal transport in the nanocomposites is computed using both gray-media and non-gray-media approaches. The non-gray-media simulations take into consideration the dispersion and polarization effects of phonon transport. The effects of volume fraction, size, shape and distribution of the nanowire fillers on heat flow, and hence thermal conductivity, are studied. In addition, the computational performances of the gray- and non-gray-media approaches are compared.

Malladi, Mayank
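
The gray-media idea described above can be illustrated with a very small Monte Carlo experiment: a phonon's exponentially distributed free flight is cut short whenever it reaches a film boundary first, which shortens the effective mean free path and hence the thermal conductivity. The function name and setup below are illustrative assumptions, not the thesis code.

```python
import math
import random

def effective_mfp(bulk_mfp, film_thickness, n=20_000, seed=7):
    """Gray-media phonon MC: average distance to the next event (bulk
    scattering or boundary hit) for phonons started at random heights
    in a film bounded by two scattering surfaces."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.random() * film_thickness        # random starting height
        mu = 2.0 * rng.random() - 1.0            # isotropic direction cosine
        s_bulk = -math.log(1.0 - rng.random()) * bulk_mfp   # Exp free path
        if mu == 0.0:
            s_wall = math.inf                    # traveling parallel to walls
        elif mu > 0.0:
            s_wall = (film_thickness - z) / mu   # distance to top surface
        else:
            s_wall = -z / mu                     # distance to bottom surface
        total += min(s_bulk, s_wall)             # whichever event comes first
    return total / n
```

When the film is much thicker than the bulk mean free path the boundaries rarely matter and the bulk value is recovered; when the two are comparable, the effective mean free path drops noticeably, which is the interface-scattering effect exploited in thermoelectric nanocomposites.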

88

Topologically Robust Transport of Photons in a Synthetic Gauge Field

Electronic transport in low dimensions through a disordered medium leads to localization. The addition of gauge fields to disordered media leads to fundamental changes in the transport properties. For example, chiral edge states can emerge in two-dimensional systems with a perpendicular magnetic field. Here, we implement a "synthetic" gauge field for photons using silicon-on-insulator technology. By determining the distribution of transport properties, we confirm the localized transport in the bulk and the suppression of localization in edge states, using the "gold standard" for localization studies. Our system provides a new platform to investigate transport properties in the presence of synthetic gauge fields, which is important both from the fundamental perspective of studying photonic transport and for applications in classical and quantum information processing.

S. Mittal; J. Fan; S. Faez; A. Migdall; J. M. Taylor; M. Hafezi

2014-03-31

89

Coupled Deterministic-Monte Carlo Transport for Radiation Portal Modeling

Radiation portal monitors are being deployed, both domestically and internationally, to detect illicit movement of radiological materials concealed in cargo. Evaluation of the current and next generations of these radiation portal monitor (RPM) technologies is an ongoing process. 'Injection studies' that superimpose, computationally, the signature from threat materials onto empirical vehicle profiles collected at ports of entry, are often a component of the RPM evaluation process. However, measurement of realistic threat devices can be both expensive and time-consuming. Radiation transport methods that can predict the response of radiation detection sensors with high fidelity, and do so rapidly enough to allow the modeling of many different threat-source configurations, are a cornerstone of reliable evaluation results. Monte Carlo methods have been the primary tool of the detection community for these kinds of calculations, in no small part because they are particularly effective for calculating pulse-height spectra in gamma-ray spectrometers. However, computational times for problems with a high degree of scattering and absorption can be extremely long. Deterministic codes that discretize the transport in space, angle, and energy offer potential advantages in computational efficiency for these same kinds of problems, but the pulse-height calculations needed to predict gamma-ray spectrometer response are not readily accessible. These complementary strengths for radiation detection scenarios suggest that coupling Monte Carlo and deterministic methods could be beneficial in terms of computational efficiency. Pacific Northwest National Laboratory and its collaborators are developing a RAdiation Detection Scenario Analysis Toolbox (RADSAT) founded on this coupling approach. The deterministic core of RADSAT is Attila, a three-dimensional, tetrahedral-mesh code originally developed by Los Alamos National Laboratory, and since expanded and refined by Transpire, Inc. [1]. 
MCNP5 is used to calculate sensor pulse-height tallies. RADSAT methods, including adaptive, problem-specific energy-group creation, ray-effect mitigation strategies and the porting of deterministic angular flux to MCNP for individual particle creation are described in [2][3][4]. This paper discusses the application of RADSAT to the modeling of gamma-ray spectrometers in RPMs.

Smith, Leon E.; Miller, Erin A.; Wittman, Richard S.; Shaver, Mark W.

2008-01-14

90

Status of the MORSE multigroup Monte Carlo radiation transport code

There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the most well-known and has undergone extensive use for many years. MORSE-SGC was originally developed in about 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having much larger storage capacity, that aspect of SGC has become unnecessary. Both versions use data from multigroup cross-section libraries, although in somewhat different formats. MORSE-SGC is the version of MORSE that is part of the SCALE system, but it can also be run stand-alone. Both CGA and SGC use the Multiple Array System (MARS) geometry package. In the last six months the main focus of the work on these two versions has been on making them operational on workstations, in particular, the IBM RISC 6000 family. A new version of SCALE for workstations is being released to the Radiation Shielding Information Center (RSIC). MORSE-CGA, Version 2.0, is also being released to RSIC. Both SGC and CGA have undergone other revisions recently. This paper reports on the current status of the MORSE code system.

Emmett, M.B.

1993-06-01

91

A Residual Monte Carlo Method for Spatially Discrete, Angularly Continuous Radiation Transport

Residual Monte Carlo provides exponential convergence of statistical error with respect to the number of particle histories. In the past, residual Monte Carlo has been applied to a variety of angularly discrete radiation-transport problems. Here, we apply residual Monte Carlo to spatially discrete, angularly continuous transport. By maintaining angular continuity, our method avoids the deficiencies of angular discretizations, such as ray effects. For planar geometry and step differencing, we use the corresponding integral transport equation to calculate an angularly independent residual from the scalar flux in each stage of residual Monte Carlo. We then demonstrate that the resulting residual Monte Carlo method does indeed converge exponentially to within machine precision of the exact step differenced solution.

Wollaeger, Ryan T. [Los Alamos National Laboratory; Densmore, Jeffery D. [Los Alamos National Laboratory

2012-06-19

92

Study on Photon Transport Problem Based on the Platform of Molecular Optical Simulation Environment

As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since physical experiments are usually complicated and expensive, research methods based on simulation platforms have received extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study photon transport problems in both biological tissues and free space using MOSE. The results are compared with TracePro, the simplified spherical harmonics method (SPn), and physical measurement to verify both the accuracy and the efficiency of our method. PMID:20445737

Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie

2010-01-01

93

Monte Carlo Studies of Electron Transport In Semiconductor Nanostructures

NASA Astrophysics Data System (ADS)

An Ensemble Monte Carlo (EMC) computer code has been developed to simulate, semi-classically, spin-dependent electron transport in quasi two-dimensional (2D) III-V semiconductors. The code accounts for both three-dimensional (3D) and quasi-2D transport, utilizing either 3D or 2D scattering mechanisms, as appropriate. Phonon, alloy, interface roughness, and impurity scattering mechanisms are included, and the Pauli Exclusion Principle is accounted for via a rejection algorithm. The 2D carrier states are calculated via a self-consistent 1D Schrodinger-3D-Poisson solution in which the charge distribution of the 2D carriers in the quantization direction is taken as the spatial distribution of the squared envelope functions within the Hartree approximation. The wavefunctions, subband energies, and 2D scattering rates are updated periodically by solving a series of 1D Schrodinger wave equations (SWE) over the real-space domain of the device at fixed time intervals. The electrostatic potential is updated by periodically solving the 3D Poisson equation. Spin-polarized transport is modeled via a spin density-matrix formalism that accounts for D'yakonov-Perel' (DP) scattering. Also, the code allows for the easy inclusion of additional scattering mechanisms and structural modifications to devices. As an application of the simulator, the current-voltage characteristics of an InGaAs/InAlAs HEMT are simulated, corresponding to nanoscale III-V HEMTs currently being fabricated by Intel Corporation. The comparative effects of various scattering parameters, material properties and structural attributes are investigated and compared with experiments, where reasonable agreement is obtained. The spatial evolution of spin-polarized carriers in prototypical Spin Field Effect Transistor (SpinFET) devices is then simulated. Spin coherence times in quasi-2D structures are first investigated and compared to experimental results.
It is found that the simulated spin coherence times for GaAs structures are in reasonable agreement with experiment. The SpinFET structure studied is a scaled-down version of the InGaAs/InAlAs HEMT discussed in this work, in which spin-polarized carriers are injected at the source, and the coherence length is studied as a function of gate voltage via the Rashba effect.

Tierney, Brian David

94

An automated variance reduction method for global Monte Carlo neutral particle transport problems

NASA Astrophysics Data System (ADS)

A method to automatically reduce the variance in global neutral particle Monte Carlo problems by using a weight window derived from a deterministic forward solution is presented. This method reduces a global measure of the variance of desired tallies and increases its associated figure of merit. Global deep penetration neutron transport problems present difficulties for analog Monte Carlo. When the scalar flux decreases by many orders of magnitude, so does the number of Monte Carlo particles. This can result in large statistical errors. In conjunction with survival biasing, a weight window is employed which uses splitting and Russian roulette to restrict the symbolic weights of Monte Carlo particles. By establishing a connection between the scalar flux and the weight window, two important concepts are demonstrated. First, such a weight window can be constructed from a deterministic solution of a forward transport problem. Also, the weight window will distribute Monte Carlo particles in such a way to minimize a measure of the global variance. For Implicit Monte Carlo solutions of radiative transfer problems, an inefficient distribution of Monte Carlo particles can result in large statistical errors in front of the Marshak wave and at its leading edge. Again, the global Monte Carlo method is used, which employs a time-dependent weight window derived from a forward deterministic solution. Here, the algorithm is modified to enhance the number of Monte Carlo particles in the wavefront. Simulations show that use of this time-dependent weight window significantly improves the Monte Carlo calculation.

Cooper, Marc Andrew
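
The weight-window mechanics described above, splitting particles whose weight rises above the window and playing Russian roulette on those that fall below it, preserve the expected weight by construction. A minimal sketch follows; the survival weight at the window center and the function name are illustrative choices, not the thesis's specific scheme.

```python
import random

def apply_weight_window(w, w_low, w_high, rng):
    """Apply a weight window to one particle of weight w.
    Returns the list of resulting particle weights: empty if rouletted,
    several if split, or the original if already inside the window.
    The expectation of the total returned weight always equals w."""
    if w < w_low:                            # below the window: Russian roulette
        w_survive = 0.5 * (w_low + w_high)   # survivors placed at window center
        return [w_survive] if rng.random() < w / w_survive else []
    if w > w_high:                           # above the window: split
        n = int(w / w_high) + 1
        return [w / n] * n                   # n lighter copies, weight conserved
    return [w]                               # inside the window: unchanged

rng = random.Random(0)
survivors = apply_weight_window(2.5, 0.1, 1.0, rng)   # splits into 3 particles
```

Splitting conserves weight exactly, and roulette conserves it only in expectation; this unbiasedness is what lets the window redistribute Monte Carlo particles (e.g. into the Marshak wavefront) without biasing the tallies.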

95

NASA Astrophysics Data System (ADS)

Geant4 Monte Carlo code simulations were used to overcome experimental and theoretical complications in the calculation of mass energy-absorption coefficients of elements, air, and compounds. The mass energy-absorption coefficients for nuclear track detectors were computed for the first time using the Geant4 Monte Carlo code for energies of 1 keV-20 MeV. Very good agreement was observed between the simulated mass energy-absorption coefficients for carbon, nitrogen, silicon, sodium iodide, and nuclear track detectors and the values reported in the literature. Kerma relative to air for energies of 1 keV-20 MeV and energy-absorption buildup factors for energies of 50 keV-10 MeV, up to penetration depths of 10 mfp, were also calculated for the selected nuclear track detectors to evaluate the absorption of gamma photons. Geant4 simulation can be utilized for the estimation of mass energy-absorption coefficients in elements and composite materials.

Singh, Vishwanath P.; Medhat, M. E.; Badiger, N. M.

2015-01-01

96

Memory Bottlenecks and Memory Contention in Multi-Core Monte Carlo Transport Codes

NASA Astrophysics Data System (ADS)

We have extracted a kernel that executes only the most computationally expensive steps of the Monte Carlo particle transport algorithm - the calculation of macroscopic cross sections - in an effort to expose bottlenecks within multi-core, shared memory architectures.
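As a rough illustration of the kernel the authors isolate, a macroscopic cross section lookup amounts to an energy-grid search plus interpolation per nuclide, summed over the material. The following Python sketch (function names and data layout are assumptions, not the actual kernel) shows why the operation stresses memory: it is dominated by scattered reads of per-nuclide energy grids:

```python
import bisect

def micro_xs(egrid, xs, energy):
    """Linearly interpolate one nuclide's microscopic cross section."""
    i = bisect.bisect_right(egrid, energy) - 1   # grid search
    i = max(0, min(i, len(egrid) - 2))           # clamp to table range
    f = (energy - egrid[i]) / (egrid[i + 1] - egrid[i])
    return xs[i] + f * (xs[i + 1] - xs[i])

def macro_xs(nuclides, energy):
    """Macroscopic cross section: sum of N_i * sigma_i(E) over nuclides.

    `nuclides` is a list of (atom_density, egrid, xs) tuples; each term
    requires a search and two reads from that nuclide's own table, which
    is the memory-access pattern that creates contention on shared caches.
    """
    return sum(n * micro_xs(egrid, xs, energy)
               for n, egrid, xs in nuclides)
```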

Tramm, John R.; Siegel, Andrew R.

2014-06-01

97

Context. The interpretation of polarised radiation emerging from a planetary atmosphere must rely on solutions to the vector Radiative Transport Equation (vRTE). Monte Carlo integration of the vRTE is a valuable approach for its flexible treatment of complex viewing and/or illumination geometries and because it can intuitively incorporate elaborate physics. Aims. We present a novel Pre-Conditioned Backward Monte Carlo (PBMC) algorithm for solving the vRTE and apply it to planetary atmospheres irradiated from above. As classical BMC methods, our PBMC algorithm builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. Methods. We show that the neglect of polarisation in the sampling of photon propagation directions in classical BMC algorithms leads to unstable and biased solutions for conservative, optically-thick, strongly-polarising media such as Rayleigh atmospheres. The numerical difficulty is avoid...

García Muñoz, A.; Mills, F. P.

2014-01-01

98

Variance and efficiency in Monte Carlo transport calculations

NASA Astrophysics Data System (ADS)

Recent developments in Monte Carlo variance and efficiency analysis are summarized. Sufficient conditions are given under which the variance of a Monte Carlo game is less than that of another. The efficiencies of the ELP method and a game with survival biasing and Russian roulette are treated.
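Efficiency comparisons of this kind are usually phrased in terms of the figure of merit, FOM = 1/(R²T), where R is the relative error of the tally and T the computing time. A minimal illustrative sketch (not code from the paper):

```python
import math

def figure_of_merit(scores, cpu_time):
    """FOM = 1 / (R^2 * T) for a list of per-history tally scores.

    R is the relative error of the mean; a variance-reduction game such
    as survival biasing with Russian roulette is only more efficient
    than another if its FOM is higher, i.e. the variance reduction
    outweighs any extra cost per history.
    """
    n = len(scores)
    mean = sum(scores) / n
    var_mean = (sum(s * s for s in scores) / n - mean * mean) / n
    rel_err = math.sqrt(var_mean) / mean
    return 1.0 / (rel_err ** 2 * cpu_time)
```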

Lux, Iván

1980-09-01

99

THE THEORETICAL DEVELOPMENT OF A NEW HIGH SPEED SOLUTION FOR MONTE CARLO RADIATION TRANSPORT COMPUTATIONS. A Thesis by ALEXANDER SAMUEL PASCIAK, Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE.

Pasciak, Alexander Samuel

2007-04-25

100

NASA Technical Reports Server (NTRS)

A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.

Jordan, T. M.

1970-01-01

101

Photon transport enhanced by transverse Anderson localization in disordered superlattices

NASA Astrophysics Data System (ADS)

Controlling the flow of light at subwavelength scales provides access to functionalities such as negative or zero index of refraction, transformation optics, cloaking, metamaterials and slow light, but diffraction effects severely restrict our ability to control light on such scales. Here we report the photon transport and collimation enhanced by transverse Anderson localization in chip-scale dispersion-engineered anisotropic media. We demonstrate a photonic crystal superlattice structure in which diffraction is nearly completely arrested by cascaded resonant tunnelling through transverse guided resonances. By modifying the geometry of more than 4,000 scatterers in the superlattices we add structural disorder controllably and uncover the mechanism of disorder-induced transverse localization. Arrested spatial divergence is captured in the power-law scaling, along with exponential asymmetric mode profiles and enhanced collimation bandwidths for increasing disorder. With increasing disorder, we observe the crossover from cascaded guided resonances into the transverse localization regime, beyond both the ballistic and diffusive transport of photons.

Hsieh, P.; Chung, C.; McMillan, J. F.; Tsai, M.; Lu, M.; Panoiu, N. C.; Wong, C. W.

2015-03-01

102

FZ2MC: A Tool for Monte Carlo Transport Code Geometry Manipulation

The process of creating and validating combinatorial geometry representations of complex systems for use in Monte Carlo transport simulations can be both time consuming and error prone. To simplify this process, a tool has been developed which employs extensions of the Form-Z commercial solid modeling tool. The resultant FZ2MC (Form-Z to Monte Carlo) tool permits users to create, modify and validate Monte Carlo geometry and material composition input data. Plugin modules that export this data to an input file, as well as parse data from existing input files, have been developed for several Monte Carlo codes. The FZ2MC tool is envisioned as a 'universal' tool for the manipulation of Monte Carlo geometry and material data. To this end, collaboration on the development of plug-in modules for additional Monte Carlo codes is desired.

Hackel, B M; Nielsen Jr., D E; Procassini, R J

2009-02-25

103

Monte Carlo-based energy response studies of diode dosimeters in radiotherapy photon beams.

This study presents Monte Carlo-calculated absolute and normalized (relative to a (60)Co beam) sensitivity values for a variety of commercially available silicon diode dosimeters in radiotherapy photon beams in the energy range (60)Co-24 MV. These values were obtained at 5 cm depth along the central axis of a water-equivalent phantom for a 10 cm × 10 cm field. The Monte Carlo calculations were based on the EGSnrc code system. The diode dosimeters considered in the calculations have different buildup materials such as aluminum, brass, copper, and stainless steel + epoxy. The calculated normalized sensitivity values of the diode dosimeters were then compared to previously published measured values for photon beams at (60)Co-20 MV. The comparison showed reasonable agreement for some diode dosimeters and deviations of 5-17% (17% for the 3.4 mm brass buildup case in a 10 MV beam) for others. The larger deviations from the measurements suggest that the models of those diode dosimeters were too simple. The effect of wall materials on the absorbed dose to the diode was studied and the results are presented. Spencer-Attix and Bragg-Gray stopping power ratios (SPRs) of water-to-diode were calculated at 5 cm depth in water. The Bragg-Gray SPRs of water-to-diode compare well with the Spencer-Attix SPRs for Δ = 100 keV and above at all beam qualities. PMID:23180010

Arun, C; Palani Selvam, T; Dinkar, Verma; Munshi, Prabhat; Kalra, Manjit Singh

2013-01-01

104

Monte Carlo photon beam modeling and commissioning for radiotherapy dose calculation algorithm.

The aim of the present work was a Monte Carlo verification of the Multi-grid superposition (MGS) dose calculation algorithm implemented in the CMS XiO (Elekta) treatment planning system and used to calculate the dose distribution produced by photon beams generated by the linear accelerator (linac) Siemens Primus. The BEAMnrc/DOSXYZnrc (EGSnrc package) Monte Carlo model of the linac head was used as a benchmark. In the first part of the work, the BEAMnrc was used for the commissioning of a 6 MV photon beam and to optimize the linac description to fit the experimental data. In the second part, the MGS dose distributions were compared with DOSXYZnrc using relative dose error comparison and γ-index analysis (2%/2 mm, 3%/3 mm), in different dosimetric test cases. Results show good agreement between simulated and calculated dose in homogeneous media for square and rectangular symmetric fields. The γ-index analysis confirmed that for most cases the MGS model and EGSnrc doses are within 3% or 3 mm. PMID:24947967

Toutaoui, A; Ait chikh, S; Khelassi-Toutaoui, N; Hattali, B

2014-11-01

105

We report a parallel Monte Carlo algorithm accelerated by graphics processing units (GPU) for modeling time-resolved photon migration in arbitrary 3D turbid media. By taking advantage of the massively parallel threads and low-memory latency, this algorithm allows many photons to be simulated simultaneously in a GPU. To further improve the computational efficiency, we explored two parallel random number generators (RNG), including a floating-point-only RNG based on a chaotic lattice. An efficient scheme for boundary reflection was implemented, along with the functions for time-resolved imaging. For a homogeneous semi-infinite medium, good agreement was observed between the simulation output and the analytical solution from the diffusion theory. The code was implemented with CUDA programming language, and benchmarked under various parameters, such as thread number, selection of RNG and memory access pattern. With a low-cost graphics card, this algorithm has demonstrated an acceleration ratio above 300 when using 1792 parallel threads over conventional CPU computation. The acceleration ratio drops to 75 when using atomic operations. These results render the GPU-based Monte Carlo simulation a practical solution for data analysis in a wide range of diffuse optical imaging applications, such as human brain or small-animal imaging. PMID:19997242

Boas, David A.

2010-01-01

106

We examine the relative error of Monte Carlo simulations of radiative transport that employ two commonly used estimators that account for absorption differently, either discretely, at interaction points, or continuously, between interaction points. We provide a rigorous derivation of these discrete and continuous absorption weighting estimators within a stochastic model that we show to be equivalent to an analytic model, based on the radiative transport equation (RTE). We establish that both absorption weighting estimators are unbiased and, therefore, converge to the solution of the RTE. An analysis of spatially resolved reflectance predictions provided by these two estimators reveals no advantage to either in cases of highly scattering and highly anisotropic media. However, for moderate to highly absorbing media or isotropically scattering media, the discrete estimator provides smaller errors at proximal source locations while the continuous estimator provides smaller errors at distal locations. The origin of these differing variance characteristics can be understood through examination of the distribution of exiting photon weights. PMID:24562029
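The two estimators can be contrasted in a toy 1-D random walk. The sketch below is illustrative only (a simplified 1-D semi-infinite geometry, not the authors' RTE model): the discrete variant multiplies the photon weight by the single-scattering albedo at each collision, while the continuous variant attenuates the weight along each path segment. Both are unbiased, so their reflectance estimates agree within statistics even though their exiting-weight distributions, and hence variances, differ:

```python
import math
import random

def reflectance(n_photons, mu_a, mu_s, discrete, rng):
    """Mean exiting weight at the surface of a 1-D semi-infinite medium."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    total = 0.0
    for _ in range(n_photons):
        w, x, direction = 1.0, 0.0, +1.0
        while w > 1e-4:                             # weight cutoff
            s = -math.log(rng.random()) / mu_t      # sample free path
            if direction < 0 and s > x:             # escapes before colliding
                if not discrete:
                    w *= math.exp(-mu_a * x)        # attenuate to the surface
                total += w
                break
            x += direction * s
            if discrete:
                w *= albedo                         # absorb at the collision
            else:
                w *= math.exp(-mu_a * s)            # absorb along the path
            direction = rng.choice((-1.0, 1.0))     # isotropic 1-D scatter
    return total / n_photons
```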

Hayakawa, Carole K.; Spanier, Jerome; Venugopalan, Vasan

2014-01-01

107

Input-Output Formalism for Few-Photon Transport: A Systematic Treatment Beyond Two Photons

We provide a systematic treatment of $N$-photon transport in a waveguide coupled to a local system, using the input-output formalism. The main result of the paper is a general connection between the $N$-photon S matrix and the Green functions of the local system. We also show that the computation can be significantly simplified, by exploiting the connectedness structure of both the S matrix and the Green function, and by computing the Green function using an effective Hamiltonian that involves only the degrees of freedom of the local system. We illustrate our formalism by computing $N$-photon transport through a cavity containing a medium with Kerr nonlinearity, with $N$ up to 3.

Shanshan Xu; Shanhui Fan

2015-02-21

108

SHIELD-HIT12A - a Monte Carlo particle transport program for ion therapy research

NASA Astrophysics Data System (ADS)

Purpose: The Monte Carlo (MC) code SHIELD-HIT simulates the transport of ions through matter. Since SHIELD-HIT08 we have added numerous features that improve speed, usability and the underlying physics, and thereby the user experience. The "-A" fork of SHIELD-HIT also aims to attach SHIELD-HIT to a heavy ion dose optimization algorithm to provide MC-optimized treatment plans that include radiobiology. Methods: SHIELD-HIT12A is written in FORTRAN and carefully retains platform independence. A powerful scoring engine is implemented, scoring relevant quantities such as dose and track-averaged LET. It supports native formats compatible with the heavy ion treatment planning system TRiP. Stopping power files follow the ICRU standard and are generated using the libdEdx library, which allows the user to choose from a multitude of stopping power tables. Results: SHIELD-HIT12A runs on Linux and Windows platforms. In our experience, new users quickly learn to use SHIELD-HIT12A and set up new geometries. Contrary to previous versions of SHIELD-HIT, the 12A distribution comes with easy-to-use example files and an English manual. A new implementation of Vavilov straggling resulted in a massive reduction of computation time. Scheduled for later release are CT import and photon-electron transport. Conclusions: SHIELD-HIT12A is an interesting alternative ion transport engine. Apart from being a flexible particle therapy research tool, it can also serve as a back end for a MC ion treatment planning system. More information about SHIELD-HIT12A and a demo version can be found on http://www.shieldhit.org.

Bassler, N.; Hansen, D. C.; Lühr, A.; Thomsen, B.; Petersen, J. B.; Sobolevsky, N.

2014-03-01

109

NASA Astrophysics Data System (ADS)

A versatile computer program, MORSE, based on neutron and photon transport theory, has been utilized to investigate radiation therapy treatment planning quantities and techniques. A multi-energy group representation of the transport equation provides a concise approach to applying Monte Carlo numerical techniques to multiple radiation therapy treatment planning problems. A general three-dimensional geometry is used to simulate radiation therapy treatment planning problems in configurations of an actual clinical setting. Central axis total and scattered dose distributions for homogeneous and inhomogeneous water phantoms are calculated, and the correction factors for lung and bone inhomogeneities are also evaluated. Results show that Monte Carlo calculations based on multi-energy group transport theory predict depth dose distributions that are in good agreement with available experimental data. Improved correction factors based on the concepts of lung-air-ratio and bone-air-ratio are proposed in lieu of the presently used correction factors that are based on the tissue-air-ratio power law method for inhomogeneity corrections. Central axis depth dose distributions for a bremsstrahlung spectrum from a linear accelerator are also calculated to exhibit the versatility of the computer program in handling multiple radiation therapy problems. A novel approach is undertaken to study the dosimetric properties of brachytherapy sources. Dose rate constants for various radionuclides are calculated from the numerically generated dose rate versus source energy curves. Dose rates can also be generated from this family of curves for any point brachytherapy source with any arbitrary energy spectrum at various radial distances.

Palta, Jatinder Raj

110

Detailed calculation of inner-shell impact ionization to use in photon transport codes

NASA Astrophysics Data System (ADS)

Secondary electrons can modify the intensity of the XRF characteristic lines by means of a mechanism known as inner-shell impact ionization (ISII). The ad-hoc code KERNEL (which calls the PENELOPE package) has been used to characterize the electron correction in terms of angular, spatial and energy distributions. It is demonstrated that the angular distribution of the characteristic photons due to ISII can be safely considered as isotropic, and that the source of photons from electron interactions is well represented as a point source. The energy dependence of the correction is described using an analytical model in the energy range 1-150 keV, for all the emission lines (K, L and M) of the elements with atomic numbers Z=11-92. A new photon kernel comprising the correction due to ISII is introduced, suitable for adoption in photon transport codes (deterministic or Monte Carlo) with minimal effort. The impact of the correction is discussed for the most intense K (Kα1, Kα2, Kβ1) and L (Lα1, Lβ2) lines.

Fernandez, Jorge E.; Scot, Viviana; Verardi, Luca; Salvat, Francesc

2014-02-01

111

Purpose: The goal of this work is to compare D_m,m (radiation transported in medium; dose scored in medium) and D_w,m (radiation transported in medium; dose scored in water) obtained from Monte Carlo (MC) simulations for a subset of human tissues of interest in low energy photon brachytherapy. Using low dose rate seeds and an electronic brachytherapy source (EBS), the authors quantify the large cavity theory conversion factors required. The authors also assess whether applying large cavity theory utilizing the sources' initial photon spectra and average photon energy induces errors related to spatial spectral variations. First, ideal spherical geometries were investigated, followed by clinical brachytherapy LDR seed implants for breast and prostate cancer patients. Methods: Two types of dose calculations are performed with the GEANT4 MC code. (1) For several human tissues, dose profiles are obtained in spherical geometries centered on four types of low energy brachytherapy sources: ^125I, ^103Pd, and ^131Cs seeds, as well as an EBS operating at 50 kV. Ratios of D_w,m over D_m,m are evaluated in the 0-6 cm range. In addition to mean tissue composition, compositions corresponding to one standard deviation from the mean are also studied. (2) Four clinical breast (using ^103Pd) and prostate (using ^125I) brachytherapy seed implants are considered. MC dose calculations are performed based on postimplant CT scans using prostate and breast tissue compositions. PTV D_90 values are compared for D_w,m and D_m,m. Results: (1) Differences (D_w,m/D_m,m - 1) of -3% to 70% are observed for the investigated tissues. For a given tissue, D_w,m/D_m,m is similar for all sources within 4% and does not vary more than 2% with distance due to very moderate spectral shifts. Variations of tissue composition about the assumed mean composition influence the conversion factors by up to 38%.
(2) The ratio of D_90(w,m) over D_90(m,m) for clinical implants matches D_w,m/D_m,m at 1 cm from the single point sources. Conclusions: Given the small variation with distance, using conversion factors based on the emitted photon spectrum (or its mean energy) of a given source introduces minimal error. The large differences observed between scoring schemes underline the need for guidelines on the choice of media for dose reporting. Providing such guidelines is beyond the scope of this work.
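Under large cavity theory, the conversion between the two scoring schemes reduces to a photon-spectrum-weighted ratio of mass energy-absorption coefficients of water to medium. A minimal sketch of that averaging, with placeholder coefficient values rather than tabulated data:

```python
def dose_conversion_factor(spectrum, muen_w, muen_m):
    """Large-cavity estimate of D_w,m / D_m,m.

    `spectrum` maps photon energy (MeV) -> fluence; `muen_w` and `muen_m`
    map the same energies to (mu_en/rho) of water and of the medium.
    The weighting is by energy fluence (fluence * energy). All names and
    numbers here are illustrative placeholders, not NIST tabulations.
    """
    num = sum(phi * e * muen_w[e] for e, phi in spectrum.items())
    den = sum(phi * e * muen_m[e] for e, phi in spectrum.items())
    return num / den
```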

Landry, Guillaume; Reniers, Brigitte; Pignol, Jean-Philippe; Beaulieu, Luc; Verhaegen, Frank [Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Department of Radiation Oncology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario M4N 3M5 (Canada); Departement de Radio-Oncologie et Centre de Recherche en Cancerologie, Universite Laval, CHUQ Pavillon L'Hotel-Dieu de Quebec, Quebec G1R 2J6 (Canada) and Departement de Physique, de Genie Physique et d'Optique, Universite Laval, Quebec G1K 7P4 (Canada); Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands) and Department of Oncology, McGill University, Montreal General Hospital, Montreal, Quebec H3G 1A4 (Canada)

2011-03-15

112

The application of a strong transverse magnetic field to a volume undergoing irradiation by a photon beam can produce localized regions of dose enhancement and dose reduction. This study uses the PENELOPE Monte Carlo code to investigate the effect of a slice of uniform transverse magnetic field on a photon beam using different magnetic field strengths and photon beam energies. The maximum and minimum dose yields obtained in the regions of dose enhancement and dose reduction are compared to those obtained with the EGS4 Monte Carlo code in a study by Li et al (2001), who investigated the effect of a slice of uniform transverse magnetic field (1 to 20 Tesla) applied to high-energy photon beams. PENELOPE simulations yielded maximum dose enhancements and dose reductions as much as 111% and 77%, respectively, where most results were within 6% of the EGS4 result. Further PENELOPE simulations were performed with the Sheikh-Bagheri and Rogers (2002) input spectra for 6, 10 and 15 MV photon beams, yielding results within 4% of those obtained with the Mohan et al (1985) spectra. Small discrepancies between a few of the EGS4 and PENELOPE results prompted an investigation into the influence of the PENELOPE elastic scattering parameters C(1) and C(2) and low-energy electron and photon transport cut-offs. Repeating the simulations with smaller scoring bins improved the resolution of the regions of dose enhancement and dose reduction, especially near the magnetic field boundaries where the dose deposition can abruptly increase or decrease. This study also investigates the effect of a magnetic field on the low-energy electron spectrum that may correspond to a change in the radiobiological effectiveness (RBE). Simulations show that the increase in dose is achieved predominantly through the lower energy electron population. PMID:18723929

Nettelbeck, H; Takacs, G J; Rosenfeld, A B

2008-09-21

113

NASA Astrophysics Data System (ADS)

The application of a strong transverse magnetic field to a volume undergoing irradiation by a photon beam can produce localized regions of dose enhancement and dose reduction. This study uses the PENELOPE Monte Carlo code to investigate the effect of a slice of uniform transverse magnetic field on a photon beam using different magnetic field strengths and photon beam energies. The maximum and minimum dose yields obtained in the regions of dose enhancement and dose reduction are compared to those obtained with the EGS4 Monte Carlo code in a study by Li et al (2001), who investigated the effect of a slice of uniform transverse magnetic field (1 to 20 Tesla) applied to high-energy photon beams. PENELOPE simulations yielded maximum dose enhancements and dose reductions as much as 111% and 77%, respectively, where most results were within 6% of the EGS4 result. Further PENELOPE simulations were performed with the Sheikh-Bagheri and Rogers (2002) input spectra for 6, 10 and 15 MV photon beams, yielding results within 4% of those obtained with the Mohan et al (1985) spectra. Small discrepancies between a few of the EGS4 and PENELOPE results prompted an investigation into the influence of the PENELOPE elastic scattering parameters C1 and C2 and low-energy electron and photon transport cut-offs. Repeating the simulations with smaller scoring bins improved the resolution of the regions of dose enhancement and dose reduction, especially near the magnetic field boundaries where the dose deposition can abruptly increase or decrease. This study also investigates the effect of a magnetic field on the low-energy electron spectrum that may correspond to a change in the radiobiological effectiveness (RBE). Simulations show that the increase in dose is achieved predominantly through the lower energy electron population.

Nettelbeck, H.; Takacs, G. J.; Rosenfeld, A. B.

2008-09-01

114

Dynamic Monte-Carlo modeling of hydrogen isotope reactivediffusive transport in porous graphite

Dynamic Monte-Carlo modeling of hydrogen isotope reactive-diffusive transport in porous graphite. It is important to study the recycling and mixing of hydrogen isotopes in the graphite walls of a fusion reactor. Dynamic Monte-Carlo simulations are used to study the reactive-diffusive transport of hydrogen isotopes and interstitial carbon atoms in porous graphite.

Nordlund, Kai

115

A 3D Monte Carlo code for plasma transport in island divertors

NASA Astrophysics Data System (ADS)

A fully 3D self-consistent Monte Carlo code EMC3 (edge Monte Carlo 3D) for modelling the plasma transport in island divertors has been developed. In a first step, the code solves a simplified version of the 3D time-independent plasma fluid equations. Coupled to the neutral transport code EIRENE, the EMC3 code has been used to study the particle, energy and neutral transport in W7-AS island divertor configurations. First results are compared with data from different diagnostics (Langmuir probes, Hα cameras and thermography).

Feng, Y.; Sardei, F.; Kisslinger, J.; Grigull, P.

1997-02-01

116

Monte Carlo simulation of MOSFET detectors for high-energy photon beams using the PENELOPE code.

The aim of this work was the Monte Carlo (MC) simulation of the response of commercially available dosimeters based on metal oxide semiconductor field effect transistors (MOSFETs) for radiotherapeutic photon beams using the PENELOPE code. The studied Thomson & Nielsen TN-502-RD MOSFETs have a very small sensitive area of 0.04 mm² and a thickness of 0.5 µm, which is placed on a flat kapton base and covered by a rounded layer of black epoxy resin. The influence of different metallic and Plastic water build-up caps, together with the orientation of the detector, has been investigated for the specific application of MOSFET detectors to entrance in vivo dosimetry. Additionally, the energy dependence of MOSFET detectors for different high-energy photon beams (with energy >1.25 MeV) has been calculated. Calculations were carried out for simulated 6 MV and 18 MV x-ray beams generated by a Varian Clinac 1800 linear accelerator, a Co-60 photon beam from a Theratron 780 unit, and monoenergetic photon beams ranging from 2 MeV to 10 MeV. The results of the validation of the simulated photon beams show that the average difference between MC results and reference data is negligible, within 0.3%. MC simulated results of the effect of the build-up caps on the MOSFET response are in good agreement with experimental measurements, within the uncertainties. In particular, for the 18 MV photon beam the response of the detectors under a tungsten cap is 48% higher than for a 2 cm Plastic water cap and approximately 26% higher when a brass cap is used. This effect is demonstrated to be caused by positron production in the build-up caps of higher atomic number. This work also shows that the MOSFET detectors produce a higher signal when their rounded side is facing the beam (up to 6%) and that there is a significant variation (up to 50%) in the response of the MOSFET for photon energies in the studied energy range.
All the results have shown that the PENELOPE code system can successfully reproduce the response of a detector with such a small active area. PMID:17183143

Panettieri, Vanessa; Duch, Maria Amor; Jornet, Núria; Ginjaume, Mercè; Carrasco, Pablo; Badal, Andreu; Ortega, Xavier; Ribas, Montserrat

2007-01-01

117

Photonic transport control by spin-optical disordered metasurface

Photonic metasurfaces are ultrathin electromagnetic wave-molding metamaterials providing the missing link for the integration of nanophotonic chips with nanoelectronic circuits. An extra twist in this field originates from spin-optical metasurfaces providing the photon spin (polarization helicity) as an additional degree of freedom in light-matter interactions at the nanoscale. Here we report on a generic concept to control the photonic transport by disordered (random) metasurfaces with a custom-tailored geometric phase. This approach combines the peculiarity of random patterns to support extraordinary information capacity within the intrinsic limit of speckle noise, and the optical spin control in the geometric phase mechanism, simply implemented in two-dimensional structured matter. By manipulating the local orientations of anisotropic optical nanoantennas, we observe spin-dependent near-field and free-space open channels, generating state-of-the-art multiplexing and interconnects. Spin-optical disordered m...

Veksler, Dekel; Ozeri, Dror; Shitrit, Nir; Kleiner, Vladimir; Hasman, Erez

2014-01-01

118

LDRD project 151362 : low energy electron-photon transport.

At sufficiently high energies, the wavelengths of electrons and photons are short enough to interact with only one atom at a time, leading to the popular "independent-atom approximation". We attempted to incorporate atomic structure in the generation of cross sections (which embody the modeled physics) to improve transport at lower energies. We document our successes and failures. This was a three-year LDRD project. The core team consisted of a radiation-transport expert, a solid-state physicist, and two DFT experts.

Kensek, Ronald Patrick; Hjalmarson, Harold Paul; Magyar, Rudolph J.; Bondi, Robert James; Crawford, Martin James

2013-09-01

119

Multidimensional electron-photon transport with standard discrete ordinates codes

A method is described for generating electron cross sections that are compatible with standard discrete ordinates codes without modification. There are many advantages of using an established discrete ordinates solver, e.g. immediately available adjoint capability. Coupled electron-photon transport capability is needed for many applications, including the modeling of the response of electronics components to space and man-made radiation environments. The cross sections have been successfully used in the DORT, TWODANT and TORT discrete ordinates codes. The cross sections are shown to provide accurate and efficient solutions to certain multidimensional electron-photon transport problems.

Drumm, C.R.

1995-12-31

120

We describe a tissue optics plug-in that interfaces with the GEANT4/GAMOS Monte Carlo (MC) architecture, providing a means of simulating radiation-induced light transport in biological media for the first time. Specifically, we focus on the simulation of light transport due to the Čerenkov effect (light emission from charged particles traveling faster than the local speed of light in a given medium), a phenomenon which requires accurate modeling of both the high energy particle and subsequent optical photon transport, a dynamic coupled process that is not well-described by any current MC framework. The results of validation simulations show excellent agreement with currently employed biomedical optics MC codes [i.e., Monte Carlo for Multi-Layered media (MCML), Mesh-based Monte Carlo (MMC), and diffusion theory], and examples relevant to recent studies into detection of Čerenkov light from an external radiation beam or radionuclide are presented. While the work presented within this paper focuses on radiation-induced light transport, the core features and robust flexibility of the plug-in modified package make it also extensible to more conventional biomedical optics simulations. The plug-in, user guide, example files, as well as the necessary files to reproduce the validation simulations described within this paper are available online at http://www.dartmouth.edu/optmed/research-projects/monte-carlo-software. PMID:23667790
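Modeling this effect requires knowing where charged particles exceed the local phase velocity of light. A small illustrative helper (not part of the GAMOS plug-in) computes the kinetic-energy threshold for electrons from the refractive index, using the condition beta > 1/n:

```python
import math

M_E_C2 = 0.511  # electron rest energy, MeV

def cerenkov_threshold(n):
    """Kinetic energy (MeV) above which an electron emits Cerenkov light
    in a medium of refractive index n, i.e. where its speed exceeds c/n.
    """
    gamma = 1.0 / math.sqrt(1.0 - 1.0 / n ** 2)   # Lorentz factor at beta = 1/n
    return M_E_C2 * (gamma - 1.0)
```

For water (n ≈ 1.33) this gives roughly 0.26 MeV, which is why only the more energetic secondary electrons from a therapy beam or radionuclide decay contribute to the detected light.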

Glaser, Adam K.; Kanick, Stephen C.; Zhang, Rongxiao; Arce, Pedro; Pogue, Brian W.

2013-01-01

121

NASA Technical Reports Server (NTRS)

The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate minimum weight shield configuration meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.

Jordan, T. M.

1970-01-01

122

PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

Iandola, F N; O'Brien, M J; Procassini, R J

2010-11-29

123

Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV

Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend its ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.

Miller, S.G.

1988-08-01

124

Photon propagation correction in 3D photoacoustic image reconstruction using Monte Carlo simulation

NASA Astrophysics Data System (ADS)

Purpose: The purpose of this study is to develop a new 3-D iterative Monte Carlo algorithm to recover the heterogeneous distribution of molecular absorbers within a solid tumor. Introduction: Spectroscopic imaging (PCT-S) has the potential to identify a molecular species and quantify its concentration with high spatial fidelity. To accomplish this task, correction for tissue attenuation losses during photon propagation in heterogeneous 3D objects is necessary. An iterative recovery algorithm has been developed to extract 3D heterogeneous parametric maps of absorption coefficients, implementing a MC algorithm based on a single-source photoacoustic scanner, and to determine the influence of the reduced scattering coefficient on the uncertainty of the recovered absorption coefficient. Material and Methods: This algorithm is tested for spheres and ellipsoids embedded in a simulated mouse torso with optical absorption values ranging from 0.01-0.5/cm, for the same objects where the optical scattering is unknown (μs'=7-13/cm), and for a heterogeneous distribution of absorbers. Results: Systematic and statistical errors in μa with a priori knowledge of μs' and g are <2% (sphere) and <4% (ellipsoid) for all μa, and without a priori knowledge of μs' are <3% and <6%. For heterogeneous distributions of μa, errors are <4% and <5.5% for each object with a priori knowledge of μs' and g, and rise to 7% and 14% when μs' varied from 7-13/cm. Conclusions: A Monte Carlo code has been successfully developed and used to correct for photon propagation effects in simulated objects consistent with tumors.

Cheong, Yaw Jye; Stantz, Keith M.

2010-02-01

125

Two-photon correlation and photon transport in disordered passive parity-time-symmetric lattices

NASA Astrophysics Data System (ADS)

Two-photon correlation and photon transport in periodic and disordered passive parity-time-symmetric lattices are studied. A transition in the two-photon quantum correlation is observed with the increase of the loss coefficient γ in such non-Hermitian lattices, and the introduction of lattice disorder prompts the occurrence of the transition. The unique loss-enhanced transmission and the associated critical point of the loss coefficient γT of the parity-time-symmetric lattice are also modified significantly by introducing lattice disorder. Our results show that the critical point γT is brought forward and the loss-enhanced transmission effect is enhanced in the non-Hermitian lattices with off-diagonal lattice disorder, while the critical point γT is delayed and the loss-enhanced transmission effect is suppressed by introducing diagonal disorder into the non-Hermitian lattices.

Xu, Lei; Dou, Yiling; Bo, Fang; Xu, Jingjun; Zhang, Guoquan

2015-02-01

126

Small fields where electronic equilibrium is not achieved are becoming increasingly important in clinical practice. These complex situations give rise to problems and inaccuracies in both dosimetry and analytical/empirical dose calculation, and therefore require methods other than the conventional ones. A natural diamond detector and a Markus parallel plate ionization chamber have been selected for clinical dosimetry in 6 MV photon beams. Results of simulations using the Monte Carlo system BEAM/EGS4 to model the beam geometry have been compared with dose measurements. A modification of the existing component module for multileaf collimators (MLCs) allowed the modeling of a linear accelerator SL 25 (Elekta Oncology Systems) equipped with an MLC with curved leaf-ends. A mechanical measurement method with spacer plates and a light-field edge detection technique are described as methods to obtain geometrical data of collimator openings for application in the Monte Carlo system. Generally, good agreement is found between measurements and calculations of depth dose distributions, and deviations are typically less than 1%. Calculated lateral dose profiles slightly exceed measured dose distributions near the higher level of the penumbras for a 10×2 cm² field, but agree well with the measurements for all other cases. The simulations are also able to predict variations of output factors and ratios of output factors as a function of field width and field-offset. The Monte Carlo results demonstrate that qualitative changes in energy spectra are too small to explain these variations and that especially geometrical factors affect the output factors and depth dose curves and profiles. PMID:10505876

De Vlamynck, K; Palmans, H; Verhaegen, F; De Wagter, C; De Neve, W; Thierens, H

1999-09-01

127

Inverse Monte Carlo: a unified reconstruction algorithm for SPECT

Inverse Monte Carlo (IMOC) is presented as a unified reconstruction algorithm for Emission Computed Tomography (ECT) providing simultaneous compensation for scatter, attenuation, and the variation of collimator resolution with depth. The technique of inverse Monte Carlo is used to find an inverse solution to the photon transport equation (an integral equation for photon flux from a specified source) for a

Carey E. Floyd; R. E. Coleman; R. J. Jaszczak

1985-01-01

128

This review presents in a comprehensive and tutorial form the basic principles of the Monte Carlo method, as applied to the solution of transport problems in semiconductors. Sufficient details of a typical Monte Carlo simulation have been given to allow the interested reader to create his own Monte Carlo program, and the method has been briefly compared with alternative theoretical

Carlo Jacoboni; Lino Reggiani

1983-01-01

129

Incorporation of Monte Carlo electron interface studies into photon general cavity theory

Electron Monte Carlo calculations using CYLTRAN and a new PHSECE (Photon Produced Secondary Electrons) technique were carried out to estimate electron fluences and energy deposition profiles near LiF/Al and LiF/Pb material interfaces undergoing Co-60 gamma irradiation. Several interesting and new features emerge: (1) Although the build-up of the secondary electron fluences at the interfaces of the irradiated media is approximately exponential, the value of the electron mass fluence build-up coefficient, β, is not equal to the electron mass fluence attenuation coefficient, β_A. (2) The attenuation of the gamma-generated electron fluences at the cavity-medium interfaces, β_A, is strongly dependent on the Z of the adjacent material, and (3) for LiF/Pb there is a significant intrusion energy deposition mode arising from side-scattering in the wall (Pb) material. These new features of interface dosimetry (at least (1) and (2)) are incorporated into the photon general cavity expressions of Burlin-Horowitz and Kearsley and compared with experimental data. 9 references, 4 figures.

Horowitz, Y.S.; Moscovitch, M.; Hsu, H.; Mack, J.M.; Kearsley, E.

1985-01-01

130

Fast perturbation Monte Carlo method for photon migration in heterogeneous turbid media.

We present a two-step Monte Carlo (MC) method that is used to solve the radiative transfer equation in heterogeneous turbid media. The method exploits the one-to-one correspondence between the seed value of a random number generator and the sequence of random numbers. In the first step, a full MC simulation is run for the initial distribution of the optical properties and the "good" seeds (the ones leading to detected photons) are stored in an array. In the second step, we run a new MC simulation with only the good seeds stored in the first step, i.e., we propagate only detected photons. The effect of a change in the optical properties is calculated in a short time by using two scaling relationships. By this method we can increase the speed of a simulation up to a factor of 1300 in typical situations found in near-IR tissue spectroscopy and diffuse optical tomography, with a minimal requirement for hard disk space. Potential applications of this method for imaging of turbid media and the inverse problem are discussed. PMID:21633460
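The seed bookkeeping described above (step 1: store the seeds of detected photons; step 2: replay only those histories and re-score them) can be sketched with a toy 1D walk. The slab model, coefficients, and the standard exp(-Δμa·L) absorption rescaling used here are illustrative assumptions, not the paper's actual code or scaling relationships.

```python
import math
import random

# Toy 1D photon walk through a slab of thickness T with scattering
# coefficient MUS (illustrative values).
T, MUS, N = 1.0, 5.0, 20000

def walk(seed):
    """One history: return (detected, path_length_inside_slab)."""
    rng = random.Random(seed)        # seed <-> random sequence, one-to-one
    x, mu, path = 0.0, 1.0, 0.0
    for _ in range(1000):            # collision cap
        s = -math.log(rng.random()) / MUS
        x += mu * s
        path += s
        if x >= T:                   # transmitted: a "detected" photon
            return True, path - (x - T)   # clip path at the exit face
        if x <= 0.0:
            return False, path
        mu = -mu if rng.random() < 0.5 else mu   # 1D isotropic scatter
    return False, path

# Step 1: full run; keep only the "good" seeds (detected photons).
good = []
for seed in range(N):
    detected, length = walk(seed)
    if detected:
        good.append((seed, length))

# Step 2: re-score only the detected histories for a perturbed absorption
# coefficient via the absorption scaling weight exp(-dmua * L), instead of
# rerunning all N histories.
dmua = 0.2
transmission = sum(math.exp(-dmua * L) for _, L in good) / N
```

In a real implementation the stored seeds would be replayed through the full propagation kernel; here the path lengths are simply cached, which is enough to show why step 2 costs only a fraction of step 1.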

Sassaroli, Angelo

2011-06-01

131

Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations in optically thick media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many smaller Monte Carlo steps, thus improving the efficiency of the simulation. In this paper, we present an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold, as optical thickness is typically a decreasing function of frequency. Above this threshold we employ standard Monte Carlo, which results in a hybrid transport-diffusion scheme. With a set of frequency-dependent test problems, we confirm the accuracy and increased efficiency of our new DDMC method.
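The hybrid selection rule described above can be caricatured in a few lines: a frequency group is handled by DDMC when its cell optical thickness exceeds a chosen threshold, and by standard Monte Carlo otherwise. The opacity model, grid spacing, and threshold value below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of the transport-diffusion split: optically thick groups go to
# DDMC, optically thin groups to standard (Implicit) Monte Carlo.
def pick_solver(opacity, dx, tau_star=5.0):
    """Return 'DDMC' when the cell optical thickness opacity*dx exceeds
    the threshold tau_star, else 'IMC'."""
    return "DDMC" if opacity * dx > tau_star else "IMC"

# Opacity decreasing with frequency (as the abstract notes is typical)
# means DDMC applies below some frequency cutoff.
groups = {nu: pick_solver(opacity=100.0 / nu, dx=0.5) for nu in (1, 5, 20, 80)}
```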

Densmore, Jeffery D., E-mail: jdd@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States); Thompson, Kelly G., E-mail: kgt@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States); Urbatsch, Todd J., E-mail: tmonster@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States)

2012-08-15

132

shield — universal Monte Carlo hadron transport code: scope and applications

shield is a transport code for simulation of hadron cascades in complex extended targets of arbitrary geometric configuration and chemical composition in the energy range up to 1 TeV. Transport of nucleons, pions, kaons, antinucleons, and muons is considered. Recently, transport of ions (arbitrary A,Z nuclei) was included. Hadron–nucleus and nucleus–nucleus interactions inside the target are simulated in an exclusive approach

A. V Dementyev; N. M Sobolevsky

1999-01-01

133

The purpose of this study is to calculate correction factors for plastic water (PW) and plastic water diagnostic-therapy (PWDT) phantoms in clinical photon and electron beam dosimetry using the EGSnrc Monte Carlo code system. A water-to-plastic ionization conversion factor k_pl for PW and PWDT was computed for several commonly used Farmer-type ionization chambers with different wall materials in the range of 4-18 MV photon beams. For electron beams, a depth-scaling factor c_pl and a chamber-dependent fluence correction factor h_pl for both phantoms were also calculated in combination with NACP-02 and Roos plane-parallel ionization chambers in the range of 4-18 MeV. The h_pl values for the plane-parallel chambers were evaluated from the electron fluence correction factor Φ_pl^w and wall correction factors P_wall,w and P_wall,pl for a combination of water or plastic materials. The calculated k_pl and h_pl values were verified by comparison with the measured values. A set of k_pl values computed for the Farmer-type chambers was equal to unity within 0.5% for PW and PWDT in photon beams. The k_pl values also agreed within their combined uncertainty with the measured data. For electron beams, the c_pl values computed for PW and PWDT were from 0.998 to 1.000 and from 0.992 to 0.997, respectively, in the range of 4-18 MeV. The Φ_pl^w values for PW and PWDT were from 0.998 to 1.001 and from 1.004 to 1.001, respectively, at a reference depth in the range of 4-18 MeV. The difference in P_wall between water and plastic materials for the plane-parallel chambers was 0.8% at a maximum. Finally, h_pl values evaluated for plastic materials were equal to unity within 0.6% for NACP-02 and Roos chambers. The h_pl values also agreed within their combined uncertainty with the measured data.
The absorbed dose to water from ionization chamber measurements in PW and PWDT plastic materials corresponds to that in water within 1%. Both phantoms can thus be used as a substitute for water for photon and electron dosimetry.
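A hedged sketch of how a water-to-plastic conversion factor of this kind enters a dose calculation. The symbol k_pl follows the abstract; M_pl (chamber reading in the plastic phantom), N_Dw (calibration coefficient), and all numerical values are placeholders, and beam-quality corrections are omitted for brevity.

```python
# Illustrative only: dose to water inferred from a chamber reading taken
# in a plastic phantom, using a near-unity conversion factor k_pl.
M_pl = 2.05e-8   # chamber reading in plastic [C] (assumed value)
N_Dw = 5.4e7     # absorbed-dose-to-water calibration [Gy/C] (assumed value)
k_pl = 1.003     # water-to-plastic conversion, near unity per the abstract

D_w = M_pl * N_Dw * k_pl   # absorbed dose to water [Gy]
```

The point of the study is that k_pl stays within about 0.5% of unity, so the plastic phantom reading can stand in for a water measurement to within roughly 1%.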

Araki, Fujio; Hanyu, Yuji; Fukuoka, Miyoko; Matsumoto, Kenji; Okumura, Masahiko; Oguchi, Hiroshi [Department of Radiological Technology, Kumamoto University School of Health Sciences, 4-24-1, Kuhonji, Kumamoto, 862-0976 (Japan); Division of Radiation Oncology, Tokyo Women's Medical University Hospital, Tokyo, 162-8666 (Japan); Department of Central Radiology, Kinki University Hospital, Osaka, 589-8511 (Japan); Department of Central Radiology, Shinshu University Hospital, Matsumoto, 390-8621 (Japan)

2009-07-15

134

MCML—Monte Carlo modeling of light transport in multi-layered tissues

A Monte Carlo model of steady-state light transport in multi-layered tissues (MCML) has been coded in ANSI Standard C; therefore, the program can be used on various computers. Dynamic data allocation is used for MCML, hence the number of tissue layers and grid elements of the grid system can be varied by users at run time. The coordinates of the

Lihong Wang; Steven L. Jacques; Liqiong Zheng

1995-01-01

135

Exponentially-convergent Monte Carlo for the One-dimensional Transport Equation

An exponentially-convergent Monte Carlo (ECMC) method is analyzed using the one-group, one-dimension, slab-geometry transport equation. The method is based upon the use of a linear discontinuous finite-element trial space in position and direction...

Peterson, Jacob Ross

2014-04-23

136

Response matrix Monte Carlo based on a general geometry local calculation for electron transport

A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used, which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo Coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs.

Ballinger, C.T.; Rathkopf, J.A. (Lawrence Livermore National Lab., CA (USA)); Martin, W.R. (Michigan Univ., Ann Arbor, MI (USA). Dept. of Nuclear Engineering)

1991-01-01

137

NASA Astrophysics Data System (ADS)

MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10⁻⁵ eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information.
Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each tally. The tally system has been optimized to maintain a high level of efficiency, even as the number of edit regions becomes very large.

Griesheimer, D. P.; Gill, D. F.; Nease, B. R.; Sutton, T. M.; Stedry, M. H.; Dobreff, P. S.; Carpenter, D. C.; Trumbull, T. H.; Caro, E.; Joo, H.; Millman, D. L.

2014-06-01

138

In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported on. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported on as well.

Naff, R.L.; Haley, D.F.; Sudicky, E.A.

1998-01-01

139

Monte Carlo path sampling approach to modeling aeolian sediment transport

NASA Astrophysics Data System (ADS)

Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, the research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains is itself influenced by the flux of saltating grains and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders of magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear especially over complex landscapes. 
Current computational approaches have limitations as well; single grain models are mathematically simple but are computationally intractable even with modern computing power, whereas cellular automata-based approaches are computationally efficient but evolve the system according to rules that are abstractions of the governing physics. This work presents the Green function solution to the continuity equations that govern sediment transport. The Green function solution is implemented using a path sampling approach whereby sand mass is represented as an ensemble of particles that evolve stochastically according to the Green function. In this approach, particle density is a particle representation that is equivalent to the field representation of elevation. Because aeolian transport is nonlinear, particles must be propagated according to their updated field representation with each iteration. This is achieved using a particle-in-cell technique. The path sampling approach offers a number of advantages. The integral form of the Green function solution makes it robust to discontinuities in complex terrains. Furthermore, this approach is spatially distributed, which can help elucidate the role of complex landscapes in aeolian transport. Finally, path sampling is highly parallelizable, making it ideal for execution on modern clusters and graphics processing units.
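The particle-in-cell step mentioned above (binning the particle ensemble to a grid so the field representation can feed back into the next propagation step) can be sketched minimally; the grid size, domain, and nearest-grid-point scheme here are illustrative assumptions.

```python
# Minimal nearest-grid-point particle-in-cell deposit: the particle
# ensemble is binned to a grid, giving the field (e.g. elevation) that
# drives the next stochastic propagation step.
def deposit(positions, n_cells, domain=1.0):
    """Return per-cell particle counts for positions in [0, domain)."""
    field = [0] * n_cells
    for x in positions:
        i = min(int(x / domain * n_cells), n_cells - 1)  # clamp last cell
        field[i] += 1
    return field

field = deposit([0.05, 0.07, 0.5, 0.95], n_cells=10)
```

Production PIC schemes typically use higher-order deposits (cloud-in-cell) for smoothness, but the feedback loop of deposit, update field, propagate particles is the same.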

Hardin, E. J.; Mitasova, H.; Mitas, L.

2011-12-01

140

Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.

Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.

2008-10-31

141

In the framework of further development of the unified approach of photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used for generalization of the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute unified device architecture-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision computing for floating-point arithmetic operations provides higher accuracy. PMID:23085901

Doronin, Alexander; Meglinski, Igor

2012-09-01

142

Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics

NASA Astrophysics Data System (ADS)

In the framework of further development of the unified approach of photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used for generalization of the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute unified device architecture-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision computing for floating-point arithmetic operations provides higher accuracy.

Doronin, Alexander; Meglinski, Igor

2012-09-01

143

Estimation of gamma- and X-ray photons buildup factor in soft tissue with Monte Carlo method

Buildup factor of gamma- and X-ray photons in the energy range 0.2–2 MeV in water and soft tissue is computed using the Monte Carlo method. The results are compared with the existing buildup factor data of pure water. The difference between soft tissue and water buildup factor is studied. Soft tissue is assumed to have a composition of H63C6O28N. The importance of
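The quantity being computed can be illustrated with a toy 1D analog: the buildup factor B is the ratio of total transmitted photons to uncollided transmitted photons. The slab model, cross sections, and scattering law below are assumptions for illustration, not the paper's geometry or materials.

```python
import math
import random

def buildup(mu_t, albedo, T, n=100000, seed=1):
    """Toy 1D slab MC: estimate B = total / uncollided transmission."""
    rng = random.Random(seed)
    total = uncollided = 0
    for _ in range(n):
        x, mu, collided = 0.0, 1.0, False
        while True:
            x += mu * (-math.log(rng.random()) / mu_t)  # next collision site
            if x >= T:                  # transmitted through the slab
                total += 1
                if not collided:
                    uncollided += 1
                break
            if x <= 0.0:                # back-scattered out the entrance
                break
            if rng.random() > albedo:   # absorbed at the collision
                break
            mu = 1.0 if rng.random() < 0.5 else -1.0   # isotropic (1D) scatter
            collided = True
    return total / uncollided

B = buildup(mu_t=2.0, albedo=0.8, T=1.0)
```

By construction B is at least 1 (every uncollided photon also counts toward the total); scattered photons that still reach the far side supply the excess.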

Dariush Sardari; Ali Abbaspour; Samaneh Baradaran; Farshid Babapour

2009-01-01

144

We study the photon-photon correlation properties of two-photon transport in a one-dimensional waveguide coupled to a nonlinear cavity via a real-space approach. It is shown that the intrinsic dissipation of the nonlinear cavity has an important effect upon the correlation of the transported photons. More importantly, strongly correlated photons can be obtained in the transmitted photons even when the nonlinear interaction strength is weak in the cavity. The strong photon-photon correlation is induced by the Fano resonance involving destructive interference between the plane wave and bound state for two-photon transport.

Xun-Wei Xu; Yong Li

2014-07-23

145

Delocalization of electrons by cavity photons in transport through a quantum dot molecule

NASA Astrophysics Data System (ADS)

We present results on cavity-photon-assisted electron transport through two lateral quantum dots embedded in a finite quantum wire. The double quantum dot system is weakly connected to two leads and strongly coupled to a single quantized photon cavity mode with initially two linearly polarized photons in the cavity. Including the full electron-photon interaction, the transient current controlled by a plunger gate in the central system is studied by using a quantum master equation. Without a photon cavity, two resonant current peaks are observed in the range selected for the plunger gate voltage: the ground-state peak, and the peak corresponding to the first-excited state. The current in the ground state is higher than in the first-excited state due to their different symmetry. In a photon cavity with the photon field polarized along or perpendicular to the transport direction, two extra side peaks are found, namely, photon replicas of the ground state and of the first-excited state. The side peaks are caused by photon-assisted electron transport, with multiphoton absorption processes for up to three photons during an electron tunneling process. The inter-dot tunneling in the ground state can be controlled by the photon cavity in the case of the photon field polarized along the transport direction. The electron charge is delocalized from the dots by the photon cavity. Furthermore, the current in the photon-induced side peaks can be strongly enhanced by increasing the electron-photon coupling strength for the case of photons polarized along the transport direction.

Abdullah, Nzar Rauf; Tang, Chi-Shung; Manolescu, Andrei; Gudmundsson, Vidar

2014-11-01

146

A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems

Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. 
Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model) for deep penetration problems such as those examined in this paper. In this research, we investigate the application of a variant of the hybrid Monte Carlo-deterministic method proposed by Cooper and Larsen to global deep penetration problems involving binary stochastic media. To our knowledge, hybrid Monte Carlo-deterministic methods have not previously been applied to problems involving a stochastic medium. We investigate two approaches for computing the approximate deterministic estimate of the forward scalar flux distribution used to automatically generate the weight windows. The first approach uses the atomic mix approximation to the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. The second approach uses the Levermore-Pomraning model for the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. In both cases, we use Monte Carlo Algorithm B with weight windows automatically generated from the approximate forward scalar flux distribution to obtain the solution of the transport problem.
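The automatic weight-window generation described above can be sketched in a few lines; a minimal illustration, assuming the Cooper-Larsen recipe of setting window centers inversely proportional to the approximate forward scalar flux (the function names and the fixed window width are hypothetical, not from the paper's code):

```python
import random

def make_weight_windows(phi, phi_source):
    """Window centers proportional to phi(source)/phi(cell), so that the
    simulated particle population stays roughly uniform across the domain
    even where the physical flux drops by orders of magnitude."""
    return [phi_source / max(p, 1e-30) for p in phi]

def apply_weight_window(weight, center, width=5.0):
    """Split or roulette a particle against its window.
    Returns a list of (count, new_weight); an empty list means killed."""
    lo, hi = center / width, center * width
    if weight > hi:                        # too heavy: split into n copies
        n = int(weight / center + 0.5)
        return [(n, weight / n)]
    if weight < lo:                        # too light: Russian roulette
        if random.random() < weight / center:
            return [(1, center)]
        return []
    return [(1, weight)]                   # inside the window: unchanged
```

Both branches preserve the expected weight, which is what makes the game fair while flattening the variance across the global domain.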

Keady, K P; Brantley, P

2010-03-04

147

Electron transport in radiotherapy using local-to-global Monte Carlo

Local-to-Global (L-G) Monte Carlo methods are a way to make three-dimensional electron transport both fast and accurate relative to other Monte Carlo methods. This is achieved by breaking the simulation into two stages: a local calculation done over small geometries having the size and shape of the "steps" to be taken through the mesh; and a global calculation which relies on a stepping code that samples the stored results of the local calculation. The increase in speed results from taking fewer steps in the global calculation than required by ordinary Monte Carlo codes and by speeding up the calculation per step. The potential for accuracy comes from the ability to use long runs of detailed codes to compile probability distribution functions (PDFs) in the local calculation. Specific examples of successful Local-to-Global algorithms are given.
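The two-stage idea can be sketched schematically; a hypothetical illustration, not the authors' implementation: the local stage condenses many detailed-code histories through one mesh-sized step into a discrete PDF of step outcomes, and the global stepping code then samples that table instead of re-simulating the detailed physics:

```python
import random
from collections import Counter

def build_step_pdf(detailed_outcomes):
    """Local stage: condense detailed-code histories through one
    mesh-sized step into a discrete PDF of step outcomes."""
    counts = Counter(detailed_outcomes)
    total = sum(counts.values())
    return {outcome: n / total for outcome, n in counts.items()}

def sample_step(pdf, rng=random.random):
    """Global stage: one table lookup replaces many small substeps."""
    r, acc = rng(), 0.0
    for outcome, p in sorted(pdf.items()):
        acc += p
        if r <= acc:
            return outcome
    return outcome  # guard against floating-point leftover
```

In a real L-G code the "outcomes" would be binned energy losses and deflections per step; here they are opaque labels to keep the sketch short.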

Svatos, M.M.; Chandler, W.P.; Siantar, C.L.H.; Rathkopf, J.A. [Lawrence Livermore National Lab., CA (United States); Ballinger, C.T. [Albany Medical Center, Albany, NY (United States). Dept. of Radiation Oncology; Neuenschwander, H. [Bern Univ. (Switzerland). Dept. of Medical Radiation Physics; Mackie, T.R.; Reckwerdt, P.J. [Univ. of Wisconsin-Madison, Madison, WI (United States)

1994-09-01

148

Data decomposition of Monte Carlo particle transport simulations via tally servers

An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
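The tracker/server split can be illustrated with a toy sketch (all names hypothetical); the network sends used in the actual OpenMC implementation are replaced here by direct calls, but the division of labor is the same: trackers never hold the full tally array, they route each score to the server that owns that slice:

```python
class TallyServer:
    """Holds one contiguous slice [lo, hi) of the global tally array;
    tracking processors send it (tally_index, score) messages instead
    of writing to local memory."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.data = [0.0] * (hi - lo)

    def owns(self, index):
        return self.lo <= index < self.hi

    def accumulate(self, index, score):
        self.data[index - self.lo] += score

def route(servers, index, score):
    """Tracker-side stub for the network send: deliver the score to
    whichever server owns this tally bin."""
    for s in servers:
        if s.owns(index):
            s.accumulate(index, score)
            return
```

The performance question the paper's model answers is whether these sends can overlap with tracking cheaply enough; the decomposition itself is this simple.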

Romano, Paul K., E-mail: paul.k.romano@gmail.com [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States); Siegel, Andrew R., E-mail: siegala@mcs.anl.gov [Argonne National Laboratory, Theory and Computing Sciences, 9700 S Cass Ave., Argonne, IL 60439 (United States); Forget, Benoit, E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)] [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States); Smith, Kord, E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)] [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)

2013-11-01

149

A Novel Implementation of Massively Parallel Three Dimensional Monte Carlo Radiation Transport

NASA Astrophysics Data System (ADS)

The goal of our summer project was to implement the difference formulation for radiation transport into Cosmos++, a multidimensional, massively parallel, magnetohydrodynamics code for astrophysical applications (Peter Anninos - AX). The difference formulation is a new method for Symbolic Implicit Monte Carlo thermal transport (Brooks and Szöke - PAT). Formerly, simultaneous implementation of fully implicit Monte Carlo radiation transport in multiple dimensions on multiple processors had not been convincingly demonstrated. We found that a combination of the difference formulation and the inherent structure of Cosmos++ makes such an implementation both accurate and straightforward. We developed a "nearly nearest neighbor physics" technique to allow each processor to work independently, even with a fully implicit code. This technique, coupled with the increased accuracy of an implicit Monte Carlo solution and the efficiency of parallel computing systems, allows us to demonstrate the possibility of massively parallel thermal transport. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.

Robinson, P. B.; Peterson, J. D. L.

2005-12-01

150

Numerous variance reduction techniques, such as splitting/Russian roulette, weight windows, and the exponential transform, exist for improving the efficiency of Monte Carlo transport calculations. Typically, however, these methods, while reducing the variance in the problem area of interest, tend to increase the variance in other, presumably less important, regions. As such, these methods tend to be not as effective in Monte Carlo calculations which require the minimization of the variance everywhere. Recently, "Local" Exponential Transform (LET) methods have been developed as a means of approximating the zero-variance solution. A numerical solution to the adjoint diffusion equation is used, along with an exponential representation of the adjoint flux in each cell, to determine "local" biasing parameters. These parameters are then used to bias the forward Monte Carlo transport calculation in a manner similar to the conventional exponential transform, but such that the transform parameters are now local in space and energy, not global. Results have shown that the Local Exponential Transform often offers a significant improvement over conventional geometry splitting/Russian roulette with weight windows. Since the biasing parameters for the Local Exponential Transform were determined from a low-order solution to the adjoint transport problem, the LET has been applied in problems where it was desirable to minimize the variance in a detector region. The purpose of this paper is to show that by basing the LET method upon a low-order solution to the forward transport problem, one can instead obtain biasing parameters which will minimize the maximum variance in a Monte Carlo transport calculation.
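The conventional exponential transform that the LET method localizes can be sketched as follows; a minimal single-cell illustration with a constant stretching parameter p (in the LET method p would vary cell by cell, derived from the low-order adjoint or forward solution; the function name is hypothetical):

```python
import math, random

def biased_free_path(sigma, p, rng=random.random):
    """Exponential transform: sample the flight distance from a stretched
    cross section sigma* = sigma*(1 - p) (p > 0 pushes particles deeper
    into the problem) and return (distance, weight correction).

    weight = true pdf / biased pdf
           = (sigma/sigma*) * exp(-(sigma - sigma*) * s),
    which keeps the biased game statistically fair."""
    sigma_star = sigma * (1.0 - p)
    s = -math.log(1.0 - rng()) / sigma_star
    w = (sigma / sigma_star) * math.exp(-(sigma - sigma_star) * s)
    return s, w
```

With p = 0.5 the sampled paths are twice as long on average, and the returned weight exactly cancels the bias in expectation.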

Baker, R.S. [Los Alamos National Lab., NM (United States); Larsen, E.W. [Michigan Univ., Ann Arbor, MI (United States). Dept. of Nuclear Engineering

1992-08-01

151

Numerous variance reduction techniques, such as splitting/Russian roulette, weight windows, and the exponential transform, exist for improving the efficiency of Monte Carlo transport calculations. Typically, however, these methods, while reducing the variance in the problem area of interest, tend to increase the variance in other, presumably less important, regions. As such, these methods tend to be not as effective in Monte Carlo calculations which require the minimization of the variance everywhere. Recently, "Local" Exponential Transform (LET) methods have been developed as a means of approximating the zero-variance solution. A numerical solution to the adjoint diffusion equation is used, along with an exponential representation of the adjoint flux in each cell, to determine "local" biasing parameters. These parameters are then used to bias the forward Monte Carlo transport calculation in a manner similar to the conventional exponential transform, but such that the transform parameters are now local in space and energy, not global. Results have shown that the Local Exponential Transform often offers a significant improvement over conventional geometry splitting/Russian roulette with weight windows. Since the biasing parameters for the Local Exponential Transform were determined from a low-order solution to the adjoint transport problem, the LET has been applied in problems where it was desirable to minimize the variance in a detector region. The purpose of this paper is to show that by basing the LET method upon a low-order solution to the forward transport problem, one can instead obtain biasing parameters which will minimize the maximum variance in a Monte Carlo transport calculation.

Baker, R.S. (Los Alamos National Lab., NM (United States)); Larsen, E.W. (Michigan Univ., Ann Arbor, MI (United States). Dept. of Nuclear Engineering)

1992-01-01

152

NASA Astrophysics Data System (ADS)

The variations of depth and surface dose with bone heterogeneity and beam angle were compared between unflattened and flattened photon beams using Monte Carlo simulations. Phase-space files of the 6 MV photon beams with field size of 10×10 cm2 were generated with and without the flattening filter based on a Varian TrueBeam linac. Depth and surface doses were calculated in bone and water phantoms using Monte Carlo simulations (the EGSnrc-based code). Dose calculations were repeated with angles of the unflattened and flattened beams turned from 0° to 15°, 30°, 45°, 60°, 75° and 90° in the bone and water phantoms. Monte Carlo results of depth doses showed that compared to the flattened beam the unflattened photon beam had a higher dose in the build-up region but a lower dose beyond the depth of maximum dose. Dose ratios of the unflattened to flattened beams were calculated in the range of 1.6-2.6 with beam angle varying from 0° to 90° in water. Similar results were found in the bone phantom. In addition, surface doses about 2.5 times higher were found with beam angles equal to 0° and 15° in the bone and water phantoms. However, the surface dose deviation between the unflattened and flattened beams became smaller with increasing beam angle. Dose enhancements due to bone backscatter were also found at the water-bone and bone-water interfaces for both the unflattened and flattened beams in the bone phantom. With the Monte Carlo beams cross-calibrated to the monitor unit in simulations, variations of depth and surface dose with bone heterogeneity and beam angle were investigated and compared. For the unflattened and flattened photon beams, the surface dose and range of depth dose ratios (unflattened to flattened beam) decreased with increasing beam angle. The dosimetric comparison in this study is useful in understanding the characteristics of the unflattened photon beam on the depth and surface dose with bone heterogeneity.

Chow, James C. L.; Owrangi, Amir M.

2014-08-01

153

NASA Astrophysics Data System (ADS)

BEAM is a general purpose EGS4 user code for simulating radiotherapy sources (Rogers et al. Med. Phys. 22, 503-524, 1995). The BEAM code is optimized by first minimizing unnecessary electron transport (a factor of 3 improvement in efficiency). The efficiency of the uniform bremsstrahlung splitting (UBS) technique is assessed and found to be 4 times more efficient. The Russian Roulette technique used in conjunction with UBS is substantially modified to make simulations an additional 2 times more efficient. Finally, a novel and robust technique, called selective bremsstrahlung splitting (SBS), is developed and shown to improve the efficiency of photon beam simulations by an additional factor of 3-4, depending on the end-point considered. The optimized BEAM code is benchmarked by comparing calculated and measured ionization distributions in water from the 10 and 20 MV photon beams of the NRCC linac. Unlike previous calculations, the incident e- energy is known independently to 1%, the entire extra-focal radiation is simulated and e- contamination is accounted for. Both beams use clinical jaws, whose dimensions are accurately measured, and which are set for a 10 x 10 cm2 field at 110 cm. At both energies, the calculated and the measured values of ionization on the central-axis in the buildup region agree within 1% of maximum dose. The agreement is well within statistics elsewhere on the central-axis. Ionization profiles match within 1% of maximum dose, except at the geometrical edges of the field, where the disagreement is up to 5% of dose maximum. Causes for this discrepancy are discussed. The benchmarked BEAM code is then used to simulate beams from the major commercial medical linear accelerators. The off-axis factors are matched within statistical uncertainties, for most of the beams at the 1σ level and for all at the 2σ level. The calculated and measured depth-dose data agree within 1% (local dose), at about 1% (1σ level) statistics, at all depths past the depth of maximum dose for almost all beams. The calculated photon spectra and average energy distributions are compared to those published by Mohan et al. and decomposed into direct and scattered photon components.

Sheikh-Bagheri, Daryoush

1999-12-01

154

Effective source term in the diffusion equation for photon transport in turbid media

The diffusion equation is used to describe photon transport in turbid media. We have performed a series of spectroscopy experiments on a number of uniform turbid media with different optical properties (absorption coefficient

Fantini, Sergio

155

Minimizing the cost of splitting in Monte Carlo radiation transport simulation

NASA Astrophysics Data System (ADS)

A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally as well as the time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S_n (discrete ordinates) solution technique, allowing for the prediction of computer cost (formulated as the product of sample variance and time per particle history, σ_s²τ_p) associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed (e.g., deep penetration calculations).
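The two ingredients being optimized can be sketched with a toy example (function names hypothetical): splitting or rouletting a particle at an internal surface with a given importance ratio, and the cost figure σ_s²τ_p that the optimal surface locations and ratios minimize:

```python
import random

def split_at_surface(weight, ratio, rng=random.random):
    """Geometric splitting / Russian roulette at an internal surface.
    ratio = importance(downstream) / importance(upstream):
    > 1 means split into ~ratio daughters, < 1 means roulette.
    Expected total weight is conserved in both branches."""
    if ratio >= 1.0:
        n = int(ratio)
        if rng() < ratio - n:             # handle non-integer ratios
            n += 1
        return [weight / ratio] * n
    return [weight / ratio] if rng() < ratio else []

def predicted_cost(sample_variance, time_per_history):
    """The paper's cost figure: sigma_s^2 * tau_p.  Over-splitting
    drives tau_p up, under-splitting drives sigma_s^2 up; the optimum
    balances the product."""
    return sample_variance * time_per_history
```

The deterministic analysis in the paper predicts both factors from moment equations rather than measuring them, but the trade-off is exactly this product.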

Juzaitis, R. J.

1980-10-01

156

Minimizing the cost of splitting in Monte Carlo radiation transport simulation

A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally as well as the time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S_n (discrete ordinates) solution technique, allowing for the prediction of computer cost (formulated as the product of sample variance and time per particle history, σ_s²τ_p) associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed (e.g., deep penetration calculations).

Juzaitis, R.J.

1980-10-01

157

Implicit Monte Carlo methods and non-equilibrium Marshak wave radiative transport

Two enhancements to the Fleck implicit Monte Carlo method for radiative transport are described, for use in transparent and opaque media respectively. The first introduces a spectral mean cross section, which applies to pseudoscattering in transparent regions with a high frequency incident spectrum. The second provides a simple Monte Carlo random walk method for opaque regions, without the need for a supplementary diffusion equation formulation. A time-dependent transport Marshak wave problem of radiative transfer, in which a non-equilibrium condition exists between the radiation and material energy fields, is then solved. These results are compared to published benchmark solutions and to new discrete ordinate S-N results, for both spatially integrated radiation-material energies versus time and to new spatially dependent temperature profiles. Multigroup opacities, which are independent of both temperature and frequency, are used in addition to a material specific heat which is proportional to the cube of the temperature. 7 refs., 4 figs.

Lynch, J.E.

1985-01-01

158

Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields

NASA Astrophysics Data System (ADS)

The application of small photon fields in modern radiotherapy requires the determination of total scatter factors S_cp or field factors Ω^{f_clin,f_msr}_{Q_clin,Q_msr} with high precision. Both quantities require the knowledge of the field-size-dependent and detector-dependent correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr}. The aim of this study is the determination of the correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr} for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the mentioned correction factor for different small square field sizes. Besides this, the mean water-to-air stopping power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr}; this perturbation may contribute up to 50% to the correction factor. Field-dependent changes in stopping-power ratios are negligible. The magnitude of k^{f_clin,f_msr}_{Q_clin,Q_msr} is of the order of 1.2 at a field size of 1 × 1 cm2 for the large volume ion chamber PTW31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW31014 and PTW31016. For the diode detectors included in this study (PTW60016, PTW60017), the correction factor deviates no more than 2% from unity in field sizes between 10 × 10 and 1 × 1 cm2, but below this field size there is a steep decrease of k^{f_clin,f_msr}_{Q_clin,Q_msr} below unity, i.e. a strong overestimation of dose. Besides the field size and detector dependence, the results reveal a clear dependence of the correction factor on the accelerator geometry for field sizes below 1 × 1 cm2, i.e. on the beam spot size of the primary electrons hitting the target. This effect is especially pronounced for the ionization chambers. In conclusion, comparing all detectors, the unshielded diode PTW60017 is highly recommended for small field dosimetry, since its correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr} is closest to unity in small fields and mainly independent of the electron beam spot size.
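In the small-field formalism used in the abstract, the correction factor reduces to a ratio of Monte Carlo dose ratios between the clinical field and the machine-specific reference field; a minimal sketch of that arithmetic (function name hypothetical):

```python
def k_correction(d_water_clin, d_det_clin, d_water_msr, d_det_msr):
    """Correction factor k^{f_clin,f_msr}_{Q_clin,Q_msr}:
    (dose-to-water / dose-to-detector) in the clinical small field
    divided by the same ratio in the reference (msr) field.
    All four doses come from Monte Carlo calculations as in the study."""
    return (d_water_clin / d_det_clin) / (d_water_msr / d_det_msr)
```

A detector that under-responds in the small field (d_det_clin low relative to water) yields k > 1, matching the ~1.2 reported for the large-volume chamber.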

Czarnecki, D.; Zink, K.

2013-04-01

159

The Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) has established a method for ionisation chamber calibrations using megavoltage photon reference beams. The new method will reduce the calibration uncertainty compared to a 60Co calibration combined with the TRS-398 energy correction factor. The calibration method employs a graphite calorimeter and a Monte Carlo (MC) conversion factor to convert the absolute dose to graphite to absorbed dose to water. EGSnrc is used to model the linac head and doses in the calorimeter and water phantom. The linac model is validated by comparing measured and modelled PDDs and profiles. The relative standard uncertainties in the calibration factors at the ARPANSA beam qualities were found to be 0.47% at 6 MV, 0.51% at 10 MV and 0.46% for the 18 MV beam. A comparison with the Bureau International des Poids et Mesures (BIPM) as part of the key comparison BIPM.RI(I)-K6 gave results of 0.9965(55), 0.9924(60) and 0.9932(59) for the 6, 10 and 18 MV beams, respectively, with all beams within 1σ of the participant average. The measured kQ values for an NE2571 Farmer chamber were found to be lower than those in TRS-398 but are consistent with published measured and modelled values. Users can expect a shift in the calibration factor at user energies of an NE2571 chamber between 0.4-1.1% across the range of calibration energies compared to the current calibration method. PMID:25565406

Wright, Tracy; Lye, Jessica E; Ramanathan, Ganesan; Harty, Peter D; Oliver, Chris; Webb, David V; Butler, Duncan J

2015-01-21

160

NASA Astrophysics Data System (ADS)

The Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) has established a method for ionisation chamber calibrations using megavoltage photon reference beams. The new method will reduce the calibration uncertainty compared to a 60Co calibration combined with the TRS-398 energy correction factor. The calibration method employs a graphite calorimeter and a Monte Carlo (MC) conversion factor to convert the absolute dose to graphite to absorbed dose to water. EGSnrc is used to model the linac head and doses in the calorimeter and water phantom. The linac model is validated by comparing measured and modelled PDDs and profiles. The relative standard uncertainties in the calibration factors at the ARPANSA beam qualities were found to be 0.47% at 6 MV, 0.51% at 10 MV and 0.46% for the 18 MV beam. A comparison with the Bureau International des Poids et Mesures (BIPM) as part of the key comparison BIPM.RI(I)-K6 gave results of 0.9965(55), 0.9924(60) and 0.9932(59) for the 6, 10 and 18 MV beams, respectively, with all beams within 1σ of the participant average. The measured kQ values for an NE2571 Farmer chamber were found to be lower than those in TRS-398 but are consistent with published measured and modelled values. Users can expect a shift in the calibration factor at user energies of an NE2571 chamber between 0.4–1.1% across the range of calibration energies compared to the current calibration method.

Wright, Tracy; Lye, Jessica E.; Ramanathan, Ganesan; Harty, Peter D.; Oliver, Chris; Webb, David V.; Butler, Duncan J.

2015-01-01

161

Modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program

This paper describes the modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program. This effort represents a complete 'white sheet of paper' rewrite of the code. In this paper, the motivation driving this project, the design objectives for the new version of the program, and the design choices and their consequences will be discussed. The design itself will also be described, including the important subsystems as well as the key classes within those subsystems.

Moskowitz, B.S.

2000-02-01

162

Monte Carlo simulation of hot electron transport in deep submicron SOI MOSFET

NASA Astrophysics Data System (ADS)

Ensemble Monte Carlo simulation of electron and hole transport in a deep-submicron n-channel SOI MOSFET with 100 nm channel length is performed. The influence of the interband impact ionization process on the transistor characteristics is investigated within the framework of the Keldysh impact ionization model. The effective threshold energy of electron impact ionization, as a parameter characterizing the process, is calculated. The dependence of the effective threshold energy on the drain bias is determined.

Borzdov, A. V.; Borzdov, V. M.; V'yurkov, V. V.

2014-12-01

163

Neutral Particle Transport in Cylindrical Plasma Simulated by a Monte Carlo Code

NASA Astrophysics Data System (ADS)

A Monte Carlo code (MCHGAS) has been developed to investigate neutral particle transport. The code can calculate the radial profile and energy spectrum of neutral particles in cylindrical plasmas. The calculation time of the code is dramatically reduced when the splitting and Russian roulette schemes are applied. The plasma model of an infinite cylinder is assumed in the code, which is very convenient for simulating neutral particle transport in small and medium-sized tokamaks. The design of the multi-channel neutral particle analyser (NPA) on HL-2A can be optimized by using this code.

Yu, Deliang; Yan, Longwen; Zhong, Guangwu; Lu, Jie; Yi, Ping

2007-04-01

164

Time-implicit Monte-Carlo collision algorithm for particle-in-cell electron transport models

A time-implicit Monte-Carlo collision algorithm has been developed to allow particle-in-cell electron transport models to be applied to arbitrarily collisional systems. The algorithm is formulated for electrons moving in response to electric and magnetic accelerations and subject to collisional drag and scattering due to a background plasma. The correct fluid or streaming transport results are obtained in the respective limits of strongly- or weakly-collisional systems, and reasonable behavior is produced even for time steps greatly exceeding the magnetic-gyration and collisional-scattering times.
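The stability property claimed above (correct behavior even for time steps greatly exceeding the collision time) can be illustrated for the drag part alone; a sketch assuming a simple backward-Euler treatment of dv/dt = a − νv, not the authors' full algorithm:

```python
def implicit_drag_step(v, accel, nu, dt):
    """Backward-Euler update for dv/dt = a - nu*v (collisional drag):
        v_new = (v + a*dt) / (1 + nu*dt).
    Stable for arbitrarily large nu*dt, and recovering the fluid drift
    v -> a/nu when nu*dt >> 1 -- the limit a time-implicit collision
    algorithm must reproduce without resolving the collision time."""
    return (v + accel * dt) / (1.0 + nu * dt)
```

An explicit update v + (a − νv)dt would blow up for νdt > 2; the implicit form instead relaxes smoothly onto the fluid answer, which is the essential point of the abstract.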

Cranfill, C.W.; Brackbill, J.U.; Goldman, S.R.

1985-01-01

165

Exponentially-convergent Monte Carlo for the 1-D transport equation

We define a new exponentially-convergent Monte Carlo method for solving the one-speed 1-D slab-geometry transport equation. This method is based upon the use of a linear discontinuous finite-element trial space in space and direction to represent the transport solution. A space-direction h-adaptive algorithm is employed to restore exponential convergence after stagnation occurs due to inadequate trial-space resolution. This method uses jumps in the solution at cell interfaces as an error indicator. Computational results are presented demonstrating the efficacy of the new approach. (authors)
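The jump-based error indicator driving the h-adaptivity can be sketched as follows; a hypothetical illustration (names and the refine-both-neighbors policy are assumptions, not the authors' code):

```python
def interface_jumps(cells):
    """cells: list of (value_at_left_edge, value_at_right_edge) for a
    linear-discontinuous representation.  The discontinuity between
    cell i's right edge and cell i+1's left edge is the error
    indicator: a converged smooth solution has small jumps."""
    return [abs(cells[i + 1][0] - cells[i][1]) for i in range(len(cells) - 1)]

def cells_to_refine(cells, tol):
    """Flag both neighbors of any interface whose jump exceeds tol."""
    flagged = set()
    for i, jump in enumerate(interface_jumps(cells)):
        if jump > tol:
            flagged.update((i, i + 1))
    return sorted(flagged)
```

Refining the flagged cells restores trial-space resolution where the solution representation is worst, which is what re-enables the exponential convergence.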

Peterson, J. R.; Morel, J. E.; Ragusa, J. C. [Department of Nuclear Engineering, TAMU 3133, Texas A and M University, College Station, TX 77843-3133 (United States)

2013-07-01

166

A VAX version of the coupled Monte Carlo transport codes HETC and MORSE-CGA

The three-dimensional Monte Carlo transport codes, HETC and MORSE-CGA, are distributed by the Radiation Shielding Information Center at Oak Ridge National Laboratory. These codes, written for IBM-3033 computers, have been installed on the Environmental Measurements Laboratory's VAX/11-750 computer for operation in a coupled mode to study the transport of neutrons over the energy range from thermal to several GeV. This report is a guide to their use on the VAX/11-750 computer. 26 refs., 6 figs., 14 tabs.

Sanna, R.S.

1990-12-01

167

A portable, parallel, object-oriented Monte Carlo neutron transport code in C++

We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.
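A k-eigenvalue calculation of this kind is, at its core, a power iteration over fission generations; a deterministic toy sketch, with a precomputed fission matrix standing in for the Monte Carlo transport sweep (all names hypothetical):

```python
def power_iteration_k(fission_matrix, n_iter=100):
    """k-eigenvalue by power iteration.  In a Monte Carlo code each
    matrix application is a batch of particle histories carrying the
    fission source from one generation to the next; here a fission
    matrix F[i][j] (neutrons born in cell i per source neutron in
    cell j) stands in for the transport sweep."""
    m = len(fission_matrix)
    source = [1.0 / m] * m              # flat initial fission source
    k = 1.0
    for _ in range(n_iter):
        new = [sum(fission_matrix[i][j] * source[j] for j in range(m))
               for i in range(m)]
        k = sum(new)                    # neutrons produced per source neutron
        source = [x / k for x in new]   # renormalize for next generation
    return k, source
```

The iteration converges to the dominant eigenvalue k and its source shape; the α-eigenvalue searches MC++ performs wrap a time-absorption term around the same idea.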

Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States); Nolen, S.D. [Texas A and M Univ., College Station, TX (United States)]|[Los Alamos National Lab., NM (United States)

1997-05-01

168

Photon energy-modulated radiotherapy: Monte Carlo simulation and treatment planning study

Purpose: To demonstrate the feasibility of photon energy-modulated radiotherapy during beam-on time. Methods: A cylindrical device made of aluminum was conceptually proposed as an energy modulator. The frame of the device was connected with 20 tubes through which mercury could be injected or drained to adjust the thickness of mercury along the beam axis. In Monte Carlo (MC) simulations, the flattening filter of a 6 or 10 MV linac was replaced with the device. The thickness of mercury inside the device varied from 0 to 40 mm at the field sizes of 5 x 5 cm2 (FS5), 10 x 10 cm2 (FS10), and 20 x 20 cm2 (FS20). At least 5 billion histories were followed for each simulation to create phase space files at 100 cm source to surface distance (SSD). In-water beam data were acquired by additional MC simulations using the above phase space files. A treatment planning system (TPS) was commissioned to generate a virtual machine using the MC-generated beam data. Intensity modulated radiation therapy (IMRT) plans for six clinical cases were generated using conventional 6 MV, 6 MV flattening filter free, and energy-modulated photon beams of the virtual machine. Results: With increasing thickness of mercury, the percentage depth doses (PDDs) of the modulated 6 and 10 MV beams beyond the depth of dose maximum continuously increased. The amount of PDD increase at the depths of 10 and 20 cm for modulated 6 MV was 4.8% and 5.2% at FS5, 3.9% and 5.0% at FS10, and 3.2%-4.9% at FS20 as the thickness of mercury increased from 0 to 20 mm. The same for modulated 10 MV was 4.5% and 5.0% at FS5, 3.8% and 4.7% at FS10, and 4.1% and 4.8% at FS20 as the thickness of mercury increased from 0 to 25 mm. The outputs of modulated 6 MV with 20 mm mercury and of modulated 10 MV with 25 mm mercury were reduced to 30% and 56% of those of the conventional linac, respectively. The energy-modulated IMRT plans had lower integral doses than the 6 MV IMRT or 6 MV flattening filter free plans for tumors located in the periphery while maintaining similar quality of target coverage, homogeneity, and conformity. Conclusions: The MC study for the designed energy modulator demonstrated the feasibility of energy-modulated photon beams available during beam-on time. The planning study showed an advantage of energy- and intensity-modulated radiotherapy in terms of integral dose without sacrificing any quality of the IMRT plan.

Park, Jong Min; Kim, Jung-in; Heon Choi, Chang; Chie, Eui Kyu; Kim, Il Han; Ye, Sung-Joon [Interdiciplinary Program in Radiation Applied Life Science, Seoul National University, Seoul, 110-744, Korea and Department of Radiation Oncology, Seoul National University Hospital, Seoul, 110-744 (Korea, Republic of); Interdiciplinary Program in Radiation Applied Life Science, Seoul National University, Seoul, 110-744 (Korea, Republic of); Department of Radiation Oncology, Seoul National University Hospital, Seoul, 110-744 (Korea, Republic of); Interdiciplinary Program in Radiation Applied Life Science, Seoul National University, Seoul, 110-744 (Korea, Republic of) and Department of Radiation Oncology, Seoul National University Hospital, Seoul, 110-744 (Korea, Republic of); Interdiciplinary Program in Radiation Applied Life Science, Seoul National University, Seoul, 110-744 (Korea, Republic of); Department of Radiation Oncology, Seoul National University Hospital, Seoul, 110-744 (Korea, Republic of) and Department of Intelligent Convergence Systems, Seoul National University, Seoul, 151-742 (Korea, Republic of)

2012-03-15

169

NASA Astrophysics Data System (ADS)

An electron-photon coupled Monte Carlo code ARCHER -

Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George

2014-06-01

170

Purpose: The impact of photon beam energy and tissue heterogeneities on dose distributions and dosimetric characteristics such as point dose, mean dose, and maximum dose was investigated in the context of small-animal irradiation using Monte Carlo simulations based on the EGSnrc code. Methods: Three Monte Carlo mouse phantoms, namely heterogeneous, homogeneous, and bone homogeneous, were generated based on the same mouse computed tomography image set. These phantoms were generated by overriding the tissue type of none of the voxels (heterogeneous), all voxels (homogeneous), or only the bone voxels (bone homogeneous) to that of soft tissue. Phase space files of the 100 and 225 kVp photon beams based on a small-animal irradiator (XRad225Cx, Precision X-Ray Inc., North Branford, CT) were generated using BEAMnrc. A 360 deg. photon arc was simulated and three-dimensional (3D) dose calculations were carried out using the DOSXYZnrc code through DOSCTP in the above three phantoms. For comparison, the 3D dose distributions, dose profiles, and mean, maximum, and point doses at different locations such as the isocenter, lung, rib, and spine were determined in the three phantoms. Results: The dose gradient resulting from the 225 kVp arc was found to be steeper than that for the 100 kVp arc. The mean dose was found to be 1.29 and 1.14 times higher for the heterogeneous phantom when compared to the mean dose in the homogeneous phantom using the 100 and 225 kVp photon arcs, respectively. The bone doses (rib and spine) in the heterogeneous mouse phantom were about five (100 kVp) and three (225 kVp) times higher when compared to the homogeneous phantom. However, the lung dose did not vary significantly among the heterogeneous, homogeneous, and bone homogeneous phantoms for the 225 kVp compared to the 100 kVp photon beams. Conclusions: A significant bone dose enhancement was found when the 100 and 225 kVp photon beams were used in small-animal irradiation.
This dosimetric effect, due to the presence of the bone heterogeneity, was more significant than that due to the lung heterogeneity. Hence, for kV photon energies in the range used in small-animal irradiation, the increase of the mean and bone dose due to the photoelectric effect could be a dosimetric concern.

Chow, James C. L.; Leung, Michael K. K.; Lindsay, Patricia E.; Jaffray, David A. [Radiation Medicine Program, Princess Margaret Hospital, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Department of Physics, Ryerson University, Toronto, Ontario M5B 2K3 (Canada); Department of Medical Biophysics, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Physics and Ontario Cancer Institute, Princess Margaret Hospital, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)]

2010-10-15

171

A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX.

An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for generation of tracks. A set of data including the track of the particle was produced for each material of interest (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code is evaluated against MCNPX as a reference code. While an analytical pencil beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX. With the fast MC code developed in this work, it takes less than 2 minutes to calculate dose for 10^6 particles on an Intel Core 2 Duo 2.66 GHz desktop computer. PMID:25190994
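The pre-generated-track idea can be illustrated with a toy sketch: instead of resampling every interaction, each history replays a stored track (a list of step vectors and energy deposits, as might be extracted once from a full MCNPX run and reused many times). All names and track data below are hypothetical, not the authors' implementation:

```python
import random

def replay_track(track, origin, dose, voxel=1.0):
    """Deposit a stored track's energy steps into a voxel dose dictionary."""
    x, y, z = origin
    for dx, dy, dz, edep in track:
        x, y, z = x + dx, y + dy, z + dz
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        dose[key] = dose.get(key, 0.0) + edep

def fast_mc(track_library, n_histories, source=(0.0, 0.0, 0.0)):
    """Each history replays one randomly chosen pre-generated track."""
    dose = {}
    for _ in range(n_histories):
        replay_track(random.choice(track_library), source, dose)
    return dose

# Two toy "water" tracks, each a list of (dx, dy, dz, edep) steps;
# a real library would hold many tracks per material and energy.
library = [
    [(0.0, 0.0, 0.9, 1.0), (0.0, 0.0, 0.8, 2.0), (0.0, 0.0, 0.3, 5.0)],
    [(0.1, 0.0, 0.9, 1.2), (0.0, 0.1, 0.7, 2.5), (0.0, 0.0, 0.2, 4.0)],
]
d = fast_mc(library, 1000)
print(sum(d.values()) / 1000)  # mean energy deposited per history
```

Because the expensive physics sampling happens once during library generation, the per-history cost reduces to table lookups and voxel accumulation, which is where the reported 200x speedup comes from.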

Jabbari, Keyvan; Seuntjens, Jan

2014-07-01

172

MONTE CARLO SIMULATION MODEL OF ENERGETIC PROTON TRANSPORT THROUGH SELF-GENERATED ALFVEN WAVES

A new Monte Carlo simulation model for the transport of energetic protons through self-generated Alfven waves is presented. The key point of the model is that, unlike previous ones, it employs the full form (i.e., includes the dependence on the pitch-angle cosine) of the resonance condition governing the scattering of particles off Alfven waves, the process that approximates the wave-particle interactions in the framework of quasilinear theory. This allows us to model the wave-particle interactions in weak turbulence more adequately, in particular, to implement anisotropic particle scattering instead of the isotropic scattering on which the previous Monte Carlo models were based. The developed model is applied to study the transport of flare-accelerated protons in an open magnetic flux tube. Simulation results for the transport of monoenergetic protons through the spectrum of Alfven waves reveal that the anisotropic scattering leads to spatially more distributed wave growth than isotropic scattering. This result can have important implications for diffusive shock acceleration, e.g., affect the scattering mean free path of the accelerated particles in, and the size of, the foreshock region.
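The distinction the abstract draws can be made concrete. For parallel-propagating Alfven waves, the gyroresonance condition omega - k v mu = +/-Omega with omega = k V_A yields a resonant wavenumber that depends on the pitch-angle cosine mu, whereas the simplified condition used by earlier isotropic-scattering models drops mu entirely. A sketch under one common sign convention (conventions vary by author, so treat the exact signs as an assumption):

```python
def k_resonant(v, mu, omega_c, v_a):
    """Resonant parallel wavenumber from omega - k*v*mu = -omega_c with
    the Alfven dispersion omega = k*v_a (one common convention)."""
    return omega_c / abs(v * mu - v_a)

def k_resonant_simplified(v, omega_c):
    """mu-independent form: every pitch angle resonates with the same k."""
    return omega_c / v

omega_c = 1.0   # proton gyrofrequency (arbitrary units)
v_a = 0.01      # Alfven speed << particle speed
v = 1.0
for mu in (0.1, 0.5, 0.9):
    # small-mu particles resonate with much shorter waves (larger k)
    print(mu, k_resonant(v, mu, omega_c, v_a))
```

Because different pitch angles feed different parts of the wave spectrum, growth driven by the full condition spreads over k (and hence over space) in a way the mu-independent form cannot capture.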

Afanasiev, A.; Vainio, R., E-mail: alexandr.afanasiev@helsinki.fi [Department of Physics, University of Helsinki (Finland)

2013-08-15


174

The path-history-based fluorescence Monte Carlo method used for fluorescence tomography imaging reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method by experimenting with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in a heterogeneous medium. PMID:25607163
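Although the abstract does not spell out the exact pfMC variants compared, the standard perturbation Monte Carlo reweighting rule conveys the path-history idea: a stored path is reused under perturbed optical properties by multiplying its weight by a factor built from the scattering and total attenuation coefficients. A hedged sketch of that textbook rule:

```python
import math

def pmc_weight(n_scatter, path_len, mus, mut, mus_p, mut_p):
    """Standard perturbation-MC reweighting for a stored path with
    n_scatter collisions and total length path_len inside the perturbed
    region: w = (mus'/mus)^n * exp(-(mut' - mut) * L).
    mus/mut are baseline scattering/total coefficients; primed values
    are the perturbed ones (all in 1/cm)."""
    return (mus_p / mus) ** n_scatter * math.exp(-(mut_p - mut) * path_len)

# Unperturbed coefficients leave every path weight at exactly 1.
print(pmc_weight(3, 2.0, 10.0, 10.5, 10.0, 10.5))
# Raising absorption (mut' > mut at fixed mus) down-weights long paths.
print(pmc_weight(3, 2.0, 10.0, 10.5, 10.0, 11.0))
```

The accuracy penalty the paper measures arises because this ratio estimator degrades as the perturbation grows, which is what motivates the decoupled (dfMC) alternative.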

Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming

2014-12-29

175

A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.

Schach Von Wittenau, Alexis E. (Livermore, CA)

2003-01-01

176

AlfaMC: a fast alpha particle transport Monte Carlo code

AlfaMC is a Monte Carlo simulation code for the transport of alpha particles. The code is based on the Continuous Slowing Down Approximation and uses the NIST/ASTAR stopping-power database. The code uses a powerful geometrical package allowing the coding of complex geometries. A flexible histogramming package is used which greatly eases the scoring of results. The code is tailored for microdosimetric applications where speed is a key factor. The code is open-source and released under the General Public Licence.
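The continuous-slowing-down transport loop at the heart of such a code can be sketched in a few lines: the alpha's energy decreases along its path as dE/ds = -S(E), with S interpolated from a stopping-power table. The table below uses illustrative placeholder values, not ASTAR data:

```python
import bisect

# Toy CSDA transport: energy decreases along the path as dE/ds = -S(E).
ENERGY = [0.5, 1.0, 2.0, 4.0, 6.0, 8.0]                 # MeV
STOP = [2200.0, 1600.0, 1100.0, 700.0, 550.0, 450.0]    # MeV/cm, toy values

def stopping_power(e):
    """Linear interpolation in the (toy) stopping-power table."""
    i = min(bisect.bisect_left(ENERGY, e), len(ENERGY) - 1)
    if i == 0:
        return STOP[0]
    f = (e - ENERGY[i - 1]) / (ENERGY[i] - ENERGY[i - 1])
    return STOP[i - 1] + f * (STOP[i] - STOP[i - 1])

def csda_range(e0, ds=1e-5, e_cut=0.5):
    """Integrate path length until the alpha slows below e_cut (MeV)."""
    e, s = e0, 0.0
    while e > e_cut:
        e -= stopping_power(e) * ds  # Euler step along the track
        s += ds
    return s

print(f"toy CSDA range of a 6 MeV alpha: {csda_range(6.0):.4f} cm")
```

Because no secondary sampling or large-angle scattering is simulated, the whole history is this deterministic slowing-down loop, which is why CSDA codes are fast enough for microdosimetry.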

Peralta, Luis

2012-01-01

177

Monte Carlo simulation of phonon transport in silicon including a realistic dispersion relation

NASA Astrophysics Data System (ADS)

Thermal conductivities in bulk Si and Si films are analyzed using a Monte Carlo method to solve the phonon Boltzmann transport equation. By taking into account the realistic phonon dispersion relation calculated from the adiabatic bond charge model, along with purely diffuse boundary scattering based on Lambert's law, simulated results in good agreement with the experimental ones were obtained. In addition, it was found that approximated dispersion curves fitted along the [100] direction underestimate the density of states for mobile phonons, which results in a smaller specific heat and a longer phonon mean free path. The resulting impact on the simulation of heat transfer in nanostructures is discussed.

Kukita, K.; Kamakura, Y.

2013-10-01

178

New nuclear data for high-energy all-particle Monte Carlo transport

We are extending the LLNL nuclear data libraries to 250 MeV for neutron and proton interactions with biologically important nuclei, i.e., H, C, N, O, F, P, and Ca. Because of the large number of reaction channels that open with increasing energy, the data are generated in particle production cross section format with energy-angle correlated distributions for the outgoing particles in the laboratory frame of reference. The new Production Cross Section data Library (PCSL) will be used in PEREGRINE, the new all-particle Monte Carlo transport code being developed at LLNL for dose calculation in radiation therapy planning.

Cox, L.J.; Chadwick, M.B.; Resler, D.A. [Lawrence Livermore National Lab., CA (United States)

1994-12-31

179

Hybrid Parallel Programming Models for AMR Neutron Monte-Carlo Transport

NASA Astrophysics Data System (ADS)

This paper deals with High Performance Computing (HPC) applied to neutron transport theory on complex geometries, combining an Adaptive Mesh Refinement (AMR) algorithm with a Monte-Carlo (MC) solver. Several parallelism models are presented and analyzed in this context, among them shared-memory and distributed-memory ones such as Domain Replication and Domain Decomposition, together with hybrid strategies. The study is illustrated by weak and strong scalability tests on complex benchmarks on several thousands of cores on the petaflop supercomputer Tera 100.
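The domain-replication model mentioned above is straightforward to sketch: every worker holds the full problem and simulates an independent batch of histories, and the tallies are summed at the end (domain decomposition would instead partition the geometry and exchange particles between subdomain owners). A toy Python sketch, not the paper's implementation:

```python
from concurrent.futures import ProcessPoolExecutor
import random

def batch_tally(args):
    """One replica: simulate n_histories in a toy 1D slab and tally
    absorptions (absorb with probability 0.3 per step, 10 steps max)."""
    n_histories, seed = args
    rng = random.Random(seed)  # independent stream per replica
    absorbed = 0
    for _ in range(n_histories):
        for _ in range(10):
            if rng.random() < 0.3:
                absorbed += 1
                break
    return absorbed

def run_replicated(total_histories, n_workers=4):
    """Domain replication: identical problem everywhere, merged tallies."""
    per = total_histories // n_workers
    jobs = [(per, seed) for seed in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as ex:
        return sum(ex.map(batch_tally, jobs))

if __name__ == "__main__":
    n = 8_000
    print(f"absorption fraction = {run_replicated(n) / n:.3f}")
```

Replication scales trivially but duplicates the geometry in every memory space, which is exactly the trade-off that pushes large AMR meshes toward decomposition or hybrid schemes.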

Dureau, David; Poëtte, Gaël

2014-06-01

180

Metal-oxide-semiconductor field effect transistor (MOSFET) dosimeters are increasingly utilized in radiation therapy and diagnostic radiology. While it is difficult to characterize the dosimeter responses for monoenergetic sources by experiments, this paper reports a detailed Monte Carlo simulation model of the High-Sensitivity MOSFET dosimeter using Monte Carlo N-Particle (MCNP) 4C. A dose estimator method was used to calculate the dose in the extremely thin sensitive volume. Efforts were made to validate the MCNP model using three experiments: (1) comparison of the simulated dose with the measurement of a Cs-137 source, (2) comparison of the simulated dose with analytical values, and (3) comparison of the simulated energy dependence with theoretical values. Our simulation results show that the MOSFET dosimeter has a maximum response at about 40 keV of photon energy. The energy dependence curve is also found to agree with the predicted value from theory within statistical uncertainties. The angular dependence study shows that the MOSFET dosimeter has a higher response (about 8%) when photons come from the epoxy side, compared with the kapton side for the Cs-137 source. PMID:15191284

Wang, Brian; Kim, Chan-Hyeong; Xu, X. George

2004-05-01

181

NASA Technical Reports Server (NTRS)

A deterministic suite of radiation transport codes, developed at NASA Langley Research Center (LaRC), which describe the transport of electrons, photons, protons, and heavy ions in condensed media is used to simulate exposures from spectral distributions typical of electrons, protons and carbon-oxygen-sulfur (C-O-S) trapped heavy ions in the Jovian radiation environment. The particle transport suite consists of a coupled electron and photon deterministic transport algorithm (CEPTRN) and a coupled light particle and heavy ion deterministic transport algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means for the spacecraft design community to rapidly perform numerous repetitive calculations essential for electron, proton and heavy ion radiation exposure assessments in complex space structures. In this paper, the radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the "105 days at Europa" mission fluence energy spectra provided by JPL are used to produce the corresponding dose-depth curve in silicon behind an aluminum shield of 100 mils (~0.7 g/sq cm). The transport suite can also accept ray-traced thickness files from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point. In that regard, using a low-fidelity CAD model of the Galileo probe, the transport suite was verified by comparing with Monte Carlo (MC) simulations for orbits JOI-J35 of the Galileo extended mission (1996-2001).
For the upcoming EJSM mission with a potential launch date of 2020, the transport suite is used to compute the traditional aluminum-silicon dose-depth calculation as a standard shield-target combination output, as well as the shielding response of high charge (Z) shields such as tantalum (Ta). Finally, a shield optimization algorithm is used to guide the instrument designer with the choice of graded-Z shield analysis.

Norman, Ryan B.; Badavi, Francis F.; Blattnig, Steve R.; Atwell, William

2011-01-01

182

NASA Astrophysics Data System (ADS)

A Langley Research Center (LaRC)-developed deterministic suite of radiation transport codes describing the propagation of electrons, photons, protons and heavy ions in condensed media is used to simulate the exposure from the spectral distributions of the aforementioned particles in the Jovian radiation environment. Based on measurements by the Galileo probe (1995-2003) heavy ion counter (HIC), the choice of trapped heavy ions is limited to carbon, oxygen and sulfur (COS). The deterministic particle transport suite consists of a coupled electron-photon algorithm (CEPTRN) and a coupled light/heavy ion algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means for the spacecraft design community to rapidly perform numerous repetitive calculations essential for electron, photon, proton and heavy ion exposure assessment in a complex space structure. In this paper, the reference radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron and proton spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the JPL-provided Europa mission fluence spectrum is used to produce the corresponding depth-dose curve in silicon behind a default aluminum shield of 100 mils (~0.7 g/cm2). The transport suite can also accept a ray-traced thickness file describing the geometry from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point within the interior of the vehicle.
In that regard, using a low-fidelity CAD model of the Galileo probe generated by the authors, the transport suite was verified against Monte Carlo (MC) simulation for orbits JOI-J35 of the Galileo probe extended mission. For the upcoming EJSM mission with an expected launch date of 2020, the transport suite is used to compute the depth-dose profile for the traditional aluminum-silicon standard shield-target combination, as well as to simulate the shielding response of a high-atomic-number (Z) material such as tantalum (Ta). Finally, a shield optimization algorithm is discussed which can guide instrument designers and fabrication personnel in the choice of graded-Z shield selection and analysis.

Badavi, Francis F.; Blattnig, Steve R.; Atwell, William; Nealy, John E.; Norman, Ryan B.

2011-02-01

183

The generation of photoacoustic signals for imaging objects embedded within tissues is dependent on how well light can penetrate to and deposit energy within an optically absorbing object, such as a blood vessel. This report couples a 3D Monte Carlo simulation of light transport to stress wave generation to predict the acoustic signals received by a detector at the tissue surface. The Monte Carlo simulation allows modeling of optically heterogeneous tissues, and a simple MATLAB™ acoustic algorithm predicts signals reaching a surface detector. An example simulation considers a skin with a pigmented epidermis, a dermis with a background blood perfusion, and a 500-µm-dia. blood vessel centered at a 1-mm depth in the skin. The simulation yields acoustic signals received by a surface detector, which are generated by a pulsed 532-nm laser exposure before and after inserting the blood vessel. A MATLAB™ version of the acoustic algorithm and a link to the 3D Monte Carlo website are provided. PMID:25426426
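The light-to-acoustics coupling rests on the standard initial-pressure relation p0 = Gamma * mu_a * F (Grueneisen parameter times absorption coefficient times fluence), with each voxel's contribution reaching a surface detector after a delay of distance over sound speed. A toy sketch with illustrative numbers (not the report's MATLAB algorithm):

```python
SPEED_OF_SOUND = 1.5e5   # cm/s in soft tissue (~1.5 mm/us)
GRUENEISEN = 0.2         # dimensionless, typical soft-tissue ballpark

def initial_pressure(mu_a, fluence):
    """p0 = Gamma * mu_a * F: pressure per voxel from absorbed energy.
    mu_a in 1/cm, fluence in J/cm^2 (toy units throughout)."""
    return GRUENEISEN * mu_a * fluence

def arrival_time(depth_cm):
    """Time-of-flight delay from a source voxel to a surface detector."""
    return depth_cm / SPEED_OF_SOUND

# 500-um vessel at 1 mm depth; toy contrast: blood absorbs ~10x the dermis
# at 532 nm, so the vessel dominates the detected signal.
p_vessel = initial_pressure(mu_a=10.0, fluence=0.02)
p_dermis = initial_pressure(mu_a=1.0, fluence=0.02)
print(p_vessel / p_dermis)
print(f"{arrival_time(0.1) * 1e6:.3f} us")  # ~0.667 us from 1 mm depth
```

In the full simulation the fluence map F comes from the 3D Monte Carlo light transport, so shadowing and scattering automatically shape the pressure sources before the acoustic propagation step.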

Jacques, Steven L.

2014-01-01


185

Monte Carlo simulations of transport of the bremsstrahlung produced by relativistic runaway electron avalanches are performed for altitudes up to the orbit altitudes where terrestrial gamma-ray flashes (TGFs) have been detected aboard satellites. The photon flux per runaway electron and angular distribution of photons on a hemisphere of radius similar to that of the satellite orbits are calculated as functions of the source altitude z. The calculations yield general results, which are recommended for use in TGF data analysis. The altitude z and polar angle are determined for which the calculated bremsstrahlung spectra and mean photon energies agree with TGF measurements. The correlation of TGFs with variations of the vertical dipole moment of a thundercloud is analyzed. We show that, in agreement with observations, the detected TGFs can be produced in the fields of thunderclouds with charges much smaller than 100 C and that TGFs are not necessarily correlated with the occurrence of blue jets and red sprites.

Babich, L. P., E-mail: babich@elph.vniief.ru; Donskoy, E. N.; Kutsyk, I. M. [All-Russian Research Institute of Experimental Physics, Russian Federal Nuclear Center (Russian Federation)

2008-07-15

186

Purpose: By using Monte Carlo simulations, the authors investigated the energy and angular dependence of the response of plastic scintillation detectors (PSDs) in photon beams. Methods: Three PSDs were modeled in this study: a plastic scintillator (BC-400) and a scintillating fiber (BCF-12), both coupled to a plastic-core optical fiber stem, and a plastic scintillator (BC-400) coupled to an air-core optical fiber stem with a silica tube coated with silver. The authors then calculated, with low statistical uncertainty, the energy and angular dependences of the PSDs' responses in a water phantom. For energy dependence, the response of the detectors is calculated as the detector dose per unit water dose. The perturbation caused by the optical fiber stem connected to the PSD to guide the optical light to a photodetector was studied in simulations using different optical fiber materials. Results: For the energy dependence of the PSDs in photon beams, the PSDs with plastic-core fiber have excellent energy independence, within about 0.5%, at photon energies ranging from 300 keV (monoenergetic) to 18 MV (linac beam). The PSD with an air-core optical fiber with a silica tube also has good energy independence, within 1%, in the same photon energy range. For the angular dependence, the relative response of all three modeled PSDs is within 2% for all angles in a 6 MV photon beam. This is also true in a 300 keV monoenergetic photon beam for the PSDs with plastic-core fiber. For the PSD with an air-core fiber with a silica tube in the 300 keV beam, the relative response varies within 1% for most angles, except when the fiber stem points directly at the radiation source, in which case the PSD may over-respond by more than 10%. Conclusions: At the ±1% level, no beam energy correction is necessary for the response of any of the three PSDs modeled in this study over photon energies ranging from 200 keV (monoenergetic) to 18 MV (linac beam).
The PSD would be even closer to water-equivalent if there were a silica tube around the sensitive volume. The angular dependence of the response of the three PSDs in a 6 MV photon beam is not of concern at the 2% level.

Wang, Lilie L. W.; Klein, David; Beddar, A. Sam [Department of Radiation Physics, University of Texas M. D. Anderson Cancer Center, Houston, Texas 77030 (United States)

2010-10-15

187

NASA Astrophysics Data System (ADS)

The purpose of this study was to present a Monte Carlo (MC)-based optimization procedure to improve conventional treatment plans for accelerated partial breast irradiation (APBI) using modulated electron beams alone or combined with modulated photon beams, to be delivered by a single collimation device, i.e. a photon multi-leaf collimator (xMLC) already installed in a standard hospital. Five left-sided breast cases were retrospectively planned using modulated photon and/or electron beams with an in-house treatment planning system (TPS) called CARMEN, which is based on MC simulations. For comparison, the same cases were also planned with a PINNACLE TPS using conventional inverse intensity-modulated radiation therapy (IMRT). Normal tissue complication probability for pericarditis, pneumonitis and breast fibrosis was calculated. CARMEN plans showed acceptable planning target volume (PTV) coverage similar to that of conventional IMRT plans, with 90% of the PTV volume covered by the prescribed dose (Dp). The heart and ipsilateral lung volumes receiving 5% Dp and 15% Dp, respectively, were 3.2-3.6 times lower for CARMEN plans. The ipsilateral breast volumes receiving 50% Dp and 100% Dp were on average 1.4-1.7 times lower for CARMEN plans. Skin and whole-body low-dose volumes were also reduced. Modulated photon and/or electron beams planned with the CARMEN TPS improve APBI treatments by increasing normal tissue sparing while maintaining the PTV coverage achieved by other techniques. The use of the xMLC, already installed in the linac, to collimate photon and electron beams favors the clinical implementation of APBI with the highest efficiency.

Atriana Palma, Bianey; Ureba Sánchez, Ana; Salguero, Francisco Javier; Arráns, Rafael; Míguez Sánchez, Carlos; Walls Zurita, Amadeo; Romero Hermida, María Isabel; Leal, Antonio

2012-03-01

188

Monte Carlo simulations of electron transport for electron beam-induced deposition of nanostructures

NASA Astrophysics Data System (ADS)

Tungsten hexacarbonyl, W(CO)6, is a particularly interesting precursor molecule for electron beam-induced deposition of nanoparticles, since it yields deposits whose electronic properties can be tuned from metallic to insulating. However, the growth of tungsten nanostructures poses experimental difficulties: the metal content of the nanostructure is variable. Furthermore, fluctuations in the tungsten content of the deposits seem to trigger the growth of the nanostructure. Monte Carlo simulations of electron transport have been carried out with the radiation-transport code Penelope in order to study the charge and energy deposition of the electron beam in the deposit and in the substrate. These simulations allow us to examine the conditions under which nanostructure growth takes place and to highlight the relevant parameters in the process.

Salvat-Pujol, Francesc; Jeschke, Harald O.; Valenti, Roser

2013-03-01

189

Two multidimensional Monte Carlo simulation codes, (a) a neutral (H2, H) transport code and (b) a negative ion (H-) transport code, have been developed. This article focuses on recent simulation results obtained with the neutral transport code for H- production in a large, hybrid negative ion source, "Camembert III". Two-dimensional spatial profiles of vibrationally excited molecules H2(v) and of H- production are obtained for a

A. Hatayama; T. Sakurabayashi; Y. Ishi; K. Makino; M. Ogasawara; M. Bacal

2002-01-01

190

NASA Astrophysics Data System (ADS)

To a large extent, the flow and transport behaviour within a subsurface reservoir is governed by its permeability. Typically, permeability measurements of a subsurface reservoir are affordable at few spatial locations only. Due to this lack of information, permeability fields are preferably described by stochastic models rather than deterministically. A stochastic method is needed to assess the transition of the input uncertainty in permeability through the system of partial differential equations describing flow and transport to the output quantity of interest. Monte Carlo (MC) is an established method for quantifying uncertainty arising in subsurface flow and transport problems. Although robust and easy to implement, MC suffers from slow statistical convergence. To reduce the computational cost of MC, the multilevel Monte Carlo (MLMC) method was introduced. Instead of sampling a random output quantity of interest on the finest affordable grid as in case of MC, MLMC operates on a hierarchy of grids. If parts of the sampling process are successfully delegated to coarser grids where sampling is inexpensive, MLMC can dramatically outperform MC. MLMC has proven to accelerate MC for several applications including integration problems, stochastic ordinary differential equations in finance as well as stochastic elliptic and hyperbolic partial differential equations. In this study, MLMC is combined with a reservoir simulator to assess uncertain two-phase (water/oil) flow and transport within a random permeability field. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. It is found that MLMC yields significant speed-ups with respect to MC while providing results of essentially equal accuracy. This finding holds true not only for one specific Gaussian logarithmic permeability model but for a range of correlation lengths and variances.
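The MLMC telescoping idea can be sketched with a toy "solver" whose discretisation bias decays with level: the fine-level expectation is written as the coarsest-level expectation plus a sum of level-difference corrections, each estimated with its own sample count. All functions below are illustrative stand-ins for a reservoir simulator, not the study's code:

```python
import random

def solver(omega, level):
    """Toy stand-in for a flow/transport solve on grid level `level`:
    the true answer omega**2 plus an O(h) discretisation error."""
    h = 2.0 ** -level
    return omega ** 2 + h * omega

def mlmc(n_samples_per_level, max_level, rng):
    """Telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    total = 0.0
    for level in range(max_level + 1):
        n = n_samples_per_level[level]
        acc = 0.0
        for _ in range(n):
            omega = rng.gauss(0.0, 1.0)
            # Key coupling trick: the SAME random input feeds both grids,
            # so the difference P_l - P_{l-1} has small variance.
            coarse = solver(omega, level - 1) if level > 0 else 0.0
            acc += solver(omega, level) - coarse
        total += acc / n
    return total

rng = random.Random(42)
# Many cheap samples on coarse levels, few expensive ones on fine levels.
print(mlmc([4000, 1000, 250, 60], 3, rng))  # estimates E[omega^2] = 1
```

The cost saving comes entirely from the coupling: because the level differences are small and cheap levels absorb most of the variance, far fewer fine-grid solves are needed than plain MC would require.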

Müller, Florian; Jenny, Patrick; Meyer, Daniel

2014-05-01

191

Creating and using a type of free-form geometry in Monte Carlo particle transport

While the reactor physicists were fine-tuning the Monte Carlo paradigm for particle transport in regular geometries, the computer scientists were developing rendering algorithms to display extremely realistic renditions of irregular objects ranging from the ubiquitous teakettle to dynamic Jell-O. Even though the modeling methods share a common basis, the initial strategies each discipline developed for variance reduction were remarkably different. Initially, the reactor physicist used Russian roulette, importance sampling, particle splitting, and rejection techniques. In the early stages of development, the computer scientist relied primarily on rejection techniques, including a very elegant hierarchical construction and sampling method. This sampling method allowed the computer scientist to viably track particles through irregular geometries in three-dimensional space, while the initial methods developed by the reactor physicists would only allow for efficient searches through analytical surfaces or objects. As time goes by, it appears there has been some merging of the variance reduction strategies between the two disciplines. This is an early (possibly first) incorporation of geometric hierarchical construction and sampling into the reactor physicists' Monte Carlo transport model that permits efficient tracking through nonuniform rational B-spline surfaces in three-dimensional space. After some discussion, the results from this model are compared with experiments and the model employing implicit (analytical) geometric representation.
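The hierarchical construction pays off because cheap bounding-volume tests prune away most of the expensive free-form surface intersections. A minimal sketch of that idea, using an axis-aligned bounding-box slab test (the costly B-spline intersection itself is only hinted at in a comment):

```python
def ray_hits_box(origin, direction, lo, hi):
    """Slab test for an axis-aligned bounding box along a ray (t >= 0)."""
    tmin, tmax = 0.0, float("inf")
    for o, d, a, b in zip(origin, direction, lo, hi):
        if abs(d) < 1e-12:
            if not (a <= o <= b):
                return False          # parallel ray outside this slab
            continue
        t1, t2 = (a - o) / d, (b - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        tmin, tmax = max(tmin, t1), min(tmax, t2)
        if tmin > tmax:
            return False              # slab intervals do not overlap
    return True

# Two patch bounding boxes; only the first lies along the +x ray.
boxes = [((1.0, -0.5, -0.5), (2.0, 0.5, 0.5)),
         ((1.0, 3.0, 3.0), (2.0, 4.0, 4.0))]
ray_o, ray_d = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
candidates = [i for i, (lo, hi) in enumerate(boxes)
              if ray_hits_box(ray_o, ray_d, lo, hi)]
print(candidates)  # only the surviving boxes get the costly
                   # free-form (e.g. NURBS) intersection test
```

Nesting such boxes into a tree gives the hierarchical sampling the computer-graphics community developed, which is what makes particle tracking through irregular geometry viable.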

Wessol, D.E.; Wheeler, F.J. (EG and G Idaho, Inc., Idaho Falls (United States))

1993-04-01

192

The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: simulation system for emission tomography (SimSET) and GEANT4 application for emission tomography (GATE). Specifically, we retained the strength of GATE in describing complex collimator-detector configurations to meet the anticipated needs for studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE in simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations, and, more importantly, a significant computational speedup (up to ~10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data was also observed. In conclusion, the authors have successfully integrated the SimSET photon history generator in GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques. PMID:18697552

Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.

2008-01-01

193

An application of Fleck effective scattering to the difference formulation for photon transport

We introduce a new treatment of the difference formulation [1] for photon radiation transport without scattering in 1D slab geometry that is closely analogous to that of Fleck and Cummings [2] for the traditional formulation. The resulting form is free of implicit source terms and has the familiar effective scattering of the field of transport.

Daffin, F

2006-10-16

194

Radiation-induced "zero-resistance state" and the photon-assisted transport.

We demonstrate that the radiation-induced "zero-resistance state" observed in a two-dimensional electron gas is a result of the nontrivial structure of the density of states of the systems and the photon-assisted transport. A toy model of a quantum tunneling junction with oscillatory density of states in leads catches most of the important features of the experiments. We present a generalized Kubo-Greenwood conductivity formula for the photon-assisted transport in a general system and show essentially the same nature of the transport anomaly in a uniform system. PMID:14525265

Shi, Junren; Xie, X C

2003-08-22

195

The Institute for Radiological Protection and Nuclear Safety owns two facilities producing realistic mixed neutron-photon radiation fields, CANEL, an accelerator driven moderator modular device, and SIGMA, a graphite moderated americium-beryllium assembly. These fields are representative of some of those encountered at nuclear workplaces, and the corresponding facilities are designed and used for calibration of various instruments, such as survey meters, personal dosimeters or spectrometric devices. In the framework of the European project EVIDOS, irradiations of personal dosimeters were performed at CANEL and SIGMA. Monte Carlo calculations were performed to estimate the reference values of the personal dose equivalent at both facilities. The Hp(10) values were calculated for three different angular positions, 0 degrees, 45 degrees and 75 degrees, of an ICRU phantom located at the position of irradiation. PMID:17578872

Lacoste, V; Gressier, V

2007-01-01

196

CEPXS/ONELD is the only discrete ordinates code capable of modelling the fully-coupled electron-photon cascade at high energies. Quantities that are related to the particle flux such as dose and charge deposition can readily be obtained. This deterministic code is much faster than comparable Monte Carlo codes. The unique adjoint transport capability of CEPXS/ONELD also enables response functions to be readily calculated. Version 2.0 of the CEPXS/ONELD code package has been designed to allow users who are not expert in discrete ordinates methods to fully exploit the code's capabilities. 14 refs., 15 figs.

Lorence, L.J. Jr.

1991-01-01

197

3D electro-thermal Monte Carlo study of transport in confined silicon devices

NASA Astrophysics Data System (ADS)

The simultaneous explosion of portable microelectronics devices and the rapid shrinking of microprocessor size have provided a tremendous motivation to scientists and engineers to continue the down-scaling of these devices. For several decades, innovations have allowed components such as transistors to be physically reduced in size, allowing the famous Moore's law to hold true. As these transistors approach the atomic scale, however, further reduction becomes less probable and practical. As new technologies overcome these limitations, they face new, unexpected problems, including the ability to accurately simulate and predict the behavior of these devices, and to manage the heat they generate. This work uses a 3D Monte Carlo (MC) simulator to investigate the electro-thermal behavior of quasi-one-dimensional electron gas (1DEG) multigate MOSFETs. In order to study these highly confined architectures, the inclusion of quantum correction becomes essential. To better capture the influence of carrier confinement, the electrostatically quantum-corrected full-band MC model has the added feature of being able to incorporate subband scattering. The scattering rate selection introduces quantum correction into carrier movement. In addition to the quantum effects, scaling introduces thermal management issues due to the surge in power dissipation. Solving these problems will continue to bring improvements in battery life, performance, and size constraints of future devices. We have coupled our electron transport Monte Carlo simulation to Aksamija's phonon transport so that we may accurately and efficiently study carrier transport, heat generation, and other effects at the transistor level. This coupling utilizes anharmonic phonon decay and temperature dependent scattering rates. 
One immediate advantage of our coupled electro-thermal Monte Carlo simulator is its ability to provide an accurate description of the spatial variation of self-heating and its effect on non-equilibrium carrier dynamics, a key determinant in device performance. The dependence of short-channel effects and Joule heating on the lateral scaling of the cross-section is specifically explored in this work. Finally, this dissertation presents the basic tradeoffs among various n-channel multigate architectures with square cross-sectional side lengths ranging from 30 nm down to 5 nm.

Mohamed, Mohamed Y.

198

Estimates of radiation absorbed doses from radionuclides internally deposited in a pregnant woman and her fetus are very important due to elevated fetal radiosensitivity. This paper reports a set of specific absorbed fractions (SAFs) for use with the dosimetry schema developed by the Society of Nuclear Medicine's Medical Internal Radiation Dose (MIRD) Committee. The calculations were based on three newly constructed pregnant female anatomic models, called RPI-P3, RPI-P6, and RPI-P9, that represent adult females at 3-, 6-, and 9-month gestational periods, respectively. Advanced Boundary REPresentation (BREP) surface-geometry modeling methods were used to create anatomically realistic geometries and organ volumes that were carefully adjusted to agree with the latest ICRP reference values. A Monte Carlo user code, EGS4-VLSI, was used to simulate internal photon emitters ranging from 10 keV to 4 MeV. SAF values were calculated and compared with previous data derived from stylized models of simplified geometries and with a model of a 7.5-month pregnant female developed previously from partial-body CT images. The results show considerable differences between these models for low energy photons, but generally good agreement at higher energies. These differences are caused mainly by different organ shapes and positions. Other factors, such as the organ mass, the source-to-target-organ centroid distance, and the Monte Carlo code used in each study, played lesser roles in the observed differences in these. Since the SAF values reported in this study are based on models that are anatomically more realistic than previous models, these data are recommended for future applications as standard reference values in internal dosimetry involving pregnant females. PMID:18697546

Shi, C Y; Xu, X George; Stabin, Michael G

2008-07-01


200

Neutrino transport in type II supernovae: Boltzmann solver vs. Monte Carlo method

We have coded a Boltzmann solver based on a finite difference scheme (S_N method) aiming at calculations of neutrino transport in type II supernovae. Close comparison between the Boltzmann solver and a Monte Carlo transport code has been made for realistic atmospheres of post bounce core models under the assumption of a static background. We have also investigated in detail the dependence of the results on the numbers of radial, angular, and energy grid points and the way to discretize the spatial advection term which is used in the Boltzmann solver. A general relativistic calculation has been done for one of the models. We find overall good agreement between the two methods. However, because of a relatively small number of angular grid points (which is inevitable due to limitations of the computation time) the Boltzmann solver tends to underestimate the flux factor and the Eddington factor outside the (mean) ``neutrinosphere'' where the angular distribution of the neutrinos becomes highly anisotropic. This fact suggests that one has to be cautious in applying the Boltzmann solver to a calculation of the neutrino heating in the hot-bubble region because it might tend to overestimate the local energy deposition rate. A comparison shows that this trend is opposite to the results obtained with a multi-group flux-limited diffusion approximation of neutrino transport. The accuracy of the Boltzmann solver can be considerably improved by using a variable angular mesh to increase the angular resolution in the semi-transparent regime.

Shoichi Yamada; Hans-Thomas Janka; Hideyuki Suzuki

1998-09-02

201

MRED (Monte Carlo Radiative Energy Deposition) is Vanderbilt University's Geant4 application for simulating radiation events in semiconductors. Geant4 comprises the best available computational physics models for the transport of radiation through matter. In addition to basic radiation transport physics contained in the Geant4 core, MRED has the capability to track energy loss in tetrahedral geometric objects, includes a cross section biasing and track weighting technique for variance reduction, and additional features relevant to semiconductor device applications. The crucial element of predicting Single Event Upset (SEU) parameters using radiation transport software is the creation of a dosimetry model that accurately approximates the net collected charge at transistor contacts as a function of deposited energy. The dosimetry technique described here is the multiple sensitive volume (MSV) model. It is shown to be a reasonable approximation of the charge collection process and its parameters can be calibrated to experimental measurements of SEU cross sections. The MSV model, within the framework of MRED, is examined for heavy ion and high-energy proton SEU measurements of a static random access memory.

Warren, Kevin; Reed, Robert; Weller, Robert; Mendenhall, Marcus; Sierawski, Brian; Schrimpf, Ronald [ISDE, Vanderbilt University, 1025 16th Avenue South, Nashville, TN 37212 (United States)

2011-06-01

202

Monte Carlo modeling of transport in PbSe nanocrystal films

A Monte Carlo hopping model was developed to simulate electron and hole transport in nanocrystalline PbSe films. Transport is carried out as a series of thermally activated hopping events between neighboring sites on a cubic lattice. Each site, representing an individual nanocrystal, is assigned a size-dependent electronic structure, and the effects of particle size, charging, interparticle coupling, and energetic disorder on electron and hole mobilities were investigated. Results of simulated field-effect measurements confirm that electron mobilities and conductivities at constant carrier densities increase with particle diameter by an order of magnitude up to 5 nm and begin to decrease above 6 nm. We find that as particle size increases, fewer hops are required to traverse the same distance and that site energy disorder significantly inhibits transport in films composed of smaller nanoparticles. The dip in mobilities and conductivities at larger particle sizes can be explained by a decrease in tunneling amplitudes and by charging penalties that are incurred more frequently when carriers are confined to fewer, larger nanoparticles. Using nearly the same parameter values as the electron simulations, hole mobility simulations reproduce measured mobilities that increase monotonically with particle size over two orders of magnitude.
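The thermally activated hopping described above can be sketched as a single kinetic Monte Carlo step. The Miller-Abrahams-style rate, the attempt frequency, and the 1D chain below are illustrative assumptions, not the authors' actual model (their lattice is cubic, with six neighbors rather than two):

```python
import math
import random

NU0 = 1e12  # attempt frequency in Hz (illustrative)
KT = 0.025  # thermal energy in eV, roughly room temperature

def hop_rate(dE, kT=KT, nu0=NU0):
    """Thermally activated hop rate in the Miller-Abrahams spirit:
    uphill hops (dE > 0) are Boltzmann-suppressed, downhill hops
    proceed at the attempt frequency (tunneling factor folded into nu0)."""
    return nu0 * math.exp(-dE / kT) if dE > 0 else nu0

def kmc_step(site_energies, pos, rng=random.random):
    """One kinetic Monte Carlo hop on a 1D chain of sites: pick a
    neighboring site with probability proportional to its hop rate."""
    moves, rates = [], []
    for d in (-1, +1):
        j = pos + d
        if 0 <= j < len(site_energies):
            moves.append(j)
            rates.append(hop_rate(site_energies[j] - site_energies[pos]))
    r = rng() * sum(rates)
    acc = 0.0
    for j, rate in zip(moves, rates):
        acc += rate
        if r <= acc:
            return j
    return moves[-1]  # guard against floating-point round-off
```

Size-dependent electronic structure, charging penalties, and interparticle coupling would enter through the site energies and the rate prefactor.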

Carbone, I., E-mail: icarbone@ucsc.edu; Carter, S. A. [University of California, Santa Cruz, California 95060 (United States); Zimanyi, G. T. [University of California, Davis, California 95616 (United States)

2013-11-21

203

MCNPX Monte Carlo simulations of particle transport in SiC semiconductor detectors of fast neutrons

NASA Astrophysics Data System (ADS)

The aim of this paper was to investigate particle transport properties of a fast neutron detector based on silicon carbide. MCNPX (Monte Carlo N-Particle eXtended) code was used in our study because it allows seamless particle transport, thus not only interacting neutrons can be inspected but also secondary particles can be banked for subsequent transport. Modelling of the fast-neutron response of a SiC detector was carried out for fast neutrons produced by 239Pu-Be source with the mean energy of about 4.3 MeV. Using the MCNPX code, the following quantities have been calculated: secondary particle flux densities, reaction rates of elastic/inelastic scattering and other nuclear reactions, distribution of residual ions, deposited energy and energy distribution of pulses. The values of reaction rates calculated for different types of reactions and resulting energy deposition values showed that the incident neutrons transfer part of the carried energy predominantly via elastic scattering on silicon and carbon atoms. Other fast-neutron induced reactions include inelastic scattering and nuclear reactions followed by production of α-particles and protons. Silicon and carbon recoil atoms, α-particles and protons are charged particles which contribute to the detector response. It was demonstrated that although the bare SiC material can register fast neutrons directly, its detection efficiency can be enhanced if it is covered by an appropriate conversion layer. Comparison of the simulation results with experimental data was successfully accomplished.

Sedláčková, K.; Zaťko, B.; Šagátová, A.; Pavlovič, M.; Nečas, V.; Stacho, M.

2014-05-01

204

NASA Astrophysics Data System (ADS)

Nuclear heating evaluation by Monte-Carlo simulation requires coupled neutron-photon calculation so as to take into account the contribution of secondary photons. Nuclear data are essential for a good calculation of neutron and photon energy deposition and for secondary photon generation. However, a number of isotopes of the most common nuclear data libraries happen to be affected by energy and/or momentum conservation errors concerning the photon production or inaccurate thresholds for photon emission cross sections. In this paper, we perform a comprehensive survey of the three evaluations JEFF3.1.1, JEFF3.2T2 (beta version) and ENDF/B-VII.1, over 142 isotopes. The aim of this survey is, on the one hand, to check the existence of photon production data by neutron reaction and, on the other hand, to verify the consistency of these data using the kinematic limits method recently implemented in the TRIPOLI-4 Monte-Carlo code, developed by CEA (Saclay center). Then, the impact of these inconsistencies affecting energy deposition scores has been estimated for two materials using a specific nuclear heating calculation scheme in the context of the OSIRIS Material Testing Reactor (CEA/Saclay).
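At its simplest, the kinematic-limit idea described above amounts to checking that secondary photons never carry away more energy than the reaction makes available. A toy version of such a check; the function name and the simplified energy balance are assumptions for illustration, not TRIPOLI-4's actual implementation:

```python
def photon_production_consistent(e_neutron, q_value, secondary_photon_energies,
                                 tol=1e-9):
    """Simplified kinematic-limit check: the total energy carried away by
    secondary photons must not exceed the energy available to the reaction
    (incident neutron energy plus reaction Q-value, e.g. in MeV).
    A toy stand-in for the library-consistency test described above."""
    available = e_neutron + q_value
    return sum(secondary_photon_energies) <= available + tol
```

An evaluated-library entry failing such a test would inject or delete energy in coupled neutron-photon heating scores, which is precisely the kind of inconsistency the survey hunts for.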

Péron, A.; Malouch, F.; Zoia, A.; Diop, C. M.

2014-06-01

205

The role of Monte Carlo simulation of electron transport in radiation dosimetry.

A brief overview is given of the role in radiation dosimetry of electron transport simulations using the Monte Carlo technique. Two areas are discussed in some detail. The first is the calculation of stopping-power ratios for use in ion chamber dosimetry. The uncertainty in stopping-power ratios is discussed with attention being drawn to the fact that the relative uncertainty in restricted collision stopping powers is greater than that in unrestricted stopping powers if the major source of uncertainty is the density effect correction. Using ICRU Report 37 stopping powers and electron spectra calculated in a small cylinder of graphite, the value of the Spencer-Attix graphite to air stopping-power ratio in a 60Co beam is found to be 1.0021 for an assumed graphite density of 1.70 g/cm(3) and 0.23% less for an assumed density of 2.26 g/cm(3). The second area discussed is the feasibility of using Monte Carlo techniques to calculate dose patterns in a patient undergoing electron beam radiotherapy. PMID:1661717

Rogers, D W

1991-01-01

206

NASA Astrophysics Data System (ADS)

This review presents in a comprehensive and tutorial form the basic principles of the Monte Carlo method, as applied to the solution of transport problems in semiconductors. Sufficient details of a typical Monte Carlo simulation have been given to allow the interested reader to create his own Monte Carlo program, and the method has been briefly compared with alternative theoretical techniques. Applications have been limited to the case of covalent semiconductors. Particular attention has been paid to the evaluation of the integrated scattering probabilities, for which final expressions are given in a form suitable for their direct use. A collection of results obtained with Monte Carlo simulations is presented, with the aim of showing the power of the method in obtaining physical insights into the processes under investigation. Special technical aspects of the method and updated microscopic models have been treated in some appendixes.

Jacoboni, Carlo; Reggiani, Lino

1983-07-01

207

NASA Astrophysics Data System (ADS)

Research has been conducted using a suite of modeling codes to study LLAGN accretion disks, specifically for Sagittarius A* and M87's core. These include a GRMHD accretion flow evolver, Monte Carlo radiation transport code, and localized shearing box simulations. Modifications to these codes will be discussed, which make them particularly applicable to these types of sources. Results of interest regarding large scale flaring mechanisms in AGN disks, as well as kinetic scale particle heating methods, will be discussed and analyzed. Specifically, we cite global density changes due to mass accretion rate variations as the likely source of LLAGN flaring behavior, while double-Maxwellian electron distributions heated by magnetic reconnection may explain the high energy emissions of these accretion disks.

Hilburn, Guy L.

2012-01-01

208

Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access

One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, the authors investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.

Romano, Paul K [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory; Forget, Benoit [MIT]

2010-01-01

209

Monte Carlo simulation study of spin transport in multilayer graphene with Bernal stacking

NASA Astrophysics Data System (ADS)

In this work, we model spin transport in multilayer graphene (MLG) stacks with Bernal (ABA) stacking using semi-classical Monte Carlo simulations, and the results are compared to bilayer graphene. Both the D'yakonov-Perel and Elliott-Yafet mechanisms for spin relaxation are considered for modeling purposes. Varying the number of layers alters the band structure of the MLG. We study the effect of the band structures in determining the spin relaxation lengths of the different multilayer graphene stacks. We observe that as the number of layers increases the spin relaxation length increases up to a maximum value for 16 layers and then stays the same irrespective of the number of layers. We explain this trend in terms of the changing band structures which affect the scattering rates of the spin carriers.

Misra, Soumya; Ghosh, Bahniman; Nandal, Vikas; Dubey, Lalit

2012-07-01

210

NASA Astrophysics Data System (ADS)

Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.
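The core idea of condensed history, collapsing many small-angle collisions into one aggregate sample over a track length, can be illustrated in a few lines. The Gaussian single-collision model below is an assumption chosen so the aggregate distribution is exact; it is not the ITS algorithm:

```python
import math
import random

def analog_deflection(n_collisions, sigma, rng):
    """Analog transport: accumulate n independent small-angle
    deflections, each Gaussian with standard deviation sigma
    (an illustrative single-scattering model)."""
    return sum(rng.gauss(0.0, sigma) for _ in range(n_collisions))

def condensed_deflection(n_collisions, sigma, rng):
    """Condensed history: replace the n collisions with one draw from
    N(0, n * sigma**2), which matches the first two moments of the
    analog sum -- the infinite-medium, small-angle approximation that
    breaks down near material boundaries."""
    return rng.gauss(0.0, sigma * math.sqrt(n_collisions))
```

One condensed draw costs a single random sample instead of n, which is the whole appeal; the abstract's point is that this infinite-medium shortcut needs special handling (or a GBFP-style reformulation) whenever a boundary interrupts the track.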

Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.

2014-06-01

211

The aim of the present study is to demonstrate the potential of accelerated dose calculations, using the fast Monte Carlo (MC) code referred to as PENFAST, rather than the conventional MC code PENELOPE, without losing accuracy in the computed dose. For this purpose, experimental measurements of dose distributions in homogeneous and inhomogeneous phantoms were compared with simulated results using both PENELOPE and PENFAST. The simulations and experiments were performed using a Saturne 43 linac operated at 12 MV (photons), and at 18 MeV (electrons). Pre-calculated phase space files (PSFs) were used as input data to both the PENELOPE and PENFAST dose simulations. Since depth-dose and dose profile comparisons between simulations and measurements in water were found to be in good agreement (within ±1% or 1 mm), the PSF calculation is considered to have been validated. In addition, measured dose distributions were compared to simulated results in a set of clinically relevant, inhomogeneous phantoms, consisting of lung and bone heterogeneities in a water tank. In general, the PENFAST results agree to within 1% or 1 mm with those produced by PENELOPE, and to within 2% or 2 mm with measured values. Our study thus provides a pre-clinical validation of the PENFAST code. It also demonstrates that PENFAST provides accurate results for both photon and electron beams, equivalent to those obtained with PENELOPE. CPU time comparisons between both MC codes show that PENFAST is generally about 9-21 times faster than PENELOPE. PMID:19342258

Habib, B; Poumarede, B; Tola, F; Barthe, J

2010-01-01

212

Purpose: To determine detector-specific output correction factors, k_{Q_clin,Q_msr}^{f_clin,f_msr}, in 6 MV small photon beams for air and liquid ionization chambers, silicon diodes, and diamond detectors from two manufacturers. Methods: Field output factors, defined according to the international formalism published by Alfonso et al. [Med. Phys. 35, 5179-5186 (2008)], relate the dosimetry of small photon beams to that of the machine-specific reference field; they include a correction to measured ratios of detector readings, conventionally used as output factors in broad beams. Output correction factors were calculated with the PENELOPE Monte Carlo (MC) system with a statistical uncertainty (type-A) of 0.15% or lower. The geometries of the detectors were coded using blueprints provided by the manufacturers, and phase-space files for field sizes between 0.5 × 0.5 cm^2 and 10 × 10 cm^2 from a Varian Clinac iX 6 MV linac were used as sources. The output correction factors were determined by scoring the absorbed dose within a detector and to a small water volume in the absence of the detector, both at a depth of 10 cm, for each small field and for the reference beam of 10 × 10 cm^2. Results: The Monte Carlo calculated output correction factors for the liquid ionization chamber and the diamond detector were within about ±1% of unity even for the smallest field sizes. Corrections were found to be significant for small air ionization chambers due to their cavity dimensions, as expected. The correction factors for silicon diodes varied with the detector type (shielded or unshielded), confirming the findings by other authors; different corrections for the detectors from the two manufacturers were obtained. 
The differences in the calculated factors for the various detectors were analyzed thoroughly and whenever possible the results were compared to published data, often calculated for different accelerators and using the EGSnrc MC system. The differences were used to estimate a type-B uncertainty for the correction factors. Together with the type-A uncertainty from the Monte Carlo calculations, an estimation of the combined standard uncertainty was made, assigned to the mean correction factors from various estimates. Conclusions: The present work provides a consistent and specific set of data for the output correction factors of a broad set of detectors in a Varian Clinac iX 6 MV accelerator and contributes to improving the understanding of the physics of small photon beams. The correction factors cannot in general be neglected for any detector and, as expected, their magnitude increases with decreasing field size. Due to the reduced number of clinical accelerator types currently available, it is suggested that detector output correction factors be given specifically for linac models and field sizes, rather than for a beam quality specifier that necessarily varies with the accelerator type and field size due to the different electron spot dimensions and photon collimation systems used by each accelerator model.

Benmakhlouf, Hamza, E-mail: hamza.benmakhlouf@karolinska.se [Department of Medical Physics, Karolinska University Hospital, SE-171 76 Stockholm, Sweden, and Department of Physics, Medical Radiation Physics, Stockholm University and Karolinska Institute, SE-171 76 Stockholm (Sweden)]; Sempau, Josep [Institut de Tècniques Energètiques, Universitat Politècnica de Catalunya, Diagonal 647, E-08028, Barcelona (Spain)]; Andreo, Pedro [Department of Physics, Medical Radiation Physics, Stockholm University and Karolinska Institute, SE-171 76 Stockholm (Sweden)]

2014-04-15

213

Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons

NASA Astrophysics Data System (ADS)

We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011); S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/m K. This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.
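The free-flight-and-scatter loop underlying phonon (and, analogously, photon) Monte Carlo can be sketched in two primitives: sampling an exponential free-flight distance, and selecting which mechanism scatters. The function names and the two-mechanism example below are illustrative, not the authors' code:

```python
import math
import random

def free_flight(sigma_total, rng=random.random):
    """Sample a free-flight distance from p(s) = sigma * exp(-sigma * s),
    where sigma_total is the total scattering rate per unit length
    (the inverse mean free path)."""
    return -math.log(1.0 - rng()) / sigma_total

def choose_mechanism(partial_rates, rng=random.random):
    """Pick a scattering mechanism (e.g. edge, three-phonon, isotope,
    grain boundary) with probability proportional to its partial rate."""
    total = sum(partial_rates.values())
    r = rng() * total
    acc = 0.0
    for name, rate in partial_rates.items():
        acc += rate
        if r <= acc:
            return name
    return name  # guard against floating-point round-off
```

In a full simulation these two draws alternate along each phonon trajectory, with the partial rates depending on branch, frequency, and position relative to the ribbon edges.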

Mei, S.; Maurer, L. N.; Aksamija, Z.; Knezevic, I.

2014-10-01

214

NASA Astrophysics Data System (ADS)

Electron transport and energy relaxation in a 100-nm channel n+-n-n+ monolayer graphene diode were studied by using semiclassical Monte Carlo particle simulations. A diode with a conventional parabolic band and an identical geometry and scattering process was also analyzed in an attempt to confirm that the characteristic transport properties originated from the linear energy band structure. We took into account two scattering mechanisms: isotropic elastic scattering and inelastic phonon emission. The carrier velocity distributions in the two diodes show remarkable differences reflecting their band dispersions. Electron velocity in the monolayer graphene diode is high in the channel region and remains almost constant until the energy relaxation begins. Inelastic scattering does not reduce electron velocity so severely, whereas elastic scattering significantly decreases it through backscattering of hot electrons with high kinetic energy. Elastic scattering also degrades the ballisticity and the drain current; however, increasing the inelastic scattering offsets these effects. We found that elastic scattering should be suppressed to improve the performance of graphene devices.

Harada, Naoki; Awano, Yuji; Sato, Shintaro; Yokoyama, Naoki

2011-05-01

215

Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons

We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011); S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/m K. This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.
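The core loop of a phonon Monte Carlo solver of this kind draws a free-flight time from the total scattering rate and then selects the mechanism that terminates the flight in proportion to its partial rate (Matthiessen's rule). A minimal sketch with purely illustrative rates, not the full-dispersion rates computed in the paper:

```python
import math
import random

def free_flight_and_scatter(rates, rng=random.random):
    """Draw an exponentially distributed free-flight time from the total
    scattering rate, then pick the terminating mechanism with probability
    proportional to its partial rate."""
    total = sum(rates.values())
    t = -math.log(rng()) / total          # exponential free-flight time, s
    r = rng() * total
    for name, rate in rates.items():
        r -= rate
        if r <= 0.0:
            return t, name
    return t, name                        # guard against rounding

# Illustrative partial rates in 1/s (hypothetical values for this sketch).
random.seed(0)
t, mech = free_flight_and_scatter(
    {"edge": 1e9, "three_phonon": 5e9, "isotope": 5e8, "grain_boundary": 2e8})
```

In a full simulator the rates would depend on the phonon's branch, wave vector, and the local temperature, and the flight would also be truncated at boundaries.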

Mei, S., E-mail: smei4@wisc.edu; Knezevic, I., E-mail: knezevic@engr.wisc.edu [Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Maurer, L. N. [Department of Physics, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Aksamija, Z. [Department of Electrical and Computer Engineering, University of Massachusetts-Amherst, Amherst, Massachusetts 01003 (United States)

2014-10-28

216

There are numerous scenarios where radioactive particulates can be displaced by external forces. For example, the detonation of a radiological dispersal device in an urban environment will result in the release of radioactive particulates that in turn can be resuspended into the breathing space by external forces such as wind flow in the vicinity of the detonation. A need exists to quantify the internal (due to inhalation) and external radiation doses that are delivered to bystanders; however, current state-of-the-art codes are unable to accurately calculate radiation doses that arise from the resuspension of radioactive particulates in complex topographies. To address this gap, a coupled computational fluid dynamics and Monte Carlo radiation transport approach has been developed. With the aid of particulate injections, the computational fluid dynamics simulations model the resuspension of particulates in a complex urban geometry due to airflow. The spatial and temporal distributions of these particulates are then used by the Monte Carlo radiation transport simulation to calculate the radiation doses delivered to various points within the simulated domain. A particular resuspension scenario has been modeled using this coupled framework, and the calculated internal (due to inhalation) and external radiation doses have been deemed reasonable. GAMBIT and FLUENT comprise the software suite used to perform the computational fluid dynamics simulations, and Monte Carlo N-Particle eXtended is used to perform the Monte Carlo radiation transport simulations. PMID:25162421
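Once the transport step has produced a time-integrated air concentration at a receptor location, the internal (inhalation) dose follows from a simple product of concentration, breathing rate, exposure time, and an inhalation dose coefficient. A sketch with entirely hypothetical numbers (the coefficient value is illustrative only; real coefficients are nuclide- and particle-size-specific):

```python
def inhalation_dose_sv(conc_bq_m3, hours, breathing_m3_h, dcf_sv_per_bq):
    """Internal dose from inhaling resuspended activity:
    intake (Bq) = air concentration x breathing rate x exposure time;
    dose (Sv)   = intake x inhalation dose coefficient."""
    intake_bq = conc_bq_m3 * breathing_m3_h * hours
    return intake_bq * dcf_sv_per_bq

# Hypothetical scenario: 50 Bq/m^3 for 2 h, adult breathing rate 1.2 m^3/h,
# dose coefficient 1e-8 Sv/Bq.
dose = inhalation_dose_sv(50.0, 2.0, 1.2, 1.0e-8)  # -> 1.2e-6 Sv
```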

Ali, Fawaz; Waller, Ed

2014-10-01

217

MCNP: Photon benchmark problems

The recent widespread, markedly increased use of radiation transport codes has produced greater user and institutional demand for assurance that such codes give correct results. Responding to these pressing requirements for code validation, the general-purpose Monte Carlo transport code MCNP has been tested on six different photon problem families. MCNP was used to simulate each of the six sets numerically, and the results were compared to the set's analytical or experimental data. MCNP successfully predicted the analytical or experimental results of all six families within the statistical uncertainty inherent in the Monte Carlo method. From this we conclude that MCNP can accurately model a broad spectrum of photon transport problems. 8 refs., 30 figs., 5 tabs.
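The acceptance criterion described here, agreement "within the statistical uncertainty inherent in the Monte Carlo method", amounts to checking that a tally lies within a few standard errors of the reference value. A minimal sketch of such a check; the k = 3 tolerance and the numbers are assumptions for illustration, not MCNP's built-in statistical tests:

```python
def consistent_with_reference(mc_mean, mc_stderr, reference, k=3.0):
    """Benchmark acceptance test: the MC tally agrees with the analytical
    or experimental reference if it lies within k standard errors."""
    return abs(mc_mean - reference) <= k * mc_stderr

# A tally of 1.002 with a 1-sigma standard error of 0.003 is consistent
# with a reference value of 1.000.
ok = consistent_with_reference(1.002, 0.003, 1.000)
```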

Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.

1991-09-01

218

BEAM is a general-purpose EGS4 user code for simulating radiotherapy sources (Rogers et al. Med. Phys. 22, 503-524, 1995). The BEAM code is optimized by first minimizing unnecessary electron transport (a factor of 3 improvement in efficiency). The uniform bremsstrahlung splitting (UBS) technique is then assessed and found to be 4 times more efficient. The Russian Roulette

Daryoush Sheikh-Bagheri

1999-01-01

219

Suppression of population transport and control of exciton distributions by entangled photons

Entangled photons provide an important tool for secure quantum communication, computing and lithography. Low intensity requirements for multi-photon processes make them ideally suited for minimizing damage in imaging applications. Here we show how their unique temporal and spectral features may be used in nonlinear spectroscopy to reveal properties of multiexcitons in chromophore aggregates. Simulations demonstrate that they provide unique control tools for two-exciton states in the bacterial reaction centre of Blastochloris viridis. Population transport in the intermediate single-exciton manifold may be suppressed by the absorption of photon pairs with short entanglement time, thus allowing the manipulation of the distribution of two-exciton states. The quantum nature of the light is essential for achieving this degree of control, which cannot be reproduced by stochastic or chirped light. Classical light is fundamentally limited by the frequency-time uncertainty, whereas entangled photons have independent temporal and spectral characteristics not subject to this uncertainty. PMID:23653194

Schlawin, Frank; Dorfman, Konstantin E.; Fingerhut, Benjamin P.; Mukamel, Shaul

2013-01-01

220

We have developed a "red blood cell (RBC)-photon simulator" to reveal optical propagation in prethrombus blood for various levels of RBC density and aggregation. The simulator investigates optical propagation in prethrombus blood and will be applied to detect it noninvasively for thrombosis prevention at an earlier stage. In our simulator, Lambert-Beer's law is employed to simulate the absorption of RBCs with hemoglobin, while the Monte Carlo method is applied to simulate scattering through iterative calculations. One advantage of our simulator is that the concentrations and distributions of RBCs can be chosen arbitrarily to represent the prethrombus state, which conventional models cannot. Using the simulator, we found that various levels of RBC density and aggregation have different effects on the propagation of near-infrared light in blood. The same effects were observed in in vitro experiments with 12 bovine blood samples, which were performed to evaluate the simulator. We measured RBC density using the clinical hematocrit index and RBC aggregation using activated whole blood clotting time. The experimental results correspond well to the simulator results. Therefore, we could show that our simulator reproduces the correct optical propagation for prethrombus blood and is applicable to prethrombus detection using multiple detectors. PMID:21342854
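Lambert-Beer absorption in a weighted-photon Monte Carlo is typically applied as an exponential attenuation of the packet weight over each step, with scattering handled separately. A minimal sketch; the absorption coefficient and step length are illustrative values, not parameters fitted to blood:

```python
import math

def transmitted_weight(weight, mu_a_per_cm, path_cm):
    """Lambert-Beer absorption applied to a photon-packet weight over one
    scattering step: w' = w * exp(-mu_a * path)."""
    return weight * math.exp(-mu_a_per_cm * path_cm)

# Illustrative step: 0.1 cm of a medium with mu_a = 5 /cm keeps
# exp(-0.5) (about 61%) of the packet weight.
w = transmitted_weight(1.0, 5.0, 0.1)
```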

Oshima, Shiori; Sankai, Yoshiyuki

2011-05-01

221

Purpose: To investigate the response of plastic scintillation detectors (PSDs) in a 6 MV photon beam of various field sizes using Monte Carlo simulations. Methods: Three PSDs were simulated: a BC-400 and a BCF-12, each attached to a plastic-core optical fiber, and a BC-400 attached to an air-core optical fiber. PSD response was calculated as the detector dose per unit water dose for field sizes ranging from 10x10 down to 0.5x0.5 cm² for both perpendicular and parallel orientations of the detectors to an incident beam. Similar calculations were performed for a CC01 compact chamber. The off-axis dose profiles were calculated in the 0.5x0.5 cm² photon beam and were compared to the dose profile calculated for the CC01 chamber and that calculated in water without any detector. The angular dependence of the PSDs' responses in a small photon beam was studied. Results: In the perpendicular orientation, the response of the BCF-12 PSD varied by only 0.5% as the field size decreased from 10x10 to 0.5x0.5 cm², while the response of the BC-400 PSD attached to a plastic-core fiber varied by more than 3% at the smallest field size because of its longer sensitive region. In the parallel orientation, the response of both PSDs attached to a plastic-core fiber varied by less than 0.4% for the same range of field sizes. For the PSD attached to an air-core fiber, the response varied, at most, by 2% for both orientations. Conclusions: The responses of all the PSDs investigated in this work vary by no more than 1%-2% irrespective of field size and detector orientation if the sensitive region is not more than 2 mm long and the optical fiber stems are prevented from pointing directly at the incident source.

Wang, Lilie L. W.; Beddar, Sam [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)

2011-03-15

222

Improved Hybrid Monte Carlo/n-Moment Transport Equations Model for the Polar Wind

NASA Astrophysics Data System (ADS)

In many space plasma problems (e.g. terrestrial polar wind, solar wind, etc.), the plasma gradually evolves from dense collision-dominated into rarified collisionless conditions. For decades, numerous attempts were made in order to address this type of problem using simulations based on one of two approaches. These approaches are: (1) the (fluid-like) Generalized Transport Equations, GTE, and (2) the particle-based Monte Carlo (MC) techniques. In contrast to the computationally intensive MC, the GTE approach can be considerably more efficient but its validity is questionable outside the collision-dominated region depending on the number of transport parameters considered. There have been several attempts to develop hybrid models that combine the strengths of both approaches. In particular, low-order GTE formulations were applied within the collision-dominated region, while an MC simulation was applied within the collisionless region and in the collisional-to-collisionless transition region. However, attention must be paid to assuring the consistency of the two approaches in the region where they are matched. Contrary to all previous studies, our model pays special attention to the 'matching' issue, and hence eliminates the discontinuities/inaccuracies associated with mismatching. As an example, we applied our technique to the Coulomb-Milne problem because of its relevance to the problem of space plasma flow from high- to low-density regions. We will compare the velocity distribution function and its moments (density, flow velocity, temperature, etc.) from the following models: (1) the pure MC model, (2) our hybrid model, and (3) previously published hybrid models. We will also consider a wide range of the test-to-background mass ratio.
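Matching the two approaches hinges on computing, from the MC particle ensemble, the same low-order moments (density, flow velocity, temperature) that the GTE model evolves. A hedged sketch using a synthetic Gaussian velocity sample in place of real simulation output:

```python
import random

def velocity_moments(samples):
    """Low-order moments of a sampled 1D velocity distribution: the
    sample count (proportional to density), the mean (flow velocity),
    and the variance (a temperature-like quantity)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((v - mean) ** 2 for v in samples) / n
    return n, mean, var

# Synthetic drifting Maxwellian: drift 2.0, thermal spread 1.0 (arbitrary units).
random.seed(42)
samples = [random.gauss(2.0, 1.0) for _ in range(20000)]
n, mean, var = velocity_moments(samples)
```

In a hybrid scheme these moments, evaluated at the interface, supply the boundary conditions that keep the fluid and particle regions consistent.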

Barakat, A. R.; Ji, J.; Schunk, R. W.

2013-12-01

223

NASA Astrophysics Data System (ADS)

This study examines variations of bone and mucosal doses with variable soft tissue and bone thicknesses, mimicking the oral or nasal cavity in skin radiation therapy. Monte Carlo simulations (EGSnrc-based codes) using the clinical kilovoltage (kVp) photon and megavoltage (MeV) electron beams, and the pencil-beam algorithm (Pinnacle3 treatment planning system) using the MeV electron beams were performed in dose calculations. Phase-space files for the 105 and 220 kVp beams (Gulmay D3225 x-ray machine), and the 4 and 6?MeV electron beams (Varian 21 EX linear accelerator) with a field size of 5 cm diameter were generated using the BEAMnrc code, and verified using measurements. Inhomogeneous phantoms containing uniform water, bone and air layers were irradiated by the kVp photon and MeV electron beams. Relative depth, bone and mucosal doses were calculated for the uniform water and bone layers which were varied in thickness in the ranges of 0.5-2 cm and 0.2-1 cm. A uniform water layer of bolus with thickness equal to the depth of maximum dose (dmax) of the electron beams (0.7 cm for 4 MeV and 1.5 cm for 6 MeV) was added on top of the phantom to ensure that the maximum dose was at the phantom surface. From our Monte Carlo results, the 4 and 6 MeV electron beams were found to produce insignificant bone and mucosal dose (<1%), when the uniform water layer at the phantom surface was thicker than 1.5 cm. When considering the 0.5 cm thin uniform water and bone layers, the 4 MeV electron beam deposited less bone and mucosal dose than the 6 MeV beam. Moreover, it was found that the 105 kVp beam produced more than twice the dose to bone than the 220 kVp beam when the uniform water thickness at the phantom surface was small (0.5 cm). However, the difference in bone dose enhancement between the 105 and 220 kVp beams became smaller when the thicknesses of the uniform water and bone layers in the phantom increased. 
Dose in the second bone layer interfacing with air was found to be higher for the 220 kVp beam than that of the 105 kVp beam, when the bone thickness was 1 cm. In this study, dose deviations of bone and mucosal layers of 18% and 17% were found between our results from Monte Carlo simulation and the pencil-beam algorithm, which overestimated the doses. Relative depth, bone and mucosal doses were studied by varying the beam nature, beam energy and thicknesses of the bone and uniform water using an inhomogeneous phantom to model the oral or nasal cavity. While the dose distribution in the pharynx region is unavailable due to the lack of a commercial treatment planning system commissioned for kVp beam planning in skin radiation therapy, our study provided an essential insight into the radiation staff to justify and estimate bone and mucosal dose.

Chow, James C. L.; Jiang, Runqing

2012-06-01

224

A Monte-Carlo Model of Neutral-Particle Transport in Diverted Plasmas

NASA Astrophysics Data System (ADS)

The transport of neutral atoms and molecules in the edge and divertor regions of fusion experiments has been calculated using Monte-Carlo techniques. The deuterium, tritium, and helium atoms are produced by recombination at the walls. The relevant collision processes of charge exchange, ionization, and dissociation between the neutrals and the flowing plasma electrons and ions are included, along with wall-reflection models. General two-dimensional wall and plasma geometries are treated in a flexible manner so that varied configurations can be easily studied. The algorithm uses a pseudocollision method. Splitting with Russian roulette, suppression of absorption, and efficient scoring techniques are used to reduce the variance. The resulting code is sufficiently fast and compact to be incorporated into iterative treatments of plasma dynamics requiring numerous neutral profiles. The calculation yields the neutral gas densities, pressures, fluxes, ionization rates, momentum-transfer rates, energy-transfer rates, and wall-sputtering rates. Applications have included modeling of proposed INTOR/FED poloidal divertor designs and other experimental devices.
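The splitting and Russian-roulette variance-reduction techniques mentioned here have a standard unbiased form: a low-weight particle is killed with probability 1 - p and, if it survives, its weight is divided by p; an important particle is split into n copies carrying weight/n each. A minimal sketch with illustrative thresholds:

```python
import random

def russian_roulette(weight, threshold=0.1, p_survive=0.5, rng=random.random):
    """Kill particles below a weight threshold with probability 1 - p_survive;
    survivors get weight/p_survive, so the expected weight is unchanged."""
    if weight >= threshold:
        return weight
    return weight / p_survive if rng() < p_survive else 0.0

def split(weight, n):
    """Replace one particle entering an important region by n copies,
    each carrying weight/n (total weight is conserved)."""
    return [weight / n] * n

random.seed(1)
survivor = russian_roulette(0.05)   # either 0.0 (killed) or 0.1 (boosted)
copies = split(1.0, 4)              # four copies of weight 0.25
```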

Heifetz, D.; Post, D.; Petravic, M.; Weisheit, J.; Bateman, G.

1982-05-01

225

Monte Carlo model of neutral-particle transport in diverted plasmas

The transport of neutral atoms and molecules in the edge and divertor regions of fusion experiments has been calculated using Monte-Carlo techniques. The deuterium, tritium, and helium atoms are produced by recombination in the plasma and at the walls. The relevant collision processes of charge exchange, ionization, and dissociation between the neutrals and the flowing plasma electrons and ions are included, along with wall reflection models. General two-dimensional wall and plasma geometries are treated in a flexible manner so that varied configurations can be easily studied. The algorithm uses a pseudo-collision method. Splitting with Russian roulette, suppression of absorption, and efficient scoring techniques are used to reduce the variance. The resulting code is sufficiently fast and compact to be incorporated into iterative treatments of plasma dynamics requiring numerous neutral profiles. The calculation yields the neutral gas densities, pressures, fluxes, ionization rates, momentum transfer rates, energy transfer rates, and wall sputtering rates. Applications have included modeling of proposed INTOR/FED poloidal divertor designs and other experimental devices.

Heifetz, D.; Post, D.; Petravic, M.; Weisheit, J.; Bateman, G.

1981-11-01

226

Monte-Carlo model of neutral-particle transport in diverted plasmas

The transport of neutral atoms and molecules in the edge and divertor regions of fusion experiments has been calculated using Monte-Carlo techniques. The deuterium, tritium, and helium atoms are produced by recombination at the walls. The relevant collision processes of charge exchange, ionization, and dissociation between the neutrals and the flowing plasma electrons and ions are included, along with wall-reflection models. General two-dimensional wall and plasma geometries are treated in a flexible manner so that varied configurations can be easily studied. The algorithm uses a pseudocollision method. Splitting with Russian roulette, suppression of absorption, and efficient scoring techniques are used to reduce the variance. The resulting code is sufficiently fast and compact to be incorporated into iterative treatments of plasma dynamics requiring numerous neutral profiles. The calculation yields the neutral gas densities, pressures, fluxes, ionization rates, momentum-transfer rates, energy-transfer rates, and wall-sputtering rates. Applications have included modeling of proposed INTOR/FED poloidal divertor designs and other experimental devices.

Heifetz, D.; Post, D.; Petravic, M.; Weisheit, J.; Bateman, G.

1982-05-01

227

penMesh--Monte Carlo radiation transport simulation in a triangle mesh geometry.

We have developed a general-purpose Monte Carlo simulation code, called penMesh, that combines the accuracy of the radiation transport physics subroutines from PENELOPE and the flexibility of a geometry based on triangle meshes. While the geometric models implemented in most general-purpose codes--such as PENELOPE's quadric geometry--impose some limitations in the shape of the objects that can be simulated, triangle meshes can be used to describe any free-form (arbitrary) object. Triangle meshes are extensively used in computer-aided design and computer graphics. We took advantage of the sophisticated tools already developed in these fields, such as an octree structure and an efficient ray-triangle intersection algorithm, to significantly accelerate the triangle mesh ray-tracing. A detailed description of the new simulation code and its ray-tracing algorithm is provided in this paper. Furthermore, we show how it can be readily used in medical imaging applications thanks to the detailed anatomical phantoms already available. In particular, we present a whole body radiography simulation using a triangulated version of the anthropomorphic NCAT phantom. An example simulation of scatter fraction measurements using a standardized abdomen and lumbar spine phantom, and a benchmark of the triangle mesh and quadric geometries in the ray-tracing of a mathematical breast model, are also presented to show some of the capabilities of penMesh. PMID:19435677
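The efficiency of a mesh geometry rests on a fast ray-triangle intersection test; the well-known Möller-Trumbore algorithm is a common choice. The abstract does not specify which algorithm penMesh uses, so the following is a generic sketch of the test a mesh-based MC geometry performs at each tracking step:

```python
def ray_triangle_intersect(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray-triangle intersection: returns the distance t
    along the ray orig + t*d to the hit point, or None on a miss."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]
    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                 # ray parallel to the triangle plane
    inv = 1.0 / det
    tv = sub(orig, v0)
    u = dot(tv, p) * inv
    if u < 0.0 or u > 1.0:
        return None                 # outside first barycentric bound
    q = cross(tv, e1)
    v = dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None                 # outside second barycentric bound
    t = dot(e2, q) * inv
    return t if t > eps else None   # ignore hits behind the origin

# A +z ray from (0.1, 0.1, 0) hits the unit triangle in the z = 1 plane at t = 1.
t_hit = ray_triangle_intersect([0.1, 0.1, 0.0], [0.0, 0.0, 1.0],
                               [0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0])
```

An octree, as mentioned in the abstract, serves to cull most triangles so that only a handful of such tests run per particle step.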

Badal, Andreu; Kyprianou, Iacovos; Banh, Diem Phuc; Badano, Aldo; Sempau, Josep

2009-12-01

228

Simulation for the Production of Technetium-99m Using Monte Carlo N-Particle Transport Code

NASA Astrophysics Data System (ADS)

The Monte Carlo N-Particle Transport Code (MCNP) is employed to simulate the radioisotope production process that leads to the creation of Technetium-99m (Tc-99m). Tc-99m is a common metastable nuclear isomer used in nuclear medicine tests and is produced from the gamma decay of Molybdenum-99 (Mo-99). Mo-99 is commonly produced from the fission of Uranium-235, a complicated process which is only performed at a limited number of facilities. Due to the age of these facilities, coupled with the critical importance of a steady flow of Mo-99, new methods of generating Mo-99 are being investigated. Current experiments demonstrate promising alternatives, one of which consists of the neutron activation of Molybdenum-98 (Mo-98), a naturally occurring isotope. Mo-98 has a small cross section (0.13 barns), so investigations are also aimed at overcoming this natural obstacle for producing Tc-99m. Neutron-activated Mo-98 becomes Mo-99 and subsequently decays into radioactive Tc-99m. The MCNP code is being used to examine the interactions between the particles in each of these situations, thus determining a theoretical threshold to maximize the reaction's efficiency. The simulation results will be applied to ongoing experiments at the PPPL, where the empirical data will be compared to predictions from the MCNP code.
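The Mo-99 to Tc-99m step follows two-member Bateman kinetics: starting from N0 activated Mo-99 atoms, the daughter population grows and decays as the difference of two exponentials. A sketch of that relation; the branching fraction of Mo-99 decays that feed the metastable state (roughly 88%) is omitted for simplicity, and the initial atom count is arbitrary:

```python
import math

MO99_T12_H = 65.94   # Mo-99 half-life, hours
TC99M_T12_H = 6.01   # Tc-99m half-life, hours

def daughter_atoms(n0, lam_parent, lam_daughter, t):
    """Two-member Bateman solution: daughter atoms at time t from n0
    parent atoms, N_d(t) = n0*lp/(ld - lp)*(exp(-lp*t) - exp(-ld*t))."""
    lp, ld = lam_parent, lam_daughter
    return n0 * lp / (ld - lp) * (math.exp(-lp * t) - math.exp(-ld * t))

lam_mo = math.log(2.0) / MO99_T12_H
lam_tc = math.log(2.0) / TC99M_T12_H
n_tc = daughter_atoms(1.0e6, lam_mo, lam_tc, 24.0)  # Tc-99m atoms after 24 h
```

This buildup-and-decay balance is why Mo-99/Tc-99m generators are "milked" on roughly daily cycles.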

Kaita, Courtney; Gentile, Charles; Zelenty, Jennifer

2010-11-01

229

Monte Carlo Assessment of Time Dependent Spectral Indexes for Benchmarking Neutron Transport in Iron

NASA Astrophysics Data System (ADS)

Monte Carlo simulations (MCNP4C2) were performed to assess the ability to benchmark neutron transport calculations in iron using a pulsed-neutron slowing-down experiment. Specifically, calculations were performed to obtain the time dependent neutron energy spectra inside a 1 × 1 × 1 m natural iron moderator that is driven by a 14-MeV pulsed neutron source (simulating a pulsed D-T neutron generator). At various time intervals after the pulse, the energy spectrum was tallied and used to estimate the integral time-dependent reaction rates in 235U, 238U, 237Np, and 239Pu fission detectors that were located inside the moderator. The results show that within 0.05 μs after the pulse, the average energy of the neutrons drops below 800 keV. Therefore, the threshold detectors (237Np and 238U) can be useful at early times, while the fissile detectors (235U and 239Pu) can be utilized throughout the experiment. For these detectors, the time dependent reaction rates and spectral indexes (235U/239Pu, 237Np/239Pu, and 238U/239Pu) are developed and discussed.

Hawari, Ayman I.; Adams, James M.

2003-06-01

230

NASA Astrophysics Data System (ADS)

In three-dimensional (3-D) modeling of light transport in heterogeneous biological structures using the Monte Carlo (MC) approach, space is commonly discretized into optically homogeneous voxels by a rectangular spatial grid. Any round or oblique boundaries between neighboring tissues thus become serrated, which raises legitimate concerns about the realism of modeling results with regard to reflection and refraction of light on such boundaries. We analyze the related effects by systematic comparison with an augmented 3-D MC code, in which analytically defined tissue boundaries are treated in a rigorous manner. At specific locations within our test geometries, energy deposition predicted by the two models can vary by 10%. Even highly relevant integral quantities, such as linear density of the energy absorbed by modeled blood vessels, differ by up to 30%. Most notably, the values predicted by the customary model vary strongly and quite erratically with the spatial discretization step and upon minor repositioning of the computational grid. Meanwhile, the augmented model shows no such unphysical behavior. Artifacts of the former approach do not converge toward zero with ever finer spatial discretization, confirming that it suffers from inherent deficiencies due to inaccurate treatment of reflection and refraction at round tissue boundaries.
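The rigorous boundary treatment referred to here requires evaluating reflection and refraction at the analytically defined interface; the unpolarized Fresnel reflectance is the standard ingredient, with total internal reflection beyond the critical angle. A minimal sketch with illustrative refractive indices (not values taken from the study):

```python
import math

def fresnel_unpolarized(n1, n2, cos_i):
    """Unpolarized Fresnel reflectance at a smooth boundary, given the
    cosine of the incidence angle; returns 1.0 beyond the critical angle
    (total internal reflection)."""
    sin_t2 = (n1 / n2) ** 2 * (1.0 - cos_i ** 2)   # Snell's law, squared
    if sin_t2 >= 1.0:
        return 1.0
    cos_t = math.sqrt(1.0 - sin_t2)
    rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (rs + rp)

# Illustrative tissue-like indices: near-normal incidence reflects very
# little, while grazing incidence from the denser side totally reflects.
r_normal = fresnel_unpolarized(1.4, 1.37, 1.0)
r_grazing = fresnel_unpolarized(1.4, 1.37, 0.1)
```

On a voxelized boundary the surface normal is wrong almost everywhere, which is precisely why the reflectance and refracted directions, and hence the local energy deposition, come out distorted.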

Majaron, Boris; Milanič, Matija; Premru, Jan

2015-01-01

231

Monte Carlo Study of Fetal Dosimetry Parameters for 6 MV Photon Beam

Because of the adverse effects of ionizing radiation on fetuses, fetal dose should be estimated prior to radiotherapy of pregnant patients. Fetal dose has been studied by several authors at different depths in phantoms with various abdomen thicknesses (ATs). In this study, the effect of maternal AT and depth on fetal dosimetry was investigated using peripheral dose (PD) distribution evaluations. A BEAMnrc model of an Oncor linac including out-of-beam components was used for dose calculations outside the field border. A 6 MV photon beam was used to irradiate a chest phantom. Measurements were done using EBT2 radiochromic film in a RW3 phantom serving as the abdomen. The following were measured for different ATs: depth PD profiles at two distances from the field's edge, and in-plane PD profiles at two depths. The results of this study show that PD is depth dependent near the field's edge. Increasing the AT changes neither the depth of maximum PD nor its distribution as a function of distance from the field's edge. It is concluded that the maximum fetal dose can be estimated using a flat phantom, i.e., without taking the AT into account. Furthermore, an in-plane profile measured at any depth can represent the dose variation as a function of distance. However, to estimate the maximum PD, the in-plane profile should be measured at the out-of-field depth of maximum dose. PMID:24083135

Atarod, Maryam; Shokrani, Parvaneh

2013-01-01

232

NASA Astrophysics Data System (ADS)

Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations the particle carried a weight corresponding to the PSL it came from. Dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated and measured doses. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam on which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of dmax dose for those open fields tested was improved on average from 70.56% to 99.36% for 2%/2 mm criteria and from 32.22% to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. 
The passing rate of the γ-index test within the 10% isodose line of the prescription dose was improved from 92.73% to 99.70% and from 82.16% to 96.73% for 2%/2 mm and 1%/1 mm criteria, respectively. Real clinical data measured from Varian, Siemens, and Elekta linear accelerators were also used to validate our commissioning method, and a similar level of accuracy was achieved.
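Stripped of the symmetry and smoothness regularization and of the augmented Lagrangian solver used in the paper, the commissioning step is at heart a nonnegative least-squares fit of PSL weights to a measured dose. A toy sketch with two hypothetical PSL dose vectors and a synthetic "measurement" the fit should recover:

```python
def commission(psl_doses, measured, iters=2000, lr=0.1):
    """Projected gradient descent on 0.5*||sum_j w_j*D_j - d||^2 with the
    constraint w >= 0. psl_doses[j][i] is the pre-computed dose of PSL j
    at measurement point i."""
    n, m = len(psl_doses), len(measured)
    w = [1.0] * n
    for _ in range(iters):
        # Residual r_i = sum_j w_j * D_ij - d_i at the current weights.
        r = [sum(w[j] * psl_doses[j][i] for j in range(n)) - measured[i]
             for i in range(m)]
        for j in range(n):
            grad = sum(r[i] * psl_doses[j][i] for i in range(m))
            w[j] = max(0.0, w[j] - lr * grad)   # project onto w >= 0
    return w

# Hypothetical per-PSL doses at three points; the "measurement" is built
# from weights (0.5, 2.0), which the fit should recover.
d0 = [1.0, 0.5, 0.0]
d1 = [0.0, 0.5, 1.0]
meas = [0.5 * a + 2.0 * b for a, b in zip(d0, d1)]
w = commission([d0, d1], meas)
```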

Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.

2014-10-01

233

The physics of electron transport in Si and GaAs is investigated with use of a Monte Carlo technique which improves the "state-of-the-art" treatment of high-energy carrier dynamics. (1) The semiconductor is modeled beyond the effective-mass approximation by using the band structure obtained from empirical-pseudopotential calculations. (2) The electron-phonon, electron-impurity, and electron-electron scattering rates are computed in a way consistent with

Massimo V. Fischetti; Steven E. Laux

1988-01-01

234

Monte Carlo simulation of gas Cerenkov detectors

Theoretical study of selected gamma-ray and electron diagnostic necessitates coupling Cerenkov radiation to electron/photon cascades. A Cerenkov production model and its incorporation into a general geometry Monte Carlo coupled electron/photon transport code is discussed. A special optical photon ray-trace is implemented using bulk optical properties assigned to each Monte Carlo zone. Good agreement exists between experimental and calculated Cerenkov data in the case of a carbon-dioxide gas Cerenkov detector experiment. Cerenkov production and threshold data are presented for a typical carbon-dioxide gas detector that converts a 16.7 MeV photon source to Cerenkov light, which is collected by optics and detected by a photomultiplier.
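The threshold behavior of a gas Cerenkov detector follows directly from the emission condition β > 1/n, which fixes the minimum electron kinetic energy. A sketch of that calculation; the CO2 refractive index used is an order-of-magnitude illustration (n depends on pressure), not a value from the abstract:

```python
import math

ELECTRON_REST_MEV = 0.511  # electron rest energy, MeV

def cerenkov_threshold_mev(n):
    """Electron kinetic-energy threshold for Cerenkov emission in a medium
    of refractive index n: beta > 1/n requires gamma > 1/sqrt(1 - 1/n^2)."""
    gamma = 1.0 / math.sqrt(1.0 - 1.0 / n ** 2)
    return ELECTRON_REST_MEV * (gamma - 1.0)

# For a dilute gas with n - 1 ~ 4.5e-4 the threshold lands near 16 MeV,
# which is the regime of the 16.7 MeV photon source described above.
thr = cerenkov_threshold_mev(1.00045)
```

Tuning the gas pressure shifts n, and hence the threshold, which is what makes gas Cerenkov cells useful as energy-selective gamma-ray diagnostics.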

Mack, J.M.; Jain, M.; Jordan, T.M.

1984-01-01

235

The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogeneous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near-tissue-equivalent radiochromic films which were placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (γ) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to in-plane and cross-plane profiles. The average gamma value was 0.21 for all energies of simple test cases. For depth dose curves in asymmetric beams, gamma results similar to those for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement to measurement values with a dose difference of 0.1% +/- 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV 10 x 10 cm² field at the first density interface from tissue to lung equivalent material. Small fields (2 x 2 cm²) have their largest discrepancy in the re-build-up at the second interface (from lung to tissue equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. 
Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the heterogeneous phantom. For the clinical test cases, the average dose discrepancy was 0.5% +/- 1.1%. Relative dose investigations of the transverse plane for clinical beam arrangements were performed with a 2D gamma-evaluation procedure. For 3% dose difference and 3 mm DTA criteria, the average value for gamma(>1) was 4.7% +/- 3.7%, the average gamma(1%) value was 1.19 +/- 0.16 and the mean 2D gamma-value was 0.44 +/- 0.07 in the heterogeneous phantom. The iPlan MC algorithm leads to accurate dosimetric results under clinical test conditions. PMID:19934489
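The gamma evaluation used throughout this study combines a dose-difference criterion and a distance-to-agreement criterion into a single metric: a point passes if gamma <= 1. A minimal 1D sketch with 2%/2 mm defaults and toy dose curves (the positions are in cm, so DTA = 0.2 cm):

```python
def gamma_index_1d(xs_ref, d_ref, xs_eval, d_eval, dd=0.02, dta=0.2):
    """1D gamma evaluation: for each reference point, take the minimum over
    the evaluated curve of sqrt((dx/DTA)^2 + (dD/DD)^2). dd is the
    dose-difference criterion (fraction of a dose of 1.0 here), dta the
    distance criterion in the same units as xs."""
    gammas = []
    for xr, dr in zip(xs_ref, d_ref):
        g2 = min(((xe - xr) / dta) ** 2 + ((de - dr) / dd) ** 2
                 for xe, de in zip(xs_eval, d_eval))
        gammas.append(g2 ** 0.5)
    return gammas

# Identical curves give gamma = 0 everywhere; a uniform 1% dose offset
# against a 2% criterion gives gamma = 0.5 at matching positions.
xs = [0.0, 0.1, 0.2, 0.3]
dose = [1.00, 0.95, 0.80, 0.60]
g_same = gamma_index_1d(xs, dose, xs, dose)
g_off = gamma_index_1d(xs, dose, xs, [d + 0.01 for d in dose])
```

A 2D gamma evaluation, as applied to the film planes in this study, is the same construction with the spatial search taken over a plane instead of a line.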

Künzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar

2009-12-21

236

We present a deviational Monte Carlo method for solving the Boltzmann equation for phonon transport subject to the linearized ab initio 3-phonon scattering operator. Phonon dispersion relations and transition rates are ...

Landon, Colin Donald

2014-01-01

237

A new method for generating discrete scattering cross sections to be used in charged particle transport calculations is investigated. The method of data generation is presented and compared to current methods for obtaining discrete cross sections. The new, more generalized approach allows greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data generated with the new method is verified through a comparison with discrete data obtained with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code package, Milagro. The implementation of this capability is verified using test problems with analytic solutions as well as a comparison of electron dose-depth profiles calculated with Milagro and an already-established electron transport code. An initial investigation of a preliminary integration of the discrete cross section generation method with the new charged particle transport capability in Milagro is also presented. (authors)

Walsh, J. A. [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, NW12-312 Albany, St. Cambridge, MA 02139 (United States); Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97331 (United States); Urbatsch, T. J. [XTD-5: Air Force Systems, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

2013-07-01

238

To develop a primary standard for 192Ir sources, the basic science on which this standard is based, i.e., Spencer-Attix cavity theory, must be established. In the present study Monte Carlo techniques are used to investigate the accuracy of this cavity theory for photons in the energy range from 20 to 1300 keV, since it is usually not applied at energies below that of 137Cs. Ma and Nahum [Phys. Med. Biol. 36, 413-428 (1991)] found that in low-energy photon beams the contribution from electrons caused by photons interacting in the cavity is substantial. For the average energy of the 192Ir spectrum they found a departure from Bragg-Gray conditions of up to 3% caused by photon interactions in the cavity. When Monte Carlo is used to calculate the response of a graphite ion chamber to an encapsulated 192Ir source, it is found to differ by less than 0.3% from the value predicted by Spencer-Attix cavity theory. Based on these Monte Carlo calculations, it is concluded that for cavities in graphite the Spencer-Attix cavity theory with delta = 10 keV is applicable within 0.5% for photon energies at 300 keV or above, despite the breakdown of the assumption that there is no interaction of photons within the cavity. This means that it is possible to use a graphite ion chamber and Spencer-Attix cavity theory to calibrate an 192Ir source. It is also found that the use of delta related to the mean chord length instead of delta = 10 keV improves the agreement with Spencer-Attix cavity theory at 60Co from 0.2% to within 0.1% of unity. This is at the level of accuracy with which the Monte Carlo code EGSnrc calculates ion chamber responses. In addition, it is shown that other materials, e.g., insulators and holders, have a substantial effect on the ion chamber response and should be included in the correction factors for a primary standard of air kerma. PMID:10984227

Borg, J; Kawrakow, I; Rogers, D W; Seuntjens, J P

2000-08-01

239

Current developments in positron emission tomography focus on improving timing performance for scanners with time-of-flight (TOF) capability, and incorporating depth-of-interaction (DOI) information. Recent studies have shown that incorporating DOI correction in TOF detectors can improve timing resolution, and that DOI also becomes more important in long axial field-of-view scanners. We have previously reported the development of DOI-encoding detectors using phosphor-coated scintillation crystals; here we study the timing properties of those crystals to assess the feasibility of providing some level of DOI information without significantly degrading the timing performance. We used Monte Carlo simulations to provide a detailed understanding of light transport in phosphor-coated crystals which cannot be fully characterized experimentally. Our simulations used a custom reflectance model based on 3D crystal surface measurements. Lutetium oxyorthosilicate crystals were simulated with a phosphor coating in contact with the scintillator surfaces and an external diffuse reflector (teflon). Light output, energy resolution, and pulse shape showed excellent agreement with experimental data obtained on 3 × 3 × 10 mm³ crystals coupled to a photomultiplier tube. Scintillator intrinsic timing resolution was simulated with head-on and side-on configurations, confirming the trends observed experimentally. These results indicate that the model may be used to predict timing properties in phosphor-coated crystals and guide the coating for optimal DOI resolution/timing performance trade-off for a given crystal geometry. Simulation data suggested that a time stamp generated from early photoelectrons minimizes degradation of the timing resolution, thus making this method potentially more useful for TOF-DOI detectors than our initial experiments suggested. 
Finally, this approach could easily be extended to the study of timing properties in other scintillation crystals, with a range of treatments and materials attached to the surface. PMID:24694727

Roncali, Emilie; Schmall, Jeffrey P; Viswanath, Varsha; Berg, Eric; Cherry, Simon R

2014-04-21

240

Some chemotherapy drugs contain a high-Z element in their structure that can be used for tumour dose enhancement in radiotherapy. In the present study, dose enhancement factors (DEFs) for cisplatin and titanocene dichloride agents in brachytherapy were quantified based on Monte Carlo simulation. Six photon-emitting brachytherapy sources were simulated, and their dose rate constant and radial dose function were determined and compared with published data. The dose enhancement factor was obtained for 1, 3 and 5% concentrations of cisplatin and titanocene dichloride chemotherapy agents in a tumour in a soft tissue phantom. The results for the dose rate constant and radial dose function showed good agreement with published data. Our results show that, depending on the type of chemotherapy agent and brachytherapy source, DEF increases with increasing chemotherapy drug concentration. The maximum in-tumour averaged DEFs for cisplatin and titanocene dichloride are 4.13 and 1.48, respectively, reached with 5% concentrations of the agents and a 125I source. The dose enhancement factor is considerably higher for both chemotherapy agents with 125I, 103Pd and 169Yb sources, compared to 192Ir, 198Au and 60Co sources. At similar concentrations, dose enhancement for cisplatin is higher than for titanocene dichloride. Based on the results of this study, combining brachytherapy with chemotherapy agents containing a high-Z element results in a higher radiation dose to the tumour. Therefore, concurrent use of chemotherapy and brachytherapy with high-atomic-number drugs can have the potential benefit of dose enhancement. However, more preclinical evaluations in this area are necessary before clinical application of this method. PMID:24706342

Yahya Abadi, Akram; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Knaup, Courtney

2014-06-01

241

A rigorous treatment of energy deposition in a Monte Carlo transport calculation, including coupled transport of all secondary and tertiary radiations, increases the computational cost of a simulation dramatically, making fully-coupled heating impractical for many large calculations, such as 3-D analysis of nuclear reactor cores. However, in some cases, the added benefit from a full-fidelity energy-deposition treatment is negligible, especially considering the increased simulation run time. In this paper we present a generalized framework for the in-line calculation of energy deposition during steady-state Monte Carlo transport simulations. This framework gives users the ability to select among several energy-deposition approximations with varying levels of fidelity. The paper describes the computational framework, along with derivations of four energy-deposition treatments. Each treatment uses a unique set of self-consistent approximations, which ensure that energy balance is preserved over the entire problem. By providing several energy-deposition treatments, each with different approximations for neglecting the energy transport of certain secondary radiations, the proposed framework provides users the flexibility to choose between accuracy and computational efficiency. Numerical results are presented, comparing heating results among the four energy-deposition treatments for a simple reactor/compound shielding problem. The results illustrate the limitations and computational expense of each of the four energy-deposition treatments. (authors)

Griesheimer, D. P. [Bettis Atomic Power Laboratory, P.O. Box 79, West Mifflin, PA 15122 (United States); Stedry, M. H. [Knolls Atomic Power Laboratory, P.O. Box 1072, Schenectady, NY 12301 (United States)

2013-07-01

242

NASA Astrophysics Data System (ADS)

Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for ``Weaving All the Random Particles,'' is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. 
The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the reaction types as contiguous as possible and removes completed histories from the transport cycle. The sort reduces the amount of divergence in GPU ``thread blocks,'' keeps the SIMD units as full as possible, and avoids spending memory bandwidth checking whether a neutron in the batch has been terminated. Using a remapping vector means the data access pattern is irregular, but this is mitigated by using large batch sizes, where the GPU can effectively hide the high cost of irregular global memory access. WARP modifies the standard unionized energy grid implementation to reduce memory traffic. Instead of storing a matrix of pointers indexed by reaction type and energy, WARP stores three matrices. The first contains cross section values, the second contains pointers to angular distributions, and the third contains pointers to energy distributions. This linked-list type of layout increases memory usage, but lowers the number of data loads needed to determine a reaction by eliminating a pointer load to find a cross section value. Optimized, high-performance GPU code libraries are also used by WARP wherever possible. The CUDA performance primitives (CUDPP) library is used to perform the parallel reductions, sorts and sums, the CURAND library is used to seed the linear congruential random number generators, and the OptiX ray tracing framework is used for geometry representation. OptiX is a highly-optimized library developed by NVIDIA that automatically builds hierarchical acceleration structures around user-input geometry so only surfaces along a ray line need to be queried in ray tracing. WARP also performs material and cell number queries with OptiX by using a point-in-polygon-like algorithm. 
WARP has shown that GPUs are an effective platform for performing Monte Carlo neutron transport with continuous energy cross sections. Currently, WARP is the most detailed and feature-rich program in existence for performing continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs, but compared to production codes like Serpent and MCNP, WARP ha
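The remapping idea described above, sorting an index vector by reaction type while the particle data stays in place, can be illustrated outside CUDA. A minimal NumPy analogue (an illustrative sketch of the technique, not WARP's actual data layout or sort implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Structure-of-arrays particle data stays fixed in memory for the batch.
energy = rng.random(n)
reaction = rng.integers(0, 3, size=n)   # e.g. 0=scatter, 1=absorb, 2=done

# Remapping vector: indices of unfinished histories, sorted (radix-style)
# by reaction type so threads handling the same reaction are contiguous
# and completed histories drop out of the transport cycle.
alive = reaction != 2
live = np.flatnonzero(alive)
remap = live[np.argsort(reaction[live], kind="stable")]

# A "kernel" for one reaction type now reads a contiguous slice of remap
# instead of scanning (and burning bandwidth on) the whole batch.
for r in (0, 1):
    idx = remap[reaction[remap] == r]
    # ... process all particles undergoing reaction r via energy[idx] ...
    assert np.all(reaction[idx] == r)
```

On a GPU the same pattern keeps threads in a warp on the same code path, which is the divergence reduction the abstract describes.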

Bergmann, Ryan

243

The stochastic Galerkin method (SGM) is an intrusive technique for propagating data uncertainty in physical models. The method reduces the random model to a system of coupled deterministic equations for the moments of stochastic spectral expansions of result quantities. We investigate solving these equations using the Monte Carlo technique. We compare the efficiency with brute-force Monte Carlo evaluation of uncertainty, the non-intrusive stochastic collocation method (SCM), and an intrusive Monte Carlo implementation of the stochastic collocation method. We also describe the stability limitations of our SGM implementation. (authors)

Franke, B. C. [Sandia National Laboratories, Albuquerque, NM 87185 (United States); Prinja, A. K. [Department of Chemical and Nuclear Engineering, University of New Mexico, Albuquerque, NM 87131 (United States)

2013-07-01

244

Donut modes and photonic hollow fibers: a possible scheme for atom transport

Bragg fibers are a specific class of photonic bandgap fibers that have the capacity to be optimized for low-loss transmission of ``donut'' modes. This ability makes these fibers attractive as a possible tool for atom optics. One example would be to transport neutral atoms through harsh environments. This would be possible by co-propagating a blue-detuned donut mode with the atoms through

Sandip Mitra; J. Smith; N. Chattrapiban; I. Arakelyan; W. T. Hill III

2006-01-01

245

Monte Carlo (MC) is a well known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
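The MLMC idea is to spend most samples on a cheap coarse discretization and correct the estimate with level differences, which have small variance and therefore need few samples. A toy sketch of the telescoping estimator (the `qoi` "solver" is an invented stand-in with an h-dependent discretization error, not the streamline solver of the paper):

```python
import numpy as np

def qoi(xi, h):
    """Toy 'solver': quantity of interest on grid spacing h for random
    input xi; the h*cos term mimics a discretization error that vanishes
    as the grid is refined."""
    return np.sin(xi) + h * np.cos(3 * xi)

def mlmc(levels, samples_per_level, seed=0):
    rng = np.random.default_rng(seed)
    est = 0.0
    for ell, n in zip(levels, samples_per_level):
        h_fine = 2.0 ** (-ell)
        xi = rng.standard_normal(n)   # same random inputs on both grids
        if ell == levels[0]:
            corr = qoi(xi, h_fine)                        # coarsest: plain MC
        else:
            corr = qoi(xi, h_fine) - qoi(xi, 2 * h_fine)  # level difference
        est += corr.mean()            # telescoping sum of corrections
    return est

est = mlmc(levels=[0, 1, 2, 3], samples_per_level=[4000, 1000, 250, 60])
print(est)
```

Note how the sample counts shrink with level: the fine, expensive levels only have to resolve the small difference terms, which is where the speedup over single-level MC comes from.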

Müller, Florian, E-mail: florian.mueller@sam.math.ethz.ch; Jenny, Patrick, E-mail: jenny@ifd.mavt.ethz.ch; Meyer, Daniel W., E-mail: meyerda@ethz.ch

2013-10-01

246

ITS Version 4.0: Electron/photon Monte Carlo transport codes

The current publicly released version of the Integrated TIGER Series (ITS), Version 3.0, has been widely distributed both domestically and internationally, and feedback has been very positive. This feedback, as well as our own experience, has convinced us to upgrade the system in order to honor specific user requests for new features and to implement other new features that will improve the physical accuracy of the system and permit additional variance reduction. In this presentation we will focus on components of the upgrade that (1) improve the physical model, (2) provide new and extended capabilities to the three-dimensional combinatorial-geometry (CG) of the ACCEPT codes, and (3) permit significant variance reduction in an important class of radiation effects applications.

Halbleib, J.A.; Kensek, R.P. [Sandia National Labs., Albuquerque, NM (United States); Seltzer, S.M. [National Inst. of Standards and Technology, Gaithersburg, MD (United States)

1995-07-01

247

Parallel Monte Carlo Electron and Photon Transport Simulation Code (PMCEPT code)

Simulations for customized cancer radiation treatment planning for each patient are very useful for both patient and doctor. These simulations can be used to find the most effective treatment with the least possible dose to the patient. This typical system, so called ``Doctor by Information Technology

Oyeon Kum

2004-01-01

248

Status of JAERI’s Monte Carlo Code MVP for Neutron and Photon Transport Problems

NASA Astrophysics Data System (ADS)

The special features of MVP are (1) vectorization and parallelization, (2) multiple lattice capability and a statistical geometry model, (3) the probability table method for unresolved resonances, (4) calculation at arbitrary temperatures, (5) depletion calculation, (6) perturbation calculation for eigenvalue (keff) problems, and so on.

Mori, T.; Okumura, K.; Nagaya, Y.

249

Unified single-photon and single-electron counting statistics: from cavity-QED to electron transport

A key ingredient of cavity quantum-electrodynamics (QED) is the coupling between the discrete energy levels of an atom and photons in a single-mode cavity. The addition of periodic ultra-short laser pulses allows one to use such a system as a source of single photons; a vital ingredient in quantum information and optical computing schemes. Here, we analyze and ``time-adjust'' the photon-counting statistics of such a single-photon source, and show that the photon statistics can be described by a simple `transport-like' non-equilibrium model. We then show that there is a one-to-one correspondence of this model to that of non-equilibrium transport of electrons through a double quantum dot nanostructure. Then we prove that the statistics of the tunnelling electrons is equivalent to the statistics of the emitted photons. This represents a unification of the fields of photon counting statistics and electron transport statistics. This correspondence empowers us to adapt several tools previously used for detecting quantum behavior in electron transport systems (e.g., super-Poissonian shot noise, and an extension of the Leggett-Garg inequality) to single-photon-source experiments.

Neill Lambert; Yueh-Nan Chen; Franco Nori

2010-08-26

250

NASA Astrophysics Data System (ADS)

Optical properties of flowing blood were analyzed using a photon-cell interactive Monte Carlo (pciMC) model with the physical properties of the flowing red blood cells (RBCs), such as cell size, shape, refractive index, distribution, and orientation, as the parameters. The scattering of light by flowing blood at the He-Ne laser wavelength of 632.8 nm was significantly affected by the shear rate. The light was scattered more in the direction of flow as the flow rate increased. Therefore, the light intensity transmitted forward in the direction perpendicular to the flow axis decreased. The pciMC model can duplicate the changes in photon propagation due to moving RBCs with various orientations. The RBC orientation that best simulated the experimental results was with the long axis perpendicular to the direction of blood flow. Moreover, the scattering probability was dependent on the orientation of the RBCs. Finally, the pciMC code was used to predict the hematocrit of flowing blood with an accuracy of approximately 1.0 HCT%. The photon-cell interactive Monte Carlo (pciMC) model can provide optical properties of flowing blood and will facilitate the development of non-invasive monitoring of blood in extracorporeal circulatory systems.
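A core step in any Monte Carlo photon-transport model of blood or tissue is sampling the scattering angle from an anisotropic phase function. As a generic illustration (the standard Henyey-Greenstein function often used for tissue and blood, not the pciMC model's cell-geometry-based scattering), the cosine of the deflection angle can be sampled in closed form:

```python
import numpy as np

def sample_hg_cos(g, n, rng):
    """Sample n scattering-angle cosines from the Henyey-Greenstein
    phase function with anisotropy factor g (g -> 1: strongly forward).
    Uses the standard inverse-CDF formula."""
    u = rng.random(n)
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0                       # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)

rng = np.random.default_rng(1)
mu = sample_hg_cos(0.98, 100_000, rng)   # strongly forward-peaked, as
print(mu.mean())                         # E[cos] = g, so mean near 0.98
```

The mean cosine equals g by construction, which is how strongly forward scattering by RBCs (the effect the abstract reports as flow-direction-biased scattering) is encoded in simpler models.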

Sakota, Daisuke; Takatani, Setsuo

2012-05-01

251

NASA Astrophysics Data System (ADS)

An Elekta SL-25 medical linear accelerator (Elekta Oncology Systems, Crawley, UK) has been modelled using Monte Carlo simulations with the photon flattening filter removed. It is hypothesized that intensity modulated radiation therapy (IMRT) treatments may be carried out after the removal of this component despite its criticality to standard treatments. Measurements using a scanning water phantom were also performed after the flattening filter had been removed. Both simulated and measured beam profiles showed that dose on the central axis increased, with the Monte Carlo simulations showing an increase by a factor of 2.35 for 6 MV and 4.18 for 10 MV beams. A further consequence of removing the flattening filter was the softening of the photon energy spectrum, leading to a steeper reduction in dose at depths greater than the depth of maximum dose. A comparison of points at the field edge showed that dose was reduced at these points by as much as 5.8% for larger fields. In conclusion, the greater photon fluence is expected to result in shorter treatment times, while the reduction in dose outside of the treatment field is strongly suggestive of more accurate dose delivery to the target.

Ishmael Parsai, E.; Pearson, David; Kvale, Thomas

2007-08-01

252

Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively-parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the calculation runs.

O'Brien, M J; Procassini, R J; Joy, K I

2009-03-09

253

NASA Astrophysics Data System (ADS)

This paper deals with verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe electric power sodium cooled reactor. Nuclear characteristics are calculated at beginning of cycle of an initial core and at beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.

Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi

2014-06-01

254

A Monte Carlo code for fast hydrogen atom transport and generation of excessively Doppler-broadened profiles based on the collision model is presented. Results for an initial monoenergetic atom beam and for a more realistic energy distribution of H atoms are reported. Line profiles obtained from the simulation are compared to our experimentally obtained data. The initial energy distribution of the atoms is approximated from the measured line profiles, while the initial angular distribution was taken to be cosine. The Balmer alpha intensity was found to decay exponentially in the negative glow region, which concurs with the experimental results. These agreements between the simulation and experiment support the collision model for excessive line broadening.

Cvetanovic, N. [Faculty of Transport and Traffic Engineering, University of Belgrade, Vojvode Stepe 305, Belgrade (Serbia); Obradovic, B. M.; Kuraica, M. M. [Faculty of Physics, University of Belgrade, P.O. Box 368, Belgrade (Serbia)

2009-02-15

255

Enhanced photon-assisted spin transport in a quantum dot attached to ferromagnetic leads

NASA Astrophysics Data System (ADS)

Time-dependent transport in quantum dot systems (QDs) has received significant attention due to a variety of new quantum physical phenomena emerging on transient time scales [1]. In the present work [2] we investigate the real-time dynamics of spin-polarized current in a quantum dot coupled to ferromagnetic leads in both parallel and antiparallel alignments. While an external bias voltage is taken constant in time, a gate terminal, capacitively coupled to the quantum dot, introduces a periodic modulation of the dot level. Using the nonequilibrium Green's function technique we find that spin-polarized electrons can tunnel through the system via additional photon-assisted transmission channels. Owing to a Zeeman splitting of the dot level, it is possible to select a particular spin component to be photon-transferred from the left to the right terminal, with spin-dependent current peaks arising at different gate frequencies. The ferromagnetic electrodes enhance or suppress the spin transport depending upon the leads' magnetization alignment. The tunnel magnetoresistance also attains negative values due to a photon-assisted inversion of the spin-valve effect. [1] F. M. Souza, Phys. Rev. B 76, 205315 (2007). [2] F. M. Souza, T. L. Carrara, and E. Vernek, Phys. Rev. B 84, 115322 (2011).

Souza, Fabricio M.; Carrara, Thiago L.; Vernek, Edson

2012-02-01

256

The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features, are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. 
The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and experimental verification of MC dose algorithms. As the MC method is an emerging technology, this report is not meant to be prescriptive. Rather, it is intended as a preliminary report to review the tenets of the MC method and to provide the framework upon which to build a comprehensive program for commissioning and routine quality assurance of MC-based treatment planning systems.
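Statistical uncertainty, listed above as a component unique to MC planning systems, is conventionally reported as the standard error of the mean over independent histories or batches, shrinking as 1/√N. A minimal batch-statistics sketch (a generic illustration with toy scores, not any vendor's tally implementation):

```python
import numpy as np

def tally_with_uncertainty(scores):
    """Mean score and its relative standard error from per-history (or
    per-batch) tally scores, the quantity MC dose engines report as the
    statistical uncertainty of a voxel dose."""
    n = len(scores)
    mean = scores.mean()
    sem = scores.std(ddof=1) / np.sqrt(n)   # standard error of the mean
    return mean, sem / mean                  # relative error ~ 1/sqrt(n)

rng = np.random.default_rng(2)
scores = rng.exponential(scale=1.0, size=40_000)  # toy per-history scores
mean, rel_err = tally_with_uncertainty(scores)
print(mean, rel_err)
```

The 1/√N scaling is why halving the reported uncertainty costs four times the histories, which is the trade-off driving the variance reduction techniques the report discusses.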

Chetty, Indrin J.; Curran, Bruce; Cygler, Joanna E.; DeMarco, John J.; Ezzell, Gary; Faddegon, Bruce A.; Kawrakow, Iwan; Keall, Paul J.; Liu, Helen; Ma, C.-M. Charlie; Rogers, D. W. O.; Seuntjens, Jan; Sheikh-Bagheri, Daryoush; Siebers, Jeffrey V. [University of Michigan, Ann Arbor, Michigan 48109 and University of Nebraska Medical Center, Omaha, Nebraska 68198-7521 (United States) and University of Michigan, Ann Arbor, Michigan 48109 (United States) and Ottawa Hospital Regional Cancer Center, Ottawa, Ontario K1H 1C4 (Canada); University of California, Los Angeles, Callifornia 90095 (United States) and Mayo Clinic Scottsdale, Scottsdale, Arizona 85259 (United States) and University of California, San Francisco, California 94143 (United States); National Research Council of Canada, Ottawa, Ontario K1A 0R6 (Canada); Stanford University Cancer Center, Stanford, California 94305-5847 (United States); University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States); Carleton University, Ottawa, Ontario K1S 5B6 (Canada); McGill University, Montreal, Quebec H3G 1A4 (Canada); Regional Cancer Center, Erie, Pennsylvania 16505 (United States); Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

2007-12-15

257

We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady-state reactors. We use a finite difference upwinding scheme and a gap-tooth scheme to make efficient use of a limited number of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation, so that unphysical accumulation of mass could occur in the reactor. We have therefore developed a mass balance correction based on a stoichiometry matrix and a least-squares problem that reduces to a non-singular set of linear equations; it is applicable to any surface-catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation, in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interactions at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from the literature.
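The mass-balance correction described above amounts to projecting noisy stochastic rates onto the linear conservation constraints. A minimal sketch of such a minimum-norm least-squares correction (a hypothetical two-rate unimolecular example, not the authors' code):

```python
import numpy as np

def mass_balance_correct(r, A, b):
    """Smallest (least-squares) correction to the rate vector r so that
    the linear conservation constraints A @ (r + delta) = b hold exactly.
    Requires A @ A.T to be non-singular."""
    residual = b - A @ r
    delta = A.T @ np.linalg.solve(A @ A.T, residual)  # minimum-norm fix
    return r + delta

# Toy unimolecular reaction A -> B: consumption of A must equal
# production of B, giving one conservation row.
A = np.array([[1.0, -1.0]])        # r_consumed_A - r_produced_B = 0
b = np.array([0.0])
r_noisy = np.array([1.05, 0.97])   # noisy stochastic kMC rate estimates
r = mass_balance_correct(r_noisy, A, b)
print(r)                            # both rates move to 1.01
```

Splitting the mismatch symmetrically (both rates corrected to 1.01) is exactly what the minimum-norm solution does; with more species and reactions the same formula distributes the correction according to the stoichiometry matrix.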

Schaefer, C.; Jansen, A. P. J. [Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands)

2013-02-07

258

The paper presents a new simple and accurate numerical field-line mapping technique providing a high-quality representation of field lines as required by a Monte Carlo modeling of plasma edge transport in the complex magnetic boundaries of three-dimensional (3D) toroidal fusion devices. Using a toroidal sequence of precomputed 3D finite flux-tube meshes, the method advances field lines through a simple bilinear, forward/backward symmetric interpolation at the interfaces between two adjacent flux tubes. It is a reversible field-line mapping (RFLM) algorithm ensuring a continuous and unique reconstruction of field lines at any point of the 3D boundary. The reversibility property has a strong impact on the efficiency of modeling the highly anisotropic plasma edge transport in general closed or open configurations of arbitrary ergodicity as it avoids artificial cross-field diffusion of the fast parallel transport. For stellarator-symmetric magnetic configurations, which are the standard case for stellarators, the reversibility additionally provides an average cancellation of the radial interpolation errors of field lines circulating around closed magnetic flux surfaces. The RFLM technique has been implemented in the 3D edge transport code EMC3-EIRENE and is used routinely for plasma transport modeling in the boundaries of several low-shear and high-shear stellarators as well as in the boundary of a tokamak with 3D magnetic edge perturbations.
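The round-trip property at the heart of the reversible mapping can be illustrated with a toy bilinear cell map: advancing a field line forward through an interface and then inverting that same interpolation recovers the starting point to machine precision, so no artificial cross-field spreading accumulates. This is only a sketch of the property, not the actual RFLM scheme (which uses a forward/backward symmetric interpolation rather than a Newton inversion), and the mesh-cell corners are hypothetical.

```python
import numpy as np

# Hypothetical mapped corners of one flux-tube mesh cell at the next interface.
corners = np.array([[0.0, 0.0], [1.1, 0.1], [0.2, 1.0], [1.3, 1.2]])

def forward(u, v):
    """Bilinear map of local cell coordinates (u, v) in [0,1]^2 to the next interface."""
    c00, c10, c01, c11 = corners
    return (1-u)*(1-v)*c00 + u*(1-v)*c10 + (1-u)*v*c01 + u*v*c11

def backward(p, u0=0.5, v0=0.5, tol=1e-12):
    """Invert the bilinear map by Newton iteration (the 'backward' half of the round trip)."""
    c00, c10, c01, c11 = corners
    u, v = u0, v0
    for _ in range(50):
        r = forward(u, v) - p
        if np.linalg.norm(r) < tol:
            break
        # Jacobian of the bilinear map with respect to (u, v), column per coordinate
        J = np.column_stack([-(1-v)*c00 + (1-v)*c10 - v*c01 + v*c11,
                             -(1-u)*c00 - u*c10 + (1-u)*c01 + u*c11])
        du, dv = np.linalg.solve(J, -r)
        u, v = u + du, v + dv
    return u, v

# Round trip: advancing a field line forward and then backward recovers it.
u, v = 0.3, 0.7
p = forward(u, v)
u2, v2 = backward(p)
```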

Feng, Y.; Sardei, F.; Kisslinger, J. [Max-Planck-Institut fuer Plasmaphysik, Teilinstitut Greifswald, Euratom Association, D-17491 Greifswald (Germany); Max-Planck-Institut fuer Plasmaphysik, Euratom Association, D-85748 Garching (Germany)

2005-05-15

259

NASA Astrophysics Data System (ADS)

The Monte Carlo technique is applied to simulate the processes of cascade relaxation of gaseous boron at an atomic density of 2.5 × 10²² m⁻³ ionized by photons with energies of 0.7-25 Ryd passing through a cylindrical interaction zone along its axis. The trajectories of electrons are simulated based on photoionization and electron-impact ionization cross sections calculated in the one-electron configuration-average Pauli-Fock approximation. The numbers of electrons and photons leaving the interaction zone per initial photoionization, their energy spectra, the energy transferred to the medium and the probabilities of final ion formation are shown to change noticeably as the incident photon energy is scanned through the boron atom ionization thresholds. These variations can be explained only if secondary electron-impact-produced processes are considered. The density of secondary events decreases when going from the zone axis to its border, and the profiles of the density along the radial direction are found to be similar for all the initial exciting photon energies.

Brühl, S.; Kochur, A. G.

2012-07-01

260

We generalize a simple Monte Carlo (MC) model for dilute gases to consider the transport behavior of positrons and electrons in Percus-Yevick model liquids under highly non-equilibrium conditions, accounting rigorously for coherent scattering processes. The procedure extends an existing technique [Wojcik and Tachiya, Chem. Phys. Lett. 363, 3--4 (1992)], using the static structure factor to account for the altered anisotropy of coherent scattering in structured material. We identify the effects of the approximation used in the original method, and develop a modified method that does not require that approximation. We also present an enhanced MC technique that has been designed to improve the accuracy and flexibility of simulations in spatially-varying electric fields. All of the results are found to be in excellent agreement with an independent multi-term Boltzmann equation solution, providing benchmarks for future transport models in liquids and structured systems.
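The key modification this abstract describes, re-weighting the single-scattering angular distribution by the static structure factor S(q) with momentum transfer q = 2k sin(θ/2), can be sketched by rejection sampling. The structure factor below is an illustrative analytic shape (suppressed at small q, peaked near the first coordination shell), not a Percus-Yevick solution, and the wavenumber value is arbitrary; this is not the authors' code.

```python
import math
import random

def S(q):
    # Illustrative structure factor: suppressed at small q, peaked near q ~ 6
    # (in units of the inverse particle diameter); bounded above by 2.
    return 1.0 + 0.8 * math.exp(-((q - 6.0) ** 2) / 2.0) - 0.9 * math.exp(-q ** 2 / 4.0)

def sample_angle(k, rng):
    """Rejection-sample theta from p(theta) proportional to sin(theta) * S(2k sin(theta/2))."""
    s_max = 2.0  # safe upper bound on S(q) for the illustrative form above
    while True:
        theta = math.acos(1.0 - 2.0 * rng.random())  # isotropic proposal
        q = 2.0 * k * math.sin(theta / 2.0)
        if rng.random() * s_max < S(q):
            return theta

rng = random.Random(1)
k = 4.0  # wavenumber in units of the inverse particle diameter (illustrative)
angles = [sample_angle(k, rng) for _ in range(20000)]
mean_cos = sum(math.cos(t) for t in angles) / len(angles)
# Small-q (forward) scattering is suppressed by S(q) here, so the sampled
# distribution is pushed toward larger angles than the isotropic proposal.
```

This is the sense in which coherent scattering "alters the anisotropy": the gas-phase cross section is unchanged, but the medium's structure re-weights which momentum transfers are allowed.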

Tattersall, W J; Boyle, G J; White, R D

2015-01-01

261

NASA Astrophysics Data System (ADS)

In this work, we have used semi-classical Monte Carlo simulations to model spin transport in trilayer graphene (TLG) with ABA as well as ABC stacking. We have taken into consideration both the D'yakonov-Perel' (DP) and Elliott-Yafet (EY) mechanisms of spin relaxation for modeling purposes. The two different stacking orders, ABA and ABC, have different band structures, and we have studied the effect of the change in band structure on spin transport. Further, we have compared these results with bilayer graphene and single layer graphene and tried to explain the differences in the spin relaxation lengths in terms of band structure. We observe that TLG with ABC stacking exhibits a significantly higher spin relaxation length than TLG with ABA stacking.

Ghosh, Bahniman; Misra, Soumya

2012-10-01

262

for predicting molecule-specific ionization, excitation, and scattering cross sections in the very low energy regime that can be applied in a condensed history Monte Carlo track-structure code. The present methodology begins with the calculation of a solution...

Madsen, Jonathan R

2013-08-13

263

Use of single scatter electron Monte Carlo transport for medical radiation sciences

The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.

Svatos, Michelle M. (Oakland, CA)

2001-01-01

264

NASA Astrophysics Data System (ADS)

We present a perturbative approach to derive the semiclassical equations of motion for the two-dimensional electron dynamics under the simultaneous presence of static electric and magnetic fields, where the quantized Hall conductance is known to be directly related to the topological properties of translationally invariant magnetic Bloch bands. In close analogy to this approach, we develop a perturbative theory of two-dimensional photonic transport in gyrotropic photonic crystals to mimic the physics of quantum Hall systems. We show that a suitable permittivity grading of a gyrotropic photonic crystal is able to simulate the simultaneous presence of analog electric and magnetic field forces for photons, and we rigorously derive the topology-related term in the equation for the electromagnetic energy velocity that is formally equivalent to the electronic case. A possible experimental configuration is proposed to observe a bulk photonic analog to the quantum Hall physics in graded gyromagnetic photonic crystals.

Esposito, Luca; Gerace, Dario

2013-07-01

265

For the evaluation of gamma-ray dose rates around duct penetrations after shutdown of a nuclear fusion reactor, a calculation method is proposed based on coupled Monte Carlo neutron and decay gamma-ray transport calculations. For the radioisotope production rates during operation, the Monte Carlo calculation is conducted by modifying the nuclear data library, replacing a prompt gamma-ray

Sato, Satoshi; Iida, Hiromasa; Nishitani, Takeo

2002-01-01

266

The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results, have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application. PMID:23877204

Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian

2013-08-21

267

Scanning microphotolysis is a method that permits the user to select, within the scanning field of a confocal microscope, areas of arbitrary geometry for photobleaching or photoactivation. Two-photon absorption, by contrast, confers on laser scanning microscopy a true spatial selectivity by restricting excitation to very small focal volumes. In the present study the two methods were combined by complementing a laser scanning microscope with both a fast programmable optical switch and a titanium-sapphire laser. The efficiency and accuracy of fluorescence photobleaching induced by two-photon absorption were determined using fluorescein-containing polyacrylamide gels. Under optimal conditions a single scan was sufficient to reduce the gel fluorescence by approximately 40%. Under these conditions the spatial accuracy of photobleaching was 0.5 +/- 0.1 micron in the lateral (x, y) and 3.5 +/- 0.5 micron in the axial (z) direction, without deconvolution accounting for the optical resolution. Deconvolution improved the accuracy values by approximately 30%. The method was applied to write complex three-dimensional patterns into thick gels by successively scanning many closely spaced layers, each according to an individual image mask. Membrane transport was studied in a model tissue consisting of human erythrocyte ghosts carrying large transmembrane pores and packed into three-dimensional arrays. Upon equilibration with a fluorescent transport substrate single ghosts could be selectively photobleached and the influx of fresh transport substrate be monitored. The results suggest that two-photon scanning microphotolysis provides new possibilities for the optical analysis and manipulation of both technical and biological microsystems. PMID:8801360

Kubitscheck, U; Tschödrich-Rotter, M; Wedekind, P; Peters, R

1996-06-01

268

The relationships between D, K and Kcol are of fundamental importance in radiation dosimetry. These relationships are critically influenced by secondary electron transport, which makes Monte-Carlo (MC) simulation indispensable; we have used MC codes DOSRZnrc and FLURZnrc. Computations of the ratios D/K and D/Kcol in three materials (water, aluminum and copper) for large field sizes with energies from 50 keV to 25 MeV (including 6-15 MV) are presented. Beyond the depth of maximum dose D/K is almost always less than or equal to unity and D/Kcol greater than unity, and these ratios are virtually constant with increasing depth. The difference between K and Kcol increases with energy and with the atomic number of the irradiated materials. D/K in 'sub-equilibrium' small megavoltage photon fields decreases rapidly with decreasing field size. A simple analytical expression for X̄, the distance 'upstream' from a given voxel to the mean origin of the secondary electrons depositing their energy in this voxel, is proposed: X̄_emp ≈ 0.5 R_csda(Ē₀), where Ē₀ is the mean initial secondary electron energy. These X̄_emp agree well with 'exact' MC-derived values for photon energies from 5-25 MeV for water and aluminum. An analytical expression for D/K is also presented and evaluated for 50 keV-25 MeV photons in the three materials, showing close agreement with the MC-derived values. PMID:25548933

Kumar, Sudhir; Deshpande, Deepak D; Nahum, Alan E

2015-01-21

269

NASA Astrophysics Data System (ADS)

The relationships between D, K and Kcol are of fundamental importance in radiation dosimetry. These relationships are critically influenced by secondary electron transport, which makes Monte-Carlo (MC) simulation indispensable; we have used MC codes DOSRZnrc and FLURZnrc. Computations of the ratios D/K and D/Kcol in three materials (water, aluminum and copper) for large field sizes with energies from 50 keV to 25 MeV (including 6–15 MV) are presented. Beyond the depth of maximum dose D/K is almost always less than or equal to unity and D/Kcol greater than unity, and these ratios are virtually constant with increasing depth. The difference between K and Kcol increases with energy and with the atomic number of the irradiated materials. D/K in ‘sub-equilibrium’ small megavoltage photon fields decreases rapidly with decreasing field size. A simple analytical expression for X̄, the distance ‘upstream’ from a given voxel to the mean origin of the secondary electrons depositing their energy in this voxel, is proposed: X̄_emp ≈ 0.5 R_csda(Ē₀), where Ē₀ is the mean initial secondary electron energy. These X̄_emp agree well with ‘exact’ MC-derived values for photon energies from 5–25 MeV for water and aluminum. An analytical expression for D/K is also presented and evaluated for 50 keV–25 MeV photons in the three materials, showing close agreement with the MC-derived values.
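The proposed estimate X̄_emp ≈ 0.5 R_csda(Ē₀) is a one-line rule of thumb; a minimal sketch follows, using an illustrative range value (roughly the CSDA range of a ~5 MeV electron in water). Tabulated stopping-power data should be used for real work.

```python
# Sketch of the analytical estimate quoted above: the mean 'upstream' origin
# distance of the secondary electrons depositing energy in a voxel is about
# half the CSDA range evaluated at the mean initial secondary-electron energy.

def x_bar_emp(r_csda: float) -> float:
    """X_emp ~ 0.5 * R_csda(E0_bar); result is in the same length unit as r_csda."""
    return 0.5 * r_csda

# Illustrative value: the CSDA range of a ~5 MeV electron in water is roughly 2.5 cm.
x_bar = x_bar_emp(2.5)  # -> 1.25 cm
```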

Kumar, Sudhir; Deshpande, Deepak D.; Nahum, Alan E.

2015-01-01

270

Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd₂O₂S:Tb screens; (ii) investigate the impact of size-distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler and novel geometrical optics-based models for these parameters and compare to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Q_sct), absorption efficiency (Q_abs), and the scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 μm. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd₂O₂S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size-distribution (σ = 1%, 5%, or 10% of mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.).
Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size and emission wavelength. For a phosphor screen structure with a distribution in grain sizes and a spectrum of emission, only the average trend of Mie theory is likely to be important. This average behavior is well predicted by the more sophisticated of the geometrical optics models (GODM+) and in approximate agreement for the simplest (GODM). The root-mean-square differences obtained between predicted MTF and experimental measurements, using all three models (GODM, GODM+, Mie), were within 0.03 for both Lanex screens in all cases. This is excellent agreement in view of the uncertainties in screen composition and optical properties. Conclusions: If Mie theory is used for calculating transport parameters for light scattering and absorption in powdered-phosphor screens, care should be taken to average out the fine-structure in the parameter predictions. However, for visible emission wavelengths (λ < 1.0 μm) and grain radii (a > 0.5 μm), geometrical optics models for transport parameters are an alternative to Mie theory. These geometrical optics models are simpler and lead to no substantial loss in accuracy.
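The large-grain regime invoked above can be illustrated with van de Hulst's anomalous diffraction approximation for a large, weakly refracting sphere: the extinction efficiency oscillates about and approaches the limit of 2 as the size parameter grows, while full Mie theory superimposes additional fine structure on this trend. This is a generic textbook sketch, not the paper's GODM/GODM+ models, and the wavelength and relative refractive index below are illustrative.

```python
import math

def q_ext_adt(radius_um, wavelength_um, m=1.1):
    """Anomalous diffraction approximation for the extinction efficiency of a sphere.

    Valid for size parameter x >> 1 and relative refractive index m close to 1:
    Q_ext = 2 - (4/rho) sin(rho) + (4/rho^2)(1 - cos(rho)), rho = 2 x (m - 1).
    """
    x = 2.0 * math.pi * radius_um / wavelength_um   # size parameter
    rho = 2.0 * x * (m - 1.0)                       # phase-shift parameter
    if rho == 0.0:
        return 0.0
    return 2.0 - (4.0 / rho) * math.sin(rho) + (4.0 / rho ** 2) * (1.0 - math.cos(rho))

# For a micron-scale grain at a visible wavelength, Q_ext oscillates around 2;
# for a very large grain it converges to the 'extinction paradox' limit of 2.
q_small = q_ext_adt(2.0, 0.545)    # 2 um grain, green emission (illustrative)
q_large = q_ext_adt(200.0, 0.545)  # near the large-particle limit
```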

Poludniowski, Gavin G. [Joint Department of Physics, Division of Radiotherapy and Imaging, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Downs Road, Sutton, Surrey SM2 5PT, United Kingdom and Centre for Vision Speech and Signal Processing (CVSSP), Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, Surrey GU2 7XH (United Kingdom); Evans, Philip M. [Centre for Vision Speech and Signal Processing (CVSSP), Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, Surrey GU2 7XH (United Kingdom)

2013-04-15

271

Purpose: The purpose of this work was to evaluate the absorbed dose to an Al₂O₃ dosimeter at various depths of a water phantom in radiotherapy photon beams by Monte Carlo simulation and to evaluate the beam quality dependence. Methods: The simulations were done using EGSnrc. The cylindrical Al₂O₃ dosimeter (Φ4 mm × 1 mm) was placed at the central axis of the water phantom (Φ16 cm × 16 cm) at depths between 0.5 and 8 cm. The incident beams included monoenergetic photon beams ranging from 1 to 18 MeV, ⁶⁰Co γ beams, Varian 6 MV beams using phase space files based on a full simulation of the linac, and Varian beams between 4 and 24 MV using Mohan's spectra. The absorbed dose to the dosimeter and to the water at the corresponding position in the absence of the dosimeter, as well as the absorbed dose ratio factor f_md, was calculated. Results: The results show that f_md depends markedly on the photon energy at shallow depths. However, as the depth increases, the change in f_md becomes small; beyond the buildup region, the maximum discrepancy of f_md from the average value is not more than 1%. Conclusions: These simulation results confirm the use of the Al₂O₃ dosimeter in radiotherapy photon beams and clearly indicate that more attention should be paid when using such a dosimeter in the buildup region of high-energy radiotherapy photon beams.

Chen Shaowen; Wang Xuetao; Chen Lixin; Tang Qiang; Liu Xiaowei [School of Physics Science and Engineering, Sun Yat-Sen University, Guangzhou 510275 (China) and School of Electron Engineering, Dongguan University of Technology, Dongguan 523808 (China); Guangdong Province Hospital of TCM, Guangzhou 510120 (China); Cancer Center of Sun Yat-Sen University, Guangzhou 510060 (China); School of Physics Science and Engineering, Sun Yat-Sen University, Guangzhou 510275 (China)

2009-10-15

272

NSDL National Science Digital Library

In this activity using an open space and a thick rope, students simulate the movement of photons from the Sun. The resource is part of the teacher's guide accompanying the video, NASA Why Files: The Case of the Mysterious Red Light. Lesson objectives supported by the video, additional resources, teaching tips and an answer sheet are included in the teacher's guide.

2012-08-03

273

The potential use of lead and tungsten pinhole inserts for high-resolution SPECT imaging of intratumor activity in I-131 radioimmunotherapy was investigated using experimental point source measurements and photon transport simulations. I-131 imaging is challenging because the primary photon emission is at 364 keV and penetration through the insert near the pinhole aperture is significant. Point source response functions (PSRFs) for

Mark F. Smith; Ronald J. Jaszczak; Huili Wang; Jianying Li

1997-01-01

274

Monte Carlo simulation studies of spin transport in graphene armchair nanoribbons

NASA Astrophysics Data System (ADS)

The research in the area of spintronics is gaining momentum due to the promise spintronics-based devices have shown. Since the spin degree of freedom of an electron is used to store and process information, spintronics can provide numerous advantages over conventional electronics by providing new functionalities. In this article, we study spin relaxation in graphene nanoribbons (GNR) of armchair type by employing a semiclassical Monte Carlo approach. D'yakonov-Perel' relaxation due to structural inversion asymmetry (Rashba spin-orbit coupling) and Elliott-Yafet (EY) relaxation cause spin dephasing in armchair graphene nanoribbons. We investigate spin relaxation in α-, β- and γ-armchair GNR with varying width and temperature.

Salimath, Akshay Kumar; Ghosh, Bahniman

2014-10-01

275

NASA Technical Reports Server (NTRS)

Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. Future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of doses, such as those from the secondaries and heavy particle recoils, are obtained between BRYNTRN and Monte Carlo results.

Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.

1990-01-01

276

Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. Future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of doses, such as those from the secondaries and heavy particle recoils, are obtained between BRYNTRN and Monte Carlo results.

Shinn, J.L.; Wilson, J.W.; Nealy, J.E.; Cucinotta, F.A.

1990-10-01

277

Monte Carlo at Work, by Gary D. Doolen and John Hendricks. Every second nearly 10,000,000,000 "random" numbers are being generated on computers around the world for Monte Carlo solutions to problems ... hundreds of full-time careers invested in the fine art of generating Monte Carlo solutions -- a livelihood

278

Simulation by Monte Carlo of the X ray Transport in PIN Type a-Si:H Radiation Detectors

NASA Astrophysics Data System (ADS)

Low energy X rays, as usually employed in mammography, were transported in hydrogenated amorphous silicon PIN diodes using the MCNP-4C system code based on Monte Carlo simulation. The deposited energy distribution in these devices, useful as radiation detectors in medical imaging applications, was evaluated for 7-50 μm thick intrinsic layers. The energy spectrum at different depths of the intrinsic layer shows a linear increase of the deposited energy, and near the metal electrodes this increase is higher by an order of magnitude. The influence of the material and geometry of the top electrode on the energy deposited inside the intrinsic layer, as well as the effect of the addition of the passivation layer, are analysed in the text.

Shtejer, K.; Leyva, A.; Cruz, C.; Moreira, L.

2004-09-01

279

Monte Carlo Study of Thermal Transport of Frequency and Direction Dependent Reflecting

Starr [1] found that copper/cuprous oxide systems showed thermal as well as electrical rectification ... the carbon and boron nitride nanotubes. Through phonon transport simulations we provide theoretical ...

Walker, D. Greg

280

Several investigators have pointed out that electron and neutron contamination from high-energy photon beams is clinically important. The aim of this study is to assess electron and neutron contamination production by various prostheses in a high-energy photon beam of a medical linac. A 15 MV Siemens PRIMUS linac was simulated with the MCNPX Monte Carlo (MC) code, and the resulting percentage depth dose (PDD) and dose profile values were compared with the measured data. Electron and neutron contamination was calculated on the beam's central axis for Co-Cr-Mo, stainless steel, Ti-alloy, and Ti hip prostheses through MC simulations. The dose increase factor (DIF) was defined as the ratio of the electron (neutron) dose at a point for a 10 × 10 cm² field size in the presence of a prosthesis to that at the same point in its absence. DIF was estimated at different depths in a water phantom. Our MC-calculated PDD and dose profile data are in good agreement with the corresponding measured values. The maximum dose increase factors for electron contamination for Co-Cr-Mo, stainless steel, Ti-alloy, and Ti prostheses were 1.18, 1.16, 1.16, and 1.14, respectively. The corresponding values for neutron contamination were, respectively, 184.55, 137.33, 40.66, and 43.17. Titanium-based prostheses are recommended for the orthopedic practice of hip joint replacement. When treatment planning for a patient with a hip prosthesis is performed for a high-energy photon beam, an attempt should be made to ensure that the prosthesis is not exposed to primary photons. PMID:24036859
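The dose increase factor defined in this abstract is a simple ratio, which a short sketch makes concrete. The dose values below are illustrative placeholders, not the paper's Monte Carlo results.

```python
# Dose increase factor (DIF): the contaminant (electron or neutron) dose at a
# point with the prosthesis in place, divided by the dose at the same point
# without it, for the same field size.

def dose_increase_factor(dose_with_prosthesis: float, dose_without: float) -> float:
    if dose_without <= 0.0:
        raise ValueError("reference dose must be positive")
    return dose_with_prosthesis / dose_without

# e.g. an electron contamination dose rising from 2.0e-17 to 2.36e-17 Gy per
# source particle would give DIF = 1.18 (magnitudes are illustrative only).
dif = dose_increase_factor(2.36e-17, 2.0e-17)
```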

Bahreyni Toossi, Mohammad Taghi; Behmadi, Marziyeh; Ghorbani, Mahdi; Gholamhosseinian, Hamid

2013-01-01

281

NASA Astrophysics Data System (ADS)

Depending on the location and depth of the tumor, either electron or photon beams may be used for treatment. Electron beams have some advantages over photon beams for treating shallow tumors, sparing the normal tissues beyond the tumor; photon beams, on the other hand, are used for treating deep targets. Both of these beams have some limitations, for example the dependence of the penumbra on depth and the lack of lateral equilibrium for small electron beam fields. First, we simulated the conventional head configuration of the Varian 2300 for 16 MeV electrons, and the results were validated by benchmarking the Percent Depth Dose (PDD) and profile of the simulation against measurement. In the next step, a perforated lead (Pb) sheet of 1 mm thickness was placed at the top of the applicator holder tray. This layer produces bremsstrahlung x-rays while a portion of the electrons passes through the holes; as a result, we obtain a simultaneous mixed electron and photon beam. To make the irradiation field uniform, a layer of steel was placed after the Pb layer. The simulation was performed for 10×10 and 4×4 cm² field sizes. This study showed the advantages of mixing the electron and photon beams: the dependence of the pure electron penumbra on depth is reduced, especially for small fields, and the dramatic changes of the PDD curve with irradiation field size are decreased.

Khledi, Navid; Arbabi, Azim; Sardari, Dariush; Mohammadi, Mohammad; Ameri, Ahmad

2015-02-01

282

Monte Carlo simulation on the neutron transport of the Chiang Mai TOF-Facility

Fast neutrons are employed in many nuclear techniques used by science and technology. Usually fast neutrons are generated in a nuclear reaction process using accelerated light ions. After creation, the neutrons may scatter inside the target, the target holder or the beam pipe. This background of neutrons, which have scattered in the target surroundings, is usually neglected when working with a neutron beam, and a Gaussian energy distribution is used for further calculations. Using the Monte Carlo method we attempted to determine the magnitude of this background of scattered neutrons. In a further investigation we simulated the complete environment of the Fast Neutron Research Facility experimental hall and the collimating system of our time-of-flight (TOF) spectrometer to obtain an estimate of the quality of the collimating system. The data obtained are presented and compared with some recent experimental results.

Kobus, H.; Vilaithong, T.; Pairsuwan, W.; Singkarat, S. [Chiang Mai Univ. (Thailand)

1994-12-31

283

Recently, a pump beam size dependence of thermal conductivity was observed in Si at cryogenic temperatures using time-domain thermal reflectance (TDTR). These observations were attributed to quasiballistic phonon transport, but the interpretation of the measurements has been semi-empirical. Here, we present a numerical study of the heat conduction that occurs in the full 3D geometry of a TDTR experiment, including an interface, using the Boltzmann transport equation. We identify the radial suppression function that describes the suppression in heat flux, compared to Fourier's law, that occurs due to quasiballistic transport and demonstrate good agreement with experimental data. We also discuss unresolved discrepancies that are important topics for future study.

Ding, D.; Chen, X.; Minnich, A. J., E-mail: aminnich@caltech.edu [Division of Engineering and Applied Science, California Institute of Technology, Pasadena, California 91125 (United States)

2014-04-07

284

NASA Astrophysics Data System (ADS)

Recently, a pump beam size dependence of thermal conductivity was observed in Si at cryogenic temperatures using time-domain thermal reflectance (TDTR). These observations were attributed to quasiballistic phonon transport, but the interpretation of the measurements has been semi-empirical. Here, we present a numerical study of the heat conduction that occurs in the full 3D geometry of a TDTR experiment, including an interface, using the Boltzmann transport equation. We identify the radial suppression function that describes the suppression in heat flux, compared to Fourier's law, that occurs due to quasiballistic transport and demonstrate good agreement with experimental data. We also discuss unresolved discrepancies that are important topics for future study.

Ding, D.; Chen, X.; Minnich, A. J.

2014-04-01

285

Characterization of photonic bandgap fiber for high-power narrow-linewidth optical transport

NASA Astrophysics Data System (ADS)

An investigation of the use of hollow-core photonic bandgap (PBG) fiber to transport high-power narrow-linewidth light is performed. In conventional fiber the main limitation in this case is stimulated Brillouin scattering (SBS) but in PBG fiber the overlap between the optical intensity and the silica that hosts the acoustic phonons is reduced. In this paper we show this should increase the SBS threshold to the multi-kW level even when including the non-linear interaction with the air in the core. A full model and experimental measurement of the SBS spectra is presented, including back-scatter into other optical modes besides the fundamental, and some of the issues of coupling high power into hollow-core fibers are discussed.

Bennett, Charlotte R.; Jones, David C.; Smith, Mark A.; Scott, Andrew M.; Lyngsoe, Jens K.; Jakobsen, Christian

2014-03-01

286

Galerkin-based meshless methods for photon transport in the biological tissue.

As an important small animal imaging technique, optical imaging has attracted increasing attention in recent years. However, the photon propagation process is extremely complicated owing to the highly scattering nature of biological tissue. Furthermore, the light transport simulation in tissue has a significant influence on inverse source reconstruction. In this contribution, we present two Galerkin-based meshless methods (GBMM) to determine the light exitance on the surface of diffusive tissue. Both methods are based on moving least squares (MLS) approximation, which requires only a series of nodes in the region of interest, so the complicated meshing task required by the finite element method (FEM) can be avoided. Moreover, in one of the methods the MLS shape functions are further modified to satisfy the delta function property, which simplifies the treatment of boundary conditions in comparison with the other. Finally, the performance of the proposed methods is demonstrated with numerical and physical phantom experiments. PMID:19065170
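The MLS approximation underlying both methods builds shape functions from a weighted moving least-squares fit over nearby nodes. A minimal 1D sketch (node layout, weight function and support size are illustrative assumptions, not the paper's choices):

```python
import numpy as np

def mls_shape_functions(x_eval, nodes, support=1.0):
    """1D MLS shape functions with linear basis p = [1, x] and a Gaussian weight.

    Returns phi such that u_h(x_eval) = phi @ u_nodes. A sketch of the MLS
    approximation the meshless methods build on; the Gaussian weight and
    support radius are assumed choices.
    """
    d = np.abs(nodes - x_eval) / support
    w = np.exp(-(d / 0.4) ** 2) * (d <= 1.0)           # compactly supported weight
    P = np.column_stack([np.ones_like(nodes), nodes])  # basis evaluated at the nodes
    A = P.T @ (w[:, None] * P)                         # moment matrix
    p_x = np.array([1.0, x_eval])
    return (w[:, None] * P) @ np.linalg.solve(A, p_x)  # phi_i(x_eval)

nodes = np.linspace(0.0, 1.0, 11)
phi = mls_shape_functions(0.37, nodes, support=0.35)
```

By construction the shape functions form a partition of unity and reproduce linear fields exactly, which is the consistency property the Galerkin formulation relies on; note they do not satisfy the delta property at the nodes, which is why one of the two methods modifies them.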

Qin, Chenghu; Tian, Jie; Yang, Xin; Liu, Kai; Yan, Guorui; Feng, Jinchao; Lv, Yujie; Xu, Min

2008-12-01

287

Carrier transport through a dry-etched InP-based two-dimensional photonic crystal

The electrical conduction across a two-dimensional photonic crystal (PhC) fabricated by Ar/Cl₂ chemically assisted ion beam etching in n-doped InP is influenced by the surface potential of the hole sidewalls, modified by dry etching. Carrier transport across photonic crystal fields with different lattice parameters is investigated. For a given lattice period the PhC resistivity increases with the air fill factor and for a given air fill factor it increases as the lattice period is reduced. The measured current-voltage characteristics show clear ohmic behavior at lower voltages followed by current saturation at higher voltages. This behavior is confirmed by finite element ISE TCAD™ simulations. The observed current saturation is attributed to electric-field-induced saturation of the electron drift velocity. From the measured and simulated conductance for the different PhC fields we show that it is possible to determine the sidewall depletion region width and hence the surface potential. We find that at the hole sidewalls the etching induces a Fermi level pinning at about 0.12 eV below the conduction band edge, a value much lower than the bare InP surface potential. The results indicate that for n-InP the volume available for conduction in the etched PhCs approaches the geometrically defined volume as the doping is increased.

Berrier, A.; Mulot, M.; Malm, G.; Oestling, M.; Anand, S. [Department of Microelectronics and Applied Physics, Royal Institute of Technology, S-16440 Kista (Sweden)

2007-06-15

288

As a widely used numerical solution for the radiation transport equation (RTE), the discrete ordinates can predict the propagation of photons through biological tissues more accurately relative to the diffusion equation. The discrete ordinates reduce the RTE to a series of differential equations that can be solved by source iteration (SI). However, the tremendous time consumption of SI, which is partly caused by the expensive computation of each SI step, limits its applications. In this paper, we present a graphics processing unit (GPU) parallel accelerated SI method for discrete ordinates. Utilizing the calculation independence on the levels of the discrete ordinate equation and spatial element, the proposed method reduces the time cost of each SI step by parallel calculation. The photon reflection at the boundary was calculated based on the results of the last SI step to ensure the calculation independence on the level of the discrete ordinate equation. An element sweeping strategy was proposed to detect the calculation independence on the level of the spatial element. A GPU parallel frame called the compute unified device architecture was employed to carry out the parallel computation. The simulation experiments, which were carried out with a cylindrical phantom and numerical mouse, indicated that the time cost of each SI step can be reduced by up to a factor of 228 by the proposed method with a GTX 260 graphics card. PMID:21772362
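The serial baseline being accelerated can be sketched in a few lines for a 1D slab: every SI step freezes the scattering source, sweeps all ordinates and cells, and updates the scalar flux. This is a generic diamond-difference sketch under assumed cross sections, not the paper's 3D GPU implementation; the two inner loops are exactly the per-step work that a GPU can parallelise over ordinates and elements.

```python
import numpy as np

def source_iteration(nx=50, width=4.0, sig_t=1.0, sig_s=0.8, q_ext=1.0,
                     n_ang=8, tol=1e-8, max_it=500):
    """Serial source iteration for a 1D slab, discrete ordinates, diamond difference.

    Vacuum boundaries, isotropic scattering; all parameters are illustrative.
    Returns the converged scalar flux and the number of SI steps taken.
    """
    dx = width / nx
    mu, w = np.polynomial.legendre.leggauss(n_ang)    # ordinates and weights, sum(w) = 2
    phi = np.zeros(nx)
    for it in range(1, max_it + 1):
        q = 0.5 * (sig_s * phi + q_ext)               # frozen isotropic source
        phi_new = np.zeros(nx)
        for m in range(n_ang):                        # loop over ordinates
            am = abs(mu[m])
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            psi_in = 0.0                              # vacuum boundary
            for i in cells:                           # spatial sweep
                psi_out = (q[i] + (am / dx - 0.5 * sig_t) * psi_in) \
                          / (am / dx + 0.5 * sig_t)
                phi_new[i] += w[m] * 0.5 * (psi_in + psi_out)
                psi_in = psi_out
        if np.max(np.abs(phi_new - phi)) < tol:
            return phi_new, it
        phi = phi_new
    return phi, max_it

phi, n_iter = source_iteration()
```

With a scattering ratio of 0.8 the iteration converges geometrically, and the flux is symmetric about the slab centre, which gives a quick sanity check on the sweep logic.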

Peng, Kuan; Gao, Xinbo; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; He, Xiaowei; Wang, Xiaorei; Liang, Jimin; Tian, Jie

2011-07-20

289

δf Monte Carlo calculation of neoclassical transport in perturbed tokamaks

Non-axisymmetric magnetic perturbations can fundamentally change neoclassical transport in tokamaks by distorting particle orbits on deformed or broken flux surfaces. This so-called non-ambipolar transport is highly complex, and eventually a numerical simulation is required to achieve its precise description and understanding. A new δf particle orbit code (POCA) has been developed for this purpose using a modified pitch-angle collision operator preserving momentum conservation. POCA was successfully benchmarked for neoclassical transport and momentum conservation in the axisymmetric configuration. Non-ambipolar particle flux is calculated in the non-axisymmetric case, and the results show a clear resonant nature of non-ambipolar transport and magnetic braking. Neoclassical toroidal viscosity (NTV) torque is calculated using anisotropic pressures and magnetic field spectrum, and compared with the combined and 1/ν NTV theory. Calculations indicate a clear δB² scaling of NTV, and good agreement with the theory on NTV torque profiles and amplitudes depending on collisionality.

Kim, Kimin; Park, Jong-Kyu; Kramer, Gerrit J. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States); Boozer, Allen H. [Columbia University, New York, New York 10027 (United States)

2012-08-15

290

In this, the first of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, various aspects of the modelling effort are examined. In particular, the need to save on core memory causes one to use only specific realizations that have certain initial characteristics; in effect, these transport simulations are conditioned by these characteristics. Also, the need to independently estimate length scales for the generated fields is discussed. The statistical uniformity of the flow field is investigated by plotting the variance of the seepage velocity for vector components in the x, y, and z directions. Specific features of the velocity field itself are also illuminated in this first paper. In particular, these data provide the opportunity to investigate the effective hydraulic conductivity in a flow field which is approximately statistically uniform; comparisons are made with first- and second-order perturbation analyses. The mean cloud velocity is examined to ascertain whether it is identical to the mean seepage velocity of the model. Finally, the variance in the cloud centroid velocity is examined for the effect of source size and differing strengths of local transverse dispersion.
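As a toy illustration of the effective-conductivity comparison, the classical mean bounds and the first-order perturbation estimate K_eff ≈ K_G(1 + σ²(1/2 − 1/n)) for flow in n dimensions can be checked against a sampled lognormal conductivity. The sample below uses uncorrelated values, unlike the paper's correlated 3D fields, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Lognormal point conductivity, ln K ~ N(0, sigma^2) -- an illustrative,
# uncorrelated sample rather than a correlated 3D field realization.
sigma = 0.5
k = np.exp(rng.normal(0.0, sigma, size=200_000))

k_arith = k.mean()                    # upper bound (flow parallel to layering)
k_harm = 1.0 / np.mean(1.0 / k)       # lower bound (flow across layering)
k_geom = np.exp(np.mean(np.log(k)))   # exact in 2D, first-order reference in 3D

# First-order perturbation estimate for 3D uniform mean flow:
k_eff_3d = k_geom * (1.0 + sigma**2 * (0.5 - 1.0 / 3.0))
```

The harmonic-geometric-arithmetic ordering brackets any admissible effective conductivity, and the perturbation estimate sits slightly above the geometric mean in 3D, which is the kind of comparison the Monte Carlo velocity statistics make quantitative.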

Naff, R.L.; Haley, D.F.; Sudicky, E.A.

1998-01-01

291

We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in the photon transportation. On the one hand, this study provides insight into future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefitting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal's azimuthal position.

Liu Yongchun; Xiao Yunfeng; Li Beibei; Jiang Xuefeng; Li Yan; Gong Qihuang [State Key Lab for Mesoscopic Physics, School of Physics, Peking University, Beijing 100871 (China)

2011-07-15

292

NASA Astrophysics Data System (ADS)

We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity-waveguide coupling system and find that it plays a significant role in the photon transportation. On the one hand, this study provides insight into future solid-state cavity quantum electrodynamics aimed at understanding strong-coupling physics. On the other hand, benefitting from this Rayleigh scattering, effects such as dipole-induced transparency and strong photon antibunching can occur simultaneously. As a potential application, this system can function as a high-efficiency photon turnstile. In contrast to B. Dayan et al. [Science 319, 1062 (2008)], the photon turnstiles proposed here are almost immune to the nanocrystal’s azimuthal position.

Liu, Yong-Chun; Xiao, Yun-Feng; Li, Bei-Bei; Jiang, Xue-Feng; Li, Yan; Gong, Qihuang

2011-07-01

293

Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrates a thorough understanding of the behavior of the detector system.

M. F. Preston; L. S. Myers; J. R. M. Annand; K. G. Fissum; K. Hansen; L. Isaksson; R. Jebali; M. Lundin

2013-11-22

294

NASA Astrophysics Data System (ADS)

Rate-dependent effects in the electronics used to instrument the tagger focal plane at the MAX IV Laboratory were recently investigated using the novel approach of Monte Carlo simulation to allow for normalization of high-rate experimental data acquired with single-hit time-to-digital converters (TDCs). The instrumentation of the tagger focal plane has now been expanded to include multi-hit TDCs. The agreement between results obtained from data taken using single-hit and multi-hit TDCs demonstrates a thorough understanding of the behavior of the detector system.

Preston, M. F.; Myers, L. S.; Annand, J. R. M.; Fissum, K. G.; Hansen, K.; Isaksson, L.; Jebali, R.; Lundin, M.

2014-04-01

295

Purpose: A number of recent studies have proposed that light emitted by the Cherenkov effect may be used for several radiation therapy dosimetry applications. Here we investigate the fundamental nature and accuracy of the technique for the first time using a theoretical and Monte Carlo based analysis. Methods: Using the GEANT4 architecture for medically-oriented simulations (GAMOS) and BEAMnrc for phase space file generation, the light yield, material variability, field size and energy dependence, and overall agreement between the Cherenkov light emission and dose deposition were explored for electron, proton, and flattened, unflattened, and parallel-opposed x-ray photon beams. Results: Due to the exponential attenuation of x-ray photons, Cherenkov light emission and dose deposition were identical for monoenergetic pencil beams. However, polyenergetic beams exhibited errors with depth due to beam hardening, with the error being inversely related to beam energy. For finite field sizes, the error with depth was inversely proportional to field size, and lateral errors in the umbra were greater for larger field sizes. For opposed beams, the technique was most accurate because the beam hardening seen in a single beam averages out. The technique was found unsuitable for measuring electron beams, except for relative dosimetry of a plane at a single depth. Due to a lack of light emission, the technique was found unsuitable for proton beams. Conclusions: The results from this exploratory study suggest that optical dosimetry by the Cherenkov effect may be most applicable to near monoenergetic x-ray photon beams (e.g. Co-60), dynamic IMRT and VMAT plans, as well as narrow beams used for SRT and SRS. For electron beams, the technique would be best suited for superficial dosimetry, and for protons the technique is not applicable due to a lack of light emission. NIH R01CA109558 and R21EB017559.
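The central monoenergetic-versus-polyenergetic argument can be sketched numerically: with a single exponential attenuation, normalised Cherenkov and dose depth curves coincide, while a two-component spectrum with energy-dependent light yield drives them apart with depth. All coefficients below are assumed toy values, not GAMOS/BEAMnrc results.

```python
import numpy as np

z = np.linspace(0.0, 30.0, 301)     # depth in water (cm)

# Monoenergetic component: dose and Cherenkov emission share one exponential,
# so their normalised depth curves are identical.
mu = 0.05
dose_mono = np.exp(-mu * z)
cher_mono = np.exp(-mu * z)          # same shape, arbitrary absolute scale

# Toy polyenergetic beam: two components (assumed numbers), the softer one
# attenuating faster and yielding less Cherenkov light per unit dose -- a
# crude stand-in for a spectrum-dependent light yield.
mu_soft, mu_hard = 0.09, 0.04
w_soft, w_hard = 0.5, 0.5
y_soft, y_hard = 0.8, 1.0            # relative Cherenkov yield per unit dose
dose = w_soft * np.exp(-mu_soft * z) + w_hard * np.exp(-mu_hard * z)
cher = w_soft * y_soft * np.exp(-mu_soft * z) + w_hard * y_hard * np.exp(-mu_hard * z)

# Normalise both at the surface; the residual is the beam-hardening error.
err = cher / cher[0] - dose / dose[0]
```

The error vanishes at the normalisation point and grows as the soft component is filtered out, mirroring the depth-dependent discrepancy the study attributes to beam hardening.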

Glaser, A [Thayer School of Engineering, Dartmouth College, NH (United States); Zhang, R [Department of Physics and Astronomy, Dartmouth College, Hanover, NH (United States); Gladstone, D [Dartmouth Hitchcock Medical Center, Lebanon, NH (Lebanon); Pogue, B [Thayer School of Engineering, Dartmouth College, NH (United States); Department of Physics and Astronomy, Dartmouth College, Hanover, NH (United States)

2014-06-01

296

A MONTE CARLO CODE FOR RELATIVISTIC RADIATION TRANSPORT AROUND KERR BLACK HOLES

We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.

Schnittman, Jeremy D. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Krolik, Julian H., E-mail: jeremy.schnittman@nasa.gov, E-mail: jhk@pha.jhu.edu [Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States)

2013-11-01

297

A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes

NASA Technical Reports Server (NTRS)

We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.

Schnittman, Jeremy David; Krolik, Julian H.

2013-01-01

298

NASA Astrophysics Data System (ADS)

Hardware accelerators are currently becoming increasingly important in boosting high performance computing systems. In this study, we tested the performance of two accelerator models, NVIDIA Tesla M2090 GPU and Intel Xeon Phi 5110p coprocessor, using a new Monte Carlo photon transport package called ARCHER-CT that we have developed for fast CT imaging dose calculation. The package contains three code variants, ARCHER-CT_CPU, ARCHER-CT_GPU and ARCHER-CT_COP, to run in parallel on the multi-core CPU, GPU and coprocessor architectures respectively. A detailed GE LightSpeed Multi-Detector Computed Tomography (MDCT) scanner model and a family of voxel patient phantoms were included in the code to calculate absorbed dose to radiosensitive organs under specified scan protocols. The results from ARCHER agreed well with those from the production code Monte Carlo N-Particle eXtended (MCNPX). It was found that all the code variants were significantly faster than the parallel MCNPX running on 12 MPI processes, and that the GPU and coprocessor performed equally well, being 2.89-4.49 and 3.01-3.23 times faster than the parallel ARCHER-CT_CPU running with 12 hyperthreads.

Liu, Tianyu; Xu, X. George; Carothers, Christopher D.

2014-06-01

299

The energy response of plastic scintillators (Eljen Technology EJ-204) to polarized soft gamma-ray photons below 100 keV has been studied, primarily for the balloon-borne polarimeter, PoGOLite. The response calculation includes quenching effects due to low-energy recoil electrons and the position dependence of the light collection efficiency in a 20 cm long scintillator rod. The broadening of the pulse-height spectrum, presumably caused by

T. Mizuno; Y. Kanai; J. Kataoka; M. Kiss; K. Kurita; M. Pearce; H. Tajima; H. Takahashi; T. Tanaka; M. Ueno; Y. Umeki; H. Yoshida; M. Arimoto; M. Axelsson; C. Marini Bettolo; G. Bogaert; P. Chen; W. Craig; Y. Fukazawa; S. Gunji; T. Kamae; J. Katsuta; N. Kawai; S. Kishimoto; W. Klamra; S. Larsson; G. Madejski; J. S. T. Ng; F. Ryde; S. Rydström; T. Takahashi; T. S. Thurston; G. Varner

2009-01-01

300

The TORT three-dimensional discrete ordinates neutron/photon transport code (TORT version 3)

TORT calculates the flux or fluence of neutrons and/or photons throughout three-dimensional systems due to particles incident upon the system's external boundaries, due to fixed internal sources, or due to sources generated by interaction with the system materials. The transport process is represented by the Boltzmann transport equation. The method of discrete ordinates is used to treat the directional variable, and a multigroup formulation treats the energy dependence. Anisotropic scattering is treated using a Legendre expansion. Various methods are used to treat spatial dependence, including nodal and characteristic procedures that have been especially adapted to resist numerical distortion. A method of body overlay assists in material zone specification, or the specification can be generated by an external code supplied by the user. Several special features are designed to concentrate machine resources where they are most needed. The directional quadrature and Legendre expansion can vary with energy group. A discontinuous mesh capability has been shown to reduce the size of large problems by a factor of roughly three in some cases. The emphasis in this code is a robust, adaptable application of time-tested methods, together with a few well-tested extensions.
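The Legendre treatment of anisotropic scattering used by discrete ordinates codes like TORT can be illustrated with a kernel whose moments are known in closed form. The Henyey-Greenstein function below is an assumed stand-in (not TORT data), chosen because its Legendre moments are exactly g^l, so a truncated expansion can be checked directly against the analytic kernel.

```python
import numpy as np
from numpy.polynomial import legendre

def hg_phase(mu_c, g):
    """Henyey-Greenstein phase function in the scattering cosine mu_c,
    normalised so its integral over [-1, 1] equals 1."""
    return 0.5 * (1 - g**2) / (1 + g**2 - 2 * g * mu_c) ** 1.5

g, order = 0.3, 20
# Closed-form Legendre moments f_l = g**l give the truncated expansion
# p(mu_c) ~= sum_l (2l+1)/2 * g**l * P_l(mu_c).
coeffs = np.array([(2 * l + 1) / 2 * g**l for l in range(order + 1)])
mu_c = np.linspace(-1.0, 1.0, 401)
reconstructed = legendre.legval(mu_c, coeffs)
```

For mildly anisotropic scattering (g = 0.3) the 20-term expansion reproduces the kernel to round-off, which is why a modest Legendre order per energy group is usually sufficient.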

Rhoades, W.A.; Simpson, D.B.

1997-10-01

301

Coupled Deterministic/Monte Carlo Simulation of Radiation Transport and Detector Response

The analysis of radiation sensor systems used to detect and identify nuclear and radiological weapons materials requires detailed radiation transport calculations. Two basic steps are required to solve radiation detection scenario analysis (RDSA) problems. First, the radiation field produced by the source must be calculated. Second, the response that the radiation field produces in a detector must be determined. RDSA problems are characterized by complex geometries, the presence of shielding materials, and large amounts of scattering (or absorption/re-emission). In this paper, we will discuss the use of the Attila code [2] for RDSA.

Gesh, Christopher J.; Meriwether, George H.; Pagh, Richard T.; Smith, Leon E.

2005-09-01

302

counter (TEPC) and produce the expected delta ray events when exposed to high energy heavy ions (HZE) like in the galactic cosmic ray (GCR) environment. Accurate transport codes are desirable because of the high cost of beam time, the inability... into the effect of the wall thickness on the response of the TEPC and the range of delta rays in the tissue-equivalent (TE) wall material. A full impact parameter test (from IP = 0 to IP = detector radius) was performed to show that FLUKA produces the expected...

Northum, Jeremy Dell

2011-08-08

303

NASA Astrophysics Data System (ADS)

Absolute dosimetry with ionization chambers of the narrow photon fields used in stereotactic techniques and IMRT beamlets is constrained by lack of electron equilibrium in the radiation field. It is questionable that stopping-power ratios in dosimetry protocols, obtained for broad photon beams and quasi-electron equilibrium conditions, can be used in the dosimetry of narrow fields while keeping the uncertainty at the same level as for the broad beams used in accelerator calibrations. Monte Carlo simulations have been performed for two 6 MV clinical accelerators (Elekta SL-18 and Siemens Mevatron Primus), equipped with radiosurgery applicators and MLC. Narrow circular and Z-shaped on-axis and off-axis fields, as well as broad IMRT configured beams, have been simulated together with reference 10 × 10 cm² beams. Phase-space data have been used to generate 3D dose distributions which have been compared satisfactorily with experimental profiles (ion chamber, diodes and film). Photon and electron spectra at various depths in water have been calculated, followed by Spencer-Attix (Δ = 10 keV) stopping-power ratio calculations which have been compared to those used in the IAEA TRS-398 code of practice. For water/air and PMMA/air stopping-power ratios, agreements within 0.1% have been obtained for the 10 × 10 cm² fields. For radiosurgery applicators and narrow MLC beams, the calculated s_w,air values agree with the reference within ±0.3%, well within the estimated standard uncertainty of the reference stopping-power ratios (0.5%). Ionization chamber dosimetry of narrow beams at the photon qualities used in this work (6 MV) can therefore be based on stopping-power ratio data in dosimetry protocols. For a modulated 6 MV broad beam used in clinical IMRT, s_w,air agrees within 0.1% with the value for 10 × 10 cm², confirming that at low energies IMRT absolute dosimetry can also be based on data for open reference fields. At higher energies (24 MV) the difference in s_w,air was up to 1.1%, indicating that the use of protocol data for narrow beams in such cases is less accurate than at low energies, and detailed calculations of the dosimetry parameters involved should be performed if similar accuracy to that of 6 MV is sought.

Sánchez-Doblado, F.; Andreo, P.; Capote, R.; Leal, A.; Perucha, M.; Arráns, R.; Núñez, L.; Mainegra, E.; Lagares, J. I.; Carrasco, E.

2003-07-01

304

Absolute dosimetry with ionization chambers of the narrow photon fields used in stereotactic techniques and IMRT beamlets is constrained by lack of electron equilibrium in the radiation field. It is questionable that stopping-power ratios in dosimetry protocols, obtained for broad photon beams and quasi-electron equilibrium conditions, can be used in the dosimetry of narrow fields while keeping the uncertainty at the same level as for the broad beams used in accelerator calibrations. Monte Carlo simulations have been performed for two 6 MV clinical accelerators (Elekta SL-18 and Siemens Mevatron Primus), equipped with radiosurgery applicators and MLC. Narrow circular and Z-shaped on-axis and off-axis fields, as well as broad IMRT configured beams, have been simulated together with reference 10 × 10 cm² beams. Phase-space data have been used to generate 3D dose distributions which have been compared satisfactorily with experimental profiles (ion chamber, diodes and film). Photon and electron spectra at various depths in water have been calculated, followed by Spencer-Attix (delta = 10 keV) stopping-power ratio calculations which have been compared to those used in the IAEA TRS-398 code of practice. For water/air and PMMA/air stopping-power ratios, agreements within 0.1% have been obtained for the 10 × 10 cm² fields. For radiosurgery applicators and narrow MLC beams, the calculated s(w,air) values agree with the reference within +/-0.3%, well within the estimated standard uncertainty of the reference stopping-power ratios (0.5%). Ionization chamber dosimetry of narrow beams at the photon qualities used in this work (6 MV) can therefore be based on stopping-power ratio data in dosimetry protocols. For a modulated 6 MV broad beam used in clinical IMRT, s(w,air) agrees within 0.1% with the value for 10 × 10 cm², confirming that at low energies IMRT absolute dosimetry can also be based on data for open reference fields. At higher energies (24 MV) the difference in s(w,air) was up to 1.1%, indicating that the use of protocol data for narrow beams in such cases is less accurate than at low energies, and detailed calculations of the dosimetry parameters involved should be performed if similar accuracy to that of 6 MV is sought. PMID:12894972

Sánchez-Doblado, F; Andreo, P; Capote, R; Leal, A; Perucha, M; Arráns, R; Núñez, L; Mainegra, E; Lagares, J I; Carrasco, E

2003-07-21

305

Electron Transport in Silicon Nanocrystal Devices: From Memory Applications to Silicon Photonics

NASA Astrophysics Data System (ADS)

The push to integrate the realms of microelectronics and photonics on the silicon platform is currently lacking an efficient, electrically pumped silicon light source. One promising material system for photonics on the silicon platform is erbium-doped silicon nanoclusters (Er:Si-nc), which uses silicon nanoclusters to sensitize erbium ions in a SiO2 matrix. This medium can be pumped electrically, and this thesis focuses primarily on the electrical properties of Er:Si-nc films and their possible development as a silicon light source in the erbium emission band around 1.5 micrometers. Silicon nanocrystals can also be used as the floating gate in a flash memory device, and work is also presented examining charge transport in novel systems for flash memory applications. To explore silicon nanocrystals as a potential replacement for metallic floating gates in flash memory, the charging dynamics in silicon nanocrystal films are first studied using UHV-AFM. This approach uses a non-contact AFM tip to locally charge a layer of nanocrystals. Subsequent imaging allows the injected charge to be observed in real time as it moves through the layer. Simulation of this interaction allows the quantification of the charge in the layer, where we find that each nanocrystal is only singly charged after injection, while holes are retained in the film for hours. Work towards developing a dielectric stack with a voltage-tunable barrier is presented, with applications for flash memory and hyperspectral imaging. For hyperspectral imaging applications, film stacks containing various dielectrics are studied using I-V, TEM, and internal photoemission, with barrier tunability demonstrated in the Sc2O3/SiO2 system. To study Er:Si-nc as a potential lasing medium for silicon photonics, a theoretical approach is presented where Er:Si-nc is the gain medium in a silicon slot waveguide. 
By accounting for the effect of the local density of optical states on the emitters, and carrier absorption due to electrical pumping, it is shown that a pulsed excitation method is needed to achieve gain in this system. A gain of up to 2 dB/cm is predicted for an electrically pumped gain medium 50 nm thick. To test these predictions Er:Si-nc LEDs were fabricated and studied. Reactive oxygen sputtering is found to produce more robust films, and the electrical excitation cross section found is two orders of magnitude larger than the optical cross section. The fabricated devices exhibited low lifetimes and low current densities, which prevented observation of gain, and the modeling is used to predict how the films must be improved to achieve gain and lasing in this system.

Miller, Gerald M.

306

Current status and new horizons in Monte Carlo simulation of X-ray CT scanners

With the advent of powerful computers and parallel processing including Grid technology, the use of Monte Carlo (MC) techniques for radiation transport simulation has become the most popular method for modeling radiological imaging systems and particularly X-ray computed tomography (CT). The stochastic nature of involved processes such as X-ray photon generation, interaction with matter and detection makes MC

Habib Zaidi; Mohammad Reza Ay

2007-01-01

307

Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms, pencil beam (PB), collapsed cone (CC), and Monte Carlo (MC), provided by our planning system were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated using the PB, CC, and MC algorithms. Planning treatment volume and organs at risk (OAR) delineation was performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3 to 0.5 cm computed tomography slices taken under normal respiration conditions. Four intensity-modulated radiation therapy plans were calculated according to each algorithm for each patient. The plans were conducted on the Oncentra MasterPlan and CMS Monaco treatment planning systems, for 6 MV. The plans were compared in terms of the dose distribution in target, OAR volumes, and...

Kim, Sung Jin; Kim, Sung Kyu

2015-01-01

308

The Monte Carlo electron transport code EGS4 was benchmark tested against early experimental results derived by Freyberger. These consist of absolute depth ionization and depth dose curves measured at a pencil beam with sharp energy definition of nominally 4, 10 and 20 MeV electrons extracted from a Betatron. The Freyberger precision measurements have been made with a wide plane-parallel ionization

Marc H. Lauterbach; Jörg Lehmann; Ulf F. Rosenow

1999-01-01

309

The Monte Carlo electron transport code EGS4 was benchmark tested against early experimental results derived by Freyberger. These consist of absolute depth ionization and depth dose curves measured at a pencil beam with sharp energy definition of nominally 4, 10 and 20 MeV electrons extracted from a Betatron. The Freyberger precision measurements have been made with a wide plane-parallel ionization chamber

Marc H. Lauterbach; Jörg Lehmann; Ulf F. Rosenow

1999-01-01

310

MCNP/X TRANSPORT IN THE TABULAR REGIME

The authors review the transport capabilities of the MCNP and MCNPX Monte Carlo codes in the energy regimes in which tabular transport data are available. Giving special attention to neutron tables, they emphasize the measures taken to improve the treatment of a variety of difficult aspects of the transport problem, including unresolved resonances, thermal issues, and the availability of suitable cross-section sets. They also briefly touch on the current situation in regard to photon, electron, and proton transport tables.
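The tabular regime in question amounts to interpolating pointwise cross-section tables between energy grid points. Below is a sketch of two common interpolation laws; this is a simplification (evaluated-data formats specify the law per energy panel), and the function name and table values are hypothetical.

```python
import math

def interp_xs(e, grid_e, grid_xs, law="log-log"):
    """Interpolate a pointwise cross-section table at energy e.

    Supports two common laws: linear-linear, and log-log (a straight line
    in (ln E, ln sigma)). Illustrative sketch, not a real format reader.
    """
    for i in range(len(grid_e) - 1):          # locate the bracketing panel
        if grid_e[i] <= e <= grid_e[i + 1]:
            e0, e1 = grid_e[i], grid_e[i + 1]
            x0, x1 = grid_xs[i], grid_xs[i + 1]
            break
    else:
        raise ValueError("energy outside table")
    if law == "lin-lin":
        t = (e - e0) / (e1 - e0)
        return x0 + t * (x1 - x0)
    t = math.log(e / e0) / math.log(e1 / e0)  # log-log
    return x0 * (x1 / x0) ** t

# A 1/v absorber, sigma = c / sqrt(E), is exactly linear in log-log space,
# so a coarse grid plus log-log interpolation reproduces it exactly.
grid_e = [1e-5, 1e-3, 1e-1, 1e1]
grid_xs = [100.0 / math.sqrt(E) for E in grid_e]
```

The 1/v check makes the design choice concrete: the right interpolation law lets a table stay coarse without losing accuracy between grid points.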

HUGHES, H. GRADY [Los Alamos National Laboratory

2007-01-08

311

NASA Astrophysics Data System (ADS)

Characterization of temperature and thermal transport properties of the skin can yield important information of relevance to both clinical medicine and basic research in skin physiology. Here we introduce an ultrathin, compliant skin-like, or ‘epidermal’, photonic device that combines colorimetric temperature indicators with wireless stretchable electronics for thermal measurements when softly laminated on the skin surface. The sensors exploit thermochromic liquid crystals patterned into large-scale, pixelated arrays on thin elastomeric substrates; the electronics provide means for controlled, local heating by radio frequency signals. Algorithms for extracting patterns of colour recorded from these devices with a digital camera and computational tools for relating the results to underlying thermal processes near the skin surface lend quantitative value to the resulting data. Application examples include non-invasive spatial mapping of skin temperature with milli-Kelvin precision (±50 mK) and sub-millimetre spatial resolution. Demonstrations in reactive hyperaemia assessments of blood flow and hydration analysis establish relevance to cardiovascular health and skin care, respectively.

Gao, Li; Zhang, Yihui; Malyarchuk, Viktor; Jia, Lin; Jang, Kyung-In; Chad Webb, R.; Fu, Haoran; Shi, Yan; Zhou, Guoyan; Shi, Luke; Shah, Deesha; Huang, Xian; Xu, Baoxing; Yu, Cunjiang; Huang, Yonggang; Rogers, John A.

2014-09-01

312

Characterization of temperature and thermal transport properties of the skin can yield important information of relevance to both clinical medicine and basic research in skin physiology. Here we introduce an ultrathin, compliant skin-like, or 'epidermal', photonic device that combines colorimetric temperature indicators with wireless stretchable electronics for thermal measurements when softly laminated on the skin surface. The sensors exploit thermochromic liquid crystals patterned into large-scale, pixelated arrays on thin elastomeric substrates; the electronics provide means for controlled, local heating by radio frequency signals. Algorithms for extracting patterns of colour recorded from these devices with a digital camera and computational tools for relating the results to underlying thermal processes near the skin surface lend quantitative value to the resulting data. Application examples include non-invasive spatial mapping of skin temperature with milli-Kelvin precision (±50 mK) and sub-millimetre spatial resolution. Demonstrations in reactive hyperaemia assessments of blood flow and hydration analysis establish relevance to cardiovascular health and skin care, respectively. PMID:25234839

Gao, Li; Zhang, Yihui; Malyarchuk, Viktor; Jia, Lin; Jang, Kyung-In; Webb, R Chad; Fu, Haoran; Shi, Yan; Zhou, Guoyan; Shi, Luke; Shah, Deesha; Huang, Xian; Xu, Baoxing; Yu, Cunjiang; Huang, Yonggang; Rogers, John A

2014-01-01

313

A full band Monte-Carlo study of carrier transport properties of InAlN lattice matched to GaN

NASA Astrophysics Data System (ADS)

The growing importance of In0.18Al0.82N stems from the fact that it can be grown lattice matched to GaN and from its potential applications in a large number of electronic and optoelectronic devices. In this work we employed a full band Monte-Carlo approach to study the carrier transport properties of this alloy. We have computed the temperature and doping dependent electron and hole mobilities and drift velocities. Furthermore, for both sets of transport coefficients we have developed a number of analytical expressions that can be easily incorporated in drift-diffusion type simulation codes.

Shishehchi, Sara; Bertazzi, Francesco; Bellotti, Enrico

2013-03-01

314

Overview of the MCU Monte Carlo Software Package

NASA Astrophysics Data System (ADS)

MCU (Monte Carlo Universal) is a project on the development and practical use of a universal computer code for simulation of particle transport (neutrons, photons, electrons, positrons) in three-dimensional systems by means of the Monte Carlo method. This paper provides information on the current state of the project. The developed libraries of constants are briefly described, and the capabilities of the MCU-5 package modules and the executable codes compiled from them are characterized. Examples of important reactor physics problems solved with the code are presented.

Kalugin, M. A.; Oleynik, D. S.; Shkarovsky, D. A.

2014-06-01

315

Vectorizing and macrotasking Monte Carlo neutral particle algorithms

Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches in as few loops as possible. Several solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. On future machines it may be possible to take macrotasking to its limit, with each test flight, and each split test flight, being a separate task.
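
The gap-compaction idea described in this abstract can be sketched in a few lines. The sketch below is our own illustration in NumPy rather than the original vectorized code: a toy absorption probability stands in for the real collision physics, and the arrays of flight data are compacted with a boolean mask each step so the inner loop stays fully vectorized.

```python
import numpy as np

def run_flights_vectorized(n_flights, absorb_prob=0.3, seed=0):
    """Advance a batch of test flights in lockstep, compacting out
    completed flights so every inner operation stays vectorized."""
    rng = np.random.default_rng(seed)
    pos = np.zeros(n_flights)          # positions of the active flights
    steps = np.zeros(n_flights, int)   # collisions survived per flight
    total_steps = []
    while pos.size > 0:
        # one vectorized "collision" for every active flight
        pos += rng.exponential(1.0, size=pos.size)
        steps += 1
        # completed flights would leave gaps in the arrays;
        # boolean-mask compaction closes the gaps in one pass
        alive = rng.random(pos.size) >= absorb_prob
        total_steps.extend(steps[~alive].tolist())
        pos, steps = pos[alive], steps[alive]
    return np.array(total_steps)

lengths = run_flights_vectorized(10_000)
```

With an absorption probability of 0.3 per collision, the mean flight length is geometric with mean 1/0.3 ≈ 3.3 collisions, which the compacted batch reproduces.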

Heifetz, D.B.

1987-04-01

316

Assessment of intake of long-lived actinides via the inhalation pathway is carried out by lung monitoring of radiation workers inside a totally shielded steel room using sensitive detection systems such as a Phoswich and an array of HPGe detectors. In this paper, uncertainties in the lung activity estimation due to positional errors, chest wall thickness (CWT) and detector background variation are evaluated. First, calibration factors (CFs) of the Phoswich and an array of three HPGe detectors are estimated by incorporating the ICRP male thorax voxel phantom and the detectors in the Monte Carlo code 'FLUKA'. CFs are estimated for a uniform source distribution in the lungs of the phantom for various photon energies. The variation in the CFs for positional errors of ±0.5, 1 and 1.5 cm in the horizontal and vertical directions along the chest is studied. The positional errors are also evaluated by resizing the voxel phantom. Combined uncertainties are estimated at different energies using the uncertainties due to CWT, detector positioning, detector background variation of an uncontaminated adult person and counting statistics in the form of scattering factors (SFs). SFs are found to decrease with increasing energy. With the HPGe array, the highest SF of 1.84 is found at 18 keV. It reduces to 1.36 at 238 keV. PMID:25468992

Nadar, M Y; Akar, D K; Rao, D D; Kulkarni, M S; Pradeepkumar, K S

2014-12-01

317

In single photon emission computed tomography (SPECT), the collimator is a crucial element of the imaging chain and controls the noise-resolution tradeoff of the collected data. The current study is an evaluation of the effects of different thicknesses of a low-energy high-resolution (LEHR) collimator on tomographic spatial resolution in SPECT. In the present study, the SIMIND Monte Carlo program was used to simulate a SPECT system equipped with an LEHR collimator. A point source of (99m)Tc and an acrylic cylindrical Jaszczak phantom, with cold spheres and rods, and a human anthropomorphic torso phantom (4D-NCAT phantom) were used. Simulated planar images and reconstructed tomographic images were evaluated both qualitatively and quantitatively. According to the tabulated calculated detector parameters, the contributions of Compton scattering and photoelectric reactions, and the peak-to-Compton (P/C) area in the energy spectra obtained from scanning the sources with 11 collimator thicknesses, ranging from 2.400 to 2.410 cm, we concluded that 2.405 cm is the proper LEHR parallel-hole collimator thickness. The image quality analyses by the structural similarity index (SSIM) algorithm and by visual inspection showed suitable quality images obtained with a collimator thickness of 2.405 cm. The projections and reconstructed images prepared with the 2.405 cm LEHR collimator thickness also showed suitable quality and performance-parameter analysis results compared with the other collimator thicknesses. PMID:23372440

Islamian, Jalil Pirayesh; Toossi, Mohammad Taghi Bahreyni; Momennezhad, Mahdi; Zakavi, Seyyed Rasoul; Sadeghi, Ramin; Ljungberg, Michael

2012-05-01

318

The energy-resolved photon counting detector provides spectral information that can be used to generate images. The novel imaging methods, including K-edge imaging, projection-based energy weighting imaging and image-based energy weighting imaging, are based on the energy-resolved photon counting detector and can be realized by using various energy windows or energy bins. The location and width of the energy windows or energy bins are important because these techniques generate an image using the spectral information defined by the energy windows or energy bins. In this study, the reconstructed images acquired with K-edge imaging, projection-based energy weighting imaging and image-based energy weighting imaging were simulated using Monte Carlo simulation. The effect of energy windows or energy bins was investigated with respect to the contrast, coefficient-of-variation (COV) and contrast-to-noise ratio (CNR). The three images were compared with respect to the CNR. We modeled an x-ray computed tomography system based on a CdTe energy-resolved photon counting detector and a polymethylmethacrylate phantom containing iodine, gadolinium and blood. To acquire K-edge images, the lower energy thresholds were fixed at the K-edge absorption energies of iodine and gadolinium and the energy window widths were increased from 1 to 25 bins. The energy weighting factors optimized for iodine, gadolinium and blood were calculated from 5, 10, 15, 19 and 33 energy bins. We assigned the calculated energy weighting factors to the images acquired at each energy bin. In K-edge images, the contrast and COV decreased when the energy window width was increased. The CNR increased as a function of the energy window width and decreased above a specific energy window width. When the number of energy bins was increased from 5 to 15, the contrast increased in the projection-based energy weighting images. There is little difference in the contrast when the number of energy bins is increased from 15 to 33. The COV of the background in the projection-based energy weighting images is only slightly changed as a function of the number of energy bins. In the image-based energy weighting images, when the number of energy bins was increased, the contrast increased and the COV decreased. The CNR increased as a function of the number of energy bins. It was concluded that the image quality is dependent on the energy window, and an appropriate choice of the energy window is important to improve the image quality. PMID:22800966
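
The trade-off this abstract reports for K-edge windows, contrast falling but CNR first rising and then falling as the window widens, can be illustrated with a toy calculation. The spectra and the Poisson noise model below are our own illustrative assumptions, not the paper's simulated data:

```python
import numpy as np

def window_cnr(target_counts, bkg_counts, widths):
    """CNR of a K-edge-style energy window for increasing window widths.
    Counts are per-bin means; noise is taken as Poisson (sigma = sqrt(N))."""
    out = []
    for w in widths:
        s_t = target_counts[:w].sum()   # summed signal in the window
        s_b = bkg_counts[:w].sum()      # summed background in the window
        out.append(abs(s_t - s_b) / np.sqrt(s_b))
    return np.array(out)

# toy spectra: contrast concentrated just above the K-edge, flat background
target = np.array([120, 110, 105, 101, 100, 100, 100, 100.0])
bkg = np.full(8, 100.0)
cnr = window_cnr(target, bkg, widths=range(1, 9))
```

Because the extra bins far above the edge add noise but almost no signal difference, the CNR peaks at an intermediate window width and then declines, matching the qualitative behaviour described in the abstract.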

Lee, Seung-Wan; Choi, Yu-Na; Cho, Hyo-Min; Lee, Young-Jin; Ryu, Hyun-Ju; Kim, Hee-Joung

2012-08-01

319

NASA Astrophysics Data System (ADS)

The energy-resolved photon counting detector provides spectral information that can be used to generate images. The novel imaging methods, including K-edge imaging, projection-based energy weighting imaging and image-based energy weighting imaging, are based on the energy-resolved photon counting detector and can be realized by using various energy windows or energy bins. The location and width of the energy windows or energy bins are important because these techniques generate an image using the spectral information defined by the energy windows or energy bins. In this study, the reconstructed images acquired with K-edge imaging, projection-based energy weighting imaging and image-based energy weighting imaging were simulated using Monte Carlo simulation. The effect of energy windows or energy bins was investigated with respect to the contrast, coefficient-of-variation (COV) and contrast-to-noise ratio (CNR). The three images were compared with respect to the CNR. We modeled an x-ray computed tomography system based on a CdTe energy-resolved photon counting detector and a polymethylmethacrylate phantom containing iodine, gadolinium and blood. To acquire K-edge images, the lower energy thresholds were fixed at the K-edge absorption energies of iodine and gadolinium and the energy window widths were increased from 1 to 25 bins. The energy weighting factors optimized for iodine, gadolinium and blood were calculated from 5, 10, 15, 19 and 33 energy bins. We assigned the calculated energy weighting factors to the images acquired at each energy bin. In K-edge images, the contrast and COV decreased when the energy window width was increased. The CNR increased as a function of the energy window width and decreased above a specific energy window width. When the number of energy bins was increased from 5 to 15, the contrast increased in the projection-based energy weighting images. There is little difference in the contrast when the number of energy bins is increased from 15 to 33. The COV of the background in the projection-based energy weighting images is only slightly changed as a function of the number of energy bins. In the image-based energy weighting images, when the number of energy bins was increased, the contrast increased and the COV decreased. The CNR increased as a function of the number of energy bins. It was concluded that the image quality is dependent on the energy window, and an appropriate choice of the energy window is important to improve the image quality.

Lee, Seung-Wan; Choi, Yu-Na; Cho, Hyo-Min; Lee, Young-Jin; Ryu, Hyun-Ju; Kim, Hee-Joung

2012-08-01

320

Transport calculations for a 14.8 MeV neutron beam in a water phantom

NASA Astrophysics Data System (ADS)

A coupled neutron/photon Monte Carlo radiation transport code (MORSE-CG) was used to calculate neutron and photon doses in a water phantom irradiated by 14.8 MeV neutrons from a gas target neutron source. The source-collimator-phantom geometry was carefully simulated. Results of calculations utilizing two different statistical estimators (next collision and track length) are presented.
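
The two statistical estimators compared in this abstract can be sketched for the simplest possible case, a purely absorbing slab. The geometry and cross section below are illustrative assumptions, not the paper's source-collimator-phantom setup:

```python
import numpy as np

def slab_flux(n, sigma_t=1.0, L=2.0, seed=1):
    """Estimate the volume-integrated flux in a purely absorbing slab [0, L]
    with two estimators: track-length (path length per history) and
    collision (1/sigma_t scored per collision inside the slab)."""
    rng = np.random.default_rng(seed)
    d = rng.exponential(1.0 / sigma_t, size=n)   # distance to first collision
    track = np.minimum(d, L)                     # path length inside the slab
    collided = d < L                             # collision occurred in slab
    tl_est = track.mean()                        # track-length estimate
    col_est = collided.mean() / sigma_t          # (next-)collision estimate
    return tl_est, col_est

tl, col = slab_flux(200_000)
```

For this toy problem both estimators have the same expectation, 1 − e^(−σL) ≈ 0.865; they differ only in variance, which is why codes like MORSE-CG offer both.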

Goetsch, S. J.

321

NASA Astrophysics Data System (ADS)

We have analyzed the spin transport behaviour of four II–VI semiconductor nanowires by simulating spin polarized transport using a semi-classical Monte-Carlo approach. The different scattering mechanisms considered are acoustic phonon scattering, surface roughness scattering, polar optical phonon scattering, and spin flip scattering. The II–VI materials used in our study are CdS, CdSe, ZnO and ZnS. The spin transport behaviour is first studied by varying the temperature (4–500 K) at a fixed diameter of 10 nm and also by varying the diameter (8–12 nm) at a fixed temperature of 300 K. For the II–VI compounds, the dominant spin relaxation mechanisms, D'yakonov–Perel' and Elliott–Yafet, have been employed in the first order model to simulate the spin transport. The dependence of the spin relaxation length (SRL) on the diameter and temperature has been analyzed.

Chishti, Sabiq; Ghosh, Bahniman; Bishnoi, Bhupesh

2015-02-01

322

Study of water transport phenomena on cathode of PEMFCs using Monte Carlo simulation

NASA Astrophysics Data System (ADS)

This dissertation deals with the development of a three-dimensional computational model of water transport phenomena in the cathode catalyst layer (CCL) of PEMFCs. The catalyst layer in the numerical simulation was developed using the optimized sphere packing algorithm. The optimization technique named the adaptive random search technique (ARSET) was employed in this packing algorithm. The ARSET algorithm generates the initial locations of the spheres and allows them to move in a random direction with a variable moving distance, randomly selected from the sampling range, based on the Lennard-Jones potential of the current and new configurations. The solid fraction values obtained from this developed algorithm are in the range of 0.631 to 0.6384, while the actual processing time can be significantly reduced by 8% to 36% based on the number of spheres. The initial random number sampling range was investigated and the appropriate sampling range value is equal to 0.5. This numerically developed cathode catalyst layer has been used to simulate the diffusion processes of protons, in the form of hydronium, and oxygen molecules through the cathode catalyst layer. The movements of hydroniums and oxygen molecules are controlled by random vectors, and all of these moves have to obey the Lennard-Jones potential energy constraint. A chemical reaction between these two species happens when they share the same neighborhood, resulting in the creation of water molecules. Like hydroniums and oxygen molecules, these newly-formed water molecules also diffuse through the cathode catalyst layer. It is important to investigate and study the distribution of hydronium, oxygen and water molecules during the diffusion process in order to understand the lifetime of the cathode catalyst layer. The effect of fuel flow rate on the water distribution has also been studied by varying the hydronium and oxygen molecule input. 
Based on the results of these simulations, the hydronium:oxygen input ratio of 3:2 has been found to be the best choice for this study. To study the effect of metal impurity and gas contamination on the cathode catalyst layer, the cathode catalyst layer structure is modified by adding the metal impurities, and the gas contamination is introduced with the oxygen input. In this study, gas contamination has very little effect on the electrochemical reaction inside the cathode catalyst layer because this simulation is transient in nature and the percentage of the gas contamination is small, in the range of 0.0005% to 0.0015% for CO and 0.028% to 0.04% for CO2. Metal impurities seem to have more effect on the performance of the PEMFC because they not only change the structure of the developed cathode catalyst layer but also affect the movement of fuel and water product. Aluminum has the worst effect on the cathode catalyst layer structure because it yields the lowest amount of newly formed water and the largest amount of trapped water product compared to iron of the same impurity percentage. The iron impurity shows some positive effect on the lifetime of the cathode catalyst layer. At 0.75 wt% of iron impurity, the amount of newly formed water is 6.59% lower than in the pure carbon catalyst layer case, but the amount of trapped water product is 11.64% lower than in the pure catalyst layer. The lifetime of the impure cathode catalyst layer is longer than that of the pure one because the amount of water that is still trapped inside the pure cathode catalyst layer is higher than that of the impure one. Even though the impure cathode catalyst layer has a longer lifetime, it sacrifices electrical power output because less electrochemical reaction occurs inside the impure catalyst layer.
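
The packing move described in this abstract, a random displacement judged against the Lennard-Jones potential of the current and new configurations, can be sketched as follows. The fixed sampling range and the greedy accept-if-not-worse rule are our simplifying assumptions, not the exact ARSET algorithm (which adapts the sampling range):

```python
import numpy as np

def lj_energy(centers, eps=1.0, sigma=1.0):
    """Total Lennard-Jones energy of a set of sphere centers."""
    e = 0.0
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            r = np.linalg.norm(centers[i] - centers[j])
            e += 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return e

def pack_step(centers, rng, max_move=0.5):
    """One ARSET-style move: displace one randomly chosen sphere by a
    random vector of random length, keeping the move only if the
    Lennard-Jones energy does not increase."""
    trial = centers.copy()
    i = rng.integers(len(centers))
    trial[i] += rng.uniform(-max_move, max_move, size=3)
    return trial if lj_energy(trial) <= lj_energy(centers) else centers

rng = np.random.default_rng(2)
pts = rng.uniform(0, 4, size=(8, 3))     # initial random configuration
e0 = lj_energy(pts)
for _ in range(200):
    pts = pack_step(pts, rng)
e1 = lj_energy(pts)
```

Because a move is kept only when the energy does not increase, the configuration energy is non-increasing over the run, driving the spheres toward a dense, low-energy packing.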

Soontrapa, Karn

323

NASA Astrophysics Data System (ADS)

A hybrid system containing an asymmetrical waveguide coupled to a whispering-gallery resonator embedded with a two-level atom is designed to investigate single-photon transport properties. The transmission and reflection amplitudes are obtained via the discrete coordinates approach. Numerical simulation demonstrates that a trifrequency photon attenuator is realized by controlling the couplings between the asymmetrical waveguide and the whispering-gallery resonator. The phase shift, group delay and dissipation effects of the transmitted single photon are also discussed.

Zhou, Tao; Zang, Xiao-Fei; Xu, Dan-Hua

2014-04-01

324

Based on the quasiparticle model of the quark-gluon plasma (QGP), a color quantum path-integral Monte-Carlo (PIMC) method for calculation of thermodynamic properties and -- closely related to the latter -- a Wigner dynamics method for calculation of transport properties of the QGP are formulated. The QGP partition function is presented in the form of a color path integral with a new relativistic measure instead of the Gaussian one traditionally used in the Feynman-Wiener path integral. It is shown that the PIMC method is able to reproduce the lattice QCD equation of state at zero baryon chemical potential for realistic model parameters (i.e. quasiparticle masses and coupling constant) and also yields valuable insight into the internal structure of the QGP. Our results indicate that the QGP reveals quantum liquid-like (rather than gas-like) properties up to the highest considered temperature of 525 MeV. The pair distribution functions clearly reflect the existence of gluon-gluon bound states, i.e. glueballs, at temperatures just above the phase transition, while meson-like $q\bar{q}$ bound states are not found. The calculated self-diffusion coefficient agrees well with some estimates of the heavy-quark diffusion constant available from recent lattice data and also with an analysis of heavy-quark quenching in experiments on ultrarelativistic heavy-ion collisions; however, it appreciably exceeds other estimates. The lattice and heavy-quark-quenching results on the heavy-quark diffusion are still rather diverse. The obtained results for the shear viscosity are in the range of those deduced from an analysis of the experimental elliptic flow in ultrarelativistic heavy-ion collisions, i.e. in terms of the viscosity-to-entropy ratio, $1/4\pi < \eta/S < 2.5/4\pi$, in the temperature range from 170 to 440 MeV.

V. S. Filinov; Yu. B. Ivanov; M. Bonitz; V. E. Fortov; P. R. Levashov

2013-01-29

325

NASA Astrophysics Data System (ADS)

Parametric uncertainty in groundwater modeling is commonly assessed using the first-order-second-moment method, which yields the linear confidence/prediction intervals. More advanced techniques are able to produce the nonlinear confidence/prediction intervals that are more accurate than the linear intervals for nonlinear models. However, both methods are restricted to certain assumptions such as normality in model parameters. We developed a Markov Chain Monte Carlo (MCMC) method to directly investigate the parametric distributions and confidence/prediction intervals. The MCMC results are used to evaluate accuracy of the linear and nonlinear confidence/prediction intervals. The MCMC method is applied to nonlinear surface complexation models developed by Kohler et al. (1996) to simulate reactive transport of uranium(VI). The breakthrough data of Kohler et al. (1996) obtained from a series of column experiments are used as the basis of the investigation. The calibrated parameters of the models are the equilibrium constants of the surface complexation reactions and fractions of functional groups. The Morris method sensitivity analysis shows that all of the parameters exhibit highly nonlinear effects on the simulation. The MCMC method is combined with a traditional optimization method to improve computational efficiency. The parameters of the surface complexation models are first calibrated using a global optimization technique, multi-start quasi-Newton BFGS, which employs an approximation to the Hessian. The parameter correlation is measured by the covariance matrix computed via the Fisher information matrix. Parameter ranges are necessary to improve convergence of the MCMC simulation, even when the adaptive Metropolis method is used. The MCMC results indicate that the parameters do not necessarily follow a normal distribution and that the nonlinear intervals are more accurate than the linear intervals for the nonlinear surface complexation models. 
In comparison with the linear and nonlinear prediction intervals, the prediction intervals of MCMC are more robust for simulating the breakthrough curves that are not used for the parameter calibration and estimation of parameter distributions.
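
The calibrate-then-sample workflow this abstract describes (multi-start quasi-Newton BFGS calibration followed by a random-walk Metropolis chain) can be illustrated on a one-parameter toy model. The exponential "breakthrough" model and noise level below are invented for illustration and are far simpler than the surface complexation models of Kohler et al.:

```python
import numpy as np
from scipy.optimize import minimize

# synthetic "breakthrough" data from a nonlinear model y = exp(-k t)
t = np.linspace(0, 5, 40)
rng = np.random.default_rng(3)
y_obs = np.exp(-0.8 * t) + rng.normal(0, 0.02, t.size)

def sse(k):
    """Sum-of-squares misfit for rate parameter k."""
    return np.sum((np.exp(-k[0] * t) - y_obs) ** 2)

# multi-start quasi-Newton (BFGS) calibration, keeping the best optimum
starts = [[0.1], [1.0], [3.0]]
best = min((minimize(sse, s, method="BFGS") for s in starts),
           key=lambda r: r.fun)

def metropolis(k0, n=4000, step=0.05, noise_var=0.02 ** 2):
    """Random-walk Metropolis around the calibrated optimum,
    using a Gaussian likelihood; burn-in is discarded."""
    chain = [k0]
    logp = lambda k: -sse([k]) / (2 * noise_var)
    for _ in range(n):
        prop = chain[-1] + rng.normal(0, step)
        if np.log(rng.random()) < logp(prop) - logp(chain[-1]):
            chain.append(prop)
        else:
            chain.append(chain[-1])
    return np.array(chain[n // 2:])

chain = metropolis(best.x[0])
```

Starting the chain at the BFGS optimum is what buys the computational efficiency mentioned in the abstract: the sampler begins in the high-probability region instead of spending iterations finding it.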

Miller, G. L.; Lu, D.; Ye, M.; Curtis, G. P.; Mendes, B. S.; Draper, D.

2010-12-01

326

The Monte Carlo (MC) method is able to accurately calculate eigenvalues in reactor analysis. Its lengthy computation time can be reduced by general-purpose computing on Graphics Processing Units (GPU), one of the latest parallel computing techniques under development. Porting a regular transport code to a GPU is usually very straightforward due to the 'embarrassingly parallel' nature of MC code. However, the situation becomes different for eigenvalue calculation in that it is performed on a generation-by-generation basis and the thread coordination must be explicitly taken care of. This paper presents our effort to develop such a GPU-based MC code in the Compute Unified Device Architecture (CUDA) environment. The code is able to perform eigenvalue calculations under simple geometries on a multi-GPU system. The specifics of algorithm design, including thread organization and memory management, were described in detail. The original CPU version of the code was tested on an Intel Xeon X5660 2.8 GHz CPU, and the adapted GPU version was tested on NVIDIA Tesla M2090 GPUs. Double-precision floating point format was used throughout the calculation. The result showed that speedups of 7.0 and 33.3 were obtained for a bare spherical core and a binary slab system, respectively. The speedup factor was further increased by a factor of approximately 2 on a dual GPU system. The upper limit of device-level parallelism was analyzed, and a possible method to enhance the thread-level parallelism was proposed.
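
The generation-by-generation structure that complicates GPU porting can be sketched with a zero-dimensional toy: each generation is an embarrassingly parallel batch of histories, but the fission-bank resampling between generations is the synchronization point. The one-group probabilities below are arbitrary illustrative values, not reactor data:

```python
import numpy as np

def keff_power_iteration(n_per_gen=20_000, generations=30, seed=4):
    """Generation-by-generation k-eigenvalue loop: each generation tallies
    the fission neutrons produced per source neutron, and the fission sites
    become the next generation's source (the point where GPU threads must
    coordinate)."""
    rng = np.random.default_rng(seed)
    nu, p_fission = 2.5, 0.30            # toy one-group physics
    sites = rng.uniform(0, 1, n_per_gen) # initial source bank
    k_hist = []
    for _ in range(generations):
        fate = rng.random(sites.size)
        fissioned = fate < p_fission     # which histories end in fission
        k_hist.append(nu * fissioned.mean())
        # resample the fission bank back to a constant population
        sites = rng.choice(sites[fissioned], size=n_per_gen, replace=True)
    return np.mean(k_hist[10:])          # skip inactive generations

k = keff_power_iteration()
```

Within a generation every history is independent, so on a GPU each would run in its own thread; only the bank resampling between generations requires explicit coordination. For this toy model the expected eigenvalue is ν·p_fission = 0.75.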

Liu, T.; Ding, A.; Ji, W.; Xu, X. G. [Nuclear Engineering and Engineering Physics, Rensselaer Polytechnic Inst., Troy, NY 12180 (United States); Carothers, C. D. [Dept. of Computer Science, Rensselaer Polytechnic Inst. RPI (United States); Brown, F. B. [Los Alamos National Laboratory (LANL) (United States)

2012-07-01

327

Neutron radiography using a transportable superconducting cyclotron

A thermal neutron radiography system based on a compact 12 MeV superconducting proton cyclotron is described. Neutrons are generated using a thick beryllium target and moderated in high density polyethylene. Monte Carlo computer simulations have been used to model the neutron and photon transport in order to optimise the performance of the system. With proton beam currents in excess of

D. A. Allen; M. R. Hawkesworth; T. D. Beynon; S. Green; J. D. Rogers; M. J. Allen; H. C. Plummer; N. J. Boulding; M. Cox; I. McDougall

1994-01-01

328

Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning

NASA Astrophysics Data System (ADS)

Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT), especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies, as well as the tracks of secondary electrons, are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation, and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head and neck, and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by up to a factor of 62 (46 on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
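
The Russian roulette technique mentioned in this abstract is a standard weight-control step for low-weight photons. A minimal sketch (the cutoff and survival weights below are illustrative choices) shows why it leaves the mean transported weight unbiased:

```python
import numpy as np

def roulette(weights, w_cut=0.1, w_survive=0.5, rng=None):
    """Russian roulette on an array of photon weights: particles below the
    cutoff are killed with probability 1 - w/w_survive or continue with
    weight w_survive, which keeps the expected total weight unchanged."""
    rng = rng or np.random.default_rng(5)
    w = weights.copy()
    low = w < w_cut
    survive = rng.random(w.size) < (w / w_survive)
    w[low & survive] = w_survive   # survivors are promoted to w_survive
    w[low & ~survive] = 0.0        # the rest are killed outright
    return w

rng = np.random.default_rng(5)
w0 = rng.uniform(0.0, 0.2, size=200_000)
w1 = roulette(w0, rng=rng)
```

A particle of weight w survives with probability w/w_survive and then carries weight w_survive, so its expected weight is still w; the population shrinks (fewer low-value histories to track) without biasing the tallies.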

Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.

2008-02-01

329

Purpose: Investigation of increased radiation dose deposition due to gold nanoparticles (GNPs) using a 3D computational cell model during x-ray radiotherapy. Methods: Two GNP simulation scenarios were set up in Geant4; a single 400 nm diameter gold cluster randomly positioned in the cytoplasm and a 300 nm gold layer around the nucleus of the cell. Using an 80 kVp photon beam, the effect of GNP on the dose deposition in five modeled regions of the cell including cytoplasm, membrane, and nucleus was simulated. Two Geant4 physics lists were tested: the default Livermore and custom built Livermore/DNA hybrid physics list. 10^6 particles were simulated at 840 cells in the simulation. Each cell was randomly placed with random orientation and a diameter varying between 9 and 13 µm. A mathematical algorithm was used to ensure that none of the 840 cells overlapped. The energy dependence of the GNP physical dose enhancement effect was calculated by simulating the dose deposition in the cells with two energy spectra of 80 kVp and 6 MV. The contribution from Auger electrons was investigated by comparing the two GNP simulation scenarios while activating and deactivating atomic de-excitation processes in Geant4. Results: The physical dose enhancement ratio (DER) of GNP was calculated using the Monte Carlo model. The model has demonstrated that the DER depends on the amount of gold and the position of the gold cluster within the cell. Individual cell regions experienced statistically significant (p < 0.05) change in absorbed dose (DER between 1 and 10) depending on the type of gold geometry used. The DER resulting from gold clusters attached to the cell nucleus had the more significant effect of the two cases (DER ≈ 55). The DER value calculated at 6 MV was shown to be at least an order of magnitude smaller than the DER values calculated for the 80 kVp spectrum. 
Based on simulations, when 80 kVp photons are used, Auger electrons have a statistically insignificant (p < 0.05) effect on the overall dose increase in the cell. The low energy of the Auger electrons produced prevents them from propagating more than 250-500 nm from the gold cluster and, therefore, has a negligible effect on the overall dose increase due to GNP. Conclusions: The results presented in the current work show that the primary dose enhancement is due to the production of additional photoelectrons.

Douglass, Michael; Bezak, Eva; Penfold, Scott [School of Chemistry and Physics, University of Adelaide, North Terrace, Adelaide, South Australia 5000 (Australia); Department of Medical Physics, Royal Adelaide Hospital, North Terrace, Adelaide South Australia 5000 (Australia)

2013-07-15

330

NASA Astrophysics Data System (ADS)

Presently there are no standard protocols for dosimetry in neutron beams for boron neutron capture therapy (BNCT) treatments. Because of the high radiation intensity and the simultaneous presence of radiation components having different linear energy transfer, and therefore different biological weighting factors, treatment planning in epithermal neutron fields for BNCT is usually performed by means of Monte Carlo calculations; experimental measurements are required in order to characterize the neutron source and to validate the treatment planning. In this work Monte Carlo simulations in two kinds of tissue-equivalent phantoms are described. The neutron transport has been studied, together with the distribution of the boron dose; simulation results are compared with data taken with Fricke gel dosimeters in the form of layers, showing a good agreement.

Bartesaghi, G.; Gambarini, G.; Negri, A.; Carrara, M.; Burian, J.; Viererbl, L.

2010-04-01

331

In the case of internal contamination by long-lived actinides through the inhalation or injection pathway, a major portion of the activity will be deposited in the skeleton and liver over a period of time. In this study, calibration factors (CFs) of a Phoswich and an array of HPGe detectors are estimated using skull and knee voxel phantoms. These phantoms are generated from the International Commission on Radiological Protection (ICRP) reference male voxel phantom. The phantoms, as well as a 20 cm diameter phoswich, having a 1.2 cm thick NaI(Tl) primary and a 5 cm thick CsI(Tl) secondary detector, and an array of three HPGe detectors (each of diameter 7 cm and thickness 2.5 cm), are incorporated in the Monte Carlo code 'FLUKA'. Biokinetic models of Pu, Am, U and Th are solved using default parameters to identify different parts of the skeleton where activity will accumulate after an inhalation intake of 1 Bq. Accordingly, CFs are evaluated for the uniform source distribution in trabecular bone and bone marrow (TBBM), cortical bone (CB) as well as in both TBBM and CB regions for photon energies of 18, 60, 63, 74, 93, 185 and 238 keV describing sources of (239)Pu, (241)Am, (238)U, (235)U and (232)Th. The CFs are also evaluated for non-uniform distribution of activity in TBBM and CB regions. The variation in the CFs for source distributed in different regions of the bones is studied. The assessment of skeletal activity of actinides from skull and knee activity measurements is discussed along with the errors. PMID:24435911

Nadar, M Y; Akar, D K; Patni, H K; Singh, I S; Mishra, L; Rao, D D; Pradeepkumar, K S

2014-12-01

332

A Monte Carlo-based procedure to assess fetal doses from 6-MV external photon beam radiation treatments has been developed to improve upon existing techniques that are based on AAPM Task Group Report 36 published in 1995 [M. Stovall et al., Med. Phys. 22, 63-82 (1995)]. Anatomically realistic models of the pregnant patient representing 3-, 6-, and 9-month gestational stages were implemented into the MCNPX code together with a detailed accelerator model that is capable of simulating scattered and leakage radiation from the accelerator head. Absorbed doses to the fetus were calculated for six different treatment plans for sites above the fetus and one treatment plan for fibrosarcoma in the knee. For treatment plans above the fetus, the fetal doses tended to increase with increasing stage of gestation. This was due to the decrease in distance between the fetal body and field edge with increasing stage of gestation. For the treatment field below the fetus, the absorbed doses tended to decrease with increasing gestational stage of the pregnant patient, due to the increasing size of the fetus and relative constant distance between the field edge and fetal body for each stage. The absorbed doses to the fetus for all treatment plans ranged from a maximum of 30.9 cGy to the 9-month fetus to 1.53 cGy to the 3-month fetus. The study demonstrates the feasibility to accurately determine the absorbed organ doses in the mother and fetus as part of the treatment planning and eventually in risk management.

Bednarz, Bryan; Xu, X. George [Nuclear Engineering and Engineering Physics Program, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States)

2008-07-15

334

This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.

Brown, F.B.; Sutton, T.M.

1996-02-01

335

A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named 'splitting-roulette', was implemented on the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented on any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two splitting modes: simple splitting and 'selective splitting'. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45. PMID:22538321
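
The weight bookkeeping behind splitting and Russian roulette can be sketched in a few lines. This is a generic illustration of the two techniques, not the PENELOPE implementation; the `importance_ratio` argument (ratio of target to current importance) is an assumed interface.

```python
import random

def split_or_roulette(weight, importance_ratio, rng):
    """Particle splitting (importance_ratio >= 1) or Russian roulette
    (importance_ratio < 1). Returns the weights of the surviving copies;
    the expected total weight equals the incoming weight either way."""
    if importance_ratio >= 1.0:
        n = int(importance_ratio)
        if rng.random() < importance_ratio - n:   # expected-value splitting
            n += 1
        return [weight / importance_ratio] * n
    if rng.random() < importance_ratio:           # roulette: survive with
        return [weight / importance_ratio]        # a boosted weight
    return []
```

Conservation of the expected statistical weight is the invariant any splitting-roulette scheme must preserve; it is what keeps the biased game unbiased in the mean.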

Rodriguez, M; Sempau, J; Brualla, L

2012-05-21

337

Automated variance reduction for Monte Carlo shielding analyses with MCNP

NASA Astrophysics Data System (ADS)

Variance reduction techniques are employed in Monte Carlo analyses to increase the number of particles in the region of phase space of interest and thereby lower the variance of the statistical estimation. Variance reduction parameters are required to perform Monte Carlo calculations. It is well known that adjoint solutions, even approximate ones, are excellent biasing functions that can significantly increase the efficiency of a Monte Carlo calculation. In this study, an automated method of generating Monte Carlo variance reduction parameters, and of implementing the source energy biasing and the weight window technique in MCNP shielding calculations, has been developed. The method is based on the approach used in the SAS4 module of the SCALE code system, which derives the biasing parameters from an adjoint one-dimensional Discrete Ordinates calculation. Unlike SAS4, which determines the radial and axial dose rates of a spent fuel cask in separate calculations, the present method provides energy and spatial biasing parameters for the entire system that optimize the simulation of particle transport towards all external surfaces of a spent fuel cask. The energy and spatial biasing parameters are synthesized from the adjoint fluxes of three one-dimensional Discrete Ordinates adjoint calculations. Additionally, the present method accommodates multiple source regions, such as the photon sources in light-water reactor spent nuclear fuel assemblies, in one calculation. With this automated method, detailed and accurate dose rate maps for photons, neutrons, and secondary photons outside spent fuel casks or other containers can be efficiently determined with minimal effort.
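
The adjoint-based biasing idea can be illustrated with a minimal CADIS-style normalization. The cell names and the choice of normalizing to unit weight at the source are illustrative assumptions, not the SAS4 or MCNP interface.

```python
def cadis_weight_centers(adjoint_flux, source_cell):
    """Weight-window centers inversely proportional to the adjoint flux
    (importance): w(i) = k / phi_adj(i), with k chosen so that the window
    center in the source cell equals the analog source weight of 1.
    Illustrative sketch of the CADIS normalization only."""
    k = adjoint_flux[source_cell]
    return {cell: k / phi for cell, phi in adjoint_flux.items()}
```

Cells of higher importance (larger adjoint flux, e.g. near a detector) receive lower weight-window centers, so particles heading there are split into many low-weight copies and tracked more densely where they matter.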

Radulescu, Georgeta

338

High-energy photon transport modeling for oil-well logging

Nuclear oil well logging tools utilizing radioisotope sources of photons are used ubiquitously in oilfields throughout the world. Because of safety and security concerns, there is renewed interest in shifting to ...

Johnson, Erik D. (Ph.D. thesis, Massachusetts Institute of Technology)

2009-01-01

339

The Department of Energy (DOE) has given the Spallation Neutron Source (SNS) project approval to begin Title I design of the proposed facility to be built at Oak Ridge National Laboratory (ORNL), and construction is scheduled to commence in FY01. The SNS initially will consist of an accelerator system capable of delivering an approximately 0.5 microsecond pulse of 1 GeV protons, at a 60 Hz frequency, with 1 MW of beam power, into a single target station. The SNS will eventually be upgraded to a 2 MW facility with two target stations (a 60 Hz station and a 10 Hz station). The radiation transport analysis, which includes the neutronic, shielding, activation, and safety analyses, is critical to the design of an intense high-energy accelerator facility like the proposed SNS, and the Monte Carlo method is the cornerstone of the radiation transport analyses.

Johnson, J.O.

2000-10-23

340

Review of Fast Monte Carlo Codes for Dose Calculation in Radiation Therapy Treatment Planning

An important requirement in radiation therapy is a fast and accurate treatment planning system. This system, using computed tomography (CT) data and the direction and characteristics of the beam, calculates the dose at all points of the patient's volume. The two main factors in a treatment planning system are accuracy and speed. According to these factors, various generations of treatment planning systems have been developed. This article is a review of fast Monte Carlo treatment planning algorithms, which are accurate and fast at the same time. The Monte Carlo techniques are based on the transport of each individual particle (e.g., photon or electron) in the tissue. The transport of the particle is done using the physics of the interaction of the particles with matter. Other techniques transport the particles as a group. For a typical dose calculation in radiation therapy the code has to transport several million particles, which takes a few hours; therefore, the Monte Carlo techniques are accurate but slow for clinical use. In recent years, with the development of the ‘fast’ Monte Carlo systems, one is able to perform dose calculation in a reasonable time for clinical use. The acceptable time for dose calculation is in the range of one minute. There is currently a growing interest in fast Monte Carlo treatment planning systems, and there are many commercial treatment planning systems that perform dose calculation in radiation therapy based on the Monte Carlo technique. PMID:22606661
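
The per-particle transport loop that makes analog Monte Carlo slow is easy to see in miniature. The sketch below assumes an infinite homogeneous medium with isotropic absorb-or-scatter collisions; real tissue codes add voxelized geometry and an anisotropic (e.g. Henyey-Greenstein) phase function on top of exactly this loop.

```python
import math
import random

def photon_walk(mu_a, mu_s, rng):
    """Analog random walk of one photon: sample an exponential free path,
    then absorb or scatter at each collision. Returns the total path length
    travelled before absorption (mu_a, mu_s in 1/cm)."""
    mu_t = mu_a + mu_s                                # total attenuation
    path = 0.0
    while True:
        path += -math.log(1.0 - rng.random()) / mu_t  # free-path sample
        if rng.random() < mu_a / mu_t:                # absorption vs scatter
            return path
```

By the memoryless property of the exponential, the mean total path before absorption is 1/mu_a regardless of the scattering coefficient; millions of such histories must be averaged, which is why acceleration schemes are needed for clinical use.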

Jabbari, Keyvan

2011-01-01

341

therein), Monte Carlo collision technique and 3D dielectric breakdown models for the branched streamer in mixtures of molecular nitrogen and oxygen which may serve as the basis for modeling physical and chemical of a multitude of streamers [5]. Therefore accurate modeling and simulation of streamers is of high interest

Ebert, Ute

342

NASA Astrophysics Data System (ADS)

We study the nonequilibrium phenomena through the quantum dot coupled to the normal and superconducting leads by means of a continuous-time quantum Monte Carlo method in the Nambu formalism. Calculating the time evolution of the current, we discuss how the system approaches the steady state after the sudden interaction quench. The sign problem in the method is also addressed.

Koga, Akihisa

343

This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.

Wagner, John C. [ORNL]; Peplow, Douglas E. [ORNL]; Mosher, Scott W. [ORNL]

2014-01-01

344

Purpose: The authors describe a detailed Monte Carlo (MC) method for the coupled transport of ionizing particles and charge carriers in amorphous selenium (a-Se) semiconductor x-ray detectors, and model the effect of statistical variations on the detected signal. Methods: A detailed transport code was developed for modeling the signal formation process in semiconductor x-ray detectors. The charge transport routines include three-dimensional spatial and temporal models of electron-hole pair transport taking into account recombination and trapping. Many electron-hole pairs are created simultaneously in bursts from energy deposition events. Carrier transport processes include drift due to the external field and Coulombic interactions, and diffusion due to Brownian motion. Results: Pulse-height spectra (PHS) have been simulated with different transport conditions for a range of monoenergetic incident x-ray energies and mammography radiation beam qualities. Two methods for calculating Swank factors from simulated PHS are shown, one using the entire PHS distribution, and the other using the photopeak. The latter ignores contributions from Compton scattering and K-fluorescence. Experimental measurements and simulations differ by approximately 2%. Conclusions: The a-Se x-ray detector PHS responses simulated in this work include three-dimensional spatial and temporal transport of electron-hole pairs. These PHS were used to calculate the Swank factor and compare it with experimental measurements. The Swank factor was shown to be a function of x-ray energy and applied electric field. Trapping and recombination models are all shown to affect the Swank factor.
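
The Swank factor referred to above is a simple moment ratio of the pulse-height spectrum, I = M1^2 / (M0 * M2). A minimal calculation on a binned spectrum (bin centers and counts assumed as inputs) looks like:

```python
def swank_factor(pulse_heights, counts):
    """Swank factor I = M1^2 / (M0 * M2), where Mn is the n-th moment of
    the pulse-height distribution (heights weighted by counts)."""
    m0 = sum(counts)
    m1 = sum(h * c for h, c in zip(pulse_heights, counts))
    m2 = sum(h * h * c for h, c in zip(pulse_heights, counts))
    return m1 * m1 / (m0 * m2)
```

A delta-function response gives I = 1; any spread in pulse heights, e.g. from Compton events or K-fluorescence escape, pulls the factor below 1.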

Fang, Yuan; Badal, Andreu; Allec, Nicholas; Karim, Karim S.; Badano, Aldo [Division of Imaging and Applied Mathematics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993-0002 (United States); Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada)]

2012-01-15

345

NASA Astrophysics Data System (ADS)

The wave nature of radiation prevents its reflections-free propagation around sharp corners. We demonstrate that a simple photonic structure based on a periodic array of metallic cylinders attached to one of the two confining metal plates can emulate spin-orbit interaction through bianisotropy. Such a metawaveguide behaves as a photonic topological insulator with complete topological band gap. An interface between two such structures with opposite signs of the bianisotropy supports topologically protected surface waves, which can be guided without reflections along sharp bends of the interface.

Ma, Tzuhsuan; Khanikaev, Alexander B.; Mousavi, S. Hossein; Shvets, Gennady

2015-03-01

347

Galerkin-based meshless methods for photon transport in the biological tissue

Tian, Jie

348

The paper is intended to show the effect of a biological shielding simulator on fast neutron and photon transport in its vicinity. The fast neutron and photon fluxes were measured by means of scintillation spectroscopy using a 45×45 mm² and a 10×10 mm² cylindrical stilbene detector. The neutron spectrum was measured in the range of 0.6-10 MeV and the photon spectrum in the range of 0.2-9 MeV. The results of the experiment are compared with calculations. The calculations were performed with various nuclear data libraries. PMID:23434890

Košťál, Michal; Cvachovec, František; Milčák, Ján; Mravec, Filip

2013-05-01

349

The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

Cramer, S.N.

1984-01-01

350

NASA Astrophysics Data System (ADS)

Dose calculation is a central part of treatment planning. The dose calculation must be 1) accurate, so that medical physicists and radio-oncologists can make decisions based on results close to reality, and 2) fast enough to allow routine use. The compromise between these two opposing factors gave rise to several dose calculation algorithms, from the most approximate and fast to the most accurate and slow. The most accurate of these algorithms is the Monte Carlo method, since it is based on fundamental physical principles. Since 2007, a new computing platform has gained popularity in the scientific computing community: the graphics processing unit (GPU). The hardware existed before 2007, and some scientific computations were already carried out on the GPU; 2007, however, marked the arrival of the CUDA programming language, which makes it possible to program the GPU without dealing with graphics contexts. The GPU is a massively parallel computing platform well suited to data-parallel algorithms. This thesis aims at determining how to maximize the use of a GPU to speed up the execution of a Monte Carlo simulation for radiotherapy dose calculation. To answer this question, the GPUMCD platform was developed. GPUMCD implements a coupled photon-electron Monte Carlo simulation carried out entirely on the GPU. The first objective of this thesis is to evaluate this method for calculations in external radiotherapy. Simple monoenergetic sources and layered phantoms are used. A comparison with the EGSnrc platform and DPM is carried out. GPUMCD agrees with EGSnrc within a gamma criterion of 2%-2 mm while being at least 1200x faster than EGSnrc and 250x faster than DPM. The second objective consists in the evaluation of the platform for brachytherapy calculations. Complex sources based on the geometry and the energy spectrum of real sources are used inside a TG-43 reference geometry. Differences of less than 4% are found compared to the BrachyDose platform as well as the TG-43 consensus data. The third objective aims at the use of GPUMCD for dose calculation within an MRI-Linac environment. To this end, the effect of the magnetic field on charged particles has been added to the simulation. It was shown that GPUMCD is within a gamma criterion of 2%-2 mm of two experiments aiming at highlighting the influence of the magnetic field on the dose distribution. The results suggest that the GPU is an interesting computing platform for dose calculation through Monte Carlo simulation and that the GPUMCD software platform makes it possible to achieve fast and accurate results.
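
The 2%-2 mm gamma criterion used for these comparisons can be sketched in one dimension. This simplified version uses global normalization to the maximum reference dose and no interpolation between dose points, unlike clinical gamma analysis tools.

```python
import math

def gamma_index(ref_points, eval_points, dd=0.02, dta=2.0):
    """1-D gamma analysis sketch. For each reference point (x, dose),
    gamma = min over evaluated points of
    sqrt((dx/dta)^2 + (dDose/(dd*Dmax))^2); a point passes if gamma <= 1."""
    d_max = max(d for _, d in ref_points)     # global dose normalization
    gammas = []
    for xr, dr in ref_points:
        g = min(math.sqrt(((xe - xr) / dta) ** 2 +
                          ((de - dr) / (dd * d_max)) ** 2)
                for xe, de in eval_points)
        gammas.append(g)
    return gammas
```

Identical distributions give gamma = 0 everywhere; a pure 2% dose error or a pure 2 mm positional error each sit exactly at the passing boundary gamma = 1.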

Hissoiny, Sami

351

Purpose: The authors describe the modification to a previously developed Monte Carlo model of a semiconductor direct x-ray detector required for studying the effect of burst and recombination algorithms on detector performance. This work provides insight into the effect of different charge generation models for a-Se detectors on Swank noise and recombination fraction. Methods: The proposed burst and recombination models are implemented in the Monte Carlo simulation package, ARTEMIS, developed by Fang et al. [“Spatiotemporal Monte Carlo transport methods in x-ray semiconductor detectors: Application to pulse-height spectroscopy in a-Se,” Med. Phys. 39(1), 308–319 (2012)]. The burst model generates a cloud of electron-hole pairs based on electron velocity, energy deposition, and material parameters distributed within a spherical uniform volume (SUV) or on a spherical surface area (SSA). A simple first-hit (FH) and a more detailed but computationally expensive nearest-neighbor (NN) recombination algorithm are also described and compared. Results: Simulated recombination fractions for a single electron-hole pair show good agreement with the Onsager model for a wide range of electric field, thermalization distance, and temperature. The recombination fraction and Swank noise exhibit a dependence on the burst model for generation of many electron-hole pairs from a single x ray. The Swank noise decreased for the SSA compared to the SUV model at 4 V/µm, while the recombination fraction decreased for the SSA compared to the SUV model at 30 V/µm. The NN and FH recombination results were comparable. Conclusions: Results obtained with the ARTEMIS Monte Carlo transport model incorporating drift and diffusion are validated against the Onsager model for a single electron-hole pair as a function of electric field, thermalization distance, and temperature. For x-ray interactions, the authors demonstrate that the choice of burst model can affect the simulation results for the generation of many electron-hole pairs. The SSA model is more sensitive to the effect of the electric field than the SUV model, and the NN and FH recombination algorithms did not significantly affect simulation results.

Fang, Yuan, E-mail: yuan.fang@fda.hhs.gov [Division of Imaging and Applied Mathematics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993-0002 (United States) and Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada)]; Karim, Karim S. [Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada)]; Badano, Aldo [Division of Imaging and Applied Mathematics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993-0002 (United States)]

2014-01-15

352

Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant was used to enhance the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry.

Randolph Schwarz; Leland L. Carter; Alysia Schwarz

2005-08-23

353

The role of plasma evolution and photon transport in optimizing future advanced lithography sources

NASA Astrophysics Data System (ADS)

Laser produced plasma (LPP) sources for extreme ultraviolet (EUV) photons are currently based on using small liquid tin droplets as targets, which has many advantages, including generation of stable continuous targets at high repetition rate, a larger photon collection angle, and reduced contamination and damage to the optical mirror collection system from plasma debris and energetic particles. The ideal target generates a source of maximum EUV radiation output and collection in the 13.5 nm range with minimum atomic debris. Based on recent experimental results and our modeling predictions, the smallest efficient droplets have diameters in the range of 20-30 µm in LPP devices with the dual-beam technique. Such devices can produce EUV sources with conversion efficiency around 3% and with collected EUV power of 190 W or more, which can satisfy current requirements for high volume manufacturing. One of the most important characteristics of these devices is the low amount of atomic debris produced, due to the small initial mass of the droplets and the significant vaporization rate during the pre-pulse stage. In this study, we analyzed in detail the plasma evolution processes in LPP systems using small spherical tin targets to predict the optimum droplet size yielding maximum EUV output. We identified several important processes during laser-plasma interaction that can affect conditions for optimum EUV photon generation and collection. The importance of accurately modeling these physical processes increases as the target size, and hence its simulation domain, decreases.

Sizyuk, Tatyana; Hassanein, Ahmed

2013-08-01

354

The control of the electron temperature and charged particle transport in negative hydrogen ion sources has a crucial role for the performance of the system. It is usually achieved by the use of a magnetic filter, a localized transverse magnetic field, which reduces the electron temperature and enhances the negative ion yield. There are several works in the literature on modeling of the magnetic filter effects based on fluid and kinetic modeling, which, however, suggest rather different mechanisms responsible for the electron cooling and particle transport through the filter. Here a kinetic modeling of the problem based on the particle-in-cell with Monte Carlo collisions method is presented. The charged particle transport across a magnetic filter is studied in hydrogen plasmas with and without including volume production of negative ions, in a one-dimensional Cartesian geometry. The simulation shows a classical (collisional) electron diffusion across the magnetic filter with a reduction in the electron temperature, but no selective effect in electron energy is observed (Coulomb collisions are not considered). When a bias voltage is applied, the plasma is split into an upstream electropositive region and a downstream electronegative region. Different configurations with respect to bias voltage and magnetic field strength are examined and discussed. Although the bias voltage allows negative ion extraction, the results show that volume production of negative ions in the downstream region is not really enhanced by the magnetic filter.
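
The particle push at the heart of a particle-in-cell simulation with a magnetic field is typically the Boris update (half electric kick, magnetic rotation, half electric kick). The sketch below is the standard scheme in generic form, not the specific code used in this paper.

```python
def boris_push(v, e_field, b_field, qm, dt):
    """One Boris velocity update for charge-to-mass ratio qm over a time
    step dt: half electric kick, exact-magnitude magnetic rotation, half
    electric kick. Vectors are plain 3-tuples."""
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, k): return tuple(x * k for x in a)
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])
    v_minus = add(v, scale(e_field, 0.5 * qm * dt))   # first half kick
    t = scale(b_field, 0.5 * qm * dt)                 # rotation vector
    s = scale(t, 2.0 / (1.0 + sum(x * x for x in t)))
    v_prime = add(v_minus, cross(v_minus, t))
    v_plus = add(v_minus, cross(v_prime, s))          # rotation complete
    return add(v_plus, scale(e_field, 0.5 * qm * dt)) # second half kick
```

With no electric field the update is a pure rotation, so the particle's speed (and hence kinetic energy) is conserved exactly, which is why the Boris scheme is the workhorse of PIC-MCC codes.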

Kolev, St.; Hagelaar, G. J. M.; Boeuf, J. P. [Laboratoire Plasma et Conversion d'Energie (LAPLACE), Universite Paul Sabatier, Bt. 3R2, 118 Route de Narbonne, 31062 Toulouse Cedex 9 (France)

2009-04-15

355

An Xwindow application capable of importing geometric information directly from two Computer Aided Design (CAD) based formats for use in radiation transport and shielding analyses is being developed at ORNL. The application permits the user to graphically view the geometric models imported from the two formats for verification and debugging. Previous models, specifically formatted for the radiation transport and shielding codes can also be imported. Required extensions to the existing combinatorial geometry analysis routines are discussed. Examples illustrating the various options and features which will be implemented in the application are presented. The use of the application as a visualization tool for the output of the radiation transport codes is also discussed.

Burns, T.J.

1994-03-01

356

NASA Astrophysics Data System (ADS)

We analyze the dynamics of single-photon transport in a single-mode waveguide coupled to a micro-optical resonator by using a fully quantum-mechanical model. We examine the propagation of a single-photon Gaussian packet through the system under various coupling conditions. We review the theory of single-photon transport phenomena as applied to the system and we develop a discussion on the numerical technique we used to solve for dynamical behavior of the quantized field. To demonstrate our method and to establish robust single-photon results, we study the process of adiabatically lowering or raising the energy of a single photon trapped in an optical resonator under active tuning of the resonator. We show that our fully quantum-mechanical approach reproduces the semiclassical result in the appropriate limit and that the adiabatic invariant has the same form in each case. Finally, we explore the trapping of a single photon in a system of dynamically tuned, coupled optical cavities.

Hach, Edwin E., III; Elshaari, Ali W.; Preble, Stefan F.

2010-12-01

357

We present the implementation, validation, and performance of a Neumann-series approach for simulating light propagation at optical wavelengths in uniform media using the radiative transport equation (RTE). The RTE is solved for an anisotropic-scattering medium in a spherical harmonic basis for a diffuse-optical-imaging setup. The main objectives of this paper are threefold: to present the theory behind the Neumann-series form for the RTE, to design and develop the mathematical methods and the software to implement the Neumann series for a diffuse-optical-imaging setup, and, finally, to perform an exhaustive study of the accuracy, practical limitations, and computational efficiency of the Neumann-series method. Through our results, we demonstrate that the Neumann-series approach can be used to model light propagation in uniform media with small geometries at optical wavelengths. PMID:23201893
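
After discretization, the Neumann-series form reduces to summing repeated applications of the scattering operator to the source: phi = s + K s + K^2 s + ... A toy dense-matrix version (assuming the spectral radius of K is below one, as the convergence condition above requires) is:

```python
def neumann_solve(K, s, n_terms):
    """Truncated Neumann series for phi = s + K phi:
    phi ~= sum_{k=0}^{n_terms} K^k s. K is a list-of-lists matrix,
    s a list; converges when the spectral radius of K is < 1."""
    n = len(s)
    phi = list(s)
    term = list(s)
    for _ in range(n_terms):
        term = [sum(K[i][j] * term[j] for j in range(n)) for i in range(n)]
        phi = [p + t for p, t in zip(phi, term)]
    return phi
```

Each added term corresponds physically to one more scattering order, which is why truncation depth trades accuracy against cost in highly scattering media.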

Jha, Abhinav K.; Kupinski, Matthew A.; Masumura, Takahiro; Clarkson, Eric; Maslov, Alexey V.; Barrett, Harrison H.

2014-01-01

358

PEREGRINE: An all-particle Monte Carlo code for radiation therapy

The goal of radiation therapy is to deliver a lethal dose to the tumor while minimizing the dose to normal tissues. To carry out this task, it is critical to calculate correctly the distribution of dose delivered. Monte Carlo transport methods have the potential to provide more accurate prediction of dose distributions than currently-used methods. PEREGRINE is a new Monte Carlo transport code developed at Lawrence Livermore National Laboratory for the specific purpose of modeling the effects of radiation therapy. PEREGRINE transports neutrons, photons, electrons, positrons, and heavy charged-particles, including protons, deuterons, tritons, helium-3, and alpha particles. This paper describes the PEREGRINE transport code and some preliminary results for clinically relevant materials and radiation sources.

Hartmann Siantar, C.L.; Chandler, W.P.; Rathkopf, J.A.; Svatos, M.M.; White, R.M.

1994-09-01

359

We study a class of methods for the numerical solution of the system of stochastic differential equations (SDEs) that arises in the modeling of turbulent combustion, specifically in the Monte Carlo particle method for the solution of the model equations for the composition probability density function (PDF) and the filtered density function (FDF). This system consists of an SDE for particle position and a random differential equation for particle composition. The numerical methods considered advance the solution in time with (weak) second-order accuracy with respect to the time step size. The four primary contributions of the paper are: (i) establishing that the coefficients in the particle equations can be frozen at the mid-time (while preserving second-order accuracy), (ii) examining the performance of three existing schemes for integrating the SDEs, (iii) developing and evaluating different splitting schemes (which treat particle motion, reaction and mixing on different sub-steps), and (iv) developing the method of manufactured solutions (MMS) to assess the convergence of Monte Carlo particle methods. Tests using MMS confirm the second-order accuracy of the schemes. In general, the use of frozen coefficients reduces the numerical errors. Otherwise no significant differences are observed in the performance of the different SDE schemes and splitting schemes.
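Contribution (i), freezing the coefficients at the mid-time while preserving second-order accuracy, has a simple deterministic analogue. The sketch below is not the paper's stochastic scheme; it integrates dx/dt = a(t)x with the coefficient evaluated once per step at the midpoint, and checks that halving the step size cuts the error by about a factor of four:

```python
import math

def a(t):
    return math.cos(t)        # hypothetical time-dependent coefficient

def integrate(dt, T=1.0):
    # coefficient frozen at the mid-time of each step (midpoint rule)
    n = round(T / dt)
    x = 1.0
    for k in range(n):
        x *= math.exp(a((k + 0.5) * dt) * dt)
    return x

exact = math.exp(math.sin(1.0))   # exact solution of dx/dt = cos(t) x
e1 = abs(integrate(0.01) - exact)
e2 = abs(integrate(0.005) - exact)
assert 3.5 < e1 / e2 < 4.5        # error ratio ~4: second-order accuracy
```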
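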

Wang Haifeng [Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY 14853 (United States)], E-mail: hw98@cornell.edu; Popov, Pavel P.; Pope, Stephen B. [Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY 14853 (United States)

2010-03-01

360

Application of MINERVA Monte Carlo simulations to targeted radionuclide therapy.

Recent clinical results have demonstrated the promise of targeted radionuclide therapy for advanced cancer. As the success of this emerging form of radiation therapy grows, accurate treatment planning and radiation dose simulations are likely to become increasingly important. To address this need, we have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA system. The goal of the MINERVA dose calculation system is to provide 3-D Monte Carlo simulation-based dosimetry for radiation therapy, focusing on experimental and emerging applications. For molecular targeted radionuclide therapy applications, MINERVA calculates patient-specific radiation dose estimates using computed tomography to describe the patient anatomy, combined with a user-defined 3-D radiation source. This paper describes the validation of the 3-D Monte Carlo transport methods to be used in MINERVA for molecular targeted radionuclide dosimetry. It reports comparisons of MINERVA dose simulations with published absorbed fraction data for distributed, monoenergetic photon and electron sources, and for radioisotope photon emission. MINERVA simulations are generally within 2% of EGS4 results and 10% of MCNP results, but differ by up to 40% from the recommendations given in MIRD Pamphlets 3 and 8 for identical medium composition and density. For several representative source and target organs in the abdomen and thorax, specific absorbed fractions calculated with the MINERVA system are generally within 5% of those published in the revised MIRD Pamphlet 5 for 100 keV photons. However, results differ by up to 23% for the adrenal glands, the smallest of our target organs. Finally, we show examples of Monte Carlo simulations in a patient-like geometry for a source of uniform activity located in the kidney. PMID:12667310

Descalle, Marie-Anne; Hartmann Siantar, Christine L; Dauffy, Lucile; Nigg, David W; Wemple, Charles A; Yuan, Aina; DeNardo, Gerald L

2003-02-01

361

Fast Monte Carlo for radiation therapy: the PEREGRINE Project

The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient-specific beam delivery components as well as the patient. Most importantly, comparisons of Monte Carlo dose calculations with currently-used algorithms reveal significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.

Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.

1997-11-11

362

High-speed DC transport of emergent monopoles in spinor photonic fluids.

We investigate the spin dynamics of half-solitons in quantum fluids of interacting photons (exciton polaritons). Half-solitons, which behave as emergent monopoles, can be accelerated by the presence of effective magnetic fields. We study the generation of dc magnetic currents in a gas of half-solitons. At low densities, the current is suppressed due to the dipolar oscillations. At moderate densities, a magnetic current is recovered as a consequence of the collisions between the carriers. We show a deviation from Ohm's law due to the competition between dipoles and monopoles. PMID:25083658

Terças, H; Solnyshkov, D D; Malpuech, G

2014-07-18

363

High-Speed DC Transport of Emergent Monopoles in Spinor Photonic Fluids

NASA Astrophysics Data System (ADS)

We investigate the spin dynamics of half-solitons in quantum fluids of interacting photons (exciton polaritons). Half-solitons, which behave as emergent monopoles, can be accelerated by the presence of effective magnetic fields. We study the generation of dc magnetic currents in a gas of half-solitons. At low densities, the current is suppressed due to the dipolar oscillations. At moderate densities, a magnetic current is recovered as a consequence of the collisions between the carriers. We show a deviation from Ohm's law due to the competition between dipoles and monopoles.

Terças, H.; Solnyshkov, D. D.; Malpuech, G.

2014-07-01

364

MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

Marcus, Ryan C. [Los Alamos National Laboratory

2012-07-25

365

Akinetic crisis (AC) is akin to neuroleptic malignant syndrome (NMS) and is the most severe and possibly lethal complication of parkinsonism. Diagnosis today is based only on clinical assessment and is often marred by concomitant precipitating factors. Our purpose is to show that AC and NMS can be reliably identified by FP/CIT single-photon emission computerized tomography (SPECT) performed during the crisis. Prospective cohort evaluation in 6 patients. In 5 patients, affected by Parkinson disease or Lewy body dementia, the crisis was categorized as AC. One was diagnosed as having NMS because of exposure to risperidone. In all patients, FP/CIT SPECT was performed in the acute phase. SPECT was repeated 3 to 6 months after the acute event in 5 patients. Visual assessments and semiquantitative evaluations of binding potentials (BPs) were used. To exclude the interference of emergency treatments, FP/CIT BP was also evaluated in 4 patients currently treated with apomorphine. During AC or NMS, BP values in the caudate and putamen were reduced by 95% to 80%, to noise level, with a nearly complete loss of striatal dopamine transporter binding, corresponding to the "burst striatum" pattern. The follow-up re-evaluation in surviving patients showed a recovery of values to the range expected for parkinsonisms of the same disease duration. No binding effects of apomorphine were observed. By showing this outstanding binding reduction, presynaptic dopamine transporter ligands can provide instrumental evidence of AC in parkinsonism and NMS. PMID:25837755

Martino, G; Capasso, M; Nasuti, M; Bonanni, L; Onofrj, M; Thomas, A

2015-04-01

366

NASA Astrophysics Data System (ADS)

Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP) provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. Energy- and intensity-modulated electron radiotherapy (MERT) is a promising developing treatment modality with the fundamental capability to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform-independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP.
The structure of the MERT planning toolkit software and optimization algorithms are demonstrated. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head and neck sarcoma cancer site using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed beam photon and electron treatment planning. Photon optimization treatment planning tools were included within the MERT planning toolkit for the purpose of mixed beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and the development of an inverse planning system for photon, electron or mixed beam radiotherapy (MBRT). The justification and validation of this work is found within the results of the planning studies, which have demonstrated dosimetric advantages to using MERT or MBRT in comparison to clinical treatment alternatives.

Alexander, Andrew William

367

Transport properties of disordered photonic crystals around a Dirac-like point.

At the Dirac-like point at the Brillouin zone center, the photonic crystals (PhCs) can mimic a zero-index medium. In the band structure, an additional flat band of longitudinal mode will intersect the Dirac cone. This longitudinal mode can be excited in PhCs with finite sizes at the Dirac-like point. By introducing positional shift in the PhCs, we study the dependence of the longitudinal mode on the disorder. At the Dirac-like point, the transmission peak induced by the longitudinal mode decreases as the random degree increases. However, at a frequency slightly above the Dirac-like point, in which the longitudinal mode is absent, the transmission is insensitive to the disorder because the effective index is still near zero and the effective wavelength in the PhC is very large. PMID:25836546

Wang, Xiao; Jiang, Haitao; Li, Yuan; Yan, Chao; Deng, Fusheng; Sun, Yong; Li, Yunhui; Shi, Yunlong; Chen, Hong

2015-02-23

368

NASA Astrophysics Data System (ADS)

A new phenomenological approach is developed to reproduce the stochastic distributions of secondary-particle energy and angle, with conservation of momentum and energy, in reactions ejecting more than one ejectile, using inclusive cross-section data. The sums of energy and momentum in each reaction are generally not conserved in Monte Carlo particle transport simulation based on inclusive cross-sections because the particle correlations are lost in the inclusive cross-section data. However, the energy and angular distributions are successfully reproduced by randomly generating numerous sets of secondary-particle configurations that comply with the conservation laws, and sampling one set according to its likelihood. The developed approach was applied to simulation of (n,xn) reactions (x ≥ 2) of various targets and to other reactions such as (n,np) and (n,2n?). The calculated secondary-particle energy and angular distributions were compared with those of the original inclusive cross-section data to validate the algorithm. The calculated distributions reproduce the trend of the original cross-section data well, especially for heavy targets. The developed algorithm is beneficial for improving the accuracy of event-by-event analysis in particle transport simulation.
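The generate-and-select idea can be caricatured in one dimension: draw candidate secondary energies independently from an (assumed) inclusive spectrum and keep only configurations whose total matches the available energy. A toy sketch, with a flat spectrum standing in for real cross-section data; the paper's additional likelihood weighting of candidate sets is omitted here:

```python
import random

random.seed(3)
E_avail = 10.0   # total energy available to the two ejectiles (toy value)

def inclusive_sample():
    # stand-in for sampling a secondary energy from inclusive data
    return random.uniform(0.0, E_avail)

def sample_pair(tol=0.05):
    # generate candidate configurations; keep only conservation-compliant sets
    while True:
        e1, e2 = inclusive_sample(), inclusive_sample()
        if abs(e1 + e2 - E_avail) < tol:
            return e1, e2

pairs = [sample_pair() for _ in range(2000)]
# every accepted event now conserves energy event-by-event
assert all(abs(e1 + e2 - E_avail) < 0.05 for e1, e2 in pairs)
```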

Ogawa, T.; Sato, T.; Hashimoto, S.; Niita, K.

2014-11-01

369

NASA Astrophysics Data System (ADS)

There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities, and depth dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict the magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs further downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation to the lung. Dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their abilities to correctly predict the dose to the lung tissue. 
A thorough analysis of the effects of density and field size on dose to the lung, and of how modern dose calculation algorithms compare to Monte Carlo data, is presented in this research project. This work can be used as a basis to further refine an algorithm's accuracy in low-density media or to correct prior dosimetric results.

Jones, Andrew Osler

370

Photon Maps. Photon tracing simulates light propagation by shooting photons from the light sources and storing the incidences along each photon's path. Surface properties are implemented statistically (Russian roulette). The photon map stores the incidence point (in 3D) and the normal at that point.

Lischinski, Dani

371

Monte Carlo simulation of amorphous selenium imaging detectors

NASA Astrophysics Data System (ADS)

We present a Monte Carlo (MC) simulation method for studying the signal formation process in amorphous Selenium (a-Se) imaging detectors for design validation and optimization of direct imaging systems. The assumptions and limitations of the proposed and previous models are examined. The PENELOPE subroutines for MC simulation of radiation transport are used to model incident x-ray photon and secondary electron interactions in the photoconductor. Our simulation model takes into account applied electric field, atomic properties of the photoconductor material, carrier trapping by impurities, and bimolecular recombination between drifting carriers. The particle interaction cross-sections for photons and electrons are generated for Se over the energy range of medical imaging applications. Since inelastic collisions of secondary electrons lead to the creation of electron-hole pairs in the photoconductor, the electron inelastic collision stopping power is compared for PENELOPE's Generalized Oscillator Strength model with the established EEDL and NIST ESTAR databases. Sample simulated particle tracks for photons and electrons in Se are presented, along with the energy deposition map. The PENEASY general-purpose main program is extended with custom transport subroutines to take into account generation and transport of electron-hole pairs in an electromagnetic field. The charge transport routines consider trapping and recombination, and the energy required to create a detectable electron-hole pair can be estimated from simulations. This modular simulation model is designed to model complete image formation.

Fang, Yuan; Badal, Andreu; Allec, Nicholas; Karim, Karim S.; Badano, Aldo

2010-04-01

372

NASA Astrophysics Data System (ADS)

Transient characteristics of wurtzite Zn1-xMgxO are investigated using a three-valley ensemble Monte Carlo model, verified by agreement between the simulated low-field mobility and reported experimental results. The electronic structures are obtained by first-principles calculations with density functional theory. The results show that the peak electron drift velocities of Zn1-xMgxO (x = 11.1%, 16.7%, 19.4%, 25%) at 3000 kV/cm are 3.735 × 10^7, 2.133 × 10^7, 1.889 × 10^7, and 1.295 × 10^7 cm/s, respectively. With increasing Mg concentration, a higher electric field is required for the onset of velocity overshoot. When the applied field exceeds 2000 kV/cm and 2500 kV/cm, velocity undershoot is observed in Zn0.889Mg0.111O and Zn0.833Mg0.167O, respectively, while it is not observed for Zn0.806Mg0.194O and Zn0.75Mg0.25O even at 3000 kV/cm, which is especially important for high-frequency devices.

Wang, Ping; Hu, Linlin; Yang, Yintang; Shan, Xuefei; Song, Jiuxu; Guo, Lixin; Zhang, Zhiyong

2015-01-01

373

Monte Carlo algorithms are developed to calculate the ensemble-average particle leakage through the boundaries of a 2-D binary stochastic material. The mixture is specified within a rectangular area and consists of a fixed number of disks of constant radius randomly embedded in a matrix material. The algorithms are extensions of the proposal of Zimmerman et al., using chord-length sampling to eliminate the need to explicitly model the geometry of the mixture. Two variations are considered. The first algorithm uses Chord-Length Sampling (CLS) for both material regions. The second algorithm employs Limited Chord-Length Sampling (LCLS), using chord-length sampling only in the matrix material. Ensemble-average leakage results are computed for a range of material interaction coefficients and compared against benchmark results for both accuracy and efficiency. Both algorithms are exact for purely absorbing materials and provide decreasing accuracy as scattering is increased in the matrix material. The LCLS algorithm shows better accuracy than the CLS algorithm in all cases while maintaining equivalent or better efficiency. The accuracy and efficiency problems with the CLS algorithm are due principally to assumptions made in determining the chord-length distribution within the disks.
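For a purely absorbing, Markovian (exponential-chord) mixture, chord-length sampling can be checked against explicit realization sampling, since in that limit both are exact. A 1-D sketch with assumed cross sections and mean chord lengths (all parameter values illustrative):

```python
import math, random

random.seed(1)
L = 2.0                    # slab thickness
sig = (0.5, 2.0)           # absorption cross sections of materials 0 and 1
lam = (1.0, 0.5)           # mean chord lengths (Markovian mixing)
p0 = lam[0] / (lam[0] + lam[1])   # volume fraction of material 0

def transmit_cls():
    """Chord-length sampling: no explicit geometry is ever built."""
    x, m = 0.0, 0 if random.random() < p0 else 1
    while x < L:
        d_coll = random.expovariate(sig[m])        # distance to absorption
        d_int = random.expovariate(1.0 / lam[m])   # distance to interface
        if d_coll < d_int:
            return 1.0 if x + d_coll >= L else 0.0  # escape vs. absorption
        x += d_int
        m = 1 - m                                  # cross into other material
    return 1.0

def transmit_explicit():
    """Benchmark: build one realization, attenuate analytically through it."""
    x, m, tau = 0.0, 0 if random.random() < p0 else 1, 0.0
    while x < L:
        seg = random.expovariate(1.0 / lam[m])
        tau += sig[m] * min(seg, L - x)            # optical depth of segment
        x += seg
        m = 1 - m
    return math.exp(-tau)

n = 100000
t_cls = sum(transmit_cls() for _ in range(n)) / n
t_ref = sum(transmit_explicit() for _ in range(n)) / n
assert abs(t_cls - t_ref) < 0.015   # agree within statistical noise
```

The CLS walk samples the distance to the next interface on the fly, which is only exact because exponential chords are memoryless; for the disk geometry of the paper this assumption breaks down, which is the source of the CLS accuracy problems noted above.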

T.J. Donovan; Y. Danon

2002-03-15

374

Background: Significant hepatobiliary accumulation of technetium-99m-labeled cardiac perfusion agents has been shown to cause alterations in the apparent localization of the agents in the cardiac walls. A Monte Carlo study was conducted to investigate the hypothesis that the cardiac count changes are due to the inconsistencies in the projection data input to reconstruction, and that correction of the causes of

Michael A. King; Weishi Xia; Daniel J. deVries; Tin-Su Pan; Benard J. Villegas; Seth Dahlberg; Benjamin M. W. Tsui; Michael H. Ljungberg; Hugh T. Morgan

1996-01-01

375

A system and method is disclosed for radiation dose calculation within sub-volumes of a particle transport grid. In a first step of the method voxel volumes enclosing a first portion of the target mass are received. A second step in the method defines dosel volumes which enclose a second portion of the target mass and overlap the first portion. A third step in the method calculates common volumes between the dosel volumes and the voxel volumes. A fourth step in the method identifies locations in the target mass of energy deposits. And, a fifth step in the method calculates radiation doses received by the target mass within the dosel volumes. A common volume calculation module inputs voxel volumes enclosing a first portion of the target mass, inputs voxel mass densities corresponding to a density of the target mass within each of the voxel volumes, defines dosel volumes which enclose a second portion of the target mass and overlap the first portion, and calculates common volumes between the dosel volumes and the voxel volumes. A dosel mass module, multiplies the common volumes by corresponding voxel mass densities to obtain incremental dosel masses, and adds the incremental dosel masses corresponding to the dosel volumes to obtain dosel masses. A radiation transport module identifies locations in the target mass of energy deposits. And, a dose calculation module, coupled to the common volume calculation module and the radiation transport module, for calculating radiation doses received by the target mass within the dosel volumes.
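The common-volume step in the disclosed method is, geometrically, the overlap of axis-aligned boxes. A minimal sketch of that calculation (box layout and the density value are illustrative, not from the patent):

```python
# Overlap (common) volume of two axis-aligned boxes, each given as
# (xmin, ymin, zmin, xmax, ymax, zmax) -- the geometric core of the
# dosel/voxel common-volume calculation described above.
def common_volume(a, b):
    v = 1.0
    for i in range(3):
        lo = max(a[i], b[i])          # overlap interval along axis i
        hi = min(a[i + 3], b[i + 3])
        if hi <= lo:
            return 0.0                # boxes do not overlap along axis i
        v *= hi - lo
    return v

voxel = (0.0, 0.0, 0.0, 1.0, 1.0, 1.0)
dosel = (0.5, 0.5, 0.5, 1.5, 1.5, 1.5)
overlap = common_volume(voxel, dosel)   # 0.5 ** 3 = 0.125
dosel_mass = overlap * 1.04             # times voxel density (g/cm^3, assumed)
assert abs(overlap - 0.125) < 1e-12
```

Summing such incremental masses over all voxels that a dosel overlaps yields the dosel mass used in the dose calculation.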

Bergstrom, Paul M. (Livermore, CA); Daly, Thomas P. (Livermore, CA); Moses, Edward I. (Livermore, CA); Patterson, Jr., Ralph W. (Livermore, CA); Schach von Wittenau, Alexis E. (Livermore, CA); Garrett, Dewey N. (Livermore, CA); House, Ronald K. (Tracy, CA); Hartmann-Siantar, Christine L. (Livermore, CA); Cox, Lawrence J. (Los Alamos, NM); Fujino, Donald H. (San Leandro, CA)

2000-01-01

376

To provide asymmetric propagation of light, we propose a graded index photonic crystal (GRIN PC) based waveguide configuration that is formed by introducing line and point defects as well as intentional perturbations inside the structure. The designed system utilizes isotropic materials and is purely reciprocal, linear, and time-independent, since neither magneto-optical materials are used nor time-reversal symmetry is broken. The numerical results show that the proposed scheme based on spatial-inversion symmetry breaking has different forward (with a peak value of 49.8%) and backward transmissions (4.11% at most) as well as relatively small round-trip transmission (at most 7.11%) in a large operational bandwidth of 52.6 nm. The signal contrast ratio of the designed configuration is above 0.80 in the telecom wavelengths of 1523.5–1576.1 nm. An experimental measurement is also conducted in the microwave regime: a strong asymmetric propagation characteristic is observed within the frequency interval of 12.8–13.3 GHz. The numerical and experimental results confirm the asymmetric transmission behavior of the proposed GRIN PC waveguide.

Giden, I. H., E-mail: igiden@etu.edu.tr; Yilmaz, D.; Turduev, M.; Kurt, H. [Nanophotonics Research Laboratory, Department of Electrical and Electronics Engineering, TOBB University of Economics and Technology, Ankara 06560 (Turkey); Çolak, E. [Electrical and Electronics Engineering Department, Ankara University, Gölbasi, Ankara 06830 (Turkey); Ozbay, E. [Department of Electrical and Electronics Engineering, Bilkent University, Bilkent, Ankara 06800 (Turkey)

2014-01-20

377

Effects of breaking various symmetries on optical properties in ordered materials have been studied. Photonic crystals lacking space-inversion and time-reversal symmetries were shown to display nonreciprocal dispersion ...

Bita, Ion

2006-01-01

378

NASA Astrophysics Data System (ADS)

For large, highly detailed models, Monte Carlo simulations may spend a large fraction of their run-time performing simple point-location and distance-to-surface calculations for every geometric component in a model. In such cases, the use of bounding boxes (axis-aligned boxes that bound each geometric component) can improve particle tracking efficiency and decrease overall simulation run time significantly. In this paper we present a robust and efficient algorithm for generating the numerically-optimal bounding box (optimal to within a user-specified tolerance) for an arbitrary Constructive Solid Geometry (CSG) object defined by quadratic surfaces. The new algorithm uses an iterative refinement to tighten an initial, conservatively large, bounding box into the numerically-optimal bounding box. At each stage of refinement, the algorithm subdivides the candidate bounding box into smaller boxes, which are classified as inside, outside, or intersecting the boundary of the component. In cases where the algorithm cannot unambiguously classify a box, the box is refined further. This process continues until the refinement near the component's extremal points reaches the user-selected tolerance level. This refinement/classification approach is more efficient and practical than methods that rely on computing actual boundary representations or sampling to determine the extent of an arbitrary CSG component. A complete description of the bounding box algorithm is presented, along with a proof that the algorithm is guaranteed to converge to within the specified tolerance of the true optimal bounding box. The paper also provides a discussion of practical implementation details for the algorithm as well as numerical results highlighting performance and accuracy for several representative CSG components.
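The refine/classify loop can be illustrated in two dimensions with a unit disk standing in for the quadratic component: each box is classified exactly from its nearest and farthest corners to the origin, only boundary-intersecting boxes are subdivided (the extremal points lie on the boundary), and the candidate bounding box tightens toward the optimum. A simplified sketch, not the paper's implementation:

```python
# Component: implicit unit disk x^2 + y^2 <= 1.
def classify(x0, y0, x1, y1):
    # exact inside/outside/boundary test from box corner distances
    nx = 0.0 if x0 <= 0.0 <= x1 else min(abs(x0), abs(x1))
    ny = 0.0 if y0 <= 0.0 <= y1 else min(abs(y0), abs(y1))
    if max(x0 * x0, x1 * x1) + max(y0 * y0, y1 * y1) <= 1.0:
        return "inside"
    if nx * nx + ny * ny >= 1.0:
        return "outside"
    return "boundary"

def optimal_bbox(tol=1e-3):
    boxes = [(-2.0, -2.0, 2.0, 2.0)]   # conservatively large starting box
    size = 4.0
    while size > tol:
        size *= 0.5
        nxt = []
        for (x0, y0, x1, y1) in boxes:
            xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
            for b in ((x0, y0, xm, ym), (xm, y0, x1, ym),
                      (x0, ym, xm, y1), (xm, ym, x1, y1)):
                if classify(*b) == "boundary":   # extremes lie on the boundary
                    nxt.append(b)
        boxes = nxt
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))

bx = optimal_bbox()
# converges to the true optimal bounding box [-1, 1] x [-1, 1]
assert all(abs(abs(v) - 1.0) < 2e-3 for v in bx)
```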

Millman, David L.; Griesheimer, David P.; Nease, Brian R.; Snoeyink, Jack

2014-06-01

379

Two-photon transport in a waveguide coupled to a cavity in a two-level system

We study two-photon effects for a cavity quantum electrodynamics system where a waveguide is coupled to a cavity embedded in a two-level system. The wave function of two-photon scattering is exactly solved by using the Lehmann-Symanzik-Zimmermann reduction. Our results about quantum statistical properties of the outgoing photons explicitly exhibit the photon blockade effects in the strong-coupling regime. These results agree with the observations of recent experiments.

Shi, T.; Sun, C. P. [Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Fan Shanhui [Ginzton Laboratory, Stanford University, Stanford, California 94305 (United States)

2011-12-15

380

Differential pencil beam dose computation model for photons.

Differential pencil beam (DPB) is defined as the dose distribution relative to the position of the first collision, per unit collision density, for a monoenergetic pencil beam of photons in an infinite homogeneous medium of unit density. We have generated DPB dose distribution tables for a number of photon energies in water using the Monte Carlo method. The three-dimensional (3D) nature of the transport of photons and electrons is automatically incorporated in DPB dose distributions. Dose is computed by evaluating 3D integrals of DPB dose. The DPB dose computation model has been applied to calculate dose distributions for 60Co and accelerator beams. Calculations for the latter are performed using energy spectra generated with the Monte Carlo program. To predict dose distributions near the beam boundaries defined by the collimation system as well as blocks, we utilize the angular distribution of incident photons. Inhomogeneities are taken into account by attenuating the primary photon fluence exponentially utilizing the average total linear attenuation coefficient of intervening tissue, by multiplying photon fluence by the linear attenuation coefficient to yield the number of collisions in the scattering volume, and by scaling the path between the scattering volume element and the computation point by an effective density. PMID:3951411
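The superposition at the heart of the DPB model, dose as a 3D integral of the DPB kernel against the first-collision density, can be caricatured in one dimension. The kernel shape and attenuation coefficient below are illustrative assumptions, not the paper's Monte Carlo-generated tables:

```python
import numpy as np

# 1D caricature of DPB superposition: dose is the first-collision density
# convolved with a differential pencil beam kernel.
mu = 0.2                                   # linear attenuation (1/cm), assumed
dz = 0.1                                   # grid spacing (cm)
z = np.arange(0.0, 20.0, dz)
collisions = mu * np.exp(-mu * z) * dz     # primary collisions per depth bin
kernel = np.exp(-np.abs(np.arange(-5.0, 5.0 + dz, dz)))  # assumed DPB shape
kernel /= kernel.sum()                     # unit deposited energy per collision
dose = np.convolve(collisions, kernel, mode="full")
# the normalized kernel preserves the total energy imparted
assert np.isclose(dose.sum(), collisions.sum())
```

Inhomogeneity handling in the paper then amounts to rescaling the collision density and the kernel path lengths by effective densities, rather than changing this superposition structure.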

Mohan, R; Chui, C; Lidofsky, L

1986-01-01

381

Independent pixel and Monte Carlo estimates of stratocumulus albedo

NASA Technical Reports Server (NTRS)

Monte Carlo radiative transfer methods are employed here to estimate the plane-parallel albedo bias for marine stratocumulus clouds. This is the bias in estimates of the mesoscale-average albedo, which arises from the assumption that cloud liquid water is uniformly distributed. The authors compare such estimates with those based on a more realistic distribution generated from a fractal model of marine stratocumulus clouds belonging to the class of 'bounded cascade' models. In this model the cloud top and base are fixed, so that all variations in cloud shape are ignored. The model generates random variations in liquid water along a single horizontal direction, forming fractal cloud streets while conserving the total liquid water in the cloud field. The model reproduces the mean, variance, and skewness of the vertically integrated cloud liquid water, as well as its observed wavenumber spectrum, which is approximately a power law. The Monte Carlo method keeps track of the three-dimensional paths solar photons take through the cloud field, using a vectorized implementation of a direct technique. The simplifications in the cloud field studied here allow the computations to be accelerated. The Monte Carlo results are compared to those of the independent pixel approximation, which neglects net horizontal photon transport. Differences between the Monte Carlo and independent pixel estimates of the mesoscale-average albedo are on the order of 1% for conservative scattering, while the plane-parallel bias itself is an order of magnitude larger. As cloud absorption increases, the independent pixel approximation agrees even more closely with the Monte Carlo estimates. This result holds for a wide range of sun angles and aspect ratios. Thus, horizontal photon transport can be safely neglected in estimates of the area-average flux for such cloud models. 
This result relies on the rapid falloff of the wavenumber spectrum of stratocumulus, which ensures that the smaller-scale variability, where the radiative transfer is more three-dimensional, contributes less to the plane-parallel albedo bias than the larger scales, which are more variable. The lack of significant three-dimensional effects also relies on the assumption of a relatively simple geometry. Even with these assumptions, the independent pixel approximation is accurate only for fluxes averaged over large horizontal areas, many photon mean free paths in diameter, and not for local radiance values, which depend strongly on the interaction between neighboring cloud elements.
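The plane-parallel bias discussed above has a simple origin: albedo is a concave function of optical depth, so the albedo of the mean liquid-water path exceeds the mean of the pixel albedos. A minimal sketch of this effect (the two-stream albedo formula and the lognormal optical-depth field are illustrative assumptions, not the paper's bounded-cascade model or Monte Carlo code):

```python
import math, random

def albedo(tau, g=0.85):
    """Plane-parallel albedo for conservative scattering, in the standard
    two-stream similarity form R = t / (1 + t) with t = (1 - g) * tau / 2.
    (A modelling assumption here, not the paper's radiative-transfer code.)"""
    t = 0.5 * (1.0 - g) * tau
    return t / (1.0 + t)

rng = random.Random(0)
# Hypothetical horizontally variable cloud: lognormal optical depths stand in
# for the paper's bounded-cascade liquid-water field.
tau = [rng.lognormvariate(math.log(10.0), 0.8) for _ in range(100_000)]

mean_albedo = sum(albedo(t) for t in tau) / len(tau)   # average pixel albedo
pp_albedo = albedo(sum(tau) / len(tau))                # albedo of the mean tau
bias = pp_albedo - mean_albedo   # > 0 by Jensen: albedo is concave in tau
print(f"plane-parallel albedo bias = {bias:.3f}")
```

The independent pixel approximation keeps the per-pixel average (the first estimate) while discarding horizontal photon transport; the abstract's finding is that for stratocumulus-like spectra the residual error of that further step is an order of magnitude smaller than the plane-parallel bias itself.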

Cahalan, Robert F.; Ridgway, William; Wiscombe, Warren J.; Gollmer, Steven; HARSHVARDHAN

1994-01-01

382

NASA Astrophysics Data System (ADS)

Although ultraviolet photosensor devices offer many advantages when used in radiation detectors, there is often a significant reduction in pulse amplitude when the photosensor operates in a detector filled with a noble gas. This is due to the backscattering of electrons by the noble gas atoms. In this study, we investigate the problem of the backscattering of the photoelectrons emitted from a CsI photocathode into Xe, Ar, and Ne and the binary mixtures Xe-Ar, Ar-Ne and Xe-Ne using a detailed Monte Carlo simulation. Results for the photoelectron transmission efficiencies are presented and discussed for the case of a CsI photocathode irradiated with photons with energies in the range Eph = 6.8-9.8 eV (183-127 nm) and for applied reduced electric fields in the range E/N = 1-40 Td. The dependence on incident photon energy, gas composition and applied electric field is examined, and the results are explained in terms of electron scattering in the different noble gases.
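The backscatter loss described above can be illustrated with a toy one-dimensional Monte Carlo (an assumption-laden sketch: unit isotropic collision kicks plus a constant field drift, not the paper's cross-section-based simulation):

```python
import random

def transmission_fraction(drift, n_electrons=20_000, gap=10.0, seed=1):
    """Toy 1-D random walk for photoelectrons leaving a photocathode at z = 0:
    each collision randomises the direction (unit kick), while the reduced
    electric field adds a forward drift per step. An electron that returns
    to z <= 0 counts as backscattered. Illustrative only -- the paper's
    detailed Monte Carlo uses real elastic/inelastic cross sections."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_electrons):
        z = 0.01                                   # just off the cathode
        while 0.0 < z < gap:
            z += drift + rng.uniform(-1.0, 1.0)    # isotropic kick + drift
        if z >= gap:
            transmitted += 1
    return transmitted / n_electrons

low = transmission_fraction(drift=0.05)    # weak reduced field
high = transmission_fraction(drift=0.50)   # strong reduced field
print(low, high)
```

The sketch reproduces the qualitative field dependence the abstract reports: a stronger reduced field pulls more photoelectrons away from the cathode before a backscattering collision can return them.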

Dias, T. H. V. T.; Rachinhas, P. J. B. M.; Lopes, J. A. M.; Santos, F. P.; Távora, L. M. N.; Conde, C. A. N.; Stauffer, A. D.

2004-02-01

383

FERMI@Elettra comprises two free electron lasers (FELs) that will generate short pulses (τ ≈ 25 to 200 fs) of highly coherent radiation in the XUV and soft X-ray region. The use of external laser seeding together with a harmonic upshift scheme to obtain short wavelengths will give FERMI@Elettra the capability to produce high quality, longitudinally coherent photon pulses. This capability, together with the possibilities of temporal synchronization to external lasers and control of the output photon polarization, will open new experimental opportunities not possible with currently available FELs. Here we report on the predicted radiation coherence properties and important configuration details of the photon beam transport system. We discuss the several experimental stations that will be available during initial operations in 2011, and we give a scientific perspective on possible experiments that can exploit the critical parameters of this new light source.

Allaria, Enrico; Callegari, Carlo; Cocco, Daniele; Fawley, William M.; Kiskinova, Maya; Masciovecchio, Claudio; Parmigiani, Fulvio

2010-04-05

384

One consequence of heroin dependency is heavy expenditure on drugs. This economic burden may weigh gravely on heroin users and may lead to criminal behavior, which imposes substantial costs on society. The neuropsychological mechanism related to heroin purchase remains unclear. Based on recent findings and the established dopamine hypothesis of addiction, we speculated that expenditure on heroin and central dopamine activity may be associated. A total of 21 heroin users were enrolled in this study. The annual expenditure on heroin was assessed, and the availability of the dopamine transporter (DAT) was assessed by single-photon emission computed tomography (SPECT) using [(99m)TC]TRODAT-1. Parametric and nonparametric correlation analyses indicated that annual expenditure on heroin was significantly and negatively correlated with the availability of striatal DAT. After adjustment for potential confounders, the predictive power of DAT availability was significant. Striatal dopamine function may be associated with opioid purchasing behavior among heroin users, and the cycle of spiraling dysfunction in the dopamine reward system could play a role in this association. PMID:25659472
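The reported negative association rests on rank correlation. A minimal pure-Python Spearman's rho, computed on made-up expenditure and DAT-availability numbers (hypothetical data, not the study's measurements):

```python
def rank(xs):
    """Average ranks, 1-based, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over the tie group
        avg = (i + j) / 2 + 1           # mean rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical data: annual heroin expenditure vs. striatal DAT availability
expenditure = [12, 35, 8, 50, 22, 41, 15]
dat_avail = [1.9, 1.2, 2.1, 0.9, 1.5, 1.1, 1.8]
print(spearman(expenditure, dat_avail))   # strongly negative
```

A nonparametric measure like this is robust to the skewed expenditure distributions typical of drug-use data, which is presumably why the study reports both parametric and nonparametric analyses.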

Lin, Shih-Hsien; Chen, Kao Chin; Lee, Sheng-Yu; Chiu, Nan Tsing; Lee, I Hui; Chen, Po See; Yeh, Tzung Lieh; Lu, Ru-Band; Chen, Chia-Chieh; Liao, Mei-Hsiu; Yang, Yen Kuang

2015-03-30

385

NASA Astrophysics Data System (ADS)

Retinoblastoma is the most common eye tumour in childhood. According to the available long-term data, the best outcome regarding tumour control and visual function has been reached by external beam radiotherapy. The benefits of the treatment are, however, jeopardized by a high incidence of radiation-induced secondary malignancies and the fact that irradiated bones grow asymmetrically. In order to better exploit the advantages of external beam radiotherapy, it is necessary to improve current techniques by reducing the irradiated volume and minimizing the dose to the facial bones. To this end, dose measurements and simulated data in a water phantom are essential. A Varian Clinac 2100 C/D operating at 6 MV is used in conjunction with a dedicated collimator for the retinoblastoma treatment. This collimator conforms a ‘D’-shaped off-axis field whose irradiated area can be either 5.2 or 3.1 cm². Depth dose distributions and lateral profiles were experimentally measured. Experimental results were compared with Monte Carlo simulations run with the penelope code and with calculations performed with the analytical anisotropic algorithm implemented in the Eclipse treatment planning system using the gamma test. penelope simulations agree reasonably well with the experimental data with discrepancies in the dose profiles less than 3 mm of distance to agreement and 3% of dose. Discrepancies between the results found with the analytical anisotropic algorithm and the experimental data reach 3 mm and 6%. Although the discrepancies between the results obtained with the analytical anisotropic algorithm and the experimental data are notable, it is possible to consider this algorithm for routine treatment planning of retinoblastoma patients, provided the limitations of the algorithm are known and taken into account by the medical physicist and the clinician. Monte Carlo simulation is essential for knowing these limitations. 
Monte Carlo simulation is required for optimizing the treatment technique and the dedicated collimator.
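The gamma test used for these comparisons combines a dose-difference and a distance-to-agreement criterion into one index. A minimal 1-D version (simplified sketch with toy exponential depth-dose curves, not the measured Clinac profiles; clinical implementations interpolate the evaluated dose and work in 2-D/3-D):

```python
import math

def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose,
                dta_mm=3.0, dd_frac=0.03):
    """1-D global gamma test (3%/3 mm by default): for each reference point,
    the minimum combined distance/dose disagreement over the evaluated
    distribution. A point passes when gamma <= 1."""
    d_max = max(ref_dose)                      # global dose normalisation
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        g = min(math.hypot((rp - ep) / dta_mm,
                           (rd - ed) / (dd_frac * d_max))
                for ep, ed in zip(eval_pos, eval_dose))
        gammas.append(g)
    return gammas

# Toy depth-dose profiles (hypothetical): measurement vs. slightly shifted MC
pos = [float(i) for i in range(0, 40, 2)]                  # depth in mm
meas = [100.0 * math.exp(-0.03 * z) for z in pos]
calc = [100.0 * math.exp(-0.03 * (z - 0.5)) for z in pos]  # 0.5 mm shift

g = gamma_index(pos, meas, pos, calc)
pass_rate = sum(v <= 1.0 for v in g) / len(g)
print(f"gamma pass rate: {pass_rate:.0%}")
```

A 0.5 mm spatial shift stays well inside the 3%/3 mm tolerance, so every point passes; the 3 mm / 6% discrepancies reported for the analytical anisotropic algorithm sit at or beyond that tolerance, which is why Monte Carlo is needed to map where the algorithm fails.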

Brualla, L.; Mayorga, P. A.; Flühs, A.; Lallena, A. M.; Sempau, J.; Sauerwein, W.

2012-11-01

387

Ten previously untreated adults with attention deficit hyperactivity disorder (ADHD) were investigated before and after 4 weeks of treatment with a dose of 3×5 mg methylphenidate/d by single photon emission computed tomography (SPECT) with [Tc–99m]TRODAT-1, the first Tc-99m labelled SPECT ligand specifically binding to the dopamine transporter (DAT). For semiquantitative evaluation of the DAT, specific binding ([STR–BKG]/BKG) was calculated in

Klaus-Henning Krause; Stefan H Dresel; Johanna Krause; Hank F Kung; Klaus Tatsch

2000-01-01

388

NASA Astrophysics Data System (ADS)

The scaling Monte Carlo method and a Gaussian beam model are applied to simulate the transport of a light beam with arbitrary waist radius. Monte Carlo simulation is usually performed for pencil or cone beams, in which every photon starts from the same initial state. In practical applications, however, the incident light is focused onto the sample, forming an approximately Gaussian distribution on the surface, and as the focus position within the sample changes, the initial states of the photons are no longer identical. Using the hyperboloid method, the initial angle and coordinates are generated statistically according to the Gaussian waist size and focus depth. Scaling calculations are performed with baseline data from a standard Monte Carlo simulation. The scaling method combined with the Gaussian model was tested and proved effective over a range of scattering coefficients from 20% to 180% of the value used in the baseline simulation; in most cases, the percentage error was less than 10%. Increasing the focus depth results in larger errors in the scaled radial reflectance in the region close to the optical axis. Beyond evaluating the accuracy of the scaling Monte Carlo method, this study has implications for inverse Monte Carlo with arbitrary optical-system parameters.
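The non-identical initial states described above come from the beam geometry: each photon enters at a Gaussian-distributed surface point and converges toward the focus. A simplified launch-sampling sketch (aiming every photon at an on-axis focal point; the function and its parameters are illustrative, not the paper's hyperboloid method):

```python
import math, random

def launch_photon(rng, waist, focus_depth):
    """Sample one photon's entry point on the sample surface (z = 0) from a
    2-D Gaussian of 1/e^2 radius `waist` (so sigma = waist / 2), and point
    it at the focal spot (0, 0, focus_depth) on the optical axis."""
    sigma = waist / 2.0
    x = rng.gauss(0.0, sigma)
    y = rng.gauss(0.0, sigma)
    # Direction cosines toward the focus:
    norm = math.sqrt(x * x + y * y + focus_depth ** 2)
    ux, uy, uz = -x / norm, -y / norm, focus_depth / norm
    return (x, y, 0.0), (ux, uy, uz)

rng = random.Random(0)
starts = [launch_photon(rng, waist=1.0, focus_depth=5.0) for _ in range(50_000)]
mean_x = sum(p[0][0] for p in starts) / len(starts)    # beam centred on axis
mean_uz = sum(p[1][2] for p in starts) / len(starts)   # mostly forward-going
print(mean_x, mean_uz)
```

Deeper focus positions make the launch directions less paraxial, which is consistent with the abstract's observation that larger focus depths degrade the scaled radial reflectance near the optical axis.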

Lin, Lin; Zhang, Mei

2015-02-01

389

The paper demonstrates the use of ground-penetrating radar (GPR) tomographic data for estimating extractable Fe(II) and Fe(III) concentrations using a Markov chain Monte Carlo (MCMC) approach, based on data collected at the DOE South Oyster Bacterial Transport Site in Virginia.
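The MCMC estimation step can be sketched with a minimal Metropolis sampler; everything below (the Gaussian data model, the concentration value, the flat non-negative prior) is hypothetical and stands in for the paper's actual GPR-to-geochemistry inversion:

```python
import math, random

def metropolis(data, n_iter=20_000, sigma_noise=0.5, step=0.3, seed=0):
    """Minimal Metropolis sampler for a single concentration c, given noisy
    observations data[i] ~ Normal(c, sigma_noise) and a flat prior on c >= 0.
    Returns the second half of the chain (burn-in discarded)."""
    rng = random.Random(seed)

    def log_like(c):
        return -sum((d - c) ** 2 for d in data) / (2 * sigma_noise ** 2)

    c = sum(data) / len(data)              # start at the sample mean
    chain = []
    for _ in range(n_iter):
        prop = c + rng.gauss(0.0, step)    # symmetric random-walk proposal
        if prop >= 0:
            delta = log_like(prop) - log_like(c)
            if rng.random() < math.exp(min(0.0, delta)):
                c = prop                   # accept
        chain.append(c)
    return chain[n_iter // 2:]

rng = random.Random(42)
true_c = 2.0                               # hypothetical Fe(II) concentration
obs = [true_c + rng.gauss(0.0, 0.5) for _ in range(30)]
chain = metropolis(obs)
post_mean = sum(chain) / len(chain)
print(f"posterior mean ~ {post_mean:.2f}")
```

The appeal of MCMC for this kind of inverse problem is that it yields a full posterior distribution over concentrations, not just a point estimate, so the uncertainty in the geophysical-to-geochemical mapping is carried through to the result.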