Sample records for the Monte Carlo (MC) method

  1. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  2. Multilevel Monte Carlo for two phase flow and Buckley–Leverett transport in random heterogeneous porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Müller, Florian, E-mail: florian.mueller@sam.math.ethz.ch; Jenny, Patrick, E-mail: jenny@ifd.mavt.ethz.ch; Meyer, Daniel W., E-mail: meyerda@ethz.ch

    2013-10-01

    Monte Carlo (MC) is a well-known method for quantifying uncertainty arising, for example, in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications, including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
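
    The multilevel idea above can be sketched compactly: combine many cheap coarse-level samples with a few fine-level correction samples that share the same random input. A minimal Python illustration, where `simulate` is a made-up stand-in for a level-dependent flow solve (the toy model, the sample counts and the seeding scheme are assumptions, not the paper's setup):

```python
import random

def simulate(level, seed):
    """Stand-in for a level-dependent solver evaluation. In a real MLMC
    study this would be e.g. a streamline two-phase flow solve on grid
    level `level` for the permeability field drawn from `seed`; here it
    is a toy model whose bias shrinks as the level gets finer."""
    rng = random.Random(seed)
    return 1.0 + 2.0 ** (-level) * rng.gauss(0.5, 1.0)

def mlmc_estimate(samples_per_level):
    """Telescoping MLMC estimator:
    E[Q_L] = E[Q_0] + sum_{l>=1} E[Q_l - Q_{l-1}].
    Each correction term uses the SAME random input on both levels,
    so its variance (and hence the needed sample count) is small."""
    estimate = 0.0
    for level, n in enumerate(samples_per_level):
        acc = 0.0
        for i in range(n):
            seed = 1_000_003 * level + i          # shared randomness
            fine = simulate(level, seed)
            coarse = simulate(level - 1, seed) if level > 0 else 0.0
            acc += fine - coarse
        estimate += acc / n
    return estimate

# Many cheap coarse samples, few expensive fine corrections.
print(mlmc_estimate([4096, 512, 64]))
```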

  3. McStas 1.1: a tool for building neutron Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Lefmann, K.; Nielsen, K.; Tennant, A.; Lake, B.

    2000-03-01

    McStas is a project to develop general tools for the creation of simulations of neutron scattering experiments. In this paper, we briefly introduce McStas and describe a particular application of the program: the Monte Carlo calculation of the resolution function of a standard triple-axis neutron scattering instrument. The method compares well with the analytical calculations of Popovici.

  4. Application of Enhanced Sampling Monte Carlo Methods for High-Resolution Protein-Protein Docking in Rosetta

    PubMed Central

    Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin

    2015-01-01

    The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation, complementing experiment. Monte Carlo (MC)-based approaches are frequently applied to sample putative interaction geometries of proteins, including possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near-native docking geometries compared to conventional MC searches. PMID:26053419

  5. MC3: Multi-core Markov-chain Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan

    2016-10-01

    MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.
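
    Two of the ingredients named above, MCMC posterior sampling and Gelman-Rubin convergence testing, can be illustrated generically. This is not MC3's actual interface; the toy log-posterior and the plain random-walk Metropolis sampler are assumptions made for the sketch:

```python
import math, random

def log_post(x):
    # Toy log-posterior (standard normal); a real MC3 run would use the
    # user's model likelihood and the chosen prior instead.
    return -0.5 * x * x

def metropolis_chain(n, step=1.0, seed=0):
    """Plain random-walk Metropolis: accept with prob min(1, p'/p)."""
    rng = random.Random(seed)
    x, lp = 0.0, log_post(0.0)
    chain = []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)            # symmetric proposal
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop                  # Metropolis accept
        chain.append(x)
    return chain

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) across parallel chains;
    values near 1 indicate the chains have mixed."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    b = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)
    w = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m
    return math.sqrt(((n - 1) / n * w + b / n) / w)

chains = [metropolis_chain(5000, seed=s) for s in range(4)]
print("R-hat:", gelman_rubin(chains))
```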

  6. Methods for Monte Carlo simulations of biomacromolecules

    PubMed Central

    Vitalis, Andreas; Pappu, Rohit V.

    2010-01-01

    The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies. PMID:20428473

  7. PRELIMINARY COUPLING OF THE MONTE CARLO CODE OPENMC AND THE MULTIPHYSICS OBJECT-ORIENTED SIMULATION ENVIRONMENT (MOOSE) FOR ANALYZING DOPPLER FEEDBACK IN MONTE CARLO SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Ellis; Derek Gaston; Benoit Forget

    In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17×17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that for a simplified model the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.

  8. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin, E-mail: nzcho@kaist.ac.kr

    2015-12-31

    The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used and a linear approximation of fission source distributions during a macro-time step is introduced to provide delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.

  9. ICF target 2D modeling using Monte Carlo SNB electron thermal transport in DRACO

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2016-10-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup diffusion electron thermal transport method is adapted into a Monte Carlo (MC) transport method to better model angular and long mean free path non-local effects. The MC model was first implemented in the 1D LILAC code to verify consistency with the iSNB model. Implementation of the MC SNB model in the 2D DRACO code enables higher fidelity non-local thermal transport modeling in 2D implosions such as polar drive experiments on NIF. The final step is to optimize the MC model by hybridizing it with a MC version of the iSNB diffusion method. The hybrid method will combine the efficiency of a diffusion method in intermediate mean free path regions with the accuracy of a transport method in long mean free path regions, allowing for improved computational efficiency while maintaining accuracy. Work to date on the method will be presented. This work was supported by Sandia National Laboratories and the Univ. of Rochester Laboratory for Laser Energetics.

  10. Thermodynamics and simulation of hard-sphere fluid and solid: Kinetic Monte Carlo method versus standard Metropolis scheme

    NASA Astrophysics Data System (ADS)

    Ustinov, E. A.

    2017-01-01

    The paper aims at a comparison of techniques based on the kinetic Monte Carlo (kMC) and the conventional Metropolis Monte Carlo (MC) methods as applied to the hard-sphere (HS) fluid and solid. In the case of the kMC, an alternative representation of the chemical potential is explored [E. A. Ustinov and D. D. Do, J. Colloid Interface Sci. 366, 216 (2012)], which does not require any external procedure like the Widom test particle insertion method. A direct evaluation of the chemical potential of the fluid and solid without thermodynamic integration is achieved by molecular simulation in an elongated box with an external potential imposed on the system in order to reduce the particle density in the vicinity of the box ends. The existence of rarefied zones allows one to determine the chemical potential of the crystalline phase and substantially increases its accuracy for the disordered dense phase in the central zone of the simulation box. This method is applicable to both the Metropolis MC and the kMC, but in the latter case, the chemical potential is determined with higher accuracy under the same conditions and number of MC steps. Thermodynamic functions of the disordered fluid and crystalline face-centered cubic (FCC) phase for the hard-sphere system have been evaluated with the kinetic MC and the standard MC coupled with the Widom procedure over a wide range of density. The melting transition parameters have been determined by the point of intersection of the pressure-chemical potential curves for the disordered HS fluid and FCC crystal using the Gibbs-Duhem equation as a constraint. A detailed thermodynamic analysis of the hard-sphere fluid has provided a rigorous verification of the approach, which can be extended to more complex systems.
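
    For hard spheres, the standard Metropolis scheme mentioned above takes a particularly simple form: the pair potential is zero or infinite, so the Boltzmann acceptance test reduces to an overlap check. A minimal sketch (the box size, displacement step and particle count are illustrative; the paper's kMC variant, which works with rates, is not shown):

```python
import random

def try_move(positions, i, delta, diameter, box, rng):
    """One Metropolis trial move for hard spheres. The potential is 0
    (no overlap) or infinite (overlap), so exp(-beta*dU) is 1 or 0 and
    the acceptance test reduces to an overlap check."""
    new = tuple((c + rng.uniform(-delta, delta)) % box
                for c in positions[i])
    for j, p in enumerate(positions):
        if j == i:
            continue
        # squared minimum-image distance in the periodic box
        d2 = sum(min(abs(a - b), box - abs(a - b)) ** 2
                 for a, b in zip(new, p))
        if d2 < diameter ** 2:
            return False          # overlap: reject, keep old position
    positions[i] = new            # no overlap: always accept
    return True

rng = random.Random(1)
box, diameter = 10.0, 1.0
# Start from an overlap-free configuration on a loose diagonal.
positions = [(2.0 * i, 2.0 * i, 2.0 * i) for i in range(4)]
accepted = sum(try_move(positions, rng.randrange(4), 0.3, diameter, box, rng)
               for _ in range(1000))
print("acceptance ratio:", accepted / 1000)
```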

  11. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures.

    PubMed

    Souris, Kevin; Lee, John Aldo; Sterpin, Edmond

    2016-04-01

    Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo tool, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with GATE/GEANT4 for various geometries show deviations within 2%–1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used for in vivo range verification.

  12. Monte Carlo modeling of a conventional X-ray computed tomography scanner for gel dosimetry purposes.

    PubMed

    Hayati, Homa; Mesbahi, Asghar; Nazarpoor, Mahmood

    2016-01-01

    Our purpose in the current study was to model an X-ray CT scanner with the Monte Carlo (MC) method for gel dosimetry. In this study, a conventional CT scanner with one array detector was modeled with use of the MCNPX MC code. The MC calculated photon fluence in detector arrays was used for image reconstruction of a simple water phantom as well as polyacrylamide polymer gel (PAG) used for radiation therapy. Image reconstruction was performed with the filtered back-projection method with a Hann filter and the Spline interpolation method. Using MC results, we obtained the dose-response curve for images of irradiated gel at different absorbed doses. A spatial resolution of about 2 mm was found for our simulated MC model. The MC-based CT images of the PAG gel showed a reliable increase in the CT number with increasing absorbed dose for the studied gel. Also, our results showed that the current MC model of a CT scanner can be used for further studies on the parameters that influence the usability and reliability of results, such as the photon energy spectra and exposure techniques in X-ray CT gel dosimetry.

  13. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo; Sterpin, Edmond

    2016-04-15

    Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo tool, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%–1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used for in vivo range verification.

  14. Calibration of the Top-Quark Monte Carlo Mass.

    PubMed

    Kieseler, Jan; Lipka, Katerina; Moch, Sven-Olaf

    2016-04-22

    We present a method to establish, experimentally, the relation between the top-quark mass m_{t}^{MC} as implemented in Monte Carlo generators and the Lagrangian mass parameter m_{t} in a theoretically well-defined renormalization scheme. We propose a simultaneous fit of m_{t}^{MC} and an observable sensitive to m_{t}, which does not rely on any prior assumptions about the relation between m_{t} and m_{t}^{MC}. The measured observable is independent of m_{t}^{MC} and can be used subsequently for a determination of m_{t}. The analysis strategy is illustrated with examples for the extraction of m_{t} from inclusive and differential cross sections for hadroproduction of top quarks.

  15. A comparative study of history-based versus vectorized Monte Carlo methods in the GPU/CUDA environment for a simple neutron eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Liu, Tianyu; Du, Xining; Ji, Wei; Xu, X. George; Brown, Forrest B.

    2014-06-01

    For nuclear reactor analysis such as the neutron eigenvalue calculations, the time-consuming Monte Carlo (MC) simulations can be accelerated by using graphics processing units (GPUs). However, traditional MC methods are often history-based, and their performance on GPUs is affected significantly by the thread divergence problem. In this paper we describe the development of a newly designed event-based vectorized MC algorithm for solving the neutron eigenvalue problem. The code was implemented using NVIDIA's Compute Unified Device Architecture (CUDA), and tested on an NVIDIA Tesla M2090 GPU card. We found that although the vectorized MC algorithm greatly reduces the occurrence of thread divergence, thus enhancing the warp execution efficiency, the overall simulation speed is roughly ten times slower than the history-based MC code on GPUs. Profiling results suggest that the slow speed is probably due to the memory access latency caused by the large amount of global memory transactions. Possible solutions to improve the code efficiency are discussed.
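
    The history-based versus event-based contrast described above is about control flow. A plain-Python toy (no GPU code; the `Particle` class and the event probabilities are invented stand-ins) showing how the event-based layout regroups work into uniform batches, which is what keeps warps coherent on a GPU:

```python
import random

class Particle:
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.alive = True

    def next_event(self):
        # Toy physics: 70% scatter, 30% absorb (illustrative numbers).
        return "scatter" if self.rng.random() < 0.7 else "absorb"

def history_based(particles):
    """One worker per particle follows its whole random walk. On a GPU,
    neighbouring threads diverge because every history is different."""
    for p in particles:
        while p.alive:
            if p.next_event() == "absorb":
                p.alive = False

def event_based(particles):
    """Regroup particles by their next event and process each group as a
    uniform batch; this vectorized layout is what reduces warp divergence
    in the event-based algorithm."""
    alive = list(particles)
    while alive:
        events = [(p, p.next_event()) for p in alive]
        for p, e in events:          # one uniform pass per event type
            if e == "absorb":
                p.alive = False
        alive = [p for p, e in events if p.alive]

history_based([Particle(s) for s in range(1000)])
event_based([Particle(s) for s in range(1000, 2000)])
```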

  16. NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Sohlberg, A.; Watabe, H.; Iida, H.

    2008-07-01

    Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate the MC-based scatter compensation using coarse grid and intermittent scatter modelling. The acceleration methods were compared to an un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.

  17. Development of accelerated Raman and fluorescent Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Dumont, Alexander P.; Patil, Chetan

    2018-02-01

    Monte Carlo (MC) modeling of photon propagation in turbid media is an essential tool for understanding optical interactions between light and tissue. Insight gathered from outputs of MC models assists in mapping between detected optical signals and bulk tissue optical properties and, as such, has proven useful for inverse calculations of tissue composition and optimization of the design of optical probes. MC models of Raman scattering have previously been implemented without consideration of background autofluorescence, despite its presence in raw measurements. Modeling both Raman and fluorescence profiles at high spectral resolution requires a significant increase in computation, but is more appropriate for investigating issues such as detection limits. We present a new Raman-fluorescence MC model, developed atop an existing GPU-parallelized MC framework, that can run more than 300× faster than CPU methods. The robust acceleration allows for the efficient production of both Raman and fluorescence outputs from the MC model. In addition, this model can handle arbitrary sample morphologies and excitation and collection geometries to more appropriately mimic experimental settings. We will present the model framework and initial results.

  18. Integration of OpenMC methods into MAMMOTH and Serpent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerby, Leslie; DeHart, Mark; Tumulak, Aaron

    OpenMC, a Monte Carlo particle transport simulation code focused on neutron criticality calculations, contains several methods we wish to emulate in MAMMOTH and Serpent. First, research coupling OpenMC and the Multiphysics Object-Oriented Simulation Environment (MOOSE) has shown promising results. Second, the utilization of Functional Expansion Tallies (FETs) allows for a more efficient passing of multiphysics data between OpenMC and MOOSE. Both of these capabilities have been preliminarily implemented into Serpent. Results are discussed and future work recommended.

  19. Top Quark Mass Calibration for Monte Carlo Event Generators.

    PubMed

    Butenschoen, Mathias; Dehnadi, Bahman; Hoang, André H; Mateu, Vicent; Preisser, Moritz; Stewart, Iain W

    2016-12-02

    The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator m_{t}^{MC}. Because of hadronization and parton-shower dynamics, relating m_{t}^{MC} to a field theory mass is difficult. We present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting e^{+}e^{-} 2-jettiness calculations at next-to-leading-logarithmic and next-to-next-to-leading-logarithmic order to PYTHIA 8.205, m_{t}^{MC} differs from the pole mass by 900 and 600 MeV, respectively, and agrees with the MSR mass within uncertainties, m_{t}^{MC}≃m_{t,1 GeV}^{MSR}.

  20. Monte Carlo simulations of neutron-scattering instruments using McStas

    NASA Astrophysics Data System (ADS)

    Nielsen, K.; Lefmann, K.

    2000-06-01

    Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in the design of instruments is defeating purely analytical methods. The program McStas, being developed at Risø National Laboratory, includes an extension language that makes it easy to adapt it to the particular requirements of individual instruments, and thus provides a powerful and flexible tool for constructing such simulations. McStas has been successfully applied in such areas as neutron guide design, flux optimization, non-Gaussian resolution functions of triple-axis spectrometers, and time-focusing in time-of-flight instruments.

  1. Simulation of gas adsorption on a surface and in slit pores with grand canonical and canonical kinetic Monte Carlo methods.

    PubMed

    Ustinov, E A; Do, D D

    2012-08-21

    We present for the first time in the literature a new scheme of the kinetic Monte Carlo method applied to a grand canonical ensemble, which we call hereafter GC-kMC. It was shown recently that the kinetic Monte Carlo (kMC) scheme is a very effective tool for the analysis of equilibrium systems. It had been applied in a canonical ensemble to describe vapor-liquid equilibrium of argon over a wide range of temperatures, and gas adsorption on a graphite open surface and in graphitic slit pores. However, in spite of the conformity of canonical and grand canonical ensembles, the latter is more relevant for the correct description of open systems; for example, the hysteresis loop observed in adsorption of gases in pores under sub-critical conditions can only be described with a grand canonical ensemble. Therefore, the present paper is aimed at an extension of the kMC to open systems. The developed GC-kMC was proved to be consistent with the results obtained with the canonical kMC (C-kMC) for argon adsorption on a graphite surface at 77 K and in graphitic slit pores at 87.3 K. We showed that in slit micropores the hexagonal packing in the layers adjacent to the pore walls is observed at high loadings, even at temperatures above the triple point of the bulk phase. The potential and applicability of the GC-kMC are further shown with the correct description of the heat of adsorption and the pressure tensor of the adsorbed phase.
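
    For context, the acceptance rules of a standard grand canonical MC step, which the GC-kMC scheme replaces with rate-based moves, look as follows. A minimal sketch in the ideal-gas limit with the geometry collapsed to one dimension (the activity z, the volume and the interaction stub are illustrative assumptions, not the paper's model):

```python
import math, random

def delta_u(positions, x):
    # Interaction energy stub: ideal-gas limit (no interactions). A real
    # study would sum pair energies of the trial particle at `x`.
    return 0.0

def gcmc_step(positions, z, volume, beta, rng):
    """One textbook grand-canonical MC step. With activity
    z = exp(beta*mu)/Lambda^3, the acceptance probabilities are
    insert: min(1, z*V/(N+1) * exp(-beta*dU)) and
    delete: min(1, N/(z*V) * exp(+beta*u_removed))."""
    n = len(positions)
    if rng.random() < 0.5:                      # attempt insertion
        x = rng.uniform(0.0, volume)            # 1D "volume" for brevity
        if rng.random() < min(1.0, z * volume / (n + 1)
                              * math.exp(-beta * delta_u(positions, x))):
            positions.append(x)
    elif n > 0:                                 # attempt deletion
        i = rng.randrange(n)
        if rng.random() < min(1.0, n / (z * volume)
                              * math.exp(beta * delta_u(positions, positions[i]))):
            positions.pop(i)

rng = random.Random(0)
positions, z, volume = [], 0.05, 100.0
for _ in range(20000):
    gcmc_step(positions, z, volume, 1.0, rng)
print("ideal-gas expectation <N> =", z * volume, "; sampled N =", len(positions))
```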

  2. Implementation and verification of nuclear interactions in a Monte-Carlo code for the Procom-ProGam proton therapy planning system

    NASA Astrophysics Data System (ADS)

    Kostyuchenko, V. I.; Makarova, A. S.; Ryazantsev, O. B.; Samarin, S. I.; Uglov, A. S.

    2014-06-01

    A great breakthrough in proton therapy has happened in the new century: several tens of dedicated centers now operate throughout the world, and their number increases every year. An important component of proton therapy is a treatment planning system. To make calculations faster, these systems usually use analytical methods whose reliability and accuracy do not allow the advantages of this method of treatment to be exploited to the full extent. Predictions by the Monte Carlo (MC) method are a "gold" standard for the verification of calculations with these systems. At the Institute of Experimental and Theoretical Physics (ITEP), which is one of the oldest proton therapy centers in the world, an MC code is an integral part of its treatment planning system. This code, which is called IThMC, was developed by scientists from RFNC-VNIITF (Snezhinsk) under ISTC Project 3563.

  3. Top Quark Mass Calibration for Monte Carlo Event Generators

    DOE PAGES

    Butenschoen, Mathias; Dehnadi, Bahman; Hoang, André H.; ...

    2016-11-29

    The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator m_{t}^{MC}. Because of hadronization and parton-shower dynamics, relating m_{t}^{MC} to a field theory mass is difficult. Here, we present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting e^{+}e^{-} 2-jettiness calculations at next-to-leading-logarithmic and next-to-next-to-leading-logarithmic order to PYTHIA 8.205, m_{t}^{MC} differs from the pole mass by 900 and 600 MeV, respectively, and agrees with the MSR mass within uncertainties, m_{t}^{MC} ≃ m_{t,1 GeV}^{MSR}.

  4. Safe bunker designing for the 18 MV Varian 2100 Clinac: a comparison between Monte Carlo simulation based upon data and new protocol recommendations.

    PubMed

    Beigi, Manije; Afarande, Fatemeh; Ghiasi, Hosein

    2016-01-01

    The aim of this study was to compare two bunkers, one designed using only protocol recommendations and one using Monte Carlo (MC)-based data, derived for an 18 MV Varian 2100 Clinac accelerator. High energy radiation therapy is associated with fast and thermal photoneutrons. Adequate shielding against the contaminant neutrons has been recommended by the new IAEA and NCRP protocols. The latest protocols released by the IAEA (safety report No. 47) and NCRP report No. 151 were used for the bunker design calculations; MC-based data were also derived. Two bunkers, one designed using the protocols and one using the MC data, were designed and discussed. Regarding the door's thickness, the door designed by the MC simulation and the Wu-McGinley analytical method were closer in both BPE and lead thickness. In the case of the primary and secondary barriers, MC simulation resulted in 440.11 mm for the ordinary concrete; a total concrete thickness of 1709 mm was required. Calculating the same parameters with the recommended analytical methods resulted in 1762 mm for the required thickness, using the 445 mm TVL recommended for the concrete. Additionally, for the secondary barrier a thickness of 752.05 mm was obtained. Our results showed that the MC simulation and the protocol recommendations are in good agreement in the contamination dose calculation. The differences between the analytical and MC simulation methods revealed that the application of only one method for bunker design may lead to underestimation or overestimation in dose and shielding calculations.

  5. MO-E-18C-02: Hands-On Monte Carlo Project Assignment as a Method to Teach Radiation Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pater, P; Vallieres, M; Seuntjens, J

    2014-06-15

    Purpose: To present a hands-on project on Monte Carlo methods (MC) recently added to the curriculum and to discuss the students' appreciation. Methods: Since 2012, a 1.5 hour lecture dedicated to MC fundamentals follows the detailed presentation of photon and electron interactions. Students also program all sampling steps (interaction length and type, scattering angle, energy deposit) of a MC photon transport code. A handout structured in a step-by-step fashion guides students in conducting consistency checks. For extra points, students can code a fully working MC simulation that simulates a dose distribution for 50 keV photons. A kerma approximation to dose deposition is assumed. A survey was conducted to which 10 out of the 14 attending students responded. It compared MC knowledge prior to and after the project, questioned the usefulness of radiation physics teaching through MC and surveyed possible project improvements. Results: According to the survey, 76% of students had no or a basic knowledge of MC methods before the class and 65% estimate to have a good to very good understanding of MC methods after attending the class. 80% of students feel that the MC project helped them significantly to understand simulations of dose distributions. On average, students dedicated 12.5 hours to the project and appreciated the balance between hand-holding and questions/implications. Conclusion: A lecture on MC methods with a hands-on MC programming project requiring about 14 hours has been part of the graduate study curriculum since 2012. MC methods produce "gold standard" dose distributions and slowly enter routine clinical work, and a fundamental understanding of MC methods should be a requirement for future students. Overall, the lecture and project helped students relate cross sections to dose depositions and presented the numerical sampling methods behind the simulation of these dose distributions. Research funding from governments of Canada and Quebec. PP acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290)
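
    The sampling steps the handout asks students to program can be sketched in a few lines. A toy photon random walk in an infinite medium (the interaction coefficients are invented, the scattering angle is sampled isotropically instead of from the Klein-Nishina distribution, and energies are in units of the electron rest energy):

```python
import math, random

# Toy interaction coefficients in 1/mm (invented, not course data).
MU_PHOTO, MU_COMPTON = 0.02, 0.18
MU_TOTAL = MU_PHOTO + MU_COMPTON

def transport_photon(energy, rng):
    """Follows the handout's sampling steps for one photon;
    `energy` is in units of the electron rest energy (m_e c^2)."""
    x = deposited = 0.0
    while True:
        # 1. interaction length: exponential free path s = -ln(u)/mu_t
        x += -math.log(1.0 - rng.random()) / MU_TOTAL
        # 2. interaction type: proportional to the coefficients
        if rng.random() < MU_PHOTO / MU_TOTAL:
            return x, deposited + energy   # photoelectric: full absorption
        # 3. scattering angle: isotropic for brevity (the real exercise
        #    would sample the Klein-Nishina distribution)
        cos_t = 2.0 * rng.random() - 1.0
        # 4. energy deposit: Compton recoil energy stays local
        scattered = energy / (1.0 + energy * (1.0 - cos_t))
        deposited += energy - scattered
        energy = scattered

rng = random.Random(42)
print(transport_photon(50.0 / 511.0, rng))    # a 50 keV photon
```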

  6. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation.

    PubMed

    Ziegenhein, Peter; Pirner, Sven; Ph Kamerling, Cornelis; Oelfke, Uwe

    2015-08-07

    Monte Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes conventional implementations of MC algorithms require to deliver sufficiently accurate results on high resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can address several hundred GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high resolution clinical plans can be calculated.

  7. Atomistic Monte Carlo Simulation of Lipid Membranes

    PubMed Central

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314

  8. Monte Carlo verification of radiotherapy treatments with CloudMC.

    PubMed

    Miras, Hector; Jiménez, Rubén; Perales, Álvaro; Terrón, José Antonio; Bertolet, Alejandro; Ortiz, Antonio; Macías, José

    2018-06-27

    A new implementation has been made on CloudMC, a cloud-based platform presented in a previous work, in order to provide services for radiotherapy treatment verification by means of Monte Carlo in a fast, easy and economical way. A description of the architecture of the application and the new developments implemented is presented, together with the results of the tests carried out to validate its performance. CloudMC has been developed on the Microsoft Azure cloud. It is based on a map/reduce implementation for distributing Monte Carlo calculations over a dynamic cluster of virtual machines in order to reduce calculation time. CloudMC has been updated with new methods to read and process the information related to radiotherapy treatment verification: CT image set, treatment plan, structures and dose distribution files in DICOM format. Some tests have been designed in order to determine, for the different tasks, the most suitable type of virtual machines from those available in Azure. Finally, the performance of Monte Carlo verification in CloudMC is studied through three real cases that involve different treatment techniques, linac models and Monte Carlo codes. Considering computational and economic factors, D1_v2 and G1 virtual machines were selected as the default type for the Worker Roles and the Reducer Role, respectively. Calculation times up to 33 min and costs of 16 € were achieved for the verification cases presented when a statistical uncertainty below 2% (2σ) was required. The costs were reduced to 3-6 € when uncertainty requirements were relaxed to 4%. Advantages like high computational power, scalability, easy access and a pay-per-usage model make Monte Carlo cloud-based solutions, like the one presented in this work, an important step forward in solving the long-standing problem of truly introducing Monte Carlo algorithms into the daily routine of the radiotherapy planning process.
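
    The map/reduce pattern described above can be mimicked locally with a process pool standing in for the Azure worker roles. A sketch (the per-history dose score is a toy model, and per-worker seeding is simplified; a production system would use proper parallel RNG streams):

```python
import random
from concurrent.futures import ProcessPoolExecutor  # stand-in for cloud VMs

def worker(args):
    """'Map' task: one worker simulates its share of the histories and
    returns partial sums (and sums of squares, for the uncertainty)."""
    n_histories, seed = args
    rng = random.Random(seed)
    s = s2 = 0.0
    for _ in range(n_histories):
        dose = rng.expovariate(1.0)     # toy per-history dose score
        s += dose
        s2 += dose * dose
    return n_histories, s, s2

def reducer(parts):
    """'Reduce' task: merge partial tallies into the estimate and its
    relative statistical uncertainty (1 sigma)."""
    n = sum(p[0] for p in parts)
    s = sum(p[1] for p in parts)
    s2 = sum(p[2] for p in parts)
    mean = s / n
    sigma_mean = ((s2 / n - mean * mean) / n) ** 0.5
    return mean, sigma_mean / mean

if __name__ == "__main__":
    jobs = [(250_000, seed) for seed in range(8)]   # 8 "worker roles"
    with ProcessPoolExecutor() as pool:
        parts = list(pool.map(worker, jobs))
    mean, rel = reducer(parts)
    print(f"dose = {mean:.4f} +/- {100 * rel:.3f}%")
```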

  9. Monte Carlo simulations in radiotherapy dosimetry.

    PubMed

    Andreo, Pedro

    2018-06-27

    The use of the Monte Carlo (MC) method in radiotherapy dosimetry has increased almost exponentially in the last decades. Its widespread use in the field has converted this computer simulation technique into a common tool for reference and treatment planning dosimetry calculations. This work reviews the different MC calculations made on dosimetric quantities, like stopping-power ratios and perturbation correction factors required for reference ionization chamber dosimetry, as well as the fully realistic MC simulations currently available of clinical accelerators, detectors and patient treatment planning. Issues are raised that include the necessity for consistency in the data throughout the entire dosimetry chain in reference dosimetry, and how Bragg-Gray theory breaks down for small photon fields. Both aspects are less critical for MC treatment planning applications, but there are important constraints like tissue characterization and its patient-to-patient variability, which, together with the conversion between dose-to-water and dose-to-tissue, are analysed in detail. Although these constraints are common to all methods and algorithms used in different types of treatment planning systems, they mean that the uncertainties involved in MC treatment planning still remain "uncertain".

  10. Top Quark Mass Calibration for Monte Carlo Event Generators

    NASA Astrophysics Data System (ADS)

    Butenschoen, Mathias; Dehnadi, Bahman; Hoang, André H.; Mateu, Vicent; Preisser, Moritz; Stewart, Iain W.

    2016-12-01

    The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator m_{t}^{MC}. Because of hadronization and parton-shower dynamics, relating m_{t}^{MC} to a field theory mass is difficult. We present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting e^{+}e^{-} 2-jettiness calculations at next-to-leading-logarithmic and next-to-next-to-leading-logarithmic order to PYTHIA 8.205, m_{t}^{MC} differs from the pole mass by 900 and 600 MeV, respectively, and agrees with the MSR mass within uncertainties, m_{t}^{MC} ≃ m_{t,1 GeV}^{MSR}.

  11. Kinetic Monte Carlo (kMC) simulation of carbon co-implant on pre-amorphization process.

    PubMed

    Park, Soonyeol; Cho, Bumgoo; Yang, Seungsu; Won, Taeyoung

    2010-05-01

    We report our kinetic Monte Carlo (kMC) study of the effect of carbon co-implant on the pre-amorphization implant (PAI) process. We employed the binary collision approximation (BCA) approach for the acquisition of the initial as-implanted dopant profile and the kMC method for the simulation of diffusion during the annealing process. The simulation results imply that carbon co-implant suppresses boron diffusion due to recombination with interstitials. We also compared boron diffusion with carbon diffusion by calculating the carbon reaction with interstitials, and found that boron diffusion is affected by the carbon co-implant energy through enhanced trapping of interstitials.
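
    A generic rejection-free kMC step of the kind such simulations are built on: pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. The event catalog and rates below are invented placeholders, not the paper's calibrated values:

```python
import math, random

def kmc_step(rates, t, rng):
    """One rejection-free kinetic MC step: choose an event with
    probability proportional to its rate (tower sampling), then advance
    time by an exponential waiting time with the total rate."""
    total = sum(rates.values())
    r = rng.random() * total
    for event, rate in rates.items():
        r -= rate
        if r <= 0.0:
            break
    t += -math.log(1.0 - rng.random()) / total
    return event, t

# Invented event catalog; a co-implant study would use calibrated rates
# for hops and trapping/recombination reactions of B, C and Si interstitials.
rates = {"B-I hop": 5.0, "C-I hop": 3.0, "C traps I": 1.5, "B-I dissolves": 0.5}
rng, t = random.Random(3), 0.0
for _ in range(5):
    event, t = kmc_step(rates, t, rng)
    print(f"t = {t:.3f}: {event}")
```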

  12. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
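
    The scaling step described above can be sketched directly: the zero-absorption reflectance at each time bin is attenuated by a weighted Beer-Lambert factor built from the per-layer time fractions. All numbers below (speed of light in tissue, layer fractions, absorption coefficients, the fake reflectance curve) are illustrative assumptions:

```python
import math

C_TISSUE = 0.214   # speed of light in tissue, mm/ps (n ~ 1.4)

def scale_reflectance(times, r_zero, layer_fraction, mu_a):
    """Attenuate the zero-absorption reflectance by the weighted
    Beer-Lambert factor exp(-sum_i mu_a[i] * f_i(t) * c * t), where
    f_i(t) is the fraction of the photon path spent in layer i."""
    out = []
    for t, r in zip(times, r_zero):
        path = C_TISSUE * t                      # total path length
        att = sum(mu * f * path
                  for mu, f in zip(mu_a, layer_fraction(t)))
        out.append(r * math.exp(-att))
    return out

# Toy two-layer example: the top layer dominates at early times.
times = [10.0 * k for k in range(1, 11)]              # ps
r_zero = [math.exp(-0.05 * t) / t for t in times]     # fake R_0(t)
fractions = lambda t: (max(0.2, 1.0 - 0.01 * t), min(0.8, 0.01 * t))
print(scale_reflectance(times, r_zero, fractions, mu_a=(0.002, 0.01)))
```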

  13. Accelerated event-by-event Monte Carlo microdosimetric calculations of electrons and protons tracks on a multi-core CPU and a CUDA-enabled GPU.

    PubMed

    Kalantzis, Georgios; Tachibana, Hidenobu

    2014-01-01

    For microdosimetric calculations, event-by-event Monte Carlo (MC) methods are considered the most accurate. The main shortcoming of those methods is the extensive requirement for computational time. In this work we present an event-by-event MC code of low projectile energy electron and proton tracks for accelerated microdosimetric MC simulations on a graphics processing unit (GPU). Additionally, a hybrid implementation scheme was realized by employing OpenMP and CUDA in such a way that both the GPU and the multi-core CPU were utilized simultaneously. The two implementation schemes have been tested and compared with the sequential single-threaded MC code on the CPU. Performance comparison was established on the speed-up for a set of benchmarking cases of electron and proton tracks. A maximum speedup of 67.2 was achieved for the GPU-based MC code, while a further improvement of the speedup up to 20% was achieved for the hybrid approach. The results indicate the capability of our CPU-GPU implementation for accelerated MC microdosimetric calculations of both electron and proton tracks without loss of accuracy.

  14. Safe bunker designing for the 18 MV Varian 2100 Clinac: a comparison between Monte Carlo simulation based upon data and new protocol recommendations

    PubMed Central

    Beigi, Manije; Afarande, Fatemeh; Ghiasi, Hosein

    2016-01-01

    Aim The aim of this study was to compare two bunkers, one designed using only protocol recommendations and one using Monte Carlo (MC)-based data, derived for an 18 MV Varian 2100 Clinac accelerator. Background High energy radiation therapy is associated with fast and thermal photoneutrons. Adequate shielding against the contaminant neutrons has been recommended by the new IAEA and NCRP protocols. Materials and methods The latest protocols released by the IAEA (safety report No. 47) and NCRP report No. 151 were used for the bunker design calculations; MC-based data were also derived. Two bunkers, one designed using the protocols and one using the MC data, were designed and discussed. Results Regarding the door's thickness, the door designed by the MC simulation and the Wu–McGinley analytical method were closer in both BPE and lead thickness. In the case of the primary and secondary barriers, MC simulation resulted in 440.11 mm for the ordinary concrete; a total concrete thickness of 1709 mm was required. Calculating the same parameters with the recommended analytical methods resulted in 1762 mm for the required thickness, using the 445 mm TVL recommended for the concrete. Additionally, for the secondary barrier a thickness of 752.05 mm was obtained. Conclusion Our results showed that the MC simulation and the protocol recommendations are in good agreement in the contamination dose calculation. The differences between the analytical and MC simulation methods revealed that the application of only one method for bunker design may lead to underestimation or overestimation in dose and shielding calculations. PMID:26900357

  15. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and the efficiency of well-developed optimization methods, by precalculating the fluence-to-dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent, were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms and one clinical patient geometry to examine the capability of this platform to generate conformal plans, as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead. PMID:27277051
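
    The core loop the abstract describes, gradient descent with momentum driven by very noisy few-history dose estimates, can be caricatured in a few lines. The dose model, target, learning rate and momentum constant below are invented stand-ins, not the authors' algorithm:

```python
import random

def concurrent_fluence_opt(n_batches, n_beamlets, rng):
    """Gradient descent with momentum on fluence weights, fed by very
    noisy few-history dose estimates (4 histories per beamlet per
    batch). Everything numeric here is an invented stand-in."""
    target = 1.0                         # prescribed dose (a.u.)
    w = [1.0] * n_beamlets               # fluence weights
    v = [0.0] * n_beamlets               # momentum buffer
    lr, mom = 0.02, 0.9
    for _ in range(n_batches):
        for b in range(n_beamlets):
            # "Transport" a handful of histories for beamlet b: a noisy
            # estimate of its dose per unit fluence.
            dpf = sum(rng.expovariate(1.0 / (0.1 * (b + 1)))
                      for _ in range(4)) / 4.0
            grad = 2.0 * (w[b] * dpf - target) * dpf   # d/dw (dose-target)^2
            v[b] = mom * v[b] - lr * grad              # momentum update
            w[b] = max(0.0, w[b] + v[b])               # keep fluence physical
    return w

rng = random.Random(7)
print(concurrent_fluence_opt(3000, 3, rng))
```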

  16. Building proteins from C alpha coordinates using the dihedral probability grid Monte Carlo method.

    PubMed Central

    Mathiowetz, A. M.; Goddard, W. A.

    1995-01-01

    Dihedral probability grid Monte Carlo (DPG-MC) is a general-purpose method of conformational sampling that can be applied to many problems in peptide and protein modeling. Here we present the DPG-MC method and apply it to predicting complete protein structures from C alpha coordinates. This is useful in such endeavors as homology modeling, protein structure prediction from lattice simulations, or fitting protein structures to X-ray crystallographic data. It also serves as an example of how DPG-MC can be applied to systems with geometric constraints. The conformational propensities for individual residues are used to guide conformational searches as the protein is built from the amino-terminus to the carboxyl-terminus. Results for a number of proteins show that both the backbone and side chain can be accurately modeled using DPG-MC. Backbone atoms are generally predicted with RMS errors of about 0.5 Å (compared to X-ray crystal structure coordinates) and all atoms are predicted to an RMS error of 1.7 Å or better. PMID:7549885

  17. Dosimetric quality control of Eclipse treatment planning system using pelvic digital test object

    NASA Astrophysics Data System (ADS)

    Benhdech, Yassine; Beaumont, Stéphane; Guédon, Jeanpierre; Crespin, Sylvain

    2011-03-01

    Last year, we demonstrated the feasibility of a new method to perform dosimetric quality control of treatment planning systems in radiotherapy; this method is based on Monte Carlo simulations and uses anatomical Digital Test Objects (DTOs). The pelvic DTO was used in order to assess this new method on an ECLIPSE VARIAN treatment planning system. Large dose variations were observed, particularly in air and bone equivalent material. In this current work, we discuss the results of the previous paper and provide an explanation for the observed dose differences; the VARIAN Eclipse Anisotropic Analytical Algorithm was investigated. Monte Carlo (MC) simulations were performed with the PENELOPE code, version 2003. To increase the efficiency of the MC simulations, we used our parallelized version based on the standard MPI (Message Passing Interface). The parallel code was run on a 32-processor SGI cluster. The study was carried out using pelvic DTOs and was performed for low- and high-energy photon beams (6 and 18 MV) on a VARIAN 2100CD linear accelerator. A square field (10×10 cm²) was used. Taking the MC data as reference, a χ index analysis was carried out. For this study, the distance to agreement (DTA) was set to 7 mm while the dose difference was set to 5%, as recommended in TRS-430 and TG-53 (on the beam axis in 3-D inhomogeneities). When using Monte Carlo PENELOPE, the absorbed dose is computed to the medium, whereas the TPS computes dose to water. We used the method described by Siebers et al., based on Bragg-Gray cavity theory, to convert the MC-simulated dose to medium into dose to water. Results show a strong consistency between ECLIPSE and MC calculations on the beam axis.

  18. Monte-Carlo simulation of a stochastic differential equation

    NASA Astrophysics Data System (ADS)

    Arif, ULLAH; Majid, KHAN; M, KAMRAN; R, KHAN; Zhengmao, SHENG

    2017-12-01

    For solving higher dimensional diffusion equations with an inhomogeneous diffusion coefficient, Monte Carlo (MC) techniques are considered to be more effective than other algorithms, such as the finite element or finite difference methods. The inhomogeneity of the diffusion coefficient strongly limits the use of different numerical techniques. For better convergence, higher-order methods have been put forward to allow MC codes to take large step sizes. The main focus of this work is to look for operators that can produce converging results for large step sizes. As a first step, our comparative analysis has been applied to a general stochastic problem. Subsequently, our formulation is applied to the problem of pitch angle scattering resulting from Coulomb collisions of charged particles in toroidal devices.
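
    The baseline such higher-order schemes improve upon is the weak order-1 Euler-Maruyama method. A minimal sketch for dX = a(X)dt + b(X)dW with an inhomogeneous diffusion coefficient (the drift, diffusion and step size are toy choices, not the paper's operators):

```python
import math, random

def euler_maruyama(x0, drift, diffusion, dt, n_steps, rng):
    """Baseline weak order-1 scheme for dX = a(X)dt + b(X)dW; the
    higher-order operators discussed in the paper aim to keep accuracy
    while allowing much larger dt."""
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Wiener increment
        x += drift(x) * dt + diffusion(x) * dw
    return x

rng = random.Random(11)
# Toy inhomogeneous diffusion: no drift, b(x) = 1 + 0.5 sin(x).
samples = [euler_maruyama(0.0, lambda x: 0.0,
                          lambda x: 1.0 + 0.5 * math.sin(x),
                          dt=0.01, n_steps=100, rng=rng)
           for _ in range(10_000)]
print("E[X(t=1)] ~", sum(samples) / len(samples))
```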

  19. McSnow: A Monte-Carlo Particle Model for Riming and Aggregation of Ice Particles in a Multidimensional Microphysical Phase Space

    NASA Astrophysics Data System (ADS)

    Brdar, S.; Seifert, A.

    2018-01-01

    We present a novel Monte-Carlo ice microphysics model, McSnow, to simulate the evolution of ice particles due to deposition, aggregation, riming, and sedimentation. The model is an application and extension of the super-droplet method of Shima et al. (2009) to the more complex problem of rimed ice particles and aggregates. For each individual super-particle, the ice mass, rime mass, rime volume, and the number of monomers are predicted establishing a four-dimensional particle-size distribution. The sensitivity of the model to various assumptions is discussed based on box model and one-dimensional simulations. We show that the Monte-Carlo method provides a feasible approach to tackle this high-dimensional problem. The largest uncertainty seems to be related to the treatment of the riming processes. This calls for additional field and laboratory measurements of partially rimed snowflakes.
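
    The super-particle bookkeeping described above amounts to carrying a multiplicity alongside the four predicted attributes. A schematic sketch (the field names, units and rime density default are assumptions, not McSnow's data model):

```python
from dataclasses import dataclass

@dataclass
class SuperParticle:
    """One super-particle representing `multiplicity` real ice particles
    (the super-droplet idea of Shima et al. 2009), carrying the four
    predicted attributes named above. Units are schematic."""
    multiplicity: float
    ice_mass: float       # kg
    rime_mass: float      # kg
    rime_volume: float    # m^3 (tracks rime density)
    n_monomers: int       # crystals per aggregate

def rime(p: SuperParticle, dm: float, rho_rime: float = 400.0) -> None:
    """Riming adds mass at the rime density; aggregation (not shown)
    would merge two super-particles and add their monomer counts."""
    p.rime_mass += dm
    p.rime_volume += dm / rho_rime

p = SuperParticle(multiplicity=1e6, ice_mass=1e-9,
                  rime_mass=0.0, rime_volume=0.0, n_monomers=1)
rime(p, 5e-10)
print(p)
```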

  20. Subtle Monte Carlo Updates in Dense Molecular Systems.

    PubMed

    Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper

    2012-02-14

    Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.
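
    The key ingredient, proposing correlated moves for coupled degrees of freedom from a multivariate Gaussian, can be shown in a toy 2x2 case via a Cholesky factorization (CRISP builds its covariance from an exact chain-closure solution; the covariance below is invented):

```python
import math, random

def correlated_proposal(mean, cov, rng):
    """Draw a move for two coupled degrees of freedom from a bivariate
    Gaussian via its Cholesky factor: x = mean + L z, z ~ N(0, I)."""
    a, b = cov[0]
    _, c = cov[1]
    l11 = math.sqrt(a)              # Cholesky factor of [[a, b], [b, c]]
    l21 = b / l11
    l22 = math.sqrt(c - l21 * l21)
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    return mean[0] + l11 * z1, mean[1] + l21 * z1 + l22 * z2

rng = random.Random(5)
# Strongly anti-correlated angle moves: displacing one angle is
# compensated by the other, keeping the chain ends nearly fixed.
print(correlated_proposal((0.0, 0.0), [[0.01, -0.009], [-0.009, 0.01]], rng))
```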

  1. Study on method to simulate light propagation on tissue with characteristics of radial-beam LED based on Monte-Carlo method.

    PubMed

    Song, Sangha; Elgezua, Inko; Kobayashi, Yo; Fujie, Masakatsu G

    2013-01-01

    In biomedicine, Monte Carlo (MC) simulation is commonly used to model light diffusion in tissue. However, most previous studies did not consider a radial-beam LED as the light source. We therefore characterized a radial-beam LED and applied these characteristics to the light source of an MC simulation. In this paper, we consider three characteristics of a radial-beam LED. The first is the initial launch area of photons. The second is the incident angle of a photon at the initial launch area. The third is the refraction effect according to the contact area between the LED and a turbid medium. To verify the MC simulation, we compared simulation and experimental results. The average correlation coefficient between simulation and experimental results is 0.9954. Through this study, we show an effective method to simulate light diffusion in tissue with the characteristics of a radial-beam LED based on MC simulation.

  2. Monte Carlo simulations on atropisomerism of thienotriazolodiazepines applicable to slow transition phenomena using potential energy surfaces by ab initio molecular orbital calculations.

    PubMed

    Morikami, Kenji; Itezono, Yoshiko; Nishimoto, Masahiro; Ohta, Masateru

    2014-01-01

    Compounds with a medium-sized flexible ring often show atropisomerism, which is caused by the high energy barriers between long-lived conformers that can be isolated and often have different biological properties from each other. In this study, the frequency of the transition between the two stable conformers, aS and aR, of thienotriazolodiazepine compounds with flexible 7-membered rings was estimated computationally by Monte Carlo (MC) simulations and validated experimentally by NMR experiments. To estimate the energy barriers for transitions as precisely as possible, the potential energy (PE) surfaces used in the MC simulations were calculated by molecular orbital (MO) methods. To accomplish the MC simulations with the MO-based PE surfaces in a practical central processing unit (CPU) time, the MO-based PE of each conformer was pre-calculated and stored before the MC simulations, and then only referred to during the MC simulations. The activation energies for transitions calculated by the MC simulations agreed well with the experimental ΔG determined by the NMR experiments. The analysis of the transition trajectories of the MC simulations revealed that the transition occurred not only through the transition states, but also through many different transition paths. Our computational methods gave us quantitative estimates of the atropisomerism of the thienotriazolodiazepine compounds in a practical period of time, and the method could be applicable to other slow-dynamics phenomena that cannot be investigated by other atomistic simulations.
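
    The compute-once, look-up-many idea can be sketched generically: tabulate the potential energy on a torsion-angle grid beforehand, then run a Metropolis MC that only reads the table. The two-angle analytic surface and temperature below are toy stand-ins for the paper's MO-based surfaces.

      import numpy as np

      rng = np.random.default_rng(7)
      kT = 2.5  # kJ/mol, roughly room temperature (illustrative)

      # Pre-computed PE table on a torsion-angle grid (a toy analytic
      # surface stands in for the ab initio MO energies).
      grid = np.linspace(-np.pi, np.pi, 181)
      P1, P2 = np.meshgrid(grid, grid, indexing="ij")
      pe_table = 10.0 * (1 - np.cos(P1)) + 8.0 * (1 - np.cos(2 * P2))

      def pe_lookup(phi1, phi2):
          """Nearest-grid-point lookup into the stored PE table (no MO calls)."""
          i = int(round((phi1 + np.pi) / (2 * np.pi) * (len(grid) - 1)))
          j = int(round((phi2 + np.pi) / (2 * np.pi) * (len(grid) - 1)))
          return pe_table[i, j]

      phi = np.array([0.1, -0.2])
      e = pe_lookup(*phi)
      trajectory = []
      for step in range(100_000):
          # propose a torsion move, wrapped back into [-pi, pi)
          trial = (phi + rng.normal(0.0, 0.15, 2) + np.pi) % (2 * np.pi) - np.pi
          e_trial = pe_lookup(*trial)
          if rng.random() < np.exp(min(0.0, -(e_trial - e) / kT)):  # Metropolis
              phi, e = trial, e_trial
          trajectory.append(phi.copy())   # transition paths live in here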

  3. Postimplant dosimetry using a Monte Carlo dose calculation engine: a new clinical standard.

    PubMed

    Carrier, Jean-François; D'Amours, Michel; Verhaegen, Frank; Reniers, Brigitte; Martin, André-Guy; Vigneault, Eric; Beaulieu, Luc

    2007-07-15

    To use the Monte Carlo (MC) method as a dose calculation engine for postimplant dosimetry, and to compare the results with clinically approved data for a sample of 28 patients. Two effects not taken into account by the clinical calculation, interseed attenuation and tissue composition, are specifically investigated. An automated MC program was developed. The dose distributions were calculated for the target volume and organs at risk (OAR) for 28 patients. Additional MC techniques were developed to focus specifically on the interseed attenuation and tissue effects. For the clinical target volume (CTV) D(90) parameter, the mean difference between the clinical technique and the complete MC method is 10.7 Gy, with cases reaching up to 17 Gy. For all cases, the clinical technique overestimates the deposited dose in the CTV. This overestimation is mainly from a combination of two effects: the interseed attenuation (average, 6.8 Gy) and tissue composition (average, 4.1 Gy). The deposited dose in the OARs is also overestimated in the clinical calculation. The clinical technique systematically overestimates the deposited dose in the prostate and in the OARs. To reduce this systematic inaccuracy, the MC method should be considered in establishing a new standard for clinical postimplant dosimetry and dose-outcome studies in the near future.

  4. Estimation of absorbed doses from paediatric cone-beam CT scans: MOSFET measurements and Monte Carlo simulations.

    PubMed

    Kim, Sangroh; Yoshizumi, Terry T; Toncheva, Greta; Frush, Donald P; Yin, Fang-Fang

    2010-03-01

    The purpose of this study was to establish a dose estimation tool based on Monte Carlo (MC) simulations. A 5-y-old paediatric anthropomorphic phantom was computed tomography (CT) scanned to create a voxelised phantom, which was used as input for the abdominal cone-beam CT in a BEAMnrc/EGSnrc MC system. An X-ray tube model of the Varian On-Board Imager® was built in the MC system. To validate the model, the absorbed doses at each organ location for standard-dose and low-dose modes were measured in the physical phantom with MOSFET detectors; effective doses were also calculated. The MC simulations were comparable to the MOSFET measurements. This voxelised phantom approach could produce a more accurate dose estimation than the stylised phantom method. This model can be easily applied to multi-detector CT dosimetry.

  5. Effect of inhomogeneity in a patient's body on the accuracy of the pencil beam algorithm in comparison to Monte Carlo

    NASA Astrophysics Data System (ADS)

    Yamashita, T.; Akagi, T.; Aso, T.; Kimura, A.; Sasaki, T.

    2012-11-01

    The pencil beam algorithm (PBA) is reasonably accurate and fast. It is, therefore, the primary method used in routine clinical treatment planning for proton radiotherapy; still, it needs to be validated for use in highly inhomogeneous regions. To investigate the effect of patient inhomogeneity, the PBA was compared with Monte Carlo (MC) calculations. A software framework for the MC simulation of radiotherapy was developed based on Geant4. The anatomical sites selected for the comparison were the head/neck, liver, lung and pelvis regions. The dose distributions calculated by the two methods were compared in selected examples, as were the dose volume histograms (DVHs) derived from the dose distributions. The comparison of the off-center ratio (OCR) at the iso-center showed good agreement between the PBA and MC, while discrepancies were seen around the distal fall-off regions. While MC showed a fine structure in the OCR in the distal fall-off region, the PBA showed a smoother distribution. The fine structures in the MC calculation appeared downstream of very low-density regions. The comparison of DVHs showed that most of the target volumes were similarly covered, while some OARs located around the distal region received a higher dose when calculated by MC than by the PBA.

  6. Report of the AAPM Task Group No. 105: Issues associated with clinical implementation of Monte Carlo-based photon and electron external beam treatment planning.

    PubMed

    Chetty, Indrin J; Curran, Bruce; Cygler, Joanna E; DeMarco, John J; Ezzell, Gary; Faddegon, Bruce A; Kawrakow, Iwan; Keall, Paul J; Liu, Helen; Ma, C M Charlie; Rogers, D W O; Seuntjens, Jan; Sheikh-Bagheri, Daryoush; Siebers, Jeffrey V

    2007-12-01

    The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features, are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and experimental verification of MC dose algorithms. As the MC method is an emerging technology, this report is not meant to be prescriptive. Rather, it is intended as a preliminary report to review the tenets of the MC method and to provide the framework upon which to build a comprehensive program for commissioning and routine quality assurance of MC-based treatment planning systems.
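
    As one concrete example of the statistical-uncertainty bookkeeping the report discusses, a batch estimate of the standard error of an MC dose tally might look like the following sketch (the tallies are synthetic placeholders; production MC codes typically use history-by-history estimators instead).

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic per-batch dose tallies for one voxel (placeholder numbers).
      n_batches = 50
      batch_dose = rng.normal(loc=2.00, scale=0.06, size=n_batches)  # Gy

      mean = batch_dose.mean()
      # Standard error of the mean over statistically independent batches.
      stderr = batch_dose.std(ddof=1) / np.sqrt(n_batches)
      print(f"dose = {mean:.3f} Gy +/- {100 * stderr / mean:.2f}% (1 sigma)")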

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Bush, K; Han, B

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.

  8. Combined FDTD-Monte Carlo analysis and a novel design for ZnO scintillator rods in polycarbonate membrane for X-ray imaging

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar; Mohammadi, Mohammad

    2017-05-01

    A combination of Finite Difference Time Domain (FDTD) and Monte Carlo (MC) methods is proposed for the simulation and analysis of ZnO microscintillators grown in a polycarbonate membrane. A planar 10 keV X-ray source irradiating the detector is simulated by the MC method, which provides the amount of absorbed X-ray energy in the assembly. The transport of the generated UV scintillation light and its propagation in the detector were studied by the FDTD method. Detector responses for different probable scintillation sites and for X-ray source energies from 10 to 25 keV are reported. Finally, a tapered geometry for the scintillators is proposed, which shows enhanced spatial resolution in comparison to the cylindrical geometry for imaging applications.

  9. Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit

    NASA Astrophysics Data System (ADS)

    Vittaldev, Vivek; Russell, Ryan P.

    2017-09-01

    Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in the literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used, and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge–Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher-fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
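
    The MC kernel of such a tool is compact; the sketch below estimates the collision probability at a single closest-approach epoch by sampling the relative position from the combined (assumed independent Gaussian) uncertainties and counting misses below the combined radius. The means, covariances, and radii are invented, and the paper's GPU parallelization, propagation, and interpolation machinery are not reproduced.

      import numpy as np

      rng = np.random.default_rng(11)

      # Invented relative state at nominal closest approach: mean offset (km)
      # and position covariances (km^2) of the two objects, assumed independent.
      d_mean = np.array([0.5, 0.0, 0.0])
      cov1 = np.diag([0.04, 0.01, 0.01])
      cov2 = np.diag([0.02, 0.02, 0.01])
      combined_radius_km = 0.02          # sum of the two collision radii

      n = 2_000_000
      rel = rng.multivariate_normal(d_mean, cov1 + cov2, size=n)
      miss = np.linalg.norm(rel, axis=1)
      p_hit = np.mean(miss < combined_radius_km)
      stderr = np.sqrt(p_hit * (1.0 - p_hit) / n)
      print(f"P(collision) ~ {p_hit:.2e} +/- {stderr:.1e} (1 sigma)")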

  10. A technique for generating phase-space-based Monte Carlo beamlets in radiotherapy applications.

    PubMed

    Bush, K; Popescu, I A; Zavgorodni, S

    2008-09-21

    As radiotherapy treatment planning moves toward Monte Carlo (MC) based dose calculation methods, the MC beamlet is becoming an increasingly common optimization entity. At present, methods used to produce MC beamlets have utilized a particle source model (PSM) approach. In this work we outline the implementation of a phase-space-based approach to MC beamlet generation that is expected to provide greater accuracy in beamlet dose distributions. In this approach a standard BEAMnrc phase space is sorted and divided into beamlets with particles labeled using the inheritable particle history variable. This is achieved with the use of an efficient sorting algorithm, capable of sorting a phase space of any size into the required number of beamlets in only two passes. Sorting a phase space of five million particles can be achieved in less than 8 s on a single-core 2.2 GHz CPU. The beamlets can then be transported separately into a patient CT dataset, producing separate dose distributions (doselets). Methods for doselet normalization and conversion of dose to absolute units of Gy for use in intensity modulated radiation therapy (IMRT) plan optimization are also described.
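
    The two-pass sort described here is essentially a counting sort keyed on beamlet index. A generic sketch follows, with the phase-space record reduced to a lateral position and the beamlet assignment simplified to a uniform 1D grid (the BEAMnrc record layout and inheritable-history bookkeeping are not reproduced):

      import numpy as np

      rng = np.random.default_rng(5)

      # Toy "phase space": lateral positions (cm) at the scoring plane,
      # to be binned into n_beamlets beamlets across a 10 cm field.
      n_particles, n_beamlets = 1_000_000, 100
      x = rng.uniform(-5.0, 5.0, n_particles)
      key = np.minimum(((x + 5.0) / 10.0 * n_beamlets).astype(np.int64),
                       n_beamlets - 1)

      # Pass 1: count particles per beamlet and turn counts into offsets.
      counts = np.bincount(key, minlength=n_beamlets)
      offsets = np.concatenate(([0], np.cumsum(counts)[:-1]))

      # Pass 2: scatter each particle record into its beamlet's output slice.
      cursor = offsets.copy()
      order = np.empty(n_particles, dtype=np.int64)
      for p in range(n_particles):
          b = key[p]
          order[cursor[b]] = p
          cursor[b] += 1
      # order[offsets[b] : offsets[b] + counts[b]] lists beamlet b's particles
      # (equivalently: order = np.argsort(key, kind="stable")).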

  11. Assessing the convergence of LHS Monte Carlo simulations of wastewater treatment models.

    PubMed

    Benedetti, Lorenzo; Claeys, Filip; Nopens, Ingmar; Vanrolleghem, Peter A

    2011-01-01

    Monte Carlo (MC) simulation appears to be the only currently adopted tool to estimate global sensitivities and uncertainties in wastewater treatment modelling. Such models are highly complex, dynamic and non-linear, requiring long computation times, especially in the scope of MC simulation, due to the large number of simulations usually required. However, no stopping rule to decide on the number of simulations required to achieve a given confidence in the MC simulation results has been adopted so far in the field. In this work, a pragmatic method is proposed to minimize the computation time by using a combination of several criteria. It makes no use of prior knowledge about the model, is very simple, intuitive and can be automated: all convenient features in engineering applications. A case study is used to show an application of the method, and the results indicate that the required number of simulations strongly depends on the model output(s) selected, and on the type and desired accuracy of the analysis conducted. Hence, no prior indication is available regarding the necessary number of MC simulations, but the proposed method is capable of dealing with these variations and stopping the calculations after convergence is reached.
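
    One simple instance of such a convergence criterion (a generic sketch, not necessarily the combination of criteria the authors propose) is to stop once the 95% confidence half-width of each monitored output's running mean drops below a chosen relative tolerance:

      import numpy as np

      def converged(samples, rel_tol=0.02, z95=1.96, min_runs=50):
          """True when the 95% CI half-width of the MC mean is within rel_tol
          of the mean, for every monitored model output (column)."""
          s = np.asarray(samples, dtype=float)
          n = s.shape[0]
          if n < min_runs:
              return False
          half_width = z95 * s.std(axis=0, ddof=1) / np.sqrt(n)
          return bool(np.all(half_width <= rel_tol * np.abs(s.mean(axis=0))))

      # Usage: run MC/LHS batches, append outputs, stop once converged.
      rng = np.random.default_rng(9)
      results = []
      while True:
          results.append(rng.normal([10.0, 3.0], [1.0, 0.5]))  # stand-in model run
          if converged(np.array(results)):
              break
      print(f"stopped after {len(results)} simulations")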

  12. Using Computer-Based "Experiments" in the Analysis of Chemical Reaction Equilibria

    ERIC Educational Resources Information Center

    Li, Zhao; Corti, David S.

    2018-01-01

    The application of the Reaction Monte Carlo (RxMC) algorithm to standard textbook problems in chemical reaction equilibria is discussed. The RxMC method is a molecular simulation algorithm for studying the equilibrium properties of reactive systems, and therefore provides the opportunity to develop computer-based "experiments" for the…

  13. Surface tension and phase coexistence properties of the lattice fluid from a virtual site removal Monte Carlo strategy

    NASA Astrophysics Data System (ADS)

    Provata, Astero; Prassas, Vassilis D.; Theodorou, Doros N.

    1997-10-01

    A thin liquid film of lattice fluid in equilibrium with its vapor is studied in 2 and 3 dimensions with canonical Monte Carlo (MC) simulation and Self-Consistent Field (SCF) theory in the temperature range 0.45Tc to Tc, where Tc is the liquid–gas critical temperature. Extending the approach of Oates et al. [Philos. Mag. B 61, 337 (1990)] to anisotropic systems, we develop a method for the MC computation of the transverse and normal pressure profiles, and hence of the surface tension, based on virtual removals of individual sites or blocks of sites from the system. Results from the implementation of this new method, obtained at very modest computational cost, are in reasonable agreement with exact values and other MC estimates of the surface tension of the 2-d and 3-d model systems, respectively. SCF estimates of the interfacial density profiles, the surface tension, the vapor pressure curve and the binodal curve compare well with MC results away from Tc, but show the expected deviations at high temperatures.

  14. Monte Carlo based electron treatment planning and cutout output factor calculations

    NASA Astrophysics Data System (ADS)

    Mitrou, Ellis

    Electron radiotherapy (RT) offers a number of advantages over photons. The high surface dose, combined with a rapid dose fall-off beyond the target volume, presents a net increase in tumor control probability and a decrease in normal tissue complication for superficial tumors. Electron treatments are normally delivered clinically without previously calculated dose distributions due to the complexity of the electron transport involved and the greater error in planning accuracy. This research uses Monte Carlo (MC) methods to model clinical electron beams in order to accurately calculate electron beam dose distributions in patients as well as calculate cutout output factors, reducing the need for a clinical measurement. The present work is incorporated into a research MC calculation system: the McGill Monte Carlo Treatment Planning (MMCTP) system. Measurements of PDDs, profiles and output factors, in addition to 2D GAFCHROMIC™ EBT2 film measurements in heterogeneous phantoms, were obtained to commission the electron beam model. The use of MC for electron TP will provide more accurate treatments and yield greater knowledge of the electron dose distribution within the patient. The calculation of output factors could yield a clinical time saving of up to 1 hour per patient.

  15. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization, reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  16. Fermi gases with imaginary mass imbalance and the sign problem in Monte-Carlo calculations

    NASA Astrophysics Data System (ADS)

    Roscher, Dietrich; Braun, Jens; Chen, Jiunn-Wei; Drut, Joaquín E.

    2014-05-01

    Fermi gases in strongly coupled regimes are inherently challenging for many-body methods. Although progress has been made analytically, quantitative results require ab initio numerical approaches, such as Monte-Carlo (MC) calculations. However, mass-imbalanced and spin-imbalanced gases are not accessible to MC calculations due to the infamous sign problem. For finite spin imbalance, the problem can be circumvented using imaginary polarizations and analytic continuation, and large parts of the phase diagram then become accessible. We propose to apply this strategy to the mass-imbalanced case, which opens up the possibility to study the associated phase diagram with MC calculations. We perform a first mean-field analysis which suggests that zero-temperature studies, as well as detecting a potential (tri)critical point, are feasible.

  17. TU-EF-304-07: Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; UT Southwestern Medical Center, Dallas, TX; Tian, Z

    2015-06-15

    Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems due to its capability of computing quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme to include GPU-based MC into IMPT. Methods: A conventional approach using MC in IMPT simply calls the MC dose engine repeatedly for each spot dose calculation. However, this is not optimal, because of the unnecessary computations on spots that turn out to have very small weights after solving the optimization problem. GPU memory-writing conflicts occurring at small beam sizes also reduce computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, the particles were sampled from different spots altogether with a Metropolis algorithm, such that the particle number is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigated the memory-writing conflict problem. Results: We have validated the proposed MC-based optimization scheme in one prostate case. The total computation time of our method was ∼5–6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow has been developed. The high efficiency makes it attractive for clinical use.
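
    The core sampling idea, drawing each simulated particle's spot of origin with probability proportional to the latest optimized intensities so that effort concentrates on the spots that matter, can be sketched as follows. The abstract mentions a Metropolis scheme; a direct categorical draw is used here for brevity, and the spot intensities and update loop are invented stand-ins.

      import numpy as np

      rng = np.random.default_rng(17)

      def sample_spot_indices(intensities, n_particles):
          """Draw each particle's pencil-beam spot with probability
          proportional to the current optimized spot intensity."""
          w = np.asarray(intensities, dtype=float)
          return rng.choice(len(w), size=n_particles, p=w / w.sum())

      # Illustrative loop alternating (mock) MC dose calculation and reweighting.
      spot_intensity = np.ones(200)
      for outer in range(5):
          spots = sample_spot_indices(spot_intensity, n_particles=1_000_000)
          histories_per_spot = np.bincount(spots, minlength=spot_intensity.size)
          # ... transport the sampled particles, tally per-spot dose, then
          # re-solve the plan optimization; here we only mimic the update:
          spot_intensity = np.maximum(
              spot_intensity * rng.uniform(0.5, 1.5, spot_intensity.size), 1e-3)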

  18. A systematic framework for Monte Carlo simulation of remote sensing errors map in carbon assessments

    Treesearch

    S. Healey; P. Patterson; S. Urbanski

    2014-01-01

    Remotely sensed observations can provide unique perspective on how management and natural disturbance affect carbon stocks in forests. However, integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential remote sensing errors...

  19. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-07

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.

  2. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) of the rare earth elements (REEs) in beneficiation rare earth waste from the gold processing: case study

    NASA Astrophysics Data System (ADS)

    Bieda, Bogusław; Grzesik, Katarzyna

    2017-11-01

    The study proposes a stochastic approach based on Monte Carlo (MC) simulation for the life cycle assessment (LCA) method, limited to a life cycle inventory (LCI) study, for rare earth element (REE) recovery from secondary-material processes, applied to the New Krankberg Mine in Sweden. The MC method is recognized as an important tool in science and can be considered the most effective quantification approach for uncertainties. A stochastic approach characterizes uncertainties better than a deterministic method. Uncertainty of data can be expressed through a definition of the probability distribution of that data (e.g. through standard deviation or variance). The data used in this study are obtained from: (i) site-specific measured or calculated data, (ii) values based on the literature, (iii) the ecoinvent process "rare earth concentrate, 70% REO, from bastnäsite, at beneficiation". Environmental emissions (e.g. particulates, uranium-238, thorium-232), energy and REEs (La, Ce, Nd, Pr, Sm, Dy, Eu, Tb, Y, Sc, Yb, Lu, Tm, Gd) have been inventoried. The study is based on a reference case for the year 2016. The combination of MC analysis with sensitivity analysis is the best way to quantify the uncertainty in LCI/LCA. The reliability of LCA results may be uncertain to a certain degree, but this uncertainty can be quantified with the help of the MC method.
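
    In practice such a stochastic LCI reduces to drawing each uncertain inventory flow from its assigned distribution and recomputing the aggregated result many times. A generic sketch with invented flows and lognormal uncertainties follows (the paper's actual data are site-specific and from ecoinvent):

      import numpy as np

      rng = np.random.default_rng(2017)
      n_mc = 100_000

      # Hypothetical inventory flows per tonne of REO concentrate:
      # (geometric mean, geometric standard deviation) of lognormals.
      flows = {
          "electricity_kWh": (350.0, 1.10),
          "particulates_kg": (1.2, 1.50),
          "thorium232_kBq":  (40.0, 1.80),
      }

      samples = {
          name: rng.lognormal(mean=np.log(gm), sigma=np.log(gsd), size=n_mc)
          for name, (gm, gsd) in flows.items()
      }

      for name, s in samples.items():
          lo, hi = np.percentile(s, [2.5, 97.5])
          print(f"{name:18s} mean={s.mean():8.2f}  95% CI=[{lo:.2f}, {hi:.2f}]")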

  3. An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Lu, Dan; Ricciuto, Daniel; Evans, Katherine

    2018-03-01

    Improving the understanding of subsurface systems and thus reducing prediction uncertainty requires collection of data. As the collection of subsurface data is costly, it is important that the data collection scheme is cost-effective. Design of a cost-effective data collection scheme, i.e., data-worth analysis, requires quantifying model parameter, prediction, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface hydrological model simulations using standard Monte Carlo (MC) sampling or surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce computational costs using multifidelity approximations. Since the Bayesian data-worth analysis involves a great deal of expectation estimation, the cost saving of MLMC in the assessment can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we use it for a highly heterogeneous two-phase subsurface flow simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data and are consistent with the standard MC estimation, while the MLMC greatly reduces the computational costs.
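
    A compact illustration of the MLMC telescoping estimator on a toy problem: geometric Brownian motion under Euler-Maruyama discretization, where levels halve the time step and coarse/fine paths share Brownian increments. The SDE, level budgets, and parameters are illustrative; the paper's data-worth machinery is not reproduced.

      import numpy as np

      rng = np.random.default_rng(101)
      T, mu, sigma, x0 = 1.0, 0.05, 0.2, 1.0   # toy GBM in place of the flow model

      def level_estimator(level, n_samples, n0=2):
          """Mean of P_l - P_{l-1} (or of P_0 on level 0) for the terminal
          value of a GBM; coarse and fine paths share the noise."""
          nf = n0 * 2**level
          dtf = T / nf
          xf = np.full(n_samples, x0)
          xc = np.full(n_samples, x0)
          for _ in range(nf // 2):
              dw1 = rng.normal(0.0, np.sqrt(dtf), n_samples)
              dw2 = rng.normal(0.0, np.sqrt(dtf), n_samples)
              xf += mu * xf * dtf + sigma * xf * dw1        # two fine steps
              xf += mu * xf * dtf + sigma * xf * dw2
              xc += mu * xc * 2 * dtf + sigma * xc * (dw1 + dw2)  # one coarse step
          return xf.mean() if level == 0 else (xf - xc).mean()

      levels = 5
      n_l = [max(200_000 // 4**l, 2_000) for l in range(levels)]  # decaying budget
      mlmc = sum(level_estimator(l, n) for l, n in zip(range(levels), n_l))
      print(f"MLMC estimate {mlmc:.4f} vs exact {x0 * np.exp(mu * T):.4f}")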

  4. A backward Monte Carlo method for efficient computation of runaway probabilities in runaway electron simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Guannan; Del-Castillo-Negrete, Diego

    2017-10-01

    Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the probability density functions (PDFs) of RE. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, which significantly reduces the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as a direct MC code, which paves the way for conducting large-scale RE simulations. This work is supported by DOE FES and ASCR under Contract Numbers ERKJ320 and ERAT377.

  5. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task, when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that is recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of their accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improves predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
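
    The thermodynamic-integration identity, ln p(D) = integral over beta in [0, 1] of E_beta[ln L(theta)], with expectations taken under the power posterior p_beta(theta) proportional to p(theta) L(theta)^beta, can be demonstrated on a toy conjugate-Gaussian problem where each power posterior is sampled exactly and the marginal likelihood has a closed form. The model and grid below are illustrative; the study's MCMC path sampling is replaced by direct sampling here.

      import numpy as np
      from scipy.stats import multivariate_normal

      rng = np.random.default_rng(6)

      # Toy conjugate model: theta ~ N(0, tau^2); y_i | theta ~ N(theta, sigma^2).
      tau, sigma, n = 2.0, 1.0, 20
      y = rng.normal(0.7, sigma, n)

      def log_like(theta):
          """ln L(theta) for a vector of theta samples."""
          sq = np.sum((y[None, :] - theta[:, None]) ** 2, axis=1)
          return -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * sq / sigma**2

      # Path sampling: for each power beta, sample the power posterior
      # (Gaussian here, so sampled exactly) and average ln L over samples.
      betas = np.linspace(0.0, 1.0, 21) ** 3          # denser near beta = 0
      e_logl = []
      for b in betas:
          v = 1.0 / (1.0 / tau**2 + b * n / sigma**2)  # power-posterior variance
          m = v * b * y.sum() / sigma**2               # power-posterior mean
          e_logl.append(log_like(rng.normal(m, np.sqrt(v), 50_000)).mean())
      e_logl = np.array(e_logl)
      ln_z_ti = np.sum(np.diff(betas) * (e_logl[:-1] + e_logl[1:]) / 2.0)

      # Closed-form check: marginally y ~ N(0, sigma^2 I + tau^2 11^T).
      cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
      ln_z_exact = multivariate_normal(np.zeros(n), cov).logpdf(y)
      print(f"TI estimate {ln_z_ti:.3f} vs exact {ln_z_exact:.3f}")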

  6. Diagnosing Undersampling in Monte Carlo Eigenvalue and Flux Tally Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2015-01-01

    This study explored the impact of undersampling on the accuracy of tally estimates in Monte Carlo (MC) calculations. Steady-state MC simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity, and the impact of undersampling on eigenvalue and fuel pin flux/fission estimates was examined. This study observed biases in MC eigenvalue estimates as large as several percent and biases in fuel pin flux/fission tally estimates that exceeded tens, and in some cases hundreds, of percent. This study also investigated five statistical metrics for predicting the occurrence of undersampling biases in MC simulations. Three of the metrics (the Heidelberger-Welch RHW, the Geweke Z-Score, and the Gelman-Rubin diagnostics) are commonly used for diagnosing the convergence of Markov chains, and two of the methods (the Contributing Particles per Generation and Tally Entropy) are new convergence metrics developed in the course of this study. These metrics were implemented in the KENO MC code within the SCALE code system and were evaluated for their reliability at predicting the onset and magnitude of undersampling biases in MC eigenvalue and flux tally estimates in two of the critical models. Of the five methods investigated, the Heidelberger-Welch RHW, the Gelman-Rubin diagnostics, and Tally Entropy produced test metrics that correlated strongly to the size of the observed undersampling biases, indicating their potential to effectively predict the size and prevalence of undersampling biases in MC simulations.
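
    Of the named diagnostics, the Geweke Z-score is easy to state: compare the means of an early and a late segment of a tally's cycle-by-cycle trace, normalized by the segments' standard errors. A simplified sketch follows (plain sample variances are used here; the full diagnostic uses spectral-density estimates to account for autocorrelation):

      import numpy as np

      def geweke_z(trace, first=0.1, last=0.5):
          """Geweke convergence Z-score of an MC tally trace (simplified:
          independent-sample variances instead of spectral estimates)."""
          t = np.asarray(trace, dtype=float)
          a = t[: int(first * len(t))]
          b = t[-int(last * len(t)):]
          return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a)
                                                 + b.var(ddof=1) / len(b))

      rng = np.random.default_rng(13)
      drifting = np.linspace(1.02, 1.00, 500) + rng.normal(0, 0.002, 500)
      stationary = 1.00 + rng.normal(0, 0.002, 500)
      print(geweke_z(drifting), geweke_z(stationary))  # |Z| >> 2 flags the drift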

  7. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.

    2015-01-01

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.

  8. Validation of a Monte Carlo code system for grid evaluation with interference effect on Rayleigh scattering

    NASA Astrophysics Data System (ADS)

    Zhou, Abel; White, Graeme L.; Davidson, Rob

    2018-02-01

    Anti-scatter grids are commonly used in x-ray imaging systems to reduce the scatter radiation reaching the image receptor. Anti-scatter grid performance can be simulated and validated through Monte Carlo (MC) methods. Our recently reported work modified existing MC codes, resulting in improved performance when simulating x-ray imaging. The aim of this work is to validate the transmission of x-ray photons through grids computed with the recently reported new MC codes against experimental results and results previously reported in other literature. The results of this work show that the scatter-to-primary ratio (SPR) and the transmissions of primary (Tp), scatter (Ts), and total (Tt) radiation determined using this new MC code system agree strongly with the experimental results and the results reported in the literature. Tp, Ts, Tt, and SPR determined in this new MC simulation code system are therefore valid. These results also show that the interference effect on Rayleigh scattering should not be neglected in the evaluation of both mammographic and general grids. Our new MC simulation code system has been shown to be valid and can be used for analysing and evaluating grid designs.

  9. Estimation of computed tomography dose index in cone beam computed tomography: MOSFET measurements and Monte Carlo simulations.

    PubMed

    Kim, Sangroh; Yoshizumi, Terry; Toncheva, Greta; Yoo, Sua; Yin, Fang-Fang; Frush, Donald

    2010-05-01

    To address the lack of an accurate dose estimation method in cone beam computed tomography (CBCT), we performed point-dose metal oxide semiconductor field-effect transistor (MOSFET) measurements and Monte Carlo (MC) simulations. A Varian On-Board Imager (OBI) was employed to measure point doses in polymethyl methacrylate (PMMA) CT phantoms with MOSFETs for standard and low dose modes. An MC model of the OBI x-ray tube was developed using the BEAMnrc/EGSnrc MC system and validated by the half value layer, the x-ray spectrum, and lateral and depth dose profiles. We compared the weighted computed tomography dose index (CTDIw) between MOSFET measurements and MC simulations. The CTDIw was found to be 8.39 cGy for the head scan and 4.58 cGy for the body scan from the MOSFET measurements in standard dose mode, and 1.89 cGy for the head and 1.11 cGy for the body in low dose mode. The CTDIw from MC agreed with the MOSFET measurements to within 5%. In conclusion, an MC model for Varian CBCT has been established, and this approach may be easily extended from the CBCT geometry to multi-detector CT geometry.
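
    For reference, the weighted CTDI combines the centre and periphery measurements of the standard CTDI phantom as CTDIw = (1/3) CTDI100,center + (2/3) CTDI100,periphery. A trivial helper follows, with invented point-dose values (the paper reports only the resulting CTDIw, not the underlying point measurements):

      def ctdi_w(ctdi_center, ctdi_periphery):
          """Weighted CTDI (cGy) from centre and peripheral point doses."""
          return ctdi_center / 3.0 + 2.0 * ctdi_periphery / 3.0

      # Invented MOSFET point doses (cGy), for illustration only:
      print(f"CTDIw = {ctdi_w(7.1, 9.0):.2f} cGy")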

  10. SU-E-T-397: Evaluation of Planned Dose Distributions by Monte Carlo (0.5%) and Ray Tracing Algorithm for the Spinal Tumors with CyberKnife

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, H; Brindle, J; Hepel, J

    2015-06-15

    Purpose: To analyze and evaluate the dose distributions computed by the Ray Tracing (RT) and Monte Carlo (MC, 0.5% uncertainty) algorithms for the spinal cord as a critical structure and for the gross target volume and planning target volume. Methods: Twenty-four spinal tumor patients were treated with stereotactic body radiotherapy (SBRT) by CyberKnife in 2013 and 2014. The MC algorithm with 0.5% uncertainty was used to recalculate the dose distribution of each patient's treatment plan using the same beams, beam directions, and monitor units (MUs). Results: The prescription doses are uniformly larger for the MC plans than for the RT plans, except in one case. Dose differences of up to a factor of 1.19 for the 0.25 cc threshold volume and 1.14 for the 1.2 cc threshold volume are observed for the spinal cord. Conclusion: The MC-recalculated dose distributions are larger than the original RT calculations for these spinal tumor cases. Based on the accuracy of the MC calculations, more radiation dose might be delivered to the tumor targets and spinal cords with the increased prescription dose.

  11. Efficient Coupling of Fluid-Plasma and Monte-Carlo-Neutrals Models for Edge Plasma Transport

    NASA Astrophysics Data System (ADS)

    Dimits, A. M.; Cohen, B. I.; Friedman, A.; Joseph, I.; Lodestro, L. L.; Rensink, M. E.; Rognlien, T. D.; Sjogreen, B.; Stotler, D. P.; Umansky, M. V.

    2017-10-01

    UEDGE has been valuable for modeling transport in the tokamak edge and scrape-off layer, due in part to its efficient, fully implicit solution of coupled fluid-neutrals and plasma models. We are developing an implicit coupling of the kinetic Monte-Carlo (MC) code DEGAS-2, as the neutrals model component, to the UEDGE plasma component, based on an extension of the Jacobian-free Newton-Krylov (JFNK) method to MC residuals. The coupling components build on the methods and coding already present in UEDGE. For the linear Krylov iterations, a procedure has been developed to "extract" a good preconditioner from that of UEDGE. This preconditioner may also be used to greatly accelerate the convergence rate of a relaxed fixed-point iteration, which may provide a useful "intermediate" algorithm. The JFNK method also requires the calculation of Jacobian-vector products, for which any finite-difference procedure is inaccurate when an MC component is present. A semi-analytical procedure that retains the standard MC accuracy and fully kinetic neutrals physics is therefore being developed. Prepared for US DOE by LLNL under Contract DE-AC52-07NA27344 and LDRD project 15-ERD-059, by PPPL under Contract DE-AC02-09CH11466, and supported in part by the U.S. DOE, OFES.
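
    The JFNK building block itself is standard. As a toy illustration, SciPy's newton_krylov solves a smooth two-field residual that stands in for the coupled plasma/neutrals system; the hard parts described above (MC residuals, preconditioner extraction, semi-analytical Jacobian-vector products) are not reproduced.

      import numpy as np
      from scipy.optimize import newton_krylov

      def residual(u):
          """Toy coupled residual: two 'fields' that feed back on each other,
          standing in for the UEDGE plasma / DEGAS-2 neutrals coupling."""
          x, y = u[:50], u[50:]
          rx = x - 1.0 - 0.3 * np.tanh(y)            # 'plasma' equation
          ry = y - 0.5 - 0.2 * x**2 / (1.0 + x**2)   # 'neutrals' equation
          return np.concatenate([rx, ry])

      u0 = np.zeros(100)
      sol = newton_krylov(residual, u0, f_tol=1e-9)  # Jacobian-free Newton-Krylov
      print(np.abs(residual(sol)).max())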

  12. TH-E-BRE-09: TrueBeam Monte Carlo Absolute Dose Calculations Using Monitor Chamber Backscatter Simulations and Linac-Logged Target Current

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popescu, I A; Lobo, J; Sawkey, D

    2014-06-15

    Purpose: To simulate and measure radiation backscattered into the monitor chamber of a TrueBeam linac; to establish a rigorous framework for absolute dose calculations for TrueBeam Monte Carlo (MC) simulations through a novel approach, taking into account the backscattered radiation and the actual machine output during beam delivery; and to improve agreement between measured and simulated relative output factors. Methods: The ‘monitor backscatter factor’ is an essential ingredient of a well-established MC absolute dose formalism (the MC equivalent of the TG-51 protocol). This quantity was determined for the 6 MV, 6X FFF, and 10X FFF beams by two independent methods: (1) MC simulations in the monitor chamber of the TrueBeam linac; (2) linac-generated beam record data for target current, logged for each beam delivery. Upper head MC simulations used a freely available manufacturer-provided interface to a cloud-based platform, allowing use of the same head model as that used to generate the publicly available TrueBeam phase spaces, without revealing the upper head design. The MC absolute dose formalism was expanded to allow direct use of target current data. Results: The relation between backscatter, the number of electrons incident on the target for one monitor unit, and MC absolute dose was analyzed for open fields, as well as a jaw-tracking VMAT plan. The agreement between the two methods was better than 0.15%. It was demonstrated that the agreement between measured and simulated relative output factors improves across all field sizes when backscatter is taken into account. Conclusion: For the first time, simulated monitor chamber dose and measured target current for an actual TrueBeam linac were incorporated in the MC absolute dose formalism. In conjunction with the use of MC inputs generated from post-delivery trajectory-log files, the present method allows accurate MC dose calculations, without resorting to any of the simplifying assumptions previously made in the TrueBeam MC literature. This work has been partially funded by Varian Medical Systems.

  13. GPU-accelerated Monte Carlo convolution/superposition implementation for dose calculation.

    PubMed

    Zhou, Bo; Yu, Cedric X; Chen, Danny Z; Hu, X Sharon

    2010-11-01

    Dose calculation is a key component in radiation treatment planning systems. Its performance and accuracy are crucial to the quality of treatment plans as emerging advanced radiation therapy technologies are exerting ever tighter constraints on dose calculation. A common practice is to choose either a deterministic method such as the convolution/superposition (CS) method for speed or a Monte Carlo (MC) method for accuracy. The goal of this work is to boost the performance of a hybrid Monte Carlo convolution/superposition (MCCS) method by devising a graphics processing unit (GPU) implementation so as to make the method practical for day-to-day usage. Although the MCCS algorithm combines the merits of MC fluence generation and CS fluence transport, it is still not fast enough to be used as a day-to-day planning tool. To alleviate the speed issue of MC algorithms, the authors adopted MCCS as their target method and implemented a GPU-based version. In order to fully utilize the GPU computing power, the MCCS algorithm is modified to match the GPU hardware architecture. The performance of the authors' GPU-based implementation on an Nvidia GTX260 card is compared to a multithreaded software implementation on a quad-core system. A speedup in the range of 6.7-11.4x is observed for the clinical cases used. The less than 2% statistical fluctuation also indicates that the accuracy of the authors' GPU-based implementation is in good agreement with the results from the quad-core CPU implementation. This work shows that GPU is a feasible and cost-efficient solution compared to other alternatives such as using cluster machines or field-programmable gate arrays for satisfying the increasing demands on computation speed and accuracy of dose calculation. But there are also inherent limitations of using GPU for accelerating MC-type applications, which are also analyzed in detail in this article.

  14. Monte Carlo-based fluorescence molecular tomography reconstruction method accelerated by a cluster of graphic processing units.

    PubMed

    Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

    2011-02-01

    High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and the different Green's functions representing the flux distribution in the media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the MC simulation's advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-unmatched boundaries, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.

  15. Applications of Monte Carlo method to nonlinear regression of rheological data

    NASA Astrophysics Data System (ADS)

    Kim, Sangmo; Lee, Junghaeng; Kim, Sihyun; Cho, Kwang Soo

    2018-02-01

    In rheological studies, one often needs to determine the parameters of rheological models from experimental data. Since both the rheological data and the parameter values vary on a logarithmic scale and the number of parameters is quite large, conventional methods of nonlinear regression such as the Levenberg-Marquardt (LM) method are usually ineffective. A gradient-based method such as LM is apt to be caught in local minima, which give unphysical parameter values whenever the initial guess is far from the global optimum. Although this problem can be solved by simulated annealing (SA), this Monte Carlo (MC) method needs an adjustable parameter that must be determined in an ad hoc manner. We suggest a simplified version of SA, a kind of MC method, which yields effective values of the parameters of the most complicated rheological models, such as the Carreau-Yasuda model of steady shear viscosity, the discrete relaxation spectrum, and zero-shear viscosity as a function of concentration and molecular weight.
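
    A minimal sketch of such an MC regression: work with the logarithms of the parameters and the data, perturb randomly, accept uphill moves with a Metropolis criterion, and cool the temperature geometrically. The synthetic data, step size, and annealing schedule are illustrative choices, not the authors' exact recipe.

      import numpy as np

      rng = np.random.default_rng(23)

      def carreau_yasuda(gdot, eta0, eta_inf, lam, a, n):
          """Carreau-Yasuda steady shear viscosity."""
          return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gdot)**a)**((n - 1.0) / a)

      # Synthetic "measurements", log-spaced in shear rate, lognormal noise.
      gdot = np.logspace(-2, 3, 40)
      eta_data = carreau_yasuda(gdot, 1e3, 1e-1, 2.0, 1.2, 0.4) \
          * rng.lognormal(0.0, 0.03, gdot.size)

      def chi2(logp):
          # residuals in log scale, since data and parameters span decades
          return np.sum((np.log(carreau_yasuda(gdot, *np.exp(logp)))
                         - np.log(eta_data))**2)

      logp = np.log([10.0, 1.0, 0.1, 1.0, 0.5])    # deliberately poor guess
      cost, T = chi2(logp), 1.0
      for step in range(50_000):
          trial = logp + rng.normal(0.0, 0.1, 5)
          c = chi2(trial)
          if c < cost or rng.random() < np.exp(-(c - cost) / T):
              logp, cost = trial, c
          T *= 0.9999                              # geometric cooling
      print("fitted parameters:", np.exp(logp), "chi2:", cost)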

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagaoka, Masataka

    A new efficient hybrid Monte Carlo (MC)/molecular dynamics (MD) reaction method with a rare-event-driving mechanism is introduced as a practical ‘atomistic’ molecular simulation of large-scale chemically reactive systems. Starting with its demonstrative application to the racemization reaction of (R)-2-chlorobutane in N,N-dimethylformamide solution, several other applications are shown from the practical viewpoint of molecular control of complex chemical reactions, stereochemistry and aggregate structures. Finally, I would like to mention future applications of the hybrid MC/MD reaction method.

  17. SU-E-T-175: Clinical Evaluations of Monte Carlo-Based Inverse Treatment Plan Optimization for Intensity Modulated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chi, Y; Li, Y; Tian, Z

    2015-06-15

    Purpose: Pencil-beam or superposition-convolution type dose calculation algorithms are routinely used in inverse plan optimization for intensity modulated radiation therapy (IMRT). However, due to their limited accuracy in some challenging cases, e.g. lung, the resulting dose may lose its optimality after being recomputed using an accurate algorithm, e.g. Monte Carlo (MC). The objective of this study is to evaluate the feasibility and advantages of a new method to include MC in the treatment planning process. Methods: We developed a scheme to iteratively perform MC-based beamlet dose calculations and plan optimization. In the MC stage, a GPU-based dose engine was used, and the particle number sampled from a beamlet was proportional to its optimized fluence from the previous step. We tested this scheme in four lung cancer IMRT cases. For each case, the original plan dose, the plan dose re-computed by MC, and the dose optimized by our scheme were obtained. Clinically relevant dosimetric quantities in these three plans were compared. Results: Although the original plan achieved a satisfactory PTV dose coverage, after re-computing doses using the MC method, it was found that the PTV D95% was reduced by 4.60%–6.67%. After re-optimizing these cases with our scheme, the PTV coverage was improved to the same level as in the original plan, while the critical OAR coverages were maintained at clinically acceptable levels. Regarding the computation time, it took on average 144 sec per case using only one GPU card, including both MC-based beamlet dose calculation and treatment plan optimization. Conclusion: The achieved dosimetric gains and high computational efficiency indicate the feasibility and advantages of the proposed MC-based IMRT optimization method. Comprehensive validations in more patient cases are in progress.

  18. Efficient Application of Continuous Fractional Component Monte Carlo in the Reaction Ensemble

    PubMed Central

    2017-01-01

    A new formulation of the Reaction Ensemble Monte Carlo technique (RxMC) combined with the Continuous Fractional Component Monte Carlo method is presented. This method is denoted by serial Rx/CFC. The key ingredient is that fractional molecules of either reactants or reaction products are present and that chemical reactions always involve fractional molecules. Serial Rx/CFC has the following advantages compared to other approaches: (1) One directly obtains chemical potentials of all reactants and reaction products. Obtained chemical potentials can be used directly as an independent check to ensure that chemical equilibrium is achieved. (2) Independent biasing is applied to the fractional molecules of reactants and reaction products. Therefore, the efficiency of the algorithm is significantly increased, compared to the other approaches. (3) Changes in the maximum scaling parameter of intermolecular interactions can be chosen differently for reactants and reaction products. (4) The number of fractional molecules is reduced. As a proof of principle, our method is tested for Lennard-Jones systems at various pressures and for various chemical reactions. Excellent agreement was found both for average densities and equilibrium mixture compositions computed using serial Rx/CFC, RxMC/CFCMC previously introduced by Rosch and Maginn (Journal of Chemical Theory and Computation, 2011, 7, 269–279), and the conventional RxMC approach. The serial Rx/CFC approach is also tested for the reaction of ammonia synthesis at various temperatures and pressures. Excellent agreement was found between results obtained from serial Rx/CFC, experimental results from literature, and thermodynamic modeling using the Peng–Robinson equation of state. The efficiency of reaction trial moves is improved by a factor of 2 to 3 (depending on the system) compared to the RxMC/CFCMC formulation by Rosch and Maginn. PMID:28737933

  19. MC2-3: Multigroup Cross Section Generation Code for Fast Reactor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Changho; Yang, Won Sik

    This paper presents the methods and performance of the MC2-3 code, a multigroup cross-section generation code for fast reactor analysis, developed to improve the resonance self-shielding and spectrum calculation methods of MC2-2 and to simplify the current multistep schemes generating region-dependent broad-group cross sections. Using the basic neutron data from ENDF/B data files, MC2-3 solves the consistent P1 multigroup transport equation to determine the fundamental mode spectra for use in generating multigroup neutron cross sections. A homogeneous medium or a heterogeneous slab or cylindrical unit cell problem is solved at the ultrafine (2082) or hyperfine (~400 000) group level. In the resolved resonance range, pointwise cross sections are reconstructed with Doppler broadening at specified temperatures. The pointwise cross sections are directly used in the hyperfine group calculation, whereas for the ultrafine group calculation, self-shielded cross sections are prepared by numerical integration of the pointwise cross sections based upon the narrow resonance approximation. For both the hyperfine and ultrafine group calculations, unresolved resonances are self-shielded using the analytic resonance integral method. The ultrafine group calculation can also be performed for a two-dimensional whole-core problem to generate region-dependent broad-group cross sections. Verification tests have been performed using benchmark problems for various fast critical experiments including Los Alamos National Laboratory critical assemblies; Zero-Power Reactor, Zero-Power Physics Reactor, and Bundesamt für Strahlenschutz experiments; the Monju start-up core; and the Advanced Burner Test Reactor. Verification and validation results with ENDF/B-VII.0 data indicated that eigenvalues from MC2-3/DIF3D agreed with MCNP5 (Monte Carlo N-Particle) or VIM Monte Carlo solutions to within 200 pcm, and regionwise one-group fluxes were in good agreement with Monte Carlo solutions.

  20. A calibrated Monte Carlo approach to quantify the impacts of misorientation and different driving forces on texture development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liangzhe Zhang; Anthony D. Rollett; Timothy Bartel

    2012-02-01

    A calibrated Monte Carlo (cMC) approach, which quantifies grain boundary kinetics within a generic setting, is presented. The influence of misorientation is captured by adding a scaling coefficient in the spin flipping probability equation, while the contribution of different driving forces is weighted using a partition function. The calibration process relies on the established parametric links between Monte Carlo (MC) and sharp-interface models. The cMC algorithm quantifies microstructural evolution under complex thermomechanical environments and remedies some of the difficulties associated with conventional MC models. After validation, the cMC approach is applied to quantify the texture development of polycrystalline materials with influences of misorientation and inhomogeneous bulk energy across grain boundaries. The results are in good agreement with theory and experiments.
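
    A minimal flavor of the idea is sketched below: a 2D Potts-type MC in which each spin-flip attempt is accepted with the usual Metropolis factor multiplied by a misorientation-dependent scaling coefficient. The Read-Shockley-like mobility form and its constants here are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q, kT = 64, 24, 0.5          # lattice size, orientation count, temperature
theta_m = 15.0                  # high-angle boundary cutoff in degrees (assumed)
spins = rng.integers(0, Q, size=(N, N))
angle = rng.uniform(0.0, 45.0, Q)   # one orientation angle per spin value

def mobility(q1, q2):
    # Read-Shockley-like scaling coefficient for boundary mobility with
    # misorientation; the functional form and constants are illustrative
    dtheta = abs(angle[q1] - angle[q2])
    x = min(dtheta, theta_m) / theta_m
    return 1.0 - np.exp(-5.0 * x**4)

def unlike_neighbors(i, j, q):
    # local boundary energy: number of unlike nearest neighbors
    return sum(q != spins[(i + di) % N, (j + dj) % N]
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

for _ in range(200_000):
    i, j = rng.integers(0, N, size=2)
    old = spins[i, j]
    # propose the orientation of a randomly chosen nearby site
    new = spins[(i + rng.integers(-1, 2)) % N, (j + rng.integers(-1, 2)) % N]
    if new == old:
        continue
    dE = unlike_neighbors(i, j, new) - unlike_neighbors(i, j, old)
    metropolis = 1.0 if dE <= 0 else np.exp(-dE / kT)
    if rng.random() < mobility(old, new) * metropolis:
        spins[i, j] = new
```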

  1. A histogram-free multicanonical Monte Carlo algorithm for the construction of analytical density of states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisenbach, Markus; Li, Ying Wai

    We report a new multicanonical Monte Carlo (MC) algorithm to obtain the density of states (DOS) for physical systems with continuous state variables in statistical mechanics. Our algorithm is able to obtain an analytical form for the DOS expressed in a chosen basis set, instead of a numerical array of finite resolution as in previous variants of this class of MC methods such as multicanonical (MUCA) sampling and Wang-Landau (WL) sampling. This is enabled by storing the visited states directly in a data set and avoiding the explicit collection of a histogram. This practice also has the advantage of avoiding undesirable artificial errors caused by the discretization and binning of continuous state variables. Our results show that this scheme is capable of obtaining converged results with a much reduced number of Monte Carlo steps, leading to a significant speedup over existing algorithms.

  2. Improving the sampling efficiency of Monte Carlo molecular simulations: an evolutionary approach

    NASA Astrophysics Data System (ADS)

    Leblanc, Benoit; Braunschweig, Bertrand; Toulhoat, Hervé; Lutton, Evelyne

    We present a new approach to improving the convergence of Monte Carlo (MC) simulations of molecular systems belonging to complex energetic landscapes: the problem is redefined in terms of the dynamic allocation of MC move frequencies depending on their past efficiency, measured with respect to a relevant sampling criterion. We introduce various empirical criteria with the aim of accounting for proper convergence in phase-space sampling. The dynamic allocation is performed over parallel simulations by means of a new evolutionary algorithm involving 'immortal' individuals. The method is benchmarked against conventional procedures on a model of melt linear polyethylene. We record significant improvements in sampling efficiency, and thus in computational load, while the optimal sets of move frequencies may allow interesting physical insights into the particular systems simulated. This last aspect should provide a new tool for designing more efficient new MC moves.
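
    The core bookkeeping of such a scheme can be stated compactly: track a per-move efficiency score and shift the move frequencies toward the better performers while keeping every move alive. The sketch below is an illustrative reallocation rule in the spirit of the paper's dynamic allocation, not its actual evolutionary algorithm; the floor and mixing rate are assumed values.

```python
import numpy as np

def reallocate(freq, efficiency, floor=0.05, rate=0.5):
    # Mix the current move frequencies with a target proportional to the
    # measured past efficiency of each move type (e.g. acceptance rate times
    # mean-square displacement per CPU second), keeping a floor so that no
    # move type is ever switched off entirely.
    target = efficiency / efficiency.sum()
    new = (1.0 - rate) * freq + rate * target
    new = np.maximum(new, floor)
    return new / new.sum()

# example: three move types (translation, rotation, reptation) where the
# third has proven much more efficient so far
freq = np.array([1 / 3, 1 / 3, 1 / 3])
efficiency = np.array([0.02, 0.05, 0.40])
print(reallocate(freq, efficiency))
```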

  3. Efficient Data-Worth Analysis Using a Multilevel Monte Carlo Method Applied in Oil Reservoir Simulations

    NASA Astrophysics Data System (ADS)

    Lu, D.; Ricciuto, D. M.; Evans, K. J.

    2017-12-01

    Data-worth analysis plays an essential role in improving the understanding of the subsurface system, in developing and refining subsurface models, and in supporting rational water resources management. However, data-worth analysis is computationally expensive, as it requires quantifying parameter uncertainty, prediction uncertainty, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface simulations using standard Monte Carlo (MC) sampling or advanced surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian analysis of data-worth using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce the computational cost through the use of multifidelity approximations. As data-worth analysis involves a great number of expectation estimations, the cost savings from MLMC in the assessment can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we apply it to a highly heterogeneous oil reservoir simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data and are consistent with the estimation obtained from standard MC. Compared to standard MC, however, MLMC greatly reduces the computational cost of the uncertainty reduction estimation, with up to 600 days of cost savings when one processor is used.
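
    The estimator underlying MLMC is the telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with most samples drawn on the cheap coarse levels and only a few on the fine ones. The sketch below shows this structure with a stand-in model in place of a reservoir simulator; the level model and sample counts are illustrative assumptions.

```python
import numpy as np

def mlmc_estimate(sampler, n_samples):
    # Multilevel MC: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with many
    # cheap samples on coarse levels and few expensive ones on fine levels.
    # sampler(level, n) must return paired (P_l, P_{l-1}) samples computed
    # from the same random inputs (with P_{-1} taken as 0).
    total = 0.0
    for level, n in enumerate(n_samples):
        fine, coarse = sampler(level, n)
        total += np.mean(fine - coarse)
    return total

def toy_sampler(level, n, rng=np.random.default_rng(3)):
    # stand-in for a reservoir simulator: level l halves the bias
    w = rng.normal(size=n)                     # shared randomness across levels
    fine = w + 2.0 ** -(level + 1)
    coarse = (w + 2.0 ** -level) if level > 0 else np.zeros(n)
    return fine, coarse

print(mlmc_estimate(toy_sampler, n_samples=[4096, 1024, 256, 64]))
```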

  4. ACCELERATING FUSION REACTOR NEUTRONICS MODELING BY AUTOMATIC COUPLING OF HYBRID MONTE CARLO/DETERMINISTIC TRANSPORT ON CAD GEOMETRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W

    2015-01-01

    Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).

  5. Neutron track length estimator for GATE Monte Carlo dose calculation in radiotherapy.

    PubMed

    Elazhar, H; Deschler, T; Létang, J M; Nourreddine, A; Arbor, N

    2018-06-20

    The out-of-field dose in radiation therapy is a growing concern with regard to late side-effects and secondary cancer induction. In high-energy x-ray therapy, the secondary neutrons generated through photonuclear reactions in the accelerator are part of this secondary dose. The neutron dose is currently not estimated by the treatment planning system, although it appears to be preponderant at distances greater than 50 cm from the isocenter. Monte Carlo simulation has become the gold standard for accurately calculating the neutron dose under specific treatment conditions, but the method is also known for its slow statistical convergence, which makes it difficult to use on a clinical basis. The neutron track length estimator, a neutron variance reduction technique inspired by the track length estimator method, has thus been developed for the first time in the Monte Carlo code GATE to allow fast computation of the neutron dose in radiotherapy. The details of its implementation, as well as a comparison of its performance against the analog MC method, are presented here. A speedup factor of 15 to 400 can be obtained by our method, with a mean difference in the dose calculation of about 1% in comparison with the analog MC method.
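
    The underlying idea of a track-length estimator is that every particle scores its path length through a cell, rather than scoring only at collision sites, so sparsely collided regions still receive contributions. The 1D toy version below illustrates only that scoring step; GATE's actual neutron implementation is of course far more involved.

```python
import numpy as np

def track_length_flux(tracks, edges, cell_volume):
    # Track-length estimator: each track (x_in, x_out, weight) contributes
    # its overlap length with every cell, so cells with no collisions still
    # receive a score; fluence = total track length / cell volume.
    flux = np.zeros(len(edges) - 1)
    for x0, x1, w in tracks:
        lo, hi = min(x0, x1), max(x0, x1)
        for k in range(len(flux)):
            overlap = max(0.0, min(hi, edges[k + 1]) - max(lo, edges[k]))
            flux[k] += w * overlap
    return flux / cell_volume

edges = np.linspace(0.0, 10.0, 11)        # ten 1 cm cells (illustrative)
tracks = [(0.0, 3.2, 1.0), (2.5, 9.1, 0.7)]
print(track_length_flux(tracks, edges, cell_volume=1.0))
```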

  6. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences, as well as improving the ELISA microarray process, requires both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.

  7. Poster — Thur Eve — 47: Monte Carlo Simulation of Scp, Sc and Sp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhan, Lixin; Jiang, Runqing; Osei, Ernest K.

    The in-water output ratio (Scp), in-air output ratio (Sc), and phantom scattering factor (Sp) are important parameters for radiotherapy dose calculation. Experimentally, Scp is obtained by measuring the dose rate ratio in a water phantom, and Sc the water kerma rate ratio in air. There is no method that allows direct measurement of Sp. The Monte Carlo (MC) method has been used in the literature to simulate Scp and Sc, similarly to the experimental setups, but to the best of our knowledge no direct MC simulation of Sp is available yet. We propose in this report a method for performing direct MC simulation of Sp. Starting from the definition, we derived that the Sp of a clinical photon beam can be approximated by the ratio of the dose rates contributed by the primary beam for a given field size and for the reference field size. Since only the primary beam is used, any linac head scattering should be excluded from the simulation, which can be realized by using the incident electron as a scoring parameter for MU. We performed MC simulations for Scp, Sc and Sp. Scp matches well with golden beam data. Sp obtained by the proposed method agrees well with that obtained using the traditional relation Sp = Scp/Sc. Since the smaller the field size, the more the primary beam dominates, our Sp simulation method is accurate for small fields. By analyzing the calculated data, we found that this method can also be used without problems for large fields; the difference it introduces is clinically insignificant.

  8. Monte Carlo MP2 on Many Graphical Processing Units.

    PubMed

    Doran, Alexander E; Hirata, So

    2016-10-11

    In the Monte Carlo second-order many-body perturbation (MC-MP2) method, the long sum-of-product matrix expression of the MP2 energy, whose literal evaluation may be poorly scalable, is recast into a single high-dimensional integral of functions of electron pair coordinates, which is evaluated by the scalable method of Monte Carlo integration. The sampling efficiency is further accelerated by the redundant-walker algorithm, which allows a maximal reuse of electron pairs. Here, a multitude of graphical processing units (GPUs) offers a uniquely ideal platform to expose multilevel parallelism: fine-grain data-parallelism for the redundant-walker algorithm in which millions of threads compute and share orbital amplitudes on each GPU; coarse-grain instruction-parallelism for near-independent Monte Carlo integrations on many GPUs with few and infrequent interprocessor communications. While the efficiency boost by the redundant-walker algorithm on central processing units (CPUs) grows linearly with the number of electron pairs and tends to saturate when the latter exceeds the number of orbitals, on a GPU it grows quadratically before it increases linearly and then eventually saturates at a much larger number of pairs. This is because the orbital constructions are nearly perfectly parallelized on a GPU and thus completed in a near-constant time regardless of the number of pairs. In consequence, an MC-MP2/cc-pVDZ calculation of a benzene dimer is 2700 times faster on 256 GPUs (using 2048 electron pairs) than on two CPUs, each with 8 cores (which can use only up to 256 pairs effectively). We also numerically determine that the cost to achieve a given relative statistical uncertainty in an MC-MP2 energy increases as O(n^3) or better with system size n, which may be compared with the O(n^5) scaling of the conventional implementation of deterministic MP2. We thus establish the scalability of MC-MP2 with both system and computer sizes.

  9. Efficient gradient-based Monte Carlo simulation of materials: Applications to amorphous Si and Fe and Ni clusters

    NASA Astrophysics Data System (ADS)

    Limbu, Dil; Biswas, Parthapratim

    We present a simple and efficient Monte Carlo (MC) simulation of iron (Fe) and nickel (Ni) clusters with N = 5-100 atoms and of amorphous silicon (a-Si), starting from a random configuration. Using the Sutton-Chen and Finnis-Sinclair potentials for Ni (fcc lattice) and Fe (bcc lattice), respectively, and the Stillinger-Weber potential for a-Si, the total energy of the system is optimized by employing MC moves that combine the stochastic nature of MC simulations with the gradient of the potential function. For both iron and nickel clusters, the energy of the configurations is found to be very close to the values listed in the Cambridge Cluster Database, whereas the maximum force on each cluster is found to be much lower than the corresponding value obtained from the optimized structural configurations reported in the database. An extension of the method to model the amorphous state of Si is presented, and the results are compared with experimental data and those obtained from other simulation methods. The work is partially supported by the NSF under Grant Number DMR 1507166.
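
    A common way to blend stochastic moves with gradient information is the "smart Monte Carlo" (Langevin-type) proposal, in which each trial displacement follows the negative gradient plus noise and is corrected by a Metropolis-Hastings test. The sketch below shows that generic move; it is not the authors' specific scheme, and the step size and toy potential are assumed values.

```python
import numpy as np

def smart_mc_step(x, energy, grad, dt=0.05, beta=1.0, rng=np.random.default_rng(2)):
    # Gradient-biased ("smart") MC move: drift along -grad plus noise,
    # corrected by a Metropolis-Hastings test for the asymmetric proposal
    g = grad(x)
    prop = x - dt * g + np.sqrt(2.0 * dt / beta) * rng.normal(size=x.shape)
    gp = grad(prop)
    # log proposal densities q(x'|x) and q(x|x') for the Gaussian drift move
    log_q_fwd = -beta * np.sum((prop - x + dt * g) ** 2) / (4.0 * dt)
    log_q_rev = -beta * np.sum((x - prop + dt * gp) ** 2) / (4.0 * dt)
    log_acc = -beta * (energy(prop) - energy(x)) + log_q_rev - log_q_fwd
    if np.log(rng.random()) < log_acc:
        return prop, True
    return x, False

# usage on a toy harmonic potential (stand-in for a cluster potential)
energy = lambda x: 0.5 * np.sum(x ** 2)
grad = lambda x: x
x = np.array([2.0, -3.0, 1.0])
for _ in range(2000):
    x, accepted = smart_mc_step(x, energy, grad)
print("final configuration:", x)
```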

  10. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics

    NASA Astrophysics Data System (ADS)

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute unified device architecture-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, results of the adding-doubling method, and other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing; double-precision computing for floating-point arithmetic operations provides higher accuracy.

  11. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics.

    PubMed

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute unified device architecture-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, results of the adding-doubling method, and other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing; double-precision computing for floating-point arithmetic operations provides higher accuracy.

  12. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE PAGES

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...

    2018-04-30

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle is required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.

  13. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle is required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.

  14. Patient-specific Monte Carlo-based dose-kernel approach for inverse planning in afterloading brachytherapy.

    PubMed

    D'Amours, Michel; Pouliot, Jean; Dagnault, Anne; Verhaegen, Frank; Beaulieu, Luc

    2011-12-01

    Brachytherapy planning software relies on the Task Group report 43 dosimetry formalism. This formalism, based on a water approximation, neglects various heterogeneous materials present during treatment. Various studies have suggested that these heterogeneities should be taken into account to improve the treatment quality. The present study sought to demonstrate the feasibility of incorporating Monte Carlo (MC) dosimetry within an inverse planning algorithm to improve the dose conformity and increase the treatment quality. The method was based on precalculated dose kernels in full patient geometries, representing the dose distribution of a brachytherapy source at a single dwell position using MC simulations and the Geant4 toolkit. These dose kernels are used by the inverse planning by simulated annealing tool to produce a fast MC-based plan. A test was performed for an interstitial brachytherapy breast treatment using two different high-dose-rate brachytherapy sources: the microSelectron iridium-192 source and the electronic brachytherapy source Axxent operating at 50 kVp. A research version of the inverse planning by simulated annealing algorithm was combined with MC to provide a method to fully account for the heterogeneities in dose optimization, using the MC method. The effect of the water approximation was found to depend on photon energy, with greater dose attenuation for the lower energies of the Axxent source compared with iridium-192. For the latter, an underdosage of 5.1% for the dose received by 90% of the clinical target volume was found. A new method to optimize afterloading brachytherapy plans that uses MC dosimetric information was developed. Including computed tomography-based information in MC dosimetry in the inverse planning process was shown to take into account the full range of scatter and heterogeneity conditions. This led to significant dose differences compared with the Task Group report 43 approach for the Axxent source. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. A preliminary study of in-house Monte Carlo simulations: an integrated Monte Carlo verification system.

    PubMed

    Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hideki; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki

    2009-10-01

    To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.

  16. Daylighting Strategies for U. S. Air Force Office Facilities: Economic Analysis of Building Energy Performance and Life-Cycle Cost Modeling with Monte Carlo Method

    DTIC Science & Technology

    2009-03-26

    annually (McHugh et al., 1998). USAF has used daylighting as an energy savings strategy in earlier studies (Holtz, 1990); and is pursuing it to meet... using renewable energy to generate electricity (McHugh et al., 1998). For example, traditional utility systems that are straining to meet peak... (1998) found that lighting accounts for 40-50% of commercial energy consumption and McHugh, Burns, and Hittle (1998) stated that electric lighting and

  17. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun

    2015-09-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by successfully running it on a variety of different computing devices including an NVidia GPU card, two AMD GPU cards and an Intel CPU processor. Computational efficiency among these platforms was compared.

  18. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).

    PubMed

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-10-07

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by successfully running it on a variety of different computing devices including an NVidia GPU card, two AMD GPU cards and an Intel CPU processor. Computational efficiency among these platforms was compared.

  19. Game of Life on the Equal Degree Random Lattice

    NASA Astrophysics Data System (ADS)

    Shao, Zhi-Gang; Chen, Tao

    2010-12-01

    An effective matrix method is used to build the equal degree random (EDR) lattice, and a cellular automaton game of life on the EDR lattice is then studied by Monte Carlo (MC) simulation. The standard mean field approximation (MFA) is applied, and the density of live cells is found to be ρ=0.37017 by MFA, which is consistent with the result ρ=0.37±0.003 from MC simulation.

  20. A Practical Cone-beam CT Scatter Correction Method with Optimized Monte Carlo Simulations for Image-Guided Radiation Therapy

    PubMed Central

    Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun

    2015-01-01

    Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved a computation time of less than 30 s, including the time for both the scatter estimation and CBCT reconstruction steps. The efficacy of our method and its high computational efficiency make our method attractive for clinical use. PMID:25860299

  1. Head-and-neck IMRT treatments assessed with a Monte Carlo dose calculation engine.

    PubMed

    Seco, J; Adams, E; Bidmead, M; Partridge, M; Verhaegen, F

    2005-03-07

    IMRT is frequently used in the head-and-neck region, which contains materials of widely differing densities (soft tissue, bone, air-cavities). Conventional methods of dose computation for these complex, inhomogeneous IMRT cases involve significant approximations. In the present work, a methodology for the development, commissioning and implementation of a Monte Carlo (MC) dose calculation engine for intensity modulated radiotherapy (MC-IMRT) is proposed which can be used by radiotherapy centres interested in developing MC-IMRT capabilities for research or clinical evaluations. The method proposes three levels for developing, commissioning and maintaining a MC-IMRT dose calculation engine: (a) development of a MC model of the linear accelerator, (b) validation of the MC model for IMRT and (c) periodic quality assurance (QA) of the MC-IMRT system. The first step, level (a), in developing an MC-IMRT system is to build a model of the linac that correctly predicts standard open field measurements for percentage depth-dose and off-axis ratios. Validation of MC-IMRT, level (b), can be performed in a Rando phantom and in a homogeneous water-equivalent phantom. Ultimately, periodic quality assurance of the MC-IMRT system, level (c), is needed to verify the MC-IMRT dose calculation system. Once the MC-IMRT dose calculation system is commissioned, it can be applied to more complex clinical IMRT treatments. The MC-IMRT system implemented at the Royal Marsden Hospital was used for IMRT calculations for a patient undergoing treatment for primary disease with nodal involvement in the head-and-neck region (primary treated to 65 Gy and nodes to 54 Gy), while sparing the spinal cord, brain stem and parotid glands. Preliminary MC results predict a decrease of approximately 1-2 Gy in the median dose of both the primary tumour and nodal volumes (compared with both pencil beam and collapsed cone). This is possibly due to the large air-cavity (the larynx of the patient) situated in the centre of the primary PTV and the approximations present in the dose calculation.

  2. Development of an effective dose coefficient database using a computational human phantom and Monte Carlo simulations to evaluate exposure dose for the usage of NORM-added consumer products.

    PubMed

    Yoo, Do Hyeon; Shin, Wook-Geun; Lee, Jaekook; Yeom, Yeon Soo; Kim, Chan Hyeong; Chang, Byung-Uck; Min, Chul Hee

    2017-11-01

    After the Fukushima accident in Japan, the Korean Government implemented the "Act on Protective Action Guidelines Against Radiation in the Natural Environment" to regulate unnecessary radiation exposure to the public. However, despite the law, which came into effect in July 2012, an appropriate method to evaluate the equivalent and effective doses from naturally occurring radioactive material (NORM) in consumer products is not available. The aim of the present study is to develop and validate an effective dose coefficient database enabling simple and correct evaluation of the effective dose due to the usage of NORM-added consumer products. To construct the database, we used a skin source method with a computational human phantom and Monte Carlo (MC) simulation. For validation, the effective dose obtained from the database with an interpolation method was compared with that from the original MC method. Our results showed similar equivalent doses across the 26 organs, with average organ doses from the database and the MC calculations differing by < 5%. The differences in the effective doses were even smaller, and the results generally show that equivalent and effective doses can be quickly calculated with the database with sufficient accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
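
    Conceptually, such a database replaces a per-product MC run with a table lookup: the effective dose coefficient for the source energy is interpolated from precomputed MC results and scaled by source strength and usage time. The sketch below shows such a lookup with log-log interpolation; the table values are placeholders, not coefficients from the paper.

```python
import numpy as np

def effective_dose(E_keV, table_E, table_coeff, activity_Bq, hours):
    # Log-log interpolation of the effective dose coefficient from the
    # precomputed MC database, scaled by source strength and usage time
    c = np.exp(np.interp(np.log(E_keV), np.log(table_E), np.log(table_coeff)))
    return c * activity_Bq * hours

# placeholder table (NOT real coefficients): energy grid in keV and
# effective dose per unit activity and time, Sv / (Bq h)
table_E = np.array([50.0, 100.0, 300.0, 600.0, 1000.0, 2000.0])
table_coeff = np.array([2e-13, 5e-13, 1.5e-12, 2.8e-12, 4e-12, 7e-12])
print(effective_dose(662.0, table_E, table_coeff, activity_Bq=1000.0, hours=8.0))
```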

  3. MUFFSgenMC: An Open Source MUon Flexible Framework for Spectral GENeration for Monte Carlo Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatzidakis, Stylianos; Greulich, Christopher

    A cosmic ray Muon Flexible Framework for Spectral GENeration for Monte Carlo Applications (MUFFSgenMC) has been developed to support state-of-the-art cosmic ray muon tomographic applications. The flexible framework allows for easy and fast creation of source terms for popular Monte Carlo applications like GEANT4 and MCNP. This code framework simplifies the process of simulations used for cosmic ray muon tomography.

  4. Modified Monte Carlo method for study of electron transport in degenerate electron gas in the presence of electron-electron interactions, application to graphene

    NASA Astrophysics Data System (ADS)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2017-07-01

    Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. This part of the simulation algorithm is therefore described in detail.
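
    The standard way to impose the exclusion principle in ensemble MC is a rejection step: a proposed scattering into final state k' is kept with probability 1 - f(k'), where f is the occupancy estimated on a k-space (or energy) grid, so nearly full states block the transition. The sketch below illustrates that test with a toy step-like occupancy; the paper's contribution is a refined, cheaper version of this idea for e-e scattering.

```python
import numpy as np

rng = np.random.default_rng(8)

def pauli_accept(f_grid, idx):
    # Pauli-blocking rejection: accept a proposed scattering event into
    # occupancy cell idx with probability 1 - f(k'), so that transitions
    # into nearly full states are suppressed
    return rng.random() < max(0.0, 1.0 - f_grid[idx])

# toy occupancy: a nearly degenerate (step-like) distribution on an energy grid
energies = np.linspace(0.0, 2.0, 50)
f_grid = 1.0 / (1.0 + np.exp((energies - 1.0) / 0.05))
accepted = sum(pauli_accept(f_grid, rng.integers(0, 50)) for _ in range(10_000))
print("acceptance fraction:", accepted / 10_000)   # roughly the empty fraction
```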

  5. Monte Carlo decision curve analysis using aggregate data.

    PubMed

    Hozo, Iztok; Tsalatsanis, Athanasios; Djulbegovic, Benjamin

    2017-02-01

    Decision curve analysis (DCA) is an increasingly used method for evaluating diagnostic tests and predictive models, but its application requires individual patient data. The Monte Carlo (MC) method can be used to simulate probabilities and outcomes of individual patients and offers an attractive option for application of DCA. We constructed a MC decision model to simulate individual probabilities of outcomes of interest. These probabilities were contrasted against the threshold probability at which a decision-maker is indifferent between key management strategies: treat all, treat none or use predictive model to guide treatment. We compared the results of DCA with MC simulated data against the results of DCA based on actual individual patient data for three decision models published in the literature: (i) statins for primary prevention of cardiovascular disease, (ii) hospice referral for terminally ill patients and (iii) prostate cancer surgery. The results of MC DCA and patient data DCA were identical. To the extent that patient data DCA were used to inform decisions about statin use, referral to hospice or prostate surgery, the results indicate that MC DCA could have also been used. As long as the aggregate parameters on distribution of the probability of outcomes and treatment effects are accurately described in the published reports, the MC DCA will generate indistinguishable results from individual patient data DCA. We provide a simple, easy-to-use model, which can facilitate wider use of DCA and better evaluation of diagnostic tests and predictive models that rely only on aggregate data reported in the literature. © 2017 Stichting European Society for Clinical Investigation Journal Foundation.
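
    The computation behind a decision curve is small once individual risks are available: for each threshold p_t, the net benefit of "treat if predicted risk >= p_t" is TP/n - (p_t/(1 - p_t))·FP/n, compared against treat-all and treat-none. The sketch below simulates individual probabilities from assumed aggregate parameters (a Beta distribution) and evaluates the curve, in the spirit of MC DCA rather than as the authors' code.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
# aggregate inputs (illustrative assumptions): predicted risks follow a Beta
# distribution; each simulated patient's outcome is drawn from their risk
risk = rng.beta(2.0, 8.0, size=n)
event = rng.random(n) < risk

def net_benefit(risk, event, pt):
    # net benefit of the policy "treat if predicted risk >= pt"
    treat = risk >= pt
    tp = np.mean(treat & event)
    fp = np.mean(treat & ~event)
    return tp - fp * pt / (1.0 - pt)

prev = event.mean()
for pt in (0.05, 0.10, 0.20):
    nb_all = prev - (1.0 - prev) * pt / (1.0 - pt)   # treat-all strategy
    print(f"pt={pt:.2f}  model={net_benefit(risk, event, pt):.4f}  "
          f"treat-all={nb_all:.4f}  treat-none=0")
```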

  6. An unbiased Hessian representation for Monte Carlo PDFs.

    PubMed

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Latorre, José Ignacio; Rojo, Juan

    We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, when applied to a Hessian PDF set (MMHT14) transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available, together with (through LHAPDF6) a Hessian representation of the NNPDF3.0 set and the MC-H PDF set.

  7. NOTE: Monte Carlo evaluation of kerma in an HDR brachytherapy bunker

    NASA Astrophysics Data System (ADS)

    Pérez-Calatayud, J.; Granero, D.; Ballester, F.; Casal, E.; Crispin, V.; Puchades, V.; León, A.; Verdú, G.

    2004-12-01

    In recent years, the use of high dose rate (HDR) afterloader machines has greatly increased due to the shift from traditional Cs-137/Ir-192 low dose rate (LDR) brachytherapy to HDR brachytherapy. The required concrete shielding and, where appropriate, the lead shielding in the door are usually calculated with analytical methods provided in documents published by the ICRP, the IAEA and the NCRP. The purpose of this study is to perform a more realistic kerma evaluation at the entrance maze door of an HDR bunker using the Monte Carlo code GEANT4. The Monte Carlo results were validated experimentally. The spectrum at the maze entrance door, obtained with Monte Carlo, has an average energy of about 110 keV, maintaining a similar value along the length of the maze. Comparison of the analytical estimates with the Monte Carlo ones shows that the results obtained using the albedo coefficient from the ICRP document more closely match those given by the Monte Carlo method, although the maximum value given by the MC calculations is 30% greater.

  8. LMC: Logarithmantic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Mantz, Adam B.

    2017-06-01

    LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
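
    The heart of any such engine is a loop of the kind below, wrapped around a black-box log-posterior callable that may be an expensive third-party likelihood; parallelization and adaptation are layered on top. This minimal Metropolis sampler illustrates that core loop only and is not LMC's actual API.

```python
import numpy as np

def metropolis(log_post, x0, steps=5000, scale=0.5, seed=4):
    # Minimal random-walk Metropolis loop around a black-box log-posterior;
    # an expensive third-party likelihood would be called inside log_post
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((steps, x.size))
    for k in range(steps):
        prop = x + scale * rng.normal(size=x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[k] = x
    return chain

chain = metropolis(lambda t: -0.5 * np.sum(t ** 2), x0=[3.0, -2.0])
print(chain.mean(axis=0), chain.std(axis=0))   # should approach 0 and 1
```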

  9. Constant-pH Hybrid Nonequilibrium Molecular Dynamics–Monte Carlo Simulation Method

    PubMed Central

    2016-01-01

    A computational method is developed to carry out explicit solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys. 2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states that are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance arising from such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum reversal prescription. To achieve higher efficiency, the original neMD–MC scheme is separated into two steps, reducing the need for generating a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly attributed via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent pKa neMD–MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because of the simple linear increase in the computational cost relative to the number of titratable sites, the present method is naturally able to treat extremely large systems. PMID:26300709
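
    The efficiency gain of the two-step scheme comes from screening protonation-state changes with a cheap Metropolis test on the intrinsic pKa before any costly neMD switching trajectory is launched. A minimal sketch of that first step is given below, assuming the usual convention that deprotonation is favored when pH exceeds the intrinsic pKa; it is a schematic of the idea, not the published implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

def attempt_protonation_change(protonated, pH, pKa_int):
    # Step 1: cheap Metropolis test on the intrinsic pKa. The expensive
    # neMD switching trajectory (step 2) is generated only on acceptance.
    # Convention: deprotonation is favored when pH > pKa_int.
    log10_ratio = (pH - pKa_int) if protonated else (pKa_int - pH)
    return rng.random() < min(1.0, 10.0 ** log10_ratio)

# e.g. an Asp-like site (intrinsic pKa ~ 4.0, an assumed value) at pH 7 almost
# always accepts deprotonation attempts and almost never protonation attempts
print(attempt_protonation_change(True, pH=7.0, pKa_int=4.0))
print(attempt_protonation_change(False, pH=7.0, pKa_int=4.0))
```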

  10. Two-dimensional molecular line transfer for a cometary coma

    NASA Astrophysics Data System (ADS)

    Szutowicz, S.

    2017-09-01

    In the proposed axisymmetric model of the cometary coma, the gas density profile is described by an angular density function. Three methods for treating two-dimensional radiative transfer are compared: the Large Velocity Gradient (LVG) method (the Sobolev method), Accelerated Lambda Iteration (ALI), and accelerated Monte Carlo (MC).

  11. Probability of misclassifying biological elements in surface waters.

    PubMed

    Loga, Małgorzata; Wierzchołowska-Dziedzic, Anna

    2017-11-24

    Measurement uncertainties are inherent to the assessment of biological indices of water bodies. The effect of these uncertainties on the probability of misclassifying ecological status is the subject of this paper. Four Monte-Carlo (M-C) models were applied to simulate the occurrence of random errors in the measurements of metrics corresponding to four biological elements of surface waters: macrophytes, phytoplankton, phytobenthos, and benthic macroinvertebrates. Long series of error-prone measurement values of these metrics, generated by the M-C models, were used to identify cases in which the value of any of the four biological indices lay outside the "true" water body class, i.e., outside the class assigned from the actual physical measurements. The fraction of such cases in the M-C generated series was used to estimate the probability of misclassification. The method is particularly useful for estimating the probability of misclassifying the ecological status of surface water bodies in the case of short sequences of measurements of biological indices. The results of the Monte-Carlo simulations show a relatively high sensitivity of this probability to measurement errors of the river macrophyte index (MIR) and high robustness to measurement errors of the benthic macroinvertebrate index (MMI). The proposed method of using Monte-Carlo models to estimate the probability of misclassification has significant potential for assessing the uncertainty of the water body status reported to the EC by the EU member countries under the WFD. The method can also be readily applied in risk assessment of water management decisions before adopting status-dependent corrective actions.
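
    The essence of the M-C models can be captured in a few lines: perturb the measured metric with a simulated error model, reclassify each perturbed value, and count how often the class differs from the "true" one. The sketch below assumes a normal error model and hypothetical class boundaries, both of which are illustrative choices rather than the paper's specifications.

```python
import numpy as np

def p_misclass(metric, sigma, boundaries, n_mc=100_000, seed=7):
    # Perturb the measured metric with simulated normal errors (an assumed
    # error model), reclassify, and count how often the class changes
    rng = np.random.default_rng(seed)
    true_class = np.searchsorted(boundaries, metric)
    classes = np.searchsorted(boundaries, metric + sigma * rng.normal(size=n_mc))
    return np.mean(classes != true_class)

# hypothetical 0-1 index with five status classes and a 0.06 measurement sd;
# a value near a class boundary yields a high misclassification probability
print(p_misclass(0.62, 0.06, boundaries=[0.2, 0.4, 0.6, 0.8]))
```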

  12. Diffusion in confinement: kinetic simulations of self- and collective diffusion behavior of adsorbed gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abouelnasr, MKF; Smit, B

    2012-01-01

    The self- and collective-diffusion behaviors of adsorbed methane, helium, and isobutane in zeolite frameworks LTA, MFI, AFI, and SAS were examined at various concentrations using a range of molecular simulation techniques including Molecular Dynamics (MD), Monte Carlo (MC), Bennett-Chandler (BC), and kinetic Monte Carlo (kMC). This paper has three main results. (1) A novel model for the process of adsorbate movement between two large cages was created, allowing the formulation of a mixing rule for the re-crossing coefficient between two cages of unequal loading. The predictions from this mixing rule were found to agree quantitatively with explicit simulations. (2) A new approach to the dynamically corrected Transition State Theory method to analytically calculate self-diffusion properties was developed, explicitly accounting for nanoscale fluctuations in concentration. This approach was demonstrated to quantitatively agree with previous methods, but is uniquely suited to be adapted to a kMC simulation that can simulate the collective-diffusion behavior. (3) While at low and moderate loadings the self- and collective-diffusion behaviors in LTA are observed to coincide, at higher concentrations they diverge. A change in the adsorbate packing scheme was shown to cause this divergence, a trait which is replicated in a kMC simulation that explicitly models this behavior. These phenomena were further investigated for isobutane in zeolite MFI, where MD results showed a separation in self- and collective-diffusion behavior that was reproduced with kMC simulations.

  13. Diffusion in confinement: kinetic simulations of self- and collective diffusion behavior of adsorbed gases.

    PubMed

    Abouelnasr, Mahmoud K F; Smit, Berend

    2012-09-07

    The self- and collective-diffusion behaviors of adsorbed methane, helium, and isobutane in zeolite frameworks LTA, MFI, AFI, and SAS were examined at various concentrations using a range of molecular simulation techniques including Molecular Dynamics (MD), Monte Carlo (MC), Bennett-Chandler (BC), and kinetic Monte Carlo (kMC). This paper has three main results. (1) A novel model for the process of adsorbate movement between two large cages was created, allowing the formulation of a mixing rule for the re-crossing coefficient between two cages of unequal loading. The predictions from this mixing rule were found to agree quantitatively with explicit simulations. (2) A new approach to the dynamically corrected Transition State Theory method to analytically calculate self-diffusion properties was developed, explicitly accounting for nanoscale fluctuations in concentration. This approach was demonstrated to quantitatively agree with previous methods, but is uniquely suited to be adapted to a kMC simulation that can simulate the collective-diffusion behavior. (3) While at low and moderate loadings the self- and collective-diffusion behaviors in LTA are observed to coincide, at higher concentrations they diverge. A change in the adsorbate packing scheme was shown to cause this divergence, a trait which is replicated in a kMC simulation that explicitly models this behavior. These phenomena were further investigated for isobutane in zeolite MFI, where MD results showed a separation in self- and collective-diffusion behavior that was reproduced with kMC simulations.

  14. A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.

    PubMed

    Ma, Jiasen; Beltran, Chris; Wan Chan Tseung, Hok Seum; Herman, Michael G

    2014-12-01

    Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼1000 cm^3 and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. An MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars. The fast calculation and optimization make the system easily expandable to robust and multicriteria optimization.
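
    As a rough illustration of the optimization step, the sketch below fits nonnegative spot weights to a prescription by projected gradient descent on a least-squares objective. The dose influence matrix is random here (in the paper it comes from the GPU MC engine), and the paper's modified least-squares method and DVH handling are not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy dose influence matrix D (voxels x spots); purely illustrative.
        n_vox, n_spots = 200, 50
        D = rng.random((n_vox, n_spots)) / n_spots
        d_presc = np.ones(n_vox)                  # prescribed dose per voxel

        # Projected gradient descent on ||D w - d||^2 subject to w >= 0
        w = np.full(n_spots, 1.0)
        step = 1.0 / np.linalg.norm(D, 2) ** 2    # 1 / spectral-norm^2 step size
        for _ in range(2000):
            grad = D.T @ (D @ w - d_presc)
            w = np.maximum(w - step * grad, 0.0)  # enforce nonnegative spot weights

        print("residual norm:", np.linalg.norm(D @ w - d_presc))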

  15. SU-E-T-491: Importance of Energy Dependent Protons Per MU Calibration Factors in IMPT Dose Calculations Using Monte Carlo Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Randeniya, S; Mirkovic, D; Titt, U

    2014-06-01

    Purpose: In intensity modulated proton therapy (IMPT), energy dependent protons per monitor unit (MU) calibration factors are important parameters that determine absolute dose values from energy deposition data obtained from Monte Carlo (MC) simulations. The purpose of this study was to assess the sensitivity of MC-computed absolute dose distributions to the protons/MU calibration factors in IMPT. Methods: A “verification plan” (i.e., treatment beams applied individually to a water phantom) of a head and neck patient plan was calculated using the MC technique. The patient plan had three beams: one posterior-anterior (PA) and two anterior oblique. The dose prescription was 66 Gy in 30 fractions. Of the total MUs, 58% was delivered in the PA beam, and 25% and 17% in the other two. Energy deposition data obtained from the MC simulation were converted to Gy using energy dependent protons/MU calibration factors obtained from two methods. The first method is based on experimental measurements and MC simulations. The second is based on hand calculations of how many ion pairs are produced per proton in the dose monitor and how many ion pairs correspond to 1 MU (the vendor-recommended method). Dose distributions obtained from method one were compared with those from method two. Results: An average difference of 8% in protons/MU calibration factors between methods one and two translated into a 27% difference in absolute dose values for the PA beam; although the dose distributions preserved the shape of the 3D dose distribution qualitatively, they were different quantitatively. For the two oblique beams, no significant difference in absolute dose was observed. Conclusion: The results demonstrate that protons/MU calibration factors can have a significant impact on absolute dose values in IMPT, depending on the fraction of MUs delivered. As the number of MUs increases, the effect of the calibration factors is amplified. In determining protons/MU calibration factors, the experimental method should be preferred for MC dose calculations. Research supported by National Cancer Institute grant P01CA021239.
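
    The role of the calibration factor is a simple linear scaling, as the hedged arithmetic below shows; every number in it is hypothetical, chosen only to make the propagation of a calibration error visible.

        # All numbers are hypothetical; they only illustrate the linear scaling.
        edep_per_proton = 3.0e-12   # Gy deposited in a voxel per source proton (assumed)
        protons_per_mu = 9.0e8      # energy-dependent calibration factor (assumed)
        mu_delivered = 120.0        # monitor units for this energy layer

        dose = edep_per_proton * protons_per_mu * mu_delivered
        print(f"voxel dose: {dose:.4f} Gy")

        # A calibration error propagates linearly within a layer; summing layers
        # with different MU fractions is how larger beam-level differences arise.
        print(f"with a +8% calibration factor: {dose * 1.08:.4f} Gy")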

  16. Calculated X-ray Intensities Using Monte Carlo Algorithms: A Comparison to Experimental EPMA Data

    NASA Technical Reports Server (NTRS)

    Carpenter, P. K.

    2005-01-01

    Monte Carlo (MC) modeling has been used extensively to simulate electron scattering and x-ray emission from complex geometries. Comparisons are presented between MC results, experimental electron-probe microanalysis (EPMA) measurements, and phi(rhoz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been widely used to develop phi(rhoz) correction algorithms. X-ray intensity data produced by MC simulations represent an independent test of both experimental and phi(rhoz) correction algorithms. The alpha-factor method has previously been used to evaluate systematic errors in the analysis of semiconductor materials and silicate minerals, and is used here to compare the accuracy of experimental and MC-calculated x-ray data. X-ray intensities calculated by MC are used to generate alpha-factors using the certified compositions in the CuAu binary relative to pure Cu and Au standards. MC simulations are obtained using the NIST, WinCasino, and WinXray algorithms; derived x-ray intensities have a built-in atomic number correction, and are further corrected for absorption and characteristic fluorescence using the PAP phi(rhoz) correction algorithm. The Penelope code additionally simulates both characteristic and continuum x-ray fluorescence and thus requires no further correction for use in calculating alpha-factors.
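
    One common binary alpha-factor form is the Ziebold-Ogilvie relation, alpha = C(1 - k) / (k(1 - C)), with C the weight fraction and k the measured or MC-calculated intensity ratio; a minimal sketch, with illustrative values rather than the SRM certificate data, follows.

        # Ziebold-Ogilvie binary alpha-factor: alpha = C(1 - k) / (k(1 - C)).
        # C = weight fraction of the element, k = intensity k-ratio vs. a pure
        # standard. Values are illustrative, not SRM 481/482 certificate data.
        def alpha_factor(c, k):
            return (c * (1.0 - k)) / (k * (1.0 - c))

        c_cu = 0.40   # assumed Cu weight fraction in a CuAu alloy
        k_cu = 0.33   # assumed Cu k-ratio from an MC-calculated intensity
        print(f"alpha(Cu in CuAu) = {alpha_factor(c_cu, k_cu):.3f}")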

  17. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
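
    In the spirit of the closing exercise, a minimal slab-transmission Monte Carlo (plain Python rather than MCNP, with made-up cross sections and geometry) might look like this:

        import numpy as np

        rng = np.random.default_rng(2)

        # Mono-energetic transmission through a homogeneous slab; all values assumed.
        sigma_t, sigma_s = 1.0, 0.6   # total / scattering macroscopic XS (1/cm)
        thickness = 3.0               # slab thickness (cm)
        n_hist = 100_000

        transmitted = 0
        for _ in range(n_hist):
            x, mu = 0.0, 1.0                             # start at surface, moving forward
            while True:
                x += mu * rng.exponential(1.0 / sigma_t)  # sample flight distance
                if x < 0.0:                               # leaked out the front face
                    break
                if x > thickness:                         # escaped through the back
                    transmitted += 1
                    break
                if rng.random() > sigma_s / sigma_t:      # absorbed at this collision
                    break
                mu = rng.uniform(-1.0, 1.0)               # isotropic scatter in slab geometry

        print("transmission probability ~", transmitted / n_hist)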

  18. An Unsplit Monte-Carlo solver for the resolution of the linear Boltzmann equation coupled to (stiff) Bateman equations

    NASA Astrophysics Data System (ADS)

    Bernede, Adrien; Poëtte, Gaël

    2018-02-01

    In this paper, we are interested in the resolution of the time-dependent problem of particle transport in a medium whose composition evolves with time due to interactions. As a constraint, we want to use a Monte Carlo (MC) scheme for the transport phase. A common resolution strategy consists in a splitting between the MC/transport phase and the time discretization scheme/medium evolution phase. After going over and illustrating the main drawbacks of split solvers in a simplified configuration (monokinetic, scalar Bateman problem), we build a new Unsplit MC (UMC) solver that improves the accuracy of the solutions, avoids numerical instabilities, and is less sensitive to the time discretization. The new solver is essentially based on a Monte Carlo scheme with time dependent cross sections, implying the on-the-fly resolution of a reduced model for each MC particle describing the time evolution of the matter along its flight path.
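
    A sketch of the core sampling idea, under the assumption of a prescribed time-varying total cross section: the collision time is found by numerically inverting the optical-depth integral. The Sigma(t) profile and constants below are invented stand-ins for the reduced Bateman model.

        import numpy as np

        rng = np.random.default_rng(3)

        v = 1.0                                        # particle speed (cm/s), assumed

        def sigma_t(t):
            """Invented time-dependent total cross section (1/cm)."""
            return 2.0 * np.exp(-0.5 * t) + 0.1

        def sample_collision_time(t0, dt=1e-3, t_max=50.0):
            """Invert int_t0^t v*Sigma(t') dt' = -ln(xi) by stepwise accumulation."""
            target = -np.log(rng.random())             # sampled optical depth
            tau, t = 0.0, t0
            while tau < target and t < t_max:
                tau += v * sigma_t(t) * dt
                t += dt
            return t                                   # collision (or census) time

        print([round(sample_collision_time(0.0), 3) for _ in range(5)])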

  19. Comparison of doses calculated by the Monte Carlo method and measured by LiF TLD in the buildup region for a 60Co photon beam.

    PubMed

    Budanec, M; Knezević, Z; Bokulić, T; Mrcela, I; Vrtar, M; Vekić, B; Kusić, Z

    2008-12-01

    This work studied the percent depth doses of 60Co photon beams in the buildup region of a plastic phantom by LiF TLD measurements and by Monte Carlo calculations. An agreement within +/-1.5% was found between PDDs measured by TLD and calculated by the Monte Carlo method with the TLD in a plastic phantom. The dose in the plastic phantom was scored in voxels, with thickness scaled by physical and electron density. PDDs calculated by electron density scaling showed a better match with PDD(TLD)(MC); the difference is within +/-1.5% in the buildup region for square and rectangular field sizes.

  20. Monte Carlo role in radiobiological modelling of radiotherapy outcomes

    NASA Astrophysics Data System (ADS)

    El Naqa, Issam; Pater, Piotr; Seuntjens, Jan

    2012-06-01

    Radiobiological models are essential components of modern radiotherapy. They are increasingly applied to optimize and evaluate the quality of different treatment planning modalities. They are frequently used in designing new radiotherapy clinical trials by estimating the expected therapeutic ratio of new protocols. In radiobiology, the therapeutic ratio is estimated as the ratio of the expected gain in tumour control probability (TCP) to the risk of normal tissue complication probability (NTCP). However, estimates of TCP/NTCP are currently based on the deterministic and simplistic linear-quadratic formalism, which has limited predictive power when applied prospectively. Given the complex and stochastic nature of the physical, chemical and biological interactions associated with spatial and temporal radiation-induced effects in living tissues, it is conjectured that methods based on Monte Carlo (MC) analysis may provide better estimates of TCP/NTCP for radiotherapy treatment planning and trial design. Indeed, over the past few decades, methods based on MC have demonstrated superior performance for accurate simulation of radiation transport, tumour growth and particle track structures; however, successful application of modelling radiobiological response and outcomes in radiotherapy is still hampered by several challenges. In this review, we provide an overview of some of the main techniques used in radiobiological modelling for radiotherapy, with a focus on the MC role as a promising computational vehicle. We highlight the current challenges, issues and future potential of the MC approach towards a comprehensive systems-based framework in radiobiological modelling for radiotherapy.
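
    For reference, the deterministic linear-quadratic/Poisson TCP calculation that the review contrasts with MC-based approaches takes only a few lines; all parameter values below are illustrative.

        import numpy as np

        # Poisson TCP built on linear-quadratic cell survival; parameters assumed.
        alpha, beta = 0.3, 0.03        # LQ parameters (1/Gy, 1/Gy^2)
        n0 = 1e7                       # initial number of clonogens
        d, n_frac = 2.0, 30            # dose per fraction (Gy) and fraction count

        sf_per_fraction = np.exp(-alpha * d - beta * d ** 2)
        surviving = n0 * sf_per_fraction ** n_frac
        tcp = np.exp(-surviving)       # Poisson probability of zero survivors
        print(f"TCP = {tcp:.3f}")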

  1. Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation

    NASA Astrophysics Data System (ADS)

    Räisänen, Petri; Barker, Howard W.

    2004-07-01

    The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeatedly sampling and averaging those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of about 3.
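
    A toy version of the first variance-reduction idea, restricting samples to cloudy subcolumns while treating the clear-sky part deterministically, can be sketched as follows (all fluxes are fabricated):

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic subcolumns: one deterministic clear-sky flux, noisy cloudy fluxes.
        n_subcol, cloud_frac = 1000, 0.3
        is_cloudy = rng.random(n_subcol) < cloud_frac
        flux_clear = 300.0                                           # W/m^2, computed once
        flux_cloudy = 220.0 + 30.0 * rng.standard_normal(is_cloudy.sum())

        # Domain mean = clear part + sampled cloudy part (unbiased w.r.t. full ICA);
        # all random samples are spent on the cloudy fraction only.
        a_c = is_cloudy.mean()
        sample = rng.choice(flux_cloudy, size=32)
        domain_mean = (1.0 - a_c) * flux_clear + a_c * sample.mean()
        print(f"estimated domain-mean flux: {domain_mean:.1f} W/m^2")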

  2. TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisniega, A; Zbijewski, W; Stayman, J

    Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate uniformity improvement of 18 HU and contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to “oracle” constant fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution. The method is undergoing implementation in a novel CBCT dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
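
    The kernel-smoothing step can be sketched with a separable Gaussian filter over the projection plane and gantry angle, as below; the synthetic "scatter" stack and filter widths are assumptions, not the authors' optimized KS.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(5)

        # Synthetic smooth scatter stack (angle, row, col) plus MC-like noise.
        n_proj, n_rows, n_cols = 45, 64, 64        # 4-fold angular downsampling assumed
        true_scatter = np.fromfunction(
            lambda a, r, c: 100 + 20 * np.sin(r / 10.0) + 10 * np.cos(c / 15.0),
            (n_proj, n_rows, n_cols))
        noisy = true_scatter + 15.0 * rng.standard_normal(true_scatter.shape)

        # Smooth across gantry angle (axis 0) and the projection plane (axes 1, 2).
        smoothed = gaussian_filter(noisy, sigma=(2.0, 4.0, 4.0))
        rmse = np.sqrt(np.mean((smoothed - true_scatter) ** 2))
        print(f"RMSE after smoothing: {rmse:.2f} (noise sigma was 15)")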

  3. SU-E-T-769: T-Test Based Prior Error Estimate and Stopping Criterion for Monte Carlo Dose Calculation in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Schuemann, J

    2015-06-15

    Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy. Prior error estimates and stopping criteria are not well established for MC. This work aims to fill this gap. Methods: Due to the statistical nature of MC, our approach is based on the one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate to develop a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of dose deposition to the voxel by a sufficiently large number of source particles. Then, according to the central limit theorem, the sample, as the mean value of i.i.d. variables, is normally distributed with expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the same as the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold; in addition, users have the freedom to specify the confidence probability and region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
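
    A minimal sketch of such a t-test-based stopping rule, using batch means as the i.i.d. samples: here the per-batch dose is a synthetic random variable, and the threshold and confidence level are arbitrary choices.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)

        # Stop when the t-based confidence half-width on the mean batch dose is
        # within a relative threshold; the batch "doses" are synthetic.
        threshold, confidence = 0.01, 0.95
        batch_means = []
        while True:
            batch_means.append(rng.normal(loc=1.0, scale=0.05))  # stand-in for an MC batch
            n = len(batch_means)
            if n < 5:
                continue
            mean = np.mean(batch_means)
            sem = stats.sem(batch_means)
            half_width = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1) * sem
            if half_width < threshold * abs(mean):               # criterion met
                break

        print(f"stopped after {n} batches, mean dose = {mean:.4f} +/- {half_width:.4f}")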

  4. MC-TESTER: a universal tool for comparisons of Monte Carlo predictions for particle decays in high energy physics

    NASA Astrophysics Data System (ADS)

    Golonka, P.; Pierzchała, T.; Wąs, Z.

    2004-02-01

    Theoretical predictions in high energy physics are routinely provided in the form of Monte Carlo generators. Comparisons of predictions from different programs and/or different initialization set-ups are often necessary. MC-TESTER can be used for such tests of decays of intermediate states (particles or resonances) in a semi-automated way. Our test consists of two steps. Different Monte Carlo programs are run; events with decays of a chosen particle are searched, decay trees are analyzed and appropriate information is stored. Then, at the analysis step, a list of all found decay modes is defined and branching ratios are calculated for both runs. Histograms of all scalar Lorentz-invariant masses constructed from the decay products are plotted and compared for each decay mode found in both runs. For each plot a measure of the difference of the distributions is calculated and its maximal value over all histograms for each decay channel is printed in a summary table. As an example of MC-TESTER application, we include a test with the τ lepton decay Monte Carlo generators, TAUOLA and PYTHIA. The HEPEVT (or LUJETS) common block is used as exclusive source of information on the generated events. Program summaryTitle of the program:MC-TESTER, version 1.1 Catalogue identifier: ADSM Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSM Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: PC, two Intel Xeon 2.0 GHz processors, 512MB RAM Operating system: Linux Red Hat 6.1, 7.2, and also 8.0 Programming language used:C++, FORTRAN77: gcc 2.96 or 2.95.2 (also 3.2) compiler suite with g++ and g77 Size of the package: 7.3 MB directory including example programs (2 MB compressed distribution archive), without ROOT libraries (additional 43 MB). No. of bytes in distributed program, including test data, etc.: 2 024 425 Distribution format: tar gzip file Additional disk space required: Depends on the analyzed particle: 40 MB in the case of τ lepton decays (30 decay channels, 594 histograms, 82-pages booklet). Keywords: particle physics, decay simulation, Monte Carlo methods, invariant mass distributions, programs comparison Nature of the physical problem: The decays of individual particles are well defined modules of a typical Monte Carlo program chain in high energy physics. A fast, semi-automatic way of comparing results from different programs is often desirable, for the development of new programs, to check correctness of the installations or for discussion of uncertainties. Method of solution: A typical HEP Monte Carlo program stores the generated events in the event records such as HEPEVT or PYJETS. MC-TESTER scans, event by event, the contents of the record and searches for the decays of the particle under study. The list of the found decay modes is successively incremented and histograms of all invariant masses which can be calculated from the momenta of the particle decay products are defined and filled. The outputs from the two runs of distinct programs can be later compared. A booklet of comparisons is created: for every decay channel, all histograms present in the two outputs are plotted and parameter quantifying shape difference is calculated. Its maximum over every decay channel is printed in the summary table. Restrictions on the complexity of the problem: For a list of limitations see Section 6. Typical running time: Varies substantially with the analyzed decay particle. 
On a PC/Linux with 2.0 GHz processors MC-TESTER increases the run time of the τ-lepton Monte Carlo program TAUOLA by 4.0 seconds for every 100 000 analyzed events (generation itself takes 26 seconds). The analysis step takes 13 seconds; LaTeX processing takes an additional 10 seconds. Generation step runs may be executed simultaneously on multi-processor machines. Accessibility: web page: http://cern.ch/Piotr.Golonka/MC/MC-TESTER e-mails: Piotr.Golonka@CERN.CH, T.Pierzchala@friend.phys.us.edu.pl, Zbigniew.Was@CERN.CH.

  5. MC21 analysis of the MIT PWR benchmark: Hot zero power results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly III, D. J.; Aviles, B. N.; Herman, B. R.

    2013-07-01

    MC21 Monte Carlo results have been compared with hot zero power measurements from an operating pressurized water reactor (PWR), as specified in a new full core PWR performance benchmark from the MIT Computational Reactor Physics Group. Included in the comparisons are axially integrated full core detector measurements, axial detector profiles, control rod bank worths, and temperature coefficients. Power depressions from grid spacers are seen clearly in the MC21 results. Application of Coarse Mesh Finite Difference (CMFD) acceleration within MC21 has been accomplished, resulting in a significant reduction of inactive batches necessary to converge the fission source. CMFD acceleration has also been shown to work seamlessly with the Uniform Fission Site (UFS) variance reduction method. (authors)

  6. CloudMC: a cloud computing application for Monte Carlo simulation.

    PubMed

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-04-21

    This work presents CloudMC, a cloud computing application developed in Windows Azure®, the platform of the Microsoft® cloud, for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based; the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
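
    Amdahl's law, S(n) = 1 / ((1 - p) + p/n), can be inverted from the reported point (a 37x speedup on 64 instances) to estimate the parallel fraction p; this back-of-the-envelope check is ours, not the paper's fit.

        # Amdahl's law and its inversion from one measured (speedup, instances) point.
        def speedup(p, n):
            return 1.0 / ((1.0 - p) + p / n)

        s, n = 37.0, 64
        p = (1.0 - 1.0 / s) / (1.0 - 1.0 / n)     # solve S = 1/((1-p)+p/n) for p
        print(f"implied parallel fraction p = {p:.4f}")
        print(f"predicted speedup at n=128: {speedup(p, 128):.1f}x")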

  7. Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Tanaka, Ken; Tomeba, Hiromichi; Adachi, Fumiyuki

    Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of orthogonal frequency division multiplexing (OFDM) and time-domain spreading, while multi-carrier code division multiple access (MC-CDMA) is a combination of OFDM and frequency-domain spreading. In MC-CDMA, a good bit error rate (BER) performance can be achieved by using frequency-domain equalization (FDE), since the frequency diversity gain is obtained. On the other hand, the conventional orthogonal MC DS-CDMA fails to achieve any frequency diversity gain. In this paper, we propose a new orthogonal MC DS-CDMA that can obtain the frequency diversity gain by applying FDE. The conditional BER analysis is presented. The theoretical average BER performance in a frequency-selective Rayleigh fading channel is evaluated by the Monte-Carlo numerical computation method using the derived conditional BER and is confirmed by computer simulation of the orthogonal MC DS-CDMA signal transmission.
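
    The generic pattern of evaluating an average BER by Monte Carlo over fading realizations can be sketched as below for coherent BPSK on a flat Rayleigh channel, where a closed form exists for comparison; the paper's conditional BER for MC DS-CDMA with FDE is considerably more involved.

        import numpy as np
        from scipy.special import erfc

        rng = np.random.default_rng(7)

        # Average the conditional BER 0.5*erfc(sqrt(gamma)) over Rayleigh fading
        # (|h|^2 exponential); compare with the closed-form Rayleigh BPSK BER.
        gamma_bar = 10.0                                 # average SNR (linear)
        h2 = rng.exponential(1.0, 200_000)               # channel power gains
        ber_mc = np.mean(0.5 * erfc(np.sqrt(gamma_bar * h2)))

        ber_exact = 0.5 * (1.0 - np.sqrt(gamma_bar / (1.0 + gamma_bar)))
        print(f"MC: {ber_mc:.5f}  exact: {ber_exact:.5f}")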

  8. SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2015-01-01

    The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients—such as flux responses or reaction rate ratios—in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.

  9. Absolute dose calculations for Monte Carlo simulations of radiotherapy beams.

    PubMed

    Popescu, I A; Shaw, C P; Zavgorodni, S F; Beckham, W A

    2005-07-21

    Monte Carlo (MC) simulations have traditionally been used for single field relative comparisons with experimental data or commercial treatment planning systems (TPS). However, clinical treatment plans commonly involve more than one field. Since the contribution of each field must be accurately quantified, multiple field MC simulations are only possible by employing absolute dosimetry. Therefore, we have developed a rigorous calibration method that allows the incorporation of monitor units (MU) in MC simulations. This absolute dosimetry formalism can be easily implemented by any BEAMnrc/DOSXYZnrc user, and applies to any configuration of open and blocked fields, including intensity-modulated radiation therapy (IMRT) plans. Our approach involves the relationship between the dose scored in the monitor ionization chamber of a radiotherapy linear accelerator (linac), the number of initial particles incident on the target, and the field size. We found that for a 10 x 10 cm^2 field of a 6 MV photon beam, 1 MU corresponds, in our model, to 8.129 x 10^13 +/- 1.0% electrons incident on the target and a total dose of 20.87 cGy +/- 1.0% in the monitor chambers of the virtual linac. We present an extensive experimental verification of our MC results for open and intensity-modulated fields, including a dynamic 7-field IMRT plan simulated on the CT data sets of a cylindrical phantom and of a Rando anthropomorphic phantom, which were validated by measurements using ionization chambers and thermoluminescent dosimeters (TLD). Our simulation results are in excellent agreement with experiment, with percentage differences of less than 2%, in general, demonstrating the accuracy of our Monte Carlo absolute dose calculations.
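
    With the calibration in hand, absolute dose follows from a linear rescaling of the MC dose per source particle; the sketch below uses the paper's electrons-per-MU figure but an assumed, hypothetical per-particle voxel dose.

        # electrons_per_mu is quoted from the abstract; dose_per_particle is assumed.
        electrons_per_mu = 8.129e13
        dose_per_particle = 1.2e-16   # Gy per incident electron in a voxel (assumed)
        mu = 200.0                    # monitor units delivered

        dose = dose_per_particle * electrons_per_mu * mu
        print(f"absolute voxel dose: {dose:.2f} Gy")   # ~1 cGy/MU here by construction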

  10. Leveraging Gibbs Ensemble Molecular Dynamics and Hybrid Monte Carlo/Molecular Dynamics for Efficient Study of Phase Equilibria.

    PubMed

    Gartner, Thomas E; Epps, Thomas H; Jayaraman, Arthi

    2016-11-08

    We describe an extension of the Gibbs ensemble molecular dynamics (GEMD) method for studying phase equilibria. Our modifications to GEMD allow for direct control over particle transfer between phases and improve the method's numerical stability. Additionally, we found that the modified GEMD approach had advantages in computational efficiency in comparison to a hybrid Monte Carlo (MC)/MD Gibbs ensemble scheme in the context of the single component Lennard-Jones fluid. We note that this increase in computational efficiency does not compromise the close agreement of phase equilibrium results between the two methods. However, numerical instabilities in the GEMD scheme hamper GEMD's use near the critical point. We propose that the computationally efficient GEMD simulations can be used to map out the majority of the phase window, with hybrid MC/MD used as a follow up for conditions under which GEMD may be unstable (e.g., near-critical behavior). In this manner, we can capitalize on the contrasting strengths of these two methods to enable the efficient study of phase equilibria for systems that present challenges for a purely stochastic GEMC method, such as dense or low temperature systems, and/or those with complex molecular topologies.
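
    For contrast with GEMD's direct control of transfers, the stochastic Gibbs ensemble MC particle-transfer move accepts a 1 -> 2 transfer with probability min(1, [N1 V2 / ((N2 + 1) V1)] exp(-beta dU)), the standard GEMC expression; the inputs below are toy values.

        import numpy as np

        # Standard GEMC acceptance probability for moving one particle from
        # box 1 to box 2; d_u is the total energy change of the transfer.
        def transfer_acceptance(n1, v1, n2, v2, d_u, beta):
            return min(1.0, (n1 * v2) / ((n2 + 1) * v1) * np.exp(-beta * d_u))

        print(transfer_acceptance(n1=200, v1=1000.0, n2=50, v2=4000.0,
                                  d_u=0.5, beta=1.0))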

  11. Optical dosimetry probes to validate Monte Carlo and empirical-method-based NIR dose planning in the brain.

    PubMed

    Verleker, Akshay Prabhu; Shaffer, Michael; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M

    2016-12-01

    Three-dimensional photon dosimetry in tissue is critical in designing optical therapeutic protocols to trigger light-activated drug release. The objective of this study is to investigate the feasibility of a Monte Carlo-based optical therapy planning software by developing dosimetry tools to characterize and cross-validate the local photon fluence in brain tissue, as part of a long-term strategy to quantify the effects of photoactivated drug release in brain tumors. An existing GPU-based 3D Monte Carlo (MC) code was modified to simulate near-infrared photon transport with differing laser beam profiles within phantoms of skull bone (B), white matter (WM), and gray matter (GM). A novel titanium-based optical dosimetry probe with isotropic acceptance was used to validate the local photon fluence, and an empirical model of photon transport was developed to significantly decrease execution time for clinical application. Differences between the MC and the dosimetry probe measurements were on average 11.27%, 13.25%, and 11.81% along the illumination beam axis, and 9.4%, 12.06%, 8.91% perpendicular to the beam axis for WM, GM, and B phantoms, respectively. For a heterogeneous head phantom, the measured % errors were 17.71% and 18.04% along and perpendicular to the beam axis. The empirical algorithm was validated by probe measurements and matched the MC results (R^2 ≈ 0.99), with average % errors of 10.1%, 45.2%, and 22.1% relative to probe measurements, and 22.6%, 35.8%, and 21.9% relative to the MC, for WM, GM, and B phantoms, respectively. The simulation time for the empirical model was 6 s versus 8 h for the GPU-based Monte Carlo for a head phantom simulation. These tools provide the capability to develop and optimize treatment plans for optimal release of pharmaceuticals in the treatment of cancer. Future work will test and validate these novel delivery and release mechanisms in vivo.

  12. Comparison of film measurements and Monte Carlo simulations of dose delivered with very high-energy electron beams in a polystyrene phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bazalova-Carter, Magdalena; Liu, Michael; Palma, Bianey

    2015-04-15

    Purpose: To measure radiation dose in a water-equivalent medium from very high-energy electron (VHEE) beams and make comparisons to Monte Carlo (MC) simulation results. Methods: Dose in a polystyrene phantom delivered by an experimental VHEE beam line was measured with Gafchromic films for three 50 MeV and two 70 MeV Gaussian beams of 4.0–6.9 mm FWHM and compared to corresponding MC-simulated dose distributions. MC dose in the polystyrene phantom was calculated with the EGSnrc/BEAMnrc and DOSXYZnrc codes based on the experimental setup. Additionally, the effect of 2% beam energy measurement uncertainty and possible non-zero beam angular spread on MC dose distributions was evaluated. Results: MC simulated percentage depth dose (PDD) curves agreed with measurements within 4% for all beam sizes at both 50 and 70 MeV VHEE beams. Central axis PDD at 8 cm depth ranged from 14% to 19% for the 5.4–6.9 mm 50 MeV beams and it ranged from 14% to 18% for the 4.0–4.5 mm 70 MeV beams. MC simulated relative beam profiles of regularly shaped Gaussian beams evaluated at depths of 0.64 to 7.46 cm agreed with measurements to within 5%. A 2% beam energy uncertainty and 0.286° beam angular spread corresponded to a maximum 3.0% and 3.8% difference in depth dose curves of the 50 and 70 MeV electron beams, respectively. Absolute dose differences between MC simulations and film measurements of regularly shaped Gaussian beams were between 10% and 42%. Conclusions: The authors demonstrate that relative dose distributions for VHEE beams of 50–70 MeV can be measured with Gafchromic films and modeled with Monte Carlo simulations to an accuracy of 5%. The reported absolute dose differences likely caused by imperfect beam steering and subsequent charge loss revealed the importance of accurate VHEE beam control and diagnostics.

  13. Designing new guides and instruments using McStas

    NASA Astrophysics Data System (ADS)

    Farhi, E.; Hansen, T.; Wildes, A.; Ghosh, R.; Lefmann, K.

    With the increasing complexity of modern neutron-scattering instruments, the need for powerful tools to optimize their geometry and physical performance (flux, resolution, divergence, etc.) has become essential. As the usual analytical methods reach their limit of validity in the description of fine effects, the use of Monte Carlo simulations, which can handle such effects, has become widespread. The McStas program was developed at Risø National Laboratory in order to provide neutron scattering instrument scientists with an efficient and flexible tool for building Monte Carlo simulations of guides, neutron optics and instruments [1]. To date, the McStas package has been extensively used at the Institut Laue-Langevin, Grenoble, France, for various studies including cold and thermal guides with ballistic geometry, diffractometers, triple-axis, backscattering and time-of-flight spectrometers [2]. In this paper, we present some simulation results concerning different guide geometries that may be used in the future at the Institut Laue-Langevin. Gain factors ranging from two to five may be obtained for the integrated intensities, depending on the exact geometry, the guide coatings and the source.

  14. Development of Subspace-based Hybrid Monte Carlo-Deterministic Algorithms for Reactor Physics Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-05-20

    The development of hybrid Monte Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3 to 10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  15. Hybrid computer optimization of systems with random parameters

    NASA Technical Reports Server (NTRS)

    White, R. C., Jr.

    1972-01-01

    A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.

  16. Monte Carlo simulations of backscattering process in dislocation-containing SrTiO3 single crystal

    NASA Astrophysics Data System (ADS)

    Jozwik, P.; Sathish, N.; Nowicki, L.; Jagielski, J.; Turos, A.; Kovarik, L.; Arey, B.

    2014-05-01

    Studies of defect formation in crystals are of obvious importance in electronics, nuclear engineering and other disciplines where materials are exposed to different forms of irradiation. Rutherford Backscattering/Channeling (RBS/C) and Monte Carlo (MC) simulations are the most convenient tools for this purpose, as they allow one to determine several features of lattice defects: their type, concentration and damage accumulation kinetics. On the other hand, various irradiation conditions can be efficiently modeled by the ion irradiation method without inducing radioactivity in the sample. The combination of ion irradiation with channeling experiments and MC simulations thus appears to be a most versatile method in studies of radiation damage in materials. The paper presents the results of such a study performed on SrTiO3 (STO) single crystals irradiated with 320 keV Ar ions. The samples were also analyzed using HRTEM as a complementary method, which enables the measurement of geometrical parameters of crystal lattice deformation in the vicinity of dislocations. Once the parameters and their variations within the distance of several lattice constants from the dislocation core are known, they may be used in MC simulations for the quantitative determination of dislocation depth distribution profiles. The final outcome of the deconvolution procedure is cross-section values calculated for the two types of defects observed (RDA and dislocations).

  17. Parallel and Portable Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.

    1997-08-01

    We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.

  18. An improved multilevel Monte Carlo method for estimating probability distribution functions in stochastic oil reservoir simulations

    DOE PAGES

    Lu, Dan; Zhang, Guannan; Webster, Clayton G.; ...

    2016-12-30

    In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, which require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
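
    A minimal MLMC sketch of the smoothed-CDF idea: a sigmoid g replaces the indicator, levels are coupled through common random inputs, and the telescoping sum over level corrections gives the estimate. The toy "model" below stands in for a discretized reservoir solver, and the sample allocation is arbitrary.

        import numpy as np

        rng = np.random.default_rng(8)

        # Level-l approximation of the quantity of interest; converges to Y = u.
        def model(u, level):
            return u + 2.0 ** (-level) * np.sin(3.0 * u)

        g = lambda x: 1.0 / (1.0 + np.exp(-x))         # smoothing function (sigmoid)
        t, delta = 0.5, 0.1                            # CDF point and smoothing width
        n_per_level = [40_000, 20_000, 10_000, 5_000, 2_500]

        est = 0.0
        for lvl, n in enumerate(n_per_level):
            u = rng.standard_normal(n)                 # common input couples the levels
            fine = g((t - model(u, lvl)) / delta)
            if lvl == 0:
                est += fine.mean()                     # coarsest-level term
            else:
                coarse = g((t - model(u, lvl - 1)) / delta)
                est += (fine - coarse).mean()          # telescoping correction

        ref = g((t - rng.standard_normal(10 ** 6)) / delta).mean()
        print(f"MLMC estimate of smoothed F({t}): {est:.4f}  (reference {ref:.4f})")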

  19. Hydrologic Process Parameterization of Electrical Resistivity Imaging of Solute Plumes Using POD McMC

    NASA Astrophysics Data System (ADS)

    Awatey, M. T.; Irving, J.; Oware, E. K.

    2016-12-01

    Markov chain Monte Carlo (McMC) inversion frameworks are becoming increasingly popular in geophysics due to their ability to recover multiple equally plausible geologic features that honor the limited noisy measurements. Standard McMC methods, however, become computationally intractable with increasing dimensionality of the problem, for example, when working with spatially distributed geophysical parameter fields. We present a McMC approach based on a sparse proper orthogonal decomposition (POD) model parameterization that implicitly incorporates the physics of the underlying process. First, we generate training images (TIs) via Monte Carlo simulations of the target process constrained to a conceptual model. We then apply POD to construct basis vectors from the TIs. A small number of basis vectors can represent most of the variability in the TIs, leading to dimensionality reduction. A projection of the starting model into the reduced basis space generates the starting POD coefficients. At each iteration, only coefficients within a specified sampling window are resimulated assuming a Gaussian prior. The sampling window grows at a specified rate as the number of iterations progresses, starting from the coefficients corresponding to the highest ranked basis to those of the least informative basis. We found this gradual growth of the sampling window to be more stable than resampling all the coefficients from the first iteration. We demonstrate the performance of the algorithm with both synthetic and lab-scale electrical resistivity imaging of saline tracer experiments, employing the same set of basis vectors for all inversions. We consider two scenarios of unimodal and bimodal plumes. The unimodal plume is consistent with the hypothesis underlying the generation of the TIs, whereas bimodality in plume morphology was not theorized. We show that uncertainty quantification using McMC can proceed in the reduced dimensionality space while accounting for the physics of the underlying process.
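
    The POD parameterization step can be sketched with an SVD of the training-image matrix; the synthetic Gaussian-blob TIs below stand in for Monte Carlo plume simulations, and the 95% energy cutoff is an arbitrary choice.

        import numpy as np

        rng = np.random.default_rng(9)

        # Synthetic training images: randomly shifted Gaussian blobs on a 32x32 grid.
        n_ti, nx, ny = 500, 32, 32
        x = np.linspace(-1, 1, nx)[:, None]
        y = np.linspace(-1, 1, ny)[None, :]
        tis = np.array([np.exp(-((x - rng.uniform(-0.5, 0.5)) ** 2 +
                                 (y - rng.uniform(-0.5, 0.5)) ** 2) / 0.1)
                        for _ in range(n_ti)]).reshape(n_ti, -1)

        # POD basis from the SVD of the mean-removed TI matrix.
        mean = tis.mean(axis=0)
        U, s, Vt = np.linalg.svd(tis - mean, full_matrices=False)
        energy = np.cumsum(s ** 2) / np.sum(s ** 2)
        k = int(np.searchsorted(energy, 0.95)) + 1
        print(f"{k} of {n_ti} basis vectors capture 95% of the TI variability")

        # A realization is mean + basis @ coefficients; McMC proposals then act on
        # a growing window of these coefficients.
        coeffs = rng.standard_normal(k) * s[:k] / np.sqrt(n_ti)
        field = (mean + Vt[:k].T @ coeffs).reshape(nx, ny)
        print("one POD-parameterized realization:", field.shape)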

  20. SU-D-BRC-01: An Automatic Beam Model Commissioning Method for Monte Carlo Simulations in Pencil-Beam Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, N; Shen, C; Tian, Z

    Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which could be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. The mean energy and the energy and spatial spreads are the model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions of a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution for a real pencil beam is hence a linear superposition of the doses for those ideal pencil beams, with weights in Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central axis depth dose and lateral profiles at several depths match the corresponding measurements. An iterative algorithm combining a conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies 83, 147 and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between the determined values and the ground truth were 4.6% and 0.2%. With the commissioned model, we recomputed dose. Mean dose differences from measurements were 0.64%, 0.20% and 0.25%. Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with satisfactory accuracy.
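
    A toy version of the fitting step, with scipy's curve_fit recovering Gaussian beam parameters from a noisy lateral profile; the real method fits the depth dose and several lateral profiles jointly through superposed ideal-pencil-beam doses, which is not reproduced here.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(10)

        # Gaussian lateral profile; amplitude/center/width play the role of the
        # beam model parameters in this simplified stand-in.
        def gaussian(x, amp, mu, sigma):
            return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

        x = np.linspace(-15.0, 15.0, 61)                    # lateral position (mm)
        truth = gaussian(x, 1.0, 0.3, 4.2)                  # synthetic "measurement"
        meas = truth + 0.01 * rng.standard_normal(x.size)   # add measurement noise

        popt, pcov = curve_fit(gaussian, x, meas, p0=[0.8, 0.0, 5.0])
        print("fitted (amp, mu, sigma):", np.round(popt, 3))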

  1. Absolute dose calculations for Monte Carlo simulations of radiotherapy beams

    NASA Astrophysics Data System (ADS)

    Popescu, I. A.; Shaw, C. P.; Zavgorodni, S. F.; Beckham, W. A.

    2005-07-01

    Monte Carlo (MC) simulations have traditionally been used for single field relative comparisons with experimental data or commercial treatment planning systems (TPS). However, clinical treatment plans commonly involve more than one field. Since the contribution of each field must be accurately quantified, multiple field MC simulations are only possible by employing absolute dosimetry. Therefore, we have developed a rigorous calibration method that allows the incorporation of monitor units (MU) in MC simulations. This absolute dosimetry formalism can be easily implemented by any BEAMnrc/DOSXYZnrc user, and applies to any configuration of open and blocked fields, including intensity-modulated radiation therapy (IMRT) plans. Our approach involves the relationship between the dose scored in the monitor ionization chamber of a radiotherapy linear accelerator (linac), the number of initial particles incident on the target, and the field size. We found that for a 10 × 10 cm^2 field of a 6 MV photon beam, 1 MU corresponds, in our model, to 8.129 × 10^13 ± 1.0% electrons incident on the target and a total dose of 20.87 cGy ± 1.0% in the monitor chambers of the virtual linac. We present an extensive experimental verification of our MC results for open and intensity-modulated fields, including a dynamic 7-field IMRT plan simulated on the CT data sets of a cylindrical phantom and of a Rando anthropomorphic phantom, which were validated by measurements using ionization chambers and thermoluminescent dosimeters (TLD). Our simulation results are in excellent agreement with experiment, with percentage differences of less than 2%, in general, demonstrating the accuracy of our Monte Carlo absolute dose calculations.

  2. TU-H-CAMPUS-IeP1-04: Combined Organ Dose for Digital Subtraction Angiography and Computed Tomography Using Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakabe, D; Ohno, T; Araki, F

    Purpose: The purpose of this study was to evaluate the combined organ dose of digital subtraction angiography (DSA) and computed tomography (CT) using a Monte Carlo (MC) simulation on the abdominal intervention. Methods: The organ doses for DSA and CT were obtained with MC simulation and actual measurements using fluorescent-glass dosimeters at 7 abdominal sites in an Alderson-Rando phantom. DSA was performed from three directions: posterior anterior (PA), right anterior oblique (RAO), and left anterior oblique (LAO). The organ dose with MC simulation was compared with actual radiation dose measurements. Calculations for the MC simulation were carried out with the GMctdospp (IMPS, Germany) software based on the EGSnrc MC code. Finally, the combined organ dose for DSA and CT was calculated from the MC simulation using the X-ray conditions of a patient with a diagnosis of hepatocellular carcinoma. Results: For DSA from the PA direction, the organ doses for the actual measurements and MC simulation were 2.2 and 2.4 mGy/100 mAs at the liver, respectively, and 3.0 and 3.1 mGy/100 mAs at the spinal cord, while for CT, the organ doses were 15.2 and 15.1 mGy/100 mAs at the liver, and 14.6 and 13.5 mGy/100 mAs at the spinal cord. The maximum difference in organ dose between the actual measurements and the MC simulation was 11.0% at the spleen for PA, 8.2% at the spinal cord for RAO, and 6.1% at the left kidney for LAO with DSA, and 9.3% at the stomach with CT. The combined organ dose (4 DSAs and 6 CT scans) with the use of actual patient conditions was found to be 197.4 mGy for the liver and 205.1 mGy for the spinal cord. Conclusion: Our method makes it possible to accurately assess the organ dose to patients for abdominal interventions with combined DSA and CT.

  3. SU-G-TeP4-04: An Automated Monte Carlo Based QA Framework for Pencil Beam Scanning Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, J; Jee, K; Clasie, B

    2016-06-15

    Purpose: Prior to treating a new PBS field, multiple (three) patient-field-specific QA measurements are performed: two 2D dose distributions at shallow depth (M1) and at the tumor depth (M2) with treatment hardware at zero gantry angle, and one 2D dose distribution at iso-center (M3) without patient specific devices at the planned gantry angle. This patient-specific QA could be simplified by the use of an MC model. The results of MC model commissioning for a spot-scanning system and the fully automated TOPAS/MC-based QA framework will be presented. Methods: We have developed an in-house MC interface to access a TPS (Astroid) database from a computer cluster remotely. Once a plan is identified, the interface downloads information for the MC simulations, such as patient images, aperture points, and fluence maps, and initiates calculations in both the patient and QA geometries. The resulting calculations are further analyzed to evaluate the TPS dose accuracy and the PBS delivery. Results: The Monte Carlo model of our system was validated to within 2.0% accuracy over the whole range of the dose distribution (the proximal/shallow part as well as the target dose part) owing to the location of the measurements. The averaged range difference after commissioning was 0.25 mm over the entire span of treatment ranges, e.g., 6.5 cm to 31.6 cm. Conclusion: As M1 depths typically range from 1 cm to 4 cm from the phantom surface, the Monte Carlo model of our system was validated to within ±2.0% in absolute dose over the whole treatment range. The averaged range difference after commissioning was 0.25 mm over the entire span of treatment ranges, e.g., 6.5 cm to 31.6 cm. This work was supported by NIH/NCI under CA U19 21239.

  4. SU-E-T-489: Quantum versus Classical Trajectory Monte Carlo Simulations of Low Energy Electron Transport.

    PubMed

    Thomson, R; Kawrakow, I

    2012-06-01

    Widely used classical trajectory Monte Carlo simulations of low energy electron transport neglect the quantum nature of electrons; however, at sub-1 keV energies quantum effects have the potential to become significant. This work compares quantum and classical simulations within a simplified model of electron transport in water. Electron transport is modeled in water droplets using quantum mechanical (QM) and classical trajectory Monte Carlo (MC) methods. Water droplets are modeled as collections of point scatterers representing water molecules from which electrons may be isotropically scattered. The role of inelastic scattering is investigated by introducing absorption. QM calculations involve numerically solving a system of coupled equations for the electron wavefield incident on each scatterer. A minimum distance between scatterers is introduced to approximate structured water. The average QM water droplet incoherent cross section is compared with the MC cross section; a relative error (RE) on the MC results is computed. RE varies with electron energy, average and minimum distances between scatterers, and scattering amplitude. The mean free path is generally the relevant length scale for estimating RE. The introduction of a minimum distance between scatterers increases RE substantially (factors of 5 to 10), suggesting that the structure of water must be modeled for accurate simulations. Inelastic scattering does not improve agreement between QM and MC simulations: for the same magnitude of elastic scattering, the introduction of inelastic scattering increases RE. Droplet cross sections are sensitive to droplet size and shape; considerable variations in RE are observed with changing droplet size and shape. At sub-1 keV energies, quantum effects may become non-negligible for electron transport in condensed media. Electron transport is strongly affected by the structure of the medium. Inelastic scatter does not improve agreement between QM and MC simulations of low energy electron transport in condensed media. © 2012 American Association of Physicists in Medicine.
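
    A common way to set up such coupled equations for point scatterers is the Foldy-Lax system, sketched below with a free-space scalar Green's function and an assumed constant scattering amplitude; this is a generic illustration, not the authors' exact formulation.

        import numpy as np

        rng = np.random.default_rng(11)

        # Foldy-Lax: psi_i = phi_i + f * sum_{j != i} G(|r_i - r_j|) psi_j,
        # solved as a dense linear system for the exciting field at each scatterer.
        k = 5.0                   # wavenumber (assumed)
        f = 0.1 + 0.02j           # point-scatterer amplitude (assumed; Im f => loss)
        n_scat = 50
        pos = rng.uniform(-1.0, 1.0, (n_scat, 3))    # scatterer positions in a box

        def green(r):
            return np.exp(1j * k * r) / r            # 3D free-space Green's function

        dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        A = np.eye(n_scat, dtype=complex)
        off = ~np.eye(n_scat, dtype=bool)            # skip the singular diagonal
        A[off] -= f * green(dist[off])

        phi = np.exp(1j * k * pos[:, 2])             # incident plane wave along z
        psi = np.linalg.solve(A, phi)                # exciting field at each scatterer
        print("max |psi|:", np.abs(psi).max())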

  5. SU-F-T-365: Clinical Commissioning of the Monaco Treatment Planning System for the Novalis Tx to Deliver VMAT, SRS and SBRT Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adnani, N

    Purpose: To commission the Monaco Treatment Planning System for the Novalis Tx machine. Methods: The commissioning of the Monte Carlo (MC), Collapsed Cone (CC) and electron Monte Carlo (eMC) beam models was performed through a series of measurements and calculations in medium and in water. In-medium measurements relied on the Octavius 4D QA system, with the 1000 SRS detector array for field sizes smaller than 4 cm × 4 cm and the 1500 detector array for larger field sizes. Heterogeneity corrections were validated using a custom-built phantom. Prior to clinical implementation, end-to-end testing of prostate and H&N VMAT plans was performed. Results: Using a 0.5% uncertainty and a 2 mm grid size, Tables I and II summarize the MC validation at 6 MV and 18 MV in both medium and water. Tables III and IV show similar comparisons for CC. Using the custom heterogeneity phantom setup of Figure 1 and the IGRT guidance summarized in Figure 2, Table V lists the percent pass rates for a 2%/2 mm gamma criterion at 6 and 18 MV for both MC and CC. The relationship between the MC calculation settings of uncertainty and grid size and the gamma passing rate for a prostate and an H&N case is shown in Table VI. Table VII lists the results of the eMC calculations compared to measured data for clinically available applicators, and Table VIII for small field cutouts. Conclusion: MU calculations using MC are highly sensitive to the uncertainty and grid size settings; the difference can be of the order of several percent. MC is superior to CC for small fields and when using heterogeneity corrections, regardless of field size, making it more suitable for SRS, SBRT and VMAT deliveries. eMC showed good agreement with measurements down to a 2 cm × 2 cm field size.

  6. Efficient Implementation of MrBayes on Multi-GPU

    PubMed Central

    Zhou, Jianfu; Liu, Xiaoguang; Wang, Gang

    2013-01-01

    MrBayes, using Metropolis-coupled Markov chain Monte Carlo (MCMCMC or (MC)³), is a popular program for Bayesian inference. As a leading method of using DNA data to infer phylogeny, the (MC)³ Bayesian algorithm and its improved and parallel versions are not fast enough for biologists to analyze massive real-world DNA data. Recently, the graphics processing unit (GPU) has shown its power as a coprocessor (or rather, an accelerator) in many fields. This article describes an efficient implementation, a(MC)³ (aMCMCMC), of MrBayes (MC)³ on the compute unified device architecture. By dynamically adjusting the task granularity to adapt to the input data size and hardware configuration, it makes full use of GPU cores with different data sets. An adaptive method is also developed to split and combine DNA sequences to make full use of a large number of GPU cards. Furthermore, a new “node-by-node” task scheduling strategy is developed to improve concurrency, and several optimizing methods are used to reduce extra overhead. Experimental results show that a(MC)³ achieves up to 63× speedup over serial MrBayes on a single machine with one GPU card, up to 170× speedup with four GPU cards, and up to 478× speedup with a 32-node GPU cluster. a(MC)³ is dramatically faster than all previous (MC)³ algorithms and scales well to large GPU clusters. PMID:23493260
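
    Metropolis coupling is the core algorithmic idea behind (MC)³: several chains sample tempered versions of the posterior, and occasional state swaps let heated chains ferry the cold chain across probability barriers. The sketch below is a hypothetical one-dimensional toy, not MrBayes code (which operates on phylogenetic trees); the function name, temperatures, and bimodal target are all invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def mc3(log_post, x0, n_iter=5000, temps=(1.0, 0.7, 0.5, 0.3), step=0.5):
        """Minimal Metropolis-coupled MCMC ((MC)^3) on a 1-D target.
        Chain k samples temps[k] * log_post; adjacent chains may swap states,
        and only the cold chain (temps[0] = 1) is kept for inference."""
        n_chains = len(temps)
        x = np.full(n_chains, float(x0))
        cold_samples = []
        for _ in range(n_iter):
            # Within-chain Metropolis update at each temperature
            for k in range(n_chains):
                prop = x[k] + step * rng.normal()
                if np.log(rng.random()) < temps[k] * (log_post(prop) - log_post(x[k])):
                    x[k] = prop
            # Propose one swap between a random adjacent pair of chains
            k = rng.integers(n_chains - 1)
            log_r = (temps[k] - temps[k + 1]) * (log_post(x[k + 1]) - log_post(x[k]))
            if np.log(rng.random()) < log_r:
                x[k], x[k + 1] = x[k + 1], x[k]
            cold_samples.append(x[0])
        return np.array(cold_samples)

    # Bimodal target that a single Metropolis chain mixes over poorly
    log_post = lambda x: np.log(np.exp(-0.5 * (x + 4)**2) + np.exp(-0.5 * (x - 4)**2))
    samples = mc3(log_post, x0=-4.0)
    print(f"fraction of cold-chain samples in right mode: {(samples > 0).mean():.2f}")
    ```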

  7. Efficient implementation of MrBayes on multi-GPU.

    PubMed

    Bao, Jie; Xia, Hongju; Zhou, Jianfu; Liu, Xiaoguang; Wang, Gang

    2013-06-01

    MrBayes, using Metropolis-coupled Markov chain Monte Carlo (MCMCMC or (MC)³), is a popular program for Bayesian inference. As a leading method of using DNA data to infer phylogeny, the (MC)³ Bayesian algorithm and its improved and parallel versions are not fast enough for biologists to analyze massive real-world DNA data. Recently, the graphics processing unit (GPU) has shown its power as a coprocessor (or rather, an accelerator) in many fields. This article describes an efficient implementation, a(MC)³ (aMCMCMC), of MrBayes (MC)³ on the compute unified device architecture. By dynamically adjusting the task granularity to adapt to the input data size and hardware configuration, it makes full use of GPU cores with different data sets. An adaptive method is also developed to split and combine DNA sequences to make full use of a large number of GPU cards. Furthermore, a new "node-by-node" task scheduling strategy is developed to improve concurrency, and several optimizing methods are used to reduce extra overhead. Experimental results show that a(MC)³ achieves up to 63× speedup over serial MrBayes on a single machine with one GPU card, up to 170× speedup with four GPU cards, and up to 478× speedup with a 32-node GPU cluster. a(MC)³ is dramatically faster than all previous (MC)³ algorithms and scales well to large GPU clusters.

  8. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms

    NASA Astrophysics Data System (ADS)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.

  9. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    NASA Astrophysics Data System (ADS)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing the large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing the network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups particle histories into batches locally on each processor for tally purposes; in doing so, it avoids all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain-decomposed simulations. The analysis demonstrated that load imbalances in domain-decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than from insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters.

  10. Validation of GPU-accelerated superposition-convolution dose computations for the Small Animal Radiation Research Platform.

    PubMed

    Cho, Nathan; Tsiamas, Panagiotis; Velarde, Esteban; Tryggestad, Erik; Jacques, Robert; Berbeco, Ross; McNutt, Todd; Kazanzides, Peter; Wong, John

    2018-05-01

    The Small Animal Radiation Research Platform (SARRP) has been developed for conformal microirradiation with on-board cone beam CT (CBCT) guidance. The graphics processing unit (GPU)-accelerated Superposition-Convolution (SC) method for dose computation has been integrated into the treatment planning system (TPS) for SARRP. This paper describes the validation of the SC method for the kilovoltage energy by comparing with EBT2 film measurements and Monte Carlo (MC) simulations. MC data were simulated by EGSnrc code with 3 × 10⁸ to 1.5 × 10⁹ histories, while 21 photon energy bins were used to model the 220 kVp x-rays in the SC method. Various types of phantoms including plastic water, cork, graphite, and aluminum were used to encompass the range of densities of mouse organs. For the comparison, percentage depth doses (PDDs) of SC, MC, and film measurements were analyzed. Cross beam (x,y) dosimetric profiles of SC and film measurements are also presented. Correction factors (CFz) to convert SC to MC dose-to-medium are derived from the SC and MC simulations in homogeneous phantoms of aluminum and graphite to improve the estimation. The SC method produces dose values that are within 5% of film measurements and MC simulations in the flat regions of the profile. The dose is less accurate at the edges, due to factors such as geometric uncertainties of film placement and difference in dose calculation grids. The GPU-accelerated Superposition-Convolution dose computation method was successfully validated with EBT2 film measurements and MC calculations. The SC method offers much faster computation speed than MC and provides calculations of both dose-to-water in medium and dose-to-medium in medium. © 2018 American Association of Physicists in Medicine.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y. M., E-mail: ymingy@gmail.com; Bednarz, B.; Svatos, M.

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and the efficiency of well-developed optimization methods by precalculating the fluence-to-dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform that optimizes the fluence during the dose calculation, reduces computation time wasted on beamlets that contribute weakly to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent, were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometric phantoms and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as to assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead.
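
    The key mechanism described here, gradient descent on fluence weights driven by highly stochastic dose estimates and stabilized by gradient rescaling and momentum, can be illustrated with a small toy. The sketch below is a hypothetical stand-in: the dose-influence matrix, noise model, and all parameters are invented, and a real concurrent implementation would update weights during particle transport rather than from a precomputed matrix.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy "concurrent" fluence optimization: after each tiny batch of
    # histories, take a momentum SGD step on the beamlet weights using the
    # very noisy dose estimate available so far.
    n_beamlets, n_voxels = 36, 500
    D = rng.random((n_voxels, n_beamlets))      # stand-in dose-influence matrix
    target = np.ones(n_voxels)                  # prescribed dose
    w = np.ones(n_beamlets)                     # fluence weights to optimize
    velocity = np.zeros(n_beamlets)
    lr, momentum = 0.05, 0.9

    for batch in range(200):
        # Few histories per batch -> highly stochastic dose estimate
        noisy_D = D * (1 + 0.5 * rng.standard_normal(D.shape))
        residual = noisy_D @ w - target
        grad = noisy_D.T @ residual / n_voxels       # grad of 0.5*||Dw - t||^2
        grad /= np.linalg.norm(grad) + 1e-12         # gradient rescaling
        velocity = momentum * velocity - lr * grad   # momentum smooths the noise
        w = np.maximum(w + velocity, 0.0)            # fluence must stay >= 0

    print("final objective:", 0.5 * np.sum((D @ w - target)**2))
    ```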

  12. NOTE: MMCTP: a radiotherapy research environment for Monte Carlo and patient-specific treatment planning

    NASA Astrophysics Data System (ADS)

    Alexander, A.; DeBlois, F.; Stroian, G.; Al-Yahya, K.; Heath, E.; Seuntjens, J.

    2007-07-01

    Radiotherapy research lacks a flexible computational research environment for Monte Carlo (MC) and patient-specific treatment planning. The purpose of this study was to develop a flexible software package on low-cost hardware with the aim of integrating new patient-specific treatment planning with MC dose calculations suitable for large-scale prospective and retrospective treatment planning studies. We designed the software package 'McGill Monte Carlo treatment planning' (MMCTP) for the research development of MC and patient-specific treatment planning. The MMCTP design consists of a graphical user interface (GUI), which runs on a simple workstation connected through the standard secure-shell protocol to a cluster for lengthy MC calculations. Treatment planning information (e.g., images, structures, beam geometry properties and dose distributions) is converted into a convenient MMCTP local file storage format designated the McGill RT format. MMCTP features include (a) DICOM_RT, RTOG and CADPlan CART format imports; (b) 2D and 3D visualization views for images, structure contours, and dose distributions; (c) contouring tools; (d) DVH analysis and dose matrix comparison tools; (e) external beam editing; (f) MC transport calculation from beam source to patient geometry for photon and electron beams. The MC input files, which are prepared from the beam geometry properties and patient information (e.g., images and structure contours), are uploaded and run on a cluster using shell commands controlled from the MMCTP GUI. The visualization, dose matrix operation and DVH tools offer extensive options for plan analysis and comparison between MC plans and plans imported from commercial treatment planning systems. The MMCTP GUI provides a flexible research platform for the development of patient-specific MC treatment planning for photon and electron external beam radiation therapy. The impact of this tool lies in the fact that it allows for systematic, platform-independent, large-scale MC treatment planning for different treatment sites. Patient recalculations were performed to validate the software and ensure proper functionality.

  13. UNCERTAINTY AND SENSITIVITY ANALYSIS OF RUNOFF AND SEDIMENT YIELD IN A SMALL AGRICULTURAL WATERSHED WITH KINEROS2

    EPA Science Inventory

    Using the Monte Carlo (MC) method, this paper derives arithmetic and geometric means and associated variances of the net capillary drive parameter, G, that appears in the Parlange infiltration model, as a function of soil texture and antecedent soil moisture content. App...

  14. A framework for simulating map error in ecosystem models

    Treesearch

    Sean P. Healey; Shawn P. Urbanski; Paul L. Patterson; Chris Garrard

    2014-01-01

    The temporal depth and spatial breadth of observations from platforms such as Landsat provide unique perspective on ecosystem dynamics, but the integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential map errors in broader...

  15. 3D quantitative photoacoustic image reconstruction using Monte Carlo method and linearization

    NASA Astrophysics Data System (ADS)

    Okawa, Shinpei; Hirasawa, Takeshi; Tsujita, Kazuhiro; Kushibiki, Toshihiro; Ishihara, Miya

    2018-02-01

    To quantify functional and structural information about peripheral blood vessels for the diagnosis of diseases that affect them, such as diabetes and peripheral vascular disease, we study 3D quantitative photoacoustic tomography (QPAT), which reconstructs optical properties such as the absorption coefficient, reflecting microvascular structure, hemoglobin concentration, and oxygen saturation. QPAT image reconstruction algorithms based on the radiative transfer equation (RTE) and the photon diffusion equation (PDE) have been proposed. However, it is not easy to use the RTE in clinical practice because of its huge computational load and long calculation time. On the other hand, the PDE is considered problematic because it does not approximate the RTE well near the illumination position. In this study, we developed a 3D QPAT image reconstruction using the Monte Carlo (MC) method, which approximates the RTE better than the PDE does, to reconstruct the optical properties in the region near the illuminated surface. To reduce the calculation time, we applied linearization. The QPAT image reconstruction algorithm with the MC method and linearization was examined in numerical simulations and in a phantom experiment using a scanning system with a single probe consisting of a P(VDF-TrFE) piezoelectric film and an optical fiber.

  16. Effect of Gold Nanoparticles on Prostate Dose Distribution under Ir-192 Internal and 18 MV External Radiotherapy Procedures Using Gel Dosimetry and Monte Carlo Method.

    PubMed

    Khosravi, H; Hashemi, B; Mahdavi, S R; Hejazi, P

    2015-03-01

    Gel polymers are considered new dosimeters for determining radiotherapy dose distributions in three dimensions. The ability of a new formulation of MAGIC-f polymer gel was assessed by experimental measurement and the Monte Carlo (MC) method for studying the effect of gold nanoparticles (GNPs) on prostate dose distributions under internal Ir-192 and external 18 MV radiotherapy procedures. A Plexiglas phantom was made representing the human pelvis. The GNPs, 15 nm in diameter and at 0.1 mM concentration, were synthesized using the chemical reduction method. Then, a new formulation of MAGIC-f gel was synthesized. The fabricated gel was poured into the tubes located at the prostate (with and without the GNPs) and bladder locations of the phantom. The phantom was irradiated with an Ir-192 source and the 18 MV beam of a Varian linac separately, based on common radiotherapy procedures used for prostate cancer. After 24 hours, the irradiated gels were read using a Siemens 1.5 Tesla MRI scanner. The absolute doses at the reference points and the isodose curves resulting from the experimental measurements of the gels and the MC simulations following the internal and external radiotherapy practices were compared. The mean absorbed doses measured with the gel in the presence of the GNPs in the prostate were 15% and 8% higher than the corresponding values without the GNPs under the internal and external radiation therapies, respectively. MC simulations also indicated dose increases of 14% and 7% due to the presence of the GNPs for the same internal and external radiotherapy practices, respectively. There was good agreement between the dose enhancement factors (DEFs) estimated with the MC simulations and the experimental gel measurements due to the GNPs. The results indicate that the polymer gel dosimetry method, as developed and used in this study, can be recommended as a reliable method for investigating the DEF of GNPs in internal and external radiotherapy practices.

  17. Accelerated SPECT Monte Carlo Simulation Using Multiple Projection Sampling and Convolution-Based Forced Detection

    NASA Astrophysics Data System (ADS)

    Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.

    2008-02-01

    Monte Carlo (MC) is a well-utilized tool for simulating photon transport in single photon emission computed tomography (SPECT) due to its ability to accurately model physical processes of photon transport. As a consequence of this accuracy, it suffers from a relatively low detection efficiency and long computation time. One technique used to improve the speed of MC modeling is the effective and well-established variance reduction technique (VRT) known as forced detection (FD). With this method, photons are followed as they traverse the object under study but are then forced to travel in the direction of the detector surface, whereby they are detected at a single detector location. Another method, called convolution-based forced detection (CFD), is based on the fundamental idea of FD, with the exception that photons are detected at multiple detector locations, with contributions determined by a distance-dependent blurring kernel. In order to further increase the speed of MC, a method named multiple projection convolution-based forced detection (MP-CFD) is presented. Rather than forcing photons to hit a single detector, the MP-CFD method follows the photon transport through the object but then, at each scatter site, forces the photon to interact with a number of detectors at a variety of angles surrounding the object. This way, it is possible to simulate all the projection images of a SPECT simulation in parallel, rather than as independent projections. The result is vastly improved simulation time, as much of the computational load of simulating photon transport through the object is incurred only once for all projection angles. The results of the proposed MP-CFD method agree well with the experimental data in measurements of the point spread function (PSF), producing a correlation coefficient (r²) of 0.99 compared to experimental data. MP-CFD is shown to be about 60 times faster than a regular forced-detection MC program with similar results.
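
    The contrast between FD and MP-CFD comes down to where each scatter site deposits its forced contribution. The sketch below illustrates the multiple-projection idea in a deliberately simplified 2D setting (uniform attenuating disk, isotropic scatter, one bin per projection, and a fixed survival weight in place of sampled absorption); every parameter and helper name is hypothetical rather than drawn from the record.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    mu = 0.15                      # attenuation coefficient (1/cm)
    radius = 10.0                  # phantom radius (cm)
    angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    det_dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (64, 2)
    proj = np.zeros(len(angles))   # one bin per projection, for simplicity

    def chord_to_edge(pos, dirs):
        """Distance(s) from pos to the disk edge along unit direction(s)."""
        b = dirs @ pos
        c = pos @ pos - radius**2          # negative while inside the disk
        return -b + np.sqrt(b * b - c)

    for _ in range(20_000):
        pos, weight = np.zeros(2), 1.0     # photon emitted at the center
        for _scatter in range(5):          # follow a few scatter orders
            # Forced detection: score toward ALL projection angles at once
            proj += weight * np.exp(-mu * chord_to_edge(pos, det_dirs))
            # The ordinary analog walk continues: sample the next scatter site
            d = rng.normal(size=2)
            d /= np.linalg.norm(d)
            step = rng.exponential(1.0 / mu)
            if step > chord_to_edge(pos, d):
                break                      # photon escapes the phantom
            pos = pos + step * d
            weight *= 0.8                  # survival weight vs. absorption

    print("projection bin contents (first 4):", np.round(proj[:4], 1))
    ```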

  18. FAST TRACK COMMUNICATION: The origin of Bohm diffusion, investigated by a comparison of different modelling methods

    NASA Astrophysics Data System (ADS)

    Bultinck, E.; Mahieu, S.; Depla, D.; Bogaerts, A.

    2010-07-01

    'Bohm diffusion' causes electrons to diffuse perpendicularly to the magnetic field lines. However, its origin is not yet completely understood: both low and high frequency electric field fluctuations have been invoked to explain Bohm diffusion. The importance of including this process in a Monte Carlo (MC) model is demonstrated by comparing calculated ionization rates with particle-in-cell/Monte Carlo collisions (PIC/MCC) simulations. Good agreement is found with a Bohm diffusion parameter of 0.05, which corresponds well to experiments. Since the PIC/MCC method accounts for fast electric field fluctuations, we conclude that Bohm diffusion is caused by fast electric field phenomena.

  19. SU-E-T-627: Precision Modelling of the Leaf-Bank Rotation in Elekta’s Agility MLC: Is It Necessary?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vujicic, M; Belec, J; Heath, E

    Purpose: To demonstrate the method used to determine the leaf bank rotation angle (LBROT) as a parameter for modeling the Elekta Agility multi-leaf collimator (MLC) for Monte Carlo simulations, and to evaluate the clinical impact of LBROT. Methods: A detailed model of an Elekta Infinity linac including an Agility MLC was built using the EGSnrc/BEAMnrc Monte Carlo code. The Agility 160-leaf MLC is modelled using the MLCE component module, which allows for leaf bank rotation via the parameter LBROT. A precise value of LBROT is obtained by comparing measured and simulated profiles of a specific field, which has leaves arranged in a repeated pattern such that one leaf is open and the adjacent one is closed. Profile measurements from an Agility linac are taken with Gafchromic film, and an ion chamber is used to set the absolute dose. The measurements are compared to Monte Carlo (MC) simulations, and the LBROT is adjusted until a match is found. The clinical impact of LBROT is evaluated by observing how an MC dose calculation changes with LBROT. A clinical Stereotactic Body Radiation Treatment (SBRT) plan is calculated using BEAMnrc/DOSXYZnrc simulations with different input values for LBROT. Results: Using the method outlined above, the LBROT is determined to be 9±1 mrad. Differences as high as 4% are observed in a clinical SBRT plan between the extreme case (LBROT not modeled) and the nominal case. Conclusion: In small-field radiation therapy treatment planning, it is important to properly account for LBROT as an input parameter for MC dose calculations with the Agility MLC. More work is ongoing to elucidate the observed differences by determining the contributions from transmission dose, change in field size, and source occlusion, all of which depend on LBROT. This work was supported by OCAIRO (Ontario Consortium of Adaptive Interventions in Radiation Oncology), funded by the Ontario Research Fund.

  20. SU-F-T-81: Treating Nose Skin Using Energy and Intensity Modulated Electron Beams with Monte Carlo Based Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, L; Fan, J; Eldib, A

    Purpose: Treating nose skin with an electron beam presents a substantial challenge due to uneven nose surfaces and tissue heterogeneity, and consequently can involve great uncertainty in the dose delivered to the target. This work explored a method using Monte Carlo (MC)-based energy and intensity modulated electron radiotherapy (MERT), which would be delivered with a photon MLC on a standard medical linac (Artiste). Methods: The traditional treatment of nose skin involves the use of a bolus, often with a single-energy electron beam. This work avoided using the bolus and utilized mixed electron beam energies. An in-house developed Monte Carlo (MC)-based dose calculation/optimization planning system was employed for treatment planning. Phase space data (6, 9, 12 and 15 MeV) were used as the input source for the MC dose calculations for the linac. To reduce the scatter-caused penumbra, a short SSD (61 cm) was used. A clinical case of nose skin, previously treated with a single 9 MeV electron beam, was replanned with the MERT method. The resulting dose distributions were compared with the plan previously used clinically. The dose volume histogram of the MERT plan was calculated to examine the coverage of the planning target volume (PTV) and critical structure doses. Results: The target coverage and conformality in the MERT plan are improved compared to the conventional plan. MERT can provide more sufficient target coverage and lower normal tissue dose underneath the nose skin. Conclusion: Compared to the conventional treatment technique, using MERT for nose skin treatment has shown dosimetric advantages in PTV coverage and conformality. In addition, this technique eliminates the need for the cutout and bolus, making the treatment more efficient and accurate.

  1. Monte Carlo modeling of HD120 multileaf collimator on Varian TrueBeam linear accelerator for verification of 6X and 6X FFF VMAT SABR treatment plans

    PubMed Central

    Gete, Ermias; Duzenli, Cheryl; Teke, Tony

    2014-01-01

    A Monte Carlo (MC) validation of the vendor‐supplied Varian TrueBeam 6 MV flattened (6X) phase‐space file and the first implementation of the Siebers‐Keall MC MLC model as applied to the HD120 MLC (for 6X flat and 6X flattening filter‐free (6X FFF) beams) are described. The MC model is validated in the context of VMAT patient‐specific quality assurance. The Monte Carlo commissioning process involves: 1) validating the calculated open‐field percentage depth doses (PDDs), profiles, and output factors (OF), 2) adapting the Siebers‐Keall MLC model to match the new HD120‐MLC geometry and material composition, 3) determining the absolute dose conversion factor for the MC calculation, and 4) validating this entire linac/MLC model in the context of dose calculation verification for clinical VMAT plans. MC PDDs for the 6X beams agree with the measured data to within 2.0% for field sizes ranging from 2 × 2 to 40 × 40 cm². Measured and MC profiles show agreement in the 50% field width and the 80%‐20% penumbra region to within 1.3 mm for all square field sizes. MC OFs for square fields from 2 × 2 to 40 × 40 cm² agree with measurement to within 1.6%. Verification of VMAT SABR lung, liver, and vertebra plans demonstrates that measured and MC ion chamber doses agree within 0.6% for the 6X beam and within 2.0% for the 6X FFF beam. A 3D gamma factor analysis demonstrates that for the 6X beam, > 99% of voxels meet the pass criterion (3%/3 mm). For the 6X FFF beam, > 94% of voxels meet this criterion. The TrueBeam accelerator delivering 6X and 6X FFF beams with the HD120 MLC can be modeled in Monte Carlo to provide an independent 3D dose calculation for clinical VMAT plans. This quality assurance tool has been used clinically to verify over 140 6X and 16 6X FFF TrueBeam treatment plans. PACS number: 87.55.K‐ PMID:24892341
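
    The 3%/3 mm gamma criterion used for this VMAT verification combines a dose-difference term with a distance-to-agreement term. A minimal 1D version, assuming global dose normalization and a shared grid (clinical tools work in 3D with fine interpolation), might look like the sketch below; the profile shapes in the usage check are invented.

    ```python
    import numpy as np

    def gamma_1d(dose_ref, dose_eval, x, dose_tol=0.03, dist_tol=3.0):
        """Minimal 1-D gamma index (global 3%/3 mm by default).
        dose_ref/dose_eval: doses on the common grid x (mm). Returns the
        fraction of reference points with gamma <= 1."""
        d_norm = dose_tol * dose_ref.max()      # global dose criterion
        gammas = np.empty_like(dose_ref)
        for i, (xi, di) in enumerate(zip(x, dose_ref)):
            dd = (dose_eval - di) / d_norm      # dose-difference term
            dx = (x - xi) / dist_tol            # distance-to-agreement term
            gammas[i] = np.sqrt(dd**2 + dx**2).min()
        return np.mean(gammas <= 1.0)

    # Toy check: a 1 mm shift of a Gaussian profile easily passes 3%/3 mm
    x = np.linspace(-50, 50, 501)               # 0.2 mm grid
    ref = np.exp(-x**2 / 400)
    ev = np.exp(-(x - 1.0)**2 / 400)
    print(f"pass rate: {gamma_1d(ref, ev, x):.3f}")
    ```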

  2. NMR diffusion simulation based on conditional random walk.

    PubMed

    Gudbjartsson, H; Patz, S

    1995-01-01

    The authors introduce a new, very fast simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p. 10). In earlier NMR diffusion simulation methods, such as the finite difference (FD) method, the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, whereas in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. Computation time can therefore be reduced by always selecting the largest possible time step. Finally, the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
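
    For context, the conventional MC baseline that this method accelerates can be written in a few lines: spins take Gaussian steps and accumulate Larmor phase, and the ensemble-averaged signal can be checked against the known analytic attenuation exp(-γ²G²Dt³/3) for free diffusion in a constant gradient. Parameter values below are typical but invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    gamma = 2.675e8        # gyromagnetic ratio of 1H (rad/s/T)
    G = 10e-3              # gradient strength (T/m)
    D = 2.0e-9             # diffusion coefficient (m^2/s), ~free water
    T, n_steps = 20e-3, 2000
    dt = T / n_steps
    n_spins = 100_000

    x = np.zeros(n_spins)
    phase = np.zeros(n_spins)
    for _ in range(n_steps):
        x += rng.normal(scale=np.sqrt(2 * D * dt), size=n_spins)  # Brownian step
        phase += gamma * G * x * dt                               # dephasing

    signal_mc = np.abs(np.exp(1j * phase).mean())
    signal_th = np.exp(-(gamma * G)**2 * D * T**3 / 3)
    print(f"MC {signal_mc:.4f} vs analytic {signal_th:.4f}")
    ```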

  3. OpenMC In Situ Source Convergence Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldrich, Garrett Allen; Dutta, Soumya; Woodring, Jonathan Lee

    2016-05-07

    We designed and implemented an in situ version of particle source convergence detection for the OpenMC particle transport simulator. OpenMC is a Monte Carlo-based particle simulator for neutron criticality calculations. For the transport simulation to be accurate, the source particles must converge to a spatial distribution. Typically, convergence is obtained by iterating the simulation for a user-settable, fixed number of steps, and it is assumed that convergence is achieved. We instead implement a method to detect convergence, using a stochastic oscillator to identify convergence of the source particles based on their accumulated Shannon entropy. Using our in situ convergence detection, we are able to detect convergence and begin tallying results for the full simulation once the proper source distribution has been confirmed. Our method ensures that tallying is not started too early, through a user setting overly optimistic parameters, or too late, through overly conservative settings.
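
    The monitored quantity, the Shannon entropy of the binned fission-source distribution, is straightforward to compute; the subtlety is deciding when its generation-to-generation trace has flattened. Below is a hedged sketch with a toy source-update rule standing in for real transport and an invented plateau test (OpenMC's actual stochastic-oscillator criterion is more sophisticated).

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def shannon_entropy(positions, edges):
        """Shannon entropy of a source sample binned on a spatial mesh."""
        counts, _ = np.histogram(positions, bins=edges)
        p = counts / counts.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    # Toy stand-in for successive fission generations: a source that starts
    # uniform and contracts toward the dominant mode, so its entropy first
    # drops and then plateaus.
    edges = np.linspace(0.0, 10.0, 33)
    src = rng.uniform(0.0, 10.0, 10_000)
    entropies = []
    for generation in range(30):
        src = 0.9 * src + 0.1 * 5.0 + rng.normal(0.0, 0.3, src.size)
        entropies.append(shannon_entropy(src, edges))

    # Crude plateau test: start tallying once the trailing window is flat
    window = np.array(entropies[-5:])
    converged = np.ptp(window) < 0.02 * abs(window.mean())
    print(f"entropy tail: {np.round(window, 3)}, converged: {converged}")
    ```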

  4. Self-learning Monte Carlo with deep neural networks

    NASA Astrophysics Data System (ADS)

    Shen, Huitao; Liu, Junwei; Fu, Liang

    2018-05-01

    The self-learning Monte Carlo (SLMC) method is a general algorithm to speed up MC simulations. Its efficiency has been demonstrated in various systems by introducing an effective model to propose global moves in the configuration space. In this paper, we show that deep neural networks can be naturally incorporated into SLMC and, without any prior knowledge, can learn the original model accurately and efficiently. Demonstrated on quantum impurity models, we reduce the complexity for a local update from O(β²) in the Hirsch-Fye algorithm to O(β ln β), which is a significant speedup, especially for systems at low temperatures.

  5. PyMC: Bayesian Stochastic Modelling in Python

    PubMed Central

    Patil, Anand; Huard, David; Fonnesbeck, Christopher J.

    2010-01-01

    This user guide describes a Python package, PyMC, that allows users to efficiently code a probabilistic model and draw samples from its posterior distribution using Markov chain Monte Carlo techniques. PMID:21603108
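
    As a usage illustration, the sketch below builds and samples a minimal model. Note that it uses the modern PyMC (v4/v5) API with `pm.Model()` and `pm.sample()`; the 2010 user guide described in this record documents the older PyMC 2 syntax, so treat this as an updated rather than literal example.

    ```python
    import numpy as np
    import pymc as pm

    # Infer the mean of noisy observations with Markov chain Monte Carlo.
    data = np.random.default_rng(1).normal(loc=2.5, scale=1.0, size=100)

    with pm.Model():
        mu = pm.Normal("mu", mu=0.0, sigma=10.0)           # prior on the mean
        pm.Normal("obs", mu=mu, sigma=1.0, observed=data)  # likelihood
        idata = pm.sample(1000, tune=1000, chains=2)       # run the sampler

    print(float(idata.posterior["mu"].mean()))             # should be near 2.5
    ```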

  6. Monte Carlo Simulations: Number of Iterations and Accuracy

    DTIC Science & Technology

    2015-07-01

    Snippet fragments: “…iterations because of its added complexity compared to the WM. We recommend that the WM be used for a priori estimates of the number of MC …” “…Although the WM and the WSM have generally proven useful in estimating the number of MC iterations and addressing the accuracy of the MC …” Recoverable table-of-contents entries: Theorem; A Priori Estimate of Number of MC Iterations; MC Result Accuracy; Using Percentage Error of the Mean to Estimate Number of MC …
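
    The idea recoverable from the snippet, choosing the number of MC iterations a priori so that the percentage error of the mean meets a target, corresponds to the standard confidence-interval sizing rule. The sketch below implements only that generic textbook rule; it is not a reconstruction of the report's WM or WSM methods, and the pilot data are invented.

    ```python
    import numpy as np

    def required_mc_iterations(pilot_samples, pct_error=1.0, z=1.96):
        """A priori estimate of MC iterations from a pilot run: choose n so
        the half-width of the confidence interval for the mean is pct_error
        percent of the mean (generic rule, not the report's WM/WSM)."""
        s = np.std(pilot_samples, ddof=1)
        xbar = np.mean(pilot_samples)
        target = (pct_error / 100.0) * abs(xbar)   # absolute tolerance
        return int(np.ceil((z * s / target) ** 2))

    rng = np.random.default_rng(2)
    pilot = rng.exponential(scale=3.0, size=200)   # hypothetical pilot output
    print(required_mc_iterations(pilot, pct_error=1.0))
    ```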

  7. A Comparison of Experimental EPMA Data and Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Carpenter, P. K.

    2004-01-01

    Monte Carlo (MC) modeling shows excellent prospects for simulating electron scattering and x-ray emission from complex geometries, and can be compared to experimental measurements using electron-probe microanalysis (EPMA) and φ(ρz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been used to develop φ(ρz) correction algorithms. The accuracy of MC calculations obtained using the NIST, WinCasino, WinXray, and Penelope MC packages will be evaluated relative to these experimental data. There is additional information contained in the extended abstract.

  8. Coupled reactors analysis: New needs and advances using Monte Carlo methodology

    DOE PAGES

    Aufiero, M.; Palmiotti, G.; Salvatores, M.; ...

    2016-08-20

    Coupled reactors and the coupling features of large or heterogeneous core reactors can be investigated with the Avery theory, which allows a physics understanding of the main features of these systems. However, the complex geometries that are often encountered in association with coupled reactors require a detailed geometry description that can be easily provided by modern Monte Carlo (MC) codes. This implies an MC calculation of the coupling parameters defined by Avery and of the sensitivity coefficients that allow further detailed physics analysis. The results presented in this paper show that the MC code SERPENT has been successfully modified to meet the required capabilities.

  9. Assessment of Person Fit Using Resampling-Based Approaches

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2016-01-01

    De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…

  10. Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling

    NASA Astrophysics Data System (ADS)

    Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.

    2018-02-01

    A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
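
    The mechanics of the MLMC estimator, a cheap coarse-level mean plus telescoping fine-minus-coarse corrections computed on coupled paths, can be shown on a toy SDE. The sketch below uses plain Euler-Maruyama on dX = -X dt + dW purely for illustration; the dispersion application replaces this with Lagrangian particle models and the timestepping schemes discussed in the paper, and all sample counts here are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def euler_path(dt, dW):
        """Euler-Maruyama for the toy SDE dX = -X dt + dW (one 'particle')."""
        x = 0.0
        for inc in dW:
            x += -x * dt + inc
        return x

    def coupled_diff(dt_f, T=1.0):
        """Fine-minus-coarse sample built on the SAME Brownian increments;
        this coupling is what makes the multilevel corrections low-variance."""
        n = int(T / dt_f)
        dW = np.sqrt(dt_f) * rng.normal(size=n)
        fine = euler_path(dt_f, dW)
        coarse = euler_path(2 * dt_f, dW[0::2] + dW[1::2])  # pairwise-summed noise
        return fine - coarse

    # MLMC estimate of E[X(T)]: coarse-level mean plus telescoping corrections,
    # with sample counts shrinking as the levels get finer (and costlier).
    dt0, n_per_level = 0.1, [4000, 1000, 250, 60]
    n0 = int(1.0 / dt0)
    estimate = np.mean([euler_path(dt0, np.sqrt(dt0) * rng.normal(size=n0))
                        for _ in range(n_per_level[0])])
    for level, n in enumerate(n_per_level[1:], start=1):
        estimate += np.mean([coupled_diff(dt0 / 2**level) for _ in range(n)])
    print(f"MLMC estimate of E[X(1)]: {estimate:+.4f}  (exact value is 0)")
    ```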

  11. The effect of voxel size on dose distribution in Varian Clinac iX 6 MV photon beam using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Yani, Sitti; Dirgayussa, I. Gde E.; Rhani, Moh. Fadhillah; Haryanto, Freddy; Arif, Idam

    2015-09-01

    Recently, the Monte Carlo (MC) calculation method has been reported as the most accurate method for predicting dose distributions in radiotherapy. The MC code system (especially DOSXYZnrc) has been used to investigate the effect of different voxel (volume element) sizes on the accuracy of dose distributions. To investigate this effect on dosimetry parameters, dose distributions were calculated for three voxel sizes: 1 × 1 × 0.1 cm³, 1 × 1 × 0.5 cm³, and 1 × 1 × 0.8 cm³. A total of 1 × 10⁹ histories were simulated in order to reach statistical uncertainties of 2%. This simulation takes about 9-10 hours to complete. Measurements were made with a 10 × 10 cm² field size for the 6 MV photon beam with a Gaussian intensity distribution of FWHM 0.1 cm and SSD 100.1 cm. Dose distributions were simulated with MC and measured in a water phantom. The outputs of the simulations, i.e., the percent depth dose and the dose profile at dmax, from the three sets of calculations are presented, and comparisons are made with experimental data from TTSH (Tan Tock Seng Hospital, Singapore) over the 0-5 cm depth range. The dose scored in a voxel is a volume-averaged estimate of the dose at the center of that voxel. The results of this study show that the difference between the Monte Carlo simulation and the experimental data depends on the voxel size for both the percent depth dose (PDD) and the dose profile. For the PDD scan along the Z axis (depth) of the water phantom, the largest difference, about 17%, is obtained for the 1 × 1 × 0.8 cm³ voxel size. For the dose profile, this study focused on the high-dose-gradient region: for the profile scan along the Y axis, the largest difference, about 12%, is obtained for the 1 × 1 × 0.1 cm³ voxel size. This study demonstrates that the choice of voxel size in a Monte Carlo simulation is important.

  12. The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.

    2015-12-01

    Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate the final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding material. However, these simulations are impractical because calculation of the space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate the distribution of the radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, typical SDDR calculations do not consider how uncertainties in the MC neutron calculation impact the SDDR uncertainty, even though the MC neutron calculation uncertainties usually dominate the SDDR uncertainty.

  13. Competitive Adsorption and Ordered Packing of Counterions near Highly Charged Surfaces: From Mean-Field Theory to Monte Carlo Simulations

    PubMed Central

    Wen, Jiayi; Zhou, Shenggao; Xu, Zhenli; Li, Bo

    2013-01-01

    Competitive adsorption of counterions of multiple species to charged surfaces is studied by a size-effect-included mean-field theory and Monte Carlo (MC) simulations. The mean-field electrostatic free-energy functional of ionic concentrations, constrained by Poisson’s equation, is numerically minimized by an augmented Lagrangian multiplier method. Unrestricted primitive models and canonical ensemble MC simulations with the Metropolis criterion are used to predict the ionic distributions around a charged surface. It is found that, for a low surface charge density, the adsorption of ions with a higher valence is preferable, agreeing with existing studies. For a highly charged surface, both the mean-field theory and the MC simulations demonstrate that the counterions bind tightly around the charged surface, resulting in a stratification of counterions of different species. The competition between mixed entropy and electrostatic energetics leads to a compromise that the ionic species with a higher valence-to-volume ratio has a larger probability to form the first layer of stratification. In particular, the MC simulations confirm the crucial role of ionic valence-to-volume ratios in the competitive adsorption to charged surfaces that had been previously predicted by the mean-field theory. The charge inversion for ionic systems with salt is predicted by the MC simulations but not by the mean-field theory. This work provides a better understanding of competitive adsorption of counterions to charged surfaces and calls for further studies on the ionic size effect with application to large-scale biomolecular modeling. PMID:22680474

  14. Competitive adsorption and ordered packing of counterions near highly charged surfaces: From mean-field theory to Monte Carlo simulations.

    PubMed

    Wen, Jiayi; Zhou, Shenggao; Xu, Zhenli; Li, Bo

    2012-04-01

    Competitive adsorption of counterions of multiple species to charged surfaces is studied by a size-effect-included mean-field theory and Monte Carlo (MC) simulations. The mean-field electrostatic free-energy functional of ionic concentrations, constrained by Poisson's equation, is numerically minimized by an augmented Lagrangian multiplier method. Unrestricted primitive models and canonical ensemble MC simulations with the Metropolis criterion are used to predict the ionic distributions around a charged surface. It is found that, for a low surface charge density, the adsorption of ions with a higher valence is preferable, agreeing with existing studies. For a highly charged surface, both the mean-field theory and the MC simulations demonstrate that the counterions bind tightly around the charged surface, resulting in a stratification of counterions of different species. The competition between mixed entropy and electrostatic energetics leads to a compromise that the ionic species with a higher valence-to-volume ratio has a larger probability to form the first layer of stratification. In particular, the MC simulations confirm the crucial role of ionic valence-to-volume ratios in the competitive adsorption to charged surfaces that had been previously predicted by the mean-field theory. The charge inversion for ionic systems with salt is predicted by the MC simulations but not by the mean-field theory. This work provides a better understanding of competitive adsorption of counterions to charged surfaces and calls for further studies on the ionic size effect with application to large-scale biomolecular modeling.

  15. SU-F-T-156: Monte Carlo Simulation Using TOPAS for Synchrotron Based Proton Discrete Spot Scanning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moskvin, V; Pirlepesov, F; Tsiamas, P

    Purpose: This study provides an overview of the design and commissioning of a Monte Carlo (MC) model of a spot-scanning proton therapy nozzle and its implementation for patient plan simulation. Methods: The Hitachi PROBEAT V scanning nozzle was simulated based on vendor specifications using the TOPAS extension of the Geant4 code. FLUKA MC simulation was also utilized to provide supporting data for the main simulation. Validation of the MC model was performed using vendor-provided data and measurements collected during acceptance/commissioning of the proton therapy machine. Actual patient plans using CT-based treatment geometry were simulated and compared to the dose distributions produced by the treatment planning system (Varian Eclipse 13.6) and to patient quality assurance measurements. In-house MATLAB scripts were used for converting DICOM data into TOPAS input files. Results: Comparison of integrated depth doses (IDDs), therapeutic ranges (R90), and spot shapes/sizes at different distances from the isocenter indicates good agreement between MC and measurements. R90 agreement is within 0.15 mm across all energy tunes. IDD and spot shape/size differences are within the statistical error of the simulation (less than 1.5%). The MC-simulated data, validated with physical measurements, were used for the commissioning of the treatment planning system. Patient geometry simulations were conducted based on the Eclipse-produced DICOM plans. Conclusion: The treatment nozzle and standard-option beam model were implemented in the TOPAS framework to simulate a highly conformal discrete spot-scanning proton beam system.

  16. Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations

    DOE PAGES

    Radak, Brian K.; Roux, Benoît

    2016-10-07

    Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant pH-MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Lastly, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.

  17. Latent uncertainties of the precalculated track Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, Marc-André; Seuntjens, Jan; Roberge, David

    Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials, such as water, bone, and lung, to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank has been missing. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated with the corresponding general-purpose MC codes under the same conditions. A latent uncertainty metric was defined, and the analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and the benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60,000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed an 807× efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508× for 16 MeV electrons in bone. Conclusions: The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
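
    The notion of latent uncertainty, an error floor set by the finite track bank rather than by the number of histories, can be demonstrated with a toy resampling experiment: however many histories are run, the spread of repeated estimates is governed by the bank size. Everything below (the per-track "dose" distribution and the sizes) is invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def mean_dose_with_bank(bank_size, n_histories=200_000):
        """Estimate a mean 'dose' by reusing tracks drawn from a finite bank.
        With many histories the estimate converges to the bank mean, whose
        error relative to the true mean shrinks only as ~1/sqrt(bank_size)."""
        bank = rng.exponential(scale=1.0, size=bank_size)  # stand-in track doses
        picks = rng.integers(bank_size, size=n_histories)  # reuse tracks at random
        return bank[picks].mean()

    for bank_size in (100, 1_000, 10_000, 100_000):
        estimates = [mean_dose_with_bank(bank_size) for _ in range(20)]
        print(f"bank {bank_size:>6}: spread {np.std(estimates):.4f}")
    ```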

  18. Molecular simulations of Crussard curves of detonation product mixtures at chemical equilibrium: Microscopic calculation of the Chapman-Jouguet state

    NASA Astrophysics Data System (ADS)

    Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard

    2007-06-01

    The simultaneous use of the Reaction Ensemble Monte Carlo (ReMC) method and the Adaptive Erpenbeck EOS (AE-EOS) method allows us to calculate directly the thermodynamic and chemical equilibrium of a mixture on the Hugoniot curve. The ReMC method allows the detonation products to reach chemical equilibrium, and the AE-EOS method constrains the system to satisfy the Hugoniot relation. Once the Crussard curve of the detonation products has been established, the CJ state properties may be calculated. An additional NPT simulation is performed at CJ conditions in order to compute derivative thermodynamic quantities such as Cp, Cv, the Grüneisen gamma, the sound velocity, and the compressibility factor. Several explosives have been studied, including PETN, nitromethane, tetranitromethane, and hexanitroethane. In these first simulations, solid carbon, where present, is treated using an EOS.

  19. A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Byun, K.; Hamlet, A. F.

    2017-12-01

    There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches to designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (downscaled with the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50 yr). Using these T and P scenarios, we simulate daily streamflow with the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the unique GEV parameters estimated for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show considerable differences in the extreme values for a given percentile between the conventional MC and the non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of the infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
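
    A minimal version of the super-ensemble idea is sketched below using SciPy, assuming hypothetical GEV parameters with a linearly drifting location standing in for the HD-downscaled projections; it contrasts the lifetime-maximum distribution under non-stationary and stationary assumptions. All numbers are invented.

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(11)

    # Each year of the design lifespan has its own GEV for the annual maximum;
    # a Monte Carlo super ensemble of lifespans yields the lifetime maximum.
    lifespan = 50                              # design lifespan (years)
    loc = 100 + 0.5 * np.arange(lifespan)      # drifting GEV location parameter
    scale, shape = 15.0, -0.1                  # SciPy's c = -xi sign convention

    n_real = 10_000
    annual = np.empty((n_real, lifespan))
    for yr in range(lifespan):
        annual[:, yr] = genextreme.rvs(shape, loc=loc[yr], scale=scale,
                                       size=n_real, random_state=rng)
    lifetime_max = annual.max(axis=1)

    # Compare with a stationary analysis frozen at year-0 climate
    stationary_max = genextreme.rvs(shape, loc=loc[0], scale=scale,
                                    size=(n_real, lifespan),
                                    random_state=rng).max(axis=1)
    for q in (0.5, 0.99):
        print(f"q={q}: non-stationary {np.quantile(lifetime_max, q):.1f}, "
              f"stationary {np.quantile(stationary_max, q):.1f}")
    ```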

  20. Poster — Thur Eve — 46: Monte Carlo model of the Novalis Classic 6MV stereotactic linear accelerator using the GATE simulation platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiebe, J; Department of Physics and Astronomy, University of Calgary, Calgary, AB; Ploquin, N

    2014-08-15

    Monte Carlo (MC) simulation is accepted as the most accurate method of predicting dose deposition when compared to other methods in radiation treatment planning. Current dose calculation algorithms used for treatment planning can become inaccurate when small radiation fields and tissue inhomogeneities are present. At our centre, the Novalis Classic linear accelerator (linac) is used for Stereotactic Radiosurgery (SRS). The first MC model to date of the Novalis Classic linac was developed at our centre using the Geant4 Application for Tomographic Emission (GATE) simulation platform. GATE is relatively new, open-source MC software built from CERN's Geometry and Tracking 4 (Geant4) toolkit. The linac geometry was modeled using manufacturer specifications, as well as in-house measurements of the micro MLCs. Among multiple model parameters, the initial electron beam was adjusted so that calculated depth dose curves agreed with measured values. Simulations were run on the European Grid Infrastructure through GateLab. Simulation time is approximately 8 hours on GateLab for a complete head-model simulation to acquire a phase space file. Current results have the majority of points within 3% of the measured dose values for square field sizes ranging from 6 × 6 mm² to 98 × 98 mm² (the maximum field size on the Novalis Classic linac) at 100 cm SSD. The x-ray spectrum was also determined from the MC data. The model provides an investigation into GATE's capabilities and has the potential to be used as a research tool and an independent dose calculation engine for clinical treatment plans.

  1. Interfacing MCNPX and McStas for simulation of neutron transport

    NASA Astrophysics Data System (ADS)

    Klinkby, Esben; Lauritzen, Bent; Nonbøl, Erik; Kjær Willendrup, Peter; Filges, Uwe; Wohlmuther, Michael; Gallmeier, Franz X.

    2013-02-01

    Simulations of the target-moderator-reflector systems at spallation sources are conventionally carried out using Monte Carlo codes such as MCNPX (Waters et al., 2007 [1]) or FLUKA (Battistoni et al., 2007; Ferrari et al., 2005 [2,3]), whereas simulations of neutron transport from the moderator and of the instrument response are performed by neutron ray-tracing codes such as McStas (Lefmann and Nielsen, 1999; Willendrup et al., 2004, 2011a,b [4-7]). The coupling between the two simulation suites typically consists of providing analytical fits of MCNPX neutron spectra to McStas. This method is generally successful but has limitations; for example, it does not allow for re-entry of neutrons into the MCNPX regime. Previous work to resolve such shortcomings includes the introduction of McStas-inspired supermirrors in MCNPX. In the present paper, different approaches to interfacing MCNPX and McStas are presented and applied to a simple test case. The direct coupling between MCNPX and McStas allows for more accurate simulations of, e.g., complex moderator geometries, backgrounds, interference between beam-lines, as well as shielding requirements along the neutron guides.

  2. Estimating uncertainty in ambient and saturation nutrient uptake metrics from nutrient pulse releases in stream ecosystems

    DOE PAGES

    Brooks, Scott C.; Brandt, Craig C.; Griffiths, Natalie A.

    2016-10-07

    Nutrient spiraling is an important ecosystem process characterizing nutrient transport and uptake in streams. Various nutrient addition methods are used to estimate uptake metrics; however, uncertainty in the metrics is not often evaluated. A method was developed to quantify uncertainty in ambient and saturation nutrient uptake metrics estimated from saturating pulse nutrient additions (Tracer Additions for Spiraling Curve Characterization; TASCC). Using a Monte Carlo (MC) approach, the 95% confidence interval (CI) was estimated for ambient uptake lengths (S_w-amb) and maximum areal uptake rates (U_max) based on 100,000 datasets generated from each of four nitrogen and five phosphorus TASCC experiments conducted seasonally in a forest stream in eastern Tennessee, U.S.A. Uncertainty estimates from the MC approach were compared to the CIs estimated from the ordinary least squares (OLS) and non-linear least squares (NLS) models used to calculate S_w-amb and U_max, respectively, from the TASCC method. The CIs for S_w-amb and U_max were large, but were not consistently larger using the MC method. Despite the large CIs, significant differences (based on nonoverlapping CIs) in nutrient metrics among seasons were found, with more significant differences using the OLS/NLS than the MC method. Lastly, we suggest that the MC approach is a robust way to estimate uncertainty, as the calculation of S_w-amb and U_max violates assumptions of OLS/NLS while the MC approach is free of these assumptions. The MC approach can be applied to other ecosystem metrics that are calculated from multiple parameters, providing a more robust estimate of these metrics and their associated uncertainties.
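
    The MC confidence-interval idea lends itself to a short sketch. Assuming, for illustration, that S_w-amb comes from an OLS fit of the log nutrient:tracer ratio versus distance, noisy replicate datasets are generated, refit, and the 2.5th/97.5th percentiles of the resulting estimates taken as the 95% CI. The station spacing, data values, and noise model are hypothetical stand-ins, not the TASCC data or error model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    distance = np.array([10.0, 25.0, 50.0, 100.0, 150.0])    # stations, m
    ln_ratio = np.array([0.00, -0.06, -0.13, -0.24, -0.38])  # hypothetical data
    sigma = 0.03                     # assumed measurement noise, ln units

    n_mc = 10_000                    # the study used 100,000 datasets
    sw = np.empty(n_mc)
    for i in range(n_mc):
        y = ln_ratio + rng.normal(0.0, sigma, size=ln_ratio.size)
        k_w = np.polyfit(distance, y, 1)[0]   # OLS slope (longitudinal uptake rate)
        sw[i] = -1.0 / k_w                    # uptake length S_w = -1/k_w
    lo, hi = np.percentile(sw, [2.5, 97.5])
    print(f"S_w-amb 95% CI: {lo:.0f} to {hi:.0f} m")
    ```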

  3. Estimating uncertainty in ambient and saturation nutrient uptake metrics from nutrient pulse releases in stream ecosystems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooks, Scott C.; Brandt, Craig C.; Griffiths, Natalie A.

    Nutrient spiraling is an important ecosystem process characterizing nutrient transport and uptake in streams. Various nutrient addition methods are used to estimate uptake metrics; however, uncertainty in the metrics is not often evaluated. A method was developed to quantify uncertainty in ambient and saturation nutrient uptake metrics estimated from saturating pulse nutrient additions (Tracer Additions for Spiraling Curve Characterization; TASCC). Using a Monte Carlo (MC) approach, the 95% confidence interval (CI) was estimated for ambient uptake lengths (S_w-amb) and maximum areal uptake rates (U_max) based on 100,000 datasets generated from each of four nitrogen and five phosphorus TASCC experiments conducted seasonally in a forest stream in eastern Tennessee, U.S.A. Uncertainty estimates from the MC approach were compared to the CIs estimated from the ordinary least squares (OLS) and non-linear least squares (NLS) models used to calculate S_w-amb and U_max, respectively, from the TASCC method. The CIs for S_w-amb and U_max were large, but were not consistently larger using the MC method. Despite the large CIs, significant differences (based on nonoverlapping CIs) in nutrient metrics among seasons were found, with more significant differences using the OLS/NLS than the MC method. Lastly, we suggest that the MC approach is a robust way to estimate uncertainty, as the calculation of S_w-amb and U_max violates assumptions of OLS/NLS while the MC approach is free of these assumptions. The MC approach can be applied to other ecosystem metrics that are calculated from multiple parameters, providing a more robust estimate of these metrics and their associated uncertainties.

  4. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir.

    PubMed

    Sadeghi, Mohammad Hosein; Sina, Sedigheh; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-02-01

    The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is to estimate the effects of the tandem and ovoid applicator on the dose distribution inside the phantom using MCNP5 Monte Carlo simulations. In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum are compared with the results of superposition-1. The exact dwell positions and times of the source, and the positions of the dosimetry points, were determined from the images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used to simulate the phantoms, applicators, and sources. The results of this study showed no significant differences between the superposition method and the MC simulations at the different dosimetry points; the difference at all important dosimetry points was found to be less than 5%. According to the results, applicator attenuation has no significant effect on the calculated point doses, and the superposition method, which adds the dose of each source obtained by MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy.

  5. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
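
    For reference, the additive lagged Fibonacci recurrence that Parallel ALFG parallelizes is simple enough to state in a few lines. The sketch below uses the classic lag pair (24, 55) and a 2^32 modulus; these are common choices, not necessarily the exact parameters evaluated in the study.

    ```python
    class ALFG:
        """Additive lagged Fibonacci generator: s[n] = (s[n-j] + s[n-k]) mod m."""
        def __init__(self, seed_words, j=24, k=55, m=2**32):
            assert len(seed_words) == k, "need k seed words"
            self.state, self.j, self.k, self.m = list(seed_words), j, k, m

        def next(self):
            x = (self.state[-self.j] + self.state[-self.k]) % self.m
            self.state.append(x)       # the state behaves as a ring buffer
            self.state.pop(0)
            return x

    gen = ALFG([2 * i + 1 for i in range(55)])  # odd seed words avoid a degenerate state
    print([gen.next() for _ in range(5)])
    ```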

  6. On Fitting a Multivariate Two-Part Latent Growth Model

    PubMed Central

    Xu, Shu; Blozis, Shelley A.; Vandewater, Elizabeth A.

    2017-01-01

    A 2-part latent growth model can be used to analyze semicontinuous data to simultaneously study change in the probability that an individual engages in a behavior and, if engaged, change in the behavior. This article uses a Monte Carlo (MC) integration algorithm to study the interrelationships between the growth factors of 2 variables measured longitudinally, where each variable can follow a 2-part latent growth model. A SAS macro implementing Mplus is developed to estimate the model, taking into account the sampling uncertainty of this simulation-based computational approach. A sample of time-use data is used to show how maximum likelihood estimates can be obtained using a rectangular numerical integration method and an MC integration method. PMID:29333054
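
    The core of MC integration is easy to illustrate: an integral over a latent variable is approximated by an average over random draws, with a standard error that shrinks as 1/sqrt(n). The integrand below is an illustrative logistic stand-in, not the actual 2-part growth model likelihood.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def f(b):
        # Stand-in for a likelihood contribution given latent factor b,
        # e.g. a logistic probability of engaging in the behavior.
        return 1.0 / (1.0 + np.exp(-(0.5 + b)))

    # Approximate E[f(b)] for b ~ N(0, 1) by averaging over random draws.
    for n in (100, 10_000, 1_000_000):
        draws = rng.normal(size=n)
        est = f(draws).mean()
        se = f(draws).std(ddof=1) / np.sqrt(n)   # MC standard error
        print(f"n={n:>9}: integral ~ {est:.4f} +/- {se:.4f}")
    ```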

  7. Universal aspects of conformations and transverse fluctuations of a two-dimensional semi-flexible chain

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping; Huang, Aiqun; Bhattacharya, Aniket; Binder, Kurt

    2015-03-01

    In this talk we compare the results obtained from Monte Carlo (MC) and Brownian dynamics (BD) simulation for the universal properties of a semi-flexible chain. Specifically, we compare MC results obtained using the pruned-enriched Rosenbluth method (PERM) with those obtained from BD simulation. We find that the scaled plots of the root-mean-square (RMS) end-to-end distance, ⟨R²⟩/(2Ll_p), and the RMS transverse fluctuation, √⟨l⊥²⟩/l_p, as functions of L/l_p (where L and l_p are the contour length and the persistence length, respectively) are universal and independent of the definition of the persistence length used in the MC and BD schemes. We further investigate to what extent these results agree for a semi-flexible polymer confined in a quasi-one-dimensional channel.

  8. Toward high-efficiency and detailed Monte Carlo simulation study of the granular flow spallation target

    NASA Astrophysics Data System (ADS)

    Cai, Han-Jie; Zhang, Zhi-Lei; Fu, Fen; Li, Jian-Yang; Zhang, Xun-Chao; Zhang, Ya-Ling; Yan, Xue-Song; Lin, Ping; Xv, Jian-Ya; Yang, Lei

    2018-02-01

    The dense granular flow spallation target is a new target concept chosen for the Accelerator-Driven Subcritical (ADS) project in China. For the R&D of this target concept, a dedicated Monte Carlo (MC) program named GMT was developed to perform simulation studies of the beam-target interaction. Owing to the complexity of the target geometry, the MC simulation of particle tracks is computationally expensive; improving computational efficiency is therefore essential for detailed MC simulation studies of the dense granular target. Here we present the special design of the GMT program and its high-efficiency performance. In addition, the speedup potential of the GPU-accelerated spallation models is discussed.

  9. McSKY: A hybrid Monte-Carlo line-beam code for shielded gamma skyshine calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Stedry, M.H.

    1994-07-01

    McSKY evaluates the skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygonal cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte-Carlo algorithm to evaluate transport through the source shields and an integral line-beam source to describe photon transport through the atmosphere. The source energy must be between 0.02 and 100 MeV. For heavily shielded sources with energies above 20 MeV, McSKY results must be used cautiously, especially at detector locations near the source.

  10. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.

    PubMed

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-09-10

    A method for recharging implantable biosensors based on solar radiation is proposed. First, models of the proposed method are developed. Second, the recharging process based on solar radiation is simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, indicating that the proposed method will contribute to a low-cost, convenient and safe means of recharging implantable biosensors.
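
    A toy version of the underlying photon transport conveys the idea: photons random-walk through layered skin, depositing a fraction of their weight at each interaction, and the per-layer tallies approximate the energy distribution. The layer thicknesses and optical coefficients below are illustrative assumptions, and boundary crossings within a free path are ignored for brevity (a real MC would stop at interfaces).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    mu_a = np.array([0.2, 0.1, 0.05])        # absorption per layer, mm^-1 (assumed)
    mu_s = np.array([2.0, 1.5, 1.0])         # scattering per layer, mm^-1 (assumed)
    bounds = np.array([0.0, 0.1, 2.0, 4.0])  # layer boundaries, mm (assumed)

    n_photons, absorbed = 20_000, np.zeros(3)
    for _ in range(n_photons):
        z, cos_t, w = 0.0, 1.0, 1.0          # depth, direction cosine, weight
        while w > 1e-4:
            layer = np.searchsorted(bounds, z, side="right") - 1
            mu_t = mu_a[layer] + mu_s[layer]
            z += cos_t * rng.exponential(1.0 / mu_t)   # free path to next event
            if not 0.0 <= z < bounds[-1]:
                break                                   # escaped top or bottom
            layer = np.searchsorted(bounds, z, side="right") - 1
            absorbed[layer] += w * mu_a[layer] / mu_t   # deposit part of weight
            w *= mu_s[layer] / mu_t
            cos_t = rng.uniform(-1.0, 1.0)              # isotropic re-scatter
    print("fraction of incident energy absorbed per layer:", absorbed / n_photons)
    ```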

  11. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor

    PubMed Central

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-01-01

    A method for recharging implantable biosensors based on solar radiation is proposed. First, models of the proposed method are developed. Second, the recharging process based on solar radiation is simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, indicating that the proposed method will contribute to a low-cost, convenient and safe means of recharging implantable biosensors. PMID:27626422

  12. Split exponential track length estimator for Monte-Carlo simulations of small-animal radiation therapy

    NASA Astrophysics Data System (ADS)

    Smekens, F.; Létang, J. M.; Noblet, C.; Chiavassa, S.; Delpon, G.; Freud, N.; Rit, S.; Sarrut, D.

    2014-12-01

    We propose the split exponential track length estimator (seTLE), a new kerma-based method combining the exponential variant of the TLE and a splitting strategy to speed up Monte Carlo (MC) dose computation for low-energy photon beams. The splitting strategy is applied to both the primary and the secondary emitted photons, triggered by either the MC event generator for primaries or the photon interaction generator for secondaries. Split photons are replaced by virtual particles for fast dose calculation using the exponential TLE. Virtual particles are propagated by ray-tracing in voxelized volumes and by conventional MC navigation elsewhere. Hence, the contribution of volumes such as collimators, the treatment couch and holding devices can be taken into account in the dose calculation. We evaluated and analysed the seTLE method for two realistic small-animal radiotherapy treatment plans. The effect of the kerma approximation, i.e. the complete deactivation of electron transport, was investigated. The efficiency of seTLE against splitting multiplicities was also studied. A benchmark with analog MC and TLE was carried out in terms of dose convergence and efficiency. The results showed that the deactivation of electrons impacts the dose at the water/bone interface in high-dose regions. The maximum and mean dose differences normalized to the dose at the isocenter were 14% and 2%, respectively. Optimal splitting multiplicities were found to be around 300. In all situations, discrepancies in integral dose were below 0.5% and 99.8% of the voxels fulfilled a 1%/0.3 mm gamma index criterion. Efficiency gains of seTLE varied from 3.2 × 10⁵ to 7.7 × 10⁵ compared to analog MC and from 13 to 15 compared to conventional TLE. In conclusion, seTLE provides results similar to the TLE while increasing the efficiency by a factor of between 13 and 15, which makes it particularly well suited to typical small-animal radiation therapy applications.

  13. Feasibility assessment of the interactive use of a Monte Carlo algorithm in treatment planning for intraoperative electron radiation therapy

    NASA Astrophysics Data System (ADS)

    Guerra, Pedro; Udías, José M.; Herranz, Elena; Santos-Miranda, Juan Antonio; Herraiz, Joaquín L.; Valdivieso, Manlio F.; Rodríguez, Raúl; Calama, Juan A.; Pascau, Javier; Calvo, Felipe A.; Illana, Carlos; Ledesma-Carbayo, María J.; Santos, Andrés

    2014-12-01

    This work analysed the feasibility of using a fast, customized Monte Carlo (MC) method to perform accurate computation of dose distributions during pre- and intraplanning of intraoperative electron radiation therapy (IOERT) procedures. The MC method that was implemented, which has been integrated into a specific innovative simulation and planning tool, is able to simulate the fate of thousands of particles per second, and it was the aim of this work to determine the level of interactivity that could be achieved. The planning workflow enabled calibration of the imaging and treatment equipment, as well as manipulation of the surgical frame and insertion of the protection shields around the organs at risk and other beam modifiers. In this way, the multidisciplinary team involved in IOERT has all the tools necessary to perform complex MC dose simulations adapted to their equipment in an efficient and transparent way. To assess the accuracy and reliability of this MC technique, dose distributions for a monoenergetic source were compared with those obtained using a general-purpose software package used widely in medical physics applications. Once the accuracy of the underlying simulator was confirmed, a clinical accelerator was modelled and experimental measurements in water were conducted. A comparison was made with the output from the simulator to identify the conditions under which accurate dose estimations could be obtained in less than 3 min, which is the threshold imposed to allow for interactive use of the tool in treatment planning. Finally, a clinically relevant scenario, namely early-stage breast cancer treatment, was simulated with pre- and intraoperative volumes to verify that it was feasible to use the MC tool intraoperatively and to adjust dose delivery based on the simulation output, without compromising accuracy. The workflow provided a satisfactory model of the treatment head and the imaging system, enabling proper configuration of the treatment planning system and providing good accuracy in the dose simulation.

  14. Efficient hybrid non-equilibrium molecular dynamics--Monte Carlo simulations with symmetric momentum reversal.

    PubMed

    Chen, Yunjie; Roux, Benoît

    2014-09-21

    Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such a hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced. Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct equilibrium probability distribution with this prescription.
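
    The accept/reject logic can be illustrated on a toy 1D system. The sketch below runs short trajectories in a double-well potential and applies a momentum reversal drawn once and applied at both ends of the trajectory; it is a schematic of the general neMD-MC construction (here the trajectory is plain Hamiltonian dynamics, so a time-dependent bias would be needed for a genuinely non-equilibrium drive), not the authors' exact prescription.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    beta, dt, nsteps = 1.0, 0.05, 50
    U  = lambda x: (x**2 - 1.0)**2          # double-well potential
    dU = lambda x: 4.0 * x * (x**2 - 1.0)

    def trajectory(x, p):
        """Velocity-Verlet steps; a time-dependent bias could drive the system."""
        for _ in range(nsteps):
            p -= 0.5 * dt * dU(x)
            x += dt * p
            p -= 0.5 * dt * dU(x)
        return x, p

    x, samples = -1.0, []
    for _ in range(5_000):
        p = rng.normal()                    # fresh Maxwell-Boltzmann momentum
        s = rng.choice([-1.0, 1.0])         # reversal applied symmetrically...
        x_new, p_new = trajectory(x, s * p) # ...at the start of the trajectory
        p_new *= s                          # ...and again at the end
        dE = (U(x_new) + 0.5 * p_new**2) - (U(x) + 0.5 * p**2)
        if rng.random() < np.exp(-beta * dE):   # Metropolis acceptance
            x = x_new
        samples.append(x)
    print("mean |x| (should sit near the wells at +/-1):",
          np.mean(np.abs(samples)))
    ```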

  15. Efficient hybrid non-equilibrium molecular dynamics - Monte Carlo simulations with symmetric momentum reversal

    NASA Astrophysics Data System (ADS)

    Chen, Yunjie; Roux, Benoît

    2014-09-01

    Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such a hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced. Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct equilibrium probability distribution with this prescription.

  16. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir

    PubMed Central

    Sadeghi, Mohammad Hosein; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-01-01

    Purpose: The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is to estimate the effects of the tandem and ovoid applicator on the dose distribution inside the phantom using MCNP5 Monte Carlo simulations. Material and methods: In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum are compared with the results of superposition-1. The exact dwell positions and times of the source, and the positions of the dosimetry points, were determined from the images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used to simulate the phantoms, applicators, and sources. Results: The results of this study showed no significant differences between the superposition method and the MC simulations at the different dosimetry points; the difference at all important dosimetry points was found to be less than 5%. Conclusions: According to the results, applicator attenuation has no significant effect on the calculated point doses, and the superposition method, which adds the dose of each source obtained by MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy. PMID:29619061

  17. Kinetic Monte Carlo Simulation of Oxygen and Cation Diffusion in Yttria-Stabilized Zirconia

    NASA Technical Reports Server (NTRS)

    Good, Brian

    2011-01-01

    Yttria-stabilized zirconia (YSZ) is of interest to the aerospace community, notably for its application as a thermal barrier coating for turbine engine components. In such an application, diffusion of both oxygen ions and cations is of concern. Oxygen diffusion can lead to deterioration of a coated part, and often necessitates an environmental barrier coating. Cation diffusion in YSZ is much slower than oxygen diffusion. However, such diffusion is a mechanism by which creep takes place, potentially affecting the mechanical integrity and phase stability of the coating. In other applications, the high oxygen diffusivity of YSZ is useful, and makes the material of interest for use as a solid-state electrolyte in fuel cells. The kinetic Monte Carlo (kMC) method offers a number of advantages compared with the more widely known molecular dynamics simulation method. In particular, kMC is much more efficient for the study of processes, such as diffusion, that involve infrequent events. We describe the results of kinetic Monte Carlo computer simulations of oxygen and cation diffusion in YSZ. Using diffusive energy barriers from ab initio calculations and from the literature, we present results on the temperature dependence of oxygen and cation diffusivity, and on the dependence of the diffusivities on yttria concentration and oxygen sublattice vacancy concentration. We also present results of the effect on diffusivity of oxygen vacancies in the vicinity of the barrier cations that determine the oxygen diffusion energy barriers.
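
    The residence-time (BKL) flavor of kMC that makes such infrequent-event studies tractable fits in a few lines: events are chosen with probability proportional to their Arrhenius rates, and the clock advances by an exponentially distributed waiting time. The barriers and attempt frequency below are generic illustrative values, not the ab initio YSZ barriers used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    kT = 8.617e-5 * 1000.0                 # eV at 1000 K (k_B = 8.617e-5 eV/K)
    nu0 = 1e13                             # attempt frequency, s^-1 (typical value)
    barriers = np.array([0.9, 1.0])        # hop-left / hop-right barriers, eV

    rates = nu0 * np.exp(-barriers / kT)   # Arrhenius rate for each event
    total = rates.sum()
    pos, t = 0, 0.0
    for _ in range(100_000):
        event = rng.choice(2, p=rates / total)   # pick an event by its rate
        pos += -1 if event == 0 else 1
        t += rng.exponential(1.0 / total)        # advance the clock (BKL step)
    print(f"net displacement: {pos} sites in {t:.3e} s")
    ```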

  18. SU-F-T-377: Monte Carlo Re-Evaluation of Volumetric-Modulated Arc Plans of Advanced Stage Nasopharygeal Cancers Optimized with Convolution-Superposition Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, K; Leung, R; Law, G

    Background: The commercial treatment planning system Pinnacle3 (Philips, Fitchburg, WI, USA) employs a convolution-superposition (CS) algorithm for volumetric-modulated arc radiotherapy (VMAT) optimization and dose calculation. Study of Monte Carlo (MC) dose recalculation of VMAT plans for advanced-stage nasopharyngeal cancers (NPC) is currently limited. Methods: Twenty-nine VMAT plans prescribing 70 Gy, 60 Gy, and 54 Gy to the planning target volumes (PTVs) were included. These clinical plans, achieved with the CS dose engine in Pinnacle3 v9.0, were recalculated by the Monaco TPS v5.0 (Elekta, Maryland Heights, MO, USA) with an XVMC-based MC dose engine. The MC virtual source model was built using the same measurement beam dataset as the Pinnacle beam model. All MC recalculations were based on absorbed dose to medium in medium (Dm,m). Differences in dose constraint parameters per our institution protocol (Supplementary Table 1) were analyzed. Results: Only the differences in maximum dose to the left brachial plexus, left temporal lobe and PTV54Gy were found to be statistically insignificant (p > 0.05). Dosimetric differences of other tumor targets and normal organs are found in Supplementary Table 1. Generally, doses outside the PTV in the normal organs are lower with MC than with CS. This is also true for the PTV54-70Gy doses, but a higher dose in the nasal cavity near the bone interfaces is consistently predicted by MC, possibly due to the increased backscattering of short-range scattered photons and secondary electrons that is not properly modeled by the CS. The straight shoulders of the PTV dose-volume histograms (DVHs) that initially resulted from the CS optimization are merely preserved after MC recalculation. Conclusion: Significant dosimetric differences in VMAT NPC plans were observed between CS and MC calculations. Adjustments of the planning dose constraints to incorporate the physics differences from the conventional CS algorithm should be made when VMAT optimization is carried out directly with an MC dose engine.

  19. A comparison of Monte-Carlo simulations using RESTRAX and McSTAS with experiment on IN14

    NASA Astrophysics Data System (ADS)

    Wildes, A. R.; S̆aroun, J.; Farhi, E.; Anderson, I.; Høghøj, P.; Brochier, A.

    2000-03-01

    Monte-Carlo simulations of a focusing supermirror guide after the monochromator on the IN14 cold neutron three-axis spectrometer at the I.L.L. were carried out using the instrument simulation programs RESTRAX and McSTAS. The simulations were compared to experiment to check their accuracy. The flux ratios over both a 100 mm² and a 1600 mm² area at the sample position compare well, and there is very close agreement between simulation and experiment for the energy spread of the incident beam.

  20. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics — Monte Carlo Canonical Propagation Algorithm

    PubMed Central

    Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît

    2016-01-01

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
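
    A stripped-down illustration of the dual-Hamiltonian idea: propagate with an inexpensive surrogate potential and let a Metropolis test on the full (expensive) potential absorb the discretization and model error, so sampling still targets the correct Boltzmann distribution. The 1D potentials below are illustrative; this is a sketch of the principle, not the DHMTS integrator itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    beta, dt, inner = 1.0, 0.05, 20
    U_full  = lambda x: 0.5 * x**2 + 0.1 * x**4    # expensive reference
    U_cheap = lambda x: 0.5 * x**2                 # inexpensive surrogate
    dU_cheap = lambda x: x

    x, acc = 0.0, 0
    for _ in range(20_000):
        p = rng.normal()
        x_new, p_new = x, p
        for _ in range(inner):                     # leapfrog under the surrogate
            p_new -= 0.5 * dt * dU_cheap(x_new)
            x_new += dt * p_new
            p_new -= 0.5 * dt * dU_cheap(x_new)
        # Accept on the FULL Hamiltonian so sampling targets exp(-beta*U_full);
        # the surrogate's error is treated like an external work term.
        dH = (U_full(x_new) + 0.5 * p_new**2) - (U_full(x) + 0.5 * p**2)
        if rng.random() < np.exp(-beta * dH):
            x, acc = x_new, acc + 1
    print("acceptance rate:", acc / 20_000)
    ```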

  1. Evidence for using Monte Carlo calculated wall attenuation and scatter correction factors for three styles of graphite-walled ion chamber.

    PubMed

    McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O

    2004-06-21

    The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K_w·K_cep) has been strongly challenged by Monte Carlo (MC) calculation methods (K_wall). Using the linear extrapolation method with experimental data, K_w·K_cep was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K_w·K_cep values, differed by up to 2%. The MC code EGSnrc was used to calculate the values of K_wall for these three chambers. Use of the calculated K_wall values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air kerma.

  2. Combining Monte Carlo methods with coherent wave optics for the simulation of phase-sensitive X-ray imaging

    PubMed Central

    Peter, Silvia; Modregger, Peter; Fix, Michael K.; Volken, Werner; Frei, Daniel; Manser, Peter; Stampanoni, Marco

    2014-01-01

    Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging of soft tissue matter. However, there are still open questions about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented that takes both the particle- and wave-like properties of X-rays into consideration: a split approach in which a Monte Carlo (MC) based sample part is combined with a wave-optics based propagation part. The framework can be adapted to different phase-sensitive imaging methods and has been validated through comparisons with experiments for grating interferometry and propagation-based imaging. The validation shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations. This demonstrates that the physical processes relevant for developing a deeper understanding of scattering in the context of phase-sensitive imaging are modelled in a sufficiently accurate manner. The framework can be used for the simulation of phase-sensitive X-ray imaging, for instance of grating interferometry or propagation-based imaging. PMID:24763652

  3. Latent uncertainties of the precalculated track Monte Carlo method.

    PubMed

    Renaud, Marc-André; Roberge, David; Seuntjens, Jan

    2015-01-01

    While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Particle tracks were pregenerated for electrons and protons using EGSnrc and Geant4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated with the corresponding general-purpose MC codes under the same conditions. A latent uncertainty metric was defined, and the analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories, comparing dose values to a "ground truth" benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. The latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60,000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed an 807× efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508× for 16 MeV electrons in bone. The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
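
    The latent-uncertainty scaling is easy to demonstrate numerically: if every history reuses one of M unique pregenerated tracks, the infinite-history dose converges to the bank average, whose spread about the true mean shrinks roughly as 1/sqrt(M). The track "doses" below are random stand-ins, not transport data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    true_mean, spread = 1.0, 0.3             # arbitrary per-track dose statistics
    for M in (1_000, 10_000, 60_000):
        trials = []
        for _ in range(200):                 # many independent track banks
            bank = rng.normal(true_mean, spread, size=M)
            trials.append(bank.mean())       # dose in the infinite-history limit
        latent = np.std(trials) / true_mean * 100
        print(f"M={M:6d} unique tracks -> latent uncertainty ~{latent:.2f}%")
    ```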

  4. Validation of a track repeating algorithm for intensity modulated proton therapy: clinical cases study

    NASA Astrophysics Data System (ADS)

    Yepes, Pablo P.; Eley, John G.; Liu, Amy; Mirkovic, Dragan; Randeniya, Sharmalee; Titt, Uwe; Mohan, Radhe

    2016-04-01

    Monte Carlo (MC) methods are acknowledged as the most accurate technique to calculate dose distributions. However, due to their lengthy calculation times, they are difficult to utilize in the clinic or for large retrospective studies. Track-repeating algorithms, based on MC-generated particle track data in water, accelerate dose calculations substantially while essentially preserving the accuracy of MC. In this study, we present the validation of an efficient dose calculation algorithm for intensity modulated proton therapy, the fast dose calculator (FDC), based on a track-repeating technique. We validated the FDC algorithm for 23 patients, comprising 7 brain, 6 head-and-neck, 5 lung, 1 spine, 1 pelvis and 3 prostate cases. For validation, we compared FDC-generated dose distributions with those from a full-fledged Monte Carlo code based on GEANT4 (G4). We compared dose-volume histograms and 3D gamma indices, and analyzed a series of dosimetric indices. More than 99% of the voxels in the voxelized phantoms describing the patients have a gamma index smaller than unity for the 2%/2 mm criteria. In addition, the difference relative to the prescribed dose between the dosimetric indices calculated with FDC and G4 is less than 1%. FDC reduces the calculation time from 5 ms per proton to around 5 μs.

  5. Optimization of beam shaping assembly based on D-T neutron generator and dose evaluation for BNCT

    NASA Astrophysics Data System (ADS)

    Naeem, Hamza; Chen, Chaobin; Zheng, Huaqing; Song, Jing

    2017-04-01

    The feasibility of developing an epithermal neutron beam for a boron neutron capture therapy (BNCT) facility based on a high-intensity D-T fusion neutron generator (HINEG) is investigated in this study using the Monte Carlo code SuperMC (Super Monte Carlo simulation program for nuclear and radiation processes). SuperMC is used to determine and optimize the final configuration of the beam shaping assembly (BSA). The optimal BSA design is a cylindrical geometry consisting of a natural uranium sphere (14 cm) as a neutron multiplier, AlF3 and TiF3 as moderators (20 cm each), Cd (1 mm) as a thermal neutron filter, Bi (5 cm) as a gamma shield, and Pb as a reflector and collimator to guide neutrons towards the exit window. The epithermal neutron flux of the proposed model is 5.73 × 10⁹ n/cm²·s, and the other dosimetric parameters for BNCT reported in IAEA-TECDOC-1223 have been verified. The phantom dose analysis shows that the designed BSA is accurate, efficient and suitable for BNCT applications. Thus, the Monte Carlo code SuperMC is concluded to be capable of simulating the BSA and the dose calculation for BNCT, and a high epithermal flux can be achieved using the proposed BSA.

  6. TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Bai, T

    2014-06-15

    Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to complete the whole process of scatter correction and reconstruction automatically within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using the raw projection data; 2) rigid registration of the planning CT to the FDK result; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to the other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce the computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by the low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impact of photon histories and volume down-sampling factors on the accuracy of the scatter estimation. A Fourier analysis was conducted to show that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time was 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed; the whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01) and the Core Technology Research in Strategic Emerging Industry, Guangdong, China (2011A081402003).
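
    Step 4 of the pipeline, interpolating sparse-angle MC scatter estimates to all projection angles, can be sketched directly. The per-angle scatter here is a scalar stand-in (in practice it is a full 2D scatter image per angle), and the angular sampling mirrors the 31-angle result above.

    ```python
    import numpy as np

    sparse_angles = np.linspace(0.0, 360.0, 31, endpoint=False)  # 31 MC view angles
    all_angles = np.linspace(0.0, 360.0, 360, endpoint=False)    # every projection

    # Stand-in for the mean MC scatter estimate at each sparse angle.
    scatter_sparse = 100.0 + 20.0 * np.sin(np.deg2rad(sparse_angles))

    # Periodic linear interpolation across the full gantry rotation (step 4),
    # ready to be subtracted from each raw projection (step 5).
    scatter_all = np.interp(all_angles, sparse_angles, scatter_sparse,
                            period=360.0)
    print(scatter_all[:3], "...", scatter_all.shape)
    ```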

  7. Simulation of temperature distribution in tumor Photothermal treatment

    NASA Astrophysics Data System (ADS)

    Zhang, Xiyang; Qiu, Shaoping; Wu, Shulian; Li, Zhifang; Li, Hui

    2018-02-01

    Light transmission in biological tissue and the optical properties of tissue are important research topics in biomedical photonics, with great theoretical and practical significance for medical diagnosis and light-based therapy. In this paper, a temperature feedback controller is presented for monitoring photothermal treatment in real time. Two-dimensional Monte Carlo (MC) simulation and the diffusion approximation were compared and analyzed. The results demonstrate that the diffusion approximation with extrapolated boundary conditions, solved by the finite element method, is a good approximation to the MC simulation. Then, to minimize thermal damage, real-time temperature monitoring by a proportional-integral-derivative (PID) controller was appraised in the process of photothermal treatment.
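
    A discrete PID loop of the kind described is compact enough to sketch. The gains and the first-order thermal plant below are illustrative assumptions, not the paper's tissue model.

    ```python
    class PID:
        """Discrete PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_err = 0.0, 0.0

        def update(self, setpoint, measured):
            err = setpoint - measured
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    pid, temp = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1), 37.0
    for step in range(100):
        power = max(0.0, pid.update(setpoint=45.0, measured=temp))  # laser power
        # Toy first-order thermal plant: heating from the laser, cooling to body.
        temp += 0.1 * (0.5 * power - 0.2 * (temp - 37.0))
    print(f"final temperature: {temp:.2f} C")
    ```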

  8. SU-F-T-157: Physics Considerations Regarding Dosimetric Accuracy of Analytical Dose Calculations for Small Field Proton Therapy: A Monte Carlo Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, C; Nanjing University of Aeronautics and Astronautics, Nanjing; Daartz, J

    Purpose: To evaluate the accuracy of dose calculations by analytical dose calculation methods (ADC) for small-field proton therapy in a gantry-based passive scattering facility. Methods: 50 patients with intra-cranial disease were evaluated in the study. Treatment plans followed the standard prescription and optimization procedures of proton stereotactic radiosurgery. Dose distributions calculated with the Monte Carlo (MC) toolkit TOPAS were used to represent the delivered treatments. The MC dose was first adjusted using the output factor (OF) applied clinically; this factor is determined from the field size and the prescribed range. We then introduced a normalization factor to measure the difference in mean dose between the delivered dose (MC dose with OF) and the dose calculated by ADC for each beam. The normalization was determined from the mean dose of the central voxels of the target area. We compared delivered dose distributions and those calculated by ADC in terms of dose-volume histogram parameters and beam range distributions. Results: The mean target dose for a whole treatment is generally within 5% comparing the delivered dose (MC dose with OF) and the ADC dose. However, the differences can be as great as 11% for a shallow and small target treated with a thick range compensator. Applying the normalization factor to the MC dose with OF can reduce the mean dose difference to less than 3%. Considering range uncertainties, the generally applied margins (3.5% of the prescribed range + 1 mm) to cover uncertainties in range might not be sufficient to guarantee tumor coverage. The range difference for R90 (90% distal dose falloff) is affected by multiple factors, such as the heterogeneity index. Conclusion: This study indicates insufficient accuracy when calculating proton doses using ADC. Our results suggest that uncertainties in target doses are reduced using MC techniques, improving the dosimetric accuracy of proton stereotactic radiosurgery. The work was supported by NIH/NCI under CA U19 021239. CG was partially supported by the Chinese Scholarship Council (CSC) and the National Natural Science Foundation of China (Grant No. 11475087).

  9. Quantitative analysis of optical properties of flowing blood using a photon-cell interactive Monte Carlo code: effects of red blood cells' orientation on light scattering.

    PubMed

    Sakota, Daisuke; Takatani, Setsuo

    2012-05-01

    The optical properties of flowing blood were analyzed using a photon-cell interactive Monte Carlo (pciMC) model, with the physical properties of the flowing red blood cells (RBCs), such as cell size, shape, refractive index, distribution, and orientation, as the parameters. The scattering of light by flowing blood at the He-Ne laser wavelength of 632.8 nm was significantly affected by the shear rate. The light was scattered more in the direction of flow as the flow rate increased; therefore, the light intensity transmitted forward in the direction perpendicular to the flow axis decreased. The pciMC model can duplicate the changes in photon propagation due to moving RBCs with various orientations. The RBC orientation that best simulated the experimental results was with the long axis perpendicular to the direction of blood flow. Moreover, the scattering probability was dependent on the orientation of the RBCs. Finally, the pciMC code was used to predict the hematocrit of flowing blood with an accuracy of approximately 1.0 HCT%. The pciMC model can provide the optical properties of flowing blood and will facilitate the development of non-invasive monitoring of blood in extracorporeal circulatory systems.

  10. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    PubMed

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented here to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results are aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by the single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
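
    The master/worker distribution pattern maps naturally onto MPI. The sketch below, using mpi4py, gives each rank an equal share of histories and reduces the tallies to rank 0; the simulate() function is a stand-in for the actual EGS5 run, and remainder histories from the integer split are ignored for brevity.

    ```python
    from mpi4py import MPI
    import random

    def simulate(n_histories, seed):
        """Stand-in physics: count 'interactions' from Bernoulli trials."""
        rng = random.Random(seed)
        return sum(rng.random() < 0.3 for _ in range(n_histories))

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    total = 1_000_000
    local = simulate(total // size, seed=rank)      # independent stream per rank
    tally = comm.reduce(local, op=MPI.SUM, root=0)  # aggregate on the master
    if rank == 0:
        print(f"scored {tally} interactions from {total} histories")
    ```

    Launched as, e.g., mpiexec -n 100 python sketch.py, the wall time scales inversely with the node count, consistent with the scaling reported above.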

  11. Monte Carlo simulation of photon migration in a cloud computing environment with MapReduce

    PubMed Central

    Pratx, Guillem; Xing, Lei

    2011-01-01

    Monte Carlo simulation is considered the most reliable method for modeling photon migration in heterogeneous media. However, its widespread use is hindered by the high computational cost. The purpose of this work is to report on our implementation of a simple MapReduce method for performing fault-tolerant Monte Carlo computations in a massively-parallel cloud computing environment. We ported the MC321 Monte Carlo package to Hadoop, an open-source MapReduce framework. In this implementation, Map tasks compute photon histories in parallel while a Reduce task scores photon absorption. The distributed implementation was evaluated on a commercial compute cloud. The simulation time was found to be linearly dependent on the number of photons and inversely proportional to the number of nodes. For a cluster size of 240 nodes, the simulation of 100 billion photon histories took 22 min, a 1258 × speed-up compared to the single-threaded Monte Carlo program. The overall computational throughput was 85,178 photon histories per node per second, with a latency of 100 s. The distributed simulation produced the same output as the original implementation and was resilient to hardware failure: the correctness of the simulation was unaffected by the shutdown of 50% of the nodes. PMID:22191916
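
    The map/reduce split can be made concrete in Hadoop-streaming style: the mapper turns photon histories into (voxel, absorbed-weight) pairs and the reducer sums weights per voxel, which is what makes re-executing failed map tasks cheap. The physics below is a crude stand-in, not the MC321 kernel.

    ```python
    import sys, random

    def mapper(n_photons=1000, seed=0):
        """Simulate photon histories; emit tab-separated (voxel, absorbed) pairs."""
        rng = random.Random(seed)
        for _ in range(n_photons):
            w, voxel = 1.0, 0
            while w > 1e-3:
                absorbed = 0.1 * w              # deposit a fraction per step
                print(f"{voxel}\t{absorbed}")   # key-value pair to the reducer
                w -= absorbed
                voxel += rng.choice([0, 1])     # crude forward migration

    def reducer(stream=sys.stdin):
        """Sum absorbed weight per voxel (Hadoop delivers keys grouped/sorted)."""
        totals = {}
        for line in stream:
            voxel, absorbed = line.split("\t")
            totals[int(voxel)] = totals.get(int(voxel), 0.0) + float(absorbed)
        for voxel in sorted(totals):
            print(f"{voxel}\t{totals[voxel]:.4f}")

    if __name__ == "__main__":
        mapper() if sys.argv[1:] == ["map"] else reducer()
    ```

    Outside Hadoop, the same script can be exercised with a shell pipeline such as python sketch.py map | sort -n | python sketch.py.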

  12. Verification of recursive probabilistic integration (RPI) method for fatigue life management using non-destructive inspections

    NASA Astrophysics Data System (ADS)

    Chen, Tzikang J.; Shiao, Michael

    2016-04-01

    This paper verified a generic and efficient assessment concept for probabilistic fatigue life management. The concept is developed based on an integration of damage tolerance methodology, simulation methods [1,2], and a probabilistic algorithm, RPI (recursive probability integration) [3-9], considering maintenance for damage tolerance and risk-based fatigue life management. RPI is an efficient semi-analytical probabilistic method for risk assessment subject to various uncertainties, such as the variability in material properties including crack growth rate, initial flaw size, and repair quality, random-process modeling of flight loads for failure analysis, and inspection reliability represented by the probability of detection (POD). In addition, unlike traditional Monte Carlo simulations (MCS), which require a rerun of the MCS when the maintenance plan is changed, RPI can repeatedly use a small set of baseline random crack growth histories, excluding maintenance-related parameters, from a single MCS for various maintenance plans. In order to fully appreciate the RPI method, a verification procedure was performed. In this study, MC simulations on the order of several hundred billion samples were conducted for various flight conditions, material properties, inspection schedules, PODs and repair/replacement strategies. Since MC simulations are time-consuming, they were conducted in parallel on DoD High Performance Computing (HPC) systems using a specialized random number generator for parallel computing. The study has shown that the RPI method is several orders of magnitude more efficient than traditional Monte Carlo simulations.

  13. Enabling Microscopic Simulators to Perform System Level Tasks: A System-Identification Based, Closure-on-Demand Toolkit for Multiscale Simulation Stability/Bifurcation Analysis, Optimization and Control

    DTIC Science & Technology

    2006-10-01

    The objective was to construct a bridge between existing and future microscopic simulation codes (kMC, MD, MC, BD, LB, etc.) and traditional, continuum... kinetic Monte Carlo (kMC), equilibrium MC, Lattice-Boltzmann (LB), Brownian Dynamics (BD), or general agent-based (AB) simulators. It also, fortuitously... cond-mat/0310460 at arXiv.org. 27. "Coarse Projective kMC Integration: Forward/Reverse Initial and Boundary Value Problems", R. Rico-Martinez, C. W...

  14. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphics processing units (GPUs) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of the newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low-energy high-resolution (LEHR) collimator, a medium-energy general-purpose (MEGP) collimator and a high-energy general-purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms, derived from the same cylindrical phantom acquisition, was between 18 and 27 for the different radioisotopes. Moreover, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving the computational efficiency of SPECT imaging simulations.

  15. TH-A-19A-08: Intel Xeon Phi Implementation of a Fast Multi-Purpose Monte Carlo Simulation for Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souris, K; Lee, J; Sterpin, E

    2014-06-15

    Purpose: Recent studies have demonstrated the capability of graphics processing units (GPUs) to compute dose distributions using Monte Carlo (MC) methods within clinical time constraints. However, GPUs have a rigid vectorial architecture that favors the implementation of simplified particle transport algorithms adapted to specific tasks. Our new, fast, and multipurpose MC code, named MCsquare, runs on Intel Xeon Phi coprocessors. This technology offers 60 independent cores, and therefore more flexibility to implement fast and yet generic MC functionalities, such as prompt gamma simulations. Methods: MCsquare implements several models and hence allows users to make their own tradeoff between speed and accuracy. A 200 MeV proton beam is simulated in a heterogeneous phantom using Geant4 and two configurations of MCsquare. The first is the most conservative and accurate: the method of fictitious interactions handles the interfaces, and secondary charged particles emitted in nuclear interactions are fully simulated. The second, faster configuration simplifies interface crossings and simulates only secondary protons after nuclear interaction events. Integral depth-dose and transversal profiles are compared to those of Geant4. Moreover, the production profile of prompt gammas is compared to PENH results. Results: Integral depth-dose and transversal profiles computed by MCsquare and Geant4 agree within 3%. The production of secondaries from nuclear interactions is slightly inaccurate at interfaces for the fastest configuration of MCsquare, but this is unlikely to have any clinical impact. The computation time varies between 90 seconds for the most conservative settings and merely 59 seconds in the fastest configuration. Finally, prompt gamma profiles are also in very good agreement with PENH results. Conclusion: Our new, fast, and multi-purpose Monte Carlo code simulates prompt gammas and calculates dose distributions in less than a minute, which complies with clinical time constraints. It has been successfully validated against Geant4. This work has been financially supported by InVivoIGT, a public/private partnership between UCL and IBA.

  16. SU-F-T-364: Monte Carlo-Dose Verification of Volumetric Modulated Arc Therapy Plans Using AAPM TG-119 Test Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onizuka, R; Araki, F; Ohno, T

    2016-06-15

    Purpose: To investigate Monte Carlo (MC)-based dose verification for VMAT plans created with a treatment planning system (TPS). Methods: The AAPM TG-119 test structure set was used to create VMAT plans in the Pinnacle3 TPS (convolution/superposition), using a Synergy radiation head with a 6 MV beam and the Agility MLC. The Synergy was simulated with the EGSnrc/BEAMnrc code, and VMAT dose distributions were calculated with the EGSnrc/DOSXYZnrc code under the same irradiation conditions as the TPS. VMAT dose distributions from the TPS and MC were compared with those of EBT3 film by 2-D gamma analysis with ±3%/3 mm criteria and a threshold of 30% of the prescribed dose. VMAT dose distributions between TPS and MC were also compared by DVHs and 3-D gamma analysis with ±3%/3 mm criteria and a threshold of 10%, and 3-D passing rates for PTVs and OARs were analyzed. Results: TPS dose distributions differed from those of film, especially for Head & neck. The dose difference between TPS and film results from the calculation accuracy for complex MLC motion, such as the tongue-and-groove effect. In contrast, MC dose distributions were in good agreement with those of film. This is because MC can fully model the MLC configuration and accurately reproduce the MLC motion between control points in VMAT plans. D95 of the PTV for Prostate, Head & neck, C-shaped, and Multi Target was 97.2%, 98.1%, 101.6%, and 99.7% for TPS and 95.7%, 96.0%, 100.6%, and 99.1% for MC, respectively. Similarly, 3-D gamma passing rates of each PTV for TPS vs. MC were 100%, 89.5%, 99.7%, and 100%, respectively. 3-D passing rates of the TPS decreased for complex VMAT fields such as Head & neck because the MLCs are not modeled completely in the TPS. Conclusion: MC-calculated VMAT dose distributions are useful for the 3-D dose verification of VMAT plans produced by a TPS.
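
    As a concrete illustration of the gamma analysis used in this record, the sketch below implements a brute-force global gamma test on two dose grids sharing one voxel lattice. It is a minimal sketch, not the study's implementation: the grid spacing, threshold convention, and edge handling (np.roll wraps around at the borders) are simplifying assumptions.

        # Brute-force global gamma index (e.g., +/-3%/3 mm with a dose
        # threshold). Assumes reference and evaluated doses on the same
        # voxel lattice; np.roll wraps at the edges, which is acceptable
        # only for a sketch. All parameters are illustrative.
        import numpy as np

        def gamma_pass_rate(ref, ev, spacing_mm, dose_pct=3.0, dta_mm=3.0,
                            threshold_pct=30.0):
            dose_tol = dose_pct / 100.0 * ref.max()       # global dose criterion
            mask = ref >= threshold_pct / 100.0 * ref.max()
            r = int(np.ceil(dta_mm / min(spacing_mm)))    # search radius (voxels)
            best = np.full(ref.shape, np.inf)
            for dz in range(-r, r + 1):
                for dy in range(-r, r + 1):
                    for dx in range(-r, r + 1):
                        d2 = ((dz * spacing_mm[0]) ** 2 + (dy * spacing_mm[1]) ** 2
                              + (dx * spacing_mm[2]) ** 2)
                        if d2 > dta_mm ** 2:
                            continue
                        shifted = np.roll(ev, (dz, dy, dx), axis=(0, 1, 2))
                        g2 = (shifted - ref) ** 2 / dose_tol ** 2 + d2 / dta_mm ** 2
                        best = np.minimum(best, g2)       # keep best gamma^2
            return float((np.sqrt(best)[mask] <= 1.0).mean())

    The returned value is the fraction of above-threshold voxels with gamma ≤ 1, i.e., the passing rate reported in studies such as this one.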

  17. SU-E-T-314: The Application of Cloud Computing in Pencil Beam Scanning Proton Therapy Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Z; Gao, M

    Purpose: Monte Carlo simulation plays an important role for the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to the few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4-based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bit, Amazon EC2). Single-spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of the StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm², a 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. The EC2 instance type m1.medium was selected considering the CPU/memory requirements, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created as an on-demand instance and the worker nodes as spot instances. The hourly cost for the 40-node cluster was $0.63, and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot the PDD and profile, with each job containing 500k events. The simulation completed within 1 hour, and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy-to-maintain platform to run proton PBS MC simulations. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate PBS MC studies, especially for newly established proton centers or individual researchers.
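
    A back-of-the-envelope sketch of the job split and cost projection described above, assuming the run is divided into independent 500k-event jobs over 40 nodes. The per-node spot price below is a placeholder inferred loosely from the quoted $0.63/h cluster cost, not a figure from the study.

        # Splitting a 10M-history simulation into independent 500k-history
        # jobs on a 40-node cloud cluster, with a rough hourly cost
        # projection. The per-node spot price is a placeholder assumption.
        total_events = 10_000_000
        events_per_job = 500_000
        nodes = 40
        price_per_node_hour = 0.016    # hypothetical $/node-hour (spot pricing)

        jobs = total_events // events_per_job          # 20 independent jobs
        waves = -(-jobs // nodes)                      # ceil: scheduling waves
        cluster_cost_per_hour = nodes * price_per_node_hour
        print(f"{jobs} jobs in {waves} wave(s); ~${cluster_cost_per_hour:.2f}/h")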

  18. Data decomposition of Monte Carlo particle transport simulations via tally servers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain, while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
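
    The tracking-processor/tally-server split can be sketched with a few MPI primitives. The mpi4py schematic below is an illustration of the idea only, not OpenMC's implementation; the server count, tally length, message tags, and random "scores" are invented for the sketch.

        # Schematic tracking/tally-server decomposition with mpi4py: ranks
        # below N_SERVERS act as tally servers that accumulate incoming
        # scores; the remaining ranks track particles and ship their tally
        # contributions. Illustration only, not OpenMC's code.
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        N_SERVERS = 2                          # assumed rank split
        N_BINS = 1000                          # assumed tally length
        TAG_SCORE, TAG_DONE = 1, 2
        n_trackers = size - N_SERVERS

        if rank < N_SERVERS:                   # tally server: accumulate
            tally = np.zeros(N_BINS)
            buf = np.empty(N_BINS)
            done = 0
            while done < n_trackers:
                status = MPI.Status()
                comm.Recv(buf, source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG,
                          status=status)
                if status.Get_tag() == TAG_DONE:
                    done += 1
                else:
                    tally += buf               # merge incoming scores
        else:                                  # tracking processor: ship tallies
            rng = np.random.default_rng(rank)
            for _ in range(10):                # stand-in for batches of histories
                scores = rng.random(N_BINS)    # stand-in for real tally scores
                comm.Send(scores, dest=rank % N_SERVERS, tag=TAG_SCORE)
            for s in range(N_SERVERS):         # tell every server we finished
                comm.Send(np.zeros(N_BINS), dest=s, tag=TAG_DONE)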

  19. A novel Monte Carlo algorithm for simulating crystals with McStas

    NASA Astrophysics Data System (ADS)

    Alianelli, L.; Sánchez del Río, M.; Felici, R.; Andersen, K. H.; Farhi, E.

    2004-07-01

    We have developed an original Monte Carlo algorithm for the simulation of Bragg diffraction by mosaic, bent and gradient crystals. It has practical applications, as it can be used for simulating imperfect crystals (monochromators, analyzers and perhaps samples) in neutron ray-tracing packages such as McStas. The code described here provides a detailed description of the particle interaction with the microscopic homogeneous regions composing the crystal; it can therefore also be used to calculate quantities of conceptual interest, such as multiple scattering, or to interpret experiments aimed at characterizing crystals, such as diffraction topographs.

  20. Advanced proton beam dosimetry part II: Monte Carlo vs. pencil beam-based planning for lung cancer

    PubMed Central

    Maes, Dominic; Saini, Jatinder; Zeng, Jing; Rengan, Ramesh; Wong, Tony

    2018-01-01

    Background Proton pencil beam (PB) dose calculation algorithms have limited accuracy within the heterogeneous tissues of lung cancer patients, which may be addressed by modern commercial Monte Carlo (MC) algorithms. We investigated clinical pencil beam scanning (PBS) dose differences between PB and MC-based treatment planning for lung cancer patients. Methods With IRB approval, a comparative dosimetric analysis between RayStation MC and PB dose engines was performed on ten patient plans. PBS gantry plans were generated using a single-field optimization technique to maintain target coverage under range and setup uncertainties. Dose differences between PB-optimized (PBopt), MC-recalculated (MCrecalc), and MC-optimized (MCopt) plans were recorded for the following region-of-interest metrics: clinical target volume (CTV) V95, CTV homogeneity index (HI), total lung V20, total lung VRX (relative lung volume receiving the prescribed dose or higher), and global maximum dose. The impact of PB-based and MC-based planning on robustness to systematic perturbation of range (±3% density) and setup (±3 mm isotropic) was assessed. Pairwise differences in dose parameters were evaluated through non-parametric Friedman and Wilcoxon sign-rank testing. Results In this ten-patient sample, CTV V95 decreased significantly from 99–100% for PBopt to 77–94% for MCrecalc and recovered to 99–100% for MCopt (P<10⁻⁵). The median CTV HI (D95/D5) decreased from 0.98 for PBopt to 0.91 for MCrecalc and increased to 0.95 for MCopt (P<10⁻³). CTV D95 robustness to range and setup errors improved under MCopt (ΔD95 = −1%) compared to MCrecalc (ΔD95 = −6%, P=0.006). No changes in lung dosimetry were observed for large volumes receiving low to intermediate doses (e.g., V20), while differences between PB-based and MC-based planning were noted for small volumes receiving high doses (e.g., VRX). The global maximum patient dose increased from 106% for PBopt to 109% for MCrecalc and 112% for MCopt (P<10⁻³). Conclusions MC dosimetry revealed a reduction in target dose coverage under PB-based planning that was regained under MC-based planning, along with improved plan robustness. MC-based optimization and dose calculation should be integrated into the clinical planning workflows of lung cancer patients receiving actively scanned proton therapy. PMID:29876310
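
    The target metrics quoted above have simple array-level definitions. A minimal sketch follows, using synthetic voxel doses and the common percentile conventions (D95 as the 5th dose percentile, D5 as the 95th); the prescription and dose values are illustrative, not the study's data.

        # Sketch of the target metrics used above: V95 (fraction of the CTV
        # at or above 95% of prescription) and the homogeneity index
        # HI = D95/D5. Doses are synthetic; percentile conventions can
        # differ slightly between planning systems.
        import numpy as np

        def v95(ctv_dose, rx):
            return float((ctv_dose >= 0.95 * rx).mean()) * 100.0  # % volume

        def homogeneity_index(ctv_dose):
            d95 = np.percentile(ctv_dose, 5)    # dose covering 95% of volume
            d5 = np.percentile(ctv_dose, 95)    # near-maximum dose
            return d95 / d5

        rng = np.random.default_rng(0)
        dose = rng.normal(60.0, 1.5, 10_000)    # hypothetical CTV doses (Gy)
        print(v95(dose, rx=60.0), homogeneity_index(dose))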

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, M; Lee, V; Leung, R

    Purpose: Investigating the relative sensitivity of Monte Carlo (MC) and Pencil Beam (PB) dose calculation algorithms to low-Z (titanium) metallic artifacts is important for accurate and consistent dose reporting in postoperative spinal RS. Methods: Sensitivity analysis of the MC and PB dose calculation algorithms in the Monaco v.3.3 treatment planning system (Elekta CMS, Maryland Heights, MO, USA) was performed using CT images reconstructed without (plain) and with Orthopedic Metal Artifact Reduction (OMAR; Philips Healthcare, Cleveland, OH, USA). 6 MV and 10 MV volumetric-modulated arc (VMAT) RS plans were obtained for MC and PB on the plain and OMAR images (MC-plain/OMAR and PB-plain/OMAR). Results: Maximum differences in dose to 0.2 cc (D0.2cc) of the spinal cord and cord +2 mm for the 6 MV and 10 MV VMAT plans were 0.1 Gy between MC-OMAR and MC-plain, and between PB-OMAR and PB-plain. Planning target volume (PTV) dose coverage changed by 0.1±0.7% and 0.2±0.3% for 6 MV and 10 MV from MC-OMAR to MC-plain, and by 0.1±0.1% for both 6 MV and 10 MV from PB-OMAR to PB-plain, respectively. In no case, for either MC or PB, did the D0.2cc to the spinal cord exceed the planned tolerance when changing from OMAR to plain CT in the dose calculations. Conclusion: The dosimetric impacts of metallic artifacts caused by low-Z metallic spinal hardware (mainly titanium alloy) are not clinically important in VMAT-based spine RS, without significant dependence on the dose calculation method (MC or PB) or photon energy ≥ 6 MV. There is no need to use one algorithm instead of the other to reduce uncertainty in dose reporting. The dose calculation method used in spine RS should be consistent with usual clinical practice.

  2. SU-F-BRD-07: Fast Monte Carlo-Based Biological Optimization of Proton Therapy Treatment Plans for Thyroid Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan Chan Tseung, H; Ma, J; Ma, D

    2015-06-15

    Purpose: To demonstrate the feasibility of fast Monte Carlo (MC)-based biological planning for the treatment of thyroid tumors in spot-scanning proton therapy. Methods: Recently, we developed a fast and accurate GPU-based MC simulation of proton transport that was benchmarked against Geant4.9.6 and used as the dose calculation engine in a clinically applicable GPU-accelerated IMPT optimizer. Besides dose, it can simultaneously score the dose-averaged LET (LETd), which makes fast biological dose (BD) estimates possible. To convert from LETd to BD, we used a linear relation based on cellular irradiation data. Given a thyroid patient with a 93 cc tumor volume, we created a 2-field IMPT plan in Eclipse (Varian Medical Systems). This plan was re-calculated with our MC to obtain the BD distribution. A second 5-field plan was made with our in-house optimizer, using pre-generated MC dose and LETd maps. Constraints were placed to maintain the target dose to within 25% of the prescription, while maximizing the BD. The plan optimization and the calculation of dose and LETd maps were performed on a GPU cluster. The conventional IMPT and biologically optimized plans were compared. Results: The mean target physical and biological doses from our biologically optimized plan were, respectively, 5% and 14% higher than those from the MC re-calculation of the IMPT plan. Dose sparing to critical structures in our plan was also improved. The biological optimization, including the initial dose and LETd map calculations, can be completed in a clinically viable time (∼30 minutes) on a cluster of 25 GPUs. Conclusion: Taking advantage of GPU acceleration, we created an MC-based, biologically optimized treatment plan for a thyroid patient. Compared to a standard IMPT plan, a 5% increase in the target's physical dose resulted in ∼3 times as much increase in the BD. Biological planning was thus effective in escalating the target BD.
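
    A linear LETd-to-BD conversion of the general form BD = D × (1 + c·LETd) is common in the proton literature; the sketch below shows that form only. The coefficient c and the dose/LETd values are placeholders, not the relation the study fit to its cellular irradiation data.

        # Sketch of a linear LETd-to-biological-dose conversion,
        # BD = D * (1 + c * LETd). The coefficient c is a placeholder
        # with an illustrative magnitude; the study fit its own relation.
        import numpy as np

        def biological_dose(dose_gy, letd_kev_um, c=0.04):
            # c in (keV/um)^-1; hypothetical value for illustration
            return dose_gy * (1.0 + c * letd_kev_um)

        dose = np.array([2.0, 2.0, 1.8])    # physical dose per voxel (Gy)
        letd = np.array([2.5, 6.0, 9.0])    # dose-averaged LET (keV/um)
        print(biological_dose(dose, letd))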

  3. SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Folkerts, M; University of California, San Diego, La Jolla, CA; Graves, Y

    Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute “delivered dose” from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application which is based on existing technologies (HTML5, Python, and Django). This tool interfaces with Python, C-code libraries, and command-line GPU applications to perform MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run an MC dose calculation. The resultant web app is powerful, easy to use, and able to re-compute both plan dose (from DICOM data) and delivered dose (from logfile data). Both dynalog and trajectorylog file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistical uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additionally, the user may upload delivery logfile data from the linac to compute a 'delivered dose' calculation and corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parsing, fluence map generation, CT image processing, GPU-based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file. The system takes both DICOM data and logfile data to compute plan dose and delivered dose, respectively. Conclusion: We successfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA.

  4. SU-E-T-503: IMRT Optimization Using Monte Carlo Dose Engine: The Effect of Statistical Uncertainty.

    PubMed

    Tian, Z; Jia, X; Graves, Y; Uribe-Sanchez, A; Jiang, S

    2012-06-01

    With the development of an ultra-fast GPU-based Monte Carlo (MC) dose engine, it becomes clinically realistic to compute the dose-deposition coefficients (DDC) for IMRT optimization using MC simulation. However, it is still time-consuming to compute the DDC with small statistical uncertainty. This work studies the effects of the statistical error in the DDC matrix on IMRT optimization. The MC-computed DDC matrices are simulated here by adding statistical uncertainties at a desired level to ones generated with a finite-size pencil beam algorithm. A statistical uncertainty model for MC dose calculation is employed. We adopt a penalty-based quadratic optimization model and a gradient descent method to optimize the fluence map, and then recalculate the corresponding actual dose distribution using the noise-free DDC matrix. The impacts of DDC noise are assessed in terms of the deviation of the resulting dose distributions. We have also used stochastic perturbation theory to theoretically estimate the statistical errors of dose distributions on a simplified optimization model. A head-and-neck case is used to investigate the perturbation to the IMRT plan due to MC's statistical uncertainty. The relative errors of the final dose distributions of the optimized IMRT are found to be much smaller than those in the DDC matrix, which is consistent with our theoretical estimation. When the history number is decreased from 10⁸ to 10⁶, the dose-volume histograms are still very similar to the error-free DVHs, while the error in the DDC is about 3.8%. The results illustrate that the statistical errors in the DDC matrix have a relatively small effect on IMRT optimization in the dose domain. This indicates we can use a relatively small number of histories to obtain the DDC matrix with MC simulation within a reasonable amount of time, without considerably compromising the accuracy of the optimized treatment plan. This work is supported by Varian Medical Systems through a Master Research Agreement. © 2012 American Association of Physicists in Medicine.
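
    The logic of this experiment, perturbing the DDC matrix, optimizing the fluence against the noisy matrix, and then re-evaluating with the noise-free matrix, can be sketched in a few lines. The problem sizes, Gaussian noise model, and quadratic objective below are simplifications assumed for illustration, not the study's setup.

        # Sketch: add statistical noise to a DDC matrix, optimize fluence
        # with a projected gradient descent on a quadratic penalty, then
        # evaluate the fluence with the noise-free matrix. Sizes and the
        # Gaussian noise model are illustrative simplifications.
        import numpy as np

        rng = np.random.default_rng(1)
        n_vox, n_beamlets = 400, 50
        D_true = rng.random((n_vox, n_beamlets)) * 0.1   # noise-free DDC
        d_rx = np.full(n_vox, 2.0)                       # prescribed doses

        def optimize(D, iters=2000, lr=1e-2):
            x = np.zeros(D.shape[1])                     # fluence weights
            for _ in range(iters):
                grad = D.T @ (D @ x - d_rx)              # quadratic gradient
                x = np.maximum(x - lr * grad, 0.0)       # keep x >= 0
            return x

        for rel_noise in (0.0, 0.038):                   # ~3.8% DDC error
            D_noisy = D_true * (1.0 + rel_noise
                                * rng.standard_normal(D_true.shape))
            x = optimize(D_noisy)
            dose = D_true @ x                            # noise-free evaluation
            err = np.abs(dose - d_rx).mean() / d_rx.mean()
            print(f"DDC noise {rel_noise:.1%} -> mean dose deviation {err:.2%}")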

  5. SU-F-T-74: Experimental Validation of Monaco Electron Monte Carlo Dose Calculation for Small Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varadhan; Way, S; Arentsen, L

    2016-06-15

    Purpose: To verify experimentally the accuracy of the Monaco (Elekta) electron Monte Carlo (eMC) algorithm in calculating small-field-size depth doses, monitor units and isodose distributions. Methods: Beam modeling of the eMC algorithm was performed for electron energies of 6, 9, 12, 15 and 18 MeV for an Elekta Infinity linac and all available applicator sizes (6, 10, 14, 20 and 25 cm cones). Electron cutouts of incrementally smaller field sizes (20, 40, 60 and 80% blocked from the open cone) were fabricated. Dose calculation was performed using a grid size smaller than one-tenth of the R80–20 electron distal falloff distance, and the number of particle histories was set at 500,000 per cm². Percent depth dose scans and beam profiles at dmax, d90 and d80 depths were measured for each cutout and energy with a Wellhoffer (IBA) Blue Phantom² scanning system and compared against eMC-calculated doses. Results: The measured doses and output factors of incrementally reduced cutout sizes (down to 3 cm diameter) agreed with eMC-calculated doses within ±2.5%. The profile comparisons at dmax, d90 and d80 depths and percent depth doses at reduced field sizes agreed within 2.5% or 2 mm. Conclusion: Our results indicate that the Monaco eMC algorithm can accurately predict depth doses, isodose distributions, and monitor units in a homogeneous water phantom for field sizes as small as 3.0 cm diameter for energies in the 6 to 18 MeV range at 100 cm SSD. Consequently, the old rule of thumb that approximates the limiting cutout size for an electron field from lateral scatter equilibrium (E (MeV)/2.5 in centimeters of water) does not apply to the Monaco eMC algorithm.

  6. Reactive Monte Carlo sampling with an ab initio potential

    NASA Astrophysics Data System (ADS)

    Leiding, Jeff; Coe, Joshua D.

    2016-05-01

    We present the first application of reactive Monte Carlo in a first-principles context. The algorithm samples in a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulation of NH₃ to those of ab initio molecular dynamics (AIMD). We find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the "rare-event" character of chemical reactions.

  7. SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.

    PubMed

    Liu, T; Ding, A; Xu, X

    2012-06-01

    To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices was used, containing 218×126×60 voxels, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of simulation tasks to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross-sections. Double-precision floating point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the MC GPU code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code. Scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.

  8. Dosimetry applications in GATE Monte Carlo toolkit.

    PubMed

    Papadimitroulas, Panagiotis

    2017-09-01

    Monte Carlo (MC) simulations are a well-established method for studying physical processes in medical physics. The purpose of this review is to present GATE dosimetry applications on diagnostic and therapeutic simulated protocols. There is a significant need for accurate quantification of the absorbed dose in several specific applications such as preclinical and pediatric studies. GATE is an open-source MC toolkit for simulating imaging, radiotherapy (RT) and dosimetry applications in a user-friendly environment, which is well validated and widely accepted by the scientific community. In RT applications, during treatment planning, it is essential to accurately assess the deposited energy and the absorbed dose per tissue/organ of interest, as well as the local statistical uncertainty. Several types of realistic dosimetric applications are described, including molecular imaging, radio-immunotherapy, radiotherapy and brachytherapy. GATE has been used efficiently in several applications, such as Dose Point Kernels, S-values and brachytherapy parameters, and has been compared against various MC codes that have been considered standard tools for decades. Furthermore, the presented studies show reliable modeling of particle beams when comparing experimental with simulated data. Examples of different dosimetric protocols are reported for individualized dosimetry and for simulations combining imaging and therapy dose monitoring, with the use of modern computational phantoms. Personalization of medical protocols can be achieved by combining GATE MC simulations with anthropomorphic computational models and clinical anatomical data. This is a review study, covering several dosimetric applications of GATE and the different tools used for modeling realistic clinical acquisitions with accurate dose assessment. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  9. SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamberlain, S; Roswell Park Cancer Institute, Buffalo, NY; French, S

    Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5 cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts was run (varying from 3 × 10⁶ to 3 × 10⁷); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10⁶ was established as the lower end of the range because it provided the minimal accuracy level. Results: As expected, an increase in particle count linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases the time necessary for each kernel dose calculation. Particle counts lower than 1 × 10⁶ have too large an error to output an accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.

  10. A stable 1D multigroup high-order low-order method

    DOE PAGES

    Yee, Ben Chung; Wollaber, Allan Benton; Haut, Terry Scot; ...

    2016-07-13

    The high-order low-order (HOLO) method is a recently developed moment-based acceleration scheme for solving time-dependent thermal radiative transfer problems, and has been shown to exhibit orders-of-magnitude speedups over traditional time-stepping schemes. However, a linear stability analysis by Haut et al. (Haut, T. S., Lowrie, R. B., Park, H., Rauenzahn, R. M., Wollaber, A. B. (2015). A linear stability analysis of the multigroup High-Order Low-Order (HOLO) method. In Proceedings of the Joint International Conference on Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo (MC) Method; Nashville, TN, April 19–23, 2015. American Nuclear Society.) revealed that the current formulation of the multigroup HOLO method was unstable in certain parameter regions. Since then, we have replaced the intensity-weighted opacity in the first angular moment equation of the low-order (LO) system with the Rosseland opacity. This change results in a modified HOLO method (HOLO-R) that is significantly more stable.

  11. Uncertainty propagation for statistical impact prediction of space debris

    NASA Astrophysics Data System (ADS)

    Hoogendoorn, R.; Mooij, E.; Geul, J.

    2018-01-01

    Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six-degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained because of the structure of the DPP data and the high dimensionality. The DPP method is therefore less suitable for practical uncontrolled entry problems, and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.

  12. Validation of a GPU-based Monte Carlo code (gPMC) for proton radiation therapy: clinical cases study.

    PubMed

    Giantsoudi, Drosoula; Schuemann, Jan; Jia, Xun; Dowdell, Stephen; Jiang, Steve; Paganetti, Harald

    2015-03-21

    Monte Carlo (MC) methods are recognized as the gold standard for dose calculation; however, they have not yet replaced analytical methods due to their lengthy calculation times. GPU-based applications allow MC dose calculations to be performed on time scales comparable to conventional analytical algorithms. This study focuses on validating our GPU-based MC code for proton dose calculation (gPMC) against an experimentally validated multi-purpose MC code (TOPAS) and comparing their performance for clinical patient cases. Clinical cases from five treatment sites were selected, covering the full range from very homogeneous patient geometries (liver) to patients with high geometrical complexity (air cavities and density heterogeneities in head-and-neck and lung patients), and from short beam range (breast) to large beam range (prostate). Both gPMC and TOPAS were used to calculate 3D dose distributions for all patients. Comparisons were performed based on target coverage indices (mean dose, V95, D98, D50, D02) and gamma index distributions. Dosimetric indices differed by less than 2% between TOPAS and gPMC dose distributions for most cases. Gamma index analysis with a 1%/1 mm criterion resulted in a passing rate of more than 94% of all patient voxels receiving more than 10% of the mean target dose, for all patients except the prostate cases. Although clinically insignificant, gPMC resulted in a systematic underestimation of target dose for prostate cases by 1-2% compared to TOPAS. Correspondingly, the gamma index analysis with the 1%/1 mm criterion failed for most beams for this site, while for a 2%/1 mm criterion passing rates of more than 94.6% of all patient voxels were observed. For the same initial number of simulated particles, the calculation time for a single beam for a typical head and neck patient plan decreased from 4 CPU hours per million particles (2.8-2.9 GHz Intel X5600) for TOPAS to 2.4 s per million particles (NVIDIA TESLA C2075) for gPMC. Excellent agreement was demonstrated between our fast GPU-based MC code (gPMC) and a previously extensively validated multi-purpose MC code (TOPAS) for a comprehensive set of clinical patient cases. This shows that MC dose calculations in proton therapy can be performed on time scales comparable to analytical algorithms, with accuracy comparable to state-of-the-art CPU-based MC codes.

  13. Verification measurements and clinical evaluation of the iPlan RT Monte Carlo dose algorithm for 6 MV photon energy

    NASA Astrophysics Data System (ADS)

    Petoukhova, A. L.; van Wingerden, K.; Wiggenraad, R. G. J.; van de Vaart, P. J. M.; van Egmond, J.; Franken, E. M.; van Santvoort, J. P. C.

    2010-08-01

    This study presents data for verification of the iPlan RT Monte Carlo (MC) dose algorithm (BrainLAB, Feldkirchen, Germany). MC calculations were compared with pencil beam (PB) calculations and verification measurements in phantoms with lung-equivalent material, air cavities or bone-equivalent material to mimic the head-and-neck and thorax regions, and in an Alderson anthropomorphic phantom. The dosimetric accuracy of MC for the micro-multileaf collimator (MLC) simulation was tested in a homogeneous phantom. All measurements were performed using an ionization chamber and Kodak EDR2 films with Novalis 6 MV photon beams. Dose distributions measured with film and calculated with MC in the homogeneous phantom are in excellent agreement for oval, C- and squiggle-shaped fields and for a clinical IMRT plan. For a field with completely closed MLC, MC is much closer to the experimental result than the PB calculations. For fields larger than the dimensions of the inhomogeneities, the MC calculations show excellent agreement (within 3%/1 mm) with the experimental data. MC calculations in the anthropomorphic phantom show good agreement with measurements for conformal beam plans and reasonable agreement for dynamic conformal arc and IMRT plans. For 6 head-and-neck and 15 lung patients, a comparison of the MC plan with the PB plan was performed. Our results demonstrate that MC is able to accurately predict the dose in the presence of inhomogeneities typical of the head-and-neck and thorax regions with reasonable calculation times (5-20 min). Lateral electron transport was well reproduced in the MC calculations. We are planning to implement MC calculations for head-and-neck and lung cancer patients.

  14. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE PAGES

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptops to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction, such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  15. Applying Monte-Carlo simulations to optimize an inelastic neutron scattering system for soil carbon analysis

    USDA-ARS?s Scientific Manuscript database

    Computer Monte-Carlo (MC) simulations (Geant4) of neutron propagation and acquisition of the gamma response from soil samples were applied to evaluate INS system performance characteristics [sensitivity, minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...

  16. Bayesian inversion using a geologically realistic and discrete model space

    NASA Astrophysics Data System (ADS)

    Jaeggli, C.; Julien, S.; Renard, P.

    2017-12-01

    Since the early days of groundwater modeling, inverse methods have played a crucial role. Many research and engineering groups aim to infer extensive knowledge of aquifer parameters from a sparse set of observations. Despite decades of dedicated research on this topic, there are still several major issues to be solved. In the hydrogeological framework, one is often confronted with underground structures that present very sharp contrasts in geophysical properties. In particular, subsoil structures such as karst conduits, channels, faults, or lenses strongly influence groundwater flow and transport behavior of the subsurface. For this reason it can be essential to identify their location and shape very precisely. Unfortunately, when inverse methods are specially trained to consider such complex features, their computational effort often becomes unaffordably high. The following work is an attempt to solve this dilemma. We present a new method that is, in some sense, a compromise between the ergodicity of Markov chain Monte Carlo (McMC) methods and the efficient handling of data by ensemble-based Kalman filters. The realistic and complex random fields are generated by a Multiple-Point Statistics (MPS) tool. Nonetheless, the method is applicable with any conditional geostatistical simulation tool. Furthermore, the algorithm is independent of any parametrization, which becomes most important when two parametric systems are equivalent (permeability and resistivity, speed and slowness, etc.). When compared to two existing McMC schemes, the computational effort was divided by a factor of 12.

  17. Scenario generation for stochastic optimization problems via the sparse grid method

    DOE PAGES

    Chen, Michael; Mehrotra, Sanjay; Papp, David

    2015-04-19

    We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. The method thus scales well with the dimension of the distribution, especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.
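
    To make the MC-versus-QMC contrast concrete, a minimal sketch using scipy.stats.qmc on a smooth toy integrand follows. The dimension, integrand, and sample size are illustrative only and are unrelated to the utility maximization problems of the study.

        # Plain MC vs. scrambled Sobol' QMC on a smooth integrand over
        # [0,1]^d, where the exact value is known in closed form.
        import numpy as np
        from scipy.stats import qmc

        dim, n = 8, 1 << 12                         # 4096 scenarios
        f = lambda u: np.exp(u.sum(axis=1) / dim)   # smooth test integrand
        exact = (dim * (np.exp(1.0 / dim) - 1.0)) ** dim

        rng = np.random.default_rng(0)
        mc_est = f(rng.random((n, dim))).mean()     # plain Monte Carlo

        sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
        qmc_est = f(sobol.random(n)).mean()         # quasi-Monte Carlo

        print(f"MC error  {abs(mc_est - exact):.2e}")
        print(f"QMC error {abs(qmc_est - exact):.2e}")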

  18. SU-F-19A-05: Experimental and Monte Carlo Characterization of the 1 cm CivaString ¹⁰³Pd Brachytherapy Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J; Micka, J; Culberson, W

    Purpose: To determine the in-air azimuthal anisotropy and in-water dose distribution for the 1 cm length of the CivaString ¹⁰³Pd brachytherapy source through measurements and Monte Carlo (MC) simulations. American Association of Physicists in Medicine Task Group No. 43 (TG-43) dosimetry parameters were also determined for this source. Methods: The in-air azimuthal anisotropy of the source was measured with a NaI scintillation detector and simulated with the MCNP5 radiation transport code. Measured and simulated results were normalized to their respective mean values and compared. The TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function for this source were determined from LiF:Mg,Ti thermoluminescent dosimeter (TLD) measurements and MC simulations. The impact of ¹⁰³Pd well-loading variability on the in-water dose distribution was investigated using MC simulations by comparing the dose distribution for a source model with four wells of equal strength to that for a source model with the strengths increased by 1% for two of the four wells. Results: NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy showed that ≥95% of the normalized data were within 1.2% of the mean value. TLD measurements and MC simulations of the TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function agreed to within the experimental TLD uncertainties (k=2). MC simulations showed that a 1% variability in ¹⁰³Pd well-loading resulted in changes of <0.1%, <0.1%, and <0.3% in the TG-43 dose-rate constant, radial dose distribution, and polar dose distribution, respectively. Conclusion: The CivaString source has a high degree of azimuthal symmetry, as indicated by the NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy. TG-43 dosimetry parameters for this source were determined from TLD measurements and MC simulations. ¹⁰³Pd well-loading variability results in minimal variations in the in-water dose distribution according to MC simulations. This work was partially supported by CivaTech Oncology, Inc. through an educational grant for Joshua Reed, John Micka, Wesley Culberson, and Larry DeWerd and through research support for Mark Rivard.

  19. A virtual source model for Monte Carlo simulation of helical tomotherapy.

    PubMed

    Yuan, Jiankui; Rong, Yi; Chen, Quan

    2015-01-08

    The purpose of this study was to present a Monte Carlo (MC) simulation method based on a virtual source, jaw, and MLC model to calculate dose in the patient for helical tomotherapy without the need to calculate phase-space files (PSFs). Current studies on tomotherapy MC simulation adopt a full MC model, which includes extensive modeling of the radiation source, primary and secondary jaws, and multileaf collimator (MLC). In the full MC model, PSFs need to be created at different scoring planes to facilitate the patient dose calculations. In the present work, the virtual source model (VSM) we established was based on the gold standard beam data of a tomotherapy unit, which can be exported from the treatment planning station (TPS). The TPS-generated sinograms were extracted from the archived patient XML (eXtensible Markup Language) files. The fluence map for the MC sampling was created by incorporating the percentage leaf open time (LOT) with the leaf filter, jaw penumbra, and leaf latency obtained from the sinogram files. The VSM was validated for various geometry setups and clinical situations involving heterogeneous media and delivery quality assurance (DQA) cases. An agreement of < 1% was obtained between the measured and simulated results for percent depth doses (PDDs) and open beam profiles for all three jaw settings in the VSM commissioning. The accuracy of the VSM leaf filter model was verified by comparing the measured and simulated results for a Picket Fence pattern. An agreement of < 2% was achieved between the presented VSM and a published full MC model for heterogeneous phantoms. For complex clinical head and neck (HN) cases, the VSM-based MC simulation of DQA plans agreed with the film measurement with 98% of planar dose pixels passing the 2%/2 mm gamma criteria. For patient treatment plans, results showed comparable dose-volume histograms (DVHs) for planning target volumes (PTVs) and organs at risk (OARs). Deviations observed in this study were consistent with the literature. The VSM-based MC simulation approach can be feasibly built from the gold standard beam model of a tomotherapy unit. The accuracy of the VSM was validated against measurements in homogeneous media, as well as against a published full MC model in heterogeneous media.

  20. A virtual source model for Monte Carlo simulation of helical tomotherapy

    PubMed Central

    Yuan, Jiankui; Rong, Yi

    2015-01-01

    The purpose of this study was to present a Monte Carlo (MC) simulation method based on a virtual source, jaw, and MLC model to calculate dose in the patient for helical tomotherapy without the need to calculate phase-space files (PSFs). Current studies on tomotherapy MC simulation adopt a full MC model, which includes extensive modeling of the radiation source, primary and secondary jaws, and multileaf collimator (MLC). In the full MC model, PSFs need to be created at different scoring planes to facilitate the patient dose calculations. In the present work, the virtual source model (VSM) we established was based on the gold standard beam data of a tomotherapy unit, which can be exported from the treatment planning station (TPS). The TPS-generated sinograms were extracted from the archived patient XML (eXtensible Markup Language) files. The fluence map for the MC sampling was created by incorporating the percentage leaf open time (LOT) with the leaf filter, jaw penumbra, and leaf latency obtained from the sinogram files. The VSM was validated for various geometry setups and clinical situations involving heterogeneous media and delivery quality assurance (DQA) cases. An agreement of <1% was obtained between the measured and simulated results for percent depth doses (PDDs) and open beam profiles for all three jaw settings in the VSM commissioning. The accuracy of the VSM leaf filter model was verified by comparing the measured and simulated results for a Picket Fence pattern. An agreement of <2% was achieved between the presented VSM and a published full MC model for heterogeneous phantoms. For complex clinical head and neck (HN) cases, the VSM-based MC simulation of DQA plans agreed with the film measurement with 98% of planar dose pixels passing the 2%/2 mm gamma criteria. For patient treatment plans, results showed comparable dose-volume histograms (DVHs) for planning target volumes (PTVs) and organs at risk (OARs). Deviations observed in this study were consistent with the literature. The VSM-based MC simulation approach can be feasibly built from the gold standard beam model of a tomotherapy unit. The accuracy of the VSM was validated against measurements in homogeneous media, as well as against a published full MC model in heterogeneous media. PACS numbers: 87.53.-j, 87.55.K- PMID:25679157

  1. Singular Spectrum Analysis for Astronomical Time Series: Constructing a Parsimonious Hypothesis Test

    NASA Astrophysics Data System (ADS)

    Greco, G.; Kondrashov, D.; Kobayashi, S.; Ghil, M.; Branchesi, M.; Guidorzi, C.; Stratta, G.; Ciszak, M.; Marino, F.; Ortolan, A.

    We present a data-adaptive spectral method, Monte Carlo Singular Spectrum Analysis (MC-SSA), and its modification to tackle astrophysical problems. Through numerical simulations we show the ability of MC-SSA to deal with 1/f^β power-law noise affected by photon counting statistics. Such a noise process is simulated by a first-order autoregressive AR(1) process corrupted by intrinsic Poisson noise. In doing so, we statistically estimate a basic stochastic variation of the source and the corresponding fluctuations due to the quantum nature of light. In addition, the MC-SSA test retains its effectiveness even when a significant percentage of the signal falls below a certain level of detection, e.g., caused by the instrument sensitivity. The parsimonious approach presented here may be broadly applied, from the search for extrasolar planets to the extraction of low-intensity coherent phenomena probably hidden in high energy transients.
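
    A sketch of the null model described above, an AR(1) surrogate corrupted by Poisson counting noise, of the kind an MC-SSA test compares data against, follows. The exponential intensity transform (used here to keep Poisson rates positive) and all parameter values are assumptions of the sketch, not the paper's choices.

        # AR(1) surrogate (a proxy for red/1/f^beta-like background)
        # corrupted by Poisson photon-counting noise. MC-SSA compares
        # data eigenvalues against an ensemble of such surrogates.
        import numpy as np

        def ar1_poisson_surrogate(n, phi=0.7, sigma=1.0, mean_rate=50.0,
                                  rng=None):
            rng = rng or np.random.default_rng()
            x = np.zeros(n)
            for t in range(1, n):                  # AR(1) recursion
                x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
            rate = mean_rate * np.exp(x / x.std())  # positive intensity
            return rng.poisson(rate)               # photon counts per bin

        ensemble = np.array([
            ar1_poisson_surrogate(1024, rng=np.random.default_rng(s))
            for s in range(100)])                  # surrogate ensemble
        print(ensemble.shape, ensemble.mean())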

  2. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George

    2017-09-01

    Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e., the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling and examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and an Nvidia GPU using ARCHER, and to explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate and breast plans took about 173 s and 73 s, respectively, with 1% statistical error.

  3. Newly developed photon-cell interactive Monte Carlo (pciMC) simulation for non-invasive and continuous diagnosis of blood during extracorporeal circulation support

    NASA Astrophysics Data System (ADS)

    Sakota, Daisuke; Takatani, Setsuo

    2011-07-01

    We have sought a non-invasive means of diagnosing blood during extracorporeal circulation support. To achieve this goal, we developed a new photon-cell interactive Monte Carlo (pciMC) model for optical propagation through blood. The pciMC explicitly describes the interaction of photons with 3-dimensional biconcave red blood cells (RBCs). Scattering is described by a microscopic RBC boundary condition based on geometric optics. Using pciMC, we modeled how the RBCs inside the extracorporeal circuit are oriented by the blood flow. The RBCs' orientation was defined by their long axes being directed toward the center of the circulation tube, while the RBCs were allowed to rotate randomly about the long-axis direction. As a result, as the flow rate increased, the orientation rate increased and converged to approximately 22% at a flow rate of 0.5 L/min and above. Finally, using this model, pciMC non-invasively predicted hematocrit and hemoglobin with accuracies of 0.84 ± 0.82 [Hct%] and 0.42 ± 0.28 [g/dL], respectively, against measurements by a blood gas analyzer.

  4. A Detailed FLUKA-2005 Monte Carlo Simulation for the ATIC Detector

    NASA Technical Reports Server (NTRS)

    Gunasingha, R. M.; Fazely, A. R.; Adams, J. H.; Ahn, H. S.; Bashindzhagyan, G. L.; Batkov, K. E.; Chang, J.; Christl, M.; Ganel, O.; Guzik, T. G.

    2006-01-01

    We have performed a detailed Monte Carlo (MC) calculation for the Advanced Thin Ionization Calorimeter (ATIC) detector using the MC code FLUKA-2005, which is capable of simulating particles up to 10 PeV. The ATIC detector has completed two successful balloon flights from McMurdo, Antarctica, lasting a total of more than 35 days. ATIC is designed as a multiple, long-duration balloon-flight investigation of the cosmic ray spectra from below 50 GeV to near 100 TeV total energy, using a fully active bismuth germanate (BGO) calorimeter. It is equipped with a large mosaic of silicon detector pixels capable of charge identification; as a particle tracking system, three projective layers of x-y scintillator hodoscopes were employed above, in the middle of, and below a 0.75 nuclear-interaction-length graphite target. Our calculations are part of an analysis package of both the A- and energy-dependences of different nuclei interacting with the ATIC detector. The MC simulates the responses of different components of the detector, such as the Si matrix, the scintillator hodoscopes and the BGO calorimeter, to various nuclei. We also show comparisons of the FLUKA-2005 MC calculations with a GEANT calculation and data for protons, He and CNO.

  5. Study on photon transport problem based on the platform of molecular optical simulation environment.

    PubMed

    Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie

    2010-01-01

    As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since physical experiments are usually complicated and expensive, research methods based on simulation platforms have received extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study photon transport problems in both biological tissues and free space using MOSE. The results are compared with Tracepro, the simplified spherical harmonics method (SPn), and physical measurement to verify the performance of our method in terms of both accuracy and efficiency.

  6. Study on Photon Transport Problem Based on the Platform of Molecular Optical Simulation Environment

    PubMed Central

    Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie

    2010-01-01

    As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since physical experiments are usually complicated and expensive, research methods based on simulation platforms have received extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study photon transport problems in both biological tissues and free space using MOSE. The results are compared with Tracepro, the simplified spherical harmonics method (SPn), and physical measurement to verify the performance of our method in terms of both accuracy and efficiency. PMID:20445737

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yee, Ben Chung; Wollaber, Allan Benton; Haut, Terry Scot

    The high-order low-order (HOLO) method is a recently developed moment-based acceleration scheme for solving time-dependent thermal radiative transfer problems, and has been shown to exhibit orders-of-magnitude speedups over traditional time-stepping schemes. However, a linear stability analysis by Haut et al. (Haut, T. S., Lowrie, R. B., Park, H., Rauenzahn, R. M., Wollaber, A. B. (2015). A linear stability analysis of the multigroup High-Order Low-Order (HOLO) method. In Proceedings of the Joint International Conference on Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo (MC) Method; Nashville, TN, April 19–23, 2015. American Nuclear Society.) revealed that the current formulation of the multigroup HOLO method was unstable in certain parameter regions. Since then, we have replaced the intensity-weighted opacity in the first angular moment equation of the low-order (LO) system with the Rosseland opacity. This change results in a modified HOLO method (HOLO-R) that is significantly more stable.

  8. Atomic mean-square displacement of a solid: A Green's-function approach

    NASA Astrophysics Data System (ADS)

    Shukla, R. C.; Hübschle, Hermann

    1989-07-01

    We have presented a Green's-function method for calculating the atomic mean-square displacement (MSD) of a solid. The method effectively sums a class of all anharmonic contributions to the MSD. From the point of view of perturbation theory (PT), our expression for the MSD includes the lowest-order (λ²) PT contributions (cubic and quartic) with correct numerical coefficients. The numerical results obtained by this method in the high-temperature limit for an fcc nearest-neighbor Lennard-Jones solid model are in excellent agreement with the Monte Carlo (MC) method for the same model over the entire temperature range of the solid. Highly accurate results for the order-λ² PT contributions to the MSD are obtained by eliminating the uncertainty in the convergence of the cubic contributions in the earlier work of Heiser, Shukla, and Cowly; they are now in much better agreement with the MC results but still inferior to the Green's-function method at the highest temperature.

  9. SU-F-T-371: Development of a Linac Monte Carlo Model to Calculate Surface Dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prajapati, S; Yan, Y; Gifford, K

    2016-06-15

    Purpose: To generate and validate a linac Monte Carlo (MC) model for surface dose prediction. Methods: BEAMnrc V4-2.4.0 was used to model 6 and 18 MV photon beams for a commercially available linac. DOSXYZnrc V4-2.4.0 calculated 3D dose distributions in water. Percent depth dose (PDD) curves and beam profiles were extracted for comparison to measured data. Surface dose and dose at depths in the buildup region were measured with radiochromic film at 100 cm SSD for 4 × 4 cm² and 10 × 10 cm² collimator settings for open and MLC-collimated fields. For the 6 MV beam, films were placed at depths ranging from 0.015 cm to 2 cm, and for 18 MV, from 0.015 cm to 3.5 cm, in Solid Water™. Films were calibrated for both photon energies at their respective dmax. PDDs and profiles were extracted from the film and compared to the MC data. The MC model was adjusted to match the measured PDDs and profiles. Results: For the 6 MV beam, the mean error (ME) in PDD between film and MC for open fields was 1.9%, whereas it was 2.4% for MLC fields. For the 18 MV beam, the ME in PDD for open fields was 2% and was 3.5% for MLC fields. For the 6 MV beam, the average root mean square (RMS) deviation for the central 80% of the beam profile for open fields was 1.5%, whereas it was 1.6% for MLC fields. For the 18 MV beam, the maximum RMS for open fields was 3%, and was 3.1% for MLC fields. Conclusion: The MC model of the linac agreed to within 4% of film measurements for depths ranging from the surface to dmax. Therefore, the MC linac model can predict surface dose for clinical applications. Future work will focus on adjusting the linac MC model to reduce the RMS error and improve accuracy.
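
    The two comparison metrics quoted above, mean error between film and MC depth-dose curves and RMS deviation over the central 80% of a profile, can be sketched directly. The arrays below are synthetic stand-ins; the normalization to the curve maximum is an assumption of the sketch.

        # Mean percent error between film and MC depth-dose curves, and
        # RMS deviation over the central 80% of a beam profile. Synthetic
        # data; normalization convention is an assumption.
        import numpy as np

        def mean_error_pct(film, mc):
            return float(np.mean(np.abs(film - mc) / film.max())) * 100.0

        def central_rms_pct(film, mc, central_frac=0.8):
            n = len(film)
            lo = int(n * (1 - central_frac) / 2)
            sl = slice(lo, n - lo)                 # central 80% of profile
            diff = (film[sl] - mc[sl]) / film.max()
            return float(np.sqrt(np.mean(diff ** 2))) * 100.0

        rng = np.random.default_rng(0)
        film = np.exp(-np.linspace(0, 3, 50))      # synthetic PDD-like curve
        mc = film * (1 + 0.02 * rng.standard_normal(50))
        print(mean_error_pct(film, mc), central_rms_pct(film, mc))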

  10. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-07

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and degrade the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet, yet this is not the optimal use of MC in this problem. In fact, some beamlets have very small intensities after the plan optimization problem is solved; for those beamlets, fewer particles could be used in the dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for the beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration. We modified a GPU-based MC dose engine to allow simultaneous computation of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam with the summed beamlet doses in this beam in an inhomogeneous phantom; agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization scheme in a lung IMRT case. It was found that the conventional scheme required 10⁶ particles per beamlet to achieve an optimization result within 3% difference in fluence map and 1% difference in dose from the ground truth. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10⁵ particles per beamlet. Correspondingly, the computation time, including both MC dose calculations and plan optimizations, was reduced by a factor of 4.4, from 494 to 113 s, using only one GPU card.
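
    A minimal sketch of the iterative scheme described above: the per-beamlet particle budget is rescaled in proportion to the intensities returned by the previous optimization step. The dose engine and optimizer below are toy stand-ins, not the authors' GPU code.

```python
import numpy as np

def allocate_particles(intensities, budget, floor=1000):
    """Split the particle budget across beamlets in proportion to their
    current intensities, with a small floor so no beamlet is starved."""
    w = np.clip(intensities, 0.0, None)
    w = w / w.sum() if w.sum() > 0 else np.full(w.size, 1.0 / w.size)
    return np.maximum((w * budget).astype(int), floor)

def mc_beamlet_dose(n_particles, n_voxels=50, seed=0):
    """Toy stand-in for the GPU dose engine: per-beamlet dose rows whose
    statistical noise shrinks like 1/sqrt(N)."""
    rng = np.random.default_rng(seed)
    clean = rng.random((n_particles.size, n_voxels))
    noise = rng.standard_normal(clean.shape) / np.sqrt(n_particles)[:, None]
    return clean * (1.0 + noise)

def optimize(dose_matrix, target):
    """Toy stand-in for the plan optimizer (clipped least squares)."""
    x, *_ = np.linalg.lstsq(dose_matrix.T, target, rcond=None)
    return np.clip(x, 0.0, None)

n_beamlets, budget = 200, int(2e7)
target = np.ones(50)                     # prescribed dose (arbitrary units)
intensity = np.ones(n_beamlets)
for it in range(5):                      # alternate MC dose and optimization
    n = allocate_particles(intensity, budget)
    intensity = optimize(mc_beamlet_dose(n), target)
```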

  11. Relative frequencies of constrained events in stochastic processes: An analytical approach.

    PubMed

    Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of the PDFs are difficult to specify in advance. If the shapes of the PDFs are known, different optimization schemes can be applied to experimental data in order to evaluate the probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding and often not feasible. We show that, in the case where the experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of the optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10⁴). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in the controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, which makes the method useful for various applications.
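
    The Monte Carlo core that the authors propose to replace is, in essence, the stochastic simulation algorithm; a minimal direct-method (Gillespie) sketch for a toy birth-death process is shown below, with hypothetical rate constants.

```python
import numpy as np

def gillespie_birth_death(k_birth=1.0, k_death=0.1, n0=0, t_end=100.0, seed=1):
    """Direct-method SSA: interevent times are exponential in the total
    propensity; the event type is drawn from the relative propensities."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_end:
        a_birth, a_death = k_birth, k_death * n
        a_total = a_birth + a_death
        t += rng.exponential(1.0 / a_total)   # waiting time to next event
        if rng.random() < a_birth / a_total:
            n += 1                            # birth event fired
        else:
            n -= 1                            # death event fired
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

t, n = gillespie_birth_death()
print(n[-1])  # fluctuates around k_birth / k_death = 10
```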

  12. SU-F-T-610: Comparison of Output Factors for Small Radiation Fields Used in SBRT Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, R; Eldib, A; Li, J

    2016-06-15

    Purpose: In order to fundamentally understand our previous dose verification results between measurements and treatment planning system (TPS) calculations for SBRT plans with different sized targets, the goal of the present work was to compare output factors for small fields measured using EDR2 films with TPS calculations and Monte Carlo (MC) simulations. Methods: A 6 MV beam was delivered to EDR2 films for each of the following field sizes: 1×1 cm², 1.5×1.5 cm², 2×2 cm², 3×3 cm², 4×4 cm², 5×5 cm² and 10×10 cm². The films were developed in a film processor, then scanned with a Vidar VXR-16 scanner and analyzed using RIT113 version 6.1. A standard calibration curve was obtained with the 6 MV beam and used to get absolute dose for the measured field sizes. Similar plans for all the field sizes mentioned above were generated using Eclipse with the Analytical Anisotropic Algorithm. Similarly, MC simulations were carried out using MCSIM, an in-house MC code, for the different field sizes. Output factors normalized to the 10×10 cm² reference field were calculated for the different field sizes in all three cases and compared. Results: For field sizes ranging from 1×1 cm² to 2×2 cm², the differences in output factors between measurements (films), TPS and MC simulations were within 0.22%. For field sizes ranging from 3×3 cm² to 5×5 cm², the differences in output factors were within 0.10%. Conclusion: No clinically significant difference was found in the output factors for the different field sizes obtained from films, TPS and MC simulations. Our results showed that the output factors are predicted accurately by the TPS when compared with actual measurements and the superior Monte Carlo dose calculation method. This study will help us understand our previously obtained dose verification results for small fields used in SBRT treatment.

  13. SU-G-201-13: Investigation of Dose Variation Induced by HDR Ir-192 Source Global Shift Within the Varian Ring Applicator Using Monte Carlo Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y; Cai, J; Meltsner, S

    2016-06-15

    Purpose: The Varian tandem and ring applicators are used to deliver HDR Ir-192 brachytherapy for cervical cancer. The source path within the ring is hard to predict due to the large interior ring lumen. Some studies showed the source could deviate from planned positions by several millimeters, while other studies demonstrated minimal dosimetric impact. A global shift can be applied to limit the effect of positioning offsets. The purpose of this study was to assess the necessity of implementing a global source shift using Monte Carlo (MC) simulations. Methods: The MCNP5 radiation transport code was used for all MC simulations. To accommodate TG-186 guidelines and eliminate inter-source attenuation, a BrachyVision plan with 10 dwell positions (0.5 cm step sizes) was simulated as the summation of 10 individual sources with equal dwell times. To simplify the study, the tandem was also excluded from the MC model. Global shifts of ±0.1, ±0.3, and ±0.5 cm were then simulated, distal and proximal from the reference positions. Dose was scored in water for all MC simulations and was normalized to 100% at the normalization point 0.5 cm from the cap in the ring plane. For dose comparison, Point A was 2 cm caudal from the buildup cap and 2 cm lateral on either side of the ring axis. With seventy simulations, 10⁸ photon histories gave statistical uncertainties (k=1) <2% for (0.1 cm)³ voxels. Results: Compared to no global shift, average Point A doses were 0.0%, 0.4%, and 2.2% higher for distal global shifts, and 0.4%, 2.8%, and 5.1% higher for proximal global shifts, respectively. The MC Point A doses differed by <1% when compared to BrachyVision. Conclusion: Dose variations were not substantial for ±0.3 cm global shifts, which are common in clinical practice.

  14. SU-E-T-58: A Novel Monte Carlo Photon Transport Simulation Scheme and Its Application in Cone Beam CT Projection Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Tian, Z

    Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems, but low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. In cone beam CT (CBCT) projection simulation, for example, a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. The Metropolis-Hastings algorithm was employed to accept or reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package, gMMC, on GPU with this new scheme implemented. The performance of gMMC was tested in a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR to within a relative difference of 3%. It took 3.1 h for gMCDRR to simulate 7.8e11 photons and 246.5 s for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by the new path-by-path simulation scheme, in which all the computation is spent on photons contributing to the detector signal. Conclusion: We proposed a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.
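
    The accept/reject step at the heart of the path-by-path scheme is a standard Metropolis-Hastings update. The generic sketch below runs it on a toy one-dimensional target, with the path proposal abstracted into a proposal function; it illustrates the acceptance rule only, not gMMC itself.

```python
import numpy as np

def metropolis_hastings(log_p, propose, log_q, x0, n_steps, seed=0):
    """Generic MH chain: accept a proposal y with probability
    min(1, p(y) q(x|y) / (p(x) q(y|x))), preserving relative probabilities."""
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        y = propose(x, rng)
        log_alpha = log_p(y) - log_p(x) + log_q(x, y) - log_q(y, x)
        if np.log(rng.random()) < log_alpha:
            x = y                          # accept the sampled path/state
        samples.append(x)
    return np.array(samples)

# Toy target standing in for the path weight; symmetric random-walk proposal.
log_p = lambda x: -0.5 * x**2              # standard normal, up to a constant
propose = lambda x, rng: x + rng.normal(0.0, 0.5)
log_q = lambda a, b: 0.0                   # symmetric: cancels in the ratio
s = metropolis_hastings(log_p, propose, log_q, x0=0.0, n_steps=20000)
print(s.mean(), s.var())                   # approximately 0 and 1
```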

  15. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over the past two decades, the Monte Carlo technique has become a gold standard in the simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general-purpose computing, which allows the execution of a wide range of applications without substantial code modification. We present a technical approach to porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator reduces the computational time of the MC simulation and yields a speed-up comparable to a GPU. We demonstrate the performance of the developed code for the simulation of light transport in the human head and the determination of the measurement volume in near-infrared spectroscopy brain sensing.

  16. Performance of two commercial electron beam algorithms over regions close to the lung-mediastinum interface, against Monte Carlo simulation and point dosimetry in virtual and anthropomorphic phantoms.

    PubMed

    Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R

    2014-03-01

    Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom, and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, the results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of the dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart.

  17. MC EMiNEM maps the interaction landscape of the Mediator.

    PubMed

    Niederberger, Theresa; Etzold, Stefanie; Lidschreiber, Michael; Maier, Kerstin C; Martin, Dietmar E; Fröhlich, Holger; Cramer, Patrick; Tresch, Achim

    2012-01-01

    The Mediator is a highly conserved, large multiprotein complex that is essentially involved in the regulation of eukaryotic mRNA transcription. It acts as a general transcription factor by integrating regulatory signals from gene-specific activators or repressors and conveying them to RNA polymerase II. The internal network of interactions between Mediator subunits that conveys these signals is largely unknown. Here, we introduce MC EMiNEM, a novel method for the retrieval of functional dependencies between proteins that have pleiotropic effects on mRNA transcription. MC EMiNEM is based on Nested Effects Models (NEMs), a class of probabilistic graphical models that extends the idea of hierarchical clustering. It combines mode-hopping Monte Carlo (MC) sampling with an Expectation-Maximization (EM) algorithm for NEMs to increase sensitivity compared to existing methods. A meta-analysis of four Mediator perturbation studies in Saccharomyces cerevisiae, three of which are unpublished, provides new insight into the Mediator signaling network. In addition to the known modular organization of the Mediator subunits, MC EMiNEM reveals a hierarchical ordering of its internal information flow, which is putatively transmitted through structural changes within the complex. We identify the N-terminus of Med7 as a peripheral entity, entailing only local structural changes upon perturbation, while the C-terminus of Med7 and Med19 appear to play a central role. MC EMiNEM associates Mediator subunits with the most directly affected genes, which, in conjunction with gene set enrichment analysis, allows us to construct an interaction map of Mediator subunits and transcription factors.

  18. Multiscale modelling of precipitation in concentrated alloys: from atomistic Monte Carlo simulations to cluster dynamics. I. Thermodynamics

    NASA Astrophysics Data System (ADS)

    Lépinoux, J.; Sigli, C.

    2018-01-01

    In a recent paper, the authors showed how the cluster free energies are constrained by the coagulation probability, and explained various anomalies observed during precipitation kinetics in concentrated alloys. This coagulation probability proved too complex a function to be accurately predicted from the cluster distribution alone in Cluster Dynamics (CD). Using atomistic Monte Carlo (MC) simulations, it is shown that during a transformation at constant temperature, after a short transient regime, the transformation occurs at quasi-equilibrium. It is proposed to use MC simulations until the system quasi-equilibrates, and then to switch to CD, which is mean-field but, unlike MC, not limited by a box size. In this paper, we explain how to take into account the information available before the quasi-equilibrium state in order to establish guidelines for safely predicting the cluster free energies.

  19. The specific purpose Monte Carlo code McENL for simulating the response of epithermal neutron lifetime well logging tools

    NASA Astrophysics Data System (ADS)

    Prettyman, T. H.; Gardner, R. P.; Verghese, K.

    1993-08-01

    A new specific purpose Monte Carlo code called McENL for modeling the time response of epithermal neutron lifetime tools is described. The weight windows technique, employing splitting and Russian roulette, is used with an automated importance function based on the solution of an adjoint diffusion model to improve the code efficiency. Complete composition and density correlated sampling is also included in the code, and can be used to study the effect on tool response of small variations in the formation, borehole, or logging tool composition and density. An illustration of the latter application is given for the density of a thermal neutron filter. McENL was benchmarked against test-pit data for the Mobil pulsed neutron porosity tool and was found to be very accurate. Results of the experimental validation and details of code performance are presented.
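
    The splitting and Russian roulette moves used by the weight windows technique can be sketched as below; the window bounds are hypothetical, and in McENL they would come from the adjoint-diffusion importance function rather than fixed constants.

```python
import numpy as np

def apply_weight_window(weight, w_low, w_high, rng):
    """Return the list of particles (weights) surviving the weight-window
    check: split heavy particles, play Russian roulette with light ones."""
    if weight > w_high:                        # splitting
        n = int(np.ceil(weight / w_high))
        return [weight / n] * n
    if weight < w_low:                         # Russian roulette
        w_survive = 0.5 * (w_low + w_high)
        if rng.random() < weight / w_survive:
            return [w_survive]                 # survives with boosted weight
        return []                              # killed; expectation preserved
    return [weight]                            # inside the window: unchanged

rng = np.random.default_rng(0)
for w in (5.0, 0.01, 0.5):
    print(w, "->", apply_weight_window(w, w_low=0.25, w_high=2.0, rng=rng))
```

    Both moves leave the expected total weight unchanged, which is why they reduce variance without biasing the tally.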

  20. Analysis of dense-medium light scattering with applications to corneal tissue: experiments and Monte Carlo simulations.

    PubMed

    Kim, K B; Shanyfelt, L M; Hahn, D W

    2006-01-01

    Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed that makes use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations is reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantifying dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, as well as animal studies, will be necessary.

  1. Deviation from equilibrium conditions in molecular dynamic simulations of homogeneous nucleation.

    PubMed

    Halonen, Roope; Zapadinsky, Evgeni; Vehkamäki, Hanna

    2018-04-28

    We present a comparison between Monte Carlo (MC) results for the homogeneous vapour-liquid nucleation of Lennard-Jones clusters and previously published values from molecular dynamics (MD) simulations. Both the MC and MD methods sample real cluster configuration distributions. In MD simulations, the extent of the temperature fluctuation is usually controlled with an artificial thermostat rather than with a more realistic carrier gas. In this study, not only the primarily used velocity-scaling thermostat is considered, but also the Nosé-Hoover, Berendsen, and stochastic Langevin thermostat methods are covered. The nucleation rates based on a kinetic scheme and the canonical MC calculation serve as a point of reference since they by definition describe an equilibrated system. The studied temperature range is from T = 0.3 to 0.65 ϵ/k. The kinetic scheme reproduces well the isothermal nucleation rates obtained by Wedekind et al. [J. Chem. Phys. 127, 064501 (2007)] using MD simulations with carrier gas. The nucleation rates obtained by artificially thermostatted MD simulations are consistently lower than the reference nucleation rates based on the MC calculations. The discrepancy increases up to several orders of magnitude as the density of the nucleating vapour decreases. At low temperatures, the difference from the MC-based reference nucleation rates in some cases exceeds the maximal nonisothermal effect predicted by the classical theory of Feder et al. [Adv. Phys. 15, 111 (1966)].

  2. Development Optimization and Uncertainty Analysis Methods for Oil and Gas Reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ettehadtavakkol, Amin, E-mail: amin.ettehadtavakkol@ttu.edu; Jablonowski, Christopher; Lake, Larry

    Uncertainty complicates the development optimization of oil and gas exploration and production projects, but methods have been devised to analyze uncertainty and its impact on optimal decision-making. This paper compares two methods for development optimization and uncertainty analysis: Monte Carlo (MC) simulation and stochastic programming. Two example problems, a gas field development and an oilfield development, are solved and discussed to elaborate on the advantages and disadvantages of each method. Development optimization involves decisions regarding the configuration of the initial capital investment and subsequent operational decisions. Uncertainty analysis involves the quantification of the impact of uncertain parameters on the optimum design concept. The gas field development problem is designed to highlight the differences in the implementation of the two methods and to show that both methods yield the exact same optimum design. The results show that both MC optimization and stochastic programming provide unique benefits, and that the choice of method depends on the goal of the analysis. While the MC method generates more useful information, along with the optimum design configuration, the stochastic programming method is more computationally efficient in determining the optimal solution. Reservoirs comprise multiple compartments and layers with multiphase flow of oil, water, and gas. We present a workflow for development optimization under uncertainty for these reservoirs, and solve an example on the design optimization of a multicompartment, multilayer oilfield development.
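
    The MC side of the comparison can be sketched as scenario sampling: for each candidate design, sample the uncertain parameters, evaluate an economic objective, and rank designs by expected value. The NPV model and all numbers below are illustrative, not from the paper.

```python
import numpy as np

def expected_npv(capacity, n_scenarios=100000, seed=0):
    """Toy objective: sample uncertain reserves and price per scenario;
    production is capped by the chosen facility capacity."""
    rng = np.random.default_rng(seed)
    reserves = rng.lognormal(mean=4.0, sigma=0.5, size=n_scenarios)
    price = rng.normal(60.0, 15.0, size=n_scenarios)
    production = np.minimum(reserves, capacity)
    capex = 10.0 * capacity               # investment grows with capacity
    return float(np.mean(production * price) - capex)

designs = [20.0, 40.0, 60.0, 80.0]        # candidate facility capacities
scores = {d: expected_npv(d) for d in designs}
print(max(scores, key=scores.get), scores)
```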

  3. Development of Simulation Methods in the Gibbs Ensemble to Predict Polymer-Solvent Phase Equilibria

    NASA Astrophysics Data System (ADS)

    Gartner, Thomas; Epps, Thomas; Jayaraman, Arthi

    Solvent vapor annealing (SVA) of polymer thin films is a promising method for post-deposition control of polymer film morphology. The large number of important parameters relevant to SVA (polymer, solvent, and substrate chemistries; incoming film condition; annealing and solvent evaporation conditions) makes a systematic experimental study of SVA a time-consuming endeavor, motivating the application of simulation and theory to the SVA system to provide both mechanistic insight and scans of this wide parameter space. However, to rigorously treat the phase equilibrium between the polymer film and the solvent vapor while still probing the dynamics of SVA, new simulation methods must be developed. In this presentation, we compare two methods to study polymer-solvent phase equilibrium: Gibbs Ensemble Molecular Dynamics (GEMD) and Hybrid Monte Carlo/Molecular Dynamics (Hybrid MC/MD). Liquid-vapor equilibrium results are presented for the Lennard-Jones fluid and for coarse-grained polymer-solvent systems relevant to SVA. We found that the Hybrid MC/MD method is more stable and consistent than GEMD, but GEMD has significant advantages in computational efficiency. We propose that Hybrid MC/MD simulations be used for unfamiliar systems under selected conditions, followed by much faster GEMD simulations to map out the remainder of the phase window.

  4. TU-AB-BRC-12: Optimized Parallel Monte Carlo Dose Calculations for Secondary MU Checks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, S; Nazareth, D; Bellor, M

    Purpose: Secondary MU checks are an important tool used during the physics review of a treatment plan. Commercial software packages offer varying degrees of theoretical dose calculation accuracy, depending on the modality involved. Dose calculations of VMAT plans are especially prone to error due to the large approximations involved. Monte Carlo (MC) methods are not commonly used due to their long run times. We investigated two methods to increase the computational efficiency of MC dose simulations with the BEAMnrc code. Distributed computing resources, along with optimized code compilation, allow for accurate and efficient VMAT dose calculations. Methods: The BEAMnrc package was installed on a high-performance computing cluster accessible to our clinic. MATLAB and PYTHON scripts were developed to convert a clinical VMAT DICOM plan into BEAMnrc input files. The BEAMnrc installation was optimized by running the VMAT simulations through profiling tools which indicated the behavior of the constituent routines in the code, e.g. the bremsstrahlung splitting routine and the specified random number generator. This information aided in determining the most efficient parallel compiling configuration for the specific CPUs available on our cluster, resulting in the fastest VMAT simulation times. Our method was evaluated with calculations involving 10⁸–10⁹ particle histories, which are sufficient to verify patient dose using VMAT. Results: Parallelization allowed the calculation of patient dose on the order of 10–15 hours with 100 parallel jobs. Due to the compiler optimization process, further speed increases of 23% were achieved when compared with the open-source compiler BEAMnrc packages. Conclusion: Analysis of the BEAMnrc code allowed us to optimize the compiler configuration for VMAT dose calculations. In future work, the optimized MC code, in conjunction with the parallel processing capabilities of BEAMnrc, will be applied to provide accurate and efficient secondary MU checks.

  5. The dose distribution of low dose rate Cs-137 in intracavitary brachytherapy: comparison of Monte Carlo simulation, treatment planning calculation and polymer gel measurement

    NASA Astrophysics Data System (ADS)

    Fragoso, M.; Love, P. A.; Verhaegen, F.; Nalder, C.; Bidmead, A. M.; Leach, M.; Webb, S.

    2004-12-01

    In this study, the dose distribution delivered by low dose rate Cs-137 brachytherapy sources was investigated using Monte Carlo (MC) techniques and polymer gel dosimetry, and the results were compared with a commercial treatment planning system (TPS). The 20 mm and 30 mm diameter Selectron vaginal applicator sets (Nucletron) were used for this study. A homogeneous and a heterogeneous (with an air cavity) polymer gel phantom were used to measure the dose distribution from these sources, and the same geometrical set-up was used for the MC calculations. Beyond the applicator tip, differences in dose as large as 20% were found between MC and the TPS. This is attributed to the presence of stainless steel in the applicator and source set, which is not considered in the TPS calculations. Beyond the air cavity, differences in dose of around 5% were noted, due to the TPS assuming a homogeneous water medium. The polymer gel results were in good agreement with the MC calculations for all the cases investigated.

  6. Twomey Effect in Subtropical Stratocumulus Clouds from UV Depolarization LIDAR

    NASA Astrophysics Data System (ADS)

    de Graaf, Martin; Brown, Jessica; Donovan, David

    2018-04-01

    Marine stratocumulus clouds are important climate regulators, reflecting sunlight over a dark ocean background. A UV-depolarization lidar on Ascension, a small remote island in the south Atlantic, measured cloud droplet sizes and number concentrations using an inversion method based on Monte Carlo (MC) modelling of multiple scattering in idealised semiadiabatic clouds. The droplet size and number concentration were modulated by smoke from the African continent, measured by the same instrument.

  7. Physics and Computational Methods for X-ray Scatter Estimation and Correction in Cone-Beam Computed Tomography

    NASA Astrophysics Data System (ADS)

    Bootsma, Gregory J.

    X-ray scatter in cone-beam computed tomography (CBCT) is known to reduce image quality by introducing image artifacts, reducing contrast, and limiting computed tomography (CT) number accuracy. The extent of the effect of x-ray scatter on CBCT image quality is determined by the shape and magnitude of the scatter distribution in the projections. A method to allay the effects of scatter is imperative to enable the application of CBCT to a wider domain of clinical problems; the work contained herein proposes such a method. A characterization of the scatter distribution through the use of a validated Monte Carlo (MC) model is carried out, and the effects of imaging parameters and compensators on the scatter distribution are investigated. The spectral frequency components of the scatter distribution in CBCT projection sets are analyzed using Fourier analysis and found to reside predominantly in the low-frequency domain. The exact frequency extents of the scatter distribution are explored for different imaging configurations and patient geometries. Based on the Fourier analysis, it is hypothesized that the scatter distribution can be represented by a finite sum of sine and cosine functions. Fitting the MC scatter distribution estimates reduces the MC computation time by diminishing the number of photon tracks required by over three orders of magnitude. The fitting method is incorporated into a novel scatter correction method using an algorithm that simultaneously combines multiple MC scatter simulations. Running concurrent MC simulations while simultaneously fitting the results allows the physical accuracy and flexibility of MC methods to be maintained while enhancing the overall efficiency. CBCT projection set scatter estimates using the algorithm are computed on the order of 1-2 minutes instead of hours or days. The resulting scatter-corrected reconstructions show a reduction in artifacts and an improvement in tissue contrast and voxel value accuracy.
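
    The fitting idea can be illustrated as a linear least-squares fit of a noisy (low-statistics) MC scatter profile to a truncated sum of sines and cosines; the one-dimensional profile, frequency cutoff, and noise level below are hypothetical.

```python
import numpy as np

def fourier_fit(signal, x, n_harmonics):
    """Least-squares fit to a_0 + sum_k [a_k cos(k w x) + b_k sin(k w x)],
    exploiting the low-frequency character of the scatter distribution."""
    w = 2.0 * np.pi / (x[-1] - x[0])
    cols = [np.ones_like(x)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * x), np.sin(k * w * x)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return A @ coef

x = np.linspace(0.0, 40.0, 512)                    # detector axis (cm)
truth = 1.0 + 0.5 * np.cos(2.0 * np.pi * x / 40.0) # smooth scatter shape
noisy = truth + np.random.default_rng(1).normal(0.0, 0.2, x.size)
smooth = fourier_fit(noisy, x, n_harmonics=3)
print(np.abs(smooth - truth).max())                # residual << MC noise
```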

  8. Cellular dosimetry calculations for Strontium-90 using Monte Carlo code PENELOPE.

    PubMed

    Hocine, Nora; Farlay, Delphine; Boivin, Georges; Franck, Didier; Agarande, Michelle

    2014-11-01

    To improve risk assessments associated with chronic exposure to Strontium-90 (Sr-90), for both the environment and human health, it is necessary to know the energy distribution in specific cells or tissues. Monte Carlo (MC) simulation codes are extremely useful tools for calculating deposited energy. The present work focused on the validation of the MC code PENetration and Energy LOss of Positrons and Electrons (PENELOPE) and the assessment of the dose distribution to bone marrow cells from a point Sr-90 source localized within the cortical bone. S-value (absorbed dose per unit cumulated activity) calculations were performed using the Monte Carlo codes PENELOPE and Monte Carlo N-Particle eXtended (MCNPX). The cytoplasm, nucleus, cell surface, mouse femur bone and Sr-90 radiation source were simulated. Cells are assumed to be spherical, with the radii of the cell and cell nucleus ranging from 2-10 μm. The Sr-90 source is assumed to be uniformly distributed in the cell nucleus, in the cytoplasm or on the cell surface. S-values calculated with PENELOPE agreed very well with the MCNPX results and the Medical Internal Radiation Dose (MIRD) values; relative deviations were less than 4.5%. The dose distribution to mouse bone marrow cells showed that the cells localized near the cortical part received the maximum dose. The MC code PENELOPE may prove useful for cellular dosimetry involving radiation transport through materials other than water, or for complex distributions of radionuclides and geometries.

  9. Paracousti-UQ: A Stochastic 3-D Acoustic Wave Propagation Algorithm.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Acoustic full waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDEs), allows computation of the statistical properties of output signals at a fraction of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and a comparison of Paracousti-UQ results with MC simulations using simple models.

  10. Constant-pH Molecular Dynamics Simulations for Large Biomolecular Systems

    DOE PAGES

    Radak, Brian K.; Chipot, Christophe; Suh, Donghyuk; ...

    2017-11-07

    We report that an increasingly important endeavor is to develop computational strategies that enable molecular dynamics (MD) simulations of biomolecular systems with spontaneous changes in protonation states under conditions of constant pH. The present work describes our efforts to implement the powerful constant-pH MD simulation method, based on a hybrid nonequilibrium MD/Monte Carlo (neMD/MC) technique, within the highly scalable program NAMD. The constant-pH hybrid neMD/MC method has several appealing features: it samples the correct semigrand canonical ensemble rigorously, the computational cost increases linearly with the number of titratable sites, and it is applicable to explicit solvent simulations. The present implementation of constant-pH hybrid neMD/MC in NAMD is designed to handle a wide range of biomolecular systems with no constraints on the choice of force field. Furthermore, the sampling efficiency can be adaptively improved on-the-fly by adjusting algorithmic parameters during the simulation. Finally, illustrative examples emphasizing medium- and large-scale applications on next-generation supercomputing architectures are provided.

  11. Parallelized traveling cluster approximation to study numerically spin-fermion models on large lattices

    NASA Astrophysics Data System (ADS)

    Mukherjee, Anamitra; Patel, Niravkumar D.; Bishop, Chris; Dagotto, Elbio

    2015-06-01

    Lattice spin-fermion models are important for studying correlated systems where quantum dynamics allows for a separation between slow and fast degrees of freedom. The fast degrees of freedom are treated quantum mechanically, while the slow variables, generically referred to as the "spins," are treated classically. At present, exact diagonalization coupled with classical Monte Carlo (ED + MC) is extensively used to solve numerically a general class of lattice spin-fermion problems. In this common setup, the classical variables (spins) are treated via the standard MC method while the fermion problem is solved by exact diagonalization. The "traveling cluster approximation" (TCA) is a real-space variant of the ED + MC method that makes it possible to solve spin-fermion problems on lattices with up to 10³ sites. In this publication, we present a novel reorganization of the TCA algorithm in a manner that can be efficiently parallelized. This allows us to solve generic spin-fermion models easily on 10⁴ lattice sites, and with some effort on 10⁵ lattice sites, representing the record lattice sizes studied for this family of models.

  12. Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.

    PubMed

    Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P

    2018-01-04

    Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting image-degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photoabsorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used to model the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom ¹⁷⁷Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and attenuation-corrected OSEM reconstructions with resolution recovery correction (RRC). The performance of the MC code was 3200 million photons/s. The number of photons emitted per voxel required to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons per voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations, so the reconstruction time was around 3 min. The activity recovery for the spheres in the Jaszczak phantom was clearly improved with the MC-based OSEM reconstruction; e.g., the activity recovery was 88% for the largest sphere, compared with 66% for AC-OSEM and 79% for RRC-OSEM. The GPU-based MC code generated an MC-based SPECT/CT reconstruction within a few minutes, and reconstructed patient images of ¹⁷⁷Lu-DOTATATE treatments revealed clearly improved resolution and contrast.
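
    For reference, the sketch below shows the standard MLEM update that OSEM applies subset by subset; in the MC-based reconstruction described above, the forward and back projections would be evaluated by photon-tracking simulation rather than a stored system matrix. The toy matrix and data are illustrative.

```python
import numpy as np

def mlem(A, y, n_iter=20):
    """MLEM update x <- x * A^T(y / Ax) / A^T 1. OSEM applies the same
    update over subsets of the projections."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(0)
A = rng.random((60, 30))                      # toy system matrix
x_true = rng.random(30)
y = rng.poisson(50.0 * (A @ x_true)) / 50.0   # noisy projection data
print(np.corrcoef(mlem(A, y), x_true)[0, 1])  # recovered vs true activity
```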

  13. Numerical simulation studies for optical properties of biomaterials

    NASA Astrophysics Data System (ADS)

    Krasnikov, I.; Seteikin, A.

    2016-11-01

    Biophotonics involves understanding how light interacts with biological matter, from molecules and cells to tissues and even whole organisms. Light can be used to probe biomolecular events, such as gene expression and protein-protein interactions, with impressively high sensitivity and specificity. The spatial and temporal distribution of biochemical constituents can also be visualized with light and, thus, the corresponding physiological dynamics in living cells, tissues, and organisms in real time. Computer-based Monte Carlo (MC) models of light transport in turbid media take a different approach. In this paper, the optical and structural properties of biomaterials are discussed. We explain the numerical simulation method used for studying the optical properties of biomaterials. Applications of the Monte Carlo method in photodynamic therapy, skin tissue optics, and bioimaging are described.
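
    A minimal sketch of the MC photon-migration core: exponential free paths from the total attenuation coefficient, absorption handled by weight attenuation, and isotropic rescattering in a slab. Real codes use anisotropic (e.g., Henyey-Greenstein) phase functions and layered geometries; the coefficients here are illustrative.

```python
import numpy as np

def slab_reflectance(mu_a=0.1, mu_s=10.0, thickness=1.0, n_photons=20000, seed=0):
    """Fraction of photon weight escaping back through the illuminated
    surface of a slab; isotropic scattering (g = 0) for brevity."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    reflected = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0              # start at the surface, heading in
        while w > 1e-4:
            z += uz * rng.exponential(1.0 / mu_t)   # free path ~ Exp(mu_t)
            if z < 0.0:                       # escaped through the top
                reflected += w
                break
            if z > thickness:                 # transmitted out the bottom
                break
            w *= albedo                       # implicit absorption
            uz = 2.0 * rng.random() - 1.0     # isotropic new direction cosine
    return reflected / n_photons

print(slab_reflectance())
```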

  14. MC-TESTER v. 1.23: A universal tool for comparisons of Monte Carlo predictions for particle decays in high energy physics

    NASA Astrophysics Data System (ADS)

    Davidson, N.; Golonka, P.; Przedziński, T.; Wąs, Z.

    2011-03-01

    Theoretical predictions in high energy physics are routinely provided in the form of Monte Carlo generators. Comparisons of predictions from different programs and/or different initialization set-ups are often necessary. MC-TESTER can be used for such tests of decays of intermediate states (particles or resonances) in a semi-automated way. Since 2002, new functionalities have been introduced into the package. In particular, it now works with the HepMC event record, the standard for C++ programs. The complete set-up for benchmarking interfaces, such as the interface between τ-lepton production and decay, including QED bremsstrahlung effects, is shown. The example is chosen to illustrate the new options introduced into the program. From the technical perspective, our paper documents software updates and supplements previous documentation. As in the past, our test consists of two steps. Distinct Monte Carlo programs are run separately; events with decays of a chosen particle are searched for, and information is stored by MC-TESTER. Then, at the analysis step, information from a pair of runs may be compared and represented in the form of tables and plots. Updates introduced in the program up to version 1.24.4 are also documented. In particular, new configuration scripts and a script to combine results from a multitude of runs into a single information file for the analysis step are explained. Program summary. Program title: MC-TESTER, version 1.23 and version 1.24.4. Catalog identifier: ADSM_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSM_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 250 548. No. of bytes in distributed program, including test data, etc.: 4 290 610. Distribution format: tar.gz. Programming language: C++, FORTRAN77. Tested and compiled with: gcc 3.4.6, 4.2.4 and 4.3.2 with g77/gfortran. Computer: Tested on various platforms. Operating system: Tested on Linux SLC 4.6 and SLC 5, Fedora 8, Ubuntu 8.2, etc. Classification: 11.9. External routines: HepMC (https://savannah.cern.ch/projects/hepmc/), PYTHIA8 (http://home.thep.lu.se/~torbjorn/Pythia.html), LaTeX (http://www.latex-project.org/). Catalog identifier of previous version: ADSM_v1_0. Journal reference of previous version: Comput. Phys. Comm. 157 (2004) 39. Does the new version supersede the previous version?: Yes. Nature of problem: The decays of individual particles are well defined modules of a typical Monte Carlo program chain in high energy physics. A fast, semi-automatic way of comparing results from different programs is often desirable for the development of new programs, in order to check the correctness of installations or for discussion of uncertainties. Solution method: A typical HEP Monte Carlo program stores the generated events in event records such as HepMC, HEPEVT or PYJETS. MC-TESTER scans, event by event, the contents of the record and searches for the decays of the particle under study. The list of found decay modes is successively incremented, and histograms of all invariant masses which can be calculated from the momenta of the particle's decay products are defined and filled. The outputs from two runs of distinct programs can later be compared.
A booklet of comparisons is created: for every decay channel, all histograms present in the two outputs are plotted, and a parameter quantifying the shape difference is calculated. Its maximum over every decay channel is printed in the summary table. Reasons for new version: An interface to the HepMC event record is introduced. A set-up for benchmarking interfaces, such as τ-lepton production and decay, including QED bremsstrahlung effects, is introduced as well. This required significant changes in the algorithm; as a consequence, a new version of the code was introduced. Restrictions: Only the first 200 decay channels found will initialize histograms, and if the multiplicity of decay products in a given channel is larger than 7, histograms will not be created for that channel. Additional comments: New features: HepMC interface, use of lists in the definition of histograms and decay channels, filters for decay products or secondary decays to be omitted, bug fixes, extended flexibility in the representation of program output, installation configuration scripts, and merging of multiple output files from separate generations. Running time: Varies substantially with the analyzed decay particle, but generally the speed estimate for the old version remains valid. On a PC/Linux with 2.0 GHz processors, MC-TESTER increases the run time of the τ-lepton Monte Carlo program TAUOLA by 4.0 seconds for every 100 000 analyzed events (the generation itself takes 26 seconds). The analysis step takes 13 seconds; LATEX processing takes an additional 10 seconds. Generation-step runs may be executed simultaneously on multiprocessor machines.

  15. Monte Carlo simulation of prompt γ-ray emission in proton therapy using a specific track length estimator

    NASA Astrophysics Data System (ADS)

    El Kanawati, W.; Létang, J. M.; Dauvergne, D.; Pinto, M.; Sarrut, D.; Testa, É.; Freud, N.

    2015-10-01

    A Monte Carlo (MC) variance reduction technique is developed for prompt-γ emission calculations in proton therapy. Prompt γ-rays emitted through nuclear fragmentation reactions and exiting the patient during proton therapy could play an important role in monitoring the treatment. However, estimating the number and energy of prompt γ-rays emitted per primary proton with MC simulations is a slow process. In order to estimate the local distribution of prompt-γ emission in a volume of interest for a given proton beam of the treatment plan, a MC variance reduction technique based on a specific track length estimator (TLE) has been developed. First, an elemental database of prompt-γ emission spectra is established in the clinical energy range of incident protons for all elements in the composition of human tissues; this database is built offline with high statistics. Regarding the implementation of the prompt-γ TLE MC tally, each proton deposits along its track the expectation of the prompt-γ spectra from the database, according to the proton kinetic energy and the local material composition. A detailed statistical study shows that the relative efficiency mainly depends on the geometrical distribution of the track length. Benchmarking of the proposed prompt-γ TLE MC technique against an analogous MC technique is carried out, and a large relative efficiency gain is reported, ca. 10⁵.
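
    The TLE tally described above can be sketched as follows: each proton step deposits the expected prompt-γ spectrum, weighted by the step length and the local elemental composition. The database lookup and the track below are toy stand-ins for the offline high-statistics spectra.

```python
import numpy as np

N_EBINS = 32                                   # prompt-gamma energy bins

def gamma_yield(element, proton_energy):
    """Stand-in for the offline database: prompt-gamma emission spectrum
    per unit track length for one element at one proton energy."""
    rng = np.random.default_rng(sum(map(ord, element)))
    shape = rng.random(N_EBINS)                # fixed per-element shape
    return shape * proton_energy / 100.0       # toy energy scaling

def tle_score(steps, composition):
    """Accumulate the expected prompt-gamma spectrum along a proton track:
    score += step_length * sum_e w_e * spectrum_e(E_p)."""
    score = np.zeros(N_EBINS)
    for length, e_proton in steps:
        for element, w in composition.items():
            score += length * w * gamma_yield(element, e_proton)
    return score

# Toy track: (step length in cm, proton kinetic energy in MeV) pairs.
track = [(0.1, 150.0), (0.1, 120.0), (0.1, 80.0), (0.1, 30.0)]
tissue = {"H": 0.10, "C": 0.20, "N": 0.05, "O": 0.65}  # mass fractions
print(tle_score(track, tissue).sum())
```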

  16. Poster - Thurs Eve-23: Effect of lung density and geometry variation on inhomogeneity correction algorithms: A Monte Carlo dosimetry evaluation.

    PubMed

    Chow, J; Leung, M; Van Dyk, J

    2008-07-01

    This study provides new information on the evaluation of lung dose calculation algorithms as a function of the relative electron density of lung, ρe,lung. Doses calculated in lung using the collapsed cone convolution (CCC) and adaptive convolution (AC) algorithms in the Pinnacle³ system were compared to those calculated using Monte Carlo (MC) simulation (an EGSnrc-based code). Three groups of lung phantoms, namely "Slab", "Column" and "Cube", with different ρe,lung (0.05-0.7), positions, volumes and shapes of lung in water were used. 6 and 18 MV photon beams with 4×4 and 10×10 cm² field sizes produced by a Varian 21EX linac were used in the MC dose calculations. Results show that the CCC algorithm agrees with AC to within ±1% for doses calculated in the lung phantoms, indicating that AC, which requires 3-4 times less computing time than CCC, is a good substitute for the CCC method. Comparing CCC and AC with MC, dose deviations are found when ρe,lung ⩽ 0.1-0.3. The degree of deviation depends on the photon beam energy and field size, and is relatively large when high-energy photon beams with small fields are used. For the penumbra widths (20%-80%), CCC and AC agree well with MC for the "Slab" and "Cube" phantoms with the lung volumes at the central beam axis (CAX). However, deviations >2 mm occur in the "Column" phantoms, with two lung volumes separated by a water column along the CAX, using the 18 MV (4×4 cm²) photon beams with ρe,lung ⩽ 0.1.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serin, E.; Codel, G.; Mabhouti, H.

    Purpose: In small-field geometries, electronic equilibrium can be lost, making it challenging for a dose-calculation algorithm to accurately predict the dose, especially in the presence of tissue heterogeneities. In this study, the dosimetric accuracy of the Monte Carlo (MC) advanced dose calculation and sequential algorithms of the Multiplan treatment planning system was investigated for small radiation fields incident on homogeneous and heterogeneous geometries. Methods: Small open fields of the fixed cones of a CyberKnife M6 unit, 100 to 500 mm², were used for this study. The fields were incident on an in-house phantom containing lung, air, and bone inhomogeneities, and also on a homogeneous phantom. Using the same film batch, the net OD-to-dose calibration curve was obtained using the CyberKnife with the 60 mm fixed cone by delivering 0-800 cGy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. The dosimetric accuracy of the MC and sequential algorithms in the presence of the inhomogeneities was compared against EBT3 film dosimetry. Results: Open-field tests in the homogeneous phantom showed good agreement between the two algorithms and the film measurements. For the MC algorithm, the minimum gamma analysis passing rates between measured and calculated dose distributions were 99.7% and 98.3% for homogeneous and inhomogeneous fields in the case of lung and bone, respectively. For the sequential algorithm, the minimum gamma analysis passing rates were 98.9% and 92.5% for homogeneous and inhomogeneous fields, respectively, for all cone sizes used. In the case of the air heterogeneity, the differences were larger for both calculation algorithms. Overall, when compared to measurement, MC showed better agreement than the sequential algorithm. Conclusion: The Monte Carlo calculation algorithm in the Multiplan treatment planning system is an improvement over the existing sequential algorithm. Dose discrepancies were observed in the presence of air inhomogeneities.

  18. Study on efficiency of time computation in x-ray imaging simulation based on Monte Carlo algorithm using graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Reaserch Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of the radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial mode and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core tracks one photon, so a large number of photons are simulated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single CPU core. Another result shows that the optimum image quality from the simulation was obtained with histories starting from 10⁸ and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.

  1. Estimation of the sensitive volume for gravitational-wave source populations using weighted Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Tiwari, Vaibhav

    2018-07-01

    The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently, the injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. The sensitive volume is estimated, using Monte Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, owing to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, to the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate the sensitive volume using single-detector sensitivity, the method is accurate within statistical errors, comes at no added cost, and requires minimal computational resources.
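
    A minimal sketch of the reweighting, assuming injections drawn from a known distribution p_draw: the sensitive volume under a new population p_new is the MC average of found-injection indicators times the weights p_new/p_draw. The one-dimensional mass population and detection model below are toy examples, not the letter's actual distributions.

```python
import numpy as np

def sensitive_volume(found, theta, p_draw, p_new, v_total):
    """Weighted MC estimate: V = V_total * mean(found_i * w_i),
    with w_i = p_new(theta_i) / p_draw(theta_i)."""
    w = p_new(theta) / p_draw(theta)
    return v_total * np.mean(found * w)

rng = np.random.default_rng(0)
n = 100000
m = rng.uniform(5.0, 50.0, n)                    # injected masses (M_sun)
p_draw = lambda m: np.full_like(m, 1.0 / 45.0)   # uniform injection density
# Power-law population p(m) ~ m^-2.35, normalized on [5, 50]:
norm = 1.35 / (5.0**-1.35 - 50.0**-1.35)
p_new = lambda m: norm * m**-2.35
found = (rng.random(n) < np.clip(m / 50.0, 0.0, 1.0)).astype(float)
print(sensitive_volume(found, m, p_draw, p_new, v_total=1.0))
```

    The same set of injections (m, found) can be reused with any other normalized p_new, which is the source of the computational saving.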

  2. DPM, a fast, accurate Monte Carlo code optimized for photon and electron radiotherapy treatment planning dose calculations

    NASA Astrophysics Data System (ADS)

    Sempau, Josep; Wilderman, Scott J.; Bielajew, Alex F.

    2000-08-01

    A new Monte Carlo (MC) algorithm, the `dose planning method' (DPM), and its associated computer program for simulating the transport of electrons and photons in radiotherapy class problems employing primary electron beams, is presented. DPM is intended to be a high-accuracy MC alternative to the current generation of treatment planning codes which rely on analytical algorithms based on an approximate solution of the photon/electron Boltzmann transport equation. For primary electron beams, DPM is capable of computing 3D dose distributions (in 1 mm³ voxels) which agree to within 1% in dose maximum with widely used and exhaustively benchmarked general-purpose public-domain MC codes in only a fraction of the CPU time. A representative problem, the simulation of 1 million 10 MeV electrons impinging upon a water phantom of 128³ voxels of 1 mm on a side, can be performed by DPM in roughly 3 min on a modern desktop workstation. DPM achieves this performance by employing transport mechanics and electron multiple scattering distribution functions which have been derived to permit long transport steps (of the order of 5 mm) which can cross heterogeneity boundaries. The underlying algorithm is a `mixed' class simulation scheme, with differential cross sections for hard inelastic collisions and bremsstrahlung events described in an approximate manner to simplify their sampling. The continuous energy loss approximation is employed for energy losses below some predefined thresholds, and photon transport (including Compton, photoelectric absorption and pair production) is simulated in an analogue manner. The δ-scattering method (Woodcock tracking) is adopted to minimize the computational costs of transporting photons across voxels.
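
    The δ-scattering (Woodcock) technique mentioned above is simple to state in code. Below is a minimal one-dimensional sketch, unrelated to the DPM implementation; the attenuation map and voxel size are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)

      # Voxelized attenuation map (hypothetical values, cm^-1); the majorant is
      # the maximum cross section anywhere in the geometry.
      mu = np.array([0.02, 0.15, 0.02, 0.30, 0.02])   # one value per 1 cm voxel
      voxel = 1.0
      mu_max = mu.max()

      def woodcock_track(x=0.0):
          """Track one photon until a real interaction or escape."""
          while True:
              # Sample a free path with the constant majorant cross section, so
              # the step can cross voxel boundaries without surface checks.
              x += -np.log(rng.random()) / mu_max
              i = int(x / voxel)
              if i >= len(mu):
                  return None              # escaped the geometry
              # Accept a real interaction with probability mu(x)/mu_max; otherwise
              # it is a virtual (delta) scattering and tracking continues.
              if rng.random() < mu[i] / mu_max:
                  return x

      sites = [woodcock_track() for _ in range(10000)]
      print(sum(s is not None for s in sites), "real interactions")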

  3. SimDoseCT: dose reporting software based on Monte Carlo simulation for a 320 detector-row cone-beam CT scanner and ICRP computational adult phantoms

    NASA Astrophysics Data System (ADS)

    Cros, Maria; Joemai, Raoul M. S.; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal

    2017-08-01

    This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on a Monte Carlo (MC) simulation code developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). The MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with simulation results under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow-tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations in a 320 detector-row cone-beam scanner.

  4. SimDoseCT: dose reporting software based on Monte Carlo simulation for a 320 detector-row cone-beam CT scanner and ICRP computational adult phantoms.

    PubMed

    Cros, Maria; Joemai, Raoul M S; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal

    2017-07-17

    This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on a Monte Carlo (MC) simulation code developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). The MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with simulation results under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow-tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations in a 320 detector-row cone-beam scanner.

  5. SU-F-J-14: Kilovoltage Cone-Beam CT Dose Estimation of Varian On-Board Imager Using GMctdospp Monte Carlo Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S; Rangaraj, D

    2016-06-15

    Purpose: Although cone-beam CT (CBCT) imaging has become popular in radiation oncology, estimating its imaging dose is still challenging. The goal of this study is to assess kilovoltage CBCT doses using GMctdospp, an EGSnrc-based Monte Carlo (MC) framework. Methods: Two Varian OBI x-ray tube models were implemented in the GMctdospp framework of the EGSnrc MC system. The x-ray spectrum of the 125 kVp CBCT beam was acquired from an EGSnrc/BEAMnrc simulation and validated against IPEM report 78. The spectrum was then used as an input spectrum in the GMctdospp dose calculations. Both full and half bowtie pre-filters of the OBI system were created using the egs-prism module. The x-ray tube MC models were verified by comparing calculated dosimetric profiles (lateral and depth) to ion chamber measurements for a static x-ray beam irradiating a cuboid water phantom. Abdominal CBCT imaging dose was then simulated in the GMctdospp framework using a 5-year-old anthropomorphic phantom. The organ doses and effective dose (ED) from the framework were assessed and compared to MOSFET measurements and convolution/superposition (CS) dose calculations. Results: The lateral and depth dose profiles in the cuboid water phantom matched within 6% except in a few areas - the left shoulder of the half-bowtie lateral profile and the surface of the water phantom. The organ doses and ED from the MC framework agreed with the MOSFET measurements and CS calculations within 2 cGy and 5 mSv, respectively. Conclusion: This study implemented and validated the Varian OBI x-ray tube models in the GMctdospp MC framework using a cuboid water phantom, and CBCT imaging doses were evaluated in a 5-year-old anthropomorphic phantom. In a future study, various CBCT imaging protocols will be implemented and validated, and patient CT images will subsequently be used to estimate CBCT imaging doses in patients.

  6. SU-F-T-54: Determination of the AAPM TG-43 Brachytherapy Dosimetry Parameters for A New Titanium-Encapsulated Yb-169 Source by Monte Carlo Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynoso, F; Washington University School of Medicine, St. Louis, MO; Munro, J

    2016-06-15

    Purpose: To determine the AAPM TG-43 brachytherapy dosimetry parameters of a new titanium-encapsulated Yb-169 source designed to maximize the dose enhancement during gold nanoparticle-aided radiation therapy (GNRT). Methods: An existing Monte Carlo (MC) model of the titanium-encapsulated Yb-169 source, described in the current investigators' published MC optimization study, was modified based on the source manufacturer's detailed specifications, resulting in an accurate model of the titanium-encapsulated Yb-169 source as actually manufactured. MC calculations were then performed using the MCNP5 code system and the modified source model, in order to obtain a complete set of the AAPM TG-43 parameters for the new Yb-169 source. Results: The MC-calculated dose rate constant for the new titanium-encapsulated Yb-169 source was 1.05 ± 0.03 cGy per hr U, indicating about a 10% decrease from the values reported for conventional stainless steel-encapsulated Yb-169 sources. The source anisotropy and radial dose function for the new source were found to be similar to those reported for conventional Yb-169 sources. Conclusion: In this study, the AAPM TG-43 brachytherapy dosimetry parameters of a new titanium-encapsulated Yb-169 source were determined by MC calculations. The results suggest that using titanium, instead of stainless steel, to encapsulate the Yb-169 core would not lead to any major change in the dosimetric characteristics of the source, while allowing more low-energy photons to be transmitted through the source filter, thereby leading to an increased dose enhancement during GNRT. This investigation was supported by DOD/PCRP grant W81XWH-12-1-0198.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cebe, M; Pacaci, P; Mabhouti, H

    Purpose: In this study, two calculation algorithms available in the Varian Eclipse treatment planning system (TPS), the electron Monte Carlo (eMC) and Generalized Gaussian Pencil Beam (GGPB) algorithms, were used to compare measured and calculated peripheral dose distributions of electron beams. Methods: Peripheral dose measurements were carried out for the 6, 9, 12, 15, 18 and 22 MeV electron beams of a Varian Trilogy machine using a parallel-plate ionization chamber and EBT3 films in a slab phantom. Measurements were performed for 6×6, 10×10 and 25×25 cm² cone sizes at the dmax of each energy, up to 20 cm beyond the field edges. Using the same film batch, the net OD-to-dose calibration curve was obtained for each energy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. Dose distributions measured using the parallel-plate ionization chamber and EBT3 film and calculated by the eMC and GGPB algorithms were compared to determine which algorithm calculates the peripheral dose distribution more accurately. Results: The agreement between measurement and eMC was better than for GGPB. The TPS underestimated the out-of-field doses, and the difference between measured and calculated doses increases with cone size. The largest deviation between calculated doses and parallel-plate ionization chamber measurements is less than 4.93% for eMC, but can reach 7.51% for GGPB. For the film measurements, the minimum gamma-analysis passing rates between measured and calculated dose distributions were 98.2% and 92.7% for eMC and GGPB, respectively, over all field sizes and energies. Conclusion: Our results show that the Monte Carlo algorithm for electron planning in Eclipse is more accurate than previous algorithms for peripheral dose distributions. It must be emphasized that the use of GGPB for planning large-field treatments with 6 MeV could lead to inaccuracies of clinical significance.

  8. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing, and is implemented here to perform ultra-fast MC calculations in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results are aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of the cloud-based MC simulation is identical to that produced by the single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes, and the parallelization overhead is negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed-up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.
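
    The master/worker pattern described above is easy to emulate locally. The sketch below uses Python's multiprocessing pool in place of cloud nodes and MPI, and a toy scoring kernel in place of EGS5; batch sizes, seeds, and the tally are all illustrative assumptions.

      import numpy as np
      from multiprocessing import Pool

      def run_batch(args):
          """One worker node: run an independent MC batch with its own seed."""
          seed, n = args
          rng = np.random.default_rng(seed)
          # Stand-in for an EGS5-style transport kernel: score a tally per history.
          deposits = rng.exponential(1.0, n)
          return deposits.sum(), n

      if __name__ == "__main__":
          n_workers, histories = 8, 1_000_000
          batches = [(seed, histories // n_workers) for seed in range(n_workers)]
          with Pool(n_workers) as pool:
              results = pool.map(run_batch, batches)   # scatter batches, gather tallies
          total, n = map(sum, zip(*results))
          print("mean energy deposit per history:", total / n)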

  9. Full Monte Carlo-Based Biologic Treatment Plan Optimization System for Intensity Modulated Carbon Ion Therapy on Graphics Processing Unit.

    PubMed

    Qin, Nan; Shen, Chenyang; Tsai, Min-Yu; Pinto, Marco; Tian, Zhen; Dedes, Georgios; Pompos, Arnold; Jiang, Steve B; Parodi, Katia; Jia, Xun

    2018-01-01

    One of the major benefits of carbon ion therapy is enhanced biological effectiveness at the Bragg peak region. For intensity modulated carbon ion therapy (IMCT), it is desirable to use Monte Carlo (MC) methods to compute the properties of each pencil beam spot for treatment planning, because of their accuracy in modeling physics processes and estimating biological effects. We previously developed goCMC, a graphics processing unit (GPU)-oriented MC engine for carbon ion therapy. The purpose of the present study was to build a biological treatment plan optimization system using goCMC. The repair-misrepair-fixation model was implemented to compute the spatial distribution of linear-quadratic model parameters for each spot. A treatment plan optimization module was developed to minimize the difference between the prescribed and actual biological effect. We used a gradient-based algorithm to solve the optimization problem. The system was embedded in the Varian Eclipse treatment planning system under a client-server architecture to achieve a user-friendly planning environment. We tested the system with a 1-dimensional homogeneous water case and three 3-dimensional patient cases. Our system generated treatment plans with biological spread-out Bragg peaks covering the targeted regions and sparing critical structures. Using 4 NVidia GTX 1080 GPUs, the total computation time, including spot simulation, optimization, and final dose calculation, was 0.6 hours for the prostate case (8282 spots), 0.2 hours for the pancreas case (3795 spots), and 0.3 hours for the brain case (6724 spots). The computation time was dominated by MC spot simulation. We built a biological treatment plan optimization system for IMCT that performs simulations using a fast MC engine, goCMC. To the best of our knowledge, this is the first time that full MC-based IMCT inverse planning has been achieved in a clinically viable time frame. Copyright © 2017 Elsevier Inc. All rights reserved.
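
    The optimization step can be illustrated apart from the MC engine. The following is a minimal gradient-descent sketch for spot weights under a linear-quadratic effect model; the influence matrix, LQ parameters, prescription, and step size are invented placeholders, and goCMC itself plays no part here.

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical influence matrix: dose per unit weight of each pencil-beam
      # spot in each voxel (in practice precomputed by the MC engine per spot).
      n_vox, n_spots = 200, 40
      A = rng.random((n_vox, n_spots)) * 0.05
      alpha, beta = 0.1, 0.05                   # assumed LQ parameters (uniform here)
      target = np.full(n_vox, 2.0)              # prescribed biological effect

      w = np.ones(n_spots)                      # spot weights to optimize
      lr = 0.05
      for _ in range(5000):
          d = A @ w                             # physical dose per voxel
          effect = alpha * d + beta * d**2      # linear-quadratic biological effect
          resid = effect - target
          # Chain rule: d(effect)/dw = diag(alpha + 2*beta*d) @ A
          grad = A.T @ ((alpha + 2 * beta * d) * resid)
          w = np.maximum(w - lr * grad, 0.0)    # project onto nonnegative weights

      d = A @ w
      print("rms effect error:", np.sqrt(np.mean((alpha*d + beta*d**2 - target)**2)))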

  10. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.

    The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. We carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.

  11. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    DOE PAGES

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.; ...

    2017-10-03

    The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. In this paper, we carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Finally, included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.

  12. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.

    The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. In this paper, we carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Finally, included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.

  13. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.; Dolence, Joshua; Sumiyoshi, Kohsuke; Yamada, Shoichi

    2017-10-01

    The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. We carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kieselmann, J; Bartzsch, S; Oelfke, U

    Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging because of the steep dose gradients and the therapeutically important dose ranges involved. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel-based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point-kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray-tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed within less than 4% deviation, the proposed approach surpassed the simple convolution algorithm in scatter-dose accuracy by a factor of up to 3. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential-geometry algorithm were less than 3%, while the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve dose calculation accuracy relative to the CA method, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GB of RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant reductions in calculation time.

  15. A clinical study of lung cancer dose calculation accuracy with Monte Carlo simulation.

    PubMed

    Zhao, Yanqun; Qi, Guohai; Yin, Gang; Wang, Xianliang; Wang, Pei; Li, Jian; Xiao, Mingyong; Li, Jie; Kang, Shengwei; Liao, Xiongfei

    2014-12-16

    The accuracy of dose calculation is crucial to the quality of treatment planning and, consequently, to the dose delivered to patients undergoing radiation therapy. Current general calculation algorithms such as Pencil Beam Convolution (PBC) and Collapsed Cone Convolution (CCC) have shortcomings in regard to severe inhomogeneities, particularly in those regions where charged particle equilibrium does not hold. The aim of this study was to evaluate the accuracy of the PBC and CCC algorithms in lung cancer radiotherapy using Monte Carlo (MC) technology. Four treatment plans were designed using the Oncentra Masterplan TPS for each patient: two intensity-modulated radiation therapy (IMRT) plans developed using the PBC and CCC algorithms, and two three-dimensional conformal therapy (3DCRT) plans developed using the PBC and CCC algorithms. The DICOM-RT files of the treatment plans were exported to the Monte Carlo system for recalculation. The dose distributions of the GTV, PTV and ipsilateral lung calculated by the TPS and MC were compared. For the 3DCRT and IMRT plans, the mean dose differences for the GTV between CCC and MC increased with decreasing GTV volume, and for IMRT the mean dose differences were higher than for 3DCRT. The CCC algorithm overestimated the GTV mean dose by approximately 3% for IMRT. For the 3DCRT plans, when the volume of the GTV was greater than 100 cm³, the mean doses calculated by CCC and MC showed almost no difference, whereas PBC showed large deviations from MC. For the dose to the ipsilateral lung, the CCC algorithm overestimated the dose to the entire lung, and the PBC algorithm overestimated V20 but underestimated V5; the difference in V10 was not statistically significant. PBC substantially overestimates the dose to the tumour, but CCC is close to the MC simulation. It is recommended that treatment plans for lung cancer be developed using an advanced dose calculation algorithm other than PBC. MC can accurately calculate the dose distribution in lung cancer and provides a notably effective tool for benchmarking the performance of other dose calculation algorithms within patients.

  16. Hybrid numerical method for solution of the radiative transfer equation in one, two, or three dimensions.

    PubMed

    Reinersman, Phillip N; Carder, Kendall L

    2004-05-01

    A hybrid method is presented by which Monte Carlo (MC) techniques are combined with an iterative relaxation algorithm to solve the radiative transfer equation in arbitrary one-, two-, or three-dimensional optical environments. The optical environments are first divided into contiguous subregions, or elements. MC techniques are employed to determine the optical response function of each type of element. The elements are combined, and relaxation techniques are used to determine simultaneously the radiance field on the boundary and throughout the interior of the modeled environment. One-dimensional results compare well with a standard radiative transfer model. The light field beneath and adjacent to a long barge is modeled in two dimensions and displayed. Ramifications for underwater video imaging are discussed. The hybrid model is currently capable of providing estimates of the underwater light field needed to expedite inspection of ship hulls and port facilities.

  17. Surface tension of undercooled liquid cobalt

    NASA Astrophysics Data System (ADS)

    Yao, W. J.; Han, X. J.; Chen, M.; Wei, B.; Guo, Z. Y.

    2002-08-01

    This paper provides results on experimentally measured and numerically predicted surface tensions of undercooled liquid cobalt. The experiments were performed using the oscillating drop technique combined with electromagnetic levitation. The simulations were carried out with the Monte Carlo (MC) method, where the surface tension is predicted through calculations of the work of cohesion, and the interatomic interaction is described with an embedded-atom method. The maximum undercooling of the liquid cobalt reached 231 K (0.13 Tm) in the experiment and 268 K (0.17 Tm) in the simulation. The surface tension and its relationship with temperature obtained in the experiment and simulation are σexp = 1.93 − 0.00033 (T − Tm) N m⁻¹ and σcal = 2.26 − 0.00032 (T − Tm) N m⁻¹, respectively. The temperature dependence of the surface tension calculated from the MC simulation is in reasonable agreement with that measured in the experiment.

  18. Comparison of Fluka-2006 Monte Carlo Simulation and Flight Data for the ATIC Detector

    NASA Technical Reports Server (NTRS)

    Gunasingha, R.M.; Fazely, A.R.; Adams, J.H.; Ahn, H.S.; Bashindzhagyan, G.L.; Chang, J.; Christl, M.; Ganel, O.; Guzik, T.G.; Isbert, J.; ...

    2007-01-01

    We have performed a detailed Monte Carlo (MC) simulation of the Advanced Thin Ionization Calorimeter (ATIC) detector using the MC code FLUKA-2006, which is capable of simulating particles up to 10 PeV. The ATIC detector has completed two successful balloon flights from McMurdo, Antarctica, lasting a total of more than 35 days. ATIC is designed as a multiple, long-duration balloon flight investigation of the cosmic ray spectra from below 50 GeV to near 100 TeV total energy, using a fully active Bismuth Germanate (BGO) calorimeter. It is equipped with a large mosaic of silicon detector pixels capable of charge identification and, for particle tracking, three projective layers of x-y scintillator hodoscopes located above, in the middle of, and below a 0.75 nuclear-interaction-length graphite target. Our simulations are part of an analysis package for both the nuclear (A) and energy dependences of different nuclei interacting in the ATIC detector. The MC simulates the response of different components of the detector, such as the Si matrix, the scintillator hodoscopes and the BGO calorimeter, to various nuclei. We present comparisons of the FLUKA-2006 MC calculations with GEANT calculations and with the ATIC CERN and flight data.

  19. SU-F-T-193: Evaluation of a GPU-Based Fast Monte Carlo Code for Proton Therapy Biological Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taleei, R; Qin, N; Jiang, S

    2016-06-15

    Purpose: Biological treatment plan optimization is of great interest for proton therapy. It requires extensive Monte Carlo (MC) simulations to compute physical dose and biological quantities. Recently, a gPMC package was developed for rapid MC dose calculations on a GPU platform. This work investigated its suitability for proton therapy biological optimization in terms of accuracy and efficiency. Methods: We performed simulations of a proton pencil beam with energies of 75, 150 and 225 MeV in a homogeneous water phantom using gPMC and FLUKA. Physical dose and energy spectra for each ion type on the central beam axis were scored. Relative Biological Effectiveness (RBE) was calculated using the repair-misrepair-fixation model. Microdosimetry calculations were performed using the Monte Carlo Damage Simulation (MCDS) code. Results: Ranges computed by the two codes agreed within 1 mm. The physical dose difference was less than 2.5% at the Bragg peak, and the RBE-weighted dose agreed within 5% at the Bragg peak. Differences in microdosimetric quantities such as dose-averaged lineal energy and specific energy were < 10%. The simulation time per source particle with FLUKA was 0.0018 s, while gPMC was ∼600 times faster. Conclusion: Physical doses computed by FLUKA and gPMC were in good agreement. The RBE differences along the central axis were small, and the RBE-weighted dose difference was found to be acceptable. The combined accuracy and efficiency makes gPMC suitable for proton therapy biological optimization.

  20. A Monte Carlo simulation study of associated liquid crystals

    NASA Astrophysics Data System (ADS)

    Berardi, R.; Fehervari, M.; Zannoni, C.

    We have performed a Monte Carlo simulation study of a system of ellipsoidal particles with donor-acceptor sites modelling complementary hydrogen-bonding groups in real molecules. We considered elongated Gay-Berne particles with terminal interaction sites that allow the particles to associate and form dimers. The changes in the phase transitions, the molecular organization, and the interplay between orientational ordering and dimer formation are discussed. Particle-flip and dimer moves were used to increase the convergence rate of the Monte Carlo (MC) Markov chain.

  1. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) to the steel process chain: case study.

    PubMed

    Bieda, Bogusław

    2014-05-15

    The purpose of this paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to the life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the CrystalBall® (CB) software, which works with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected from MSP for 2005, was analyzed and used for the MC simulation of the LCI model. A normal distribution was applied to describe the random nature of all the main products considered in this study. The results of the simulation (10,000 trials) performed with CB consist of frequency charts and statistical reports. The results of this study can be used as a first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and can be applied throughout the steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management.
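
    The core of such an uncertainty analysis is straightforward to reproduce without a spreadsheet add-in. Below is a minimal sketch in which the inventory means, the assumed 10% standard deviations, and the emission factor are invented placeholders rather than MSP data.

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical annual inventory means (kt) with assumed 10% standard
      # deviations; the real study draws on the MSP production figures instead.
      means = {"steel": 5000.0, "coke": 1200.0, "pig_iron": 4400.0, "sinter": 6800.0}
      trials = 10_000

      samples = {k: rng.normal(v, 0.10 * v, trials) for k, v in means.items()}
      emission_factor = 1.8          # illustrative: t CO2 per t of steel
      co2 = samples["steel"] * emission_factor

      # Stand-in for the frequency charts and statistical reports: percentiles.
      print("CO2 mean:", co2.mean())
      print("90% interval:", np.percentile(co2, [5, 95]))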

  2. Monte Carlo calculation of proton stopping power and ranges in water for therapeutic energies

    NASA Astrophysics Data System (ADS)

    Bozkurt, Ahmet

    2017-09-01

    Monte Carlo is a statistical technique for obtaining numerical solutions to physical or mathematical problems that are analytically impractical, if not impossible, to solve. For charged-particle transport problems, it presents many advantages over deterministic methods, since such problems require a realistic description of the problem geometry as well as detailed tracking of every source particle. Thus, MC can be considered a powerful alternative to the well-known Bethe-Bloch equation, in which various corrections are applied to obtain the stopping powers and ranges of electrons, positrons, protons, alphas, etc. This study presents how a stochastic method such as MC can be used to obtain quantities of practical importance in charged-particle transport. Sample simulation geometries were formed for a water medium, in which disk-shaped thin detectors were employed to compute average values of absorbed dose and flux at specific distances. For each detector cell, these quantities were used to evaluate the range and the stopping power, as well as the shape of the Bragg curve, for mono-energetic point-source pencil beams of protons. The results were found to agree within ±2% with the data from the NIST compilation. It is safe to conclude that this approach can be extended to determine dosimetric quantities for other media, energies and charged-particle types.
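
    A toy version of this procedure fits in a short script. The sketch below transports protons through thin slabs with an assumed Bragg-Kleeman-style power-law stopping power and crude straggling; the constants are tuned for illustration only and do not come from the study.

      import numpy as np

      rng = np.random.default_rng(4)

      # Assumed power-law stopping power for protons in water (MeV/cm, E in MeV),
      # standing in for the quantities the study scores with disk detectors.
      def S(E):
          return 318.0 * E**-0.82

      dz = 0.05                                  # slab thickness (cm)
      edep = np.zeros(600)                       # energy deposited per slab (dose proxy)
      for _ in range(2000):
          E, i = 150.0, 0
          while E > 1.0 and i < len(edep):
              dE = S(E) * dz
              dE += rng.normal(0.0, 0.02 * dE)   # crude energy-loss straggling
              dE = min(dE, E)
              edep[i] += dE
              E -= dE
              i += 1

      # The Bragg-peak depth approximates the range; cf. roughly 15.8 cm for
      # 150 MeV protons in water in the NIST PSTAR compilation.
      print("Bragg peak depth ~", np.argmax(edep) * dz, "cm")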

  3. SU-E-T-486: Effect of the Normalized Prescription Isodose Line On Target Dose Deficiency in Lung SBRT Based On Monte Carlo Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, D; Zhang, Q; Zhou, S

    Purpose: To investigate the impact of the normalized prescription isodose line on the target dose deficiency calculated with Monte Carlo (MC) versus pencil beam (PB) in lung SBRT. RTOG guidelines recommend prescription lines between 60% and 90% for lung SBRT, but how this affects the magnitude of the MC-calculated target dose deficiency has never been studied. Methods: Under an IRB-approved protocol, four lung SBRT patients were replanned following RTOG 0813 by a single physicist. For each patient, four alternative plans were generated based on PB calculation, prescribing to the 60-90% isodose lines, respectively. Each plan consisted of 360° coplanar dynamic conformal arcs with beam apertures manually optimized to achieve similar dose coverage and conformity for all plans of the same patient. The dose distribution was calculated with MC and compared to that with PB. PTV dose-volume endpoints were compared, including Dmin, D5, Dmean, D95, and Dmax. PTV V100 coverage, conformity index (CI), and heterogeneity index (HI) were also evaluated. Results: For all 16 plans, the median (range) PTV V100 and CI were 99.7% (97.5-100%) and 1.27 (1.20-1.41), respectively. As expected, a lower prescription line resulted in higher target dose heterogeneity, yielding a median (range) HI of 1.26 (1.05-1.51) for all plans. Comparing MC to PB, the median (range) D95, Dmean, and D5 PTV dose deficiencies were 18.9% (11.2-23.2%), 15.6% (10.0-22.7%), and 9.4% (5.5-13.6%) of the prescription dose, respectively. The Dmean, D5, and Dmax deficiencies were found to increase monotonically with decreasing prescription line from 90% to 60%, while the Dmin deficiency decreased monotonically. The D95 deficiency exhibited a more complex trend, reaching its largest value at 80% for all patients. Conclusion: A dependence on the prescription isodose line was found for the MC-calculated PTV dose deficiency in lung SBRT. When comparing reported MC dose deficiency values from different institutions, their individual selections of prescription line should be considered in addition to other factors affecting the deficiency magnitude.

  4. MO-FG-CAMPUS-TeP3-03: Calculation of Proton Pencil Beam Properties with Full Beamline Model in TOPAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wulff, J; Abel, E

    2016-06-15

    Purpose: Introducing Monte Carlo based dose calculation algorithms into proton therapy treatment planning systems (TPS) improves accuracy, but requires accurate modelling of the proton pencil beam impinging on the patient. Current approaches rely on measurement-driven reconstruction of phase-space and spectrum properties, typically constrained to analytical model functions. In this study a detailed Monte Carlo model of the complete cyclotron-based delivery system was created with the aim of providing more representative beam properties at the treatment position. Methods: A model of the Varian ProBeam proton system from the cyclotron exit to the isocenter was constructed in the TOPAS Monte Carlo framework. The beam evolution through apertures and magnetic elements was validated against Transport/Turtle calculations and additionally against measurements from the ProBeam system at the Scripps Proton Therapy Center (SPTC) in San Diego, CA. A voxelized water phantom at the isocenter allowed comparison of the depth-dose curve from the ProBeam model with that of a corresponding Gaussian beam over the entire energy range (70-240 MeV). Measurements of relative beam-fluence cross-profiles and depth-dose curves at and around the isocenter were also compared to the MC results. Results: The simulated TOPAS beam envelope was found to agree with both the Transport/Turtle calculations and the measurements to within 5% over most of the beamline. The MC-predicted energy spectrum at the isocenter was found to deviate increasingly from Gaussian at energies below 160 MeV, and the corresponding effects on the depth-dose curve agreed well with measurements. Conclusion: Given the flexibility of TOPAS and the available details of the delivery system, an accurate characterization of a proton pencil beam at the isocenter is possible. Incorporating the MC-derived properties of the proton pencil beam can eliminate analytical approximations and ultimately increase treatment plan accuracy and quality. Both authors are employees of Varian Medical Systems.

  5. MC EMiNEM Maps the Interaction Landscape of the Mediator

    PubMed Central

    Niederberger, Theresa; Etzold, Stefanie; Lidschreiber, Michael; Maier, Kerstin C.; Martin, Dietmar E.; Fröhlich, Holger; Cramer, Patrick; Tresch, Achim

    2012-01-01

    The Mediator is a highly conserved, large multiprotein complex that is essentially involved in the regulation of eukaryotic mRNA transcription. It acts as a general transcription factor by integrating regulatory signals from gene-specific activators or repressors to RNA Polymerase II. The internal network of interactions between Mediator subunits that conveys these signals is largely unknown. Here, we introduce MC EMiNEM, a novel method for the retrieval of functional dependencies between proteins that have pleiotropic effects on mRNA transcription. MC EMiNEM is based on Nested Effects Models (NEMs), a class of probabilistic graphical models that extends the idea of hierarchical clustering. It combines mode-hopping Monte Carlo (MC) sampling with an Expectation-Maximization (EM) algorithm for NEMs to increase sensitivity compared to existing methods. A meta-analysis of four Mediator perturbation studies in Saccharomyces cerevisiae, three of which are unpublished, provides new insight into the Mediator signaling network. In addition to the known modular organization of the Mediator subunits, MC EMiNEM reveals a hierarchical ordering of its internal information flow, which is putatively transmitted through structural changes within the complex. We identify the N-terminus of Med7 as a peripheral entity, entailing only local structural changes upon perturbation, while the C-terminus of Med7 and Med19 appear to play a central role. MC EMiNEM associates Mediator subunits with their most directly affected genes, which, in conjunction with gene set enrichment analysis, allows us to construct an interaction map of Mediator subunits and transcription factors. PMID:22737066

  6. Transmutation approximations for the application of hybrid Monte Carlo/deterministic neutron transport to shutdown dose rate analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D.; Wilson, Paul P. H.

    In fusion energy systems (FES), neutrons born from the burning plasma activate system components. The photon dose rate after shutdown from the resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and (9 ± 5) × 10⁴ relative to analog. This work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.
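
    The CADIS construction underlying these methods can be shown in miniature. The sketch below builds a biased source and birth weights from a deterministic adjoint flux on a toy one-group mesh; the source, adjoint values, and geometry are invented, and the multi-step transmutation machinery of GT-CADIS is not represented.

      import numpy as np

      # Hypothetical one-group, 1-D problem: forward source q and deterministic
      # adjoint flux phi_adj on a spatial mesh (CADIS would take these from a
      # coarse discrete-ordinates solve).
      q = np.array([1.0, 1.0, 0.0, 0.0, 0.0])          # physical source distribution
      phi_adj = np.array([0.01, 0.05, 0.2, 0.8, 3.0])  # importance toward the detector

      R = np.sum(q * phi_adj)             # deterministic estimate of the response
      q_biased = q * phi_adj / R          # biased source pdf (sums to 1 by construction)
      w_birth = R / phi_adj               # birth weights keep the game fair
      w_window = w_birth                  # weight-window centers for transport

      print("biased source pdf:", q_biased)
      print("birth weights:", w_birth)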

  7. Transmutation approximations for the application of hybrid Monte Carlo/deterministic neutron transport to shutdown dose rate analysis

    DOE PAGES

    Biondo, Elliott D.; Wilson, Paul P. H.

    2017-05-08

    In fusion energy systems (FES), neutrons born from the burning plasma activate system components. The photon dose rate after shutdown from the resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and (9 ± 5) × 10⁴ relative to analog. This work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.

  8. Poster — Thur Eve — 43: Monte Carlo Modeling of Flattening Filter Free Beams and Studies of Relative Output Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhan, Lixin; Jiang, Runqing; Osei, Ernest K.

    2014-08-15

    Flattening filter free (FFF) beams have been adopted by many clinics and are used for patient treatment. However, compared to traditional flattened beams, our knowledge of FFF beams is limited. In this study, we successfully modeled the 6 MV FFF beam of a Varian TrueBeam accelerator with the Monte Carlo (MC) method. Both the percentage depth dose and the profiles match the Golden Beam Data (GBD) from Varian well. MC simulations were then performed to predict the relative output factors. The in-water output ratio, Scp, was simulated in a water phantom and the data obtained agree well with the GBD. The in-air output ratio, Sc, was obtained by analyzing the phase space placed at the isocenter, in air, and computing the ratio of water kerma rates for different field sizes. The phantom scattering factor, Sp, can then be obtained in the traditional way by taking the ratio of Scp and Sc. We also simulated Sp using a recently proposed method based on only the primary beam dose delivered in a water phantom. Because there is no concern about lateral electronic disequilibrium, this method is more suitable for small fields. The results from both methods agree well with each other. The flattened 6 MV beam was also simulated and compared to the 6 MV FFF beam. The comparison confirms that the 6 MV FFF beam has less scattering from the Linac head and less phantom scattering contribution to the central-axis dose, which will be helpful for improving accuracy in beam modeling and dose calculation in treatment planning systems.

  9. On Correlated-noise Analyses Applied to Exoplanet Light Curves

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Loredo, Thomas J.; Lust, Nate B.; Blecic, Jasmina; Stemm, Madison

    2017-01-01

    Time-correlated noise is a significant source of uncertainty when modeling exoplanet light-curve data. A correct assessment of correlated noise is fundamental to determining the true statistical significance of our findings. Here, we review three of the most widely used correlated-noise estimators in the exoplanet field: the time-averaging, residual-permutation, and wavelet-likelihood methods. We argue that the residual-permutation method is unsound for estimating the uncertainty of parameter estimates and thus recommend refraining from this method altogether. We characterize the behavior of the time-averaging method's rms-versus-bin-size curves at bin sizes similar to the total observation duration, which may lead to underestimated uncertainties. For the wavelet-likelihood method, we note errors in the published equations and provide a list of corrections. We further assess the performance of these techniques by injecting and retrieving eclipse signals into synthetic and real Spitzer light curves, analyzing the results in terms of the relative-accuracy and coverage-fraction statistics. Both the time-averaging and wavelet-likelihood methods significantly improve the estimate of the eclipse depth over a white-noise analysis (a Markov-chain Monte Carlo exploration assuming uncorrelated noise). However, the corrections are not perfect: when retrieving the eclipse depth from Spitzer data sets, these methods covered the true (injected) depth within the 68% credible region in only ∼45%-65% of the trials. Lastly, we present our open-source model-fitting tool, Multi-Core Markov-Chain Monte Carlo (MC3). This package uses Bayesian statistics to estimate the best-fitting values and credible regions of the parameters of a (user-provided) model. MC3 is a Python/C code, available at https://github.com/pcubillos/MCcubed.
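
    The time-averaging diagnostic is compact enough to sketch. The toy below bins fit residuals at several bin sizes and compares the binned rms to a simplified white-noise expectation (without the finite-bin correction); the residuals are synthetic white noise, so the two columns should track each other, and excess rms would flag correlated noise.

      import numpy as np

      rng = np.random.default_rng(5)

      # Residuals of a light-curve fit; pure white noise here for illustration.
      res = rng.normal(0.0, 1.0e-3, 4096)

      for nbin in (1, 4, 16, 64, 256):
          m = len(res) // nbin
          binned = res[:m * nbin].reshape(m, nbin).mean(axis=1)
          rms = np.sqrt(np.mean(binned**2))
          expected = res.std() / np.sqrt(nbin)   # white-noise prediction
          print(nbin, rms, expected)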

  10. Feasibility of using Geant4 Monte Carlo simulation for IMRT dose calculations for the Novalis Tx with a HD-120 multi-leaf collimator

    NASA Astrophysics Data System (ADS)

    Jung, Hyunuk; Shin, Jungsuk; Chung, Kwangzoo; Han, Youngyih; Kim, Jinsung; Choi, Doo Ho

    2015-05-01

    The aim of this study was to develop an independent dose verification system by using a Monte Carlo (MC) calculation method for intensity modulated radiation therapy (IMRT) conducted by using a Varian Novalis Tx (Varian Medical Systems, Palo Alto, CA, USA) equipped with a high-definition multi-leaf collimator (HD-120 MLC). The Geant4 framework was used to implement a dose calculation system that accurately predicted the delivered dose. For this purpose, the Novalis Tx Linac head was modeled according to the specifications acquired from the manufacturer. Subsequently, MC simulations were performed by varying the mean energy, energy spread, and electron spot radius to determine optimum values of irradiation with 6-MV X-ray beams by using the Novalis Tx system. Computed percentage depth dose curves (PDDs) and lateral profiles were compared to the measurements obtained by using an ionization chamber (CC13). To validate the IMRT simulation by using the MC model we developed, we calculated a simple IMRT field and compared the result with the EBT3 film measurements in a water-equivalent solid phantom. Clinical cases, such as prostate cancer treatment plans, were then selected, and MC simulations were performed. The accuracy of the simulation was assessed against the EBT3 film measurements by using a gamma-index criterion. The optimal MC model parameters to specify the beam characteristics were a 6.8-MeV mean energy, a 0.5-MeV energy spread, and a 3-mm electron radius. The accuracy of these parameters was determined by comparison of MC simulations with measurements. The PDDs and the lateral profiles of the MC simulation deviated from the measurements by 1% and 2%, respectively, on average. The computed simple MLC fields agreed with the EBT3 measurements with a 95% passing rate under a 3%/3-mm gamma-index criterion. Additionally, in applying our model to clinical IMRT plans, we found that the MC calculations and the EBT3 measurements agreed well, with a passing rate greater than 95% on average under a 3%/3-mm gamma-index criterion. In summary, the Novalis Tx Linac head equipped with a HD-120 MLC was successfully modeled by using a Geant4 platform, and the accuracy of the model was successfully validated by comparisons with measurements. The MC model we have developed can be a useful tool for pretreatment quality assurance of IMRT plans and for commissioning of radiotherapy treatment planning.
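
    The gamma-index test used for these comparisons can be illustrated with a simplified one-dimensional implementation. Everything below - profiles, grid spacing, and criteria defaults - is an invented example, not the study's analysis code.

      import numpy as np

      def gamma_1d(dose_eval, dose_ref, dx, dta=3.0, dd=0.03):
          """Simplified global 1-D gamma index (3%/3 mm by default; dx, dta in mm)."""
          x = np.arange(len(dose_ref)) * dx
          norm = dose_ref.max()
          gammas = []
          for i, d in enumerate(dose_eval):
              dist = (x - x[i]) / dta                 # distance-to-agreement term
              diff = (dose_ref - d) / (dd * norm)     # dose-difference term
              gammas.append(np.sqrt(dist**2 + diff**2).min())
          return np.array(gammas)

      # Toy profiles: a reference curve vs. a slightly shifted, rescaled one.
      x = np.linspace(0, 100, 201)                    # 0.5 mm grid
      ref = np.exp(-((x - 50) / 15)**2)
      evl = 1.01 * np.exp(-((x - 50.4) / 15)**2)
      g = gamma_1d(evl, ref, dx=0.5)
      print("passing rate:", np.mean(g <= 1.0) * 100, "%")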

  11. Simulating complex atomistic processes: On-the-fly kinetic Monte Carlo scheme with selective active volumes

    NASA Astrophysics Data System (ADS)

    Xu, Haixuan; Osetsky, Yury N.; Stoller, Roger E.

    2011-10-01

    An accelerated atomistic kinetic Monte Carlo (KMC) approach for evolving complex atomistic structures has been developed. The method incorporates on-the-fly calculations of transition states (TSs) with a scheme for defining active volumes (AVs) in an off-lattice (relaxed) system. In contrast to conventional KMC models that require all reactions to be predetermined, this approach is self-evolving and any physically relevant motion or reaction may occur. Application of this self-evolving atomistic kinetic Monte Carlo (SEAK-MC) approach is illustrated by predicting the evolution of a complex defect configuration obtained in a molecular dynamics (MD) simulation of a displacement cascade in Fe. Over much longer times, it was shown that interstitial clusters interacting with other defects may change their structure, e.g., from glissile to sessile configuration. The direct comparison with MD modeling confirms the atomistic fidelity of the approach, while the longer time simulation demonstrates the unique capability of the model.
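
    The kinetic Monte Carlo core of such a scheme is the standard residence-time algorithm. The sketch below selects events from a rate catalog and advances the clock; in SEAK-MC the catalog itself would be rebuilt on the fly from transition-state searches in active volumes, which this toy omits, and the rates shown are invented.

      import numpy as np

      rng = np.random.default_rng(6)

      def kmc_step(rates, t):
          """One residence-time KMC step over the current rate catalog."""
          total = rates.sum()
          # Choose an event with probability proportional to its rate...
          event = np.searchsorted(np.cumsum(rates), rng.random() * total)
          # ...and advance the clock by an exponentially distributed waiting time.
          t += -np.log(rng.random()) / total
          return event, t

      rates = np.array([1e6, 5e5, 2e3])   # s^-1, e.g. hop, rotation, detrapping
      t = 0.0
      for _ in range(5):
          ev, t = kmc_step(rates, t)
          print("event", ev, "time", t)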

  12. Range Verification Methods in Particle Therapy: Underlying Physics and Monte Carlo Modeling

    PubMed Central

    Kraan, Aafke Christine

    2015-01-01

Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as a function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in vivo monitoring of the particle range can be performed by detecting secondary radiation emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including β+ emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modeling the production of secondaries is a non-trivial task because it involves nuclear physics interactions at energies where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects of modeling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, followed by a summary of some widely used MC codes in hadron therapy. Then, we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques, we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects. PMID:26217586

  13. Raman Monte Carlo simulation for light propagation for tissue with embedded objects

    NASA Astrophysics Data System (ADS)

    Periyasamy, Vijitha; Jaafar, Humaira Bte; Pramanik, Manojit

    2018-02-01

Monte Carlo (MC) simulation is one of the most prominent simulation techniques and is rapidly becoming the model of choice to study light-tissue interaction. Monte Carlo simulation for light transport in multi-layered tissue (MCML) is adapted and extended to different geometries by integrating embedded objects of various shapes (i.e., sphere, cylinder, cuboid and ellipsoid) into the multi-layered structure. These geometries are useful for providing realistic tissue structures, such as models of lymph nodes, tumors, blood vessels, the head and other simulation media. MC simulations were performed on various geometric media. The simulation of MCML with embedded objects (MCML-EO) was extended to photon propagation in the defined medium with Raman scattering, and the location of Raman photon generation is recorded. Simulations were performed on a modelled breast tissue with tumors (spherical and ellipsoidal) and blood vessels (cylindrical). Results are presented as both A-line and B-line scans of the embedded objects to determine the spatial locations where Raman photons were generated. Studies were done for different Raman probabilities.
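
    A minimal sketch of how a Raman generation site might be recorded during an MCML-style random walk is shown below. The optical coefficients, the per-event Raman probability, and the reduction to one dimension are illustrative assumptions, not the MCML-EO implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    mu_a, mu_s = 0.1, 10.0   # absorption / scattering coefficients (1/mm), illustrative
    mu_t = mu_a + mu_s
    p_raman = 1e-4           # per-scattering-event Raman conversion probability (assumed)

    def raman_sites_1d(n_photons=100_000, z_max=10.0):
        """Record depths at which Raman photons are generated (1D A-line sketch)."""
        sites = []
        for _ in range(n_photons):
            z, uz = 0.0, 1.0
            while 0.0 <= z <= z_max:
                z += uz * (-np.log(1.0 - rng.random()) / mu_t)  # sample free path
                if rng.random() < mu_a / mu_t:                  # absorbed
                    break
                if rng.random() < p_raman:                      # inelastic (Raman) event
                    sites.append(z)
                uz = 2.0 * rng.random() - 1.0                   # isotropic rescatter (1D proxy)
        return np.array(sites)

    # A histogram of raman_sites_1d() approximates an A-line of generation depths.
    ```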

  14. A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bal, Guillaume, E-mail: gb2030@columbia.edu; Davis, Anthony B., E-mail: Anthony.B.Davis@jpl.nasa.gov; Kavli Institute for Theoretical Physics, Kohn Hall, University of California, Santa Barbara, CA 93106-4030

    2011-08-20

Highlights: We introduce a variance reduction scheme for Monte Carlo (MC) transport. The primary application is atmospheric remote sensing. The technique first solves the adjoint problem using a deterministic solver. Next, the adjoint solution is used as an importance function for the MC solver. The adjoint problem is solved quickly since it ignores the volume. - Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.
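
    The core idea, drawing samples from a density shaped like a cheap adjoint approximation and reweighting, can be shown on a toy 1D integral. Everything below (the integrand, the ramp importance density) is an illustrative stand-in, not the paper's transport problem.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy problem: estimate I = integral of f on [0, 1], with f peaked near
    # the "detector" at x = 0.9 (the region an adjoint solution would flag).
    f = lambda x: np.exp(-50.0 * (x - 0.9) ** 2)

    n = 20_000
    x = rng.random(n)
    plain = f(x).mean()                 # plain MC: uniform samples

    # Importance sampling: draw from q(x) = 2x (a crude stand-in for a
    # deterministic adjoint approximation favoring the detector side) and
    # reweight by f/q. Inverse CDF of q is sqrt(u).
    xs = np.sqrt(rng.random(n))
    weighted = (f(xs) / (2.0 * xs)).mean()

    print(plain, weighted)  # same expectation; 'weighted' has lower variance
    ```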

  15. Reactive Monte Carlo sampling with an ab initio potential

    DOE PAGES

    Leiding, Jeff; Coe, Joshua D.

    2016-05-04

We present the first application of reactive Monte Carlo (RxMC) in a first-principles context. The algorithm samples a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but the molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We also discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulations of NH3 to those of ab initio molecular dynamics (AIMD). We find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the "rare-event" character of chemical reactions.

  16. Lens implementation on the GATE Monte Carlo toolkit for optical imaging simulation

    NASA Astrophysics Data System (ADS)

    Kang, Han Gyu; Song, Seong Hyun; Han, Young Been; Kim, Kyeong Min; Hong, Seong Jong

    2018-02-01

Optical imaging techniques are widely used for in vivo preclinical studies, and it is well known that the Geant4 Application for Emission Tomography (GATE) can be employed for the Monte Carlo (MC) modeling of light transport inside heterogeneous tissues. However, the GATE MC toolkit is limited in that it does not yet include an optical lens implementation, even though this is required for a more realistic optical imaging simulation. We describe our implementation of a biconvex lens in the GATE MC toolkit to improve both the sensitivity and the spatial resolution of optical imaging simulation. The lens implemented in GATE was validated against ZEMAX optical simulation using a US Air Force 1951 resolution target. The ray diagrams and the charge-coupled device images of the GATE optical simulation agreed with the ZEMAX optical simulation results. In conclusion, the use of a lens in the GATE optical simulation could improve the image quality of bioluminescence and fluorescence imaging significantly as compared with pinhole optics.
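
    The paraxial behavior such a lens adds can be summarized with ray-transfer (ABCD) matrices. The sketch below is a generic thin-lens illustration under the paraxial assumption; it is not the GATE implementation, which traces optical photons through the full lens geometry.

    ```python
    import numpy as np

    def thin_lens(f_mm):
        """Paraxial ray-transfer matrix of a thin lens with focal length f."""
        return np.array([[1.0, 0.0], [-1.0 / f_mm, 1.0]])

    def free_space(d_mm):
        """Ray-transfer matrix for propagation over a distance d."""
        return np.array([[1.0, d_mm], [0.0, 1.0]])

    # Image an object point 100 mm in front of an f = 50 mm lens.
    # Lens equation: 1/s' = 1/f - 1/s  ->  image at s' = 100 mm.
    system = free_space(100.0) @ thin_lens(50.0) @ free_space(100.0)
    ray = np.array([1.0, 0.02])   # [height (mm), angle (rad)] leaving the object
    print(system @ ray)           # height = -1 mm: inverted image, |m| = 1
    ```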

  17. The effect of statistical noise on IMRT plan quality and convergence for MC-based and MC-correction-based optimized treatment plans.

    PubMed

    Siebers, Jeffrey V

    2008-04-04

Monte Carlo (MC) is rarely used for IMRT plan optimization outside of research centres due to the extensive computational resources or long computation times required to complete the process. Time can be reduced by degrading the statistical precision of the MC dose calculation used within the optimization loop. However, this eventually introduces optimization convergence errors (OCEs). This study determines the statistical noise levels that can be tolerated during MC-IMRT optimization under the condition that the optimized plan has OCEs <100 cGy (1.5% of the prescription dose). Seven-field prostate IMRT treatment plans for 10 prostate patients are used in this study. Pre-optimization is performed for deliverable beams with a pencil-beam (PB) dose algorithm. Further deliverable-based optimization proceeds using: (1) MC-based optimization, where dose is recomputed with MC after each intensity update, or (2) a once-corrected (OC) MC-hybrid optimization, where an MC dose computation defines beam-by-beam dose correction matrices that are used during a PB-based optimization. Optimizations are performed with nominal per-beam MC statistical precisions of 2, 5, 8, 10, 15, and 20%. Following optimizer convergence, beams are re-computed with MC using 2% per-beam nominal statistical precision, and the 2 PTV and 10 OAR dose indices used in the optimization objective function are tallied. For both the MC-optimization and OC-optimization methods, statistical equivalence tests found that OCEs are less than 1.5% of the prescription dose for plans optimized with nominal statistical uncertainties of up to 10% per beam. The achieved statistical uncertainty in the patient for the 10% per-beam simulations, from the combination of the 7 beams, is ~3% with respect to the maximum dose for voxels with D > 0.5Dmax. The MC dose computation time for the OC-optimization is only 6.2 minutes on a single 3 GHz processor, with results clinically equivalent to high-precision MC computations.
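
    The reported drop from 10% per-beam noise to ~3% in the combined dose follows from quadrature addition of independent beam contributions. The sanity check below assumes the seven beams contribute roughly equal dose, which is an idealization.

    ```python
    import math

    n_beams = 7
    sigma_per_beam = 0.10   # 10% nominal per-beam statistical precision

    # For independent, roughly equal beam contributions, the relative noise
    # of the summed dose scales as sigma / sqrt(N):
    sigma_total = sigma_per_beam / math.sqrt(n_beams)
    print(f"{sigma_total:.1%}")   # ~3.8%, consistent with the ~3% reported
    ```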

  18. SU-C-204-01: A Fast Analytical Approach for Prompt Gamma and PET Predictions in a TPS for Proton Range Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroniger, K; Herzog, M; Landry, G

    2015-06-15

Purpose: We describe and demonstrate a fast analytical tool for prompt-gamma emission prediction based on filter functions applied to the depth dose profile. We present the implementation in a treatment planning system (TPS) of the same algorithm for positron emitter distributions. Methods: The prediction of the desired observable is based on the convolution of filter functions with the depth dose profile. For both prompt-gammas and positron emitters, the results of Monte Carlo (MC) simulations are compared with those of the analytical tool. For prompt-gamma emission from inelastic proton-induced reactions, homogeneous and inhomogeneous phantoms, alongside patient data, are used as irradiation targets of mono-energetic proton pencil beams. The accuracy of the tool is assessed in terms of the shape of the analytically calculated depth profiles and their absolute yields, compared to MC. For the positron emitters, the method is implemented in a research RayStation TPS and compared to MC predictions. Digital phantoms and patient data are used, and positron emitter spatial density distributions are analyzed. Results: Calculated prompt-gamma profiles agree with MC within 3% in terms of absolute yield and reproduce the correct shape. Based on an arbitrary reference material and by means of 6 filter functions (one per chemical element), profiles in any other material composed of those elements can be predicted. The TPS-implemented algorithm is accurate enough to enable, via the analytically calculated positron emitter profiles, detection of range differences between the TPS and MC with errors of the order of 1-2 mm. Conclusion: The proposed analytical method predicts prompt-gamma and positron emitter profiles which generally agree with the distributions obtained by a full MC. The implementation of the tool in a TPS shows that reliable profiles can be obtained directly from the dose calculated by the TPS, without the need for full MC simulation.
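
    The central operation, predicting a secondary-emission profile by convolving per-element filter functions with the depth dose, is compact. Below is a minimal numpy sketch in which the filter kernels and elemental weights are placeholders; the paper derives one filter per chemical element from a reference material.

    ```python
    import numpy as np

    def predict_profile(depth_dose, filters, weights):
        """Predict a secondary-emission depth profile as a weighted sum of
        per-element filter functions convolved with the depth dose.

        depth_dose: 1D dose profile D(z);
        filters: list of 1D kernels f_e(z), one per chemical element
                 (placeholders here, derived from a reference material in
                 the paper);
        weights: elemental composition of the target medium.
        """
        profile = np.zeros_like(depth_dose)
        for f_e, w in zip(filters, weights):
            profile += w * np.convolve(depth_dose, f_e, mode="same")
        return profile
    ```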

  19. On the definition of a Monte Carlo model for binary crystal growth.

    PubMed

    Los, J H; van Enckevort, W J P; Meekes, H; Vlieg, E

    2007-02-01

We show that consistency of the transition probabilities in a lattice Monte Carlo (MC) model for binary crystal growth with the thermodynamic properties of a system does not guarantee that MC simulations near equilibrium agree with the thermodynamic equilibrium phase diagram for that system. The deviations remain small for systems with small bond energies, but they can increase significantly for systems with large melting entropy, typical for molecular systems. These deviations are attributed to the surface kinetics, which is responsible for a metastable zone below the liquidus line where no growth occurs, even in the absence of a 2D nucleation barrier. Here we propose an extension of the MC model that introduces a freedom of choice in the transition probabilities while staying within the thermodynamic constraints. This freedom can be used to eliminate the discrepancy between the MC simulations and the thermodynamic equilibrium phase diagram. Agreement is achieved for the choice of transition probabilities yielding the fastest decrease of the free energy (i.e., the largest growth rate) of the system at a temperature slightly below the equilibrium temperature. An analytical model is developed which reproduces the MC results quite well, enabling a straightforward determination of the optimal set of transition probabilities. Application of both the MC and the analytical model to conditions well away from equilibrium, giving rise to kinetic phase diagrams, shows that the effect of kinetics on segregation is even stronger than that predicted by previous models.
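
    The freedom exploited here comes from the fact that detailed balance pins down only the ratio of attachment to detachment rates, not their common scale or how the energy difference is split between them. A minimal sketch of this generic constraint (not the authors' specific parameterization):

    ```python
    import math

    def growth_rates(delta_e, kT, prefactor=1.0, split=0.5):
        """Attachment/detachment rate pair satisfying detailed balance.

        Detailed balance only fixes k_att / k_det = exp(-delta_e / kT); the
        'prefactor' and the 'split' of the Boltzmann factor between the two
        rates are kinetic choices left free by thermodynamics: the freedom
        the extended MC model tunes (e.g., to maximize the growth rate).
        """
        k_att = prefactor * math.exp(-split * delta_e / kT)
        k_det = prefactor * math.exp((1.0 - split) * delta_e / kT)
        return k_att, k_det
    ```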

  20. Speckle-field propagation in 'frozen' turbulence: brightness function approach

    NASA Astrophysics Data System (ADS)

    Dudorov, Vadim V.; Vorontsov, Mikhail A.; Kolosov, Valeriy V.

    2006-08-01

    Speckle-field long- and short-exposure spatial correlation characteristics for target-in-the-loop (TIL) laser beam propagation and scattering in atmospheric turbulence are analyzed through the use of two different approaches: the conventional Monte Carlo (MC) technique and the recently developed brightness function (BF) method. Both the MC and the BF methods are applied to analysis of speckle-field characteristics averaged over target surface roughness realizations under conditions of 'frozen' turbulence. This corresponds to TIL applications where speckle-field fluctuations associated with target surface roughness realization updates occur within a time scale that can be significantly shorter than the characteristic atmospheric turbulence time. Computational efficiency and accuracy of both methods are compared on the basis of a known analytical solution for the long-exposure mutual correlation function. It is shown that in the TIL propagation scenarios considered the BF method provides improved accuracy and requires significantly less computational time than the conventional MC technique. For TIL geometry with a Gaussian outgoing beam and Lambertian target surface, both analytical and numerical estimations for the speckle-field long-exposure correlation length are obtained. Short-exposure speckle-field correlation characteristics corresponding to propagation in 'frozen' turbulence are estimated using the BF method. It is shown that atmospheric turbulence-induced static refractive index inhomogeneities do not significantly affect the characteristic correlation length of the speckle field, whereas long-exposure spatial correlation characteristics are strongly dependent on turbulence strength.

  2. Nonlinear Spatial Inversion Without Monte Carlo Sampling

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Nawaz, A.

    2017-12-01

High-dimensional, nonlinear inverse or inference problems usually have non-unique solutions. The distribution of solutions is described by probability distributions, and these are usually found using Monte Carlo (MC) sampling methods. These take pseudo-random samples of models in parameter space, calculate the probability of each sample given available data and other information, and thus map out high or low probability values of model parameters. However, such methods converge to the solution only as the number of samples tends to infinity; in practice, MC is found to be slow to converge, convergence is not guaranteed to be achieved in finite time, and detection of convergence requires the use of subjective criteria. We propose a method for Bayesian inversion of categorical variables such as geological facies or rock types in spatial problems, which requires no sampling at all. The method uses a 2-D Hidden Markov Model over a grid of cells, where observations represent localized data constraining the model in each cell. The data in our example application are seismic properties such as P- and S-wave impedances or rock density; our model parameters are the hidden states and represent the geological rock types in each cell. The observations at each location are assumed to depend on the facies at that location only - an assumption referred to as 'localized likelihoods'. However, the facies at a location cannot be determined solely by the observation at that location, as it also depends on prior information concerning its correlation with the spatial distribution of facies elsewhere. Such prior information is included in the inversion in the form of a training image, which represents a conceptual depiction of the distribution of local geologies that might be expected, but other forms of prior information can be used in the method as desired. The method provides direct (pseudo-analytic) estimates of posterior marginal probability distributions over each variable, so these do not need to be estimated from samples as is required in MC methods. On a 2-D test example the method is shown to outperform previous methods significantly, and at a fraction of the computational cost. In many foreseeable applications there are therefore no serious impediments to extending the method to 3-D spatial models.
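
    The 2-D grid version requires approximations, but in one dimension the sampling-free computation of posterior marginals for a hidden Markov chain is the classic forward-backward recursion, sketched below. The transition matrix (spatial facies correlation), the localized likelihoods, and the prior are assumed inputs.

    ```python
    import numpy as np

    def posterior_marginals(trans, likelihoods, prior):
        """Forward-backward posterior marginals for a 1D hidden Markov chain.

        trans[i, j]: P(state_{t+1} = j | state_t = i);
        likelihoods[t, i]: P(obs_t | state_t = i) ('localized likelihoods');
        prior[i]: P(state_0 = i).
        Returns post[t, i] = P(state_t = i | all observations).
        """
        T, S = likelihoods.shape
        fwd = np.zeros((T, S))
        fwd[0] = prior * likelihoods[0]
        fwd[0] /= fwd[0].sum()
        for t in range(1, T):
            fwd[t] = likelihoods[t] * (fwd[t - 1] @ trans)
            fwd[t] /= fwd[t].sum()           # normalize for numerical stability
        bwd = np.ones((T, S))
        for t in range(T - 2, -1, -1):
            bwd[t] = trans @ (likelihoods[t + 1] * bwd[t + 1])
            bwd[t] /= bwd[t].sum()
        post = fwd * bwd
        return post / post.sum(axis=1, keepdims=True)
    ```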

  3. Methods for calculating the absolute entropy and free energy of biological systems based on ideas from polymer physics.

    PubMed

    Meirovitch, Hagai

    2010-01-01

The commonly used simulation techniques, Metropolis Monte Carlo (MC) and molecular dynamics (MD), are of a dynamical type which enables one to sample system configurations i correctly with the Boltzmann probability P_i^B, while the value of P_i^B is not provided directly; therefore, it is difficult to obtain the absolute entropy, S ~ -ln P_i^B, and the Helmholtz free energy, F. With a different simulation approach developed in polymer physics, a chain is grown step-by-step with transition probabilities (TPs), and thus their product is the value of the construction probability; therefore, the entropy is known. Because all exact simulation methods are equivalent, i.e. they lead to the same averages and fluctuations of physical properties, one can treat an MC or MD sample as if its members had rather been generated step-by-step. Thus, each configuration i of the sample can be reconstructed (from nothing) by calculating the TPs with which it could have been constructed. This idea also applies to bulk systems such as fluids or magnets. This approach led earlier to the "local states" (LS) and the "hypothetical scanning" (HS) methods, which are approximate in nature. A recent development is the hypothetical scanning Monte Carlo (HSMC) (or molecular dynamics, HSMD) method, which is based on stochastic TPs where all interactions are taken into account. In this respect, HSMC(D) can be viewed as exact, and the only approximation involved is due to insufficient MC (MD) sampling for calculating the TPs. The validity of HSMC has been established by applying it first to liquid argon, TIP3P water, self-avoiding walks (SAW), and polyglycine models, where the results for F were found to agree with those obtained by other methods. Subsequently, HSMD was applied to mobile loops of the enzymes porcine pancreatic alpha-amylase and acetylcholinesterase in explicit water, where the difference in F between the bound and free states of the loop was calculated. Currently, HSMD is being extended for calculating the absolute and relative free energies of ligand-enzyme binding. We describe the whole approach and discuss future directions.
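
    The underlying identity is that a configuration built with known transition probabilities has ln P equal to the sum of their logs, so -<ln P> estimates the absolute entropy. The toy below checks this on independent spins, where the exact answer is N ln 2; in HSMC the TPs are instead reconstructed for configurations sampled by MC or MD.

    ```python
    import math
    import random

    def entropy_estimate(n_spins=20, n_samples=5000):
        """Estimate S = -<ln P(config)> for independent +/-1 spins (k_B = 1).

        Each configuration is built spin by spin with transition probability
        0.5, so ln P = n_spins * ln(0.5) and S -> n_spins * ln 2 exactly.
        """
        total = 0.0
        for _ in range(n_samples):
            log_p = 0.0
            for _ in range(n_spins):
                _ = random.choice((-1, 1))   # construct the next spin
                log_p += math.log(0.5)       # its transition probability
            total += -log_p
        return total / n_samples

    print(entropy_estimate(), 20 * math.log(2))   # estimate vs exact N ln 2
    ```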

  4. EDITORIAL: Special section: Selected papers from the Third European Workshop on Monte Carlo Treatment Planning (MCTP2012) Special section: Selected papers from the Third European Workshop on Monte Carlo Treatment Planning (MCTP2012)

    NASA Astrophysics Data System (ADS)

    Spezi, Emiliano; Leal, Antonio

    2013-04-01

The Third European Workshop on Monte Carlo Treatment Planning (MCTP2012) was held from 15-18 May, 2012 in Seville, Spain. The event was organized by the Universidad de Sevilla with the support of the European Workgroup on Monte Carlo Treatment Planning (EWG-MCTP). MCTP2012 followed two successful meetings, one held in Ghent (Belgium) in 2006 (Reynaert 2007) and one in Cardiff (UK) in 2009 (Spezi 2010). The recurrence of these workshops, together with successful events held in parallel by McGill University in Montreal (Seuntjens et al 2012), shows consolidated interest from the scientific community in Monte Carlo (MC) treatment planning. The workshop was attended by a total of 90 participants, mainly coming from a medical physics background. A total of 48 oral presentations and 15 posters were delivered in specific scientific sessions including dosimetry, code development, imaging, modelling of photon and electron radiation transport, external beam radiation therapy, nuclear medicine, brachytherapy and hadrontherapy. A copy of the programme is available on the workshop's website (www.mctp2012.com). In this special section of Physics in Medicine and Biology we report six papers that were selected following the journal's rigorous peer review procedure. These papers provide a good cross section of the areas of application of MC in treatment planning that were discussed at MCTP2012. Czarnecki and Zink (2013) and Wagner et al (2013) present the results of their work in small field dosimetry. Czarnecki and Zink (2013) studied field-size- and detector-dependent correction factors for diodes and ion chambers within a clinical 6 MV photon beam generated by a Siemens linear accelerator. Their modelling work, based on the BEAMnrc/EGSnrc codes and experimental measurements, revealed that unshielded diodes were the best choice for small field dosimetry because of their independence from the electron beam spot size and a correction factor close to unity. Wagner et al (2013) investigated the recombination effect on liquid ionization chambers for stereotactic radiotherapy, a field of increasing importance in external beam radiotherapy. They modelled both the radiation source (a Cyberknife unit) and the detector with the BEAMnrc/EGSnrc codes and quantified the dependence of the response of this type of detector on factors such as the volume effect and the electrode. They also recommended that these dependences be accounted for in measurements involving small fields. In the field of external beam radiotherapy, Chakarova et al (2013) showed how total body irradiation (TBI) could be improved by simulating patient treatments with MC. In particular, BEAMnrc/EGSnrc-based simulations highlighted the importance of optimizing individual compensators for TBI treatments. In the same area of application, Mairani et al (2013) reported on a new tool for treatment planning in proton therapy based on the FLUKA MC code. The software, used to model both the proton therapy beam and the patient anatomy, supports single-field and multiple-field optimization and can be used to optimize physical and relative biological effectiveness (RBE)-weighted dose distributions, using both constant and variable RBE models. In the field of nuclear medicine, Marcatili et al (2013) presented RAYDOSE, a Geant4-based code specifically developed for applications in molecular radiotherapy (MRT).
RAYDOSE has been designed to work in MRT trials using sequential positron emission tomography (PET) or single-photon emission tomography (SPECT) imaging to model patient-specific time-dependent metabolic uptake and to calculate the total 3D dose distribution. The code was validated through experimental measurements in homogeneous and heterogeneous phantoms. Finally, in the field of code development, Miras et al (2013) reported on CloudMC, a Windows Azure-based application for the parallelization of MC calculations in a dynamic cluster environment. Although the performance of CloudMC has been tested with the PENELOPE MC code, the authors report that the software has been designed to be independent of the type of MC code, provided that the simulation meets a number of operational criteria. We wish to thank Elekta/CMS Inc., the University of Seville, the Junta of Andalusia and the European Regional Development Fund for their financial support. We would also like to acknowledge the members of EWG-MCTP for their help in peer-reviewing all the abstracts, and all the invited speakers who kindly agreed to deliver keynote presentations in their areas of expertise. A final word of thanks to our colleagues who worked on the reviewing process of the papers selected for this special section and to the IOP Publishing staff who made it possible. MCTP2012 was accredited by the European Federation of Organisations for Medical Physics as a CPD event for medical physicists. Emiliano Spezi and Antonio Leal, Guest Editors. References: Chakarova R, Müntzing K, Krantz M, Hedin E and Hertzman S 2013 Monte Carlo optimization of total body irradiation in a phantom and patient geometry Phys. Med. Biol. 58 2461-69; Czarnecki D and Zink K 2013 Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields Phys. Med. Biol. 58 2431-44; Mairani A, Böhlen T T, Schiavi A, Tessonnier T, Molinelli S, Brons S, Battistoni G, Parodi K and Patera V 2013 A Monte Carlo-based treatment planning tool for proton therapy Phys. Med. Biol. 58 2471-90; Marcatili S, Pettinato C, Daniels S, Lewis G, Edwards P, Fanti S and Spezi E 2013 Development and validation of RAYDOSE: a Geant4 based application for molecular radiotherapy Phys. Med. Biol. 58 2491-508; Miras H, Jiménez R, Miras C and Gomà C 2013 CloudMC: a cloud computing application for Monte Carlo simulation Phys. Med. Biol. 58 N125-33; Reynaert N 2007 First European Workshop on Monte Carlo Treatment Planning J. Phys.: Conf. Ser. 74 011001; Seuntjens J, Beaulieu L, El Naqa I and Després P 2012 Special section: Selected papers from the Fourth International Workshop on Recent Advances in Monte Carlo Techniques for Radiation Therapy Phys. Med. Biol. 57 (11) E01; Spezi E 2010 Special section: Selected papers from the Second European Workshop on Monte Carlo Treatment Planning (MCTP2009) Phys. Med. Biol. 55 (16) E01; Wagner A, Crop F, Lacornerie T, Vandevelde F and Reynaert N 2013 Use of a liquid ionization chamber for stereotactic radiotherapy dosimetry Phys. Med. Biol. 58 2445-59

  5. EDITORIAL: International Workshop on Current Topics in Monte Carlo Treatment Planning

    NASA Astrophysics Data System (ADS)

    Verhaegen, Frank; Seuntjens, Jan

    2005-03-01

The use of Monte Carlo particle transport simulations in radiotherapy was pioneered in the early nineteen-seventies, but it was not until the eighties that they gained recognition as an essential research tool for radiation dosimetry, health physics and, later on, radiation therapy treatment planning. Since the mid-nineties, there has been a boom in the number of workers using MC techniques in radiotherapy, and in the quantity of papers published on the subject. Research and applications of MC techniques in radiotherapy span a very wide range, from fundamental studies of cross sections and the development of particle transport algorithms, to clinical evaluation of treatment plans for a variety of radiotherapy modalities. The International Workshop on Current Topics in Monte Carlo Treatment Planning took place at Montreal General Hospital, which is part of McGill University, halfway up Mount Royal on Montreal Island. It was held from 3-5 May, 2004, right after the freezing winter had lost its grip on Canada. About 120 workers attended the Workshop, representing 18 countries. Most of the pioneers in the field were present, but also a large group of young scientists. In a very full programme, 41 long papers were presented (of which 12 were invited) and 20 posters were on display during the whole meeting. The topics covered included the latest developments in MC algorithms, statistical issues, source modelling and MC treatment planning for photon, electron and proton treatments. The final day was entirely devoted to clinical implementation issues. Monte Carlo radiotherapy treatment planning has only now made a slow entrée into the clinical environment, taking considerably longer than envisaged ten years ago. Of the twenty-five papers in this dedicated special issue, about a quarter deal with this topic, with probably many more studies to follow in the near future. If anything, we hope the Workshop served as an accelerator for more clinical evaluation of MC applications. The remainder of the papers in this issue demonstrate that there is still plenty of work to be undertaken on other topics such as source modelling, calculation speed, data analysis, and the development of user-friendly applications. We acknowledge the financial support of the National Cancer Institute of Canada, the Institute of Cancer Research of the Canadian Institutes of Health Research, the Research Grants Office and the Post Graduate Student Society of McGill University, and the Institute of Physics Publishing (IOPP). A final word of thanks goes out to all of those who contributed to the successful Workshop: our local medical physics students and staff, the many colleagues who acted as guest associate editors for the reviewing process, the IOPP staff, and the authors who generated new and exciting work.

  6. Improved QM Methods and Their Application in QM/MM Studies of Enzymatic Reactions

    NASA Astrophysics Data System (ADS)

    Jorgensen, William L.

    2007-03-01

    Quantum mechanics (QM) and Monte Carlo statistical mechanics (MC) simulations have been used by us since the early 1980s to study reaction mechanisms and the origin of solvent effects on reaction rates. A goal was always to perform the QM and MC/MM calculations simultaneously in order to obtain free-energy surfaces in solution with no geometrical restrictions. This was achieved by 2002 and complete free-energy profiles and surfaces with full sampling of solute and solvent coordinates can now be obtained through one job submission using BOSS [JCC 2005, 26, 1689]. Speed and accuracy demands also led to development of the improved semiempirical QM method, PDDG-PM3 [JCC 1601 (2002); JCTC 817 (2005)]. The combined PDDG-PM3/MC/FEP methodology has provided excellent results for free energies of activation for many reactions in numerous solvents. Recent examples include Cope, Kemp and E1cb eliminations [JACS 8829 (2005), 6141 (2006); JOC 4896 (2006)], as well as enzymatic reactions catalyzed by the putative Diels-Alderase, macrophomate synthase, and fatty-acid amide hydrolase [JACS 3577 (2005); JACS (2006)]. The presentation will focus on the accuracy that is currently achievable in such QM/MM studies and the accuracy of the underlying QM methodology including extensive comparisons of results from PDDG-PM3 and ab initio DFT methods.

  7. Parallelized traveling cluster approximation to study numerically spin-fermion models on large lattices

    DOE PAGES

    Mukherjee, Anamitra; Patel, Niravkumar D.; Bishop, Chris; ...

    2015-06-08

Lattice spin-fermion models are quite important for studying correlated systems where quantum dynamics allows for a separation between slow and fast degrees of freedom. The fast degrees of freedom are treated quantum mechanically, while the slow variables, generically referred to as the "spins," are treated classically. At present, exact diagonalization coupled with classical Monte Carlo (ED + MC) is extensively used to solve numerically a general class of lattice spin-fermion problems. In this common setup, the classical variables (spins) are treated via the standard MC method, while the fermion problem is solved by exact diagonalization. The "traveling cluster approximation" (TCA) is a real-space variant of the ED + MC method that allows one to solve spin-fermion problems on lattices with up to 10^3 sites. In this paper, we present a novel reorganization of the TCA algorithm in a manner that can be efficiently parallelized. This allows us to solve generic spin-fermion models easily on 10^4 lattice sites, and with some effort on 10^5 lattice sites, representing the record lattice sizes studied for this family of models.

  9. Fast determination of the spatially distributed photon fluence for light dose evaluation of PDT

    NASA Astrophysics Data System (ADS)

    Zhao, Kuanxin; Chen, Weiting; Li, Tongxin; Yan, Panpan; Qin, Zhuanping; Zhao, Huijuan

    2018-02-01

Photodynamic therapy (PDT) has shown the advantages of noninvasiveness and high efficiency in the treatment of early-stage skin cancer. Rapid and accurate determination of the spatially distributed photon fluence in turbid tissue is essential for the dosimetry evaluation of PDT. It is generally known that photon fluence can be accurately obtained by Monte Carlo (MC) methods, but too much time is consumed, especially for complex light source modes or online real-time dosimetry evaluation of PDT. In this work, a method to rapidly calculate the spatially distributed photon fluence in a turbid medium is proposed, implementing a classical perturbation and iteration theory on mesh Monte Carlo (MMC). In the proposed method, the photon fluence is obtained by superposing a perturbed and iterative solution, accounting for the defects in the turbid medium, onto an unperturbed solution for the background medium, so that repetitive MMC simulations can be avoided. To validate the method, a non-melanoma skin cancer model is simulated. The simulation results show that the photon fluence can be obtained quickly and correctly by the perturbation algorithm.

  10. Application of a Monte Carlo framework with bootstrapping for quantification of uncertainty in baseline map of carbon emissions from deforestation in Tropical Regions

    Treesearch

    William Salas; Steve Hagen

    2013-01-01

    This presentation will provide an overview of an approach for quantifying uncertainty in spatial estimates of carbon emission from land use change. We generate uncertainty bounds around our final emissions estimate using a randomized, Monte Carlo (MC)-style sampling technique. This approach allows us to combine uncertainty from different sources without making...

  11. Stopping power and dose calculations with analytical and Monte Carlo methods for protons and prompt gamma range verification

    NASA Astrophysics Data System (ADS)

    Usta, Metin; Tufan, Mustafa Çağatay; Aydın, Güral; Bozkurt, Ahmet

    2018-07-01

In this study, we have performed calculations of stopping power, depth dose, and range verification for proton beams using the dielectric and Bethe-Bloch theories and the FLUKA, Geant4 and MCNPX Monte Carlo codes. In this framework, as analytical approaches, the Drude model was applied for the dielectric theory, and the effective charge approach with Roothaan-Hartree-Fock charge densities was used in the Bethe theory. In the simulations, different setup parameters were selected to evaluate the performance of the three distinct Monte Carlo codes. The lung and breast tissues investigated are associated with the most common types of cancer throughout the world. The results were compared with each other and with the available data in the literature. In addition, the obtained results were verified with prompt gamma range data. For both stopping power values and depth-dose distributions, it was found that the Monte Carlo values give better results compared with the analytical ones, while the results that agree best with ICRU data in terms of stopping power are those of the effective charge approach among the analytical methods and of the FLUKA code among the MC packages. In the depth-dose distributions of the examined tissues, although the Bragg curves for Monte Carlo almost overlap, the analytical ones show significant deviations that become more pronounced with increasing energy. Verifications with the results of prompt gamma photons were attempted for 100-200 MeV protons, which are regarded as important for proton therapy. The analytical results are within 2%-5% and the Monte Carlo values are within 0%-2% as compared with those of the prompt gammas.
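
    For orientation, the uncorrected Bethe formula underlying such analytical stopping-power calculations is easy to evaluate. The sketch below omits the shell, Barkas, and density-effect corrections (and the effective-charge and Roothaan-Hartree-Fock refinements used in the paper), so it is a first approximation only.

    ```python
    import math

    def bethe_stopping_power(E_MeV, n_e, I_eV):
        """Uncorrected Bethe collision stopping power for protons (MeV/cm).

        E_MeV: proton kinetic energy; n_e: electron density (1/cm^3);
        I_eV: mean excitation energy. Shell, Barkas and density-effect
        corrections are neglected.
        """
        me_c2 = 0.510998950      # electron rest energy (MeV)
        mp_c2 = 938.2720813      # proton rest energy (MeV)
        re = 2.8179403262e-13    # classical electron radius (cm)
        z = 1.0                  # projectile charge (proton)
        gamma = 1.0 + E_MeV / mp_c2
        beta2 = 1.0 - 1.0 / gamma ** 2
        I = I_eV * 1e-6          # convert to MeV
        coeff = 4.0 * math.pi * re ** 2 * me_c2 * n_e * z ** 2 / beta2
        return coeff * (math.log(2.0 * me_c2 * beta2 * gamma ** 2 / I) - beta2)

    # Water: n_e ~ 3.34e23 electrons/cm^3, I ~ 75 eV; 100 MeV protons give
    # ~7.3 MeV/cm, close to the ICRU tabulation.
    print(bethe_stopping_power(100.0, 3.34e23, 75.0))
    ```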

  12. Finite element model updating using the shadow hybrid Monte Carlo technique

    NASA Astrophysics Data System (ADS)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques to deal with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the system (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers a very important MCMC approach to dealing with higher-dimensional complex problems. HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation, we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
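
    The HMC move that SHMC modifies consists of a momentum refresh, a leapfrog MD trajectory, and a Metropolis test on the energy error; SHMC's change is to evaluate that test with a shadow Hamiltonian (not shown here). A minimal sketch for a generic log-posterior:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def hmc_step(x, grad_log_post, log_post, eps=0.1, n_leapfrog=20):
        """One Hybrid Monte Carlo step (SHMC would instead evaluate the
        acceptance test with a shadow Hamiltonian)."""
        p = rng.standard_normal(x.size)          # momentum refresh
        x_new, p_new = x.copy(), p.copy()
        # leapfrog integration of the MD trajectory
        p_new = p_new + 0.5 * eps * grad_log_post(x_new)
        for _ in range(n_leapfrog - 1):
            x_new = x_new + eps * p_new
            p_new = p_new + eps * grad_log_post(x_new)
        x_new = x_new + eps * p_new
        p_new = p_new + 0.5 * eps * grad_log_post(x_new)
        # Metropolis test on the total-energy error H = U + K, U = -log_post
        h_old = -log_post(x) + 0.5 * p @ p
        h_new = -log_post(x_new) + 0.5 * p_new @ p_new
        return x_new if rng.random() < np.exp(min(0.0, h_old - h_new)) else x

    # Example: sample a 5D standard normal posterior.
    log_post = lambda x: -0.5 * x @ x
    grad = lambda x: -x
    x = np.zeros(5)
    for _ in range(1000):
        x = hmc_step(x, grad, log_post)
    ```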

  13. SU-C-204-06: Monte Carlo Dose Calculation for Kilovoltage X-Ray-Psoralen Activated Cancer Therapy (X-PACT): Preliminary Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mein, S; Gunasingha, R; Nolan, M

Purpose: X-PACT is an experimental cancer therapy where kV x-rays are used to photo-activate anti-cancer therapeutics through phosphor intermediaries (phosphors that absorb x-rays and re-radiate as UV light). Clinical trials in pet dogs are currently underway (NC State College of Veterinary Medicine), and an essential component is the ability to model the kV dose in these dogs. Here we report the commissioning and characterization of a Monte Carlo (MC) treatment planning simulation tool to calculate X-PACT radiation doses in canine trials. Methods: The FLUKA multi-particle MC simulation package was used to simulate a standard X-PACT radiation treatment beam of 80 kVp with the Varian OBI x-ray source geometry. The beam quality was verified by comparing measured and simulated attenuation of the beam by various thicknesses of aluminum (2-4.6 mm) under narrow beam conditions (HVL). The beam parameters at commissioning were then corroborated using MC, characterized and verified with empirically collected commissioning data, including percent depth dose curves (PDD), back-scatter factors (BSF), collimator scatter factors, and the heel effect. All simulations were conducted for N=30M histories at M=100 iterations. Results: HVL and PDD simulation data agreed with average percent errors of 2.42 ± 0.33% and 6.03 ± 1.58%, respectively. The mean square error (MSE) values for HVL and PDD (0.07% and 0.50%) were low, as expected; however, longer simulations are required to validate convergence to the expected values. Qualitatively, pre- and post-filtration source spectra matched well with 80 kVp references generated via the SPEKTR software. Further validation of commissioning data simulation is underway in preparation for first-time 3D dose calculations with canine CBCT data. Conclusion: We have prepared a Monte Carlo simulation capable of accurate dose calculation for use with ongoing X-PACT canine clinical trials. Preliminary results show good agreement with measured data and hold promise for accurate quantification of dose for this novel psoralen x-ray therapy. Funding Support, Disclosures, & Conflict of Interest: The Monte Carlo simulation work was not funded; Drs. Adamson & Oldham have received funding from Immunolight LLC for X-PACT research.

  14. Monte Carlo based, patient-specific RapidArc QA using Linac log files.

    PubMed

    Teke, Tony; Bergman, Alanah M; Kwa, William; Gill, Bradford; Duzenli, Cheryl; Popescu, I Antoniu

    2010-01-01

A Monte Carlo (MC) based QA process to validate the dynamic beam delivery accuracy for Varian RapidArc (Varian Medical Systems, Palo Alto, CA) using Linac delivery log files (DynaLog) is presented. Using DynaLog file analysis and MC simulations, the goals of this article are to (a) confirm that adequate sampling is used in the RapidArc optimization algorithm (177 static gantry angles) and (b) assess the physical machine performance [gantry angle and monitor unit (MU) delivery accuracy]. Ten clinically acceptable RapidArc treatment plans were generated for various tumor sites and delivered to a water-equivalent cylindrical phantom on the treatment unit. Three Monte Carlo simulations were performed to calculate dose to the CT phantom image set: (a) one using a series of static gantry angles defined by 177 control points with treatment planning system (TPS) MLC control files (planning files), (b) one using continuous gantry rotation with TPS-generated MLC control files, and (c) one using continuous gantry rotation with actual Linac delivery log files. Monte Carlo simulated dose distributions were compared to both ionization chamber point measurements and RapidArc TPS calculated doses. The 3D dose distributions were compared using a 3D gamma-factor analysis, employing a 3%/3 mm distance-to-agreement criterion. The dose difference between MC simulations, TPS, and ionization chamber point measurements was less than 2.1%. For all plans, the MC calculated 3D dose distributions agreed well with the TPS calculated doses (gamma-factor values were less than 1 for more than 95% of the points considered). Machine performance QA was supplemented with an extensive DynaLog file analysis, which showed that leaf position errors were less than 1 mm for 94% of the time, with no leaf errors greater than 2.5 mm. The mean standard deviations in MU and gantry angle were 0.052 MU and 0.355 degrees, respectively, for the ten cases analyzed. The accuracy and flexibility of the Monte Carlo based RapidArc QA system were demonstrated. Good machine performance and accurate dose distribution delivery of RapidArc plans were observed. The sampling used in the TPS optimization algorithm was found to be adequate.

  15. Mean transverse momenta correlations in hadron-hadron collisions in MC toy model with repulsing strings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altsybeev, Igor

    2016-01-22

In the present work, a Monte Carlo toy model with repulsing quark-gluon strings in hadron-hadron collisions is described. String repulsion creates transverse boosts for the string decay products, giving modifications of observables. As an example, long-range correlations between the mean transverse momenta of particles in two observation windows are studied in a MC toy simulation of heavy-ion collisions.

  16. A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation

    NASA Astrophysics Data System (ADS)

    Qiang, Z.; Zeng, L.; Wu, L.

    2016-12-01

Due to the strong spatial heterogeneity of landfills, uncertainty is ubiquitous in the gas transport process in a landfill. To accurately characterize landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements, e.g., the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, a functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained in the PCE. As illustrated by numerical case studies, the proposed method shows significant superiority in computational efficiency compared with the traditional MC based iterative EnKF. The developed method has promising potential for reliable prediction and management of landfill gas production.
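
    For context, the analysis step both methods share is the ensemble Kalman update, sketched below with the forward-model predictions as an input; in PCIKF these predictions would come from the polynomial chaos surrogate rather than full simulations. Array shapes and names are illustrative.

    ```python
    import numpy as np

    def enkf_update(X, Y, y_obs, obs_err_std, rng):
        """One ensemble Kalman analysis step.

        X: (n_param, n_ens) parameter ensemble (e.g., log-permeabilities);
        Y: (n_obs, n_ens) predicted observations (e.g., gas pressures);
        y_obs: (n_obs,) measured data; obs_err_std: observation noise std.
        """
        n_obs, n_ens = Y.shape
        Xa = X - X.mean(axis=1, keepdims=True)      # parameter anomalies
        Ya = Y - Y.mean(axis=1, keepdims=True)      # prediction anomalies
        C_xy = Xa @ Ya.T / (n_ens - 1)
        C_yy = Ya @ Ya.T / (n_ens - 1) + obs_err_std ** 2 * np.eye(n_obs)
        K = C_xy @ np.linalg.inv(C_yy)              # Kalman gain
        # perturbed observations keep the analysis ensemble spread correct
        perturbed = y_obs[:, None] + obs_err_std * rng.standard_normal(Y.shape)
        return X + K @ (perturbed - Y)
    ```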

  17. Monte Carlo proton dose calculations using a radiotherapy specific dual-energy CT scanner for tissue segmentation and range assessment

    NASA Astrophysics Data System (ADS)

    Almeida, Isabel P.; Schyns, Lotte E. J. R.; Vaniqui, Ana; van der Heyden, Brent; Dedes, George; Resch, Andreas F.; Kamp, Florian; Zindler, Jaap D.; Parodi, Katia; Landry, Guillaume; Verhaegen, Frank

    2018-06-01

Proton beam ranges derived from dual-energy computed tomography (DECT) images from a dual-spiral radiotherapy (RT)-specific CT scanner were assessed using Monte Carlo (MC) dose calculations. Images from a dual-source and a twin-beam DECT scanner were also used to establish a comparison to the RT-specific scanner. Proton range calculations based on conventional single-energy CT (SECT) were additionally performed to benchmark against literature values. Using two phantoms, a DECT methodology was tested as input for GEANT4 MC proton dose calculations. Proton ranges were calculated for different mono-energetic proton beams irradiating both phantoms; the results were compared to the ground truth based on the phantom compositions. The same methodology was applied to a head-and-neck cancer patient using both SECT and dual-spiral DECT scans from the RT-specific scanner. A pencil-beam-scanning plan was designed and subsequently optimized by MC dose calculations, and differences in proton range for the different image-based simulations were assessed. For the phantoms, the DECT method yielded overall better material segmentation, with >86% of the voxels correctly assigned for the dual-spiral and dual-source scanners, but only 64% for the twin-beam scanner. For the calibration phantom, the dual-spiral scanner yielded range errors below 1.2 mm (0.6% of range), like those yielded by the dual-source scanner (<1.1 mm, <0.5%). With the validation phantom, the dual-spiral scanner yielded errors below 0.8 mm (0.9%), whereas SECT yielded errors up to 1.6 mm (2%). For the patient case, where the absolute truth was missing, proton range differences between DECT and SECT were on average -1.2 ± 1.2 mm (-0.5% ± 0.5%). MC dose calculations were successfully performed on DECT images, where the dual-spiral scanner resulted in media segmentation and range accuracy as good as the dual-source CT. In the patient, the various methods showed relevant range differences.

  18. SU-G-TeP1-06: Fast GPU Framework for Four-Dimensional Monte Carlo in Adaptive Intensity Modulated Proton Therapy (IMPT) for Mobile Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Botas, P; Heidelberg University, Heidelberg; Grassberger, C

Purpose: To demonstrate the feasibility of fast Monte Carlo (MC) treatment planning and verification using four-dimensional CT (4DCT) for adaptive IMPT for lung cancer patients. Methods: A validated GPU MC code, gPMC, has been linked to the patient database at our institution and employed to compute the dose-influence matrices (Dij) on the planning CT (pCT). The pCT is an average over the respiratory motion of the patient. The Dijs and patient structures were fed to the optimizer to calculate a treatment plan. To validate the plan against motion, a 4D dose distribution, averaged over the possible starting phases, is calculated using the 4DCT and a model of the time structure of the delivered spot map. The dose is accumulated using vector maps created by a GPU-accelerated deformable image registration program (DIR) from each phase of the 4DCT to the reference phase using the B-spline method. Calculation of the Dij matrices and the DIR are performed on a cluster, with each field and vector map calculated in parallel. Results: The Dij production takes ~3.5 s per beamlet for 10^6 protons, depending on the energy and the CT size. Generating a plan with 4D simulation of 1000 spots in 4 fields takes approximately 1 h. To test the framework, IMPT plans for 10 lung cancer patients were generated for validation. Differences between the planned and the delivered dose of 19% in dose to some organs at risk and of 1.4%/21.1% in target mean dose/homogeneity with respect to the plan were observed, suggesting potential for improvement if adaptation is considered. Conclusion: A fast MC treatment planning framework has been developed that allows reliable plan design and verification for mobile targets and adaptation of treatment plans. This will significantly impact treatments for lung tumors, as 4D-MC dose calculations can now become part of planning strategies.

  19. SU-E-T-626: Accuracy of Dose Calculation Algorithms in MultiPlan Treatment Planning System in Presence of Heterogeneities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moignier, C; Huet, C; Barraux, V

Purpose: Advanced stereotactic radiotherapy (SRT) treatments require accurate dose calculation for treatment planning, especially for treatment sites involving heterogeneous patient anatomy. The purpose of this study was to evaluate the accuracy of the dose calculation algorithms, Raytracing and Monte Carlo (MC), implemented in the MultiPlan treatment planning system (TPS) in the presence of heterogeneities. Methods: First, the LINAC of a CyberKnife radiotherapy facility was modeled with the PENELOPE MC code. A protocol for the measurement of dose distributions with EBT3 films was established and validated through comparisons between experimental dose distributions and calculated dose distributions obtained with the MultiPlan Raytracing and MC algorithms, as well as with the PENELOPE MC model, for treatments planned with the homogeneous Easycube phantom. Finally, bone and lung inserts were used to set up a heterogeneous Easycube phantom. Treatment plans with 10, 7.5 or 5 mm field sizes were generated in the MultiPlan TPS with different tumor localizations (in the lung and at the lung/bone/soft tissue interface). Experimental dose distributions were compared to the PENELOPE MC and MultiPlan calculations using the gamma-index method. Results: Regarding the experiment in the homogeneous phantom, 100% of the points passed the 3%/3mm tolerance criteria. These criteria include the global error of the method (CT-scan resolution, EBT3 dosimetry, LINAC positioning ...) and were used afterwards to estimate the accuracy of the MultiPlan algorithms in heterogeneous media. Comparison of the dose distributions obtained in the heterogeneous phantom is in progress. Conclusion: This work has led to the development of numerical and experimental dosimetric tools for small-beam dosimetry. The Raytracing and MC algorithms implemented in the MultiPlan TPS were evaluated in heterogeneous media.

  20. TU-AB-BRC-09: Fast Dose-Averaged LET and Biological Dose Calculations for Proton Therapy Using Graphics Cards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, H; Tseung, Chan; Beltran, C

Purpose: To demonstrate fast and accurate Monte Carlo (MC) calculations of proton dose-averaged linear energy transfer (LETd) and biological dose (BD) on a Graphics Processing Unit (GPU) card. Methods: A previously validated GPU-based MC simulation of proton transport was used to rapidly generate LETd distributions for proton treatment plans. Since this MC handles proton-nuclei interactions on an event-by-event basis using a Bertini intranuclear cascade-evaporation model, secondary protons were taken into account. The smaller contributions of secondary neutrons and recoil nuclei were ignored. Recent work has shown that LETd values are sensitive to the scoring method. The GPU-based LETd calculations were verified by comparing with a TOPAS custom scorer that uses tabulated stopping powers, following recommendations by other authors. Comparisons were made for prostate and head-and-neck patients. A python script is used to convert the MC-generated LETd distributions to BD using a variety of published linear-quadratic models, and to export the BD in DICOM format for subsequent evaluation. Results: Very good agreement is obtained between TOPAS and our GPU MC. Given a complex head-and-neck plan with 1 mm voxel spacing, the physical dose, LETd and BD calculations for 10^8 proton histories can be completed in ~5 minutes using an NVIDIA Titan X card. The rapid turnover means that MC feedback can be obtained on dosimetric plan accuracy as well as on BD hotspot locations, particularly in regards to their proximity to critical structures. In our institution, the GPU MC-generated dose, LETd and BD maps are used to assess plan quality for all patients undergoing treatment. Conclusion: Fast and accurate MC-based LETd calculations can be performed on the GPU. The resulting BD maps provide valuable feedback during treatment plan review. Partially funded by Varian Medical Systems.
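
    The scoring-method sensitivity mentioned above concerns exactly how the tally is formed. One common choice, dose-weighted averaging of the step stopping power, is sketched below; this is a generic illustration, not the gPMC or TOPAS scorer.

    ```python
    import numpy as np

    class LETdScorer:
        """Dose-averaged LET tally: LETd = sum(dE * LET) / sum(dE) per voxel,
        accumulated over particle steps (one common scoring choice; results
        depend on this choice, as the paper notes)."""

        def __init__(self, n_voxels):
            self.num = np.zeros(n_voxels)   # running sum of dE * LET
            self.den = np.zeros(n_voxels)   # running sum of dE

        def score_step(self, voxel, dE, stopping_power):
            # the step's stopping power plays the role of its unrestricted LET
            self.num[voxel] += dE * stopping_power
            self.den[voxel] += dE

        def letd(self):
            with np.errstate(invalid="ignore", divide="ignore"):
                return np.where(self.den > 0, self.num / self.den, 0.0)
    ```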

  1. A method for photon beam Monte Carlo multileaf collimator particle transport

    NASA Astrophysics Data System (ADS)

    Siebers, Jeffrey V.; Keall, Paul J.; Kim, Jong Oh; Mohan, Radhe

    2002-09-01

    Monte Carlo (MC) algorithms are recognized as the most accurate methodology for patient dose assessment. For intensity-modulated radiation therapy (IMRT) delivered with dynamic multileaf collimators (DMLCs), accurate dose calculation, even with MC, is challenging. Accurate IMRT MC dose calculations require inclusion of the moving MLC in the MC simulation. Due to its complex geometry, full transport through the MLC can be time-consuming. The aim of this work was to develop an MLC model for photon beam MC IMRT dose computations. The basis of the MC MLC model is that the complex MLC geometry can be separated into simple geometric regions, each of which readily lends itself to simplified radiation transport. For photons, only attenuation and first Compton scatter interactions are considered. The amount of attenuation material an individual particle encounters while traversing the entire MLC is determined by adding the individual amounts from each of the simplified geometric regions. Compton scatter is sampled based upon the total thickness traversed. Pair production and electron interactions (scattering and bremsstrahlung) within the MLC are ignored. The MLC model was tested for 6 MV and 18 MV photon beams by comparing it with measurements and MC simulations that incorporate the full physics and geometry for fields blocked by the MLC, and with measurements for fields with the maximum possible tongue-and-groove and tongue-or-groove effects, for static test cases and for sliding windows of various widths. The MLC model predicts the field size dependence of the MLC leakage radiation within 0.1% of the open-field dose. The entrance dose and beam hardening behind a closed MLC are predicted within +/-1% or 1 mm. Dose undulations due to differences in inter- and intra-leaf leakage are also correctly predicted. The MC MLC model predicts the leaf-edge tongue-and-groove dose effect within +/-1% or 1 mm for 95% of the points compared at 6 MV and 88% of the points compared at 18 MV. The dose through a static leaf tip is also predicted generally within +/-1% or 1 mm. Tests with sliding windows of various widths confirm the accuracy of the MLC model for dynamic delivery and indicate that accounting for a slight leaf position error (0.008 cm for our MLC) will improve the accuracy of the model. The MLC model developed is applicable to both dynamic MLC and segmental MLC IMRT beam delivery and will be useful for patient IMRT dose calculations, pre-treatment verification of IMRT delivery and IMRT portal dose transmission dosimetry.
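
    Added illustration (not part of the record): a sketch of the attenuation step described above, in which the material path is summed over the simplified geometric regions; the attenuation coefficient and helper name are assumptions.

        import numpy as np

        rng = np.random.default_rng()

        def photon_survives_mlc(region_paths_cm, mu_per_cm):
            """Attenuation through the MLC: the material path is the sum of
            the paths through each simplified geometric region; the photon
            passes without interacting with probability exp(-mu * t_total).
            A first Compton scatter would then be sampled from t_total."""
            t_total = float(np.sum(region_paths_cm))
            return rng.random() < np.exp(-mu_per_cm * t_total)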

  2. SHIELD-HIT12A - a Monte Carlo particle transport program for ion therapy research

    NASA Astrophysics Data System (ADS)

    Bassler, N.; Hansen, D. C.; Lühr, A.; Thomsen, B.; Petersen, J. B.; Sobolevsky, N.

    2014-03-01

    Purpose: The Monte Carlo (MC) code SHIELD-HIT simulates the transport of ions through matter. Since SHIELD-HIT08 we have added numerous features that improve speed, usability and the underlying physics, and thereby the user experience. The "-A" fork of SHIELD-HIT also aims to attach SHIELD-HIT to a heavy ion dose optimization algorithm to provide MC-optimized treatment plans that include radiobiology. Methods: SHIELD-HIT12A is written in FORTRAN and carefully retains platform independence. A powerful scoring engine is implemented, scoring relevant quantities such as dose and track-averaged LET. It supports native formats compatible with the heavy ion treatment planning system TRiP. Stopping power files follow the ICRU standard and are generated using the libdEdx library, which allows the user to choose from a multitude of stopping power tables. Results: SHIELD-HIT12A runs on Linux and Windows platforms. In our experience, new users quickly learn to use SHIELD-HIT12A and to set up new geometries. Contrary to previous versions of SHIELD-HIT, the 12A distribution comes with easy-to-use example files and an English manual. A new implementation of Vavilov straggling resulted in a massive reduction of computation time. Scheduled for later release are CT import and photon-electron transport. Conclusions: SHIELD-HIT12A is an interesting alternative ion transport engine. Apart from being a flexible particle therapy research tool, it can also serve as a back end for an MC ion treatment planning system. More information about SHIELD-HIT12A and a demo version can be found at http://www.shieldhit.org.

  3. Towards real-time photon Monte Carlo dose calculation in the cloud

    NASA Astrophysics Data System (ADS)

    Ziegenhein, Peter; Kozin, Igor N.; Kamerling, Cornelis Ph; Oelfke, Uwe

    2017-06-01

    Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in the case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets, with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and a liver case to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.
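
    Added illustration (not part of the record): the paper encrypts patient data on-the-fly at the processor register level; purely to illustrate authenticated AES encryption of a data block, here is a sketch using the third-party Python cryptography package.

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=256)   # shared client/server key
        aesgcm = AESGCM(key)
        nonce = os.urandom(12)                      # unique per message
        ciphertext = aesgcm.encrypt(nonce, b"<patient CT block>", None)
        assert aesgcm.decrypt(nonce, ciphertext, None) == b"<patient CT block>"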

  4. Towards real-time photon Monte Carlo dose calculation in the cloud.

    PubMed

    Ziegenhein, Peter; Kozin, Igor N; Kamerling, Cornelis Ph; Oelfke, Uwe

    2017-06-07

    Near real-time application of Monte Carlo (MC) dose calculation in clinic and research is hindered by the long computational runtimes of established software. Currently, fast MC software solutions are available utilising accelerators such as graphical processing units (GPUs) or clusters based on central processing units (CPUs). Both platforms are expensive in terms of purchase costs and maintenance and, in the case of the GPU, provide only limited scalability. In this work we propose a cloud-based MC solution, which offers high scalability of accurate photon dose calculations. The MC simulations run on a private virtual supercomputer that is formed in the cloud. Computational resources can be provisioned dynamically at low cost without upfront investment in expensive hardware. A client-server software solution has been developed which controls the simulations and transports data to and from the cloud efficiently and securely. The client application integrates seamlessly into a treatment planning system. It runs the MC simulation workflow automatically and securely exchanges simulation data with the server side application that controls the virtual supercomputer. Advanced encryption standards were used to add an additional security layer, which encrypts and decrypts patient data on-the-fly at the processor register level. We show that our cloud-based MC framework enables near real-time dose computation. It delivers excellent linear scaling for high-resolution datasets, with absolute runtimes of 1.1 seconds to 10.9 seconds for simulating a clinical prostate and a liver case to 1% statistical uncertainty. The computation runtimes include the transportation of data to and from the cloud as well as process scheduling and synchronisation overhead. Cloud-based MC simulations offer a fast, affordable and easily accessible alternative for near real-time accurate dose calculations to currently used GPU or cluster solutions.

  5. Commissioning and Validation of the First Monte Carlo Based Dose Calculation Algorithm Commercial Treatment Planning System in Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Hernandez-Bojorquez, M.

    2010-12-07

    This work presents the beam data commissioning and dose calculation validation of the first Monte Carlo (MC) based treatment planning system (TPS) installed in Mexico. According to the manufacturer specifications, the beam data commissioning needed for this model includes: several in-air and water profiles, depth dose curves, head-scatter factors and output factors (6x6, 12x12, 18x18, 24x24, 42x42, 60x60, 80x80 and 100x100 mm{sup 2}). Radiographic and radiochromic films, diode and ionization chambers were used for data acquisition. MC dose calculations in a water phantom were used to validate the MC simulations through comparison with measured data. Gamma index criteria of 2%/2 mm were used to evaluate the accuracy of the MC calculations. The MC calculated data show an excellent agreement for field sizes from 18x18 to 100x100 mm{sup 2}. Gamma analysis shows that, on average, 95% and 100% of the data pass the gamma index criteria for these fields, respectively. For smaller fields (12x12 and 6x6 mm{sup 2}) only 92% of the data meet the criteria. Total scatter factors show good agreement (<2.6%) between MC calculated and measured data, except for the smaller fields (12x12 and 6x6 mm{sup 2}), which show an error of 4.7%. MC dose calculations are accurate and precise for clinical treatment planning down to a field size of 18x18 mm{sup 2}. Special care must be taken for smaller fields.

  6. Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code

    NASA Technical Reports Server (NTRS)

    Yamakov, Vesselin I.

    2016-01-01

    This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, surface adsorption, etc., which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.
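
    Added illustration (not part of the record): the grand canonical moves at the heart of such a code accept a trial particle insertion with the standard Metropolis probability; the sketch below uses hypothetical names and assumes the energy change dU is precomputed.

        import numpy as np

        rng = np.random.default_rng()

        def accept_gcmc_insertion(dU, N, V, mu, beta, lambda3):
            """Metropolis acceptance for inserting one particle in the grand
            canonical ensemble: dU is the potential-energy change, mu the
            chemical potential, lambda3 the cubed thermal de Broglie
            wavelength, N the current particle count, V the volume."""
            acc = V / (lambda3 * (N + 1)) * np.exp(beta * (mu - dU))
            return rng.random() < min(1.0, acc)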

  7. Accuracy and convergence of coupled finite-volume/Monte Carlo codes for plasma edge simulations of nuclear fusion reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghoos, K., E-mail: kristel.ghoos@kuleuven.be; Dekeyser, W.; Samaey, G.

    2016-10-01

    The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise, and Robbins-Monro. Practical procedures to estimate the errors in complex codes are also proposed. Moreover, first results with more complex models show that an order of magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.
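
    Added illustration (not part of the record): a minimal sketch of Robbins-Monro averaging for a noisy fixed-point coupling x = F(x); the diminishing 1/n step size that damps the MC noise is the essential ingredient, and all names are hypothetical.

        def robbins_monro(noisy_F, x0, n_iter):
            """Robbins-Monro coupling: relax a noisy fixed-point map
            x = F(x) with a 1/n step so the MC noise is averaged out."""
            x = x0
            for n in range(1, n_iter + 1):
                x += (noisy_F(x) - x) / n
            return x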

  8. Simulation tools for scattering corrections in spectrally resolved x-ray computed tomography using McXtrace

    NASA Astrophysics Data System (ADS)

    Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer

    2018-03-01

    Spectral computed tomography is an emerging imaging method that involves using recently developed energy-discriminating photon-counting detectors (PCDs). This technique enables measurements at isolated high-energy ranges, in which the dominant interaction between the x-rays and the sample is incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can utilize a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy-resolved radiation being scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a CdTe single PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain a more accurate material discrimination, especially in the high-energy range, where incoherent scattering interactions become prevalent (>50 keV).

  9. Analysis of light incident location and detector position in early diagnosis of knee osteoarthritis by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Chen, Yanping; Chen, Yisha; Yan, Huangping; Wang, Xiaoling

    2017-01-01

    Early detection of knee osteoarthritis (KOA) is meaningful to delay or prevent the onset of osteoarthritis. Given the structural complexity of the knee joint, the locations of light incidence and detection are extremely important in optical inspection. In this paper, the propagation of 780-nm near-infrared photons in a three-dimensional knee joint model is simulated by the Monte Carlo (MC) method. Six light incident locations are chosen in total to analyze the influence of the incident and detecting locations on the number of detected signal photons and the signal-to-noise ratio (SNR). First, a three-dimensional photon propagation model of the knee joint is reconstructed based on CT images. Then, MC simulation is performed to study the propagation of photons in the three-dimensional knee joint model. Photons which finally migrate out of the knee joint surface are numerically analyzed. By analyzing the number of signal photons and the SNR for the six given incident locations, the optimal incident and detecting location is defined. Finally, a series of phantom experiments is conducted to verify the simulation results. According to the simulation and phantom experiment results, the best incident location is near the right side of the meniscus at the rear end of the left knee joint, with the detector placed correspondingly near the patella.

  10. TiOx deposited by magnetron sputtering: a joint modelling and experimental study

    NASA Astrophysics Data System (ADS)

    Tonneau, R.; Moskovkin, P.; Pflug, A.; Lucas, S.

    2018-05-01

    This paper presents a 3D multiscale simulation approach to model magnetron reactive sputter deposition of TiOx⩽2 at various O2 inlets and its validation against experimental results. The simulation first involves the transport of sputtered material in a vacuum chamber by means of a three-dimensional direct simulation Monte Carlo (DSMC) technique. Second, the film growth at different positions on a 3D substrate is simulated using a kinetic Monte Carlo (kMC) method. When simulating the transport of species in the chamber, wall chemistry reactions are taken into account in order to obtain the proper content of the reactive species in the volume. Angular and energy distributions of particles are extracted from the DSMC and used for film growth modelling by kMC. Along with the simulation, experimental deposition of TiOx coatings on silicon samples placed at different positions on a curved sample holder was performed. The experimental results are in agreement with the simulated ones. For a given coater, the plasma-phase hysteresis behaviour, film composition and film morphology are predicted. The methodology can be applied to any coater and any film. This paves the way for a virtual coater that allows a user to predict the composition and morphology of films deposited in silico.

  11. Forward and Inverse Modeling of Self-potential. A Tomography of Groundwater Flow and Comparison Between Deterministic and Stochastic Inversion Methods

    NASA Astrophysics Data System (ADS)

    Quintero-Chavarria, E.; Ochoa Gutierrez, L. H.

    2016-12-01

    Applications of the self-potential method in the fields of hydrogeology and environmental sciences have seen significant developments during the last two decades, with strong use in the identification of groundwater flows. Although only a few authors deal with the forward problem's solution, especially in the geophysics literature, different inversion procedures are currently being developed; in most cases, however, they are compared with unconventional groundwater velocity fields and restricted to structured meshes. This research solves the forward problem based on the finite element method, using St. Venant's principle to transform a point dipole, which is the field generated by a single vector, into a distribution of electrical monopoles. Then, two simple aquifer models were generated with specific boundary conditions, and head potentials, velocity fields and electric potentials in the medium were computed. With the model's surface electric potential, the inverse problem is solved to retrieve the source of electric potential (the vector field associated with groundwater flow) using deterministic and stochastic approaches. The first approach was carried out by implementing a Tikhonov regularization with a stabilized operator adapted to the finite element mesh, while for the second a hierarchical Bayesian model based on Markov chain Monte Carlo (McMC) and Markov random fields (MRF) was constructed. For all implemented methods, the results of the direct and inverse models were contrasted in two ways: 1) shape and distribution of the vector field, and 2) histogram of magnitudes. Finally, it was concluded that inversion procedures are improved when the velocity field's behavior is considered; thus, the deterministic method is more suitable for unconfined aquifers than confined ones. McMC has restricted applications and requires a lot of information (particularly for potential fields), while MRF has a remarkable response, especially when dealing with confined aquifers.
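
    Added illustration (not part of the record): a minimal dense-matrix sketch of the Tikhonov-regularized inversion mentioned above; the regularization operator L and parameter alpha are assumptions, not the paper's mesh-adapted operator.

        import numpy as np

        def tikhonov(A, b, alpha, L=None):
            """Tikhonov-regularized least squares:
            minimize ||A x - b||^2 + alpha^2 ||L x||^2 (L defaults to I),
            solved via the normal equations."""
            n = A.shape[1]
            L = np.eye(n) if L is None else L
            return np.linalg.solve(A.T @ A + alpha**2 * (L.T @ L), A.T @ b)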

  12. SU-E-T-404: Evaluation of the Effect of Spine Hardware for CyberKnife Spinal Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, J; Zhang, Y; Zheng, Y

    2015-06-15

    Purpose: Spine hardware made of high-Z materials such as titanium has the potential to affect the dose distribution around the metal rods in CyberKnife spinal stereotactic radiosurgery (SRS) treatments. The purpose of this work was to evaluate the magnitude of such an effect retrospectively for clinical CyberKnife plans. Methods: The dose calculation was performed within the MultiPlan treatment planning system using the ray tracing (RT) and Monte Carlo (MC) methods. A custom density model was created by extending the CT-to-Density table to the titanium density of 4.5 g/cm3 at the CT number of 4095. To understand the dose perturbation caused by the titanium rod, a simple beam setup (7.5 mm IRIS collimator) was used to irradiate a mimicked 5 mm rod with an overridden high density. Five patient spinal SRS cases were found chronologically from 2010 to 2015 in our institution. For each case, the hardware was contoured manually. The original plan was re-calculated using both RT and MC methods, with and without rod density override, without changing the clinical beam parameters. Results: The simple beam irradiation shows that there is a 10% dose increase at the interface because of electron backscattering and a 7% decrease behind the rod because of photon attenuation. For the actual clinical plans, the iso-dose lines and DVHs are almost identical (<2%) for calculations with and without density override for both the RT and MC methods. However, there is a difference of more than 10% for D90 between the RT and MC methods. Conclusion: Although the dose perturbation around the metal rods can be as large as 10% for a single beam irradiation, for clinical treatments with complex beam composition the effect of spinal hardware on the PTV and spinal dose is minimal. As such, the MC dose algorithm without rod density override is acceptable for CyberKnife spinal SRS.
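
    Added illustration (not part of the record): extending a CT-to-density lookup so that CT number 4095 maps to titanium (4.5 g/cm3), as described above; the intermediate anchor points are illustrative, not the clinical table.

        import numpy as np

        # CT-to-density table extended to titanium: CT number 4095 -> 4.5 g/cm3.
        ct_numbers = np.array([-1000.0, 0.0, 1500.0, 4095.0])
        densities  = np.array([ 0.001,  1.0, 1.85,   4.5  ])   # g/cm3

        def density_from_ct(ct):
            """Linear interpolation between the table anchor points."""
            return np.interp(ct, ct_numbers, densities)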

  13. Application of the Monte Carlo method to estimate doses due to neutron activation of different materials in a nuclear reactor

    NASA Astrophysics Data System (ADS)

    Ródenas, José

    2017-11-01

    All materials exposed to some neutron flux can be activated, regardless of the kind of neutron source. In this study, a nuclear reactor has been considered as the neutron source. In particular, the activation of control rods in a BWR is studied to obtain the doses produced around the storage pool for irradiated fuel of the plant when control rods are withdrawn from the reactor and placed into this pool. It is very important to calculate these doses because they can affect plant workers in the area. The MCNP code, based on the Monte Carlo method, has been applied to simulate the activation reactions produced in the control rods inserted into the reactor. The obtained activities are used as input to another MC model to estimate the doses they produce. The comparison of simulation results with experimental measurements allows the validation of the developed models. The developed MC models have also been applied to simulate the activation of other materials, such as components of a stainless steel sample introduced into a training reactor. These models, once validated, can be applied to other situations and materials where a neutron flux can be found, not only in nuclear reactors: for instance, activation analysis with an Am-Be source, neutrography techniques in both medical applications and non-destructive analysis of materials, civil engineering applications using a Troxler gauge, analysis of materials in the decommissioning of nuclear power plants, etc.

  14. Improved Hybrid Modeling of Spent Fuel Storage Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bibber, Karl van

    This work developed a new computational method for improving the ability to calculate the neutron flux in deep-penetration radiation shielding problems that contain areas with strong streaming. The "gold standard" method for radiation transport is Monte Carlo (MC), as it samples the physics exactly and requires few approximations. Historically, however, MC was not useful for shielding problems because of the computational challenge of following particles through dense shields. Instead, deterministic methods, which are superior in terms of computational effort for these problem types but are not as accurate, were used. Hybrid methods, which use deterministic solutions to improve MC calculations through a process called variance reduction, can make it tractable, in terms of computational time and resource use, to apply MC to deep-penetration shielding. Perhaps the most widespread and accessible of these methods are the Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) methods. For problems containing strong anisotropies, such as power plants with pipes through walls, spent fuel cask arrays, active interrogation, and locations with small air gaps or plates embedded in water or concrete, hybrid methods are still insufficiently accurate. In this work, a new method for generating variance reduction parameters for strongly anisotropic, deep-penetration radiation shielding studies was developed. This method generates an alternate form of the adjoint scalar flux quantity, Φ_Ω, which is used by both CADIS and FW-CADIS to generate variance reduction parameters for local and global response functions, respectively. The new method, called CADIS-Ω, was implemented in the Denovo/ADVANTG software. Results indicate that the flux generated by CADIS-Ω incorporates localized angular anisotropies in the flux more effectively than standard methods. CADIS-Ω outperformed CADIS in several test problems. This initial work indicates that CADIS-Ω may be highly useful for shielding problems with strong angular anisotropies, benefiting the public through increased accuracy at lower computational effort for many problems of energy, security, and economic importance.
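
    Added illustration (not part of the record): a sketch of how CADIS derives a biased source and consistent weight-window targets from a deterministic adjoint (importance) flux; array names are hypothetical.

        import numpy as np

        def cadis(q, phi_adj):
            """CADIS from a deterministic adjoint flux phi_adj: returns the
            importance-biased source and the consistent weight-window
            targets for the MC run."""
            R = np.sum(q * phi_adj)     # adjoint-weighted response estimate
            q_biased = q * phi_adj / R  # biased source pdf (sums to 1 if q does)
            w_target = R / phi_adj      # statistical weight assigned at birth
            return q_biased, w_target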

  15. Quantum Monte Carlo Simulation of condensed van der Waals Systems

    NASA Astrophysics Data System (ADS)

    Benali, Anouar; Shulenburger, Luke; Romero, Nichols A.; Kim, Jeongnim; Anatole von Lilienfeld, O.

    2012-02-01

    Van der Waals forces are as ubiquitous as they are infamous. While post-Hartree-Fock methods enable accurate estimates of these forces in molecules and clusters, they remain elusive for many-electron condensed phase systems. We present Quantum Monte Carlo [1,2] results for condensed van der Waals systems. Interatomic many-body contributions to cohesive energies and bulk moduli will be discussed. Numerical evidence is presented for crystals of rare gas atoms and compared to experiments and other methods [3]. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DoE's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000. [1] J. Kim, K. Esler, J. McMinis and D. Ceperley, SciDAC 2010, J. of Physics: Conference Series, Chattanooga, Tennessee, July 11 2011. [2] QMCPACK simulation suite, http://qmcpack.cmscc.org (unpublished). [3] O. A. von Lilienfeld and A. Tkatchenko, J. Chem. Phys. 132, 234109 (2010).

  16. Solar Proton Transport within an ICRU Sphere Surrounded by a Complex Shield: Combinatorial Geometry

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

    The 3DHZETRN code, with improved neutron and light ion (Z ≤ 2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl

    Standard computational methods used to take the Pauli exclusion principle into account in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that allow electron scattering on phonons or impurities to be accounted for correctly. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
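
    Added illustration (not part of the record): the usual way to respect the Pauli exclusion principle in such MC transport codes is a rejection step on the occupation of the proposed final state; a minimal sketch follows, with hypothetical names.

        import numpy as np

        rng = np.random.default_rng()

        def pauli_accept(f_final):
            """Pauli blocking by rejection: a proposed scattering event is
            kept with probability (1 - f), where f is the occupation of the
            final state; otherwise it is treated as a self-scattering."""
            return rng.random() < 1.0 - f_final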

  18. Dosimetric comparison of Acuros XB deterministic radiation transport method with Monte Carlo and model-based convolution methods in heterogeneous media

    PubMed Central

    Han, Tao; Mikell, Justin K.; Salehpour, Mohammad; Mourtada, Firas

    2011-01-01

    Purpose: The deterministic Acuros XB (AXB) algorithm was recently implemented in the Eclipse treatment planning system. The goal of this study was to compare AXB performance to Monte Carlo (MC) and two standard clinical convolution methods: the anisotropic analytical algorithm (AAA) and the collapsed-cone convolution (CCC) method. Methods: Homogeneous water and multilayer slab virtual phantoms were used for this study. The multilayer slab phantom had three different materials, representing soft tissue, bone, and lung. Depth dose and lateral dose profiles from AXB v10 in Eclipse were compared to AAA v10 in Eclipse, CCC in Pinnacle3, and EGSnrc MC simulations for 6 and 18 MV photon beams with open fields for both phantoms. In order to further reveal the dosimetric differences between AXB and AAA or CCC, three-dimensional (3D) gamma index analyses were conducted in slab regions and subregions defined by AAPM Task Group 53. Results: The AXB calculations were found to be closer to MC than both AAA and CCC for all the investigated plans, especially in the bone and lung regions. The average differences of the depth dose profiles between MC and AXB, AAA, or CCC were within 1.1, 4.4, and 2.2%, respectively, for all fields and energies. More specifically, those differences in the bone region were up to 1.1, 6.4, and 1.6%; in the lung region they were up to 0.9, 11.6, and 4.5% for AXB, AAA, and CCC, respectively. AXB was also found to have better dose predictions than AAA and CCC at the tissue interfaces where backscatter occurs. 3D gamma index analyses (percent of dose voxels passing a 2%/2 mm criterion) showed that the dose differences between AAA and AXB are significant (under 60% passed) in the bone region for all field sizes at 6 MV and in the lung region for most field sizes at both energies. The difference between AXB and CCC was generally small (over 90% passed) except in the lung region for the 18 MV 10 × 10 cm2 field (over 26% passed) and in the bone region for the 5 × 5 and 10 × 10 cm2 fields (over 64% passed). With the criterion relaxed to 5%/2 mm, the pass rates were over 90% for both AAA and CCC relative to AXB for all energies and fields, with the exception of the AAA 18 MV 2.5 × 2.5 cm2 field, which still did not pass. Conclusions: In heterogeneous media, the AXB dose prediction ability appears to be comparable to MC and superior to current clinical convolution methods. The dose differences between AXB and AAA or CCC are mainly in the bone, lung, and interface regions. The spatial distributions of these differences depend on the field sizes and energies. PMID:21776802

  19. Pinhole X-ray fluorescence imaging of gadolinium and gold nanoparticles using polychromatic X-rays: a Monte Carlo study

    PubMed Central

    Jung, Seongmoon; Sung, Wonmo; Ye, Sung-Joon

    2017-01-01

    This work aims to develop a Monte Carlo (MC) model for pinhole K-shell X-ray fluorescence (XRF) imaging of metal nanoparticles using polychromatic X-rays. The MC model consisted of two-dimensional (2D) position-sensitive detectors and fan-beam X-rays used to stimulate the emission of XRF photons from gadolinium (Gd) or gold (Au) nanoparticles. Four cylindrical columns containing different concentrations of nanoparticles ranging from 0.01% to 0.09% by weight (wt%) were placed in a 5 cm diameter cylindrical water phantom. The images of the columns had detectable contrast-to-noise ratios (CNRs) of 5.7 and 4.3 for 0.01 wt% Gd and for 0.03 wt% Au, respectively. Higher concentrations of nanoparticles yielded higher CNR. For 1×10^11 incident particles, the radiation dose to the phantom was 19.9 mGy for 110 kVp X-rays (Gd imaging) and 26.1 mGy for 140 kVp X-rays (Au imaging). The pinhole XRF MC model can acquire direct 2D slice images of the object without image reconstruction. The MC model demonstrated that the pinhole XRF imaging system could be a potential bioimaging modality for nanomedicine. PMID:28860750
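
    Added illustration (not part of the record): one common contrast-to-noise ratio definition consistent with the values quoted above; whether the authors used exactly this form is an assumption.

        import numpy as np

        def cnr(roi, background):
            """Contrast-to-noise ratio: ROI/background mean difference
            over the background standard deviation."""
            return abs(roi.mean() - background.mean()) / background.std()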

  20. Theoretical study of the ammonia nitridation rate on an Fe (100) surface: a combined density functional theory and kinetic Monte Carlo study.

    PubMed

    Yeo, Sang Chul; Lo, Yu Chieh; Li, Ju; Lee, Hyuck Mo

    2014-10-07

    Ammonia (NH3) nitridation on an Fe surface was studied by combining density functional theory (DFT) and kinetic Monte Carlo (kMC) calculations. A DFT calculation was performed to obtain the energy barriers (Eb) of the relevant elementary processes. The full mechanism of the exact reaction path was divided into five steps (adsorption, dissociation, surface migration, penetration, and diffusion) on an Fe (100) surface pre-covered with nitrogen. The energy barrier (Eb) depended on the N surface coverage. The DFT results were subsequently employed as a database for the kMC simulations. We then evaluated the NH3 nitridation rate on the N pre-covered Fe surface. To determine the conditions necessary for a rapid NH3 nitridation rate, eight reaction events were considered in the kMC simulations: adsorption, desorption, dissociation, reverse dissociation, surface migration, penetration, reverse penetration, and diffusion. This study provides a real-time-scale simulation of NH3 nitridation influenced by nitrogen surface coverage that allowed us to theoretically determine a nitrogen coverage (0.56 ML) suitable for rapid NH3 nitridation. In this way, we were able to reveal the coverage dependence of the nitridation reaction using the combined DFT and kMC simulations.
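
    Added illustration (not part of the record): one rejection-free kMC step built on Arrhenius rates from DFT barriers, as in the workflow above; the attempt frequency nu is a typical assumed value, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng()
        KB_EV = 8.617333e-5   # Boltzmann constant in eV/K

        def kmc_step(barriers_eV, T, nu=1e13):
            """One rejection-free kMC step: Arrhenius rates from the
            barriers, an event chosen proportionally to its rate, and the
            clock advanced by an exponential deviate."""
            rates = nu * np.exp(-np.asarray(barriers_eV) / (KB_EV * T))
            k_tot = rates.sum()
            event = int(np.searchsorted(np.cumsum(rates), rng.random() * k_tot))
            dt = -np.log(rng.random()) / k_tot
            return event, dt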

  1. The Ultimate Monte Carlo: Studying Cross-Sections With Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Wilson, Thomas L.

    2007-01-01

    The high-energy physics community has been discussing for years the need to bring together the three principal disciplines that study hadron cross-section physics - ground-based accelerators, cosmic-ray experiments in space, and air shower research. Only recently have NASA investigators begun discussing the use of space-borne cosmic-ray payloads to bridge the gap between accelerator physics and air shower work using cosmic-ray measurements. The common tool used in these three realms of high-energy hadron physics is the Monte Carlo (MC). Yet the obvious has not been considered - using a single MC for simulating the entire relativistic energy range (GeV to EeV). The task is daunting due to large uncertainties in accelerator, space, and atmospheric cascade measurements. These include inclusive versus exclusive cross-section measurements, primary composition, interaction dynamics, and possible new physics beyond the standard model. However, the discussion of a common tool or ultimate MC might be the very thing that could begin to unify these independent groups into a common purpose. The Offline ALICE concept of a Virtual MC at CERN's Large Hadron Collider (LHC) will be discussed as a rudimentary beginning of this idea, and as a possible forum for carrying it forward in the future as LHC data emerges.

  2. SU-F-BRD-16: Under Dose Regions Recalculated by Monte Carlo Cannot Predict the Local Failure for NSCLC Patients Treated with SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H; Cherian, S; Stephans, K

    2014-06-15

    Purpose: To investigate whether Monte Carlo (MC) recalculated dose distributions can predict the geometric location of recurrence for non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods: Thirty NSCLC patients with local recurrence were retrospectively selected for this study. The recurrent gross target volumes (rGTVs) were delineated on the follow-up CT/PET images and then rigidly transferred via image fusion to the original planning CTs. Failure pattern was defined according to the overlap between the rGTV and the planning GTV (pGTV) as: (a) in-field failure (≥80%), (b) marginal failure (20%–80%), and (c) out-of-field failure (≤20%). All clinical plans were initially calculated with pencil beam (PB), with or without heterogeneity correction depending on protocol. These plans were recalculated with MC with heterogeneity correction. Because of non-uniform dose distributions in the rGTVs, the rGTVs were further divided into four regions: inside the pGTV (GTVin), inside the PTV (PTVin), outside the pGTV (GTVout), and outside the PTV (PTVout). The mean doses to these regions were reported and analyzed separately. Results: Among the 30 patients, 10 had in-field recurrences, 15 marginal, and 5 out-of-field failures. With MC calculations, D95 and D99 of the PTV were reduced by (10.6 ± 7.4)% and (11.7 ± 7.9)%. The average MC-calculated mean doses of GTVin, GTVout, PTVin and PTVout were 48.2 ± 5.3 Gy, 48.2 ± 5.5 Gy, 46.3 ± 6.2 Gy and 46.6 ± 5.6 Gy, respectively. No significant dose differences between GTVin and GTVout (p=0.65), or PTVin and PTVout (p=0.19), were observed using the paired Student's t-test. Conclusion: Although the PB calculations underestimated the tumor target doses, the geometric location of the recurrence did not correlate with the mean doses of the subsections of the recurrent GTV. Underdose regions recalculated by MC cannot predict local failure for NSCLC patients treated with SBRT.
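
    Added illustration (not part of the record): the failure-pattern classification defined above, expressed directly in code; the function name is hypothetical.

        def failure_pattern(overlap_fraction):
            """Classify a recurrence by the rGTV fraction overlapping the
            pGTV, following the thresholds in the abstract."""
            if overlap_fraction >= 0.80:
                return "in-field"
            if overlap_fraction > 0.20:
                return "marginal"
            return "out-of-field"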

  3. Monte Carlo-based QA for IMRT of head and neck cancers

    NASA Astrophysics Data System (ADS)

    Tang, F.; Sham, J.; Ma, C.-M.; Li, J.-S.

    2007-06-01

    It is well known that the presence of a large air cavity in a dense medium (or patient) introduces significant electronic disequilibrium when irradiated with a megavoltage X-ray field. This condition may be worsened by the possible use of tiny beamlets in intensity-modulated radiation therapy (IMRT). Commercial treatment planning systems (TPSs), in particular those based on the pencil-beam method, do not provide accurate dose computation for the lungs and other cavity-laden body sites such as the head and neck. In this paper we present the use of the Monte Carlo (MC) technique for dose re-calculation of IMRT of head and neck cancers. In our clinic, a turn-key software system is set up for MC calculation and comparison with TPS-calculated treatment plans as part of the quality assurance (QA) programme for IMRT delivery. A set of 10 off-the-shelf PCs is employed as the MC calculation engine, with treatment plan parameters imported from the TPS via a graphical user interface (GUI) which also provides a platform for launching remote MC simulation and subsequent dose comparison with the TPS. The TPS-segmented intensity maps are used as input for the simulation, hence skipping the time-consuming simulation of the multi-leaf collimator (MLC). The primary objective of this approach is to assess the accuracy of the TPS calculations in the presence of air cavities in the head and neck, whereas the accuracy of leaf segmentation is verified by fluence measurement using a fluoroscopic camera-based imaging device. This measurement can also validate the correct transfer of intensity maps to the record-and-verify system. Comparisons between TPS and MC calculations of 6 MV IMRT for typical head and neck treatments reveal regional consistency in dose distribution except at and around the sinuses, where our pencil-beam-based TPS sometimes over-predicts the dose by up to 10%, depending on the size of the cavities. In addition, dose re-buildup of up to 4% is observed at the posterior nasopharyngeal mucosa for some treatments with heavily-weighted anterior fields.

  4. SU-G-IeP2-04: Dosimetric Accuracy of a Monte Carlo-Based Tool for Cone-Beam CT Organ Dose Calculation: Validation Against OSL and XRQA2 Film Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chesneau, H; Lazaro, D; Blideanu, V

    Purpose: The intensive use of Cone-Beam Computed Tomography (CBCT) during radiotherapy treatments raises questions about the dose to healthy tissues delivered during image acquisitions. We therefore developed a Monte Carlo (MC)-based tool to predict the doses to organs delivered by the Elekta XVI kV-CBCT. This work aims at assessing the dosimetric accuracy of the MC tool in all tissue types. Methods: The kV-CBCT MC model was developed using the PENELOPE code. The beam properties were validated against measured lateral and depth dose profiles in water, and energy spectra measured with a CdTe detector. The accuracy of the CBCT simulator then required verification under clinical conditions. For this, we compared calculated and experimental dose values obtained with OSL nanoDots and XRQA2 films inserted in CIRS anthropomorphic phantoms (male, female, and 5-year-old child). Measurements were performed at different locations, including bone and lung structures, and for several acquisition protocols: lung, head-and-neck, and pelvis. OSL and film measurements were corrected when possible for energy dependence, by taking into account spectral variations between calibration and measurement conditions. Results: Comparisons between measured and MC dose values are summarized in Table 1. A mean difference of 8.6% was achieved for OSLs when the energy correction was applied, and 89.3% of the 84 dose points were within uncertainty intervals, including those in bones and lungs. Results with XRQA2 are not as good, because incomplete information about electronic equilibrium in the film layers hampered the application of a simple energy correction procedure. Furthermore, measured and calculated doses (Fig. 1) are in agreement with the literature. Conclusion: The MC-based tool developed was validated with an extensive set of measurements and enables accurate organ dose calculation. It can now be used to compute and report doses to organs for clinical cases, and also to drive strategies to optimize imaging protocols.

  5. Monte Carlo simulations within avalanche rescue

    NASA Astrophysics Data System (ADS)

    Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg

    2016-04-01

    Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies, such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation on a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents, it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. Here we present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on the search area, and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
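
    Added illustration (not part of the record): a toy MC estimate of a probing depth that reaches a chosen fraction of victims; the lognormal burial-depth distribution is a placeholder assumption, not the study's input.

        import numpy as np

        rng = np.random.default_rng()

        def probing_depth(quantile=0.9, n=100_000):
            """Sample burial depths (metres) and return the depth that a
            probe must reach to cover the requested fraction of cases."""
            depths = rng.lognormal(mean=0.0, sigma=0.5, size=n)
            return float(np.quantile(depths, quantile))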

  6. "First-principles" kinetic Monte Carlo simulations revisited: CO oxidation over RuO2 (110).

    PubMed

    Hess, Franziska; Farkas, Attila; Seitsonen, Ari P; Over, Herbert

    2012-03-15

    First principles-based kinetic Monte Carlo (kMC) simulations are performed for the CO oxidation on RuO(2) (110) under steady-state reaction conditions. The simulations include a set of elementary reaction steps with activation energies taken from three different ab initio density functional theory studies. Critical comparison of the simulation results reveals that already small variations in the activation energies lead to distinctly different reaction scenarios on the surface, even to the point where the dominating elementary reaction step is substituted by another one. For a critical assessment of the chosen energy parameters, it is not sufficient to compare kMC simulations only to the experimental turnover frequency (TOF) as a function of the reactant feed ratio. More appropriate benchmarks for kMC simulations are the actual distribution of reactants on the catalyst's surface during steady-state reaction, as determined by in situ infrared spectroscopy and in situ scanning tunneling microscopy, and the temperature dependence of TOF in the form of Arrhenius plots. Copyright © 2012 Wiley Periodicals, Inc.

  7. Monte Carlo and discrete-ordinate simulations of spectral radiances in a coupled air-tissue system.

    PubMed

    Hestenes, Kjersti; Nielsen, Kristian P; Zhao, Lu; Stamnes, Jakob J; Stamnes, Knut

    2007-04-20

    We perform a detailed comparison study of Monte Carlo (MC) simulations and discrete-ordinate radiative-transfer (DISORT) calculations of spectral radiances in a 1D coupled air-tissue (CAT) system consisting of horizontal plane-parallel layers. The MC and DISORT models have the same physical basis, including coupling between the air and the tissue, and we use the same air and tissue input parameters for both codes. We find excellent agreement between radiances obtained with the two codes, both above and in the tissue. Our tests cover typical optical properties of skin tissue at the 280, 540, and 650 nm wavelengths. The normalized volume scattering function for internal structures in the skin is represented by the one-parameter Henyey-Greenstein function for large particles and the Rayleigh scattering function for small particles. The CAT-DISORT code is found to be approximately 1000 times faster than the CAT-MC code. We also show that the spectral radiance field is strongly dependent on the inherent optical properties of the skin tissue.
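
    Added illustration (not part of the record): standard inverse-CDF sampling of the scattering-angle cosine from the one-parameter Henyey-Greenstein phase function used for the large-particle component above.

        import numpy as np

        rng = np.random.default_rng()

        def sample_hg_costheta(g):
            """Sample cos(theta) from the Henyey-Greenstein phase function
            with asymmetry parameter g (g = 0 reduces to isotropic)."""
            xi = rng.random()
            if abs(g) < 1e-6:                       # isotropic limit
                return 2.0 * xi - 1.0
            s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
            return (1.0 + g * g - s * s) / (2.0 * g)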

  8. An interface for simulating radiative transfer in and around volcanic plumes with the Monte Carlo radiative transfer model McArtim

    USGS Publications Warehouse

    Kern, Christoph

    2016-03-23

    This report describes two software tools that, when used as front ends for the three-dimensional backward Monte Carlo atmospheric-radiative-transfer model (RTM) McArtim, facilitate the generation of lookup tables of volcanic-plume optical-transmittance characteristics in the ultraviolet/visible-spectral region. In particular, the differential optical depth and derivatives thereof (that is, weighting functions), with regard to a change in SO2 column density or aerosol optical thickness, can be simulated for a specific measurement geometry and a representative range of plume conditions. These tables are required for the retrieval of SO2 column density in volcanic plumes, using the simulated radiative-transfer/differential optical-absorption spectroscopic (SRT-DOAS) approach outlined by Kern and others (2012). This report, together with the software tools published online, is intended to make this sophisticated SRT-DOAS technique available to volcanologists and gas geochemists in an operational environment, without the need for an in-depth treatment of the underlying principles or the low-level interface of the RTM McArtim.

  9. Lens implementation on the GATE Monte Carlo toolkit for optical imaging simulation.

    PubMed

    Kang, Han Gyu; Song, Seong Hyun; Han, Young Been; Kim, Kyeong Min; Hong, Seong Jong

    2018-02-01

    Optical imaging techniques are widely used for in vivo preclinical studies, and it is well known that the Geant4 Application for Emission Tomography (GATE) can be employed for the Monte Carlo (MC) modeling of light transport inside heterogeneous tissues. However, the GATE MC toolkit is limited in that it does not yet include an optical lens implementation, even though this is required for a more realistic optical imaging simulation. We describe our implementation of a biconvex lens in the GATE MC toolkit to improve both the sensitivity and spatial resolution for optical imaging simulation. The lens implemented in GATE was validated against ZEMAX optical simulation using a US Air Force 1951 resolution target. The ray diagrams and the charge-coupled device images of the GATE optical simulation agreed with the ZEMAX optical simulation results. In conclusion, the use of a lens in the GATE optical simulation could significantly improve the image quality of bioluminescence and fluorescence imaging compared with pinhole optics. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  10. The Monte Carlo simulation of the Borexino detector

    NASA Astrophysics Data System (ADS)

    Agostini, M.; Altenmüller, K.; Appel, S.; Atroshchenko, V.; Bagdasarian, Z.; Basilico, D.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Borodikhina, L.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Caminata, A.; Canepa, M.; Caprioli, S.; Carlini, M.; Cavalcante, P.; Chepurnov, A.; Choi, K.; D'Angelo, D.; Davini, S.; Derbin, A.; Ding, X. F.; Di Noto, L.; Drachnev, I.; Fomenko, K.; Formozov, A.; Franco, D.; Froborg, F.; Gabriele, F.; Galbiati, C.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, T.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jany, A.; Jeschke, D.; Kobychev, V.; Korablev, D.; Korga, G.; Kryn, D.; Laubenstein, M.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Magnozzi, M.; Manuzio, G.; Marcocci, S.; Martyn, J.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Muratova, V.; Neumair, B.; Oberauer, L.; Opitz, B.; Ortica, F.; Pallavicini, M.; Papp, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Semenov, D.; Shakina, P.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stokes, L. F. F.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Vishneva, A.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.

    2018-01-01

    We describe the Monte Carlo (MC) simulation of the Borexino detector and the agreement of its output with data. The Borexino MC "ab initio" simulates the energy loss of particles in all detector components and generates the resulting scintillation photons and their propagation within the liquid scintillator volume. The simulation accounts for absorption, reemission, and scattering of the optical photons and tracks them until they either are absorbed or reach the photocathode of one of the photomultiplier tubes. Photon detection is followed by a comprehensive simulation of the readout electronics response. The MC is tuned using data collected with radioactive calibration sources deployed inside and around the scintillator volume. The simulation reproduces the energy response of the detector, its uniformity within the fiducial scintillator volume relevant to neutrino physics, and the time distribution of detected photons to better than 1% between 100 keV and several MeV. The techniques developed to simulate the Borexino detector and their level of refinement are of possible interest to the neutrino community, especially for current and future large-volume liquid scintillator experiments such as Kamland-Zen, SNO+, and Juno.

  11. In vitro Dosimetric Study of Biliary Stent Loaded with Radioactive 125I Seeds

    PubMed Central

    Yao, Li-Hong; Wang, Jun-Jie; Shang, Charles; Jiang, Ping; Lin, Lei; Sun, Hai-Tao; Liu, Lu; Liu, Hao; He, Di; Yang, Rui-Jie

    2017-01-01

    Background: A novel radioactive 125I seed-loaded biliary stent has been used for patients with malignant biliary obstruction. However, the dosimetric characteristics of the stents remain unclear. Therefore, we aimed to describe the dosimetry of stents of different lengths, with different numbers and activities of 125I seeds. Methods: The radiation dosimetry of three representative radioactive stent models was evaluated using a treatment planning system (TPS), thermoluminescent dosimeter (TLD) measurements, and Monte Carlo (MC) simulations. In the process of TPS calculation and TLD measurement, two different water-equivalent phantoms were designed to obtain the cumulative radial dose distribution. Calibration procedures using TLDs in the designed phantom were also conducted. MC simulations were performed using the Monte Carlo N-Particle eXtended version 2.5 general purpose code to calculate the radioactive stent's three-dimensional dose rate distribution in liquid water. Analysis of covariance was used to examine the factors influencing the radial dose distribution of the radioactive stent. Results: The maximum reduction in cumulative radial dose was 26% when the seed activity changed from 0.5 mCi to 0.4 mCi for the same length of radioactive stent. The TLDs' dose response in the range of 0–10 mGy for 137Cs γ-ray irradiation was linear: y = 182225x − 6651.9 (R^2 = 0.99152; y is the irradiation dose in mGy, x is the TLD reading in nC). When TLDs were irradiated by radiation sources of different energies to a dose of 1 mGy, the TLD readings differed. Doses at a distance of 0.1 cm from the three stents' surfaces simulated by MC were 79, 93, and 97 Gy. Conclusions: TPS calculation, TLD measurement, and MC simulation were performed and were found to be in good agreement. Although the whole experiment was conducted in a water-equivalent phantom, the data from our evaluation may provide a theoretical basis for dosimetry in clinical application. PMID:28469106
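
    Added illustration (not part of the record): applying the reported TLD calibration line; the coefficients are taken verbatim from the abstract, and the function name is hypothetical.

        def tld_dose_mGy(reading_nC):
            """Reported linear TLD calibration y = 182225*x - 6651.9
            (y: dose in mGy, x: reading in nC), valid for the 0-10 mGy
            137Cs calibration range quoted above."""
            return 182225.0 * reading_nC - 6651.9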

  12. SU-E-T-795: Validations of Dose Calculation Accuracy of Acuros BV in High-Dose-Rate (HDR) Brachytherapy with a Shielded Cylinder Applicator Using Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Department of Engineering Physics, Tsinghua University, Beijing; Tian, Z

    Purpose: Acuros BV has become available to perform accurate dose calculations in high-dose-rate (HDR) brachytherapy, with phantom heterogeneity considered by solving the Boltzmann transport equation. In this work, we performed validation studies regarding the dose calculation accuracy of Acuros BV in cases with a shielded cylinder applicator, using Monte Carlo (MC) simulations. Methods: Fifteen cases were considered in our studies, covering five different diameters of the applicator and three different shielding degrees. For each case, a digital phantom was created in Varian BrachyVision with the cylinder applicator inserted in the middle of a large water phantom. A treatment plan with eight dwell positions was generated for these fifteen cases. Dose calculations were performed with Acuros BV. We then generated a voxelized phantom of the same geometry, and the materials were modeled according to the vendor's specifications. MC dose calculations were then performed using our in-house developed fast MC dose engine for HDR brachytherapy (gBMC) on a GPU platform, which is able to simulate both photon transport and electron transport in a voxelized geometry. A phase-space file for the Ir-192 HDR source was used as the source model for the MC simulations. Results: Satisfactory agreement between the dose distributions calculated by Acuros BV and those calculated by gBMC was observed in all cases. Quantitatively, we computed the point-wise dose difference within the region that receives a dose higher than 10% of the reference dose, defined to be the dose at 5 mm outward from the applicator surface. The mean dose difference was ∼0.45%–0.51% and the 95-percentile maximum difference was ∼1.24%–1.47%. Conclusion: Acuros BV is able to accurately perform dose calculations in HDR brachytherapy with a shielded cylinder applicator.

  13. Enhanced configurational sampling with hybrid non-equilibrium molecular dynamics-Monte Carlo propagator

    NASA Astrophysics Data System (ADS)

    Suh, Donghyuk; Radak, Brian K.; Chipot, Christophe; Roux, Benoît

    2018-01-01

    Molecular dynamics (MD) trajectories based on classical equations of motion can be used to sample the configurational space of complex molecular systems. However, brute-force MD often converges slowly due to the ruggedness of the underlying potential energy surface. Several schemes have been proposed to address this problem by effectively smoothing the potential energy surface. However, in order to recover the proper Boltzmann equilibrium probability distribution, these approaches must then rely on statistical reweighting techniques or generate the simulations within a Hamiltonian tempering replica-exchange scheme. The present work puts forth a novel hybrid sampling propagator combining Metropolis-Hastings Monte Carlo (MC) with proposed moves generated by non-equilibrium MD (neMD). This hybrid neMD-MC propagator comprises three elements: (i) an atomic system is dynamically propagated for some period of time using standard equilibrium MD on the correct potential energy surface; (ii) the system is then propagated for a brief period of time during what is referred to as a "boosting phase," via a time-dependent Hamiltonian that is evolved toward the perturbed potential energy surface and then back to the correct potential energy surface; (iii) the resulting configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step (i). A symmetric two-end momentum reversal prescription is used at the end of the neMD trajectories to guarantee that the hybrid neMD-MC sampling propagator obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The hybrid neMD-MC sampling propagator is designed and implemented to enhance sampling by relying on the accelerated MD and solute tempering schemes. It is also combined with the adaptive biasing force sampling algorithm. Illustrative tests with specific biomolecular systems indicate that the method can yield a significant speedup.
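
    A minimal sketch of the accept/reject step (iii) is given below, assuming the Metropolis criterion is evaluated from the energy change on the unperturbed potential energy surface; this is an illustration of the criterion, not the authors' implementation.

    import math, random

    def accept_nemd_move(e_old, e_new, beta):
        """Metropolis test on the unperturbed potential energy surface (energies in consistent units)."""
        d_e = e_new - e_old
        return d_e <= 0.0 or random.random() < math.exp(-beta * d_e)

    # On rejection, the pre-move configuration is restored (with the prescribed
    # momentum reversal), which preserves microscopic detailed balance.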

  14. Enhanced configurational sampling with hybrid non-equilibrium molecular dynamics-Monte Carlo propagator.

    PubMed

    Suh, Donghyuk; Radak, Brian K; Chipot, Christophe; Roux, Benoît

    2018-01-07

    Molecular dynamics (MD) trajectories based on classical equations of motion can be used to sample the configurational space of complex molecular systems. However, brute-force MD often converges slowly due to the ruggedness of the underlying potential energy surface. Several schemes have been proposed to address this problem by effectively smoothing the potential energy surface. However, in order to recover the proper Boltzmann equilibrium probability distribution, these approaches must then rely on statistical reweighting techniques or generate the simulations within a Hamiltonian tempering replica-exchange scheme. The present work puts forth a novel hybrid sampling propagator combining Metropolis-Hastings Monte Carlo (MC) with proposed moves generated by non-equilibrium MD (neMD). This hybrid neMD-MC propagator comprises three elements: (i) an atomic system is dynamically propagated for some period of time using standard equilibrium MD on the correct potential energy surface; (ii) the system is then propagated for a brief period of time during what is referred to as a "boosting phase," via a time-dependent Hamiltonian that is evolved toward the perturbed potential energy surface and then back to the correct potential energy surface; (iii) the resulting configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step (i). A symmetric two-end momentum reversal prescription is used at the end of the neMD trajectories to guarantee that the hybrid neMD-MC sampling propagator obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The hybrid neMD-MC sampling propagator is designed and implemented to enhance sampling by relying on the accelerated MD and solute tempering schemes. It is also combined with the adaptive biasing force sampling algorithm. Illustrative tests with specific biomolecular systems indicate that the method can yield a significant speedup.

  15. Mathematical modelling of scanner-specific bowtie filters for Monte Carlo CT dosimetry

    NASA Astrophysics Data System (ADS)

    Kramer, R.; Cassola, V. F.; Andrade, M. E. A.; de Araújo, M. W. C.; Brenner, D. J.; Khoury, H. J.

    2017-02-01

    The purpose of bowtie filters in CT scanners is to homogenize the x-ray intensity measured by the detectors in order to improve image quality and, at the same time, to reduce the dose to the patient through preferential filtering near the periphery of the fan beam. For CT dosimetry, especially for Monte Carlo calculations of organ and tissue absorbed doses to patients, it is important to take the effect of bowtie filters into account. However, the material composition and dimensions of these filters are proprietary. Consequently, a method for bowtie filter simulation independent of access to proprietary data and/or to a specific scanner would be of interest to many researchers involved in CT dosimetry. This study presents such a method based on the weighted computed tomography dose index, CTDIw, defined in two cylindrical PMMA phantoms of 16 cm and 32 cm diameter. With an EGSnrc-based Monte Carlo (MC) code, CTDIw/CTDI100,a ratios were calculated for a specific CT scanner using PMMA bowtie filter models based on sigmoid Boltzmann functions combined with a scanner filter factor (SFF), which is modified during the calculations until the MC-calculated CTDIw/CTDI100,a matches the CTDIw/CTDI100,a ratio determined by measurements, or found in publications, for that specific scanner. Once the scanner-specific value for the SFF has been found, the bowtie filter algorithm can be used in any MC code to perform CT dosimetry for that specific scanner. The bowtie filter model proposed here was validated for CTDIw/CTDI100,a considering 11 different CT scanners and for CTDI100,c, CTDI100,p and their ratio considering 4 different CT scanners. Additionally, comparisons were made for lateral dose profiles free in air and using computational anthropomorphic phantoms. CTDIw/CTDI100,a determined with this new method agreed on average within 0.89% (max. 3.4%) and 1.64% (max. 4.5%) with corresponding data published by CTDosimetry (www.impactscan.org) for the CTDI HEAD and BODY phantoms, respectively. Comparison with results calculated using proprietary data for the PHILIPS Brilliance 64 scanner showed agreement on average within 2.5% (max. 5.8%), and with data measured for that scanner within 2.1% (max. 3.7%). Ratios of CTDI100,c/CTDI100,p for this study and corresponding data published by CTDosimetry (www.impactscan.org) agree on average within about 11% (max. 28.6%). Lateral dose profiles calculated with the proposed bowtie filter and with proprietary data agreed within 2% (max. 5.9%), and both calculated data sets agreed within 5.4% (max. 11.2%) with measured results. Application of the proposed bowtie filter and of the exactly modelled filter to human phantom Monte Carlo calculations shows agreement on average within 5% (max. 7.9%) for organ and tissue absorbed doses.
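
    The sketch below illustrates the kind of sigmoid (Boltzmann-function) thickness profile such a bowtie filter model could use; every parameter name and value here is an assumption for illustration, not vendor data or the published fit.

    import math

    def bowtie_thickness_cm(theta_deg, t_min=0.2, t_max=6.0, theta0=12.0, sff=1.0):
        """PMMA filter thickness rising sigmoidally toward the fan-beam periphery."""
        x = (abs(theta_deg) - theta0) / (3.0 * sff)   # sff rescales the transition width
        return t_min + (t_max - t_min) / (1.0 + math.exp(-x))

    # In the method above, the scanner filter factor (SFF) would be adjusted until the
    # MC-calculated CTDIw/CTDI100,a matches the measured or published ratio.
    print([round(bowtie_thickness_cm(a), 2) for a in (0, 5, 10, 15, 20, 25)])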

  16. Poster - Thur Eve - 45: Commissioning of the Varian ECLIPSE eMC algorithm for clinical electron treatment planning.

    PubMed

    Serban, M; Ruo, R; Sarfehnia, A; Parker, W; Evans, M

    2012-07-01

    Fast electron Monte Carlo systems have been developed commercially and implemented for clinical practice in radiation therapy clinics. In this work, the Varian eMC (electron Monte Carlo) algorithm was commissioned for clinical electron beams of energies between 6 MeV and 20 MeV. Beam outputs, PDDs and profiles were measured for 29 regular and irregular cutouts using the IC-10 (Wellhöfer) ionization chamber. Detailed percentage depth dose comparisons showed that the differences between measurement and eMC for characteristic points on the PDD are generally less than 1 mm and always less than 2 mm, with the eMC-calculated values being lower than the measured values. Of the 145 measured output factors, 19 cases fail a ±2% agreement but only 8 cases fail a ±3% agreement between calculation and measurement. Comparison of central axis dose distributions for two electron energies (9 and 20 MeV) for a 10 × 10 cm2 field, centrally shielded with Pb of width 0 cm (open), 1, 2 and 3 cm, shows agreement to within 3% except near the surface. Comparison of central axis dose distributions for 9 MeV in heterogeneous phantoms including bone and lung inserts showed agreement within 1 mm and 3 mm, respectively, with measured TLD data. The overall agreement between measurement and eMC calculation has enabled us to begin implementing this calculation model for clinical use. © 2012 American Association of Physicists in Medicine.

  17. kmos: A lattice kinetic Monte Carlo framework

    NASA Astrophysics Data System (ADS)

    Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten

    2014-07-01

    Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems where the site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a runtime performance that is essentially independent of lattice size by generating code for the efficiency-determining local update of available events that is optimized for a defined kMC model. For this model definition and the control of all runtime and evaluation aspects, kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or through a graphical user interface, which visualizes the model geometry, the lattice occupations and the rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with its application to kMC models for surface catalytic processes, where for given operation conditions (temperature and partial pressures of all reactants) central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.
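
    For readers unfamiliar with the algorithm class, the sketch below shows one generic rejection-free lattice kMC step (event selection plus exponential waiting time). It illustrates the method kmos implements but is not the kmos API.

    import math, random

    def kmc_step(rates):
        """Pick one event in proportion to its rate; return (event index, time increment)."""
        total = sum(rates)
        r = random.random() * total
        acc = 0.0
        for i, k in enumerate(rates):
            acc += k
            if r < acc:
                break
        dt = -math.log(random.random()) / total   # exponentially distributed waiting time
        return i, dt

    # Example: three available events with different rate constants (arbitrary units).
    event, dt = kmc_step([1.0, 0.1, 5.0])
    print(event, dt)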

  18. Determination of small-field correction factors for cylindrical ionization chambers using a semiempirical method

    NASA Astrophysics Data System (ADS)

    Park, Kwangwoo; Bak, Jino; Park, Sungho; Choi, Wonhoon; Park, Suk Won

    2016-02-01

    A semiempirical method based on the averaging effect of the sensitive volumes of different air-filled ionization chambers (ICs) was employed to approximate the beam-quality correction factors arising from the difference in size between the reference field and small fields. We measured the output factors using several cylindrical ICs and calculated the correction factors using a mathematical method similar to deconvolution; in the method, we modeled the variable and inhomogeneous energy fluence function within the chamber cavity. The parameters of the modeled function and the correction factors were determined by solving a system of equations developed on the basis of the measurement data and the geometry of the chambers. Further, Monte Carlo (MC) computations were performed using the Monaco® treatment planning system to validate the proposed method. The determined correction factors ($k_{Q_{\text{msr}},Q}^{f_{\text{smf}},f_{\text{ref}}}$) were comparable to the values derived from the MC computations performed using Monaco®. For example, for a 6 MV photon beam and a field size of 1 × 1 cm2, $k_{Q_{\text{msr}},Q}^{f_{\text{smf}},f_{\text{ref}}}$ was calculated to be 1.125 for a PTW 31010 chamber and 1.022 for a PTW 31016 chamber. On the other hand, the $k_{Q_{\text{msr}},Q}^{f_{\text{smf}},f_{\text{ref}}}$ values determined from the MC computations were 1.121 and 1.031, respectively; the difference between the proposed method and the MC computation is less than 2%. In addition, we determined the $k_{Q_{\text{msr}},Q}^{f_{\text{smf}},f_{\text{ref}}}$ values for PTW 30013, PTW 31010, PTW 31016, IBA FC23-C, and IBA CC13 chambers as well. We thus devised a method for determining $k_{Q_{\text{msr}},Q}^{f_{\text{smf}},f_{\text{ref}}}$ from both measurement of the output factors and model-based mathematical computation. The proposed method can be useful in cases where MC simulation is not applicable in clinical settings.
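
    The correction factor itself relates the true (e.g., MC-calculated) dose ratio between the small and machine-specific reference fields to the measured chamber reading ratio. The sketch below states that standard definition with invented example numbers.

    def small_field_k(d_smf, d_ref, m_smf, m_ref):
        """k = [D(f_smf)/D(f_ref)] / [M(f_smf)/M(f_ref)]."""
        return (d_smf / d_ref) / (m_smf / m_ref)

    # A chamber that under-responds in the small field gives k > 1, as with the
    # ~1.12 reported for the PTW 31010 at 1 x 1 cm2 (numbers below are made up).
    print(small_field_k(d_smf=0.68, d_ref=1.00, m_smf=0.605, m_ref=1.00))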

  19. TU-EF-204-01: Accurate Prediction of CT Tube Current Modulation: Estimating Tube Current Modulation Schemes for Voxelized Patient Models Used in Monte Carlo Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMillan, K; Bostani, M; McNitt-Gray, M

    2015-06-15

    Purpose: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. Methods: For 10 patients who received clinically-indicated chest (n=5) and abdomen/pelvis (n=5) scans on a Siemens CT scanner, both CT localizer radiograph (“topogram”) and image data were collected. Methods were devised to estimate the complete x-y-z TCM scheme using patient attenuation data: (a) available in the Siemens CT localizer radiograph/topogram itself (“actual-topo”) and (b) from a simulated topogram (“sim-topo”) derived from a projection of the image data. For comparison, the actual TCM scheme was extracted from the projection data of each patient. For validation, Monte Carlo simulations were performed using each TCM scheme to estimate dose to the lungs (chest scans) and liver (abdomen/pelvis scans). Organ doses from simulations using the actual TCM were compared to those using each of the estimated TCM methods (“actual-topo” and “sim-topo”). Results: For chest scans, the average differences between doses estimated using actual TCM schemes and estimated TCM schemes (“actual-topo” and “sim-topo”) were 3.70% and 4.98%, respectively. For abdomen/pelvis scans, the average differences were 5.55% and 6.97%, respectively. Conclusion: Strong agreement between doses estimated using actual and estimated TCM schemes validates the methods for simulating Siemens topograms and converting attenuation data into TCM schemes. This indicates that the methods developed in this work can be used to accurately estimate TCM schemes for any patient model or computational phantom, whether a CT localizer radiograph is available or not. Funding Support: NIH Grant R01-EB017095; Disclosures - Michael McNitt-Gray: Institutional Research Agreement, Siemens AG; Research Support, Siemens AG; Consultant, Flaherty Sensabaugh Bonasso PLLC; Consultant, Fulbright and Jaworski; Disclosures - Cynthia McCollough: Research Grant, Siemens Healthcare.
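
    As a highly simplified illustration of mapping attenuation to tube current, the sketch below scales a reference mAs by per-slice attenuation raised to a modulation-strength exponent and clips to tube limits. This textbook-style simplification is not the paper's topogram-based method or any vendor's algorithm.

    import numpy as np

    def estimate_tcm(attenuation_z, ref_mAs=200.0, strength=0.5, mA_limits=(20.0, 600.0)):
        """Map a per-slice attenuation profile to a z-axis tube-current profile."""
        a = np.asarray(attenuation_z, dtype=float)
        mA = ref_mAs * (a / a.mean()) ** strength   # heavier slices get more current
        return np.clip(mA, *mA_limits)

    print(estimate_tcm([3.0, 4.5, 6.0, 5.0, 3.5]).round(1))   # arbitrary attenuation units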

  20. Grid Block Design Based on Monte Carlo Simulated Dosimetry, the Linear Quadratic and Hug–Kellerer Radiobiological Models

    PubMed Central

    Gholami, Somayeh; Nedaie, Hassan Ali; Longo, Francesco; Ay, Mohammad Reza; Dini, Sharifeh A.; Meigooni, Ali S.

    2017-01-01

    Purpose: The clinical efficacy of Grid therapy has been examined by several investigators. In this project, the hole diameter and hole spacing in Grid blocks were examined to determine the optimum parameters that give a therapeutic advantage. Methods: The evaluations were performed using Monte Carlo (MC) simulation and commonly used radiobiological models. The Geant4 MC code was used to simulate the dose distributions for 25 different Grid blocks with different hole diameters and center-to-center spacing. The therapeutic parameters of these blocks, namely, the therapeutic ratio (TR) and geometrical sparing factor (GSF) were calculated using two different radiobiological models, including the linear quadratic and Hug–Kellerer models. In addition, the ratio of the open to blocked area (ROTBA) is also used as a geometrical parameter for each block design. Comparisons of the TR, GSF, and ROTBA for all of the blocks were used to derive the parameters for an optimum Grid block with the maximum TR, minimum GSF, and optimal ROTBA. A sample of the optimum Grid block was fabricated at our institution. Dosimetric characteristics of this Grid block were measured using an ionization chamber in water phantom, Gafchromic film, and thermoluminescent dosimeters in Solid Water™ phantom materials. Results: The results of these investigations indicated that Grid blocks with hole diameters between 1.00 and 1.25 cm and spacing of 1.7 or 1.8 cm have optimal therapeutic parameters (TR > 1.3 and GSF~0.90). The measured dosimetric characteristics of the optimum Grid blocks including dose profiles, percentage depth dose, dose output factor (cGy/MU), and valley-to-peak ratio were in good agreement (±5%) with the simulated data. Conclusion: In summary, using MC-based dosimetry, two radiobiological models, and previously published clinical data, we have introduced a method to design a Grid block with optimum therapeutic response. The simulated data were reproduced by experimental data. PMID:29296035
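
    As a sketch of the radiobiological scoring, the linear-quadratic surviving fraction that underlies a TR calculation can be written as below; the alpha/beta values and doses are illustrative, not the study's parameters.

    import math

    def lq_survival(dose_gy, alpha, beta):
        """LQ surviving fraction S = exp(-(alpha*D + beta*D^2))."""
        return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

    # A therapeutic ratio can then be formed from tumour vs normal-tissue survival,
    # averaged over the peak-and-valley dose pattern produced by the Grid block.
    print(lq_survival(15.0, alpha=0.30, beta=0.03))   # tumour at a peak dose
    print(lq_survival(3.0, alpha=0.15, beta=0.05))    # normal tissue at a valley dose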

  1. SU-C-BRC-06: OpenCL-Based Cross-Platform Monte Carlo Simulation Package for Carbon Ion Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, N; Tian, Z; Pompos, A

    2016-06-15

    Purpose: Monte Carlo (MC) simulation is considered to be the most accurate method for calculation of absorbed dose and fundamental physical quantities related to biological effects in carbon ion therapy. Its long computation time impedes clinical and research applications. We have developed an MC package, goCMC, on parallel processing platforms, aiming at achieving accurate and efficient simulations for carbon therapy. Methods: goCMC was developed under the OpenCL framework. It supports transport simulation in voxelized geometry with kinetic energy up to 450 MeV/u. A Class II condensed history algorithm was employed for charged particle transport, with stopping power computed via the Bethe-Bloch equation. Secondary electrons were not transported; their energy was deposited locally. Energy straggling and multiple scattering were modeled. Production of secondary charged particles from nuclear interactions was implemented based on cross section and yield data from Geant4. They were transported via the condensed history scheme. goCMC supports scoring various quantities of interest, e.g., physical dose, particle fluence, spectrum, linear energy transfer, and positron-emitting nuclei. Results: goCMC has been benchmarked against Geant4 with different phantoms and beam energies. For 100 MeV/u, 250 MeV/u and 400 MeV/u beams impinging on a water phantom, the range difference was 0.03 mm, 0.20 mm and 0.53 mm, and the mean dose difference was 0.47%, 0.72% and 0.79%, respectively. goCMC can run on various computing devices. Depending on the beam energy and voxel size, it took 20∼100 seconds to simulate 10^7 carbons on an AMD Radeon GPU card. The corresponding CPU time for Geant4 with the same setup was 60∼100 hours. Conclusion: We have developed an OpenCL-based cross-platform carbon MC simulation package, goCMC. Its accuracy, efficiency and portability make goCMC attractive for research and clinical applications in carbon therapy.
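
    The stopping-power ingredient mentioned above can be sketched with the Bethe-Bloch formula in the heavy-projectile limit (density and shell corrections omitted); the constants and example inputs are illustrative, and this is not goCMC code.

    import math

    K = 0.307075          # MeV cm^2 / mol
    ME_C2 = 0.510998950   # electron rest energy, MeV

    def bethe_bloch(z_proj, Z_over_A, beta, I_MeV):
        """Mass stopping power -dE/dx in MeV cm^2/g for a heavy charged projectile."""
        gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
        t_max = 2.0 * ME_C2 * beta ** 2 * gamma ** 2   # valid when M >> m_e
        log_term = math.log(2.0 * ME_C2 * beta ** 2 * gamma ** 2 * t_max / I_MeV ** 2)
        return K * z_proj ** 2 * Z_over_A / beta ** 2 * (0.5 * log_term - beta ** 2)

    # Example: carbon ion (z = 6) in water (Z/A ~ 0.555, I ~ 75 eV) at beta = 0.6.
    print(bethe_bloch(6, 0.555, 0.6, 75e-6))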

  2. Improving x-ray fluorescence signal for benchtop polychromatic cone-beam x-ray fluorescence computed tomography by incident x-ray spectrum optimization: A Monte Carlo study

    PubMed Central

    Manohar, Nivedh; Jones, Bernard L.; Cho, Sang Hyun

    2014-01-01

    Purpose: To develop an accurate and comprehensive Monte Carlo (MC) model of an experimental benchtop polychromatic cone-beam x-ray fluorescence computed tomography (XFCT) setup and apply this MC model to optimize incident x-ray spectrum for improving production/detection of x-ray fluorescence photons from gold nanoparticles (GNPs). Methods: A detailed MC model, based on an experimental XFCT system, was created using the Monte Carlo N-Particle (MCNP) transport code. The model was validated by comparing MC results including x-ray fluorescence (XRF) and scatter photon spectra with measured data obtained under identical conditions using 105 kVp cone-beam x-rays filtered by either 1 mm of lead (Pb) or 0.9 mm of tin (Sn). After validation, the model was used to investigate the effects of additional filtration of the incident beam with Pb and Sn. Supplementary incident x-ray spectra, representing heavier filtration (Pb: 2 and 3 mm; Sn: 1, 2, and 3 mm) were computationally generated and used with the model to obtain XRF/scatter spectra. Quasimonochromatic incident x-ray spectra (81, 85, 90, 95, and 100 keV with 10 keV full width at half maximum) were also investigated to determine the ideal energy for distinguishing gold XRF signal from the scatter background. Fluorescence signal-to-dose ratio (FSDR) and fluorescence-normalized scan time (FNST) were used as metrics to assess results. Results: Calculated XRF/scatter spectra for 1-mm Pb and 0.9-mm Sn filters matched (r ≥ 0.996) experimental measurements. Calculated spectra representing additional filtration for both filter materials showed that the spectral hardening improved the FSDR at the expense of requiring a much longer FNST. In general, using Sn instead of Pb, at a given filter thickness, allowed an increase of up to 20% in FSDR, more prominent gold XRF peaks, and up to an order of magnitude decrease in FNST. Simulations using quasimonochromatic spectra suggested that increasing source x-ray energy, in the investigated range of 81–100 keV, increased the FSDR up to a factor of 20, compared to 1 mm Pb, and further facilitated separation of gold XRF peaks from the scatter background. Conclusions: A detailed MC model of an experimental benchtop XFCT system has been developed and validated. In exemplary calculations to illustrate the usefulness of this model, it was shown that potential use of quasimonochromatic spectra or judicious choice of filter material/thickness to tailor the spectrum of a polychromatic x-ray source can significantly improve the performance of benchtop XFCT, while considering trade-offs between FSDR and FNST. As demonstrated, the current MC model is a reliable and powerful computational tool that can greatly expedite the further development of a benchtop XFCT system for routine preclinical molecular imaging with GNPs and other metal probes. PMID:25281958

  3. Constant-pH molecular dynamics using stochastic titration

    NASA Astrophysics Data System (ADS)

    Baptista, António M.; Teixeira, Vitor H.; Soares, Cláudio M.

    2002-09-01

    A new method is proposed for performing constant-pH molecular dynamics (MD) simulations, that is, MD simulations where pH is one of the external thermodynamic parameters, like the temperature or the pressure. The protonation state of each titrable site in the solute is allowed to change during a molecular mechanics (MM) MD simulation, the new states being obtained from a combination of continuum electrostatics (CE) calculations and Monte Carlo (MC) simulation of protonation equilibrium. The coupling between the MM/MD and CE/MC algorithms is done in a way that ensures a proper Markov chain, sampling from the intended semigrand canonical distribution. This stochastic titration method is applied to succinic acid, aimed at illustrating the method and examining the choice of its adjustable parameters. The complete titration of succinic acid, using constant-pH MD simulations at different pH values, gives a clear picture of the coupling between the trans/gauche isomerization and the protonation process, making it possible to reconcile some apparently contradictory results of previous studies. The present constant-pH MD method is shown to require a moderate increase of computational cost when compared to the usual MD method.
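
    A minimal sketch of one MC protonation move in such a semigrand canonical scheme is given below, assuming the electrostatic energy change of the flip comes from a continuum-electrostatics calculation and using the standard pH/pKa free-energy offset; it is illustrative, not the authors' code.

    import math, random

    LN10 = math.log(10.0)

    def accept_protonation_flip(d_elec_kT, pH, pKa_model, protonating):
        """Metropolis test for flipping one site's protonation state (energies in kT)."""
        sign = 1.0 if protonating else -1.0
        d_g = d_elec_kT + sign * LN10 * (pH - pKa_model)   # dimensionless total dG
        return d_g <= 0.0 or random.random() < math.exp(-d_g)

    # Example: protonating a site with model pKa 4.2 at pH 7 is strongly penalized.
    print(accept_protonation_flip(d_elec_kT=0.0, pH=7.0, pKa_model=4.2, protonating=True))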

  4. Two schemes for quantitative photoacoustic tomography based on Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yubin; Yuan, Zhen, E-mail: zhenyuan@umac.mo

    Purpose: The aim of this study was to develop novel methods for photoacoustically determining the optical absorption coefficient of biological tissues using Monte Carlo (MC) simulation. Methods: In this study, the authors propose two quantitative photoacoustic tomography (PAT) methods for mapping the optical absorption coefficient. The reconstruction methods combine conventional PAT with MC simulation in a novel way to determine the optical absorption coefficient of biological tissues or organs. Specifically, the authors' two schemes were theoretically and experimentally examined using simulations, tissue-mimicking phantoms, and ex vivo and in vivo tests. In particular, the authors explored these methods using several objects with different absorption contrasts embedded in turbid media and by using high-absorption media when the diffusion approximation was not effective at describing the photon transport. Results: The simulations and experimental tests showed that the reconstructions were quantitatively accurate in terms of the locations, sizes, and optical properties of the targets. The positions of the recovered targets were assessed by the property profiles, where the authors discovered that the off-center error was less than 0.1 mm for the circular target. Meanwhile, the sizes and quantitative optical properties of the targets were quantified by estimating the full width at half maximum of the optical absorption property. Interestingly, for the reconstructed sizes, the authors discovered that the errors ranged from 0 for relatively small-size targets to 26% for relatively large-size targets, whereas for the recovered optical properties, the errors ranged from 0% to 12.5% for different cases. Conclusions: The authors found that their methods can quantitatively reconstruct absorbing objects of different sizes and optical contrasts even when the diffusion approximation is unable to accurately describe the photon propagation in biological tissues. In particular, their methods are able to resolve the intrinsic difficulties that occur when quantitative PAT is conducted by combining conventional PAT with the diffusion approximation or with radiation transport modeling.

  5. Fast calculation of tissue optical properties using MC and the experimental evaluation for diagnosis of cervical cancer

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Zhou, Xiaoqing; Qin, Zhuanping; Zhao, Huijuan

    2011-02-01

    This article aims at the development of a fast inverse Monte Carlo (MC) simulation for the reconstruction of the optical properties (absorption coefficient μa and scattering coefficient μs) of cylindrical tissue, such as a cervix, from frequency-domain measurements of near-infrared diffuse light. Frequency-domain information (amplitude and phase) is extracted from the time-domain MC with a modified method. To shorten the computation time in the reconstruction of optical properties, an efficient and fast forward MC has to be achieved. To do this, databases of the frequency-domain information over a range of μa and μs were first pre-built by combining MC simulation with the Lambert-Beer law. Then, a double polynomial model was adopted to quickly obtain the frequency-domain information for any optical properties. Based on the fast forward MC, the optical properties can be quickly obtained in a nonlinear optimization scheme. Reconstruction from simulated data showed that the developed inverse MC method has advantages in both reconstruction accuracy and computation time. The relative errors in reconstruction of μa and μs are less than ±6% and ±12%, respectively, while the other coefficient (μs or μa) is held at a fixed value. When both μa and μs are unknown, the relative errors in reconstruction of the reduced scattering coefficient and absorption coefficient are mainly less than ±10% in the ranges 45 < μs < 80 cm−1 and 0.25 < μa < 0.55 cm−1. With the rapid reconstruction strategy developed in this article, the computation time for reconstructing one set of optical properties is less than 0.5 s. Endoscopic measurements on two tubular solid phantoms were also carried out to evaluate the system and the inversion scheme. The results demonstrated that a relative error of less than 20% can be achieved.
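
    The surrogate-plus-inversion idea can be sketched as follows: fit low-order polynomials in (μa, μs) to the precomputed MC database, then invert the measured amplitude and phase by nonlinear least squares. The polynomial form and coefficients below are invented for illustration.

    import numpy as np
    from scipy.optimize import least_squares

    def poly_features(mu_a, mu_s):
        return np.array([1.0, mu_a, mu_s, mu_a * mu_s, mu_a ** 2, mu_s ** 2])

    def forward(c_amp, c_phase, mu_a, mu_s):
        """Surrogate forward model: predicted (amplitude, phase)."""
        f = poly_features(mu_a, mu_s)
        return np.array([c_amp @ f, c_phase @ f])

    def reconstruct(meas, c_amp, c_phase, x0=(0.4, 60.0)):
        """Recover (mu_a, mu_s) from one measured (amplitude, phase) pair."""
        resid = lambda x: forward(c_amp, c_phase, x[0], x[1]) - meas
        return least_squares(resid, x0).x

    # Round-trip test with made-up surrogate coefficients:
    ca = np.array([1.0, -0.8, -0.01, 0.0, 0.0, 0.0])
    cp = np.array([0.2, 0.5, 0.002, 0.0, 0.0, 0.0])
    print(reconstruct(forward(ca, cp, 0.35, 55.0), ca, cp))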

  6. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    NASA Astrophysics Data System (ADS)

    Ritsch, E.; Atlas Collaboration

    2014-06-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production currently consumes the largest share of the computing resources in use by ATLAS. In this document we describe the plans to overcome the computing resource limitations for large-scale Monte Carlo production in the ATLAS Experiment for Run 2, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  7. Development of the 3DHZETRN code for space radiation protection

    NASA Astrophysics Data System (ADS)

    Wilson, John; Badavi, Francis; Slaba, Tony; Reddell, Brandon; Bahadori, Amir; Singleterry, Robert

    Space radiation protection requires computationally efficient shield assessment methods that have been verified and validated. The HZETRN code is the engineering design code used for low Earth orbit dosimetric analysis and astronaut record keeping, with end-to-end validation to twenty percent in Space Shuttle and International Space Station operations. HZETRN treated diffusive leakage only at the distal surface, limiting its application to systems with a large radius of curvature. A revision of HZETRN that included forward and backward diffusion allowed neutron leakage to be evaluated at both the near and distal surfaces. That revision provided a deterministic code of high computational efficiency that was in substantial agreement with Monte Carlo (MC) codes in flat plates (at least to the degree that the MC codes agree among themselves). In the present paper, the 3DHZETRN formalism capable of evaluation in general geometry is described. Benchmarking against MC codes (Geant4, FLUKA, MCNP6, and PHITS) in simple shapes such as spheres within spherical shells and boxes will help quantify uncertainty. Connection of 3DHZETRN to general geometry will be discussed.

  8. Crossover of cation partitioning in olivines: a combination of ab initio and Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Chatterjee, Swastika; Bhattacharyya, Sirshendu; Sengupta, Surajit; Saha-Dasgupta, Tanusri

    2011-04-01

    We report studies based on a combination of ab initio electronic structure and Monte Carlo (MC) techniques on the problem of cation partitioning among the inequivalent octahedral sites, M1 and M2, in mixed olivines containing Mg2+ and Fe2+ ions. Our MC scheme uses interactions derived from ab initio density functional calculations carried out on measured crystal structure data. Our results show that there is no reversal of the preference of Fe for M1 over M2 as a function of temperature. Our findings do not agree with the experimental findings of Redfern et al. (Phys Chem Miner 27:630-637, 2000), but are in agreement with those of Heinemann et al. (Eur J Mineral 18:673-689, 2006) and Morozov et al. (Eur J Mineral 17:495-500, 2005).

  9. Improved cache performance in Monte Carlo transport calculations using energy banding

    NASA Astrophysics Data System (ADS)

    Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.

    2014-04-01

    We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
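
    A schematic of the banding idea in Python: particles are processed one energy band at a time, so only the cross-section slice for the current band needs to stay resident in fast memory. All data structures here are illustrative.

    def transport_with_banding(particles, xs_by_band, band_edges, process_event):
        for lo, hi in zip(band_edges[:-1], band_edges[1:]):
            band = [p for p in particles if lo <= p["E"] < hi]
            xs_slice = xs_by_band[(lo, hi)]    # only this slice is hot in cache
            for p in band:
                process_event(p, xs_slice)     # every lookup hits the small slice
            # particles scattering below `lo` would be queued for a later pass

    # Toy usage: two bands and a no-op event handler.
    parts = [{"E": 0.5}, {"E": 5.0}, {"E": 8.0}]
    xs = {(0.0, 1.0): "low-E table", (1.0, 10.0): "high-E table"}
    transport_with_banding(parts, xs, [0.0, 1.0, 10.0], lambda p, s: None)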

  10. Monte Carlo replica-exchange based ensemble docking of protein conformations.

    PubMed

    Zhang, Zhe; Ehmann, Uwe; Zacharias, Martin

    2017-05-01

    A replica-exchange Monte Carlo (REMC) ensemble docking approach has been developed that allows efficient exploration of protein-protein docking geometries. In addition to Monte Carlo steps in the translation and orientation of the binding partners, possible conformational changes upon binding are included based on Monte Carlo selection of protein conformations stored as ordered, pregenerated conformational ensembles. The conformational ensembles of each binding partner protein were generated by three different approaches starting from the unbound partner protein structure, spanning a root mean square deviation of 1-2.5 Å with respect to the unbound structure. Because MC sampling is performed to select appropriate partner conformations on the fly, the approach is not limited by the number of conformations in the ensemble, in contrast to ensemble cross docking of each conformer pair. Although only a fraction of the generated conformers was in closer agreement with the bound structure, the REMC ensemble docking approach achieved improved docking results compared to REMC docking with only the unbound partner structures or using docking energy minimization methods. The approach has significant potential for further improvement in combination with more realistic structural ensembles and better docking scoring functions. Proteins 2017; 85:924-937. © 2017 Wiley Periodicals, Inc.
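
    The replica-exchange machinery rests on a standard swap test between neighboring temperatures; the sketch below shows that test in isolation with illustrative numbers and is not the docking-specific code.

    import math, random

    def swap_accepted(e_i, e_j, kT_i, kT_j):
        """Metropolis test for exchanging configurations between replicas i and j."""
        delta = (1.0 / kT_i - 1.0 / kT_j) * (e_j - e_i)
        return delta <= 0.0 or random.random() < math.exp(-delta)

    print(swap_accepted(e_i=-120.0, e_j=-100.0, kT_i=1.0, kT_j=1.5))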

  11. Solar proton exposure of an ICRU sphere within a complex structure Part I: Combinatorial geometry.

    PubMed

    Wilson, John W; Slaba, Tony C; Badavi, Francis F; Reddell, Brandon D; Bahadori, Amir A

    2016-06-01

    The 3DHZETRN code, with improved neutron and light ion (Z≤2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency. Published by Elsevier Ltd.

  12. Carrier trajectory tracking equations for Simple-band Monte Carlo simulation of avalanche multiplication processes

    NASA Astrophysics Data System (ADS)

    Ong, J. S. L.; Charin, C.; Leong, J. H.

    2017-12-01

    Avalanche photodiodes (APDs) with steep electric field gradients generally have low excess noise arising from carrier multiplication within the internal gain region of the devices, and the Monte Carlo (MC) method is among the popular simulation tools for such devices. However, there are few articles on carrier trajectory modeling in MC models for these devices. In this work, a set of electric-field-gradient-dependent carrier trajectory tracking equations is developed and used to update the positions of carriers along their paths during Simple-band Monte Carlo (SMC) simulations of APDs with non-uniform electric fields. The mean gain and excess noise results obtained from the SMC model employing these equations show good agreement with the results reported for a series of silicon diodes, including a p+n diode with steep electric field gradients. These results confirm the validity and demonstrate the feasibility of the trajectory tracking equations applied in SMC models for simulating mean gain and excess noise in APDs with non-uniform electric fields. Also, the simulation results for mean gain, excess noise, and carrier ionization positions obtained from the SMC model of this work agree well with those of the conventional SMC model employing the concept of a uniform electric field within a carrier free-flight. These results demonstrate that the electric field variation within a carrier free-flight has an insignificant effect on the predicted mean gain and excess noise. Therefore, both the SMC model of this work and the conventional SMC model can be used to predict the mean gain and excess noise in APDs with highly non-uniform electric fields.
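
    The effect of a linear field gradient on a free-flight can be sketched with a small-step integrator, as below; this generic update is for illustration only and is not the paper's analytic trajectory-tracking equations.

    def free_flight(x, v, dt, q_over_m, e0, g, n_sub=100):
        """Advance carrier position/velocity through E(x) = e0 + g*x over time dt."""
        h = dt / n_sub
        for _ in range(n_sub):
            v += q_over_m * (e0 + g * x) * h   # acceleration from the local field
            x += v * h
        return x, v

    # Example: electron-like carrier in a steep field gradient (SI units).
    print(free_flight(x=0.0, v=1e5, dt=1e-13, q_over_m=1.76e11, e0=1e7, g=1e10))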

  13. Equation of state and Helmholtz free energy for the atomic system of the repulsive Lennard-Jones particles.

    PubMed

    Mirzaeinia, Ali; Feyzi, Farzaneh; Hashemianzadeh, Seyed Majid

    2017-12-07

    Simple and accurate expressions are presented for the equation of state (EOS) and absolute Helmholtz free energy of a system composed of simple atomic particles interacting through the repulsive Lennard-Jones potential model in the fluid and solid phases. The introduced EOS has 17 and 22 coefficients for the fluid and solid phases, respectively, which are regressed to Monte Carlo (MC) simulation data over the reduced temperature range of 0.6 ≤ T* ≤ 6.0 and the packing fraction range of 0.1 ≤ η ≤ 0.72. The average absolute relative percent deviation in fitting the EOS parameters to the MC data is 0.06 and 0.14 for the fluid and solid phases, respectively. The thermodynamic integration method is used to calculate the free energy using the MC simulation results. The Helmholtz free energy of the ideal gas is employed as the reference state for the fluid phase. For the solid phase, the values of the free energy at the reduced density equivalent to the close packing of a hard sphere are used as the reference state. To check the validity of the predicted values of the Helmholtz free energy, the Widom particle insertion method and the Einstein crystal technique of Frenkel and Ladd are employed. The results obtained from these MC simulation approaches agree well with the EOS results, which shows that the proposed model can be reliably utilized in the framework of thermodynamic theories.
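
    The Widom test-particle method used for validation computes the excess chemical potential from ghost-particle insertions, mu_ex = -kT ln <exp(-beta dU)>. A minimal sketch with a toy energy function (ideal-gas limit, so mu_ex = 0) is given below.

    import math, random

    def widom_mu_ex(insert_energy, n_trials, beta, box):
        """Excess chemical potential from random ghost insertions in a box."""
        acc = 0.0
        for _ in range(n_trials):
            pos = [random.uniform(0.0, L) for L in box]
            acc += math.exp(-beta * insert_energy(pos))
        return -math.log(acc / n_trials) / beta

    print(widom_mu_ex(lambda pos: 0.0, n_trials=1000, beta=1.0, box=(10.0,) * 3))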

  14. Equation of state and Helmholtz free energy for the atomic system of the repulsive Lennard-Jones particles

    NASA Astrophysics Data System (ADS)

    Mirzaeinia, Ali; Feyzi, Farzaneh; Hashemianzadeh, Seyed Majid

    2017-12-01

    Simple and accurate expressions are presented for the equation of state (EOS) and absolute Helmholtz free energy of a system composed of simple atomic particles interacting through the repulsive Lennard-Jones potential model in the fluid and solid phases. The introduced EOS has 17 and 22 coefficients for the fluid and solid phases, respectively, which are regressed to Monte Carlo (MC) simulation data over the reduced temperature range of 0.6 ≤ T* ≤ 6.0 and the packing fraction range of 0.1 ≤ η ≤ 0.72. The average absolute relative percent deviation in fitting the EOS parameters to the MC data is 0.06 and 0.14 for the fluid and solid phases, respectively. The thermodynamic integration method is used to calculate the free energy using the MC simulation results. The Helmholtz free energy of the ideal gas is employed as the reference state for the fluid phase. For the solid phase, the values of the free energy at the reduced density equivalent to the close packing of a hard sphere are used as the reference state. To check the validity of the predicted values of the Helmholtz free energy, the Widom particle insertion method and the Einstein crystal technique of Frenkel and Ladd are employed. The results obtained from these MC simulation approaches agree well with the EOS results, which shows that the proposed model can be reliably utilized in the framework of thermodynamic theories.

  15. Monte Carlo simulation of inverse geometry x-ray fluoroscopy using a modified MC-GPU framework

    PubMed Central

    Dunkerley, David A. P.; Tomkowiak, Michael T.; Slagowski, Jordan M.; McCabe, Bradley P.; Funk, Tobias; Speidel, Michael A.

    2015-01-01

    Scanning-Beam Digital X-ray (SBDX) is a technology for low-dose fluoroscopy that employs inverse geometry x-ray beam scanning. To assist with rapid modeling of inverse geometry x-ray systems, we have developed a Monte Carlo (MC) simulation tool based on the MC-GPU framework. MC-GPU version 1.3 was modified to implement a 2D array of focal spot positions on a plane, with individually adjustable x-ray outputs, each producing a narrow x-ray beam directed toward a stationary photon-counting detector array. Geometric accuracy and blurring behavior in tomosynthesis reconstructions were evaluated from simulated images of a 3D arrangement of spheres. The artifact spread function from simulation agreed with experiment to within 1.6% (rRMSD). Detected x-ray scatter fraction was simulated for two SBDX detector geometries and compared to experiments. For the current SBDX prototype (10.6 cm wide by 5.3 cm tall detector), x-ray scatter fraction measured 2.8–6.4% (18.6–31.5 cm acrylic, 100 kV), versus 2.1–4.5% in MC simulation. Experimental trends in scatter versus detector size and phantom thickness were observed in simulation. For dose evaluation, an anthropomorphic phantom was imaged using regular and regional adaptive exposure (RAE) scanning. The reduction in kerma-area-product resulting from RAE scanning was 45% in radiochromic film measurements, versus 46% in simulation. The integral kerma calculated from TLD measurement points within the phantom was 57% lower when using RAE, versus 61% lower in simulation. This MC tool may be used to estimate tomographic blur, detected scatter, and dose distributions when developing inverse geometry x-ray systems. PMID:26113765

  16. Electron dose distributions caused by the contact-type metallic eye shield: Studies using Monte Carlo and pencil beam algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Sei-Kwon; Yoon, Jai-Woong; Hwang, Taejin

    A metallic contact eye shield has sometimes been used for eyelid treatment, but the dose distribution has never been reported for a patient case. This study aimed to show the shield-incorporated, CT-based dose distribution using the Pinnacle system and Monte Carlo (MC) calculation for 3 patient cases. For an artifact-free CT scan, an acrylic shield machined to the same size as the tungsten shield was used. For the MC calculation, BEAMnrc and DOSXYZnrc were used for the 6-MeV electron beam of the Varian 21EX, with the material information for the tungsten, stainless steel, and aluminum components of the eye shield included. The same plan was generated on the Pinnacle system and the two were compared. The use of the acrylic shield produced clear CT images, enabling delineation of the regions of interest, and yielded a CT-based dose calculation for the metallic shield. Both the MC and the Pinnacle systems showed a similar dose distribution downstream of the eye shield, reflecting the blocking effect of the metallic eye shield. The major difference between the MC and the Pinnacle results was the target eyelid dose upstream of the shield, such that the Pinnacle system underestimated the dose by 19 to 28% and 11 to 18% for the maximum and the mean doses, respectively. The pattern of dose difference between the MC and the Pinnacle systems was similar to that in the previous phantom study. In conclusion, the metallic eye shield was successfully incorporated into the CT-based planning, and accurate dose calculation requires MC simulation.

  17. Development and reproducibility evaluation of a Monte Carlo-based standard LINAC model for quality assurance of multi-institutional clinical trials.

    PubMed

    Usmani, Muhammad Nauman; Takegawa, Hideki; Takashina, Masaaki; Numasaki, Hodaka; Suga, Masaki; Anetai, Yusuke; Kurosu, Keita; Koizumi, Masahiko; Teshima, Teruki

    2014-11-01

    Technical developments in radiotherapy (RT) have created a need for systematic quality assurance (QA) to ensure that clinical institutions deliver prescribed radiation doses consistent with the requirements of clinical protocols. For QA, an ideal dose verification system should be independent of the treatment-planning system (TPS). This paper describes the development and reproducibility evaluation of a Monte Carlo (MC)-based standard LINAC model as a preliminary requirement for independent verification of dose distributions. The BEAMnrc MC code is used for characterization of the 6-, 10- and 15-MV photon beams for a wide range of field sizes. The modeling of the LINAC head components is based on the specifications provided by the manufacturer. MC dose distributions are tuned to match Varian Golden Beam Data (GBD). For the reproducibility evaluation, the calculated beam data are compared with beam data measured at individual institutions. For all energies and field sizes, the MC and GBD agreed to within 1.0% for percentage depth doses (PDDs), 1.5% for beam profiles and 1.2% for total scatter factors (Scp). The reproducibility evaluation showed that the maximum average local differences were 1.3% and 2.5% for PDDs and beam profiles, respectively. MC and the institutions' mean Scp values agreed to within 2.0%. An MC-based standard LINAC model developed to independently verify dose distributions for QA of multi-institutional clinical trials and routine clinical practice has proven to be highly accurate and reproducible and can thus help ensure that prescribed doses delivered are consistent with the requirements of clinical protocols. © The Author 2014. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.

  18. Monte Carlo simulation of inverse geometry x-ray fluoroscopy using a modified MC-GPU framework.

    PubMed

    Dunkerley, David A P; Tomkowiak, Michael T; Slagowski, Jordan M; McCabe, Bradley P; Funk, Tobias; Speidel, Michael A

    2015-02-21

    Scanning-Beam Digital X-ray (SBDX) is a technology for low-dose fluoroscopy that employs inverse geometry x-ray beam scanning. To assist with rapid modeling of inverse geometry x-ray systems, we have developed a Monte Carlo (MC) simulation tool based on the MC-GPU framework. MC-GPU version 1.3 was modified to implement a 2D array of focal spot positions on a plane, with individually adjustable x-ray outputs, each producing a narrow x-ray beam directed toward a stationary photon-counting detector array. Geometric accuracy and blurring behavior in tomosynthesis reconstructions were evaluated from simulated images of a 3D arrangement of spheres. The artifact spread function from simulation agreed with experiment to within 1.6% (rRMSD). Detected x-ray scatter fraction was simulated for two SBDX detector geometries and compared to experiments. For the current SBDX prototype (10.6 cm wide by 5.3 cm tall detector), x-ray scatter fraction measured 2.8-6.4% (18.6-31.5 cm acrylic, 100 kV), versus 2.1-4.5% in MC simulation. Experimental trends in scatter versus detector size and phantom thickness were observed in simulation. For dose evaluation, an anthropomorphic phantom was imaged using regular and regional adaptive exposure (RAE) scanning. The reduction in kerma-area-product resulting from RAE scanning was 45% in radiochromic film measurements, versus 46% in simulation. The integral kerma calculated from TLD measurement points within the phantom was 57% lower when using RAE, versus 61% lower in simulation. This MC tool may be used to estimate tomographic blur, detected scatter, and dose distributions when developing inverse geometry x-ray systems.

  19. Validation of a Monte Carlo model used for simulating tube current modulation in computed tomography over a wide range of phantom conditions/challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.

    2014-11-01

    Purpose: Monte Carlo (MC) simulation methods have been widely used in patient dosimetry in computed tomography (CT), including estimating patient organ doses. However, most simulation methods have undergone a limited set of validations, often using homogeneous phantoms with simple geometries. As clinical scanning has become more complex and the use of tube current modulation (TCM) has become pervasive in the clinic, MC simulations should include these techniques in their methodologies and therefore should also be validated using a variety of phantoms with different shapes and material compositions to result in a variety of differently modulated tube current profiles. The purpose of this work is to perform the measurements and simulations to validate a Monte Carlo model under a variety of test conditions where fixed tube current (FTC) and TCM were used. Methods: A previously developed MC model for estimating dose from CT scans that models TCM, built using the platform of MCNPX, was used for CT dose quantification. In order to validate the suitability of this model to accurately simulate patient dose from FTC and TCM CT scan, measurements and simulations were compared over a wide range of conditions. Phantoms used for testing range from simple geometries with homogeneous composition (16 and 32 cm computed tomography dose index phantoms) to more complex phantoms including a rectangular homogeneous water equivalent phantom, an elliptical shaped phantom with three sections (where each section was a homogeneous, but different material), and a heterogeneous, complex geometry anthropomorphic phantom. Each phantom requires varying levels of x-, y- and z-modulation. Each phantom was scanned on a multidetector row CT (Sensation 64) scanner under the conditions of both FTC and TCM. Dose measurements were made at various surface and depth positions within each phantom. Simulations using each phantom were performed for FTC, detailed x–y–z TCM, and z-axis-only TCM to obtain dose estimates. This allowed direct comparisons between measured and simulated dose values under each condition of phantom, location, and scan to be made. Results: For FTC scans, the percent root mean square (RMS) difference between measurements and simulations was within 5% across all phantoms. For TCM scans, the percent RMS of the difference between measured and simulated values when using detailed TCM and z-axis-only TCM simulations was 4.5% and 13.2%, respectively. For the anthropomorphic phantom, the difference between TCM measurements and detailed TCM and z-axis-only TCM simulations was 1.2% and 8.9%, respectively. For FTC measurements and simulations, the percent RMS of the difference was 5.0%. Conclusions: This work demonstrated that the Monte Carlo model developed provided good agreement between measured and simulated values under both simple and complex geometries including an anthropomorphic phantom. This work also showed the increased dose differences for z-axis-only TCM simulations, where considerable modulation in the x–y plane was present due to the shape of the rectangular water phantom. Results from this investigation highlight details that need to be included in Monte Carlo simulations of TCM CT scans in order to yield accurate, clinically viable assessments of patient dosimetry.
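
    The percent RMS figure of merit quoted above can be sketched as follows; the arrays are illustrative placeholders for the measured and simulated dose values at the measurement points.

    import numpy as np

    def percent_rms_difference(measured, simulated):
        m, s = np.asarray(measured, float), np.asarray(simulated, float)
        rel = (s - m) / m * 100.0            # per-point percent difference
        return np.sqrt(np.mean(rel ** 2))    # RMS over all measurement points

    print(percent_rms_difference([2.0, 3.1, 4.2], [2.1, 3.0, 4.3]))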

  20. Evaluation of backscatter dose from internal lead shielding in clinical electron beams using EGSnrc Monte Carlo simulations.

    PubMed

    De Vries, Rowen J; Marsh, Steven

    2015-11-08

    Internal lead shielding is utilized during superficial electron beam treatments of the head and neck, such as lip carcinoma. Methods for predicting backscattered dose include the use of empirical equations or performing physical measurements. The accuracy of these empirical equations required verification for the local electron beams. In this study, a Monte Carlo model of a Siemens Artiste linac was developed for 6, 9, 12, and 15 MeV electron beams using the EGSnrc MC package. The model was verified against physical measurements to an accuracy of better than 2% and 2 mm. Multiple MC simulations of lead interfaces at different depths, corresponding to mean electron energies in the range of 0.2-14 MeV at the interfaces, were performed to calculate electron backscatter values. The simulated electron backscatter was compared with current empirical equations to ascertain their accuracy. The major finding was that the current set of backscatter equations does not accurately predict electron backscatter, particularly in the lower energy region. A new equation was derived which enables estimation of the electron backscatter factor at any depth upstream from the interface for the local treatment machines. The derived equation agreed to within 1.5% of the MC-simulated electron backscatter at the lead interface and upstream positions. Verification of the equation was performed by comparing to measurements of the electron backscatter factor using Gafchromic EBT2 film. These measurements gave a mean ratio of 0.997 ± 0.022 (1σ) relative to the predicted electron backscatter values. The new empirical equation presented can accurately estimate the electron backscatter factor from lead shielding in the range of 0.2 to 14 MeV for the local linacs.

  1. Monte Carlo simulation tool for online treatment monitoring in hadrontherapy with in-beam PET: A patient study.

    PubMed

    Fiorina, E; Ferrero, V; Pennazio, F; Baroni, G; Battistoni, G; Belcari, N; Cerello, P; Camarlinghi, N; Ciocca, M; Del Guerra, A; Donetti, M; Ferrari, A; Giordanengo, S; Giraudo, G; Mairani, A; Morrocchi, M; Peroni, C; Rivetti, A; Da Rocha Rolo, M D; Rossi, S; Rosso, V; Sala, P; Sportelli, G; Tampellini, S; Valvo, F; Wheadon, R; Bisogni, M G

    2018-05-07

    Hadrontherapy is a method for treating cancer with very targeted dose distributions and enhanced radiobiological effects. To fully exploit these advantages, in vivo range monitoring systems are required. These devices measure, preferably during the treatment, the secondary radiation generated by the beam-tissue interactions. However, since correlation of the secondary radiation distribution with the dose is not straightforward, Monte Carlo (MC) simulations are very important for treatment quality assessment. The INSIDE project constructed an in-beam PET scanner to detect signals generated by the positron-emitting isotopes resulting from projectile-target fragmentation. In addition, a FLUKA-based simulation tool was developed to predict the corresponding reference PET images using a detailed scanner model. The INSIDE in-beam PET was used to monitor two consecutive proton treatment sessions on a patient at the Italian Center for Oncological Hadrontherapy (CNAO). The reconstructed PET images were updated every 10 s, providing near real-time quality assessment. By halfway through the treatment, the measured PET images already had sufficient statistics to be compared with the simulations, with average differences in the activity range of less than 2.5 mm along the beam direction. Without taking into account any preferential direction, differences within 1 mm were found. In this paper, the INSIDE MC simulation tool is described and the results of the first in vivo agreement evaluation are reported. These results have justified a clinical trial, in which the MC simulation tool will be used on a daily basis to study the compliance tolerances between the measured and simulated PET images. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  2. Monte Carlo Neutrino Transport through Remnant Disks from Neutron Star Mergers

    NASA Astrophysics Data System (ADS)

    Richers, Sherwood; Kasen, Daniel; O'Connor, Evan; Fernández, Rodrigo; Ott, Christian D.

    2015-11-01

    We present Sedonu, a new open source, steady-state, special relativistic Monte Carlo (MC) neutrino transport code, available at bitbucket.org/srichers/sedonu. The code calculates the energy- and angle-dependent neutrino distribution function on fluid backgrounds of any number of spatial dimensions, calculates the rates of change of fluid internal energy and electron fraction, and solves for the equilibrium fluid temperature and electron fraction. We apply this method to snapshots from two-dimensional simulations of accretion disks left behind by binary neutron star mergers, varying the input physics and comparing to the results obtained with a leakage scheme for the cases of a central black hole and a central hypermassive neutron star. Neutrinos are guided away from the densest regions of the disk and escape preferentially around 45° from the equatorial plane. Neutrino heating is strengthened by MC transport a few scale heights above the disk midplane near the innermost stable circular orbit, potentially leading to a stronger neutrino-driven wind. Neutrino cooling in the dense midplane of the disk is stronger when using MC transport, leading to a globally higher cooling rate by a factor of a few and a larger leptonization rate by an order of magnitude. We calculate neutrino pair annihilation rates and estimate that an energy of 2.8 × 10⁴⁶ erg is deposited within 45° of the symmetry axis over 300 ms when a central black hole is present. Similarly, 1.9 × 10⁴⁸ erg is deposited over 3 s when a hypermassive neutron star sits at the center, but neither estimate is likely to be sufficient to drive a gamma-ray burst jet.

  3. Evaluation of backscatter dose from internal lead shielding in clinical electron beams using EGSnrc Monte Carlo simulations

    PubMed Central

    Marsh, Steven

    2015-01-01

    Internal lead shielding is utilized during superficial electron beam treatments of the head and neck, such as lip carcinoma. Methods for predicting backscattered dose include the use of empirical equations or performing physical measurements. The accuracy of these empirical equations required verification for the local electron beams. In this study, a Monte Carlo model of a Siemens Artiste linac was developed for 6, 9, 12, and 15 MeV electron beams using the EGSnrc MC package. The model was verified against physical measurements to an accuracy of better than 2% and 2 mm. Multiple MC simulations of lead interfaces at different depths, corresponding to mean electron energies in the range of 0.2–14 MeV at the interfaces, were performed to calculate electron backscatter values. The simulated electron backscatter was compared with current empirical equations to ascertain their accuracy. The major finding was that the current set of backscatter equations does not accurately predict electron backscatter, particularly in the lower-energy region. A new equation was derived which enables estimation of the electron backscatter factor at any depth upstream from the interface for the local treatment machines. The derived equation agreed to within 1.5% of the MC-simulated electron backscatter at the lead interface and upstream positions. Verification of the equation was performed by comparing to measurements of the electron backscatter factor using Gafchromic EBT2 film. These measurements show a mean value of 0.997 ± 0.022 (1σ) relative to the predicted electron backscatter values. The new empirical equation presented can accurately estimate the electron backscatter factor from lead shielding in the range of 0.2 to 14 MeV for the local linacs. PACS numbers: 87.53.Bn, 87.55.K-, 87.56.bd PMID:26699566
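
    The electron backscatter factor (EBF) in work like this is conventionally the ratio of the dose with the lead interface present to the dose without it, and the abstract describes predicting its falloff with distance upstream of the interface. The sketch below shows how such a factor could be extracted and fitted from paired MC tallies; the synthetic data and the single-exponential form are assumptions for illustration, not the paper's derived equation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for EBF values derived from paired MC runs (one with a
# lead slab downstream, one homogeneous water run); all numbers hypothetical.
d_up = np.linspace(0.0, 2.0, 21)                       # cm upstream of the interface
ebf_true = 1.0 + 0.55 * np.exp(-2.3 * d_up)            # assumed falloff, illustration only
rng = np.random.default_rng(1)
ebf_mc = ebf_true + rng.normal(0.0, 0.005, d_up.size)  # "MC" EBF with statistical noise

def model(d, a, b):
    # Assumed single-exponential upstream falloff, EBF(d) = 1 + a*exp(-b*d);
    # the paper derives its own equation, this form is only a placeholder.
    return 1.0 + a * np.exp(-b * d)

(a, b), _ = curve_fit(model, d_up, ebf_mc, p0=(0.5, 2.0))
print(f"fitted EBF at the interface: {model(0.0, a, b):.3f}")
```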

  4. Fast Monte Carlo simulation of a dispersive sample on the SEQUOIA spectrometer at the SNS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granroth, Garrett E; Chen, Meili; Kohl, James Arthur

    2007-01-01

    Simulation of an inelastic scattering experiment, with a sample and a large pixelated detector, usually requires days of computation because of finite processor speeds. We report simulations of an SNS (Spallation Neutron Source) instrument, SEQUOIA, that reduce the time to less than 2 hours by using parallelization and the resources of the TeraGrid. SEQUOIA is a fine-resolution (ΔE/Ei ~ 1%) chopper spectrometer under construction at the SNS. It utilizes incident energies from Ei = 20 meV to 2 eV and will have ~144,000 detector pixels covering 1.6 sr of solid angle. The full spectrometer, including a 1-D dispersive sample, has been simulated using the Monte Carlo package McStas. This paper summarizes the method of parallelization for, and results from, these simulations. In addition, limitations of and proposed improvements to current analysis software are discussed.

  5. Jet energy calibration at the LHC

    DOE PAGES

    Schwartzman, Ariel

    2015-11-10

    Jets are among the most prominent physics signatures of high-energy proton–proton (p–p) collisions at the Large Hadron Collider (LHC). They are key physics objects for precision measurements and searches for new phenomena. This review provides an overview of the reconstruction and calibration of jets at the LHC during its first Run. ATLAS and CMS developed different approaches for the reconstruction of jets but use similar methods for the energy calibration. ATLAS reconstructs jets using input signals from its calorimeters and uses charged-particle tracks to refine the energy measurement and suppress the effects of multiple p–p interactions (pileup). CMS, instead, combines calorimeter and tracking information to build jets from particle-flow objects. Jets are calibrated using Monte Carlo (MC) simulations, and a residual in situ calibration derived from collision data is applied to correct for the differences in jet response between data and Monte Carlo.

  6. Build-up and surface dose measurements on phantoms using micro-MOSFET in 6 and 10 MV x-ray beams and comparisons with Monte Carlo calculations.

    PubMed

    Xiang, Hong F; Song, Jun S; Chin, David W H; Cormack, Robert A; Tishler, Roy B; Makrigiorgos, G Mike; Court, Laurence E; Chin, Lee M

    2007-04-01

    This work investigates the application and accuracy of micro-MOSFET detectors for superficial dose measurement in clinically used MV x-ray beams. The dose response of micro-MOSFETs in the build-up region and on the surface under MV x-ray beams was measured and compared to Monte Carlo calculations. First, percentage depth doses were measured with the micro-MOSFET under 6 and 10 MV beams at normal incidence onto a flat solid-water phantom. Micro-MOSFET data were compared with measurements from a parallel-plate ionization chamber and with Monte Carlo dose calculations in the build-up region. Then, percentage depth doses were measured for oblique beams at 0°–80° onto the flat solid-water phantom with the micro-MOSFET placed at depths of 2 cm, 1 cm, and 2 mm below the surface. Measurements were compared to Monte Carlo calculations under these settings. Finally, measurements were performed with the micro-MOSFET embedded in the first 1 mm layer of bolus placed on a flat phantom and on a curved phantom of semi-cylindrical shape. Results were compared to the superficial dose calculated from Monte Carlo for a thin layer extending from the surface to a depth of 2 mm. The results were: (1) Comparison of measurements with MC calculations in the build-up region showed that the micro-MOSFET has a water-equivalent thickness (WET) of 0.87 mm for the 6 MV beam and 0.99 mm for the 10 MV beam from the flat side, and a WET of 0.72 mm for the 6 MV beam and 0.76 mm for the 10 MV beam from the epoxy side. (2) For normal beam incidence, percentage depth doses agree within 3%-5% among micro-MOSFET measurements, parallel-plate ionization chamber measurements, and MC calculations. (3) For oblique incidence on the flat phantom with the micro-MOSFET placed at depths of 2 cm, 1 cm, and 2 mm, measurements were consistent with MC calculations within a typical uncertainty of 3%-5%. (4) For oblique incidence on the flat phantom and the curved-surface phantom, measurements with the micro-MOSFET placed at 1.0 mm agree with the MC calculations within 6%, including micro-MOSFET measurement uncertainties of 2%-3% (1 standard deviation), MOSFET angular dependence of 3.0%-3.5%, and 1%-2% systematic error due to phantom setup geometry asymmetry. Micro-MOSFETs can be used for skin dose measurements in 6 and 10 MV beams with an estimated accuracy of ±6%.

  7. Development of virtual patient models for permanent implant brachytherapy Monte Carlo dose calculations: interdependence of CT image artifact mitigation and tissue assignment.

    PubMed

    Miksys, N; Xu, C; Beaulieu, L; Thomson, R M

    2015-08-07

    This work investigates and compares CT image metallic artifact reduction (MAR) methods and tissue assignment schemes (TAS) for the development of virtual patient models for permanent-implant brachytherapy Monte Carlo (MC) dose calculations. Four MAR techniques are investigated to mitigate seed artifacts in post-implant CT images of a homogeneous phantom and eight prostate patients: a raw-sinogram approach using the original CT scanner data and three methods (simple threshold replacement (STR), 3D median filter, and virtual sinogram) requiring only the reconstructed CT image. Virtual patient models are developed using six TAS, ranging from the AAPM-ESTRO-ABS TG-186 basic approach of assigning uniform-density tissues (resulting in a model that does not depend on MAR) to more complex models assigning prostate, calcification, and mixtures of prostate and calcification using CT-derived densities. The EGSnrc user code BrachyDose is employed to calculate dose distributions. All four MAR methods eliminate bright seed-spot artifacts, and the image-based methods mitigate artifacts comparably well to the raw-sinogram approach. However, each MAR technique has limitations: STR cannot mitigate low-CT-number artifacts, the median filter blurs the image, which challenges the preservation of tissue heterogeneities, and both sinogram approaches introduce new streaks. Large local dose differences are generally due to differences in voxel tissue type rather than mass density. The largest differences in target dose metrics (D90, V100, V150), over 50% lower than for the other models, occur when uncorrected CT images are used with TAS that consider calcifications. Metrics found using models that include calcifications are generally a few percent lower than for prostate-only models. Generally, metrics from any MAR method and any TAS that considers calcifications agree within 6%. Overall, the studied MAR methods and TAS show promise for further retrospective MC dose calculation studies of various permanent-implant brachytherapy treatments.
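
    Of the image-based MAR methods listed, simple threshold replacement (STR) is the easiest to illustrate: voxels whose CT numbers exceed a threshold near the seeds are replaced with a plausible soft-tissue value. A minimal sketch follows; the threshold and replacement HU are assumptions, not the paper's settings.

```python
import numpy as np

def str_mar(ct_hu, bright_threshold=1500.0, replacement_hu=40.0):
    """Simple threshold replacement: clamp bright seed artifacts to soft tissue.

    A sketch of the STR idea only; it cannot fix the dark (low-HU) streaks,
    which is exactly the limitation noted in the abstract.
    """
    out = ct_hu.copy()
    out[out > bright_threshold] = replacement_hu   # assumed prostate-like HU
    return out

rng = np.random.default_rng(0)
ct = rng.normal(35.0, 15.0, size=(8, 64, 64))      # toy soft-tissue volume [HU]
ct[4, 30:34, 30:34] = 3000.0                       # synthetic bright seed spot
print(float(str_mar(ct).max()))                    # bright artifact removed
```

    The 3D median filter alternative is essentially a one-liner (e.g. scipy.ndimage.median_filter(ct, size=3)), at the cost of the blurring noted above.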

  8. Dosimetric response of variable-size cavities in photon-irradiated media and the behaviour of the Spencer-Attix cavity integral with increasing Δ.

    PubMed

    Kumar, Sudhir; Deshpande, Deepak D; Nahum, Alan E

    2016-04-07

    Cavity theory is fundamental to understanding and predicting dosimeter response. Conventional cavity theories have been shown to be consistent with one another by deriving the electron (+positron) and photon fluence spectra with the FLURZnrc user code (EGSnrc Monte Carlo system) in large volumes under quasi-CPE for photon beams of 1 MeV and 10 MeV in three materials (water, aluminium and copper), and then using these fluence spectra to evaluate and inter-compare the Bragg-Gray, Spencer-Attix and 'large photon' cavity integrals. The behaviour of the 'Spencer-Attix dose' (aka restricted cema), D_S-A(Δ), in a 1-MeV photon field in water has been investigated for a wide range of values of the cavity-size parameter Δ: D_S-A(Δ) decreases far below the Monte Carlo dose (D_MC) for Δ greater than ≈ 30 keV due to secondary electrons with starting energies below Δ not being 'counted'. We show that for a quasi-scatter-free geometry, D_S-A(Δ)/D_MC is closely equal to the proportion of energy transferred to Compton electrons with initial (kinetic) energies above Δ, derived from the Klein-Nishina (K-N) differential cross section. D_S-A(Δ)/D_MC can be used to estimate the maximum size of a detector behaving as a Bragg-Gray cavity in a photon-irradiated medium as a function of photon-beam quality (under quasi-CPE); e.g. a typical air-filled ion chamber is 'Bragg-Gray' at (monoenergetic) beam energies ≥ 260 keV. Finally, by varying the density of a silicon cavity (of 2.26 mm diameter and 2.0 mm thickness) in water, the response of different cavity 'sizes' was simulated; the Monte-Carlo-derived ratio D_w/D_Si for 6 MV and 15 MV photons varied from very close to the Spencer-Attix value at 'gas' densities, agreed well with Burlin cavity theory as ρ increased, and approached 'large photon' behaviour for ρ ≈ 10 g cm⁻³. The estimate of Δ for the Si cavity was improved by incorporating a Monte-Carlo-derived correction for electron 'detours'. Excellent agreement was obtained between the Burlin 'd' factor for the Si cavity and D_S-A(Δ)/D_MC at different (detour-corrected) Δ, thereby suggesting a further application for the D_S-A(Δ)/D_MC ratio.
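
    For reference, the Spencer-Attix cavity integral (restricted cema) discussed above is conventionally written, with its track-end term, as

```latex
D_{\text{S-A}}(\Delta) \;=\;
\int_{\Delta}^{E_{\max}} \Phi_E(E)\,\frac{L_{\Delta}(E)}{\rho}\,\mathrm{d}E
\;+\;
\Phi_E(\Delta)\,\frac{S(\Delta)}{\rho}\,\Delta ,
```

    where Φ_E is the total electron (+positron) fluence spectrum, L_Δ/ρ the restricted mass collision stopping power, and the second term accounts for electrons dropping below Δ. Secondary electrons that start below Δ never enter the integral, which is why D_S-A(Δ)/D_MC falls increasingly below unity as Δ grows, the behaviour the authors exploit.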

  9. Photon beam dosimetry with EBT3 film in heterogeneous regions: Application to the evaluation of dose-calculation algorithms

    NASA Astrophysics Data System (ADS)

    Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Byungdo; Cheong, Kwang-Ho

    2014-12-01

    For a better understanding of the accuracy of state-of-the-art radiation therapies, 2-dimensional dosimetry in a patient-like environment is helpful. Therefore, the dosimetry of EBT3 films in non-water-equivalent tissues was investigated, and the accuracy of commercially used dose-calculation algorithms was evaluated against EBT3 measurements. Dose distributions were measured with EBT3 films in an in-house-designed phantom that contained a lung or a bone substitute, i.e., an air cavity (3 × 3 × 3 cm³) or teflon (2 × 2 × 2 cm³ or 3 × 3 × 3 cm³), respectively. The phantom was irradiated with 6-MV X-rays with field sizes of 2 × 2, 3 × 3, and 5 × 5 cm². The accuracy of EBT3 dosimetry was evaluated by comparing the measured dose with the dose obtained from Monte Carlo (MC) simulations. Dose to the bone-equivalent material was obtained by multiplying the EBT3 measurements by the stopping-power ratio (SPR). The EBT3 measurements were then compared with the predictions from four algorithms: Monte Carlo (MC) in iPlan, Acuros XB (AXB) and the analytical anisotropic algorithm (AAA) in Eclipse, and superposition-convolution (SC) in Pinnacle. For the air cavity, the EBT3 measurements agreed with the MC calculation to within 2% on average. For teflon, the EBT3 measurements differed on average by 9.297% (±0.9229%) from the Monte Carlo calculation before dose conversion, and by 0.717% (±0.6546%) after applying the SPR. The doses calculated with the MC, AXB, AAA, and SC algorithms for the air cavity differed from the EBT3 measurements on average by 2.174%, 2.863%, 18.01%, and 8.391%, respectively; for teflon, the average differences were 3.447%, 4.113%, 7.589%, and 5.102%. The EBT3 measurements corrected with the SPR agreed with the MC results to within 2% on average, both within and beyond the heterogeneities, indicating that EBT3 dosimetry can be used in heterogeneous media. The MC and AXB dose-calculation algorithms exhibited clinically acceptable accuracy (<5%) in heterogeneities.

  10. Instrumental resolution of the chopper spectrometer 4SEASONS evaluated by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Kajimoto, Ryoichi; Sato, Kentaro; Inamura, Yasuhiro; Fujita, Masaki

    2018-05-01

    We performed simulations of the resolution function of the 4SEASONS spectrometer at J-PARC by using the Monte Carlo simulation package McStas. The simulations showed reasonably good agreement with analytical calculations of energy and momentum resolutions by using a simplified description. We implemented new functionalities in Utsusemi, the standard data analysis tool used in 4SEASONS, to enable visualization of the simulated resolution function and predict its shape for specific experimental configurations.

  11. Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4

    2015-01-15

    Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations at a subset of the projection angles in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and once the fit of S_F reaches a specified goodness of fit (GOF), the simulations are terminated. The fit S_F is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were pelvis scans of a phantom and a patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, to a reconstruction corrected with a constant scatter estimate, and to a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a suitable GOF metric, correlating strongly with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass-filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor of between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
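
    The fitting function described, a frequency-limited sum of sines and cosines obtained via a low-pass-filtered FFT, amounts to keeping only the lowest spatial frequencies of the aggregated MC scatter map. A minimal 2D sketch with an assumed frequency cutoff and Pearson's r as the GOF metric follows (toy data, not the CMCF implementation).

```python
import numpy as np

def lowpass_fit(s_mc, keep=4):
    """Fit a scatter map to a frequency-limited sum of sines/cosines by
    zeroing all but the lowest `keep` spatial frequencies of its 2D FFT."""
    F = np.fft.fft2(s_mc)
    mask = np.zeros_like(F, dtype=bool)
    mask[:keep, :keep] = mask[:keep, -keep:] = True      # low +/- frequency corners
    mask[-keep:, :keep] = mask[-keep:, -keep:] = True
    return np.fft.ifft2(np.where(mask, F, 0)).real

rng = np.random.default_rng(2)
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
smooth = 1.0 + 0.3 * np.sin(2 * np.pi * x) * np.cos(np.pi * y)  # toy scatter field
s_mc = smooth + rng.normal(0, 0.05, smooth.shape)               # noisy MC estimate

s_f = lowpass_fit(s_mc)
r = np.corrcoef(s_f.ravel(), s_mc.ravel())[0, 1]   # Pearson r as the GOF metric
print(f"Pearson r = {r:.3f}")                      # stop simulating once r is acceptable
```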

  12. MCMEG: Simulations of both PDD and TPR for 6 MV LINAC photon beam using different MC codes

    NASA Astrophysics Data System (ADS)

    Fonseca, T. C. F.; Mendes, B. M.; Lacerda, M. A. S.; Silva, L. A. C.; Paixão, L.; Bastos, F. M.; Ramirez, J. V.; Junior, J. P. R.

    2017-11-01

    The Monte Carlo Modelling Expert Group (MCMEG) is an expert network specializing in Monte Carlo radiation transport and in modelling and simulation applied to radiation protection and dosimetry research. For its first inter-comparison task, the group launched an exercise to model and simulate a 6 MV LINAC photon beam using the Monte Carlo codes available within their laboratories and to validate the simulated results by comparing them with experimental measurements carried out at the National Cancer Institute (INCA) in Rio de Janeiro, Brazil. The experimental measurements were performed using an ionization chamber with calibration traceable to a Secondary Standard Dosimetry Laboratory (SSDL). The detector was immersed in a water phantom at different depths and was irradiated with a radiation field size of 10×10 cm². This exposure setup was used to determine the dosimetric parameters Percentage Depth Dose (PDD) and Tissue Phantom Ratio (TPR). The validation process compares the MC-calculated results to the experimentally measured PDD20,10 and TPR20,10. Simulations were performed reproducing the experimental TPR20,10 quality index, which provides a satisfactory description of both the PDD curve and the transverse profiles at the two depths measured. This paper reports in detail the modelling process using the MCNPX, MCNP6, EGSnrc and PENELOPE Monte Carlo codes, the source and tally descriptions, the validation processes and the results.
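
    Both beam-quality indices are simple ratios of doses read off the measured curves. The sketch below computes PDD20,10 from a toy depth-dose curve and converts it with the IAEA TRS-398 empirical relation TPR20,10 ≈ 1.2661·PDD20,10 − 0.0595; the curve is synthetic, and the relation is the standard approximation for a 10×10 cm² field, not the MCMEG data.

```python
import numpy as np

# Toy 6 MV depth-dose curve: linear build-up to d_max = 1.5 cm, then exponential falloff.
depth = np.linspace(0.0, 30.0, 301)                    # depth in water [cm]
pdd = np.where(depth < 1.5, depth / 1.5, np.exp(-0.05 * (depth - 1.5))) * 100.0

d10 = np.interp(10.0, depth, pdd)          # dose at 10 cm depth
d20 = np.interp(20.0, depth, pdd)          # dose at 20 cm depth
pdd_20_10 = d20 / d10                      # PDD20,10 beam-quality index
tpr_20_10 = 1.2661 * pdd_20_10 - 0.0595    # IAEA TRS-398 empirical conversion
print(f"PDD20,10 = {pdd_20_10:.3f}, TPR20,10 = {tpr_20_10:.3f}")
```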

  13. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions. II. A simplified implementation.

    PubMed

    Tao, Guohua; Miller, William H

    2012-09-28

    An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points and their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor, which is computationally expensive, especially for large systems, is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H2 system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.

  14. Episcleral eye plaque dosimetry comparison for the Eye Physics EP917 using Plaque Simulator and Monte Carlo simulation

    PubMed Central

    Amoush, Ahmad; Wilkinson, Douglas A.

    2015-01-01

    This work is a comparative study of the dosimetry calculated by Plaque Simulator, a treatment planning system for eye plaque brachytherapy, against the dosimetry calculated using Monte Carlo simulation for an Eye Physics model EP917 eye plaque. Monte Carlo (MC) simulation using MCNPX 2.7 was used to calculate the central-axis dose in water for an EP917 eye plaque fully loaded with 17 IsoAid Advantage 125I seeds. In addition, the dosimetry parameters Λ, gL(r), and F(r,θ) were calculated for the IsoAid Advantage model IAI-125 125I seed and benchmarked against published data. Bebig Plaque Simulator (PS) v5.74 was used to calculate the central-axis dose based on the AAPM Updated Task Group 43 (TG-43U1) dose formalism. The calculated central-axis doses from MC and PS were then compared. When the MC dosimetry parameters for the IsoAid Advantage 125I seed were compared with the consensus values, Λ agreed with the consensus value to within 2.3%. However, much larger differences were found between the MC-calculated gL(r) and F(r,θ) and the consensus values. The differences between MC-calculated dosimetry parameters are much smaller when compared with recently published data. The differences between the calculated central-axis absolute doses from MC and PS ranged from 5% to 10% for distances between 1 and 12 mm from the outer scleral surface. When the dosimetry parameters for the 125I seed from this study were used in PS, the calculated absolute central-axis dose differences were reduced by 2.3% at depths of 4 to 12 mm from the outer scleral surface. We conclude that PS adequately models the central dose profile of this plaque using its defaults for the IsoAid model IAI-125 at distances of 1 to 7 mm from the outer scleral surface. However, improved dose accuracy can be obtained by using updated dosimetry parameters for the IsoAid model IAI-125 125I seed. PACS number: 87.55.K- PMID:26699577
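
    The parameters Λ, gL(r), and F(r,θ) enter through the AAPM TG-43U1 two-dimensional dose-rate formalism that Plaque Simulator evaluates:

```latex
\dot{D}(r,\theta) \;=\; S_K \,\Lambda\,
\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\,
g_L(r)\, F(r,\theta),
```

    where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, g_L the radial dose function, and F the 2D anisotropy function, with the reference point at r₀ = 1 cm, θ₀ = 90°. Updating Λ, gL(r), and F(r,θ) for a specific seed model, as suggested above, changes the computed dose without altering the formalism itself.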

  15. Analytical, experimental, and Monte Carlo system response matrix for pinhole SPECT reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es; Pino, Francisco; Silva-Rodríguez, Jesús

    2014-03-15

    Purpose: To assess the performance of two approaches to the system response matrix (SRM) calculation in pinhole single photon emission computed tomography (SPECT) reconstruction. Methods: Evaluation was performed using experimental data from a low-magnification pinhole SPECT system that consisted of a rotating flat detector with a monolithic scintillator crystal. The SRM was computed following two approaches, based on Monte Carlo simulations (MC-SRM) and on analytical techniques in combination with an experimental characterization (AE-SRM). The spatial response of the system, obtained using the two approaches, was compared with experimental data. The effect of the MC-SRM and AE-SRM approaches on the reconstructed image was assessed in terms of image contrast, signal-to-noise ratio, image quality, and spatial resolution. To this end, acquisitions were carried out using a hot cylinder phantom (consisting of five fillable rods with diameters of 5, 4, 3, 2, and 1 mm and a uniform cylindrical chamber) and a custom-made Derenzo phantom, with center-to-center distances between adjacent rods of 1.5, 2.0, and 3.0 mm. Results: Good agreement was found for the spatial response of the system between measured data and results derived from MC-SRM and AE-SRM. Only minor differences were found for point sources at distances smaller than the radius of rotation and at large incidence angles. Assessment of the effect on the reconstructed image showed a similar contrast for both approaches, with values higher than 0.9 for rod diameters greater than 1 mm and higher than 0.8 for a rod diameter of 1 mm. The comparison in terms of image quality showed that all rods in the different sections of the custom-made Derenzo phantom could be distinguished. The spatial resolution (FWHM) was 0.7 mm at iteration 100 using both approaches. The SNR was lower for images reconstructed using the MC-SRM than for those reconstructed using the AE-SRM, indicating that the AE-SRM deals better with the projection noise than the MC-SRM. Conclusions: The authors' findings show that both approaches provide good solutions to the problem of calculating the SRM in pinhole SPECT reconstruction. The AE-SRM was faster to create and handled the projection noise better than the MC-SRM, but it required a tedious experimental characterization of the intrinsic detector response. Creation of the MC-SRM required longer computation time and handled the projection noise worse, but it inherently incorporates extensive modeling of the system, so an experimental characterization was not required.
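
    Once an SRM is available from either approach, it enters the iterative reconstruction as the forward and back projector. The abstract reports results "at iteration 100" without naming the algorithm; the sketch below assumes an MLEM-style update purely for illustration, with a dense toy matrix standing in for the SRM.

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """MLEM reconstruction with a (dense, toy) system response matrix A.

    A[i, j] = probability that a decay in voxel j is detected in bin i.
    MLEM is an assumption here; the paper does not name its algorithm.
    """
    x = np.ones(A.shape[1])                 # flat initial image
    sens = A.sum(axis=0)                    # detection sensitivity per voxel
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(3)
A = rng.random((200, 50)); A /= A.sum(axis=0)   # toy normalized SRM
x_true = np.zeros(50); x_true[20:25] = 10.0     # 1D analogue of a hot rod
y = rng.poisson(A @ x_true).astype(float)       # noisy projections
print(np.round(mlem(A, y)[18:27], 1))
```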

  16. MO-G-17A-06: Kernel Based Dosimetry for 90Y Microsphere Liver Therapy Using 90Y Bremsstrahlung SPECT/CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikell, J; Siman, W; Kappadath, S

    2014-06-15

    Purpose: 90Y microsphere therapy in liver presents a situation where beta transport is dominant and the tissue is relatively homogeneous. We compare voxel-based absorbed doses from a 90Y kernel to Monte Carlo (MC) calculations using quantitative 90Y bremsstrahlung SPECT/CT as the source distribution. Methods: Liver, normal liver, and tumors were delineated by an interventional radiologist using contrast-enhanced CT registered with 90Y SPECT/CT scans for 14 therapies. The right lung was segmented via region growing. The kernel was generated for 1.04 g/cc soft tissue with a 4.8 mm voxel matching the SPECT. MC simulation materials included air, lung, soft tissue, and bone with varying densities. We report the percent difference between kernel and MC (%Δ(K,MC)) for mean absorbed dose, D70, and V20Gy in total liver, normal liver, tumors, and right lung. We also report %Δ(K,MC) for heterogeneity metrics: coefficient of variation (COV) and D10/D90. The impact of spatial resolution (0, 10, 20 mm FWHM) and lung shunt fraction (LSF) (1, 5, 10, 20%) on the accuracy of MC and kernel doses near the liver-lung interface was modeled in 1D. We report the distance from the interface where errors become <10% of unblurred MC as d10(side of interface, dose calculation, FWHM blurring, LSF). Results: The %Δ(K,MC) for mean dose, D70, and V20Gy in tumor and liver was <7%, while right-lung differences varied from 60%–90%. The %Δ(K,MC) for COV was <4.8% for tumor and liver and <54% for the right lung. The %Δ(K,MC) for D10/D90 was <5% for 22/23 tumors. d10(liver,MC,10,1–20) were <9 mm and d10(liver,MC,20,1–20) were <15 mm; both agreed within 3 mm with the kernel. d10(lung,MC,10,20), d10(lung,MC,10,1), d10(lung,MC,20,20), and d10(lung,MC,20,1) were 6, 25, 15, and 34 mm, respectively. Kernel calculations on blurred distributions in lung had errors >10%. Conclusions: Liver and tumor voxel doses from the 90Y kernel and MC agree within 7%. Large differences exist between the two methods in the right lung. Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under Award Number R01CA138986. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
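
    Kernel-based voxel dosimetry of the kind compared here is a convolution of the cumulated-activity map with a precomputed voxel dose kernel. A minimal sketch follows; the kernel weights are placeholders, not a published 90Y S-value kernel.

```python
import numpy as np
from scipy.ndimage import convolve

# Toy 5x5x5 voxel dose kernel (placeholder weights only; a real 90Y kernel
# would come from an MC calculation in uniform 1.04 g/cc soft tissue).
k = np.zeros((5, 5, 5))
k[2, 2, 2] = 1.0
for (dz, dy, dx), w in {(1, 0, 0): 0.05, (0, 1, 0): 0.05, (0, 0, 1): 0.05}.items():
    k[2 + dz, 2 + dy, 2 + dx] = k[2 - dz, 2 - dy, 2 - dx] = w
k /= k.sum()

activity = np.zeros((20, 20, 20))        # cumulated-activity map from SPECT/CT
activity[8:12, 8:12, 8:12] = 1e6         # toy tumour uptake [Bq s]

dose = convolve(activity, k, mode="constant")   # voxel absorbed dose (arb. units)
print(f"mean 'tumour' dose: {dose[8:12, 8:12, 8:12].mean():.0f}")
```

    Because the kernel bakes in a uniform soft-tissue medium, it tracks MC in near-homogeneous liver but not in low-density lung, consistent with the differences reported above.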

  17. [Study on Application of NIR Spectral Information Screening in Identification of Maca Origin].

    PubMed

    Wang, Yuan-zhong; Zhao, Yan-li; Zhang, Ji; Jin, Hang

    2016-02-01

    The medicinal and edible plant Maca is rich in various nutrients and has great medicinal value. Based on near-infrared diffuse reflectance spectra, 139 Maca samples collected from Peru and Yunnan were used to identify their geographical origins. Multiplicative signal correction (MSC) coupled with a second derivative (SD) and a Norris derivative filter (ND) was employed in spectral pretreatment. The spectral range (7,500-4,061 cm⁻¹) was chosen by spectrum standard deviation. Combined with principal component analysis-Mahalanobis distance (PCA-MD), the appropriate number of principal components was selected as 5. Based on the spectral range and the number of principal components selected, two abnormal samples were eliminated by a modular group iterative singular sample diagnosis method. Then, four methods were used to filter the spectral variable information: competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MC-UVE), genetic algorithm (GA) and subwindow permutation analysis (SPA). The filtered spectral variable information was evaluated by model population analysis (MPA). The results showed that RMSECV(SPA) > RMSECV(CARS) > RMSECV(MC-UVE) > RMSECV(GA), at 2.14, 2.05, 2.02, and 1.98, with 250, 240, 250 and 70 spectral variables retained, respectively. According to the spectral variables filtered, partial least squares discriminant analysis (PLS-DA) was used to build the model, with a random selection of 97 samples as the training set and the other 40 samples as the validation set. The results showed that, for R²: GA > MC-UVE > CARS > SPA; RMSEC and RMSEP: GA < MC-UVE < CARS

  18. Estimating the Standard Error of Robust Regression Estimates.

    DTIC Science & Technology

    1987-03-01

    error is O(n^{4/5}). In another Monte Carlo study, McKean and Schrader (1984) found that the tests resulting from studentizing ... with d = O(n^{4/5}) ... Sheather, S. J. and McKean, J. W. (1987). A comparison of testing and ... Wiley, New York. Welsch, R. E. (1980). Regression Sensitivity Analysis and Bounded-Influence Estimation, in Evaluation of Econometric Models, eds. J...

  19. Theoretical study of the ammonia nitridation rate on an Fe (100) surface: A combined density functional theory and kinetic Monte Carlo study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeo, Sang Chul; Lee, Hyuck Mo, E-mail: hmlee@kaist.ac.kr; Lo, Yu Chieh

    2014-10-07

    Ammonia (NH3) nitridation on an Fe surface was studied by combining density functional theory (DFT) and kinetic Monte Carlo (kMC) calculations. A DFT calculation was performed to obtain the energy barriers (E_b) of the relevant elementary processes. The full mechanism of the exact reaction path was divided into five steps (adsorption, dissociation, surface migration, penetration, and diffusion) on an Fe (100) surface pre-covered with nitrogen. The energy barrier (E_b) depended on the N surface coverage. The DFT results were subsequently employed as a database for the kMC simulations. We then evaluated the NH3 nitridation rate on the N pre-covered Fe surface. To determine the conditions necessary for a rapid NH3 nitridation rate, eight reaction events were considered in the kMC simulations: adsorption, desorption, dissociation, reverse dissociation, surface migration, penetration, reverse penetration, and diffusion. This study provides a real-time-scale simulation of NH3 nitridation influenced by nitrogen surface coverage that allowed us to theoretically determine a nitrogen coverage (0.56 ML) suitable for rapid NH3 nitridation. In this way, we were able to reveal the coverage dependence of the nitridation reaction using the combined DFT and kMC simulations.
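
    A rejection-free kMC step of the kind used here draws the next event and the elapsed time from rates built from the DFT barriers, e.g. Arrhenius rates r = ν·exp(−E_b / k_B T). A minimal sketch with an assumed attempt frequency and illustrative barriers (not the paper's DFT values):

```python
import numpy as np

K_B = 8.617e-5          # Boltzmann constant [eV/K]
NU = 1e13               # assumed attempt frequency [1/s]
T = 700.0               # temperature [K], illustrative

# Illustrative barriers [eV] for a few of the event types named in the abstract.
barriers = {"dissociation": 1.1, "migration": 0.7, "penetration": 1.3, "diffusion": 0.9}
rates = {event: NU * np.exp(-eb / (K_B * T)) for event, eb in barriers.items()}

rng = np.random.default_rng(4)
events, r = zip(*rates.items())
r = np.array(r)
total = r.sum()

t, history = 0.0, []
for _ in range(5):                                  # a few rejection-free kMC steps
    history.append(rng.choice(events, p=r / total)) # pick an event proportional to its rate
    t += rng.exponential(1.0 / total)               # advance the clock by an exponential wait
print(history, f"t = {t:.2e} s")
```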

  20. Kinetic Monte Carlo simulation of the efficiency roll-off, emission color, and degradation of organic light-emitting diodes (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Coehoorn, Reinder; van Eersel, Harm; Bobbert, Peter A.; Janssen, Rene A. J.

    2015-10-01

    The performance of organic light-emitting diodes (OLEDs) is determined by a complex interplay of the charge transport and excitonic processes in the active layer stack. We have developed a three-dimensional kinetic Monte Carlo (kMC) OLED simulation method which includes all these processes in an integral manner. The method employs a physically transparent mechanistic approach and is based on measurable parameters. All processes can be followed with molecular-scale spatial resolution and with sub-nanosecond time resolution, for any layer structure and any mixture of materials. In the talk, applications to the efficiency roll-off, emission color and lifetime of white and monochrome phosphorescent OLEDs [1,2] are demonstrated, and a comparison with experimental results is given. The simulations show to what extent triplet-polaron quenching (TPQ) and triplet-triplet annihilation (TTA) contribute to the roll-off, and how the microscopic parameters describing these processes can be deduced properly from dedicated experiments. Degradation is treated as a result of the (accelerated) conversion of emitter molecules to non-emissive sites upon a TPQ process. The degradation rate, and hence the device lifetime, is shown to depend on the emitter concentration and on the precise type of TPQ process. Results for both single-doped and co-doped OLEDs are presented, revealing that kMC simulations enable efficient simulation-assisted layer stack development. [1] H. van Eersel et al., Appl. Phys. Lett. 105, 143303 (2014). [2] R. Coehoorn et al., Adv. Funct. Mater. (2015), publ. online (DOI: 10.1002/adfm.201402532)

  1. McStas-model of the delft SESANS

    NASA Astrophysics Data System (ADS)

    Knudsen, E.; Udby, L.; Willendrup, P. K.; Lefmann, K.; Bouwman, W. G.

    2011-06-01

    We present the first virtual data from a model of the Spin-Echo Small Angle Scattering (SESANS) instrument situated in Delft, built in the framework of the McStas Monte Carlo software package and using a refracting, prism-like sample model. In the course of this work, polarisation instrumentation has been included natively in the McStas kernel, including options for magnetic fields and a number of utility components. This development has brought us to a point where realistic models of polarisation-enabled instrumentation can be built.

  2. A flexible Monte Carlo tool for patient or phantom specific calculations: comparison with preliminary validation measurements

    NASA Astrophysics Data System (ADS)

    Davidson, S.; Cui, J.; Followill, D.; Ibbott, G.; Deasy, J.

    2008-02-01

    The Dose Planning Method (DPM) is one of several 'fast' Monte Carlo (MC) computer codes designed to produce accurate dose calculations for advanced clinical applications. We have developed a flexible machine-modeling process and validation tests for open-field and IMRT calculations. To complement the DPM code, a practical and versatile source model has been developed, whose parameters are derived from a standard set of planning-system commissioning measurements. The primary photon spectrum and the spectrum resulting from the flattening filter are modeled by a Fatigue function, cut off by a multiplying Fermi function, which effectively regularizes the difficult energy-spectrum determination process. Commonly used functions are applied to represent the off-axis softening, the increasing primary fluence with increasing angle ('the horn effect'), and the electron contamination. The patient-dependent aspect of the MC dose calculation uses the multi-leaf collimator (MLC) leaf sequence file exported from the treatment planning system DICOM output, coupled with the source model, to drive the particle transport. This model has been commissioned for Varian 2100C 6 MV and 18 MV photon beams using percent depth doses, dose profiles, and output factors. A 3D conformal plan and an IMRT plan delivered to an anthropomorphic thorax phantom were used to benchmark the model. The calculated results were compared to Pinnacle v7.6c results and to measurements made using radiochromic film and thermoluminescent detectors (TLD).

  3. Monte Carlo simulations to replace film dosimetry in IMRT verification.

    PubMed

    Goetzfried, Thomas; Rickhey, Mark; Treutwein, Marius; Koelbl, Oliver; Bogner, Ludwig

    2011-01-01

    Patient-specific verification of intensity-modulated radiation therapy (IMRT) plans can be done by dosimetric measurements or by independent dose or monitor unit calculations. The aim of this study was the clinical evaluation of IMRT verification based on a fast Monte Carlo (MC) program, with regard to possible benefits compared to commonly used film dosimetry. 25 head-and-neck IMRT plans were recalculated by a pencil-beam-based treatment planning system (TPS) using an appropriate quality assurance (QA) phantom. All plans were verified both by film and diode dosimetry and compared to MC simulations. The irradiated films, the results of the diode measurements and the computed dose distributions were evaluated, and the data were compared on the basis of gamma maps and dose-difference histograms. Average deviations in the high-dose region between diode measurements and point dose calculations performed with the TPS and the MC program were 0.7 ± 2.7% and 1.2 ± 3.1%, respectively. For film measurements, the mean gamma values with 3% dose difference and 3 mm distance-to-agreement were 0.74 ± 0.28 (TPS as reference), with dose deviations up to 10%. The corresponding value was significantly reduced to 0.34 ± 0.09 for the MC dose calculation. The total time needed for the two verification procedures is comparable; however, the MC-based procedure is far less labor intensive. The presented study showed that independent dose-calculation verification of IMRT plans with a fast MC program has the potential to increasingly supersede film dosimetry in the near future. Thus, the linac-specific QA part will necessarily become more important. In combination with MC simulations, and because of the simple set-up, point-dose measurements for dosimetric plausibility checks are recommended, at least in the IMRT introduction phase. Copyright © 2010. Published by Elsevier GmbH.
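
    The gamma evaluation used above combines a dose-difference criterion with a distance-to-agreement criterion; each point's gamma is the minimum, over the compared distribution, of the quadrature sum of the two normalized terms. A minimal 1D sketch of a 3%/3 mm global gamma on synthetic profiles:

```python
import numpy as np

def gamma_1d(x_mm, d_ref, d_eval, dd=0.03, dta_mm=3.0):
    """Per-point 1D gamma index (3%/3 mm default), global dose normalization."""
    g = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_mm, d_ref)):
        dose_term = (d_eval - dr) / (dd * d_ref.max())   # normalized dose difference
        dist_term = (x_mm - xr) / dta_mm                 # normalized distance
        g[i] = np.sqrt(dist_term**2 + dose_term**2).min()
    return g

x = np.arange(0.0, 100.0, 1.0)                   # position [mm]
ref = np.exp(-((x - 50.0) / 18.0) ** 2)          # toy measured profile
ev = np.exp(-((x - 51.0) / 18.0) ** 2) * 1.02    # toy calculated profile (shifted, scaled)

gamma = gamma_1d(x, ref, ev)
print(f"gamma pass rate: {(gamma <= 1).mean():.1%}, mean gamma: {gamma.mean():.2f}")
```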

  4. Technical Note: Defining cyclotron-based clinical scanning proton machines in a FLUKA Monte Carlo system.

    PubMed

    Fiorini, Francesca; Schreuder, Niek; Van den Heuvel, Frank

    2018-02-01

    Cyclotron-based pencil beam scanning (PBS) proton machines nowadays represent the majority of, and most affordable choice for, proton therapy facilities; however, their representation in Monte Carlo (MC) codes is more complex than for passively scattered systems or synchrotron-based PBS machines. This is because degraders are used to decrease the energy from the cyclotron maximum to the desired energy, resulting in a unique spot size, divergence, and energy spread that depend on the amount of degradation. This manuscript outlines a generalized methodology for characterizing a cyclotron-based PBS machine in a general-purpose MC code. The code can then be used to generate clinically relevant plans starting from commercial TPS plans. The described beam is produced at the Provision Proton Therapy Center (Knoxville, TN, USA) using cyclotron-based IBA Proteus Plus equipment. We characterized the Provision beam in the MC code FLUKA using the experimental commissioning data. The code was then validated against experimental data in water phantoms for single pencil beams and larger irregular fields. Comparisons with RayStation TPS plans are also presented. Comparisons of experimental, simulated, and planned dose depositions in water show that the same doses are calculated by both programs inside the target areas, while penumbra differences are found at the field edges. These differences are smaller for the MC, with a γ(3%-3 mm) index never below 95%. Extensive explanations are given on how MC codes can be adapted to simulate cyclotron-based scanning proton machines, with the aim of using the MC as a TPS verification tool to check and improve clinical plans. For all the tested cases, we showed that dose differences with respect to experimental data are smaller for the MC than for the TPS, implying that the created FLUKA beam model describes the experimental beam better. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
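
    The degrader dependence described above can be captured by making the spot parameters functions of the nominal energy and sampling the initial phase space from Gaussians. The sketch below does exactly that; the three width models are invented placeholders, the point being only that lower energies (more degradation) get larger spots, divergence, and energy spread.

```python
import numpy as np

def sample_pencil_beam(e_nominal_mev, n=10_000, seed=7):
    """Sample a toy cyclotron-PBS spot phase space at the nozzle exit.

    The energy dependences below are assumed placeholder models, not
    commissioning data; a real beam model is fit to measured spot data.
    """
    rng = np.random.default_rng(seed)
    sigma_x_mm = 2.0 + 60.0 / e_nominal_mev       # spot size grows at low energy
    sigma_xp_mrad = 1.0 + 200.0 / e_nominal_mev   # so does divergence
    sigma_e_mev = 0.2 + 50.0 / e_nominal_mev      # and energy spread
    x = rng.normal(0.0, sigma_x_mm, n)            # transverse position [mm]
    xp = rng.normal(0.0, sigma_xp_mrad, n)        # angle [mrad]
    e = rng.normal(e_nominal_mev, sigma_e_mev, n) # kinetic energy [MeV]
    return x, xp, e

x, xp, e = sample_pencil_beam(100.0)
print(f"sigma_x = {x.std():.2f} mm, sigma_E = {e.std():.2f} MeV")
```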

  5. Systematic discrepancies in Monte Carlo predictions of k-ratios emitted from thin films on substrates

    NASA Astrophysics Data System (ADS)

    Statham, P.; Llovet, X.; Duncumb, P.

    2012-03-01

    We have assessed the reliability of different Monte Carlo simulation programmes using the two available Bastin-Heijligers databases of thin-film measurements by EPMA. The MC simulation programmes tested include Curgenven-Duncumb MSMC, NISTMonte, Casino and PENELOPE. Plots of the ratio of calculated to measured k-ratios ("kcalc/kmeas") against various parameters reveal error trends that are not apparent in simple error histograms. The results indicate that the MC programmes perform quite differently on the same dataset. However, they appear to show a similarly pronounced trend, with a "hockey stick" shape in the "kcalc/kmeas versus kmeas" plots. The most sophisticated programme, PENELOPE, gives the closest correspondence with experiment but still shows a tendency to underestimate experimental k-ratios by 10% for films that are thin compared to the electron range. We have investigated potential causes for this systematic behaviour and extended the study to data not collected by Bastin and Heijligers.

  6. Kinetic Monte Carlo study of vinyl acetate synthesis from ethylene acetoxylation on Pd(100) and Pd/Au(100)

    NASA Astrophysics Data System (ADS)

    Huang, Yanping; Dong, Xiuqin; Yu, Yingzhe; Zhang, Minhua

    2017-11-01

    On the basis of the activation barriers and reaction energies from DFT calculations, kinetic Monte Carlo (kMC) simulations of vinyl acetate (VA) synthesis from ethylene acetoxylation on Pd(100) and Pd/Au(100) were carried out. The kMC simulations showed that VA synthesis from ethylene acetoxylation proceeds via the Moiseev mechanism on both Pd(100) and Pd/Au(100). The addition of Au to Pd suppresses ethylene dehydrogenation while promoting acetic acid dehydrogenation, which facilitates VA synthesis as a whole and further improves the conversion and selectivity of VA synthesis from ethylene acetoxylation. When the reaction network is analyzed, besides the energetics of each elementary reaction, the surface coverage of each species and the occupancy of the surface sites on the catalyst should also be taken into consideration.

  7. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Häggström, Ida, E-mail: haeggsti@mskcc.org; Beattie, Bradley J.; Schmidtlein, C. Ross

    2016-06-15

    Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection) for dynamic PET simulations, as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluating the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time-activity curves are generated for each voxel of the input parametric image, whereby the effects of imaging-system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time-activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3 percentage points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter-plot histograms and statistically by tumor region-of-interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with noise properties very similar to those of MC images, in a fraction of the time. They believe dPETSTEP to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models, it may not be suitable for studies investigating these phenomena. dPETSTEP can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
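
    The projection-space shortcut such a tool takes, relative to full MC, can be caricatured in a few lines: blur the activity with a resolution model, scale by frame duration and sensitivity, and draw Poisson counts. All numbers below are assumptions for illustration, not dPETSTEP's actual models or defaults.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)

activity = np.zeros((64, 64)); activity[28:36, 28:36] = 50.0  # toy uptake [kBq/voxel]
frame_s, sens = 60.0, 0.01           # assumed frame length [s] and system sensitivity

blurred = gaussian_filter(activity, sigma=2.0)   # crude imaging-system resolution model
expected = blurred * frame_s * sens * 1e3        # expected true counts in the frame
noisy = rng.poisson(expected).astype(float)      # counting noise
print(f"total counts in frame: {noisy.sum():.0f}")
```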

  8. A low-rank control variate for multilevel Monte Carlo simulation of high-dimensional uncertain systems

    NASA Astrophysics Data System (ADS)

    Fairbanks, Hillary R.; Doostan, Alireza; Ketelsen, Christian; Iaccarino, Gianluca

    2017-07-01

    Multilevel Monte Carlo (MLMC) is a recently proposed variation of Monte Carlo (MC) simulation that achieves variance reduction by simulating the governing equations on a series of spatial (or temporal) grids with increasing resolution. Instead of directly employing the fine grid solutions, MLMC estimates the expectation of the quantity of interest from the coarsest grid solutions as well as differences between each two consecutive grid solutions. When the differences corresponding to finer grids become smaller, hence less variable, fewer MC realizations of finer grid solutions are needed to compute the difference expectations, thus leading to a reduction in the overall work. This paper presents an extension of MLMC, referred to as multilevel control variates (MLCV), where a low-rank approximation to the solution on each grid, obtained primarily from coarser grid solutions, is used as a control variate for estimating the expectations involved in MLMC. Cost estimates as well as numerical examples are presented to demonstrate the advantage of this new MLCV approach over the standard MLMC when the solution of interest admits a low-rank approximation and the cost of simulating finer grids grows fast.
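
    The MLMC estimator that MLCV extends rests on the telescoping identity over grid levels; in the usual notation,

```latex
\mathbb{E}[Q_L] \;=\; \mathbb{E}[Q_0] \;+\; \sum_{\ell=1}^{L}\mathbb{E}\!\left[Q_\ell - Q_{\ell-1}\right],
\qquad
\widehat{Q}_{\mathrm{MLMC}} \;=\; \sum_{\ell=0}^{L}\frac{1}{N_\ell}\sum_{i=1}^{N_\ell}
\left(Q_\ell^{(i)} - Q_{\ell-1}^{(i)}\right), \quad Q_{-1}\equiv 0,
```

    where Q_ℓ is the quantity of interest computed on grid level ℓ and N_ℓ the number of samples per level. MLCV then subtracts a correlated low-rank control variate from the samples at each level before averaging, shrinking the per-level variances and hence the number of expensive fine-grid solves required.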

  9. Modeling of Diffuse-Diffuse Photon Coupling via a Nonscattering Region: a Comparative Study

    NASA Astrophysics Data System (ADS)

    Lee, Jae Hoon; Kim, Seunghwan; Kim, Youn Tae

    2004-06-01

    It is well established that diffusion approximation is valid for light propagation in highly scattering media, but it breaks down in nonscattering regions. The previous methods that manipulate nonscattering regions are essentially boundary-to-boundary coupling (BBC) methods through a nonscattering void region based on the radiosity theory. We present a boundary-to-interior coupling (BIC) method. BIC is based on the fact that the collimated pencil beam incident on the medium can be replaced by an isotropic point source positioned at one reduced scattering length inside the medium from an illuminated point. A similar replacement is possible for the nondiffuse lights that enter the diffuse medium through the void, and it is formulated as the BIC method. We implemented both coupling methods using the finite element method (FEM) and tested for the circle with a void gap and for a four-layer adult head model. For mean time of flight, the BIC shows better agreement with Monte Carlo (MC) simulation results than BBC. For intensity, BIC shows a comparable match with MC data compared with that of BBC. The effect of absorption of the clear layer in the adult head model was investigated. Both mean time and intensity decrease as absorption of the clear layer increases.

  10. Modeling of diffuse-diffuse photon coupling via a nonscattering region: a comparative study.

    PubMed

    Lee, Jae Hoon; Kim, Seunghwan; Kim, Youn Tae

    2004-06-20

    It is well established that diffusion approximation is valid for light propagation in highly scattering media, but it breaks down in nonscattering regions. The previous methods that manipulate nonscattering regions are essentially boundary-to-boundary coupling (BBC) methods through a nonscattering void region based on the radiosity theory. We present a boundary-to-interior coupling (BIC) method. BIC is based on the fact that the collimated pencil beam incident on the medium can be replaced by an isotropic point source positioned at one reduced scattering length inside the medium from an illuminated point. A similar replacement is possible for the nondiffuse lights that enter the diffuse medium through the void, and it is formulated as the BIC method. We implemented both coupling methods using the finite element method (FEM) and tested for the circle with a void gap and for a four-layer adult head model. For mean time of flight, the BIC shows better agreement with Monte Carlo (MC) simulation results than BBC. For intensity, BIC shows a comparable match with MC data compared with that of BBC. The effect of absorption of the clear layer in the adult head model was investigated. Both mean time and intensity decrease as absorption of the clear layer increases.

  11. Phase transition in the spin-3/2 Blume-Emery-Griffiths model with antiferromagnetic second neighbor interactions

    NASA Astrophysics Data System (ADS)

    Yezli, M.; Bekhechi, S.; Hontinfinde, F.; EZ-Zahraouy, H.

    2016-04-01

    Two nonperturbative methods, Monte Carlo simulation (MC) and transfer-matrix finite-size-scaling (TMFSS) calculations, have been used to study the phase transition of the spin-3/2 Blume-Emery-Griffiths (BEG) model with quadrupolar and antiferromagnetic next-nearest-neighbor exchange interactions. Ground-state and finite-temperature phase diagrams are obtained by means of these two methods. New degenerate phases are found, and only second-order phase transitions occur for all values of the interaction parameters. No sign of an intermediate phase is found with either method. Critical exponents are also obtained from the TMFSS calculations. Ising criticality and nonuniversal behavior are observed depending on the strength of the second-neighbor interaction.
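
    For orientation, the spin-3/2 BEG Hamiltonian with the couplings named above can be written, in one common convention (the paper's signs and parameter names may differ), as

```latex
\mathcal{H} \;=\; -J\sum_{\langle ij\rangle} S_i S_j
\;-\; K\sum_{\langle ij\rangle} S_i^{2} S_j^{2}
\;+\; J'\sum_{\langle\langle ij\rangle\rangle} S_i S_j
\;+\; D\sum_i S_i^{2},
\qquad S_i \in \{\pm\tfrac{3}{2}, \pm\tfrac{1}{2}\},
```

    where J is the nearest-neighbor exchange, K the quadrupolar coupling, J' > 0 the antiferromagnetic next-nearest-neighbor exchange, and D the single-ion (crystal-field) anisotropy of the full BEG model.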

  12. Dosimetric investigation of proton therapy on CT-based patient data using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Chongsan, T.; Liamsuwan, T.; Tangboonduangjit, P.

    2016-03-01

    The aim of radiotherapy is to deliver a high radiation dose to the tumor with a low radiation dose to healthy tissues. Protons have Bragg peaks that give a high radiation dose to the tumor but a low exit dose or dose tail. Therefore, proton therapy is promising for treating deep-seated tumors and tumors located close to organs at risk. Moreover, the physical characteristics of protons are suitable for treating cancer in pediatric patients. This work developed a computational platform for calculating proton dose distributions using the Monte Carlo (MC) technique and a patient's anatomical data. The case studied is a pediatric patient with a primary brain tumor. PHITS is used for the MC simulation; therefore, patient-specific CT DICOM files were converted to the PHITS input format. A MATLAB optimization program was developed to create a beam delivery control file for this study. The optimization program requires the proton beam data. All these data were calculated in this work using analytical formulas, and the calculation accuracy was tested before the beam delivery control file was used for the MC simulation. This study will be useful for researchers who aim to investigate proton dose distributions in patients but do not have access to proton therapy machines.
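
    Converting the CT DICOM data into MC input hinges on a Hounsfield-unit-to-material mapping. A minimal sketch of a piecewise-linear HU-to-mass-density ramp follows; the breakpoints are illustrative, since real calibrations are scanner-specific (and PHITS additionally needs a material/composition assignment, not just a density).

```python
import numpy as np

def hu_to_density(hu):
    """Piecewise-linear CT-number to mass-density conversion [g/cm^3].

    The breakpoints are illustrative placeholders; a clinical conversion is
    calibrated per scanner, e.g. with a tissue-substitute phantom.
    """
    hu_pts = np.array([-1000.0, -100.0, 0.0, 100.0, 1600.0])
    rho_pts = np.array([0.001, 0.93, 1.0, 1.08, 1.85])
    return np.interp(hu, hu_pts, rho_pts)

slice_hu = np.array([-950.0, -80.0, 30.0, 900.0])  # air, fat, soft tissue, bone
print(hu_to_density(slice_hu))
```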

  13. Self-Consistent Monte Carlo Study of the Coulomb Interaction under Nano-Scale Device Structures

    NASA Astrophysics Data System (ADS)

    Sano, Nobuyuki

    2011-03-01

    It has been pointed out that the Coulomb interaction between electrons is expected to be of crucial importance for predicting reliable device characteristics. In particular, device performance is greatly degraded by the plasmon excitation, represented by dynamical potential fluctuations, induced in the highly doped source and drain regions by the channel electrons. We employ self-consistent 3D Monte Carlo (MC) simulations, which reproduce both the correct mobility under various electron concentrations and the collective plasma waves, to study the physical impact of dynamical potential fluctuations on the performance of double-gate MOSFETs. The average force experienced by an electron due to the Coulomb interaction inside the device is evaluated by comparing the self-consistent MC simulations with fixed-potential MC simulations without the Coulomb interaction. In addition, the band-tailing associated with the local potential fluctuations in the highly doped source region is quantitatively evaluated, and it is found that the band-tailing becomes strongly dependent on position in real space even inside the uniform source region. This work was partially supported by Grants-in-Aid for Scientific Research B (No. 2160160) from the Ministry of Education, Culture, Sports, Science and Technology in Japan.

  14. Proton therapy treatment monitoring with the DoPET system: activity range, positron emitters evaluation and comparison with Monte Carlo predictions

    NASA Astrophysics Data System (ADS)

    Muraro, S.; Battistoni, G.; Belcari, N.; Bisogni, M. G.; Camarlinghi, N.; Cristoforetti, L.; Del Guerra, A.; Ferrari, A.; Fracchiolla, F.; Morrocchi, M.; Righetto, R.; Sala, P.; Schwarz, M.; Sportelli, G.; Topi, A.; Rosso, V.

    2017-12-01

    Ion beam irradiations can deliver conformal dose distributions that minimize damage to healthy tissues thanks to their characteristic dose profiles. Nevertheless, the location of the Bragg peak can be affected by several sources of range uncertainty, so treatment verification is a critical issue. During treatment delivery, nuclear interactions between the ions and the irradiated tissues generate β+ emitters: the detection of this activity signal can be used to monitor the treatment, provided an expected activity distribution is available for comparison. Monte Carlo (MC) codes are widely used in the particle therapy community to evaluate radiation transport and interaction with matter. In this work, the FLUKA MC code was used to simulate the experimental conditions of irradiations performed at the Proton Therapy Center in Trento (IT). Several mono-energetic pencil beams were delivered to phantoms mimicking human tissues. The activity signals were acquired with a PET system (DoPET) based on two planar heads and designed to be installed along the beam line so that data can be acquired also during the irradiation. Different acquisitions are analyzed and compared with the MC predictions, with a special focus on validating the PET detector response for activity range verification.

  15. Efficient sampling of parsimonious inversion histories with application to genome rearrangement in Yersinia.

    PubMed

    Miklós, István; Darling, Aaron E

    2009-06-22

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal-length series of inversions that transforms one genome arrangement into another. However, the minimum-length series of inversions (the optimal sorting path) is often not unique, as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known either to count the number of optimal sorting paths or to sample from the uniform distribution over them. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We compare MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique while simultaneously avoiding the bias inherent in the IS technique.

  16. Identification of transmissivity fields using a Bayesian strategy and perturbative approach

    NASA Astrophysics Data System (ADS)

    Zanini, Andrea; Tanda, Maria Giovanna; Woodbury, Allan D.

    2017-10-01

    The paper deals with the crucial problem of groundwater parameter estimation, which is the basis for efficient modeling and reclamation activities. A hierarchical Bayesian approach is developed: it uses Akaike's Bayesian Information Criterion to estimate the hyperparameters (related to the chosen covariance model) and to quantify the unknown noise variance. The transmissivity identification proceeds in two steps: the first, called empirical Bayesian interpolation, uses Y* (Y = lnT) observations to interpolate Y values on a specified grid; the second, called empirical Bayesian update, improves the previous Y estimate through the addition of hydraulic head observations. The relationship between the head and lnT has been linearized through a perturbative solution of the flow equation. In order to test the proposed approach, synthetic aquifers from the literature have been considered. The aquifers in question contain a variety of boundary conditions (both Dirichlet and Neumann type) and scales of heterogeneity (σY^2 = 1.0 and σY^2 = 5.3). The estimated transmissivity fields were compared to the true ones. The joint use of Y* and head measurements improves the estimation of Y for both degrees of heterogeneity. Even if the variance of the strongly heterogeneous transmissivity field can be considered high for the application of the perturbative approach, the results show the same order of approximation as the non-linear methods proposed in the literature. The procedure allows computing the posterior probability distribution of the target quantities and quantifying the uncertainty in the model prediction. Bayesian updating has advantages related to both Monte-Carlo (MC) and non-MC approaches: like MC methods, it allows computing the full posterior probability distribution of the target quantities, and like non-MC methods it has computational times on the order of seconds.

  17. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also included.

  18. Advanced proton beam dosimetry part II: Monte Carlo vs. pencil beam-based planning for lung cancer.

    PubMed

    Maes, Dominic; Saini, Jatinder; Zeng, Jing; Rengan, Ramesh; Wong, Tony; Bowen, Stephen R

    2018-04-01

    Proton pencil beam (PB) dose calculation algorithms have limited accuracy within heterogeneous tissues of lung cancer patients, which may be addressed by modern commercial Monte Carlo (MC) algorithms. We investigated clinical pencil beam scanning (PBS) dose differences between PB and MC-based treatment planning for lung cancer patients. With IRB approval, a comparative dosimetric analysis between RayStation MC and PB dose engines was performed on ten patient plans. PBS gantry plans were generated using a single-field optimization technique to maintain target coverage under range and setup uncertainties. Dose differences between PB-optimized (PBopt), MC-recalculated (MCrecalc), and MC-optimized (MCopt) plans were recorded for the following region-of-interest metrics: clinical target volume (CTV) V95, CTV homogeneity index (HI), total lung V20, total lung V_RX (relative lung volume receiving the prescribed dose or higher), and global maximum dose. The impact of PB-based and MC-based planning on robustness to systematic perturbation of range (±3% density) and setup (±3 mm isotropic) was assessed. Pairwise differences in dose parameters were evaluated through non-parametric Friedman and Wilcoxon sign-rank testing. In this ten-patient sample, CTV V95 decreased significantly from 99-100% for PBopt to 77-94% for MCrecalc and recovered to 99-100% for MCopt (P < 10^-5). The median CTV HI (D95/D5) decreased from 0.98 for PBopt to 0.91 for MCrecalc and increased to 0.95 for MCopt (P < 10^-3). CTV D95 robustness to range and setup errors improved under MCopt (ΔD95 = -1%) compared to MCrecalc (ΔD95 = -6%, P = 0.006). No changes in lung dosimetry were observed for large volumes receiving low to intermediate doses (e.g., V20), while differences between PB-based and MC-based planning were noted for small volumes receiving high doses (e.g., V_RX). The global maximum patient dose increased from 106% for PBopt to 109% for MCrecalc and 112% for MCopt (P < 10^-3). MC dosimetry revealed a reduction in target dose coverage under PB-based planning that was regained under MC-based planning, along with improved plan robustness. MC-based optimization and dose calculation should be integrated into the clinical planning workflows of lung cancer patients receiving actively scanned proton therapy.
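
    To make the reported metrics concrete, V95 and the homogeneity index HI = D95/D5 can be computed directly from the dose values inside the CTV; a minimal sketch with an illustrative dose sample (not the study's data):

        import numpy as np

        def v95(dose_in_ctv, prescription):
            # Fraction of CTV voxels receiving at least 95% of the prescription.
            return float(np.mean(dose_in_ctv >= 0.95 * prescription))

        def homogeneity_index(dose_in_ctv):
            # HI = D95/D5, where Dx is the dose received by x% of the volume.
            d95 = np.percentile(dose_in_ctv, 5)   # 95% of voxels get at least this
            d5 = np.percentile(dose_in_ctv, 95)   # 5% of voxels get at least this
            return d95 / d5

        ctv_dose = np.random.default_rng(0).normal(60.0, 1.5, 10000)  # Gy, illustrative
        print(v95(ctv_dose, 60.0), homogeneity_index(ctv_dose))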

  19. TU-AB-BRC-11: Moving a GPU-OpenCL-Based Monte Carlo (MC) Dose Engine Towards Routine Clinical Use: Automatic Beam Commissioning and Efficient Source Sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Z; Folkerts, M; Jiang, S

    Purpose: We have previously developed a GPU-OpenCL-based MC dose engine named goMC with a built-in analytical linac beam model. To move goMC towards routine clinical use, we have developed an automatic beam-commissioning method and an efficient source sampling strategy to facilitate dose calculations for real treatment plans. Methods: Our commissioning method automatically adjusts the relative weights among the sub-sources through an optimization process that minimizes the discrepancies between calculated dose and measurements. Six models built for Varian TrueBeam linac photon beams (6MV, 10MV, 15MV, 18MV, 6MVFFF, 10MVFFF) were commissioned using measurement data acquired at our institution. To facilitate dose calculations for real treatment plans, we employed an inverse sampling method to efficiently incorporate MLC leaf-sequencing into source sampling. Specifically, instead of sampling source particles control-point by control-point and rejecting the particles blocked by the MLC, we assigned a control-point index to each sampled source particle according to the MLC leaf-open duration of each control point at the pixel where the particle intersects the iso-center plane. Results: Our auto-commissioning method decreased the distance-to-agreement (DTA) of depth dose in the build-up region by 36.2% on average, bringing it within 1 mm. Lateral profiles were better matched for all beams, with the biggest improvement found at 15MV, for which the root-mean-square difference was reduced from 1.44% to 0.50%. Maximum differences of output factors were reduced to less than 0.7% for all beams, with the largest decrease, from 1.70% to 0.37%, found at 10FFF. Our new sampling strategy was tested on a head-and-neck VMAT patient case. While achieving clinically acceptable accuracy, the new strategy could reduce the required history number by a factor of ~2.8 at a given statistical uncertainty level and hence achieve a similar speed-up factor. Conclusion: Our studies have demonstrated the feasibility and effectiveness of our auto-commissioning approach and new efficient source sampling strategy, implying the potential of our GPU-based MC dose engine goMC for routine clinical use.
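
    The described strategy amounts to inverse-transform sampling over control points, weighted by the MLC leaf-open duration at the pixel the particle crosses in the iso-center plane; a minimal sketch (the open-duration table is illustrative, not goMC's data structure):

        import numpy as np

        def sample_control_point(open_durations, rng):
            # Sample a control-point index with probability proportional to the
            # MLC leaf-open duration at this pixel (inverse-transform sampling).
            cdf = np.cumsum(open_durations)
            cdf /= cdf[-1]
            return int(np.searchsorted(cdf, rng.random(), side='right'))

        rng = np.random.default_rng(0)
        open_durations = np.array([0.0, 0.2, 0.5, 0.3])  # per control point, this pixel
        print(sample_control_point(open_durations, rng))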

  20. A review of theoretical study of graphene chemical vapor deposition synthesis on metals: nucleation, growth, and the role of hydrogen and oxygen

    NASA Astrophysics Data System (ADS)

    Rezwan Habib, Mohammad; Liang, Tao; Yu, Xuegong; Pi, Xiaodong; Liu, Yingchun; Xu, Mingsheng

    2018-03-01

    Graphene has attracted intense research interest due to its extraordinary properties and great application potential. Various methods have been proposed for the synthesis of graphene, among which chemical vapor deposition has drawn a great deal of attention for synthesizing large-area, high-quality graphene. Theoretical understanding of the synthesis mechanism is crucial for optimizing the experimental design for desired graphene production. In this review, we discuss the three fundamental steps of graphene synthesis in detail, i.e. (1) decomposition of carbon feedstocks and formation of various active carbon species, (2) nucleation, and (3) attachment and extension. We provide a complete scenario of graphene synthesis on metal surfaces at the atomistic level by means of density functional theory, molecular dynamics (MD), Monte Carlo (MC), and their combination and interfacing with other simulation methods such as quantum mechanical molecular dynamics, density functional tight-binding molecular dynamics, and combined MD and MC. We also address the latest investigations of the influence of hydrogen and oxygen on the synthesis and the quality of the synthesized graphene.

  1. A LAMMPS implementation of volume-temperature replica exchange molecular dynamics

    NASA Astrophysics Data System (ADS)

    Liu, Liang-Chun; Kuo, Jer-Lai

    2015-04-01

    A driver module for executing volume-temperature replica exchange molecular dynamics (VTREMD) was developed for the LAMMPS package. As a patch code, the VTREMD module performs classical molecular dynamics (MD) with Monte Carlo (MC) decisions between MD runs. The goal of inserting the MC step is to increase the breadth of the sampled configurational space. In this method, states receive better sampling by making temperature or density swaps with their neighboring states. As an accelerated sampling method, VTREMD is particularly useful for exploring states at low temperatures, where systems are easily trapped in local potential wells. As functional examples, the TIP4P/Ew and TIP4P/2005 water models were analyzed using VTREMD. The phase diagram in this study covered the deeply supercooled regime, and this test served as a suitable demonstration of the usefulness of VTREMD in overcoming the slow-dynamics problem. To facilitate use of the current code, attention was also paid to how to optimize the exchange efficiency by using grid allocation. VTREMD was useful for studying systems with rough energy landscapes, such as those with numerous local minima or multiple characteristic time scales.
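
    The MC decision between MD runs is, at its core, a replica-exchange swap test. A minimal sketch of the standard acceptance rule for a temperature swap is given below (the volume-swap case adds a pressure-volume term, omitted here; units and energies are illustrative):

        import math, random

        def accept_temperature_swap(beta_i, beta_j, energy_i, energy_j):
            # Metropolis criterion for exchanging configurations between two
            # replicas at inverse temperatures beta_i and beta_j.
            delta = (beta_i - beta_j) * (energy_i - energy_j)
            return delta >= 0 or random.random() < math.exp(delta)

        # Example: neighboring replicas at 300 K and 310 K
        kB = 0.0019872  # kcal/(mol K), illustrative unit system
        print(accept_temperature_swap(1/(kB*300), 1/(kB*310), -1200.0, -1180.0))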

  2. Calibration of Ge gamma-ray spectrometers for complex sample geometries and matrices

    NASA Astrophysics Data System (ADS)

    Semkow, T. M.; Bradt, C. J.; Beach, S. E.; Haines, D. K.; Khan, A. J.; Bari, A.; Torres, M. A.; Marrantino, J. C.; Syed, U.-F.; Kitto, M. E.; Hoffman, T. J.; Curtis, P.

    2015-11-01

    A comprehensive study of the efficiency calibration and calibration verification of Ge gamma-ray spectrometers was performed using semi-empirical, computational Monte-Carlo (MC), and transfer methods. The aim of this study was to evaluate the accuracy of the quantification of gamma-emitting radionuclides in complex matrices normally encountered in environmental and food samples. A wide range of gamma energies from 59.5 to 1836.0 keV and geometries from a 10-mL jar to a 1.4-L Marinelli beaker were studied on four Ge spectrometers with relative efficiencies between 102% and 140%. Density and coincidence-summing corrections were applied. Innovative techniques were developed for the preparation of artificial complex matrices from materials such as acidified water, polystyrene, ethanol, sugar, and sand, resulting in densities ranging from 0.3655 to 2.164 g cm^-3. They were spiked with gamma activity traceable to international standards and used for calibration verification. A quantitative method of tuning the MC calculations to experiment was developed, based on a multidimensional chi-square paraboloid.

  3. Dosimetric evaluation of a commercial proton spot scanning Monte-Carlo dose algorithm: comparisons against measurements and simulations

    NASA Astrophysics Data System (ADS)

    Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R.; St. James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles

    2017-10-01

    RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses calculated using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within ±3% at all points and range to within 1 mm. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of the distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within ±3% and the distal fall-off to within 2 mm. In an anthropomorphic phantom, the gamma index (dose tolerance = 3%, distance-to-agreement = 3 mm) was greater than 90% for six out of seven planes using the RS-MC, and three out of seven for the RS-PBA. The RS-MC algorithm demonstrated improved dosimetric accuracy over the RS-PBA in homogeneous, heterogeneous, and anthropomorphic phantoms. The computation performance of the RS-MC was similar to that of the RS-PBA algorithm. For complex disease sites like breast, head and neck, and lung cancer, the RS-MC algorithm will provide significantly more accurate treatment planning.

  4. Dosimetric evaluation of a commercial proton spot scanning Monte-Carlo dose algorithm: comparisons against measurements and simulations.

    PubMed

    Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R; St James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles

    2017-09-12

    RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses calculated using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within ±3% at all points and range to within 1 mm. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of the distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within ±3% and the distal fall-off to within 2 mm. In an anthropomorphic phantom, the gamma index (dose tolerance = 3%, distance-to-agreement = 3 mm) was greater than 90% for six out of seven planes using the RS-MC, and three out of seven for the RS-PBA. The RS-MC algorithm demonstrated improved dosimetric accuracy over the RS-PBA in homogeneous, heterogeneous, and anthropomorphic phantoms. The computation performance of the RS-MC was similar to that of the RS-PBA algorithm. For complex disease sites like breast, head and neck, and lung cancer, the RS-MC algorithm will provide significantly more accurate treatment planning.

  5. A kMC-MD method with generalized move-sets for the simulation of folding of α-helical and β-stranded peptides.

    PubMed

    Peter, Emanuel K; Pivkin, Igor V; Shea, Joan-Emma

    2015-04-14

    In Monte-Carlo simulations of protein folding, pathways and folding times depend on the appropriate choice of the Monte-Carlo move, or process path. We developed a generalized set of process paths for a hybrid kinetic Monte Carlo-molecular dynamics algorithm, which makes use of a novel constant time-update and allows formation of α-helical and β-stranded secondary structures. We apply our new algorithm to the folding of three different proteins: TrpCage, GB1, and TrpZip4. All three systems are seen to fold within the range of the experimental folding times. For the β-hairpins, we observe that loop formation is the rate-determining process, followed by collapse and formation of the native core. Cluster analysis of both peptides reveals that GB1 folds with equal likelihood along a zipper or a hydrophobic-collapse mechanism, while TrpZip4 follows primarily a zipper pathway. The difference observed in the folding behavior of the two proteins can be attributed to the different arrangements of their hydrophobic cores: strongly packed and dry in the case of TrpZip4, and partially hydrated in the case of GB1.
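
    For orientation, a standard rejection-free kinetic MC step is sketched below (the paper's constant time-update is a novel variant and is not reproduced here; the move rates are illustrative):

        import math, random

        def kmc_step(rates):
            # Rejection-free kMC: pick event i with probability rate_i / sum(rates),
            # then advance time by an exponentially distributed increment.
            total = sum(rates)
            r = random.random() * total
            acc = 0.0
            for i, rate in enumerate(rates):
                acc += rate
                if r < acc:
                    break
            dt = -math.log(1.0 - random.random()) / total
            return i, dt

        event, dt = kmc_step([0.5, 2.0, 0.1])  # illustrative move-set rates
        print(event, dt)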

  6. Molecular simulations of Hugoniots of detonation product mixtures at chemical equilibrium: Microscopic calculation of the Chapman-Jouguet state

    NASA Astrophysics Data System (ADS)

    Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard

    2007-08-01

    In this work, we used simultaneously the reaction ensemble Monte Carlo (ReMC) method and the adaptive Erpenbeck equation of state (AE-EOS) method to directly calculate the thermodynamic and chemical equilibria of mixtures of detonation products on the Hugoniot curve. The ReMC method [W. R. Smith and B. Triska, J. Chem. Phys. 100, 3019 (1994)] allows us to reach the chemical equilibrium of a reacting mixture, and the AE-EOS method [J. J. Erpenbeck, Phys. Rev. A 46, 6406 (1992)] constrains the system to satisfy the Hugoniot relation. Once the Hugoniot curve of the detonation product mixture is established, the Chapman-Jouguet (CJ) state of the explosive can be determined. An NPT simulation at PCJ and TCJ is then performed in order to calculate direct thermodynamic properties and the following derivative properties of the system using a fluctuation method: heat capacities, sound velocity, and Grüneisen coefficient. As the chemical composition fluctuates, and the number of particles is not necessarily constant in this ensemble, a fluctuation formula has been developed to take into account the fluctuations of mole number and composition. This type of calculation has been applied to several common energetic materials: nitromethane, tetranitromethane, hexanitroethane, PETN, and RDX.
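
    For reference, the Rankine-Hugoniot energy relation that the AE-EOS procedure enforces connects the shocked state (E, P, V) to the initial state (E_0, P_0, V_0); in standard notation:

        E - E_0 = \tfrac{1}{2}\,(P + P_0)\,(V_0 - V)

    The AE-EOS iteration adjusts the simulated state until this relation is satisfied, tracing out the Hugoniot curve point by point.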

  7. Molecular simulations of Hugoniots of detonation product mixtures at chemical equilibrium: microscopic calculation of the Chapman-Jouguet state.

    PubMed

    Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard

    2007-08-28

    In this work, we used simultaneously the reaction ensemble Monte Carlo (ReMC) method and the adaptive Erpenbeck equation of state (AE-EOS) method to directly calculate the thermodynamic and chemical equilibria of mixtures of detonation products on the Hugoniot curve. The ReMC method [W. R. Smith and B. Triska, J. Chem. Phys. 100, 3019 (1994)] allows us to reach the chemical equilibrium of a reacting mixture, and the AE-EOS method [J. J. Erpenbeck, Phys. Rev. A 46, 6406 (1992)] constrains the system to satisfy the Hugoniot relation. Once the Hugoniot curve of the detonation product mixture is established, the Chapman-Jouguet (CJ) state of the explosive can be determined. An NPT simulation at P(CJ) and T(CJ) is then performed in order to calculate direct thermodynamic properties and the following derivative properties of the system using a fluctuation method: heat capacities, sound velocity, and Grüneisen coefficient. As the chemical composition fluctuates, and the number of particles is not necessarily constant in this ensemble, a fluctuation formula has been developed to take into account the fluctuations of mole number and composition. This type of calculation has been applied to several common energetic materials: nitromethane, tetranitromethane, hexanitroethane, PETN, and RDX.

  8. A Bayesian trans-dimensional approach for the fusion of multiple geophysical datasets

    NASA Astrophysics Data System (ADS)

    JafarGandomi, Arash; Binley, Andrew

    2013-09-01

    We propose a Bayesian fusion approach to integrate multiple geophysical datasets with different coverage and sensitivity. The fusion strategy is based on the capability of various geophysical methods to provide enough resolution to identify either subsurface material parameters or subsurface structure, or both. We focus on electrical resistivity as the target material parameter and electrical resistivity tomography (ERT), electromagnetic induction (EMI), and ground penetrating radar (GPR) as the set of geophysical methods. However, extending the approach to different sets of geophysical parameters and methods is straightforward. The different geophysical datasets are entered into a trans-dimensional Markov chain Monte Carlo (McMC) search-based joint inversion algorithm. The trans-dimensional property of the McMC algorithm allows dynamic parameterisation of the model space, which in turn helps to avoid biasing the post-inversion results towards a particular model. Given that we are attempting to develop an approach that has practical potential, we discretize the subsurface into an array of one-dimensional earth models. Accordingly, the ERT data, which are collected using a two-dimensional acquisition geometry, are recast as a set of equivalent vertical electric soundings. The different data are inverted either individually or jointly to estimate one-dimensional subsurface models at discrete locations. We use Shannon's information measure to quantify the information obtained from the inversion of different combinations of geophysical datasets. Information from multiple methods is brought together by introducing a joint likelihood function and/or constraining the prior information. A Bayesian maximum entropy approach is used for spatial fusion of the spatially dispersed estimated one-dimensional models and mapping of the target parameter. We illustrate the approach with a synthetic dataset and then apply it to a field dataset. We show that the proposed fusion strategy is successful not only in enhancing the subsurface information but also as a survey design tool to identify the appropriate combination of geophysical tools and to show whether application of an individual method for further investigation of a specific site is beneficial.
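
    One common way to write such an information measure, assuming it is the gain from the prior p(m) to the posterior p(m|d) (the abstract does not give the exact form used), is the Kullback-Leibler divergence:

        I = \int p(m \mid d)\, \log_2 \frac{p(m \mid d)}{p(m)}\, \mathrm{d}m

    A larger I indicates that a given dataset combination constrains the resistivity model more strongly relative to the prior.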

  9. Improved Convergence Rate of Multi-Group Scattering Moment Tallies for Monte Carlo Neutron Transport Codes

    NASA Astrophysics Data System (ADS)

    Nelson, Adam

    Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to calculate correctly from the measured nuclear data with both deterministic and stochastic methods. Calculating these parameters with deterministic methods requires a set of assumptions that do not hold true in all conditions. These quantities can be calculated accurately with stochastic methods; however, doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. The improved tallying method is based on recognizing that all of the outgoing-particle information is known a priori and can be taken advantage of to increase the tallying efficiency (and therefore reduce the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every element of the scattering moment matrices with its share of data. In addition to reducing the uncertainty, this method allows for the use of a track-length estimation process, potentially offering even further improvement in tallying efficiency. To produce the needed distributions, however, the probability functions themselves must undergo an integration over the outgoing energy and scattering-angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and therefore must be performed in advance by a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than the currently used techniques. The improved method has been implemented in a code system containing a new pre-processor code, NDPP, and a Monte Carlo neutron transport code, OpenMC. This method is then tested in a pin-cell problem and a larger problem designed to accentuate the importance of scattering moment matrices. These tests show that accuracy was retained while the figure-of-merit for generating scattering moment matrices and fission energy spectra was significantly improved.
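
    For orientation, the group-to-group scattering moments being tallied are flux-weighted Legendre moments of the differential scattering cross section; a standard definition (generic notation, not taken from this work) is:

        \sigma_{s,\ell}^{g \to g'} = \frac{\int_{E \in g} \mathrm{d}E\, \phi(E) \int_{E' \in g'} \mathrm{d}E' \int_{-1}^{1} \mathrm{d}\mu\, \sigma_s(E \to E', \mu)\, P_\ell(\mu)}{\int_{E \in g} \mathrm{d}E\, \phi(E)}

    where P_\ell is the Legendre polynomial of order \ell and \phi(E) is the scalar flux used as the weighting spectrum.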

  10. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams.

    PubMed

    Gao, Lili; Zhou, Zai-Fa; Huang, Qing-An

    2017-11-08

    A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with the currently available simulators due to the inevitable process deviations. Feasible numerical methods are required to improve the yield and profitability of MEMS devices. In this work, process deviations are considered to be stochastic variables, and a newly developed numerical method, generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly clamped polybeam has been utilized to verify the accuracy of GPC against our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulations. Appropriate choices of 4th-order GPC expansions with orthogonal terms also succeeded in reducing the MC simulation labor. The mean value of the residual stress concluded from experimental tests differs by about 1.1% from that of the 4th-order GPC method. The 4th-order GPC approximation attains the mean test value of the residual stress with a probability of around 54.3%, and the corresponding yield exceeds 90% within two standard deviations of the mean.
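
    Schematically, GPC expands the stochastic response in polynomials orthogonal with respect to the distribution of the random inputs (a standard form, with notation assumed rather than taken from the paper):

        u(\xi) \approx \sum_{i=0}^{P} c_i\, \Psi_i(\xi), \qquad c_i = \frac{\langle u, \Psi_i \rangle}{\langle \Psi_i, \Psi_i \rangle}

    For Gaussian process deviations the \Psi_i are Hermite polynomials, and a 4th-order expansion truncates the sum at total degree four, replacing many MC runs with a small number of deterministic evaluations.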

  11. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams

    PubMed Central

    Gao, Lili

    2017-01-01

    A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with the currently available simulators due to the inevitable process deviations. Feasible numerical methods are required to improve the yield and profitability of MEMS devices. In this work, process deviations are considered to be stochastic variables, and a newly developed numerical method, generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly clamped polybeam has been utilized to verify the accuracy of GPC against our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that the errors of the GPC approximations are within 1% of the MC simulations. Appropriate choices of 4th-order GPC expansions with orthogonal terms also succeeded in reducing the MC simulation labor. The mean value of the residual stress concluded from experimental tests differs by about 1.1% from that of the 4th-order GPC method. The 4th-order GPC approximation attains the mean test value of the residual stress with a probability of around 54.3%, and the corresponding yield exceeds 90% within two standard deviations of the mean. PMID:29117096

  12. SU-E-J-205: Monte Carlo Modeling of Ultrasound Probes for Real-Time Ultrasound Image-Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hristov, D; Schlosser, J; Bazalova, M

    2014-06-01

    Purpose: To quantify the effect of ultrasound (US) probe beam attenuation for radiation therapy delivered under real-time US image guidance by means of Monte Carlo (MC) simulations. Methods: MC models of two Philips US probes, an X6-1 matrix-array transducer and a C5-2 curved-array transducer, were built based on their CT images in the EGSnrc BEAMnrc and DOSXYZnrc codes. Due to the metal parts, the probes were scanned in a Tomotherapy machine with a 3.5 MV beam. Mass densities in the probes were assigned based on an electron density calibration phantom consisting of cylinders with mass densities between 0.2-8.0 g/cm^3. Beam attenuation due to the probes was measured in a solid water phantom for 6 MV and 15 MV 15×15 cm^2 beams delivered on a Varian Trilogy linear accelerator. The dose was measured with the PTW-729 ionization chamber array at two depths and compared to MC simulations. The extreme-case beam attenuation expected in robotic US image-guided radiotherapy for probes in the upright position was quantified by means of MC simulations. Results: The 3.5 MV CT number to mass density calibration curve was found to be linear with R^2 > 0.99. The maximum mass densities were 4.6 and 4.2 g/cm^3 in the C5-2 and X6-1 probe, respectively. Gamma analysis of the simulated and measured doses revealed that over 98% of measurement points passed the 3%/3mm criteria for both probes and measurement depths. The extreme attenuation for probes in the upright position was found to be 25% and 31% for the C5-2 and X6-1 probe, respectively, for both 6 and 15 MV beams at 10 cm depth. Conclusion: MC models of two US probes used for real-time image guidance during radiotherapy have been built. As a result, radiotherapy treatment planning with the imaging probes in place can now be performed. J Schlosser is an employee of SoniTrack Systems, Inc. D Hristov has financial interest in SoniTrack Systems, Inc.

  13. Microcanonical ensemble simulation method applied to discrete potential fluids

    NASA Astrophysics Data System (ADS)

    Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro

    2015-09-01

    In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed that measures the transition rate probabilities between macroscopic states; its advantage over conventional NVT Monte Carlo (MC-NVT) simulations is that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data obtained from MC-NVT simulations. These results are important in the context of the application of the Hüller-Pleimling method to discrete-potential systems, which generalize the properties of SW and square-shoulder fluids.
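
    Once the simulation yields the inverse temperature as a function of internal energy, \beta(E) = 1/k_B T(E), the isochoric heat capacity follows from a numerical derivative; in a standard form (assumed here, as the abstract does not write it out):

        C_V = \left(\frac{\partial E}{\partial T}\right)_V = -k_B\,\beta^2 \left(\frac{\partial \beta}{\partial E}\right)_V^{-1}

    so a smooth parametrization of \beta(E) is all that is needed to obtain C_V over the whole sampled temperature range.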

  14. Study of extracerebral contamination for three cerebral oximeters by Monte Carlo simulation using CT data

    NASA Astrophysics Data System (ADS)

    Tarasov, A. P.; Egorov, A. I.; Rogatkin, D. A.

    2017-07-01

    Using multidetector computed tomography, the thicknesses of the bone squama and soft tissues of the human head were assessed. MC simulation revealed that the source-detector separation distances of the three oximeters are inadequate, which can cause extracerebral contamination.

  15. Mesoscale Thermodynamically motivated Statistical Mechanics based Kinetic Model for Sintering monoliths

    NASA Astrophysics Data System (ADS)

    Mohan, Nisha

    Modeling the evolution of microstructure during sintering is a persistent challenge in ceramics science, yet it is needed because the microstructure determines the properties of the engineered material. Bridging the gap between microscopic and continuum models, kinetic Monte Carlo (kMC) methods provide a stochastic approach to sintering and microstructure evolution. These kMC models work at the mesoscale, with length and time scales between those of atomistic and continuum approaches. We develop a sintering/compacting model for the two-phase sintering of boron nitride ceramics and its allotropes alike. Our formulation includes mechanisms for phase transformation between h-BN and c-BN and takes into account the thermodynamic effects of pressure and temperature on interaction energies and mechanism rates. In addition to replicating the microstructure evolution observed in experiments, it also captures the phase diagram of boron nitride materials. Results have been analyzed in terms of phase diagrams and crystal growth. The model also provides insights to guide the choice of additives and conditions for the sintering process. While detailed time and spatial resolution is lost in any MC approach, the progression of stochastic events still captures plausible local energy minima and long-time temporal developments. This work was supported by DARPA.

  16. An innovative iterative thresholding algorithm for tumour segmentation and volumetric quantification on SPECT images: Monte Carlo-based methodology and validation.

    PubMed

    Pacilio, M; Basile, C; Shcherbinin, S; Caselli, F; Ventroni, G; Aragno, D; Mango, L; Santini, E

    2011-06-01

    Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging play an important role in the segmentation of functioning parts of organs or tumours, but an accurate and reproducible delineation is still a challenging task. In this work, an innovative iterative thresholding method for tumour segmentation has been proposed and implemented for a SPECT system. This method, which is based on experimental threshold-volume calibrations, also implements the recovery coefficients (RC) of the imaging system, so it has been called the recovering iterative thresholding method (RIThM). The possibility of employing Monte Carlo (MC) simulations for system calibration was also investigated. The RIThM is an iterative algorithm coded using MATLAB: after an initial rough estimate of the volume of interest, the following calculations are repeated: (i) the corresponding source-to-background ratio (SBR) is measured and corrected by means of the RC curve; (ii) the threshold corresponding to the amended SBR value and the volume estimate is then found using threshold-volume data; (iii) a new volume estimate is obtained by image thresholding. The process continues until convergence. The RIThM was implemented for an Infinia Hawkeye 4 (GE Healthcare) SPECT/CT system, using a Jaszczak phantom and several test objects. Two MC codes were tested to simulate the calibration images: SIMIND and SimSet. For validation, test images consisting of hot spheres and some anatomical structures of the Zubal head phantom were simulated with the SIMIND code. Additional test objects (flasks and vials) were also imaged experimentally. Finally, the RIThM was applied to evaluate three cases of brain metastases and two cases of high-grade gliomas. Comparing experimental thresholds and those obtained by MC simulations, a maximum difference of about 4% was found, within the errors (±2% and ±5% for volumes ≥5 ml and <5 ml, respectively). Also for the RC data, the comparison showed differences (up to 8%) within the assigned error (±6%). An ANOVA test demonstrated that the calibration results (in terms of thresholds or RCs at various volumes) obtained by MC simulations were indistinguishable from those obtained experimentally. The accuracy in volume determination for the simulated hot spheres was between -9% and 15% in the range 4-270 ml, whereas for volumes less than 4 ml (in the range 1-3 ml) the difference increased abruptly, reaching values greater than 100%. For the Zubal head phantom, errors ranged between 9% and 18%. For the experimental test images, the accuracy was within ±10% for volumes in the range 20-110 ml. The preliminary test on patients showed the suitability of the method in a clinical setting. The MC-guided delineation of tumour volume may reduce the acquisition time required for the experimental calibration. Analysis of images of several simulated and experimental test objects, the Zubal head phantom, and clinical cases demonstrated the robustness, suitability, accuracy, and speed of the proposed method. Nevertheless, studies concerning tumours of irregular shape and/or nonuniform distribution of the background activity are still in progress.
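
    The iterative loop (i)-(iii) can be sketched compactly; here recovery_coefficient and threshold_from_calibration are hypothetical placeholders standing in for the experimentally or MC-derived calibration curves:

        import numpy as np

        def rithm(image, background, volume0, recovery_coefficient,
                  threshold_from_calibration, tol=0.01, max_iter=50):
            # Recovering iterative thresholding (sketch): SBR measurement ->
            # RC correction -> threshold lookup -> re-segmentation, repeated
            # until the volume estimate converges.
            volume = volume0
            mask = image > 0
            for _ in range(max_iter):
                sbr = image.max() / background                 # source-to-background ratio
                sbr /= recovery_coefficient(volume)            # amend for partial volume
                thr = threshold_from_calibration(sbr, volume)  # fraction of the maximum
                mask = image >= thr * image.max()
                new_volume = float(mask.sum())                 # voxels; scale by voxel size
                converged = abs(new_volume - volume) <= tol * max(volume, 1.0)
                volume = new_volume
                if converged:
                    break
            return mask, volume

        # Illustrative use with made-up calibration curves:
        img = np.zeros((32, 32)); img[12:18, 12:18] = 100.0
        mask, vol = rithm(img, background=5.0, volume0=50.0,
                          recovery_coefficient=lambda v: min(1.0, v / (v + 10.0)),
                          threshold_from_calibration=lambda s, v: 0.4 + 1.0 / s)
        print(vol)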

  17. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation

    PubMed Central

    Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    Purpose To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. Materials and methods A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. Results The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. Conclusion A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm. PMID:28886048
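
    Inverse-transform sampling from a tabulated PDF, as used here to draw particle positions and energies, can be sketched as follows (the bin edges and spectrum are illustrative, not the Synergy beam data):

        import numpy as np

        def sample_from_pdf(bin_edges, pdf, n, rng):
            # Draw n samples from a tabulated 1D PDF via inverse-transform sampling.
            cdf = np.cumsum(pdf * np.diff(bin_edges))
            cdf /= cdf[-1]
            u = rng.random(n)
            idx = np.searchsorted(cdf, u)
            # Linear interpolation within the selected bin
            lo = np.concatenate(([0.0], cdf))[idx]
            frac = (u - lo) / (cdf[idx] - lo)
            return bin_edges[idx] + frac * np.diff(bin_edges)[idx]

        rng = np.random.default_rng(1)
        edges = np.linspace(0.0, 6.0, 61)                   # energy bins (MeV)
        pdf = np.exp(-0.5 * ((edges[:-1] - 1.5) / 0.8)**2)  # illustrative spectrum
        print(sample_from_pdf(edges, pdf, 5, rng))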

  18. Coarse-grained stochastic processes and kinetic Monte Carlo simulators for the diffusion of interacting particles

    NASA Astrophysics Data System (ADS)

    Katsoulakis, Markos A.; Vlachos, Dionisios G.

    2003-11-01

    We derive a hierarchy of successively coarse-grained stochastic processes and associated coarse-grained Monte Carlo (CGMC) algorithms directly from the microscopic processes, as approximations at larger length scales for the case of diffusion of interacting particles on a lattice. This hierarchy of models spans length scales between microscopic and mesoscopic, satisfies detailed balance, and gives self-consistent fluctuation mechanisms whose noise is asymptotically identical to the microscopic MC. Rigorous, detailed asymptotics justify and clarify these connections. Gradient continuous-time microscopic MC and CGMC simulations are compared under far-from-equilibrium conditions to illustrate the validity of our theory and delineate the errors obtained by rigorous asymptotics. Information-theory estimates are employed for the first time to provide rigorous error estimates between the solutions of microscopic MC and CGMC, describing the loss of information during the coarse-graining process. Simulations under periodic boundary conditions are used to verify the information-theory error estimates. It is shown that coarse-graining in space also leads to coarse-graining in time by a factor of q^2, where q is the level of coarse-graining, and overcomes in part the hydrodynamic slowdown. Operation counting and CGMC simulations demonstrate significant CPU savings in continuous-time MC simulations, varying from q^3 for short potentials to q^4 for long potentials. Finally, connections of the new coarse-grained stochastic processes to stochastic mesoscopic and Cahn-Hilliard-Cook models are made.

  19. Monte Carlo calculations of positron emitter yields in proton radiotherapy.

    PubMed

    Seravalli, E; Robert, C; Bauer, J; Stichelbaut, F; Kurz, C; Smeets, J; Van Ngoc Ty, C; Schaart, D R; Buvat, I; Parodi, K; Verhaegen, F

    2012-03-21

    Positron emission tomography (PET) is a promising tool for monitoring the three-dimensional dose distribution in charged particle radiotherapy. PET imaging during or shortly after proton treatment is based on the detection of annihilation photons following the β+ decay of radionuclides resulting from nuclear reactions in the irradiated tissue. Therapy monitoring is achieved by comparing the measured spatial distribution of irradiation-induced β+ activity with the predicted distribution based on the treatment plan. The accuracy of the calculated distribution depends on the correctness of the computational models, implemented in the employed Monte Carlo (MC) codes, that describe the interactions of the charged particle beam with matter and the production of radionuclides and secondary particles. However, no well-established theoretical models exist for predicting the nuclear interactions, so phenomenological models are typically used, based on parameters derived from experimental data. Unfortunately, the experimental data presently available are insufficient to validate such phenomenological hadronic interaction models. Hence, a comparison among the models used by the different MC packages is desirable. In this work, starting from a common geometry, we compare the performance of the MCNPX, GATE and PHITS MC codes in predicting the amount and spatial distribution of proton-induced activity at therapeutic energies against the already experimentally validated PET modelling based on the FLUKA MC code. In particular, we show how the amount of β+ emitters produced in tissue-like media depends on the physics model and cross-section data used to describe the proton nuclear interactions, thus calling for future experimental campaigns aimed at supporting improvements of MC modelling for the clinical application of PET monitoring.

  20. Multileaf collimator tongue-and-groove effect on depth and off-axis doses: A comparison of treatment planning data with measurements and Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hee Jung; Department of Biomedical Engineering, Seoul National University, Seoul; Department of Radiation Oncology, Soonchunhyang University Hospital, Seoul

    2015-01-01

    To investigate how accurately treatment planning systems (TPSs) account for the tongue-and-groove (TG) effect, Monte Carlo (MC) simulations and radiochromic film (RCF) measurements were performed for comparison with TPS results. Two commercial TPSs computed the TG effect for the Varian Millennium 120 multileaf collimator (MLC). The TG effect on the off-axis dose profile at 3 depths of solid water was estimated as the maximum depth and the full width at half maximum (FWHM) of the dose dip at an interleaf position. When compared with the off-axis dose of the open field, the maximum depth of the dose dip for MC and RCF ranged from 10.1% to 20.6%; the maximum depth of the dose dip gradually decreased by up to 8.7% with increasing depths of 1.5 to 10 cm and also by up to 4.1% with increasing off-axis distances of 0 to 13 cm. However, TPS results showed at most a 2.7% decrease for the same depth range and a negligible variation for the same off-axis distances. The FWHM of the dose dip was approximately 0.19 cm for MC and 0.17 cm for RCF, but 0.30 cm for the Eclipse TPS and 0.45 cm for the Pinnacle TPS. Accordingly, the integrated value of the TG dose dip for the TPSs was larger than that for MC and RCF and almost invariant along the depths and off-axis distances. We concluded that the TG dependence on depth and off-axis doses shown in the MC and RCF results could not be appropriately modeled by the TPS versions in this study.

  1. Sampling Enrichment toward Target Structures Using Hybrid Molecular Dynamics-Monte Carlo Simulations

    PubMed Central

    Yang, Kecheng; Różycki, Bartosz; Cui, Fengchao; Shi, Ce; Chen, Wenduo; Li, Yunqi

    2016-01-01

    Sampling enrichment toward a target state, an analogue of the improvement of sampling efficiency (SE), is critical both in the refinement of protein structures and in the generation of near-native structure ensembles for the exploration of structure-function relationships. We developed a hybrid molecular dynamics (MD)-Monte Carlo (MC) approach to enrich the sampling toward the target structures. In this approach, higher SE is achieved by perturbing the conventional MD simulations with an MC structure-acceptance judgment, which is based on the degree of coincidence of the small-angle x-ray scattering (SAXS) intensity profiles between the simulation structures and the target structure. We found that the hybrid simulations could significantly improve SE by making the top-ranked models much closer to the target structures in both the secondary and tertiary structures. Specifically, for the 20 mono-residue peptides, when the initial structures had a root-mean-squared deviation (RMSD) from the target structure smaller than 7 Å, the hybrid MD-MC simulations came, on average, 0.83 Å and 1.73 Å closer in RMSD to the target than the parallel MD simulations at 310 K and 370 K, respectively. Meanwhile, the average SE values also increased by 13.2% and 15.7%. The enrichment of sampling becomes more significant when the target states are gradually detectable in the MD-MC simulations in comparison with the parallel MD simulations, providing a >200% improvement in SE. We also tested the hybrid MD-MC approach on real protein systems; the results showed that the SE is improved for 3 out of 5 real proteins. Overall, this work presents an efficient way of utilizing solution SAXS to improve protein structure prediction and refinement, as well as the generation of near-native structures for function annotation. PMID:27227775
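
    A sketch of what such a structure-acceptance judgment could look like as a Metropolis-like test on the SAXS discrepancy; the chi-square form and the effective temperature t_eff are assumptions, since the abstract states only that acceptance is based on profile coincidence:

        import numpy as np

        def saxs_chi2(I_sim, I_target, sigma):
            # Discrepancy between simulated and target SAXS intensity profiles.
            return float(np.mean(((I_sim - I_target) / sigma) ** 2))

        def accept_structure(chi2_new, chi2_old, t_eff, rng):
            # Accept if the new structure matches the target profile better,
            # or with Metropolis-like probability otherwise (t_eff sets tolerance).
            if chi2_new <= chi2_old:
                return True
            return rng.random() < np.exp(-(chi2_new - chi2_old) / t_eff)

        rng = np.random.default_rng(2)
        print(accept_structure(1.8, 1.5, t_eff=0.5, rng=rng))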

  2. An improved Monte-Carlo model of the Varian EPID separating support arm and rear-housing backscatter

    NASA Astrophysics Data System (ADS)

    Monville, M. E.; Kuncic, Z.; Greer, P. B.

    2014-03-01

    Previous investigators of EPID dosimetric properties have ascribed the backscatter that contaminates dosimetric EPID images to the supporting arm. Accordingly, Monte-Carlo (MC) EPID models have approximated the backscatter signal from the layers under the detector and the robotic support arm using either uniform or non-uniform solid water slabs, or through convolutions with backscatter kernels. The aim of this work is to improve the existing MC models by measuring and modelling the separate backscatter contributions of the robotic arm and the rear plastic housing of the EPID. The EPID plastic housing is non-uniform, with an 11.9 cm wide indented section that runs in the cross-plane direction across the superior half of the EPID and is 1.75 cm closer to the EPID sensitive layer than the rest of the housing. The thickness of the plastic housing is 0.5 cm. Experiments were performed with and without the housing present by removing all components of the EPID from the housing. The robotic support arm was not present for these measurements. An MC model of the linear accelerator and the EPID was modified to include the rear-housing indentation, and the results were compared to the measurements. The rear housing was found to contribute a maximum of 3% additional signal. The rear-housing contribution to the image is non-uniform in the in-plane direction, with 2% asymmetry across the central 20 cm of an image irradiating the entire detector. The MC model was able to reproduce this non-uniform contribution. The EPID rear housing contributes a non-uniform backscatter component to the EPID image, which had not been previously characterized. This has been incorporated into an improved MC model of the EPID.

  3. Sampling Enrichment toward Target Structures Using Hybrid Molecular Dynamics-Monte Carlo Simulations.

    PubMed

    Yang, Kecheng; Różycki, Bartosz; Cui, Fengchao; Shi, Ce; Chen, Wenduo; Li, Yunqi

    2016-01-01

    Sampling enrichment toward a target state, an analogue of the improvement of sampling efficiency (SE), is critical both in the refinement of protein structures and in the generation of near-native structure ensembles for the exploration of structure-function relationships. We developed a hybrid molecular dynamics (MD)-Monte Carlo (MC) approach to enrich the sampling toward the target structures. In this approach, higher SE is achieved by perturbing the conventional MD simulations with an MC structure-acceptance judgment, which is based on the degree of coincidence of the small-angle x-ray scattering (SAXS) intensity profiles between the simulation structures and the target structure. We found that the hybrid simulations could significantly improve SE by making the top-ranked models much closer to the target structures in both the secondary and tertiary structures. Specifically, for the 20 mono-residue peptides, when the initial structures had a root-mean-squared deviation (RMSD) from the target structure smaller than 7 Å, the hybrid MD-MC simulations came, on average, 0.83 Å and 1.73 Å closer in RMSD to the target than the parallel MD simulations at 310 K and 370 K, respectively. Meanwhile, the average SE values also increased by 13.2% and 15.7%. The enrichment of sampling becomes more significant when the target states are gradually detectable in the MD-MC simulations in comparison with the parallel MD simulations, providing a >200% improvement in SE. We also tested the hybrid MD-MC approach on real protein systems; the results showed that the SE is improved for 3 out of 5 real proteins. Overall, this work presents an efficient way of utilizing solution SAXS to improve protein structure prediction and refinement, as well as the generation of near-native structures for function annotation.

  4. A Study of Neutron Leakage in Finite Objects

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

    A computationally efficient 3DHZETRN code, capable of simulating high charge and energy (HZE) ions and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light-ion propagation, was recently developed for simple shielded objects. Monte Carlo (MC) benchmarks were used to verify the 3DHZETRN methodology in slab and spherical geometry, and it was shown that 3DHZETRN agrees with MC codes to the degree that the various MC codes agree among themselves. One limitation in the verification process is that all of the codes (3DHZETRN and three MC codes) utilize different nuclear models/databases. In the present report, the new algorithm, with well-defined convergence criteria, is used to quantify the neutron leakage from simple geometries to provide a means of verifying 3D effects and to provide guidance for further code development.

  5. SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ono, T; Araki, F

    Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using GMctdospp (IMPS, Germany), based on the EGSnrc user code. The x-ray spectra and a bowtie filter for the MC simulations were determined to coincide with measurements of half-value layer (HVL) and off-center ratio (OCR) profiles in air. The MC dose was calibrated from absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images, and organ doses were evaluated from dose-volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for the brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for the brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for the lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), mean doses for the kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for the kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses differed considerably from the CTDIvol values. Conclusion: MC dose distributions calculated using patient CT images are useful for evaluating the organ doses absorbed by individual patients.
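
    For a voxelized MC dose grid, an organ dose reduces to an average over the organ's segmentation mask, and the dose-volume histogram follows from sorting the same voxels. A minimal sketch with hypothetical array names:

        import numpy as np

        def mean_organ_dose(dose_grid, organ_mask):
            # Mean absorbed dose (mGy) over the voxels belonging to one organ
            return dose_grid[organ_mask].mean()

        def cumulative_dvh(dose_grid, organ_mask, bins=100):
            # Fraction of the organ volume receiving at least each dose level
            d = np.sort(dose_grid[organ_mask])
            levels = np.linspace(0.0, d.max(), bins)
            volume_frac = 1.0 - np.searchsorted(d, levels) / d.size
            return levels, volume_frac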

  6. SU-G-BRC-10: Feasibility of a Web-Based Monte Carlo Simulation Tool for Dynamic Electron Arc Radiotherapy (DEAR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodrigues, A; Wu, Q; Sawkey, D

    Purpose: DEAR is a radiation therapy technique utilizing synchronized motion of the gantry and couch during delivery to optimize dose distribution homogeneity and penumbra for the treatment of superficial disease. Dose calculation for DEAR is not yet supported by commercial TPSs. The purpose of this study is to demonstrate the feasibility of using a web-based Monte Carlo (MC) simulation tool (VirtuaLinac) to calculate dose distributions for a DEAR delivery. Methods: MC simulations were run through VirtuaLinac, which is based on the GEANT4 platform. VirtuaLinac utilizes detailed linac head geometry and material models, validated phase-space files, and a voxelized phantom. The input was expanded to include an XML file for simulation of varying mechanical axes as a function of MU. A DEAR XML plan was generated, used in the MC simulation, and delivered on a TrueBeam in Developer Mode. Radiographic film wrapped on a cylindrical phantom (12.5 cm radius) measured the dose at a depth of 1.5 cm, which was compared to the simulation results. Results: A DEAR plan was simulated using an energy of 6 MeV and a 3×10 cm² cut-out in a 15×15 cm² applicator for delivery of a 90° arc. The resulting data provide qualitative and quantitative evidence that the simulation platform could be used as the basis for DEAR dose calculations. The resulting unwrapped 2D dose distributions agreed well in the cross-plane direction along the arc, with field sizes of 18.4 and 18.2 cm and penumbrae of 1.9 and 2.0 cm for measurements and simulations, respectively. Conclusion: Preliminary feasibility of a DEAR delivery using a web-based MC simulation platform has been demonstrated. This tool will benefit treatment planning for DEAR as a benchmark for developing other model-based algorithms, allowing efficient optimization of trajectories and quality assurance of plans without the need for extensive measurements.

  7. MO-E-17A-03: Monte Carlo CT Dose Calculation: A Comparison Between Experiment and Simulation Using ARCHER-CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T; Du, X; Su, L

    2014-06-15

    Purpose: To compare CT doses derived from experiments and from GPU-based Monte Carlo (MC) simulations, using a human cadaver and the ATOM phantom. Methods: The cadaver of an 88-year-old male and the ATOM phantom were scanned by a GE LightSpeed Pro 16 MDCT. For the cadaver study, thimble chambers (Models 10×5−0.6CT and 10×6−0.6CT) were used to measure the absorbed dose in different deep and superficial organs. Whole-body scans were first performed to construct a complete image database for the MC simulations. Abdomen/pelvis helical scans were then conducted using 120/100 kVp, 300 mAs, and a pitch factor of 1.375:1. For the ATOM phantom study, OSL dosimeters were used and helical scans were performed using 120 kVp and x, y, z tube current modulation (TCM). For the MC simulations, sufficient particles were run in both cases such that the statistical errors of the ARCHER-CT results were limited to 1%. Results: For the human cadaver scan, the doses to the stomach, liver, colon, left kidney, pancreas, and urinary bladder were compared. The difference between experiments and simulations was within 19% for 120 kVp and 25% for 100 kVp. For the ATOM phantom scan, the doses to the lung, thyroid, esophagus, heart, stomach, liver, spleen, kidneys, and thymus were compared. The difference was 39.2% for the esophagus and within 16% for all other organs. Conclusion: In this study the experimental and simulated CT doses were compared. Their difference is primarily attributed to the systematic errors of the MC simulations, including the accuracy of the bowtie filter modeling and the algorithm for generating the voxelized phantom from DICOM images. The experimental error is considered small and may arise from the dosimeters. This work was supported by R01 grant R01EB015478 from the National Institute of Biomedical Imaging and Bioengineering.

  8. WE-EF-207-05: Monte Carlo Dosimetry for a Dedicated Cone-Beam CT Head Scanner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisniega, A; Zbijewski, W; Xu, J

    Purpose: Cone-beam CT (CBCT) is an attractive platform for point-of-care imaging of traumatic brain injury and intracranial hemorrhage. This work implements and evaluates a fast Monte Carlo (MC) dose estimation engine for development of a dedicated head CBCT scanner, optimization of acquisition protocols, geometry, and bowtie filter designs, and patient-specific dosimetry. Methods: Dose scoring with a GPU-based MC CBCT simulator was validated on an imaging bench using a modified 16 cm CTDI phantom with 7 ion chamber shafts along the central ray for 80–100 kVp (+2 mm Al, +0.2 mm Cu). Dose distributions were computed in a segmented CBCT reconstruction of an anthropomorphic head phantom with 4×10⁵ tracked photons per scan (5 min runtime). Circular orbits with angular span ranging from a short scan (180° + fan angle) to a full rotation (360°) were considered for fixed total mAs per scan. Two aluminum filters were investigated: an aggressive bowtie and a moderate bowtie (matched to 16 cm and 32 cm water cylinders, respectively). Results: MC dose estimates showed strong agreement with measurements (RMSE < 0.001 mGy/mAs). A moderate (aggressive) bowtie reduced the dose, per total mAs, by 20% (30%) at the center of the head, by 40% (50%) at the eye lens, and by 70% (80%) at the posterior skin entrance. For the configuration without a bowtie, a short scan reduced the eye lens dose by 62% (from 0.08 mGy/mAs to 0.03 mGy/mAs) compared to a full scan, although the dose to spinal bone marrow increased by 40%. For both bowties, the short scan resulted in a similar 40% increase in bone marrow dose, but the reduction at the eye lens was more pronounced: 70% (90%) for the moderate (aggressive) bowtie. Conclusions: Dose maps obtained with validated MC simulation demonstrated dose reduction in sensitive structures (eye lens and bone marrow) through a combination of short-scan trajectories and bowtie filters. Xiaohui Wang and David Foos are employees of Carestream Health.

  9. Performance prediction for silicon photonics integrated circuits with layout-dependent correlated manufacturing variability.

    PubMed

    Lu, Zeqin; Jhoja, Jaspreet; Klein, Jackson; Wang, Xu; Liu, Amy; Flueckiger, Jonas; Pond, James; Chrostowski, Lukas

    2017-05-01

    This work develops an enhanced Monte Carlo (MC) simulation methodology to predict the impacts of layout-dependent correlated manufacturing variations on the performance of photonics integrated circuits (PICs). First, to enable such performance prediction, we demonstrate a simple method with sub-nanometer accuracy to characterize photonics manufacturing variations, where the width and height for a fabricated waveguide can be extracted from the spectral response of a racetrack resonator. By measuring the spectral responses for a large number of identical resonators spread over a wafer, statistical results for the variations of waveguide width and height can be obtained. Second, we develop models for the layout-dependent enhanced MC simulation. Our models use netlist extraction to transfer physical layouts into circuit simulators. Spatially correlated physical variations across the PICs are simulated on a discrete grid and are mapped to each circuit component, so that the performance for each component can be updated according to its obtained variations, and therefore, circuit simulations take the correlated variations between components into account. The simulation flow and theoretical models for our layout-dependent enhanced MC simulation are detailed in this paper. As examples, several ring-resonator filter circuits are studied using the developed enhanced MC simulation, and statistical results from the simulations can predict both common-mode and differential-mode variations of the circuit performance.
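
    The layout-dependent step amounts to drawing a spatially correlated random field per varied parameter (e.g., waveguide width) on a discrete grid and sampling it at each component's layout position. A minimal sketch using an FFT-based Gaussian random field; the grid spacing, correlation length, and amplitude are placeholders, not fitted wafer data:

        import numpy as np

        def correlated_field(n, dx, corr_length, sigma, rng):
            # Zero-mean Gaussian random field with Gaussian correlation,
            # generated spectrally on an n x n grid of spacing dx
            k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
            k2 = k[:, None] ** 2 + k[None, :] ** 2
            psd = np.exp(-k2 * corr_length ** 2 / 2)
            noise = rng.normal(size=(n, n))
            field = np.fft.ifft2(np.sqrt(psd) * np.fft.fft2(noise)).real
            return sigma * field / field.std()

        rng = np.random.default_rng(0)
        dwidth = correlated_field(256, dx=10.0, corr_length=500.0, sigma=5.0, rng=rng)
        # Map the local width deviation onto each component by grid position,
        # then re-evaluate each component model with its perturbed width:
        layout = {"ring_a": (120, 40), "ring_b": (125, 44)}  # hypothetical positions
        deviations = {name: dwidth[idx] for name, idx in layout.items()}

    Because nearby components land on nearby grid cells, their parameter deviations are automatically correlated, which is what produces the common-mode versus differential-mode split in the circuit statistics.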

  10. Dynamic PET simulator via tomographic emission projection for kinetic modeling and parametric image studies.

    PubMed

    Häggström, Ida; Beattie, Bradley J; Schmidtlein, C Ross

    2016-06-01

    To develop and evaluate a fast and simple tool called dpetstep (Dynamic PET Simulator of Tracers via Emission Projection) for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluating the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. The tool was developed in MATLAB using both new and previously reported modules of petstep (PET Simulator of Tracers via Emission Projection). Time-activity curves are generated for each voxel of the input parametric image, whereby the effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time-activity curves (GAUSS). dpetstep was 8000 times faster than MC. Dynamic images from dpetstep had a root-mean-square error that was within 4% on average of that of the MC images, whereas the GAUSS images were within 11%. The average bias in dpetstep and MC images was the same, while GAUSS differed by 3 percentage points. Noise profiles in dpetstep images conformed well to MC images, confirmed visually by scatter plot histograms and statistically by tumor region-of-interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dpetstep images and noise properties agreed better with MC. The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with noise properties very similar to those of MC images, in a fraction of the time. They believe dpetstep to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and randoms models, it may not be suitable for studies investigating these phenomena. dpetstep can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
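
    In outline, each simulated frame is a forward projection of the activity scaled by the frame duration, contaminated by scatter and randoms estimates, and then Poisson-sampled. A toy sketch; the projector and contamination fractions below are stand-ins, not the petstep models:

        import numpy as np
        from scipy.ndimage import rotate

        def forward_project(img, n_angles=60):
            # Toy parallel-beam projector: rotate and sum along columns
            angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
            return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                             for a in angles])

        def simulate_frame(activity, frame_dur, scatter_frac=0.3,
                           randoms_rate=10.0, rng=None):
            rng = np.random.default_rng() if rng is None else rng
            trues = forward_project(activity) * frame_dur
            expected = trues * (1.0 + scatter_frac) + randoms_rate * frame_dur
            return rng.poisson(expected)  # noisy sinogram for this frame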

  11. An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm

    NASA Astrophysics Data System (ADS)

    Jacques, Robert; McNutt, Todd

    2014-03-01

    Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity-compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte Carlo accuracy benchmark, 23 similar accuracy benchmarks, and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near-Monte Carlo results were achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes, reducing the average mean error from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e., < 0.1 g/cm³) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density-scaled heterogeneity correction with a position- and direction-sensitive density filter. This method significantly improved the accuracy of the GPU-based algorithm, approaching the accuracy of Monte Carlo based methods with a runtime of a few tenths of a second per beam. Acknowledgement: Funding for this research was provided by NSF Cooperative Agreement EEC9731748, Elekta/IMPAC Medical Systems, Inc., and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
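
    The key ingredient can be sketched in one dimension: a first-order recursive (IIR) filter run along each ray. Because the recursion has a direction, the effective density differs upstream versus downstream of an interface, unlike plain density scaling. The filter constant is a hypothetical choice, and the actual HCS filter is multivariate:

        import numpy as np

        def effective_density(rho, alpha=0.3):
            # rho_eff[i] = alpha * rho[i] + (1 - alpha) * rho_eff[i - 1]
            rho_eff = np.empty_like(rho)
            rho_eff[0] = rho[0]
            for i in range(1, len(rho)):
                rho_eff[i] = alpha * rho[i] + (1.0 - alpha) * rho_eff[i - 1]
            return rho_eff

        # Water -> lung -> water slab along one ray (g/cm^3):
        ray = np.array([1.0] * 20 + [0.25] * 20 + [1.0] * 20)
        print(effective_density(ray)[18:24])  # smoothed across the interface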

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liemert, André, E-mail: andre.liemert@ilm.uni-ulm.de; Kienle, Alwin

    Purpose: Explicit solutions of the monoenergetic radiative transport equation in the P₃ approximation have been derived which can be evaluated with nearly the same computational effort as needed for solving the standard diffusion equation (DE). In detail, the authors considered the important case of a semi-infinite medium which is illuminated by a collimated beam of light. Methods: A combination of the classic spherical harmonics method and the recently developed method of rotated reference frames is used for solving the P₃ equations in closed form. Results: The derived solutions are illustrated and compared to exact solutions of the radiative transport equation obtained via the Monte Carlo (MC) method as well as to other approximate analytical solutions. It is shown that for the considered cases, which are relevant for biomedical optics applications, the P₃ approximation is close to the exact solution of the radiative transport equation. Conclusions: The authors derived exact analytical solutions of the P₃ equations under consideration of boundary conditions for defining a semi-infinite medium. The good agreement with Monte Carlo simulations in the investigated domains, for example in the steady-state and time domains, as well as the short evaluation time needed, suggests that the derived equations can replace the often-applied solutions of the diffusion equation for the homogeneous semi-infinite medium.

  13. Monte Carlo isotopic inventory analysis for complex nuclear systems

    NASA Astrophysics Data System (ADS)

    Phruksarojanakun, Phiphat

    Monte Carlo Inventory Simulation Engine (MCise) is a newly developed method for calculating the isotopic inventory of materials. It offers the promise of modeling materials with complex processes and irradiation histories, which pose challenges for current deterministic tools, and has strong analogies to Monte Carlo (MC) neutral particle transport. The analog method, including considerations for simple, complex, and loop flows, is fully developed. In addition, six variance reduction tools give MCise unique capabilities for improving the statistical precision of MC simulations. Forced Reaction forces an atom to undergo a desired number of reactions in a given irradiation environment. Biased Reaction Branching primarily focuses on improving the statistical results of isotopes that are produced through rare reaction pathways. Biased Source Sampling aims at increasing the sampling frequency of rare initial isotopes as the starting particles. Reaction Path Splitting increases the population by splitting the atom at each reaction point, creating one new atom for each decay or transmutation product. Delta Tracking is recommended for high-frequency pulsing to reduce computing time. Lastly, Weight Window is introduced as a strategy to limit large deviations of weight due to the use of variance reduction techniques. A figure of merit is necessary to compare the efficiency of different variance reduction techniques. A number of possible figures of merit are explored, two of which are robust and subsequently used: one is based on the relative error of a known target isotope, 1/(R²T), and the other on the overall detection limit corrected by the relative error, 1/(DkR²T). An automated Adaptive Variance-reduction Adjustment (AVA) tool is developed to iteratively define parameters for some variance reduction techniques in a problem with a target isotope. Sample problems demonstrate that AVA improves both the precision and accuracy of a target result in an efficient manner. Potential applications of MCise include molten-salt-fueled reactors and liquid breeders in fusion blankets. As an example, the inventory analysis of a liquid actinide fuel in the In-Zinerator, a sub-critical power reactor driven by a fusion source, is examined. The results confirm MCise as a reliable tool for inventory analysis of complex nuclear systems.
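
    Written out, both figures of merit reward estimators that reach a given statistical quality in less computing time (R is the relative error of the target isotope, T the computing time, and Dk the detection limit, per the definitions above):

        def fom_relative_error(rel_err, cpu_time):
            # 1 / (R^2 T): higher is better
            return 1.0 / (rel_err ** 2 * cpu_time)

        def fom_detection_limit(det_limit, rel_err, cpu_time):
            # 1 / (Dk R^2 T): also penalizes a poor detection limit
            return 1.0 / (det_limit * rel_err ** 2 * cpu_time)

        # Comparing an analog run against a variance-reduced run (made-up numbers):
        analog = fom_relative_error(rel_err=0.20, cpu_time=100.0)
        biased = fom_relative_error(rel_err=0.05, cpu_time=180.0)
        print(biased / analog)  # > 1 means the variance reduction paid off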

  14. Simultaneous 99mTc/111In SPECT reconstruction using accelerated convolution-based forced detection Monte Carlo

    NASA Astrophysics Data System (ADS)

    Karamat, Muhammad I.; Farncombe, Troy H.

    2015-10-01

    Simultaneous multi-isotope Single Photon Emission Computed Tomography (SPECT) imaging has a number of applications in cardiac, brain, and cancer imaging. The major concern, however, is the significant crosstalk contamination due to photon scatter between the different isotopes. The current study focuses on a method of crosstalk compensation between two isotopes in simultaneous dual-isotope SPECT acquisition applied to cancer imaging using 99mTc and 111In. We have developed an iterative image reconstruction technique that simulates the photon down-scatter from one isotope into the acquisition window of the second isotope. Our approach uses an accelerated Monte Carlo (MC) technique for the forward projection step of an iterative reconstruction algorithm. The MC-estimated scatter contamination of a radionuclide contained in a given projection view is then used to compensate for the photon contamination in the acquisition window of the other nuclide. We use a modified ordered subset-expectation maximization (OS-EM) algorithm, named simultaneous ordered subset-expectation maximization (Sim-OSEM), to perform this step. We have undertaken a number of simulation tests and phantom studies to verify this approach. The proposed reconstruction technique was also evaluated by reconstruction of experimentally acquired phantom data. Reconstruction using Sim-OSEM showed very promising results in terms of contrast recovery and uniformity of the object background compared to reconstruction methods implementing alternative scatter correction schemes (i.e., triple energy window or separately acquired projection data). In this study the evaluation is based on the quality of the reconstructed images and the activity estimated using Sim-OSEM. To quantify the possible improvement in spatial resolution and signal-to-noise ratio (SNR) observed in this study, further simulation and experimental studies are required.
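
    The essential update can be sketched as an OS-EM iteration whose forward model adds the MC-estimated down-scatter from the other isotope as a known background term. A minimal dense-matrix sketch; in the real method the system matrix and scatter estimate come from the accelerated convolution-based forced-detection MC projector:

        import numpy as np

        def osem_subset_update(x, A, y, scatter, eps=1e-12):
            # x       : current activity estimate for this isotope
            # A       : system matrix rows for this subset
            # y       : measured counts in this isotope's energy window
            # scatter : MC-estimated down-scatter from the other isotope
            fwd = A @ x + scatter + eps        # forward model incl. crosstalk
            return x * (A.T @ (y / fwd)) / (A.T @ np.ones_like(y) + eps)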

  15. Evaluation of interpolation methods for TG-43 dosimetric parameters based on comparison with Monte Carlo data for high-energy brachytherapy sources.

    PubMed

    Pujades-Claumarchirant, Ma Carmen; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo; Melhus, Christopher; Rivard, Mark

    2010-03-01

    The aim of this work was to determine dose distributions for high-energy brachytherapy sources at spatial locations not included in the radial dose function gL(r) and 2D anisotropy function F(r,θ) table entries for radial distance r and polar angle θ. The objectives of this study are as follows: 1) to evaluate interpolation methods in order to accurately derive gL(r) and F(r,θ) from the reported data; 2) to determine the minimum number of entries in gL(r) and F(r,θ) that allow reproduction of dose distributions with sufficient accuracy. Four high-energy photon-emitting brachytherapy sources were studied: 60Co model Co0.A86, 137Cs model CSM-3, 192Ir model Ir2.A85-2, and a hypothetical 169Yb model. The mesh used for r was: 0.25, 0.5, 0.75, 1, 1.5, 2-8 (integer steps), and 10 cm. Four different angular steps were evaluated for F(r,θ): 1°, 2°, 5°, and 10°. Linear-linear and logarithmic-linear interpolation were evaluated for gL(r). Linear-linear interpolation was used to obtain F(r,θ) with a resolution of 0.05 cm and 1°. Results were compared with values obtained from Monte Carlo (MC) calculations for the four sources on the same grid. Linear interpolation of gL(r) provided differences ≤ 0.5% compared to MC for all four sources. Bilinear interpolation of F(r,θ) using 1° and 2° angular steps resulted in agreement ≤ 0.5% with MC for 60Co, 192Ir, and 169Yb, while 137Cs agreement was ≤ 1.5% for θ < 15°. The radial mesh studied was adequate for interpolating gL(r) for high-energy brachytherapy sources, and was similar to commonly found examples in the published literature. For F(r,θ) close to the source longitudinal axis, polar angle step sizes of 1°-2° were sufficient to provide 2% accuracy for all sources.
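
    A minimal sketch of the two interpolation schemes on the tabulated radial mesh; the gL values below are purely illustrative numbers, not consensus data for any of the sources:

        import numpy as np

        r_tab = np.array([0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 5, 6, 7, 8, 10])  # cm
        gL_tab = np.array([0.99, 1.00, 1.00, 1.00, 1.00, 0.99, 0.98, 0.97,
                           0.95, 0.94, 0.92, 0.91, 0.87])  # illustrative only

        def gL_linear(r):
            # Linear-linear interpolation
            return np.interp(r, r_tab, gL_tab)

        def gL_loglinear(r):
            # Logarithmic-linear: linear in r, log in gL, which suits a
            # roughly exponential fall-off at large r
            return np.exp(np.interp(r, r_tab, np.log(gL_tab)))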

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, C; Nguyen, G; Chung, Y

    Purpose: Ureteroscopy involves fluoroscopy, which can result in a considerable radiation dose to the patient. The purpose of this study was two-fold: (a) to develop an effective dose computational model for obese and non-obese patients undergoing left and right ureteroscopy, and (b) to evaluate the utility of a commercial Monte Carlo software package for dose assessment in ureteroscopy. Methods: Organ dose measurements were performed on an adult male anthropomorphic phantom, representing the non-obese patients, with 20 high-sensitivity MOSFET detectors and two 0.18 cc ionization chambers placed in selected organs. Fat-equivalent padding was placed around the abdominal region to simulate obese patients. Effective dose (ED) was calculated using ICRP 103 tissue weighting factors and normalized to an effective dose rate in millisieverts per second (mSv/s). In addition, a commercial Monte Carlo (MC) dose estimation program was used to estimate ED for the non-obese model, with a table attenuation correction applied to simulate the clinical procedure. Results: For the equipment and protocols involved in this study, the MOSFET-derived ED rates for the obese patient model (‘Left’: 0.0092±0.0004 mSv/s; ‘Right’: 0.0086±0.0004 mSv/s) were found to be more than twice those for the non-obese patient model (‘Left’: 0.0041±0.0003 mSv/s; ‘Right’: 0.0036±0.0007 mSv/s). The MC-derived ED rates for the non-obese patient model (‘Left’: 0.0041 mSv/s; ‘Right’: 0.0036 mSv/s; with a statistical uncertainty of 1%) showed good agreement with the MOSFET method. Conclusion: The significant difference in ED rate between the obese and non-obese patient models shows the limitation of directly applying commercial software to obese patients, leading to considerable underestimation of ED. Although commercial software offers a convenient means of dose estimation, its utility may be limited to standard-man geometry, as the software does not account for table attenuation, obese patient geometry, or differences between the anthropomorphic phantom and the MC mathematical phantom.
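
    The ED calculation itself is a weighted sum over organ doses, E = sum_T wT * HT, with the ICRP 103 tissue weighting factors. A minimal sketch covering a subset of tissues (for fluoroscopy photons the radiation weighting factor is 1, so measured absorbed doses serve directly as equivalent doses):

        # ICRP 103 tissue weighting factors (subset; the remainder tissues
        # are handled separately in the full formalism)
        W_T = {"lung": 0.12, "stomach": 0.12, "colon": 0.12, "red_marrow": 0.12,
               "breast": 0.12, "gonads": 0.08, "bladder": 0.04, "liver": 0.04,
               "oesophagus": 0.04, "thyroid": 0.04, "skin": 0.01, "brain": 0.01}

        def effective_dose_rate(organ_dose_rates):
            # organ_dose_rates: {tissue: equivalent dose rate in mSv/s};
            # organs that were not instrumented simply drop out of the sum
            return sum(W_T[t] * h for t, h in organ_dose_rates.items())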

  17. Enhanced Sampling of an Atomic Model with Hybrid Nonequilibrium Molecular Dynamics-Monte Carlo Simulations Guided by a Coarse-Grained Model.

    PubMed

    Chen, Yunjie; Roux, Benoît

    2015-08-11

    Molecular dynamics (MD) trajectories based on a classical equation of motion provide a straightforward, albeit somewhat inefficient, approach to explore and sample the configurational space of a complex molecular system. While a broad range of techniques can be used to accelerate and enhance the sampling efficiency of classical simulations, only algorithms that are consistent with the Boltzmann equilibrium distribution yield a proper statistical mechanical computational framework. Here, a multiscale hybrid algorithm relying simultaneously on all-atom fine-grained (FG) and coarse-grained (CG) representations of a system is designed to improve sampling efficiency by combining the strengths of nonequilibrium molecular dynamics (neMD) and Metropolis Monte Carlo (MC). This CG-guided hybrid neMD-MC algorithm comprises six steps: (1) a FG configuration of an atomic system is dynamically propagated for some period of time using equilibrium MD; (2) the resulting FG configuration is mapped onto a simplified CG model; (3) the CG model is propagated for a brief time interval to yield a new CG configuration; (4) the resulting CG configuration is used as a target to guide the evolution of the FG system; (5) the FG configuration (from step 1) is driven via a nonequilibrium MD (neMD) simulation toward the CG target; (6) the resulting FG configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-ends momentum reversal prescription is used for the neMD trajectories of the FG system to guarantee that the CG-guided hybrid neMD-MC algorithm obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The enhanced sampling achieved with the method is illustrated with a model system with hindered diffusion and explicit-solvent peptide simulations. Illustrative tests indicate that the method can yield a speedup of about 80 times for the model system and up to 21 times for polyalanine and (AAQAA)3 in water.
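
    A toy skeleton of the six-step cycle is sketched below. The propagators are trivial stand-ins for a real MD engine, and the acceptance test on the driving work is one common choice for neMD-MC schemes, not necessarily the exact criterion of this paper:

        import numpy as np

        rng = np.random.default_rng(0)
        md_step = lambda x: x + 0.1 * rng.normal(size=x.shape)   # (1) FG MD
        to_cg   = lambda x: x.reshape(-1, 2).mean(axis=1)        # (2) 2-to-1 bead mapping
        cg_step = lambda c: c + 0.2 * rng.normal(size=c.shape)   # (3) CG dynamics

        def nemd_toward(x, target, k=5.0):
            # (5) drive the FG coordinates toward the CG target; approximate
            # the nonequilibrium work by the change in a guiding spring energy
            pull = np.repeat(target, 2) - x
            x_new = x + 0.5 * pull
            work = 0.5 * k * (((np.repeat(target, 2) - x_new) ** 2).sum()
                              - (pull ** 2).sum())
            return x_new, work

        def hybrid_cycle(x, beta=1.0):
            x1 = md_step(x)
            target = cg_step(to_cg(x1))                          # (4) CG target
            x2, work = nemd_toward(x1, target)
            if rng.random() < min(1.0, np.exp(-beta * work)):    # (6) Metropolis test
                return x2
            return x1                                            # rejected: keep old state

        x = rng.normal(size=8)
        for _ in range(10):
            x = hybrid_cycle(x)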

  19. Methods for modeling non-equilibrium degenerate statistics and quantum-confined scattering in 3D ensemble Monte Carlo transport simulations

    NASA Astrophysics Data System (ADS)

    Crum, Dax M.; Valsaraj, Amithraj; David, John K.; Register, Leonard F.; Banerjee, Sanjay K.

    2016-12-01

    Particle-based ensemble semi-classical Monte Carlo (MC) methods employ quantum corrections (QCs) to address quantum confinement and degenerate carrier populations to model tomorrow's ultra-scaled metal-oxide-semiconductor field-effect transistors. Here, we present the most complete treatment of quantum confinement and carrier degeneracy effects in a three-dimensional (3D) MC device simulator to date, and illustrate their significance through simulation of n-channel Si and III-V FinFETs. Original contributions include our treatment of far-from-equilibrium degenerate statistics and QC-based modeling of surface-roughness scattering, as well as considering quantum-confined phonon and ionized-impurity scattering in 3D. Typical MC simulations approximate degenerate carrier populations as Fermi distributions to model the Pauli blocking (PB) of scattering to occupied final states. To allow for increasingly far-from-equilibrium non-Fermi carrier distributions in ultra-scaled and III-V devices, we instead generate the final-state occupation probabilities used for PB by sampling the local carrier populations as a function of energy and energy valley. This process is aided by the use of fractional carriers or sub-carriers, which minimizes the classical carrier-carrier scattering intrinsically incompatible with degenerate statistics. Quantum-confinement effects are addressed through quantum-correction potentials (QCPs) generated from coupled Schrödinger-Poisson solvers, as commonly done. However, we use these valley- and orientation-dependent QCPs not just to redistribute carriers in real space, or even among energy valleys, but also to calculate confinement-dependent phonon, ionized-impurity, and surface-roughness scattering rates. FinFET simulations are used to illustrate the contributions of each of these QCs. Collectively, these quantum effects can substantially reduce and even eliminate the otherwise expected benefits of the considered In0.53Ga0.47As FinFETs over otherwise identical Si FinFETs, despite the higher thermal velocities in In0.53Ga0.47As. It also may be possible to extend these basic uses of QCPs, however calculated, to still more computationally efficient drift-diffusion and hydrodynamic simulations, and the basic concepts even to compact device modeling.
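
    The sampled-occupation Pauli blocking described above can be sketched as a rejection step against a table estimated from the local carrier ensemble, with no Fermi shape assumed. Binning and array shapes here are hypothetical:

        import numpy as np

        rng = np.random.default_rng(0)
        e_bins = np.linspace(0.0, 1.0, 21)                # eV, assumed binning
        # occupation[valley, bin] would be re-estimated each time step from
        # the (sub-)carrier ensemble in the local mesh cell:
        occupation = rng.uniform(0.0, 1.0, size=(2, 20))  # stand-in values

        def pauli_blocked(valley, e_final):
            # Block the scattering event with probability equal to the
            # sampled occupancy f(valley, E) of the final state
            b = np.clip(np.digitize(e_final, e_bins) - 1, 0,
                        occupation.shape[1] - 1)
            return rng.random() < occupation[valley, b]

        print(pauli_blocked(0, 0.37))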

  20. Atomistic Simulations of High-intensity XFEL Pulses on Diffractive Imaging of Nano-sized System Dynamics

    NASA Astrophysics Data System (ADS)

    Ho, Phay; Knight, Christopher; Bostedt, Christoph; Young, Linda; Tegze, Miklos; Faigel, Gyula

    2016-05-01

    We have developed a large-scale atomistic computational method based on a combined Monte Carlo and Molecular Dynamics (MC/MD) approach to simulate XFEL-induced radiation damage dynamics in complex materials. The MD algorithm is used to propagate the trajectories of electrons, ions, and atoms forward in time, and the quantum nature of interactions with an XFEL pulse is accounted for by an MC method that calculates the probabilities of electronic transitions. Our code scales well with MPI/OpenMP parallelization, and it has been run on Mira, a petascale system at the Argonne Leadership Computing Facility, with more than 50 million particles. Using this code, we have examined the impact of high-intensity 8-keV XFEL pulses on the x-ray diffraction patterns of argon clusters. The obtained patterns show a strong dependence on pulse parameters, providing evidence of significant lattice rearrangement and diffuse scattering. Real-space electronic reconstruction was performed using phase retrieval methods. We found that the structure of the argon cluster can be recovered with atomic resolution even in the presence of considerable radiation damage. This work was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division.

  1. Gas-surface interactions using accommodation coefficients for a dilute and a dense gas in a micro- or nanochannel: heat flux predictions using combined molecular dynamics and Monte Carlo techniques.

    PubMed

    Nedea, S V; van Steenhoven, A A; Markvoort, A J; Spijker, P; Giordano, D

    2014-05-01

    The influence of the gas-surface interactions of a dilute gas confined between two parallel walls on heat flux predictions is investigated using a combined Monte Carlo (MC) and molecular dynamics (MD) approach. The accommodation coefficients are computed from the temperatures of incident and reflected molecules in molecular dynamics and used as effective coefficients in Maxwell-like boundary conditions in Monte Carlo simulations. Hydrophobic and hydrophilic wall interactions are studied, and the effect of the gas-surface interaction potential on the heat flux and other characteristic parameters like density and temperature is shown. The heat flux dependence on the accommodation coefficient is shown for different fluid-wall mass ratios. We find that the accommodation coefficient increases considerably as the mass ratio decreases. An effective map of the heat flux as a function of the accommodation coefficient is given, and we show that MC heat flux predictions using Maxwell boundary conditions based on the accommodation coefficient give good results when compared to pure molecular dynamics heat flux predictions. The accommodation coefficients computed for a dilute gas for different gas-wall interaction parameters and mass ratios are then transferred to compute heat flux predictions for a dense gas. Comparisons of the heat fluxes derived using explicit MD, MC with Maxwell-like boundary conditions based on the accommodation coefficients, and pure Maxwell boundary conditions are discussed. A map of the heat flux dependence on the accommodation coefficients for a dense gas, and the effective accommodation coefficients for different gas-wall interactions, are given. Finally, this approach is applied to study the gas-surface interactions of argon and xenon molecules on a platinum surface. The derived accommodation coefficients are compared with values from experimental results.
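
    The handoff from MD to MC rests on the standard thermal accommodation coefficient, alpha = (Ti - Tr) / (Ti - Tw), computed from the incident, reflected, and wall temperatures; alpha = 1 means full accommodation and alpha = 0 specular reflection. A minimal sketch with invented temperatures:

        def accommodation_coefficient(t_incident, t_reflected, t_wall):
            # alpha = (Ti - Tr) / (Ti - Tw)
            return (t_incident - t_reflected) / (t_incident - t_wall)

        # MD supplies the temperatures; the resulting alpha parameterizes the
        # Maxwell-like boundary condition of the MC solver:
        alpha = accommodation_coefficient(350.0, 320.0, 300.0)  # -> 0.6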

  2. Kinetic Monte Carlo Simulations of Rod Eutectics and the Surface Roughening Transition in Binary Alloys

    NASA Technical Reports Server (NTRS)

    Bentz, Daniel N.; Betush, William; Jackson, Kenneth A.

    2003-01-01

    In this paper we report on two related topics: kinetic Monte Carlo simulations of the steady-state growth of rod eutectics from the melt, and a study of the surface roughness of binary alloys. We have implemented a three-dimensional kinetic Monte Carlo (kMC) simulation with diffusion by pair exchange only in the liquid phase. Entropies of fusion are first chosen to fit the surface roughness of the pure materials, and the bond energies are derived from the equilibrium phase diagram by treating the solid and liquid as regular and ideal solutions, respectively. A simple cubic lattice oriented in the {100} direction is used. Growth of the rods is initiated from columns of pure B material embedded in an A matrix, arranged in a close-packed array with semi-periodic boundary conditions. The simulation cells typically have dimensions of 50 by 87 by 200 unit cells. The steady-state growth is consistent with the Jackson-Hunt model. In the kMC simulations, which use the spin-one Ising model, growth of each phase is faceted or nonfaceted depending on its entropy of fusion. There have been many studies of the surface roughening transition in single-component systems, but none for binary alloy systems. The location of the surface roughening transition for the phases of a eutectic alloy determines whether the eutectic morphology will be regular or irregular. We have conducted a study of surface roughness on the spin-one Ising model with diffusion using kMC. The surface roughness was found to scale with the melting temperature of the alloy as given by the liquidus line on the equilibrium phase diagram. The density of missing lateral bonds at the surface was used as a measure of surface roughness.
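
    The engine underneath such simulations is the standard rejection-free kMC step: pick an event with probability proportional to its rate and advance the clock by an exponential deviate. A minimal sketch with made-up rates:

        import numpy as np

        rng = np.random.default_rng(0)

        def kmc_step(rates, t):
            # rates: one rate constant per possible event (e.g., each liquid
            # pair exchange or attachment/detachment move)
            total = rates.sum()
            event = np.searchsorted(np.cumsum(rates), rng.random() * total)
            t += -np.log(rng.random()) / total  # exponential waiting time
            return event, t

        rates = np.array([2.0, 0.5, 0.5, 1.0])  # illustrative only
        event, t = kmc_step(rates, 0.0)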

  3. On the Monte Carlo simulation of electron transport in the sub-1 keV energy range.

    PubMed

    Thomson, Rowan M; Kawrakow, Iwan

    2011-08-01

    The validity of "classic" Monte Carlo (MC) simulations of electron and positron transport at sub-1 keV energies is investigated in the context of quantum theory. Quantum theory dictates that uncertainties on the position and energy-momentum four-vectors of radiation quanta obey Heisenberg's uncertainty relation; however, these uncertainties are neglected in "classical" MC simulations of radiation transport in which position and momentum are known precisely. Using the quantum uncertainty relation and electron mean free path, the magnitudes of uncertainties on electron position and momentum are calculated for different kinetic energies; a validity bound on the classical simulation of electron transport is derived. In order to satisfy the Heisenberg uncertainty principle, uncertainties of 5% must be assigned to position and momentum for 1 keV electrons in water; at 100 eV, these uncertainties are 17 to 20% and are even larger at lower energies. In gaseous media such as air, these uncertainties are much smaller (less than 1% for electrons with energy 20 eV or greater). The classical Monte Carlo transport treatment is questionable for sub-1 keV electrons in condensed water as uncertainties on position and momentum must be large (relative to electron momentum and mean free path) to satisfy the quantum uncertainty principle. Simulations which do not account for these uncertainties are not faithful representations of the physical processes, calling into question the results of MC track structure codes simulating sub-1 keV electron transport. Further, the large difference in the scale at which quantum effects are important in gaseous and condensed media suggests that track structure measurements in gases are not necessarily representative of track structure in condensed materials on a micrometer or a nanometer scale.
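
    The scale of the argument can be reproduced with a back-of-envelope calculation: demanding equal relative uncertainties f on position (relative to the mean free path) and on momentum, Heisenberg's relation gives f >= sqrt(hbar / (2 * lambda * p)). The ~1 nm mean free path below is an assumed order of magnitude for 1 keV electrons in water, not a value taken from the paper:

        import numpy as np

        HBAR = 1.0546e-34  # J s
        M_E = 9.109e-31    # kg
        EV = 1.602e-19     # J

        def min_relative_uncertainty(kinetic_ev, mfp_m):
            # delta_x = f * lambda and delta_p = f * p with
            # delta_x * delta_p >= hbar / 2 (nonrelativistic momentum)
            p = np.sqrt(2.0 * M_E * kinetic_ev * EV)
            return np.sqrt(HBAR / (2.0 * mfp_m * p))

        print(min_relative_uncertainty(1000.0, 1e-9))  # ~0.06, i.e., a few percent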

  4. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be reduced significantly, to clinically feasible levels, without the sizable investment of a local high-performance cluster. This study investigated the reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data were initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size, and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will become increasingly attractive.
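
    The reported scaling can be summarized by fitting runtime against node count to t(N) = a * N^b in log-log space. Only the two endpoint runtimes below come from the study; the intermediate values are invented for illustration:

        import numpy as np

        nodes = np.array([1, 2, 5, 10, 20])
        runtime_min = np.array([53.0, 27.5, 11.8, 6.2, 3.11])

        b, log_a = np.polyfit(np.log(nodes), np.log(runtime_min), 1)
        print(f"t(N) ~ {np.exp(log_a):.1f} * N^{b:.2f} minutes")
        # b close to -1 indicates near-ideal inverse scaling with cluster size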

  5. Patient-specific CT dosimetry calculation: a feasibility study.

    PubMed

    Fearon, Thomas; Xie, Huchen; Cheng, Jason Y; Ning, Holly; Zhuge, Ying; Miller, Robert W

    2011-11-15

    Current estimation of radiation dose from computed tomography (CT) scans on patients has relied on the measurement of the Computed Tomography Dose Index (CTDI) in standard cylindrical phantoms, and on calculations based on mathematical representations of "standard man". Radiation dose to both adult and pediatric patients from a CT scan has been a concern, as noted in recent reports. The purpose of this study was to investigate the feasibility of adapting a radiation treatment planning system (RTPS) to provide patient-specific CT dosimetry. A radiation treatment planning system was modified to calculate patient-specific CT dose distributions, which can be represented by dose at specific points within an organ of interest, as well as organ dose-volumes (after image segmentation), for a GE LightSpeed Ultra Plus CT scanner. The RTPS calculation algorithm is based on a semi-empirical, measured correction-based algorithm, which has been well established in the radiotherapy community. Digital representations of the physical phantoms (virtual phantoms) were acquired with the GE CT scanner in axial mode. Thermoluminescent dosimeter (TLD) measurements in pediatric anthropomorphic phantoms were utilized to validate the dose at specific points within organs of interest relative to RTPS calculations and Monte Carlo simulations of the same virtual phantoms (digital representations). Congruence of the calculated and measured point doses for the same physical anthropomorphic phantom geometry was used to verify the feasibility of the method. The RTPS algorithm can be extended to calculate the organ dose by calculating a dose distribution point-by-point for a designated volume. Electron Gamma Shower (EGSnrc) codes for radiation transport calculations, developed by the National Research Council of Canada (NRCC), were utilized to perform the Monte Carlo (MC) simulation. In general, the RTPS and MC dose calculations are within 10% of the TLD measurements for the infant and child chest scans. With respect to the dose comparisons for the head, the RTPS dose calculations are slightly higher (10%-20%) than the TLD measurements, while the MC results were within 10% of the TLD measurements. The advantage of the algebraic dose calculation engine of the RTPS is a substantially reduced computation time (minutes vs. days) relative to Monte Carlo calculations, as well as providing patient-specific dose estimation. It also provides the basis for a more elaborate reporting of dosimetric results, such as patient-specific organ dose-volumes after image segmentation.

  6. Recent developments and comprehensive evaluations of a GPU-based Monte Carlo package for proton therapy

    PubMed Central

    Qin, Nan; Botas, Pablo; Giantsoudi, Drosoula; Schuemann, Jan; Tian, Zhen; Jiang, Steve B.; Paganetti, Harald; Jia, Xun

    2016-01-01

    Monte Carlo (MC) simulation is commonly considered the most accurate dose calculation method for proton therapy. Aiming at achieving fast MC dose calculations for clinical applications, we have previously developed a GPU-based MC tool, gPMC. In this paper, we report our recent updates on gPMC in terms of its accuracy, portability, and functionality, as well as comprehensive tests on this tool. The new version, gPMC v2.0, was developed under the OpenCL environment to enable portability across different computational platforms. Physics models of nuclear interactions were refined to improve calculation accuracy. Scoring functions of gPMC were expanded to enable tallying particle fluence, dose deposited by different particle types, and dose-averaged linear energy transfer (LETd). A multiple counter approach was employed to improve efficiency by reducing the frequency of memory writing conflicts at scoring. For dose calculation, accuracy improvements over gPMC v1.0 were observed in both water phantom cases and a patient case. For a prostate cancer case planned using high-energy proton beams, dose discrepancies in the beam entrance and target region seen in gPMC v1.0 with respect to the gold standard tool for proton Monte Carlo simulations (TOPAS) were substantially reduced, and the gamma test passing rate (1%/1 mm) was improved from 82.7% to 93.1%. The average relative difference in LETd between gPMC and TOPAS was 1.7%. Average relative differences in dose deposited by primary, secondary, and other heavier particles were within 2.3%, 0.4%, and 0.2%. Depending on source proton energy and phantom complexity, it took 8 to 17 seconds on an AMD Radeon R9 290x GPU to simulate 10⁷ source protons, achieving less than 1% average statistical uncertainty. As the beam size was reduced from 10×10 cm² to 1×1 cm², the time spent on scoring increased by only 4.8% with eight counters, in contrast to a 40% increase using only one counter. With the OpenCL environment, the portability of gPMC v2.0 was enhanced. It was successfully executed on different CPUs and GPUs, and its performance on different devices varied depending on processing power and hardware structure. PMID:27694712

  7. Monte Carlo simulations of the n_TOF lead spallation target with the Geant4 toolkit: A benchmark study

    NASA Astrophysics Data System (ADS)

    Lerendegui-Marco, J.; Cortés-Giraldo, M. A.; Guerrero, C.; Quesada, J. M.; Meo, S. Lo; Massimi, C.; Barbagallo, M.; Colonna, N.; Mancussi, D.; Mingrone, F.; Sabaté-Gilarte, M.; Vannini, G.; Vlachoudis, V.; Aberle, O.; Andrzejewski, J.; Audouin, L.; Bacak, M.; Balibrea, J.; Bečvář, F.; Berthoumieux, E.; Billowes, J.; Bosnar, D.; Brown, A.; Caamaño, M.; Calviño, F.; Calviani, M.; Cano-Ott, D.; Cardella, R.; Casanovas, A.; Cerutti, F.; Chen, Y. H.; Chiaveri, E.; Cortés, G.; Cosentino, L.; Damone, L. A.; Diakaki, M.; Domingo-Pardo, C.; Dressler, R.; Dupont, E.; Durán, I.; Fernández-Domínguez, B.; Ferrari, A.; Ferreira, P.; Finocchiaro, P.; Göbel, K.; Gómez-Hornillos, M. B.; García, A. R.; Gawlik, A.; Gilardoni, S.; Glodariu, T.; Gonçalves, I. F.; González, E.; Griesmayer, E.; Gunsing, F.; Harada, H.; Heinitz, S.; Heyse, J.; Jenkins, D. G.; Jericha, E.; Käppeler, F.; Kadi, Y.; Kalamara, A.; Kavrigin, P.; Kimura, A.; Kivel, N.; Kokkoris, M.; Krtička, M.; Kurtulgil, D.; Leal-Cidoncha, E.; Lederer, C.; Leeb, H.; Lonsdale, S. J.; Macina, D.; Marganiec, J.; Martínez, T.; Masi, A.; Mastinu, P.; Mastromarco, M.; Maugeri, E. A.; Mazzone, A.; Mendoza, E.; Mengoni, A.; Milazzo, P. M.; Musumarra, A.; Negret, A.; Nolte, R.; Oprea, A.; Patronis, N.; Pavlik, A.; Perkowski, J.; Porras, I.; Praena, J.; Radeck, D.; Rauscher, T.; Reifarth, R.; Rout, P. C.; Rubbia, C.; Ryan, J. A.; Saxena, A.; Schillebeeckx, P.; Schumann, D.; Smith, A. G.; Sosnin, N. V.; Stamatopoulos, A.; Tagliente, G.; Tain, J. L.; Tarifeño-Saldivia, A.; Tassan-Got, L.; Valenta, S.; Variale, V.; Vaz, P.; Ventura, A.; Vlastou, R.; Wallner, A.; Warren, S.; Woods, P. J.; Wright, T.; Žugec, P.

    2017-09-01

    Monte Carlo (MC) simulations are an essential tool to determine fundamental features of a neutron beam, such as the neutron flux or the γ-ray background, that sometimes cannot be measured, or at least not at every position or in every energy range. Until recently, the most widely used MC codes in this field had been MCNPX and FLUKA. However, the Geant4 toolkit has also become a competitive code for the transport of neutrons after the development of the native Geant4 format for neutron data libraries, G4NDL. In this context, we present the Geant4 simulations of the neutron spallation target of the n_TOF facility at CERN, done with version 10.1.1 of the toolkit. The first goal was the validation of the intra-nuclear cascade models implemented in the code using, as a benchmark, the characteristics of the neutron beam measured at the first experimental area (EAR1), especially the neutron flux and energy distribution, and the time distribution of neutrons of equal kinetic energy, the so-called Resolution Function. The second goal was the development of a Monte Carlo tool aimed at providing useful calculations for both the analysis and planning of the upcoming measurements at the new experimental area (EAR2) of the facility.

  8. The SM and NLO Multileg and SM MC Working Groups: Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alcaraz Maestre, J.; et al.

    2012-03-01

    The 2011 Les Houches workshop was the first to confront LHC data. In the two years since the previous workshop there have been significant advances in both soft and hard QCD, particularly in the areas of multi-leg NLO calculations, the inclusion of those NLO calculations into parton shower Monte Carlos, and the tuning of the non-perturbative parameters of those Monte Carlos. These proceedings describe the theoretical advances that have taken place, the impact of the early LHC data, and the areas for future development.

  9. KMCThinFilm: A C++ Framework for the Rapid Development of Lattice Kinetic Monte Carlo (kMC) Simulations of Thin Film Growth

    DTIC Science & Technology

    2015-09-01

    If the simulation domain is set to be a certain size, then that presents a hard ceiling on the thickness of a film that may be grown.

  10. Monte Carlo treatment planning with modulated electron radiotherapy: framework development and application

    NASA Astrophysics Data System (ADS)

    Alexander, Andrew William

    Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP) provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. Energy- and intensity-modulated electron radiotherapy (MERT) is a developing treatment modality with the fundamental capability to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head-and-neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform-independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose-volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. The structure of the MERT planning toolkit software and optimization algorithms are demonstrated. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head-and-neck sarcoma site using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed-beam photon and electron treatment planning. Photon optimization treatment planning tools were included within the MERT planning toolkit for the purpose of mixed-beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and the development of an inverse planning system for photon, electron, or mixed-beam radiotherapy (MBRT). The justification and validation of this work is found within the results of the planning studies, which have demonstrated dosimetric advantages to using MERT or MBRT in comparison to clinical treatment alternatives.

  11. Deflation as a method of variance reduction for estimating the trace of a matrix inverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by additionally assuming that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large-scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP the two together reduce variance by a factor of over 150 compared to MC. For this, we pre-computed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we performed one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
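
    A minimal dense-matrix sketch of the Hutchinson estimator with optional deflation; a lattice-QCD code would replace np.linalg.solve with its iterative solver, and V would hold the precomputed near-null singular vectors (assumed orthonormal here):

        import numpy as np

        def hutchinson_trace_inv(A, n_samples, rng, V=None):
            # E[z^T A^{-1} z] = tr(A^{-1}) for Rademacher probes z
            n = A.shape[0]
            est = 0.0
            for _ in range(n_samples):
                z = rng.choice([-1.0, 1.0], size=n)
                if V is not None:
                    z -= V @ (V.T @ z)  # project the probe off the deflated space
                est += z @ np.linalg.solve(A, z)
            est /= n_samples
            if V is not None:
                est += np.trace(V.T @ np.linalg.solve(A, V))  # exact deflated part
            return est

        rng = np.random.default_rng(1)
        A = np.diag(np.linspace(0.1, 10.0, 200)) + 0.01 * rng.normal(size=(200, 200))
        print(hutchinson_trace_inv(A, 50, rng))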

  12. Deflation as a method of variance reduction for estimating the trace of a matrix inverse

    DOE PAGES

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas

    2017-04-06

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method for such computations is the Hutchinson method, a Monte Carlo (MC) average over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by additionally assuming that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large-scale application, "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation applied directly to MC yields a limited improvement of 30% in our problem, when combined with HP it reduces the variance by a factor of over 150 compared to MC. For this, we pre-computed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.

  13. Ion channeling study of defects in compound crystals using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Turos, A.; Jozwik, P.; Nowicki, L.; Sathish, N.

    2014-08-01

    Ion channeling is a well-established technique for determining the structural properties of crystalline materials. Defect depth profiles have usually been determined based on the two-beam model developed by Bøgh (1968) [1]. As long as the main research interest was focused on single-element crystals, this was considered sufficiently accurate. A new challenge emerged with the growing technological importance of compound single crystals and epitaxial heterostructures: the overlap of partial spectra due to different sublattices and the formation of complicated defect structures make the two-beam method hardly applicable. The solution is provided by Monte Carlo computer simulations. Our paper reviews the principal aspects of this approach and the recent developments in the McChasy simulation code. The latter makes it possible to distinguish between randomly displaced atoms (RDA) and extended defects (dislocations, loops, etc.); hence, complex defect structures can be characterized by the relative content of these two components. The next refinement of the code consists of a detailed parameterization of dislocations and dislocation loops. Defect profiles for a variety of compound crystals (GaN, ZnO, SrTiO3) have been measured and evaluated using the McChasy code. Damage accumulation curves for RDA and extended defects revealed a non-monotonic defect buildup with some characteristic steps, with the transition to each stage governed by a different driving force. As shown by complementary high-resolution XRD measurements, lattice strain plays a crucial role here and can be correlated with the concentration of extended defects.
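
    For contrast with the MC approach, a minimal sketch of the simplified two-beam extraction is given below: the relative defect concentration is read off depth by depth from the normalized aligned yields of a damaged and a virgin crystal. The yield values are invented and the dechanneling correction of the full Bøgh model is omitted; the point is that a single per-depth scaling cannot separate RDA from extended defects, which is what the McChasy simulations address.

        import numpy as np

        # Illustrative aligned yields (normalized to the random spectrum)
        # versus depth for a virgin and an implanted crystal; made-up numbers.
        depth = np.linspace(0.0, 500.0, 6)               # nm
        chi_virgin = np.array([0.02, 0.03, 0.04, 0.05, 0.06, 0.07])
        chi_damaged = np.array([0.02, 0.20, 0.45, 0.30, 0.15, 0.09])

        # Simplified two-beam estimate of the relative defect concentration,
        # neglecting the dechanneled-fraction correction of the full model:
        #   f_D(z) = (chi_D(z) - chi_V(z)) / (1 - chi_V(z))
        f_defect = (chi_damaged - chi_virgin) / (1.0 - chi_virgin)
        for z, f in zip(depth, f_defect):
            print(f"depth {z:5.0f} nm: defect fraction {f:.3f}")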

  14. Re-186 and Sm-153 dosimetry based on scintigraphic imaging data in skeletal metastasis palliative treatment and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Andreou, M.; Lagopati, N.; Lyra, M.

    2011-09-01

    Optimum treatment planning for patients suffering from painful skeletal metastases requires accurate calculations of the absorbed dose in metastatic lesions and critical organs, such as red marrow. Delivering high doses to tumor cells while limiting the radiation dose to normal tissue is the key to successful palliation treatment. The aim of this study is to compare the dosimetric calculations obtained by Monte Carlo (MC) simulation and the MIRDOSE model in therapeutic schemes for skeletal metastatic lesions with Rhenium-186 (Sn)-HEDP and Samarium-153-EDTMP. A bolus injection of 1295 MBq (35 mCi) of Re-186-HEDP was infused in 11 patients with multiple skeletal metastases; the administered dose for the 8 patients who received Sm-153 was 1 mCi/kg. Planar scintigraphic images for the two groups of patients were obtained 24 h, 48 h and 72 h post injection with an Elscint Apex SPX gamma camera. The images were processed using ROI quantitative methods to determine residence times and radionuclide uptakes. Dosimetric calculations were performed from the patient-specific scintigraphic data with the MIRDOSE3 code of MIRD. MCNPX was also employed, simulating the distribution of the radioisotope in the ROI and calculating the absorbed doses in the metastatic lesion and in critical organs. In summary, there is good agreement between the results derived from the two pathways, the patient-specific and the mathematical, with a deviation of less than 9% for planar scintigraphic data compared to MC for both radiopharmaceuticals.
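
    A minimal sketch of the residence-time step follows: a mono-exponential is fitted to ROI uptake fractions at the three imaging time points and integrated analytically. The uptake values and the log-linear fit are illustrative assumptions; the study's actual calculations use ROI quantification and the MIRDOSE3 code.

        import numpy as np

        # Hypothetical ROI uptake fractions (fraction of injected activity,
        # decay-uncorrected) at the three imaging time points; made-up values.
        t = np.array([24.0, 48.0, 72.0])                 # hours post injection
        frac = np.array([0.12, 0.075, 0.047])

        # Mono-exponential fit  f(t) = f0 * exp(-lam * t)  via log-linear
        # least squares; lam is the effective (biological + physical) constant.
        slope, intercept = np.polyfit(t, np.log(frac), 1)
        lam, f0 = -slope, np.exp(intercept)

        # MIRD residence time: integral of f(t) from 0 to infinity.
        tau = f0 / lam                                   # hours
        print(f"lambda_eff = {lam:.4f} 1/h, residence time = {tau:.1f} h")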

  15. Dosimetric accuracy of a treatment planning system for actively scanned proton beams and small target volumes: Monte Carlo and experimental validation

    NASA Astrophysics Data System (ADS)

    Magro, G.; Molinelli, S.; Mairani, A.; Mirandola, A.; Panizza, D.; Russo, S.; Ferrari, A.; Valvo, F.; Fossati, P.; Ciocca, M.

    2015-09-01

    This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5-30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo® TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the unique goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus® chamber. An EBT3® film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS-calculated and MC-simulated values were mainly due to the different modeling of the lateral spread and were found to be related to the field-to-spot size ratio. The accuracy of the TPS proved to be clinically acceptable in all cases except very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification.
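
    As a simple illustration of the acceptance check implied above, the sketch below computes the percent deviation of TPS centre doses from an MC reference as a function of cube size and flags values outside a nominal ±3% tolerance. All numbers, including the tolerance, are invented for illustration and are not the published results.

        import numpy as np

        # Hypothetical centre-of-volume doses (Gy (RBE)) per cube size.
        cube_side_mm = np.array([5, 10, 20, 30])
        d_tps = np.array([2.10, 2.05, 2.01, 2.00])   # analytical TPS
        d_mc = np.array([1.95, 2.00, 2.00, 2.00])    # MC reference

        # Percent deviation of the TPS from the MC reference, the quantity
        # used to judge clinical acceptability (nominal +-3% here).
        dev = 100.0 * (d_tps - d_mc) / d_mc
        for s, d in zip(cube_side_mm, dev):
            flag = "FAIL" if abs(d) > 3.0 else "ok"
            print(f"{s:2d} mm cube: TPS-MC deviation {d:+.1f}% {flag}")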

  16. Dosimetric accuracy of a treatment planning system for actively scanned proton beams and small target volumes: Monte Carlo and experimental validation.

    PubMed

    Magro, G; Molinelli, S; Mairani, A; Mirandola, A; Panizza, D; Russo, S; Ferrari, A; Valvo, F; Fossati, P; Ciocca, M

    2015-09-07

    This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5-30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo® TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the unique goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus® chamber. An EBT3® film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS-calculated and MC-simulated values were mainly due to the different modeling of the lateral spread and were found to be related to the field-to-spot size ratio. The accuracy of the TPS proved to be clinically acceptable in all cases except very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification.

  17. A compression algorithm for the combination of PDF sets.

    PubMed

    Carrazza, Stefano; Latorre, José I; Rojo, Juan; Watt, Graeme

    The current PDF4LHC recommendation to estimate uncertainties due to parton distribution functions (PDFs) in theoretical predictions for LHC processes involves the combination of separate predictions computed using PDF sets from different groups, each of which comprises a relatively large number of either Hessian eigenvectors or Monte Carlo (MC) replicas. While many fixed-order and parton shower programs allow the evaluation of PDF uncertainties for a single PDF set at no additional CPU cost, this feature is not universal, and, moreover, the a posteriori combination of the predictions using at least three different PDF sets is still required. In this work, we present a strategy for the statistical combination of individual PDF sets, based on the MC representation of Hessian sets, followed by a compression algorithm for the reduction of the number of MC replicas. We illustrate our strategy with the combination and compression of the recent NNPDF3.0, CT14 and MMHT14 NNLO PDF sets. The resulting compressed Monte Carlo PDF sets are validated at the level of parton luminosities and LHC inclusive cross sections and differential distributions. We determine that around 100 replicas provide an adequate representation of the probability distribution for the original combined PDF set, suitable for general applications to LHC phenomenology.
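
    A crude sketch of the compression idea, under stated assumptions: given a large set of MC replicas, search for a 100-replica subset whose first two moments best match the full set. The random stand-in replicas, the moment-only error function, and the stochastic search are simplifications; the published algorithm uses a genetic algorithm and a richer set of statistical estimators (higher moments, correlations, Kolmogorov distances).

        import numpy as np

        rng = np.random.default_rng(2)

        # Stand-in "prior" set: 1000 MC replicas sampled on a grid of
        # 50 (x, Q) points; real sets come from LHAPDF, not random numbers.
        n_rep, n_points, n_comp = 1000, 50, 100
        prior = rng.lognormal(sigma=0.3, size=(n_rep, n_points))

        def moment_error(subset):
            """Distance between first two moments of subset and prior."""
            e_mean = np.abs(subset.mean(0) - prior.mean(0)).sum()
            e_std = np.abs(subset.std(0) - prior.std(0)).sum()
            return e_mean + e_std

        # Crude stochastic search for a 100-replica compressed set.
        best_idx, best_err = None, np.inf
        for _ in range(2000):
            idx = rng.choice(n_rep, size=n_comp, replace=False)
            err = moment_error(prior[idx])
            if err < best_err:
                best_idx, best_err = idx, err
        print(f"best moment error with {n_comp} replicas: {best_err:.3f}")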

  18. Centrality measures highlight proton traps and access points to proton highways in kinetic Monte Carlo trajectories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krueger, Rachel A.; Haibach, Frederick G.; Fry, Dana L.

    2015-04-21

    A centrality measure based on the time of first returns rather than the number of steps is developed and applied to finding proton traps and access points to proton highways in the doped perovskite oxides AZr0.875D0.125O3, where A is Ba or Sr and the dopant D is Y or Al. The high-centrality region near the dopant is wider in the SrZrO3 systems than in the BaZrO3 systems. In the aluminum-doped systems, a region of intermediate centrality (secondary region) is found in a plane away from the dopant. Kinetic Monte Carlo (kMC) trajectories show that this secondary region is an entry to fast conduction planes in the aluminum-doped systems, in contrast to the highest-centrality area near the dopant trap. The yttrium-doped systems do not show this secondary region because the fast conduction routes are in the same plane as the dopant and hence already in the high-centrality trapped area. This centrality measure complements kMC by highlighting key areas in trajectories. The limiting activation barriers found via kMC are in very good agreement with experiments and are related to the barriers to escape dopant traps.
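
    A toy version of a first-return centrality can be computed directly from a kMC trajectory, as in the sketch below: sites with short mean first-return times (revisited quickly) score high, flagging trap-like regions. The random stand-in trajectory and the inverse-mean-return-time definition are illustrative assumptions, not the paper's exact measure.

        import numpy as np
        from collections import defaultdict

        rng = np.random.default_rng(3)

        # Toy kMC trajectory: a sequence of visited sites and the continuous
        # time of each visit; a real trajectory comes from a kMC propagator.
        n_steps, n_sites = 20000, 30
        sites = rng.integers(0, n_sites, size=n_steps)
        times = np.cumsum(rng.exponential(1.0, size=n_steps))

        # Collect first-return times: short returns flag trapping regions,
        # long ones flag rarely revisited sites.
        last_visit = {}
        returns = defaultdict(list)
        for site, t in zip(sites, times):
            if site in last_visit:
                returns[site].append(t - last_visit[site])
            last_visit[site] = t

        centrality = {s: 1.0 / np.mean(dt) for s, dt in returns.items()}
        top = sorted(centrality, key=centrality.get, reverse=True)[:5]
        print("highest-centrality (most trap-like) sites:", top)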

  19. Assessment of mean-field microkinetic models for CO methanation on stepped metal surfaces using accelerated kinetic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Andersen, Mie; Plaisance, Craig P.; Reuter, Karsten

    2017-10-01

    First-principles screening studies aimed at predicting the catalytic activity of transition metal (TM) catalysts have traditionally been based on mean-field (MF) microkinetic models, which neglect the effect of spatial correlations in the adsorbate layer. Here we critically assess the accuracy of such models for the specific case of CO methanation over stepped metals by comparing to spatially resolved kinetic Monte Carlo (kMC) simulations. We find that the typical low diffusion barriers offered by metal surfaces can be significantly increased at step sites, which results in persisting correlations in the adsorbate layer. As a consequence, MF models may overestimate the catalytic activity of TM catalysts by several orders of magnitude. The potential higher accuracy of kMC models comes at a higher computational cost, which can be especially challenging for surface reactions on metals due to a large disparity in the time scales of different processes. In order to overcome this issue, we implement and test a recently developed algorithm for achieving temporal acceleration of kMC simulations. While the algorithm overall performs quite well, we identify some challenging cases which may lead to a breakdown of acceleration algorithms and discuss possible directions for future algorithm development.
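
    The time-scale disparity mentioned above is easy to see in a minimal rejection-free (BKL-type) kMC loop, sketched below with invented rates: the fast diffusion channel consumes almost every event, which is exactly what temporal acceleration schemes target by rescaling quasi-equilibrated processes. The process list and rates are stand-ins, not the CO methanation model of the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        # Invented process list with a large rate disparity: diffusion is
        # orders of magnitude faster than adsorption, desorption or reaction.
        rates = {"CO_adsorb": 1e3, "CO_desorb": 8e2, "diffuse": 1e7, "react": 1.0}
        procs = list(rates)
        r = np.array([rates[p] for p in procs])
        cum, total = np.cumsum(r), r.sum()

        t = 0.0
        counts = dict.fromkeys(procs, 0)
        for _ in range(100_000):
            # rejection-free selection: pick a process proportional to its rate
            p = procs[np.searchsorted(cum, rng.random() * total)]
            counts[p] += 1
            # advance the clock by an exponentially distributed waiting time
            t += rng.exponential(1.0 / total)

        print(f"simulated time: {t:.3e} s")
        print("event counts:", counts)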

  20. Monte Carlo grain growth modeling with local temperature gradients

    NASA Astrophysics Data System (ADS)

    Tan, Y.; Maniatty, A. M.; Zheng, C.; Wen, J. T.

    2017-09-01

    This work investigated the development of a Monte Carlo (MC) simulation approach to modeling grain growth in the presence of a non-uniform temperature field that may vary with time. We first scale the MC model to physical growth processes by fitting experimental data. Based on the scaling relationship, we derive a grid site selection probability (SSP) function to account for the effect of a spatially varying temperature field. The SSP function is based on the differential MC step, which allows it to naturally handle time-varying temperature fields as well. We verify the model and compare the predictions to other existing formulations (Godfrey and Martin 1995 Phil. Mag. A 72 737-49; Radhakrishnan and Zacharia 1995 Metall. Mater. Trans. A 26 2123-30) in simple two-dimensional cases with only spatially varying temperature fields, where the predicted grain growth in regions of constant temperature is expected to be the same as in the isothermal case. We also test the model in a more realistic three-dimensional case with a temperature field varying in both space and time, modeling grain growth in the heat-affected zone of a weld. We believe the newly proposed approach is promising for modeling grain growth in material manufacturing processes that involve time-dependent local temperature gradients.
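
    A minimal sketch of temperature-biased site selection in a 2D Potts grain-growth model is given below: sites are drawn with a probability weight that grows with local temperature, so hotter regions coarsen faster. The Arrhenius-style weight, lattice size, and acceptance rule are illustrative assumptions; the paper's SSP function is instead derived from a scaling fit to experimental growth kinetics.

        import numpy as np

        rng = np.random.default_rng(5)

        # 2D Potts grain-growth sketch with a spatially varying temperature.
        N, Q, n_steps = 64, 16, 50_000
        spins = rng.integers(0, Q, size=(N, N))
        T = np.broadcast_to(np.linspace(0.5, 1.5, N), (N, N)).copy()  # hot right edge
        ssp = np.exp(-1.0 / T).ravel()       # stand-in Arrhenius-like SSP
        ssp /= ssp.sum()

        def unlike(s, i, j, q):
            """Unlike nearest-neighbour bonds if site (i, j) had state q."""
            return sum(q != v for v in (s[(i - 1) % N, j], s[(i + 1) % N, j],
                                        s[i, (j - 1) % N], s[i, (j + 1) % N]))

        for idx in rng.choice(N * N, size=n_steps, p=ssp):  # SSP-biased picks
            i, j = divmod(idx, N)
            q_new = rng.integers(0, Q)
            dE = unlike(spins, i, j, q_new) - unlike(spins, i, j, spins[i, j])
            if dE <= 0 or rng.random() < np.exp(-dE / T[i, j]):
                spins[i, j] = q_new

        # coarse measure of remaining boundary: fraction of unlike bonds
        bnd = ((spins != np.roll(spins, 1, 0)).mean()
               + (spins != np.roll(spins, 1, 1)).mean()) / 2
        print(f"boundary fraction after growth: {bnd:.3f}")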
