Science.gov

Sample records for based monte carlo

  1. Accelerated GPU based SPECT Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphics processing units (GPUs) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms, derived from the same cylindrical phantom acquisition, was between 18 and 27 for the different radioisotopes. Moreover, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor of up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  2. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
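
    The random-sampling and tally fundamentals listed above are easy to demonstrate outside any production code. The sketch below (an illustrative Python toy, unrelated to RACER) uses inverse-transform sampling of exponential free paths to tally transmission through a purely absorbing slab, for which the analytic answer exp(-sigma_t * d) is available as a check.

      import math
      import random

      def slab_transmission(sigma_t, thickness, n_particles=100_000, seed=1):
          """Toy one-speed, absorption-only transport: tally the fraction of
          particles that cross a slab without colliding."""
          rng = random.Random(seed)
          transmitted = 0
          for _ in range(n_particles):
              # Inverse-transform sampling: with xi ~ U(0,1), the distance
              # s = -ln(1 - xi)/sigma_t follows the exponential free-path law.
              s = -math.log(1.0 - rng.random()) / sigma_t
              if s > thickness:
                  transmitted += 1
          return transmitted / n_particles

      # Monte Carlo estimate vs. the analytic attenuation exp(-sigma_t * d).
      print(slab_transmission(1.0, 2.0), math.exp(-2.0))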

  3. Mesh Optimization for Monte Carlo-Based Optical Tomography

    PubMed Central

    Edmans, Andrew; Intes, Xavier

    2015-01-01

    Mesh-based Monte Carlo techniques for optical imaging allow for accurate modeling of light propagation in complex biological tissues. Recently, they have been developed within an efficient computational framework to be used as a forward model in optical tomography. However, commonly employed adaptive mesh discretization techniques have not yet been implemented for Monte Carlo based tomography. Herein, we propose a methodology to optimize the mesh discretization and analytically rescale the associated Jacobian based on the characteristics of the forward model. We demonstrate that this method maintains the accuracy of the forward model even in the case of temporal data sets while allowing for significant coarsening or refinement of the mesh. PMID:26566523

  4. Skin image reconstruction using Monte Carlo based color generation

    NASA Astrophysics Data System (ADS)

    Aizu, Yoshihisa; Maeda, Takaaki; Kuwahara, Tomohiro; Hirao, Tetsuji

    2010-11-01

    We propose a novel method of skin image reconstruction based on color generation using Monte Carlo simulation of spectral reflectance in a nine-layered skin tissue model. The RGB image and spectral reflectance of human skin are obtained with an RGB camera and a spectrophotometer, respectively. The skin image is separated into a color component and a texture component. The measured spectral reflectance is used to evaluate the scattering and absorption coefficients in each of the nine layers, which are necessary for the Monte Carlo simulation. Various skin colors are generated by Monte Carlo simulation of spectral reflectance under given conditions for the nine-layered skin tissue model. The new color component is then combined with the original texture component to reconstruct the skin image. The method is promising for applications in the fields of dermatology and cosmetics.

  5. Rocket plume radiation base heating by reverse Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Everson, John; Nelson, H. F.

    1993-10-01

    A reverse Monte Carlo radiative transfer code is developed to predict rocket plume base heating. It is more computationally efficient than the forward Monte Carlo method, because only the radiation that strikes the receiving point is considered. The method easily handles both gas and particle emission and particle scattering. Band models are used for the molecular emission spectra, and the Henyey-Greenstein phase function is used for the scattering. Reverse Monte Carlo predictions are presented for (1) a gas-only model of the Space Shuttle main engine plume; (2) a pure-scattering plume with the radiation emitted by a hot disk at the nozzle exit; (3) a nonuniform-temperature, scattering, emitting and absorbing plume; and (4) a typical solid rocket motor plume. The reverse Monte Carlo method is shown to give good agreement with previous predictions. Typical solid rocket plume results show that (1) CO2 radiation is emitted from near the edge of the plume; (2) H2O gas and Al2O3 particles emit radiation mainly from the center of the plume; and (3) Al2O3 particles emit considerably more radiation than the gases over the 400-17,000 cm⁻¹ spectral interval.
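
    The Henyey-Greenstein phase function named above has a closed-form inverse CDF, so scattering-angle cosines can be sampled directly. A minimal Python sketch of that standard sampling step (generic, not drawn from the paper's code):

      import math
      import random

      def sample_hg_cos_theta(g, rng=random):
          """Sample the cosine of the scattering angle from the
          Henyey-Greenstein phase function with asymmetry parameter g."""
          xi = rng.random()
          if abs(g) < 1e-6:
              return 2.0 * xi - 1.0                  # isotropic limit
          frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
          return (1.0 + g * g - frac * frac) / (2.0 * g)

      # The mean scattering cosine of HG equals g; check for g = 0.8.
      n = 100_000
      mean = sum(sample_hg_cos_theta(0.8) for _ in range(n)) / n
      print(mean)   # ~0.8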

  6. A tetrahedron-based inhomogeneous Monte Carlo optical simulator

    PubMed Central

    Shen, H; Wang, G

    2010-01-01

    Optical imaging has been widely applied in preclinical and clinical applications. Fifteen years ago, an efficient Monte Carlo program, ‘MCML’, was developed for use with multi-layered turbid media and has gained popularity in the field of biophotonics. Currently, there is an increasingly pressing need for simulation tools more powerful than MCML in order to study light propagation phenomena in complex inhomogeneous objects, such as the mouse. Here we report a tetrahedron-based inhomogeneous Monte Carlo optical simulator (TIM-OS) to address this issue. By modeling an object as a tetrahedron-based inhomogeneous finite-element mesh, TIM-OS can determine the photon-triangle interaction recursively and rapidly. In numerical simulation, we have demonstrated the correctness and efficiency of TIM-OS. PMID:20090182

  7. Monte Carlo Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  8. Radiation source modeling for Monte Carlo based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Garnica Garza, Hector Mauricio

    In this study, we introduce a method to determine the energy spectrum delivered by a medical accelerator. The method relies on both Monte Carlo generated data and experimental measurements, but requires far fewer measurements than current attenuation-based methods, and much less information about the construction of the linear accelerator than full Monte Carlo based estimations, making it easy to perform in a clinical environment. The basic model used in this work makes use of the quantum absorption efficiency concept, which gives the probability that a photon of energy $h\nu$ will deposit energy in a detector (a film-screen detector in our case). Mathematically, our model is given by $M = Y \int_0^{T} \frac{dY(h\nu)}{d(h\nu)}\, E_{avg}(h\nu)\, \varepsilon(h\nu)\, d(h\nu)$, where $M$ is the absorbed energy in the film-screen detector, $dY(h\nu)/d(h\nu)$ is the photon spectrum, $E_{avg}(h\nu)$ is the average energy deposited per interacting photon, $\varepsilon(h\nu)$ is the quantum absorption efficiency, and $Y$ is the total photon fluence striking the detector. $\varepsilon(h\nu)$ and $E_{avg}(h\nu)$ were calculated by means of Monte Carlo simulation using the code MCNPX. The method works as follows: first, the primary photon fluence exiting the target is calculated from first principles by dividing the target into thin slabs (50-100 μm) and adding the bremsstrahlung contribution from each slab. The electron fluence is calculated using the Phase Space Time Evolution model, first proposed by Cordaro et al. and further refined by Huizenga et al. Ray tracing is used to attenuate the primary photon fluence as it passes through the flattening filter on its way to the detectors. Based on a detailed study of linear accelerator head scatter and of the known weaknesses of the Schiff cross-section, we propose a multiplicative, energy-dependent empirical correction factor $f_{a}(h\nu) = \exp(a\, h\nu)$ to take into account the head scatter energy fluence, where $a$ is a free parameter that is fixed by comparing the energy deposited in a screen-film detector irradiated by the spectrum in question to the theoretical
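
    Numerically, the model above is just a weighted sum of the spectrum over energy bins. The Python sketch below evaluates it with a Riemann sum; every grid and shape here is a hypothetical stand-in for the MCNPX-tabulated quantities.

      import numpy as np

      # Hypothetical energy grid (MeV) and stand-in shapes for the spectrum
      # dY/d(hv), the mean deposited energy E_avg(hv) and the quantum
      # absorption efficiency eps(hv); real values would come from MCNPX.
      hv = np.linspace(0.1, 6.0, 60)
      spectrum = hv * np.exp(-hv)        # stand-in bremsstrahlung-like shape
      e_avg = 0.5 * hv                   # stand-in mean deposited energy
      eps = np.exp(-0.3 * hv)            # stand-in absorption efficiency
      fluence = 1.0                      # total fluence Y (normalization)

      # M = Y * integral of (dY/d(hv)) * E_avg(hv) * eps(hv) d(hv),
      # approximated by a simple Riemann sum over the energy bins.
      d_hv = hv[1] - hv[0]
      M = fluence * np.sum(spectrum * e_avg * eps) * d_hv
      print(M)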

  9. Monte Carlo Example Programs

    Energy Science and Technology Software Center (ESTSC)

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo mathematical technique for calculating the ground state energy of the hydrogen atom.
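
    The FORTRAN sources are not reproduced in this record, but the variational Monte Carlo idea behind a program like VARHATOM is compact: sample |psi|^2 with a Metropolis walk and average the local energy. A minimal Python analogue for hydrogen (atomic units, trial wavefunction psi = exp(-alpha*r)), written for illustration rather than as a port of the original programs:

      import math
      import random

      def local_energy(r, alpha):
          # For psi = exp(-alpha*r), H psi / psi = -alpha^2/2 + (alpha - 1)/r;
          # at alpha = 1 this is exactly -0.5 hartree everywhere.
          return -0.5 * alpha ** 2 + (alpha - 1.0) / r

      def vmc_energy(alpha, n_steps=200_000, step=0.5, seed=2):
          """Metropolis sampling of |psi|^2 followed by averaging E_L."""
          rng = random.Random(seed)
          x, y, z, r = 1.0, 0.0, 0.0, 1.0
          e_sum, n_kept = 0.0, 0
          for i in range(n_steps):
              xn = x + step * (rng.random() - 0.5)
              yn = y + step * (rng.random() - 0.5)
              zn = z + step * (rng.random() - 0.5)
              rn = math.sqrt(xn * xn + yn * yn + zn * zn)
              # Accept with ratio |psi(new)|^2 / |psi(old)|^2.
              if rng.random() < math.exp(-2.0 * alpha * (rn - r)):
                  x, y, z, r = xn, yn, zn, rn
              if i >= n_steps // 10:          # skip burn-in
                  e_sum += local_energy(r, alpha)
                  n_kept += 1
          return e_sum / n_kept

      print(vmc_energy(1.0))   # ~ -0.5 hartree, the exact ground state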

  10. Quantitative Monte Carlo-based holmium-166 SPECT reconstruction

    SciTech Connect

    Elschot, Mattijs; Smits, Maarten L. J.; Nijsen, Johannes F. W.; Lam, Marnix G. E. H.; Zonnenberg, Bernard A.; Bosch, Maurice A. A. J. van den; Jong, Hugo W. A. M. de; Viergever, Max A.

    2013-11-15

    Purpose: Quantitative imaging of the radionuclide distribution is of increasing interest for microsphere radioembolization (RE) of liver malignancies, to aid treatment planning and dosimetry. For this purpose, holmium-166 (166Ho) microspheres have been developed, which can be visualized with a gamma camera. The objective of this work is to develop and evaluate a new reconstruction method for quantitative 166Ho SPECT, including Monte Carlo-based modeling of photon contributions from the full energy spectrum. Methods: A fast Monte Carlo (MC) simulator was developed for simulation of 166Ho projection images and incorporated in a statistical reconstruction algorithm (SPECT-fMC). Photon scatter and attenuation for all photons sampled from the full 166Ho energy spectrum were modeled during reconstruction by Monte Carlo simulations. The energy- and distance-dependent collimator-detector response was modeled using precalculated convolution kernels. Phantom experiments were performed to quantitatively evaluate image contrast, image noise, count errors, and activity recovery coefficients (ARCs) of SPECT-fMC in comparison with those of an energy window-based method for correction of down-scattered high-energy photons (SPECT-DSW) and a previously presented hybrid method that combines MC simulation of photopeak scatter with energy window-based estimation of down-scattered high-energy contributions (SPECT-ppMC+DSW). Additionally, the impact of SPECT-fMC on whole-body recovered activities (A_est) and estimated radiation absorbed doses was evaluated using clinical SPECT data of six 166Ho RE patients. Results: At the same noise level, SPECT-fMC images showed substantially higher contrast than SPECT-DSW and SPECT-ppMC+DSW in spheres ≥17 mm in diameter. The count error was reduced from 29% (SPECT-DSW) and 25% (SPECT-ppMC+DSW) to 12% (SPECT-fMC). ARCs in five spherical volumes of 1.96-106.21 ml were improved from 32%-63% (SPECT-DSW) and 50%-80

  11. Parameterizations for shielding electron accelerators based on Monte Carlo studies

    SciTech Connect

    P. Degtyarenko; G. Stapleton

    1996-10-01

    Numerous recipes for designing lateral slab neutron shielding for electron accelerators are available, and each generally produces rather similar results for shield thicknesses of about 2 m of concrete and for electron beams with energy in the 1 to 10 GeV region. For thinner or much thicker shielding the results tend to diverge, and the standard recipes require modification. Likewise, for geometries other than lateral to the beam direction further corrections are required, so that calculated results are less reliable and hence additional and costly conservatism is needed. With the adoption of Monte Carlo (MC) methods of transporting particles, a much more powerful way of calculating radiation dose rates outside shielding becomes available. This method is not constrained by geometry, although deep penetration problems need special statistical treatment, and is an excellent approach to solving any radiation transport problem provided the method has been properly checked against measurements and is free from the well known errors common to such computer methods. The present paper utilizes the results of MC calculations based on a nuclear fragmentation model named DINREG, using the MC transport code GEANT, and models them with the normal two-parameter shielding expressions. Because the parameters can change with electron beam energy, angle to the electron beam direction and target material, the parameters are expressed as functions of some of these variables to provide universal equations for shielding electron beams which can be used rather simply for deep penetration problems in simple geometry without the time-consuming computations needed in the original MC programs. A particular problem with using simple parameterizations based on the uncollided flux is that approximations based on spherical geometry might not apply to the more common cylindrical cases used for accelerator shielding. This source of error has been discussed at length by Stevenson and others. To study

  12. MORSE Monte Carlo code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  13. CT based 3D Monte Carlo radiation therapy treatment planning.

    PubMed

    Wallace, S; Allen, B J

    1998-06-01

    This paper outlines the "voxel reconstruction" technique used to model the macroscopic human anatomy of the cranial, abdominal and cervical regions directly from CT scans. Tissue composition, density, and radiation transport characteristics were assigned to each individual volume element (voxel) automatically depending on its greyscale number and physical location. Both external beam and brachytherapy treatment techniques were simulated using the Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle) version 3A. To obtain a high resolution dose calculation, yet not overly extend computational times, variable voxel sizes were introduced. In regions of interest where high attention to anatomical detail and dose calculation was required, the voxel dimensions were reduced to a few millimetres. In less important regions that only influence the region of interest via scattered radiation, the voxel dimensions were increased to the scale of centimetres. With the use of relatively old (1991) supercomputing hardware, dose calculations were performed in under 10 hours to a standard deviation of 5% in each voxel with a resolution of a few millimetres; current hardware should substantially improve these figures. It is envisaged that with coupled photon/electron transport incorporated into MCNP versions 4A and 4B, conventional photon and electron treatment planning will be undertaken using this technique, in addition to the neutron and associated photon dosimetry presented here. PMID:9745789

  14. An empirical formula based on Monte Carlo simulation for diffuse reflectance from turbid media

    NASA Astrophysics Data System (ADS)

    Gnanatheepam, Einstein; Aruna, Prakasa Rao; Ganesan, Singaravelu

    2016-03-01

    Diffuse reflectance spectroscopy has been widely used in diagnostic oncology and characterization of laser-irradiated tissue. However, an accurate and simple analytical equation for estimating diffuse reflectance from turbid media still does not exist. In this work, a diffuse reflectance lookup table for a range of tissue optical properties was generated using Monte Carlo simulation. Based on the generated Monte Carlo lookup table, an empirical formula for diffuse reflectance was developed using a surface fitting method. The variance between the Monte Carlo lookup table surface and the surface obtained from the proposed empirical formula is less than 1%. The proposed empirical formula may be used for modeling of diffuse reflectance from tissue.
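
    The generate-then-fit workflow can be sketched generically in Python. Below, a stand-in analytic reflectance expression plays the role of the Monte Carlo lookup table (the paper's actual table, optical-property ranges and fitting function are not given in this record), and a quadratic surface is fitted by least squares:

      import numpy as np

      # Stand-in "lookup table": reflectance on a grid of absorption (mua)
      # and reduced scattering (mus) coefficients. In the paper this table
      # comes from Monte Carlo runs; the function below is hypothetical.
      mua = np.linspace(0.01, 1.0, 20)     # 1/mm, hypothetical range
      mus = np.linspace(0.5, 4.0, 20)      # 1/mm, hypothetical range
      A, S = np.meshgrid(mua, mus)
      R = S / (A + S) * np.exp(-np.sqrt(3.0 * A * (A + S)))

      # Fit an empirical quadratic surface R(mua, mus) by least squares.
      X = np.column_stack([np.ones(A.size), A.ravel(), S.ravel(),
                           A.ravel() ** 2, S.ravel() ** 2, (A * S).ravel()])
      coeffs, *_ = np.linalg.lstsq(X, R.ravel(), rcond=None)
      residual = np.abs(X @ coeffs - R.ravel())
      print("max abs residual:", residual.max())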

  15. Monte Carlo variance reduction

    NASA Technical Reports Server (NTRS)

    Byrn, N. R.

    1980-01-01

    This computer program incorporates a technique that reduces the variance of the forward Monte Carlo method for a given amount of computer time in determining the radiation environment in complex organic and inorganic systems exposed to significant amounts of radiation.
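
    The record does not state which variance-reduction technique the program uses, so as a generic illustration of the idea, the Python sketch below compares plain Monte Carlo with importance sampling on an integral whose mass is concentrated near zero. Here the biased density is proportional to the integrand (the ideal case), so the weighted estimator's variance collapses.

      import math
      import random

      # Estimate I = integral of exp(-10 x) on [0, 1] ~ 0.1. Plain MC from
      # U(0,1) is noisy; drawing from q(x) ~ exp(-10 x) truncated to [0, 1]
      # and weighting by p/q concentrates samples where the integrand lives.
      rng = random.Random(3)
      N = 10_000
      Z = 1.0 - math.exp(-10.0)            # normalizer of q on [0, 1]

      plain = [math.exp(-10.0 * rng.random()) for _ in range(N)]

      weighted = []
      for _ in range(N):
          x = -math.log(1.0 - rng.random() * Z) / 10.0   # inverse CDF of q
          q = 10.0 * math.exp(-10.0 * x) / Z
          weighted.append(math.exp(-10.0 * x) / q)       # f(x) * p(x) / q(x)

      for name, vals in (("plain", plain), ("importance", weighted)):
          m = sum(vals) / N
          var = sum((v - m) ** 2 for v in vals) / (N - 1)
          print(name, round(m, 5), var)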

  16. Image based Monte Carlo Modeling for Computational Phantom

    NASA Astrophysics Data System (ADS)

    Cheng, Mengyun; Wang, Wen; Zhao, Kai; Fan, Yanchang; Long, Pengcheng; Wu, Yican

    2014-06-01

    The evaluation of the effects of ionizing radiation and the risk of radiation exposure on the human body has become one of the most important issues in the radiation protection and radiotherapy fields, as it helps to avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct more realistic computational phantoms. However, manual description and verification of the models for Monte Carlo (MC) simulation are very tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed by the FDS Team (Advanced Nuclear Energy Research Team, http://www.fds.org.cn) as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Planning System (TPS), as well as radiation exposure of the human body in radiation protection.

  17. Monte Carlo-based simulation of dynamic jaws tomotherapy

    SciTech Connect

    Sterpin, E.; Chen, Y.; Chen, Q.; Lu, W.; Mackie, T. R.; Vynckier, S.

    2011-09-15

    Purpose: Original TomoTherapy systems may involve a trade-off between conformity and treatment speed, the user being limited to three slice widths (1.0, 2.5, and 5.0 cm). This could be overcome by allowing the jaws to define arbitrary fields, including very small slice widths (<1 cm), which are challenging for a beam model. The aim of this work was to incorporate the dynamic jaws feature into a Monte Carlo (MC) model called TomoPen, based on the MC code PENELOPE, previously validated for the original TomoTherapy system. Methods: To keep the general structure of TomoPen and its efficiency, the simulation strategy introduces several techniques: (1) weight modifiers to account for any jaw settings using only the 5 cm phase-space file; (2) a simplified MC based model called FastStatic to compute the modifiers faster than pure MC; (3) actual simulation of dynamic jaws. Weight modifiers computed with both FastStatic and pure MC were compared. Dynamic jaws simulations were compared with the convolution/superposition (C/S) of TomoTherapy in the "cheese" phantom for a plan with two targets longitudinally separated by a gap of 3 cm. Optimization was performed in two modes: asymmetric jaws-constant couch speed ("running start stop," RSS) and symmetric jaws-variable couch speed ("symmetric running start stop," SRSS). Measurements with EDR2 films were also performed for RSS for the formal validation of TomoPen with dynamic jaws. Results: Weight modifiers computed with FastStatic were equivalent to pure MC within statistical uncertainties (0.5% for three standard deviations). Excellent agreement was achieved between TomoPen and C/S for both asymmetric jaw opening/constant couch speed and symmetric jaw opening/variable couch speed, with deviations well within 2%/2 mm. For the RSS procedure, agreement between C/S and measurements was within 2%/2 mm for 95% of the points and 3%/3 mm for 98% of the points, where dose is greater than 30% of the prescription dose (gamma analysis

  18. Monte Carlo portal dosimetry

    SciTech Connect

    Chin, P.W. (E-mail: mary.chin@physics.org)

    2005-10-15

    This project developed a solution for verifying external photon beam radiotherapy. The solution is based on a calibration chain for deriving portal dose maps from acquired portal images, and a calculation framework for predicting portal dose maps. Quantitative comparison between acquired and predicted portal dose maps accomplishes both geometric (patient positioning with respect to the beam) and dosimetric (two-dimensional fluence distribution of the beam) verification. A disagreement would indicate that beam delivery had not been according to plan. The solution addresses the clinical need for verifying radiotherapy both pretreatment (without the patient in the beam) and on treatment (with the patient in the beam). Medical linear accelerators mounted with electronic portal imaging devices (EPIDs) were used to acquire portal images. Two types of EPIDs were investigated: the amorphous silicon (a-Si) and the scanning liquid ion chamber (SLIC). The EGSnrc family of Monte Carlo codes was used to predict portal dose maps by computer simulation of radiation transport in the beam-phantom-EPID configuration. Monte Carlo simulations have been implemented on several levels of high throughput computing (HTC), including the grid, to reduce computation time. The solution has been tested across the entire clinical range of gantry angle, beam size (5 cm × 5 cm to 20 cm × 20 cm), and beam-patient and patient-EPID separations (4 to 38 cm). In these tests of known beam-phantom-EPID configurations, agreement between acquired and predicted portal dose profiles was consistently within 2% of the central axis value. This Monte Carlo portal dosimetry solution therefore achieved combined versatility, accuracy, and speed not readily achievable by other techniques.

  19. Evaluation of path-history-based fluorescence Monte Carlo method for photon migration in heterogeneous media.

    PubMed

    Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming

    2014-12-29

    The path-history-based fluorescence Monte Carlo method used for fluorescence tomography imaging reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method by experimenting with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in heterogeneous media. PMID:25607163

  1. Implementation of a Monte Carlo based inverse planning model for clinical IMRT with MCNP code

    NASA Astrophysics Data System (ADS)

    He, Tongming Tony

    In IMRT inverse planning, inaccurate dose calculations and limitations in optimization algorithms introduce both systematic and convergence errors into treatment plans. The goal of this work is to practically implement a Monte Carlo based inverse planning model for clinical IMRT. The intention is to minimize both types of error in inverse planning and obtain treatment plans with better clinical accuracy than non-Monte Carlo based systems. The strategy is to calculate the dose matrices of small beamlets using a Monte Carlo based method. Optimization of the beamlet intensities then follows, based on the calculated dose data, using an optimization algorithm that is capable of escaping local minima and prevents possible premature convergence. The MCNP 4B Monte Carlo code is improved to perform fast particle transport and dose tallying in lattice cells by adopting a selective transport and tallying algorithm. Efficient dose matrix calculation for small beamlets is made possible by adopting a scheme that allows concurrent calculation of multiple beamlets of a single port. A finite-sized point source (FSPS) beam model is introduced for easy and accurate beam modeling. A DVH-based objective function and a parallel platform based algorithm are developed for the optimization of intensities. The calculation accuracy of the improved MCNP code and the FSPS beam model is validated by dose measurements in phantoms. Agreement better than 1.5% or 0.2 cm has been achieved. Applications of the implemented model to clinical cases of brain, head/neck, lung, spine, pancreas and prostate have demonstrated the feasibility and capability of Monte Carlo based inverse planning for clinical IMRT. Dose distributions of selected treatment plans from a commercial non-Monte Carlo based system were evaluated in comparison with Monte Carlo based calculations. Systematic errors of up to 12% in tumor doses and up to 17% in critical structure doses have been observed. The clinical importance of Monte Carlo based

  2. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  3. Monte Carlo Event Generators

    NASA Astrophysics Data System (ADS)

    Dytman, Steven

    2011-10-01

    Every neutrino experiment requires a Monte Carlo event generator for various purposes. Historically, each series of experiments developed its own code, tuned to its needs. Modern experiments would benefit from a universal code (e.g. PYTHIA) which would allow more direct comparison between experiments. GENIE attempts to be that code. This paper compares the most commonly used codes and provides some details of GENIE.

  4. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551
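
    The mode-searching step described above, subtracting an approximate Gaussian mixture built on known modes from the target density, can be illustrated with a one-dimensional toy in Python (all densities here are hypothetical; the paper works with energy functions in high dimensions):

      import numpy as np

      def target_density(x):
          # Hypothetical bimodal target: equal mixture of N(-3, 1) and
          # N(3, 1), unnormalized (normalization is irrelevant for modes).
          return (0.5 * np.exp(-0.5 * (x + 3.0) ** 2)
                  + 0.5 * np.exp(-0.5 * (x - 3.0) ** 2))

      def residual(x, known_modes, bandwidth=1.0):
          """Target density minus an approximate Gaussian mixture built on
          the modes found so far; residual peaks suggest unfound modes."""
          approx = sum(0.5 * np.exp(-0.5 * ((x - m) / bandwidth) ** 2)
                       for m in known_modes)
          return target_density(x) - approx

      xs = np.linspace(-8.0, 8.0, 1601)
      res = residual(xs, known_modes=[-3.0])
      print("candidate new mode near x =", xs[np.argmax(res)])   # ~ +3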

  5. The effect of statistical uncertainty on inverse treatment planning based on Monte Carlo dose calculation

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Keall, Paul

    2000-12-01

    The effect of the statistical uncertainty, or noise, in inverse treatment planning for intensity modulated radiotherapy (IMRT) based on Monte Carlo dose calculation was studied. Sets of Monte Carlo beamlets were calculated to give uncertainties at Dmax ranging from 0.2% to 4% for a lung tumour plan. The weights of these beamlets were optimized using a previously described procedure based on a simulated annealing optimization algorithm. Several different objective functions were used. It was determined that the use of Monte Carlo dose calculation in inverse treatment planning introduces two errors in the calculated plan. In addition to the statistical error due to the statistical uncertainty of the Monte Carlo calculation, a noise convergence error also appears. For the statistical error it was determined that apparently successfully optimized plans with a noisy dose calculation (3% 1σ at Dmax), which satisfied the required uniformity of the dose within the tumour, showed as much as 7% underdose when recalculated with a noise-free dose calculation. The statistical error is larger towards the tumour and is only weakly dependent on the choice of objective function. The noise convergence error appears because the optimum weights are determined using a noisy calculation, which is different from the optimum weights determined for a noise-free calculation. Unlike the statistical error, the noise convergence error is generally larger outside the tumour, is case dependent and strongly depends on the required objectives.

  6. Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Chen, Chaobin; Huang, Qunying; Wu, Yican

    2005-04-01

    A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of x-ray beams and electron beams to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone for calculating the dose distribution with the Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of the cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.

  7. Monte Carlo and quasi-Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Caflisch, Russel E.

    Monte Carlo is one of the most versatile and widely used numerical methods. Its convergence rate, $O(N^{-1/2})$, is independent of dimension, which shows Monte Carlo to be very robust but also slow. This article presents an introduction to Monte Carlo methods for integration problems, including convergence theory, sampling methods and variance reduction techniques. Accelerated convergence for Monte Carlo quadrature is attained using quasi-random (also called low-discrepancy) sequences, which are a deterministic alternative to random or pseudo-random sequences. The points in a quasi-random sequence are correlated to provide greater uniformity. The resulting quadrature method, called quasi-Monte Carlo, has a convergence rate of approximately $O((\log N)^k N^{-1})$. For quasi-Monte Carlo, both theoretical error estimates and practical limitations are presented. Although the emphasis in this article is on integration, Monte Carlo simulation of rarefied gas dynamics is also discussed. In the limit of small mean free path (that is, the fluid dynamic limit), Monte Carlo loses its effectiveness because the collisional distance is much less than the fluid dynamic length scale. Computational examples are presented throughout the text to illustrate the theory. A number of open problems are described.
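
    The contrast between the two convergence rates is easy to observe numerically. The Python sketch below integrates exp(x + y) over the unit square using pseudo-random points and a 2-D Halton sequence, one common low-discrepancy construction (the article covers several):

      import math
      import random

      def halton(i, base):
          """i-th van der Corput point in the given base (i >= 1); pairing
          bases 2 and 3 gives a 2-D Halton low-discrepancy sequence."""
          f, r = 1.0, 0.0
          while i > 0:
              f /= base
              r += f * (i % base)
              i //= base
          return r

      f2 = lambda x, y: math.exp(x + y)   # exact integral on [0,1]^2: (e-1)^2
      exact = (math.e - 1.0) ** 2
      N = 4096
      rng = random.Random(4)

      mc = sum(f2(rng.random(), rng.random()) for _ in range(N)) / N
      qmc = sum(f2(halton(i, 2), halton(i, 3)) for i in range(1, N + 1)) / N
      print("MC error: ", abs(mc - exact))    # ~ O(N^(-1/2))
      print("QMC error:", abs(qmc - exact))   # ~ O((log N)^2 / N), smaller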

  8. The D0 Monte Carlo

    SciTech Connect

    Womersley, J. (Dept. of Physics)

    1992-10-01

    The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb⁻¹ data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.

  9. Mesh-based Monte Carlo method in time-domain widefield fluorescence molecular tomography

    PubMed Central

    Chen, Jin; Fang, Qianqian

    2012-01-01

    We evaluated the potential of the mesh-based Monte Carlo (MC) method for widefield time-gated fluorescence molecular tomography, aiming to improve accuracy in both shape discretization and photon transport modeling in preclinical settings. An optimized software platform was developed utilizing multithreading and distributed parallel computing to achieve efficient calculation. We validated the proposed algorithm and software by both simulations and in vivo studies. The results establish that the optimized mesh-based Monte Carlo (mMC) method is a computationally efficient solution for optical tomography studies in terms of both calculation time and memory utilization. The open source code, as part of a new release of mMC, is publicly available at http://mcx.sourceforge.net/mmc/. PMID:23224008

  10. Compressible generalized hybrid Monte Carlo

    NASA Astrophysics Data System (ADS)

    Fang, Youhan; Sanz-Serna, J. M.; Skeel, Robert D.

    2014-05-01

    One of the most demanding calculations is to generate random samples from a specified probability distribution (usually with an unknown normalizing prefactor) in a high-dimensional configuration space. One often has to resort to using a Markov chain Monte Carlo method, which converges only in the limit to the prescribed distribution. Such methods typically inch through configuration space step by step, with acceptance of a step based on a Metropolis(-Hastings) criterion. An acceptance rate of 100% is possible in principle by embedding configuration space in a higher dimensional phase space and using ordinary differential equations. In practice, numerical integrators must be used, lowering the acceptance rate. This is the essence of hybrid Monte Carlo methods. Presented is a general framework for constructing such methods under relaxed conditions: the only geometric property needed is (weakened) reversibility; volume preservation is not needed. The possibilities are illustrated by deriving a couple of explicit hybrid Monte Carlo methods, one based on barrier-lowering variable-metric dynamics and another based on isokinetic dynamics.
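
    For reference, the conventional hybrid Monte Carlo step that this framework generalizes combines a Gaussian momentum refresh, a leapfrog trajectory and a Metropolis test. A minimal Python sketch of that standard, volume-preserving special case (not the paper's relaxed variant):

      import math
      import random

      def hmc_step(x, U, grad_U, eps=0.1, n_leap=20, rng=random):
          """One conventional hybrid Monte Carlo step for the target
          exp(-U(x)): momentum refresh, leapfrog, Metropolis test."""
          p = rng.gauss(0.0, 1.0)
          x_new, p_new = x, p - 0.5 * eps * grad_U(x)     # initial half-kick
          for _ in range(n_leap):
              x_new += eps * p_new                        # full drift
              p_new -= eps * grad_U(x_new)                # full kick
          p_new += 0.5 * eps * grad_U(x_new)              # trim to half-kick
          log_alpha = (U(x) + 0.5 * p * p) - (U(x_new) + 0.5 * p_new * p_new)
          if log_alpha >= 0.0 or rng.random() < math.exp(log_alpha):
              return x_new
          return x

      # Sample a standard normal: U(x) = x^2 / 2, grad U(x) = x.
      x, xs = 0.0, []
      for _ in range(5000):
          x = hmc_step(x, lambda t: 0.5 * t * t, lambda t: t)
          xs.append(x)
      print(sum(xs) / len(xs), sum(t * t for t in xs) / len(xs))   # ~0, ~1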

  11. Validation of a Monte Carlo Based Depletion Methodology Using HFIR Post-Irradiation Measurements

    SciTech Connect

    Chandler, David; Maldonado, G Ivan; Primm, Trent

    2009-11-01

    Post-irradiation uranium isotopic atomic densities within the core of the High Flux Isotope Reactor (HFIR) were calculated and compared to uranium mass spectrographic data measured in the late 1960s and early 1970s [1]. This study was performed in order to validate a Monte Carlo based depletion methodology for calculating the burn-up dependent nuclide inventory, specifically the post-irradiation uranium

  12. Monte Carlo Methods in Materials Science Based on FLUKA and ROOT

    NASA Technical Reports Server (NTRS)

    Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor

    2003-01-01

    A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application, and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlos can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade) to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which Monte Carlos are particularly suited is the study of secondary radiation produced as albedoes in the vicinity of the structural geometry involved. This broad goal of simulating space radiation transport through the relevant materials employing the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although the possible improvement of the DPMJET event generator for energies 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy ion interactions below 3 GeV/A. The ROOT interface is being developed in conjunction with the

  13. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility of Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  14. Monte Carlo based voxel phantoms for in vivo internal dosimetry.

    PubMed

    Ros, J M Gómez; Moraleda, M; López, M A; Navarro, T; Navarro, J F

    2007-01-01

    The purpose of this communication is to describe briefly the computer programs developed to generate the MCNP input file corresponding to any segmented tomographic data and its application to the calibration procedures for in vivo internal dosimetry. The method has been applied to the determination of 241Am in bone by measurement in skull and knee using MCNP voxel models of a real human head and knee based on the tomographic Voxelman and Arms Down phantoms developed by Zubal et al. at Yale University. PMID:17449911

  15. Electron density of states of Fe-based superconductors: Quantum trajectory Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Kashurnikov, V. A.; Krasavin, A. V.; Zhumagulov, Ya. V.

    2016-03-01

    The spectral and total electron densities of states in two-dimensional FeAs clusters, which simulate iron-based superconductors, have been calculated using the generalized quantum Monte Carlo algorithm within the full two-orbital model. Spectra have been reconstructed by solving the integral equation relating the Matsubara Green's function and spectral density by the method combining the gradient descent and Monte Carlo algorithms. The calculations have been performed for clusters with dimensions up to 10 × 10 FeAs cells. The profiles of the Fermi surface for the entire Brillouin zone have been presented in the quasiparticle approximation. Data for the total density of states near the Fermi level have been obtained. The effect of the interaction parameter, size of the cluster, and temperature on the spectrum of excitations has been studied.

  16. Visual improvement for bad handwriting based on Monte-Carlo method

    NASA Astrophysics Data System (ADS)

    Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua

    2014-03-01

    A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper to enhance the visual effect of bad handwriting. The improvement process uses a well-designed typeface to optimize the bad handwriting image. In this process, a series of linear operators for image transformation are defined to transform the typeface image so that it approaches the handwriting image, and the specific parameters of the linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual effect of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has considerable potential to be applied in tablet computers and the mobile Internet, in order to improve the user experience of handwriting.

  17. Monte Carlo model of a polychromatic laboratory based edge illumination x-ray phase contrast system.

    PubMed

    Millard, T P; Endrizzi, M; Diemoz, P C; Hagen, C K; Olivo, A

    2014-05-01

    A Monte Carlo model of a polychromatic laboratory based (coded aperture) edge illumination x-ray phase contrast imaging system has been developed and validated against experimental data. The ability for the simulation framework to be used to model two-dimensional images is also shown. The Monte Carlo model has been developed using the McXtrace engine and is polychromatic, i.e., results are obtained through the use of the full x-ray spectrum rather than an effective energy. This type of simulation can in future be used to model imaging of objects with complex geometry, for system prototyping, as well as providing a first step towards the development of a simulation for modelling dose delivery as a part of translating the imaging technique for use in clinical environments. PMID:24880377

  18. A Markov Chain Monte Carlo Based Method for System Identification

    SciTech Connect

    Glaser, R E; Lee, C L; Nitao, J J; Hanley, W G

    2002-10-22

    This paper describes a novel methodology for the identification of mechanical systems and structures from vibration response measurements. It combines prior information, observational data and predictive finite element models to produce configurations and system parameter values that are most consistent with the available data and model. Bayesian inference and a Metropolis simulation algorithm form the basis for this approach. The resulting process enables the estimation of distributions of both individual parameters and system-wide states. Attractive features of this approach include its ability to: (1) provide quantitative measures of the uncertainty of a generated estimate; (2) function effectively when exposed to degraded conditions including: noisy data, incomplete data sets and model misspecification; (3) allow alternative estimates to be produced and compared, and (4) incrementally update initial estimates and analysis as more data becomes available. A series of test cases based on a simple fixed-free cantilever beam is presented. These results demonstrate that the algorithm is able to identify the system, based on the stiffness matrix, given applied force and resultant nodal displacements. Moreover, it effectively identifies locations on the beam where damage (represented by a change in elastic modulus) was specified.
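
    The ingredients described above, a likelihood from a predictive model plus a Metropolis sampler, fit in a few lines for a toy problem. The Python sketch below identifies a single spring stiffness from noisy force-displacement data; all numbers are hypothetical, and the real method works with finite element models and full stiffness matrices:

      import math
      import random

      rng = random.Random(5)

      # Hypothetical one-parameter analogue: identify a spring stiffness k
      # from noisy displacements u = F/k under known applied forces F.
      k_true, sigma = 2.5, 0.05
      forces = [1.0, 2.0, 3.0, 4.0]
      disps = [f / k_true + rng.gauss(0.0, sigma) for f in forces]

      def log_post(k):
          """Log-posterior: Gaussian likelihood plus a flat prior on k > 0."""
          if k <= 0.0:
              return -math.inf
          return -sum((u - f / k) ** 2
                      for f, u in zip(forces, disps)) / (2.0 * sigma ** 2)

      k, samples = 1.0, []
      for _ in range(20_000):
          prop = k + rng.gauss(0.0, 0.1)          # random-walk proposal
          # Metropolis acceptance test on the posterior ratio.
          if rng.random() < math.exp(min(0.0, log_post(prop) - log_post(k))):
              k = prop
          samples.append(k)
      post = samples[5000:]                       # discard burn-in
      print("posterior mean of k:", sum(post) / len(post))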

  19. 3-D Monte Carlo-Based Scatter Compensation in Quantitative I-131 SPECT Reconstruction

    PubMed Central

    Dewaraja, Yuni K.; Ljungberg, Michael; Fessler, Jeffrey A.

    2010-01-01

    We have implemented highly accurate Monte Carlo based scatter modeling (MCS) with 3-D ordered subsets expectation maximization (OSEM) reconstruction for I-131 single photon emission computed tomography (SPECT). The scatter is included in the statistical model as an additive term and attenuation and detector response are included in the forward/backprojector. In the present implementation of MCS, a simple multiple window-based estimate is used for the initial iterations and in the later iterations the Monte Carlo estimate is used for several iterations before it is updated. For I-131, MCS was evaluated and compared with triple energy window (TEW) scatter compensation using simulation studies of a mathematical phantom and a clinically realistic voxel-phantom. Even after just two Monte Carlo updates, excellent agreement was found between the MCS estimate and the true scatter distribution. Accuracy and noise of the reconstructed images were superior with MCS compared to TEW. However, the improvement was not large, and in some cases may not justify the large computational requirements of MCS. Furthermore, it was shown that the TEW correction could be improved for most of the targets investigated here by applying a suitably chosen scaling factor to the scatter estimate. Finally clinical application of MCS was demonstrated by applying the method to an I-131 radioimmunotherapy (RIT) patient study. PMID:20104252

  20. High accuracy modeling for advanced nuclear reactor core designs using Monte Carlo based coupled calculations

    NASA Astrophysics Data System (ADS)

    Espel, Federico Puente

    The main objective of this PhD research is to develop a high accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include the thermal-hydraulic feedback in the Monte Carlo method and speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or subchannel-based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of Light Water Reactors (LWRs). These deterministic codes utilize homogenized nuclear data (normally over large spatial zones, consisting of a fuel assembly or parts of a fuel assembly, and in the best case over small spatial zones consisting of a pin cell), which is functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High accuracy modeling is required for advanced nuclear reactor core designs that present increased geometry complexity and material heterogeneity. Such high-fidelity methods take advantage of recent progress in computation technology and couple neutron transport solutions with thermal-hydraulic feedback models on the pin or even sub-pin level (in terms of spatial scale). The continuous energy Monte Carlo method is well suited for solving such core environments with a detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over deterministic methods are the continuous energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method involves vast computational time. Interest in Monte Carlo methods has increased with the improved capabilities of high performance computers. Coupled Monte Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods.

  1. Comparing analytical and Monte Carlo optical diffusion models in phosphor-based X-ray detectors

    NASA Astrophysics Data System (ADS)

    Kalyvas, N.; Liaparinos, P.

    2014-03-01

    Luminescent materials are employed as radiation-to-light converters in detectors of medical imaging systems, often referred to as phosphor screens. Several processes affect the light transfer properties of phosphors. Amongst the most important is the attenuation (absorption and scattering) of light, which can be described either through "diffusion" theory in theoretical models or "quantum" theory in Monte Carlo methods. Although analytical methods, based on photon diffusion equations, have been preferentially employed to investigate optical diffusion in the past, Monte Carlo simulation models can overcome several of the analytical modelling assumptions. The present study aimed to compare both methodologies and investigate the dependence of the analytical model's optical parameters on particle size. It was found that the optical photon attenuation coefficients calculated by analytical modeling decrease with particle size (in the 1-12 μm region). In addition, for particle sizes smaller than 6 μm there is no simultaneous agreement between the theoretical modulation transfer function and light escape values with respect to the Monte Carlo data.

  2. Programs for calibration-based Monte Carlo simulation of recharge areas.

    PubMed

    Starn, J Jeffrey; Bagtzoglou, Amvrossios C

    2012-01-01

    One use of groundwater flow models is to simulate contributing recharge areas to wells or springs. Particle tracking can be used to simulate these recharge areas, but in many cases the modeler is not sure how accurate these recharge areas are because parameters such as hydraulic conductivity and recharge have errors associated with them. The scripts described in this article (GEN_LHS and MCDRIVER_LHS) use the Python scripting language to run a Monte Carlo simulation with Latin hypercube sampling where model parameters such as hydraulic conductivity and recharge are randomly varied for a large number of model simulations, and the probability of a particle being in the contributing area of a well is calculated based on the results of multiple simulations. Monte Carlo simulation provides one useful measure of the variability in modeled particles. The Monte Carlo method described here is unique in that it uses parameter sets derived from the optimal parameters, their standard deviations, and their correlation matrix, all of which are calculated during nonlinear regression model calibration. In addition, this method uses a set of acceptance criteria to eliminate unrealistic parameter sets. PMID:21967487
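
    The GEN_LHS/MCDRIVER_LHS sources are not included in this record, but the core sampling step, Latin hypercube draws mapped through the calibrated parameter statistics, can be sketched in Python with NumPy (all parameter values below are made up):

      import numpy as np
      from statistics import NormalDist

      rng = np.random.default_rng(6)

      # Hypothetical calibration outputs: optimal (log-)parameters, their
      # standard deviations and correlation matrix, as nonlinear regression
      # would report them.
      mean = np.array([2.0, -1.0])                # e.g. log K, log recharge
      sd = np.array([0.3, 0.2])
      corr = np.array([[1.0, 0.6], [0.6, 1.0]])
      L = np.linalg.cholesky(np.outer(sd, sd) * corr)

      n, d = 100, 2
      # Latin hypercube: one stratum per sample in each dimension, shuffled
      # independently per dimension, then jittered within each stratum.
      strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
      u = (strata + rng.random((n, d))) / n
      # Map the stratified uniforms through the normal quantile function,
      # then impose the correlation structure via the Cholesky factor.
      z = np.array([[NormalDist().inv_cdf(v) for v in row] for row in u])
      params = mean + z @ L.T
      print(params.mean(axis=0))
      print(np.corrcoef(params.T))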

  3. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-01

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.

  4. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy.

    PubMed

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-01

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm(3) calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution. PMID

  5. Parallelizing Monte Carlo with PMC

    SciTech Connect

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, in which Monte Carlo runs on the same processor as the rest of the code; parallel, in which Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); and distributed, in which Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.
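
    One of PMC's responsibilities listed above is providing independent, reproducible random-number sequences per processor. The sketch below shows one common way to do this today, with NumPy's SeedSequence spawning; it illustrates the idea only and is unrelated to PMC's actual implementation.

        # Illustrative only: independent, reproducible per-node random streams,
        # one of the services a system like PMC must provide.
        import numpy as np

        n_nodes = 4
        root = np.random.SeedSequence(12345)          # one master seed
        streams = [np.random.default_rng(s) for s in root.spawn(n_nodes)]

        # Each "node" draws from its own stream; the draws are statistically
        # independent and exactly reproducible from the master seed.
        partial = [rng_i.standard_normal(10_000).mean() for rng_i in streams]
        print(partial)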

  6. X-ray imaging plate performance investigation based on a Monte Carlo simulation tool

    NASA Astrophysics Data System (ADS)

    Yao, M.; Duvauchelle, Ph.; Kaftandjian, V.; Peterzol-Parmentier, A.; Schumm, A.

    2015-01-01

    Computed radiography (CR) based on imaging plate (IP) technology represents a potential replacement for traditional film-based industrial radiography. To investigate IP performance, especially at high energies, a Monte Carlo simulation tool based on PENELOPE has been developed. This tool tracks direct and secondary radiation separately and monitors the behavior of different particles. The simulation output provides the 3D distribution of deposited energy in the IP and an evaluation of radiation spectrum propagation, allowing us to visualize the behavior of different particles and the influence of different elements. A detailed analysis of the spectral and spatial responses of the IP at different energies up to the MeV range has been performed.

  7. Prediction of betavoltaic battery output parameters based on SEM measurements and Monte Carlo simulation.

    PubMed

    Yakimov, Eugene B

    2016-06-01

    An approach for predicting the output parameters of (63)Ni-based betavoltaic batteries is described. It consists of a multilayer Monte Carlo simulation to obtain the depth dependence of the excess carrier generation rate inside the semiconductor converter, a determination of the collection probability based on electron beam induced current measurements, a calculation of the current induced in the semiconductor converter by beta-radiation, and SEM measurements of output parameters using the calculated induced current value. This approach makes it possible to predict the betavoltaic battery parameters and to optimize the converter design for any real semiconductor structure and any thickness and specific activity of the beta-radiation source. PMID:27017084

  8. Monte Carlo-based down-scatter correction of SPECT attenuation maps.

    PubMed

    Bokulić, Tomislav; Vastenhouw, Brendan; de Jong, Hugo W A M; van Dongen, Alice J; van Rijk, Peter P; Beekman, Freek J

    2004-08-01

    Combined acquisition of transmission and emission data in single-photon emission computed tomography (SPECT) can be used for correction of non-uniform photon attenuation. However, down-scatter from a higher energy isotope (e.g. 99mTc) contaminates lower energy transmission data (e.g. 153Gd, 100 keV), resulting in underestimation of reconstructed attenuation coefficients. Window-based corrections are often not very accurate and increase noise in attenuation maps. We have developed a new correction scheme. It uses accurate scatter modelling to avoid noise amplification and does not require additional energy windows. The correction works as follows: Initially, an approximate attenuation map is reconstructed using down-scatter contaminated transmission data (step 1). An emission map is reconstructed based on the contaminated attenuation map (step 2). Based on this approximate 99mTc reconstruction and attenuation map, down-scatter in the 153Gd window is simulated using accelerated Monte Carlo simulation (step 3). This down-scatter estimate is used during reconstruction of a corrected attenuation map (step 4). Based on the corrected attenuation map, an improved 99mTc image is reconstructed (step 5). Steps 3-5 are repeated to incrementally improve the down-scatter estimate. The Monte Carlo simulator provides accurate down-scatter estimation with significantly less noise than down-scatter estimates acquired in an additional window. Errors in the reconstructed attenuation coefficients are reduced from ca. 40% to less than 5%. Furthermore, artefacts in 99mTc emission reconstructions are almost completely removed. These results are better than for window-based correction, both in simulation experiments and in physical phantom experiments. Monte Carlo down-scatter simulation in concert with statistical reconstruction provides accurate down-scatter correction of attenuation maps. PMID:15034678
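
    The five-step loop described above can be summarized schematically. In the sketch below, the reconstruction and Monte Carlo steps are replaced by trivial one-line stand-ins so that only the data flow of the iterative down-scatter correction is visible; none of this reflects the authors' actual reconstruction or simulation code.

        # Schematic of the five-step iterative correction; the reconstruction
        # and Monte Carlo stages are toy stand-ins showing the data flow only.
        import numpy as np

        def reconstruct_mu(transmission, downscatter):        # steps 1 and 4
            # toy: attenuation map recovered after subtracting the estimate
            return np.log(1.0 / np.clip(transmission - downscatter, 1e-6, None))

        def reconstruct_emission(mu):                         # steps 2 and 5
            return np.exp(-mu)                                # toy emission map

        def simulate_downscatter(emission, mu):               # step 3 (MC)
            return 0.1 * emission                             # toy estimate

        transmission = np.full(64, 0.5)    # contaminated 153Gd window data
        downscatter = np.zeros(64)
        for _ in range(3):                 # repeat steps 3-5 to refine
            mu = reconstruct_mu(transmission, downscatter)
            emission = reconstruct_emission(mu)
            downscatter = simulate_downscatter(emission, mu)
        print(mu[:4])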

  9. Isotropic Monte Carlo Grain Growth

    Energy Science and Technology Software Center (ESTSC)

    2013-04-25

    IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
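
    For readers unfamiliar with this class of algorithm, the following is a minimal Potts-model grain growth Monte Carlo with an isotropic boundary energy. It uses a square grid with periodic boundaries for brevity, whereas IMCGG itself works on a hexagonal grid.

        # Minimal isotropic Potts-model grain growth (square grid, periodic
        # boundaries, zero-temperature Metropolis acceptance).
        import numpy as np

        rng = np.random.default_rng(0)
        N, Q, steps = 64, 32, 200_000
        spins = rng.integers(Q, size=(N, N))     # initial grain orientations

        def boundary_energy(s, i, j, q):
            # isotropic boundary energy: count unlike nearest neighbours
            nbrs = (s[(i - 1) % N, j], s[(i + 1) % N, j],
                    s[i, (j - 1) % N], s[i, (j + 1) % N])
            return sum(q != n for n in nbrs)

        for _ in range(steps):
            i, j = rng.integers(N, size=2)
            new = rng.integers(Q)
            dE = (boundary_energy(spins, i, j, new)
                  - boundary_energy(spins, i, j, spins[i, j]))
            if dE <= 0:                          # accept only non-increasing energy
                spins[i, j] = new

        print("grains remaining:", np.unique(spins).size)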

  10. Variance reduction for Fokker-Planck based particle Monte Carlo schemes

    NASA Astrophysics Data System (ADS)

    Gorji, M. Hossein; Andric, Nemanja; Jenny, Patrick

    2015-08-01

    Recently, Fokker-Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1-3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker-Planck model is considered. First, deviational schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker-Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is solved simultaneously together with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.

  11. Variance reduction for Fokker–Planck based particle Monte Carlo schemes

    SciTech Connect

    Gorji, M. Hossein; Andric, Nemanja; Jenny, Patrick

    2015-08-15

    Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is solved simultaneously together with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
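
    The correlated-process idea in these two records can be demonstrated on a scalar toy problem: an Ornstein-Uhlenbeck path shares its Wiener increments with an auxiliary Brownian path whose second moment is known exactly, and the strong correlation between the two cancels most of the statistical error. This is a generic control-variate illustration, not the authors' scheme for the Fokker-Planck kinetic model.

        # Variance reduction with a correlated auxiliary process.
        import numpy as np

        rng = np.random.default_rng(0)
        n_paths, n_steps, T = 20_000, 200, 0.5
        dt = T / n_steps

        x = np.zeros(n_paths)        # main process:  dX = -X dt + sqrt(2) dW
        y = np.zeros(n_paths)        # auxiliary:     dY =          sqrt(2) dW
        for _ in range(n_steps):
            dw = rng.standard_normal(n_paths) * np.sqrt(dt)
            x += -x * dt + np.sqrt(2) * dw     # same noise drives both paths
            y += np.sqrt(2) * dw

        plain = x**2                           # direct estimator of E[X_T^2]
        known = 2.0 * T                        # exact E[Y_T^2] for Brownian motion
        controlled = x**2 - (y**2 - known)     # correlated-process estimator

        print("plain     :", plain.mean(), "+/-", plain.std() / np.sqrt(n_paths))
        print("controlled:", controlled.mean(), "+/-",
              controlled.std() / np.sqrt(n_paths))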

  12. Monte Carlo based calibration of an air monitoring system for gamma and beta+ radiation.

    PubMed

    Sarnelli, A; Negrini, M; D'Errico, V; Bianchini, D; Strigari, L; Mezzenga, E; Menghi, E; Marcocci, F; Benassi, M

    2015-11-01

    Marinelli beaker systems are used to monitor the activity of radioactive samples. These systems are usually calibrated with water solutions, and the determination of the activity in gases requires correction coefficients accounting for the different mass-thickness of the sample. For beta+ radionuclides, the different distribution of positron annihilation points should also be considered. In this work a Monte Carlo simulation based on Geant4 is used to compute correction coefficients for the measurement of the activity of air samples. PMID:26356044

  13. Quantum Monte Carlo simulation of resonant tunneling diodes based on the Wigner distribution function formalism

    NASA Astrophysics Data System (ADS)

    García-García, J.; Martín, F.; Oriols, X.; Suñé, J.

    1998-12-01

    A tool for the simulation of resonant tunneling diodes (RTDs) has been developed. It is based on the solution of the quantum Liouville equation in the active region of the device and the Boltzmann transport equation in the regions adjacent to the contacts by means of a Monte Carlo algorithm. By accurately coupling both approaches to current transport, we have developed a quantum simulation tool that allows the use of simulation domains much larger and more realistic than those previously considered, without a significant increase in computational burden. The main characteristics expected for the considered devices are clearly obtained, thus supporting the validity of our tool for the simulation of RTDs.

  14. Refinement of overlapping local/global iteration method based on Monte Carlo/p-CMFD calculations

    SciTech Connect

    Jo, Y.; Yun, S.; Cho, N. Z.

    2013-07-01

    In this paper, the overlapping local/global (OLG) iteration method based on Monte Carlo/p-CMFD calculations is refined in two aspects. One is the consistent use of estimators to generate homogenized scattering cross sections. The other is that the incident or exiting angular interval is divided into multiple angular bins to modulate the albedo boundary conditions for local problems. Numerical tests show that, compared to the one-angle-bin case in a previous study, the four-angle-bin case shows significantly improved results. (authors)

  15. Channel capacity study of underwater wireless optical communications links based on Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Li, Jing; Ma, Yong; Zhou, Qunqun; Zhou, Bo; Wang, Hongyuan

    2012-01-01

    The channel capacity of ocean water is limited by propagation distance and optical properties. Previous studies of this problem are based on water-tank experiments with different amounts of Maalox antacid. However, the propagation distance is limited by the experimental set-up, and the optical properties differ from those of ocean water; the experimental results are therefore not accurate enough for the physical design of underwater wireless communication links. This letter develops a Monte Carlo model to study the channel capacity of underwater optical communications. The model can flexibly configure various parameters of the transmitter, receiver and channel, and is suitable for the physical design of underwater optical communication links.
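
    A minimal example of the kind of model the letter describes: photons are traced through a homogeneous water channel with exponential free paths and a fixed single-scattering albedo, and the fraction reaching an on-axis receiver is tallied. All coefficients below are placeholders, and isotropic scattering is used instead of the strongly forward-peaked phase function of real seawater.

        # Toy Monte Carlo for an underwater optical link.
        import numpy as np

        rng = np.random.default_rng(0)
        c_att, omega0 = 0.5, 0.8       # attenuation [1/m], scattering albedo
        link, aperture = 10.0, 0.25    # receiver distance [m] and radius [m]
        n_photons, received = 20_000, 0

        for _ in range(n_photons):
            pos, direction = np.zeros(3), np.array([0.0, 0.0, 1.0])
            while True:
                step = rng.exponential(1.0 / c_att)
                if direction[2] > 0 and pos[2] + direction[2] * step >= link:
                    # photon reaches the receiver plane before interacting
                    hit = pos + direction * (link - pos[2]) / direction[2]
                    received += np.hypot(hit[0], hit[1]) <= aperture
                    break
                pos = pos + direction * step
                if rng.random() > omega0:          # photon absorbed
                    break
                u = 2.0 * rng.random() - 1.0       # isotropic re-direction
                phi = 2.0 * np.pi * rng.random()
                s = np.sqrt(1.0 - u * u)
                direction = np.array([s * np.cos(phi), s * np.sin(phi), u])

        print("fraction received:", received / n_photons)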

  16. PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation.

    PubMed

    España, S; Herraiz, J L; Vicente, E; Vaquero, J J; Desco, M; Udias, J M

    2009-03-21

    Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones. PMID:19242053

  17. Monte Carlo simulation of a compact microbeam radiotherapy system based on carbon nanotube field emission technology

    PubMed Central

    Schreiber, Eric C.; Chang, Sha X.

    2012-01-01

    Purpose: Microbeam radiation therapy (MRT) is an experimental radiotherapy technique that has shown potent antitumor effects with minimal damage to normal tissue in animal studies. This unique form of radiation is currently produced only in a few large synchrotron accelerator research facilities in the world. To promote widespread translational research on this promising treatment technology we have proposed, and are in the initial development stages of, a compact MRT system based on carbon nanotube field emission x-ray technology. We report on a Monte Carlo based feasibility study of the compact MRT system design. Methods: Monte Carlo calculations were performed using EGSnrc-based codes. The proposed small animal research MRT device design includes carbon nanotube cathodes shaped to match the corresponding MRT collimator apertures, a common reflection anode with filter, and an MRT collimator. Each collimator aperture is sized to deliver a beam width ranging from 30 to 200 μm at 18.6 cm source-to-axis distance. Design parameters studied with Monte Carlo include electron energy, cathode design, anode angle, filtration, and collimator design. Calculations were performed for single and multibeam configurations. Results: Increasing the energy from 100 kVp to 160 kVp increased the photon fluence through the collimator by a factor of 1.7. Both energies produced a largely uniform fluence along the long dimension of the microbeam, with 5% decreases in intensity near the edges. The isocentric dose rate for 160 kVp was calculated to be 700 Gy/min/A in the center of a 3 cm diameter target. Scatter contributions resulting from collimator size were found to produce only small (<7%) changes in the dose rate for field widths greater than 50 μm. Dose versus depth was weakly dependent on filtration material. The peak-to-valley ratio varied from 10 to 100 as the separation between adjacent microbeams varied from 150 to 1000 μm. Conclusions: Monte Carlo simulations demonstrate

  18. Monte Carlo simulation of a compact microbeam radiotherapy system based on carbon nanotube field emission technology

    SciTech Connect

    Schreiber, Eric C.; Chang, Sha X.

    2012-08-15

    Purpose: Microbeam radiation therapy (MRT) is an experimental radiotherapy technique that has shown potent antitumor effects with minimal damage to normal tissue in animal studies. This unique form of radiation is currently produced only in a few large synchrotron accelerator research facilities in the world. To promote widespread translational research on this promising treatment technology we have proposed, and are in the initial development stages of, a compact MRT system based on carbon nanotube field emission x-ray technology. We report on a Monte Carlo based feasibility study of the compact MRT system design. Methods: Monte Carlo calculations were performed using EGSnrc-based codes. The proposed small animal research MRT device design includes carbon nanotube cathodes shaped to match the corresponding MRT collimator apertures, a common reflection anode with filter, and an MRT collimator. Each collimator aperture is sized to deliver a beam width ranging from 30 to 200 μm at 18.6 cm source-to-axis distance. Design parameters studied with Monte Carlo include electron energy, cathode design, anode angle, filtration, and collimator design. Calculations were performed for single and multibeam configurations. Results: Increasing the energy from 100 kVp to 160 kVp increased the photon fluence through the collimator by a factor of 1.7. Both energies produced a largely uniform fluence along the long dimension of the microbeam, with 5% decreases in intensity near the edges. The isocentric dose rate for 160 kVp was calculated to be 700 Gy/min/A in the center of a 3 cm diameter target. Scatter contributions resulting from collimator size were found to produce only small (<7%) changes in the dose rate for field widths greater than 50 μm. Dose versus depth was weakly dependent on filtration material. The peak-to-valley ratio varied from 10 to 100 as the separation between adjacent microbeams varied from 150 to 1000 μm. Conclusions: Monte Carlo simulations

  19. An analysis method for evaluating gradient-index fibers based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Yoshida, S.; Horiuchi, S.; Ushiyama, Z.; Yamamoto, M.

    2011-05-01

    We propose a numerical analysis method for evaluating gradient-index (GRIN) optical fibers using the Monte Carlo method. GRIN optical fibers are widely used in optical information processing and communication applications such as image scanners, fax machines and optical sensors. An important factor that determines the performance of a GRIN optical fiber is its modulation transfer function (MTF), which is affected by manufacturing process conditions such as temperature. Experimentally, the MTF is measured using a square-wave chart and is then calculated from the distribution of output intensity on the chart. In contrast, the conventional computational method evaluates the MTF from a spot diagram produced by an incident point light source, and its results differ greatly from those obtained by experiment. In this paper, we describe the manufacturing process factors that affect the performance of GRIN optical fibers and a new evaluation method, based on the Monte Carlo method, that mimics the experimental system. We verified that it matches the experimental results more closely than the conventional method.

  20. Monte Carlo calculation based on hydrogen composition of the tissue for MV photon radiotherapy.

    PubMed

    Demol, Benjamin; Viard, Romain; Reynaert, Nick

    2015-01-01

    The purpose of this study was to demonstrate that Monte Carlo treatment planning systems require tissue characterization (density and composition) as a function of CT number. A discrete set of tissue classes with a specific composition is introduced. In the current work we demonstrate that, for megavoltage photon radiotherapy, only the hydrogen content of the different tissues is of interest. This conclusion might have an impact on MRI-based dose calculations and on MVCT calibration using tissue substitutes. A stoichiometric calibration was performed, grouping tissues with similar atomic composition into 15 dosimetrically equivalent subsets. To demonstrate the importance of hydrogen, a new scheme was derived with the correct hydrogen content, complemented by oxygen (all elements other than hydrogen are replaced by oxygen). Mass attenuation coefficients and mass stopping powers for this scheme were calculated and compared to the original scheme. Twenty-five CyberKnife treatment plans were recalculated by an in-house developed Monte Carlo system using tissue density and hydrogen content derived from the CT images. The results were compared to Monte Carlo simulations using the original stoichiometric calibration. Between 300 keV and 3 MeV, the relative difference of mass attenuation coefficients is under 1% within all subsets. Between 10 keV and 20 MeV, the relative difference of mass stopping powers goes up to 5% in hard bone and remains below 2% for all other tissue subsets. Dose-volume histograms (DVHs) of the treatment plans present no visual difference between the two schemes. Relative differences of the dose indexes D98, D95, D50, D05, D02, and Dmean were analyzed, and a distribution centered on zero with a standard deviation below 2% (3σ) was established. On the other hand, once the hydrogen content is even slightly modified, important dose differences are obtained. Monte Carlo dose planning in the field of megavoltage photon radiotherapy is fully achievable using only the tissue density and hydrogen content derived from the CT images.

  1. Stochastic modeling of polarized light scattering using a Monte Carlo based stencil method.

    PubMed

    Sormaz, Milos; Stamm, Tobias; Jenny, Patrick

    2010-05-01

    This paper deals with an efficient and accurate simulation algorithm to solve the vector Boltzmann equation for polarized light transport in scattering media. The approach is based on a stencil method, which was previously developed for unpolarized light scattering and proved to be much more efficient (speedup factors of up to 10 were reported) than the classical Monte Carlo while being equally accurate. To validate the new stencil method, a substrate composed of spherical non-absorbing particles embedded in a non-absorbing medium was considered. The corresponding single scattering Mueller matrix, which is required to model scattering of polarized light, was determined based on the Lorenz-Mie theory. From simulations of a reflected polarized laser beam, the Mueller matrix of the substrate was computed and compared with an established reference. The agreement is excellent, and it could be demonstrated that a significant speedup of the simulations is achieved due to the stencil approach compared with the classical Monte Carlo. PMID:20448777

  2. Monte Carlo simulations for external neutron dosimetry based on the visible Chinese human phantom.

    PubMed

    Zhang, Guozhi; Liu, Qian; Luo, Qingming

    2007-12-21

    A group of Monte Carlo simulations has been performed for external neutron dosimetry calculations based on a whole-body anatomical model, the visible Chinese human (VCH) phantom, which was newly developed from high-resolution cryosectional color photographic images of a healthy Chinese adult male cadaver. Physical characteristics of the VCH computational phantom, which consists of 230 x 120 x 892 voxels corresponding to an element volume of 2 x 2 x 2 mm(3), are evaluated through comparison against a variety of other anthropomorphic models. Organ-absorbed doses and the effective doses for monoenergetic neutron beams ranging from 10(-9) MeV to 10 GeV under six idealized irradiation geometries (AP, PA, LLAT, RLAT, ROT and ISO) were calculated using the Monte Carlo code MCNPX2.5. Absorbed dose results for selected organs and the effective doses are presented in the form of tables. Dose results are also compared with currently available neutron data from ICRP Publication 74 and those of VIP-Man. Anatomical variations between different models, as well as their influence on dose distributions, are explored. Detailed information derived from the VCH phantom provides quantitative reference data for the widespread application of human computational models in radiology. PMID:18065844

  3. Monte Carlo simulations for external neutron dosimetry based on the visible Chinese human phantom

    NASA Astrophysics Data System (ADS)

    Zhang, Guozhi; Liu, Qian; Luo, Qingming

    2007-12-01

    A group of Monte Carlo simulations has been performed for external neutron dosimetry calculations based on a whole-body anatomical model, the visible Chinese human (VCH) phantom, which was newly developed from high-resolution cryosectional color photographic images of a healthy Chinese adult male cadaver. Physical characteristics of the VCH computational phantom, which consists of 230 × 120 × 892 voxels corresponding to an element volume of 2 × 2 × 2 mm³, are evaluated through comparison against a variety of other anthropomorphic models. Organ-absorbed doses and the effective doses for monoenergetic neutron beams ranging from 10⁻⁹ MeV to 10 GeV under six idealized irradiation geometries (AP, PA, LLAT, RLAT, ROT and ISO) were calculated using the Monte Carlo code MCNPX2.5. Absorbed dose results for selected organs and the effective doses are presented in the form of tables. Dose results are also compared with currently available neutron data from ICRP Publication 74 and those of VIP-Man. Anatomical variations between different models, as well as their influence on dose distributions, are explored. Detailed information derived from the VCH phantom provides quantitative reference data for the widespread application of human computational models in radiology.

  4. A Strategy for Finding the Optimal Scale of Plant Core Collection Based on Monte Carlo Simulation

    PubMed Central

    Wang, Jiancheng; Guan, Yajing; Wang, Yang; Zhu, Liwei; Wang, Qitian; Hu, Qijuan; Hu, Jin

    2014-01-01

    A core collection is an ideal resource for genome-wide association studies (GWAS), and a subcore collection is a subset of a core collection. A strategy was proposed for finding the optimal sampling percentage for a plant subcore collection based on Monte Carlo simulation. A cotton germplasm group of 168 accessions with 20 quantitative traits was used to construct subcore collections. A mixed linear model approach was used to eliminate the environment effect and the GE (genotype × environment) effect. A least distance stepwise sampling (LDSS) method, combining 6 commonly used genetic distances with the unweighted pair-group average (UPGMA) cluster method, was adopted to construct the subcore collections. A homogeneous population assessment method was adopted to assess the validity of 7 evaluating parameters of the subcore collections. Monte Carlo simulation was conducted on the sampling percentage, the number of traits, and the evaluating parameters. A new method for “distilling free-form natural laws from experimental data” was adopted to find the best formula to determine the optimal sampling percentages. The results showed that the coincidence rate of range (CR) was the most valid evaluating parameter and was suitable to serve as a threshold for finding the optimal sampling percentage. Principal component analysis showed that subcore collections constructed with the optimal sampling percentages calculated by the present strategy were highly representative. PMID:24574893

  5. A strategy for finding the optimal scale of plant core collection based on Monte Carlo simulation.

    PubMed

    Wang, Jiancheng; Guan, Yajing; Wang, Yang; Zhu, Liwei; Wang, Qitian; Hu, Qijuan; Hu, Jin

    2014-01-01

    A core collection is an ideal resource for genome-wide association studies (GWAS), and a subcore collection is a subset of a core collection. A strategy was proposed for finding the optimal sampling percentage for a plant subcore collection based on Monte Carlo simulation. A cotton germplasm group of 168 accessions with 20 quantitative traits was used to construct subcore collections. A mixed linear model approach was used to eliminate the environment effect and the GE (genotype × environment) effect. A least distance stepwise sampling (LDSS) method, combining 6 commonly used genetic distances with the unweighted pair-group average (UPGMA) cluster method, was adopted to construct the subcore collections. A homogeneous population assessment method was adopted to assess the validity of 7 evaluating parameters of the subcore collections. Monte Carlo simulation was conducted on the sampling percentage, the number of traits, and the evaluating parameters. A new method for "distilling free-form natural laws from experimental data" was adopted to find the best formula to determine the optimal sampling percentages. The results showed that the coincidence rate of range (CR) was the most valid evaluating parameter and was suitable to serve as a threshold for finding the optimal sampling percentage. Principal component analysis showed that subcore collections constructed with the optimal sampling percentages calculated by the present strategy were highly representative. PMID:24574893

  6. A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX

    PubMed Central

    Jabbari, Keyvan; Seuntjens, Jan

    2014-01-01

    An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport; the MCNPX code has been used to generate the tracks. A data set including the particle track was produced for each material of interest (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code was evaluated using MCNPX as a reference code. While analytical pencil beam transport shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX: the fast MC code developed in this work calculates the dose for 10(6) particles in less than 2 minutes on an Intel Core 2 Duo 2.66 GHz desktop computer. PMID:25190994
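
    The pre-generated-track idea can be sketched as follows: particle histories are simulated once into a library and then replayed from many source positions instead of being re-simulated. The one-dimensional random walks below are toy stand-ins for MCNPX-generated proton tracks; all numbers are illustrative.

        # Sketch of the pre-generated-track approach (toy 1-D version).
        import numpy as np

        rng = np.random.default_rng(0)
        n_tracks, grid = 1_000, np.zeros(200)

        # One-time library generation: each track is a list of
        # (depth, energy deposit) pairs, standing in for stored MC histories.
        library = []
        for _ in range(n_tracks):
            depth, e, track = 0.0, 1.0, []
            while e > 0.01 and depth < grid.size:
                step = rng.exponential(2.0)
                de = e * rng.uniform(0.05, 0.20)   # fraction deposited locally
                depth, e = depth + step, e - de
                track.append((depth, de))
            library.append(track)

        # Fast phase: replay the stored tracks from arbitrary source depths
        # instead of re-running the transport simulation.
        for source_depth in (0.0, 20.0, 40.0):
            for track in library:
                for depth, de in track:
                    idx = int(source_depth + depth)
                    if idx < grid.size:
                        grid[idx] += de

        print("peak dose bin:", grid.argmax())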

  7. Review of dynamical models for external dose calculations based on Monte Carlo simulations in urbanised areas.

    PubMed

    Eged, Katalin; Kis, Zoltán; Voigt, Gabriele

    2006-01-01

    After an accidental release of radionuclides to the inhabited environment, the external gamma irradiation from deposited radioactivity contributes significantly to the radiation exposure of the population for extended periods. Evaluating this exposure pathway requires three main model components: (i) calculation of the air kerma value per photon emitted per unit source area, based on Monte Carlo (MC) simulations; (ii) description of the distribution and dynamics of radionuclides on the diverse urban surfaces; and (iii) combination of all these elements in a relevant urban model to calculate the resulting doses according to the actual scenario. This paper provides an overview of the different approaches to calculating photon transport in urban areas and of several published dose calculation codes. Two types of Monte Carlo simulations are presented, using the global and the local approaches to photon transport. Moreover, two different philosophies of dose calculation, the "location factor method" and a combination of the relative contamination of surfaces with air kerma values, are described. The main features of six codes (ECOSYS, EDEM2M, EXPURT, PARATI, TEMAS, URGENT) are highlighted, together with a short intercomparison of model features. PMID:16095771

  8. CAD-based Monte Carlo Program for Integrated Simulation of Nuclear System SuperMC

    NASA Astrophysics Data System (ADS)

    Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin

    2014-06-01

    The Monte Carlo (MC) method has distinct advantages for simulating complicated nuclear systems and is envisioned as a routine method for nuclear design and analysis in the future. High-fidelity MC simulation coupled with multi-physics simulation has a significant impact on the safety, economy and sustainability of nuclear systems. However, great challenges to current MC methods and codes prevent their application in real engineering projects. SuperMC is a CAD-based Monte Carlo program for integrated simulation of nuclear systems developed by the FDS Team, China, making use of hybrid MC-deterministic methods and advanced computer technologies. The design aims, architecture and main methodology of SuperMC are presented in this paper. SuperMC2.1, the latest version for neutron, photon and coupled neutron-photon transport calculations, has been developed and validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. SuperMC is still evolving toward a general and routine tool for nuclear system simulation.

  9. Adaptation of GEANT4 to Monte Carlo dose calculations based on CT data.

    PubMed

    Jiang, H; Paganetti, H

    2004-10-01

    The GEANT4 Monte Carlo code provides many powerful functions for conducting particle transport simulations with great reliability and flexibility. However, as a general purpose Monte Carlo code, not all of its functions were specifically designed and fully optimized for applications in radiation therapy. One of the primary issues is computational efficiency, which is especially critical when patient CT data have to be imported into the simulation model. In this paper we summarize the relevant aspects of the GEANT4 tracking and geometry algorithms and introduce our work on using the code to conduct dose calculations based on CT data. The emphasis is on modifications of the GEANT4 source code to meet the requirements for fast dose calculations. The major features include a quick voxel search algorithm, fast volume optimization, and dynamic assignment of material density. These features are ready to be used for tracking the primary types of particles employed in radiation therapy, such as photons, electrons, and heavy charged particles. Recalculation of a proton therapy treatment plan generated by a commercial treatment planning program for a paranasal sinus case is presented as an example. PMID:15543788
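
    The dynamic assignment of material density mentioned above typically amounts to mapping each voxel's CT number to a continuous mass density plus a discrete material class for cross-section lookup. The sketch below illustrates such a mapping with placeholder breakpoints; it is not the calibration used in the paper.

        # Illustrative CT-number handling for Monte Carlo dose engines:
        # continuous density plus a coarse material segmentation.
        import numpy as np

        hu = np.array([-950, -700, -80, 30, 400, 1200])   # example CT numbers

        # Continuous density from a piecewise-linear HU -> g/cm^3 curve.
        hu_pts = np.array([-1000, -100, 0, 100, 1600])
        rho_pts = np.array([0.001, 0.93, 1.0, 1.07, 1.85])
        density = np.interp(hu, hu_pts, rho_pts)

        # Discrete material classes for cross-section lookup.
        bins = np.array([-900, -100, 100])                # class boundaries
        names = np.array(["air", "lung", "soft tissue", "bone"])
        material = names[np.digitize(hu, bins)]

        for h, r, m in zip(hu, density, material):
            print(f"HU {h:5d} -> {m:11s} rho = {r:.3f} g/cm^3")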

  10. Fission yield calculation using toy model based on Monte Carlo simulation

    SciTech Connect

    Jubaidah; Kurniadi, Rizal

    2015-09-30

    The toy model is a new approximation for predicting fission yield distributions. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nuclear properties. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (R_c), the means of the left and right curves (μ_L and μ_R), and the deviations of the left and right curves (σ_L and σ_R). The fission yield distribution is analyzed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield probability distribution. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation of fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
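
    A minimal sketch of the sampling step, assuming illustrative values for the Gaussian parameters: fragment masses are drawn from the two intersecting Gaussians, and the partner fragment carries the remaining nucleons.

        # Toy two-Gaussian fission yield sampling (illustrative parameters).
        import numpy as np

        rng = np.random.default_rng(0)
        A = 236                              # compound toy nucleus (e.g. 235U + n)
        mu_L, mu_R = 96.0, 140.0             # means of light/heavy peaks
        sig_L, sig_R = 5.5, 5.5              # peak widths
        n = 100_000

        # Pick a peak, then a fragment mass; the partner carries the rest.
        light_peak = rng.random(n) < 0.5
        frag = np.where(light_peak,
                        rng.normal(mu_L, sig_L, n),
                        rng.normal(mu_R, sig_R, n))
        partner = A - frag
        masses = np.concatenate([frag, partner])

        print("light-peak mean:", masses[masses < A / 2].mean())
        print("heavy-peak mean:", masses[masses >= A / 2].mean())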

  11. Generalized mesh-based Monte Carlo for wide-field illumination and detection via mesh retessellation

    PubMed Central

    Yao, Ruoyang; Intes, Xavier; Fang, Qianqian

    2015-01-01

    Monte Carlo methods are commonly used as the gold standard for modeling photon transport through turbid media. With the rapid development of structured light applications, an accurate and efficient method capable of simulating arbitrary illumination patterns and complex detection schemes over large surface areas is greatly needed. Here we report a generalized mesh-based Monte Carlo algorithm to support a variety of wide-field illumination methods, including spatial-frequency-domain imaging (SFDI) patterns and arbitrary 2-D patterns. The extended algorithm can also model wide-field detectors such as a free-space CCD camera. The significantly enhanced flexibility of source and detector modeling is achieved via a fast mesh retessellation process that combines the target domain and the source/detector space in a single tetrahedral mesh. Both simulations of complex domains and comparisons with phantom measurements are included to demonstrate the flexibility, efficiency and accuracy of the extended algorithm. Our updated open-source software is provided at http://mcx.space/mmc. PMID:26819826

  12. Quasi-Monte Carlo integration

    SciTech Connect

    Morokoff, W.J.; Caflisch, R.E.

    1995-12-01

    The standard Monte Carlo approach to evaluating multidimensional integrals using (pseudo)-random integration nodes is frequently used when quadrature methods are too difficult or expensive to implement. As an alternative to the random methods, it has been suggested that lower error and improved convergence may be obtained by replacing the pseudo-random sequences with more uniformly distributed sequences known as quasi-random. In this paper quasi-random (Halton, Sobol', and Faure) and pseudo-random sequences are compared in computational experiments designed to determine the effects on convergence of certain properties of the integrand, including variance, variation, smoothness, and dimension. The results show that variation, which plays an important role in the theoretical upper bound given by the Koksma-Hlawka inequality, does not affect convergence, while variance, the determining factor in random Monte Carlo, is shown to provide a rough upper bound, but does not accurately predict performance. In general, quasi-Monte Carlo methods are superior to random Monte Carlo, but the advantage may be slight, particularly in high dimensions or for integrands that are not smooth. For discontinuous integrands, we derive a bound which shows that the exponent for algebraic decay of the integration error from quasi-Monte Carlo is only slightly larger than 1/2 in high dimensions. 21 refs., 6 figs., 5 tabs.

  13. Quasi-Monte Carlo Integration

    NASA Astrophysics Data System (ADS)

    Morokoff, William J.; Caflisch, Russel E.

    1995-12-01

    The standard Monte Carlo approach to evaluating multidimensional integrals using (pseudo)-random integration nodes is frequently used when quadrature methods are too difficult or expensive to implement. As an alternative to the random methods, it has been suggested that lower error and improved convergence may be obtained by replacing the pseudo-random sequences with more uniformly distributed sequences known as quasi-random. In this paper quasi-random (Halton, Sobol', and Faure) and pseudo-random sequences are compared in computational experiments designed to determine the effects on convergence of certain properties of the integrand, including variance, variation, smoothness, and dimension. The results show that variation, which plays an important role in the theoretical upper bound given by the Koksma-Hlawka inequality, does not affect convergence, while variance, the determining factor in random Monte Carlo, is shown to provide a rough upper bound, but does not accurately predict performance. In general, quasi-Monte Carlo methods are superior to random Monte Carlo, but the advantage may be slight, particularly in high dimensions or for integrands that are not smooth. For discontinuous integrands, we derive a bound which shows that the exponent for algebraic decay of the integration error from quasi-Monte Carlo is only slightly larger than 1/2 in high dimensions.
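
    The comparison these two records describe is easy to reproduce in miniature: integrate a smooth function over the unit cube with pseudo-random points and with a scrambled Sobol' sequence and compare the errors. The integrand below is an arbitrary smooth example with a known integral of 1.

        # Pseudo-random vs quasi-random (Sobol') integration of a smooth
        # integrand over [0,1]^d; exact value of the integral is 1.
        import numpy as np
        from scipy.stats import qmc

        d, n = 6, 2**12                       # Sobol' prefers powers of two
        f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

        rng = np.random.default_rng(0)
        mc_est = f(rng.random((n, d))).mean()
        qmc_est = f(qmc.Sobol(d, scramble=True, seed=0).random(n)).mean()

        print(f"pseudo-random error : {abs(mc_est - 1):.2e}")
        print(f"Sobol' error        : {abs(qmc_est - 1):.2e}")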

  14. GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources

    NASA Astrophysics Data System (ADS)

    Townson, Reid W.; Jia, Xun; Tian, Zhen; Jiang Graves, Yan; Zavgorodni, Sergei; Jiang, Steve B.

    2013-06-01

    A novel phase-space source implementation has been designed for graphics processing unit (GPU)-based Monte Carlo dose calculation engines. Short of full simulation of the linac head, using a phase-space source is the most accurate method to model a clinical radiation beam in dose calculations. However, in GPU-based Monte Carlo dose calculations where the computation efficiency is very high, the time required to read and process a large phase-space file becomes comparable to the particle transport time. Moreover, due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel source implementation utilizing pre-processed patient-independent phase-spaces that are sorted by particle type, energy and position. Position bins located outside a rectangular region of interest enclosing the treatment field are ignored, substantially decreasing simulation time with little effect on the final dose distribution. The three methods were validated in absolute dose against BEAMnrc/DOSXYZnrc and compared using gamma-index tests (2%/2 mm above the 10% isodose). It was found that the PSL method has the optimal balance between accuracy and efficiency and thus is used as the default method in gDPM v3.0. Using the PSL method, open fields of 4 × 4, 10 × 10 and 30 × 30 cm

  15. GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources.

    PubMed

    Townson, Reid W; Jia, Xun; Tian, Zhen; Graves, Yan Jiang; Zavgorodni, Sergei; Jiang, Steve B

    2013-06-21

    A novel phase-space source implementation has been designed for graphics processing unit (GPU)-based Monte Carlo dose calculation engines. Short of full simulation of the linac head, using a phase-space source is the most accurate method to model a clinical radiation beam in dose calculations. However, in GPU-based Monte Carlo dose calculations where the computation efficiency is very high, the time required to read and process a large phase-space file becomes comparable to the particle transport time. Moreover, due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel source implementation utilizing pre-processed patient-independent phase-spaces that are sorted by particle type, energy and position. Position bins located outside a rectangular region of interest enclosing the treatment field are ignored, substantially decreasing simulation time with little effect on the final dose distribution. The three methods were validated in absolute dose against BEAMnrc/DOSXYZnrc and compared using gamma-index tests (2%/2 mm above the 10% isodose). It was found that the PSL method has the optimal balance between accuracy and efficiency and thus is used as the default method in gDPM v3.0. Using the PSL method, open fields of 4 × 4, 10 × 10 and 30 × 30 cm
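
    The phase-space-let preprocessing described in these two records reduces, in essence, to binning and sorting particle records by type, energy and position, and discarding those outside a rectangular region of interest. Below is a small NumPy sketch with synthetic, illustrative phase-space fields; the real gDPM file format and bin edges differ.

        # Sketch of phase-space-let style pre-sorting so that similar
        # particles are transported together on the GPU.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 1_000_000
        ptype = rng.integers(0, 2, n)             # 0 = photon, 1 = electron
        energy = rng.gamma(2.0, 1.5, n)           # MeV (synthetic)
        x, y = rng.uniform(-20, 20, (2, n))       # cm at the scoring plane

        e_bin = np.digitize(energy, np.linspace(0, 12, 25))
        x_bin = np.digitize(x, np.linspace(-20, 20, 41))

        # Drop particles outside a region of interest around the field.
        roi = (np.abs(x) < 7) & (np.abs(y) < 7)

        # Sort by type, then energy bin, then position bin (last key is primary).
        order = np.lexsort((x_bin[roi], e_bin[roi], ptype[roi]))
        sorted_idx = np.flatnonzero(roi)[order]
        print("kept", sorted_idx.size, "of", n, "particles, sorted for transport")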

  16. Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes.

    PubMed

    Pinsky, L S; Wilson, T L; Ferrari, A; Sala, P; Carminati, F; Brun, R

    2001-01-01

    This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be useful in the design and analysis of experiments such as ACCESS (Advanced Cosmic-ray Composition Experiment for Space Station), which is an Office of Space Science payload currently under evaluation for deployment on the International Space Station (ISS). FLUKA will be significantly improved and tailored for use in simulating space radiation in four ways. First, the additional physics not presently within the code that is necessary to simulate the problems of interest, namely the heavy ion inelastic processes, will be incorporated. Second, the internal geometry package will be replaced with one that will substantially increase the calculation speed as well as simplify the data input task. Third, default incident flux packages that include all of the different space radiation sources of interest will be included. Finally, the user interface and internal data structure will be melded together with ROOT, the object-oriented data analysis infrastructure system. Beyond

  17. Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems.

    PubMed

    Ma, Xiaoyao; Hall, Randall W; Löffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana

    2016-01-01

    The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem. PMID:26747795

  18. A Markov-Chain Monte-Carlo Based Method for Flaw Detection in Beams

    SciTech Connect

    Glaser, R E; Lee, C L; Nitao, J J; Hickling, T L; Hanley, W G

    2006-09-28

    A Bayesian inference methodology using a Markov Chain Monte Carlo (MCMC) sampling procedure is presented for estimating the parameters of computational structural models. This methodology combines prior information, measured data, and forward models to produce a posterior distribution for the system parameters of structural models that is most consistent with all available data. The MCMC procedure is based upon a Metropolis-Hastings algorithm that is shown to function effectively with noisy data, incomplete data sets, and mismatched computational nodes/measurement points. A series of numerical test cases based upon a cantilever beam is presented. The results demonstrate that the algorithm is able to estimate model parameters utilizing experimental data for the nodal displacements resulting from specified forces.
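
    A minimal Metropolis-Hastings loop in the spirit of this methodology, inferring a single stiffness-like parameter of a toy cantilever model from noisy displacement data; the forward model, prior and noise level are illustrative stand-ins for the structural models used in the report.

        # Minimal random-walk Metropolis-Hastings for a toy beam parameter.
        import numpy as np

        rng = np.random.default_rng(0)

        def tip_displacement(k, force=1.0):
            return force / k                    # toy forward model

        k_true, sigma = 2.0, 0.02
        data = tip_displacement(k_true) + sigma * rng.standard_normal(20)

        def log_post(k):
            if k <= 0:
                return -np.inf                  # positivity prior
            resid = data - tip_displacement(k)
            return -0.5 * np.sum(resid**2) / sigma**2

        k, chain = 1.0, []
        for _ in range(20_000):
            k_prop = k + 0.05 * rng.standard_normal()   # random-walk proposal
            if np.log(rng.random()) < log_post(k_prop) - log_post(k):
                k = k_prop                              # accept
            chain.append(k)

        burned = np.array(chain[5_000:])                # discard burn-in
        print(f"posterior mean k = {burned.mean():.3f} +/- {burned.std():.3f}")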

  19. Integrated layout based Monte-Carlo simulation for design arc optimization

    NASA Astrophysics Data System (ADS)

    Shao, Dongbing; Clevenger, Larry; Zhuang, Lei; Liebmann, Lars; Wong, Robert; Culp, James

    2016-03-01

    Design rules are created by considering a wafer fail mechanism together with the relevant design levels under various design cases, and the rule values are set to cover the worst-case scenario. Because of this simplification and generalization, design rules hinder, rather than help, dense device scaling. As an example, SRAM designs always need extensive ground rule waivers. Furthermore, dense design also often involves a "design arc", a collection of design rules whose sum equals the critical pitch defined by the technology. In a design arc, a single rule change can lead to a chain reaction of other rule violations. In this talk we present a methodology using Layout Based Monte-Carlo Simulation (LBMCS) with multiple integrated ground rule checks. We apply this methodology to the SRAM word line contact, and the result is a layout that has balanced wafer fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div, Semiconductor Research and Development Center, Hopewell Junction, NY 12533

  20. Monte Carlo-based dose calculation engine for minibeam radiation therapy.

    PubMed

    Martínez-Rovira, I; Sempau, J; Prezado, Y

    2014-02-01

    Minibeam radiation therapy (MBRT) is an innovative radiotherapy approach based on the well-established tissue-sparing effect of arrays of quasi-parallel micrometre-sized beams. In order to guide the preclinical trials in progress at the European Synchrotron Radiation Facility (ESRF), a Monte Carlo-based dose calculation engine has been developed and successfully benchmarked against experimental data in anthropomorphic phantoms. Additionally, a realistic example of a treatment plan is presented. Despite the micron scale of the voxels used to tally dose distributions in MBRT, the combination of several efficiency optimisation methods made it possible to achieve acceptable computation times for clinical settings (approximately 2 h). The calculation engine can easily be adapted with little or no programming effort to other synchrotron sources or to dose calculations in the presence of contrast agents. PMID:23597423

  1. Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoyao; Hall, Randall W.; Löffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana

    2016-01-01

    The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.

  2. GGEMS-Brachy: GPU GEant4-based Monte Carlo simulation for brachytherapy applications.

    PubMed

    Lemaréchal, Yannick; Bert, Julien; Falconnet, Claire; Després, Philippe; Valeri, Antoine; Schick, Ulrike; Pradier, Olivier; Garcia, Marie-Paule; Boussion, Nicolas; Visvikis, Dimitris

    2015-07-01

    In brachytherapy, plans are routinely calculated using the AAPM TG43 formalism, which considers the patient as a simple water object. Accurate modeling of the physical processes accounting for patient heterogeneity using Monte Carlo simulation (MCS) methods is currently too time-consuming and computationally demanding to be used routinely. In this work we implemented and evaluated an accurate and fast MCS on Graphics Processing Units (GPU) for brachytherapy low dose rate (LDR) applications. A previously proposed Geant4 based MCS framework implemented on GPU (GGEMS) was extended to include a hybrid GPU navigator, allowing navigation within voxelized patient specific images and analytically modeled (125)I seeds used in LDR brachytherapy. In addition, dose scoring based on a track length estimator including uncertainty calculations was incorporated. The implemented GGEMS-brachy platform was validated using a comparison with Geant4 simulations and reference datasets. Finally, a comparative dosimetry study based on the current clinical standard (TG43) and the proposed platform was performed on twelve prostate cancer patients undergoing LDR brachytherapy. Considering patient 3D CT volumes of 400 × 250 × 65 voxels and an average of 58 implanted seeds, the mean patient dosimetry study run time for a 2% dose uncertainty was 9.35 s (≈500 ms per 10(6) simulated particles) and 2.5 s when using one and four GPUs, respectively. The performance of the proposed GGEMS-brachy platform makes it possible to envisage Monte Carlo simulation based dosimetry studies in brachytherapy that are compatible with clinical practice. Although the proposed platform was evaluated for prostate cancer, it is equally applicable to other LDR brachytherapy clinical applications. Future extensions will allow its application in high dose rate brachytherapy applications. PMID:26061230

  3. GGEMS-Brachy: GPU GEant4-based Monte Carlo simulation for brachytherapy applications

    NASA Astrophysics Data System (ADS)

    Lemaréchal, Yannick; Bert, Julien; Falconnet, Claire; Després, Philippe; Valeri, Antoine; Schick, Ulrike; Pradier, Olivier; Garcia, Marie-Paule; Boussion, Nicolas; Visvikis, Dimitris

    2015-07-01

    In brachytherapy, plans are routinely calculated using the AAPM TG43 formalism which considers the patient as a simple water object. An accurate modeling of the physical processes considering patient heterogeneity using Monte Carlo simulation (MCS) methods is currently too time-consuming and computationally demanding to be routinely used. In this work we implemented and evaluated an accurate and fast MCS on Graphics Processing Units (GPU) for brachytherapy low dose rate (LDR) applications. A previously proposed Geant4 based MCS framework implemented on GPU (GGEMS) was extended to include a hybrid GPU navigator, allowing navigation within voxelized patient specific images and analytically modeled 125I seeds used in LDR brachytherapy. In addition, dose scoring based on a track length estimator including uncertainty calculations was incorporated. The implemented GGEMS-brachy platform was validated using a comparison with Geant4 simulations and reference datasets. Finally, a comparative dosimetry study based on the current clinical standard (TG43) and the proposed platform was performed on twelve prostate cancer patients undergoing LDR brachytherapy. Considering patient 3D CT volumes of 400 × 250 × 65 voxels and an average of 58 implanted seeds, the mean patient dosimetry study run time for a 2% dose uncertainty was 9.35 s (≈500 ms per 10⁶ simulated particles) and 2.5 s when using one and four GPUs, respectively. The performance of the proposed GGEMS-brachy platform allows envisaging the use of Monte Carlo simulation based dosimetry studies in brachytherapy compatible with clinical practice. Although the proposed platform was evaluated for prostate cancer, it is equally applicable to other LDR brachytherapy clinical applications. Future extensions will allow its application in high dose rate brachytherapy applications.

  4. Monte Carlo simulation of non-invasive glucose measurement based on FMCW LIDAR

    NASA Astrophysics Data System (ADS)

    Xiong, Bing; Wei, Wenxiong; Liu, Nan; He, Jian-Jun

    2010-11-01

    Continuous non-invasive glucose monitoring is a powerful tool for the treatment and management of diabetes. A glucose measurement method based on frequency modulated continuous wave (FMCW) LIDAR technology, with the potential advantage of miniaturizability with no moving parts, is proposed and investigated. The system mainly consists of an integrated near-infrared tunable semiconductor laser and a detector, using heterodyne technology to convert the signal from the time domain to the frequency domain. To investigate the feasibility of the method, Monte Carlo simulations have been performed on tissue phantoms with optical parameters similar to those of human interstitial fluid. The simulations showed that the sensitivity of the FMCW LIDAR system to glucose concentration can reach 0.2 mM. Our analysis suggests that the FMCW LIDAR technique has good potential for noninvasive blood glucose monitoring.
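
    As a rough illustration of the photon-transport core of such simulations (a generic sketch with placeholder optical coefficients, not the authors' code), the following Python fragment tallies the path length of photons undergoing exponential free flights in an infinite turbid medium:

        import math
        import random

        def photon_path_length(mu_s=10.0, mu_a=1.0, rng=random):
            """Total path travelled by one photon before absorption in an
            infinite turbid medium (analog Monte Carlo). Because only the
            path length is tallied, scattering directions need not be
            tracked. mu_s, mu_a are placeholder coefficients in 1/mm."""
            mu_t = mu_s + mu_a
            length = 0.0
            while True:
                # exponential free path between interactions, mean 1/mu_t
                length += -math.log(1.0 - rng.random()) / mu_t
                if rng.random() < mu_a / mu_t:   # interaction is an absorption
                    return length

        random.seed(1)
        n = 20_000
        mean = sum(photon_path_length() for _ in range(n)) / n
        print("mean path before absorption (mm):", mean)  # analytic value: 1/mu_a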

  5. Bohm trajectories for the Monte Carlo simulation of quantum-based devices

    NASA Astrophysics Data System (ADS)

    Oriols, X.; García-García, J. J.; Martín, F.; Suñé, J.; González, T.; Mateos, J.; Pardo, D.

    1998-02-01

    A generalization of the classical ensemble Monte Carlo (MC) device simulation technique is proposed to simultaneously deal with quantum-mechanical phase-coherence effects and scattering interactions in quantum-based devices. The proposed method restricts the quantum treatment of transport to the regions of the device where the potential profile changes significantly over distances of the order of the de Broglie wavelength of the carriers (the quantum window). Bohm trajectories associated with time-dependent Gaussian wave packets are used to simulate the electron transport in the quantum window. Outside this window, the classical ensemble MC simulation technique is used. Classical and quantum trajectories are smoothly matched at the boundaries of the quantum window according to a criterion of total-energy conservation. A self-consistent one-dimensional simulator for resonant tunneling diodes has been developed to demonstrate the feasibility of our proposal.

  6. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational burden and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization, thus achieving better correlation between simulation and test. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
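
    The response-surface shortcut can be sketched as follows (illustrative only: the polynomial coefficients and input distributions are hypothetical stand-ins for a fitted RSM). Monte Carlo samples are pushed through the cheap surrogate instead of the full finite-element model:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical response surface y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1**2.
        # In practice these coefficients are fitted to a design-of-experiments
        # run of the full model.
        beta = np.array([1.0, 0.5, -0.3, 0.1, 0.05])

        def rsm(x1, x2):
            return beta @ np.array([np.ones_like(x1), x1, x2, x1 * x2, x1**2])

        # Monte Carlo propagation: cheap random sampling of the surrogate.
        x1 = rng.normal(0.0, 0.1, 100_000)
        x2 = rng.normal(1.0, 0.2, 100_000)
        y = rsm(x1, x2)
        print("mean:", y.mean(), "std:", y.std())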

  7. Noninvasive spectral imaging of skin chromophores based on multiple regression analysis aided by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Wiswadarma, Aditya; Hase, Yota; Tanaka, Noriyuki; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa

    2011-08-01

    In order to visualize melanin and blood concentrations and oxygen saturation in human skin tissue, a simple imaging technique based on multispectral diffuse reflectance images acquired at six wavelengths (500, 520, 540, 560, 580 and 600 nm) was developed. The technique utilizes multiple regression analysis aided by Monte Carlo simulation for diffuse reflectance spectra. Using the absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are deduced numerically in advance, while oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments on the skin of the human hand during upper limb occlusion and on the inner forearm exposed to UV irradiation demonstrated the ability of the method to evaluate physiological reactions of human skin tissue.

  8. Auxiliary-field based trial wave functions in quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Chen; Rubenstein, Brenda; Morales, Miguel

    We propose a simple scheme for generating correlated multi-determinant trial wave functions for quantum Monte Carlo algorithms. The method is based on the Hubbard-Stratonovich transformation which decouples a two-body Jastrow-type correlator into one-body projectors coupled to auxiliary fields. We apply the technique to generate stochastic representations of the Gutzwiller wave function, and present benchmark results for the ground state energy of the Hubbard model in one dimension. Extensions of the proposed scheme to chemical systems will also be discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, 15-ERD-013.

  9. EUPDF: An Eulerian-Based Monte Carlo Probability Density Function (PDF) Solver. User's Manual

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1998-01-01

    EUPDF is an Eulerian-based Monte Carlo PDF solver developed for application with sprays, combustion, parallel computing, and unstructured grids. It is designed to be massively parallel and can easily be coupled with any existing gas-phase flow and spray solvers. The solver accommodates the use of an unstructured mesh with mixed elements of triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with the coding required to couple the PDF code to any given flow code and a basic understanding of the EUPDF code structure as well as the models involved in the PDF formulation. The source code of EUPDF will be available with the release of the National Combustion Code (NCC) as a complete package.

  10. Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code

    NASA Astrophysics Data System (ADS)

    Merheb, C.; Petegnief, Y.; Talbot, J. N.

    2007-02-01

    within 9%. For a 410-665 keV energy window, the measured sensitivity for a centred point source was 1.53%, and the mouse and rat scatter fractions were 12.0% and 18.3%, respectively. The scattered photons produced outside the rat and mouse phantoms contributed 24% and 36% of the total simulated scattered coincidences. Simulated and measured single and prompt count rates agreed well for activities up to the electronic saturation at 110 MBq for the mouse and rat phantoms. Volumetric spatial resolution was 17.6 µL at the centre of the FOV, with differences of less than 6% between experimental and simulated spatial resolution values. The comprehensive evaluation of the Monte Carlo modelling of the Mosaic™ system demonstrates that the GATE package is adequately versatile and appropriate to accurately describe the response of an Anger logic based animal PET system.

  11. Modeling of an industrial environment: external dose calculations based on Monte Carlo simulations of photon transport.

    PubMed

    Kis, Zoltán; Eged, Katalin; Voigt, Gabriele; Meckbach, Reinhard; Müller, Heinz

    2004-02-01

    External gamma exposures from radionuclides deposited on surfaces usually result in the major contribution to the total dose to the public living in urban-industrial environments. The aim of the paper is to give an example of a calculation of the collective and averted collective dose due to the contamination and decontamination of deposition surfaces in a complex environment, based on the results of Monte Carlo simulations. The shielding effects of the structures in complex and realistic industrial environments (where productive and/or commercial activity is carried out) were computed by use of the Monte Carlo method. Several types of deposition areas (walls, roofs, windows, streets, lawn) were considered. Moreover, this paper gives a summary of the time dependence of the source strengths relative to a reference surface and a short overview of the mechanical and chemical intervention techniques which can be applied in this area. An exposure scenario was designed based on a survey of average German and Hungarian supermarkets. In the first part of the paper the air kermas per photon per unit area due to each specific deposition area contaminated by 137Cs were determined at several arbitrary locations in the whole environment, relative to a reference value of 8.39 × 10⁻⁴ pGy per γ m⁻². The calculations make it possible to assess the contribution of each specific deposition area to the collective dose separately. According to the current results, the roof and the paved area contribute most of the total dose (approximately 92%) in the first year, taking into account the relative contamination of the deposition areas. When integrating over 10 or 50 y, these two surfaces remain the most important contributors, but the ratio is increasingly shifted in favor of the roof. The decontamination of the roof and the paved area results in about 80-90% of the total averted collective dose in each calculated time period (1, 10, 50 y

  12. Multilevel sequential Monte Carlo samplers

    DOE PAGES Beta

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
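
    The telescoping identity at the heart of MLMC can be illustrated with a toy Python estimator (not the paper's SMC construction; the level-l "solver" P below is a hypothetical function whose bias decays like 2^-l):

        import numpy as np

        rng = np.random.default_rng(0)

        def P(level, x):
            """Stand-in for a quantity of interest computed from a PDE solve
            at discretisation level `level`; bias decays like h_l = 2^-l."""
            h = 2.0 ** (-level)
            return x**2 + h * np.sin(x)

        def mlmc(L=6, n0=200_000):
            """Telescoping MLMC estimate: E[P_L] = sum_l E[P_l - P_{l-1}]."""
            est = 0.0
            for l in range(L + 1):
                n = max(n0 // 4**l, 100)    # fewer samples on costly fine levels
                x = rng.standard_normal(n)  # same samples couple the two levels
                corr = P(l, x) - (P(l - 1, x) if l > 0 else 0.0)
                est += corr.mean()
            return est

        print(mlmc())   # close to E[x^2] = 1 for x ~ N(0, 1)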

  13. Synchronous Parallel Kinetic Monte Carlo

    SciTech Connect

    Martínez, E; Marian, J; Kalos, M H

    2006-12-14

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
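
    For orientation, a standard serial kMC loop using the residence-time algorithm, of which the paper's method is an exact parallel generalization, can be sketched as follows (the rates are arbitrary placeholders):

        import math
        import random

        def kmc(rates, steps=10, seed=0):
            """Serial residence-time kinetic Monte Carlo.

            rates: rate constants of the possible events (held constant here;
            in a real model they depend on the current configuration)."""
            rng = random.Random(seed)
            t = 0.0
            total = sum(rates)
            for _ in range(steps):
                # advance the clock by an exponentially distributed waiting time
                t += -math.log(1.0 - rng.random()) / total
                # pick an event with probability proportional to its rate
                r, acc = rng.random() * total, 0.0
                for i, k in enumerate(rates):
                    acc += k
                    if r <= acc:
                        print(f"t = {t:.4f}: event {i} fires")
                        break
            return t

        kmc([1.0, 0.5, 0.1])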

  14. Monte Carlo calculations of nuclei

    SciTech Connect

    Pieper, S.C.

    1997-10-01

    Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green`s function Monte Carlo techniques that have been developed to address this complication, and presents a few results.

  15. A collision history-based approach to Sensitivity/Perturbation calculations in the continuous energy Monte Carlo code SERPENT

    SciTech Connect

    Giuseppe Palmiotti

    2015-05-01

    In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbation on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.

  16. Study on in situ calibration for neutron flux monitor in the Large Helical Device based on Monte Carlo calculations

    SciTech Connect

    Nakano, Y.; Yamazaki, A.; Watanabe, K.; Uritani, A.; Ogawa, K.; Isobe, M.

    2014-11-15

    Neutron monitoring is important for managing the safety of fusion experiment facilities because neutrons are generated in fusion reactions. Monte Carlo simulations play an important role in evaluating the influence of neutron scattering from various structures and in correcting differences between deuterium plasma experiments and in situ calibration experiments. We evaluated these influences based on differences between the two experiments at the Large Helical Device using the Monte Carlo simulation code MCNP5. The difference between the two experiments in absolute detection efficiency is estimated to be largest for the fission chamber between O-ports. We additionally evaluated correction coefficients for some neutron monitors.

  17. Acceleration of Monte Carlo Criticality Calculations Using Deterministic-Based Starting Sources

    SciTech Connect

    Ibrahim, A.; Peplow, Douglas E.; Wagner, John C; Mosher, Scott W; Evans, Thomas M

    2012-01-01

    A new automatic approach that uses approximate deterministic solutions for providing the starting fission source for Monte Carlo eigenvalue calculations was evaluated in this analysis. By accelerating the Monte Carlo source convergence and decreasing the number of cycles that have to be skipped before tally estimation, this approach was found to increase the efficiency of the overall simulation, even with the inclusion of the extra computational time required by the deterministic calculation. This approach was also found to increase the reliability of Monte Carlo criticality calculations of loosely coupled systems, because the use of a better starting source reduces the likelihood of producing an undersampled k_eff due to inadequate source convergence. The efficiency improvement was demonstrated using two of the standard test problems devised by the OECD/NEA Expert Group on Source Convergence in Criticality-Safety Analysis to measure source convergence in Monte Carlo criticality calculations. For a fixed uncertainty objective, this approach increased the efficiency of the overall simulation by factors between 1.2 and 3, depending on the difficulty of the source convergence in these problems. The reliability improvement was demonstrated in a modified version of the 'k_eff of the world' problem that was specifically designed to demonstrate the limitations of current Monte Carlo power iteration techniques. For this problem, the probability of obtaining a clearly undersampled k_eff decreased from 5% with a uniform starting source to zero with a deterministic starting source when batch sizes of more than 15,000 neutrons/cycle were used.

  18. Monte Carlo dose computation for IMRT optimization*

    NASA Astrophysics Data System (ADS)

    Laub, W.; Alber, M.; Birkner, M.; Nüsslin, F.

    2000-07-01

    A method which combines the accuracy of Monte Carlo dose calculation with a finite size pencil-beam based intensity modulation optimization is presented. The pencil-beam algorithm is employed to compute the fluence element updates for a converging sequence of Monte Carlo dose distributions. The combination is shown to improve results over the pencil-beam based optimization in a lung tumour case and a head and neck case. Inhomogeneity effects like a broader penumbra and dose build-up regions can be compensated for by intensity modulation.

  19. Monte-Carlo characterization of a miniature source of characteristic X rays based on an implantable needle

    SciTech Connect

    Safronov, V. V.; Sozontov, E. A.; Gutman, G.

    2013-05-15

    A new concept of an X-ray brachytherapy setup based on the use of fluorescence from a secondary target placed at the tip of an implantable needle is proposed. Spatial dose-rate distributions for four combinations of secondary target materials and shapes are calculated by the Monte-Carlo method.

  20. QUANTIFYING AGGREGATE CHLORPYRIFOS EXPOSURE AND DOSE TO CHILDREN USING A PHYSICALLY-BASED TWO-STAGE MONTE CARLO PROBABILISTIC MODEL

    EPA Science Inventory

    To help address the Food Quality Protection Act of 1996, a physically-based, two-stage Monte Carlo probabilistic model has been developed to quantify and analyze aggregate exposure and dose to pesticides via multiple routes and pathways. To illustrate model capabilities and ide...

  1. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
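
    A minimal example of the idea (an editorial sketch, not drawn from the record): estimating the sampling distribution of the median, which is awkward to derive analytically, from repeated random samples of a known population:

        import numpy as np

        rng = np.random.default_rng(42)

        # Monte Carlo study of a statistic whose exact sampling distribution
        # is awkward analytically: the median of n = 25 exponential draws.
        n, reps = 25, 100_000
        medians = np.median(rng.exponential(scale=1.0, size=(reps, n)), axis=1)
        print("Monte Carlo mean of the median:", medians.mean())
        print("Monte Carlo s.e. of the median:", medians.std(ddof=1))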

  2. A Monte Carlo exploration of threefold base geometries for 4d F-theory vacua

    NASA Astrophysics Data System (ADS)

    Taylor, Washington; Wang, Yi-Nan

    2016-01-01

    We use Monte Carlo methods to explore the set of toric threefold bases that support elliptic Calabi-Yau fourfolds for F-theory compactifications to four dimensions, and study the distribution of geometrically non-Higgsable gauge groups, matter, and quiver structure. We estimate the number of distinct threefold bases in the connected set studied to be ~10⁴⁸. The distribution of bases peaks around h^{1,1} ~ 82. All bases encountered after "thermalization" have some geometric non-Higgsable structure. We find that the number of non-Higgsable gauge group factors grows roughly linearly in h^{1,1} of the threefold base. Typical bases have ~6 isolated gauge factors as well as several larger connected clusters of gauge factors with jointly charged matter. Approximately 76% of the bases sampled contain connected two-factor gauge group products of the form SU(3) × SU(2), which may act as the non-Abelian part of the standard model gauge group. SU(3) × SU(2) is the third most common connected two-factor product group, following SU(2) × SU(2) and G_2 × SU(2), which arise more frequently.

  3. Quantum Monte Carlo calculations of light nuclei

    SciTech Connect

    Pieper, S.C.

    1998-12-01

    Quantum Monte Carlo calculations using realistic two- and three-nucleon interactions are presented for nuclei with up to eight nucleons. We have computed the ground and a few excited states of all such nuclei with Green's function Monte Carlo (GFMC) and all of the experimentally known excited states using variational Monte Carlo (VMC). The GFMC calculations show that for a given Hamiltonian, the VMC calculations of excitation spectra are reliable, but the VMC ground-state energies are significantly above the exact values. We find that the Hamiltonian we are using (which was developed based on 3H, 4He, and nuclear matter calculations) underpredicts the binding energy of p-shell nuclei. However, our results for excitation spectra are very good, and one can see both shell-model and collective spectra resulting from fundamental many-nucleon calculations. Possible improvements in the three-nucleon potential are also discussed.

  4. Quantum Monte Carlo calculations of light nuclei

    SciTech Connect

    Pieper, Steven C.

    1998-12-21

    Quantum Monte Carlo calculations using realistic two- and three-nucleon interactions are presented for nuclei with up to eight nucleons. We have computed the ground and a few excited states of all such nuclei with Green's function Monte Carlo (GFMC) and all of the experimentally known excited states using variational Monte Carlo (VMC). The GFMC calculations show that for a given Hamiltonian, the VMC calculations of excitation spectra are reliable, but the VMC ground-state energies are significantly above the exact values. We find that the Hamiltonian we are using (which was developed based on 3H, 4He, and nuclear matter calculations) underpredicts the binding energy of p-shell nuclei. However, our results for excitation spectra are very good, and one can see both shell-model and collective spectra resulting from fundamental many-nucleon calculations. Possible improvements in the three-nucleon potential are also discussed.

  5. Quantum Monte Carlo calculations of light nuclei.

    SciTech Connect

    Pieper, S. C.

    1998-08-25

    Quantum Monte Carlo calculations using realistic two- and three-nucleon interactions are presented for nuclei with up to eight nucleons. We have computed the ground and a few excited states of all such nuclei with Green's function Monte Carlo (GFMC) and all of the experimentally known excited states using variational Monte Carlo (VMC). The GFMC calculations show that for a given Hamiltonian, the VMC calculations of excitation spectra are reliable, but the VMC ground-state energies are significantly above the exact values. We find that the Hamiltonian we are using (which was developed based on 3H, 4He, and nuclear matter calculations) underpredicts the binding energy of p-shell nuclei. However, our results for excitation spectra are very good, and one can see both shell-model and collective spectra resulting from fundamental many-nucleon calculations. Possible improvements in the three-nucleon potential are also discussed.

  6. Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.

    1995-12-31

    A library of Monte Carlo subroutines has been developed for the design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.

  7. Geodesic Monte Carlo on Embedded Manifolds.

    PubMed

    Byrne, Simon; Girolami, Mark

    2013-12-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton-Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
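
    A minimal sketch of a geodesic move on the unit sphere, the basic building block of such proposal mechanisms (a plain geodesic random walk for illustration, not the paper's full Hamiltonian scheme):

        import numpy as np

        rng = np.random.default_rng(0)

        def geodesic_step(x, v, t):
            """Follow the great-circle geodesic on the unit sphere from x
            with initial tangent velocity v for time t."""
            speed = np.linalg.norm(v)
            return x * np.cos(speed * t) + (v / speed) * np.sin(speed * t)

        def tangent_noise(x):
            """Sample a velocity in the tangent space at x by projecting
            out the component normal to the sphere."""
            z = rng.normal(size=x.shape)
            return z - np.dot(z, x) * x

        # Random walk on S^2 built from geodesic moves; each iterate stays
        # on the sphere by construction, with no reprojection error.
        x = np.array([0.0, 0.0, 1.0])
        for _ in range(5):
            x = geodesic_step(x, tangent_noise(x), t=0.3)
            print(x, "norm:", np.linalg.norm(x))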

  8. Geodesic Monte Carlo on Embedded Manifolds

    PubMed Central

    Byrne, Simon; Girolami, Mark

    2013-01-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024

  9. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.

    PubMed

    Renner, F; Wulff, J; Kapsch, R-P; Zink, K

    2015-10-01

    There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as

  10. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark

    NASA Astrophysics Data System (ADS)

    Renner, F.; Wulff, J.; Kapsch, R.-P.; Zink, K.

    2015-10-01

    There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as

  11. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  12. Shell model Monte Carlo methods

    SciTech Connect

    Koonin, S.E.; Dean, D.J.

    1996-10-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

  13. Convolution-Based Forced Detection Monte Carlo Simulation Incorporating Septal Penetration Modeling

    PubMed Central

    Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.

    2010-01-01

    In SPECT imaging, photon transport effects such as scatter, attenuation and septal penetration can negatively affect the quality of the reconstructed image and the accuracy of quantitation estimation. As such, it is useful to model these effects as carefully as possible during the image reconstruction process. Many of these effects can be included in Monte Carlo (MC) based image reconstruction using convolution-based forced detection (CFD). With CFD Monte Carlo (CFD-MC), often only the geometric response of the collimator is modeled, thereby making the assumption that the collimator materials are thick enough to completely absorb photons. However, in order to retain high collimator sensitivity and high spatial resolution, it is required that the septa be as thin as possible, thus resulting in a significant amount of septal penetration for high-energy radionuclides. A method for modeling the effects of both collimator septal penetration and geometric response using ray tracing (RT) techniques has been developed and included in a CFD-MC program. Two look-up tables are pre-calculated based on the specific collimator parameters and radionuclides, and subsequently incorporated into the SIMIND MC program. One table consists of the cumulative septal thickness between any point on the collimator and the center location of the collimator. The other table presents the resultant collimator response for a point source at different distances from the collimator and for various energies. A series of RT simulations have been compared to experimental data for different radionuclides and collimators. Results of the RT technique match experimental data of the collimator response very well, producing correlation coefficients higher than 0.995. Reasonable values of the parameters in the lookup table and computation speed are discussed in order to achieve high accuracy while using minimal storage space for the look-up tables. In order to achieve noise-free projection images from MC, it

  14. A Monte Carlo based three-dimensional dose reconstruction method derived from portal dose images

    SciTech Connect

    Elmpt, Wouter J. C. van; Nijsten, Sebastiaan M. J. J. G.; Schiffeleers, Robert F. H.; Dekker, Andre L. A. J.; Mijnheer, Ben J.; Lambin, Philippe; Minken, Andre W. H.

    2006-07-15

    The verification of intensity-modulated radiation therapy (IMRT) is necessary for adequate quality control of the treatment. Pretreatment verification may trace the possible differences between the planned dose and the actual dose delivered to the patient. To estimate the impact of differences between planned and delivered photon beams, a three-dimensional (3-D) dose verification method has been developed that reconstructs the dose inside a phantom. The pretreatment procedure is based on portal dose images of the separate beams, measured with an electronic portal imaging device (EPID) without the phantom in the beam, and on a 3-D dose calculation engine based on Monte Carlo calculation. Measured gray-scale portal images are converted into portal dose images. From these images the lateral scattered dose in the EPID is subtracted and the image is converted into energy fluence. Subsequently, a phase-space distribution is sampled from the energy fluence and a 3-D dose calculation in a phantom is started based on a Monte Carlo dose engine. The reconstruction model is compared to film and ionization chamber measurements for various field sizes. The reconstruction algorithm is also tested for an IMRT plan using 10 MV photons delivered to a phantom and measured using films at several depths in the phantom. Depth dose curves for both 6 and 10 MV photons are reconstructed with a maximum error generally smaller than 1% at depths larger than the buildup region, and smaller than 2% for the off-axis profiles, excluding the penumbra region. The absolute dose values are reconstructed to within 1.5% for square field sizes ranging from 5 to 20 cm width. For the IMRT plan, the dose was reconstructed and compared to the dose distribution with film using the gamma evaluation, with a 3% and 3 mm criterion. 99% of the pixels inside the irradiated field had a gamma value smaller than one. The absolute dose at the isocenter agreed to within 1% with the dose measured with an ionization

  15. Tetrahedral-mesh-based computational human phantom for fast Monte Carlo dose calculations

    NASA Astrophysics Data System (ADS)

    Yeom, Yeon Soo; Jeong, Jong Hwi; Han, Min Cheol; Kim, Chan Hyeong

    2014-06-01

    Although polygonal-surface computational human phantoms can address several critical limitations of conventional voxel phantoms, their Monte Carlo simulation speeds are much slower than those of voxel phantoms. In this study, we sought to overcome this problem by developing a new type of computational human phantom, a tetrahedral mesh phantom, by converting a polygonal surface phantom to a tetrahedral mesh geometry. The constructed phantom was implemented in the Geant4 Monte Carlo code to calculate organ doses as well as to measure computation speed; the values were then compared with those for the original polygonal surface phantom. It was found that using the tetrahedral mesh phantom significantly improved the computation speed, by factors of between 150 and 832, for all of the particles and simulated energies other than low-energy neutrons (0.01 and 1 MeV), for which the improvement was less significant (17.2 and 8.8 times, respectively).

  16. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation

    NASA Astrophysics Data System (ADS)

    Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe

    2015-08-01

    Monte Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on a NVIDIA Tesla C2050. Since CPUs can work on several hundreds of GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high resolution clinical plans can be calculated.
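
    The many-core strategy can be illustrated with a generic multi-process Monte Carlo skeleton in Python (a toy pi estimator, unrelated to the DPM physics; each worker runs an independent random stream):

        import os
        from concurrent.futures import ProcessPoolExecutor
        from random import Random

        def count_hits(args):
            """Monte Carlo batch: points falling inside the unit quarter-circle."""
            seed, n = args
            rng = Random(seed)
            return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                       for _ in range(n))

        if __name__ == "__main__":
            workers = os.cpu_count()
            n_per_worker = 1_000_000
            jobs = [(seed, n_per_worker) for seed in range(workers)]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                hits = sum(pool.map(count_hits, jobs))
            print("pi ~", 4.0 * hits / (workers * n_per_worker))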

  17. A new method for RGB to CIELAB color space transformation based on Markov chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Chen, Yajun; Liu, Ding; Liang, Junli

    2013-10-01

    During printing quality inspection, the assessment of color error is an important task. However, the RGB color space is device-dependent; RGB colors captured by a CCD camera must usually be transformed into the CIELAB color space, which is perceptually uniform and device-independent. To cope with this problem, a Markov chain Monte Carlo (MCMC) based algorithm for the RGB to CIELAB color space transformation is proposed in this paper. Firstly, modeling color targets and testing color targets are established, used respectively in the modeling and performance testing processes. Secondly, we derive a Bayesian model for estimating the coefficients of a polynomial that describes the relation between the RGB and CIELAB color spaces. Thirdly, a Markov chain is set up based on the Gibbs sampling algorithm (one of the MCMC algorithms) to estimate the coefficients of the polynomial. Finally, the color difference of the testing color targets is computed to evaluate the performance of the proposed method. The experimental results showed that the nonlinear polynomial regression based on the MCMC algorithm is effective; its performance is similar to that of the least-squares approach, and it can accurately model the RGB to CIELAB color space conversion and guarantee the color error evaluation for a printing quality inspection system.
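
    The estimation step can be sketched generically as follows (synthetic one-channel data and a random-walk Metropolis sampler stand in for the paper's full three-channel data and Gibbs sampler):

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic stand-in for color-target data: y plays the role of one
        # CIELAB channel, x a normalised RGB channel.
        x = rng.uniform(0, 1, 60)
        true_beta = np.array([2.0, 30.0, -12.0])          # 1, x, x^2 terms
        X = np.vstack([np.ones_like(x), x, x**2]).T
        y = X @ true_beta + rng.normal(0, 0.5, x.size)

        def log_post(beta, sigma=0.5):
            resid = y - X @ beta
            return -0.5 * np.sum(resid**2) / sigma**2     # flat prior on beta

        # Random-walk Metropolis; the paper uses Gibbs sampling, another
        # member of the MCMC family, but Metropolis keeps the sketch short.
        beta, lp, samples = np.zeros(3), log_post(np.zeros(3)), []
        for _ in range(20_000):
            prop = beta + rng.normal(0, 0.05, 3)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                beta, lp = prop, lp_prop
            samples.append(beta)
        print("posterior mean:", np.mean(samples[5000:], axis=0))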

  18. IR imaging simulation and analysis for aeroengine exhaust system based on reverse Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Chen, Shiguo; Chen, Lihai; Mo, Dongla; Shi, Jingcheng

    2014-11-01

    The IR radiation characteristics of an aeroengine are an important basis for the IR stealth design and anti-stealth detection of aircraft. With the development of IR imaging sensor technology, the importance of aircraft IR stealth increases. An effort is presented to explore target IR radiation imaging simulation based on the Reverse Monte Carlo Method (RMCM), combined with commercial CFD software. The flow and IR radiation characteristics of an aeroengine exhaust system are investigated by developing a full-size geometry model based on the actual parameters, using a flow-IR integrated structured mesh, taking the engine performance parameters as the inlet boundary conditions of the mixer section, and constructing a numerical simulation model of the IR radiation characteristics of the engine exhaust system based on RMCM. With the above models, the IR radiation characteristics of the aeroengine exhaust system are given, focusing on IR spectral radiance imaging in the typical detection band at an azimuth of 20°. The results show that: (1) at small azimuth angles, the IR radiation comes mainly from the center cone among all hot parts; near an azimuth of 15°, the mixer makes the biggest radiation contribution, while the center cone, turbine and flame stabilizer contribute comparably; (2) the main radiation components and their spatial distribution differ across the spectrum, with CO2 absorbing and emitting strongly at 4.18, 4.33 and 4.45 µm, and H2O at 3.0 and 5.0 µm.

  19. Extending canonical Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Velazquez, L.; Curilef, S.

    2010-02-01

    In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods on the basis of the consideration of the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities C < 0. The resulting framework appears to be a suitable generalization of the methodology associated with the so-called dynamical ensemble, which is applied to the extension of two well-known Monte Carlo methods: the Metropolis importance sampling and the Swendsen-Wang cluster algorithm. These Monte Carlo algorithms are employed to study the anomalous thermodynamic behavior of the Potts models with many spin states q defined on a d-dimensional hypercubic lattice with periodic boundary conditions, and successfully reduce the exponential divergence of the decorrelation time τ as the system size N increases to a weak power-law divergence τ ∝ N^α with α ≈ 0.2 for the particular case of the 2D ten-state Potts model.
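
    For reference, the standard Metropolis importance sampling that the paper extends can be sketched for the 2D q-state Potts model as follows (single-spin-flip dynamics at a fixed temperature; the anomalous-ensemble extension itself is not reproduced):

        import math
        import random

        def metropolis_potts(L=16, q=10, T=0.7, sweeps=200, seed=0):
            """Single-spin-flip Metropolis sampling of the 2D q-state Potts
            model on an L x L lattice with periodic boundaries."""
            rng = random.Random(seed)
            spin = [[rng.randrange(q) for _ in range(L)] for _ in range(L)]

            def bonds(i, j, s):
                # equal-spin nearest-neighbour bonds for state s at (i, j)
                return sum(s == spin[(i + di) % L][(j + dj) % L]
                           for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

            for _ in range(sweeps):
                for _ in range(L * L):
                    i, j = rng.randrange(L), rng.randrange(L)
                    new = rng.randrange(q)
                    # E = -sum of equal-spin bonds, so dE = old bonds - new bonds
                    dE = bonds(i, j, spin[i][j]) - bonds(i, j, new)
                    if dE <= 0 or rng.random() < math.exp(-dE / T):
                        spin[i][j] = new
            energy = -sum(bonds(i, j, spin[i][j])
                          for i in range(L) for j in range(L)) / 2
            return energy / (L * L)

        print("energy per site:", metropolis_potts())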

  20. Monte Carlo based treatment planning systems for Boron Neutron Capture Therapy in Petten, The Netherlands

    NASA Astrophysics Data System (ADS)

    Nievaart, V. A.; Daquino, G. G.; Moss, R. L.

    2007-06-01

    Boron Neutron Capture Therapy (BNCT) is a bimodal form of radiotherapy for the treatment of tumour lesions. Since the cancer cells in the treatment volume are targeted with 10B, a higher dose is given to these cancer cells due to the 10B(n,α)7Li reaction, in comparison with the surrounding healthy cells. In Petten (The Netherlands), at the High Flux Reactor, a specially tailored neutron beam has been designed and installed. Over 30 patients have been treated with BNCT in 2 clinical protocols: a phase I study for the treatment of glioblastoma multiforme and a phase II study on the treatment of malignant melanoma. Furthermore, activities concerning the extracorporeal treatment of liver metastases (from colorectal cancer) are in progress. The irradiation beam at the HFR contains both neutrons and gammas which, together with the complex geometries of both patient and beam set-up, demand very detailed treatment planning calculations. A well designed Treatment Planning System (TPS) should obey the following general scheme: (1) a pre-processing phase (CT and/or MRI scans to create the geometric solid model, cross-section files for neutrons and/or gammas); (2) calculations (3D radiation transport, estimation of neutron and gamma fluences, macroscopic and microscopic dose); (3) a post-processing phase (display of the results, iso-doses and -fluences). Treatment planning in BNCT is performed making use of Monte Carlo codes incorporated in a framework which also includes the pre- and post-processing phases. In particular, the glioblastoma multiforme protocol used BNCT_rtpe, while the melanoma metastases protocol uses NCTPlan. In addition, an ad hoc Positron Emission Tomography (PET) based treatment planning system (BDTPS) has been implemented in order to integrate the real macroscopic boron distribution obtained from PET scanning. BDTPS is patented and uses MCNP as the calculation engine. The precision obtained by the Monte Carlo based TPSs exploited at Petten

  1. Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo

    NASA Astrophysics Data System (ADS)

    Qin, Junsong; Liu, Bingyi; Niu, Dongxiao

    By analyzing the factors influencing the investment capacity of the power grid, an investment capacity analysis model is built with depreciation cost, sales price and sales quantity, net profit, financing, and the GDP of the secondary industry as variables. After carrying out a Kolmogorov-Smirnov test, the probability distribution of each influence factor is obtained. Finally, the uncertainty analysis results for grid investment capacity are obtained by Monte Carlo simulation.
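
    The Monte Carlo propagation step can be sketched as follows (all distributions and the relation between the factors are hypothetical placeholders, not fitted values from the study):

        import numpy as np

        rng = np.random.default_rng(7)
        N = 100_000

        # Hypothetical distributions for the influence factors; in practice
        # these are fitted to historical data and checked with a
        # Kolmogorov-Smirnov test.
        price     = rng.normal(0.50, 0.05, N)   # sales price
        quantity  = rng.normal(4000, 400, N)    # sales quantity
        costs     = rng.normal(1500, 150, N)    # depreciation and other costs
        financing = rng.normal(300, 60, N)

        # Toy relation: capacity = net cash available for investment.
        capacity = price * quantity - costs + financing

        print("mean capacity:", capacity.mean())
        print("5%-95% interval:", np.percentile(capacity, [5, 95]))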

  2. Development and validation of MCNPX-based Monte Carlo treatment plan verification system

    PubMed Central

    Jabbari, Iraj; Monadi, Shahram

    2015-01-01

    A Monte Carlo treatment plan verification (MCTPV) system was developed for clinical treatment plan verification (TPV), especially for conformal and intensity-modulated radiotherapy (IMRT) plans. In MCTPV, the MCNPX code was used for particle transport through the accelerator head and the patient body. MCTPV has an interface with the TiGRT planning system and reads the information needed for the Monte Carlo calculation, transferred in digital imaging and communications in medicine-radiation therapy (DICOM-RT) format. In MCTPV several methods were applied in order to reduce the simulation time. The relative dose distribution of a clinical prostate conformal plan calculated by MCTPV was compared with that of the TiGRT planning system. The results showed that the beam configurations and patient information were correctly implemented in the system. For quantitative evaluation of MCTPV, a two-dimensional (2D) diode array (MapCHECK2) and gamma index analysis were used. The gamma passing rate (3%/3 mm) of an IMRT plan was found to be 98.5% over all beams. Also, comparison of the measured and Monte Carlo calculated doses at several points inside an inhomogeneous phantom for 6- and 18-MV photon beams showed good agreement (within 1.5%). The accuracy and timing results of MCTPV showed that it could be used very efficiently for the additional assessment of complicated plans such as IMRT plans. PMID:26170554
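
    As an illustration of the gamma-index analysis used for such comparisons, a minimal one-dimensional version (global 3%/3 mm criterion; toy Gaussian profiles, not clinical data) might look like:

        import numpy as np

        def gamma_1d(dose_ref, dose_eval, x, dose_tol=0.03, dist_tol=3.0):
            """1D gamma index for dose profiles on grid x (mm); dose_tol is
            relative to the reference maximum (global normalisation)."""
            d_norm = dose_tol * dose_ref.max()
            gamma = np.empty_like(dose_ref)
            for i, (xi, di) in enumerate(zip(x, dose_ref)):
                dist2 = ((x - xi) / dist_tol) ** 2
                dd2 = ((dose_eval - di) / d_norm) ** 2
                gamma[i] = np.sqrt(np.min(dist2 + dd2))
            return gamma

        x = np.linspace(-50, 50, 201)
        ref = np.exp(-(x / 20.0) ** 2)          # toy reference profile
        ev = np.exp(-((x - 1.0) / 20.0) ** 2)   # evaluated profile, 1 mm shift
        g = gamma_1d(ref, ev, x)
        print("passing rate (gamma <= 1):", np.mean(g <= 1.0))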

  3. Monte Carlo-based dosimetry of head-and-neck patients treated with SIB-IMRT

    SciTech Connect

    Sakthi, Nirmal; Keall, Paul; Mihaylov, Ivaylo; Wu, Qiuwen; Wu, Yan; Williamson, Jeffrey F.; Schmidt-Ullrich, Rupert; Siebers, Jeffrey V. E-mail: jsiebers@vcu.edu

    2006-03-01

    Purpose: To evaluate the accuracy of previously reported superposition/convolution (SC) dosimetric results by comparing with Monte Carlo (MC) dose calculations for head-and-neck intensity-modulated radiation therapy (IMRT) patients treated with the simultaneous integrated boost technique. Methods and Materials: Thirty-one plans from 24 patients previously treated on a phase I/II head-and-neck squamous cell carcinoma simultaneous integrated boost IMRT protocol were used. Clinical dose distributions, computed with an SC algorithm, were recomputed using an EGS4-based MC algorithm. Phantom-based dosimetry quantified the fluence prediction accuracy of each algorithm. Dose-volume indices were used to compare patient dose distributions. Results and Discussion: The MC algorithm predicts flat-phantom measurements better than the SC algorithm. Average patient dose indices agreed within 2.5% of the local dose for targets; 5.0% for parotids; and 1.9% for cord and brainstem. However, only 1 of 31 plans agreed within 3% for all indices; 4 of 31 agreed within 5%. In terms of the prescription dose, 4 of 31 plans agreed within 3% for all indices, whereas 28 of 31 agreed within 5%. Conclusions: Average SC-computed doses agreed with MC results in the patient geometry; however deviations >5% were common. The fluence modulation prediction is likely the major source of the dose discrepancy. The observed dose deviations can impact dose escalation protocols, because they would result in shifting patients to higher dose levels.

  4. Monte Carlo-based dose calculation for 32P patch source for superficial brachytherapy applications

    PubMed Central

    Sahoo, Sridhar; Palani, Selvam T.; Saxena, S. K.; Babu, D. A. R.; Dash, A.

    2015-01-01

    Skin cancer treatment involving a 32P source is an easy, less expensive method of treatment, limited to small and superficial lesions approximately 1 mm deep. Bhabha Atomic Research Centre (BARC) has indigenously developed a 32P nafion-based patch source (1 cm × 1 cm) for treating skin cancer. For this source, the values of dose per unit activity at different depths, including dose profiles in water, were calculated using the EGSnrc-based Monte Carlo code system. For an initial activity of 1 Bq distributed in the 1 cm² surface area of the source, the calculated central-axis depth dose values are 3.62 × 10⁻¹⁰ Gy Bq⁻¹ and 8.41 × 10⁻¹¹ Gy Bq⁻¹ at 0.0125 and 1 mm depths in water, respectively. Hence, the treatment time calculated for delivering a therapeutic dose of 30 Gy at 1 mm depth along the central axis of the source with 37 MBq activity is about 2.7 h. PMID:26150682
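
    The quoted treatment time can be reproduced with a few lines of arithmetic, provided the tabulated value is read as a dose rate per unit activity (an interpretation consistent with the stated 2.7 h; decay of 32P, half-life 14.3 d, over a few hours is negligible):

        # Reading 8.41e-11 Gy/Bq as a dose rate per unit activity (Gy/s per Bq)
        # at 1 mm depth on the central axis.
        dose_rate_per_Bq = 8.41e-11   # Gy/s per Bq (assumed interpretation)
        activity = 37e6               # Bq (37 MBq)
        target_dose = 30.0            # Gy

        dose_rate = dose_rate_per_Bq * activity          # ~3.1e-3 Gy/s
        time_s = target_dose / dose_rate
        print(f"treatment time: {time_s / 3600:.1f} h")  # ~2.7 h, as quoted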

  5. Monte Carlo based investigation of Berry phase for depth resolved characterization of biomedical scattering samples

    SciTech Connect

    Baba, Justin S; John, Dwayne O; Koju, Vijay

    2015-01-01

    The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computational modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute-force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10 million) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the shallow depths where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman, et al.,1 to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated with the photon penetration depth, thus raising the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.

  6. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography

    PubMed Central

    Saha, Krishnendu; Straus, Kenneth J.; Chen, Yu.; Glick, Stephen J.

    2014-01-01

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction. PMID:25371555
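
    The role of the system matrix in iterative reconstruction can be sketched with a toy MLEM loop (a random dense matrix stands in for the GATE-computed, block-circulant system matrix of the paper):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy system matrix A (n_bins x n_voxels); the paper computes this
        # with GATE Monte Carlo and exploits block-circulant structure,
        # neither of which is reproduced here.
        n_bins, n_vox = 40, 20
        A = rng.random((n_bins, n_vox))
        x_true = rng.random(n_vox)
        y = rng.poisson(A @ x_true * 50) / 50.0        # noisy projection data

        # MLEM update: x <- x * A^T(y / Ax) / A^T 1
        x = np.ones(n_vox)
        sens = A.T @ np.ones(n_bins)                   # sensitivity image
        for _ in range(100):
            ratio = y / np.maximum(A @ x, 1e-12)
            x *= (A.T @ ratio) / sens

        print("relative error:",
              np.linalg.norm(x - x_true) / np.linalg.norm(x_true))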

  7. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography

    SciTech Connect

    Saha, Krishnendu; Straus, Kenneth J.; Glick, Stephen J.; Chen, Yu.

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.

  8. A comparison of Monte Carlo and model-based dose calculations in radiotherapy using MCNPTV

    NASA Astrophysics Data System (ADS)

    Wyatt, Mark S.; Miller, Laurence F.

    2006-06-01

    Monte Carlo calculations for megavoltage radiotherapy beams represent the next generation of dose calculation in the clinical environment. In this paper, calculations obtained by the MCNP code based on CT data from a human pelvis are compared against those obtained by a commercial radiotherapy treatment planning system (CMS XiO). The MCNP calculations are automated by the use of MCNPTV (MCNP Treatment Verification), an integrated application developed in Visual Basic that runs on a Windows-based PC. The linear accelerator beam is modeled as a finite point source and validated by comparing depth dose curves and lateral profiles in a water phantom to measured data. Calculated water phantom PDDs are within 1% of measured data, but the lateral profiles exhibit differences of 2.4, 5.5, and 5.7 mm at the 60%, 40%, and 20% isodose lines, respectively. An MCNP calculation is performed using the CT data, and 15 points are selected for comparison with XiO. Results are generally within the uncertainty of the MCNP calculation, although differences of up to 13.2% are seen in the presence of large heterogeneities.

  9. A Monte Carlo-based treatment planning tool for proton therapy

    NASA Astrophysics Data System (ADS)

    Mairani, A.; Böhlen, T. T.; Schiavi, A.; Tessonnier, T.; Molinelli, S.; Brons, S.; Battistoni, G.; Parodi, K.; Patera, V.

    2013-04-01

    In the field of radiotherapy, Monte Carlo (MC) particle transport calculations are recognized for their superior accuracy in predicting dose and fluence distributions in patient geometries compared to analytical algorithms, which are generally used for treatment planning due to their shorter execution times. In this work, a newly developed MC-based treatment planning (MCTP) tool for proton therapy is proposed to support treatment planning studies and research applications. It allows for single-field and simultaneous multiple-field optimization in realistic treatment scenarios and is based on the MC code FLUKA. Relative biological effectiveness (RBE)-weighted dose is optimized either with the common approach using a constant RBE of 1.1 or using a variable RBE according to radiobiological input tables. A validated reimplementation of the local effect model was used in this work to generate the radiobiological input tables. Examples of treatment plans in water phantoms and in patient-CT geometries, together with an experimental dosimetric validation of the plans, are presented for clinical treatment parameters as used at the Italian National Center for Oncological Hadron Therapy. To conclude, a versatile MCTP tool for proton therapy was developed and validated for realistic patient treatment scenarios against dosimetric measurements and commercial analytical TP calculations. It is intended for future use in research and to support treatment planning at state-of-the-art ion beam therapy facilities.

  10. The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran

    NASA Astrophysics Data System (ADS)

    Kargaran, Hamed; Minuchehr, Abdolhamid; Zolfaghari, Ahmad

    2016-04-01

    The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo-random number generator (GPPRNG) is proposed for use in high-performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL_MODE and SHARED_MODE. To generate parallel random numbers based on the independent-sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, is employed. Implementation of the developed GPPRNG on a single GPU showed speedups of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL_MODE and SHARED_MODE, respectively. To evaluate the accuracy of the developed GPPRNG, its performance was compared to that of other available PRNGs, such as those of MATLAB and FORTRAN and the Park-Miller algorithm, using standard statistical tests. The results of this comparison show that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
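
    Of the three ingredients named above, the Xorshift component is the simplest to illustrate. The sketch below shows Marsaglia's 32-bit xorshift step and an independent-sequence usage pattern in which each GPU thread would own its own state; the shift triple (13, 17, 5) is the standard choice, and the seeding scheme is illustrative rather than the one used in the cited generator.

```python
def xorshift32(state):
    """One step of Marsaglia's 32-bit xorshift generator."""
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state

def uniform_stream(seed, n):
    """n uniform floats in [0, 1) from one per-thread sequence."""
    s = seed & 0xFFFFFFFF
    if s == 0:
        s = 1                      # xorshift state must never be zero
    out = []
    for _ in range(n):
        s = xorshift32(s)
        out.append(s / 2**32)
    return out

# independent-sequence parallelism: each thread gets its own seed/state
print(uniform_stream(0xDEADBEEF, 3))
```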

  11. Status of the Space Radiation Monte Carlo Simulation Based on FLUKA and ROOT

    NASA Technical Reports Server (NTRS)

    Andersen, Victor; Carminati, Federico; Empl, Anton; Ferrari, Alfredo; Pinsky, Lawrence; Sala, Paola; Wilson, Thomas L.

    2002-01-01

    The NASA-funded project, reported on at the first IWSSRR in Arona, to develop a Monte Carlo simulation program for simulating the space radiation environment based on the FLUKA and ROOT codes is well into its second year of development, and considerable progress has been made. The general tasks required to achieve the final goals include the addition of heavy-ion interactions to the FLUKA code and the provision of a ROOT-based interface to FLUKA. The most significant progress to date is the incorporation of the DPMJET event generator within FLUKA to handle heavy-ion interactions for incident projectile energies greater than 3 GeV/A. The ongoing effort intends to extend the treatment of these interactions down to 10 MeV, and at present two alternative approaches are being explored. The ROOT interface is being pursued in conjunction with the CERN LHC ALICE software team through an adaptation of their existing AliROOT software. As a check on the validity of the code, a simulation of recent data taken by the ATIC experiment is underway.

  12. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography.

    PubMed

    Saha, Krishnendu; Straus, Kenneth J; Chen, Yu; Glick, Stephen J

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated to imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo simulation software was utilized to accurately model the system matrix for a breast PET system. To increase the count statistics in the system matrix computation and to reduce the storage space for system elements, only a subset of matrix elements was calculated and the remaining elements were estimated using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at a 45% reduced noise level and a 1.5- to 3-fold improvement in resolution when compared to MLEM reconstruction using a simple line-integral model. The GATE-based system matrix reconstruction technique promises to improve resolution and noise performance and to reduce image distortion at the FOV periphery compared to line-integral-based system matrix reconstruction. PMID:25371555

  13. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    SciTech Connect

    Densmore, Jeffrey D; Thompson, Kelly G; Urbatsch, Todd J

    2010-11-17

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.

  14. Stationarity Modeling and Informatics-Based Diagnostics in Monte Carlo Criticality Calculations

    SciTech Connect

    Ueki, Taro; Brown, Forrest B.

    2005-01-15

    In Monte Carlo criticality calculations, source error propagation through the stationary (active) cycles and source convergence in the settling (inactive) cycles are both dominated by the dominance ratio (DR) of the fission kernels. For symmetric two-fissile-component systems with the DR close to unity, the extinction of fission source sites can occur in one of the components even when the initial source is symmetric and the number of histories per cycle is more than 1000. When such a system is made slightly asymmetric, the neutron effective multiplication factor in the inactive cycles does not reflect the convergence to the stationary source distribution. To overcome this problem, relative entropy has been applied to a slightly asymmetric two-fissile-component problem with a DR of 0.993. The numerical results are mostly satisfactory but also show that the resulting stationarity diagnostics can occasionally be unnecessarily strict. Therefore, a criterion is defined based on the concept of the data-compression limit in information theory. Numerical results for a pressurized water reactor fuel storage facility with a DR of 0.994 strongly support the efficacy of relative entropy in both the posterior and progressive stationarity diagnostics.
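
    The posterior diagnostic idea can be illustrated compactly: bin the fission source sites of each cycle, compute the relative entropy of each cycle's distribution against a reference distribution, and declare stationarity once it stays below a noise floor. The binning, reference choice, and threshold below are illustrative; the cited work derives its criterion from the data-compression limit.

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """D(p||q) over spatial bins of the binned fission source."""
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))

def first_stationary_cycle(cycle_sources, threshold=1e-3):
    """cycle_sources: array (n_cycles, n_bins) of fission source site counts
    per spatial bin and cycle. Returns the first cycle index from which
    D(p_cycle || p_ref) stays below the threshold, else None."""
    ref = cycle_sources.mean(axis=0)          # reference distribution
    d = [relative_entropy(c, ref) for c in cycle_sources]
    for i in range(len(d)):
        if all(dj < threshold for dj in d[i:]):
            return i
    return None
```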

  15. A global reaction route mapping-based kinetic Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Mitchell, Izaac; Irle, Stephan; Page, Alister J.

    2016-07-01

    We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential.
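
    The selection and time-propagation steps described above correspond to the standard residence-time KMC algorithm, which can be sketched as follows once the harmonic transition state theory rate constants are in hand (the rate values in the example are illustrative).

```python
import math, random

def kmc_step(rates, t, rng=random.random):
    """One kinetic Monte Carlo step.
    rates: first-order rate constants k_i (1/s) for all pathways found
    from the current state. Returns (chosen pathway index, updated time)."""
    k_tot = sum(rates)
    # select pathway i with probability k_i / k_tot
    r = rng() * k_tot
    acc, chosen = 0.0, len(rates) - 1
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    # first-order kinetics: the waiting time is exponentially distributed
    t += -math.log(rng()) / k_tot
    return chosen, t

# e.g. three transition states with HTST rates (illustrative values, 1/s)
i, t = kmc_step([2.0e9, 5.0e8, 1.0e7], t=0.0)
print(i, t)
```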

  16. Monte Carlo simulation of novel breast imaging modalities based on coherent x-ray scattering

    NASA Astrophysics Data System (ADS)

    Ghammraoui, Bahaa; Badal, Andreu

    2014-07-01

    We present upgraded versions of MC-GPU and penEasy_Imaging, two open-source Monte Carlo codes for the simulation of radiographic projections and CT, that have been extended and validated to account for the effect of molecular interference in coherent x-ray scatter. The codes were first validated by comparing simulated and measured energy-dispersive x-ray diffraction (EDXRD) spectra. A second validation was performed by evaluating the rejection factor of a focused anti-scatter grid. To exemplify the capabilities of the new codes, the modified MC-GPU code was used to examine the possibility of characterizing breast tissue composition and microcalcifications in a volume of interest inside a whole breast phantom using EDXRD, and to simulate a coherent scatter computed tomography (CSCT) system based on first-generation CT acquisition geometry. It was confirmed that EDXRD and CSCT have the potential to characterize tissue composition inside a whole breast. The GPU-accelerated code was able to simulate, in just a few hours, a complete CSCT acquisition composed of 9758 independent pencil-beam projections. In summary, it has been shown that the presented software can be used for fast and accurate simulation of novel breast imaging modalities relying on scattering measurements and can therefore assist in the characterization and optimization of promising modalities currently under development.

  17. [Rapid simulation of electrode surface treatment based on Monte-Carlo model].

    PubMed

    Hu, Zhengtian; Xu, Ying; Guo, Miao; Sun, Zhitong; Li, Yan

    2014-12-01

    Micro- and integrated biosensors provide a powerful means for cell electrophysiology research. Electroplating platinum black onto the electrode can increase the signal-to-noise ratio and sensitivity of the sensor. To quantitatively analyze the electroplating process, this paper proposes a grid search algorithm based on a Monte Carlo model. The paper also puts forward an operational optimization strategy that can rapidly simulate the process of large numbers of nanoparticles with different particle-size dispersions (20-200 nm) attaching to the electrode, shortening the simulation time from an average of 20 hours to 0.5 hour when the test number is 10 and the electrode radius is 100 μm. For nanoparticles in a single layer or multiple layers, the treatment uniformity and attachment rate were analyzed using the grid search algorithm with different sizes and shapes of electrode. Simulation results showed that, under ideal conditions, when the electrode radius is less than 100 μm, increasing the electrode size has an obvious effect on the effective attachment and homogeneity of the nanoparticles, which is advantageous for the quantitative evaluation of an electrode array's repeatability. Under the condition of the same electrode area, the best attachment is achieved on the circular electrode compared to the square and rectangular ones. PMID:25868260

  18. [Comprehensive Risk Assessment of Soil Heavy Metals Based on Monte Carlo Simulation and Case Study].

    PubMed

    Yang, Yang; Dai, Dan; Cai, Yi-min; Chen, Wei-ping; Hou, Yu; Yang, Feng

    2015-11-01

    Based on stochastic theory, Monte Carlo simulation was introduced into ecological risk assessment and health risk assessment. Together with multi-statistical techniques, the proposed models were used for risk analysis in the Bin-Chang coal chemical industry park. The results showed that high levels of Cd, Co, and Cr were found in the area with a long mining history. The comprehensive single index and comprehensive risk index showed that the ecological risk of soil metals fell into the poor level, with probabilities of 53.2% and 55.6%, respectively. The health risk caused by hand-to-mouth ingestion was significantly greater than that from dermal exposure, and Cr was of prime concern for pollution control. Children faced the greatest health risk: their non-cancer risks remained at a high level, 5.0-fold higher than those of adults for hand-to-mouth ingestion and 8.2-fold higher for dermal exposure. The cancer risks for children under these two exposure pathways were both above the safety standard suggested by the USEPA. PMID:26911013

  19. Adjoint-based deviational Monte Carlo methods for phonon transport calculations

    NASA Astrophysics Data System (ADS)

    Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.

    2015-06-01

    In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.

  20. Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Kramer, Richard

    2011-08-01

    Health risks attributable to exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called a phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organs and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.

  1. Mesh-based Monte Carlo code for fluorescence modeling in complex tissues with irregular boundaries

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Chen, Leng-Chun; Lloyd, William; Kuo, Shiuhyang; Marcelo, Cynthia; Feinberg, Stephen E.; Mycek, Mary-Ann

    2011-07-01

    There is a growing need for the development of computational models that can account for complex tissue morphology in simulations of photon propagation. We describe the development and validation of a user-friendly, MATLAB-based Monte Carlo code that uses analytically-defined surface meshes to model heterogeneous tissue geometry. The code can use information from non-linear optical microscopy images to discriminate the fluorescence photons (from endogenous or exogenous fluorophores) detected from different layers of complex turbid media. We present a specific application of modeling a layered human tissue-engineered construct (Ex Vivo Produced Oral Mucosa Equivalent, EVPOME) designed for use in repair of oral tissue following surgery. Second-harmonic generation microscopic imaging of an EVPOME construct (oral keratinocytes atop a scaffold coated with human type IV collagen) was employed to determine an approximate analytical expression for the complex shape of the interface between the two layers. This expression can then be inserted into the code to correct the simulated fluorescence for the effect of the irregular tissue geometry.

  2. A global reaction route mapping-based kinetic Monte Carlo algorithm.

    PubMed

    Mitchell, Izaac; Irle, Stephan; Page, Alister J

    2016-07-14

    We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential. PMID:27421395

  3. TH-E-BRE-08: GPU-Monte Carlo Based Fast IMRT Plan Optimization

    SciTech Connect

    Li, Y; Tian, Z; Shi, F; Jiang, S; Jia, X

    2014-06-15

    Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead optimization, hindering the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations, yet the long computational time from repeated dose calculations for a large number of beamlets prevents this application. Our objective is to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet from which the source particle originates. Deposited dose is stored separately for each beamlet based on this index. Due to limited GPU memory, a pyramid space is allocated for each beamlet, and dose outside this space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, rough beamlet dose calculations are conducted with only a small number of particles per beamlet. Plan optimization then follows to obtain an approximate fluence map. In the second step, more accurate beamlet doses are calculated, with the number of particles sampled for a beamlet proportional to the intensity determined previously. A second round of optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to obtain good plan quality. The total simulation time is 96.4 s. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow was developed. The high efficiency allows the use of MC for IMRT optimization.
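
    The essence of the second step is a history budget distributed in proportion to the first-round fluence map. A minimal sketch of such an allocation rule is given below; the function name, total budget, and per-beamlet floor are illustrative, not values from the cited work.

```python
import numpy as np

def allocate_histories(fluence, total_histories=int(1e8), n_min=1000):
    """Distribute the second-round MC history budget across beamlets in
    proportion to the first-round optimized fluence map. The floor n_min
    (illustrative) keeps some statistics even for low-weight beamlets."""
    w = np.asarray(fluence, dtype=float)
    w = w / w.sum()
    return np.maximum((w * total_histories).astype(int), n_min)

# first-round fluence intensities for 5 beamlets (illustrative)
print(allocate_histories([0.0, 0.1, 0.5, 1.2, 3.0]))
```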

  4. Modeling parameterized geometry in GPU-based Monte Carlo particle transport simulation for radiotherapy

    NASA Astrophysics Data System (ADS)

    Chi, Yujie; Tian, Zhen; Jia, Xun

    2016-08-01

    Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which has limited their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it in GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data was stored in GPU’s shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged in 1.75–2.03 times that of the voxelized geometry for coupled photon/electron transport depending on the voxel dimension of the auxiliary index array, and in 0
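
    For quadric-bounded regions, the particle navigation mentioned above reduces to finding the smallest positive root of a quadratic in the distance along the particle's direction of flight. A generic distance-to-quadric routine might look like the following sketch; the interface is hypothetical, not that of the cited module.

```python
import numpy as np

def distance_to_quadric(o, d, A, b, c, eps=1e-9):
    """Smallest positive distance along the ray o + t*d to the quadric
    x^T A x + b.x + c = 0; returns inf if the ray misses the surface."""
    qa = d @ A @ d                        # quadratic coefficient in t
    qb = 2.0 * (o @ A @ d) + b @ d        # linear coefficient
    qc = o @ A @ o + b @ o + c            # value of the quadric at the origin
    if abs(qa) < eps:                     # ray is effectively linear in t
        return -qc / qb if qb != 0 and -qc / qb > eps else np.inf
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0.0:
        return np.inf
    sq = np.sqrt(disc)
    roots = [(-qb - sq) / (2 * qa), (-qb + sq) / (2 * qa)]
    pos = [t for t in roots if t > eps]
    return min(pos) if pos else np.inf

# unit sphere (A = I, b = 0, c = -1); a ray from the origin along +z hits at t = 1
print(distance_to_quadric(np.zeros(3), np.array([0., 0., 1.]),
                          np.eye(3), np.zeros(3), -1.0))
```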

  5. Modeling parameterized geometry in GPU-based Monte Carlo particle transport simulation for radiotherapy.

    PubMed

    Chi, Yujie; Tian, Zhen; Jia, Xun

    2016-08-01

    Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which has limited their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it in GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data was stored in GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged in 1.75-2.03 times that of the voxelized geometry for coupled photon/electron transport depending on the voxel dimension of the auxiliary index array, and in 0

  6. Monte Carlo vs. Pencil Beam based optimization of stereotactic lung IMRT

    PubMed Central

    2009-01-01

    Background The purpose of the present study is to compare finite size pencil beam (fsPB) and Monte Carlo (MC) based optimization of lung intensity-modulated stereotactic radiotherapy (lung IMSRT). Materials and methods An fsPB and an MC algorithm, as implemented in a biological IMRT planning system, were validated by film measurements in a static lung phantom. They were then applied to static lung IMSRT planning based on three different geometrical patient models (one-phase static CT, density-overwrite one-phase static CT, average CT) of the same patient. Both 6 and 15 MV beam energies were used. The resulting treatment plans were compared by how well they fulfilled the prescribed optimization constraints, both for the dose distributions calculated on the static patient models and for the accumulated dose, recalculated with MC on each of 8 CTs of a 4DCT set. Results In the phantom measurements, the MC dose engine showed discrepancies < 2%, while the fsPB dose engine showed discrepancies of up to 8% in the presence of lateral electron disequilibrium in the target. In the patient plan optimization, this translates into violations of organ at risk constraints and unpredictable target doses for the fsPB optimized plans. For the 4D MC recalculated dose distribution, MC optimized plans always underestimate the target doses, but the organ at risk doses were comparable. The results depend on the static patient model, and the smallest discrepancy was found for the MC optimized plan on the density-overwrite one-phase static CT model. Conclusions It is feasible to employ the MC dose engine for optimization of lung IMSRT, and the plans are superior to fsPB. Use of static patient models introduces a bias in the MC dose distribution compared to the 4D MC recalculated dose, but this bias is predictable and therefore MC based optimization on static patient models is considered safe. PMID:20003380

  7. A Monte Carlo simulation based two-stage adaptive resonance theory mapping approach for offshore oil spill vulnerability index classification.

    PubMed

    Li, Pu; Chen, Bing; Li, Zelin; Zheng, Xiao; Wu, Hongjing; Jing, Liang; Lee, Kenneth

    2014-09-15

    In this paper, a Monte Carlo simulation based two-stage adaptive resonance theory mapping (MC-TSAM) model was developed to classify a given site into distinguished zones representing different levels of offshore Oil Spill Vulnerability Index (OSVI). It consisted of an adaptive resonance theory (ART) module, an ART Mapping module, and a centroid determination module. Monte Carlo simulation was integrated with the TSAM approach to address uncertainties that widely exist in site conditions. The applicability of the proposed model was validated by classifying a large coastal area, which was surrounded by potential oil spill sources, based on 12 features. Statistical analysis of the results indicated that the classification process was affected by multiple features instead of one single feature. The classification results also provided the least or desired number of zones which can sufficiently represent the levels of offshore OSVI in an area under uncertainty and complexity, saving time and budget in spill monitoring and response. PMID:25044043

  8. Monte-Carlo simulation of an ultra small-angle neutron scattering instrument based on Soller slits

    SciTech Connect

    Rieker, T.; Hubbard, P.

    1997-09-01

    Monte Carlo simulations are used to investigate an ultra small-angle neutron scattering instrument, based on a Soller slit collimator and analyzer, for use at a pulsed source. The simulations show that for a q_min of ~1e-4 Å^-1 (15 Å neutrons), a few tenths of a percent of the incident flux is transmitted through both collimators at q = 0.

  9. Monte Carlo-based revised values of dose rate constants at discrete photon energies

    PubMed Central

    Selvam, T. Palani; Shrivastava, Vandana; Chourasiya, Ghanashyam; Babu, D. Appala Raju

    2014-01-01

    Absorbed dose rate to water at 0.2 cm and 1 cm due to a point isotropic photon source is calculated as a function of photon energy using the EDKnrc user-code of the EGSnrc Monte Carlo system. This code system utilizes the widely used XCOM photon cross-section dataset for the calculation of absorbed dose to water. Using the above dose rates, dose rate constants are calculated. The air-kerma strength Sk needed for deriving the dose rate constant is based on the mass-energy absorption coefficient compilations of Hubbell and Seltzer published in 1995. A comparison of the absorbed dose rates in water at the above distances with published values reflects the differences in photon cross-section datasets in the low-energy region (differences of up to 2% in dose rate values at 1 cm in the energy range 30–50 keV, and up to 4% at 0.2 cm at 30 keV). A maximum difference of about 8% is observed in the dose rate value at 0.2 cm at 1.75 MeV when compared to the published value. Sk calculations based on the compilation of Hubbell and Seltzer show differences of up to 2.5% in the low-energy region (20–50 keV) when compared to published values. The deviations observed in the values of dose rate and Sk affect the values of the dose rate constants by up to 3%. PMID:24600166

  10. A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP)

    NASA Astrophysics Data System (ADS)

    Bitar, A.; Lisbona, A.; Thedrez, P.; Sai Maurel, C.; LeForestier, D.; Barbet, J.; Bardies, M.

    2007-02-01

    Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq^-1 s^-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.

  11. A study of potential numerical pitfalls in GPU-based Monte Carlo dose calculation

    NASA Astrophysics Data System (ADS)

    Magnoux, Vincent; Ozell, Benoît; Bonenfant, Éric; Després, Philippe

    2015-07-01

    The purpose of this study was to evaluate the impact of numerical errors caused by the floating point representation of real numbers in a GPU-based Monte Carlo code used for dose calculation in radiation oncology, and to identify situations where this type of error arises. The program used as a benchmark was bGPUMCD. Three tests were performed on the code, which was divided into three functional components: energy accumulation, particle tracking and physical interactions. First, the impact of single-precision calculations was assessed for each functional component. Second, a GPU-specific compilation option that reduces execution time as well as precision was examined. Third, a specific function used for tracking and potentially more sensitive to precision errors was tested by comparing it to a very high-precision implementation. Numerical errors were found in two components of the program. Because of the energy accumulation process, a few voxels surrounding a radiation source end up with a lower computed dose than they should. The tracking system contained a series of operations that abnormally amplify rounding errors in some situations. This resulted in some rare instances (less than 0.1%) of computed distances that are exceedingly far from what they should have been. Most errors detected had no significant effects on the result of a simulation due to its random nature, either because they cancel each other out or because they only affect a small fraction of particles. The results of this work can be extended to other types of GPU-based programs and be used as guidelines to avoid numerical errors on the GPU computing platform.
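
    The energy-accumulation error described above is easy to reproduce: once a voxel's single-precision running total is large, individual depositions smaller than half a unit in the last place are silently rounded away. The toy example below demonstrates the effect with illustrative magnitudes; compensated (Kahan) summation or double-precision accumulators are standard mitigations, though the cited work does not prescribe a specific fix.

```python
import numpy as np

# A voxel near the source already holds a large accumulated dose; each
# new deposition is tiny compared to the running total.
total32 = np.float32(1.0e4)
for _ in range(200_000):
    total32 += np.float32(1.0e-4)   # below half an ulp of 1e4 in float32: dropped

total64 = np.float64(1.0e4)
for _ in range(200_000):
    total64 += 1.0e-4

print(total32)   # still 10000.0 -- the added 20.0 of "dose" vanished
print(total64)   # ~10020.0
```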

  12. Seabed radioactivity based on in situ measurements and Monte Carlo simulations.

    PubMed

    Androulakaki, E G; Tsabaris, C; Eleftheriou, G; Kokkoris, M; Patiris, D L; Vlastou, R

    2015-07-01

    Activity concentration measurements were carried out on the seabed using the underwater detection system KATERINA. The efficiency calibration was performed in the energy range 350-2600 keV, using in situ and laboratory measurements. The efficiency results were reproduced and extended over a broadened energy range from 150 to 2600 keV by Monte Carlo simulations using the MCNP5 code. The concentrations of (40)K, (214)Bi and (208)Tl were determined using the present approach. The results were validated by laboratory measurements. PMID:25846455

  13. Online Monte Carlo based calculator of human skin spectra and color

    NASA Astrophysics Data System (ADS)

    Doronin, A.; Meglinski, I.

    2011-10-01

    We present an object-oriented GPU-accelerated Monte Carlo tool for the online simulation of the reflectance spectra and color of human skin in the visible and near-infrared (NIR) spectral regions. Human skin is represented as a multi-layered medium. Variations in the spatial distribution of blood, pheomelanin, eumelanin, blood oxygen saturation index, hematocrit, and volume fraction of water are taken into account. The optical properties of skin tissues and the results of simulating skin reflectance spectra and the corresponding skin colors are presented.

  14. Online Monte Carlo based calculator of human skin spectra and color

    NASA Astrophysics Data System (ADS)

    Doronin, A.; Meglinski, I.

    2012-03-01

    We present an object-oriented GPU-accelerated Monte Carlo tool for the online simulation of the reflectance spectra and color of human skin in the visible and near-infrared (NIR) spectral regions. Human skin is represented as a multi-layered medium. Variations in the spatial distribution of blood, pheomelanin, eumelanin, blood oxygen saturation index, hematocrit, and volume fraction of water are taken into account. The optical properties of skin tissues and the results of simulating skin reflectance spectra and the corresponding skin colors are presented.

  15. Microlens assembly error analysis for light field camera based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping

    2016-08-01

    This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming no manufacturing errors, a home-built program was used to simulate images exhibiting the coupling distance, movement, and rotation errors that can arise during microlens installation. By examining these images, sub-aperture images, and refocused images, we found that the images present different degrees of blurring and deformation for different microlens assembly errors, while the sub-aperture images present aliasing, obscuration, and other distortions that result in unclear refocused images.

  16. First macro Monte Carlo based commercial dose calculation module for electron beam treatment planning—new issues for clinical consideration

    NASA Astrophysics Data System (ADS)

    Ding, George X.; Duggan, Dennis M.; Coffey, Charles W.; Shokrani, Parvaneh; Cygler, Joanna E.

    2006-06-01

    The purpose of this study is to present our experience of commissioning, testing and use of the first commercial macro Monte Carlo based dose calculation algorithm for electron beam treatment planning, and to investigate new issues regarding dose reporting (dose-to-water versus dose-to-medium) as well as the statistical uncertainties that arise when Monte Carlo based systems are used for patient dose calculations. All phantoms studied were obtained by CT scan. The calculated dose distributions and monitor units were validated against measurements with film and ionization chambers in phantoms containing two-dimensional (2D) and three-dimensional (3D) type low- and high-density inhomogeneities at different source-to-surface distances. Beam energies ranged from 6 to 18 MeV. New experimental input data required for commissioning are presented. The validation shows excellent agreement between calculated and measured dose distributions. The calculated monitor units were within 2% of measured values except in the case of a 6 MeV beam and small cutout fields at extended SSDs (>110 cm). The investigation of the new issue of dose reporting demonstrates differences of up to 4% for lung and 12% for bone when 'dose-to-medium' is calculated and reported instead of 'dose-to-water' as done in conventional systems. The accuracy of the Monte Carlo calculation is shown to be clinically acceptable even for very complex 3D-type inhomogeneities. As Monte Carlo based treatment planning systems begin to enter clinical practice, new issues, such as dose reporting and statistical variations, may be clinically significant. It is therefore imperative that a consistent approach to dose reporting is used.

  17. Monte Carlo-based searching as a tool to study carbohydrate structure.

    PubMed

    Dowd, Michael K; Kiely, Donald E; Zhang, Jinsong

    2011-07-01

    A torsion angle-based Monte Carlo searching routine was developed and applied to several carbohydrate modeling problems. The routine was developed as a Unix shell script that calls several programs, which allows it to be interfaced with multiple potential functions and various utilities for evaluating conformers. In its current form, the program operates with several versions of the MM3 and MM4 molecular mechanics programs and has a module to calculate hydrogen-hydrogen coupling constants. The routine was used to study the low-energy exo-cyclic substituents of β-D-glucopyranose and the conformers of D-glucaramide, both of which had been previously studied with MM3 by full conformational searches. For these molecules, the program found all previously reported low-energy structures. The routine was also used to find favorable conformers of 2,3,4,5-tetra-O-acetyl-N,N'-dimethyl-D-glucaramide and D-glucitol, the latter of which is believed to have many low-energy forms. Finally, the technique was used to study the inter-ring conformations of β-gentiobiose, a β-(1→6)-linked disaccharide of D-glucopyranose. The program easily found conformers in the 10 previously identified low-energy regions for this disaccharide. In 6 of the 10 local regions, the same previously identified low-energy structures were found. In the remaining four regions, the search identified structures with slightly lower energies than those previously reported. The approach should be useful for extending modeling studies on acyclic monosaccharides and possibly oligosaccharides. PMID:21536262
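
    A torsion-angle Monte Carlo search of the kind described can be sketched as a loop that perturbs a random subset of torsions, scores the trial conformer with an external force-field call, and archives distinct low-energy structures. The acceptance rule and parameters below are one simple variant, and energy() stands in for the MM3/MM4 evaluation; none of these details are taken from the cited routine.

```python
import math, random

def torsion_mc_search(n_torsions, energy, steps=5000, max_kick=120.0, keep=10.0):
    """Random torsion-angle search: perturb a random subset of torsions,
    score with an external energy call, and archive conformers whose
    energy is within `keep` units of the running minimum (illustrative)."""
    angles = [random.uniform(-180.0, 180.0) for _ in range(n_torsions)]
    best, archive = energy(angles), []
    for _ in range(steps):
        trial = list(angles)
        for i in random.sample(range(n_torsions), k=random.randint(1, n_torsions)):
            # kick the torsion and wrap back into [-180, 180)
            trial[i] = (trial[i] + random.uniform(-max_kick, max_kick) + 180.0) % 360.0 - 180.0
        e = energy(trial)
        if e < best + keep:
            archive.append((e, trial))
            best = min(best, e)
            angles = trial               # continue the walk from the accepted point
    return best, archive

# toy energy surface standing in for a force-field call (minimum at all zeros)
best, confs = torsion_mc_search(3, lambda a: sum(1 - math.cos(math.radians(x)) for x in a))
print(best)
```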

  18. GPU-based fast Monte Carlo simulation for radiotherapy dose calculation.

    PubMed

    Jia, Xun; Gu, Xuejun; Graves, Yan Jiang; Folkerts, Michael; Jiang, Steve B

    2011-11-21

    Monte Carlo (MC) simulation is commonly considered to be the most accurate dose calculation method in radiotherapy. However, its efficiency still requires improvement for many routine clinical applications. In this paper, we present our recent progress toward the development of a graphics processing unit (GPU)-based MC dose calculation package, gDPM v2.0. It utilizes the parallel computation ability of a GPU to achieve high efficiency, while maintaining the same particle transport physics as in the original dose planning method (DPM) code and hence the same level of simulation accuracy. In GPU computing, divergence of execution paths between threads can considerably reduce the efficiency. Since photons and electrons undergo different physics and hence attain different execution paths, we use a simulation scheme where photon transport and electron transport are separated to partially relieve the thread divergence issue. A high-performance random number generator and a hardware linear interpolation are also utilized. We have also developed various components to handle the fluence map and linac geometry, so that gDPM can be used to compute dose distributions for realistic IMRT or VMAT treatment plans. Our gDPM package is tested for its accuracy and efficiency in both phantoms and realistic patient cases. In all cases, the average relative uncertainties are less than 1%. A statistical t-test is performed and the dose difference between the CPU and the GPU results is not found to be statistically significant in over 96% of the high dose region and over 97% of the entire region. Speed-up factors of 69.1 ∼ 87.2 have been observed using an NVIDIA Tesla C2050 GPU card against a 2.27 GHz Intel Xeon CPU processor. For realistic IMRT and VMAT plans, MC dose calculation can be completed with less than 1% standard deviation in 36.1 ∼ 39.6 s using gDPM. PMID:22016026

  19. Acceptance and commissioning of a treatment planning system based on Monte Carlo calculations.

    PubMed

    Lopez-Tarjuelo, J; Garcia-Molla, R; Juan-Senabre, X J; Quiros-Higueras, J D; Santos-Serra, A; de Marco-Blancas, N; Calzada-Feliu, S

    2014-04-01

    The Monaco Treatment Planning System (TPS), based on a virtual energy fluence model of the photon beam head components of the linac and a dose computation engine using the Monte Carlo (MC) algorithm X-Ray Voxel MC (XVMC), has been tested before being put into clinical use. An Elekta Synergy with 6 MV was characterized using routine equipment. After the machine's model was installed, a set of functionality, geometric, dosimetric and data transfer tests was performed. The dosimetric tests included dose calculations in water, heterogeneous phantoms and Intensity Modulated Radiation Therapy (IMRT) verifications. Data transfer tests were run for every imaging device, TPS and the electronic medical record linked to Monaco. Functionality and geometric tests ran properly. Dose calculations in water were in accordance with measurements, such that in 95% of cases the differences were at most 1.9%. Dose calculations in heterogeneous media showed the expected results found in the literature. IMRT verification with an ionization chamber gave dose differences lower than 2.5% for points inside a standard gradient. When a 2-D array was used, all the fields passed the γ (3%, 3 mm) test with at least 90% of points succeeding, and for the majority of fields the percentage of succeeding points was between 95% and 100%. Data transfer caused problems that had to be solved by changing our workflow. In general, the tests gave satisfactory results. Monaco's performance complied with published international recommendations and scored highly in the dosimetric domain. However, the problems detected when the TPS was put to work together with our current equipment showed that this kind of product must be completely commissioned, without neglecting data workflow, before the first patient is treated. PMID:23862746

  20. The use of tetrahedral mesh geometries in Monte Carlo simulation of applicator based brachytherapy dose distributions.

    PubMed

    Fonseca, Gabriel Paiva; Landry, Guillaume; White, Shane; D'Amours, Michel; Yoriyaz, Hélio; Beaulieu, Luc; Reniers, Brigitte; Verhaegen, Frank

    2014-10-01

    Accounting for brachytherapy applicator attenuation is part of the recommendations from the recent report of AAPM Task Group 186. To do so, model based dose calculation algorithms require accurate modelling of the applicator geometry. This can be non-trivial in the case of irregularly shaped applicators such as the Fletcher Williamson gynaecological applicator or balloon applicators with possibly irregular shapes employed in accelerated partial breast irradiation (APBI) performed using electronic brachytherapy sources (EBS). While many of these applicators can be modelled using constructive solid geometry (CSG), the latter may be difficult and time-consuming. Alternatively, these complex geometries can be modelled using tessellated geometries such as tetrahedral meshes (mesh geometries (MG)). Recent versions of Monte Carlo (MC) codes Geant4 and MCNP6 allow for the use of MG. The goal of this work was to model a series of applicators relevant to brachytherapy using MG. Applicators designed for (192)Ir sources and 50 kV EBS were studied; a shielded vaginal applicator, a shielded Fletcher Williamson applicator and an APBI balloon applicator. All applicators were modelled in Geant4 and MCNP6 using MG and CSG for dose calculations. CSG derived dose distributions were considered as reference and used to validate MG models by comparing dose distribution ratios. In general agreement within 1% for the dose calculations was observed for all applicators between MG and CSG and between codes when considering volumes inside the 25% isodose surface. When compared to CSG, MG required longer computation times by a factor of at least 2 for MC simulations using the same code. MCNP6 calculation times were more than ten times shorter than Geant4 in some cases. In conclusion we presented methods allowing for high fidelity modelling with results equivalent to CSG. To the best of our knowledge MG offers the most accurate representation of an irregular APBI balloon applicator.

  1. The use of tetrahedral mesh geometries in Monte Carlo simulation of applicator based brachytherapy dose distributions

    NASA Astrophysics Data System (ADS)

    Paiva Fonseca, Gabriel; Landry, Guillaume; White, Shane; D'Amours, Michel; Yoriyaz, Hélio; Beaulieu, Luc; Reniers, Brigitte; Verhaegen, Frank

    2014-10-01

    Accounting for brachytherapy applicator attenuation is part of the recommendations from the recent report of AAPM Task Group 186. To do so, model based dose calculation algorithms require accurate modelling of the applicator geometry. This can be non-trivial in the case of irregularly shaped applicators such as the Fletcher Williamson gynaecological applicator or balloon applicators with possibly irregular shapes employed in accelerated partial breast irradiation (APBI) performed using electronic brachytherapy sources (EBS). While many of these applicators can be modelled using constructive solid geometry (CSG), the latter may be difficult and time-consuming. Alternatively, these complex geometries can be modelled using tessellated geometries such as tetrahedral meshes (mesh geometries (MG)). Recent versions of Monte Carlo (MC) codes Geant4 and MCNP6 allow for the use of MG. The goal of this work was to model a series of applicators relevant to brachytherapy using MG. Applicators designed for 192Ir sources and 50 kV EBS were studied; a shielded vaginal applicator, a shielded Fletcher Williamson applicator and an APBI balloon applicator. All applicators were modelled in Geant4 and MCNP6 using MG and CSG for dose calculations. CSG derived dose distributions were considered as reference and used to validate MG models by comparing dose distribution ratios. In general agreement within 1% for the dose calculations was observed for all applicators between MG and CSG and between codes when considering volumes inside the 25% isodose surface. When compared to CSG, MG required longer computation times by a factor of at least 2 for MC simulations using the same code. MCNP6 calculation times were more than ten times shorter than Geant4 in some cases. In conclusion we presented methods allowing for high fidelity modelling with results equivalent to CSG. To the best of our knowledge MG offers the most accurate representation of an irregular APBI balloon applicator.

  2. GPU-based fast Monte Carlo dose calculation for proton therapy

    PubMed Central

    Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B

    2015-01-01

    Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time limits it from routine clinical applications. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreements between gPMC and TOPAS/Geant4 are observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6–22 s to simulate 10 million source protons to achieve ~1% relative statistical uncertainty, depending on the phantoms and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy. PMID:23128424

  3. Cell death following BNCT: a theoretical approach based on Monte Carlo simulations.

    PubMed

    Ballarini, F; Bakeine, J; Bortolussi, S; Bruschi, P; Cansolino, L; Clerici, A M; Ferrari, C; Protti, N; Stella, S; Zonta, A; Zonta, C; Altieri, S

    2011-12-01

    In parallel to boron measurements and animal studies, investigations of radiation-induced cell death are also in progress in Pavia, with the aim of better characterising the effects of a BNCT treatment down to the cellular level. These studies are being carried out not only experimentally but also theoretically, based on a mechanistic model and a Monte Carlo code. The model assumes that: (1) only clustered DNA strand breaks can lead to chromosome aberrations; (2) only chromosome fragments within a certain threshold distance can undergo misrejoining; (3) the so-called "lethal aberrations" (dicentrics, rings and large deletions) lead to cell death. After applying the model to normal cells exposed to monochromatic fields of different radiation types, the irradiation section of the code was purposely extended to mimic cell exposure to the mixed radiation field produced by the (10)B(n,α)(7)Li reaction, which gives rise to alpha particles and Li ions of short range and high biological effectiveness, and by the (14)N(n,p)(14)C reaction, which produces 0.58 MeV protons. Very good agreement between model predictions and literature data was found for human and animal cells exposed to X- or gamma-rays, protons and alpha particles, allowing us to validate the model for cell death induced by monochromatic radiation fields. The model predictions also showed good agreement with experimental data obtained by our group exposing DHD cells to thermal neutrons in the TRIGA Mark II reactor of the University of Pavia; this allowed us to validate the model for a BNCT exposure scenario as well, providing a useful predictive tool to bridge the gap between irradiation and cell death. PMID:21481595
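
    In models of this kind, the link from lethal aberrations to cell death is usually Poissonian: if an exposure produces on average Y_L lethal aberrations per cell, the surviving fraction is S = exp(-Y_L), the probability of carrying zero lethal aberrations. The sketch below applies this to a mixed BNCT-like field with purely illustrative doses and aberration yields, not fitted values from the cited work.

```python
import math

def survival_fraction(mean_lethal_aberrations):
    """Poisson assumption: a cell survives iff it carries zero lethal
    aberrations (dicentrics, rings, large deletions)."""
    return math.exp(-mean_lethal_aberrations)

# mixed-field example: boron capture products, 14N(n,p) protons, gamma
# background (doses in Gy and yields per cell per Gy are illustrative)
dose_Gy = {"alpha_Li": 2.0, "protons": 0.5, "gamma": 0.3}
yield_per_Gy = {"alpha_Li": 1.5, "protons": 0.4, "gamma": 0.1}
Y_L = sum(dose_Gy[c] * yield_per_Gy[c] for c in dose_Gy)
print(survival_fraction(Y_L))
```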

  4. GPU-based fast Monte Carlo dose calculation for proton therapy.

    PubMed

    Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B

    2012-12-01

    Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time limits it from routine clinical applications. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreements between gPMC and TOPAS/Geant4 are observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6-22 s to simulate 10 million source protons to achieve ∼1% relative statistical uncertainty, depending on the phantoms and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy. PMID:23128424

  5. TH-C-17A-08: Monte Carlo Based Design of Efficient Scintillating Fiber Dosimeters

    SciTech Connect

    Wiles, A; Loyalka, S; Rangaraj, D; Izaguirre, E

    2014-06-15

    Purpose: To accurately predict Cherenkov radiation generation in scintillating fiber dosimeters. Quantifying Cherenkov radiation provides a method for optimizing fiber dimensions, orientation, optical filters, and photodiode spectral sensitivity to achieve efficient real time imaging dosimeter designs. Methods: We develop in-house Monte Carlo simulation software to model polymer scintillation fibers' fluorescence and Cherenkov emission in megavoltage clinical beams. The model computes emissions using generation probabilities, wavelength sampling, fiber photon capture, and fiber transport efficiency, and incorporates the fiber's index of refraction, optical attenuation in the Cherenkov and visible spectrum, and fiber dimensions. Detector component selection based on parameters such as silicon photomultiplier efficiency and optical coupling filters separates Cherenkov radiation from the dose-proportional scintillating emissions. The computation uses spectral and geometrical separation of Cherenkov radiation; however, other filtering techniques can extend the model. Results: We compute Cherenkov generation per electron, as well as fiber capture and transmission of those photons toward the detector, as a function of incident electron beam angle. The model accounts for beam obliquity and nonperpendicular electron-fiber impingement, which increases Cherenkov emission and trapping. The rotational angle around square fibers shows trapping efficiency variation from the normally incident minimum to a maximum at 45 degrees rotation. For rotation in the plane formed by the fiber axis and its surface normal, trapping efficiency increases with angle from the normal. The Cherenkov spectrum follows the theoretical curve from 300 nm to 800 nm, the wavelength range of interest defined by silicon photomultiplier and photodiode spectral efficiency. Conclusion: We are able to compute Cherenkov generation in realistic real time scintillating fiber dosimeter geometries. Design parameters incorporate

  6. Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.

    2008-01-01

    Fatigue life is probabilistic and not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than actual differences, if any, that exist between materials and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than that of AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075 even though AL7075 had a fatigue life 30 percent greater than AL6061.
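
    The sampling experiment described above is easy to reproduce in outline. The Python sketch below draws fatigue lives from an assumed two-parameter Weibull distribution and shows how the scatter of the estimated L10 life (the life at 10% failure probability) shrinks with test population size; the shape and scale values are arbitrary stand-ins, not the paper's fitted parameters.

        import numpy as np

        rng = np.random.default_rng(0)

        BETA, ETA = 2.0, 1.0e6   # assumed Weibull shape and characteristic life (cycles)

        def l10_estimate(n_samples):
            # estimate L10 (life at 10% failure probability) from n sampled lives
            lives = ETA * rng.weibull(BETA, size=n_samples)
            return np.quantile(lives, 0.10)

        # exact L10 from F(t) = 1 - exp(-(t/eta)^beta) = 0.10
        true_l10 = ETA * (-np.log(0.90)) ** (1.0 / BETA)

        for n in (10, 30, 100, 1000):
            trials = [l10_estimate(n) for _ in range(2000)]
            lo, hi = np.percentile(trials, [5, 95])
            print(f"n={n:5d}: L10 90% scatter band [{lo:.3e}, {hi:.3e}] "
                  f"(true {true_l10:.3e})")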

  8. Monte Carlo N-particle simulation of neutron-based sterilisation of anthrax contamination

    PubMed Central

    Liu, B; Xu, J; Liu, T; Ouyang, X

    2012-01-01

    Objective To simulate the neutron-based sterilisation of anthrax contamination by Monte Carlo N-particle (MCNP) 4C code. Methods Neutrons are elementary particles that have no charge. They are 20 times more effective than electrons or γ-rays in killing anthrax spores on surfaces and inside closed containers. Neutrons emitted from a 252Cf neutron source are in the 100 keV to 2 MeV energy range. A 2.5 MeV D–D neutron generator can create neutrons at up to 10^13 n s^−1 with current technology. All these enable an effective and low-cost method of killing anthrax spores. Results Increasing the reflector beyond its saturation thickness has no further effect on the neutron energy deposition in the anthrax sample. Among all three reflecting materials tested in the MCNP simulation, paraffin is the best because it has the thinnest saturation thickness and is easy to machine. The MCNP radiation dose and fluence simulation calculation also showed that the MCNP-simulated neutron fluence that is needed to kill the anthrax spores agrees with previous analytical estimations very well. Conclusion The MCNP simulation indicates that a 10 min neutron irradiation from a 0.5 g 252Cf neutron source or a 1 min neutron irradiation from a 2.5 MeV D–D neutron generator may kill all anthrax spores in a sample. This is a promising result because a 2.5 MeV D–D neutron generator output >10^13 n s^−1 should be attainable in the near future. This indicates that we could use a D–D neutron generator to sterilise anthrax contamination within several seconds. PMID:22573293
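
    For orientation, the back-of-the-envelope relation behind such irradiation-time estimates is fluence = (source rate) x (time) / (4 pi r^2) for a bare point source; the sketch below inverts it for the time. All numbers are illustrative assumptions (the kill fluence in particular is a placeholder, not the paper's value), and a reflector such as paraffin would shorten the required time.

        import math

        # Illustrative assumptions only, not the paper's MCNP inputs.
        TARGET_FLUENCE = 1.0e14   # n/cm^2 assumed needed to kill spores (placeholder)
        SOURCE_RATE = 1.0e13      # n/s, a hypothetical D-D generator output
        DISTANCE_CM = 5.0         # source-to-sample distance

        # Bare point source: fluence rate = S / (4 pi r^2); a reflector would
        # raise this, so the result is an upper bound on the required time.
        flux = SOURCE_RATE / (4.0 * math.pi * DISTANCE_CM ** 2)
        t = TARGET_FLUENCE / flux
        print(f"fluence rate {flux:.3e} n/cm^2/s -> irradiation time {t:.1f} s")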

  9. GPU-based fast Monte Carlo dose calculation for proton therapy

    NASA Astrophysics Data System (ADS)

    Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B.

    2012-12-01

    Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time limits it from routine clinical applications. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreements between gPMC and TOPAS/Geant4 are observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6-22 s to simulate 10 million source protons to achieve ˜1% relative statistical uncertainty, depending on the phantoms and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy.
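
    A condensed-history proton transport loop of the kind described above can be caricatured in a few lines of Python: protons lose energy along fixed steps according to a stopping power derived from the Bragg-Kleeman range-energy fit R = alpha * E^p for water, with a crude per-history range-straggling factor. The alpha, p and straggling values are textbook-level approximations, and the sketch omits multiple scattering, nuclear interactions and secondaries entirely; it is not gPMC's algorithm, only the flavor of a CSDA loop.

        import numpy as np

        rng = np.random.default_rng(11)

        ALPHA, P = 0.0022, 1.77   # Bragg-Kleeman fit for water: R [cm] = ALPHA * E**P [MeV]
        E0 = 150.0                # initial proton energy, MeV
        STEP = 0.05               # condensed-history step, cm
        SIGMA_R = 0.012           # assumed fractional range straggling

        DEPTH_MAX = 20.0
        edep = np.zeros(int(DEPTH_MAX / STEP))   # depth-dose tally

        for _ in range(5000):
            e, x = E0, 0.0
            s = 1.0 + SIGMA_R * rng.normal()     # per-history range scaling
            while e > 1.0 and x < DEPTH_MAX:
                # CSDA stopping power from the inverted range-energy relation
                dedx = e ** (1.0 - P) / (P * ALPHA) / s
                de = min(dedx * STEP, e)
                edep[int(x / STEP)] += de
                e -= de
                x += STEP

        peak_depth = np.argmax(edep) * STEP
        print(f"Bragg peak near {peak_depth:.2f} cm; CSDA range {ALPHA * E0 ** P:.2f} cm")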

  10. Monte Carlo surface flux tallies

    SciTech Connect

    Favorite, Jeffrey A

    2010-11-19

    Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
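
    The variance problem and the cutoff remedy are easy to demonstrate numerically. In the toy below, crossing cosines are drawn with density p(mu) = 2*mu (the distribution produced by an isotropic angular flux on one side of a surface), so the one-sided flux estimator 1/mu has expectation 2 in these normalized units but infinite variance; substituting 2/mu_c inside the grazing band preserves the mean while taming the variance. This is a sketch of the scoring rule only, not of any production Monte Carlo code.

        import numpy as np

        rng = np.random.default_rng(2)

        N = 1_000_000
        mu = np.sqrt(rng.random(N))     # crossing cosines with pdf p(mu) = 2*mu
        MU_CUT = 0.05                   # cosine cutoff

        exact = 1.0 / mu                # standard 1/|mu| score (infinite variance)
        capped = np.where(mu < MU_CUT, 2.0 / MU_CUT, 1.0 / mu)  # cutoff substitute

        for name, s in (("1/mu", exact), ("cutoff", capped)):
            print(f"{name:6s}: mean {s.mean():.4f} (true 2.0), "
                  f"std of mean {s.std(ddof=1) / np.sqrt(N):.4f}")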

  11. Multidimensional stochastic approximation Monte Carlo.

    PubMed

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2). PMID:27415383

  12. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2).
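
    The flat-histogram machinery is compact enough to sketch. The toy below implements a Wang-Landau-style update with a decreasing 1/t gain schedule, close in spirit to SAMC, for a system of N independent two-state units whose "energy" is the number of units in the up state, so the exact density of states g(E) = C(N, E) is available for checking. It is a one-dimensional caricature of the multidimensional method the paper develops, with arbitrary run parameters.

        import numpy as np
        from math import comb

        rng = np.random.default_rng(3)

        N = 20                    # toy system: N independent binary "spins"
        theta = np.zeros(N + 1)   # running estimate of log g(E), E = 0..N
        T0 = 1_000.0              # gain schedule: gamma_t = T0 / max(T0, t)

        x = rng.integers(0, 2, size=N)
        e = int(x.sum())

        for t in range(1, 2_000_000):
            i = int(rng.integers(N))            # propose a single spin flip
            e_new = e + (1 - 2 * int(x[i]))
            # flat-histogram acceptance: favor rarely visited energies
            if np.log(rng.random()) < theta[e] - theta[e_new]:
                x[i] ^= 1
                e = e_new
            theta[e] += T0 / max(T0, t)         # gain update at current energy

        logg = theta - theta[0]                 # normalize so that g(0) = 1
        for k in range(0, N + 1, 5):
            print(f"E={k:2d}: est log g {logg[k]:7.3f}, "
                  f"exact {np.log(comb(N, k)):7.3f}")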

  13. 1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

    SciTech Connect

    T. EVANS; ET AL

    2000-08-01

    We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

  14. Qualitative Simulation of Photon Transport in Free Space Based on Monte Carlo Method and Its Parallel Implementation

    PubMed Central

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Bin; Wang, Lin; Peng, Kuan; Liang, Jimin; Tian, Jie

    2010-01-01

    During the past decade, the Monte Carlo method has been widely applied in optical imaging to simulate the photon transport process inside tissues. However, it has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, which consists of the simulation of photon transport both in tissues and in free space. Specifically, the simplification theory of the lens system is utilized to model the camera lens equipped in the optical imaging system, and the Monte Carlo method is employed to describe the energy transformation from the tissue surface to the CCD camera. The focusing effect of the camera lens is also considered to establish the relationship of corresponding points between the tissue surface and the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and efficient. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results. PMID:20689705

  15. Clinical CT-based calculations of dose and positron emitter distributions in proton therapy using the FLUKA Monte Carlo code

    PubMed Central

    Parodi, K; Ferrari, A; Sommerer, F; Paganetti, H

    2008-01-01

    Clinical investigations on post-irradiation PET/CT (positron emission tomography / computed tomography) imaging for in-vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modeling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield Unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). Resulting Monte Carlo dose distributions are generally found in good correspondence with calculations of the treatment planning program, except in a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or include more reaction channels based on additional cross-section data. Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation

  16. Clinical CT-based calculations of dose and positron emitter distributions in proton therapy using the FLUKA Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Parodi, K.; Ferrari, A.; Sommerer, F.; Paganetti, H.

    2007-07-01

    Clinical investigations on post-irradiation PET/CT (positron emission tomography/computed tomography) imaging for in vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project, we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modelling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). Resulting Monte Carlo dose distributions are generally found in good correspondence with calculations of the treatment planning program, except a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper, we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or include more reaction channels based on additional cross-section data. Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation
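
    The activation step described above (folding proton fluence with cross sections, then decaying the product through the imaging window) can be outlined as follows. The spectrum, the cross-section values and the thin-target approximation are all illustrative assumptions; a real calculation uses evaluated cross-section data per reaction channel and the per-voxel fluence from the transport code.

        import numpy as np

        # Illustrative fluence spectrum (protons/cm^2 per energy bin) and a
        # made-up activation cross section loosely in the range of 16O(p,x)15O;
        # real work uses evaluated data and MC-computed fluence.
        e_mev = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
        fluence = np.array([1e8, 2e8, 3e8, 2e8, 1e8])            # per cm^2 per bin
        sigma_cm2 = np.array([40.0, 60.0, 55.0, 45.0, 40.0]) * 1e-27  # mb -> cm^2

        N_TARGET = 3.3e22     # 16O nuclei per cm^3 of water (approx.)
        HALF_LIFE = 122.2     # s, 15O
        lam = np.log(2.0) / HALF_LIFE

        # produced 15O nuclei per cm^3 (thin-target activation approximation)
        n0 = N_TARGET * np.sum(fluence * sigma_cm2)

        # decays collected in a PET window [t1, t2] after (instantaneous) irradiation
        t1, t2 = 120.0, 600.0
        counts = n0 * (np.exp(-lam * t1) - np.exp(-lam * t2))
        print(f"15O produced: {n0:.3e} /cm^3, decays in window: {counts:.3e} /cm^3")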

  17. Monte-Carlo Opening Books for Amazons

    NASA Astrophysics Data System (ADS)

    Kloetzer, Julien

    Automatically creating opening books is a natural step towards the building of strong game-playing programs, especially when there is little available knowledge about the game. However, while recent popular Monte-Carlo Tree-Search programs showed strong results for various games, we show here that programs based on such methods cannot efficiently use opening books created using algorithms based on minimax. To overcome this issue, we propose to use an MCTS-based technique, Meta-MCTS, to create such opening books. This method, while requiring some tuning to arrive at the best opening book possible, shows promising results for creating an opening book for the game of the Amazons, even if this comes at the cost of removing its Monte-Carlo part.

  18. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    SciTech Connect

    Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.; Brown, Forrest B.

    2015-11-19

    Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work has been done on obtaining reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
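
    The scoring idea, one event contributing to many tally points, reduces to a kernel sum. The Python sketch below shows a 1-D collision-type KDE estimator with an Epanechnikov kernel over stand-in "collision" sites drawn from an exponential depth distribution; near x = 0 the estimate shows the usual kernel boundary bias, one flavor of the interface issues the abstract alludes to. The bandwidth and the synthetic data are arbitrary choices.

        import numpy as np

        def epanechnikov(u):
            # compact-support kernel, a common choice for KDE tallies
            return np.where(np.abs(u) < 1.0, 0.75 * (1.0 - u * u), 0.0)

        def kde_tally(collision_x, weights, tally_x, h):
            # every collision scores at every tally point within one bandwidth
            u = (tally_x[:, None] - collision_x[None, :]) / h
            return (weights[None, :] * epanechnikov(u)).sum(axis=1) / (h * weights.size)

        rng = np.random.default_rng(4)
        xs = rng.exponential(scale=2.0, size=50_000)   # stand-in collision depths
        w = np.ones_like(xs)
        grid = np.linspace(0.0, 8.0, 17)
        est = kde_tally(xs, w, grid, h=0.3)
        exact = 0.5 * np.exp(-grid / 2.0)              # true collision density
        print(np.round(est, 4))
        print(np.round(exact, 4))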

  19. Parameterization of brachytherapy source phase space file for Monte Carlo-based clinical brachytherapy dose calculation

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Zou, W.; Chen, T.; Kim, L.; Khan, A.; Haffty, B.; Yue, N. J.

    2014-01-01

    A common approach to implementing the Monte Carlo method for the calculation of brachytherapy radiation dose deposition is to use a phase space file containing information on particles emitted from a brachytherapy source. However, the loading of the phase space file during the dose calculation consumes a large amount of computer random access memory, imposing a higher requirement for computer hardware. In this study, we propose a method to parameterize the information (e.g., particle location, direction and energy) stored in the phase space file by using several probability distributions. This method was implemented for dose calculations of a commercial Ir-192 high dose rate source. Dose calculation accuracy of the parameterized source was compared to the results observed using the full phase space file in a simple water phantom and in a clinical breast cancer case. The results showed that the parameterized source, at a size of 200 kB, was as accurate as the source represented by the full 1.1 GB phase space file. By using the parameterized source representation, a compact Monte Carlo job can be designed, which allows an easy setup for parallel computing in brachytherapy planning.
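
    The storage trade the paper exploits can be mimicked with a one-dimensional example: replace a large sample of phase-space energies with a binned CDF a few kilobytes in size, and draw new particles by inverse-CDF sampling. The "file" here is synthetic (line energies loosely resembling Ir-192, with made-up weights and blur), and a real parameterization also covers position and direction, including their correlations.

        import numpy as np

        rng = np.random.default_rng(5)

        # Stand-in "phase space file": photon energies only (MeV), synthetic.
        ps_energy = rng.choice([0.296, 0.308, 0.317, 0.468, 0.612],
                               p=[0.29, 0.30, 0.37, 0.03, 0.01], size=1_000_000)
        ps_energy += rng.normal(0.0, 0.002, size=ps_energy.size)  # fake blur

        # Parameterize: histogram once, keep only bin edges + CDF (a few kB).
        counts, edges = np.histogram(ps_energy, bins=512)
        cdf = np.cumsum(counts) / counts.sum()

        def sample_energy(n):
            # inverse-CDF sampling from the compact representation
            u = rng.random(n)
            idx = np.searchsorted(cdf, u)
            return edges[idx] + (edges[idx + 1] - edges[idx]) * rng.random(n)

        resampled = sample_energy(1_000_000)
        print(f"mean energy: file {ps_energy.mean():.4f} MeV, "
              f"parameterized {resampled.mean():.4f} MeV")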

  20. Investigation of SIBM driven recrystallization in alpha Zirconium based on EBSD data and Monte Carlo modeling

    NASA Astrophysics Data System (ADS)

    Jedrychowski, M.; Bacroix, B.; Salman, O. U.; Tarasiuk, J.; Wronski, S.

    2015-08-01

    The work focuses on the influence of moderate plastic deformation on subsequent partial recrystallization of hexagonal zirconium (Zr702). In the considered case, strain induced boundary migration (SIBM) is assumed to be the dominating recrystallization mechanism. This hypothesis is analyzed and tested in detail using experimental EBSD-OIM data and Monte Carlo computer simulations. An EBSD investigation is performed on zirconium samples, which were channel-die compressed in two perpendicular directions: normal direction (ND) and transverse direction (TD) of the initial material sheet. The maximal applied strain was below 17%. Then, samples were briefly annealed in order to achieve a partly recrystallized state. Obtained EBSD data were analyzed in terms of texture evolution associated with a microstructural characterization, including: kernel average misorientation (KAM), grain orientation spread (GOS), twinning, grain size distributions, description of grain boundary regions. In parallel, Monte Carlo Potts model combined with experimental microstructures was employed in order to verify two main recrystallization scenarios: SIBM driven growth from deformed sub-grains and classical growth of recrystallization nuclei. It is concluded that simulation results provided by the SIBM model are in a good agreement with experimental data in terms of texture as well as microstructural evolution.
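
    The simulation side rests on a standard Potts-model kernel, sketched below in Python: lattice sites carry orientation labels, the energy counts unlike nearest-neighbour pairs (grain-boundary length), and Metropolis moves that adopt a neighbour's orientation make boundaries migrate. This generic grain-growth step is only the substrate; the paper's SIBM-versus-nucleation comparison and the EBSD-derived starting microstructures are not reproduced here, and the lattice size, Q and temperature are arbitrary.

        import numpy as np

        rng = np.random.default_rng(6)

        L, Q = 64, 32                  # lattice size, number of orientations
        spins = rng.integers(0, Q, size=(L, L))
        KT = 0.3                       # lattice temperature (arbitrary units)

        def neighbors(i, j):
            return [((i + 1) % L, j), ((i - 1) % L, j),
                    (i, (j + 1) % L), (i, (j - 1) % L)]

        def local_energy(i, j, q):
            # grain-boundary energy: one unit per unlike nearest neighbour
            return sum(spins[n] != q for n in neighbors(i, j))

        def mc_sweep():
            for _ in range(L * L):
                i, j = int(rng.integers(L)), int(rng.integers(L))
                # propose adopting a random neighbour's orientation
                ni, nj = neighbors(i, j)[int(rng.integers(4))]
                q_new = spins[ni, nj]
                dE = local_energy(i, j, q_new) - local_energy(i, j, spins[i, j])
                if dE <= 0 or rng.random() < np.exp(-dE / KT):
                    spins[i, j] = q_new

        for _ in range(50):
            mc_sweep()
        total = sum(local_energy(i, j, spins[i, j])
                    for i in range(L) for j in range(L))
        print("mean grain-boundary energy per site:", total / (2 * L * L))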

  1. Comparison of polynomial approximations to speed up planewave-based quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Parker, William D.; Umrigar, C. J.; Alfè, Dario; Petruzielo, F. R.; Hennig, Richard G.; Wilkins, John W.

    2015-04-01

    The computational cost of quantum Monte Carlo (QMC) calculations of realistic periodic systems depends strongly on the method of storing and evaluating the many-particle wave function. Previous work by Williamson et al. (2001) [35] and Alfè and Gillan (2004) [36] has demonstrated the reduction of the O(N^3) cost of evaluating the Slater determinant with planewaves to O(N^2) using localized basis functions. We compare four polynomial approximations as basis functions - interpolating Lagrange polynomials, interpolating piecewise-polynomial-form (pp-) splines, and basis-form (B-) splines (interpolating and smoothing). All these basis functions provide a similar speedup relative to the planewave basis. The pp-splines have eight times the memory requirement of the other methods. To test the accuracy of the basis functions, we apply them to the ground state structures of Si, Al, and MgO. The polynomial approximations differ in accuracy most strongly for MgO, and smoothing B-splines most closely reproduce the planewave value of the variational Monte Carlo energy. Using separate approximations for the Laplacian of the orbitals increases the accuracy sufficiently to justify the increased memory requirement, making smoothing B-splines, with separate approximation for the Laplacian, the preferred choice for approximating planewave-represented orbitals in QMC calculations.

  2. abcpmc: Approximate Bayesian Computation for Population Monte-Carlo code

    NASA Astrophysics Data System (ADS)

    Akeret, Joel

    2015-04-01

    abcpmc is a Python Approximate Bayesian Computing (ABC) Population Monte Carlo (PMC) implementation based on Sequential Monte Carlo (SMC) with Particle Filtering techniques. It is extendable with k-nearest neighbour (KNN) or optimal local covariance matrix (OLCM) perturbation kernels and has built-in support for massively parallelized sampling on a cluster using MPI.
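
    The algorithm family is compact enough to sketch end to end. Below is a generic ABC population Monte Carlo loop in the style of Beaumont et al., with a Gaussian perturbation kernel and a decreasing tolerance schedule, applied to a toy Gaussian-mean problem. It illustrates the idea but is not abcpmc's actual API, and every number in it is an arbitrary choice.

        import numpy as np

        rng = np.random.default_rng(7)

        # toy inference: recover the mean of a Gaussian from its sample mean
        data = rng.normal(2.0, 1.0, size=100)
        obs_stat = data.mean()

        def simulate(theta):
            return rng.normal(theta, 1.0, size=100).mean()

        def abc_pmc(n_particles=500, eps_schedule=(1.0, 0.5, 0.25, 0.1, 0.05)):
            # iteration 0: plain rejection sampling from the prior U(-10, 10)
            theta = np.empty(n_particles)
            for k in range(n_particles):
                while True:
                    t = rng.uniform(-10, 10)
                    if abs(simulate(t) - obs_stat) < eps_schedule[0]:
                        theta[k] = t
                        break
            w = np.full(n_particles, 1.0 / n_particles)
            for eps in eps_schedule[1:]:
                mean = np.average(theta, weights=w)
                sigma = np.sqrt(2.0 * np.average((theta - mean) ** 2, weights=w))
                new_theta = np.empty(n_particles)
                new_w = np.empty(n_particles)
                for k in range(n_particles):
                    while True:
                        # resample a particle and perturb it with the kernel
                        t = rng.normal(theta[rng.choice(n_particles, p=w)], sigma)
                        if -10 < t < 10 and abs(simulate(t) - obs_stat) < eps:
                            break
                    new_theta[k] = t
                    # importance weight: flat prior over mixture proposal density
                    kern = np.exp(-0.5 * ((t - theta) / sigma) ** 2) / sigma
                    new_w[k] = 1.0 / np.sum(w * kern)
                theta, w = new_theta, new_w / new_w.sum()
            return theta, w

        theta, w = abc_pmc()
        print(f"posterior mean ~ {np.average(theta, weights=w):.3f} (truth near 2.0)")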

  3. Monte Carlo Test Assembly for Item Pool Analysis and Extension

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.

    2005-01-01

    A new test assembly algorithm based on a Monte Carlo random search is presented in this article. A major advantage of the Monte Carlo test assembly over other approaches (integer programming or enumerative heuristics) is that it performs a uniform sampling from the item pool, which provides every feasible item combination (test) with an equal…
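
    The uniform-sampling property comes from drawing candidate tests directly at random from the pool and keeping only those that satisfy the constraints, so every feasible test is equally likely. The Python sketch below does this for a hypothetical pool with invented content-coverage and mean-difficulty constraints; the published algorithm's constraint handling is more sophisticated than plain rejection.

        import random

        random.seed(8)

        # hypothetical item pool: (item_id, content_area, difficulty)
        pool = [(i, random.choice("ABC"), random.uniform(-2, 2)) for i in range(300)]

        TEST_LEN = 20
        AREA_MIN = {"A": 5, "B": 5, "C": 5}   # content-coverage constraints
        DIFF_WINDOW = (-0.2, 0.2)             # mean-difficulty target window

        def random_feasible_test(max_tries=200_000):
            # Monte Carlo assembly: sample uniformly until a feasible test appears
            for _ in range(max_tries):
                test = random.sample(pool, TEST_LEN)
                counts = {a: 0 for a in AREA_MIN}
                for _, area, _ in test:
                    counts[area] += 1
                mean_diff = sum(d for _, _, d in test) / TEST_LEN
                if all(counts[a] >= AREA_MIN[a] for a in AREA_MIN) \
                        and DIFF_WINDOW[0] <= mean_diff <= DIFF_WINDOW[1]:
                    return test
            raise RuntimeError("no feasible test found")

        test = random_feasible_test()
        print(sorted(i for i, _, _ in test))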

  4. Development of a practical fuel management system for PSBR based on advanced three-dimensional Monte Carlo coupled depletion methodology

    NASA Astrophysics Data System (ADS)

    Tippayakul, Chanatip

    The main objective of this research is to develop a practical fuel management system for the Pennsylvania State University Breazeale research reactor (PSBR) based on several advanced Monte Carlo coupled depletion methodologies. Primarily, this research involved two major activities: model and method developments and analyses and validations of the developed models and methods. The starting point of this research was the utilization of the earlier developed fuel management tool, TRIGSIM, to create the Monte Carlo model of core loading 51 (end of the core loading). It was found when comparing the normalized power results of the Monte Carlo model to those of the current fuel management system (using HELIOS/ADMARC-H) that they agreed reasonably well (within 2% to 3% differences on average). Moreover, the reactivity of some fuel elements was calculated by the Monte Carlo model and it was compared with measured data. It was also found that the fuel element reactivity results of the Monte Carlo model were in good agreement with the measured data. However, the subsequent task of analyzing the conversion from the core loading 51 to the core loading 52 using TRIGSIM showed quite significant differences in individual control rod worths between the Monte Carlo model and the current methodology model. The differences were mainly caused by inconsistent absorber atomic number densities between the two models. Hence, the model of the first operating core (core loading 2) was revised in light of new information about the absorber atomic densities to validate the Monte Carlo model with the measured data. With the revised Monte Carlo model, the results agreed better with the measured data. Although TRIGSIM showed good modeling and capabilities, the accuracy of TRIGSIM could be further improved by adopting more advanced algorithms. Therefore, TRIGSIM was planned to be upgraded. The first task of upgrading TRIGSIM involved the improvement of the temperature modeling capability. The new TRIGSIM was

  5. A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system

    SciTech Connect

    Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G.

    2014-12-15

    Purpose: Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. Methods: An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. Results: For relatively large and complex three-field head and neck cases, i.e., >100 000 spots with a target volume of ∼1000 cm³ and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to a commercial TPS plans based on DVH comparisons. Conclusions: A MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45

  6. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-01

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained through solving the plan optimization problem in the last iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result that differed from the ground truth by 3% in the fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with, on average, 1.2 × 10^5 particles per beamlet. Correspondingly, the computation time

  7. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-01

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained through solving the plan optimization problem in the last iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result that differed from the ground truth by 3% in the fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with, on average, 1.2 × 10^5 particles per beamlet. Correspondingly, the computation
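
    The efficiency device at the heart of the scheme, spending histories where the optimizer says the fluence is, reduces to a particle-allocation rule like the sketch below. This is a guess at the flavor of the method with an arbitrary floor so cold beamlets keep usable statistics; the actual rule in the paper may differ. In the iterative loop the abstract describes, such an allocation would be recomputed after every optimization pass.

        import numpy as np

        def allocate_particles(intensities, total, n_floor=1000):
            # histories per beamlet proportional to its current optimized
            # intensity, with a floor so cold beamlets keep usable statistics
            # (the floor means the grand total can slightly exceed `total`)
            w = np.asarray(intensities, dtype=float)
            w = w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))
            return np.maximum(n_floor, np.round(w * total)).astype(int)

        # after an optimization pass, most beamlets carry little intensity
        intensities = [0.0, 0.01, 0.05, 0.4, 1.0, 2.5]
        print(allocate_particles(intensities, total=1_200_000))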

  8. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using the sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently

  9. Development of a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport.

    PubMed

    Jia, Xun; Gu, Xuejun; Sempau, Josep; Choi, Dongju; Majumdar, Amitava; Jiang, Steve B

    2010-06-01

    Monte Carlo simulation is the most accurate method for absorbed dose calculations in radiotherapy. Its efficiency still requires improvement for routine clinical applications, especially for online adaptive radiotherapy. In this paper, we report our recent development on a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport. We have implemented the dose planning method (DPM) Monte Carlo dose calculation package (Sempau et al 2000 Phys. Med. Biol. 45 2263-91) on the GPU architecture under the CUDA platform. The implementation has been tested with respect to the original sequential DPM code on the CPU in phantoms with water-lung-water or water-bone-water slab geometry. A 20 MeV mono-energetic electron point source or a 6 MV photon point source is used in our validation. The results demonstrate adequate accuracy of our GPU implementation for both electron and photon beams in the radiotherapy energy range. Speed-up factors of about 5.0-6.6 times have been observed, using an NVIDIA Tesla C1060 GPU card against a 2.27 GHz Intel Xeon CPU processor. PMID:20463376

  10. New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation.

    PubMed

    Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W

    2015-02-21

    In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient's 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (i.e. phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces a more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve both dose accuracy and computation speed simultaneously, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry. PMID:25615567

  11. Monte Carlo-based QA for IMRT of head and neck cancers

    NASA Astrophysics Data System (ADS)

    Tang, F.; Sham, J.; Ma, C.-M.; Li, J.-S.

    2007-06-01

    It is well known that the presence of a large air cavity in a dense medium (or patient) introduces significant electronic disequilibrium when irradiated with a megavoltage X-ray field. This condition may be worsened by the possible use of tiny beamlets in intensity-modulated radiation therapy (IMRT). Commercial treatment planning systems (TPSs), in particular those based on the pencil-beam method, do not provide accurate dose computation for the lungs and other cavity-laden body sites such as the head and neck. In this paper we present the use of the Monte Carlo (MC) technique for dose re-calculation of IMRT of head and neck cancers. In our clinic, a turn-key software system is set up for MC calculation and comparison with TPS-calculated treatment plans as part of the quality assurance (QA) programme for IMRT delivery. A set of 10 off-the-shelf PCs is employed as the MC calculation engine with treatment plan parameters imported from the TPS via a graphical user interface (GUI) which also provides a platform for launching remote MC simulation and subsequent dose comparison with the TPS. The TPS-segmented intensity maps are used as input for the simulation hence skipping the time-consuming simulation of the multi-leaf collimator (MLC). The primary objective of this approach is to assess the accuracy of the TPS calculations in the presence of air cavities in the head and neck whereas the accuracy of leaf segmentation is verified by fluence measurement using a fluoroscopic camera-based imaging device. This measurement can also validate the correct transfer of intensity maps to the record-and-verify system. Comparisons between TPS and MC calculations of 6 MV IMRT for typical head and neck treatments reveal regional consistency in dose distribution except at and around the sinuses where our pencil-beam-based TPS sometimes over-predicts the dose by up to 10%, depending on the size of the cavities. In addition, dose re-buildup of up to 4% is observed at the posterior nasopharyngeal
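
    The pass/fail metric quoted here and in several of the records above is the gamma index. A minimal 1-D global-gamma sketch, with an assumed 2%/2 mm criterion and a 10% dose threshold, looks as follows; clinical implementations are 3-D, interpolated and heavily optimized, and the toy dose profiles are invented.

        import numpy as np

        def gamma_1d(dose_ref, dose_eval, x, dose_tol=0.02, dist_tol=2.0):
            # global gamma index on a common 1-D grid (positions x in mm)
            d_norm = dose_tol * dose_ref.max()
            gam = np.empty_like(dose_ref)
            for i, (xi, di) in enumerate(zip(x, dose_ref)):
                dd = (dose_eval - di) / d_norm
                dx = (x - xi) / dist_tol
                gam[i] = np.sqrt(dx ** 2 + dd ** 2).min()
            return gam

        x = np.linspace(0.0, 100.0, 201)                 # mm
        ref = np.exp(-((x - 50.0) / 15.0) ** 2)          # toy reference profile
        ev = np.exp(-((x - 50.5) / 15.0) ** 2) * 1.01    # 0.5 mm shift, 1% scaling

        g = gamma_1d(ref, ev, x)
        mask = ref > 0.1 * ref.max()                     # 10% dose threshold
        print(f"gamma passing rate (2%/2 mm): {100 * np.mean(g[mask] <= 1):.1f}%")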

  12. Monte Carlo-based multiphysics coupling analysis of x-ray pulsar telescope

    NASA Astrophysics Data System (ADS)

    Li, Liansheng; Deng, Loulou; Mei, Zhiwu; Zuo, Fuchang; Zhou, Hao

    2015-10-01

    The X-ray pulsar telescope (XPT) is a complex optical payload involving optical, mechanical, electrical and thermal disciplines, for which multiphysics coupling analysis (MCA) plays an important role in improving in-orbit performance. However, conventional MCA methods encounter two serious problems when dealing with the XPT. One is that neither the energy nor the reflectivity information of the X-rays can be taken into consideration, which misrepresents the essential behaviour of the XPT. The other is that coupling data cannot be transferred automatically among the different disciplines, leading to computational inefficiency and high design cost. Therefore, a new MCA method for the XPT is proposed based on the Monte Carlo method and total-reflection theory. The main idea, procedures and operational steps of the proposed method are addressed in detail. Firstly, the method takes both the energy and reflectivity information of the X-rays into consideration simultaneously, and formulates the thermal-structural coupling equation and the multiphysics coupling analysis model based on the finite element method; thermal-structural coupling analyses under different working conditions are then carried out. Secondly, the mirror deformations are obtained using a construction geometry function, a polynomial function is fitted to the deformed mirror, and the fitting error is evaluated. Thirdly, the focusing performance of the XPT is evaluated by the RMS of the dispersion spot. Finally, a Wolter-I XPT is taken as an example to verify the proposed MCA method. The simulation results show that the thermal-structural coupling deformation is larger than the others, and the variation law of the effect of deformation on the focusing performance is obtained. The focusing performance under thermal-structural, thermal and structural deformations degraded by 30.01%, 14.35% and 7.85%, respectively, with RMS dispersion-spot values of 2.9143 mm, 2.2038 mm and 2.1311 mm. As a result, the validity of the proposed method is verified through

  13. Monte Carlo-based calibration and uncertainty analysis of a coupled plant growth and hydrological model

    NASA Astrophysics Data System (ADS)

    Houska, T.; Multsch, S.; Kraft, P.; Frede, H.-G.; Breuer, L.

    2014-04-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures - for example, by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow for a more detailed analysis of the dynamic behaviour of the soil-plant interface. We coupled two of such high-process-oriented independent models and calibrated both models simultaneously. The catchment modelling framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem model of the soil hydraulic properties. CMF was coupled with the plant growth modelling framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo-based generalized likelihood uncertainty estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10^6 model runs randomly drawn from a uniform distribution. The model was applied to three sites with different management in Müncheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storages, stems and leaves. The shape parameter of the retention curve n was highly constrained, whereas other parameters of the retention curve showed a large equifinality. We attribute this slightly poorer model performance to missing leaf senescence, which is currently not implemented in PMF. The most constrained parameters for the
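
    The GLUE recipe itself is short: sample parameter sets from broad priors, score each run with an informal likelihood, keep the "behavioural" runs above a threshold, and use their likelihood weights for prediction bounds. The sketch below applies it to a stand-in exponential-decay model; the coupled CMF-PMF model and all numbers here are replaced by toy choices.

        import numpy as np

        rng = np.random.default_rng(9)

        def model(theta, t):
            # stand-in forward model in place of the coupled CMF-PMF system
            a, b = theta
            return a * np.exp(-b * t)

        t_obs = np.linspace(0.0, 10.0, 30)
        obs = model((2.0, 0.3), t_obs) + rng.normal(0.0, 0.05, t_obs.size)

        n = 20_000   # Monte Carlo draws from uniform priors
        thetas = np.column_stack([rng.uniform(0.5, 5.0, n),
                                  rng.uniform(0.01, 1.0, n)])
        sims = np.array([model(th, t_obs) for th in thetas])

        # informal likelihood: Nash-Sutcliffe efficiency of each run
        sse = ((sims - obs) ** 2).sum(axis=1)
        nse = 1.0 - sse / ((obs - obs.mean()) ** 2).sum()

        keep = nse > 0.7                   # 'behavioural' threshold (arbitrary)
        w = nse[keep] / nse[keep].sum()
        for name, col in zip(("a", "b"), thetas[keep].T):
            lo, hi = np.quantile(col, [0.05, 0.95])
            print(f"{name}: weighted mean {np.average(col, weights=w):.3f}, "
                  f"behavioural 90% range [{lo:.3f}, {hi:.3f}]")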

  14. An enhanced Monte Carlo outlier detection method.

    PubMed

    Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi

    2015-09-30

    Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the root mean square error of prediction, validated by Kovats retention indices, decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc. PMID:26226927
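
    The underlying Monte Carlo cross-prediction idea can be shown in a few lines: repeatedly split the data at random, fit a simple model on the training part, and accumulate each sample's test-set prediction errors; samples whose error distribution sits far from the rest are flagged. This is the basic method the authors enhance (their refinement builds cross-prediction models on determinate normal samples only); the data, model and threshold below are invented.

        import numpy as np

        rng = np.random.default_rng(10)

        # toy data: linear relation with three corrupted samples
        n = 100
        X = rng.uniform(0, 10, size=(n, 1))
        y = 3.0 * X[:, 0] + 1.0 + rng.normal(0, 0.5, n)
        y[[5, 42, 77]] += 15.0                 # injected outliers

        err_sum = np.zeros(n)
        err_cnt = np.zeros(n)
        for _ in range(2000):                  # Monte Carlo cross-prediction
            idx = rng.permutation(n)
            train, test = idx[:70], idx[70:]
            A = np.column_stack([X[train, 0], np.ones(train.size)])
            coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
            pred = coef[0] * X[test, 0] + coef[1]
            err_sum[test] += np.abs(pred - y[test])
            err_cnt[test] += 1

        mean_err = err_sum / err_cnt
        cut = mean_err.mean() + 3 * mean_err.std()   # arbitrary flagging rule
        print("flagged samples:", np.where(mean_err > cut)[0])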

  15. Performance analysis of the Monte Carlo code MCNP4A for photon-based radiotherapy applications

    SciTech Connect

    DeMarco, J.J.; Solberg, T.D.; Wallace, R.E.; Smathers, J.B.

    1995-12-31

    The Los Alamos code MCNP4A (Monte Carlo N-Particle version 4A) is currently used to simulate a variety of problems ranging from nuclear reactor analysis to boron neutron capture therapy. This study is designed to evaluate MCNP4A as the dose calculation system for photon-based radiotherapy applications. A graphical user interface, MCNPRT (MCNP Radiation Therapy), has been developed which automatically sets up the geometry and photon source requirements for three-dimensional simulations using Computed Tomography (CT) data. Preliminary results suggest the code is capable of calculating satisfactory dose distributions in a variety of simulated homogeneous and heterogeneous phantoms. The major drawback for this dosimetry system is the amount of time to obtain a statistically significant answer. MCNPRT allows the user to analyze the performance of MCNP4A as a function of material, geometry resolution and MCNP4A photon and electron physics parameters. A typical simulation geometry consists of a 10 MV photon point source incident on a 15 x 15 x 15 cm³ phantom composed of water voxels ranging in size from 10 x 10 x 10 mm³ to 2 x 2 x 2 mm³. As the voxel size is decreased, a larger percentage of time is spent tracking photons through the voxelized geometry as opposed to the secondary electrons. A PRPR Patch file is under development that will optimize photon transport within the simulation phantom specifically for radiotherapy applications. MCNP4A also supports parallel processing capabilities via the Parallel Virtual Machine (PVM) message passing system. A dedicated network of five SUN SPARC2 processors produced a wall-clock speedup of 4.4 based on a simulation phantom containing 5 x 5 x 5 mm³ water voxels. The code was also tested on the 80 node IBM RS/6000 cluster at the Maui High Performance Computing Center (MHPCC). A non-dedicated system of 75 processors produces a wall-clock speedup of 29 relative to one SUN SPARC2 computer.

  16. Monte Carlo-based adaptive EPID dose kernel accounting for different field size responses of imagers

    PubMed Central

    Wang, Song; Gardner, Joseph K.; Gordon, John J.; Li, Weidong; Clews, Luke; Greer, Peter B.; Siebers, Jeffrey V.

    2009-01-01

    The aim of this study is to present an efficient method to generate imager-specific Monte Carlo (MC)-based dose kernels for amorphous silicon-based electronic portal imaging device (EPID) dose prediction and determine the effective backscattering thicknesses for such imagers. EPID field size-dependent responses were measured for five matched Varian accelerators from three institutions with 6 MV beams at a source-to-detector distance (SDD) of 105 cm. For two imagers, measurements were made with and without the imager mounted on the robotic supporting arm. Monoenergetic energy deposition kernels with 0–2.5 cm of water backscattering thicknesses were simultaneously computed by MC to a high precision. For each imager, the backscattering thickness required to match measured field size responses was determined. The monoenergetic kernel method was validated by comparing measured and predicted field size responses at 150 cm SDD, 10×10 cm2 multileaf collimator (MLC) sliding window fields created with 5, 10, 20, and 50 mm gaps, and a head-and-neck (H&N) intensity modulated radiation therapy (IMRT) patient field. Field size responses for the five different imagers deviated by up to 1.3%. When imagers were removed from the robotic arms, response deviations were reduced to 0.2%. All imager field size responses were captured by using between 1.0 and 1.6 cm of backscatter. The predicted field size responses by the imager-specific kernels matched measurements for all involved imagers with a maximal deviation of 0.34%. The maximal deviation between the predicted and measured field size responses at 150 cm SDD is 0.39%. The maximal deviation between the predicted and measured MLC sliding window fields is 0.39%. For the patient field, gamma analysis yielded that 99.0% of the pixels have γ<1 by the 2%, 2 mm criteria with a 3% dose threshold. Tunable imager-specific kernels can be generated rapidly and accurately in a single MC simulation. The resultant kernels are imager position

  17. Monte Carlo simulations of compact gamma cameras based on avalanche photodiodes.

    PubMed

    Després, Philippe; Funk, Tobias; Shah, Kanai S; Hasegawa, Bruce H

    2007-06-01

    Avalanche photodiodes (APDs), and in particular position-sensitive avalanche photodiodes (PSAPDs), are an attractive alternative to photomultiplier tubes (PMTs) for reading out scintillators for PET and SPECT. These solid-state devices offer high gain and quantum efficiency, and can potentially lead to more compact and robust imaging systems with improved spatial and energy resolution. In order to evaluate this performance improvement, we have conducted Monte Carlo simulations of gamma cameras based on avalanche photodiodes. Specifically, we investigated the relative merit of discrete APDs and PSAPDs in a simple continuous crystal gamma camera. The simulated camera was composed of either a 4 x 4 array of four-channel 8 x 8 mm2 PSAPDs or an 8 x 8 array of 4 x 4 mm2 discrete APDs. These configurations, each requiring 64 readout channels, were used to read the scintillation light from a 6 mm thick continuous CsI:Tl crystal covering the entire 3.6 x 3.6 cm2 photodiode array. The simulations, conducted with GEANT4, accounted for the optical properties of the materials, the noise characteristics of the photodiodes and the nonlinear charge division in PSAPDs. The performance of the simulated camera was evaluated in terms of spatial resolution, energy resolution and spatial uniformity at 99mTc (140 keV) and 125I (approximately 30 keV) energies. Intrinsic spatial resolutions of 1.0 and 0.9 mm FWHM were obtained for the APD- and PSAPD-based cameras, respectively, for 99mTc, with corresponding values of 1.2 and 1.3 mm FWHM for 125I. The simulations yielded maximal energy resolutions of 7% and 23% for 99mTc and 125I, respectively. PSAPDs also provided better spatial uniformity than APDs in the simple system studied. These results suggest that APDs constitute an attractive technology especially suitable for building compact, small field of view gamma cameras dedicated, for example, to small animal or organ imaging. PMID:17505089
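
    The abstract does not spell out the positioning algorithm; for a continuous crystal read by a photodiode array, centroid (Anger-logic) positioning is the standard baseline. A minimal sketch under that assumption, with an assumed 4.5 mm photodiode pitch and a synthetic light distribution:

        import numpy as np

        # 8 x 8 grid of 4 x 4 mm2 discrete APDs spanning ~3.6 x 3.6 cm2.
        pitch = 4.5                                # mm, assumed centre-to-centre
        coords = (np.arange(8) - 3.5) * pitch      # photodiode centre positions

        def anger_position(signals):
            """Centroid (Anger-logic) estimate from an 8x8 array of APD signals."""
            signals = np.asarray(signals, float)
            total = signals.sum()
            x = (signals.sum(axis=0) * coords).sum() / total
            y = (signals.sum(axis=1) * coords).sum() / total
            return x, y

        # Toy event: Gaussian light spread centred at (2.0, -1.0) mm plus noise.
        rng = np.random.default_rng(1)
        xx, yy = np.meshgrid(coords, coords)
        light = np.exp(-((xx - 2.0) ** 2 + (yy + 1.0) ** 2) / (2 * 6.0 ** 2))
        print(anger_position(light + rng.normal(0, 0.01, light.shape)))  # ~(2, -1)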

  18. Dual-energy CT-based material extraction for tissue segmentation in Monte Carlo dose calculations

    NASA Astrophysics Data System (ADS)

    Bazalova, Magdalena; Carrier, Jean-François; Beaulieu, Luc; Verhaegen, Frank

    2008-05-01

    Monte Carlo (MC) dose calculations are performed on patient geometries derived from computed tomography (CT) images. For most available MC codes, the Hounsfield units (HU) in each voxel of a CT image have to be converted into mass density (ρ) and material type. This is typically done with a (HU; ρ) calibration curve, which may lead to mis-assignment of media. In this work, an improved material segmentation using dual-energy CT-based material extraction is presented. For this purpose, the differences in the extracted effective atomic numbers Z and relative electron densities ρe of each voxel are used. Dual-energy CT material extraction, based on a parametrization of the linear attenuation coefficient, was performed for 17 tissue-equivalent inserts inside a solid water phantom. Scans of the phantom were acquired at 100 kVp and 140 kVp, from which the Z and ρe values of each insert were derived. The mean errors on Z and ρe extraction were 2.8% and 1.8%, respectively. Phantom dose calculations were performed for 250 kVp and 18 MV photon beams and an 18 MeV electron beam in the EGSnrc/DOSXYZnrc code. Two material assignments were used: the conventional (HU; ρ) and the novel (HU; ρ, Z) dual-energy CT tissue segmentation. The dose calculation errors using the conventional tissue segmentation were as high as 17% in a mis-assigned soft bone tissue-equivalent material for the 250 kVp photon beam. Similarly, the errors for the 18 MeV electron beam and the 18 MV photon beam were up to 6% and 3% in some mis-assigned media. The assignment of all tissue-equivalent inserts was accurate using the novel dual-energy CT material assignment. As a result, the dose calculation errors were below 1% in all beam arrangements. A comparable improvement in dose calculation accuracy is expected for human tissues. The dual-energy tissue segmentation offers a significantly higher accuracy compared to the conventional single-energy segmentation.
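
    The extraction step can be illustrated with a common two-material-style parametrization mu(E) = rho_e * (a_E + b_E * Z^m), which makes each voxel a small 2x2 system in (Z, rho_e). The coefficients below are placeholders, not the paper's calibrated values; a round-trip check shows the algebra:

        import numpy as np

        m = 3.3                      # typical exponent for photoelectric Z-dependence
        a = {100: 0.20, 140: 0.18}   # hypothetical energy-dependent coefficients
        b = {100: 4.0e-4, 140: 1.5e-4}

        def extract(mu100, mu140):
            """Solve the per-voxel system for (Z, rho_e) from two scans."""
            r = mu100 / mu140
            u = (r * a[140] - a[100]) / (b[100] - r * b[140])   # u = Z**m
            Z = u ** (1.0 / m)
            rho_e = mu100 / (a[100] + b[100] * u)
            return Z, rho_e

        # Round-trip check with soft-bone-like values Z = 10, rho_e = 1.2:
        mu100 = 1.2 * (a[100] + b[100] * 10 ** m)
        mu140 = 1.2 * (a[140] + b[140] * 10 ** m)
        print(extract(mu100, mu140))   # -> (10.0, 1.2)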

  19. Monte Carlo dose mapping on deforming anatomy

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Siebers, Jeffrey V.

    2009-10-01

    This paper proposes a Monte Carlo-based energy and mass congruent mapping (EMCM) method to calculate the dose on deforming anatomy. Unlike dose interpolation methods, EMCM separately maps each voxel's deposited energy and mass from a source image to a reference image with a displacement vector field (DVF) generated by deformable image registration (DIR). EMCM was compared with two other dose mapping methods: energy-based dose interpolation (EBDI) and trilinear dose interpolation (TDI). These methods were implemented in EGSnrc/DOSXYZnrc, validated using a numerical deformable phantom and compared for clinical CT images. On the numerical phantom with an analytically invertible deformation map, EMCM reproduced the analytic dose solution exactly, while EBDI and TDI had average dose errors of 2.5% and 6.0%, respectively. For a lung patient's IMRT treatment plan, EBDI and TDI differed from EMCM by 1.96% and 7.3%, respectively, over the patient's entire dose region. As a 4D Monte Carlo dose calculation technique, EMCM is accurate and its speed is comparable to 3D Monte Carlo simulation. This method may serve as a valuable tool for accurate dose accumulation as well as for 4D dosimetry QA.
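
    The core idea of EMCM can be sketched in a few lines: deposited energy and voxel mass are pushed through the DVF separately, and only their ratio is formed on the reference grid, so dose is energy-weighted rather than interpolated. A nearest-voxel simplification (the real method subdivides voxels; the array names are hypothetical):

        import numpy as np

        def emcm_map(energy_src, mass_src, dvf_target_voxel):
            """Energy/mass congruent mapping (sketch).

            energy_src, mass_src : per-voxel deposited energy (J) and mass (kg)
                on the source image, flattened.
            dvf_target_voxel     : for each source voxel, the index of the
                reference voxel it maps to under the registration DVF.
            """
            n_ref = dvf_target_voxel.max() + 1
            e_ref = np.bincount(dvf_target_voxel, weights=energy_src, minlength=n_ref)
            m_ref = np.bincount(dvf_target_voxel, weights=mass_src, minlength=n_ref)
            with np.errstate(divide="ignore", invalid="ignore"):
                return np.where(m_ref > 0, e_ref / m_ref, 0.0)   # dose in Gy

        # Two source voxels collapsing into one reference voxel: energies and
        # masses add, so the mapped dose is their ratio, not an interpolation.
        print(emcm_map(np.array([1e-3, 2e-3]), np.array([1e-3, 1e-3]),
                       np.array([0, 0])))     # -> [1.5] Gy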

  20. Experimental validation of a rapid Monte Carlo based micro-CT simulator.

    PubMed

    Colijn, A P; Zbijewski, W; Sasov, A; Beekman, F J

    2004-09-21

    We describe a newly developed, accelerated Monte Carlo simulator of a small animal micro-CT scanner. Transmission measurements using aluminium slabs are employed to estimate the spectrum of the x-ray source. The simulator incorporating this spectrum is validated with micro-CT scans of physical water phantoms of various diameters, some containing stainless steel and Teflon rods. Good agreement is found between simulated and real data: the normalized error of simulated projections, as compared to the real ones, is typically smaller than 0.05. The reconstructions obtained from simulated and real data are also found to be similar. Thereafter, effects of scatter are studied using a voxelized software phantom representing a rat body. It is shown that the scatter fraction can reach tens of percent in specific areas of the body, and therefore scatter can significantly affect quantitative accuracy in small animal CT imaging. PMID:15509068
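
    The slab-based spectrum estimation named above is, in essence, a small unfolding problem: measured transmissions T(t) = sum_i w_i * exp(-mu_i * t) are fit for non-negative energy-bin weights w_i. A sketch with illustrative attenuation values and synthetic measurements:

        import numpy as np
        from scipy.optimize import nnls

        # Energy bins and their Al attenuation coefficients (illustrative values).
        mu = np.array([1.20, 0.55, 0.35, 0.25])      # 1/mm, e.g. 20/40/60/80 keV
        slabs = np.array([0.0, 0.5, 1.0, 2.0, 4.0])  # Al slab thicknesses (mm)

        # Forward model: T(t) = sum_i w_i * exp(-mu_i * t).
        A = np.exp(-np.outer(slabs, mu))
        w_true = np.array([0.1, 0.4, 0.3, 0.2])
        T_meas = A @ w_true + np.random.default_rng(2).normal(0, 1e-4, slabs.size)

        w_est, _ = nnls(A, T_meas)                   # non-negative spectrum estimate
        print(w_est / w_est.sum())                    # approximately recovers w_true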

  1. [Study of Determination of Oil Mixture Components Content Based on Quasi-Monte Carlo Method].

    PubMed

    Wang, Yu-tian; Xu, Jing; Liu, Xiao-fei; Chen, Meng-han; Wang, Shi-tao

    2015-05-01

    Gasoline, kerosene and diesel are processed from crude oil over different distillation ranges: the boiling range of gasoline is 35~205 °C, that of kerosene is 140~250 °C, and that of diesel is 180~370 °C. At the same time, the carbon chain lengths of the different mineral oils differ: gasoline lies within the scope of C7 to C11, kerosene within C12 to C15, and diesel within C15 to C18. The recognition and quantitative measurement of the three kinds of mineral oil are based on the different fluorescence spectra arising from their different carbon number distribution characteristics. Mineral oil pollution occurs frequently, so monitoring mineral oil content in the ocean is very important. A new method is proposed for determining the component contents of a mineral oil mixture with overlapping spectra: the characteristic peak power of the three-dimensional fluorescence spectrum is integrated using the quasi-Monte Carlo method, combined with an optimization algorithm that selects the optimal number of characteristic peaks and the range of the integration region, and the resulting nonlinear equations are solved using the BFGS method (a rank-two quasi-Newton update named after Broyden, Fletcher, Goldfarb and Shanno). The accumulated peak power over the determined points in the selected area is sensitive to small changes in the fluorescence spectral line, so the measurement is sensitive to small changes in component content. At the same time, compared with single-point measurement, the measurement sensitivity is improved because the selection of many points reduces the influence of random error. Three-dimensional fluorescence spectra and fluorescence contour spectra of the single mineral oils and of the mixture are measured, taking kerosene, diesel and gasoline as the research objects, with each mineral oil regarded as a whole rather than resolving its individual components. Six characteristic peaks are
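
    The characteristic peak power integration can be illustrated with a low-discrepancy point set over the peak region of a synthetic excitation-emission surface; SciPy's Sobol sampler stands in for whatever quasi-random generator the authors used, and all numbers are invented:

        import numpy as np
        from scipy.stats import qmc

        # Synthetic excitation-emission intensity surface with one Gaussian peak.
        def intensity(ex, em):
            return np.exp(-((ex - 310) ** 2 + (em - 360) ** 2) / (2 * 15 ** 2))

        # Integration region around the characteristic peak (nm).
        lo, hi = np.array([280.0, 330.0]), np.array([340.0, 390.0])

        sampler = qmc.Sobol(d=2, scramble=True, seed=3)
        pts = qmc.scale(sampler.random_base2(m=10), lo, hi)   # 1024 Sobol points
        vol = np.prod(hi - lo)
        integral = vol * intensity(pts[:, 0], pts[:, 1]).mean()
        print(f"peak power ~ {integral:.1f}")

    Compared with pseudo-random points, the low-discrepancy set covers the peak region more evenly, which is what makes the integral stable against small spectral shifts.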

  2. Cloud-based Monte Carlo modelling of BSSRDF for the rendering of human skin appearance (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.

    2016-03-01

    We present a new Monte Carlo based approach for the modelling of the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. Variations in both skin tissue structure and the major chromophores are taken into account for the different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computing devices and operating systems. Simulated human skin reflectance spectra, the corresponding skin colours and examples of 3D face rendering are presented and compared with the results of phantom studies.

  3. Evaluation of the interindividual human variation in bioactivation of methyleugenol using physiologically based kinetic modeling and Monte Carlo simulations.

    PubMed

    Al-Subeihi, Ala A A; Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert; van Bladeren, Peter J; Rietjens, Ivonne M C M; Punt, Ans

    2015-03-01

    The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1'-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1'-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1'-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1'-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1'-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1'-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. PMID:25549870
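
    The percentile-ratio step is easy to reproduce in outline: sample per-individual enzyme activities, push them through the kinetic model, and divide percentiles of the resulting distribution. The one-line "model" below is only a stand-in for the full PBK model, and the lognormal parameters are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 100_000

        # Lognormal variation in enzyme activities (a common assumption; the
        # sigmas here are invented, not the fitted values of the paper).
        v_1a2 = rng.lognormal(0.0, 0.5, n)    # P450 1A2 bioactivation
        v_sult = rng.lognormal(0.0, 0.6, n)   # sulfotransferase activation
        v_detox = rng.lognormal(0.0, 0.4, n)  # competing oxidation/detoxification

        # Stand-in for the PBK output: fraction ending up as the sulfate ester.
        formation = v_1a2 * v_sult / (v_sult + v_detox)

        p50, p90, p99 = np.percentile(formation, [50, 90, 99])
        print(f"CSAF(90th) = {p90 / p50:.2f}, CSAF(99th) = {p99 / p50:.2f}")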

  4. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.

    PubMed

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-10-21

    Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model should be preferable for GPU-based MC dose engines to a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
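
    A sketch of what sampling from one PSR might look like, assuming the ring is summarized by a radial extent, a narrow energy bin and a fitted Gaussian polar-angle spread; the field names and numbers are hypothetical, and the radius is drawn uniformly for brevity, ignoring area weighting:

        import numpy as np

        rng = np.random.default_rng(5)

        def sample_psr(psr, n):
            """Draw n particles from one phase-space ring (illustrative model)."""
            r = rng.uniform(psr["r_in"], psr["r_out"], n)      # radius in the ring
            phi = rng.uniform(0.0, 2.0 * np.pi, n)             # azimuth (symmetric)
            x, y = r * np.cos(phi), r * np.sin(phi)
            E = rng.normal(psr["E_mean"], psr["E_sigma"], n)   # narrow energy bin
            theta = np.abs(rng.normal(psr["theta_mean"], psr["theta_sigma"], n))
            return x, y, E, theta

        # Hypothetical primary-photon ring just above the jaws (mm, MeV, rad):
        ring = dict(r_in=10.0, r_out=12.0, E_mean=2.1, E_sigma=0.15,
                    theta_mean=0.12, theta_sigma=0.02)
        print(sample_psr(ring, 3))

    Because every particle drawn from a given ring shares its type and energy bin, batching GPU threads by PSR keeps warps coherent, which is the point of the scheme.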

  5. Monte Carlo based calibration and uncertainty analysis of a coupled plant growth and hydrological model

    NASA Astrophysics Data System (ADS)

    Houska, T.; Multsch, S.; Kraft, P.; Frede, H.-G.; Breuer, L.

    2013-12-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the van-Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the associated uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10^6 model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. coefficient of determination (R2), bias and model efficiency according to Nash-Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storages, stems and leaves. The best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The
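
    The GLUE loop itself is compact: draw parameter sets from a uniform prior, score each run against observations (here with NSE), and keep the "behavioural" sets above a threshold. A toy stand-in model replaces the coupled CMF-PMF system, and all numbers are illustrative:

        import numpy as np

        rng = np.random.default_rng(6)
        t = np.linspace(0, 6, 50)
        obs = np.sin(t) + rng.normal(0, 0.1, t.size)   # synthetic field observations

        def model(a, b):
            return a * np.sin(b * t)                   # stand-in for a coupled run

        def nse(sim):
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        # Uniformly distributed sampling of the parameter space, as in the study.
        pars = rng.uniform([0.5, 0.5], [1.5, 1.5], size=(20_000, 2))
        scores = np.array([nse(model(a, b)) for a, b in pars])

        behavioural = pars[scores > 0.5]               # GLUE acceptance threshold
        lo, hi = np.percentile(behavioural, [5, 95], axis=0)
        print(len(behavioural), "behavioural sets; 5-95% bounds:", lo, hi)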

  6. Monte Carlo Based Calibration and Uncertainty Analysis of a Coupled Plant Growth and Hydrological Model

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz

    2014-05-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the van-Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the associated uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 x 10^6 model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. coefficient of determination (R2), bias and model efficiency according to Nash-Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storages, stems and leaves. The best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The shape

  7. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B.; Jia, Xun

    2015-10-01

    Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model should be preferable for GPU-based MC dose engines to a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum

  8. An anatomically realistic lung model for Monte Carlo-based dose calculations

    SciTech Connect

    Liang Liang; Larsen, Edward W.; Chetty, Indrin J.

    2007-03-15

    Treatment planning for disease sites with large variations of electron density in neighboring tissues requires an accurate description of the geometry. This self-evident statement is especially true for the lung, a highly complex organ having structures with a wide range of sizes, from about 10{sup -4} to 1 cm. In treatment planning, the lung is commonly modeled by a voxelized geometry obtained using computed tomography (CT) data at various resolutions. The simplest such model, which is often used for QA and validation work, is the atomic mix or mean density model, in which the entire lung is homogenized and given a mean (volume-averaged) density. The purpose of this paper is (i) to describe a new heterogeneous random lung model, which is based on morphological data of the human lung, and (ii) to use this model to assess the differences in dose calculations between an actual lung (as represented by our model) and a mean density (homogenized) lung. Eventually, we plan to use the random lung model to assess the accuracy of CT-based treatment plans of the lung. For this paper, we have used Monte Carlo methods to make accurate comparisons between dose calculations for the random lung model and the mean density model. For four realizations of the random lung model, we used a single photon beam, with two different energies (6 and 18 MV) and four field sizes (1x1, 5x5, 10x10, and 20x20 cm{sup 2}). We found a maximum difference of 34% of D{sub max} with the 1x1, 18 MV beam along the central axis (CAX). A "shadow" region distal to the lung, with dose reduction up to 7% of D{sub max}, exists for the same realization. The dose perturbations decrease for larger field sizes, but the magnitude of the differences in the shadow region is nearly independent of the field size. We also observe that, compared to the mean density model, the random structures inside the heterogeneous lung can alter the shape of the isodose lines, leading to a broadening or shrinking of the

  9. Automatic commissioning of a GPU-based Monte Carlo radiation dose calculation code for photon radiotherapy

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.

    2014-10-01

    Monte Carlo (MC) simulation is commonly considered as the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on a concept of phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations the particle carried a weight corresponding to the PSL where it was from. Dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated and measured dose. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam on which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of dmax for the open fields tested improved on average from 70.56% to 99.36% for the 2%/2 mm criteria and from 32.22% to 89.65% for the 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan. The
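
    Stripped of the augmented Lagrangian machinery, the commissioning step is a regularized non-negative least-squares fit of PSL weights to measured dose. A sketch with synthetic per-PSL dose columns, a first-difference smoothness penalty, and SciPy's bounded solver standing in for the paper's optimizer (all data invented):

        import numpy as np
        from scipy.optimize import lsq_linear

        rng = np.random.default_rng(7)
        n_psl, n_pts = 40, 200

        # Columns: pre-computed dose in water per unit-weight PSL; rows:
        # measurement points (synthetic stand-ins here).
        D = np.abs(rng.normal(1.0, 0.3, (n_pts, n_psl)))
        w_true = 1.0 + 0.2 * np.sin(np.linspace(0.0, np.pi, n_psl))
        d_meas = D @ w_true + rng.normal(0.0, 0.01, n_pts)

        # First-difference smoothness penalty on neighbouring PSL weights.
        L = (np.eye(n_psl) - np.eye(n_psl, k=1))[:-1]
        A = np.vstack([D, 0.5 * L])
        b = np.concatenate([d_meas, np.zeros(n_psl - 1)])

        w = lsq_linear(A, b, bounds=(0.0, np.inf)).x   # nonnegative PSL weights
        print("max weight error:", np.abs(w - w_true).max())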

  10. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy

    SciTech Connect

    Martinez-Rovira, I.; Sempau, J.; Prezado, Y.

    2012-05-15

    Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-{mu}m-wide microbeams spaced by 200-400 {mu}m) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, the CT image voxel grid (a few cubic millimeters per voxel) was decoupled from the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at

  11. Evaluation of the interindividual human variation in bioactivation of methyleugenol using physiologically based kinetic modeling and Monte Carlo simulations

    SciTech Connect

    Al-Subeihi, Ala' A.A.; Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert; Bladeren, Peter J. van; Rietjens, Ivonne M.C.M.; Punt, Ans

    2015-03-01

    The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment.

  12. Monte Carlo simulation based study of a proposed multileaf collimator for a telecobalt machine

    SciTech Connect

    Sahani, G.; Dash Sharma, P. K.; Hussain, S. A.; Dutt Sharma, Sunil; Sharma, D. N.

    2013-02-15

    Purpose: The objective of the present work was to propose a design of a secondary multileaf collimator (MLC) for a telecobalt machine and optimize its design features through Monte Carlo simulation. Methods: The proposed MLC design consists of 72 leaves (36 leaf pairs) with additional jaws perpendicular to the leaf motion, having the capability of shaping a maximum square field size of 35 x 35 cm{sup 2}. The projected widths at isocenter of each of the central 34 leaf pairs and 2 peripheral leaf pairs are 10 and 5 mm, respectively. The ends of the leaves and the x-jaws were optimized to obtain acceptable values of dosimetric and leakage parameters. The Monte Carlo N-Particle code was used for generating beam profiles and depth dose curves and estimating the leakage radiation through the MLC. A water phantom of dimension 50 x 50 x 40 cm{sup 3} with an array of voxels (4 x 0.3 x 0.6 cm{sup 3} = 0.72 cm{sup 3}) was used for the study of the dosimetric and leakage characteristics of the MLC. Output files generated for beam profiles were exported to the PTW radiation field analyzer software through locally developed software for analysis of beam profiles in order to evaluate radiation field width, beam flatness, symmetry, and beam penumbra. Results: The optimized version of the MLC can define radiation fields of up to 35 x 35 cm{sup 2} within the prescribed tolerance values of 2 mm. The flatness and symmetry were found to be well within the acceptable tolerance value of 3%. The penumbra for a 10 x 10 cm{sup 2} field size is 10.7 mm, which is less than the generally acceptable value of 12 mm for a telecobalt machine. The maximum and average radiation leakage through the MLC were found to be 0.74% and 0.41%, which are well below the International Electrotechnical Commission recommended tolerance values of 2% and 0.75%, respectively. The maximum leakage through the

  13. Toward automatic field selection and planning using Monte Carlo-based direct aperture optimization in modulated electron radiotherapy

    NASA Astrophysics Data System (ADS)

    Alexander, Andrew; DeBlois, François; Seuntjens, Jan

    2010-08-01

    Modulated electron radiotherapy (MERT) has been proven to produce optimal plans for shallow tumors. This study investigates automated approaches to the field determination process in generating optimal MERT plans for few-leaf electron collimator (FLEC)-based MERT, by generating a large database of pre-calculated beamlets stored as phase-space files. Beamlets can be used in an overlapping feathered pattern to reduce the effect of abutting fields, which can contribute to dose inhomogeneities within the target. Beamlet dose calculation was performed by Monte Carlo (MC) simulations prior to direct aperture optimization (DAO). The second part of the study examines a preliminary clinical comparison between FLEC-based MERT and helical TomoTherapy. A MERT plan for spinal irradiation was not able to conform to the PTV dose constraints as closely as the TomoTherapy plan, although the TomoTherapy plan was taken as is, i.e. not Monte Carlo re-calculated. Despite the remaining gradients in the PTV, the MERT plan was superior in reducing the low-dose bath typical of TomoTherapy plans. In conclusion, the FLEC-based MERT planning techniques developed within the study produced promising MERT plans with minimal user input. The phase-space database reduces the MC calculation time and the feathered field pattern improves target homogeneity. With further investigations, FLEC-based MERT will find an important niche in clinical radiation therapy.

  14. Monte Carlo based protocol for cell survival and tumour control probability in BNCT

    NASA Astrophysics Data System (ADS)

    Ye, Sung-Joon

    1999-02-01

    A mathematical model to calculate the theoretical cell survival probability (nominally, the cell survival fraction) is developed to evaluate preclinical treatment conditions for boron neutron capture therapy (BNCT). A treatment condition is characterized by the neutron beam spectra, single or bilateral exposure, and the choice of boron carrier drug (boronophenylalanine (BPA) or boron sulfhydryl hydride (BSH)). The cell survival probability defined from Poisson statistics is expressed in terms of the cell-killing yield, the (n,α) reaction density, and the tolerable neutron fluence. The radiation transport calculation from the neutron source to tumours is carried out using Monte Carlo methods: (i) reactor-based BNCT facility modelling to yield the neutron beam library at an irradiation port; (ii) dosimetry to limit the neutron fluence below a tolerance dose (10.5 Gy-Eq); (iii) calculation of the (n,α) reaction density in tumours. A shallow surface tumour could be effectively treated by single exposure producing an average cell survival probability of - for probable ranges of the cell-killing yield for the two drugs, while a deep tumour will require bilateral exposure to achieve comparable cell kills at depth. With very pure epithermal beams eliminating thermal, low epithermal and fast neutrons, the cell survival can be decreased by factors of 2-10 compared with
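
    A minimal numerical sketch of the Poisson survival model named above, assuming survival is the probability of zero lethal events, S = exp(-y * R * V); the reaction density, cell volume and cell-killing yields below are illustrative values, not the paper's:

        import numpy as np

        def survival(reaction_density, kill_yield, cell_volume):
            """Poisson survival: probability of zero lethal events in one cell."""
            mean_lethal_events = kill_yield * reaction_density * cell_volume
            return np.exp(-mean_lethal_events)

        rho = 5e11     # (n,alpha) reactions per cm^3 at tolerable fluence (invented)
        v_cell = 1e-9  # cm^3, roughly a 1000 um^3 cell
        for y in (0.02, 0.05, 0.10):               # assumed cell-killing yields
            print(f"yield {y:.2f}: S = {survival(rho, y, v_cell):.2e}")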

  15. Monte Carlo Shower Counter Studies

    NASA Technical Reports Server (NTRS)

    Snyder, H. David

    1991-01-01

    Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi-square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs is provided along with example data plots.

  16. Testing planetary transit detection methods with grid-based Monte-Carlo simulations.

    NASA Astrophysics Data System (ADS)

    Bonomo, A. S.; Lanza, A. F.

    The detection of extrasolar planets by means of the transit method is a rapidly growing field of modern astrophysics. The periodic light dips produced by the passage of a planet in front of its parent star can be used to reveal the presence of the planet itself, to measure its orbital period and relative radius, as well as to perform studies on the outer layers of the planet by analysing the light of the star passing through the planet's atmosphere. We have developed a new method to detect transits of Earth-sized planets in front of solar-like stars that allows us to reduce the impact of stellar microvariability on transit detection. A large Monte Carlo numerical experiment has been designed to test the performance of our approach in comparison with other transit detection methods for stars of different magnitudes and planets of different radius and orbital period, as will be observed by the space experiments CoRoT and Kepler. The large computational load of this experiment has been managed by means of the Grid infrastructure of the COMETA consortium.

  17. Lattice based Kinetic Monte Carlo Simulations of a complex chemical reaction network

    NASA Astrophysics Data System (ADS)

    Danielson, Thomas; Savara, Aditya; Hin, Celine

    Lattice Kinetic Monte Carlo (KMC) simulations offer a powerful alternative to using ordinary differential equations for the simulation of complex chemical reaction networks. Lattice KMC provides the ability to account for local spatial configurations of species in the reaction network, resulting in a more detailed description of the reaction pathway. In KMC simulations with a large number of reactions, the range of transition probabilities can span many orders of magnitude, creating subsets of processes that occur more frequently or more rarely. Consequently, processes that have a high probability of occurring may be selected repeatedly without actually progressing the system (i.e. the forward and reverse process for the same reaction). In order to avoid the repeated occurrence of fast frivolous processes, it is necessary to throttle the transition probabilities in such a way that avoids altering the overall selectivity. Likewise, as the reaction progresses, new frequently occurring species and reactions may be introduced, making a dynamic throttling algorithm a necessity. We present a dynamic steady-state detection scheme with the goal of accurately throttling rate constants in order to optimize the KMC run time without compromising the selectivity of the reaction network. The algorithm has been applied to a large catalytic chemical reaction network, specifically that of methanol oxidative dehydrogenation, as well as additional pathways on CeO2(111) resulting in formaldehyde, CO, methanol, CO2, H2 and H2O as gas products.
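
    The throttling idea can be shown with a minimal (lattice-free) KMC step: scaling a quasi-equilibrated forward/reverse pair by a common factor leaves their ratio, and hence the selectivity, untouched, while the slow system-progressing processes get selected far more often. All rates below are invented:

        import numpy as np

        rng = np.random.default_rng(8)

        def kmc_step(rates, t):
            """One rejection-free KMC move: pick a process, advance the clock."""
            total = rates.sum()
            i = int(np.searchsorted(np.cumsum(rates), rng.uniform(0.0, total)))
            return i, t + rng.exponential(1.0 / total)

        # A fast quasi-equilibrated forward/reverse pair plus two slow steps.
        rates = np.array([1e9, 9.9e8, 5.0, 2.0])

        # Throttle the fast pair by a common factor: k_fwd/k_rev is unchanged,
        # but the system-progressing slow steps now actually get selected.
        throttled = rates * np.array([1e-6, 1e-6, 1.0, 1.0])

        t = 0.0
        for _ in range(5):
            i, t = kmc_step(throttled, t)
            print("selected process", i)

    The cost is that the simulated clock is distorted while throttling is active, which is exactly why the authors pair the throttling with a steady-state detection scheme.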

  18. Monte Carlo simulation of x-ray scatter based on patient model from digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Liu, Bob; Wu, Tao; Moore, Richard H.; Kopans, Daniel B.

    2006-03-01

    We are developing a breast-specific scatter correction method for digital breast tomosynthesis (DBT). The 3D breast volume was initially reconstructed from 15 projection images acquired on a GE prototype tomosynthesis system without correction for scatter. The voxel values were mapped to tissue compositions using various segmentation schemes. This voxelized digital breast model was entered into a Monte Carlo package simulating the prototype tomosynthesis system. One billion photons were generated from the x-ray source for each projection in the simulation, and images of scattered photons were obtained. A primary-only projection image was then produced by subtracting the scatter image from the corresponding original projection image, which contains contributions from both primary and scattered photons. The scatter-free projection images were then used to reconstruct the 3D breast volume using the same algorithm. Compared with the uncorrected 3D image, the x-ray attenuation coefficients represented by the scatter-corrected 3D image are closer to those derived from the measurement data.
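
    The correction loop described above amounts to subtracting the MC-estimated scatter image from each measured projection before reconstruction. A trivial sketch with hypothetical arrays (shapes only; the reconstruction step itself is omitted):

        import numpy as np

        def scatter_correct(projections, scatter_estimates):
            """Subtract MC-estimated scatter images from measured projections.

            Both arrays have shape (n_views, ny, nx); the scatter estimates
            come from the MC simulation of the segmented breast volume.
            """
            primary = projections - scatter_estimates
            return np.clip(primary, 0.0, None)   # guard against negative pixels

        # Toy shapes only: 15 views (as in the prototype system) of 4 x 4 pixels.
        rng = np.random.default_rng(12)
        proj = rng.uniform(100.0, 200.0, (15, 4, 4))
        scat = rng.uniform(5.0, 20.0, (15, 4, 4))
        primary = scatter_correct(proj, scat)    # feed these to the reconstruction
        print(primary.shape)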

  19. GPU-based Monte Carlo simulation for light propagation in complex heterogeneous tissues.

    PubMed

    Ren, Nunu; Liang, Jimin; Qu, Xiaochao; Li, Jianfeng; Lu, Bingjia; Tian, Jie

    2010-03-29

    As the most accurate model for simulating light propagation in heterogeneous tissues, the Monte Carlo (MC) method has been widely used in the field of optical molecular imaging. However, the MC method is time-consuming due to the calculations of a large number of photons propagating in tissues. The structural complexity of the heterogeneous tissues further increases the computational time. In this paper we present a parallel implementation of MC simulation of light propagation in heterogeneous tissues whose surfaces are constructed from different numbers of triangle meshes. On the basis of graphics processing units (GPU), the code is implemented with the compute unified device architecture (CUDA) platform and optimized to reduce the access latency as much as possible by making full use of the constant memory and texture memory on the GPU. We test the implementation on homogeneous and heterogeneous mouse models with an NVIDIA GTX 260 card and a 2.40 GHz Intel Xeon CPU. The experimental results demonstrate the feasibility and efficiency of parallel MC simulation on the GPU. PMID:20389700
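
    The per-photon kernel that such GPU codes parallelize looks, in serial outline, like the loop below: exponential free paths, partial weight absorption, and redirection at each scattering event. Isotropic scattering and an unbounded homogeneous medium are simplifications here (real codes use Henyey-Greenstein phase functions and the triangle-mesh geometry), and all optical coefficients are illustrative:

        import numpy as np

        rng = np.random.default_rng(9)
        mu_a, mu_s = 1.0, 10.0        # absorption/scattering coefficients (1/mm)
        mu_t = mu_a + mu_s

        def propagate(n_photons=1000, w_min=1e-4):
            absorbed = 0.0
            for _ in range(n_photons):
                pos, direc, w = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0
                while w > w_min:
                    pos = pos + direc * rng.exponential(1.0 / mu_t)  # free path
                    absorbed += w * mu_a / mu_t                      # implicit capture
                    w *= mu_s / mu_t
                    # Redirect: isotropic scattering for brevity.
                    cos_t = rng.uniform(-1.0, 1.0)
                    phi = rng.uniform(0.0, 2.0 * np.pi)
                    sin_t = np.sqrt(1.0 - cos_t ** 2)
                    direc = np.array([sin_t * np.cos(phi),
                                      sin_t * np.sin(phi), cos_t])
            return absorbed / n_photons

        # In an unbounded medium essentially all launched weight is absorbed;
        # adding mesh boundaries and detectors turns this into the full problem.
        print(propagate())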

  20. Dosimetric investigation of proton therapy on CT-based patient data using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Chongsan, T.; Liamsuwan, T.; Tangboonduangjit, P.

    2016-03-01

    The aim of radiotherapy is to deliver a high radiation dose to the tumor with a low radiation dose to healthy tissues. Protons have Bragg peaks that give a high radiation dose to the tumor but a low exit dose or dose tail. Therefore, proton therapy is promising for treating deep-seated tumors and tumors located close to organs at risk. Moreover, the physical characteristics of protons are suitable for treating cancer in pediatric patients. This work developed a computational platform for calculating proton dose distributions using the Monte Carlo (MC) technique and the patient's anatomical data. The studied case is a pediatric patient with a primary brain tumor. PHITS was used for the MC simulation; patient-specific CT-DICOM files were therefore converted to the PHITS input format. A MATLAB optimization program was developed to create a beam delivery control file for this study. The optimization program requires the proton beam data. All these data were calculated in this work using analytical formulas, and the calculation accuracy was tested before the beam delivery control file was used for the MC simulation. This study will be useful for researchers aiming to investigate proton dose distributions in patients but who do not have access to proton therapy machines.

  1. Refined elasticity sampling for Monte Carlo-based identification of stabilizing network patterns

    PubMed Central

    Childs, Dorothee; Grimbs, Sergio; Selbig, Joachim

    2015-01-01

    Motivation: Structural kinetic modelling (SKM) is a framework to analyse whether a metabolic steady state remains stable under perturbation, without requiring detailed knowledge about individual rate equations. It provides a representation of the system's Jacobian matrix that depends solely on the network structure, steady state measurements, and the elasticities at the steady state. For a measured steady state, stability criteria can be derived by generating a large number of SKMs with randomly sampled elasticities and evaluating the resulting Jacobian matrices. The elasticity space can be analysed statistically in order to detect network positions that contribute significantly to the perturbation response. Here, we extend this approach by examining the kinetic feasibility of the elasticity combinations created during Monte Carlo sampling. Results: Using a set of small example systems, we show that the majority of sampled SKMs would yield negative kinetic parameters if they were translated back into kinetic models. To overcome this problem, a simple criterion is formulated that screens out such infeasible models. After evaluating the small example pathways, the methodology was used to study two steady states of the neuronal TCA cycle and the intrinsic mechanisms responsible for their stability or instability. The findings of the statistical elasticity analysis confirm that several elasticities are jointly coordinated to control stability and that the main sources of potential instability are mutations in the enzyme alpha-ketoglutarate dehydrogenase. Contact: dorothee.childs@embl.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26072485
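
    The Monte Carlo core of SKM is short: sample elasticities into the structural Jacobian and classify each sample by the sign of its leading eigenvalue's real part. The two-metabolite toy network below, with one destabilizing product-activation feedback, is invented for illustration and is unrelated to the paper's TCA-cycle model:

        import numpy as np

        rng = np.random.default_rng(10)

        # Toy 2-metabolite, 3-reaction chain; flux/concentration scaling is
        # folded into the stoichiometric matrix for brevity.
        N = np.array([[1.0, -1.0, 0.0],
                      [0.0, 1.0, -1.0]])

        def sample_jacobian():
            theta = np.zeros((3, 2))            # normalized elasticities
            theta[1, 0] = rng.uniform(0, 1)     # v2 w.r.t. metabolite 1 (substrate)
            theta[2, 1] = rng.uniform(0, 1)     # v3 w.r.t. metabolite 2 (substrate)
            theta[0, 1] = rng.uniform(0, 2)     # v1 activated by metabolite 2
            return N @ theta                    # structural-kinetic Jacobian

        n = 10_000
        stable = sum(np.all(np.linalg.eigvals(sample_jacobian()).real < 0)
                     for _ in range(n))
        print(f"stable fraction: {stable / n:.3f}")   # ~0.25 for this toy network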

  2. Markov chain Monte Carlo based analysis of post-translationally modified VDAC gating kinetics

    PubMed Central

    Tewari, Shivendra G.; Zhou, Yifan; Otto, Bradley J.; Dash, Ranjan K.; Kwok, Wai-Meng; Beard, Daniel A.

    2015-01-01

    The voltage-dependent anion channel (VDAC) is the main conduit for permeation of solutes (including nucleotides and metabolites) of up to 5 kDa across the mitochondrial outer membrane (MOM). Recent studies suggest that VDAC activity is regulated via post-translational modifications (PTMs). Yet the nature and effect of these modifications are not understood. Herein, single channel currents of wild-type, nitrosated, and phosphorylated VDAC are analyzed using a generalized continuous-time Markov chain Monte Carlo (MCMC) method. The developed method identifies three distinct conducting states (open, half-open, and closed) of VDAC activity. Lipid bilayer experiments are also performed to record single VDAC activity under un-phosphorylated and phosphorylated conditions, and are analyzed using the developed stochastic search method. Experimental data show significant alteration in VDAC gating kinetics and conductance as a result of PTMs. The effect of PTMs on VDAC kinetics is captured in the parameters associated with the identified Markov model. Stationary distributions of the Markov model suggest that nitrosation of VDAC not only decreased its conductance but also significantly locked VDAC in a closed state. On the other hand, stationary distributions of the model associated with un-phosphorylated and phosphorylated VDAC suggest a reversal in channel conformation from a relatively closed state to an open state. Model analyses of the nitrosated data suggest that the faster reaction of nitric oxide with the Cys-127 thiol group might be responsible for the biphasic effect of nitric oxide on basal VDAC conductance. PMID:25628567
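
    The "stationary distribution of the Markov model" step can be sketched directly: for a continuous-time three-state chain (open, half-open, closed) with generator matrix Q, the stationary occupancies solve pi Q = 0 with sum(pi) = 1. The rates below are invented; comparing the distributions fitted before and after a PTM is how shifts such as locking into the closed state show up:

        import numpy as np

        # Hypothetical generator matrix (s^-1), states 0=open, 1=half-open,
        # 2=closed; rows sum to zero, off-diagonals are transition rates.
        Q = np.array([[-30.0,  20.0,  10.0],
                      [ 15.0, -40.0,  25.0],
                      [  5.0,  10.0, -15.0]])

        # Stationary distribution: solve pi Q = 0 subject to sum(pi) = 1.
        A = np.vstack([Q.T, np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("occupancy (open, half-open, closed):", pi.round(3))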

  3. Development of Subspace-based Hybrid Monte Carlo-Deterministic Algorithms for Reactor Physics Calculations

    SciTech Connect

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-05-20

    The development of hybrid Monte Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3-10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  4. Statistical modification analysis of helical planetary gears based on response surface method and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Guo, Fan

    2015-11-01

    Tooth modification techniques are widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. In order to investigate the effect of uncertainties in tooth modification amounts on the dynamic behaviors of a helical planetary gear system, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications on gear dynamics enhancement. By using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to tooth modification amount variations, a statistical tooth modification model is developed and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behaviors do not obey the normal distribution rule even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
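
    The combination of a fitted response surface with Monte Carlo sampling can be reproduced in outline: evaluate a quadratic surface (coefficients invented here) at normally distributed modification amounts and inspect the output distribution, whose skew shows why normality of the inputs does not carry over:

        import numpy as np
        from scipy import stats

        # Fitted quadratic response surface for DTE fluctuation vs two
        # modification amounts x1, x2 (all coefficients illustrative).
        def dte(x1, x2):
            return (5.0 - 1.2 * x1 - 0.8 * x2
                    + 0.30 * x1 ** 2 + 0.25 * x2 ** 2 + 0.10 * x1 * x2)

        rng = np.random.default_rng(11)
        n = 200_000
        x1 = rng.normal(2.0, 0.4, n)   # design amount +/- manufacturing error (um)
        x2 = rng.normal(1.6, 0.4, n)

        samples = dte(x1, x2)
        print("mean/std:", samples.mean(), samples.std())
        print("skew, excess kurtosis:", stats.skew(samples), stats.kurtosis(samples))

    A quadratic function of Gaussian inputs is skewed by construction, which matches the study's observation that the DTE fluctuation does not follow a normal distribution.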

  5. Adaptation of a Fortran-Based Monte-Carlo Microscopic Black Hole Simulation Program to C++ Based Root

    NASA Astrophysics Data System (ADS)

    Jenkins, C. M.; Godang, R.; Cavaglia, M.; Cremaldi, L.; Summers, D.

    2008-10-01

    The 14 TeV center-of-mass proton-proton collisions at the LHC open the possibility for new physics, including the possible formation of microscopic black holes. A Fortran-based Monte Carlo event generator program called CATFISH (Collider grAviTational FIeld Simulator for black Holes) has been developed at the University of Mississippi to study signatures of microscopic black hole production (http://www.phy.olemiss.edu/GR/catfish). This black hole event generator includes many of the currently accepted theoretical results for microscopic black hole formation. High energy physics data analysis is shifting from Fortran to C++ as the CERN data analysis packages HBOOK and PAW are no longer supported; the C++-based ROOT is replacing these packages. Work done at the University of South Alabama has resulted in a successful inclusion of CATFISH into ROOT. The methods used to interface the Fortran-based CATFISH into the C++-based ROOT will be presented. Benchmark histograms will be presented demonstrating the conversion. Preliminary results will be presented for selecting black hole candidate events in 14 TeV center-of-mass proton-proton collisions.

  6. Development and validation of a measurement-based source model for kilovoltage cone-beam CT Monte Carlo dosimetry simulations

    SciTech Connect

    McMillan, Kyle; McNitt-Gray, Michael; Ruan, Dan

    2013-11-15

    measurements by 1.35%–5.31% (mean difference = −3.42%, SD = 1.09%). Conclusions: This work demonstrates the feasibility of using a measurement-based kV CBCT source model to facilitate dose calculations with Monte Carlo methods for both the radiographic and CBCT mode of operation. While this initial work validates simulations against measurements for simple geometries, future work will involve utilizing the source model to investigate kV CBCT dosimetry with more complex anthropomorphic phantoms and patient specific models.

  7. Development and validation of a measurement-based source model for kilovoltage cone-beam CT Monte Carlo dosimetry simulations

    PubMed Central

    McMillan, Kyle; McNitt-Gray, Michael; Ruan, Dan

    2013-01-01

    underestimated measurements by 1.35%–5.31% (mean difference = −3.42%, SD = 1.09%). Conclusions: This work demonstrates the feasibility of using a measurement-based kV CBCT source model to facilitate dose calculations with Monte Carlo methods for both the radiographic and CBCT mode of operation. While this initial work validates simulations against measurements for simple geometries, future work will involve utilizing the source model to investigate kV CBCT dosimetry with more complex anthropomorphic phantoms and patient specific models. PMID:24320440

  8. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments Database

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  9. Monte Carlo Ion Transport Analysis Code.

    Energy Science and Technology Software Center (ESTSC)

    2009-04-15

    Version: 00 TRIPOS is a versatile Monte Carlo ion transport analysis code. It has been applied to the treatment of both surface and bulk radiation effects. The media considered are composed of multilayer polyatomic materials.

  10. Monte Carlo Transport for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  11. Kinetic Monte Carlo simulations of proton conductivity

    NASA Astrophysics Data System (ADS)

    Masłowski, T.; Drzewiński, A.; Ulner, J.; Wojtkiewicz, J.; Zdanowska-Frączek, M.; Nordlund, K.; Kuronen, A.

    2014-07-01

    The kinetic Monte Carlo method is used to model the dynamic properties of proton diffusion in anhydrous proton conductors. The results are discussed with reference to a two-step process called the Grotthuss mechanism. There is a widespread belief that this mechanism is responsible for fast proton mobility. We show in detail that the relative frequency of reorientation and diffusion processes is crucial for the conductivity. Moreover, the dependence of the current on proton concentration is analyzed. In order to test our microscopic model, proton transport in polymer electrolyte membranes based on benzimidazole C7H6N2 molecules is studied.

  12. Exascale Monte Carlo R&D

    SciTech Connect

    Marcus, Ryan C.

    2012-07-24

    This presentation covers (1) exascale computing - different technologies and getting there; (2) the high-performance proof-of-concept MCMini - features and results; and (3) the OpenCL toolkit Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL seems like a good, hardware-agnostic tool. MCMini demonstrates the possibility of GPGPU-based Monte Carlo methods - it shows great scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.

  13. Monte Carlo procedure for protein design

    NASA Astrophysics Data System (ADS)

    Irbäck, Anders; Peterson, Carsten; Potthast, Frank; Sandelin, Erik

    1998-11-01

    A method for sequence optimization in protein models is presented. The approach, which inherits its basic philosophy from recent work by Deutsch and Kurosky [Phys. Rev. Lett. 76, 323 (1996)] in maximizing conditional probabilities rather than minimizing energy functions, is based upon a different and very efficient multisequence Monte Carlo scheme. By construction, the method ensures that the designed sequences represent thermodynamically good folders. A bootstrap procedure for the sequence-space search is devised, making very large chains feasible. The algorithm is successfully explored on the two-dimensional HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] with chain lengths N=16, 18, and 32.
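    A minimal sketch of conditional-probability sequence design on a fixed target structure is given below. The decoy set, the scoring, and the single-flip Metropolis moves in sequence space are simplified assumptions for illustration and do not reproduce the multisequence scheme of the paper; contact lists are assumed to be lists of residue-index pairs.

    ```python
    import math, random

    def energy(seq, contacts, eps=-1.0):
        # HP-model energy: eps per hydrophobic-hydrophobic contact
        return sum(eps for i, j in contacts if seq[i] == 'H' and seq[j] == 'H')

    def log_p_target(seq, target_contacts, decoys, beta=1.0):
        # log P(target|seq): target Boltzmann weight over a small decoy sum
        zs = [math.exp(-beta * energy(seq, c)) for c in decoys + [target_contacts]]
        return -beta * energy(seq, target_contacts) - math.log(sum(zs))

    def design(n, target_contacts, decoys, steps=10000, t_seq=0.5):
        seq = ['H' if random.random() < 0.5 else 'P' for _ in range(n)]
        score = log_p_target(seq, target_contacts, decoys)
        for _ in range(steps):
            trial = seq.copy()
            i = random.randrange(n)
            trial[i] = 'P' if trial[i] == 'H' else 'H'   # flip one residue
            new = log_p_target(trial, target_contacts, decoys)
            # Metropolis move in sequence space at "sequence temperature" t_seq
            if new > score or random.random() < math.exp((new - score) / t_seq):
                seq, score = trial, new
        return ''.join(seq), score
    ```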

  14. Monte Carlo simulation for the transport beamline

    SciTech Connect

    Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.

    2013-07-26

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.

  15. Extra Chance Generalized Hybrid Monte Carlo

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.; Sanz-Serna, J. M.

    2015-01-01

    We study a method, Extra Chance Generalized Hybrid Monte Carlo, to avoid rejections in the Hybrid Monte Carlo method and related algorithms. In the spirit of delayed rejection, whenever a rejection would occur, extra work is done to find a fresh proposal that, hopefully, may be accepted. We present experiments indicating that the additional work per sample carried out in the extra-chance approach clearly pays off in terms of the quality of the samples generated.
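    The sketch below illustrates the delayed-rejection idea in a simplified, full-momentum-refresh setting: each extra chance extends the leapfrog flight, and the sequential test reproduces the max/min acceptance weights appropriate for iterated deterministic, reversible, volume-preserving proposals. The generalized (partial momentum refresh) variant studied in the paper requires additional momentum-flip bookkeeping that this toy omits, so treat it as a sketch under those assumptions.

    ```python
    import numpy as np

    def leapfrog(q, p, grad_u, eps, n_steps):
        """Standard velocity-Verlet integration of Hamiltonian dynamics."""
        p = p - 0.5 * eps * grad_u(q)
        for _ in range(n_steps - 1):
            q = q + eps * p
            p = p - eps * grad_u(q)
        q = q + eps * p
        p = p - 0.5 * eps * grad_u(q)
        return q, p

    def extra_chance_hmc_step(q, u, grad_u, eps, n_steps, max_chances, rng):
        """One HMC step with delayed-rejection 'extra chances'."""
        p = rng.standard_normal(q.shape)            # full momentum refresh
        h0 = u(q) + 0.5 * p @ p
        qk, pk = q, p
        best = 0.0                                  # running max of min(1, r_j)
        for _ in range(max_chances):
            qk, pk = leapfrog(qk, pk, grad_u, eps, n_steps)   # extend the flight
            mk = min(1.0, np.exp(h0 - (u(qk) + 0.5 * pk @ pk)))
            # conditional test reproducing unconditional weight max(0, mk - best)
            if rng.random() * (1.0 - best) < mk - best:
                return qk
            best = max(best, mk)
        return q                                    # all chances rejected
    ```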

  16. A Monte Carlo-based radiation safety assessment for astronauts in an environment with confined magnetic field shielding.

    PubMed

    Geng, Changran; Tang, Xiaobin; Gong, Chunhui; Guan, Fada; Johns, Jesse; Shu, Diyun; Chen, Da

    2015-12-01

    The active shielding technique has great potential for radiation protection in space exploration because it has the advantage of a significant mass saving compared with the passive shielding technique. This paper demonstrates a Monte Carlo-based approach to evaluating the shielding effectiveness of the active shielding technique using confined magnetic fields (CMFs). The International Commission on Radiological Protection reference anthropomorphic phantom, as well as the toroidal CMF, was modeled using the Monte Carlo toolkit Geant4. The penetrating primary particle fluence, organ-specific dose equivalent, and male effective dose were calculated for particles in galactic cosmic radiation (GCR) and solar particle events (SPEs). Results show that the SPE protons can be easily shielded against, even almost completely deflected, by the toroidal magnetic field. GCR particles can also be more effectively shielded against by increasing the magnetic field strength. Our results also show that the introduction of a structural Al wall in the CMF did not provide additional shielding for GCR; in fact it can weaken the total shielding effect of the CMF. This study demonstrated the feasibility of accurately determining the radiation field inside the environment and evaluating the organ dose equivalents for astronauts under active shielding using the CMF. PMID:26484984
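    As a toy illustration of charged-particle deflection in a toroidal field, the non-relativistic Boris pusher below advances a proton through a static azimuthal field. The field parameters are placeholders, GCR-energy particles would require the relativistic form of the update, and this is not the Geant4 machinery used in the study.

    ```python
    import numpy as np

    Q_M = 9.58e7   # proton charge-to-mass ratio [C/kg]

    def boris_push(x, v, b_field, dt):
        """One Boris step for a static magnetic field (E = 0)."""
        t = 0.5 * Q_M * dt * b_field(x)
        s = 2.0 * t / (1.0 + t @ t)
        v_prime = v + np.cross(v, t)     # half rotation
        v_new = v + np.cross(v_prime, s) # complete the rotation
        return x + v_new * dt, v_new

    def toroidal_field(x, b0=5.0, r0=3.0):
        """Azimuthal field of magnitude b0*r0/r around the z axis (r > 0)."""
        r = np.hypot(x[0], x[1])
        phi_hat = np.array([-x[1], x[0], 0.0]) / r
        return b0 * (r0 / r) * phi_hat
    ```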

  17. A new variable parallel holes collimator for scintigraphic device with validation method based on Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Trinci, G.; Massari, R.; Scandellari, M.; Boccalini, S.; Costantini, S.; Di Sero, R.; Basso, A.; Sala, R.; Scopinaro, F.; Soluri, A.

    2010-09-01

    The aim of this work is to present a new scintigraphic device able to automatically change the length of its collimator in order to adapt the spatial resolution to the gamma source distance. This patented technique replaces the collimator changes that standard gamma cameras still require. Monte Carlo simulations represent the best tool in the search for new technological solutions for such an innovative collimation structure. They also provide a valid analysis of gamma camera performance, as well as of the advantages and limits of this new solution. Specifically, the Monte Carlo simulations are realized with the GEANT4 (GEometry ANd Tracking) framework, and the specific simulation object is a collimation method based on separate blocks that can be moved closer together and farther apart, in order to reach and maintain specific spatial resolution values at all source-detector distances. To verify the accuracy and faithfulness of these simulations, we performed experimental measurements with an identical setup and conditions. This confirms the power of simulation as an extremely useful tool, especially where new technological solutions need to be studied, tested and analyzed before their practical realization. The final aim of this new collimation system is the improvement of SPECT techniques, with direct control of the spatial resolution during tomographic acquisitions. This principle allowed us to simulate a tomographic acquisition of two capillaries of radioactive solution, in order to verify the possibility of clearly distinguishing them.

  18. A stochastic model updating method for parameter variability quantification based on response surface models and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Fang, Sheng-En; Ren, Wei-Xin; Perera, Ricardo

    2012-11-01

    Stochastic model updating must be considered for quantifying uncertainties inherently existing in real-world engineering structures. By this means, the statistical properties of structural parameters, rather than deterministic values, can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly with regard to theoretical complexity and computational cost. This study attempts to propose a simple and cost-efficient method by decomposing a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted for generating samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting deterministic parameter values. The parameter means and variances can then be statistically estimated from the parameter predictions obtained by running all the samples. Meanwhile, the analysis-of-variance approach is employed to evaluate the significance of the parameter variability. The proposed method has been demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method presents similar accuracy, while its primary merits are its simple implementation and its cost efficiency in response computation and inverse optimization.
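    The decomposition can be sketched as follows: draw response samples by Monte Carlo, solve one deterministic inverse problem per sample against the response surface, and collect parameter statistics. The callable-surrogate interface, the Gaussian response model, and the least-squares solver below are illustrative assumptions, not the formulation of the paper.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def stochastic_update(surrogate, n_params, resp_mean, resp_cov,
                          n_samples=1000, rng=None):
        """surrogate(theta) -> predicted responses; it stands in for the
        fitted response surface that replaces the FE model."""
        rng = rng or np.random.default_rng()
        draws = rng.multivariate_normal(resp_mean, resp_cov, size=n_samples)
        estimates = np.empty((n_samples, n_params))
        for k, y in enumerate(draws):
            # each MC sample defines one deterministic inverse problem
            sol = least_squares(lambda th: surrogate(th) - y,
                                x0=np.zeros(n_params))
            estimates[k] = sol.x
        # statistics of the parameter predictions over all samples
        return estimates.mean(axis=0), estimates.var(axis=0, ddof=1)
    ```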

  19. Optimization of a photoneutron source based on 10 MeV electron beam using Geant4 Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Askri, Boubaker

    2015-10-01

    The Geant4 Monte Carlo code has been used to conceive and optimize a simple and compact neutron source based on a 10 MeV electron beam impinging on a tungsten target adjoined to a beryllium target. For this purpose, a precise photonuclear reaction cross-section model issued from the International Atomic Energy Agency (IAEA) database was linked to Geant4 to accurately simulate the interaction of low-energy bremsstrahlung photons with the beryllium material. A benchmark test showed good agreement between the emitted neutron flux spectra predicted by the Geant4 and Fluka codes for a beryllium cylinder bombarded with a 5 MeV photon beam. The source optimization was achieved through a two-stage Monte Carlo simulation. In the first stage, the distributions of the seven phase-space coordinates of the bremsstrahlung photons at the boundaries of the tungsten target were determined. In the second stage, events corresponding to photons emitted according to these distributions were tracked. A neutron yield of 4.8 × 10^10 neutrons/mA/s was obtained at 20 cm from the beryllium target. A thermal neutron yield of 1.5 × 10^9 neutrons/mA/s was obtained after introducing a spherical shell of polyethylene as a neutron moderator.
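    A schematic of the two-stage workflow is sketched below: stage one records photon phase-space coordinates at the converter exit into histograms, and stage two resamples new photons from them. Sampling each coordinate independently ignores the correlations that a full seven-coordinate phase-space file preserves, so this is a simplification for illustration only; all names are assumptions.

    ```python
    import numpy as np

    class PhaseSpaceModel:
        def __init__(self, samples, bins=200):
            # samples: array (n_photons, n_coords), e.g. E, x, y, ux, uy, ...
            self.hists = [np.histogram(c, bins=bins) for c in samples.T]

        def sample(self, n, rng=np.random.default_rng()):
            """Draw n new photons, coordinate by coordinate, by inverting
            the empirical CDF of each recorded distribution."""
            cols = []
            for counts, edges in self.hists:
                cdf = np.cumsum(counts) / counts.sum()
                idx = np.searchsorted(cdf, rng.random(n))
                # uniform position inside the chosen histogram bin
                cols.append(edges[idx] + rng.random(n) * np.diff(edges)[idx])
            return np.column_stack(cols)
    ```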

  20. GPU-BASED MONTE CARLO DUST RADIATIVE TRANSFER SCHEME APPLIED TO ACTIVE GALACTIC NUCLEI

    SciTech Connect

    Heymann, Frank; Siebenmorgen, Ralf

    2012-05-20

    A three-dimensional parallel Monte Carlo (MC) dust radiative transfer code is presented. To overcome the huge computing-time requirements of MC treatments, the computational power of vectorized hardware is used, utilizing either multi-core computer power or graphics processing units. The approach is a self-consistent way to solve the radiative transfer equation in arbitrary dust configurations. The code calculates the equilibrium temperatures of two populations of large grains and stochastically heated polycyclic aromatic hydrocarbons. Anisotropic scattering is treated by applying the Henyey-Greenstein phase function. The spectral energy distribution (SED) of the object is derived at low spatial resolution by a photon-counting procedure and at high spatial resolution by a vectorized ray tracer. The latter allows computation of high signal-to-noise images of the objects at any frequency and arbitrary viewing angle. We test the robustness of our approach against other radiative transfer codes. The SED and dust temperatures of one- and two-dimensional benchmarks are reproduced at high precision. The parallelization capability of various MC algorithms is analyzed and included in our treatment. We utilize the Lucy algorithm for the optically thin case where the Poisson noise is high, the iteration-free Bjorkman and Wood method to reduce the calculation time, and the Fleck and Canfield diffusion approximation for extremely optically thick cells. The code is applied to model the appearance of active galactic nuclei (AGNs) at optical and infrared wavelengths. The AGN torus is clumpy and includes fluffy composite grains of various sizes made up of silicates and carbon. The dependence of the SED on the number of clumps in the torus and the viewing angle is studied. The appearance of the 10 μm silicate features in absorption or emission is discussed. The SED of the radio-loud quasar 3C 249.1 is fit by the AGN model and a cirrus component to account for the far-infrared emission.
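    For reference, the Henyey-Greenstein scattering cosine can be sampled by the standard closed-form inversion of its cumulative distribution, as in the sketch below (the isotropic limit g = 0 is handled separately).

    ```python
    import numpy as np

    def sample_hg_cos_theta(g, n, rng=np.random.default_rng()):
        """Draw n scattering-angle cosines from the Henyey-Greenstein
        phase function with asymmetry parameter g."""
        if abs(g) < 1e-6:
            return 2.0 * rng.random(n) - 1.0          # isotropic limit
        xi = rng.random(n)
        frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
        return (1.0 + g * g - frac * frac) / (2.0 * g)
    ```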

  1. GPU-based Monte Carlo Dust Radiative Transfer Scheme Applied to Active Galactic Nuclei

    NASA Astrophysics Data System (ADS)

    Heymann, Frank; Siebenmorgen, Ralf

    2012-05-01

    A three-dimensional parallel Monte Carlo (MC) dust radiative transfer code is presented. To overcome the huge computing-time requirements of MC treatments, the computational power of vectorized hardware is used, utilizing either multi-core computer power or graphics processing units. The approach is a self-consistent way to solve the radiative transfer equation in arbitrary dust configurations. The code calculates the equilibrium temperatures of two populations of large grains and stochastically heated polycyclic aromatic hydrocarbons. Anisotropic scattering is treated by applying the Henyey-Greenstein phase function. The spectral energy distribution (SED) of the object is derived at low spatial resolution by a photon-counting procedure and at high spatial resolution by a vectorized ray tracer. The latter allows computation of high signal-to-noise images of the objects at any frequency and arbitrary viewing angle. We test the robustness of our approach against other radiative transfer codes. The SED and dust temperatures of one- and two-dimensional benchmarks are reproduced at high precision. The parallelization capability of various MC algorithms is analyzed and included in our treatment. We utilize the Lucy algorithm for the optically thin case where the Poisson noise is high, the iteration-free Bjorkman & Wood method to reduce the calculation time, and the Fleck & Canfield diffusion approximation for extremely optically thick cells. The code is applied to model the appearance of active galactic nuclei (AGNs) at optical and infrared wavelengths. The AGN torus is clumpy and includes fluffy composite grains of various sizes made up of silicates and carbon. The dependence of the SED on the number of clumps in the torus and the viewing angle is studied. The appearance of the 10 μm silicate features in absorption or emission is discussed. The SED of the radio-loud quasar 3C 249.1 is fit by the AGN model and a cirrus component to account for the far-infrared emission.

  2. Monte Carlo simulation of nitrogen dissociation based on state-resolved cross sections

    SciTech Connect

    Kim, Jae Gang; Boyd, Iain D.

    2014-01-15

    State-resolved analyses of N + N₂ collisions are performed using the direct simulation Monte Carlo (DSMC) method. In describing the elastic collisions by a state-resolved method, a state-specific total cross section is proposed. The state-resolved method is constructed from the state-specific total cross section and the rovibrational state-to-state transition cross sections for bound-bound and bound-free transitions taken from a NASA database. This approach makes it possible to analyze the rotational-to-translational, vibrational-to-translational, and rotational-to-vibrational energy transfers and the chemical reactions without relying on macroscopic properties and phenomenological models. In nonequilibrium heat bath calculations, the results of the present state-resolved DSMC calculations are validated against master equation calculations and existing shock-tube experimental data for bound-bound and bound-free transitions. In various equilibrium and nonequilibrium heat bath conditions and 2D cylindrical flows, the DSMC calculations by the state-resolved method are compared with those obtained with previous phenomenological DSMC models, namely the variable soft sphere, phenomenological Larsen-Borgnakke, quantum kinetic, and total collision energy models. From these studies, it is concluded that the state-resolved method can accurately describe the rotational-to-translational, vibrational-to-translational, and rotational-to-vibrational transfers and the quasi-steady state of rotational and vibrational energies in nonequilibrium chemical reactions by state-to-state kinetics.

  3. Monte Carlo simulation of intercalated carbon nanotubes.

    PubMed

    Mykhailenko, Oleksiy; Matsui, Denis; Prylutskyy, Yuriy; Le Normand, Francois; Eklund, Peter; Scharff, Peter

    2007-01-01

    Monte Carlo simulations of single- and double-walled carbon nanotubes (CNTs) intercalated with different metals have been carried out. The interrelation between the length of a CNT and the number and type of metal atoms has also been established. This research is aimed at studying intercalated systems based on CNTs and d-metals such as Fe and Co. Factors influencing the stability of these composites have been determined theoretically by the Monte Carlo method with the Tersoff potential. The modeling of CNTs intercalated with metals by the Monte Carlo method has proved that there is a correlation between the length of a CNT and the number of endo-atoms of a specific type. Thus, in the case of a metallic CNT (9,0) with a length of 17 bands (3.60 nm), in contrast to Co atoms, Fe atoms are extruded out of the CNT if the number of atoms in the CNT is not less than eight. This paper thus shows that a CNT of a certain size can be intercalated with no more than eight Fe atoms. The systems investigated are stabilized by coordination of 3d-atoms close to the CNT wall with a radius-vector of (0.18-0.20) nm. Another characteristic feature is that, within the temperature range of (400-700) K, small systems exhibit a ground-state stabilization which is not characteristic of the larger ones. The behavior of Fe and Co endo-atoms between the walls of a double-walled carbon nanotube (DWCNT) is explained by a dominating van der Waals interaction between the Co atoms themselves, which is not true for the Fe atoms. PMID:17033783

  4. Fission Matrix Capability for MCNP Monte Carlo

    SciTech Connect

    Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.

    2012-09-05

    spatially low-order kernel, the fundamental eigenvector of which should converge faster than that of the continuous kernel. We can then redistribute the fission bank to match the fundamental fission-matrix eigenvector, effectively eliminating all higher modes. For all computations here biasing is not used, with the intention of comparing the unaltered, conventional Monte Carlo process with the fission matrix results. The source convergence of standard Monte Carlo criticality calculations is, to some extent, always subject to the characteristics of the problem. This method seeks to partially eliminate this problem-dependence by directly calculating the spatial coupling. The primary cost of this, which has prevented widespread use since its inception [2,3,4], is the extra storage required. Accounting for the coupling of all N spatial regions to every other region requires storing N^2 values. For realistic problems, where a fine resolution is required for the suppression of discretization error, the storage becomes inordinate. Two factors lead to a renewed interest here: the larger memory available on modern computers and the development of a better storage scheme based on physical intuition. When the distance between source and fission events is short compared with the size of the entire system, saving memory by accounting for only local coupling introduces little extra error. We can gain other information from directly tallying the fission kernel: higher eigenmodes and eigenvalues. Conventional Monte Carlo cannot calculate these data - here we have a way to get new information for multiplying systems. In Ref. [5], higher-mode eigenfunctions are analyzed for a three-region 1-dimensional problem and a 2-dimensional homogeneous problem. We analyze higher modes for more realistic problems. There is also the question of practical use of this information; here we examine a way of using eigenmode information to address the negative confidence interval bias due to inter
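    Once the fission matrix has been tallied, its fundamental eigenvector and the corresponding k-effective estimate can be extracted by simple power iteration, as in the hedged sketch below; the tally itself and the fission-bank redistribution are outside this toy.

    ```python
    import numpy as np

    def fundamental_mode(fission_matrix, tol=1e-10, max_iter=10_000):
        """Power iteration on a tallied fission matrix F, where F[i, j] is
        the expected number of fission neutrons born in region i per
        source neutron starting in region j."""
        n = fission_matrix.shape[0]
        s = np.full(n, 1.0 / n)          # flat initial source guess
        k = 0.0
        for _ in range(max_iter):
            s_next = fission_matrix @ s
            k = s_next.sum()             # eigenvalue (k-effective) estimate
            s_next /= k                  # renormalize the source shape
            if np.abs(s_next - s).max() < tol:
                s = s_next
                break
            s = s_next
        return k, s                      # k_eff and fundamental source shape
    ```

    The higher modes mentioned above would come from a full eigendecomposition of the same matrix, e.g. numpy.linalg.eig(fission_matrix).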

  5. Monte Carlo simulations within avalanche rescue

    NASA Astrophysics Data System (ADS)

    Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg

    2016-04-01

    Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies, such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation on a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents, it is rarely possible to derive quantitative criteria from historical statistics in the context of evidence-based medicine. For these rare but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. Here we present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches, depending on the search area, and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
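    The flavor of such a calculation is easy to sketch: draw burial depths from an assumed distribution and read off the probing depth that would reach a chosen fraction of victims. The lognormal parameters below are placeholders, not the avalanche statistics used in the study.

    ```python
    import numpy as np

    def required_probing_depth(coverage=0.95, n=100_000,
                               rng=np.random.default_rng()):
        """Probing depth [m] reaching `coverage` of simulated burials."""
        depths = rng.lognormal(mean=0.0, sigma=0.5, size=n)  # burial depths [m]
        return np.quantile(depths, coverage)
    ```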

  6. In Silico Generation of Peptides by Replica Exchange Monte Carlo: Docking-Based Optimization of Maltose-Binding-Protein Ligands

    PubMed Central

    Hong Enriquez, Rolando Pablo; Santambrogio, Carlo; Grandori, Rita; Marasco, Daniela; Giordano, Antonio; Scoles, Giacinto; Fortuna, Sara

    2015-01-01

    Short peptides can be designed in silico and synthesized through automated techniques, making them advantageous and versatile protein binders. A number of docking-based algorithms allow for a computational screening of peptides as binders. Here we developed ex novo peptides targeting the maltose site of the Maltose Binding Protein, the prototypical system for the study of protein-ligand recognition. We used a Monte Carlo based protocol to computationally evolve a set of octapeptides starting from a polyalanine sequence. We screened the candidate peptides in silico and characterized their binding abilities by surface plasmon resonance, fluorescence and electrospray ionization mass spectrometry assays. These experiments showed that the designed binders recognize their target with micromolar affinity. We finally discuss the results obtained in the light of further improvements in the ex novo optimization of peptide-based binders. PMID:26252476
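    The replica-exchange ingredient of such a protocol reduces to the standard Metropolis swap criterion between two temperatures, sketched below; the energies and inverse temperatures are whatever the underlying peptide model supplies.

    ```python
    import math, random

    def swap_accepted(beta_i, beta_j, e_i, e_j):
        """Standard replica-exchange Metropolis test: swapping the
        configurations of replicas at inverse temperatures beta_i and
        beta_j preserves the joint equilibrium distribution."""
        delta = (beta_i - beta_j) * (e_i - e_j)
        return random.random() < min(1.0, math.exp(delta))
    ```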

  7. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    SciTech Connect

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have recently become increasingly important. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many of these tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each approach are discussed.

  8. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Refined analysis and modeling of nuclear reactors can exceed the memory capacity of a single processor core. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  9. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

    SciTech Connect

    WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.

    2007-01-10

    MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and its excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C and also discusses ongoing code development.

  10. Monte Carlo methods in lattice gauge theories

    SciTech Connect

    Otto, S.W.

    1983-01-01

    The mass of the 0⁺ glueball for SU(2) gauge theory in 4 dimensions is calculated. This computation was done on a prototype parallel processor, and the implementation of gauge theories on this system is described in detail. Using an action of the purely Wilson form (trace of the plaquette in the fundamental representation), results with high statistics are obtained. These results are not consistent with scaling according to the continuum renormalization group. Using actions containing higher representations of the group, a search is made for one which is closer to the continuum limit. The choice is based upon the phase structure of these extended theories and also upon the Migdal-Kadanoff approximation to the renormalization group on the lattice. The mass of the 0⁺ glueball for this improved action is obtained, and the mass divided by the square root of the string tension is constant as the lattice spacing is varied. The other topic studied is the inclusion of dynamical fermions in Monte Carlo calculations via the pseudo-fermion technique. Monte Carlo results obtained with this method are compared with those from an exact algorithm based on Gauss-Seidel inversion. The methods were first applied to the Schwinger model and SU(3) theory.