Science.gov

Sample records for monte carlo characterization

  1. Accurate characterization of Monte Carlo calculated electron beams for radiotherapy.

    PubMed

    Ma, C M; Faddegon, B A; Rogers, D W; Mackie, T R

    1997-03-01

Monte Carlo studies of dose distributions in patients treated with radiotherapy electron beams would benefit from generalized models of clinical beams if such models introduce little error into the dose calculations. Methodology is presented for the design of beam models, including their evaluation in terms of how well they preserve the character of the clinical beam, and the effect of the beam models on the accuracy of dose distributions calculated with Monte Carlo. This methodology has been used to design beam models for electron beams from two linear accelerators, one with a scanned beam and one with a scattered beam. Monte Carlo simulations of the accelerator heads are done in which a record is kept of the particle phase-space, including the charge, energy, direction, and position of every particle that emerges from the treatment head, along with a tag regarding the details of the particle history. The character of the simulated beams is studied in detail and used to design various beam models, from a simple point source to a sophisticated multiple-source model which treats particles from different parts of a linear accelerator as coming from different sub-sources. Dose distributions calculated using both the phase-space data and the multiple-source model agree within 2%, demonstrating that the model is adequate for the purpose of Monte Carlo treatment planning for the beams studied. Benefits of the beam models over phase-space data for dose calculation are shown to include shorter computation time in the treatment head simulation and a smaller disk space requirement, both of which bear on the clinical utility of Monte Carlo treatment planning.

  2. Monte Carlo Benchmark

    2010-10-20

The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
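The create/track/tally/destroy loop described above can be sketched in a few lines. The following is a toy, serial analogue (my own illustration, not MCB code): forward-only flights through a 1D slab, with no MPI particle trading and no scattering kinematics.

```python
import math
import random

def transport(n_particles, sigma_t=1.0, absorb_prob=0.3, slab=5.0, seed=1):
    """Toy analogue of a Monte Carlo transport loop: create particles,
    track them through a 1D slab, tally track length, and destroy them
    on absorption or leakage (forward-only flights, no MPI)."""
    random.seed(seed)
    track_length = 0.0          # tally: total path length (a flux estimator)
    leaked = 0
    for _ in range(n_particles):                          # particle creation
        x = 0.0
        alive = True
        while alive:                                      # particle tracking
            # sample a free-flight distance from exp(-sigma_t * s)
            step = -math.log(1.0 - random.random()) / sigma_t
            if x + step >= slab:                          # leaks out of the slab
                track_length += slab - x
                leaked += 1
                alive = False
            else:
                x += step
                track_length += step
                if random.random() < absorb_prob:         # collision: absorbed
                    alive = False                         # particle destruction
    return leaked / n_particles, track_length / n_particles

leak_frac, mean_track = transport(20000)
# For this toy model, leak_frac ≈ exp(-absorb_prob * sigma_t * slab) ≈ 0.223
```

A parallel version would, as the abstract notes, distribute particle batches across ranks and exchange escaping particles via MPI; the kernel per particle stays the same.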

  3. Visibility assessment : Monte Carlo characterization of temporal variability.

    SciTech Connect

    Laulainen, N.; Shannon, J.; Trexler, E. C., Jr.

    1997-12-12

Current techniques for assessing the benefits of certain anthropogenic emission reductions are largely influenced by limitations in emissions data and atmospheric modeling capability and by the highly variable nature of meteorology. These data and modeling limitations are likely to continue for the foreseeable future, during which time important strategic decisions need to be made. Statistical atmospheric quality data and apportionment techniques are used in Monte-Carlo models to offset serious shortfalls in emissions, entrainment, topography, statistical meteorology data and atmospheric modeling. This paper describes the evolution of Department of Energy (DOE) Monte-Carlo based assessment models and the development of statistical inputs. A companion paper describes techniques which are used to develop the apportionment factors used in the assessment models.

  4. Monte Carlo Example Programs

    2006-05-09

The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo mathematical technique for calculating the ground state energy of the hydrogen atom.
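A minimal variational Monte Carlo calculation of the hydrogen ground state can be written in a few dozen lines. The sketch below is my own illustration in the spirit of VARHATOM (not the actual FORTRAN program): it Metropolis-samples |psi|^2 for the trial function psi(r) = exp(-alpha*r) and averages the local energy E_L(r) = -alpha^2/2 + (alpha - 1)/r in atomic units.

```python
import math
import random

def vmc_hydrogen(alpha, n_steps=20000, step=0.5, seed=0):
    """Variational Monte Carlo for hydrogen with trial psi(r) = exp(-alpha*r).
    Returns the Metropolis average of the local energy
    E_L(r) = -alpha**2/2 + (alpha - 1)/r  (Hartree atomic units)."""
    random.seed(seed)
    x, y, z, r = 1.0, 0.0, 0.0, 1.0
    e_sum = 0.0
    for _ in range(n_steps):
        # propose a random displacement of the electron
        xn = x + random.uniform(-step, step)
        yn = y + random.uniform(-step, step)
        zn = z + random.uniform(-step, step)
        rn = math.sqrt(xn * xn + yn * yn + zn * zn)
        # Metropolis acceptance on |psi|^2 = exp(-2*alpha*r)
        if random.random() < math.exp(-2.0 * alpha * (rn - r)):
            x, y, z, r = xn, yn, zn, rn
        e_sum += -0.5 * alpha ** 2 + (alpha - 1.0) / r
    return e_sum / n_steps

energy = vmc_hydrogen(alpha=1.0)   # alpha = 1 is exact: E = -0.5 hartree
```

At alpha = 1 the trial function is the exact ground state, so the local energy is constant at -0.5 hartree and the variance vanishes; for alpha != 1 the variational estimate lies above -0.5.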

  5. Characterization of parallel-hole collimator using Monte Carlo Simulation

    PubMed Central

    Pandey, Anil Kumar; Sharma, Sanjay Kumar; Karunanithi, Sellam; Kumar, Praveen; Bal, Chandrasekhar; Kumar, Rakesh

    2015-01-01

Objective: Accuracy of in vivo activity quantification improves after the correction of penetrated and scattered photons. However, accurate assessment is not possible with physical experiments alone. We have used Monte Carlo simulation to accurately assess the contribution of penetrated and scattered photons in the photopeak window. Materials and Methods: Simulations were performed with the Simulation of Imaging Nuclear Detectors Monte Carlo code. The simulations were set up in such a way that they provide the geometric, penetration, and scatter components after each simulation and write binary images to a data file. These components were analyzed graphically using Microsoft Excel (Microsoft Corporation, USA). Each binary image was imported into ImageJ, and a logarithmic transformation was applied for visual assessment of image quality, plotting a profile across the center of the images, and calculating the full width at half maximum (FWHM) in the horizontal and vertical directions. Results: The geometric, penetration, and scatter components at 140 keV for the low-energy general-purpose (LEGP) collimator were 93.20%, 4.13%, and 2.67%, respectively. The corresponding values for the low-energy high-resolution (LEHR), medium-energy general-purpose (MEGP), and high-energy general-purpose (HEGP) collimators were (94.06%, 3.39%, 2.55%), (96.42%, 1.52%, 2.06%), and (96.70%, 1.45%, 1.85%), respectively. For the MEGP collimator at 245 keV and the HEGP collimator at 364 keV, the components were 89.10%, 7.08%, 3.82% and 67.78%, 18.63%, 13.59%, respectively. Conclusion: The LEGP and LEHR collimators are best suited to image 140 keV photons. The HEGP collimator can be used for 245 keV and 364 keV photons; however, corrections for penetration and scatter must be applied if one intends to quantify in vivo activity at 364 keV. Due to heavy penetration and scattering, 511 keV photons should not be imaged with the HEGP collimator. PMID:25829730

  6. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.

  7. Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Bardenet, Rémi

    2013-07-01

Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among them rejection sampling, importance sampling, and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate the two. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
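Of the algorithms this review covers, rejection sampling is the simplest to state: propose from an easy distribution q, and accept a proposal x with probability p(x)/(M q(x)), where M bounds p/q. A generic textbook sketch (my own example, not code from the review), targeting p(x) = sin(x)/2 on [0, pi] with a uniform proposal:

```python
import math
import random

def rejection_sample(n_proposals, seed=42):
    """Rejection sampling from p(x) = sin(x)/2 on [0, pi] using a uniform
    proposal q(x) = 1/pi.  The envelope constant is M = pi/2, so a proposal
    is accepted with probability p(x) / (M * q(x)) = sin(x)."""
    random.seed(seed)
    samples = []
    for _ in range(n_proposals):
        x = random.uniform(0.0, math.pi)        # draw from the proposal q
        if random.random() < math.sin(x):       # accept with probability sin(x)
            samples.append(x)
    return samples

samples = rejection_sample(20000)
accept_rate = len(samples) / 20000              # theory: 1/M = 2/pi ≈ 0.637
sample_mean = sum(samples) / len(samples)       # theory: pi/2 by symmetry
```

The overall acceptance rate equals 1/M, which is why rejection sampling degrades quickly when p and q are poorly matched; importance sampling and MCMC are the usual remedies.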

  8. Monte Carlo fluorescence microtomography

    NASA Astrophysics Data System (ADS)

    Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge

    2011-07-01

Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense scattering of light significantly degrades the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an l0-regularized tomography model and provides an excellent solution. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probes accurately and reliably.

  9. MORSE Monte Carlo code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  10. Symbolic implicit Monte Carlo

    SciTech Connect

Brooks, E.D., III

    1989-08-01

We introduce a new implicit Monte Carlo technique for solving time dependent radiation transport problems involving spontaneous emission. In the usual implicit Monte Carlo procedure an effective scattering term is dictated by the requirement of self-consistency between the transport and implicitly differenced atomic population equations. The effective scattering term, a source of inefficiency for optically thick problems, becomes an impasse for problems with gain, where its sign is negative. In our new technique the effective scattering term does not occur and the execution time for the Monte Carlo portion of the algorithm is independent of opacity. We compare the performance and accuracy of the new symbolic implicit Monte Carlo technique to the usual effective scattering technique for the time dependent description of a two-level system in slab geometry. We also examine the possibility of effectively exploiting multiprocessors on the algorithm, obtaining supercomputer performance using shared memory multiprocessors based on cheap commodity microprocessor technology. © 1989 Academic Press, Inc.

  11. Vectorized Monte Carlo

    SciTech Connect

    Brown, F.B.

    1981-01-01

Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes.

  12. Characterizing a proton beam scanning system for Monte Carlo dose calculation in patients.

    PubMed

    Grassberger, C; Lomax, Anthony; Paganetti, H

    2015-01-21

The presented work has two goals. First, to demonstrate the feasibility of accurately characterizing a proton radiation field at treatment head exit for Monte Carlo dose calculation of active scanning patient treatments. Second, to show that this characterization can be done based on measured depth dose curves and spot size alone, without consideration of the exact treatment head delivery system. This is demonstrated through calibration of a Monte Carlo code to the specific beam lines of two institutions, Massachusetts General Hospital (MGH) and Paul Scherrer Institute (PSI). Comparison of simulations modeling the full treatment head at MGH to ones employing a parameterized phase space of protons at treatment head exit reveals the adequacy of the method for patient simulations. The secondary particle production in the treatment head is typically below 0.2% of primary fluence, except for low-energy electrons (<0.6 MeV for 230 MeV protons), whose contribution to skin dose is negligible. However, there is a significant difference between the two methods in the low-dose penumbra, making full treatment head simulations necessary to study out-of-field effects such as secondary cancer induction. To calibrate the Monte Carlo code to measurements in a water phantom, we use an analytical Bragg peak model to extract the range-dependent energy spread at the two institutions, as this quantity is usually not available through measurements. Comparison of the measured with the simulated depth dose curves demonstrates agreement within 0.5 mm over the entire energy range. Subsequently, we simulate three patient treatments with varying anatomical complexity (liver, head and neck and lung) to give an example of how this approach can be employed to investigate site-specific discrepancies between treatment planning system and Monte Carlo simulations.

  15. Hamiltonian Monte Carlo algorithm for the characterization of hydraulic conductivity from the heat tracing data

    NASA Astrophysics Data System (ADS)

    Djibrilla Saley, A.; Jardani, A.; Soueid Ahmed, A.; Raphael, A.; Dupont, J. P.

    2016-11-01

Estimating spatial distributions of the hydraulic conductivity in heterogeneous aquifers has always been an important and challenging task in hydrology. Generally, the hydraulic conductivity field is determined from hydraulic head or pressure measurements. In the present study, we propose to use temperature data as a source of information for characterizing the spatial distribution of the hydraulic conductivity field. To this end, we performed a laboratory sandbox experiment with the aim of imaging the heterogeneities of the hydraulic conductivity field from thermal monitoring. During the laboratory experiment, we injected a hot water pulse, which induced a heat plume moving through the sandbox. The induced plume was followed by a set of thermocouples placed in the sandbox. After the temperature data acquisition, we performed a hydraulic tomography using the stochastic Hybrid Monte Carlo approach, also called the Hamiltonian Monte Carlo (HMC) algorithm, to invert the temperature data. This algorithm is based on a combination of the Metropolis Monte Carlo method and the Hamiltonian dynamics approach. The parameterization of the inverse problem was done with the Karhunen-Loève (KL) expansion to reduce the dimensionality of the unknown parameters. Our approach provided a successful reconstruction of the hydraulic conductivity field with low computational effort.
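The HMC mechanics this record relies on can be shown on a trivial target. The sketch below is a generic illustration (not the paper's inversion code): for a standard normal target with potential U(q) = q^2/2, it resamples a Gaussian momentum, integrates Hamilton's equations with the leapfrog scheme, and applies a Metropolis accept/reject on the total energy.

```python
import math
import random

def hmc_standard_normal(n_samples=5000, eps=0.2, n_leap=10, seed=3):
    """Hamiltonian Monte Carlo for a 1D standard normal target:
    U(q) = q**2 / 2, so grad U(q) = q and H(q, p) = U(q) + p**2 / 2."""
    random.seed(seed)
    q = 0.0
    draws = []
    for _ in range(n_samples):
        p = random.gauss(0.0, 1.0)            # resample momentum
        qn, pn = q, p
        pn -= 0.5 * eps * qn                  # leapfrog: half step in momentum
        for _ in range(n_leap - 1):
            qn += eps * pn                    # full step in position
            pn -= eps * qn                    # full step in momentum
        qn += eps * pn
        pn -= 0.5 * eps * qn                  # closing half step in momentum
        h_old = 0.5 * (q * q + p * p)
        h_new = 0.5 * (qn * qn + pn * pn)
        # Metropolis correction for the leapfrog discretization error
        if random.random() < math.exp(min(0.0, h_old - h_new)):
            q = qn                            # accept; otherwise keep old q
        draws.append(q)
    return draws

draws = hmc_standard_normal()
mean_q = sum(draws) / len(draws)
var_q = sum((v - mean_q) ** 2 for v in draws) / len(draws)
```

In the paper's setting, q would be the vector of KL expansion coefficients and grad U would come from the adjoint of the heat-transport forward model; the leapfrog/accept structure is unchanged.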

  16. Dosimetric characterization of an 192Ir brachytherapy source with the Monte Carlo code PENELOPE.

    PubMed

    Casado, Francisco Javier; García-Pareja, Salvador; Cenizo, Elena; Mateo, Beatriz; Bodineau, Coral; Galán, Pedro

    2010-01-01

Monte Carlo calculation is a widespread and well-established practice for obtaining brachytherapy source dosimetric parameters. In this study, the recommendations of the AAPM TG-43U1 report have been followed to characterize the Varisource VS2000 192Ir high dose rate source, provided by Varian Oncology Systems. In order to obtain dosimetric parameters for this source, Monte Carlo calculations with the PENELOPE code have been carried out. TG-43 formalism parameters are presented, i.e., air kerma strength, dose rate constant, radial dose function and anisotropy function. In addition, a table of dose rate in water in 2D Cartesian coordinates has been calculated. These quantities are compared with reference data for this source, and good agreement is found. The data in the present study complement published data in the following respects: (i) TG-43U1 recommendations are followed with regard to phantom ambient conditions and to uncertainty analysis, including statistical (type A) and systematic (type B) contributions; (ii) the PENELOPE code is benchmarked for this source; (iii) the Monte Carlo calculation methodology differs from that usually published in the way absorbed dose is estimated, leaving out the track-length estimator; (iv) the results of the present work comply with the most recent AAPM and ESTRO physics committee recommendations about Monte Carlo techniques with regard to dose rate uncertainty values and the differences between our results and reference data. The results stated in this paper provide a complete parameter collection, which can be used for dosimetric calculations as well as a means of comparison with other datasets for this source.

  17. Monte Carlo neutrino oscillations

    SciTech Connect

    Kneller, James P.; McLaughlin, Gail C.

    2006-03-01

We demonstrate that the effects of matter upon neutrino propagation may be recast as the scattering of the initial neutrino wave function. Exchanging the differential Schrödinger equation for an integral equation for the scattering matrix S permits a Monte Carlo method for the computation of S that removes many of the numerical difficulties associated with direct integration techniques.

  18. Baseball Monte Carlo Style.

    ERIC Educational Resources Information Center

    Houser, Larry L.

    1981-01-01

    Monte Carlo methods are used to simulate activities in baseball such as a team's "hot streak" and a hitter's "batting slump." Student participation in such simulations is viewed as a useful method of giving pupils a better understanding of the probability concepts involved. (MP)
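A "hot streak" simulation of the kind the article describes takes only a few lines. The sketch below is illustrative (the batting average, season length, and streak length are my own assumptions, not values from the article): it estimates the chance that a .300 hitter records some run of consecutive hits over a season of independent at-bats.

```python
import random

def streak_prob(avg=0.300, at_bats=500, streak=5, n_trials=4000, seed=7):
    """Monte Carlo estimate of the probability that a hitter with batting
    average `avg` gets at least `streak` consecutive hits somewhere in a
    season of `at_bats` independent at-bats."""
    random.seed(seed)
    count = 0
    for _ in range(n_trials):
        run = best = 0
        for _ in range(at_bats):
            if random.random() < avg:     # a hit
                run += 1
                if run > best:
                    best = run
            else:                         # an out resets the run
                run = 0
        if best >= streak:
            count += 1
    return count / n_trials

p5 = streak_prob(streak=5)   # roughly 0.57 for these parameters
p8 = streak_prob(streak=8)   # longer streaks are far less likely
```

Students can vary `avg` or `streak` and compare the simulated frequencies against intuition, which is exactly the pedagogical use the article advocates.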

  19. Characterization of dose in stereotactic body radiation therapy of lung lesions via Monte Carlo calculation

    NASA Astrophysics Data System (ADS)

    Rassiah, Premavathy

Stereotactic Body Radiation Therapy is a new form of treatment in which hypofractionated (i.e., large dose fractions), conformal doses are delivered to small extracranial target volumes. This technique has proven to be especially effective for treating lung lesions. The inability of most commercially available algorithms/treatment planning systems to accurately account for electron transport in regions of heterogeneous electron density and tissue interfaces makes prediction of accurate doses especially challenging for such regions. Monte Carlo, a model-based calculation algorithm, has proven to be extremely accurate for dose calculation in both homogeneous and inhomogeneous environments. This study attempts to accurately characterize the doses received by static targets located in the lung, as well as critical structures (contralateral and ipsilateral lung, major airways, esophagus, and spinal cord), for the serial tomotherapeutic intensity-modulated delivery method used for stereotactic body radiation therapy at the Cancer Therapy and Research Center. PEREGRINE (v1.6, NOMOS) Monte Carlo doses were compared to the Finite Sized Pencil Beam/Effective Path Length (EPL) predicted values from the CORVUS 5.0 planning system. The Monte Carlo based treatment planning system was first validated in both homogeneous and inhomogeneous environments. The plans of 77 stereotactic body radiation therapy lung patients previously treated with doses calculated using the Finite Sized Pencil Beam/Effective Path Length algorithm were then retrieved and recalculated with Monte Carlo. All 77 patients' plans were also recalculated without inhomogeneity correction in an attempt to counteract the known overestimation of dose at the periphery of the target by EPL with increased attenuation. The critical structures were delineated in order to standardize the contouring. Both the ipsilateral and contralateral lungs were contoured.
The major airways were contoured from the apex of the lungs (trachea) to 4 cm below

  20. SU-E-T-237: Monte Carlo Dosimetric Characterization of the Mobetron Mobile Linac

    SciTech Connect

    Garcia, F; Granero, D; Vijande, J; Ballester, F; Perez-Calatayud, J

    2014-06-01

Purpose: The aim of this work is to characterize dosimetrically a clinical intraoperative electron beam accelerator, the Mobetron (IntraOp Medical, Inc.), in clinical use in our hospital. Once this first step is completed, our purpose is to evaluate shielding requirements for such a device by preparing adequate phase space files. Methods: It is known that the electron beam simulation parameters required for state-of-the-art Monte Carlo codes to obtain a good match with measured data, like the mean energy or the FWHM, may not be code-independent due to the different sets of processes simulated and formalisms involved. Therefore, to cross-check our results against any issue in the simulation, we have compared experimental data (PDD and profiles for electrons in the range 4 to 12 MeV) with simulations performed independently using both the Penelope2011 and Geant4 codes. To do so, the geometry and materials of the head of the accelerator have been fully characterized following information provided by the manufacturer. Results: Both simulations agree with experimental data within experimental uncertainties (±1 mm displacement), although small variations (less than 10%) in the mean energy and FWHM are required to match measured values, depending on the code used. Conclusion: Independent Monte Carlo simulations were used to obtain an excellent match to measured electron dose distributions. This opens the way to using such data for evaluating shielding requirements, which is the main objective of this project.

  1. Monte Carlo and analytical calculations for characterization of gas bremsstrahlung in ILSF insertion devices

    NASA Astrophysics Data System (ADS)

    Salimi, E.; Rahighi, J.; Sardari, D.; Mahdavi, S. R.; Lamehi Rachti, M.

    2014-12-01

Gas bremsstrahlung is generated in high energy electron storage rings through interaction of the electron beam with the residual gas molecules in the vacuum chamber. In this paper, Monte Carlo calculations have been performed to evaluate the radiation hazard due to gas bremsstrahlung in the Iranian Light Source Facility (ILSF) insertion devices. Shutter/stopper dimensions are determined, and the dose rate from photoneutrons produced via the giant resonance photonuclear reaction inside the shutter/stopper is also obtained. Other characteristics of gas bremsstrahlung, such as photon fluence, energy spectrum, angular distribution, and equivalent dose in a tissue equivalent phantom, have also been investigated with the FLUKA Monte Carlo code.

  2. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  3. Markov model for characterizing neuropsychologic impairment and Monte Carlo simulation for optimizing efavirenz therapy.

    PubMed

    Bisaso, Kuteesa R; Mukonzo, Jackson K; Ette, Ene I

    2015-11-01

The study was undertaken to develop a pharmacokinetic-pharmacodynamic model to characterize efavirenz-induced neuropsychologic impairment, given preexistent impairment, which can be used for the optimization of efavirenz therapy via Monte Carlo simulations. The modeling was performed with NONMEM 7.2. A 1-compartment pharmacokinetic model was fitted to efavirenz concentration data from 196 Ugandan patients treated with a 600-mg daily efavirenz dose. Pharmacokinetic parameters and area under the curve (AUC) were derived. Neuropsychologic evaluation of the patients was done at baseline and in week 2 of antiretroviral therapy. A discrete-time 2-state first-order Markov model was developed to describe neuropsychologic impairment. Efavirenz AUC, day 3 efavirenz trough concentration, and female sex increased the probability (P01) of neuropsychologic impairment. Efavirenz oral clearance (CL/F) increased the probability (P10) of resolution of preexistent neuropsychologic impairment. The predictive performance of the reduced (final) model, given the data, incorporating AUC on P01 and CL/F on P10, showed that the model adequately characterized the neuropsychologic impairment observed with efavirenz therapy. Simulations with the developed model predicted a 7% overall reduction in neuropsychologic impairment probability at 450 mg of efavirenz. We recommend a reduction in efavirenz dose from 600 to 450 mg, because the 450-mg dose has been shown to produce sustained antiretroviral efficacy.
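The discrete-time two-state first-order Markov structure used here is easy to simulate. The sketch below is a generic analogue (the transition probabilities are made-up illustrations, not the fitted covariate-dependent P01 and P10 from the study): state 0 is unimpaired, state 1 is impaired, and the long-run prevalence approaches p01 / (p01 + p10).

```python
import random

def impairment_prevalence(p01, p10, n_patients=10000, n_cycles=50, seed=11):
    """Simulate a discrete-time two-state first-order Markov chain:
    state 0 = unimpaired, state 1 = impaired; p01 is the per-cycle
    probability of becoming impaired, p10 of resolving impairment.
    Returns the fraction of patients impaired after n_cycles."""
    random.seed(seed)
    impaired = 0
    for _ in range(n_patients):
        state = 0
        for _ in range(n_cycles):
            if state == 0:
                if random.random() < p01:
                    state = 1
            elif random.random() < p10:
                state = 0
        impaired += state
    return impaired / n_patients

prev = impairment_prevalence(p01=0.1, p10=0.3)
# long-run prevalence approaches p01 / (p01 + p10) = 0.25
```

In the study's Monte Carlo dose-optimization step, p01 and p10 would be driven per patient by simulated AUC and CL/F, so lowering the dose shifts p01 down and the simulated prevalence with it.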

  4. Quantum Gibbs ensemble Monte Carlo

    SciTech Connect

    Fantoni, Riccardo; Moroni, Saverio

    2014-09-21

We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.

  5. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551

  6. Monte Carlo based investigation of Berry phase for depth resolved characterization of biomedical scattering samples

    SciTech Connect

    Baba, Justin S; John, Dwayne O; Koju, Vijay

    2015-01-01

The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computational modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute-force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10 million) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic scatter but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples at the preceding depths where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman et al. [1] to include the computationally intensive tracking of photon trajectory, in addition to polarization state, at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
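Stripped of the polarization and Berry-phase tracking that are this paper's contribution, the underlying photon random walk can be sketched as follows. Scattering is taken as isotropic and all coefficient values are illustrative, not from the paper.

```python
import math
import random

def photon_walk(n_photons=5000, mu_s=10.0, mu_a=0.1, seed=3):
    """Isotropic-scattering photon random walk in an infinite medium.

    Free paths are sampled from Exp(mu_t); absorption is decided at each
    interaction with probability mu_a / mu_t.  Photons launch downward
    (+z) from z = 0; returns the mean maximum depth reached.
    """
    rng = random.Random(seed)
    mu_t = mu_s + mu_a
    depth_sum = 0.0
    for _ in range(n_photons):
        x = y = z = 0.0
        ux, uy, uz = 0.0, 0.0, 1.0  # initial direction along +z
        max_z = 0.0
        while True:
            s = -math.log(1.0 - rng.random()) / mu_t  # free path length
            x, y, z = x + s * ux, y + s * uy, z + s * uz
            max_z = max(max_z, z)
            if rng.random() < mu_a / mu_t:            # photon absorbed
                break
            # isotropic scatter: new direction uniform on the unit sphere
            uz = 2.0 * rng.random() - 1.0
            phi = 2.0 * math.pi * rng.random()
            sin_t = math.sqrt(1.0 - uz * uz)
            ux, uy = sin_t * math.cos(phi), sin_t * math.sin(phi)
        depth_sum += max_z
    return depth_sum / n_photons
```

Anisotropic (e.g., Henyey-Greenstein) phase functions, boundaries, and the per-event reference-frame rotations needed for polarization tracking would slot into the scattering step.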

  7. Characterizing the three-orbital Hubbard model with determinant quantum Monte Carlo

    DOE PAGES

    Kung, Y. F.; Chen, C. -C.; Wang, Yao; Huang, E. W.; Nowadnick, E. A.; Moritz, B.; Scalettar, R. T.; Johnston, S.; Devereaux, T. P.

    2016-04-29

    Here, we characterize the three-orbital Hubbard model using state-of-the-art determinant quantum Monte Carlo (DQMC) simulations with parameters relevant to the cuprate high-temperature superconductors. The simulations find that doped holes preferentially reside on oxygen orbitals and that the (π,π) antiferromagnetic ordering vector dominates in the vicinity of the undoped system, as known from experiments. The orbitally-resolved spectral functions agree well with photoemission spectroscopy studies and enable identification of orbital content in the bands. A comparison of DQMC results with exact diagonalization and cluster perturbation theory studies elucidates how these different numerical techniques complement one another to produce a more complete understandingmore » of the model and the cuprates. Interestingly, our DQMC simulations predict a charge-transfer gap that is significantly smaller than the direct (optical) gap measured in experiment. Most likely, it corresponds to the indirect gap that has recently been suggested to be on the order of 0.8 eV, and demonstrates the subtlety in identifying charge gaps.« less

  8. Monte Carlo based investigation of Berry phase for depth resolved characterization of biomedical scattering samples

    NASA Astrophysics Data System (ADS)

    Baba, J. S.; Koju, V.; John, D.

    2015-03-01

The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computational modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute-force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10^7) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic scatter but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples at the preceding depths where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman et al. to include the computationally intensive tracking of photon trajectory, in addition to polarization state, at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.

  9. Isotropic Monte Carlo Grain Growth

    2013-04-25

IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
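The isotropic flavor of such simulations is captured by a zero-temperature Q-state Potts model. The sketch below uses a square lattice for simplicity (IMCGG itself uses a hexagonal grid), and all parameter values are illustrative.

```python
import random

def potts_grain_growth(n=32, q=16, sweeps=30, seed=5):
    """Zero-temperature Potts-model grain growth on an n x n periodic square lattice.

    Each site holds one of q orientations; a site may adopt a random
    neighbor's orientation if that does not increase its number of
    unlike-neighbor bonds.  Returns (initial, final) boundary energy,
    counted as the total number of unlike-neighbor bonds.
    """
    rng = random.Random(seed)
    grid = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]

    def neighbors(i, j):
        return [((i + 1) % n, j), ((i - 1) % n, j), (i, (j + 1) % n), (i, (j - 1) % n)]

    def site_energy(i, j, s):
        return sum(1 for (a, b) in neighbors(i, j) if grid[a][b] != s)

    def total_energy():
        # each unlike bond is seen from both ends, hence the division by 2
        return sum(site_energy(i, j, grid[i][j]) for i in range(n) for j in range(n)) // 2

    e0 = total_energy()
    for _ in range(sweeps * n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        a, b = rng.choice(neighbors(i, j))
        new = grid[a][b]
        if new != grid[i][j] and site_energy(i, j, new) <= site_energy(i, j, grid[i][j]):
            grid[i][j] = new
    return e0, total_energy()
```

Energy-lowering flips coarsen the random initial microstructure into grains, so the boundary energy drops monotonically; a misorientation-dependent variant would replace the unlike-bond count with an angle-dependent bond energy.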

  10. Discrete Diffusion Monte Carlo for grey Implicit Monte Carlo simulations.

    SciTech Connect

    Densmore, J. D.; Urbatsch, T. J.; Evans, T. M.; Buksas, M. W.

    2005-01-01

Discrete Diffusion Monte Carlo (DDMC) is a hybrid transport-diffusion method for Monte Carlo simulations in diffusive media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Thus, DDMC produces accurate solutions while increasing the efficiency of the Monte Carlo calculation. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for grey Implicit Monte Carlo calculations. First, we employ a diffusion equation that is discretized in space but continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. In addition, we treat particles incident on an optically thick region using the asymptotic diffusion-limit boundary condition. This interface technique can produce accurate solutions even if the incident particles are distributed anisotropically in angle. Finally, we develop a method for estimating radiation momentum deposition during the DDMC simulation. With a set of numerical examples, we demonstrate the accuracy and efficiency of our improved DDMC method.
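The "discrete steps in space, continuous in time" idea can be illustrated with a continuous-time random walk on a 1D grid. This is only a caricature of DDMC (no transport coupling, no asymptotic boundary condition), with illustrative names and parameters.

```python
import random

def ddmc_walk(n_particles=4000, dx=0.1, diff=0.25, t_end=1.0, seed=11):
    """Continuous-time random walk on a 1D grid of cell width dx.

    A particle leaves its cell at total rate 2*D/dx**2 (rate D/dx**2 to
    each side), so its clock advances by exponential waiting times and its
    time is always known exactly.  The mean-square displacement approaches
    the diffusion result 2*D*t; the sample estimate at t_end is returned.
    """
    rng = random.Random(seed)
    rate = 2.0 * diff / dx ** 2
    msd = 0.0
    for _ in range(n_particles):
        pos, t = 0, 0.0
        while True:
            t += rng.expovariate(rate)        # exponential waiting time in the cell
            if t > t_end:
                break                         # no ambiguity about the particle's time
            pos += 1 if rng.random() < 0.5 else -1
        msd += (pos * dx) ** 2
    return msd / n_particles
```

With D = 0.25 and t = 1, the mean-square displacement should be close to 2·D·t = 0.5; a full DDMC scheme would additionally hand particles off to standard Monte Carlo transport at optically thin cell interfaces.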

  11. Monte Carlo characterization of an ytterbium-169 high dose rate brachytherapy source with analysis of statistical uncertainty.

    PubMed

    Medich, David C; Tries, Mark A; Munro, John J

    2006-01-01

An ytterbium-169 high dose rate brachytherapy source, distinguished by an intensity-weighted average photon energy of 92.7 keV and a 32.015 ± 0.009 day half-life, is characterized in terms of the updated AAPM Task Group Report No. 43 specifications using the MCNP5 Monte Carlo computer code. In accordance with these specifications, the investigation included Monte Carlo simulations both in water and air with the in-air photon spectrum filtered to remove low-energy photons below 10 keV. TG-43 dosimetric data, including S_K, D(r,θ), Λ, g_L(r), F(r,θ), φ_an(r), and φ̄_an, were calculated, and the statistical uncertainties in these parameters were derived in the appendix.

  12. X-ray fluorescence spectroscopy and Monte Carlo characterization of a unique Nuragic artifact (Sardinia, Italy)

    NASA Astrophysics Data System (ADS)

    Brunetti, Antonio; Depalmas, Anna; di Gennaro, Francesco; Serges, Alessandra; Schiavon, Nicola

    2016-07-01

The chemical composition of a unique bronze artifact known as the "Cesta" ("Basket"), belonging to the ancient Nuragic civilization of the Island of Sardinia, Italy, has been analyzed by combining X-ray Fluorescence Spectroscopy (XRF) with Monte Carlo simulations using the XRMC code. The "Cesta" was probably discovered in the 18th century, with the first graphic representation reported around 1761. In a later drawing (dated 1764), the basket is depicted being carried upside-down on the shoulder of a large bronze warrior (Barthélemy, 1761; Pinza, 1901; Winckelmann, 1776). The two pictorial representations differ only by the presence of handles in the more recent one. XRF measurements revealed that the handles of the object are composed of brass while the other parts are composed of bronze, suggesting that the handles are a later addition to the original object. The artifact is covered at its surface by a fairly thick corrosion patina. In order to determine the bulk bronze composition without removing the outer patina, the artifact was modeled as a two-layer object in the Monte Carlo simulations.

  13. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  14. Multilevel sequential Monte Carlo samplers

    DOE PAGES

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-08-24

Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
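The telescoping identity can be sketched on a toy problem: estimating E[X(1)] for a geometric Brownian motion discretized by Euler-Maruyama, where each correction level couples a fine and a coarse path through shared Brownian increments. This plain Monte Carlo sketch omits the SMC layer the paper introduces; all names and parameter values are illustrative.

```python
import math
import random

def euler_pair(rng, n_fine, r=0.05, sigma=0.2, t=1.0, x0=1.0):
    """One coupled (fine, coarse) Euler path of dX = r*X dt + sigma*X dW.

    The coarse path (n_fine // 2 steps) reuses the fine path's Brownian
    increments, which is what makes the level corrections low-variance.
    """
    h = t / n_fine
    xf = xc = x0
    for _ in range(n_fine // 2):
        dw1 = rng.gauss(0.0, math.sqrt(h))
        dw2 = rng.gauss(0.0, math.sqrt(h))
        xf += r * xf * h + sigma * xf * dw1          # two fine steps
        xf += r * xf * h + sigma * xf * dw2
        xc += r * xc * (2 * h) + sigma * xc * (dw1 + dw2)  # one coarse step
    return xf, xc

def mlmc_estimate(levels=3, n0=20000, seed=13):
    """Telescoping sum: E[P_0] + sum_l E[P_l - P_{l-1}], fewer samples per level."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(levels + 1):
        n_fine = 2 ** (level + 1)
        n_samples = max(n0 // 4 ** level, 500)
        acc = 0.0
        for _ in range(n_samples):
            xf, xc = euler_pair(rng, n_fine)
            acc += xf if level == 0 else xf - xc
        total += acc / n_samples
    return total  # estimates E[X(1)] = exp(r), about 1.051 for r = 0.05
```

Because the coupled corrections have small variance, the expensive fine levels need far fewer samples than the cheap coarse level, which is the source of MLMC's cost reduction.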

  15. Monte Carlo calculations of nuclei

    SciTech Connect

    Pieper, S.C.

    1997-10-01

Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.

  16. Ensemble Monte Carlo characterization of graded Al(x)Ga(1-x)As heterojunction barriers

    NASA Technical Reports Server (NTRS)

    Kamoua, R.; East, J. R.; Haddad, G. I.

    1990-01-01

The current-voltage characteristics of graded Al(x)Ga(1-x)As heterojunction barriers were investigated using a self-consistent ensemble Monte Carlo method. Results are presented for barriers with two doping levels (10^15/cu cm and 10^17/cu cm) and two barrier heights (100 and 265 meV). It was found that the lower barrier structure exhibited little rectification at room temperature at both doping levels, while the higher barrier exhibited considerable rectification. The structures with the lower doping value exhibited a smaller current in both forward and reverse regions, due to space-charge effects. The results of studies of the energy and momentum distribution functions along the barrier indicate that the assumption of a drifted Maxwellian distribution used in energy-momentum models is not justified for Gamma-valley electrons.

  17. Physical characterization of single convergent beam device for teletherapy: theoretical and Monte Carlo approach.

    PubMed

    Figueroa, R G; Valente, M

    2015-09-21

    The main purpose of this work is to determine the feasibility and physical characteristics of a new teletherapy device based on a convergent x-ray beam, with energies like those used in radiotherapy, providing highly concentrated dose delivery to the target. We have named this approach Convergent Beam Radio Therapy (CBRT). Analytical methods are developed first in order to determine the dosimetric characteristics of an ideal convergent photon beam in a hypothetical water phantom. Then, using the PENELOPE Monte Carlo code, a similar convergent beam applied to the water phantom is compared with the analytical method. The CBRT device (Converay®) is designed to attach to the head of a LINAC. The converging photon beam is achieved through the perpendicular impact of LINAC electrons on a large, thin, spherical-cap target in which Bremsstrahlung (high-energy x-rays) is generated. In this way, the electrons impact upon various points of the cap (the CBRT condition), aimed at the focal point. With the X radiation (Bremsstrahlung) directed forward, a system of movable collimators produces many beams at the output that together form a virtually convergent beam. Further Monte Carlo simulations are performed using realistic conditions, for a thin target shaped as a large spherical cap with a radius r of around 10-30 cm and a curvature radius of approximately 70 to 100 cm, and a cubic water phantom centered at the focal point of the cap. All the interaction mechanisms of the Bremsstrahlung radiation with the phantom are taken into consideration for different energies and cap thicknesses. Also, the magnitudes of the electric and/or magnetic fields necessary to deflect clinical-use electron beams (0.1 to 20 MeV) are determined using electromagnetism equations with relativistic corrections. This way the above-mentioned beam is manipulated and guided for its perpendicular impact

  19. A simple methodology for characterization of germanium coaxial detectors by using Monte Carlo simulation and evolutionary algorithms.

    PubMed

    Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P

    2015-11-01

    Determining the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) for the energy of interest. The difficulties of experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulating the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the detector characterization parameters through a computational procedure that can be reproduced at a standard research lab. The method determines the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. PMID:26188622
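The optimization loop described in these records can be sketched with a toy stand-in for the expensive step: in the real method, each candidate parameter set triggers a full Monte Carlo transport run, whereas here a hypothetical two-parameter efficiency curve plays that role. The evolutionary scheme (elitism plus shrinking Gaussian mutations) recovers known parameters from reference efficiencies; every name and number below is illustrative.

```python
import math
import random

def toy_fepe(energy_kev, a, b):
    """Hypothetical stand-in for an MC-computed full energy peak efficiency."""
    return a * math.exp(-b * energy_kev)

def cost(params, energies, reference):
    """Squared error between the candidate's curve and the reference FEPEs."""
    a, b = params
    return sum((toy_fepe(e, a, b) - r) ** 2 for e, r in zip(energies, reference))

def evolve_fit(energies, reference, generations=80, parents=8, children=40, seed=17):
    """Simple elitist evolution strategy with decaying mutation widths."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.0, 1.0), rng.uniform(0.0, 0.01)) for _ in range(children)]
    for gen in range(generations):
        pop.sort(key=lambda p: cost(p, energies, reference))
        elite = pop[:parents]          # elitism: best candidates survive unchanged
        decay = 0.95 ** gen            # mutations shrink as the search converges
        pop = list(elite)
        while len(pop) < children:
            a, b = rng.choice(elite)
            pop.append((a + rng.gauss(0.0, 0.1 * decay),
                        b + rng.gauss(0.0, 0.001 * decay)))
    return min(pop, key=lambda p: cost(p, energies, reference))

# Recover known parameters from noiseless "reference" efficiencies
energies = [60.0, 122.0, 344.0, 662.0, 1173.0, 1332.0]
truth = (0.5, 0.002)
reference = [toy_fepe(e, *truth) for e in energies]
a_fit, b_fit = evolve_fit(energies, reference)
```

In the actual methodology, `cost` would compare simulated against reference FEPEs by rerunning the photon-transport simulation with each candidate detector geometry, so each evaluation is far more expensive than this toy curve.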

  1. Monte Carlo implementation, validation, and characterization of a 120 leaf MLC

    SciTech Connect

    Fix, Michael K.; Volken, Werner; Frei, Daniel; Frauchiger, Daniel; Born, Ernst J.; Manser, Peter

    2011-10-15

Purpose: Recently, the new high definition multileaf collimator (HD120 MLC) was commercialized by Varian Medical Systems providing high resolution in the center section of the treatment field. The aim of this work is to investigate the characteristics of the HD120 MLC using Monte Carlo (MC) methods. Methods: Based on the information of the manufacturer, the HD120 MLC was implemented into the already existing Swiss MC Plan (SMCP). The implementation has been configured by adjusting the physical density and the air gap between adjacent leaves in order to match transmission profile measurements for 6 and 15 MV beams of a Novalis TX. These measurements have been performed in water using gafchromic films and an ionization chamber at an SSD of 95 cm and a depth of 5 cm. The implementation was validated by comparing diamond measured and calculated penumbra values (80%-20%) for different field sizes and water depths. Additionally, measured and calculated dose distributions for a head and neck IMRT case using the DELTA⁴ phantom have been compared. The validated HD120 MLC implementation has been used for its physical characterization. For this purpose, phase space (PS) files have been generated below the fully closed multileaf collimator (MLC) of a 40 × 22 cm² field size for 6 and 15 MV. The PS files have been analyzed in terms of energy spectra, mean energy, fluence, and energy fluence in the direction perpendicular to the MLC leaves and have been compared with the corresponding data using the well established Varian 80 leaf (MLC80) and Millennium M120 (M120 MLC) MLCs. Additionally, the impact of the tongue and groove design of the MLCs on dose has been characterized. Results: Calculated transmission values for the HD120 MLC are 1.25% and 1.34% in the central part of the field for the 6 and 15 MV beam, respectively. The corresponding ionization chamber measurements result in a transmission of 1.20% and 1.35%. Good agreement has been found for the comparison

  2. Geochemical Characterization Using Geophysical Data and Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Chen, J.; Hubbard, S.; Rubin, Y.; Murray, C.; Roden, E.; Majer, E.

    2002-12-01

    if they were available from direct measurements or as variables otherwise. To estimate the geochemical parameters, we first assigned a prior model for each variable and a likelihood model for each type of data, which together define posterior probability distributions for each variable on the domain. Since the posterior probability distribution may involve hundreds of variables, we used a Markov Chain Monte Carlo (MCMC) method to explore each variable by generating and subsequently evaluating hundreds of realizations. Results from this case study showed that although geophysical attributes are not necessarily directly related to geochemical parameters, geophysical data could be very useful for providing accurate and high-resolution information about geochemical parameter distribution through their joint and indirect connections with hydrogeological properties such as lithofacies. This case study also demonstrated that MCMC methods were particularly useful for geochemical parameter estimation using geophysical data because they allow incorporation into the procedure of spatial correlation information, measurement errors, and cross correlations among different types of parameters.
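The prior-plus-likelihood construction this abstract describes can be sketched for a single scalar variable (a stand-in for one geochemical parameter): a Gaussian prior, a Gaussian measurement likelihood, and a random-walk Metropolis sampler over the resulting posterior. All names and numbers are illustrative.

```python
import math
import random

def mh_posterior(data, prior_mean=0.0, prior_sd=2.0, noise_sd=0.5,
                 n_iter=20000, step=0.3, seed=19):
    """Random-walk Metropolis sampling of p(theta | data).

    Prior: theta ~ N(prior_mean, prior_sd^2).
    Likelihood: each measurement y_i ~ N(theta, noise_sd^2).
    Returns the post-burn-in posterior mean estimate.
    """
    rng = random.Random(seed)

    def log_post(theta):
        lp = -0.5 * ((theta - prior_mean) / prior_sd) ** 2          # log prior
        lp += sum(-0.5 * ((y - theta) / noise_sd) ** 2 for y in data)  # log likelihood
        return lp

    theta, samples = prior_mean, []
    for it in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        if math.log(rng.random() + 1e-300) < log_post(prop) - log_post(theta):
            theta = prop
        if it >= n_iter // 2:  # discard first half as burn-in
            samples.append(theta)
    return sum(samples) / len(samples)
```

With conjugate Gaussian prior and likelihood the posterior mean is available in closed form, which makes the sketch easy to check; the case study's MCMC differs in scale (hundreds of spatially correlated variables) rather than in kind.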

  3. Monte Carlo Experiments: Design and Implementation.

    ERIC Educational Resources Information Center

    Paxton, Pamela; Curran, Patrick J.; Bollen, Kenneth A.; Kirby, Jim; Chen, Feinian

    2001-01-01

    Illustrates the design and planning of Monte Carlo simulations, presenting nine steps in planning and performing a Monte Carlo analysis from developing a theoretically derived question of interest through summarizing the results. Uses a Monte Carlo simulation to illustrate many of the relevant points. (SLD)

  4. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    The meaningful investigation of many problems in statistics can be solved through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…

  5. Shell model Monte Carlo methods

    SciTech Connect

    Koonin, S.E.; Dean, D.J.

    1996-10-01

We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

  6. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions, and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50× in efficiency by angularly biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  7. EchoSeed Model 6733 Iodine-125 brachytherapy source: Improved dosimetric characterization using the MCNP5 Monte Carlo code

    SciTech Connect

    Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S.

    2012-08-15

This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 ¹²⁵I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published ¹²⁵I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water material in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGy h⁻¹ U⁻¹ (±1.73%) and 0.965 cGy h⁻¹ U⁻¹ (±1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within ±4%) than MCNP4c2 and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.

  8. The D0 Monte Carlo

    SciTech Connect

Womersley, J. (Dept. of Physics)

    1992-10-01

The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb⁻¹ data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.

  9. Characterization of scattered radiation in kV CBCT images using Monte Carlo simulations

    SciTech Connect

    Jarry, Genevieve; Graham, Sean A.; Moseley, Douglas J.; Jaffray, David J.; Siewerdsen, Jeffrey H.; Verhaegen, Frank

    2006-11-15

    Kilovoltage (kV) cone beam computed tomography (CBCT) images suffer from a substantial scatter contribution. In this study, Monte Carlo (MC) simulations are used to evaluate the scattered radiation present in projection images. These predicted scatter distributions are also used as a scatter correction technique. Images were acquired using a kV CBCT bench top system. The EGSnrc MC code was used to model the flat panel imager, the phantoms, and the x-ray source. The x-ray source model was validated using first and second half-value layers (HVL) and profile measurements. The HVLs and the profile were found to agree within 3% and 6%, respectively. MC simulated and measured projection images for a cylindrical water phantom and for an anthropomorphic head phantom agreed within 8% and 10%. A modified version of the DOSXYZnrc MC code was used to score phase space files with identified scattered and primary particles behind the phantoms. The cone angle, the source-to-detector distance, the phantom geometry, and the energy were varied to determine their effect on the scattered radiation distribution. A scatter correction technique was developed in which the MC predicted scatter distribution is subtracted from the projections prior to reconstruction. Preliminary testing of the procedure was done with an anthropomorphic head phantom and a contrast phantom. Contrast and profile measurements were obtained for the scatter corrected and noncorrected images. An improvement of 3% for contrast between solid water and a liver insert and 11% between solid water and a Teflon insert were obtained and a significant reduction in cupping and streaking artifacts was observed.

  10. Synthesis, characterization and Monte Carlo simulation of CoFe2O4/Polyvinylpyrrolidone nanocomposites: The coercivity investigation

    NASA Astrophysics Data System (ADS)

    Mirzaee, Sh.; Farjami Shayesteh, S.; Mahdavifar, S.; Hekmatara, S. Hoda

    2015-11-01

    To study the influence of the polymer matrix on the effective magnetic anisotropy constant and coercivity of magnetic nanoparticles, we synthesized cobalt ferrite/polyvinylpyrrolidone (PVP) nanocomposites by the co-precipitation method in four different processes. In addition, Monte Carlo simulation and the law of approach to saturation magnetization were applied to obtain the anisotropy constants. The experimental and theoretical results showed a decrease in the anisotropy constant relative to bulk cobalt ferrite. We showed that the PVP matrix can interact with metal cations and leave them approximately immobilized, limiting their participation in the spinel structure; hence different anisotropy constants and coercivities were obtained for the synthesized nanocomposites. In addition, the PVP matrix can attach to the surface of the magnetic particles and make them approximately non-interacting. The synthesized samples were characterized by Fourier transform infrared spectroscopy (FT-IR) and X-ray diffraction (XRD). Magnetic measurements were made at room temperature using a vibrating sample magnetometer (VSM).

  11. Present status of vectorized Monte Carlo

    SciTech Connect

    Brown, F.B.

    1987-01-01

    Monte Carlo applications have traditionally been limited by the large amounts of computer time required to produce acceptably small statistical uncertainties, so the immediate benefit of vectorization is an increase in either the number of jobs completed or the number of particles processed per job, typically by one order of magnitude or more. This results directly in improved engineering design analyses, since Monte Carlo methods are used as standards for correcting more approximate methods. The relatively small number of vectorized programs is a consequence of the newness of vectorized Monte Carlo, the difficulties of nonportability, and the very large development effort required to rewrite or restructure Monte Carlo codes for vectorization. Based on the successful efforts to date, it may be concluded that Monte Carlo vectorization will spread to increasing numbers of codes and applications. The possibility of multitasking provides even further motivation for vectorizing Monte Carlo, since the step from vector to multitasked vector is relatively straightforward.

  12. Characterization of scatter in digital mammography from use of Monte Carlo simulations and comparison to physical measurements

    SciTech Connect

    Leon, Stephanie M. Wagner, Louis K.; Brateman, Libby F.

    2014-11-01

    Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the range was 0.28–0.51, with absolute differences averaging −0.01. The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a

  13. Multidimensional stochastic approximation Monte Carlo.

    PubMed

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded, powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E_{1},E_{2}). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_{1}+E_{2}) from g(E_{1},E_{2}). PMID:27415383
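The naive way to obtain g(E1+E2) from a tabulated two-dimensional density of states is a bin-wise collapse along the anti-diagonals, summing all (E1, E2) pairs with the same total energy. The sketch below shows that straightforward collapse on a toy system; the abstract's point is precisely that this step requires care in general (e.g. when the two variables are sampled on different scales), so treat this as an illustration of the bookkeeping only.

```python
import numpy as np

def dos_of_sum(g2d):
    """Collapse a 2D density of states g(E1, E2), tabulated on a
    uniform grid, onto the density of states of E = E1 + E2 by
    summing over anti-diagonals (all pairs with fixed E1 + E2)."""
    n1, n2 = g2d.shape
    g1d = np.zeros(n1 + n2 - 1)
    for i in range(n1):
        for j in range(n2):
            g1d[i + j] += g2d[i, j]
    return g1d

# Toy check: two independent two-level systems, each pair equally
# degenerate, give the binomial degeneracies 1, 2, 1 for the sum.
g2d = np.ones((2, 2))
print(dos_of_sum(g2d))  # → [1. 2. 1.]
```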

  14. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2).

  15. Monte Carlo surface flux tallies

    SciTech Connect

    Favorite, Jeffrey A

    2010-11-19

    Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
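The tally modification discussed in this record can be written down compactly: a surface-crossing flux score is weight/|mu|, and when |mu| falls below the cosine cutoff the divergent divisor is replaced by a fixed fraction of the cutoff, conventionally cutoff/2, with 2*cutoff/3 argued to be more appropriate in some geometries. The function below is a hypothetical sketch of that rule, not code from any production transport code.

```python
def surface_flux_score(weight, mu, cutoff=0.1, substitute_fraction=0.5):
    """Score one surface crossing for a Monte Carlo surface flux tally.

    The 1/|mu| divergence at grazing incidence is tamed by replacing
    |mu| with substitute_fraction * cutoff whenever |mu| < cutoff
    (the standard practice uses 1/2; the record suggests 2/3 can be
    more accurate, e.g. for one-sided tallies on external surfaces).
    """
    amu = abs(mu)
    if amu < cutoff:
        amu = substitute_fraction * cutoff
    return weight / amu
```

Exposing both `cutoff` and `substitute_fraction` as parameters mirrors the record's suggestion that users should be able to control both values.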

  16. Uncertainty Propagation with Fast Monte Carlo Techniques

    NASA Astrophysics Data System (ADS)

    Rochman, D.; van der Marck, S. C.; Koning, A. J.; Sjöstrand, H.; Zwermann, W.

    2014-04-01

    Two new and faster Monte Carlo methods for the propagation of nuclear data uncertainties in Monte Carlo nuclear simulations are presented (the "Fast TMC" and "Fast GRS" methods). They address the main drawback of the original Total Monte Carlo method (TMC), namely the large multiplication of computation time compared to a single calculation. With these new methods, Monte Carlo simulations can now be accompanied by uncertainty propagation (beyond purely statistical uncertainty) at small additional calculation time. The new methods are presented and compared with the TMC method for criticality benchmarks.
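The variance bookkeeping behind TMC-style propagation can be sketched simply: each run uses a differently sampled nuclear-data file, so the observed spread of results contains both the nuclear-data uncertainty and each run's statistical noise, and subtracting the mean statistical variance isolates the data-induced component. This is a schematic illustration under that decomposition assumption, not the published algorithm.

```python
import statistics

def nuclear_data_uncertainty(results, stat_variances):
    """Estimate the nuclear-data-induced standard deviation from runs
    with independently sampled data files: observed variance of the
    results minus the average per-run statistical variance (clamped
    at zero in case noise dominates)."""
    observed = statistics.pvariance(results)
    data_var = observed - statistics.fmean(stat_variances)
    return max(data_var, 0.0) ** 0.5
```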

  17. Monte Carlo Simulations for Radiobiology

    NASA Astrophysics Data System (ADS)

    Ackerman, Nicole; Bazalova, Magdalena; Chang, Kevin; Graves, Edward

    2012-02-01

    The relationship between tumor response and radiation is currently modeled as dose, quantified on the mm or cm scale through measurement or simulation. This does not take into account modern knowledge of cancer, including tissue heterogeneities and repair mechanisms. We perform Monte Carlo simulations utilizing Geant4 to model radiation treatment on a cellular scale. Biological measurements are correlated to simulated results, primarily the energy deposit in nuclear volumes. One application is modeling dose enhancement through the use of high-Z materials, such as gold nanoparticles. The model matches in vitro data and predicts dose enhancement ratios for a variety of in vivo scenarios. This model shows promise for both treatment design and furthering our understanding of radiobiology.

  18. Monte-Carlo characterization of a miniature source of characteristic X rays based on an implantable needle

    SciTech Connect

    Safronov, V. V.; Sozontov, E. A.; Gutman, G.

    2013-05-15

    A new concept of an X-ray brachytherapy setup based on the use of fluorescence from a secondary target placed at the tip of an implantable needle is proposed. Spatial dose-rate distributions for four combinations of secondary target materials and shapes are calculated by the Monte-Carlo method.

  19. Characterization and Monte Carlo simulation of single ion Geiger mode avalanche diodes integrated with a quantum dot nanostructure

    NASA Astrophysics Data System (ADS)

    Sharma, Peter; Abraham, J. B. S.; Ten Eyck, G.; Childs, K. D.; Bielejec, E.; Carroll, M. S.

    Detection of single ion implantation within a nanostructure is necessary for the high yield fabrication of implanted donor-based quantum computing architectures. Single ion Geiger mode avalanche (SIGMA) diodes with a laterally integrated nanostructure capable of forming a quantum dot were fabricated and characterized using photon pulses. The detection efficiency of this design was measured as a function of wavelength, lateral position, and for varying delay times between the photon pulse and the overbias detection window. Monte Carlo simulations based only on the random diffusion of photo-generated carriers and the geometrical placement of the avalanche region agree qualitatively with device characterization. Based on these results, SIGMA detection efficiency appears to be determined solely by the diffusion of photo-generated electron-hole pairs into a buried avalanche region. Device performance is then highly dependent on the uniformity of the underlying silicon substrate and the proximity of photo-generated carriers to the silicon-silicon dioxide interface, which are the most important limiting factors for reaching the single ion detection limit with SIGMA detectors. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  20. SU-D-19A-04: Parameter Characterization of Electron Beam Monte Carlo Phase Space of TrueBeam Linacs

    SciTech Connect

    Rodrigues, A; Yin, F; Wu, Q; Sawkey, D

    2014-06-01

    Purpose: For TrueBeam Monte Carlo simulations, Varian does not distribute linac head geometry and material compositions, instead providing a phase space file (PSF) for the users. The PSF has a finite number of particle histories and can have very large file size, yet still contains inherent statistical noises. The purpose of this study is to characterize the electron beam PSF with parameters. Methods: The PSF is a snapshot of all particles' information at a given plane above jaws including type, energy, position, and directions. This study utilized a preliminary TrueBeam PSF, of which validation against measurement is presented in another study. To characterize the PSF, distributions of energy, position, and direction of all particles are analyzed as piece-wise parameterized functions of radius and polar angle. Subsequently, a pseudo PSF was generated based on this characterization. Validation was assessed by directly comparing the true and pseudo PSFs, and by using both PSFs in the down-stream MC simulations (BEAMnrc/DOSXYZnrc) and comparing dose distributions for 3 applicators at 15 MeV. Statistical uncertainty of 4% was limited by the number of histories in the original PSF. Percent depth dose (PDD) and orthogonal (PRF) profiles at various depths were evaluated. Results: Preliminary results showed that this PSF parameterization was accurate, with no visible differences between original and pseudo PSFs except at the edge (6 cm off axis), which did not impact dose distributions in phantom. PDD differences were within 1 mm for R70, R50, R30, and R10, and PRF field size and penumbras were within 2 mm. Conclusion: A PSF can be successfully characterized by distributions for energy, position, and direction as parameterized functions of radius and polar angles; this facilitates generating sufficient particles at any statistical precision. Analyses for all other electron energies are under way and results will be

  1. Monte Carlo Shower Counter Studies

    NASA Technical Reports Server (NTRS)

    Snyder, H. David

    1991-01-01

    Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs are provided along with example data plots.

  2. Monte Carlo Ion Transport Analysis Code.

    2009-04-15

    Version: 00 TRIPOS is a versatile Monte Carlo ion transport analysis code. It has been applied to the treatment of both surface and bulk radiation effects. The media considered are composed of multilayer polyatomic materials.

  3. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments Database

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  4. Analytical Applications of Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    Guell, Oscar A.; Holcombe, James A.

    1990-01-01

    Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
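Monte Carlo integration, one of the techniques this record surveys, reduces to averaging the integrand at uniform random points and multiplying by the interval width. The sketch below is a generic textbook illustration, not taken from the cited article.

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [a, b]:
    the mean of f at n uniform random sample points, scaled by the
    interval width.  Error falls off as 1/sqrt(n)."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Example: integral of x^2 over [0, 1]; the exact value is 1/3.
est = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
```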

  5. Monte Carlo simulation of aorta autofluorescence

    NASA Astrophysics Data System (ADS)

    Kuznetsova, A. A.; Pushkareva, A. E.

    2016-08-01

    Results of numerical simulation of aorta autofluorescence by the Monte Carlo method are reported. Two states of the aorta, normal and with atherosclerotic lesions, are studied. A model of the studied tissue is developed on the basis of information about its optical, morphological, and physico-chemical properties. It is shown that the data obtained by numerical Monte Carlo simulation are in good agreement with experimental results, indicating the adequacy of the developed model of aorta autofluorescence.

  6. Objective characterization of bruise evolution using photothermal depth profiling and Monte Carlo modeling

    NASA Astrophysics Data System (ADS)

    Vidovič, Luka; Milanič, Matija; Majaron, Boris

    2015-01-01

    Pulsed photothermal radiometry (PPTR) allows noninvasive determination of laser-induced temperature depth profiles in optically scattering layered structures. The obtained profiles provide information on spatial distribution of selected chromophores such as melanin and hemoglobin in human skin. We apply the described approach to study time evolution of incidental bruises (hematomas) in human subjects. By combining numerical simulations of laser energy deposition in bruised skin with objective fitting of the predicted and measured PPTR signals, we can quantitatively characterize the key processes involved in bruise evolution (i.e., hemoglobin mass diffusion and biochemical decomposition). Simultaneous analysis of PPTR signals obtained at various times post injury provides an insight into the variations of these parameters during the bruise healing process. The presented methodology and results advance our understanding of the bruise evolution and represent an important step toward development of an objective technique for age determination of traumatic bruises in forensic medicine.

  7. Characterization of naturally occurring radioactive materials in Libyan oil pipe scale using a germanium detector and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Habib, A. S.; Shutt, A. L.; Regan, P. H.; Matthews, M. C.; Alsulaiti, H.; Bradley, D. A.

    2014-02-01

    Radioactive scale formation in various oil production facilities is acknowledged to pose a potential significant health and environmental issue. The presence of such an issue in Libyan oil fields was recognized as early as 1998. The naturally occurring radioactive materials (NORM) involved in this matter are radium isotopes (226Ra and 228Ra) and their decay products, precipitating into scales formed on the surfaces of production equipment. A field trip to a number of onshore Libyan oil fields has indicated the existence of elevated levels of specific activity in a number of locations in some of the more mature oil fields. In this study, oil scale samples collected from different parts of Libya have been characterized using gamma spectroscopy through use of a well shielded HPGe spectrometer. To avoid potential alpha-bearing dust inhalation and in accord with safe working practices at this University, the samples, contained in plastic bags and existing in different geometries, are not permitted to be opened. MCNP, a Monte Carlo simulation code, is being used to simulate the spectrometer and the scale samples in order to obtain the system absolute efficiency and then to calculate sample specific activities. The samples are assumed to have uniform densities and homogeneously distributed activity. Present results are compared to two extreme situations that were assumed in a previous study: (i) with the entire activity concentrated at a point on the sample surface proximal to the detector, simulating the sample lowest activity, and; (ii) with the entire activity concentrated at a point on the sample surface distal to the detector, simulating the sample highest activity.

  8. Characterization of exposure-dependent eigenvalue drift using Monte Carlo based nuclear fuel management

    NASA Astrophysics Data System (ADS)

    Xoubi, Ned

    2005-12-01

    The ability to accurately predict the multiplication factor (keff) of a nuclear reactor core as a function of exposure continues to be an elusive task for core designers despite decades of advances in computational methods. The difference between a predicted eigenvalue (target) and the actual eigenvalue at critical reactor conditions is herein referred to as the "eigenvalue drift." This dissertation studies exposure-dependent eigenvalue drift using MCNP-based fuel management analysis of the ORNL High Flux Isotope Reactor core. Spatial-dependent burnup is evaluated using the MONTEBURNS and ALEPH codes to link MCNP to ORIGEN to help analyze the behavior of keff as a function of fuel exposure. Understanding the exposure-dependent eigenvalue drift of a nuclear reactor is of particular relevance when trying to predict the impact of major design changes upon fuel cycle behavior and length. In this research, the design of an advanced HFIR core with a fuel loading of 12 kg of 235U is contrasted against the current loading of 9.4 kg. The goal of applying exposure dependent eigenvalue characterization is to produce a more accurate prediction of the fuel cycle length than prior analysis techniques, and to improve our understanding of the reactivity behavior of the core throughout the cycle. This investigation predicted a fuel cycle length of 40 days, representing a 50% increase in the cycle length in response to a 25% increase in fuel loading. The average burnup increased by about 48 MWd/kg U and it was confirmed that the excess reactivity can be controlled with the present design and arrangement of control elements throughout the core's life. Another major design change studied was the effect of installing an internal beryllium reflector upon cycle length. Exposure dependent eigenvalue predictions indicate that the actual benefit could be twice as large as that originally assessed via beginning-of-life (BOL) analyses.

  9. Evaluation of Reaction Rate Theory and Monte Carlo Methods for Application to Radiation-Induced Microstructural Characterization

    SciTech Connect

    Stoller, Roger E; Golubov, Stanislav I; Becquart, C. S.; Domain, C.

    2007-08-01

    The multiscale modeling scheme encompasses models from the atomistic to the continuum scale. Phenomena at the mesoscale are typically simulated using reaction rate theory, Monte Carlo, or phase field models. These mesoscale models are appropriate for application to problems that involve intermediate length scales, and timescales from those characteristic of diffusion to long-term microstructural evolution (~s to years). Although the rate theory and Monte Carlo models can be used to simulate the same phenomena, some of the details are handled quite differently in the two approaches. Models employing the rate theory have been extensively used to describe radiation-induced phenomena such as void swelling and irradiation creep. The primary approximations in such models are temporal and spatial averaging of the radiation damage source term, and spatial averaging of the microstructure into an effective medium. Kinetic Monte Carlo models can account for these spatial and temporal correlations; their primary limitation is the computational burden, which is related to the size of the simulation cell. A direct comparison of RT and object kinetic MC simulations has been made in the domain of point defect cluster dynamics modeling, which is relevant to the evolution (both nucleation and growth) of radiation-induced defect structures. The primary limitations of the OKMC model are related to computational issues. Even with modern computers, the maximum simulation cell size and the maximum dose (typically much less than 1 dpa) that can be simulated are limited. In contrast, even very detailed RT models can simulate microstructural evolution for doses of 100 dpa or greater in clock times that are relatively short. Within the context of the effective medium, essentially any defect density can be simulated. Overall, the agreement between the two methods is best for irradiation conditions which produce a high density of defects (lower temperature and higher displacement rate), and for
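The kinetic Monte Carlo models discussed here are typically built on the residence-time (BKL/Gillespie) algorithm: pick one event with probability proportional to its rate and advance the clock by an exponentially distributed waiting time. The minimal step below is a generic sketch of that algorithm, not the OKMC codes compared in the record; it assumes the total rate is positive.

```python
import math
import random

def kmc_step(rates, rng):
    """One residence-time kinetic Monte Carlo step.

    Selects event i with probability rates[i] / sum(rates) and draws
    the waiting time dt from an exponential with mean 1 / sum(rates).
    Assumes sum(rates) > 0.
    """
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(rng.random()) / total  # exponential waiting time
    return i, dt
```

A full OKMC simulation would loop over such steps, with the rate list rebuilt after each event as defects migrate, cluster, and annihilate.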

  10. Hybrid algorithms in quantum Monte Carlo

    SciTech Connect

    Esler, Kenneth P; Mcminis, Jeremy; Morales, Miguel A; Clark, Bryan K.; Shulenburger, Luke; Ceperley, David M

    2012-01-01

    With advances in algorithms and growing computing power, quantum Monte Carlo (QMC) methods have become a leading contender for high accuracy calculations for the electronic structure of realistic systems. The performance gain on recent HPC systems is largely driven by increasing parallelism: the number of compute cores of a SMP and the number of SMPs have been going up, as the Top500 list attests. However, the available memory as well as the communication and memory bandwidth per element has not kept pace with the increasing parallelism. This severely limits the applicability of QMC and the problem size it can handle. OpenMP/MPI hybrid programming provides applications with simple but effective solutions to overcome efficiency and scalability bottlenecks on large-scale clusters based on multi/many-core SMPs. We discuss the design and implementation of hybrid methods in QMCPACK and analyze its performance on current HPC platforms characterized by various memory and communication hierarchies.

  11. Experimental Monte Carlo Quantum Process Certification

    NASA Astrophysics Data System (ADS)

    Steffen, L.; da Silva, M. P.; Fedorov, A.; Baur, M.; Wallraff, A.

    2012-06-01

    Experimental implementations of quantum information processing have now reached a level of sophistication where quantum process tomography is impractical. The number of experimental settings as well as the computational cost of the data postprocessing now translates to days of effort to characterize even experiments with as few as 8 qubits. Recently a more practical approach to determine the fidelity of an experimental quantum process has been proposed, where the experimental data are compared directly with an ideal process using Monte Carlo sampling. Here, we present an experimental implementation of this scheme in a circuit quantum electrodynamics setup to determine the fidelity of 2-qubit gates, such as the CPHASE and the CNOT gate, and 3-qubit gates, such as the Toffoli gate and two sequential CPHASE gates.

  12. Monte Carlo Shielding Analysis Capabilities with MAVRIC

    SciTech Connect

    Peplow, Douglas E.

    2011-01-01

    Monte Carlo shielding analysis capabilities in SCALE 6 are centered on the CADIS methodology Consistent Adjoint Driven Importance Sampling. CADIS is used to create an importance map for space/energy weight windows as well as a biased source distribution. New to SCALE 6 are the Monaco functional module, a multi-group fixed-source Monte Carlo transport code, and the MAVRIC sequence (Monaco with Automated Variance Reduction Using Importance Calculations). MAVRIC uses the Denovo code (also new to SCALE 6) to compute coarse-mesh discrete ordinates solutions which are used by CADIS to form an importance map and biased source distribution for the Monaco Monte Carlo code. MAVRIC allows the user to optimize the Monaco calculation for a specific tally using the CADIS method with little extra input compared to a standard Monte Carlo calculation. When computing several tallies at once or a mesh tally over a large volume of space, an extension of the CADIS method called FW-CADIS can be used to help the Monte Carlo simulation spread particles over phase space to get more uniform relative uncertainties.
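The weight windows that CADIS produces are enforced by a simple mechanism: particles above the window are split, particles below it are rouletted, and the expected weight is conserved either way. The function below is a generic sketch of that mechanism under simplified conventions (survival weight at the window center, split count capped for illustration), not the Monaco implementation.

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Split or roulette one particle against a weight window
    [w_low, w_high].  Returns a list of surviving particle weights
    (possibly empty); the expected total weight equals the input."""
    if weight > w_high:
        # Split into n copies, each carrying an equal share.
        n = min(int(weight / w_high) + 1, 10)
        return [weight / n] * n
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_surv,
        # emerging at the (illustrative) window-center weight.
        w_surv = 0.5 * (w_low + w_high)
        if rng.random() < weight / w_surv:
            return [w_surv]
        return []
    return [weight]  # inside the window: unchanged
```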

  13. Shell model the Monte Carlo way

    SciTech Connect

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.

  14. Interaction picture density matrix quantum Monte Carlo

    SciTech Connect

    Malone, Fionn D. Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  15. Monte Carlo electron/photon transport

    SciTech Connect

    Mack, J.M.; Morel, J.E.; Hughes, H.G.

    1985-01-01

    A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs.

  16. Geodesic Monte Carlo on Embedded Manifolds

    PubMed Central

    Byrne, Simon; Girolami, Mark

    2013-01-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
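The core geometric ingredient of these samplers is the geodesic flow itself; on the unit hypersphere it is just a great-circle rotation of the point toward its tangent velocity. The helper below illustrates that one ingredient (it is not the full geodesic Monte Carlo sampler, which alternates such flows with momentum refreshment).

```python
import numpy as np

def geodesic_step(x, v, t):
    """Follow the great-circle geodesic on the unit sphere from point x
    with tangent velocity v for time t.  v is projected onto the
    tangent space at x first, so x stays on the sphere exactly."""
    v = v - np.dot(v, x) * x
    speed = np.linalg.norm(v)
    if speed == 0.0:
        return x
    return np.cos(speed * t) * x + np.sin(speed * t) * v / speed

x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
y = geodesic_step(x, v, np.pi / 2)  # quarter turn along the equator
```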

  17. Monte Carlo simulations of organic photovoltaics.

    PubMed

    Groves, Chris; Greenham, Neil C

    2014-01-01

    Monte Carlo simulations are a valuable tool to model the generation, separation, and collection of charges in organic photovoltaics where charges move by hopping in a complex nanostructure and Coulomb interactions between charge carriers are important. We review the Monte Carlo techniques that have been applied to this problem, and describe the results of simulations of the various recombination processes that limit device performance. We show how these processes are influenced by the local physical and energetic structure of the material, providing information that is useful for design of efficient photovoltaic systems.

  18. Fast quantum Monte Carlo on a GPU

    NASA Astrophysics Data System (ADS)

    Lutsyshyn, Y.

    2015-02-01

    We present a scheme for the parallelization of quantum Monte Carlo method on graphical processing units, focusing on variational Monte Carlo simulation of bosonic systems. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent utilization of the accelerator. The CUDA code is provided along with a package that simulates liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the Kepler architecture K20 GPU. Special optimization was developed for the Kepler cards, including placement of data structures in the register space of the Kepler GPUs. Kepler-specific optimization is discussed.

  19. Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.

    1995-12-31

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.

  20. Interaction picture density matrix quantum Monte Carlo.

    PubMed

    Malone, Fionn D; Blunt, N S; Shepherd, James J; Lee, D K K; Spencer, J S; Foulkes, W M C

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  1. Geodesic Monte Carlo on Embedded Manifolds.

    PubMed

    Byrne, Simon; Girolami, Mark

    2013-12-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton-Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024

  2. Alloy characterization of a 7th Century BC archeological bronze vase - Overcoming patina constraints using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Manso, M.; Schiavon, N.; Queralt, I.; Arruda, A. M.; Sampaio, J. M.; Brunetti, A.

    2015-05-01

    In this work we evaluate the composition of a bronze alloy using X-ray fluorescence spectrometry (XRF) and Monte Carlo (MC) simulations. For this purpose, a 7th Century BC archeological vase from the SW Iberian Peninsula, displaying a well-formed corrosion patina, was analyzed by means of a portable X-ray fluorescence spectrometer. Realistic MC simulations of the experimental setup were performed with the XRMC code package, which is based on an intensive use of variance-reduction techniques and uses XRAYLIB, a constantly updated X-ray library of atomic data. A single-layer model was applied for simulating XRF of polished/pristine bronze, whereas a two- or three-layer model was developed for bronze covered, respectively, by a corrosion patina alone or coupled with a superficial soil-derived crust. These simulations took into account corrosion products (cerussite (PbCO3), cuprite (Cu2O), malachite (Cu2CO3(OH)2), litharge (PbO)) and soil-derived products (goethite (FeO(OH)) and quartz (SiO2)) identified by means of X-ray diffraction and Raman microanalytical techniques. Results confirm previous research indicating that the XRF/Monte Carlo protocol is well suited when a two-layered model is considered, whereas in areas where the patina + soil-derived crust is too thick, X-rays from the alloy substrate are not able to exit the sample. Quantitative results based on MC simulations indicate that the vase is made of a lead-bronze alloy: Mn (0.2%), Fe (1.0%), Cu (81.8%), As (0.5%), Ag (0.6%), Sn (8.0%) and Pb (8.0%).

  3. Monte Carlo simulations of lattice gauge theories

    SciTech Connect

    Rebbi, C

    1980-02-01

    Monte Carlo simulations done for four-dimensional lattice gauge systems are described, where the gauge group is one of the following: U(1); SU(2); Z_N, i.e., the subgroup of U(1) consisting of the elements e^(2πin/N) with integer n and N; the eight-element group of quaternions, Q; the 24- and 48-element subgroups of SU(2), denoted by T and O, which reduce to the rotation groups of the tetrahedron and the octahedron when their centers, Z_2, are factored out. All of these groups can be considered subgroups of SU(2) and a common normalization was used for the action. The following types of Monte Carlo experiments are considered: simulations of a thermal cycle, where the temperature of the system is varied slightly every few Monte Carlo iterations and the internal energy is measured; mixed-phase runs, where several Monte Carlo iterations are done at a few temperatures near a phase transition starting with a lattice which is half ordered and half disordered; measurements of averages of Wilson factors for loops of different shape. 5 figures, 1 table. (RWR)

  4. Advances in Monte Carlo computer simulation

    NASA Astrophysics Data System (ADS)

    Swendsen, Robert H.

    2011-03-01

    Since the invention of the Metropolis method in 1953, Monte Carlo methods have been shown to provide an efficient, practical approach to the calculation of physical properties in a wide variety of systems. In this talk, I will discuss some of the advances in the MC simulation of thermodynamic systems, with an emphasis on optimization to obtain a maximum of useful information.
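    As a reminder of the basic algorithm the talk builds on, here is a minimal single-spin-flip Metropolis sketch for the 2D Ising model, a standard textbook example rather than anything taken from the talk itself; lattice size, temperature, and sweep count are illustrative.

```python
import math
import random

def metropolis_ising(L=8, beta=0.1, sweeps=200, seed=1):
    """Single-spin-flip Metropolis for the 2D Ising model (J = 1) on an
    L x L periodic lattice; returns the mean absolute magnetization per
    spin, averaged over sweeps."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    acc = 0.0
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # sum of the four nearest neighbours (periodic boundaries)
            nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * nn           # energy change if spin flips
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                s[i][j] *= -1               # Metropolis accept
        acc += abs(sum(map(sum, s))) / (L * L)
    return acc / sweeps
```

    At high temperature (small beta) the mean absolute magnetization stays small; lowering the temperature drives the system toward the ordered phase.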

  5. Scalable Domain Decomposed Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  6. A comparison of Monte Carlo generators

    SciTech Connect

    Golan, Tomasz

    2015-05-15

    A comparison of the GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the π+ two-dimensional energy vs. cosine distribution.

  7. Structural Reliability and Monte Carlo Simulation.

    ERIC Educational Resources Information Center

    Laumakis, P. J.; Harlow, G.

    2002-01-01

    Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte Carlo simulation by showing that such a simulation can be implemented more readily, with results that compare favorably to the theoretical calculations. (Author/MM)
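    The boom-structure analysis itself is not reproduced in the record, but the general pattern it demonstrates, estimating a failure probability by sampling load and capacity and checking the estimate against a closed-form result, can be sketched as follows. All distribution parameters here are illustrative, not those of the article.

```python
import math
import random

def failure_probability_mc(mu_c=10.0, sd_c=1.0, mu_l=7.0, sd_l=1.5,
                           n=100_000, seed=42):
    """Estimate P(load > capacity) for independent normally distributed
    load and capacity, and return the closed-form value for comparison:
    P_f = Phi((mu_l - mu_c) / sqrt(sd_c^2 + sd_l^2))."""
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_l, sd_l) > rng.gauss(mu_c, sd_c)
                for _ in range(n))
    z = (mu_l - mu_c) / math.hypot(sd_c, sd_l)
    p_exact = 0.5 * math.erfc(-z / math.sqrt(2.0))  # standard normal CDF at z
    return fails / n, p_exact
```

    With the defaults the Monte Carlo estimate lands within sampling error of the exact probability, which is the "compare favorably" check the abstract refers to.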

  8. Monte Carlo Simulation of Counting Experiments.

    ERIC Educational Resources Information Center

    Ogden, Philip M.

    A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
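    The derivation summarized above, subdividing the interval until no two counts share a subinterval so that the counts follow a binomial distribution approaching a Poisson, can be sketched directly. The parameters below are illustrative, not those of the original program.

```python
import random

def simulate_counts(rate=4.0, t=1.0, n_sub=1000, n_trials=1000, seed=0):
    """Simulate a counting experiment by splitting the interval t into
    n_sub subintervals so short that each holds at most one count: the
    counts are binomial(n_sub, rate*t/n_sub), which approaches a
    Poisson distribution as n_sub grows.  Returns the sample mean and
    variance of the counts (both approach rate*t for a Poisson)."""
    rng = random.Random(seed)
    p = rate * t / n_sub          # per-subinterval count probability
    counts = [sum(rng.random() < p for _ in range(n_sub))
              for _ in range(n_trials)]
    mean = sum(counts) / n_trials
    var = sum((c - mean) ** 2 for c in counts) / n_trials
    return mean, var
```

    The Poisson signature is that mean and variance agree (both near rate*t), which the simulated counts reproduce.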

  9. Bayesian methods, maximum entropy, and quantum Monte Carlo

    SciTech Connect

    Gubernatis, J.E.; Silver, R.N.; Jarrell, M.

    1991-01-01

    We heuristically discuss the application of the method of maximum entropy to the extraction of dynamical information from imaginary-time, quantum Monte Carlo data. The discussion emphasizes the utility of a Bayesian approach to statistical inference and the importance of statistically well-characterized data. 14 refs.

  10. MontePython: Implementing Quantum Monte Carlo using Python

    NASA Astrophysics Data System (ADS)

    Nilsen, Jon Kristian

    2007-11-01

    We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system to which QMC can be applied, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and how to implement these methods in pure C++ and in C++/Python. Furthermore, we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible. Program summary: Program title: MontePython. Catalogue identifier: ADZP_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZP_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 49 519. No. of bytes in distributed program, including test data, etc.: 114 484. Distribution format: tar.gz. Programming language: C++, Python. Computer: PC, IBM RS6000/320, HP, ALPHA. Operating system: LINUX. Has the code been vectorised or parallelized?: Yes, parallelized with MPI. Number of processors used: 1-96. RAM: depends on the physical system to be simulated. Classification: 7.6; 16.1. Nature of problem: investigating ab initio quantum mechanical systems, specifically Bose-Einstein condensation in dilute gases of 87Rb. Solution method: Quantum Monte Carlo. Running time: 225 min with 20 particles (with 4800 walkers moved in 1750 time steps) on 1 AMD Opteron 2218 processor; a production run for, e.g., 200 particles takes around 24 hours on 32 such processors.
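    As an illustration of the variational Monte Carlo algorithm the package implements, here is a minimal pure-Python sketch for the 1D harmonic oscillator, a standard test case; MontePython itself targets trapped-boson systems, and the trial wavefunction and parameters below are illustrative.

```python
import math
import random

def vmc_harmonic(alpha=0.5, n_samples=20_000, step=1.0, seed=3):
    """Variational Monte Carlo for the 1D harmonic oscillator
    (hbar = m = omega = 1) with trial wavefunction psi(x) = exp(-alpha x^2).
    Metropolis-samples |psi|^2 and averages the local energy
    E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_samples):
        y = x + rng.uniform(-step, step)
        # acceptance ratio |psi(y)|^2 / |psi(x)|^2
        if rng.random() < math.exp(-2.0 * alpha * (y * y - x * x)):
            x = y
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return e_sum / n_samples
```

    At alpha = 0.5 the trial function is the exact ground state, the local energy is constant, and the estimator returns the exact energy 0.5 with zero variance; any other alpha gives a higher variational energy.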

  11. 3D polymer gel dosimetry and Geant4 Monte Carlo characterization of novel needle based X-ray source

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Sozontov, E.; Safronov, V.; Gutman, G.; Strumban, E.; Jiang, Q.; Li, S.

    2010-11-01

    In recent years, there have been a few attempts to develop low-energy x-ray radiation sources as alternatives to the conventional radioisotopes used in brachytherapy. So far, all efforts have centered on the design of an interstitial miniaturized x-ray tube. Though direct irradiation of tumors looks very promising, the known insertable miniature x-ray tubes have many limitations: (a) difficulties with focusing and steering the electron beam to the target; (b) the necessity to cool the target to increase x-ray production efficiency; (c) the impracticability of reducing the diameter of the miniaturized x-ray tube below 4 mm (the requirement to decrease the diameter of the x-ray tube and the need for a target cooling system are mutually exclusive); and (d) significant limitations in changing the shape and energy of the emitted radiation. The specific aim of this study is to demonstrate the feasibility of a new concept for an insertable low-energy needle x-ray device, based on simulation with the Geant4 Monte Carlo code, and to measure the dose rate distribution for low-energy (17.5 keV) x-ray radiation with 3D polymer gel dosimetry.

  12. Interaction picture density matrix quantum Monte Carlo.

    PubMed

    Malone, Fionn D; Blunt, N S; Shepherd, James J; Lee, D K K; Spencer, J S; Foulkes, W M C

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible. PMID:26233116

  13. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.

    1980-05-01

    Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.

  14. An enhanced Monte Carlo outlier detection method.

    PubMed

    Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi

    2015-09-30

    Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the root mean square error of prediction for validation by Kovats retention indices decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.

  15. Monte Carlo simulations on SIMD computer architectures

    SciTech Connect

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-03-01

    Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.

  16. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.

    1980-01-01

    At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users, about 600 times a month, accounting for nearly 200 hours of CDC-7600 time.

  17. Monte Carlo simulations of fluid vesicles.

    PubMed

    Sreeja, K K; Ipsen, John H; Sunil Kumar, P B

    2015-07-15

    Lipid vesicles are closed two-dimensional fluid surfaces that are studied extensively as model systems for understanding the physical properties of biological membranes. Here we review the recent developments in the Monte Carlo techniques for simulating fluid vesicles and discuss some of their applications. The technique, which treats the membrane as an elastic sheet, is most suitable for the study of large scale conformations of membranes. The model can be used to study vesicles with fixed and varying topologies. Here we focus on the case of multi-component membranes with the local lipid and protein composition coupled to the membrane curvature leading to a variety of shapes. The phase diagram is more intriguing in the case of fluid vesicles having an in-plane orientational order that induces anisotropic directional curvatures. Methods to explore the steady-state morphological structures due to active flux of materials have also been described in the context of Monte Carlo simulations. PMID:26087479

  18. Monte Carlo simulations of fluid vesicles

    NASA Astrophysics Data System (ADS)

    Sreeja, K. K.; Ipsen, John H.; Kumar, P. B. Sunil

    2015-07-01

    Lipid vesicles are closed two-dimensional fluid surfaces that are studied extensively as model systems for understanding the physical properties of biological membranes. Here we review the recent developments in the Monte Carlo techniques for simulating fluid vesicles and discuss some of their applications. The technique, which treats the membrane as an elastic sheet, is most suitable for the study of large scale conformations of membranes. The model can be used to study vesicles with fixed and varying topologies. Here we focus on the case of multi-component membranes with the local lipid and protein composition coupled to the membrane curvature leading to a variety of shapes. The phase diagram is more intriguing in the case of fluid vesicles having an in-plane orientational order that induces anisotropic directional curvatures. Methods to explore the steady-state morphological structures due to active flux of materials have also been described in the context of Monte Carlo simulations.

  19. Monte Carlo Methods in the Physical Sciences

    SciTech Connect

    Kalos, M H

    2007-06-06

    I will review the role that Monte Carlo methods play in the physical sciences. They are very widely used for a number of reasons: they permit the rapid and faithful transformation of a natural or model stochastic process into a computer code. They are powerful numerical methods for treating the many-dimensional problems that derive from important physical systems. Finally, many of the methods naturally permit the use of modern parallel computers in efficient ways. In the presentation, I will emphasize four aspects of the computations: whether or not the computation derives from a natural or model stochastic process; whether the system under study is highly idealized or realistic; whether the Monte Carlo methodology is straightforward or mathematically sophisticated; and finally, the scientific role of the computation.

  20. Monte Carlo modeling of exospheric bodies - Mercury

    NASA Technical Reports Server (NTRS)

    Smith, G. R.; Broadfoot, A. L.; Wallace, L.; Shemansky, D. E.

    1978-01-01

    In order to study the interaction with the surface, a Monte Carlo program is developed to determine the distribution with altitude as well as the global distribution of density at the surface in a single operation. The analysis presented shows that the appropriate source distribution should be Maxwell-Boltzmann flux if the particles in the distribution are to be treated as components of flux. Monte Carlo calculations with a Maxwell-Boltzmann flux source are compared with Mariner 10 UV spectrometer data. Results indicate that the presently operating models are not capable of fitting the observed Mercury exosphere. It is suggested that an atmosphere calculated with a barometric source distribution is suitable for more realistic future exospheric models.
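    Sampling from a Maxwell-Boltzmann flux distribution, as the paper recommends for the source, can be done exactly: under f(v) ∝ v^3 exp(-v^2/(2σ^2)), the squared speed is Gamma-distributed, so no rejection step is needed. A minimal sketch (not the authors' code; σ = sqrt(kT/m) is left as a free parameter):

```python
import math
import random

def sample_mb_flux_speed(sigma, rng):
    """Draw a speed from the Maxwell-Boltzmann *flux* distribution
    f(v) ∝ v^3 exp(-v^2 / (2 sigma^2)), sigma = sqrt(kT/m).
    v^2 follows Gamma(shape=2, scale=2 sigma^2), which can be sampled
    exactly as -scale * ln(U1 * U2)."""
    u1 = 1.0 - rng.random()   # uniforms in (0, 1], safe for log
    u2 = 1.0 - rng.random()
    return math.sqrt(-2.0 * sigma * sigma * math.log(u1 * u2))
```

    A quick consistency check: the mean of v^2 under this distribution is 4σ^2, distinguishing it from the density-weighted Maxwell-Boltzmann distribution, for which it is 3σ^2.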

  1. Monte Carlo Particle Transport: Algorithm and Performance Overview

    SciTech Connect

    Gentile, N; Procassini, R; Scott, H

    2005-06-02

    Monte Carlo methods are frequently used for neutron and radiation transport. These methods have several advantages, such as relative ease of programming and dealing with complex meshes. Disadvantages include long run times and statistical noise. Monte Carlo photon transport calculations also often suffer from inaccuracies in matter temperature due to the lack of implicitness. In this paper we discuss the Monte Carlo algorithm as it is applied to neutron and photon transport, detail the differences between neutron and photon Monte Carlo, and give an overview of the ways the numerical method has been modified to deal with issues that arise in photon Monte Carlo simulations.
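    The core of any such transport algorithm is sampling exponential free paths between collisions. Here is a minimal sketch of that idea for a purely absorbing slab, an illustrative toy rather than the production algorithms discussed in the paper:

```python
import math
import random

def transmission_mc(sigma_t=1.0, thickness=2.0, n=200_000, seed=7):
    """Estimate the uncollided transmission through a purely absorbing
    slab: each history samples a free path s = -ln(U)/sigma_t and the
    particle escapes if s exceeds the slab thickness.  The exact answer
    is exp(-sigma_t * thickness)."""
    rng = random.Random(seed)
    escaped = sum(-math.log(1.0 - rng.random()) / sigma_t > thickness
                  for _ in range(n))
    return escaped / n
```

    The statistical noise mentioned in the abstract is visible here directly: the estimate fluctuates around exp(-2) with a standard error that shrinks only as 1/sqrt(n).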

  2. Monte Carlo simulation of Alaska wolf survival

    NASA Astrophysics Data System (ADS)

    Feingold, S. J.

    1996-02-01

    Alaskan wolves live in a harsh climate and are hunted intensively. Penna's biological aging code, using Monte Carlo methods, has been adapted to simulate wolf survival. It was run on the case in which hunting causes the disruption of wolves' social structure. Social disruption was shown to increase the number of deaths occurring at a given level of hunting. For high levels of social disruption, the population did not survive.
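    The record gives no implementation details, but the Penna bit-string aging model it refers to can be sketched generically as follows. All parameters are illustrative, and the wolf-specific hunting and social-disruption mechanics are not modeled here.

```python
import random

def penna(n0=500, steps=100, bits=16, t_thresh=3, r_age=3,
          max_pop=5000, seed=11):
    """Minimal Penna bit-string aging model.  Each individual is
    (age, genome); genome bit i set means a deleterious mutation that
    becomes active at age i.  An individual dies when t_thresh mutations
    are active, at age == bits, or via the Verhulst crowding factor.
    Individuals of age >= r_age bear one offspring per step carrying
    one fresh random mutation.  Returns the population-size history."""
    rng = random.Random(seed)
    pop = [(0, 0)] * n0
    history = []
    for _ in range(steps):
        nxt = []
        n = len(pop)
        for age, genome in pop:
            # mutations expressed up to the current age
            active = bin(genome & ((1 << (age + 1)) - 1)).count("1")
            if active >= t_thresh or age + 1 >= bits:
                continue                      # genetic death or old age
            if rng.random() < n / max_pop:
                continue                      # Verhulst crowding death
            nxt.append((age + 1, genome))
            if age + 1 >= r_age:              # reproduce with one new mutation
                nxt.append((0, genome | (1 << rng.randrange(bits))))
        pop = nxt
        history.append(len(pop))
        if not pop:
            break
    return history
```

    Effects such as hunting or social disruption would enter as additional age- or population-dependent death terms in the loop above.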

  3. Monte Carlo simulation of Touschek effect.

    SciTech Connect

    Xiao, A.; Borland, M.; Accelerator Systems Division

    2010-07-30

    We present a Monte Carlo method implementation in the code elegant for simulating Touschek scattering effects in a linac beam. The local scattering rate and the distribution of scattered electrons can be obtained from the code either for a Gaussian-distributed beam or for a general beam whose distribution function is given. In addition, scattered electrons can be tracked through the beam line and the local beam-loss rate and beam halo information recorded.

  4. Quantum Monte Carlo with known sign structures

    NASA Astrophysics Data System (ADS)

    Nilsson, Johan

    We investigate the merits of different Hubbard-Stratonovich transformations (including fermionic ones) for the description of interacting fermion systems, focusing on the single-band Hubbard model as a model system. In particular, we revisit an old proposal of Batrouni and Forcrand (PRB 48, 589 (1993)) for determinant quantum Monte Carlo simulations, in which the signs of all configurations are known beforehand. We will discuss different ways that this knowledge can be used to make more accurate predictions and simulations.

  5. Applications of Maxent to quantum Monte Carlo

    SciTech Connect

    Silver, R.N.; Sivia, D.S.; Gubernatis, J.E.; Jarrell, M.

    1990-01-01

    We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.

  6. Monte Carlo Generators for the LHC

    NASA Astrophysics Data System (ADS)

    Worek, M.

    2007-11-01

    The status of two Monte Carlo generators, HELAC-PHEGAS, a program for multi-jet processes and VBFNLO, a parton level program for vector boson fusion processes at NLO QCD, is briefly presented. The aim of these tools is the simulation of events within the Standard Model at current and future high energy experiments, in particular the LHC. Some results related to the production of multi-jet final states at the LHC are also shown.

  7. Monte Carlo small-sample perturbation calculations

    SciTech Connect

    Feldman, U.; Gelbard, E.; Blomquist, R.

    1983-01-01

    Two different Monte Carlo methods have been developed for benchmark computations of small-sample worths in simplified geometries. The first is basically a standard Monte Carlo perturbation method in which neutrons are steered towards the sample by roulette and splitting. One finds, however, that two variance reduction methods are required to make this sort of perturbation calculation feasible. First, neutrons that have passed through the sample must be exempted from roulette. Second, neutrons must be forced to undergo scattering collisions in the sample. Even when such methods are invoked, however, it is still necessary to exaggerate the volume fraction of the sample by drastically reducing the size of the core. The benchmark calculations are then used to test more approximate methods, and not directly to analyze experiments. In the second method the flux at the surface of the sample is assumed to be known. Neutrons entering the sample are drawn from this known flux and tracked by Monte Carlo. The effect of the sample on the fission rate is then inferred from the histories of these neutrons. The characteristics of both of these methods are explored empirically.
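    The roulette step mentioned above is a standard variance-reduction device; a minimal weight-based sketch follows, with illustrative thresholds rather than the paper's exact scheme. The key property is that the expected weight is preserved, so the estimate stays unbiased while low-weight histories are culled.

```python
import random

def russian_roulette(weight, w_min=0.1, w_survive=0.5, rng=random):
    """Weight-window Russian roulette: a particle whose weight falls
    below w_min survives with probability weight / w_survive and is
    promoted to weight w_survive; otherwise it is killed.  The expected
    weight, (weight / w_survive) * w_survive = weight, is unchanged."""
    if weight >= w_min:
        return weight
    return w_survive if rng.random() < weight / w_survive else 0.0
```

    Splitting is the mirror image: a particle whose weight exceeds an upper threshold is replaced by several copies of proportionally smaller weight, again leaving the expectation untouched.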

  8. jTracker and Monte Carlo Comparison

    NASA Astrophysics Data System (ADS)

    Selensky, Lauren; SeaQuest/E906 Collaboration

    2015-10-01

    SeaQuest is designed to observe the characteristics and behavior of `sea-quarks' in a proton by reconstructing them from the subatomic particles produced in a collision. The 120 GeV beam from the main injector collides with a fixed target and then passes through a series of detectors which record information about the particles produced in the collision. However, this data becomes meaningful only after it has been processed, stored, analyzed, and interpreted. Several programs are involved in this process. jTracker (sqerp) reads wire or hodoscope hits and reconstructs the tracks of potential dimuon pairs from a run, and Geant4 Monte Carlo simulates dimuon production and background noise from the beam. During track reconstruction, an event must meet the criteria set by the tracker to be considered a viable dimuon pair; this ensures that relevant data is retained. As a check, a comparison between a new version of jTracker and Monte Carlo was made in order to see how accurately jTracker could reconstruct the events created by Monte Carlo. In this presentation, the results of the inquiry and their potential effects on the programming will be shown. This work is supported by U.S. DOE MENP Grant DE-FG02-03ER41243.

  9. Path Integral Monte Carlo Methods for Fermions

    NASA Astrophysics Data System (ADS)

    Ethan, Ethan; Dubois, Jonathan; Ceperley, David

    2014-03-01

    In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with discussion concerning extension to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10- ERD-058, and the Lawrence Scholar program.

  10. Dosimetric characterization and organ dose assessment in digital breast tomosynthesis: Measurements and Monte Carlo simulations using voxel phantoms

    SciTech Connect

    Baptista, Mariana; Di Maria, Salvatore; Barros, Sílvia; Vaz, Pedro; Figueira, Catarina; Sarmento, Marta; Orvalho, Lurdes

    2015-07-15

    Purpose: Due to its capability to more accurately detect deep lesions inside the breast by removing the effect of overlying anatomy, digital breast tomosynthesis (DBT) has the potential to replace the standard mammography technique in clinical screening exams. However, the European Guidelines for DBT dosimetry are still a work in progress and there are little data available on organ doses other than to the breast. It is, therefore, of great importance to assess the dosimetric performance of DBT with respect to the one obtained with standard digital mammography (DM) systems. The aim of this work is twofold: (i) to study the dosimetric properties of a combined DBT/DM system (MAMMOMAT Inspiration, Siemens®) for a tungsten/rhodium (W/Rh) anode/filter combination and (ii) to evaluate organ doses during a DBT examination. Methods: For the first task, measurements were performed in manual and automatic exposure control (AEC) modes, using two homogeneous breast phantoms: a PMMA slab phantom and a 4 cm thick breast-shaped rigid phantom, with 50% of glandular tissue in its composition. Monte Carlo (MC) simulations were performed using Monte Carlo N-Particle eXtended v.2.7.0. A MC model was implemented to mimic DM and DBT acquisitions for a wide range of x-ray spectra (24–34 kV). This was used to calculate mean glandular dose (MGD) and to compute series of backscatter factors (BSFs) that could be inserted into the DBT dosimetric formalism proposed by Dance et al. Regarding the second aim of the study, the implemented MC model of the clinical equipment, together with a female voxel phantom ("Laura"), was used to calculate organ doses considering a typical DBT acquisition. Results were compared with a standard two-view mammography craniocaudal (CC) acquisition. Results: Considering the AEC mode, the acquisition of a single CC view results in a MGD ranging from 0.53 ± 0.07 mGy to 2.41 ± 0.31 mGy in DM mode and from 0.77 ± 0.11 mGy to 2.28 ± 0.32 mGy in DBT mode

  11. Characterization of emergent leakage neutrons from multiple layers of hydrogen/water in the lunar regolith by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    SU, J.; Sagdeev, R.; Usikov, D.; Chin, G.; Boyer, L.; Livengood, T. A.; McClanahan, T. P.; Murray, J.; Starr, R. D.

    2013-12-01

    Introduction: The leakage flux of lunar neutrons, produced by precipitation of galactic cosmic ray (GCR) particles into the upper layer of the lunar regolith and measured by orbital instruments such as the Lunar Exploration Neutron Detector (LEND), is investigated by Monte Carlo simulation. Previous Monte Carlo (MC) simulations have been used to investigate neutron production and leakage from the lunar surface to assess the elemental composition of lunar soil [1-6] and its effect on the leakage neutron flux. We investigate effects on the emergent flux that depend on the physical distribution of hydrogen within the regolith. We use the software package GEANT4 [7] to calculate neutron production from spallation by GCR particles [8,9] in the lunar soil. Multiple layers of differing hydrogen/water content at different depths in the lunar regolith model are introduced to examine enhancement or suppression of the leakage neutron flux. We find that the majority of leakage thermal and epithermal neutrons are produced at depths of 25 cm to 75 cm below the lunar surface. Neutrons produced in the shallow top layer retain more of their original energy, due to fewer scattering interactions, and escape from the lunar surface mostly as fast neutrons. This provides a diagnostic tool for interpreting leakage neutron flux enhancement or suppression due to the hydrogen concentration distribution in the lunar regolith. We also find that the angular distribution of emitted thermal and epithermal leakage neutrons can be described by cos^(3/2)(theta), whereas that of fast neutrons follows cos(theta). The energy sensitivity and angular response of the LEND detectors SETN and CSETN are investigated using the leakage neutron spectrum from GEANT4 simulations. A simplified LRO model is used to benchmark MCNPX [10] and GEANT4 against the CSETN absolute count rate corresponding to the neutron flux from bombardment of FAN lunar soil by GCR particles at a 120 MV solar modulation potential. We are able to interpret the count rates of SETN and
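
    The reported cos^(3/2)(theta) emission law can be sampled directly by inversion; this brief sketch is not part of the authors' GEANT4 pipeline. With the solid-angle weight sin(theta), the normalized CDF for an emission law J ∝ cos^p(theta) is 1 - cos^(p+1)(theta), so cos(theta) = u^(1/(p+1)) for uniform u, and the mean of cos(theta) is (p+1)/(p+2).

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_cos_theta(n, power=1.5):
    """Inverse-CDF sample of cos(theta) for an emission law J ∝ cos^power."""
    u = rng.random(n)
    return u ** (1.0 / (power + 1.0))

mu = sample_cos_theta(1_000_000)                   # thermal/epithermal: 3/2
mu_fast = sample_cos_theta(1_000_000, power=1.0)   # fast neutrons: cos(theta)
print(f"<cos theta> thermal/epithermal = {mu.mean():.4f} (analytic 5/7 = {5/7:.4f})")
print(f"<cos theta> fast               = {mu_fast.mean():.4f} (analytic 2/3 = {2/3:.4f})")
```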

  12. Characterization and validation of a Monte Carlo code for independent dose calculation in proton therapy treatments with pencil beam scanning

    NASA Astrophysics Data System (ADS)

    Fracchiolla, F.; Lorentini, S.; Widesott, L.; Schwarz, M.

    2015-11-01

    We propose a method of creating and validating a Monte Carlo (MC) model of a proton Pencil Beam Scanning (PBS) machine using only commissioning measurements and avoiding nozzle modeling. Measurements with a scintillating screen coupled with a CCD camera, an ionization chamber, and a Faraday cup were used to model the beam in TOPAS without using any machine parameter information other than the virtual source distance from the isocenter. The model was then validated on simple Spread Out Bragg Peaks (SOBP) delivered in a water phantom and with six realistic clinical plans (many involving 3 or more fields) on an anthropomorphic phantom. In particular, the behavior of the moveable Range Shifter (RS) was investigated and a model for it is proposed. Gamma analysis (3%, 3 mm) was used to compare MC, TPS (XiO-ELEKTA), and measured 2D dose distributions (using radiochromic film). The MC model proposed here shows good results in the validation phase, both for simple irradiation geometries (SOBP in water) and for modulated treatment fields (on anthropomorphic phantoms). In particular, head lesions were investigated and both MC and TPS data were compared with measurements. Treatment plans with no RS always showed very good agreement with measurements for both (γ-Passing Rate (PR) > 95%). Treatment plans in which the RS was needed were also tested and validated; for these plans, MC results showed better agreement with measurements (γ-PR > 93%) than those from the TPS (γ-PR < 88%). This work shows how to simplify the MC modeling of a PBS machine for proton therapy treatments without accounting for any hardware components, and proposes a more reliable RS model than the one implemented in our TPS. The validation process has shown that this code is a valid candidate for a completely independent treatment plan dose calculation algorithm. This makes the code an important future tool for the patient-specific QA verification process.
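
    The gamma criterion used above combines a dose tolerance with a distance-to-agreement. A minimal 1D sketch of the passing-rate computation follows; clinical tools such as the film analysis here work in 2D and additionally apply dose thresholds and interpolation.

```python
import numpy as np

def gamma_passing_rate(x, ref, x_eval, eval_dose, dose_tol=0.03, dist_tol=3.0):
    """Fraction (%) of reference points with gamma <= 1. Doses are normalized
    to the reference maximum (global criterion); distances in mm."""
    ref_max = ref.max()
    passed = 0
    for xi, di in zip(x, ref):
        dd = (eval_dose - di) / (dose_tol * ref_max)   # dose-difference term
        dx = (x_eval - xi) / dist_tol                  # distance term
        gamma = np.sqrt(dd**2 + dx**2).min()
        passed += gamma <= 1.0
    return 100.0 * passed / len(ref)

# A smooth Gaussian "measured" profile vs the same profile shifted by 2 mm:
x = np.arange(-30.0, 30.0, 1.0)                        # 1 mm grid
ref = np.exp(-x**2 / (2 * 10.0**2))
shifted = np.exp(-(x - 2.0)**2 / (2 * 10.0**2))
print(gamma_passing_rate(x, ref, x, shifted))          # 2 mm < 3 mm: passes
```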

  14. Thermodynamic properties of van der Waals fluids from Monte Carlo simulations and perturbative Monte Carlo theory

    NASA Astrophysics Data System (ADS)

    Díez, A.; Largo, J.; Solana, J. R.

    2006-08-01

    Computer simulations have been performed for fluids with a van der Waals potential, that is, hard spheres with attractive inverse-power tails, to determine the equation of state and the excess energy. In addition, the first- and second-order perturbative contributions to the energy and the zero- and first-order perturbative contributions to the compressibility factor have been determined from Monte Carlo simulations performed on the reference hard-sphere system. The aim was to test the reliability of this "exact" perturbation theory. It has been found that the results obtained from the Monte Carlo perturbation theory for these two thermodynamic properties agree well with the direct Monte Carlo simulations. Moreover, results from the Barker-Henderson [J. Chem. Phys. 47, 2856 (1967)] perturbation theory are in good agreement with those from the exact perturbation theory.
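
    The first-order perturbative contribution discussed above can be sketched numerically. Assuming the low-density limit g_HS(r) ≈ 1 (the paper instead uses hard-sphere simulation data for g_HS) and a tail u(r) = -eps (sigma/r)^n, the Barker-Henderson first-order energy term per particle has the closed form -2·pi·rho·eps·sigma^3/(n-3), which the quadrature below reproduces.

```python
import numpy as np

# Hard spheres (diameter sigma) with attractive tail u(r) = -eps*(sigma/r)^n.
# First-order Barker-Henderson term per particle, in the low-density
# approximation g_HS(r) ~ 1 adopted here so the result has a closed form.

eps, sigma, n_exp, rho = 1.0, 1.0, 6.0, 0.1

r = np.linspace(sigma, 50.0 * sigma, 200_001)
integrand = -eps * (sigma / r) ** n_exp * r**2
# composite trapezoid rule for 2*pi*rho * int_sigma^inf u(r) g(r) r^2 dr
a1 = 2.0 * np.pi * rho * float(((integrand[:-1] + integrand[1:]) / 2.0
                                * np.diff(r)).sum())

analytic = -2.0 * np.pi * rho * eps * sigma**3 / (n_exp - 3.0)
print(f"numeric  a1/N = {a1:.5f}")
print(f"analytic a1/N = {analytic:.5f}")
```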

  16. Four decades of implicit Monte Carlo

    DOE PAGES

    Wollaber, Allan B.

    2016-04-25

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard for high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential for maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Finally, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
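
    A small sketch of the linchpin of the Fleck-Cummings linearization, the Fleck factor, which interpolates between an explicit treatment of absorption/reemission (f = 1 as Δt → 0) and a strongly implicit one. The material values below are illustrative only.

```python
# Fleck factor f = 1 / (1 + alpha * beta * c * dt * sigma_p), with
# beta = 4 a T^3 / (rho * c_v); alpha is the time-centering parameter.
# CGS-like values below are illustrative only.

A_RAD = 7.5657e-15      # radiation constant a  [erg cm^-3 K^-4]
C_LIGHT = 2.9979e10     # speed of light        [cm s^-1]

def fleck_factor(T, rho, cv, sigma_p, dt, alpha=1.0):
    beta = 4.0 * A_RAD * T**3 / (rho * cv)
    return 1.0 / (1.0 + alpha * beta * C_LIGHT * dt * sigma_p)

f = fleck_factor(T=1.0e6, rho=1.0, cv=1.0e12, sigma_p=100.0, dt=1.0e-5)
print(f"f = {f:.4f}")                                    # always in (0, 1]
print(f"dt -> 0: f = {fleck_factor(1.0e6, 1.0, 1.0e12, 100.0, 1.0e-9):.6f}")
```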

  17. Status of Monte-Carlo Event Generators

    SciTech Connect

    Hoeche, Stefan; /SLAC

    2011-08-11

    Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing the full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off initial- and final-state particles and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.

  18. Quantum Monte Carlo for vibrating molecules

    SciTech Connect

    Brown, W.R.

    1996-08-01

    Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H₂O and C₃ vibrational states, using 7 PESs, 3 trial wavefunction forms, and two methods of non-linear basis-function parameter optimization, on both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H₂O and C₃. For C₃, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C₃ PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.

  19. Discovering correlated fermions using quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Wagner, Lucas K.; Ceperley, David M.

    2016-09-01

    It has become increasingly feasible to use quantum Monte Carlo (QMC) methods to study correlated fermion systems for realistic Hamiltonians. We give a summary of these techniques targeted at researchers in the field of correlated electrons, focusing on the fundamentals, capabilities, and current status of this technique. The QMC methods often offer the highest accuracy solutions available for systems in the continuum, and, since they address the many-body problem directly, the simulations can be analyzed to obtain insight into the nature of correlated quantum behavior.

  20. Monte Carlo procedure for protein design

    NASA Astrophysics Data System (ADS)

    Irbäck, Anders; Peterson, Carsten; Potthast, Frank; Sandelin, Erik

    1998-11-01

    A method for sequence optimization in protein models is presented. The approach, which inherits its basic philosophy from recent work by Deutsch and Kurosky [Phys. Rev. Lett. 76, 323 (1996)] in that it maximizes conditional probabilities rather than minimizing energy functions, is based on a different and very efficient multisequence Monte Carlo scheme. By construction, the method ensures that the designed sequences represent good folders thermodynamically. A bootstrap procedure for the sequence-space search is devised, making very large chains feasible. The algorithm is successfully explored on the two-dimensional HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] with chain lengths N=16, 18, and 32.
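
    The HP-model scoring that any such design scheme must evaluate repeatedly is simple: each non-bonded pair of H monomers on adjacent lattice sites contributes -1, P monomers contribute nothing. A minimal sketch of that energy function (the multisequence scheme itself is not reproduced here):

```python
def hp_energy(coords, seq):
    """coords: list of (x, y) lattice sites of a self-avoiding chain;
    seq: string of 'H'/'P' of the same length. Each non-bonded H-H
    lattice contact contributes -1."""
    E = 0
    for i in range(len(seq)):
        for j in range(i + 2, len(seq)):                 # skip bonded neighbors
            if seq[i] == seq[j] == 'H':
                dx = abs(coords[i][0] - coords[j][0])
                dy = abs(coords[i][1] - coords[j][1])
                if dx + dy == 1:                         # lattice contact
                    E -= 1
    return E

# A length-4 chain folded into a square: one contact between the chain ends.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(hp_energy(square, "HHHH"))   # -1
print(hp_energy(square, "HPPH"))   # -1 (both ends are H)
print(hp_energy(square, "HPPP"))   # 0
```

    A design Monte Carlo move would re-sample the sequence and re-score the fixed target structure with exactly this kind of function.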

  1. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. The first is a numerical averaging of the Wetherill formula; the second is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find excellent agreement between all methods in the general case, while large differences appear in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward
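
    The super-sizing idea in the second method can be illustrated with a gravity-free toy problem: count hits on an inflated target and rescale by the ratio of cross sections, (R/R_big)^2, which is exact in expectation for straight-line trajectories. This is only a sketch of the variance-reduction idea, not of the orbital machinery or the Hill-sphere geometry.

```python
import numpy as np

rng = np.random.default_rng(1)

def hit_probability(R, n):
    """Parallel rays whose impact points are uniform in [-1,1]^2 hit a
    sphere of radius R centered on the axis iff x^2 + y^2 < R^2."""
    xy = rng.uniform(-1.0, 1.0, size=(n, 2))
    return float(np.mean((xy**2).sum(axis=1) < R**2))

R, R_big, n = 0.01, 0.2, 200_000
p_direct = hit_probability(R, n)                     # few hits: very noisy
p_scaled = hit_probability(R_big, n) * (R / R_big) ** 2
print(f"analytic    : {np.pi * R**2 / 4:.2e}")
print(f"direct MC   : {p_direct:.2e}")
print(f"super-sized : {p_scaled:.2e}")
```

    For the same sample size, the super-sized estimator collects far more hits and so carries a much smaller relative error than the direct tally.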

  2. Monte Carlo radiation transport & parallelism

    SciTech Connect

    Cox, L. J.; Post, S. E.

    2002-01-01

    This talk summarizes the main aspects of the LANL ASCI Eolus project and its major unclassified code project, MCNP. The MCNP code provides state-of-the-art Monte Carlo radiation transport to approximately 3000 users world-wide. Almost all hardware platforms are supported because we strictly adhere to the FORTRAN-90/95 standard. For parallel processing, MCNP uses a mixture of OpenMP combined with either MPI or PVM (shared and distributed memory). The talk then summarizes our experiences on various platforms using MPI with and without OpenMP. These platforms include PC-Windows, Intel-LINUX, BlueMountain, Frost, ASCI-Q and others.

  3. Monte Carlo algorithm for free energy calculation.

    PubMed

    Bi, Sheng; Tong, Ning-Hua

    2015-07-01

    We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows an excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.
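
    A sketch in the spirit of the temperature scan described above, though not the authors' algorithm: plain Metropolis sampling of <E> on a grid of beta values, with thermodynamic integration from beta = 0 (where beta*F = -N ln 2), checked against brute-force enumeration on a 3x3 periodic square lattice.

```python
import itertools, math, random

L = 3
N = L * L
# right and down neighbors with periodic boundaries: 18 distinct bonds
BONDS = [(i, (i // L) * L + (i % L + 1) % L) for i in range(N)] + \
        [(i, (i + L) % N) for i in range(N)]
NBRS = [[] for _ in range(N)]
for i, j in BONDS:
    NBRS[i].append(j)
    NBRS[j].append(i)

def energy(s):
    return -sum(s[i] * s[j] for i, j in BONDS)

def exact_free_energy(beta):
    Z = sum(math.exp(-beta * energy(s))
            for s in itertools.product((-1, 1), repeat=N))
    return -math.log(Z) / beta

def mc_free_energy(beta_max, steps=11, sweeps=2000, equil=200, seed=7):
    rng = random.Random(seed)
    s = [1] * N
    betas = [beta_max * k / (steps - 1) for k in range(steps)]
    e_avg = []
    for beta in betas:                       # anneal upward in beta
        acc = 0.0
        for sweep in range(sweeps + equil):
            for _ in range(N):               # one Metropolis sweep
                i = rng.randrange(N)
                dE = 2 * s[i] * sum(s[j] for j in NBRS[i])
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    s[i] = -s[i]
            if sweep >= equil:
                acc += energy(s)
        e_avg.append(acc / sweeps)
    # beta*F(beta) = -N*ln 2 + integral_0^beta <E> dbeta'   (trapezoid rule)
    integral = sum((e_avg[k] + e_avg[k + 1]) / 2 * (betas[k + 1] - betas[k])
                   for k in range(steps - 1))
    return (-N * math.log(2) + integral) / beta_max

F_mc = mc_free_energy(0.5)
F_exact = exact_free_energy(0.5)
print(f"MC temperature-scan F = {F_mc:.3f}")
print(f"exact (enumeration) F = {F_exact:.3f}")
```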

  4. Exascale Monte Carlo R&D

    SciTech Connect

    Marcus, Ryan C.

    2012-07-24

    Overview of this presentation is (1) Exascale computing - different technologies, getting there; (2) high-performance proof-of-concept MCMini - features and results; and (3) OpenCL toolkit - Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL seems like a good, hardware agnostic tool. MCMini demonstrates the possibility for GPGPU-based Monte Carlo methods - it shows great scaling for HPC application and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.

  5. Quantum Monte Carlo calculations for light nuclei.

    SciTech Connect

    Wiringa, R. B.

    1998-10-23

    Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A ≤ 8 are made using a realistic Hamiltonian that fits NN scattering data. Results for more than 40 different (J^π, T) states, plus isobaric analogs, are obtained and the known excitation spectra are reproduced reasonably well. Various density and momentum distributions and electromagnetic form factors and moments have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.

  6. Monte Carlo simulation for the transport beamline

    SciTech Connect

    Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.

    2013-07-26

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have been already simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement a MC based 3D treatment planning in order to optimize shots number and dose delivery.

  7. Kinetic Monte Carlo simulations of proton conductivity

    NASA Astrophysics Data System (ADS)

    Masłowski, T.; Drzewiński, A.; Ulner, J.; Wojtkiewicz, J.; Zdanowska-Frączek, M.; Nordlund, K.; Kuronen, A.

    2014-07-01

    The kinetic Monte Carlo method is used to model the dynamic properties of proton diffusion in anhydrous proton conductors. The results are discussed with reference to a two-step process called the Grotthuss mechanism, widely believed to be responsible for fast proton mobility. We show in detail that the relative frequency of reorientation and diffusion processes is crucial for the conductivity. Moreover, the dependence of the current on proton concentration is analyzed. To test our microscopic model, proton transport in polymer electrolyte membranes based on benzimidazole (C7H6N2) molecules is studied.
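
    The two-step Grotthuss picture can be sketched with a minimal kinetic Monte Carlo model in which a proton alternates between a state that must reorient and a state that can transfer; the mean time per net hop is then 1/k_reorient + 1/k_transfer, so the slower process limits the conductivity. The rates below are illustrative, not fitted to the benzimidazole system.

```python
import math, random

def kmc_grotthuss(k_reorient, k_transfer, n_hops, seed=5):
    """Residence-time (Gillespie) KMC: draw an exponential waiting time for
    the one process currently available, alternating reorientation and
    intermolecular transfer."""
    rng = random.Random(seed)
    t, hops, can_transfer = 0.0, 0, False
    while hops < n_hops:
        rate = k_transfer if can_transfer else k_reorient
        t += -math.log(1.0 - rng.random()) / rate    # exponential waiting time
        if can_transfer:
            hops += 1                                # intermolecular proton hop
        can_transfer = not can_transfer              # alternate the two steps
    return t / n_hops

mean_t = kmc_grotthuss(k_reorient=5.0, k_transfer=1.0, n_hops=100_000)
print(f"KMC mean time per hop : {mean_t:.4f}")
print(f"1/k_reo + 1/k_trans   : {1 / 5.0 + 1 / 1.0:.4f}")
```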

  9. Quantum Monte Carlo : not just for energy levels.

    SciTech Connect

    Nollett, K. M.; Physics

    2007-01-01

    Quantum Monte Carlo and realistic interactions can provide well-motivated vertices and overlaps for DWBA analyses of reactions. Given an interaction in vacuum, there are several computational approaches to nuclear systems, as you have been hearing: the no-core shell model with Lee-Suzuki or Bloch-Horowitz effective Hamiltonians; coupled clusters with a G-matrix interaction; density functional theory, granted an energy functional derived from the interaction; and quantum Monte Carlo, in its variational Monte Carlo and Green's function Monte Carlo variants. The last two work directly with a bare interaction and bare operators and describe the wave function without expanding in basis functions, so they have rather different sets of advantages and disadvantages from the others. Variational Monte Carlo (VMC) is built on a sophisticated Ansatz for the wave function, with shell-model-like structure modified by operator correlations. Green's function Monte Carlo (GFMC) uses an operator method to project the true ground state out of a reasonable guess wave function.
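
    A minimal VMC example in the sense sketched above, reduced to the 1D harmonic oscillator (hbar = m = omega = 1) with trial function psi_T(x) = exp(-a x^2/2): Metropolis sampling of |psi_T|^2 and averaging of the local energy. The nuclear calculations differ only in the (far richer) Ansatz and Hamiltonian.

```python
import math, random

def local_energy(x, a):
    # E_L = -psi''/(2 psi) + x^2/2 = a/2 + x^2 (1 - a^2)/2
    return 0.5 * a + 0.5 * x * x * (1.0 - a * a)

def vmc(a, n_samples=200_000, step=1.0, seed=3):
    rng = random.Random(seed)
    x, acc = 0.0, 0.0
    for _ in range(n_samples):
        trial = x + rng.uniform(-step, step)
        # Metropolis acceptance with ratio |psi_T(trial)|^2 / |psi_T(x)|^2
        if rng.random() < math.exp(-a * (trial * trial - x * x)):
            x = trial
        acc += local_energy(x, a)
    return acc / n_samples

e_opt = vmc(1.0)    # exact trial function: zero variance, E = 0.5
e_off = vmc(0.6)    # suboptimal a: variational, so E > 0.5
print(f"a=1.0 : E = {e_opt:.4f}")
print(f"a=0.6 : E = {e_off:.4f}")
```

    With the exact trial function (a = 1) the local energy is constant, illustrating the zero-variance property that makes a good Ansatz so valuable.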

  10. State-of-the-art Monte Carlo 1988

    SciTech Connect

    Soran, P.D.

    1988-06-28

    Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.

  11. Neutron transport calculations using Quasi-Monte Carlo methods

    SciTech Connect

    Moskowitz, B.S.

    1997-07-01

    This paper examines the use of quasirandom sequences of points in place of pseudorandom points in Monte Carlo neutron transport calculations. For two simple demonstration problems, the root mean square error, computed over a set of repeated runs, is found to be significantly less when quasirandom sequences are used ("Quasi-Monte Carlo Method") than when a standard Monte Carlo calculation is performed using only pseudorandom points.
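
    The same comparison can be illustrated on a toy integral in place of the transport problems: a base-2 van der Corput sequence serves as the quasirandom point set, and a random shift (Cranley-Patterson rotation) makes the RMS error over repeated runs well defined.

```python
import math, random

def van_der_corput(n):
    """First n points of the base-2 van der Corput sequence (bit reversal)."""
    pts = []
    for k in range(n):
        x, denom = 0.0, 0.5
        while k:
            x += (k & 1) * denom
            k >>= 1
            denom *= 0.5
        pts.append(x)
    return pts

def rmse(estimates, truth):
    return math.sqrt(sum((e - truth) ** 2 for e in estimates) / len(estimates))

f = lambda x: x * x                      # integral over [0,1] is exactly 1/3
rng = random.Random(11)
n_pts, runs = 1024, 50
qmc_pts = van_der_corput(n_pts)
mc_est, qmc_est = [], []
for _ in range(runs):
    mc_est.append(sum(f(rng.random()) for _ in range(n_pts)) / n_pts)
    shift = rng.random()                 # Cranley-Patterson rotation
    qmc_est.append(sum(f((x + shift) % 1.0) for x in qmc_pts) / n_pts)

mc_err = rmse(mc_est, 1 / 3)
qmc_err = rmse(qmc_est, 1 / 3)
print(f"MC  RMSE: {mc_err:.2e}")
print(f"QMC RMSE: {qmc_err:.2e}")        # substantially smaller
```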

  12. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    SciTech Connect

    Densmore, Jeffrey D; Kelly, Thompson G; Urbatish, Todd J

    2010-11-17

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.

  13. Probability Forecasting Using Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Duncan, M.; Frisbee, J.; Wysack, J.

    2014-09-01

    Space Situational Awareness (SSA) is defined as the knowledge and characterization of all aspects of space. SSA is now a fundamental and critical component of space operations. Increased dependence on our space assets has in turn led to a greater need for accurate, near real-time knowledge of all space activities. With the growth of the orbital debris population, satellite operators are performing collision avoidance maneuvers more frequently. Frequent maneuver execution expends fuel and reduces the operational lifetime of the spacecraft, so new, more sophisticated collision threat characterization methods must be implemented. The collision probability metric is used operationally to quantify the collision risk. The collision probability is typically calculated days into the future, so that high risk and potentially high risk conjunction events are identified early enough to develop an appropriate course of action. As the time horizon to the conjunction event is reduced, the collision probability changes. A significant change in the collision probability will change the satellite mission stakeholders' course of action. Constructing a method for estimating how the collision probability will evolve therefore improves operations by providing satellite operators with a new piece of information, namely an estimate or 'forecast' of how the risk will change as time to the event is reduced. Collision probability forecasting is a predictive process in which the future risk of a conjunction event is estimated. The method utilizes a Monte Carlo simulation that produces a likelihood distribution for a given collision threshold. Using known state and state uncertainty information, the simulation generates a set of possible trajectories for a given space object pair. Each new trajectory produces a unique event geometry at the time of close approach. Given state uncertainty information for both objects, a collision probability value can be computed for every trial. This yields a
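
    The trajectory-sampling step described above reduces, in the conjunction plane, to sampling relative-position errors from the combined covariance and counting miss distances below the combined hard-body radius. The sketch below assumes an isotropic covariance so the Monte Carlo estimate can be checked against the closed form 1 - exp(-R^2/(2 sigma^2)); real conjunctions involve correlated, anisotropic covariances.

```python
import math
import numpy as np

rng = np.random.default_rng(2024)

def collision_probability(miss_mean, sigma, R, n=1_000_000):
    """Sample relative positions in the conjunction plane and count miss
    distances below the combined hard-body radius R."""
    samples = miss_mean + rng.normal(0.0, sigma, size=(n, 2))
    return float(np.mean(np.hypot(samples[:, 0], samples[:, 1]) < R))

sigma, R = 100.0, 20.0                       # meters, illustrative values
pc = collision_probability(np.zeros(2), sigma, R)
exact = 1.0 - math.exp(-R**2 / (2 * sigma**2))
print(f"MC Pc    = {pc:.5f}")
print(f"analytic = {exact:.5f}")
```

    Re-running this estimate with the shrinking covariance predicted at later epochs is exactly the forecasting loop the abstract describes.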

  14. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wu, Keyi; Li, Jinglai

    2016-09-01

    In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithm, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitude of speedup over standard Monte Carlo methods.
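
    MMC itself is adaptive; as a simple non-adaptive stand-in, the sketch below estimates a single tail probability of y by importance sampling with a shifted proposal and reweighting. This is the same mechanism that lets MMC resolve low-probability regions of the PDF that plain Monte Carlo rarely visits.

```python
import math, random

def tail_prob_is(t, n=100_000, seed=9):
    """Importance-sampling estimate of P(X > t) for X ~ N(0,1), using the
    shifted proposal N(t,1); the weight is w(x) = exp(t^2/2 - t*x)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = t + rng.gauss(0.0, 1.0)
        if x > t:
            acc += math.exp(0.5 * t * t - t * x)
    return acc / n

est = tail_prob_is(4.0)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # P(X > 4) ~ 3.2e-5
print(f"IS estimate : {est:.3e}")
print(f"exact value : {exact:.3e}")
```

    Plain Monte Carlo would need on the order of 10^7 samples just to see a handful of such events; the shifted proposal resolves the tail with 10^5.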

  15. Parallel and Portable Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.

    1997-08-01

    We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
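
    The k-eigenvalue computation can be illustrated by a generation-based iteration collapsed to an infinite homogeneous one-group medium, where the answer is known analytically: k = nu * Sigma_f / Sigma_a. This is a sketch of the estimator only (MC++ tracks particles on a spatial mesh); the cross sections are illustrative.

```python
import random

def k_eigenvalue(nu, sigma_f, sigma_a, n_per_gen=20_000, generations=10, seed=13):
    """Analog generation-based estimate: every source neutron is eventually
    absorbed; absorption causes fission with probability Sigma_f/Sigma_a,
    emitting nu neutrons (nu sampled to an integer). k is the ratio of
    fission births to source neutrons, averaged over generations."""
    rng = random.Random(seed)
    k_gen = []
    for _ in range(generations):          # population renormalized each gen
        births = 0
        for _ in range(n_per_gen):
            if rng.random() < sigma_f / sigma_a:
                births += int(nu) + (rng.random() < nu - int(nu))
        k_gen.append(births / n_per_gen)
    return sum(k_gen) / len(k_gen)

nu, sigma_f, sigma_a = 2.5, 0.05, 0.10    # illustrative one-group data
k_mc = k_eigenvalue(nu, sigma_f, sigma_a)
print(f"MC k       = {k_mc:.4f}")
print(f"analytic k = {nu * sigma_f / sigma_a:.4f}")
```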

  16. Quantum Monte Carlo for atoms and molecules

    SciTech Connect

    Barnett, R.N.

    1989-11-01

The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H₂, LiH, Li₂, and H₂O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li₂, and H₂O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schrödinger equation were studied extensively. These parameters include the time step, renormalization time, and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
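A minimal sketch of the diffusion QMC idea follows: drift-free, with no importance sampling or fixed nodes, applied to the 1-D harmonic oscillator rather than the atoms and molecules studied here. Walkers diffuse, branch according to the potential, and the population-control offset converges to the ground-state energy E₀ = 1/2 (in oscillator units).

```python
import numpy as np

rng = np.random.default_rng(8)

def dmc_harmonic(n_target=2000, n_steps=400, dt=0.01):
    """Toy diffusion Monte Carlo for V(x) = x^2/2; the growth estimator
    (population-control energy offset) approaches E0 = 0.5."""
    x = rng.standard_normal(n_target)
    e_ref, history = 0.5, []
    for _ in range(n_steps):
        x = x + np.sqrt(dt) * rng.standard_normal(len(x))     # diffusion step
        w = np.exp(-dt * (0.5 * x**2 - e_ref))                # branching weight
        n_copies = (w + rng.random(len(x))).astype(int)       # stochastic rounding
        x = np.repeat(x, n_copies)                            # replicate / kill
        e_ref += 0.5 * np.log(n_target / len(x))              # population control
        history.append(e_ref)
    return float(np.mean(history[n_steps // 2:]))

e0 = dmc_harmonic()
```

The time-step and population-control parameters here play exactly the roles whose influence the abstract says was studied extensively.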

  17. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

    SciTech Connect

    WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.

    2007-01-10

MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and its excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

  18. Experimental Monte Carlo Quantum Process Certification

    NASA Astrophysics Data System (ADS)

    Steffen, Lars; Fedorov, Arkady; Baur, Matthias; Palmer da Silva, Marcus; Wallraff, Andreas

    2012-02-01

Experimental implementations of quantum information processing have now reached a state at which quantum process tomography starts to become impractical, since the number of experimental settings as well as the computational cost of the post-processing required to extract the process matrix from the measurements scales exponentially with the number of qubits in the system. In order to determine the fidelity of an implemented process relative to the ideal one, a more practical approach called Monte Carlo quantum process certification was proposed in Ref. [1]. Here we present an experimental implementation of this scheme in a circuit quantum electrodynamics setup. Our system is realized with three superconducting transmon qubits coupled to a coplanar microwave resonator which is used for the joint readout of the qubit states. We demonstrate an implementation of Monte Carlo quantum process certification and determine the fidelity of different two- and three-qubit gates such as cphase, cnot, 2cphase and Toffoli gates. The obtained results are compared with the values obtained from conventional process tomography and the errors of the obtained fidelities are determined. [4pt] [1] M. P. da Silva, O. Landon-Cardinal and D. Poulin, arXiv:1104.3835 (2011)

  19. Quantum Monte Carlo methods for nuclear physics

    DOE PAGES

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-09-09

Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  20. Quantum Monte Carlo methods for nuclear physics

    DOE PAGES

    Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.

    2014-10-19

Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  1. Discrete range clustering using Monte Carlo methods

    NASA Technical Reports Server (NTRS)

    Chatterji, G. B.; Sridhar, B.

    1993-01-01

    For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
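A Metropolis/simulated-annealing label assignment of the kind compared in such studies can be sketched as follows. The data, cooling schedule, and within-cluster sum-of-squares cost are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sa_cluster(points, k=2, n_steps=8000, t0=1.0, cooling=0.999):
    """Simulated-annealing clustering: Metropolis moves on point labels,
    minimizing the within-cluster sum of squared distances."""
    labels = rng.integers(k, size=len(points))

    def cost(lab):
        c = 0.0
        for j in range(k):
            members = points[lab == j]
            if len(members):
                c += ((members - members.mean(axis=0)) ** 2).sum()
        return c

    e, t = cost(labels), t0
    for _ in range(n_steps):
        trial = labels.copy()
        trial[rng.integers(len(points))] = rng.integers(k)   # random reassignment
        e_new = cost(trial)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if e_new < e or rng.random() < np.exp((e - e_new) / t):
            labels, e = trial, e_new
        t *= cooling
    return labels

# two well-separated synthetic clusters, standing in for sparse range points
pts = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(5.0, 0.3, (20, 2))])
labels = sa_cluster(pts)
```

Setting t0 = 0 reduces this to the basic greedy Monte Carlo baseline, which is the comparison the paper explores.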

  2. Monte Carlo simulations within avalanche rescue

    NASA Astrophysics Data System (ADS)

    Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg

    2016-04-01

    Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
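The MC approach can be illustrated with a toy probing-depth calculation. The lognormal burial-depth distribution and the 90% coverage target below are invented placeholders, not the study's fitted inputs; a real analysis would calibrate the distribution to accident statistics.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical burial-depth distribution (lognormal, median 1.0 m)
depths = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

def coverage(probe_depth):
    """Fraction of simulated victims reachable by a probe of given length."""
    return float((depths <= probe_depth).mean())

# smallest probing depth (m) that reaches at least 90% of simulated burials
candidates = np.arange(0.5, 3.01, 0.1)
d90 = next(d for d in candidates if coverage(d) >= 0.9)
```

The same pattern — sample the unknown parameters, then read the desired quantile off the simulated ensemble — carries over directly to the resuscitation-time question.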

  3. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-09-09

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  4. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
• Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
• Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
• Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain.
• Supporting algorithms: visualizing constructive solid geometry, sourcing particles, deciding when particle streaming communication is complete, and spatial redecomposition.
These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
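The "global particle find" step can be sketched in serial form for a hypothetical 1-D decomposition; the real algorithm performs this owner lookup and the subsequent particle exchange with MPI across millions of ranks.

```python
import bisect

# Hypothetical 1-D domain decomposition: rank i owns [bounds[i], bounds[i+1]).
bounds = [0.0, 2.5, 5.0, 7.5, 10.0]

def owner(x):
    """Global particle find: map a particle coordinate to the rank that
    owns the containing domain (a serial stand-in for the parallel lookup)."""
    if not bounds[0] <= x < bounds[-1]:
        raise ValueError("particle left the global domain")
    return bisect.bisect_right(bounds, x) - 1

# route stray particles to their owning ranks (stand-in for an MPI exchange)
routed = {}
for p in [0.3, 6.1, 9.9, 2.5]:
    routed.setdefault(owner(p), []).append(p)
```

The scalability question in the dissertation is precisely how to do this resolution without any rank holding the full particle list or the full domain table.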

  5. Multilevel Monte Carlo simulation of Coulomb collisions

    SciTech Connect

    Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.

    2014-10-01

We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. We discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
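The telescoping estimator can be sketched on a scalar SDE. The example below uses geometric Brownian motion with Euler–Maruyama and invented parameters; the paper treats the Langevin form of the Landau–Fokker–Planck equation and also uses the Milstein scheme. The key ingredient is that the fine and coarse paths on each level share the same Brownian increments, so their difference has small variance.

```python
import numpy as np

rng = np.random.default_rng(3)

def euler_gbm(n_steps, n_paths, s0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Euler-Maruyama endpoints of dS = mu*S dt + sigma*S dW."""
    dt = T / n_steps
    s = np.full(n_paths, s0)
    for _ in range(n_steps):
        s = s + mu * s * dt + sigma * s * np.sqrt(dt) * rng.standard_normal(n_paths)
    return s

def level_correction(l, n_paths, s0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Fine (2^l steps) minus coarse (2^(l-1) steps) endpoint, with both
    paths driven by the SAME Brownian increments."""
    nf = 2 ** l
    dtf = T / nf
    dW = np.sqrt(dtf) * rng.standard_normal((n_paths, nf))
    sf = np.full(n_paths, s0)
    for i in range(nf):
        sf = sf + mu * sf * dtf + sigma * sf * dW[:, i]
    sc = np.full(n_paths, s0)
    for i in range(nf // 2):
        sc = sc + mu * sc * (2 * dtf) + sigma * sc * (dW[:, 2 * i] + dW[:, 2 * i + 1])
    return sf - sc

def mlmc_mean(levels=5, n0=40_000):
    """Telescoping sum: cheap coarse estimate plus ever-smaller corrections,
    with fewer samples spent on the expensive fine levels."""
    est = euler_gbm(1, n0).mean()
    for l in range(1, levels):
        est += level_correction(l, n0 // 2 ** l).mean()
    return est

estimate = mlmc_mean()   # exact E[S(T)] = exp(0.05)
```

Because the correction variances decay with level, most samples can be taken on the coarse grid, which is the source of the O(ε⁻³) → O(ε⁻²) cost reduction quoted above.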

  6. Multilevel Monte Carlo simulation of Coulomb collisions

    DOE PAGES

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.

    2014-05-29

We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.

  7. Multilevel Monte Carlo simulation of Coulomb collisions

    SciTech Connect

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.

    2014-05-29

We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.

  8. Monte Carlo methods in lattice gauge theories

    SciTech Connect

    Otto, S.W.

    1983-01-01

The mass of the 0⁺ glueball for SU(2) gauge theory in 4 dimensions is calculated. This computation was done on a prototype parallel processor and the implementation of gauge theories on this system is described in detail. Using an action of the purely Wilson form (trace of the plaquette in the fundamental representation), results with high statistics are obtained. These results are not consistent with scaling according to the continuum renormalization group. Using actions containing higher representations of the group, a search is made for one which is closer to the continuum limit. The choice is based upon the phase structure of these extended theories and also upon the Migdal-Kadanoff approximation to the renormalization group on the lattice. The mass of the 0⁺ glueball for this improved action is obtained and the mass divided by the square root of the string tension is a constant as the lattice spacing is varied. The other topic studied is the inclusion of dynamical fermions into Monte Carlo calculations via the pseudo-fermion technique. Monte Carlo results obtained with this method are compared with those from an exact algorithm based on Gauss-Seidel inversion. The methods were first applied to the Schwinger model and SU(3) theory.

  9. Monte Carlo techniques for analyzing deep-penetration problems

    SciTech Connect

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1986-02-01

    Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications.
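The splitting and Russian roulette mentioned above can be sketched with a simple weight window; the thresholds are illustrative values, and a production code would attach spatially varying windows derived from adjoint importance.

```python
import random

random.seed(4)

# Weight-window population control: split heavy particles, play Russian
# roulette with light ones; both keep the expected tally unbiased.
W_LOW, W_HIGH, W_SURVIVE = 0.25, 2.0, 1.0

def apply_window(particles):
    """particles is a list of (weight, position) tuples."""
    out = []
    for w, x in particles:
        if w > W_HIGH:                        # split into n copies of weight w/n
            n = int(w // W_SURVIVE)
            out.extend([(w / n, x)] * n)
        elif w < W_LOW:                       # roulette: survive with prob w/W_SURVIVE
            if random.random() < w / W_SURVIVE:
                out.append((W_SURVIVE, x))
        else:
            out.append((w, x))
    return out
```

Splitting conserves weight exactly; roulette conserves it only in expectation, which is what makes the combination unbiased while controlling the particle population.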

  10. Monte Carlo modeling of spatial coherence: free-space diffraction.

    PubMed

    Fischer, David G; Prahl, Scott A; Duncan, Donald D

    2008-10-01

    We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions. PMID:18830335
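The core of synthesizing random realizations with a prescribed two-point correlation can be sketched, for Gaussian statistics, with a Cholesky factor of the target correlation matrix; the paper's Gaussian-copula construction builds on this kind of draw. The Gaussian coherence function and coherence length below are assumed values.

```python
import numpy as np

rng = np.random.default_rng(5)

# Target: Gaussian coherence function over a 1-D source grid.
x = np.linspace(-1.0, 1.0, 64)
l_c = 0.3                                            # assumed coherence length
target = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * l_c ** 2))

# Cholesky factor (small jitter keeps the kernel numerically positive definite)
L = np.linalg.cholesky(target + 1e-8 * np.eye(len(x)))

# correlated realizations: each column is one random source draw
fields = L @ rng.standard_normal((len(x), 5000))

# empirical coherence between two sample points vs. the prescribed value
emp = np.corrcoef(fields[20], fields[30])[0, 1]
```

Averaging first- and second-order statistics over many such realizations is what the Monte Carlo propagation in the paper then compares against physical-optics predictions.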

  11. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    Lubos Mitas

    2011-01-26

The NCSU research group has focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as a part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining, and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and at present has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculation of pfaffians, and introduction of backflow coordinates, together with overall organization of the code and random-walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the period of the grant duration; it has resulted in 13

  12. Coherent Scattering Imaging Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Hassan, Laila Abdulgalil Rafik

Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing, and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low-dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal-to-noise ratio (SNR). Contrast also increased as the source voltage increased. Increasing the grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to the noise level. The optimal source-to-sample distance was such that the source is located at the focal distance of the grid. A carcinoma lump of 0.5×0.5×0.5 cm³ was detectable, which is reasonable considering the high noise due to the relatively small number of incident photons used for computational reasons. Further study is needed on the effects of breast density and breast thickness.

  13. Resist develop prediction by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sohn, Dong-Soo; Jeon, Kyoung-Ah; Sohn, Young-Soo; Oh, Hye-Keun

    2002-07-01

Various resist develop models have been suggested to express the phenomena, from Dill's pioneering model in 1975 to Shipley's recent enhanced notch model. The statistical Monte Carlo method can be applied to processes such as development and post-exposure bake. The motions of the developer during the development process were traced by using this method. We have considered that the surface edge roughness of the resist depends on the weight percentage of protected and de-protected polymer in the resist. The results agree well with those of other papers. This study can be helpful for developing new photoresists and developers that can be used to pattern device features smaller than 100 nm.

  14. Monte Carlo Studies of Protein Aggregation

    NASA Astrophysics Data System (ADS)

    Jónsson, Sigurður Ægir; Staneva, Iskra; Mohanty, Sandipan; Irbäck, Anders

The disease-linked amyloid β (Aβ) and α-synuclein (αS) proteins are both fibril-forming and natively unfolded in free monomeric form. Here, we discuss two recent studies, where we used extensive implicit solvent all-atom Monte Carlo (MC) simulations to elucidate the conformational ensembles sampled by these proteins. For αS, we somewhat unexpectedly observed two distinct phases, separated by a clear free-energy barrier. The presence of the barrier makes αS, with 140 residues, a challenge to simulate. By using a two-step simulation procedure based on flat-histogram techniques, it was possible to alleviate this problem. The barrier may in part explain why fibril formation is much slower for αS than it is for Aβ.

  15. Nuclear reactions in Monte Carlo codes.

    PubMed

    Ferrari, A; Sala, P R

    2002-01-01

    The physics foundations of hadronic interactions as implemented in most Monte Carlo codes are presented together with a few practical examples. The description of the relevant physics is presented schematically split into the major steps in order to stress the different approaches required for the full understanding of nuclear reactions at intermediate and high energies. Due to the complexity of the problem, only a few semi-qualitative arguments are developed in this paper. The description will be necessarily schematic and somewhat incomplete, but hopefully it will be useful for a first introduction into this topic. Examples are shown mostly for the high energy regime, where all mechanisms mentioned in the paper are at work and to which perhaps most of the readers are less accustomed. Examples for lower energies can be found in the references.

  16. Vectorization of Monte Carlo particle transport

    SciTech Connect

    Burns, P.J.; Christon, M.; Schweitzer, R.; Lubeck, O.M.; Wasserman, H.J.; Simmons, M.L.; Pryor, D.V. . Computer Center; Los Alamos National Lab., NM; Supercomputing Research Center, Bowie, MD )

    1989-01-01

Fully vectorized versions of the Los Alamos National Laboratory benchmark code Gamteb, a Monte Carlo photon transport algorithm, were developed for the Cyber 205/ETA-10 and Cray X-MP/Y-MP architectures. Single-processor performance measurements of the vector and scalar implementations were modeled in a modified Amdahl's Law that accounts for additional data motion in the vector code. The performance and implementation strategy of the vector codes are related to architectural features of each machine. Speedups between fifteen and eighteen for the Cyber 205/ETA-10 architectures, and about nine for the CRAY X-MP/Y-MP architectures, are observed. The best single-processor execution time for the problem was 0.33 seconds on the ETA-10G, and 0.42 seconds on the CRAY Y-MP. 32 refs., 12 figs., 1 tab.
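The unmodified Amdahl's Law that such a performance model starts from is a one-liner; the paper's modified form, which accounts for the extra data motion in the vector code, is not reproduced here.

```python
def amdahl_speedup(f_vec, s_vec):
    """Classical Amdahl's Law: overall speedup when a fraction f_vec of the
    runtime is vectorized with speedup s_vec and the rest stays scalar."""
    return 1.0 / ((1.0 - f_vec) + f_vec / s_vec)
```

For example, vectorizing 95% of the runtime with a 20x vector speedup yields an overall speedup of about 10.3, which shows why the residual scalar fraction dominates the speedups reported above.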

  17. Monte Carlo stratified source-sampling

    SciTech Connect

    Blomquist, R.N.; Gelbard, E.M.

    1997-09-01

In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "eigenvalue of the world" problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. The original test problem was treated by a special code designed specifically for that purpose. Recently ANL started work on a method for dealing with more realistic eigenvalue-of-the-world configurations, and has been incorporating this method into VIM. The original method has been modified to take into account real-world statistical noise sources not included in the model problem. This paper constitutes a status report on work still in progress.
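Generic stratified sampling, the variance-reduction idea underlying stratified source-sampling, can be sketched on a 1-D integral; the eigenvalue-specific scheme in the paper (stratifying the fission source across loosely coupled regions) is more involved than this illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def plain_mc(f, n):
    """Crude Monte Carlo estimate of the integral of f over [0, 1)."""
    return f(rng.random(n)).mean()

def stratified_mc(f, n):
    """Stratified sampling: exactly one sample per equal-width stratum,
    which removes the between-strata component of the variance."""
    u = (np.arange(n) + rng.random(n)) / n
    return f(u).mean()

f = np.exp                        # integral over [0, 1) is e - 1
est_plain = plain_mc(f, 1000)
est_strat = stratified_mc(f, 1000)
```

Forcing every stratum to be sampled is exactly what prevents a source region from being missed for many generations, which is the mechanism behind the "eigenvalue of the world" anomalies.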

  18. Angular biasing in implicit Monte-Carlo

    SciTech Connect

    Zimmerman, G.B.

    1994-10-20

    Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even smaller weight photons at the polar regions of the capsule where small mass zones are most sensitive to statistical noise.
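The weight bookkeeping behind such angular biasing can be sketched for a 1-D emission cosine; the cone and the oversampling probability below are illustrative values, not the paper's parameters.

```python
import random

random.seed(6)

def biased_mu(p_cone=0.8, mu_cut=0.9):
    """Angular biasing: oversample emission cosines inside the cone
    mu > mu_cut aimed at the capsule; the statistical weight (isotropic
    probability of the branch divided by the sampling probability)
    restores an unbiased average."""
    if random.random() < p_cone:
        mu = mu_cut + random.random() * (1.0 - mu_cut)        # inside cone
        weight = ((1.0 - mu_cut) / 2.0) / p_cone
    else:
        mu = -1.0 + random.random() * (1.0 + mu_cut)          # outside cone
        weight = ((1.0 + mu_cut) / 2.0) / (1.0 - p_cone)
    return mu, weight

# check: weighted averages reproduce the isotropic values <1> = 1, <mu> = 0
samples = [biased_mu() for _ in range(200_000)]
mean_w = sum(w for _, w in samples) / len(samples)
mean_wmu = sum(w * mu for mu, w in samples) / len(samples)
```

Many low-weight photons aimed at the capsule reduce the statistical noise on the small mass zones there, which is precisely the effect the abstract quantifies.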

  19. MORSE Monte Carlo radiation transport code system

    SciTech Connect

    Emmett, M.B.

    1983-02-01

This report is an addendum to the MORSE report, ORNL-4972, originally published in 1975. This addendum contains descriptions of several modifications to the MORSE Monte Carlo Code, replacement pages containing corrections, Part II of the report, which was previously unpublished, and a new Table of Contents. The modifications include a Klein-Nishina estimator for gamma rays. Use of such an estimator required changing the cross section routines to process pair production and Compton scattering cross sections directly from ENDF tapes and writing a new version of subroutine RELCOL. Another modification is the use of free-form input for the SAMBO analysis data. This required changing subroutine SCORIN and adding the new subroutine RFRE. References are updated, and errors in the original report have been corrected. (WHK)
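The Klein-Nishina cross section that such an estimator evaluates has the standard closed form, sketched here with a numerical check against its low-energy (Thomson) limit; this is the textbook formula, not MORSE's implementation.

```python
import math

R_E = 2.8179403262e-15          # classical electron radius, m

def klein_nishina(theta, k):
    """Klein-Nishina differential cross section dsigma/dOmega (m^2/sr)
    for scattering angle theta and photon energy k in units of m_e c^2."""
    ratio = 1.0 / (1.0 + k * (1.0 - math.cos(theta)))        # k'/k
    return 0.5 * R_E ** 2 * ratio ** 2 * (ratio + 1.0 / ratio - math.sin(theta) ** 2)

# sanity check: as k -> 0 the angular integral approaches the Thomson
# cross section sigma_T = (8*pi/3) * R_E^2 (trapezoid rule over theta)
thetas = [i * math.pi / 2000 for i in range(2001)]
vals = [klein_nishina(t, 1e-8) * 2 * math.pi * math.sin(t) for t in thetas]
sigma = sum((vals[i] + vals[i + 1]) / 2 for i in range(2000)) * (math.pi / 2000)
```

A next-event estimator scores the product of this differential cross section with the uncollided transport kernel toward the detector at each collision.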

  20. Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.

    1998-12-01

    A code package consisting of the Monte Carlo Library MCLIB, the executing code MC{_}RUN, the web application MC{_}Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC{_}RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown.

  1. Monte Carlo simulations of medical imaging modalities

    SciTech Connect

    Estes, G.P.

    1998-09-01

    Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.

  2. Coherent scatter imaging Monte Carlo simulation.

    PubMed

    Hassan, Laila; MacDonald, Carolyn A

    2016-07-01

    Conventional mammography can suffer from poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter slot scan imaging is an imaging technique which provides additional information and is compatible with conventional mammography. A Monte Carlo simulation of coherent scatter slot scan imaging was performed to assess its performance and provide system optimization. Coherent scatter could be exploited using a system similar to a conventional slot scan mammography system with antiscatter grids tilted at the characteristic angle of cancerous tissues. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The simulated carcinomas were detectable for tumors as small as 5 mm in diameter, so coherent scatter analysis using a wide-slot setup could be promising as an enhancement for screening mammography. Employing coherent scatter information simultaneously with conventional mammography could yield a conventional high spatial resolution image with additional coherent scatter information. PMID:27610397

  3. Monte Carlo Simulation of Endlinking Oligomers

    NASA Technical Reports Server (NTRS)

    Hinkley, Jeffrey A.; Young, Jennifer A.

    1998-01-01

    This report describes initial efforts to model the endlinking reaction of phenylethynyl-terminated oligomers. Several different molecular weights were simulated using the Bond Fluctuation Monte Carlo technique on a 20 x 20 x 20 unit lattice with periodic boundary conditions. After a monodisperse "melt" was equilibrated, chain ends were linked whenever they came within the allowed bond distance. Ends remained reactive throughout, so that multiple links were permitted. Even under these very liberal crosslinking assumptions, geometrical factors limited the degree of crosslinking. Average crosslink functionalities were 2.3 to 2.6; surprisingly, they did not depend strongly on the chain length. These results agreed well with the degrees of crosslinking inferred from experiment in a cured phenylethynyl-terminated polyimide oligomer.

  4. Exploring theory space with Monte Carlo reweighting

    SciTech Connect

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
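
    The core of event reweighting can be sketched in a toy one-dimensional setting (the Gaussian "models" and the observable below are hypothetical stand-ins for fully simulated samples): each event generated under a benchmark model is weighted by the likelihood ratio of the new model to the generation model, so the same sample serves for any point in the theory parameter space.

```python
import math, random

def reweight_mean(n=50000, mu_gen=0.0, mu_new=0.5, sigma=1.0, seed=13):
    """Estimate the mean of an observable under a 'new model' (Gaussian
    with mean mu_new) by reweighting events generated once under a
    'benchmark model' (mean mu_gen).  Each event's weight is the
    likelihood ratio of the two models; normalization constants cancel
    in the self-normalized estimate."""
    rng = random.Random(seed)
    def density(x, mu):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2)
    w_sum = 0.0
    wx_sum = 0.0
    for _ in range(n):
        x = rng.gauss(mu_gen, sigma)                # one benchmark event
        w = density(x, mu_new) / density(x, mu_gen) # per-event reweighting
        w_sum += w
        wx_sum += w * x
    return wx_sum / w_sum        # self-normalized estimate of <x> under new model
```

    The estimate converges to mu_new without generating any events under the new model; in practice the weights degrade as the new model moves far from the benchmark.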

  5. Total Monte Carlo evaluation for dose calculations.

    PubMed

    Sjöstrand, H; Alhassan, E; Conroy, S; Duan, J; Hellesen, C; Pomp, S; Österlund, M; Koning, A; Rochman, D

    2014-10-01

    Total Monte Carlo (TMC) is a method to propagate nuclear data (ND) uncertainties in transport codes, by using a large set of ND files, which covers the ND uncertainty. The transport code is run multiple times, each time with a unique ND file, and the result is a distribution of the investigated parameter, e.g. dose, where the width of the distribution is interpreted as the uncertainty due to ND. Until recently, this was computer intensive, but with a new development, fast TMC, more applications are accessible. The aim of this work is to test the fast TMC methodology on a dosimetry application and to propagate the 56Fe uncertainties on the predictions of the dose outside a proposed 14-MeV neutron facility. The uncertainty was found to be 4.2%. This can be considered small; however, this cannot be generalised to all dosimetry applications and so ND uncertainties should routinely be included in most dosimetry modelling.
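
    The TMC loop itself is simple to sketch (the exponential-attenuation "transport run" and the Gaussian cross-section spread below are illustrative stand-ins, not the 56Fe evaluation or transport code used in the paper): run the calculation once per random nuclear-data file and read the ND uncertainty off the spread of the results.

```python
import math, random, statistics

def total_monte_carlo(n_files=500, sigma0=0.2, rel_nd_unc=0.05,
                      thickness=10.0, seed=7):
    """Toy TMC loop: each 'nuclear data file' is a cross section drawn
    from a distribution representing its evaluated uncertainty, and each
    'transport run' is a simple exponential attenuation through a slab.
    The spread of the resulting doses is the ND-driven dose uncertainty."""
    rng = random.Random(seed)
    doses = []
    for _ in range(n_files):
        sigma = rng.gauss(sigma0, rel_nd_unc * sigma0)  # one random ND file
        doses.append(math.exp(-sigma * thickness))      # one transport run
    mean = statistics.fmean(doses)
    rel_unc = statistics.stdev(doses) / mean
    return mean, rel_unc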

  6. Exploring theory space with Monte Carlo reweighting

    DOE PAGES

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.

  7. Monte Carlo modeling and meteor showers

    NASA Technical Reports Server (NTRS)

    Kulikova, N. V.

    1987-01-01

    Prediction of short-lived increases in the cosmic dust influx, of the concentration in the lower thermosphere of atoms and ions of meteor origin, and of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest, and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well-known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.

  8. Chemical application of diffusion quantum Monte Carlo

    NASA Technical Reports Server (NTRS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1984-01-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, is discussed. The computational time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and from traditional computer architectures.

  9. abcpmc: Approximate Bayesian Computation for Population Monte-Carlo code

    NASA Astrophysics Data System (ADS)

    Akeret, Joel

    2015-04-01

    abcpmc is a Python Approximate Bayesian Computing (ABC) Population Monte Carlo (PMC) implementation based on Sequential Monte Carlo (SMC) with Particle Filtering techniques. It is extendable with k-nearest neighbour (KNN) or optimal local covariance matrix (OLCM) perturbation kernels and has built-in support for massively parallelized sampling on a cluster using MPI.

  10. QWalk: A quantum Monte Carlo program for electronic structure

    SciTech Connect

    Wagner, Lucas K.; Bajdich, Michal; Mitas, Lubos

    2009-05-20

    We describe QWalk, a new computational package capable of performing quantum Monte Carlo electronic structure calculations for molecules and solids with many electrons. We describe the structure of the program and its implementation of quantum Monte Carlo methods. It is open-source, licensed under the GPL, and available at the web site (http://www.qwalk.org).

  11. Economic Risk Analysis: Using Analytical and Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    O'Donnell, Brendan R.; Hickner, Michael A.; Barna, Bruce A.

    2002-01-01

    Describes the development and instructional use of a Microsoft Excel spreadsheet template that facilitates analytical and Monte Carlo risk analysis of investment decisions. Discusses a variety of risk assessment methods followed by applications of the analytical and Monte Carlo methods. Uses a case study to illustrate use of the spreadsheet tool…
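
    The Monte Carlo side of such a risk analysis is easy to sketch outside a spreadsheet (the cash-flow distribution, discount rate, and investment figures below are hypothetical): sample many scenarios of uncertain cash flows and summarize the distribution of net present value.

```python
import random, statistics

def npv_risk(n=20000, invest=1000.0, rate=0.10, years=5,
             cf_mean=300.0, cf_sd=80.0, seed=3):
    """Monte Carlo risk analysis of an investment: annual cash flows are
    uncertain (normally distributed, hypothetical parameters); sample
    many scenarios and report the mean net present value and the
    probability that the project loses money."""
    rng = random.Random(seed)
    npvs = []
    for _ in range(n):
        npv = -invest
        for t in range(1, years + 1):
            # one uncertain annual cash flow, discounted to present value
            npv += rng.gauss(cf_mean, cf_sd) / (1.0 + rate) ** t
        npvs.append(npv)
    p_loss = sum(v < 0.0 for v in npvs) / n
    return statistics.fmean(npvs), p_loss
```

    The analytical counterpart gives only the expected NPV; the Monte Carlo run additionally yields the full risk profile, e.g. the probability of a negative outcome.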

  12. Recent Developments in Quantum Monte Carlo: Methods and Applications

    NASA Astrophysics Data System (ADS)

    Aspuru-Guzik, Alan; Austin, Brian; Domin, Dominik; Galek, Peter T. A.; Handy, Nicholas; Prasad, Rajendra; Salomon-Ferrer, Romelia; Umezawa, Naoto; Lester, William A.

    2007-12-01

    The quantum Monte Carlo method in the diffusion Monte Carlo form has become recognized for its capability of describing the electronic structure of atomic, molecular and condensed matter systems to high accuracy. This talk will briefly outline the method with emphasis on recent developments connected with trial function construction, linear scaling, and applications to selected systems.

  13. Adjoint electron-photon transport Monte Carlo calculations with ITS

    SciTech Connect

    Lorence, L.J.; Kensek, R.P.; Halbleib, J.A.; Morel, J.E.

    1995-02-01

    A general adjoint coupled electron-photon Monte Carlo code for solving the Boltzmann-Fokker-Planck equation has recently been created. It is a modified version of ITS 3.0, a coupled electronphoton Monte Carlo code that has world-wide distribution. The applicability of the new code to radiation-interaction problems of the type found in space environments is demonstrated.

  14. Monte Carlo Test Assembly for Item Pool Analysis and Extension

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.

    2005-01-01

    A new test assembly algorithm based on a Monte Carlo random search is presented in this article. A major advantage of the Monte Carlo test assembly over other approaches (integer programming or enumerative heuristics) is that it performs a uniform sampling from the item pool, which provides every feasible item combination (test) with an equal…

  15. Quantum Monte Carlo using a Stochastic Poisson Solver

    SciTech Connect

    Das, D; Martin, R M; Kalos, M H

    2005-05-06

    Quantum Monte Carlo (QMC) is an extremely powerful method to treat many-body systems. Usually quantum Monte Carlo has been applied in cases where the interaction potential has a simple analytic form, like the 1/r Coulomb potential. However, in a complicated environment as in a semiconductor heterostructure, the evaluation of the interaction itself becomes a non-trivial problem. Obtaining the potential from any grid-based finite-difference method, for every walker and every step, is infeasible. We demonstrate an alternative approach of solving the Poisson equation by a classical Monte Carlo within the overall quantum Monte Carlo scheme. We have developed a modified ''Walk On Spheres'' algorithm using Green's function techniques, which can efficiently account for the interaction energy of walker configurations, typical of quantum Monte Carlo algorithms. This stochastically obtained potential can be easily incorporated within popular quantum Monte Carlo techniques like variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC). We demonstrate the validity of this method by studying a simple problem, the polarization of a helium atom in the electric field of an infinite capacitor.
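
    The textbook Walk-on-Spheres algorithm (the Laplace-equation variant in a disk; the authors' modified version adds Green's-function machinery for the Poisson equation) can be sketched in a few lines:

```python
import math, random

def walk_on_spheres(p, g, R=1.0, eps=1e-3, n_walks=4000, seed=11):
    """Estimate the solution of Laplace's equation at point p inside a
    disk of radius R with boundary data g, via Walk on Spheres: jump to
    a uniform point on the largest circle around the walker that stays
    inside the domain, repeat until within eps of the boundary, then
    score the boundary value at the nearest boundary point."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x, y = p
        while True:
            d = R - math.hypot(x, y)        # distance to the boundary
            if d < eps:
                break
            phi = 2.0 * math.pi * rng.random()
            x += d * math.cos(phi)
            y += d * math.sin(phi)
        r = math.hypot(x, y)                # project onto the boundary
        total += g(x * R / r, y * R / r)
    return total / n_walks
```

    For example, boundary data g(x, y) = x has the harmonic extension u(x, y) = x, so the estimate at (0.3, 0.2) converges to 0.3; no grid is ever built, which is the property exploited in the abstract above.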

  16. A Primer in Monte Carlo Integration Using Mathcad

    ERIC Educational Resources Information Center

    Hoyer, Chad E.; Kegerreis, Jeb S.

    2013-01-01

    The essentials of Monte Carlo integration are presented for use in an upper-level physical chemistry setting. A Mathcad document that aids in the dissemination and utilization of this information is described and is available in the Supporting Information. A brief outline of Monte Carlo integration is given, along with ideas and pedagogy for…
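
    The essentials fit in a few lines of code (a generic sketch, independent of the Mathcad document described above):

```python
import random

def mc_integrate(f, a, b, n=100000, seed=5):
    """Plain Monte Carlo integration: average f over n uniform samples
    in [a, b] and scale by the interval length.  The statistical error
    shrinks as 1/sqrt(n) regardless of dimension."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n
```

    For instance, `mc_integrate(lambda x: x * x, 0.0, 1.0)` converges to 1/3.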

  17. Accelerated GPU based SPECT Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. In addition, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  18. Monte Carlo modelling of TRIGA research reactor

    NASA Astrophysics Data System (ADS)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with essentially no physical approximation. Continuous energy cross-section data from recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rod worths as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.

  19. Fission Matrix Capability for MCNP Monte Carlo

    SciTech Connect

    Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.

    2012-09-05

    In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k_eff). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP [1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
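
    The power-iteration picture invoked here (fundamental eigenpair of the fission kernel, convergence rate set by the dominance ratio) can be sketched deterministically on a toy, two-region discretized fission matrix (the matrix values below are made up for illustration):

```python
def power_iteration(F, n_iter=200):
    """Power iteration on a toy spatially discretized fission matrix F:
    repeatedly apply F to a normalized source; the source converges to
    the fundamental eigenvector and the normalization to k_eff.  The
    error decays by the dominance ratio each iteration, exactly the
    behaviour that makes high-dominance-ratio Monte Carlo problems slow."""
    n = len(F)
    source = [1.0 / n] * n
    k = 1.0
    for _ in range(n_iter):
        new = [sum(F[i][j] * source[j] for j in range(n)) for i in range(n)]
        k = sum(new)                       # eigenvalue (k_eff) estimate
        source = [v / k for v in new]      # renormalized fission source
    return k, source
```

    For F = [[0.9, 0.3], [0.2, 0.8]] the eigenvalues are 1.1 and 0.6 (dominance ratio ~0.55), so the iteration converges to k_eff = 1.1 with fundamental source (0.6, 0.4).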

  20. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  1. Biopolymer structure simulation and optimization via fragment regrowth Monte Carlo.

    PubMed

    Zhang, Jinfeng; Kou, S C; Liu, Jun S

    2007-06-14

    An efficient exploration of the configuration space of a biopolymer is essential for its structure modeling and prediction. In this study, the authors propose a new Monte Carlo method, fragment regrowth via energy-guided sequential sampling (FRESS), which incorporates the idea of multigrid Monte Carlo into the framework of configurational-bias Monte Carlo and is suitable for chain polymer simulations. As a by-product, the authors also found a novel extension of the Metropolis Monte Carlo framework applicable to all Monte Carlo computations. They tested FRESS on hydrophobic-hydrophilic (HP) protein folding models in both two and three dimensions. For the benchmark sequences, FRESS not only found all the minimum energies obtained by previous studies with substantially less computation time but also found new lower energies for all the three-dimensional HP models with sequence length longer than 80 residues.

  2. An empirical formula based on Monte Carlo simulation for diffuse reflectance from turbid media

    NASA Astrophysics Data System (ADS)

    Gnanatheepam, Einstein; Aruna, Prakasa Rao; Ganesan, Singaravelu

    2016-03-01

    Diffuse reflectance spectroscopy has been widely used in diagnostic oncology and characterization of laser-irradiated tissue. However, an accurate and simple analytical equation for estimating diffuse reflectance from turbid media still does not exist. In this work, a diffuse reflectance lookup table for a range of tissue optical properties was generated using Monte Carlo simulation. Based on the generated Monte Carlo lookup table, an empirical formula for diffuse reflectance was developed using a surface fitting method. The variance between the Monte Carlo lookup table surface and the surface obtained from the proposed empirical formula is less than 1%. The proposed empirical formula may be used for modeling of diffuse reflectance from tissue.

  3. Monte Carlo characterization of skin doses in 6 MV transverse field MRI-linac systems: Effect of field size, surface orientation, magnetic field strength, and exit bolus

    SciTech Connect

    Oborn, B. M.; Metcalfe, P. E.; Butson, M. J.; Rosenfeld, A. B.

    2010-10-15

    Purpose: The main focus of this work is to continue investigations into the Monte Carlo predicted skin doses seen in MRI-guided radiotherapy. In particular, the authors aim to characterize the 70 μm skin doses over a larger range of magnetic field strength and x-ray field size than in the current literature. The effect of surface orientation on both the entry and exit sides is also studied. Finally, the use of exit bolus is also investigated for minimizing the negative effects of the electron return effect (ERE) on the exit skin dose. Methods: High resolution GEANT4 Monte Carlo simulations of a water phantom exposed to a 6 MV x-ray beam (Varian 2100C) have been performed. Transverse magnetic fields of strengths between 0 and 3 T have been applied to a 30×30×20 cm³ phantom. This phantom is also altered to have variable entry and exit surfaces with respect to the beam central axis and they range from -75 deg. to +75 deg. The exit bolus simulated is a 1 cm thick (water equivalent) slab located on the beam exit side. Results: On the entry side, significant skin doses at the beam central axis are reported for large positive surface angles and strong magnetic fields. However, over the entry surface angle range of -30 deg. to -60 deg., the entry skin dose is comparable to or less than the zero magnetic field skin dose, regardless of magnetic field strength and field size. On the exit side, moderate to high central axis skin dose increases are expected except at large positive surface angles. For exit bolus of 1 cm thickness, the central axis exit skin dose becomes an almost consistent value regardless of magnetic field strength or exit surface angle. This is due to the almost complete absorption of the ERE electrons by the bolus. Conclusions: There is an ideal entry angle range of -30 deg. to -60 deg. where entry skin dose is comparable to or less than the zero magnetic field skin dose. Other than this, the entry skin dose increases are significant, especially at

  4. Geochemical Characterization Using Geophysical Data and Markov Chain Monte Carlo Methods: A Case Study at the South Oyster Bacterial Transport Site in Virginia

    SciTech Connect

    Chen, Jinsong; Hubbard, Susan; Rubin, Yoram; Murray, Christopher J.; Roden, Eric E.; Majer, Ernest L.

    2004-12-22

    The paper demonstrates the use of ground-penetrating radar (GPR) tomographic data for estimating extractable Fe(II) and Fe(III) concentrations using a Markov chain Monte Carlo (MCMC) approach, based on data collected at the DOE South Oyster Bacterial Transport Site in Virginia. Analysis of multidimensional data including physical, geophysical, geochemical, and hydrogeological measurements collected at the site shows that GPR attenuation and lithofacies are most informative for the estimation. A statistical model is developed for integrating the GPR attenuation and lithofacies data. In the model, lithofacies is considered as a spatially correlated random variable and petrophysical models for linking GPR attenuation to geochemical parameters were derived from data at and near boreholes. Extractable Fe(II) and Fe(III) concentrations at each pixel between boreholes are estimated by conditioning to the co-located GPR data and the lithofacies measurements along boreholes through spatial correlation. Cross-validation results show that geophysical data, constrained by lithofacies, provided information about extractable Fe(II) and Fe(III) concentration in a minimally invasive manner and with a resolution unparalleled by other geochemical characterization methods. The developed model is effective and flexible, and should be applicable for estimating other geochemical parameters at other sites.
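
    The MCMC engine behind such an estimation scheme is, at its core, the Metropolis sampler; a minimal sketch follows (the one-dimensional Gaussian target stands in for the paper's far richer geophysical-geochemical posterior):

```python
import math, random

def metropolis(log_post, x0=0.0, step=1.0, n=20000, burn=2000, seed=17):
    """Minimal random-walk Metropolis sampler: propose a Gaussian step
    and accept with probability min(1, posterior ratio).  The retained
    samples are (correlated) draws from the target distribution, from
    which posterior means and uncertainties can be read off."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for i in range(n):
        x_prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(x_prop)
        if lp_prop - lp >= 0.0 or rng.random() < math.exp(lp_prop - lp):
            x, lp = x_prop, lp_prop          # accept the proposal
        if i >= burn:                         # discard burn-in
            samples.append(x)
    return samples
```

    Only an unnormalized log-posterior is needed, which is why MCMC suits models (like the one above) whose normalizing constants are intractable.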

  5. Application of Direct Simulation Monte Carlo to Satellite Contamination Studies

    NASA Technical Reports Server (NTRS)

    Rault, Didier F. G.; Woronwicz, Michael S.

    1995-01-01

    A novel method is presented to estimate contaminant levels around spacecraft and satellites of arbitrarily complex geometry. The method uses a three-dimensional direct simulation Monte Carlo algorithm to characterize the contaminant cloud surrounding the space platform, and a computer-assisted design preprocessor to define the space-platform geometry. The method is applied to the Upper Atmosphere Research Satellite to estimate the contaminant flux incident on the optics of the halogen occultation experiment (HALOE) telescope. Results are presented in terms of contaminant cloud structure, molecular velocity distribution at HALOE aperture, and code performance.

  6. Recent advances and future prospects for Monte Carlo

    SciTech Connect

    Brown, Forrest B

    2010-01-01

    The history of Monte Carlo methods is closely linked to that of computers: The first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.

  7. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    SciTech Connect

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

  8. The effect of load imbalances on the performance of Monte Carlo algorithms in LWR analysis

    SciTech Connect

    Siegel, A.R.; Smith, K.; Romano, P.K.; Forget, B.; Felker, K.

    2013-02-15

    A model is developed to predict the impact of particle load imbalances on the performance of domain-decomposed Monte Carlo neutron transport algorithms. Expressions for upper bound performance “penalties” are derived in terms of simple machine characteristics, material characterizations and initial particle distributions. The hope is that these relations can be used to evaluate tradeoffs among different memory decomposition strategies in next generation Monte Carlo codes, and perhaps as a metric for triggering particle redistribution in production codes.

  9. Coupled Electron-Ion Monte Carlo calculations of atomic hydrogen

    NASA Astrophysics Data System (ADS)

    Holzmann, Markus; Pierleoni, Carlo; Ceperley, David M.

    2005-07-01

    We present a new Monte Carlo method which couples Path Integral for finite temperature protons with Quantum Monte Carlo for ground state electrons, and we apply it to metallic hydrogen for pressures beyond molecular dissociation. This method fills the gap between high temperature electron-proton Path Integral and ground state Diffusion Monte Carlo methods. Our data exhibit more structure and higher melting temperatures of the proton crystal than Car-Parrinello Molecular Dynamics results using LDA. We further discuss the quantum motion of the protons and the zero temperature limit.

  10. Variance reduction in Monte Carlo analysis of rarefied gas diffusion.

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.

    1972-01-01

    The problem of rarefied diffusion between parallel walls is solved using the Monte Carlo method. The diffusing molecules are evaporated or emitted from one of the two parallel walls and diffuse through another molecular species. The Monte Carlo analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.
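
    The biasing scheme described above — alter the transition probabilities, then reweight the payoff so the expected value is unchanged — is importance sampling. A minimal sketch on a stand-in problem (estimating a small Gaussian tail probability, not the diffusion problem itself):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, a = 100_000, 4.0

# Analog (unbiased, high-variance) estimate of P(X > a), X ~ N(0, 1):
x = rng.standard_normal(n)
naive = float(np.mean(x > a))

# Biased walk: sample from N(a, 1) so the rare region is visited often, and
# reweight each sample by the likelihood ratio phi(y)/phi(y-a) = exp(a^2/2 - a*y)
# so the expected payoff is unchanged while its variance shrinks.
y = rng.standard_normal(n) + a
w = np.exp(0.5 * a * a - a * y)
is_est = float(np.mean((y > a) * w))

exact = 0.5 * math.erfc(a / math.sqrt(2.0))
```

With these parameters the analog estimator sees only a handful of tail hits, while the reweighted estimator tracks the exact value to well under a percent.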

  11. Diffusion Monte Carlo in internal coordinates.

    PubMed

    Petit, Andrew S; McCoy, Anne B

    2013-08-15

    An internal coordinate extension of diffusion Monte Carlo (DMC) is described as a first step toward a generalized reduced-dimensional DMC approach. The method places no constraints on the choice of internal coordinates other than the requirement that they all be independent. Using H3+ and its isotopologues as model systems, the methodology is shown to be capable of successfully describing the ground state properties of molecules that undergo large amplitude, zero-point vibrational motions. Combining the approach developed here with the fixed-node approximation allows vibrationally excited states to be treated. Analysis of the ground state probability distribution is shown to provide important insights into the set of internal coordinates that are less strongly coupled and therefore more suitable for use as the nodal coordinates for the fixed-node DMC calculations. In particular, the curvilinear normal mode coordinates are found to provide reasonable nodal surfaces for the fundamentals of H2D+ and D2H+ despite both molecules being highly fluxional.
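
    As a flavor of plain (Cartesian, not internal-coordinate) DMC, the following sketch propagates an ensemble of walkers for a 1-D harmonic oscillator (hbar = m = omega = 1): diffuse, then branch with weight exp(-dt(V - E_ref)), with E_ref adjusted to hold the population near its target. The mixed estimate of the zero-point energy should come out near the exact value of 0.5:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
dt, n_target = 0.01, 2000
walkers = rng.standard_normal(n_target)   # initial walker positions
e_ref = 0.5                               # reference (trial) energy
v_means = []

for step in range(3000):
    # Diffusion: free-particle Gaussian displacement with variance dt.
    walkers = walkers + math.sqrt(dt) * rng.standard_normal(walkers.size)
    # Branching: copy/kill walkers with weight exp(-dt * (V - E_ref)).
    v = 0.5 * walkers ** 2                # harmonic potential V(x) = x^2 / 2
    w = np.exp(-dt * (v - e_ref))
    n_copies = (w + rng.random(walkers.size)).astype(int)
    mean_v = float(v.mean())
    walkers = np.repeat(walkers, n_copies)
    # Population feedback keeps the walker count near n_target (unbiased
    # only in the large-population, small-time-step limit).
    e_ref = mean_v + math.log(n_target / max(walkers.size, 1)) / dt
    if step >= 1000:
        v_means.append(mean_v)

e0 = float(np.mean(v_means))  # mixed estimator of the ground-state energy
```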

  12. Biofilm growth: a lattice Monte Carlo model

    NASA Astrophysics Data System (ADS)

    Tao, Yuguo; Slater, Gary

    2011-03-01

    Biofilms are complex colonies of bacteria that grow in contact with a wall, often in the presence of a flow. In the current work, biofilm growth is investigated using a new two-dimensional lattice Monte Carlo algorithm based on the Bond-Fluctuation Algorithm (BFA). One of the distinguishing characteristics of biofilms, the synthesis and physical properties of the extracellular polymeric substance (EPS) in which the cells are embedded, is explicitly taken into account. Cells are modelled as autonomous closed loops with well-defined mechanical and thermodynamic properties, while the EPS is modelled as flexible polymeric chains. This BFA model allows us to add biologically relevant features such as: the uptake of nutrients; cell growth, division and death; the production of EPS; cell maintenance and hibernation; the generation of waste and the impact of toxic molecules; cell mutation and evolution; cell motility. By tuning the structural, interactional and morphologic parameters of the model, the cell shapes as well as the growth and maturation of various types of biofilm colonies can be controlled.

  13. Monte Carlo Approach To Gomos Ozone Retrieval

    NASA Astrophysics Data System (ADS)

    Tamminen, J.; Kyrölä, E.

    Satellite measurements of the atmosphere are indirect, so the data processing requires inverse methods. In this paper we apply the Bayesian approach and use the Markov chain Monte Carlo (MCMC) method to solve the retrieval problem for GOMOS measurements. With the MCMC method we are able to compute the true nonlinear posterior distribution of the solution without linearizing the problem. The MCMC technique can easily be implemented in a great variety of retrieval problems, including nonlinear problems with various prior or noise structures. Therefore, MCMC methods, though somewhat slow for operational processing of large amounts of data, provide excellent tools for development and validation purposes. Moreover, when the signal-to-noise ratio is poor, the MCMC methods can be used to find even the faintest fingerprints of the absorbers in the signal. MCMC methods, and especially reversible jump MCMC, can also be used in problems where the dimension of the model space is unknown. We also discuss the possibility of using the MCMC approach in a model selection problem, namely, choosing the model for the wavelength dependence of the aerosol cross sections and studying the optimal set of constituents to be retrieved.
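
    The core of any such retrieval is a Metropolis-Hastings loop over the posterior. A minimal sketch on a stand-in inverse problem (a Gaussian mean with a conjugate prior, so the answer is known in closed form; none of the GOMOS specifics are modeled):

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# Stand-in "measurements": 20 noisy observations of an unknown scalar theta,
# with a N(0, 10) prior -- chosen so the posterior is known in closed form.
y = rng.normal(1.5, 1.0, size=20)

def log_post(t):
    return -0.5 * t * t / 10.0 - 0.5 * np.sum((y - t) ** 2)

theta, lp = 0.0, log_post(0.0)
samples = []
for i in range(20_000):
    prop = theta + 0.5 * rng.standard_normal()       # random-walk proposal
    lp_prop = log_post(prop)
    # Metropolis acceptance; 1 - random() lies in (0, 1], so log() is safe.
    if math.log(1.0 - rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 5_000:                                   # discard burn-in
        samples.append(theta)

post_prec = 1.0 / 10.0 + len(y)       # conjugate posterior precision
post_mean = float(np.sum(y)) / post_prec
mcmc_mean = float(np.mean(samples))
```

The retained chain reproduces the analytic posterior mean; for a nonlinear forward model the same loop works unchanged, which is the point of the method.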

  14. Monte Carlo Simulation of River Meander Modelling

    NASA Astrophysics Data System (ADS)

    Posner, A. J.; Duan, J. G.

    2010-12-01

    This study first compares the first-order analytical solutions for the flow field by Ikeda et al. (1981) and Johanesson and Parker (1989b). Ikeda et al.'s (1981) linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with the physical properties of the bank (e.g. cohesiveness, stratigraphy, vegetation density). The developed model was used to predict the evolution of meandering planforms, and the modeling results were analyzed and compared to the observed data. Because the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures are formulated to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution. Due to the random nature of the bank erosion coefficient, the meandering planform evolution is a stochastic process that can only be accurately predicted by a stochastic model. The resulting approach couples the quasi-2D Ikeda (1989) flow solution with Monte Carlo simulation of the bank erosion coefficient.

  15. Atomistic Monte Carlo Simulation of Lipid Membranes

    PubMed Central

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules of which analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314

  16. Monte Carlo simulation of a quantized universe.

    NASA Astrophysics Data System (ADS)

    Berger, Beverly K.

    1988-08-01

    A Monte Carlo simulation method which yields ground-state wave functions for multielectron atoms is applied to quantized cosmological models. In quantum mechanics, the propagator for the Schrödinger equation reduces to the absolute value squared of the ground-state wave function in the limit of infinite Euclidean time. The wave function of the universe, as the solution to the Wheeler-DeWitt equation, may be regarded as the zero-energy mode of a Schrödinger equation in coordinate time. The simulation evaluates the path-integral formulation of the propagator by constructing a large number of paths and computing their contribution to the path integral, using the Metropolis algorithm to drive the paths toward a global minimum in the path energy. The result agrees with a solution to the Wheeler-DeWitt equation which has the characteristics of a nodeless ground-state wave function. Oscillatory behavior cannot be reproduced, although the simulation results may be physically reasonable. The primary advantage of the simulations is that they may easily be extended to cosmologies with many degrees of freedom. Examples with one, two, and three degrees of freedom (d.f.) are presented.
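
    The path-construction step can be sketched for the simplest possible case, a 1-D harmonic oscillator on a discretized closed Euclidean path, where local Metropolis updates drive the path toward (and sample around) the minimum of the discretized action; for a long path, the sampled <x^2> approaches the ground-state value of 0.5:

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n, dt = 64, 0.1            # 64 imaginary-time slices, total time n*dt = 6.4
x = np.zeros(n)            # the discretized closed (periodic) path

def action_change(x, i, new):
    """Change in the discretized Euclidean action S = sum[(dx)^2/(2 dt) + dt V]
    when slice i moves from x[i] to `new`; V(x) = x^2 / 2."""
    ip, im = (i + 1) % n, (i - 1) % n
    old = x[i]
    kin = ((x[ip] - new) ** 2 + (new - x[im]) ** 2
           - (x[ip] - old) ** 2 - (old - x[im]) ** 2) / (2.0 * dt)
    pot = dt * (new ** 2 - old ** 2) / 2.0
    return kin + pot

x2_samples = []
for sweep in range(6000):
    for i in range(n):
        new = x[i] + rng.uniform(-0.5, 0.5)   # local Metropolis move
        d = action_change(x, i, new)
        if d <= 0.0 or rng.random() < math.exp(-d):
            x[i] = new
    if sweep >= 1500:                          # discard thermalization
        x2_samples.append(np.mean(x * x))

x2 = float(np.mean(x2_samples))   # ~0.5 for a long path (ground state)
```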

  17. Monte Carlo Production Management at CMS

    NASA Astrophysics Data System (ADS)

    Boudoul, G.; Franzoni, G.; Norkus, A.; Pol, A.; Srimanobhas, P.; Vlimant, J.-R.

    2015-12-01

    The analysis of LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of event production, to ensure the bookkeeping of all the processing requests placed by the physics analysis groups, and to interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) was developed and put in production in 2013. McM is based on recent server infrastructure technology (CherryPy + AngularJS) and relies on a CouchDB database back-end. This contribution covers the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of McM's functionalities, and the extension of its capability to monitor the status and advancement of event production.

  18. Markov Chain Monte Carlo and Irreversibility

    NASA Astrophysics Data System (ADS)

    Ottobre, Michela

    2016-06-01

    Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and we discuss some analytical methods to approach the study of continuous-time irreversible diffusions; ii) most of the rigorous results on irreversible diffusions are available for continuous-time processes; however, for computational purposes one needs to discretize such dynamics. It is well known that the resulting discretized chain will not, in general, retain all the good properties of the process that it is obtained from. In particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible. Therefore iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of a nonreversible dynamics.

  19. Realistic Monte Carlo Simulation of PEN Apparatus

    NASA Astrophysics Data System (ADS)

    Glaser, Charles; PEN Collaboration

    2015-04-01

    The PEN collaboration undertook to measure the π+ → e+νe(γ) branching ratio with a relative uncertainty of 5×10-4 or less at the Paul Scherrer Institute. This observable is highly susceptible to small non-(V-A) contributions, i.e., non-Standard-Model physics. The detector system included a beam counter, a mini TPC for beam tracking, an active degrader and stopping target, MWPCs and a plastic scintillator hodoscope for particle tracking and identification, and a spherical CsI EM calorimeter. GEANT4 Monte Carlo simulation is integral to the analysis, as it is used to generate fully realistic events for all pion and muon decay channels. The simulated events are constructed so as to match the pion beam profiles, divergence, and momentum distribution. Ensuring the placement of individual detector components at the sub-millimeter level, together with proper construction of active target waveforms and associated noise, enables us to more fully understand temporal and geometrical acceptances as well as energy, time, and positional resolutions and calibrations in the detector system. This ultimately leads to reliable discrimination of background events, thereby improving cut-based or multivariate branching ratio extraction. Work supported by NSF Grants PHY-0970013, 1307328, and others.

  20. Commensurabilities between ETNOs: a Monte Carlo survey

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-07-01

    Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nine hypothesis; in particular, a number of objects may be trapped in the 5:3 and 3:1 mean motion resonances with a putative Planet Nine with semimajor axis ~700 au.
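
    The uncertainty-propagation step of such a survey is easy to sketch: draw semimajor axes from their quoted uncertainties, convert each draw to a period ratio via Kepler's third law (P2/P1 = (a2/a1)^(3/2)), and count how often the ratio falls near a small-integer commensurability. The two objects below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical pair of ETNO semimajor axes (au) with 1-sigma uncertainties.
a1, s1 = 150.0, 5.0
a2, s2 = 315.0, 12.0

n = 100_000
# Kepler's third law: period ratio = (a2/a1)^(3/2) for each Monte Carlo draw.
r = (rng.normal(a2, s2, n) / rng.normal(a1, s1, n)) ** 1.5

# Fraction of draws consistent with a 3:1 period commensurability
# (tolerance of 0.05 chosen arbitrarily for the illustration).
frac_3to1 = float(np.mean(np.abs(r - 3.0) < 0.05))
```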

  1. Finding Planet Nine: a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-06-01

    Planet Nine is a hypothetical planet located well beyond Pluto that has been proposed in an attempt to explain the observed clustering in physical space of the perihelia of six extreme trans-Neptunian objects or ETNOs. The predicted approximate values of its orbital elements include a semimajor axis of 700 au, an eccentricity of 0.6, an inclination of 30°, and an argument of perihelion of 150°. Searching for this putative planet is already under way. Here, we use a Monte Carlo approach to create a synthetic population of Planet Nine orbits and study its visibility statistically in terms of various parameters and focusing on the aphelion configuration. Our analysis shows that, if Planet Nine exists and is at aphelion, it might be found projected against one out of the four specific areas in the sky. Each area is linked to a particular value of the longitude of the ascending node and two of them are compatible with an apsidal anti-alignment scenario. In addition and after studying the current statistics of ETNOs, a cautionary note on the robustness of the perihelia clustering is presented.

  2. Bulk characterization in a Monte Carlo particle-deposition model with a novel adherence-potential barrier

    NASA Astrophysics Data System (ADS)

    Galindo, Jose Luis; Huertas, Rafael; Carrasco-Sanz, Ana; Lapresta, Alejandro; Galindo, Jorge; Vasco, Enrique

    2016-07-01

    The aim of this work is to analyze in more depth a model of particle deposition by characterizing parameters such as profile density, bonds, perimeter, and substrate coverage, all of which are involved in describing the deposit as a bulk material. This study is thus an extension of previous work on non-equilibrium interface-growth systems in which two interface-growth models, the Standard Adherence Rule Model and the Potential Adherence Rule Model, were characterized. Here, bulk characterization is implemented for the complete range of Peclet numbers. The zones of the density profile (Near-Wall, Plateau, and Active-Growth) are studied by proposing a fit for each of them and determining the full density profile as a function of the Peclet number. The density profiles are compared with those of other one- and two-stage models. Furthermore, an algorithm is proposed to calculate the number of bonds of the particles and the perimeter that a substrate forms over time. Finally, to analyze the coating, its temporal behavior is fitted to an exponential function, and the results are compared with those found for Random Sequential Adsorption models, which describe systems such as colloidal particles on solid substrates, adsorption of proteins at mineral surfaces, or oxidation of one-dimensional polymer chains.
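
    For reference, the Random Sequential Adsorption process invoked in the comparison is simple to sketch in one dimension (the "car parking" problem): unit-length particles land at uniform random positions and stick only if they overlap no earlier particle; for long lines and long times the coverage approaches Renyi's jamming limit of about 0.7476. Line length and attempt count below are arbitrary:

```python
import numpy as np

def rsa_1d(line_length, attempts, seed=0):
    """1-D random sequential adsorption of unit-length segments; returns the
    fraction of the line covered after the given number of attempts."""
    rng = np.random.default_rng(seed)
    placed = []  # left endpoints of accepted segments
    for _ in range(attempts):
        x = rng.random() * (line_length - 1.0)
        # Accept only if the new segment overlaps no previously placed one.
        if all(abs(x - p) >= 1.0 for p in placed):
            placed.append(x)
    return len(placed) / line_length

coverage = rsa_1d(200.0, 20_000, seed=6)  # approaches ~0.7476 at jamming
```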

  3. Properties of reactive oxygen species by quantum Monte Carlo

    SciTech Connect

    Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo

    2014-07-07

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully optimised basis sets and with a computational cost which scales as N^3-N^4, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species from first principles.

  4. Properties of reactive oxygen species by quantum Monte Carlo.

    PubMed

    Zen, Andrea; Trout, Bernhardt L; Guidoni, Leonardo

    2014-07-01

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully optimised basis sets and with a computational cost which scales as N^3-N^4, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species from first principles. PMID:25005287

  5. Properties of reactive oxygen species by quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo

    2014-07-01

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully optimised basis sets and with a computational cost which scales as N^3-N^4, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species from first principles.

  6. Monte Carlo techniques for analyzing deep penetration problems

    SciTech Connect

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs.

  7. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    SciTech Connect

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.

  8. OBJECT KINETIC MONTE CARLO SIMULATIONS OF CASCADE ANNEALING IN TUNGSTEN

    SciTech Connect

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.

    2014-03-31

    The objective of this work is to study the annealing of primary cascade damage created by primary knock-on atoms (PKAs) of various energies, at various temperatures in bulk tungsten using the object kinetic Monte Carlo (OKMC) method.

  9. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Refined analysis and modeling of nuclear reactors can overload the memory of a single processor core. One method to solve this problem is 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  10. COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT

    SciTech Connect

    W. R. MARTIN; F. B. BROWN

    2001-03-01

    Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo method (IMC) developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an "exact" method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.

  11. Enhancements in Continuous-Energy Monte Carlo Capabilities in SCALE

    SciTech Connect

    Bekar, Kursat B; Celik, Cihangir; Wiarda, Dorothea; Peplow, Douglas E.; Rearden, Bradley T; Dunn, Michael E

    2013-01-01

    Monte Carlo tools in SCALE are commonly used in criticality safety calculations as well as sensitivity and uncertainty analysis, depletion, and criticality alarm system analyses. Recent improvements in the continuous-energy data generated by the AMPX code system and significant advancements in the continuous-energy treatment in the KENO Monte Carlo eigenvalue codes facilitate the use of SCALE Monte Carlo codes to model geometrically complex systems with enhanced solution fidelity. The addition of continuous-energy treatment to the SCALE Monaco code, which can be used with automatic variance reduction in the hybrid MAVRIC sequence, provides significant enhancements, especially for criticality alarm system modeling. This paper describes some of the advancements in continuous-energy Monte Carlo codes within the SCALE code system.

  12. Monte Carlo Hybrid Applied to Binary Stochastic Mixtures

    2008-08-11

    The purpose of this set of codes is to use an inexpensive, approximate deterministic flux distribution to generate weight windows, which are then used to bound particle weights for the Monte Carlo code run. The process is not automated; the user must run the deterministic code and use the output file as a command-line argument for the Monte Carlo code. Two sets of text input files are included as test problems/templates.
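
    The splitting/roulette step that such weight windows drive can be sketched as follows (an illustrative toy, not the code in this package): particles above the window are split into within-window copies, particles below it play Russian roulette, and the expected total weight is preserved either way:

```python
import math
import random

def apply_weight_window(weights, w_lo, w_hi, rng):
    """Split heavy particles, roulette light ones; expected weight preserved."""
    survivor_w = 0.5 * (w_lo + w_hi)   # weight given to roulette survivors
    out = []
    for w in weights:
        if w > w_hi:
            n = math.ceil(w / w_hi)    # split into n equal-weight copies
            out.extend([w / n] * n)
        elif w < w_lo:
            if rng.random() < w / survivor_w:
                out.append(survivor_w)  # survives roulette with boosted weight
            # else: particle killed (its expected weight is carried by survivors)
        else:
            out.append(w)               # already inside the window
    return out

rng = random.Random(7)
new_pop = apply_weight_window([0.01, 0.2, 1.0, 7.3], w_lo=0.5, w_hi=2.0, rng=rng)
```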

  13. A Particle Population Control Method for Dynamic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Sweezy, Jeremy; Nolen, Steve; Adams, Terry; Zukaitis, Anthony

    2014-06-01

    A general particle population control method has been derived from splitting and Russian Roulette for dynamic Monte Carlo particle transport. A well-known particle population control method, known as the particle population comb, has been shown to be a special case of this general method. This general method has been incorporated in Los Alamos National Laboratory's Monte Carlo Application Toolkit (MCATK), and examples of its use are shown for both super-critical and sub-critical systems.
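
    A particle population comb can be sketched as systematic resampling: lay n_keep evenly spaced "teeth" (with a single random offset) across the cumulative particle weight, keep the particle under each tooth, and give every survivor an equal share of the total weight. An illustrative sketch, not MCATK's implementation:

```python
import numpy as np

def comb_resample(weights, n_keep, rng):
    """Reduce a weighted particle population to exactly n_keep particles.

    Teeth at (u + k) * W / n_keep, k = 0..n_keep-1, with one random offset u,
    select indices with multiplicity proportional to weight; survivors share
    the total weight W equally, so W is preserved exactly."""
    w = np.asarray(weights, dtype=float)
    total = w.sum()
    teeth = (rng.random() + np.arange(n_keep)) * (total / n_keep)
    idx = np.searchsorted(np.cumsum(w), teeth)   # particle under each tooth
    return idx, np.full(n_keep, total / n_keep)

rng = np.random.default_rng(9)
idx, new_w = comb_resample([0.1, 3.0, 0.5, 0.4], n_keep=4, rng=rng)
```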

  14. Shift: A Massively Parallel Monte Carlo Radiation Transport Package

    SciTech Connect

    Pandya, Tara M; Johnson, Seth R; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P

    2015-01-01

    This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.

  15. Monte Carlo methods and applications in nuclear physics

    SciTech Connect

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

  16. Development of Monte Carlo Capability for Orion Parachute Simulations

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    Parachute test programs employ Monte Carlo simulation techniques to plan testing and make critical decisions related to parachute loads, rate-of-descent, or other parameters. This paper describes the development and use of a MATLAB-based Monte Carlo tool for three parachute drop test simulations currently used by NASA. The Decelerator System Simulation (DSS) is a legacy 6 Degree-of-Freedom (DOF) simulation used to predict parachute loads and descent trajectories. The Decelerator System Simulation Application (DSSA) is a 6-DOF simulation that is well suited for modeling aircraft extraction and descent of pallet-like test vehicles. The Drop Test Vehicle Simulation (DTVSim) is a 2-DOF trajectory simulation that is convenient for quick turn-around analysis tasks. These three tools have significantly different software architectures and do not share common input files or output data structures. Separate Monte Carlo tools were initially developed for each simulation. A recently-developed simulation output structure enables the use of the more sophisticated DSSA Monte Carlo tool with any of the core-simulations. The task of configuring the inputs for the nominal simulation is left to the existing tools. Once the nominal simulation is configured, the Monte Carlo tool perturbs the input set according to dispersion rules created by the analyst. These rules define the statistical distribution and parameters to be applied to each simulation input. Individual dispersed parameters are combined to create a dispersed set of simulation inputs. The Monte Carlo tool repeatedly executes the core-simulation with the dispersed inputs and stores the results for analysis. The analyst may define conditions on one or more output parameters at which to collect data slices. The tool provides a versatile interface for reviewing output of large Monte Carlo data sets while preserving the capability for detailed examination of individual dispersed trajectories. 
The Monte Carlo tool described in
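The dispersion scheme this record describes — rules defining a statistical distribution per input, applied to perturb the nominal input set before each core-simulation run — might look like the following sketch. The rule names, distributions, and parameter values here are invented for illustration and are not taken from the NASA tools.

```python
import random

# Hypothetical dispersion rules: each rule names a distribution and its
# parameters for one simulation input (values are illustrative only).
RULES = {
    "drogue_cd":  ("normal",  0.62,   0.03),    # mean, standard deviation
    "mass_kg":    ("uniform", 9300.0, 9500.0),  # lower, upper bound
    "wind_scale": ("normal",  1.00,   0.10),
}

def disperse(nominal, rules, rng):
    """Return one dispersed copy of the nominal input set."""
    run = dict(nominal)
    for key, (kind, a, b) in rules.items():
        run[key] = rng.gauss(a, b) if kind == "normal" else rng.uniform(a, b)
    return run

def monte_carlo(nominal, rules, sim, n_runs, seed=0):
    """Repeatedly execute the core simulation with dispersed inputs and
    collect the results for later slicing and analysis."""
    rng = random.Random(seed)
    return [sim(disperse(nominal, rules, rng)) for _ in range(n_runs)]
```

Keeping the nominal configuration separate from the dispersion rules mirrors the division of labor described above: the existing tools configure the nominal case, and the Monte Carlo layer only perturbs and re-runs it.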

  17. de Finetti Priors using Markov chain Monte Carlo computations

    PubMed Central

    Bacallado, Sergio; Diaconis, Persi; Holmes, Susan

    2015-01-01

    Recent advances in Monte Carlo methods allow us to revisit work by de Finetti who suggested the use of approximate exchangeability in the analyses of contingency tables. This paper gives examples of computational implementations using Metropolis Hastings, Langevin and Hamiltonian Monte Carlo to compute posterior distributions for test statistics relevant for testing independence, reversible or three way models for discrete exponential families using polynomial priors and Gröbner bases. PMID:26412947

  18. DPEMC: A Monte Carlo for double diffraction

    NASA Astrophysics Data System (ADS)

    Boonekamp, M.; Kúcs, T.

    2005-05-01

    We extend the POMWIG Monte Carlo generator developed by B. Cox and J. Forshaw, to include new models of central production through inclusive and exclusive double Pomeron exchange in proton-proton collisions. Double photon exchange processes are described as well, both in proton-proton and heavy-ion collisions. In all contexts, various models have been implemented, allowing for comparisons and uncertainty evaluation and enabling detailed experimental simulations. Program summaryTitle of the program:DPEMC, version 2.4 Catalogue identifier: ADVF Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVF Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: any computer with the FORTRAN 77 compiler under the UNIX or Linux operating systems Operating system: UNIX; Linux Programming language used: FORTRAN 77 High speed storage required:<25 MB No. of lines in distributed program, including test data, etc.: 71 399 No. of bytes in distributed program, including test data, etc.: 639 950 Distribution format: tar.gz Nature of the physical problem: Proton diffraction at hadron colliders can manifest itself in many forms, and a variety of models exist that attempt to describe it [A. Bialas, P.V. Landshoff, Phys. Lett. B 256 (1991) 540; A. Bialas, W. Szeremeta, Phys. Lett. B 296 (1992) 191; A. Bialas, R.A. Janik, Z. Phys. C 62 (1994) 487; M. Boonekamp, R. Peschanski, C. Royon, Phys. Rev. Lett. 87 (2001) 251806; Nucl. Phys. B 669 (2003) 277; R. Enberg, G. Ingelman, A. Kissavos, N. Timneanu, Phys. Rev. Lett. 89 (2002) 081801; R. Enberg, G. Ingelman, L. Motyka, Phys. Lett. B 524 (2002) 273; R. Enberg, G. Ingelman, N. Timneanu, Phys. Rev. D 67 (2003) 011301; B. Cox, J. Forshaw, Comput. Phys. Comm. 144 (2002) 104; B. Cox, J. Forshaw, B. Heinemann, Phys. Lett. B 540 (2002) 26; V. Khoze, A. Martin, M. Ryskin, Phys. Lett. B 401 (1997) 330; Eur. Phys. J. C 14 (2000) 525; Eur. Phys. J. C 19 (2001) 477; Erratum, Eur. Phys. J. 
C 20 (2001) 599; Eur

  19. Monte Carlo study of microdosimetric diamond detectors

    NASA Astrophysics Data System (ADS)

    Solevi, Paola; Magrin, Giulio; Moro, Davide; Mayer, Ramona

    2015-09-01

    Ion-beam therapy provides a high dose conformity and increased radiobiological effectiveness with respect to conventional radiation-therapy. Strict constraints on the maximum uncertainty on the biological weighted dose and consequently on the biological weighting factor require the determination of the radiation quality, defined as the types and energy spectra of the radiation at a specific point. However the experimental determination of radiation quality, in particular for an internal target, is not simple and the features of ion interactions and treatment delivery require dedicated and optimized detectors. Recently chemical vapor deposition (CVD) diamond detectors have been suggested as ion-beam therapy microdosimeters. Diamond detectors can be manufactured with small cross sections and thin shapes, ideal to cope with the high fluence rate. However the sensitive volume of solid state detectors significantly deviates from conventional microdosimeters, with a diameter that can be up to 1000 times the height. This difference requires a redefinition of the concept of sensitive thickness and a deep study of the secondary to primary radiation, of the wall effects and of the impact of the orientation of the detector with respect to the radiation field. The present work intends to study through Monte Carlo simulations the impact of the detector geometry on the determination of radiation quality quantities, in particular on the relative contribution of primary and secondary radiation. The dependence of microdosimetric quantities such as the unrestricted linear energy L and the lineal energy y are investigated for different detector cross sections, by varying the particle type (carbon ions and protons) and its energy.

  20. Monte Carlo study of microdosimetric diamond detectors.

    PubMed

    Solevi, Paola; Magrin, Giulio; Moro, Davide; Mayer, Ramona

    2015-09-21

    Ion-beam therapy provides a high dose conformity and increased radiobiological effectiveness with respect to conventional radiation-therapy. Strict constraints on the maximum uncertainty on the biological weighted dose and consequently on the biological weighting factor require the determination of the radiation quality, defined as the types and energy spectra of the radiation at a specific point. However the experimental determination of radiation quality, in particular for an internal target, is not simple and the features of ion interactions and treatment delivery require dedicated and optimized detectors. Recently chemical vapor deposition (CVD) diamond detectors have been suggested as ion-beam therapy microdosimeters. Diamond detectors can be manufactured with small cross sections and thin shapes, ideal to cope with the high fluence rate. However the sensitive volume of solid state detectors significantly deviates from conventional microdosimeters, with a diameter that can be up to 1000 times the height. This difference requires a redefinition of the concept of sensitive thickness and a deep study of the secondary to primary radiation, of the wall effects and of the impact of the orientation of the detector with respect to the radiation field. The present work intends to study through Monte Carlo simulations the impact of the detector geometry on the determination of radiation quality quantities, in particular on the relative contribution of primary and secondary radiation. The dependence of microdosimetric quantities such as the unrestricted linear energy L and the lineal energy y are investigated for different detector cross sections, by varying the particle type (carbon ions and protons) and its energy. PMID:26309235

  1. Monte Carlo simulation of large electron fields.

    PubMed

    Faddegon, Bruce A; Perl, Joseph; Asai, Makoto

    2008-03-01

    Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different 'physics lists,' were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the six electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the build-up region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy.

  2. Monte Carlo simulation of large electron fields

    PubMed Central

    Faddegon, Bruce A; Perl, Joseph; Asai, Makoto

    2010-01-01

    Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different “physics lists,” were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the 6 electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the buildup region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy. PMID:18296775

  3. Monte Carlo simulation of large electron fields

    NASA Astrophysics Data System (ADS)

    Faddegon, Bruce A.; Perl, Joseph; Asai, Makoto

    2008-03-01

    Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different 'physics lists,' were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the six electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the build-up region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy.

  4. Monte Carlo study of microdosimetric diamond detectors.

    PubMed

    Solevi, Paola; Magrin, Giulio; Moro, Davide; Mayer, Ramona

    2015-09-21

    Ion-beam therapy provides a high dose conformity and increased radiobiological effectiveness with respect to conventional radiation-therapy. Strict constraints on the maximum uncertainty on the biological weighted dose and consequently on the biological weighting factor require the determination of the radiation quality, defined as the types and energy spectra of the radiation at a specific point. However the experimental determination of radiation quality, in particular for an internal target, is not simple and the features of ion interactions and treatment delivery require dedicated and optimized detectors. Recently chemical vapor deposition (CVD) diamond detectors have been suggested as ion-beam therapy microdosimeters. Diamond detectors can be manufactured with small cross sections and thin shapes, ideal to cope with the high fluence rate. However the sensitive volume of solid state detectors significantly deviates from conventional microdosimeters, with a diameter that can be up to 1000 times the height. This difference requires a redefinition of the concept of sensitive thickness and a deep study of the secondary to primary radiation, of the wall effects and of the impact of the orientation of the detector with respect to the radiation field. The present work intends to study through Monte Carlo simulations the impact of the detector geometry on the determination of radiation quality quantities, in particular on the relative contribution of primary and secondary radiation. The dependence of microdosimetric quantities such as the unrestricted linear energy L and the lineal energy y are investigated for different detector cross sections, by varying the particle type (carbon ions and protons) and its energy.

  5. Monte Carlo simulations for spinodal decomposition

    SciTech Connect

    Sander, E.; Wanner, T.

    1999-06-01

    This paper addresses the phenomenon of spinodal decomposition for the Cahn-Hilliard equation. Namely, the authors are interested in why most solutions to the Cahn-Hilliard equation which start near a homogeneous equilibrium u₀ ≡ μ in the spinodal interval exhibit phase separation with a characteristic wavelength when exiting a ball of radius R in a Hilbert space centered at u₀. There are two mathematical explanations for spinodal decomposition, due to Grant and to Maier-Paape and Wanner. In this paper, the authors numerically compare these two mathematical approaches. In fact, they are able to synthesize the understanding they gain from the numerics with the approach of Maier-Paape and Wanner, leading to a better understanding of the underlying mechanism for this behavior. With this new approach, they can explain spinodal decomposition for a longer time and larger radius than either of the previous two approaches. A rigorous mathematical explanation is contained in a separate paper. The approach is to use Monte Carlo simulations to examine the dependence of R, the radius to which spinodal decomposition occurs, as a function of the parameter ε of the governing equation. The authors give a description of the dominating regions on the surface of the ball by estimating certain densities of the distributions of the exit points. They observe, and can show rigorously, that the behavior of most solutions originating near the equilibrium is determined completely by the linearization for an unexpectedly long time. They explain the mechanism for this unexpectedly linear behavior, and show that for some exceptional solutions this cannot be observed. They also describe the dynamics of these exceptional solutions.

  6. Monte carlo sampling of fission multiplicity.

    SciTech Connect

    Hendricks, J. S.

    2004-01-01

    Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly ³He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in k_eff of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the non-integrable error function. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
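The first correction method described above can be sketched as follows, with stated assumptions: the offset is the standard mean shift of a zero-truncated Gaussian, σφ(z)/Φ(z) with z = ν̄/σ, computed here directly with `math.erf` rather than the paper's efficient per-fission approximation, and the integer rounding plus re-truncation at zero makes the correction only approximate.

```python
import math
import random

def truncation_bias(nubar, sigma):
    """Upward shift of the mean when a Gaussian N(nubar, sigma) is
    truncated at zero: sigma * phi(z) / Phi(z) with z = nubar / sigma."""
    z = nubar / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * phi / Phi

def sample_multiplicity(nubar, sigma, rng, correct=True):
    """Sample an integer multiplicity from a Gaussian about nubar, rejecting
    negatives; optionally subtract the truncation bias so the long-run mean
    returns (approximately) to nubar."""
    shift = truncation_bias(nubar, sigma) if correct else 0.0
    while True:
        n = rng.gauss(nubar, sigma)
        if n >= 0.0:
            return max(0, int(round(n - shift)))
```

Note how the bias is tiny for ν̄ = 2.7, σ = 1 but grows large once σ approaches ν̄, matching the ~10% overproduction quoted in the abstract.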

  7. Monte-Carlo simulation of Callisto's exosphere

    NASA Astrophysics Data System (ADS)

    Vorburger, A.; Wurz, P.; Lammer, H.; Barabash, S.; Mousis, O.

    2015-12-01

    We model Callisto's exosphere based on its ice as well as non-ice surface via the use of a Monte-Carlo exosphere model. For the ice component we implement two putative compositions that have been computed from two possible extreme formation scenarios of the satellite. One composition represents the oxidizing state and is based on the assumption that the building blocks of Callisto were formed in the protosolar nebula and the other represents the reducing state of the gas, based on the assumption that the satellite accreted from solids condensed in the jovian sub-nebula. For the non-ice component we implemented the compositions of typical CI as well as L type chondrites. Both chondrite types have been suggested to represent Callisto's non-ice composition best. As release processes we consider surface sublimation, ion sputtering and photon-stimulated desorption. Particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction, return to the surface, are ionized, or are fragmented. Our density profiles show that whereas the sublimated species dominate close to the surface on the sun-lit side, their density profiles (with the exception of H and H2) decrease much more rapidly than those of the sputtered particles. The Neutral gas and Ion Mass (NIM) spectrometer, which is part of the Particle Environment Package (PEP), will investigate Callisto's exosphere during the JUICE mission. Our simulations show that NIM will be able to detect sublimated and sputtered particles from both the ice and non-ice surface. NIM's measured chemical composition will allow us to distinguish between different formation scenarios.

  8. Monte Carlo Volcano Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogenous velocity and density model is justified by long wavelength of VLP data. The nonlinear inversion reveals well resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.
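A stripped-down version of the randomized model-space search this record describes (not the actual inversion, which scores randomized centroid locations and moment-tensor geometries against VLP waveforms via precomputed single-force Green's functions) might look like this; the misfit and proposal functions are placeholders supplied by the caller.

```python
import random

def monte_carlo_inversion(misfit, propose, n_models, keep_frac=0.05, seed=0):
    """Randomized model-space search: draw candidate source models from a
    proposal function, score each with a misfit function, and keep the
    best-fitting fraction to map out the well-resolved region of model
    space rather than a single best solution."""
    rng = random.Random(seed)
    scored = sorted(
        (misfit(m), m) for m in (propose(rng) for _ in range(n_models))
    )
    n_keep = max(1, int(keep_frac * n_models))
    return scored[:n_keep]   # list of (misfit, model), best first
```

Returning a population of acceptable models, instead of only the minimum, is what lets this kind of scheme expose which model features are robustly resolved and which are not.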

  9. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    David Ceperley

    2011-03-02

    CUDA GPU platform. We restructured the CPU algorithms to express additional parallelism, minimize GPU-CPU communication, and efficiently utilize the GPU memory hierarchy. Using mixed precision on GT200 GPUs and MPI for intercommunication and load balancing, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core Xeon CPUs alone, while reproducing the double-precision CPU results within statistical error. We developed an all-electron quantum Monte Carlo (QMC) method for solids that does not rely on pseudopotentials, and used it to construct a primary ultra-high-pressure calibration based on the equation of state of cubic boron nitride. We computed the static contribution to the free energy with the QMC method and obtained the phonon contribution from density functional theory, yielding a high-accuracy calibration up to 900 GPa usable directly in experiment. We computed the anharmonic Raman frequency shift with QMC simulations as a function of pressure and temperature, allowing optical pressure calibration. In contrast to present experimental approaches, small systematic errors in the theoretical EOS do not increase with pressure, and no extrapolation is needed. This all-electron method is applicable to first-row solids, providing a new reference for ab initio calculations of solids and benchmarks for pseudopotential accuracy. We compared experimental and theoretical results on the momentum distribution and the quasiparticle renormalization factor in sodium. From an x-ray Compton-profile measurement of the valence-electron momentum density, we derived its discontinuity at the Fermi wavevector finding an accurate measure of the renormalization factor that we compared with quantum-Monte-Carlo and G0W0 calculations performed both on crystalline sodium and on the homogeneous electron gas. Our calculated results are in good agreement with the experiment. 
We have been studying the heat of formation for various Kubas complexes of molecular

  10. Perturbation Monte Carlo methods for tissue structure alterations.

    PubMed

    Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Spanier, Jerome

    2013-01-01

    This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients; however, the phase function cannot be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers; whole nuclei, organelles such as lysosomes and mitochondria, and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ∼15-25% of the scattering parameters.
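For context, the scattering-coefficient reweighting that the abstract says existing perturbation Monte Carlo methods already handle can be written in one line; the paper's contribution, perturbing the phase function itself, would additionally require a ratio of phase-function evaluations at every scattering angle, which is omitted from this sketch.

```python
import math

def pmc_weight(n_collisions, path_length, mus_base, mus_pert):
    """Perturbation Monte Carlo reweighting of one stored photon history for
    a change in the scattering coefficient only (absorption and phase
    function unchanged): each collision contributes mus_pert/mus_base, and
    the total path length L contributes exp(-(mus_pert - mus_base) * L)."""
    ratio = mus_pert / mus_base
    return ratio ** n_collisions * math.exp(
        -(mus_pert - mus_base) * path_length
    )
```

Multiplying each baseline history's weight by this factor yields the perturbed-problem estimate without re-running the simulation, which is where the efficiency gain described above comes from.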

  11. Fourier Monte Carlo renormalization-group approach to crystalline membranes.

    PubMed

    Tröster, A

    2015-02-01

    The computation of the critical exponent η characterizing the universal elastic behavior of crystalline membranes in the flat phase continues to represent challenges to theorists as well as computer simulators that manifest themselves in a considerable spread of numerical results for η published in the literature. We present additional insight into this problem that results from combining Wilson's momentum shell renormalization-group method with the power of modern computer simulations based on the Fourier Monte Carlo algorithm. After discussing the ideas and difficulties underlying this combined scheme, we present a calculation of the renormalization-group flow of the effective two-dimensional Young modulus for momentum shells of different thickness. Extrapolation to infinite shell thickness allows us to produce results in reasonable agreement with those obtained by functional renormalization group or by Fourier Monte Carlo simulations in combination with finite-size scaling. Moreover, our method allows us to obtain a decent estimate for the value of the Wegner exponent ω that determines the leading correction to scaling, which in turn allows us to refine our numerical estimate for η previously obtained from precise finite-size scaling data.

  12. Russian roulette efficiency in Monte Carlo resonant absorption calculations

    PubMed

    Ghassoun; Jehouani

    2000-10-01

    The resonant absorption calculation in media containing heavy resonant nuclei is one of the most difficult problems treated in reactor physics. Deterministic techniques need many approximations to solve this kind of problem. The Monte Carlo method, on the other hand, is a reliable mathematical tool for evaluating the neutron resonance escape probability, but it suffers from large statistical deviations and long computation times. To overcome this problem, we have used the Splitting and Russian Roulette technique coupled separately to survival biasing and to importance sampling for the energy parameter. These techniques have been used to calculate the neutron resonance absorption in infinite homogeneous media containing hydrogen and uranium, characterized by the dilution (ratio of the concentrations of hydrogen to uranium). The point neutron source energy is taken at Es = 2 MeV and Es = 676.45 eV, whereas the energy cut-off is fixed at Ec = 2.768 eV. The results show a large reduction of computation time and statistical deviation, without altering the mean resonance escape probability, compared to the usual analog simulation. Splitting and Russian Roulette coupled to survival biasing is found to be the best method for studying neutron resonant absorption, particularly at high energies. For several dilutions, the Monte Carlo results are also compared with those of a deterministic method based on iterative numerical solution of the neutron slowing-down equations.
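The survival-biasing step this record couples with Russian roulette can be sketched as follows; this is a generic one-collision illustration with an assumed weight cutoff and a fixed 1/2 survival probability, not the authors' code.

```python
import random

def collide(weight, sigma_s, sigma_t, rng, cutoff=0.1):
    """Survival biasing (implicit capture): instead of killing the neutron
    at a collision with probability sigma_a/sigma_t, always scatter and
    scale the weight by the survival probability sigma_s/sigma_t; then
    play Russian roulette on low-weight histories so the population of
    nearly weightless particles stays bounded."""
    weight *= sigma_s / sigma_t
    if weight < cutoff:
        if rng.random() < 0.5:
            return 2.0 * weight      # survivor: weight doubled, unbiased
        return 0.0                   # history terminated
    return weight
```

Because the roulette survivor's weight is doubled while half of the candidates are killed, the expected weight is conserved and the estimator stays unbiased while variance and run time drop, which is the effect the abstract reports.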

  13. Application of Monte Carlo Methods in Molecular Targeted Radionuclide Therapy

    SciTech Connect

    Hartmann Siantar, C; Descalle, M-A; DeNardo, G L; Nigg, D W

    2002-02-19

    Targeted radionuclide therapy promises to expand the role of radiation beyond the treatment of localized tumors. This novel form of therapy targets metastatic cancers by combining radioactive isotopes with tumor-seeking molecules such as monoclonal antibodies and custom-designed synthetic agents. Ultimately, like conventional radiotherapy, the effectiveness of targeted radionuclide therapy is limited by the maximum dose that can be given to a critical, normal tissue, such as bone marrow, kidneys, and lungs. Because radionuclide therapy relies on biological delivery of radiation, its optimization and characterization are necessarily different than for conventional radiation therapy. We have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA treatment planning system. This system calculates patient-specific radiation dose estimates using a set of computed tomography scans to describe the 3D patient anatomy, combined with 2D (planar) and 3D (SPECT, or single photon emission computed tomography) images to describe the time-dependent radiation source. The accuracy of such a dose calculation is limited primarily by the accuracy of the initial radiation source distribution, overlaid on the patient's anatomy. This presentation provides an overview of MINERVA functionality for molecular targeted radiation therapy, and describes early validation and implementation results of Monte Carlo simulations.

  14. Finding organic vapors - a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Vuollekoski, Henri; Boy, Michael; Kerminen, Veli-Matti; Kulmala, Markku

    2010-05-01

    drawbacks in accuracy, the inability to find diurnal variation, and the lack of size resolution. Here, we aim to shed some light onto the problem by applying an ad hoc Monte Carlo algorithm to a well-established aerosol dynamical model, the University of Helsinki Multicomponent Aerosol model (UHMA). By performing a side-by-side comparison with measurement data within the algorithm, this approach has the significant advantage of decreasing the amount of manual labor. More importantly, by basing the comparison on particle number size distribution data - a quantity that can be measured quite reliably - the accuracy of the results is good.
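    The core idea, driving a forward model by automated comparison against measured data, can be sketched with a generic random-search loop. This is a hypothetical sketch only; the model, parameter names, and bounds below are invented for illustration, and the actual UHMA coupling is far more involved.

```python
import random

def monte_carlo_fit(model, measured, bounds, n_iter=5000, seed=0):
    """Ad hoc Monte Carlo search (hypothetical sketch): draw random candidate
    parameter vectors within the given bounds and keep the one whose model
    output best matches the measured data in a least-squares sense."""
    rng = random.Random(seed)
    best_p, best_err = None, float("inf")
    for _ in range(n_iter):
        p = [rng.uniform(lo, hi) for lo, hi in bounds]
        err = sum((m - y) ** 2 for m, y in zip(measured, model(p)))
        if err < best_err:
            best_p, best_err = p, err
    return best_p, best_err

# toy usage: recover the two parameters of a linear "model" from synthetic data
model = lambda p: [p[0] * t + p[1] for t in range(10)]
measured = model([2.0, 0.5])
p_fit, err = monte_carlo_fit(model, measured, bounds=[(0.0, 5.0), (0.0, 1.0)])
```

    The side-by-side comparison lives entirely inside the objective function, which is what removes the manual tuning step.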

  15. Experimental Component Characterization, Monte-Carlo-Based Image Generation and Source Reconstruction for the Neutron Imaging System of the National Ignition Facility

    SciTech Connect

    Barrera, C A; Moran, M J

    2007-08-21

    The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light emission decay profile and neutron sensitivity. The samples showed a long-lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular, and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and the downscattered images, and then subtracting a decayed version of the former from the latter. The difference images need to be deconvolved in order to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified by using two surrogate sources. The simulations show that all LOS

  16. Characterization of spinal cord white matter by suppressing signal from hindered space. A Monte Carlo simulation and an ex vivo ultrahigh-b diffusion-weighted imaging study

    NASA Astrophysics Data System (ADS)

    Sapkota, Nabraj; Yoon, Sook; Thapa, Bijaya; Lee, YouJung; Bisson, Erica F.; Bowman, Beth M.; Miller, Scott C.; Shah, Lubdha M.; Rose, John W.; Jeong, Eun-Kee

    2016-11-01

    Signal measured from white matter in diffusion-weighted imaging is difficult to interpret because of the heterogeneous structure of white matter. Characterization of the white matter will be straightforward if the signal contributed from the hindered space is suppressed and the purely restricted signal is analyzed. In this study, a Monte Carlo simulation (MCS) of water diffusion in white matter was performed to understand the behavior of the diffusion-weighted signal in white matter. The signal originating from the hindered space of an excised pig cervical spinal cord white matter was suppressed using ultrahigh-b radial diffusion-weighted imaging. A light microscopy image of a section of white matter was obtained from the excised pig cervical spinal cord for the MCS. The radial diffusion-weighted signals originating from each of the intra-axonal, extra-axonal, and total spaces were studied using the MCS. The MCS predicted that the radial diffusion-weighted signal remains almost constant in the intra-axonal space and decreases gradually to about 2% of its initial value in the extra-axonal space when the b-value is increased to 30,000 s/mm2. The MCS also revealed that the diffusion-weighted signal for a b-value greater than 20,000 s/mm2 is mostly from the intra-axonal space. The decaying behavior of the signal-b curve obtained from ultrahigh-b diffusion-weighted imaging (b_max ~ 30,000 s/mm2) of the excised pig cord was very similar to the decaying behavior of the total signal-b curve synthesized in the MCS. A mono-exponential-plus-constant fit of the signal-b curve from a white matter pixel estimated the constant fraction and the apparent diffusion coefficient of the decaying fraction as 0.32 ± 0.05 and (0.16 ± 0.01) x 10^-3 mm2/s, respectively, in good agreement with the results of the MCS. The signal measured in the ultrahigh-b region (b > 20,000 s/mm2) is mostly from the restricted (intra-axonal) space. Integrity and intactness of the axons

  17. BOMAB phantom manufacturing quality assurance study using Monte Carlo computations

    SciTech Connect

    Mallett, M.W.

    1994-01-01

    Monte Carlo calculations have been performed to assess the importance of, and to quantify, quality assurance protocols in the manufacturing of the Bottle-Manikin-Absorption (BOMAB) phantom for calibrating in vivo measurement systems. The parameters characterizing the BOMAB phantom that were examined included height, fill volume, fill material density, wall thickness, and source concentration. Transport simulation was performed for monoenergetic photon sources of 0.200, 0.662, and 1.460 MeV. A linear response was observed in the photon current exiting the exterior surface of the BOMAB phantom due to variations in these parameters. Sensitivity studies were also performed for an in vivo system in operation at the Pacific Northwest Laboratories in Richland, WA. Variations in detector current for this in vivo system are reported for changes in the BOMAB phantom parameters studied here. Physical justifications for the observed results are also discussed.

  18. Anomalous diffusion due to obstacles: a Monte Carlo study.

    PubMed Central

    Saxton, M J

    1994-01-01

    In normal lateral diffusion, the mean-square displacement of the diffusing species is proportional to time. But in disordered systems anomalous diffusion may occur, in which the mean-square displacement is proportional to some other power of time. In the presence of moderate concentrations of obstacles, diffusion is anomalous over short distances and normal over long distances. Monte Carlo calculations are used to characterize anomalous diffusion for obstacle concentrations between zero and the percolation threshold. As the obstacle concentration approaches the percolation threshold, diffusion becomes more anomalous over longer distances; the anomalous diffusion exponent and the crossover length both increase. The crossover length and time show whether anomalous diffusion can be observed in a given experiment. PMID:8161693
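    The lattice calculation described above is compact enough to sketch directly. In this illustrative sketch (obstacle concentration, lattice size, and walker count are arbitrary choices, not Saxton's parameters), walkers perform a blocked random walk among immobile obstacles; on a free lattice the mean-square displacement grows linearly with time, while obstacles make the growth sublinear, MSD proportional to a power of time with exponent below one.

```python
import random

def msd_with_obstacles(c_obstacle, n_walkers=2000, n_steps=200, L=100, seed=2):
    """2D lattice random walk among immobile point obstacles (periodic
    boundaries). Returns the mean-square displacement at each time step;
    blocked moves are simply rejected."""
    rng = random.Random(seed)
    obstacles = {(i, j) for i in range(L) for j in range(L)
                 if rng.random() < c_obstacle}
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    msd = [0.0] * (n_steps + 1)
    for _ in range(n_walkers):
        while True:                          # start on a free site
            x, y = rng.randrange(L), rng.randrange(L)
            if (x, y) not in obstacles:
                break
        dx = dy = 0                          # unwrapped displacement
        for t in range(1, n_steps + 1):
            mx, my = rng.choice(moves)
            nx, ny = (x + mx) % L, (y + my) % L
            if (nx, ny) not in obstacles:
                x, y = nx, ny
                dx += mx
                dy += my
            msd[t] += dx * dx + dy * dy
    return [m / n_walkers for m in msd]
```

    Fitting log(MSD) against log(t) at short times would recover the anomalous diffusion exponent discussed in the abstract; here the point is simply that the obstructed curve falls well below the free one.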

  19. Replica exchange Monte Carlo applied to hard spheres.

    PubMed

    Odriozola, Gerardo

    2009-10-14

    In this work a replica exchange Monte Carlo scheme which considers an isobaric-isothermal ensemble extended with respect to pressure is applied to study hard spheres (HSs). The idea behind the proposal is to expand volume instead of increasing temperature, allowing crowded systems characterized by dominant repulsive interactions to unjam and thus to sample otherwise disjoint configurations. The method produces, in a single parallel run, the complete HS equation of state; in particular, the first-order fluid-solid transition is captured. The results agree well with previous calculations. This approach seems particularly useful for purely entropy-driven systems such as hard-body and nonadditive hard mixtures, where temperature plays a trivial role.
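    The pressure-expanded ensemble rests on a simple swap criterion. Below is a sketch of the standard Metropolis acceptance rule for exchanging the configurations of two isothermal-isobaric replicas at neighboring pressures (the generic textbook form, not Odriozola's specific implementation):

```python
import math
import random

def pressure_swap_accept(beta, p_i, p_j, V_i, V_j, rng=random):
    """Metropolis acceptance for swapping the configurations of two NpT
    replicas at the same temperature but different pressures p_i and p_j.
    After the swap the configuration of volume V_i sits at pressure p_j and
    vice versa, so the isobaric weight exp(-beta*p*V) changes by
    exp[-beta * (p_i - p_j) * (V_j - V_i)]."""
    delta = beta * (p_i - p_j) * (V_j - V_i)
    return rng.random() < min(1.0, math.exp(-delta))
```

    When the higher-pressure replica holds the smaller volume the exponent is non-positive and the swap is always accepted, which is exactly what lets dense configurations escape through the low-pressure (large-volume) replicas.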

  20. Frequency domain optical tomography using a Monte Carlo perturbation method

    NASA Astrophysics Data System (ADS)

    Yamamoto, Toshihiro; Sakamoto, Hiroki

    2016-04-01

    A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. The frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions in inverse problems that would otherwise be ill-posed due to cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been analyzed mostly for the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with those obtained from conventional deterministic or Monte Carlo methods.
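    The complex-weight idea itself can be illustrated in a few lines. The toy below (a hypothetical sketch; the optical coefficients, detector geometry, and speed of light in tissue are illustrative assumptions, and nothing of the paper's differential-operator Jacobian is shown) tracks photons in an infinite homogeneous medium with isotropic scattering and scores a complex weight whose magnitude carries absorption and whose phase carries the modulation-frequency time lag.

```python
import cmath
import math
import random

def fd_photon_signal(freq_mhz, mu_s=1.0, mu_a=0.05, r_det=5.0,
                     n_photons=3000, c=0.214, seed=3):
    """Toy frequency-domain photon Monte Carlo: isotropic point source in an
    infinite medium with isotropic scattering. Each photon is scored with the
    complex weight exp(-mu_a*L) * exp(-i*omega*L/c) when it first crosses the
    detector radius r_det. Lengths in mm, mu in 1/mm, c in mm/ps."""
    omega = 2 * math.pi * freq_mhz * 1e6 * 1e-12    # rad per ps
    rng = random.Random(seed)
    total = 0.0 + 0.0j
    for _ in range(n_photons):
        x = y = z = path = 0.0
        for _ in range(10000):
            s = -math.log(1.0 - rng.random()) / mu_s   # exponential free path
            cos_t = 1.0 - 2.0 * rng.random()           # isotropic direction
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            phi = 2 * math.pi * rng.random()
            x += s * sin_t * math.cos(phi)
            y += s * sin_t * math.sin(phi)
            z += s * cos_t
            path += s
            if x * x + y * y + z * z >= r_det * r_det:
                total += cmath.exp(complex(-mu_a * path, -omega * path / c))
                break
    return total / n_photons   # |.| = AC amplitude, arg = phase shift
```

    Raising the modulation frequency demodulates the detected signal and increases its phase lag, the two observables a frequency-domain reconstruction exploits.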

  1. An unbiased Hessian representation for Monte Carlo PDFs

    NASA Astrophysics Data System (ADS)

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Latorre, José Ignacio; Rojo, Juan

    2015-08-01

    We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, when applied to a Hessian PDF set (MMHT14) transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather small set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available, together with (through LHAPDF6) Hessian representations of the NNPDF3.0 set and the MC-H PDF set.

  2. The impact of absorption coefficient on polarimetric determination of Berry phase based depth resolved characterization of biomedical scattering samples: a polarized Monte Carlo investigation

    SciTech Connect

    Baba, Justin S; Koju, Vijay; John, Dwayne O

    2016-01-01

    The modulation of the state of polarization of photons due to scatter generates an associated geometric phase that is being investigated as a means of decreasing the degree of uncertainty in back-projecting the paths traversed by photons detected in backscattered geometry. In our previous work, we established that the polarimetrically detected Berry phase correlates with the mean photon penetration depth of the backscattered photons collected for image formation. In this work, we report on the impact of state-of-linear-polarization (SOLP) filtering on both the magnitude and the population distributions of image-forming detected photons as a function of the absorption coefficient of the scattering sample. The results, based on a Berry-phase-tracking implementation of a polarized Monte Carlo code, indicate that sample absorption plays a significant role in the mean depth attained by the image-forming backscattered detected photons.

  3. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  4. Efficiency of Monte Carlo sampling in chaotic systems.

    PubMed

    Leitão, Jorge C; Lopes, J M Viana Parente; Altmann, Eduardo G

    2014-11-01

    In this paper we investigate how the complexity of chaotic phase spaces affects the efficiency of importance sampling Monte Carlo simulations. We focus on flat-histogram simulations of the distribution of finite-time Lyapunov exponents in a simple chaotic system and obtain analytically that the computational effort (i) scales polynomially with the finite time, a tremendous improvement over the exponential scaling obtained with uniform sampling, and (ii) remains suboptimal in its polynomial scaling, a phenomenon known as critical slowing down. We show that critical slowing down appears because of the limited possibilities to issue a local proposal in the Monte Carlo procedure when it is applied to chaotic systems. These results show how generic properties of chaotic systems limit the efficiency of Monte Carlo simulations.
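    Flat-histogram sampling itself is easy to demonstrate on a toy problem. The sketch below uses Wang-Landau updating, a standard flat-histogram scheme chosen here for illustration (not necessarily the paper's algorithm), to recover the density of states of n coin flips, where the exact answer is the binomial coefficients.

```python
import math
import random

def wang_landau_coins(n=10, f_final=1e-4, flat=0.8, seed=4):
    """Wang-Landau flat-histogram estimate of the density of states g(k),
    the number of n-coin configurations with k heads. A single-coin-flip
    walk is accepted with probability min(1, g[k_old]/g[k_new]), so all k
    are visited roughly uniformly; g converges to C(n, k)."""
    rng = random.Random(seed)
    state = [rng.randrange(2) for _ in range(n)]
    k = sum(state)
    lng = [0.0] * (n + 1)          # ln g(k), up to an additive constant
    hist = [0] * (n + 1)
    lnf = 1.0
    while lnf > f_final:
        for _ in range(10000):
            i = rng.randrange(n)
            k_new = k + (1 - 2 * state[i])
            if math.log(rng.random() + 1e-300) < lng[k] - lng[k_new]:
                state[i] ^= 1
                k = k_new
            lng[k] += lnf          # penalize the current level ...
            hist[k] += 1           # ... and record the visit
        if min(hist) > flat * (sum(hist) / (n + 1)):   # histogram flat?
            hist = [0] * (n + 1)
            lnf /= 2.0             # refine the modification factor
    offset = max(lng)              # normalize so that sum g(k) = 2**n
    z = sum(math.exp(v - offset) for v in lng)
    return [math.exp(v - offset) * (2 ** n / z) for v in lng]
```

    The "local proposal" limitation discussed in the abstract corresponds here to the single-coin-flip move: the walk can only step between adjacent k values, which is what slows equilibration in harder systems.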

  5. Monte Carlo simulation in statistical physics: an introduction

    NASA Astrophysics Data System (ADS)

    Binder, K., Heermann, D. W.

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics and chemistry, and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. This fourth edition has been updated and a new chapter on Monte Carlo simulation of quantum-mechanical problems has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was the winner of the Berni J. Alder CECAM Award for Computational Physics 2001.

  6. Domain decomposition methods for a parallel Monte Carlo transport code

    SciTech Connect

    Alme, H J; Rodrigue, G H; Zimmerman, G B

    1999-01-27

    Achieving parallelism in simulations that use Monte Carlo transport methods presents interesting challenges. For problems that require domain decomposition, load balance can be harder to achieve. The Monte Carlo transport package may have to operate with other packages that have different optimal domain decompositions for a given problem. To examine some of these issues, we have developed a code that simulates the interaction of a laser with biological tissue; it uses a Monte Carlo method to simulate the laser and a finite element model to simulate the conduction of the temperature field in the tissue. We will present speedup and load balance results obtained for a suite of problems decomposed using a few domain decomposition algorithms we have developed.

  7. Monte Carlo tests of the ELIPGRID-PC algorithm

    SciTech Connect

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM(R) PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
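    The quantity being validated is simple to state: the chance that at least one node of a square sampling grid falls inside a randomly placed, randomly oriented elliptical hot spot. A direct Monte Carlo sketch (the ellipse and grid parameters are illustrative; this is not the validation code described above):

```python
import math
import random

def hotspot_detection_prob(semi_major, semi_minor, grid, n_trials=20000, seed=5):
    """Monte Carlo estimate of the probability that a square sampling grid
    (spacing `grid`) hits an elliptical hot spot with the given semi-axes.
    Each trial drops the ellipse at a random offset and orientation and
    checks whether any grid node lies inside it."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        cx, cy = rng.random() * grid, rng.random() * grid   # random offset
        theta = rng.random() * math.pi                      # random orientation
        c, s = math.cos(theta), math.sin(theta)
        reach = int(semi_major / grid) + 2   # nodes that could possibly hit
        found = False
        for i in range(-reach, reach + 1):
            for j in range(-reach, reach + 1):
                # grid node relative to the ellipse centre, in its frame
                dx, dy = i * grid - cx, j * grid - cy
                u, v = c * dx + s * dy, -s * dx + c * dy
                if (u / semi_major) ** 2 + (v / semi_minor) ** 2 <= 1.0:
                    found = True
                    break
            if found:
                break
        hits += found
    return hits / n_trials
```

    Two sanity checks follow from geometry: a small circle (diameter below the grid spacing) is hit with probability equal to its area divided by the cell area, and a circle whose radius equals the spacing is always hit, since no point of the plane is farther than spacing times sqrt(2)/2 from a node.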

  8. Application of biasing techniques to the contributon Monte Carlo method

    SciTech Connect

    Dubi, A.; Gerstl, S.A.W.

    1980-01-01

    Recently, a new Monte Carlo method called the contributon Monte Carlo method was developed. The method is based on the theory of contributons, and uses a new recipe for estimating target responses by a volume integral over the contributon current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfactory results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table.

  9. PEPSI — a Monte Carlo generator for polarized leptoproduction

    NASA Astrophysics Data System (ADS)

    Mankiewicz, L.; Schäfer, A.; Veltri, M.

    1992-09-01

    We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross section at first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.

  10. Bold Diagrammatic Monte Carlo for Fermionic and Fermionized Systems

    NASA Astrophysics Data System (ADS)

    Svistunov, Boris

    2013-03-01

    In three different fermionic cases - the repulsive Hubbard model, resonant fermions, and fermionized spins-1/2 (on a triangular lattice) - we observe the phenomenon of sign blessing: the Feynman diagrammatic series features a finite convergence radius despite the factorial growth of the number of diagrams with diagram order. The bold diagrammatic Monte Carlo technique allows us to sample millions of skeleton Feynman diagrams. With the universal fermionization trick we can fermionize essentially any (bosonic, spin, mixed, etc.) lattice system. The combination of fermionization and bold diagrammatic Monte Carlo yields a universal first-principles approach to strongly correlated lattice systems, provided sign blessing is a generic fermionic phenomenon. Supported by NSF and DARPA.

  11. Monte Carlo simulations of phosphate polyhedron connectivity in glasses

    SciTech Connect

    ALAM,TODD M.

    2000-01-01

    Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest-neighbor connectivities between phosphate polyhedra for random, alternating, and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments on phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.

  12. Monte Carlo Simulations of Phosphate Polyhedron Connectivity in Glasses

    SciTech Connect

    ALAM,TODD M.

    1999-12-21

    Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest-neighbor connectivities between phosphate polyhedra for random, alternating, and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments on phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.

  13. Mesh Optimization for Monte Carlo-Based Optical Tomography

    PubMed Central

    Edmans, Andrew; Intes, Xavier

    2015-01-01

    Mesh-based Monte Carlo techniques for optical imaging allow for accurate modeling of light propagation in complex biological tissues. Recently, they have been developed within an efficient computational framework to be used as a forward model in optical tomography. However, commonly employed adaptive mesh discretization techniques have not yet been implemented for Monte Carlo based tomography. Herein, we propose a methodology to optimize the mesh discretization and analytically rescale the associated Jacobian based on the characteristics of the forward model. We demonstrate that this method maintains the accuracy of the forward model even in the case of temporal data sets while allowing for significant coarsening or refinement of the mesh. PMID:26566523

  14. Collective translational and rotational Monte Carlo moves for attractive particles

    NASA Astrophysics Data System (ADS)

    Růžička, Štěpán; Allen, Michael P.

    2014-03-01

    Virtual move Monte Carlo is a Monte Carlo (MC) cluster algorithm forming clusters via local energy gradients and approximating the collective kinetic or dynamic motion of attractive colloidal particles. We carefully describe, analyze, and test the algorithm. To formally validate the algorithm through highlighting its symmetries, we present alternative and compact ways of selecting and accepting clusters which illustrate the formal use of abstract concepts in the design of biased MC techniques: the superdetailed balance and the early rejection scheme. A brief and comprehensive summary of the algorithms is presented, which makes them accessible without needing to understand the details of the derivation.

  15. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    NASA Astrophysics Data System (ADS)

    Rubenstein, Brenda M.

    Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While
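    The idea of a stochastic power method lends itself to a compact illustration. The sketch below is hypothetical, not the algorithm as developed in the thesis: it estimates the dominant eigenvalue of a small positive matrix by replacing each exact matrix-vector product with an unbiased estimate built from a few sampled columns, so the full product never has to be formed.

```python
import random

def mc_power_method(A, n_iter=300, n_samples=200, seed=6):
    """Monte Carlo power method (illustrative sketch): estimate the dominant
    eigenvalue of a matrix with positive entries. Each product A @ v is
    estimated by drawing column j with probability |v_j| / ||v||_1 and
    accumulating sign(v_j) * ||v||_1 * A[:, j] / n_samples."""
    rng = random.Random(seed)
    n = len(A)
    v = [1.0] * n
    estimates = []
    for _ in range(n_iter):
        norm1 = sum(abs(x) for x in v)
        w = [0.0] * n
        for _ in range(n_samples):
            r = rng.random() * norm1        # sample j proportional to |v_j|
            acc, j = 0.0, 0
            for j, x in enumerate(v):
                acc += abs(x)
                if acc >= r:
                    break
            sgn = 1.0 if v[j] >= 0 else -1.0
            for i in range(n):
                w[i] += sgn * norm1 * A[i][j] / n_samples
        s = sum(w)
        estimates.append(s / norm1)         # ratio estimate of the eigenvalue
        v = [x * n / s for x in w]          # rescale to avoid over/underflow
    half = n_iter // 2                      # discard the transient, then average
    return sum(estimates[half:]) / (n_iter - half)
```

    For a true power method the product would be exact; here the sampled estimate is unbiased, so iterating it still drives the vector toward the dominant eigendirection while each iteration touches only a handful of columns.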

  16. Monte Carlo calculation of monitor unit for electron arc therapy

    SciTech Connect

    Chow, James C. L.; Jiang Runqing

    2010-04-15

    Purpose: Monitor unit (MU) calculations for electron arc therapy were carried out using Monte Carlo simulations and verified by measurements. Variations in the dwell factor (DF), source-to-surface distance (SSD), and treatment arc angle (α) were studied. Moreover, the possibility of measuring the DF, which requires gantry rotation, using a solid water rectangular phantom instead of a cylindrical one was investigated. Methods: A phase space file based on the 9 MeV electron beam with a rectangular cutout (physical size = 2.6 x 21 cm^2) attached to the block tray holder of a Varian 21 EX linear accelerator (linac) was generated using the EGSnrc-based Monte Carlo code and verified by measurement. The relative output factor (ROF), SSD offset, and DF, needed in the MU calculation, were determined using measurements and Monte Carlo simulations. An ionization chamber, a radiographic film, a solid water rectangular phantom, and a cylindrical phantom made of polystyrene were used in dosimetry measurements. Results: Percentage deviations of ROF, SSD offset, and DF between measured and Monte Carlo results were 1.2%, 0.18%, and 1.5%, respectively. It was found that the DF decreased with an increase in α, and this decrease was more significant in the α range of 0-60 deg than 60-120 deg. Moreover, for a fixed α, the DF increased with an increase in SSD. Comparing the DF determined using the rectangular and cylindrical phantoms through measurements and Monte Carlo simulations, it was found that the DF determined by the rectangular phantom agreed with that by the cylindrical one to within ±1.2%. This shows that a simple setup of a solid water rectangular phantom was sufficient to replace the cylindrical phantom, using our specific cutout, to determine the DF associated with the electron arc. Conclusions: Verified by dosimetry measurements, Monte Carlo simulations proved to be an alternative way to perform MU calculations effectively.

  17. Optix: A Monte Carlo scintillation light transport code

    NASA Astrophysics Data System (ADS)

    Safari, M. J.; Afarideh, H.; Ghal-Eh, N.; Davani, F. Abbasi

    2014-02-01

    The paper reports on the capabilities of the Monte Carlo scintillation light transport code Optix, which is an extended version of the previously introduced code Optics. Optix provides the user with a variety of both numerical and graphical outputs through a very simple and user-friendly input structure. A benchmarking strategy has been adopted based on comparison with experimental results, semi-analytical solutions, and other Monte Carlo simulation codes to verify various aspects of the developed code. In addition, some extensive comparisons have been made against the tracking abilities of the general-purpose MCNPX and FLUKA codes. The presented benchmark results for the Optix code exhibit promising agreement.

  18. Monte Carlo Form-Finding Method for Tensegrity Structures

    NASA Astrophysics Data System (ADS)

    Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping

    2010-05-01

    In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.

  19. Monte Carlo simulation of electrons in dense gases

    NASA Astrophysics Data System (ADS)

    Tattersall, Wade; Boyle, Greg; Cocks, Daniel; Buckman, Stephen; White, Ron

    2014-10-01

    We implement a Monte Carlo simulation modelling the transport of electrons and positrons in dense gases and liquids, using a dynamic structure factor that allows us to construct structure-modified effective cross sections. These account for the coherent effects caused by interactions with the relatively dense medium. The dynamic structure factor also allows us to model thermal gases in the same manner, without needing to directly sample the velocities of the neutral particles. We present the results of a series of Monte Carlo simulations that verify and apply this new technique, and make comparisons with macroscopic predictions and Boltzmann equation solutions. Financial support was provided by the Australian Research Council.

  20. Analytic Monte Carlo score distributions for future statistical confidence interval studies

    SciTech Connect

    Booth, T.E.

    1992-10-01

    The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate sampling of large scores from the score distribution's tail. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. The analytic score distribution for geometry splitting/Russian roulette applied to a simple Monte Carlo problem and the analytic score distribution for the exponential transform applied to the same Monte Carlo problem are provided in this paper.

  1. Observations on variational and projector Monte Carlo methods.

    PubMed

    Umrigar, C J

    2015-10-28

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed. PMID:26520496
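The variational Monte Carlo half of this discussion can be sketched in a few lines for the 1D harmonic oscillator (hbar = m = omega = 1), an illustrative toy choice, not the paper's systems: sample |psi_alpha|^2 by Metropolis and average the local energy of the trial wavefunction psi_alpha(x) = exp(-alpha x^2 / 2).

```python
import math, random

random.seed(0)

def vmc_energy(alpha, n_steps=50_000, step=2.0):
    """Metropolis sampling of |psi_alpha|^2 = exp(-alpha x^2); returns the
    average local energy  E_L(x) = alpha/2 + x^2 (1 - alpha^2) / 2  for the
    1D harmonic oscillator Hamiltonian  H = -d^2/dx^2 / 2 + x^2 / 2."""
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + step * (random.random() - 0.5)
        # acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        if random.random() < math.exp(-alpha * (x_new**2 - x**2)):
            x = x_new
        e_sum += alpha / 2 + x * x * (1 - alpha * alpha) / 2
    return e_sum / n_steps

e_exact = vmc_energy(1.0)   # alpha = 1 is the exact ground state
e_trial = vmc_energy(0.6)   # a poorer trial wavefunction
```

At alpha = 1 the local energy is constant (zero variance, energy exactly 1/2), while any other alpha gives a higher average energy with nonzero variance, which is the zero-variance property that makes good trial wavefunctions so valuable in VMC.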

  2. Observations on variational and projector Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Umrigar, C. J.

    2015-10-01

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.

3. Reagents for Electrophilic Amination: A Quantum Monte Carlo Study

    SciTech Connect

Amador-Bedolla, Carlos; Salomon-Ferrer, Romelia; Lester Jr., William A.; Vazquez-Martinez, Jose A.; Aspuru-Guzik, Alan

    2006-11-01

Electroamination is an appealing synthetic strategy to construct carbon-nitrogen bonds. We explore the use of the quantum Monte Carlo method and a proposed variant of the electron-pair localization function--the electron-pair localization function density--as a measure of the nucleophilicity of nitrogen lone pairs and as a possible screening procedure for electrophilic reagents.

  4. Error estimations and their biases in Monte Carlo eigenvalue calculations

    SciTech Connect

    Ueki, Taro; Mori, Takamasa; Nakagawa, Masayuki

    1997-01-01

In the Monte Carlo eigenvalue calculation of neutron transport, the eigenvalue is calculated as the average of the multiplication factors from successive cycles, the cycle k_eff values. Biases in the estimators of the variance and intercycle covariances in Monte Carlo eigenvalue calculations are analyzed. The relations among the real and apparent values of variances and intercycle covariances are derived, where real refers to a true value calculated from independently repeated Monte Carlo runs and apparent refers to the expected value of estimates from a single Monte Carlo run. Iterative methods based on these relations are then proposed to estimate the standard deviation of the eigenvalue. The methods work well for cases in which the ratio of the real to apparent variance is between 1.4 and 3.1. Even when this ratio exceeds 5, more than 70% of the standard deviation estimates fall within 40% of the true value.
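The real-versus-apparent distinction can be demonstrated with a toy model in which intercycle correlation is mimicked by an AR(1) chain around k = 1 (an illustrative assumption, not the transport physics): the single-run "apparent" variance of the mean, which treats cycles as independent, underestimates the "real" variance measured across independent runs.

```python
import random, statistics

random.seed(1)

def run_keff(n_cycles=200, rho=0.6):
    """One Monte Carlo 'run': cycle k_eff values modeled as an AR(1)
    chain, a stand-in for the positive intercycle correlation produced
    by the fission source iteration."""
    k, ks = 1.0, []
    for _ in range(n_cycles):
        k = 1.0 + rho * (k - 1.0) + random.gauss(0.0, 0.01)
        ks.append(k)
    return ks

n_runs = 400
means, apparents = [], []
for _ in range(n_runs):
    ks = run_keff()
    means.append(statistics.fmean(ks))
    # apparent variance of the mean: sample variance / N, as if independent
    apparents.append(statistics.variance(ks) / len(ks))

real = statistics.variance(means)     # true run-to-run variance of the mean
apparent = statistics.fmean(apparents)
ratio = real / apparent               # > 1 when cycles are correlated
```

For an AR(1) chain with coefficient rho, the true variance of the mean is inflated by roughly (1 + rho)/(1 - rho), which is 4 for rho = 0.6, so the ratio lands well inside the regime the abstract describes.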

  5. Monte Carlo study of TLD measurements in air cavities.

    PubMed

    Haraldsson, Pia; Knöös, Tommy; Nyström, Håkan; Engström, Per

    2003-09-21

    Thermoluminescent dosimeters (TLDs) are used for verification of the delivered dose during IMRT treatment of head and neck carcinomas. The TLDs are put into a plastic tube, which is placed in the nasal cavities through the treated volume. In this study, the dose distribution to a phantom having a cylindrical air cavity containing a tube was calculated by Monte Carlo methods and the results were compared with data from a treatment planning system (TPS) to evaluate the accuracy of the TLD measurements. The phantom was defined in the DOSXYZnrc Monte Carlo code and calculations were performed with 6 MV fields, with the TLD tube placed at different positions within the cylindrical air cavity. A similar phantom was defined in the pencil beam based TPS. Differences between the Monte Carlo and the TPS calculations of the absorbed dose to the TLD tube were found to be small for an open symmetrical field. For a half-beam field through the air cavity, there was a larger discrepancy. Furthermore, dose profiles through the cylindrical air cavity show, as expected, that the treatment planning system overestimates the absorbed dose in the air cavity. This study shows that when using an open symmetrical field, Monte Carlo calculations of absorbed doses to a TLD tube in a cylindrical air cavity give results comparable to a pencil beam based treatment planning system.

  6. Calibration and Monte Carlo modelling of neutron long counters

    NASA Astrophysics Data System (ADS)

    Tagziria, Hamid; Thomas, David J.

    2000-10-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivity of the Monte Carlo calculations for the efficiency of the De Pangher long counter to perturbations in density and cross-section of the polyethylene used in the construction has been investigated.

  7. Bayesian Monte Carlo Method for Nuclear Data Evaluation

    SciTech Connect

    Koning, A.J.

    2015-01-15

A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using TALYS. The result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an experiment-based weight.

  8. Exact Dynamics via Poisson Process: a unifying Monte Carlo paradigm

    NASA Astrophysics Data System (ADS)

    Gubernatis, James

    2014-03-01

A common computational task is solving a set of ordinary differential equations (o.d.e.'s). A little known theorem says that the solution of any set of o.d.e.'s is exactly given by the expectation value, over a set of arbitrary Poisson processes, of a particular function of the elements of the matrix that defines the o.d.e.'s. The theorem thus provides a new starting point for developing real- and imaginary-time continuous-time solvers for quantum Monte Carlo algorithms, and several simple observations enable various quantum Monte Carlo techniques and variance reduction methods to transfer to this new context. I will state the theorem, note a transformation to a very simple computational scheme, and illustrate the use of some techniques from the directed-loop algorithm in the context of the wavefunction Monte Carlo method that is used to solve the Lindblad master equation for the dynamics of open quantum systems. I will end by noting that since the theorem does not depend on the o.d.e.'s coming from quantum mechanics, it also enables the transfer of continuous-time methods from quantum Monte Carlo to the simulation of various classical equations of motion heretofore only solved deterministically.

  9. Shifted-Contour Monte Carlo Method for Nuclear Structure

    SciTech Connect

    Stoitcheva, G.S.; Dean, D.J.

    2004-09-13

    We propose a new approach for alleviating the 'sign' problem in the nuclear shell model Monte Carlo method. The approach relies on modifying the integration contour of the Hubbard-Stratonovich transformation to pass through an imaginary stationary point in the auxiliary-field associated with the Hartree-Fock density.

  10. Monte Carlo shipping cask calculations using an automated biasing procedure

    SciTech Connect

    Tang, J.S.; Hoffman, T.J.; Childs, R.L.; Parks, C.V.

    1983-01-01

    This paper describes an automated biasing procedure for Monte Carlo shipping cask calculations within the SCALE system - a modular code system for Standardized Computer Analysis for Licensing Evaluation. The SCALE system was conceived and funded by the US Nuclear Regulatory Commission to satisfy a strong need for performing standardized criticality, shielding, and heat transfer analyses of nuclear systems.

  11. A Variational Monte Carlo Approach to Atomic Structure

    ERIC Educational Resources Information Center

    Davis, Stephen L.

    2007-01-01

The practicality and usefulness of variational Monte Carlo calculations of atomic structure are demonstrated. The method is found to succeed in quantitatively illustrating electron shielding, effective nuclear charge, the l-dependence of the orbital energies, singlet-triplet energy splitting, and ionization energy trends in atomic structure theory.

  12. Improved geometry representations for Monte Carlo radiation transport.

    SciTech Connect

    Martin, Matthew Ryan

    2004-08-01

    ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.

  13. Monte Carlo method for magnetic impurities in metals

    NASA Technical Reports Server (NTRS)

    Hirsch, J. E.; Fye, R. M.

    1986-01-01

The paper discusses a Monte Carlo algorithm to study properties of dilute magnetic alloys; the method can treat a small number of magnetic impurities interacting with the conduction electrons in a metal. Results for the susceptibility of a single Anderson impurity in the symmetric case show the expected universal behavior at low temperatures. Some results for two Anderson impurities are also discussed.

  14. Microbial contamination in poultry chillers estimated by Monte Carlo simulations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The risk of microbial contamination during poultry processing may be reduced by the operating characteristics of the chiller. The performance of air chillers and immersion chillers were compared in terms of pre-chill and post-chill contamination using Monte Carlo simulations. Three parameters were u...

  15. A Monte Carlo Approach for Adaptive Testing with Content Constraints

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander

    2008-01-01

    This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…

  16. Diffuse photon density wave measurements and Monte Carlo simulations.

    PubMed

    Kuzmin, Vladimir L; Neidrauer, Michael T; Diaz, David; Zubkov, Leonid A

    2015-10-01

Diffuse photon density wave (DPDW) methodology is widely used in a number of biomedical applications. Here, we present results of Monte Carlo simulations that employ an effective numerical procedure based upon a description of radiative transfer in terms of the Bethe–Salpeter equation. A multifrequency noncontact DPDW system was used to measure aqueous solutions of intralipid at a wide range of source–detector separation distances, at which the diffusion approximation of the radiative transfer equation is generally considered to be invalid. We find that the signal-to-noise ratio is larger for the considered algorithm in comparison with the conventional Monte Carlo approach. Experimental data are compared to the Monte Carlo simulations using several values of scattering anisotropy and to the diffusion approximation. Both the Monte Carlo simulations and diffusion approximation were in very good agreement with the experimental data for a wide range of source–detector separations. In addition, measurements with different wavelengths were performed to estimate the size and scattering anisotropy of scatterers.

  17. Observations on variational and projector Monte Carlo methods

    SciTech Connect

    Umrigar, C. J.

    2015-10-28

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.

  18. Monte Carlo Simulations of Light Propagation in Apples

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports on the investigation of light propagation in fresh apples in the visible and short-wave near-infrared region using Monte Carlo simulations. Optical properties of ‘Golden Delicious’ apples were determined over the spectral range of 500-1100 nm using a hyperspectral imaging method, ...

  19. Exploring Mass Perception with Markov Chain Monte Carlo

    ERIC Educational Resources Information Center

    Cohen, Andrew L.; Ross, Michael G.

    2009-01-01

Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…
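The sampling engine behind any Markov chain Monte Carlo study, psychological or otherwise, is the Metropolis rule: propose a symmetric random jump and accept it with probability min(1, p(x')/p(x)). A minimal generic sketch (the target here is an arbitrary illustrative Gaussian, not the paper's perceptual model):

```python
import math, random

random.seed(7)

def metropolis(log_target, x0=0.0, n=30_000, step=1.0):
    """Generic Metropolis sampler over an unnormalized log density:
    propose a symmetric Gaussian jump, accept with probability
    min(1, target(x') / target(x))."""
    x, samples = x0, []
    for _ in range(n):
        x_new = x + random.gauss(0.0, step)
        d = log_target(x_new) - log_target(x)
        if random.random() < math.exp(min(0.0, d)):   # min() avoids overflow
            x = x_new
        samples.append(x)
    return samples

# unnormalized Gaussian with mean 3 and unit variance
samples = metropolis(lambda x: -0.5 * (x - 3.0) ** 2)
tail = samples[5000:]                 # discard burn-in
mean = sum(tail) / len(tail)
```

Only density ratios are needed, so the normalizing constant never has to be computed, which is what makes the technique portable to settings (like eliciting human judgments) where only relative preferences are available.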

  20. Monte Carlo Capabilities of the SCALE Code System

    NASA Astrophysics Data System (ADS)

    Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.

    2014-06-01

    SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  1. Monte Carlo radiation transport: A revolution in science

    SciTech Connect

    Hendricks, J.

    1993-04-01

When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science.

  2. Monte Carlo event generators for hadron-hadron collisions

    SciTech Connect

    Knowles, I.G.; Protopopescu, S.D.

    1993-06-01

    A brief review of Monte Carlo event generators for simulating hadron-hadron collisions is presented. Particular emphasis is placed on comparisons of the approaches used to describe physics elements and identifying their relative merits and weaknesses. This review summarizes a more detailed report.

  3. Monte Carlo capabilities of the SCALE code system

    SciTech Connect

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; Bekar, Kursat B.; Wiarda, Dorothea; Celik, Cihangir; Perfetti, Christopher M.; Ibrahim, Ahmad M.; Hart, S. W. D.; Dunn, Michael E.; Marshall, William J.

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  4. Monte Carlo capabilities of the SCALE code system

    DOE PAGES

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; Bekar, Kursat B.; Wiarda, Dorothea; Celik, Cihangir; Perfetti, Christopher M.; Ibrahim, Ahmad M.; Hart, S. W. D.; Dunn, Michael E.; et al

    2014-09-12

SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  5. Quantum Monte Carlo simulation with a black hole

    NASA Astrophysics Data System (ADS)

    Benić, Sanjin; Yamamoto, Arata

    2016-05-01

    We perform quantum Monte Carlo simulations in the background of a classical black hole. The lattice discretized path integral is numerically calculated in the Schwarzschild metric and in its approximated metric. We study spontaneous symmetry breaking of a real scalar field theory. We observe inhomogeneous symmetry breaking induced by an inhomogeneous gravitational field.

  6. Parallel Monte Carlo simulation of multilattice thin film growth

    NASA Astrophysics Data System (ADS)

    Shu, J. W.; Lu, Qin; Wong, Wai-on; Huang, Han-chen

    2001-07-01

This paper describes a new parallel algorithm for the multi-lattice Monte Carlo atomistic simulator for thin film deposition (ADEPT), implemented on a parallel computer using the PVM (Parallel Virtual Machine) message passing library. The parallel algorithm is based on domain decomposition with overlapping and asynchronous communication. Multiple lattices are represented by a single reference lattice through one-to-one mappings, with resulting computational demands comparable to those of the single-lattice Monte Carlo model. Asynchronous communication and domain overlapping techniques are used to reduce the waiting time and communication time among parallel processors. Results show that the algorithm is highly efficient with a large number of processors. The algorithm was implemented on a parallel machine with 50 processors, and it is suitable for parallel Monte Carlo simulation of thin film growth on either a distributed-memory parallel computer or a shared-memory machine with message passing libraries. The significant communication time in parallel MC simulation of thin film growth is effectively reduced by adopting domain decomposition with overlapping between sub-domains and asynchronous communication among processors. The communication overhead does not increase noticeably, and the speedup shows an ascending tendency as the number of processors increases. A near-linear increase in computing speed was achieved as the number of processors increased, and there is no theoretical limit on the number of processors that can be used. The techniques developed in this work are also suitable for implementing the Monte Carlo code on other parallel systems.

  7. MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD

    EPA Science Inventory

    A predictive screening model was developed for fate and transport
    of viruses in the unsaturated zone. A database of input parameters
    allowed Monte Carlo analysis with the model. The resulting kernel
    densities of predicted attenuation during percolation indicated very ...

  8. Testing Dependent Correlations with Nonoverlapping Variables: A Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Silver, N. Clayton; Hittner, James B.; May, Kim

    2004-01-01

The authors conducted a Monte Carlo simulation of 4 test statistics for comparing dependent correlations with no variables in common. Empirical Type I error rates and power estimates were determined for K. Pearson and L. N. G. Filon's (1898) z, O. J. Dunn and V. A. Clark's (1969) z, J. H. Steiger's (1980) original modification of Dunn and Clark's…
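The general recipe behind such studies is to simulate data repeatedly under the null hypothesis and count how often the test rejects. As a simpler stand-in for the paper's dependent-correlation statistics (which require generating correlated multivariate data), here is the same recipe applied to a one-sample test with a normal critical value, an illustrative assumption:

```python
import math, random, statistics

random.seed(9)

def type1_rate(n=30, n_sims=4000, z_crit=1.96):
    """Empirical Type I error of a two-sided one-sample test under the
    null (standard-normal data).  Using the normal critical value 1.96
    instead of the exact t quantile slightly inflates the rate, and the
    simulation makes that inflation visible."""
    rejections = 0
    for _ in range(n_sims):
        xs = [random.gauss(0.0, 1.0) for _ in range(n)]
        t = statistics.fmean(xs) / (statistics.stdev(xs) / math.sqrt(n))
        if abs(t) > z_crit:
            rejections += 1
    return rejections / n_sims

rate = type1_rate()   # nominal level is 0.05
```

Comparing the empirical rate against the nominal 0.05 level across statistics and sample sizes is exactly the kind of tabulation the abstract describes.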

  9. A Monte Carlo solution of heat conduction and Poisson equations

    SciTech Connect

    Grigoriu, M.

    2000-02-01

A Monte Carlo method is developed for solving the heat conduction, Poisson, and Laplace equations. The method is based on properties of Brownian motion and Ito processes, the Ito formula for differentiable functions of these processes, and the similarities between the generator of Ito processes and the differential operators of these equations. The proposed method is similar to current Monte Carlo solutions, such as the fixed random walk, exodus, and floating walk methods, in the sense that it is local; that is, it determines the solution directly at a single point or a small set of points of the domain of definition of the heat conduction equation. However, the proposed and the current Monte Carlo solutions are based on different theoretical considerations. The proposed Monte Carlo method has some attractive features. The method does not require discretizing the domain of definition of the differential equation, can be applied to domains of any dimension and geometry, works for both Dirichlet and Neumann boundary conditions, and provides simple solutions for the steady-state and transient heat equations. Several examples are presented to illustrate the application of the proposed method and demonstrate its accuracy.
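The locality the abstract mentions is easy to see in the classical fixed-random-walk method it cites as a predecessor: to get the Laplace solution at one grid node, walk to a random neighbour until the boundary is hit and average the boundary value found there. A minimal sketch on a square grid with the illustrative boundary data g(x, y) = x, whose harmonic extension is u(x, y) = x:

```python
import random

random.seed(3)

def laplace_at(ix, iy, n_grid=20, n_walks=20_000):
    """Fixed-random-walk estimate of the Laplace solution at grid node
    (ix, iy): step to one of the 4 neighbours with equal probability
    until the boundary is reached, then score the boundary value there.
    The estimator needs no values at interior nodes other than the one
    requested, which is the 'local' property."""
    total = 0.0
    for _ in range(n_walks):
        x, y = ix, iy
        while 0 < x < n_grid and 0 < y < n_grid:
            dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            x, y = x + dx, y + dy
        total += x / n_grid          # boundary data g = x, normalized
    return total / n_walks

u_center = laplace_at(10, 10)        # exact harmonic solution: x = 0.5
```

Because u(x, y) = x is exactly discrete-harmonic, this estimator is exactly unbiased here; the paper's Ito-process formulation generalizes the same mean-value idea without requiring any grid at all.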

  10. Does standard Monte Carlo give justice to instantons?

    NASA Astrophysics Data System (ADS)

    Fucito, F.; Solomon, S.

    1984-01-01

The results of the standard local Monte Carlo are changed by offering instantons as candidates in the Metropolis procedure. We also define an O(3) topological charge with no contribution from planar dislocations. The RG behavior is still not recovered.

  11. Monte Carlo methods for multidimensional integration for European option pricing

    NASA Astrophysics Data System (ADS)

    Todorov, V.; Dimov, I. T.

    2016-10-01

In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European-style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; the average of independent samples of this random variable is then used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk-neutral valuation formula. Then, with an appropriate change of variables, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule with one of the best low-discrepancy sequences, the Sobol sequence, for numerical integration. Quasi-Monte Carlo methods are compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is of interest to know which of them performs best for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
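The crude Monte Carlo baseline in such comparisons is simple to state: write the option value as a discounted expected payoff and average independent samples. A one-dimensional sketch for a European call under geometric Brownian motion, checked against the closed-form Black-Scholes price (parameter values are illustrative):

```python
import math, random

random.seed(11)

def bs_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes call price, used for comparison."""
    d1 = (math.log(s0 / k) + (r + sigma**2 / 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # normal CDF
    return s0 * phi(d1) - k * math.exp(-r * t) * phi(d2)

def mc_call(s0, k, r, sigma, t, n=200_000):
    """Crude Monte Carlo: average the discounted payoff over independent
    terminal prices  S_T = S0 exp((r - sigma^2/2) t + sigma sqrt(t) Z)."""
    drift = (r - sigma**2 / 2) * t
    vol = sigma * math.sqrt(t)
    payoff = 0.0
    for _ in range(n):
        s_t = s0 * math.exp(drift + vol * random.gauss(0.0, 1.0))
        payoff += max(s_t - k, 0.0)
    return math.exp(-r * t) * payoff / n

exact = bs_call(100, 100, 0.05, 0.2, 1.0)
estimate = mc_call(100, 100, 0.05, 0.2, 1.0)
```

Crude Monte Carlo converges at O(n^-1/2) regardless of dimension; the lattice rules and Sobol sequences the paper compares aim to beat that rate on the same unit-hypercube integral.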

  12. The performance of a hybrid analytical-Monte Carlo system response matrix in pinhole SPECT reconstruction.

    PubMed

    El Bitar, Z; Pino, F; Candela, C; Ros, D; Pavía, J; Rannou, F R; Ruibal, A; Aguiar, P

    2014-12-21

    It is well-known that in pinhole SPECT (single-photon-emission computed tomography), iterative reconstruction methods including accurate estimations of the system response matrix can lead to submillimeter spatial resolution. There are two different methods for obtaining the system response matrix: those that model the system analytically using an approach including an experimental characterization of the detector response, and those that make use of Monte Carlo simulations. Methods based on analytical approaches are faster and handle the statistical noise better than those based on Monte Carlo simulations, but they require tedious experimental measurements of the detector response. One suggested approach for avoiding an experimental characterization, circumventing the problem of statistical noise introduced by Monte Carlo simulations, is to perform an analytical computation of the system response matrix combined with a Monte Carlo characterization of the detector response. Our findings showed that this approach can achieve high spatial resolution similar to that obtained when the system response matrix computation includes an experimental characterization. Furthermore, we have shown that using simulated detector responses has the advantage of yielding a precise estimate of the shift between the point of entry of the photon beam into the detector and the point of interaction inside the detector. Considering this, it was possible to slightly improve the spatial resolution in the edge of the field of view.

  13. Multiple-time-stepping generalized hybrid Monte Carlo methods

    SciTech Connect

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved superior in sampling efficiency to its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC additionally uses a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo method, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
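The hybrid Monte Carlo core that all of these variants build on fits in a few lines for a one-dimensional toy target (a standard normal, with potential U(x) = x^2/2, chosen for illustration, not the paper's molecular systems): draw a momentum, integrate Hamilton's equations with the leapfrog scheme, and accept or reject on the change in total energy.

```python
import math, random

random.seed(5)

def hmc_step(x, eps=0.2, n_leap=10):
    """One hybrid Monte Carlo step for U(x) = x^2/2 (standard normal
    target, so grad U = x): momentum refresh, leapfrog trajectory,
    Metropolis accept/reject on the total-energy change."""
    p = random.gauss(0.0, 1.0)
    h0 = 0.5 * p * p + 0.5 * x * x
    x_new, p_new = x, p
    p_new -= 0.5 * eps * x_new           # initial half kick
    for _ in range(n_leap - 1):
        x_new += eps * p_new             # drift
        p_new -= eps * x_new             # full kick
    x_new += eps * p_new                 # final drift
    p_new -= 0.5 * eps * x_new           # final half kick
    h1 = 0.5 * p_new * p_new + 0.5 * x_new * x_new
    if random.random() < math.exp(min(0.0, h0 - h1)):
        return x_new                     # accept
    return x                             # reject: keep old state

x, samples = 0.0, []
for _ in range(20_000):
    x = hmc_step(x)
    samples.append(x)
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean * mean
```

The shadow-Hamiltonian and MTS ideas in the abstract modify, respectively, the quantity used in the accept/reject test and the force evaluations inside the leapfrog loop, while this outer structure stays the same.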

  14. Fast Monte Carlo for radiation therapy: the PEREGRINE Project

    SciTech Connect

    Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.

    1997-11-11

    The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient-specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently used algorithms reveals significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.
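
    The quoted "voxel standard deviations of <2% of the maximum dose" is a standard batch-statistics figure. A hedged sketch of how such a per-voxel uncertainty is computed (the batch layout and array names are illustrative, not PEREGRINE's internals):

    ```python
    import numpy as np

    def tally_statistics(batch_doses):
        """Per-voxel mean dose and uncertainty relative to the maximum dose.

        batch_doses has shape (n_batches, n_voxels): one dose map per
        independent batch of histories, a common way to estimate the
        statistical uncertainty of a Monte Carlo tally.
        """
        n = batch_doses.shape[0]
        mean = batch_doses.mean(axis=0)
        sem = batch_doses.std(axis=0, ddof=1) / np.sqrt(n)  # std error of the mean
        rel = sem / mean.max()   # expressed relative to the maximum dose
        return mean, rel
    ```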

  15. Reconstruction of Human Monte Carlo Geometry from Segmented Images

    NASA Astrophysics Data System (ADS)

    Zhao, Kai; Cheng, Mengyun; Fan, Yanchang; Wang, Wen; Long, Pengcheng; Wu, Yican

    2014-06-01

    Human computational phantoms have been used extensively for scientific experimental analysis and experimental simulation. This article presents a method for reconstructing human geometry from a series of segmented images of a Chinese visible human dataset. The resulting phantom geometry describes the detailed structure of each organ and can be converted into input files for Monte Carlo codes for dose calculation. A whole-body computational phantom of a Chinese adult female, named Rad-HUMAN, has been established by the FDS Team, comprising about 28.8 billion voxels. For convenient processing, the organs in the images were segmented with distinct RGB colors and the voxels were assigned positions within the dataset. For refinement, the positions were first sampled. Although the large numbers of voxels inside an organ are three-dimensionally adjacent, no thorough merging method existed to reduce the number of cells needed to describe the organ. In this study, the voxels on the organ surface were included in the merging, which produces fewer cells per organ, and an index-based sorting algorithm was introduced to speed up the merging. Finally, Rad-HUMAN, which includes a total of 46 organs and tissues, was described by cuboids in the Monte Carlo geometry for the simulation. The Monte Carlo geometry was constructed directly from the segmented images and the voxels were merged exhaustively. Each organ geometry model was constructed without ambiguity or self-intersection, and its geometry information represents the accurate appearance and precise interior structure of the organs. The constructed geometry, which largely retains the original shape of the organs, can easily be written to the input files of different Monte Carlo codes such as MCNP. Its universality was testified and its high performance was experimentally verified.
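
    The core idea of merging adjacent voxels into fewer cells can be sketched with a simple greedy run-length merge along one axis. This is only a toy illustration of the principle (the paper's surface-aware merging and index-based sorting are more involved); the data layout is an assumption.

    ```python
    def merge_runs(voxels):
        """Greedy run-length merge of unit voxels into boxes along the x axis.

        voxels: set of integer (x, y, z) coordinates belonging to one organ.
        Returns a list of boxes ((x0, x1), y, z) covering the same voxels,
        typically with far fewer cells than the raw voxel count.
        """
        boxes = []
        remaining = set(voxels)
        for v in sorted(voxels):
            if v not in remaining:       # already absorbed into an earlier run
                continue
            x, y, z = v
            x1 = x
            while (x1 + 1, y, z) in remaining:   # extend the run along +x
                x1 += 1
            for xi in range(x, x1 + 1):
                remaining.discard((xi, y, z))
            boxes.append(((x, x1), y, z))
        return boxes
    ```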

  16. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    EPA Science Inventory

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

  17. Stereology of backscatter electron images of etched surfaces for characterization of particle size distributions and volume fractions: Estimation of imaging bias via Monte Carlo simulations

    SciTech Connect

    Payton, E.J.; Mills, M.J.

    2011-06-15

    On metallic specimens in which a secondary phase has been selectively removed by a chemical etchant, the use of backscatter electron (BSE) imaging yields images that are more readily segmented with image processing algorithms than other modes of imaging in the scanning electron microscope. The contrast mechanisms in this imaging mode, however, produce a bias in the observation of particle sizes and volume fractions due to the effects of the electron interaction volume in the specimen. This stereological bias is quantified using Monte Carlo (MC) simulation of backscatter images. It is observed that the overprojection of features with centroids residing beneath the plane of polish is largely canceled out by the reduced segmentation size of features with centroids residing above the plane of polish. Research Highlights: Backscatter imaging of selectively-etched surfaces can facilitate segmentation. Backscatter imaging of voids is simulated to estimate imaging/observation biases. The biases are quantified and incorporated into the stereological calculation. Systematic errors and imaging biases are observed to counteract one another. Results are illustrated using a bimodal gamma prime distribution in a Ni superalloy.

  18. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the

  19. A Preliminary Study of In-House Monte Carlo Simulations: An Integrated Monte Carlo Verification System

    SciTech Connect

    Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hidek; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki

    2009-10-01

    Purpose: To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. Methods and Materials: The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. Results: The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Conclusions: Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.

  20. Electron transport in magnetrons by a posteriori Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Costin, C.; Minea, T. M.; Popa, G.

    2014-02-01

    Electron transport across magnetic barriers is crucial in all magnetized plasmas. It governs not only the plasma parameters in the volume, but also the fluxes of charged particles towards the electrodes and walls. It is particularly important in high-power impulse magnetron sputtering (HiPIMS) reactors, influencing the quality of the deposited thin films, since this type of discharge is characterized by an increased ionization fraction of the sputtered material. Transport coefficients of electron clouds released both from the cathode and from several locations in the discharge volume are calculated for a HiPIMS discharge with pre-ionization operated in argon at 0.67 Pa and for very short pulses (a few µs) using the a posteriori Monte Carlo simulation technique. For this type of discharge, electron transport is characterized by strong temporal and spatial dependence. Both the drift velocity and the diffusion coefficient depend on the releasing position of the electron cloud. They exhibit minimum values at the centre of the race-track for the secondary electrons released from the cathode. The diffusion coefficient of the same electrons increases by a factor of 2 to 4 when the cathode voltage is doubled, in the first 1.5 µs of the pulse. These parameters are discussed with respect to empirical Bohm diffusion.
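
    Transport coefficients of a particle cloud can be estimated from the evolution of its first and second moments. A minimal one-dimensional sketch, assuming positions of the cloud are known at two times `dt` apart (the function name and setup are illustrative, not the paper's method):

    ```python
    import numpy as np

    def transport_coefficients(positions_t0, positions_t1, dt):
        """Drift velocity and diffusion coefficient of a 1-D particle cloud.

        drift     = rate of change of the mean position,
        diffusion = half the rate of change of the position variance.
        """
        drift = (positions_t1.mean() - positions_t0.mean()) / dt
        diffusion = (positions_t1.var() - positions_t0.var()) / (2.0 * dt)
        return drift, diffusion
    ```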

  1. A Monte Carlo simulation approach for flood risk assessment

    NASA Astrophysics Data System (ADS)

    Agili, Hachem; Chokmani, Karem; Oubennaceur, Khalid; Poulin, Jimmy; Marceau, Pascal

    2016-04-01

    Floods are the most frequent and the most damaging natural disaster in Canada. Assessing and managing the risk related to this disaster has become increasingly crucial for both local and national authorities. Brigham, a municipality in southern Quebec Province, is one of the regions most heavily affected, with overflows of the Yamaska River occurring two to three times per year. Since Hurricane Irene hit the region in 2011, causing considerable socio-economic damage, implementing mitigation measures has become a major priority for this municipality, and a preliminary study evaluating the risk to which the region is exposed is essential. Conventionally, approaches based only on characterization of the hazard (e.g. floodplain extent, flood depth) are adopted to study flood risk. In order to improve knowledge of this risk, a Monte Carlo simulation approach has been developed that combines information on the hazard with vulnerability-related aspects of buildings. This approach integrates three main components: hydrological modeling through flow-probability functions, hydraulic modeling using flow-submersion height functions, and the study of building damage based on damage functions adapted to Quebec housing. Applying this approach yields an estimate of the average annual cost of flood damage to buildings. The results will help local authorities support their decisions on risk management and prevention against this disaster.
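
    The three-component chain described above (flow probability, flow-to-depth, depth-to-damage) composes naturally into a Monte Carlo estimate of expected annual damage. The sampler and the two transfer functions below are illustrative placeholders, not the study's calibrated models:

    ```python
    import random

    def annual_average_damage(n_years, flow_sampler, depth_fn, damage_fn, rng):
        """Monte Carlo estimate of the expected annual flood damage cost.

        flow_sampler(rng) draws an annual peak flow; depth_fn maps flow to
        submersion depth at a building; damage_fn maps depth to cost.
        """
        total = 0.0
        for _ in range(n_years):
            total += damage_fn(depth_fn(flow_sampler(rng)))
        return total / n_years

    # Illustrative stand-ins for the three components (hypothetical forms):
    flow_sampler = lambda r: r.expovariate(1.0 / 100.0)   # annual peak flow
    depth_fn = lambda q: max(0.0, (q - 150.0) / 50.0)     # flow -> depth (m)
    damage_fn = lambda h: 1000.0 * h                      # depth -> cost
    ```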

  2. A Monte Carlo paradigm for capillarity in porous media

    SciTech Connect

    Lu, Ning; Zeidman, Benjamin D.; Lusk, Mark T.; Willson, Clinton S.; Wu, David T.

    2011-08-09

    Wet porous media are ubiquitous in nature as soils, rocks, plants, and bones, and in engineering settings such as oil production, ground stability, filtration and composites. Their physical and chemical behavior is governed by the distribution of liquid and interfaces between phases. Characterization of the interfacial distribution is mostly based on macroscopic experiments, aided by empirical formulae. We present an alternative computational paradigm utilizing a Monte Carlo algorithm to simulate interfaces in complex realistic pore geometries. The method agrees with analytical solutions available only for idealized pore geometries, and is in quantitative agreement with Micro X-ray Computed Tomography (microXCT), capillary pressure, and interfacial area measurements for natural soils. We demonstrate that this methodology predicts macroscopic properties such as the capillary pressure and air-liquid interface area versus liquid saturation based only on the pore size information from microXCT images and interfacial interaction energies. The generality of this method should allow simulation of capillarity in many porous materials.

  3. Treatment planning aspects and Monte Carlo methods in proton therapy

    NASA Astrophysics Data System (ADS)

    Fix, Michael K.; Manser, Peter

    2015-05-01

    In recent years, interest in proton radiotherapy has been increasing rapidly. Protons provide superior physical properties compared with conventional radiotherapy using photons. These properties result in depth dose curves with a large dose peak at the end of the proton track, and the finite proton range allows sparing the distally located healthy tissue. These properties offer increased flexibility in proton radiotherapy, but also increase the demand for accurate dose estimation. To carry out accurate dose calculations, first an accurate and detailed characterization of the physical proton beam exiting the treatment head is necessary for both currently available delivery techniques: scattered and scanned proton beams. Since Monte Carlo (MC) methods follow the particle track simulating the interactions from first principles, this technique is perfectly suited to accurately model the treatment head. Nevertheless, careful validation of these MC models is necessary. While pencil beam algorithms provide the advantage of fast computations for dose estimation, they are limited in accuracy. In contrast, MC dose calculation algorithms overcome these limitations, and thanks to recent improvements in efficiency, these algorithms are expected to improve the accuracy of the calculated dose distributions and to be introduced into clinical routine in the near future.

  4. A Monte Carlo paradigm for capillarity in porous media

    NASA Astrophysics Data System (ADS)

    Lu, Ning; Zeidman, Benjamin D.; Lusk, Mark T.; Willson, Clinton S.; Wu, David T.

    2010-12-01

    Wet porous media are ubiquitous in nature as soils, rocks, plants, and bones, and in engineering settings such as oil production, ground stability, filtration and composites. Their physical and chemical behavior is governed by the distribution of liquid and interfaces between phases. Characterization of the interfacial distribution is mostly based on macroscopic experiments, aided by empirical formulae. We present an alternative computational paradigm utilizing a Monte Carlo algorithm to simulate interfaces in complex realistic pore geometries. The method agrees with analytical solutions available only for idealized pore geometries, and is in quantitative agreement with Micro X-ray Computed Tomography (microXCT), capillary pressure, and interfacial area measurements for natural soils. We demonstrate that this methodology predicts macroscopic properties such as the capillary pressure and air-liquid interface area versus liquid saturation based only on the pore size information from microXCT images and interfacial interaction energies. The generality of this method should allow simulation of capillarity in many porous materials.

  5. An efficient approach to ab initio Monte Carlo simulation

    SciTech Connect

    Leiding, Jeff; Coe, Joshua D.

    2014-01-21

    We present a Nested Markov chain Monte Carlo (NMC) scheme for building equilibrium averages based on accurate potentials such as density functional theory. Metropolis sampling of a reference system, defined by an inexpensive but approximate potential, was used to substantially decorrelate configurations at which the potential of interest was evaluated, thereby dramatically reducing the number needed to build ensemble averages at a given level of precision. The efficiency of this procedure was maximized on-the-fly through variation of the reference system thermodynamic state (characterized here by its inverse temperature β⁰), which was otherwise unconstrained. Local density approximation results are presented for shocked states of argon at pressures from 4 to 60 GPa, where—depending on the quality of the reference system potential—acceptance probabilities were enhanced by factors of 1.2–28 relative to unoptimized NMC. The optimization procedure compensated strongly for reference potential shortcomings, as evidenced by significantly higher speedups when using a reference potential of lower quality. The efficiency of optimized NMC is shown to be competitive with that of standard ab initio molecular dynamics in the canonical ensemble.
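
    The nested scheme can be sketched in one dimension: a subchain samples the cheap reference potential, and its endpoint is accepted against the expensive potential using only the energy difference between the two models. This is a hedged sketch of the general NMC idea with toy potentials, not the paper's DFT workflow:

    ```python
    import math
    import random

    def nested_mc_step(x, u_cheap, u_expensive, beta, n_sub, step, rng):
        """One nested Markov chain Monte Carlo move (sketch of the idea).

        A subchain of n_sub Metropolis steps samples the cheap reference
        potential; its endpoint is then accepted or rejected against the
        expensive potential, which needs only one evaluation per
        decorrelated configuration.
        """
        y = x
        for _ in range(n_sub):                  # cheap-potential subchain
            t = y + rng.uniform(-step, step)
            if rng.random() < math.exp(min(0.0, -beta * (u_cheap(t) - u_cheap(y)))):
                y = t
        # Composite acceptance corrects the cheap sampling toward the expensive model.
        d = (u_expensive(y) - u_expensive(x)) - (u_cheap(y) - u_cheap(x))
        if rng.random() < math.exp(min(0.0, -beta * d)):
            return y
        return x
    ```

    When the cheap and expensive potentials agree closely, the composite acceptance approaches one, which is the quantity the paper's on-the-fly optimization of β⁰ is driving up.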

  6. The Monte Carlo code MCSHAPE: Main features and recent developments

    NASA Astrophysics Data System (ADS)

    Scot, Viviana; Fernandez, Jorge E.

    2015-06-01

    MCSHAPE is a general purpose Monte Carlo code developed at the University of Bologna to simulate the diffusion of X- and gamma-ray photons with the special feature of describing the full evolution of the photon polarization state along the interactions with the target. The prevailing photon-matter interactions in the energy range 1-1000 keV, Compton and Rayleigh scattering and photoelectric effect, are considered. All the parameters that characterize the photon transport can be suitably defined: (i) the source intensity, (ii) its full polarization state as a function of energy, (iii) the number of collisions, and (iv) the energy interval and resolution of the simulation. It is possible to visualize the results for selected groups of interactions. MCSHAPE simulates the propagation in heterogeneous media of polarized photons (from synchrotron sources) or of partially polarized sources (from X-ray tubes). In this paper, the main features of MCSHAPE are illustrated with some examples and a comparison with experimental data.

  7. Efficient, Automated Monte Carlo Methods for Radiation Transport.

    PubMed

    Kong, Rong; Ambrose, Martin; Spanier, Jerome

    2008-11-20

    Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. In case still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed. PMID:23226872

  8. Monte Carlo Strategies for Selecting Parameter Values in Simulation Experiments.

    PubMed

    Leigh, Jessica W; Bryant, David

    2015-09-01

    Simulation experiments are used widely throughout evolutionary biology and bioinformatics to compare models, promote methods, and test hypotheses. The biggest practical constraint on simulation experiments is the computational demand, particularly as the number of parameters increases. Given the extraordinary success of Monte Carlo methods for conducting inference in phylogenetics, and indeed throughout the sciences, we investigate ways in which the Monte Carlo framework can be used to carry out simulation experiments more efficiently. The key idea is to sample parameter values for the experiments, rather than iterate through them exhaustively. Exhaustive analyses become completely infeasible when the number of parameters gets too large, whereas sampled approaches can fare better in higher dimensions. We illustrate the framework with applications to phylogenetics and genetic archaeology. PMID:26012871
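
    The key idea, sampling parameter combinations rather than sweeping a full grid, is simple to sketch. The function below is an illustration of the principle only, with hypothetical names; a k-point grid over d parameters costs k**d runs, while a sampled design costs a fixed n:

    ```python
    import random

    def sample_parameters(ranges, n, rng):
        """Draw n parameter combinations uniformly at random from the given
        (low, high) ranges, instead of sweeping a full grid whose size
        grows as k**d in the number of parameters d."""
        return [tuple(rng.uniform(lo, hi) for lo, hi in ranges)
                for _ in range(n)]
    ```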

  9. Sign problem and Monte Carlo calculations beyond Lefschetz thimbles

    DOE PAGES

    Alexandru, Andrei; Basar, Gokce; Bedaque, Paulo F.; Ridgway, Gregory W.; Warrington, Neill C.

    2016-05-10

    We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action (“Lefschetz thimble”). We describe a family of such manifolds that interpolate between the tangent space at one critical point (where the sign problem is milder compared to the real plane but in some cases still severe) and the union of relevant thimbles (where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling). As a result, we exemplify this approach using a simple 0+1 dimensional fermion model previously used in sign problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.

  10. Advanced interacting sequential Monte Carlo sampling for inverse scattering

    NASA Astrophysics Data System (ADS)

    Giraud, F.; Minvielle, P.; Del Moral, P.

    2013-09-01

    The following electromagnetism (EM) inverse problem is addressed. It consists in estimating the local radioelectric properties of materials covering an object from global EM scattering measurements, at various incidences and wave frequencies. This large scale ill-posed inverse problem is explored by an intensive exploitation of an efficient 2D Maxwell solver, distributed on high performance computing machines. Applied to a large training data set, a statistical analysis reduces the problem to a simpler probabilistic metamodel, from which Bayesian inference can be performed. Considering the radioelectric properties as a hidden dynamic stochastic process that evolves according to the frequency, it is shown how advanced Markov chain Monte Carlo methods—called sequential Monte Carlo or interacting particles—can take advantage of the structure and provide local EM property estimates.

  11. Mesh-based weight window approach for Monte Carlo simulation

    SciTech Connect

    Liu, L.; Gardner, R.P.

    1997-12-01

    The Monte Carlo method has been increasingly used to solve particle transport problems. Statistical fluctuation from random sampling is the major limiting factor of its application. To obtain the desired precision, variance reduction techniques are indispensable for most practical problems. Among various variance reduction techniques, the weight window method proves to be one of the most general, powerful, and robust. The method is implemented in the current MCNP code. An importance map is estimated during a regular Monte Carlo run, and then the map is used in the subsequent run for splitting and Russian roulette games. The major drawback of this weight window method is its lack of user-friendliness. It normally requires that users divide the large geometric cells into smaller ones by introducing additional surfaces to ensure an acceptable spatial resolution of the importance map. In this paper, we present a new weight window approach to overcome this drawback.
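
    The splitting and Russian roulette games at the heart of the weight window technique can be sketched as follows. This is a generic illustration of the standard game, not MCNP's implementation; note that both branches preserve the expected weight:

    ```python
    import random

    def apply_weight_window(weight, w_low, w_high, rng):
        """Split or roulette a particle so surviving weights land in the
        window [w_low, w_high]. Returns the list of replacement weights
        (possibly empty if the particle is killed)."""
        if weight > w_high:                    # split into m copies
            m = int(weight / w_high) + 1
            return [weight / m] * m
        if weight < w_low:                     # Russian roulette
            survival = weight / w_low
            if rng.random() < survival:
                return [w_low]                 # survivor carries weight w_low
            return []                          # killed; expectation preserved
        return [weight]                        # already inside the window
    ```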

  12. Monte Carlo Strategies for Selecting Parameter Values in Simulation Experiments.

    PubMed

    Leigh, Jessica W; Bryant, David

    2015-09-01

    Simulation experiments are used widely throughout evolutionary biology and bioinformatics to compare models, promote methods, and test hypotheses. The biggest practical constraint on simulation experiments is the computational demand, particularly as the number of parameters increases. Given the extraordinary success of Monte Carlo methods for conducting inference in phylogenetics, and indeed throughout the sciences, we investigate ways in which the Monte Carlo framework can be used to carry out simulation experiments more efficiently. The key idea is to sample parameter values for the experiments, rather than iterate through them exhaustively. Exhaustive analyses become completely infeasible when the number of parameters gets too large, whereas sampled approaches can fare better in higher dimensions. We illustrate the framework with applications to phylogenetics and genetic archaeology.

  13. Estimation of beryllium ground state energy by Monte Carlo simulation

    SciTech Connect

    Kabir, K. M. Ariful; Halder, Amal

    2015-05-15

    Quantum Monte Carlo methods represent a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schrödinger equation for atoms, molecules, solids, and a variety of model systems. Using the variational Monte Carlo method we have calculated the ground-state energy of the beryllium atom. Our calculations are based on a modified four-parameter trial wave function, which leads to better results than the few-parameter trial wave functions presented before. Based on random numbers we can generate a large sample of electron locations to estimate the ground-state energy of beryllium. Our calculation gives a good estimate of the ground-state energy of the beryllium atom compared with the corresponding exact data.
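
    The variational Monte Carlo procedure is easiest to see on a one-electron system. The sketch below uses hydrogen with the trial function psi = exp(-alpha*r), in atomic units, as a toy stand-in for the four-parameter beryllium calculation; the local energy for this trial function is E_L = -alpha^2/2 + (alpha - 1)/r:

    ```python
    import math
    import random

    def vmc_energy(alpha, n_samples, rng, step=0.5):
        """Variational Monte Carlo energy estimate for hydrogen with the
        trial function psi = exp(-alpha * r) (atomic units).

        Metropolis sampling targets the density |psi|^2 = exp(-2 alpha r);
        the energy is the average of the local energy over the walk.
        """
        pos = [1.0, 0.0, 0.0]
        rad = lambda v: math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
        e_sum = 0.0
        for _ in range(n_samples):
            trial = [c + rng.uniform(-step, step) for c in pos]
            if rng.random() < math.exp(min(0.0, -2.0 * alpha * (rad(trial) - rad(pos)))):
                pos = trial
            e_sum += -0.5 * alpha ** 2 + (alpha - 1.0) / rad(pos)
        return e_sum / n_samples
    ```

    At alpha = 1 the trial function is exact and the local energy is constant at -0.5 hartree; any other alpha gives a higher average, which is the variational principle the parameter optimization exploits.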

  14. Bayesian Monte Carlo method for nuclear data evaluation

    NASA Astrophysics Data System (ADS)

    Koning, A. J.

    2015-12-01

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which makes it possible to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various different schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by the EXFOR-based weight.
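
    A common way to realize such experiment-based weighting, sketched here under the assumption of Gaussian likelihoods (the abstract does not give the exact weighting formula), is to weight each random model file by exp(-chi2/2) against the data:

    ```python
    import math

    def bmc_weights(chi2_values):
        """Normalized Bayesian Monte Carlo weights: random file k gets
        weight proportional to exp(-chi2_k / 2) against the experimental
        data. The minimum chi2 is subtracted for numerical stability."""
        m = min(chi2_values)
        w = [math.exp(-(c - m) / 2.0) for c in chi2_values]
        s = sum(w)
        return [x / s for x in w]
    ```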

  15. Nuclear pairing within a configuration-space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Lingle, Mark; Volya, Alexander

    2015-06-01

    Pairing correlations in nuclei play a decisive role in determining nuclear drip lines, binding energies, and many collective properties. In this work a new configuration-space Monte Carlo (CSMC) method for treating nuclear pairing correlations is developed, implemented, and demonstrated. In CSMC the Hamiltonian matrix is stochastically generated in Krylov subspace, resulting in the Monte Carlo version of Lanczos-like diagonalization. The advantages of this approach over other techniques are discussed; the absence of the fermionic sign problem, probabilistic interpretation of quantum-mechanical amplitudes, and ability to handle truly large-scale problems with defined precision and error control are noteworthy merits of CSMC. The features of our CSMC approach are shown using models and realistic examples. Special attention is given to difficult limits: situations with nonconstant pairing strengths, cases with nearly degenerate excited states, limits when pairing correlations in finite systems are weak, and problems when the relevant configuration space is large.

  16. Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds

    NASA Astrophysics Data System (ADS)

    Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.

    2012-11-01

    A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on-the-fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.

  17. Monte Carlo Methods in ICF (LIRPP Vol. 13)

    NASA Astrophysics Data System (ADS)

    Zimmerman, George B.

    2016-10-01

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  18. Monte Carlo Simulations of Arterial Imaging with Optical Coherence Tomography

    SciTech Connect

    Amendt, P.; Estabrook, K.; Everett, M.; London, R.A.; Maitland, D.; Zimmerman, G.; Colston, B.; da Silva, L.; Sathyam, U.

    2000-02-01

The laser-tissue interaction code LATIS [London et al., Appl. Optics 36, 9068 (1998)] is used to analyze photon scattering histories representative of an optical coherence tomography (OCT) experiment performed at Lawrence Livermore National Laboratory. Monte Carlo photonics with Henyey-Greenstein anisotropic scattering is implemented and used to simulate signal discrimination of intravascular structure. An analytic model is developed and used to obtain a scaling law relation for optimization of the OCT signal and to validate the Monte Carlo photonics. The appropriateness of the Henyey-Greenstein phase function is studied by direct comparison with more detailed Mie scattering theory using an ensemble of spherical dielectric scatterers. Modest differences are found between the two prescriptions for describing photon angular scattering in tissue. In particular, the Mie scattering phase functions provide less overall reflectance signal but more signal contrast compared to the Henyey-Greenstein formulation.
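
    The Henyey-Greenstein phase function used above has a closed-form inverse CDF, which is what makes it cheap to sample in Monte Carlo photonics codes. A minimal sketch, not the LATIS implementation; the anisotropy value g = 0.9 is just a typical tissue-like choice:

```python
import math
import random

def sample_hg_cos_theta(g, rng=random):
    """Sample cos(theta) from the Henyey-Greenstein phase function
    with anisotropy factor g, using the standard inverse-CDF formula."""
    xi = rng.random()
    if abs(g) < 1e-6:
        # Isotropic limit: cos(theta) is uniform on [-1, 1].
        return 2.0 * xi - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

# Sanity check: the mean of cos(theta) over many samples approaches g.
random.seed(0)
g = 0.9  # forward-peaked, tissue-like
n = 200_000
mean_cos = sum(sample_hg_cos_theta(g) for _ in range(n)) / n
```

    The mean scattering cosine equals g by construction, which is the usual consistency test for an HG sampler.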

  19. Monte Carlo Simulations on a 9-node PC Cluster

    NASA Astrophysics Data System (ADS)

    Gouriou, J.

Monte Carlo simulation methods are frequently used in the fields of medical physics, dosimetry and metrology of ionising radiation. Nevertheless, the main drawback of this technique is that it is computationally slow, because the statistical uncertainty of the result improves only as the square root of the computational time. We present a method which reduces the effective running time by a factor of 10 to 20. In practice, the aim was to reduce the calculation time in the LNHB metrological applications from several weeks to a few days. This approach uses a PC cluster running the Linux operating system and the PVM parallel library (version 3.4). The Monte Carlo codes EGS4, MCNP and PENELOPE have been implemented on this platform, with the latter two adapted to run under the PVM environment. The maximum observed speedup ranges from a factor of 13 to 18, depending on the code and the problem simulated.

  20. Analytical band Monte Carlo analysis of electron transport in silicene

    NASA Astrophysics Data System (ADS)

    Yeoh, K. H.; Ong, D. S.; Ooi, C. H. Raymond; Yong, T. K.; Lim, S. K.

    2016-06-01

An analytical band Monte Carlo (AMC) model with linear energy band dispersion has been developed to study electron transport in suspended silicene and in silicene on an aluminium oxide (Al2O3) substrate. We have calibrated our model against full band Monte Carlo (FMC) results by matching the velocity-field curve. Using this model, we find that the collective effects of charge impurity scattering and surface optical phonon scattering can degrade the electron mobility down to about 400 cm2 V-1 s-1, below which it becomes less sensitive to changes in the substrate charge impurities and surface optical phonons. We also find that the further reduction of mobility to ~100 cm2 V-1 s-1, as experimentally demonstrated by Tao et al (2015 Nat. Nanotechnol. 10 227), can only be explained by the renormalization of the Fermi velocity due to interaction with the Al2O3 substrate.

  1. Monte Carlo Study of Real Time Dynamics on the Lattice

    NASA Astrophysics Data System (ADS)

    Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo F.; Vartak, Sohan; Warrington, Neill C.

    2016-08-01

    Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from a highly oscillatory phase of the path integral. In this Letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and, in principle, applicable to quantum field theory albeit very slow. We discuss some possible improvements that should speed up the algorithm.

  2. Computer Monte Carlo simulation in quantitative resource estimation

    USGS Publications Warehouse

    Root, D.H.; Menzie, W.D.; Scott, W.A.

    1992-01-01

The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. © 1992 Oxford University Press.
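
    Step (3) can be sketched as a small simulation. This is a hedged illustration, not the USGS program: the deposit-count probabilities and the lognormal tonnage and grade parameters below are hypothetical, and `rho` stands in for the grade-tonnage dependency the abstract emphasizes:

```python
import math
import random

def simulate_contained_metal(n_trials, deposit_count_pmf,
                             ln_tonnage_mu, ln_tonnage_sigma,
                             ln_grade_mu, ln_grade_sigma,
                             rho=0.0, rng=random):
    """Each trial draws a number of deposits, then a correlated lognormal
    (tonnage, grade) pair per deposit, and sums tonnage * grade to get the
    contained metal for that trial.  Returns the list of trial totals."""
    counts, probs = zip(*deposit_count_pmf)
    totals = []
    for _ in range(n_trials):
        n_dep = rng.choices(counts, probs)[0]
        total = 0.0
        for _ in range(n_dep):
            z1 = rng.gauss(0.0, 1.0)
            # Correlate grade with tonnage via a shared normal deviate.
            z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
            tonnage = math.exp(ln_tonnage_mu + ln_tonnage_sigma * z1)
            grade = math.exp(ln_grade_mu + ln_grade_sigma * z2)
            total += tonnage * grade
        totals.append(total)
    return totals

# Hypothetical inputs: 0-3 deposits, median tonnage 1e6 t, median grade 1%.
random.seed(3)
totals = simulate_contained_metal(
    5000, [(0, 0.4), (1, 0.3), (2, 0.2), (3, 0.1)],
    ln_tonnage_mu=math.log(1e6), ln_tonnage_sigma=1.0,
    ln_grade_mu=math.log(0.01), ln_grade_sigma=0.5, rho=-0.3)
```

    Sorting `totals` gives the empirical probability distribution of contained metal that the paper's step (3) produces.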

  3. The MCLIB library: Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.

    1995-09-01

Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, a neutron is first selected from the source distribution and projected through the instrument, using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something; if it reaches the detector, it is tallied in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (though it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for the design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.
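
    The integrate-by-averaging procedure described in the opening sentences can be sketched in a few lines (a generic illustration, not MCLIB code):

```python
import random

def mc_integrate(f, bounds, n_samples, rng=random):
    """Monte Carlo estimate of the integral of f over a box.
    bounds is a list of (lo, hi) pairs, one per variable: random numbers
    select a value for each variable, the integrand is evaluated, and the
    results are averaged and scaled by the box volume."""
    volume = 1.0
    for lo, hi in bounds:
        volume *= hi - lo
    total = 0.0
    for _ in range(n_samples):
        point = [lo + (hi - lo) * rng.random() for lo, hi in bounds]
        total += f(point)
    return volume * total / n_samples

# Example: integrate x*y over the unit square (exact value 1/4).
random.seed(1)
estimate = mc_integrate(lambda p: p[0] * p[1],
                        [(0.0, 1.0), (0.0, 1.0)], 100_000)
```

    The statistical error shrinks as the square root of `n_samples`, which is the slow convergence the surrounding abstracts keep trying to mitigate.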

  4. Radiotherapy Monte Carlo simulation using cloud computing technology.

    PubMed

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-12-01

Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation, without the need for dedicated local computer hardware, as a proof of principle.
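
    The cost claim can be made concrete under one assumption not spelled out in the abstract: that machines are billed per whole machine-hour. Splitting T machine-hours of work over n machines then costs n * ceil(T/n) machine-hours, which equals T exactly when n divides T. A minimal sketch of that arithmetic:

```python
import math

def relative_cost(total_hours, n_machines):
    """Relative cost of splitting a job needing `total_hours` of
    single-machine compute across n machines billed per whole
    machine-hour.  1.0 means no money wasted on idle part-hours."""
    wall_hours = math.ceil(total_hours / n_machines)
    return n_machines * wall_hours / total_hours

# A 24-hour simulation: cost is optimal (1.0) exactly when n divides 24.
for n in (6, 8, 12, 24):
    assert relative_cost(24, n) == 1.0
# A non-factor count pays for idle part-hours: 5 machines each run
# ceil(24/5) = 5 billed hours, so 25 machine-hours pay for 24 of work.
```

    Completion time still falls as 1/n in every case; only the billed cost depends on whether n is a factor of the total hours.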

  5. Minimising biases in full configuration interaction quantum Monte Carlo.

    PubMed

    Vigor, W A; Spencer, J S; Bearpark, M J; Thom, A J W

    2015-03-14

We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is a Markov chain in its present form. We construct the Markov matrix of FCIQMC for a two-determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups to, in general, minimise the bias. We show that a reweighting scheme to remove the bias caused by population control, commonly used in diffusion Monte Carlo [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)], is effective and recommend its use as a post-processing step. PMID:25770522

  6. Quantum Monte Carlo calculations with chiral effective field theory interactions.

    PubMed

    Gezerlis, A; Tews, I; Epelbaum, E; Gandolfi, S; Hebeler, K; Nogga, A; Schwenk, A

    2013-07-19

    We present the first quantum Monte Carlo (QMC) calculations with chiral effective field theory (EFT) interactions. To achieve this, we remove all sources of nonlocality, which hamper the inclusion in QMC calculations, in nuclear forces to next-to-next-to-leading order. We perform auxiliary-field diffusion Monte Carlo (AFDMC) calculations for the neutron matter energy up to saturation density based on local leading-order, next-to-leading order, and next-to-next-to-leading order nucleon-nucleon interactions. Our results exhibit a systematic order-by-order convergence in chiral EFT and provide nonperturbative benchmarks with theoretical uncertainties. For the softer interactions, perturbative calculations are in excellent agreement with the AFDMC results. This work paves the way for QMC calculations with systematic chiral EFT interactions for nuclei and nuclear matter, for testing the perturbativeness of different orders, and allows for matching to lattice QCD results by varying the pion mass.

  7. Large-cell Monte Carlo renormalization of irreversible growth processes

    NASA Technical Reports Server (NTRS)

    Nakanishi, H.; Family, F.

    1985-01-01

Monte Carlo sampling is applied to a recently formulated direct-cell renormalization method for irreversible, disorderly growth processes. Large-cell Monte Carlo renormalization is carried out for various nonequilibrium problems based on the formulation dealing with relative probabilities. Specifically, the method is demonstrated by application to the 'true' self-avoiding walk and the Eden model of growing animals for d = 2, 3, and 4 and to the invasion percolation problem for d = 2 and 3. The results are asymptotically in agreement with expectations; however, unexpected complications arise, suggesting the possibility of crossovers, and in any case demonstrating the danger of using small cells alone, because of the very slow convergence as the cell size b is extrapolated to infinity. The difficulty of applying the present method to the diffusion-limited-aggregation model is commented on.

  8. Five dimensional binary hard hypersphere mixtures: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Bishop, Marvin; Whitlock, Paula A.

    2016-10-01

Additive binary mixtures of five-dimensional hyperspheres were investigated by Monte Carlo simulations. Both equal packing fraction and equal mole fraction systems with diameter ratios of 0.4 and 0.5 were examined. A range of total densities was studied, spanning low- to moderate-density fluids. The pair correlation functions and the equations of state were determined and compared with molecular dynamics data and a variety of theoretical predictions. A significant result of the equal packing fraction simulations was the discovery of how quickly the larger hyperspheres reorganized into a dense fluid after a random initial placement. In the equal mole fraction case, the pair correlation functions for the larger hyperspheres agree with the pair correlation function of a pure fluid at an appropriately scaled density. The theoretical results for the equation of state compare well to the Monte Carlo calculations for all but the highest densities studied.

  9. Multilayer adsorption by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Molina-Mateo, J.; Salmerón Sánchez, M.; Monleón Pradas, M.; Torregrosa Cabanilles, C.

    2012-10-01

Adsorption phenomena are characterized by models whose free parameters are adjusted to reproduce experimental results. In order to understand the relationship between the model parameters and the material properties, the adsorption of small molecules on a crystalline plane surface has been simulated using the bond fluctuation model. A direct comparison between the Guggenheim-Anderson-de Boer (GAB) model for multilayer adsorption and computer simulations allowed us to establish correlations between the adsorption model parameters and the simulated interaction potentials.

  10. Markov chain Monte Carlo posterior sampling with the Hamiltonian method.

    SciTech Connect

    Hanson, Kenneth M.

    2001-01-01

A major advantage of Bayesian data analysis is that it provides a characterization of the uncertainty in the model parameters estimated from a given set of measurements, in the form of a posterior probability distribution. When the analysis involves a complicated physical phenomenon, the posterior may not be available in analytic form, but only calculable by means of a simulation code. In such cases, the uncertainty in inferred model parameters requires characterization of a calculated functional. An appealing way to explore the posterior, and hence characterize the uncertainty, is to employ the Markov chain Monte Carlo (MCMC) technique. The goal of MCMC is to generate a random sequence of parameter samples x from a target pdf (probability density function), {pi}(x). In Bayesian analysis, this sequence corresponds to a set of model realizations that follow the posterior distribution. There are two basic MCMC techniques. In Gibbs sampling, typically one parameter is drawn from the conditional pdf at a time, holding all others fixed. In the Metropolis algorithm, all the parameters can be varied at once: the parameter vector is perturbed from the current sequence point by adding a trial step drawn randomly from a symmetric pdf, and the trial position is either accepted or rejected on the basis of the probability at the trial position relative to the current one. The Metropolis algorithm is often employed because of its simplicity. The aim of this work is to develop MCMC methods that are useful for large numbers of parameters, n, say hundreds or more. In this regime the Metropolis algorithm can be unsuitable, because its efficiency drops as 0.3/n. The efficiency is defined as the reciprocal of the number of steps in the sequence needed to effectively provide a statistically independent sample from {pi}.
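
    The Metropolis procedure described above (a symmetric trial step on the whole parameter vector, accepted or rejected on the probability ratio) can be sketched as follows; the Gaussian target and the step size are illustrative choices, not from the paper:

```python
import math
import random

def metropolis(log_pdf, x0, step, n_steps, rng=random):
    """Random-walk Metropolis: perturb all parameters at once with a
    symmetric uniform trial step and accept with probability
    min(1, pi(trial) / pi(current)), working in log space."""
    x = list(x0)
    chain = []
    for _ in range(n_steps):
        trial = [xi + step * (2.0 * rng.random() - 1.0) for xi in x]
        if math.log(rng.random()) < log_pdf(trial) - log_pdf(x):
            x = trial                      # accept the trial position
        chain.append(list(x))              # rejected steps repeat x
    return chain

# Illustrative target: a standard normal in two parameters.
random.seed(2)
log_pdf = lambda x: -0.5 * sum(xi * xi for xi in x)
chain = metropolis(log_pdf, [0.0, 0.0], 1.0, 50_000)
```

    The chain's sample mean and variance should approach those of the target, which is the usual check before trusting the sampler on a real posterior.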

  11. Recent advances in the Mercury Monte Carlo particle transport code

    SciTech Connect

    Brantley, P. S.; Dawson, S. A.; McKinley, M. S.; O'Brien, M. J.; Stevens, D. E.; Beck, B. R.; Jurgenson, E. D.; Ebbers, C. A.; Hall, J. M.

    2013-07-01

    We review recent physics and computational science advances in the Mercury Monte Carlo particle transport code under development at Lawrence Livermore National Laboratory. We describe recent efforts to enable a nuclear resonance fluorescence capability in the Mercury photon transport. We also describe recent work to implement a probability of extinction capability into Mercury. We review the results of current parallel scaling and threading efforts that enable the code to run on millions of MPI processes. (authors)

  12. Quantum Monte Carlo Calculations of Symmetric Nuclear Matter

    SciTech Connect

    Gandolfi, Stefano; Pederiva, Francesco; Fantoni, Stefano; Schmidt, Kevin E.

    2007-03-09

    We present an accurate numerical study of the equation of state of nuclear matter based on realistic nucleon-nucleon interactions by means of auxiliary field diffusion Monte Carlo (AFDMC) calculations. The AFDMC method samples the spin and isospin degrees of freedom allowing for quantum simulations of large nucleonic systems and represents an important step forward towards a quantitative understanding of problems in nuclear structure and astrophysics.

  13. Application of MINERVA Monte Carlo simulations to targeted radionuclide therapy.

    PubMed

    Descalle, Marie-Anne; Hartmann Siantar, Christine L; Dauffy, Lucile; Nigg, David W; Wemple, Charles A; Yuan, Aina; DeNardo, Gerald L

    2003-02-01

    Recent clinical results have demonstrated the promise of targeted radionuclide therapy for advanced cancer. As the success of this emerging form of radiation therapy grows, accurate treatment planning and radiation dose simulations are likely to become increasingly important. To address this need, we have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA system. The goal of the MINERVA dose calculation system is to provide 3-D Monte Carlo simulation-based dosimetry for radiation therapy, focusing on experimental and emerging applications. For molecular targeted radionuclide therapy applications, MINERVA calculates patient-specific radiation dose estimates using computed tomography to describe the patient anatomy, combined with a user-defined 3-D radiation source. This paper describes the validation of the 3-D Monte Carlo transport methods to be used in MINERVA for molecular targeted radionuclide dosimetry. It reports comparisons of MINERVA dose simulations with published absorbed fraction data for distributed, monoenergetic photon and electron sources, and for radioisotope photon emission. MINERVA simulations are generally within 2% of EGS4 results and 10% of MCNP results, but differ by up to 40% from the recommendations given in MIRD Pamphlets 3 and 8 for identical medium composition and density. For several representative source and target organs in the abdomen and thorax, specific absorbed fractions calculated with the MINERVA system are generally within 5% of those published in the revised MIRD Pamphlet 5 for 100 keV photons. However, results differ by up to 23% for the adrenal glands, the smallest of our target organs. Finally, we show examples of Monte Carlo simulations in a patient-like geometry for a source of uniform activity located in the kidney. PMID:12667310

  14. Monte Carlo calculation of patient organ doses from computed tomography.

    PubMed

    Oono, Takeshi; Araki, Fujio; Tsuduki, Shoya; Kawasaki, Keiichi

    2014-01-01

In this study, we aimed to evaluate quantitatively the patient organ dose from computed tomography (CT) using Monte Carlo calculations. A multidetector CT unit (Aquilion 16, TOSHIBA Medical Systems) was modeled with the GMctdospp (IMPS, Germany) software based on the EGSnrc Monte Carlo code. The X-ray spectrum and the configuration of the bowtie filter for the Monte Carlo modeling were determined from chamber measurements of the half-value layer (HVL) of aluminum and the dose profile (off-center ratio, OCR) in air. The calculated HVL and OCR were compared with measured values for body irradiation at 120 kVp. The Monte Carlo-calculated patient dose distribution was converted to the absorbed dose measured by a Farmer chamber with a (60)Co calibration factor at the center of a CT water phantom. The patient dose was evaluated from dose-volume histograms for the internal organs in the pelvis. The calculated Al HVL agreed within 0.3% with the measured value of 5.2 mm. The calculated dose profile in air matched the measured value within 5% over a range of 15 cm from the central axis. The mean doses for soft tissues were 23.5, 23.8, and 27.9 mGy for the prostate, rectum, and bladder, respectively, under exposure conditions of 120 kVp, 200 mA, a beam pitch of 0.938, and beam collimation of 32 mm. For the bones of the femur and pelvis, the mean doses were 56.1 and 63.6 mGy, respectively. The doses for bone were up to 2-3 times those for soft tissue, corresponding to the ratio of their mass-energy absorption coefficients.

  15. Quantum Monte Carlo calculations of symmetric nuclear matter.

    PubMed

    Gandolfi, Stefano; Pederiva, Francesco; Fantoni, Stefano; Schmidt, Kevin E

    2007-03-01

    We present an accurate numerical study of the equation of state of nuclear matter based on realistic nucleon-nucleon interactions by means of auxiliary field diffusion Monte Carlo (AFDMC) calculations. The AFDMC method samples the spin and isospin degrees of freedom allowing for quantum simulations of large nucleonic systems and represents an important step forward towards a quantitative understanding of problems in nuclear structure and astrophysics.

  16. A multicomb variance reduction scheme for Monte Carlo semiconductor simulators

    SciTech Connect

    Gray, M.G.; Booth, T.E.; Kwan, T.J.T.; Snell, C.M.

    1998-04-01

    The authors adapt a multicomb variance reduction technique used in neutral particle transport to Monte Carlo microelectronic device modeling. They implement the method in a two-dimensional (2-D) MOSFET device simulator and demonstrate its effectiveness in the study of hot electron effects. The simulations show that the statistical variance of hot electrons is significantly reduced with minimal computational cost. The method is efficient, versatile, and easy to implement in existing device simulators.
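
    A single-comb version of the population-control idea can be sketched generically. The multicomb technique the authors adapt is a generalization of this; the sketch below is an illustration of the basic comb, not their implementation:

```python
import random

def comb(weights, n_teeth, rng=random):
    """Single-comb population control: lay n_teeth equally spaced teeth,
    with one shared random offset, across the cumulative particle
    weights.  Each tooth selects the particle whose weight interval it
    falls in, and every survivor gets the same weight
    w_total / n_teeth, so the total weight is preserved exactly."""
    w_total = sum(weights)
    spacing = w_total / n_teeth
    offset = rng.random() * spacing
    survivors = []          # indices of surviving particles (may repeat)
    cum, i = 0.0, 0
    for t in range(n_teeth):
        tooth = offset + t * spacing
        while cum + weights[i] <= tooth:
            cum += weights[i]
            i += 1
        survivors.append(i)
    return survivors, spacing

# A particle carrying 3/4 of the total weight is hit by exactly 3 of the
# 4 teeth, whatever the random offset.
random.seed(6)
survivors, w_new = comb([0.5, 3.0, 0.5], 4)
```

    Because every tooth shares one random offset, the estimator stays unbiased while the spread of particle weights collapses, which is the variance-reduction effect the paper exploits.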

  17. Procedure for Adapting Direct Simulation Monte Carlo Meshes

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.

    1992-01-01

    A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.

  18. Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model

    SciTech Connect

    D.P. Stotler

    2005-06-09

    The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.

  19. Monte Carlo Simulation of Heavy Nuclei Photofission at Intermediate Energies

    SciTech Connect

    Andrade-II, E.; Freitas, E.; Garcia, F.; Tavares, O. A. P.; Duarte, S. B.

    2009-06-03

A detailed description of the photofission process at intermediate energies (200 to 1000 MeV) is presented. The study of the reaction is performed with a Monte Carlo method which allows the investigation of properties of residual and fissioning nuclei. The information obtained indicates that multifragmentation is negligible at the photon energies studied here, and that symmetric fission is dominant. Energy and mass distributions of residual and fissioning nuclei were calculated.

  20. Representation and simulation for pyrochlore lattice via Monte Carlo technique

    NASA Astrophysics Data System (ADS)

    Passos, André Luis; de Albuquerque, Douglas F.; Filho, João Batista Santos

    2016-05-01

This work presents a representation of the Kagome and pyrochlore lattices using Monte Carlo simulation, as well as some results on their critical properties. These lattices are composed of corner-sharing triangles and tetrahedra, respectively. The simulation was performed employing the Wolff cluster algorithm for the spin updates through the standard ferromagnetic Ising model. The determination of the critical temperature and exponents was based on the histogram technique and finite-size scaling theory.
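
    The Wolff cluster update used for the spin sweeps can be sketched for a plain 2-D square-lattice Ising model; this is a standard textbook illustration of the algorithm, not the authors' Kagome/pyrochlore code:

```python
import math
import random

def wolff_update(spins, L, beta, rng=random):
    """One Wolff cluster update on an L x L ferromagnetic Ising lattice
    with periodic boundaries.  Bonds between aligned neighbours join the
    cluster with probability 1 - exp(-2*beta); the whole cluster is then
    flipped, and the flip is always accepted."""
    p_add = 1.0 - math.exp(-2.0 * beta)
    seed = rng.randrange(L * L)
    cluster_spin = spins[seed]
    stack, in_cluster = [seed], {seed}
    while stack:
        site = stack.pop()
        x, y = site % L, site // L
        neighbours = (((x + 1) % L, y), ((x - 1) % L, y),
                      (x, (y + 1) % L), (x, (y - 1) % L))
        for nx, ny in neighbours:
            nbr = ny * L + nx
            if (nbr not in in_cluster and spins[nbr] == cluster_spin
                    and rng.random() < p_add):
                in_cluster.add(nbr)
                stack.append(nbr)
    for site in in_cluster:
        spins[site] = -spins[site]
    return len(in_cluster)

# Below the critical temperature (beta > beta_c ~ 0.4407) the lattice
# orders quickly even from a random start.
random.seed(4)
L = 16
spins = [random.choice((-1, 1)) for _ in range(L * L)]
for _ in range(200):
    wolff_update(spins, L, 0.6)
magnetization = abs(sum(spins)) / (L * L)
```

    Near the critical point the clusters span the lattice, which is why cluster updates beat single-spin flips for the critical-exponent measurements the abstract describes.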

  1. Monte Carlo approach to nuclei and nuclear matter

    SciTech Connect

    Fantoni, Stefano; Gandolfi, Stefano; Illarionov, Alexey Yu.; Schmidt, Kevin E.; Pederiva, Francesco

    2008-10-13

    We report on the most recent applications of the Auxiliary Field Diffusion Monte Carlo (AFDMC) method. The equation of state (EOS) for pure neutron matter in both normal and BCS phase and the superfluid gap in the low-density regime are computed, using a realistic Hamiltonian containing the Argonne AV8' plus Urbana IX three-nucleon interaction. Preliminary results for the EOS of isospin-asymmetric nuclear matter are also presented.

  2. Monte Carlo calculations for r-process nucleosynthesis

    SciTech Connect

    Mumpower, Matthew Ryan

    2015-11-12

    A Monte Carlo framework is developed for exploring the impact of nuclear model uncertainties on the formation of the heavy elements. Mass measurements tightly constrain the macroscopic sector of FRDM2012. For r-process nucleosynthesis, it is necessary to understand the microscopic physics of the nuclear model employed. A combined approach of measurements and a deeper understanding of the microphysics is thus warranted to elucidate the site of the r-process.

  3. Monte Carlo verification of gel dosimetry measurements for stereotactic radiotherapy

    NASA Astrophysics Data System (ADS)

    Kairn, T.; Taylor, M. L.; Crowe, S. B.; Dunn, L.; Franich, R. D.; Kenny, J.; Knight, R. T.; Trapp, J. V.

    2012-06-01

    The quality assurance of stereotactic radiotherapy and radiosurgery treatments requires the use of small-field dose measurements that can be experimentally challenging. This study used Monte Carlo simulations to establish that PAGAT dosimetry gel can be used to provide accurate, high-resolution, three-dimensional dose measurements of stereotactic radiotherapy fields. A small cylindrical container (4 cm height, 4.2 cm diameter) was filled with PAGAT gel, placed in the parietal region inside a CIRS head phantom and irradiated with a 12-field stereotactic radiotherapy plan. The resulting three-dimensional dose measurement was read out using an optical CT scanner and compared with the treatment planning prediction of the dose delivered to the gel during the treatment. A BEAMnrc/DOSXYZnrc simulation of this treatment was completed, to provide a standard against which the accuracy of the gel measurement could be gauged. The three-dimensional dose distributions obtained from Monte Carlo and from the gel measurement were found to be in better agreement with each other than with the dose distribution provided by the treatment planning system's pencil beam calculation. Both sets of data showed close agreement with the treatment planning system's dose distribution through the centre of the irradiated volume and substantial disagreement with the treatment planning system at the penumbrae. The Monte Carlo calculations and gel measurements both indicated that the treated volume was up to 3 mm narrower, with steeper penumbrae and more variable out-of-field dose, than predicted by the treatment planning system. The Monte Carlo simulations allowed the accuracy of the PAGAT gel dosimeter to be verified in this case, allowing PAGAT gel to be utilized in the measurement of dose from stereotactic and other radiotherapy treatments, with greater confidence in the future. Experimental aspects of this work were originally presented at the Engineering and Physical Sciences in Medicine

  4. Autocorrelation and Dominance Ratio in Monte Carlo Criticality Calculations

    SciTech Connect

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Kornreich, Drew E.

    2003-11-15

    The cycle-to-cycle correlation (autocorrelation) in Monte Carlo criticality calculations is analyzed concerning the dominance ratio of fission kernels. The mathematical analysis focuses on how the eigenfunctions of a fission kernel decay if operated on by the cycle-to-cycle error propagation operator of the Monte Carlo stationary source distribution. The analytical results obtained can be summarized as follows: When the dominance ratio of a fission kernel is close to unity, autocorrelation of the k-effective tallies is weak and may be negligible, while the autocorrelation of the source distribution is strong and decays slowly. The practical implication is that when one analyzes a critical reactor with a large dominance ratio by Monte Carlo methods, the confidence interval estimation of the fission rate and other quantities at individual locations must account for the strong autocorrelation. Numerical results are presented for sample problems with a dominance ratio of 0.85-0.99, where Shannon and relative entropies are utilized to exclude the influence of initial nonstationarity.
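
    The Shannon entropy diagnostic mentioned at the end is simple to state: it is the entropy of the binned fission-source distribution, which stabilizes once the source has converged. A minimal generic sketch, not the authors' implementation:

```python
import math

def shannon_entropy(source_counts):
    """Shannon entropy (in bits) of a binned fission-source
    distribution; a standard source-convergence diagnostic in Monte
    Carlo criticality calculations."""
    total = sum(source_counts)
    h = 0.0
    for c in source_counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h

# A source spread evenly over 8 bins has the maximum entropy
# log2(8) = 3 bits; a fully collapsed source has entropy 0.
assert shannon_entropy([10] * 8) == 3.0
assert shannon_entropy([80, 0, 0, 0]) == 0.0
```

    Plotting this entropy cycle by cycle shows the slow source-distribution convergence that the abstract warns about for high dominance ratios.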

  5. Valence-bond quantum Monte Carlo algorithms defined on trees.

    PubMed

    Deschner, Andreas; Sørensen, Erik S

    2014-09-01

We present a class of algorithms for performing valence-bond quantum Monte Carlo of quantum spin models. Valence-bond quantum Monte Carlo is a projective T=0 Monte Carlo method based on sampling of a set of operator strings that can be viewed as forming a treelike structure. The algorithms presented here utilize the notion of a worm that moves up and down this tree and changes the associated operator string. In quite general terms, we derive a set of equations whose solutions correspond to a whole class of algorithms. As specific examples of this class, we focus on two cases: the bouncing worm algorithm, for which updates are always accepted by allowing the worm to bounce up and down the tree, and the driven worm algorithm, in which a single parameter controls how far up the tree the worm reaches before turning around. The latter algorithm involves only a single bounce where the worm turns from going up the tree to going down. The presence of the control parameter necessitates the introduction of an acceptance probability for the update. PMID:25314561

  6. Geometric Templates for Improved Tracking Performance in Monte Carlo Codes

    NASA Astrophysics Data System (ADS)

    Nease, Brian R.; Millman, David L.; Griesheimer, David P.; Gill, Daniel F.

    2014-06-01

    One of the most fundamental parts of a Monte Carlo code is its geometry kernel. This kernel not only affects particle tracking (i.e., run-time performance), but also shapes how users will input models and collect results for later analyses. A new framework based on geometric templates is proposed that optimizes performance (in terms of tracking speed and memory usage) and simplifies user input for large scale models. While some aspects of this approach currently exist in different Monte Carlo codes, the optimization aspect has not been investigated or applied. If Monte Carlo codes are to be realistically used for full core analysis and design, this type of optimization will be necessary. This paper describes the new approach and the implementation of two template types in MC21: a repeated ellipse template and a box template. Several different models are tested to highlight the performance gains that can be achieved using these templates. Though the exact gains are naturally problem dependent, results show that runtime and memory usage can be significantly reduced when using templates, even as problems reach realistic model sizes.

  7. A semianalytic Monte Carlo code for modelling LIDAR measurements

    NASA Astrophysics Data System (ADS)

    Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio

    2007-10-01

    LIDAR (LIght Detection and Ranging) is an optical active remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements is a useful approach for evaluating the effects of various environmental variables and scenarios, as well as of different measurement geometries and instrumental characteristics. In this regard, a Monte Carlo simulation model can provide a reliable answer to these requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions due to the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering. The contributions of molecular absorption and of ground and cloud reflection are evaluated too. The code can perform simulations of both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected-value calculations are performed. Artificial devices (such as forced collision, local forced collision, splitting and Russian roulette) are moreover provided by the code, enabling the user to drastically reduce the variance of the calculation.
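
    The variance-reduction devices named above can be made concrete. Below is a minimal sketch of Russian roulette applied to a weighted photon; the weight threshold and survival probability are illustrative assumptions, not values from the ISAC-CNR code:

    ```python
    import random

    def russian_roulette(weight, threshold=0.01, survival_prob=0.1, rng=random):
        """Stochastically terminate low-weight photons while conserving the
        expected weight, keeping the variance-reduction game unbiased:
        E[out] = survival_prob * (w / survival_prob) + (1 - survival_prob) * 0 = w."""
        if weight >= threshold:
            return weight                      # heavy enough: keep tracking as-is
        if rng.random() < survival_prob:
            return weight / survival_prob      # survivor carries boosted weight
        return 0.0                             # photon history terminated
    ```

    Splitting is the mirror image: a high-weight photon is replaced by several copies of proportionally lower weight, again preserving the expectation.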

  8. A Wigner Monte Carlo approach to density functional theory

    SciTech Connect

    Sellier, J.M. Dimov, I.

    2014-08-01

    In order to simulate quantum N-body systems, stationary and time-dependent density functional theories rely on the capacity of calculating the single-electron wave-functions of a system from which one obtains the total electron density (Kohn–Sham systems). In this paper, we introduce the use of the Wigner Monte Carlo method in ab-initio calculations. This approach allows time-dependent simulations of chemical systems in the presence of reflective and absorbing boundary conditions. It also enables an intuitive comprehension of chemical systems in terms of the Wigner formalism based on the concept of phase-space. Finally, being based on a Monte Carlo method, it scales very well on parallel machines, paving the way towards the time-dependent simulation of very complex molecules. A validation is performed by studying the electron distribution of three different systems: a lithium atom, a boron atom and a hydrogenic molecule. For the sake of simplicity, we start from initial conditions not too far from equilibrium and show that the systems reach a stationary regime, as expected (even though no restriction is imposed on the choice of the initial conditions). We also show a good agreement with standard density functional theory for the hydrogenic molecule. These results demonstrate that the combination of the Wigner Monte Carlo method and Kohn–Sham systems provides a reliable computational tool which could, eventually, be applied to more sophisticated problems.

  9. Chemical accuracy from quantum Monte Carlo for the benzene dimer.

    PubMed

    Azadi, Sam; Cohen, R E

    2015-09-14

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory through perturbative triplets/complete basis set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods. PMID:26374029

  10. Accelerating Monte Carlo power studies through parametric power estimation.

    PubMed

    Ueckert, Sebastian; Karlsson, Mats O; Hooker, Andrew C

    2016-04-01

    Estimating the power for a non-linear mixed-effects model-based analysis is challenging due to the lack of a closed-form analytic expression. Often, computationally intensive Monte Carlo studies need to be employed to evaluate the power of a planned experiment. This is especially time-consuming if full power versus sample size curves are to be obtained. A novel parametric power estimation (PPE) algorithm utilizing the theoretical distribution of the alternative hypothesis is presented in this work. The PPE algorithm estimates the unknown non-centrality parameter in the theoretical distribution from a limited number of Monte Carlo simulations and estimations. The estimated parameter scales linearly with study size, allowing quick generation of the full power versus study size curve. A comparison of the PPE with the classical, purely Monte Carlo-based power estimation (MCPE) algorithm for five diverse pharmacometric models showed an excellent agreement between both algorithms, with a low bias of less than 1.2% and higher precision for the PPE. The power extrapolated from a specific study size was in very good agreement with power curves obtained with the MCPE algorithm. PPE represents a promising approach to accelerate the power calculation for non-linear mixed-effects models.
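
    The core idea of the PPE algorithm, estimating a noncentrality parameter once and scaling it linearly with study size, can be sketched as follows. This is a simplified illustration for a 1-degree-of-freedom likelihood-ratio test at alpha = 0.05, not the authors' implementation; `ncp_hat` stands for the value estimated from a limited number of Monte Carlo runs:

    ```python
    import math

    def norm_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def power_from_ncp(ncp):
        """Power of a 1-df likelihood-ratio test at alpha = 0.05 whose statistic
        under the alternative is noncentral chi-square(1, ncp); equivalently
        P(|Z + sqrt(ncp)| > 1.96) for standard normal Z."""
        z_crit = 1.959963984540054
        s = math.sqrt(ncp)
        return norm_cdf(s - z_crit) + norm_cdf(-s - z_crit)

    def ppe_power_curve(ncp_hat, n0, sizes):
        """Scale the noncentrality parameter estimated at study size n0
        linearly with study size, then map each size to a predicted power."""
        return {n: power_from_ncp(ncp_hat * n / n0) for n in sizes}
    ```

    The linear scaling is what avoids rerunning the full Monte Carlo study at every candidate sample size.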

  11. MONTE CARLO RADIATION-HYDRODYNAMICS WITH IMPLICIT METHODS

    SciTech Connect

    Roth, Nathaniel; Kasen, Daniel

    2015-03-15

    We explore the application of Monte Carlo transport methods to solving coupled radiation-hydrodynamics (RHD) problems. We use a time-dependent, frequency-dependent, three-dimensional radiation transport code that is special relativistic and includes some detailed microphysical interactions such as resonant line scattering. We couple the transport code to two different one-dimensional (non-relativistic) hydrodynamics solvers: a spherical Lagrangian scheme and an Eulerian Godunov solver. The gas–radiation energy coupling is treated implicitly, allowing us to take hydrodynamical time-steps that are much longer than the radiative cooling time. We validate the code and assess its performance using a suite of radiation hydrodynamical test problems, including ones in the radiation energy dominated regime. We also develop techniques that reduce the noise of the Monte Carlo estimated radiation force by using the spatial divergence of the radiation pressure tensor. The results suggest that Monte Carlo techniques hold promise for simulating the multi-dimensional RHD of astrophysical systems.

  12. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; Hurtado, John E.

    2011-01-01

    The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amount of data, combined with the limited time available to engineers, motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
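
    As a toy illustration of the k-nearest-neighbor step (the kernel density estimation and feature-selection stages are omitted, and the data below are invented), classifying a new Monte Carlo run by its neighbours in design-parameter space might look like:

    ```python
    import math
    from collections import Counter

    def knn_classify(train_x, train_y, query, k=3):
        """Label a new Monte Carlo run (e.g. pass/fail) by majority vote
        among its k nearest neighbours in design-parameter space."""
        neighbours = sorted(
            (math.dist(x, query), y) for x, y in zip(train_x, train_y)
        )
        votes = Counter(label for _, label in neighbours[:k])
        return votes.most_common(1)[0][0]
    ```

    In practice the parameters would first be normalised so that no single design variable dominates the Euclidean distance.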

  13. Performance of quantum Monte Carlo for calculating molecular bond lengths

    NASA Astrophysics Data System (ADS)

    Cleland, Deidre M.; Per, Manolo C.

    2016-03-01

    This work investigates the accuracy of real-space quantum Monte Carlo (QMC) methods for calculating molecular geometries. We present the equilibrium bond lengths of a test set of 30 diatomic molecules calculated using variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC) methods. The effect of different trial wavefunctions is investigated using single determinants constructed from Hartree-Fock (HF) and Density Functional Theory (DFT) orbitals with LDA, PBE, and B3LYP functionals, as well as small multi-configurational self-consistent field (MCSCF) multi-determinant expansions. When compared to experimental geometries, all DMC methods exhibit smaller mean-absolute deviations (MADs) than those given by HF, DFT, and MCSCF. The most accurate MAD of 3 ± 2 × 10⁻³ Å is achieved using DMC with a small multi-determinant expansion. However, the more computationally efficient multi-determinant VMC method has a similar MAD of only 4.0 ± 0.9 × 10⁻³ Å, suggesting that QMC forces calculated from the relatively simple VMC algorithm may often be sufficient for accurate molecular geometries.

  14. Applying Quantum Monte Carlo to the Electronic Structure Problem

    NASA Astrophysics Data System (ADS)

    Powell, Andrew D.; Dawes, Richard

    2016-06-01

    Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC) were tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error-bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).
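
    To make the statistical nature of these methods concrete, here is a minimal variational Monte Carlo example for the 1D harmonic oscillator with a Gaussian trial wavefunction; this is a textbook toy, not the CASINO or NECI workflow:

    ```python
    import math
    import random

    def vmc_energy(alpha, n_steps=20000, step=1.0, seed=1):
        """Variational Monte Carlo for the 1D harmonic oscillator
        (hbar = m = omega = 1) with trial wavefunction psi = exp(-alpha x^2).
        Metropolis sampling of |psi|^2 estimates the variational energy
        <E_L>, with local energy E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)."""
        rng = random.Random(seed)
        x, e_sum = 0.0, 0.0
        for _ in range(n_steps):
            x_new = x + rng.uniform(-step, step)
            # accept the move with probability |psi(x_new)/psi(x)|^2
            if rng.random() < math.exp(-2.0 * alpha * (x_new**2 - x**2)):
                x = x_new
            e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
        return e_sum / n_steps
    ```

    At the exact ground state (alpha = 0.5) the local energy is constant and the statistical error vanishes; away from it the estimate carries an error bar, which is the uncertainty the abstract refers to.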

  15. Monte Carlo Methodology Serves Up a Software Success

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.

  16. Valence-bond quantum Monte Carlo algorithms defined on trees.

    PubMed

    Deschner, Andreas; Sørensen, Erik S

    2014-09-01

    We present a class of algorithms for performing valence-bond quantum Monte Carlo of quantum spin models. Valence-bond quantum Monte Carlo is a projective T=0 Monte Carlo method based on sampling of a set of operator strings that can be viewed as forming a treelike structure. The algorithms presented here utilize the notion of a worm that moves up and down this tree and changes the associated operator string. In quite general terms, we derive a set of equations whose solutions correspond to a whole class of algorithms. As specific examples of this class of algorithms, we focus on two cases. The bouncing worm algorithm, for which updates are always accepted by allowing the worm to bounce up and down the tree, and the driven worm algorithm, where a single parameter controls how far up the tree the worm reaches before turning around. The latter algorithm involves only a single bounce where the worm turns from going up the tree to going down. The presence of the control parameter necessitates the introduction of an acceptance probability for the update.

  17. Monte Carlo modelling of positron transport in real world applications

    NASA Astrophysics Data System (ADS)

    Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj

    2014-05-01

    Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interaction with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.

  18. Chemical accuracy from quantum Monte Carlo for the benzene dimer

    SciTech Connect

    Azadi, Sam; Cohen, R. E.

    2015-09-14

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of −2.3(4) and −2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory through perturbative triplets/complete basis set limit is −2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  19. Improved diffusion coefficients generated from Monte Carlo codes

    SciTech Connect

    Herman, B. R.; Forget, B.; Smith, K.; Aviles, B. N.

    2013-07-01

    Monte Carlo codes are becoming more widely used for reactor analysis. Some of these applications involve the generation of diffusion theory parameters including macroscopic cross sections and diffusion coefficients. Two approximations used to generate diffusion coefficients are assessed using the Monte Carlo code MC21. The first is the method of homogenization; whether to weight either fine-group transport cross sections or fine-group diffusion coefficients when collapsing to few-group diffusion coefficients. The second is a fundamental approximation made to the energy-dependent P1 equations to derive the energy-dependent diffusion equations. Standard Monte Carlo codes usually generate a flux-weighted transport cross section with no correction to the diffusion approximation. Results indicate that this causes noticeable tilting in reconstructed pin powers in simple test lattices with L2 norm error of 3.6%. This error is reduced significantly to 0.27% when weighting fine-group diffusion coefficients by the flux and applying a correction to the diffusion approximation. Noticeable tilting in reconstructed fluxes and pin powers was reduced when applying these corrections. (authors)
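
    The two homogenization options assessed in the paper can be illustrated with a toy two-group collapse (the flux and cross-section numbers below are invented; real codes collapse over many fine groups):

    ```python
    def collapse(flux, values):
        """Flux-weighted collapse of fine-group data into one coarse group:
        X_G = sum_g(phi_g * X_g) / sum_g(phi_g)."""
        return sum(p * v for p, v in zip(flux, values)) / sum(flux)

    def few_group_diffusion(flux, sigma_tr):
        """The two homogenization options compared in the paper: collapse the
        transport cross section and take D = 1/(3 Sigma_tr), or collapse the
        fine-group diffusion coefficients D_g = 1/(3 Sigma_tr,g) directly."""
        d_from_xs = 1.0 / (3.0 * collapse(flux, sigma_tr))
        d_direct = collapse(flux, [1.0 / (3.0 * s) for s in sigma_tr])
        return d_from_xs, d_direct
    ```

    For a non-uniform flux spectrum the two options generally disagree (a harmonic-versus-arithmetic-mean effect), which is one source of the pin-power tilting the abstract quantifies.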

  20. A new lattice Monte Carlo method for simulating dielectric inhomogeneity

    NASA Astrophysics Data System (ADS)

    Duan, Xiaozheng; Wang, Zhen-Gang; Nakamura, Issei

    We present a new lattice Monte Carlo method for simulating systems involving dielectric contrast between different species by modifying an algorithm originally proposed by Maggs et al. The original algorithm is known to generate attractive interactions between particles whose dielectric constant differs from that of the solvent. Here we show that this attractive force is spurious, arising from an incorrectly biased statistical weight caused by the particle motion during the Monte Carlo moves. We propose a new, simple algorithm to resolve this erroneous sampling. We demonstrate the application of our algorithm by simulating an uncharged polymer in a solvent with a different dielectric constant. Further, we show that the electrostatic fields in ionic crystals obtained from our simulations with a relatively small simulation box correspond well with results from the analytical solution. Thus, our Monte Carlo method avoids the need for the Ewald summation in conventional simulation methods for charged systems. This work was supported by the National Natural Science Foundation of China (21474112 and 21404103). We are grateful to the Computing Center of Jilin Province for essential support.

  1. Quantitative PET Imaging Using A Comprehensive Monte Carlo System Model

    SciTech Connect

    Southekal, S.; Purschke, M. L.; Schlyer, D. J.; Vaska, P.

    2011-10-01

    We present the complete image generation methodology developed for the RatCAP PET scanner, which can be extended to other PET systems for which a Monte Carlo-based system model is feasible. The miniature RatCAP presents a unique set of advantages as well as challenges for image processing, and a combination of conventional methods and novel ideas developed specifically for this tomograph have been implemented. The crux of our approach is a low-noise Monte Carlo-generated probability matrix with integrated corrections for all physical effects that impact PET image quality. The generation and optimization of this matrix are discussed in detail, along with the estimation of correction factors and their incorporation into the reconstruction framework. Phantom studies and Monte Carlo simulations are used to evaluate the reconstruction as well as individual corrections for random coincidences, photon scatter, attenuation, and detector efficiency variations in terms of bias and noise. Finally, a realistic rat brain phantom study reconstructed using this methodology is shown to recover >90% of the contrast for hot as well as cold regions. The goal has been to realize the potential of quantitative neuroreceptor imaging with the RatCAP.
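
    As a sketch of how a Monte Carlo generated probability matrix enters iterative reconstruction, a minimal ML-EM loop might look like the following (a generic textbook ML-EM, not the RatCAP implementation):

    ```python
    def mlem(A, counts, n_iter=300):
        """ML-EM image reconstruction with a system (probability) matrix
        A[i][j] = P(event from voxel j is detected in bin i), such as one
        generated by Monte Carlo simulation."""
        nb, nv = len(A), len(A[0])
        # sensitivity: total detection probability per voxel
        sens = [sum(A[i][j] for i in range(nb)) for j in range(nv)]
        img = [1.0] * nv                       # flat initial image
        for _ in range(n_iter):
            # forward-project the current image estimate
            proj = [sum(A[i][j] * img[j] for j in range(nv)) for i in range(nb)]
            ratio = [counts[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(nb)]
            # back-project the measured/estimated ratio and update
            img = [img[j] * sum(A[i][j] * ratio[i] for i in range(nb)) / sens[j]
                   for j in range(nv)]
        return img
    ```

    Physical effects such as attenuation and detector efficiency enter simply as factors baked into the matrix elements, which is the appeal of a Monte Carlo system model.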

  2. Chemical accuracy from quantum Monte Carlo for the benzene dimer.

    PubMed

    Azadi, Sam; Cohen, R E

    2015-09-14

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory through perturbative triplets/complete basis set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  3. Monte Carlo studies for medical imaging detector optimization

    NASA Astrophysics Data System (ADS)

    Fois, G. R.; Cisbani, E.; Garibaldi, F.

    2016-02-01

    This work reports on Monte Carlo optimization studies of detection systems for Molecular Breast Imaging with radionuclides and for Bremsstrahlung Imaging in nuclear medicine. Molecular Breast Imaging requires competing performances of the detectors: high efficiency and high spatial resolution. In this direction, an innovative device has been proposed which combines images from two different, and somewhat complementary, detectors on opposite sides of the breast. The dual-detector design allows for spot compression and significantly improves the performance of the overall system if all components are well tuned and the layout and processing carefully optimized; here the Monte Carlo simulation represents a valuable tool. In recent years, the potential of Bremsstrahlung Imaging in internal radiotherapy (with beta-emitting radiopharmaceuticals) has clearly emerged; Bremsstrahlung Imaging is currently performed with existing detectors generally used for single-photon radioisotopes. We are evaluating the possibility of adapting an existing compact gamma camera and optimizing its performance by Monte Carlo for Bremsstrahlung imaging with photons emitted by the beta- decay of 90Y.

  4. ALEPH2 - A general purpose Monte Carlo depletion code

    SciTech Connect

    Stankovskiy, A.; Van Den Eynde, G.; Baeten, P.; Trakas, C.; Demy, P. M.; Villatte, L.

    2012-07-01

    The Monte Carlo burn-up code ALEPH has been under development at SCK-CEN since 2004. A previous version of the code implemented the coupling between Monte Carlo transport (any version of MCNP or MCNPX) and the 'deterministic' depletion code ORIGEN-2.2, but had important deficiencies in nuclear data treatment and limitations inherent to ORIGEN-2.2. A new version of the code, ALEPH2, has several unique features making it outstanding among other depletion codes. The most important feature is full data consistency between steady-state Monte Carlo and time-dependent depletion calculations. The latest-generation general-purpose nuclear data libraries (JEFF-3.1.1, ENDF/B-VII and JENDL-4) are fully implemented, including special-purpose activation, spontaneous fission, fission product yield and radioactive decay data. The built-in depletion algorithm eliminates the uncertainties associated with obtaining the time-dependent nuclide concentrations. A predictor-corrector mechanism, calculation of nuclear heating, calculation of decay heat, and decay neutron sources are available as well. The code has been validated against the results of the REBUS experimental program. ALEPH2 has shown better agreement with measured data than other depletion codes. (authors)
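
    The predictor-corrector mechanism mentioned above can be illustrated for a single absorbing nuclide whose flux depends on the evolving composition; the callback below is a schematic stand-in for the Monte Carlo transport solve, not the ALEPH2 algorithm itself:

    ```python
    import math

    def deplete_step(n0, sigma, flux_of_n, dt):
        """One predictor-corrector burnup step for a single absorber,
        dN/dt = -sigma * phi * N, where the flux phi depends on the current
        composition (supplied as a callback standing in for a transport solve)."""
        phi_bos = flux_of_n(n0)                          # beginning-of-step flux
        n_pred = n0 * math.exp(-sigma * phi_bos * dt)    # predictor: deplete with BOS rates
        phi_eos = flux_of_n(n_pred)                      # flux re-evaluated at end of step
        phi_avg = 0.5 * (phi_bos + phi_eos)              # corrector: average the rates
        return n0 * math.exp(-sigma * phi_avg * dt)
    ```

    When the flux is constant over the step the corrector changes nothing; when it shifts with composition (e.g. self-shielding burns out), the averaged rate lands between the two single-flux answers.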

  5. Self-evolving atomistic kinetic Monte Carlo simulations of defects in materials

    SciTech Connect

    Xu, Haixuan; Beland, Laurent K.; Stoller, Roger E.; Osetskiy, Yury N.

    2015-01-29

    The recent development of on-the-fly atomistic kinetic Monte Carlo methods has led to increased attention on these methods and their corresponding capabilities and applications. In this review, the framework and current status of Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC) are discussed. SEAKMC particularly focuses on defect interaction and evolution with atomistic details without assuming potential defect migration/interaction mechanisms and energies. The strengths and limitations of using an active volume, the key concept introduced in SEAKMC, are discussed. Potential criteria for characterizing an active volume are discussed, and the influence of active volume size on saddle point energies is illustrated. A procedure starting with a small active volume followed by larger active volumes was found to possess higher efficiency. Applications of SEAKMC, ranging from point defect diffusion, to complex interstitial cluster evolution, to helium interaction with tungsten surfaces, are summarized. A comparison of SEAKMC with molecular dynamics and conventional object kinetic Monte Carlo is demonstrated. Overall, SEAKMC is found to be complementary to conventional molecular dynamics, especially when the harmonic approximation of transition state theory is accurate. However, it is capable of reaching longer time scales than molecular dynamics, and it can be used to systematically increase the accuracy of other methods such as object kinetic Monte Carlo. Furthermore, the challenges and potential development directions are also outlined.
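
    The event-selection core that SEAKMC shares with conventional kinetic Monte Carlo is the residence-time algorithm; a generic sketch, with a user-supplied rate catalog standing in for the on-the-fly saddle-point searches, could be:

    ```python
    import math
    import random

    def kmc_run(rates_of_state, state, t_end, seed=0):
        """Residence-time (BKL) kinetic Monte Carlo: at each step choose an
        event with probability proportional to its rate, then advance the
        clock by an exponentially distributed waiting time with mean
        1 / (total rate)."""
        rng = random.Random(seed)
        t, n_events = 0.0, 0
        while True:
            rates = rates_of_state(state)        # e.g. saddle-point escape rates
            total = sum(rates.values())
            if total == 0.0:
                break                            # absorbing state: nothing can happen
            dt = -math.log(1.0 - rng.random()) / total
            if t + dt > t_end:
                break
            t += dt
            # select an event proportional to its rate
            r, acc = rng.random() * total, 0.0
            for next_state, rate in rates.items():
                acc += rate
                if r <= acc:
                    state = next_state
                    break
            n_events += 1
        return state, n_events
    ```

    What distinguishes SEAKMC is that `rates_of_state` is not a fixed catalog: the escape barriers are found by saddle-point searches inside the active volume as the simulation runs.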

  6. Self-evolving atomistic kinetic Monte Carlo simulations of defects in materials

    DOE PAGES

    Xu, Haixuan; Beland, Laurent K.; Stoller, Roger E.; Osetskiy, Yury N.

    2015-01-29

    The recent development of on-the-fly atomistic kinetic Monte Carlo methods has led to increased attention on these methods and their corresponding capabilities and applications. In this review, the framework and current status of Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC) are discussed. SEAKMC particularly focuses on defect interaction and evolution with atomistic details without assuming potential defect migration/interaction mechanisms and energies. The strengths and limitations of using an active volume, the key concept introduced in SEAKMC, are discussed. Potential criteria for characterizing an active volume are discussed, and the influence of active volume size on saddle point energies is illustrated. A procedure starting with a small active volume followed by larger active volumes was found to possess higher efficiency. Applications of SEAKMC, ranging from point defect diffusion, to complex interstitial cluster evolution, to helium interaction with tungsten surfaces, are summarized. A comparison of SEAKMC with molecular dynamics and conventional object kinetic Monte Carlo is demonstrated. Overall, SEAKMC is found to be complementary to conventional molecular dynamics, especially when the harmonic approximation of transition state theory is accurate. However, it is capable of reaching longer time scales than molecular dynamics, and it can be used to systematically increase the accuracy of other methods such as object kinetic Monte Carlo. Furthermore, the challenges and potential development directions are also outlined.

  7. Independent pixel and Monte Carlo estimates of stratocumulus albedo

    NASA Technical Reports Server (NTRS)

    Cahalan, Robert F.; Ridgway, William; Wiscombe, Warren J.; Gollmer, Steven; HARSHVARDHAN

    1994-01-01

    Monte Carlo radiative transfer methods are employed here to estimate the plane-parallel albedo bias for marine stratocumulus clouds. This is the bias in estimates of the mesoscale-average albedo, which arises from the assumption that cloud liquid water is uniformly distributed. The authors compare such estimates with those based on a more realistic distribution generated from a fractal model of marine stratocumulus clouds belonging to the class of 'bounded cascade' models. In this model the cloud top and base are fixed, so that all variations in cloud shape are ignored. The model generates random variations in liquid water along a single horizontal direction, forming fractal cloud streets while conserving the total liquid water in the cloud field. The model reproduces the mean, variance, and skewness of the vertically integrated cloud liquid water, as well as its observed wavenumber spectrum, which is approximately a power law. The Monte Carlo method keeps track of the three-dimensional paths solar photons take through the cloud field, using a vectorized implementation of a direct technique. The simplifications in the cloud field studied here allow the computations to be accelerated. The Monte Carlo results are compared to those of the independent pixel approximation, which neglects net horizontal photon transport. Differences between the Monte Carlo and independent pixel estimates of the mesoscale-average albedo are on the order of 1% for conservative scattering, while the plane-parallel bias itself is an order of magnitude larger. As cloud absorption increases, the independent pixel approximation agrees even more closely with the Monte Carlo estimates. This result holds for a wide range of sun angles and aspect ratios. Thus, horizontal photon transport can be safely neglected in estimates of the area-average flux for such cloud models. This result relies on the rapid falloff of the wavenumber spectrum of stratocumulus, which ensures that the smaller
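
    The plane-parallel albedo bias discussed here is a Jensen's-inequality effect: albedo is concave in optical depth, so the albedo of the mean liquid water exceeds the mean of the albedos. A toy illustration follows; the albedo function is an assumed two-stream-like form chosen only for its concavity, not the paper's radiative transfer:

    ```python
    def albedo(tau):
        """Toy conservative-scattering albedo, concave in optical depth tau
        (an illustrative assumption, not the paper's model)."""
        return tau / (tau + 2.0)

    def plane_parallel_bias(taus):
        """Plane-parallel albedo bias: albedo of the mean optical depth minus
        the mean of the albedos over an inhomogeneous field. Positive for any
        concave R(tau), i.e. assuming uniform liquid water overestimates the
        mesoscale-average albedo."""
        mean_tau = sum(taus) / len(taus)
        mean_albedo = sum(albedo(t) for t in taus) / len(taus)
        return albedo(mean_tau) - mean_albedo
    ```

    The independent pixel approximation removes this bias by averaging `albedo(tau)` pixel by pixel; what it still neglects, and what the Monte Carlo comparison quantifies, is net horizontal photon transport between pixels.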

  8. Monte-Carlo Application for Nondestructive Nuclear Waste Analysis

    NASA Astrophysics Data System (ADS)

    Carasco, C.; Engels, R.; Frank, M.; Furletov, S.; Furletova, J.; Genreith, C.; Havenith, A.; Kemmerling, G.; Kettler, J.; Krings, T.; Ma, J.-L.; Mauerhofer, E.; Neike, D.; Payan, E.; Perot, B.; Rossbach, M.; Schitthelm, O.; Schumann, M.; Vasquez, R.

    2014-06-01

    Radioactive waste has to undergo a process of quality checking in order to verify its conformance with national regulations prior to its transport, intermediate storage and final disposal. Within the quality checking of radioactive waste packages, non-destructive assays are required to characterize their radio-toxic and chemo-toxic contents. The Institute of Energy and Climate Research - Nuclear Waste Management and Reactor Safety of the Forschungszentrum Jülich develops, in the framework of cooperation, nondestructive analytical techniques for the routine characterization of radioactive waste packages at industrial scale. During the research and development phase, Monte Carlo techniques are used to simulate the transport of particles, especially photons, electrons and neutrons, through matter and to obtain the response of detection systems. The radiological characterization of low and intermediate level radioactive waste drums is performed by segmented γ-scanning (SGS). To precisely and accurately reconstruct the isotope-specific activity content in waste drums by SGS measurement, an innovative method called SGSreco was developed. The Geant4 code was used to simulate the response of the collimated detection system for waste drums with different activity and matrix configurations. These simulations allow a far more detailed optimization, validation and benchmarking of SGSreco, since the construction of test drums covering a broad range of activity and matrix properties is time consuming and cost intensive. The MEDINA (Multi Element Detection based on Instrumental Neutron Activation) test facility was developed to identify and quantify non-radioactive elements and substances in radioactive waste drums. MEDINA is based on prompt and delayed gamma neutron activation analysis (P&DGNAA) using a 14 MeV neutron generator. MCNP simulations were carried out to study the response of the MEDINA facility in terms of gamma spectra, time dependence of the neutron energy spectrum

  9. Poster — Thur Eve — 14: Improving Tissue Segmentation for Monte Carlo Dose Calculation using DECT

    SciTech Connect

    Di Salvio, A.; Bedwani, S.; Carrier, J-F.; Bouchard, H.

    2014-08-15

    Purpose: To improve Monte Carlo dose calculation accuracy through a new tissue segmentation technique with dual energy CT (DECT). Methods: Electron density (ED) and effective atomic number (EAN) can be extracted directly from DECT data with a stoichiometric calibration method. Images are acquired with Monte Carlo CT projections using the user code egs-cbct and reconstructed using an FDK backprojection algorithm. Calibration is performed using projections of a numerical RMI phantom. A weighted parameter algorithm then uses both EAN and ED to assign materials to voxels from DECT simulated images. This new method is compared to a standard tissue characterization from single energy CT (SECT) data using a segmented calibrated Hounsfield unit (HU) to ED curve. Both methods are compared to the reference numerical head phantom. Monte Carlo simulations on uniform phantoms of different tissues using dosxyz-nrc show discrepancies in depth-dose distributions. Results: Both SECT and DECT segmentation methods show similar performance assigning soft tissues. Performance is however improved with DECT in regions with higher density, such as bones, where it assigns materials correctly 8% more often than segmentation with SECT, considering the same set of tissues and simulated clinical CT images, i.e. including noise and reconstruction artifacts. Furthermore, Monte Carlo results indicate that kV photon beam depth-dose distributions can double between two tissues of density higher than muscle. Conclusions: A direct acquisition of ED and the added information of EAN with DECT data improves tissue segmentation and increases the accuracy of Monte Carlo dose calculation in kV photon beams.

  10. An automated variance reduction method for global Monte Carlo neutral particle transport problems

    NASA Astrophysics Data System (ADS)

    Cooper, Marc Andrew

    A method to automatically reduce the variance in global neutral particle Monte Carlo problems by using a weight window derived from a deterministic forward solution is presented. This method reduces a global measure of the variance of desired tallies and increases its associated figure of merit. Global deep penetration neutron transport problems present difficulties for analog Monte Carlo. When the scalar flux decreases by many orders of magnitude, so does the number of Monte Carlo particles. This can result in large statistical errors. In conjunction with survival biasing, a weight window is employed which uses splitting and Russian roulette to restrict the statistical weights of Monte Carlo particles. By establishing a connection between the scalar flux and the weight window, two important concepts are demonstrated. First, such a weight window can be constructed from a deterministic solution of a forward transport problem. Also, the weight window will distribute Monte Carlo particles in such a way as to minimize a measure of the global variance. For Implicit Monte Carlo solutions of radiative transfer problems, an inefficient distribution of Monte Carlo particles can result in large statistical errors in front of the Marshak wave and at its leading edge. Again, the global Monte Carlo method is used, which employs a time-dependent weight window derived from a forward deterministic solution. Here, the algorithm is modified to enhance the number of Monte Carlo particles in the wavefront. Simulations show that use of this time-dependent weight window significantly improves the Monte Carlo calculation.
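    The splitting and Russian-roulette mechanics described above can be sketched in a few lines. This toy routine illustrates the generic weight-window technique, not the paper's code; the window bounds and the mid-window survival weight are illustrative choices.

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random.random):
    """Apply a weight window to one particle: split if the weight is above
    the window, play Russian roulette if below, otherwise leave unchanged.
    Returns a list of surviving particle weights (empty if killed)."""
    w_survive = 0.5 * (w_low + w_high)  # survival weight inside the window
    if weight > w_high:
        # Split into n copies whose weights fall back inside the window.
        n = int(weight / w_survive) + 1
        return [weight / n] * n
    if weight < w_low:
        # Russian roulette: survive with probability weight / w_survive.
        if rng() < weight / w_survive:
            return [w_survive]
        return []
    return [weight]
```

    Both branches preserve the expected total weight (splitting exactly, roulette in expectation), which is what keeps the game unbiased.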

  11. Accelerating Monte Carlo Markov chains with proxy and error models

    NASA Astrophysics Data System (ADS)

    Josset, Laureline; Demyanov, Vasily; Elsheikh, Ahmed H.; Lunati, Ivan

    2015-12-01

    In groundwater modeling, Monte Carlo Markov Chain (MCMC) simulations are often used to calibrate aquifer parameters and propagate the uncertainty to the quantity of interest (e.g., pollutant concentration). However, this approach requires a large number of flow simulations and incurs high computational cost, which prevents a systematic evaluation of the uncertainty in the presence of complex physical processes. To avoid this computational bottleneck, we propose to use an approximate model (proxy) to predict the response of the exact model. Here, we use a proxy that entails a very simplified description of the physics with respect to the detailed physics described by the "exact" model. The error model accounts for the simplification of the physical process and is trained on a learning set of realizations, for which both the proxy and exact responses are computed. First, the key features of the set of curves are extracted using functional principal component analysis (FPCA); then, a regression model is built to characterize the relationship between the curves. The performance of the proposed approach is evaluated on the Imperial College Fault model. We show that the joint use of the proxy and the error model to infer the model parameters in a two-stage MCMC set-up allows longer chains at a comparable computational cost. Unnecessary evaluations of the exact responses are avoided through a preliminary evaluation of the proposal made on the basis of the corrected proxy response. The error model trained on the learning set is crucial to provide a sufficiently accurate prediction of the exact response and guide the chains to the low misfit regions. The proposed methodology can be extended to multiple-chain algorithms or other Bayesian inference methods. Moreover, FPCA is not limited to the specific presented application and offers a general framework to build error models.
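    The two-stage set-up described above can be illustrated with a generic delayed-acceptance Metropolis sketch: a cheap (error-corrected) proxy posterior screens each proposal, so the exact model is only evaluated for proposals that survive stage one. The function names and log-posterior interface are assumptions for illustration, not the authors' implementation.

```python
import math
import random

def two_stage_mcmc(x0, proxy_logpost, exact_logpost, propose, n_steps, rng=random):
    """Delayed-acceptance Metropolis sketch. Stage 1 accepts/rejects using
    only the proxy log-posterior; stage 2 corrects the surviving proposals
    with the exact log-posterior (the expensive call)."""
    x, lp_proxy, lp_exact = x0, proxy_logpost(x0), exact_logpost(x0)
    chain, n_exact_calls = [x0], 1
    for _ in range(n_steps):
        y = propose(x)
        lq_proxy = proxy_logpost(y)
        # Stage 1: screen with the proxy only.
        if rng.random() < math.exp(min(0.0, lq_proxy - lp_proxy)):
            # Stage 2: correct the proxy decision with the exact model.
            lq_exact = exact_logpost(y)
            n_exact_calls += 1
            ratio = (lq_exact - lp_exact) - (lq_proxy - lp_proxy)
            if rng.random() < math.exp(min(0.0, ratio)):
                x, lp_proxy, lp_exact = y, lq_proxy, lq_exact
        chain.append(x)
    return chain, n_exact_calls
```

    When the corrected proxy is accurate, most rejections happen at stage one, so the exact-model call count stays well below the chain length.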

  12. Experimental and Monte Carlo evaluation of an ionization chamber in a 60Co beam

    NASA Astrophysics Data System (ADS)

    Perini, A. P.; Neves, L. P.; Santos, W. S.; Caldas, L. V. E.

    2016-07-01

    Recently a special parallel-plate ionization chamber was developed and characterized at the Instituto de Pesquisas Energeticas e Nucleares. The operational tests presented results within the recommended limits. In order to determine the influence of some components of the ionization chamber on its response, Monte Carlo simulations were carried out. The experimental and simulation results pointed out that the dosimeter evaluated in the present work has favorable properties to be applied to 60Co dosimetry at calibration laboratories.

  13. CSnrc: Correlated sampling Monte Carlo calculations using EGSnrc

    SciTech Connect

    Buckley, Lesley A.; Kawrakow, I.; Rogers, D.W.O.

    2004-12-01

    CSnrc, a new user-code for the EGSnrc Monte Carlo system is described. This user-code improves the efficiency when calculating ratios of doses from similar geometries. It uses a correlated sampling variance reduction technique. CSnrc is developed from an existing EGSnrc user-code CAVRZnrc and improves upon the correlated sampling algorithm used in an earlier version of the code written for the EGS4 Monte Carlo system. Improvements over the EGS4 version of the algorithm avoid repetition of sections of particle tracks. The new code includes a rectangular phantom geometry not available in other EGSnrc cylindrical codes. Comparison to CAVRZnrc shows gains in efficiency of up to a factor of 64 for a variety of test geometries when computing the ratio of doses to the cavity for two geometries. CSnrc is well suited to in-phantom calculations and is used to calculate the central electrode correction factor P{sub cel} in high-energy photon and electron beams. Current dosimetry protocols base the value of P{sub cel} on earlier Monte Carlo calculations. The current CSnrc calculations achieve 0.02% statistical uncertainties on P{sub cel}, much lower than those previously published. The current values of P{sub cel} compare well with the values used in dosimetry protocols for photon beams. For electron beams, CSnrc calculations are reported at the reference depth used in recent protocols and show up to a 0.2% correction for a graphite electrode, a correction currently ignored by dosimetry protocols. The calculations show that for a 1 mm diameter aluminum central electrode, the correction factor differs somewhat from the values used in both the IAEA TRS-398 code of practice and the AAPM's TG-51 protocol.
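    The correlated-sampling idea behind CSnrc, scoring two similar geometries with the same particle histories so that fluctuations cancel in the ratio, can be illustrated with a toy estimator. Here `f1` and `f2` are hypothetical stand-ins for the dose scored per history in the two geometries; this is not EGSnrc code.

```python
import random

def ratio_correlated(f1, f2, n, seed=1234):
    """Score both 'geometries' with the SAME histories (common random
    numbers), so correlated fluctuations cancel in the ratio estimate."""
    rng = random.Random(seed)
    s1 = s2 = 0.0
    for _ in range(n):
        u = rng.random()          # one shared toy "history"
        s1 += f1(u)
        s2 += f2(u)
    return s1 / s2

def ratio_independent(f1, f2, n, seed=1234):
    """Baseline: independent histories for each geometry."""
    rng1, rng2 = random.Random(seed), random.Random(seed + 1)
    s1 = sum(f1(rng1.random()) for _ in range(n))
    s2 = sum(f2(rng2.random()) for _ in range(n))
    return s1 / s2
```

    In the extreme case where the two scores are proportional, the correlated ratio is exact for any sample size, while the independent estimate still fluctuates.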

  14. Coupled Monte Carlo neutronics and thermal hydraulics for power reactors

    SciTech Connect

    Bernnat, W.; Buck, M.; Mattes, M.; Zwermann, W.; Pasichnyk, I.; Velkov, K.

    2012-07-01

    The availability of high performance computing resources increasingly enables the use of detailed Monte Carlo models even for full core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures e.g. in Monte Carlo codes such as MCNP5 or MCNPX. For cores with largely uniform material compositions and fuel and moderator temperatures, there is no problem in constructing core models. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. The second problem arises with the preparation of corresponding temperature-dependent cross sections and thermal scattering laws. Only if these problems are solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described, based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g. water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependency of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications will be shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal-hydraulics and for a generic Modular High Temperature Reactor using THERMIX for thermal-hydraulics. (authors)
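    A minimal sketch of the temperature-interpolation idea, assuming a linear-in-sqrt(T) rule between the two bracketing temperature grids (a common heuristic for Doppler-broadened cross sections; the abstract does not specify the paper's exact scheme, so this is illustrative only):

```python
import bisect
import math

def interp_xs(temps, xs_sets, T):
    """Interpolate a tabulated cross-section set to temperature T,
    linear in sqrt(T) between the bracketing temperature grids.
    temps must be sorted; xs_sets[i] is the table at temps[i]."""
    i = bisect.bisect_right(temps, T)
    if i == 0:
        return xs_sets[0]            # below the grid: clamp
    if i == len(temps):
        return xs_sets[-1]           # above the grid: clamp
    t0, t1 = math.sqrt(temps[i - 1]), math.sqrt(temps[i])
    w = (math.sqrt(T) - t0) / (t1 - t0)
    return [(1 - w) * a + w * b for a, b in zip(xs_sets[i - 1], xs_sets[i])]
```

    Only the grids bracketing the requested temperature are touched, which is what keeps the number of pregenerated data sets small.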

  15. Monte Carlo simulation of light propagation in the adult brain

    NASA Astrophysics Data System (ADS)

    Mudra, Regina M.; Nadler, Andreas; Keller, Emanuella; Niederer, Peter

    2004-06-01

    When near infrared spectroscopy (NIRS) is applied noninvasively to the adult head for brain monitoring, extra-cerebral bone and surface tissue exert a substantial influence on the cerebral signal. Most attempts to subtract extra-cerebral contamination involve spatially resolved spectroscopy (SRS). However, inter-individual variability of anatomy restricts the reliability of SRS. We simulated the light propagation with Monte Carlo techniques on the basis of anatomical structures determined from 3D magnetic resonance imaging (MRI) exhibiting a voxel resolution of 0.8 x 0.8 x 0.8 mm3, for three different pairs of T1/T2 values each. The MRI data were used to define the material light absorption and dispersion coefficients for each voxel. The resulting spatial matrix was applied in the Monte Carlo simulation to determine the light propagation in the cerebral cortex and overlying structures. The accuracy of the Monte Carlo simulation was furthermore increased by using a constant optical path length for the photons which was less than the median optical path length of the different materials. Based on our simulations we found a differential pathlength factor (DPF) of 6.15, which is close to the value of 5.9 found in the literature for a distance of 4.5 cm between the external sensors. Furthermore, we weighted the spatial probability distribution of the photons within the different tissues with the probabilities of the relative blood volume within the tissue. The results show that 50% of the NIRS signal is determined by the grey matter of the cerebral cortex, which allows us to conclude that NIRS can produce meaningful cerebral blood flow measurements provided that the necessary corrections for extracerebral contamination are included.

  16. Latent uncertainties of the precalculated track Monte Carlo method

    SciTech Connect

    Renaud, Marc-André; Seuntjens, Jan; Roberge, David

    2015-01-15

    Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank has been missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes under the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of D{sub max}. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of

  17. Computational radiology and imaging with the MCNP Monte Carlo code

    SciTech Connect

    Estes, G.P.; Taylor, W.M.

    1995-05-01

    MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.

  18. MCNP{trademark} Monte Carlo: A precis of MCNP

    SciTech Connect

    Adams, K.J.

    1996-06-01

    MCNP{trademark} is a general purpose three-dimensional time-dependent neutron, photon, and electron transport code. It is highly portable and user-oriented, and backed by stringent software quality assurance practices and extensive experimental benchmarks. The cross section database is based upon the best evaluations available. MCNP incorporates state-of-the-art analog and adaptive Monte Carlo techniques. The code is documented in a 600 page manual which is augmented by numerous Los Alamos technical reports which detail various aspects of the code. MCNP represents over a megahour of development and refinement over the past 50 years and an ongoing commitment to excellence.

  19. Polarimetric lidar returns in the ocean: a Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Adams, James T.; Kattawar, George W.

    1997-02-01

    Anisotropy in the polarization state of backscattered light from a polarized beam incident on suspensions in water analogous to hydrosols in seawater has been observed experimentally. Viewed through an orientated polarizer, characteristic patterns in the backscattered light are produced. We wish to present the results of Monte Carlo simulations of these physical effects demonstrating excellent agreement with published and unpublished experimental observations. These simulations show that the effects observed are produced by the incoherent scattering of light in the range of volume fractions reported and that this treatment should allow predictions to be made about the application of this technique to ocean probing lidar.

  20. Monte Carlo simulation experiments on box-type radon dosimeter

    NASA Astrophysics Data System (ADS)

    Jamil, Khalid; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid

    2014-11-01

    Epidemiological studies show that inhalation of radon gas (222Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserved environments and underground dwellers. It is, therefore, of paramount importance to measure 222Rn concentrations (Bq/m3) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector like CR-39 are widely used. The fraction of the radon alphas emitted in the volume of the box-type dosimeter that result in latent track formation on the CR-39 is the latent track registration efficiency. The latent track registration efficiency is ultimately required to evaluate the radon concentration, which consequently determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for a box-type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two different self-developed Monte Carlo simulation techniques were employed, namely: (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The Monte Carlo simulation experiments revealed that there are two types of efficiency, i.e. intrinsic efficiency (ηint) and alpha hit efficiency (ηhit). The ηint depends only on the dimensions of the dosimeter, whereas ηhit depends on both the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of the intrinsic and hit efficiencies. It has been concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particle, a hit efficiency of 100% is achieved. Nevertheless, the intrinsic efficiency still plays its role. The Monte Carlo simulation results have been found helpful for understanding the intricate track registration mechanisms in the box-type dosimeter. This paper explains how the radon concentration from the
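    The ray-hitting (RAHI) estimate can be illustrated with a toy geometric simulation: sample isotropic alpha emissions uniformly in the box volume and count the tracks whose straight path reaches the detector face within the alpha range in air. Placing the CR-39 on the z = 0 face, and all dimensions, are illustrative assumptions, not the authors' code.

```python
import math
import random

def hit_efficiency(lx, ly, lz, alpha_range, n=200_000, seed=42):
    """Monte Carlo estimate of the fraction of isotropic alpha emissions
    in an lx*ly*lz box whose straight track reaches the z = 0 detector
    face within alpha_range."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y, z = rng.random() * lx, rng.random() * ly, rng.random() * lz
        # Isotropic direction: cos(theta) uniform on [-1, 1].
        cos_t = 2.0 * rng.random() - 1.0
        phi = 2.0 * math.pi * rng.random()
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        dx, dy, dz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
        if dz >= 0.0:
            continue                  # moving away from the z = 0 face
        t = -z / dz                   # path length to the face plane
        if t > alpha_range:
            continue                  # alpha stops before reaching it
        xf, yf = x + t * dx, y + t * dy
        if 0.0 <= xf <= lx and 0.0 <= yf <= ly:
            hits += 1
    return hits / n
```

    With the same random seed, enlarging the alpha range can only add hits, so the estimated efficiency grows monotonically with range, as the abstract's discussion of the diagonal-length criterion suggests.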

  1. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and in-flight calibration data with MGEANT simulations.

  2. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.

  3. Communication: Variation after response in quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Neuscamman, Eric

    2016-08-01

    We present a new method for modeling electronically excited states that overcomes a key failing of linear response theory by allowing the underlying ground state ansatz to relax in the presence of an excitation. The method is variational, has a cost similar to ground state variational Monte Carlo, and admits both open and periodic boundary conditions. We present preliminary numerical results showing that, when paired with the Jastrow antisymmetric geminal power ansatz, the variation-after-response formalism delivers accuracies for valence and charge transfer single excitations on par with equation of motion coupled cluster, while surpassing coupled cluster's accuracy for excitations with significant doubly excited character.

  4. Growing lattice animals and Monte-Carlo methods

    NASA Astrophysics Data System (ADS)

    Reich, G. R.; Leath, P. L.

    1980-01-01

    We consider the search problems which arise in Monte-Carlo studies involving growing lattice animals. A new periodic hashing scheme (based on a periodic cell) especially suited to these problems is presented which takes advantage both of the connected geometric structure of the animals and the traversal-oriented nature of the search. The scheme is motivated by a physical analogy and tested numerically on compact and on ramified animals. In both cases the performance is found to be more efficient than random hashing, and to a degree depending on the compactness of the animals.

  5. AVATAR -- Automatic variance reduction in Monte Carlo calculations

    SciTech Connect

    Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D.

    1997-05-01

    AVATAR{trademark} (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application, Justine{trademark}, is a superset of MCNP{trademark} that automatically invokes THREEDANT{trademark} for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.

  6. A Post-Monte-Carlo Sensitivity Analysis Code

    2000-04-04

    SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e. with lower variance. The code identifies a group of sensitive variables, ranks them in the order of importance and also quantifies the relative importance among the sensitive variables.
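    As a sketch of this post-Monte-Carlo ranking idea, assuming a simple Pearson-correlation measure of sensitivity (the record does not state which measure SATOOL actually uses, so this is illustrative only):

```python
import math
import random
import statistics

def rank_sensitive_inputs(samples, outputs, names):
    """Rank inputs by the magnitude of their Pearson correlation with the
    output of a completed Monte Carlo run. samples is a list of input
    tuples; outputs is the corresponding list of output values."""
    def pearson(xs, ys):
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        den = math.sqrt(sum((a - mx) ** 2 for a in xs) *
                        sum((b - my) ** 2 for b in ys))
        return num / den
    scores = {nm: abs(pearson([s[i] for s in samples], outputs))
              for i, nm in enumerate(names)}
    # Highest-scoring (most "sensitive") inputs first.
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

    Inputs at the top of the ranking are the ones worth redefining with lower variance, per the abstract's argument.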

  7. Quantum Monte Carlo Simulation of Overpressurized Liquid {sup 4}He

    SciTech Connect

    Vranjes, L.; Boronat, J.; Casulleras, J.; Cazorla, C.

    2005-09-30

    A diffusion Monte Carlo simulation of superfluid {sup 4}He at zero temperature and pressures up to 275 bar is presented. Increasing the pressure beyond freezing ({approx}25 bar), the liquid enters the overpressurized phase in a metastable state. In this regime, we report results of the equation of state and the pressure dependence of the static structure factor, the condensate fraction, and the excited-state energy corresponding to the roton. Along this large pressure range, both the condensate fraction and the roton energy decrease but do not become zero. The roton energies obtained are compared with recent experimental data in the overpressurized regime.

  8. Element Agglomeration Algebraic Multilevel Monte-Carlo Library

    SciTech Connect

    2015-02-19

    ElagMC is a parallel C++ library for Multilevel Monte Carlo simulations with algebraically constructed coarse spaces. ElagMC enables Multilevel variance reduction techniques in the context of general unstructured meshes by using the specialized element-based agglomeration techniques implemented in ELAG (the Element-Agglomeration Algebraic Multigrid and Upscaling Library developed by U. Villa and P. Vassilevski and currently under review for public release). The ElagMC library can support different types of deterministic problems, including mixed finite element discretizations of subsurface flow problems.

  9. Element Agglomeration Algebraic Multilevel Monte-Carlo Library

    2015-02-19

    ElagMC is a parallel C++ library for Multilevel Monte Carlo simulations with algebraically constructed coarse spaces. ElagMC enables Multilevel variance reduction techniques in the context of general unstructured meshes by using the specialized element-based agglomeration techniques implemented in ELAG (the Element-Agglomeration Algebraic Multigrid and Upscaling Library developed by U. Villa and P. Vassilevski and currently under review for public release). The ElagMC library can support different types of deterministic problems, including mixed finite element discretizations of subsurface flow problems.

  10. Monte Carlo simulation of retinal light absorption by infants.

    PubMed

    Guo, Ya; Tan, Jinglu

    2015-02-01

    Retinal damage can occur in normal ambient lighting conditions. Infants are particularly vulnerable to retinal damage, and thousands of preterm infants sustain vision damage each year. The size of the ocular fundus affects retinal light absorption, but there is a lack of understanding of this effect for infants. In this work, retinal light absorption is simulated for different ocular fundus sizes, wavelengths, and pigment concentrations by using the Monte Carlo method. The results indicate that the neural retina light absorption per volume for infants can be two or more times that for adults. PMID:26366599

  11. Communication: Water on hexagonal boron nitride from diffusion Monte Carlo

    NASA Astrophysics Data System (ADS)

    Al-Hamdani, Yasmine S.; Ma, Ming; Alfè, Dario; von Lilienfeld, O. Anatole; Michaelides, Angelos

    2015-05-01

    Despite a recent flurry of experimental and simulation studies, an accurate estimate of the interaction strength of water molecules with hexagonal boron nitride is lacking. Here, we report quantum Monte Carlo results for the adsorption of a water monomer on a periodic hexagonal boron nitride sheet, which yield a water monomer interaction energy of -84 ± 5 meV. We use the results to evaluate the performance of several widely used density functional theory (DFT) exchange correlation functionals and find that they all deviate substantially. Differences in interaction energies between different adsorption sites are however better reproduced by DFT.

  12. Communication: Water on hexagonal boron nitride from diffusion Monte Carlo.

    PubMed

    Al-Hamdani, Yasmine S; Ma, Ming; Alfè, Dario; von Lilienfeld, O Anatole; Michaelides, Angelos

    2015-05-14

    Despite a recent flurry of experimental and simulation studies, an accurate estimate of the interaction strength of water molecules with hexagonal boron nitride is lacking. Here, we report quantum Monte Carlo results for the adsorption of a water monomer on a periodic hexagonal boron nitride sheet, which yield a water monomer interaction energy of -84 ± 5 meV. We use the results to evaluate the performance of several widely used density functional theory (DFT) exchange correlation functionals and find that they all deviate substantially. Differences in interaction energies between different adsorption sites are however better reproduced by DFT. PMID:25978876

  13. Communication: Variation after response in quantum Monte Carlo.

    PubMed

    Neuscamman, Eric

    2016-08-28

    We present a new method for modeling electronically excited states that overcomes a key failing of linear response theory by allowing the underlying ground state ansatz to relax in the presence of an excitation. The method is variational, has a cost similar to ground state variational Monte Carlo, and admits both open and periodic boundary conditions. We present preliminary numerical results showing that, when paired with the Jastrow antisymmetric geminal power ansatz, the variation-after-response formalism delivers accuracies for valence and charge transfer single excitations on par with equation of motion coupled cluster, while surpassing coupled cluster's accuracy for excitations with significant doubly excited character.

  14. Monte Carlo simulation of the Neutrino-4 experiment

    SciTech Connect

    Serebrov, A. P. Fomin, A. K.; Onegin, M. S.; Ivochkin, V. G.; Matrosov, L. N.

    2015-12-15

    Monte Carlo simulation of the two-section reactor antineutrino detector of the Neutrino-4 experiment is carried out. The scintillation-type detector is based on the inverse beta-decay reaction. The antineutrino is recorded by two successive signals from the positron and the neutron. The simulation of the detector sections and the active shielding is performed. As a result of the simulation, the distributions of photomultiplier signals from the positron and the neutron are obtained. The efficiency of the detector depending on the signal recording thresholds is calculated.

  15. Communication: Variation after response in quantum Monte Carlo.

    PubMed

    Neuscamman, Eric

    2016-08-28

    We present a new method for modeling electronically excited states that overcomes a key failing of linear response theory by allowing the underlying ground state ansatz to relax in the presence of an excitation. The method is variational, has a cost similar to ground state variational Monte Carlo, and admits both open and periodic boundary conditions. We present preliminary numerical results showing that, when paired with the Jastrow antisymmetric geminal power ansatz, the variation-after-response formalism delivers accuracies for valence and charge transfer single excitations on par with equation of motion coupled cluster, while surpassing coupled cluster's accuracy for excitations with significant doubly excited character. PMID:27586897

  16. Adaptively Learning an Importance Function Using Transport Constrained Monte Carlo

    SciTech Connect

    Booth, T.E.

    1998-06-22

    It is well known that a Monte Carlo estimate can be obtained with zero-variance if an exact importance function for the estimate is known. There are many ways that one might iteratively seek to obtain an ever more exact importance function. This paper describes a method that has obtained ever more exact importance functions that empirically produce an error that is dropping exponentially with computer time. The method described herein constrains the importance function to satisfy the (adjoint) Boltzmann transport equation. This constraint is provided by using the known form of the solution, usually referred to as the Case eigenfunction solution.
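    The zero-variance principle mentioned above is easy to demonstrate on a one-dimensional toy integral: when samples are drawn from the exact importance density, every per-sample estimate equals the answer. This is an illustration of the principle only, not the transport-constrained method of the record.

```python
import random

def importance_estimate(n, seed=7):
    """Estimate I = integral of 3x^2 over [0,1] (= 1) by sampling from
    the EXACT importance density p(x) = 3x^2. Each per-sample estimate
    f(x)/p(x) then equals I, so the sample variance is exactly zero."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n):
        # Inverse-CDF sampling of p(x) = 3x^2: x = u^(1/3), u in (0, 1].
        x = (1.0 - rng.random()) ** (1.0 / 3.0)
        f = 3.0 * x * x          # integrand
        p = 3.0 * x * x          # importance density (here: exact)
        estimates.append(f / p)
    mean = sum(estimates) / n
    var = sum((e - mean) ** 2 for e in estimates) / n
    return mean, var
```

    Any imperfect importance function makes f/p fluctuate around I, which is why iteratively refining the importance function (as the record describes) drives the error down.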

  17. Morphological evolution of growing crystals - A Monte Carlo simulation

    NASA Technical Reports Server (NTRS)

    Xiao, Rong-Fu; Alexander, J. Iwan D.; Rosenberger, Franz

    1988-01-01

    The combined effects of nutrient diffusion and surface kinetics on the crystal morphology were investigated using a Monte Carlo model to simulate the evolving morphology of a crystal growing from a two-component gaseous nutrient phase. The model combines nutrient diffusion, based on a modified diffusion-limited aggregation process, with anisotropic surface-attachment kinetics and surface diffusion. A variety of conditions, ranging from kinetic-controlled to diffusion-controlled growth, were examined. Successive transitions from compact faceted (dominant surface kinetics) to open dendritic morphologies (dominant volume diffusion) were obtained.
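
The diffusion-limited part of such a model can be sketched with a bare-bones on-lattice aggregation loop; the paper's model layers anisotropic attachment kinetics and surface diffusion on top of this. Here `stick_prob` is a stand-in for a kinetic attachment barrier:

```python
import random

def grow_cluster(n_particles=60, size=41, stick_prob=1.0, seed=3):
    """Grow a DLA-style cluster: walkers launched at the box edge random-walk
    until they sit next to the cluster and stick (with probability stick_prob),
    or leave the box."""
    rng = random.Random(seed)
    mid = size // 2
    cluster = {(mid, mid)}
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while len(cluster) < n_particles:
        k = rng.randrange(size)
        x, y = rng.choice([(0, k), (size - 1, k), (k, 0), (k, size - 1)])
        while True:
            if any((x + dx, y + dy) in cluster for dx, dy in steps) \
                    and rng.random() < stick_prob:
                cluster.add((x, y))
                break
            dx, dy = rng.choice(steps)
            nx, ny = x + dx, y + dy
            if not (0 <= nx < size and 0 <= ny < size):
                break                      # walker escaped; relaunch
            if (nx, ny) in cluster:
                continue                   # cannot step onto the cluster
            x, y = nx, ny
    return cluster
```

With `stick_prob = 1` (diffusion-dominated) the cluster grows open and dendritic; lowering it lets walkers explore the surface before attaching, which compacts the morphology — the transition the abstract describes.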

  18. Improved diffusion Monte Carlo and the Brownian fan

    NASA Astrophysics Data System (ADS)

    Weare, J.; Hairer, M.

    2012-12-01

    Diffusion Monte Carlo (DMC) is a workhorse of stochastic computing. It was invented forty years ago as the central component in a Monte Carlo technique for estimating various characteristics of quantum mechanical systems. Since then it has been applied in a huge number of fields, often as a central component in sequential Monte Carlo techniques (e.g. the particle filter). DMC computes averages of some underlying stochastic dynamics weighted by a functional of the path of the process. The weight functional could represent the potential term in a Feynman-Kac representation of a partial differential equation (as in quantum Monte Carlo) or it could represent the likelihood of a sequence of noisy observations of the underlying system (as in particle filtering). DMC alternates between an evolution step, in which a collection of samples of the underlying system are evolved for some short time interval, and a branching step, in which, according to the weight functional, some samples are copied and some samples are eliminated. Unfortunately, for certain choices of the weight functional, DMC fails to have a meaningful limit as one decreases the evolution time interval between branching steps. We propose a modification of the standard DMC algorithm. The new algorithm has a lower variance per workload, regardless of the regime considered. In particular, it makes it feasible to use DMC in situations where the ``naive'' generalization of the standard algorithm would be impractical, due to an exponential explosion of its variance. We numerically demonstrate the effectiveness of the new algorithm on a standard rare event simulation problem (probability of an unlikely transition in a Lennard-Jones cluster), as well as a high-frequency data assimilation problem. We then provide a detailed heuristic explanation of why, in the case of rare event simulation, the new algorithm is expected to converge to a limiting process as the underlying stepsize goes to 0.
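
The evolve/branch alternation can be sketched on a toy problem, the 1-D harmonic oscillator V(x) = x²/2, whose ground-state energy is exactly 0.5. This is plain textbook DMC, not the modified algorithm proposed here, and the population-control rule is a simple assumed choice:

```python
import math, random

def branch(walkers, weights, rng):
    """DMC branching: replicate/kill each walker with expectation equal to its
    weight (stochastic rounding), resetting all weights to one."""
    new = []
    for x, w in zip(walkers, weights):
        for _ in range(int(w + rng.random())):
            new.append(x)
    return new

def dmc_toy(n0=500, steps=50, dt=0.05, seed=7):
    """Toy DMC for V(x) = x^2/2: diffuse, weight by exp(-(V - E_ref)*dt),
    branch. E_ref tracks the walkers' mean potential with a small population
    correction; it should hover near the exact ground-state energy 0.5."""
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n0)]
    e_ref = 0.5
    for _ in range(steps):
        walkers = [x + rng.gauss(0.0, math.sqrt(dt)) for x in walkers]
        weights = [math.exp(-(0.5 * x * x - e_ref) * dt) for x in walkers]
        walkers = branch(walkers, weights, rng)
        v_mean = sum(0.5 * x * x for x in walkers) / len(walkers)
        e_ref = v_mean - 0.1 * math.log(len(walkers) / n0)  # population control
    return e_ref, len(walkers)
```

The pathology the paper addresses appears when the weights in the branching step become wildly uneven as dt shrinks; here they stay near one, so the naive scheme behaves.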

  19. Off-Lattice Monte Carlo Simulation of Supramolecular Polymer Architectures

    NASA Astrophysics Data System (ADS)

    Amuasi, H. E.; Storm, C.

    2010-12-01

    We introduce an efficient, scalable Monte Carlo algorithm to simulate cross-linked architectures of freely jointed and discrete wormlike chains. Bond movement is based on the discrete tractrix construction, which effects conformational changes that exactly preserve fixed-length constraints of all bonds. The algorithm reproduces known end-to-end distance distributions for simple, analytically tractable systems of cross-linked stiff and freely jointed polymers flawlessly, and is used to determine the effective persistence length of short bundles of semiflexible wormlike chains, cross-linked to each other. It reveals a possible regulatory mechanism in bundled networks: the effective persistence of bundles is controlled by the linker density.

  20. Experimental validation of plutonium ageing by Monte Carlo correlated sampling

    SciTech Connect

    Litaize, O.; Bernard, D.; Santamarina, A.

    2006-07-01

    Integral measurements of plutonium ageing were performed in two homogeneous MOX cores (MISTRAL2 and MISTRAL3) of the French MISTRAL programme between 1996 and 2000. The analysis of the MISTRAL2 experiment with the JEF-2.2 nuclear data library highlighted an underestimation of the {sup 241}Am capture cross section. The next experiment (MISTRAL3) did not lead to the same conclusion. This paper presents a new analysis performed with the recent JEFF-3.1 library and a Monte Carlo perturbation method (correlated sampling) available in the French TRIPOLI4 code. (authors)

  1. Application of Monte Carlo methods in tomotherapy and radiation biophysics

    NASA Astrophysics Data System (ADS)

    Hsiao, Ya-Yun

    Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published

  2. Analysis of real-time networks with monte carlo methods

    NASA Astrophysics Data System (ADS)

    Mauclair, C.; Durrieu, G.

    2013-12-01

    Communication networks in embedded systems are ever larger and more complex. A better understanding of the dynamics of these networks is necessary to use them well and to lower costs. Today's tools are able to compute upper bounds on the end-to-end delays that a packet sent through the network could suffer. However, in the case of asynchronous networks, those worst end-to-end delay (WEED) cases are rarely observed in practice or through simulations, because the situations that lead to worst-case scenarios are rare. A novel approach based on Monte Carlo methods is suggested to study the effects of asynchrony on performance.
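
The gap between an analytic worst-case bound and what sampling actually observes can be demonstrated with a toy hop-delay model; every number below is invented for illustration:

```python
import random

def end_to_end_delay(rng, n_hops=5):
    """Assumed per-hop model: 0.1 ms transmission plus a queuing delay that is
    usually tiny but occasionally large (bursty contention)."""
    total = 0.0
    for _ in range(n_hops):
        queuing = rng.uniform(0.0, 2.0) if rng.random() < 0.1 else rng.uniform(0.0, 0.05)
        total += 0.1 + queuing
    return total

def delay_quantile(q=0.999, n=20_000, seed=11):
    """Empirical q-quantile of the end-to-end delay from n Monte Carlo runs."""
    rng = random.Random(seed)
    delays = sorted(end_to_end_delay(rng) for _ in range(n))
    return delays[int(q * n)]
```

For this model the analytic worst case is 5 x (0.1 + 2.0) = 10.5 ms, yet even the 99.9th-percentile sampled delay sits well below it — the WEED case requires every hop to burst simultaneously, which essentially never happens.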

  3. Neutronic calculations for CANDU thorium systems using Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Saldideh, M.; Shayesteh, M.; Eshghi, M.

    2014-08-01

    In this paper, we have investigated the prospects of exploiting the world's rich thorium reserves using Canada Deuterium Uranium (CANDU) reactors. The analysis is performed using the Monte Carlo MCNP code in order to determine how long the reactor remains in a critical condition. Four different fuel compositions have been selected for analysis. We have obtained the infinite multiplication factor, k∞, under full-power operation of the reactor over 8 years. The neutron flux distribution in the full reactor core is also investigated.

  4. Monte Carlo simulation of retinal light absorption by infants.

    PubMed

    Guo, Ya; Tan, Jinglu

    2015-02-01

    Retinal damage can occur in normal ambient lighting conditions. Infants are particularly vulnerable to retinal damage, and thousands of preterm infants sustain vision damage each year. The size of the ocular fundus affects retinal light absorption, but there is a lack of understanding of this effect for infants. In this work, retinal light absorption is simulated for different ocular fundus sizes, wavelengths, and pigment concentrations by using the Monte Carlo method. The results indicate that the neural retina light absorption per volume for infants can be two or more times that for adults.

  5. Monte Carlo simulations of charge transport in heterogeneous organic semiconductors

    NASA Astrophysics Data System (ADS)

    Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta

    2015-03-01

    The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.

  6. PREFACE: First European Workshop on Monte Carlo Treatment Planning

    NASA Astrophysics Data System (ADS)

    Reynaert, Nick

    2007-07-01

    The "First European Workshop on Monte Carlo treatment planning" was an initiative of the European working group on Monte Carlo treatment planning (EWG-MCTP). It was organised at Ghent University (Belgium) on 22-25 October 2006. The meeting was very successful and was attended by 150 participants. The impressive list of invited speakers and the scientific contributions (posters and oral presentations) led to a very interesting program that was well appreciated by all attendees. In addition, the presence of seven vendors of commercial MCTP software systems provided serious added value to the workshop. For each vendor, a representative gave a presentation in a dedicated session, explaining the current status of their system. It is clear that, for "traditional" radiotherapy applications (using photon or electron beams), Monte Carlo dose calculations have become the state of the art, and are being introduced into almost all commercial treatment planning systems. Invited lectures illustrated that scientific challenges are currently associated with 4D applications (e.g. respiratory motion) and the introduction of MC dose calculations in inverse planning. It was also striking that the Monte Carlo technique is becoming very important in more novel treatment modalities such as BNCT, hadron therapy, stereotactic radiosurgery, Tomotherapy, etc. This emphasizes the continuously growing interest in MCTP. The people who attended the dosimetry session will certainly remember the high-level discussion on the determination of correction factors for different ion chambers used in small fields. The following proceedings will certainly confirm the high scientific level of the meeting. I would like to thank the members of the local organizing committee for all the hard work done before, during and after this meeting. The organisation of such an event is not a trivial task and it would not have been possible without the help of all my colleagues.
I would also like to thank

  7. Communication: Water on hexagonal boron nitride from diffusion Monte Carlo

    SciTech Connect

    Al-Hamdani, Yasmine S.; Ma, Ming; Michaelides, Angelos; Alfè, Dario; Lilienfeld, O. Anatole von

    2015-05-14

    Despite a recent flurry of experimental and simulation studies, an accurate estimate of the interaction strength of water molecules with hexagonal boron nitride is lacking. Here, we report quantum Monte Carlo results for the adsorption of a water monomer on a periodic hexagonal boron nitride sheet, which yield a water monomer interaction energy of −84 ± 5 meV. We use the results to evaluate the performance of several widely used density functional theory (DFT) exchange correlation functionals and find that they all deviate substantially. Differences in interaction energies between different adsorption sites are however better reproduced by DFT.

  8. Current status of the PSG Monte Carlo neutron transport code

    SciTech Connect

    Leppaenen, J.

    2006-07-01

    PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX-fuelled VENUS-2 reactor dosimetry benchmark. (authors)

  9. Importance-Sampling Monte Carlo Approach to Classical Spin Systems

    NASA Astrophysics Data System (ADS)

    Huang, Hsing-Mei

    A new approach for carrying out static Monte Carlo calculations of thermodynamic quantities for classical spin systems is proposed. Combining the ideas of coincidence counting and importance sampling, we formulate a scheme for obtaining Γ(E), the number of states at a fixed energy E, and use Γ(E) to compute thermodynamic properties. Using the Ising model as an example, we demonstrate that our procedure leads to accurate numerical results without excessive use of computer time. We also show that the procedure is easily extended to obtaining magnetic properties of the Ising model.
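
For a lattice small enough to enumerate, Γ(E) and the thermodynamics that follow from it can be computed exactly, which makes a useful check for any Monte Carlo estimate of Γ(E). A sketch for the 3×3 periodic Ising model (512 states):

```python
import itertools, math
from collections import Counter

def density_of_states(L=3):
    """Exact Gamma(E) for an L x L periodic Ising model by brute-force
    enumeration (feasible only for tiny L; the point of the Monte Carlo scheme
    is to estimate Gamma(E) when enumeration is impossible)."""
    gamma = Counter()
    for bits in itertools.product((-1, 1), repeat=L * L):
        s = [list(bits[i * L:(i + 1) * L]) for i in range(L)]
        # each site contributes its right and down bonds once
        E = -sum(s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
                 for i in range(L) for j in range(L))
        gamma[E] += 1
    return gamma

def internal_energy(gamma, T):
    """U(T) = sum_E E Gamma(E) e^{-E/T} / sum_E Gamma(E) e^{-E/T}."""
    z = sum(g * math.exp(-E / T) for E, g in gamma.items())
    return sum(E * g * math.exp(-E / T) for E, g in gamma.items()) / z
```

Once Γ(E) is in hand, any temperature follows from a single reweighting, which is exactly why estimating Γ(E) directly is attractive.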

  10. MONTE CARLO ADVANCES FOR THE EOLUS ASCI PROJECT

    SciTech Connect

    J. S. HENDRICK; G. W. MCKINNEY; L. J. COX

    2000-01-01

    The Eolus ASCI project includes parallel, 3-D transport simulation for various nuclear applications. The codes developed within this project provide neutral and charged particle transport, detailed interaction physics, numerous source and tally capabilities, and general geometry packages. One such code is MCNPW which is a general purpose, 3-dimensional, time-dependent, continuous-energy Monte Carlo fully-coupled N-Particle transport code. Significant advances are also being made in the areas of modern software engineering and parallel computing. These advances are described in detail.

  11. 3D Monte Carlo radiation transfer modelling of photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Campbell, C. Louise; Christison, Craig; Brown, C. Tom A.; Wood, Kenneth; Valentine, Ronan M.; Moseley, Harry

    2015-06-01

    The effects of ageing and skin type on Photodynamic Therapy (PDT) for different treatment methods have been theoretically investigated. A multilayered Monte Carlo Radiation Transfer model is presented where both daylight activated PDT and conventional PDT are compared. It was found that light penetrates deeper through older skin with a lighter complexion, which translates into a deeper effective treatment depth. The effect of ageing was found to be larger for darker skin types. The investigation further strengthens the usage of daylight as a potential light source for PDT where effective treatment depths of about 2 mm can be achieved.

  12. Optical Monte Carlo modeling of a true portwine stain anatomy

    NASA Astrophysics Data System (ADS)

    Barton, Jennifer K.; Pfefer, T. Joshua; Welch, Ashley J.; Smithies, Derek J.; Nelson, Jerry; van Gemert, Martin J.

    1998-04-01

    A unique Monte Carlo program capable of accommodating an arbitrarily complex geometry was used to determine the energy deposition in a true port wine stain anatomy. Serial histologic sections taken from a biopsy of a dark red, laser therapy resistant stain were digitized and used to create the program input for simulation at wavelengths of 532 and 585 nm. At both wavelengths, the greatest energy deposition occurred in the superficial blood vessels, and subsequently decreased with depth as the laser beam was attenuated. However, more energy was deposited in the epidermis and superficial blood vessels at 532 nm than at 585 nm.

  13. Implicit Monte Carlo Radiation Transport Simulations of Four Test Problems

    SciTech Connect

    Gentile, N

    2007-08-01

    Radiation transport codes, like almost all codes, are difficult to develop and debug. It is helpful to have small, easy to run test problems with known answers to use in development and debugging. It is also prudent to re-run test problems periodically during development to ensure that previous code capabilities have not been lost. We describe four radiation transport test problems with analytic or approximate analytic answers. These test problems are suitable for use in debugging and testing radiation transport codes. We also give results of simulations of these test problems performed with an Implicit Monte Carlo photonics code.

  14. Krylov-Projected Quantum Monte Carlo Method.

    PubMed

    Blunt, N S; Alavi, Ali; Booth, George H

    2015-07-31

    We present an approach to the calculation of arbitrary spectral, thermal, and excited state properties within the full configuration interaction quantum Monte Carlo framework. This is achieved via an unbiased projection of the Hamiltonian eigenvalue problem into a space of stochastically sampled Krylov vectors, thus enabling the calculation of real-frequency spectral and thermal properties and avoiding explicit analytic continuation. We use this approach to calculate temperature-dependent properties and one- and two-body spectral functions for various Hubbard models, as well as isolated excited states in ab initio systems. PMID:26274406

  15. Studying the information content of TMDs using Monte Carlo generators

    SciTech Connect

    Avakian, H.; Matevosyan, H.; Pasquini, B.; Schweitzer, P.

    2015-02-05

    Theoretical advances in studies of the nucleon structure have been spurred by recent measurements of spin and/or azimuthal asymmetries worldwide. One of the main challenges still remaining is the extraction of the parton distribution functions, generalized to describe transverse momentum and spatial distributions of partons from these observables with no or minimal model dependence. In this topical review we present the latest developments in the field with emphasis on requirements for Monte Carlo event generators, indispensable for studies of the complex 3D nucleon structure, and discuss examples of possible applications.

  16. Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates

    SciTech Connect

    Perfetti, Christopher M.; Rearden, Bradley T.

    2015-01-01

    This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies and their potential for identifying undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the potential to accurately predict the behavior of undersampling biases in the responses examined in this study.

  17. Variance reduction in Monte Carlo analysis of rarefied gas diffusion

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.

    1972-01-01

    The present analysis uses the Monte Carlo method to solve the problem of rarefied diffusion between parallel walls. The diffusing molecules are evaporated or emitted from one of two parallel walls and diffused through another molecular species. The analysis treats the diffusing molecule as undergoing a Markov random walk and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs the expected Markov walk payoff is retained but its variance is reduced so that the M. C. result has a much smaller error.
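
The weight trick described above — bias the transition probabilities, then fold a likelihood-ratio factor into the payoff so the expectation is unchanged — can be illustrated on a toy walk. The hitting-probability setup below is an invented example, not the rarefied-diffusion problem itself:

```python
import random

def hit_prob(p_up=0.5, target=8, max_steps=20, n=20_000, seed=5):
    """Estimate P(symmetric walk from 0 reaches `target` within max_steps).
    With p_up != 0.5 the walk is biased toward the target and each trajectory
    carries the likelihood-ratio weight (0.5/p_up) per up-step and
    (0.5/(1-p_up)) per down-step, so the estimator remains unbiased while the
    rare hits are sampled far more often and the variance drops."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x, w = 0, 1.0
        for _ in range(max_steps):
            if rng.random() < p_up:
                x += 1
                w *= 0.5 / p_up
            else:
                x -= 1
                w *= 0.5 / (1.0 - p_up)
            if x == target:
                total += w
                break
    return total / n
```

Both calls below estimate the same probability; the biased walk simply spends its samples where the payoff is nonzero.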

  18. Continuous-Estimator Representation for Monte Carlo Criticality Diagnostics

    SciTech Connect

    Kiedrowski, Brian C.; Brown, Forrest B.

    2012-06-18

    An alternate means of computing diagnostics for Monte Carlo criticality calculations is proposed. Overlapping spherical regions or estimators are placed covering the fissile material with a minimum center-to-center separation of the 'fission distance', which is defined herein, and a radius that is some multiple thereof. Fission neutron production is recorded based upon a weighted average of proximities to centers for all the spherical estimators. These scores are used to compute the Shannon entropy, and shown to reproduce the value, to within an additive constant, determined from a well-placed mesh by a user. The spherical estimators are also used to assess statistical coverage.
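
The Shannon-entropy diagnostic computed from such estimator scores is itself a one-line reduction; a sketch, where the scores are any non-negative per-region fission tallies:

```python
import math

def shannon_entropy(scores):
    """H = -sum_i p_i log2 p_i over the (spherical-estimator or mesh) fission
    source scores; empty bins contribute nothing. A converged source gives a
    stable H from generation to generation, up to statistical noise."""
    total = sum(scores)
    return -sum(s / total * math.log2(s / total) for s in scores if s > 0)
```

A uniform source over 2^k regions gives H = k bits, and a source collapsed into one region gives H = 0, which brackets the values seen while the fission source converges.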

  19. Commissioning of a Varian Clinac iX 6 MV photon beam using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Dirgayussa, I. Gde Eka; Yani, Sitti; Rhani, M. Fahdillah; Haryanto, Freddy

    2015-09-01

    Monte Carlo modelling of a linear accelerator is the first and most important step in Monte Carlo dose calculations in radiotherapy. Monte Carlo is considered today to be the most accurate and detailed calculation method in different fields of medical physics. In this research, we developed a photon beam model of a Varian Clinac iX 6 MV equipped with a Millennium MLC 120 for dose calculation purposes, using the BEAMnrc/DOSXYZnrc Monte Carlo system based on the underlying EGSnrc particle transport code. The Monte Carlo simulation for commissioning this linac head was divided into two stages: designing the head model using BEAMnrc and characterizing it using BEAMDP, then analyzing the difference between simulation and measurement data using DOSXYZnrc. In the first stage, to reduce simulation time, the virtual treatment head was built in two parts (a patient-dependent component and a patient-independent component). The incident electron energy was varied over 6.1, 6.2, 6.3, 6.4, and 6.6 MeV, with a source FWHM (full width at half maximum) of 1 mm. The phase-space file from the virtual model was characterized using BEAMDP. The MC calculations using DOSXYZnrc in a water phantom yielded percentage depth doses (PDDs) and beam profiles at a depth of 10 cm, which were compared with measurements. The commissioning is considered complete when the difference between measured and calculated relative depth-dose data along the central axis, and in the dose profile at a depth of 10 cm, is ≤ 5%. The effect of beam width on percentage depth doses and beam profiles was also studied. Results of the virtual model were in close agreement with measurements at an incident electron energy of 6.4 MeV. Our results showed that the photon beam width could be tuned using the large-field beam profile at the depth of maximum dose. The Monte Carlo model developed in this study accurately represents the Varian Clinac iX with the Millennium MLC 120 and can be used for reliable patient dose calculations.
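
The acceptance criterion (dose differences between measured and calculated depth-dose and profile curves within 5%) can be expressed as a small helper; the normalization to the measured maximum is one common convention and an assumption here, not necessarily the paper's exact definition:

```python
def max_relative_difference(measured, calculated):
    """Maximum point-by-point dose difference (%) between a measured and an
    MC-calculated curve, normalized to the measured maximum dose."""
    d_max = max(measured)
    return max(abs(m - c) / d_max * 100.0 for m, c in zip(measured, calculated))
```

Commissioning passes for a given incident electron energy when this value is at most 5% for both the central-axis PDD and the 10 cm profile.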

  1. Modeling focusing Gaussian beams in a turbid medium with Monte Carlo simulations.

    PubMed

    Hokr, Brett H; Bixler, Joel N; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J; Yakovlev, Vladislav V; Scully, Marlan O

    2015-04-01

    Monte Carlo techniques are the gold standard for studying light propagation in turbid media. Traditional Monte Carlo techniques are unable to include wave effects, such as diffraction; thus, these methods are unsuitable for exploring focusing geometries where a significant ballistic component remains at the focal plane. Here, a method is presented for accurately simulating photon propagation at the focal plane, in the context of a traditional Monte Carlo simulation. This is accomplished by propagating ballistic photons along trajectories predicted by Gaussian optics until they undergo an initial scattering event, after which they are propagated through the medium by a traditional Monte Carlo technique. Solving a known problem by building upon an existing Monte Carlo implementation allows this method to be easily implemented in a wide variety of existing Monte Carlo simulations, greatly improving the accuracy of those models for studying dynamics in a focusing geometry.
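
The hand-off point — where a photon leaves its ballistic Gaussian-optics trajectory and enters diffusive Monte Carlo transport — reduces to sampling an exponential free path. A sketch with an assumed scattering coefficient:

```python
import math, random

def first_scatter_depth(mu_s, rng):
    """Sample the depth of the first scattering event: free paths are
    exponential with mean 1/mu_s. Until this event the photon is ballistic
    and can be traced along its Gaussian-optics ray."""
    return -math.log(1.0 - rng.random()) / mu_s

def ballistic_fraction(z_focus, mu_s=10.0, n=100_000, seed=2):
    """Monte Carlo estimate of the fraction of photons still unscattered at
    the focal depth; the expectation is exp(-mu_s * z_focus) (Beer-Lambert)."""
    rng = random.Random(seed)
    return sum(first_scatter_depth(mu_s, rng) > z_focus for _ in range(n)) / n
```

When this fraction is non-negligible at the focal plane, ignoring the ballistic component (as a purely diffusive simulation does) visibly distorts the focal intensity, which is the regime the method targets.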

  2. Parallel domain decomposition methods in fluid models with Monte Carlo transport

    SciTech Connect

    Alme, H.J.; Rodrigues, G.H.; Zimmerman, G.B.

    1996-12-01

    In coupled Monte Carlo-finite element calculations with domain decomposition, it is important to use a decomposition that is suitable for the individual models. We have developed a code that simulates a Monte Carlo calculation on a massively parallel processor. This code is used to examine the load-balancing behavior of three domain decompositions for a Monte Carlo calculation. Results are presented.

  3. Automated Monte Carlo biasing for photon-generated electrons near surfaces.

    SciTech Connect

    Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick

    2009-09-01

    This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
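
A single weight-window pass — split particles above the window into fragments inside it, play Russian roulette with those below — can be sketched as follows. The window bounds here are illustrative; in the approach described they would come from the adjoint-flux calculation:

```python
import math, random

def apply_weight_window(particles, w_low=0.5, w_high=2.0, rng=None):
    """One weight-window pass over (state, weight) pairs. Splitting is exact;
    roulette survivors are promoted to weight w_surv with probability
    w / w_surv, so the expected total weight is preserved."""
    rng = rng or random.Random(13)
    w_surv = 0.5 * (w_low + w_high)
    out = []
    for x, w in particles:
        if w > w_high:
            n = math.ceil(w / w_high)          # split into in-window fragments
            out.extend([(x, w / n)] * n)
        elif w < w_low:
            if rng.random() < w / w_surv:      # Russian roulette
                out.append((x, w_surv))
        else:
            out.append((x, w))
    return out
```

Keeping all weights inside the window bounds the variance contributed by any single particle, which is the point of tuning the window from an adjoint (importance) solution.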

  4. Diffusion quantum Monte Carlo study of martensitic phase transition energetics: The case of phosphorene

    NASA Astrophysics Data System (ADS)

    Reeves, Kyle G.; Yao, Yi; Kanai, Yosuke

    2016-09-01

    Recent technical advances in dealing with finite-size errors make quantum Monte Carlo methods quite appealing for treating extended systems in electronic structure calculations, especially when commonly used density functional theory (DFT) methods might not be satisfactory. We present a theoretical study of the martensitic phase transition energetics of two-dimensional phosphorene employing the diffusion Monte Carlo (DMC) approach. The DMC calculation supports the DFT prediction of a rather diffusive barrier characterized by two transition states, in addition to confirming that the so-called black and blue phases of phosphorene are essentially degenerate. At the same time, the DFT calculations do not provide quantitative accuracy in describing the energy changes for the martensitic phase transition, even when a hybrid exchange-correlation functional is employed. We also discuss how mechanical strain influences the stabilities of the two phases of phosphorene.

  5. Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Degroh, Kim K.; Auer, Bruce M.; Gebauer, Linda; Edwards, Jonathan L.

    1993-01-01

    Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) assists in understanding the mechanisms involved. Thus the reliability of predicting the in-space durability of materials based on ground laboratory testing should be improved. A computational model which simulates atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of assumed mechanistic behavior of atomic oxygen interaction, based on in-space atomic oxygen erosion of unprotected polymers and ground laboratory atomic oxygen interaction with protected polymers, prediction of atomic oxygen interaction with protected polymers on LDEF was accomplished. However, the results of these predictions are not consistent with the observed LDEF results at defect sites in protected polymers. Improved agreement between observed LDEF results and Monte Carlo modeling predictions can be achieved by modifying the atomic oxygen interaction assumptions used in the model. LDEF atomic oxygen undercutting results, modeling assumptions, and implications are presented.

  6. Monte Carlo studies of diamagnetism and charge density wave order in the cuprate pseudogap regime

    NASA Astrophysics Data System (ADS)

    Hayward Sierens, Lauren; Achkar, Andrew; Hawthorn, David; Melko, Roger; Sachdev, Subir

    2015-03-01

    The pseudogap regime of the hole-doped cuprate superconductors is often characterized experimentally in terms of a substantial diamagnetic response and, from another point of view, in terms of strong charge density wave (CDW) order. We introduce a dimensionless ratio, R, that incorporates both diamagnetic susceptibility and the correlation length of CDW order, and therefore reconciles these two fundamental characteristics of the pseudogap. We perform Monte Carlo simulations on a classical model that considers angular fluctuations of a six-dimensional order parameter, and compare our Monte Carlo results for R with existing data from torque magnetometry and x-ray scattering experiments on YBa2Cu3O6+x. We achieve qualitative agreement, and also propose future experiments to further investigate the behaviour of this dimensionless ratio.

  7. Improved Full Configuration Interaction Monte Carlo for the Hubbard Model

    NASA Astrophysics Data System (ADS)

    Changlani, Hitesh; Holmes, Adam; Petruzielo, Frank; Chan, Garnet; Henley, C. L.; Umrigar, C. J.

    2012-02-01

    We consider the recently proposed full configuration interaction quantum Monte Carlo (FCI-QMC) method and its ``initiator'' extension, both of which promise to ameliorate the sign problem by utilizing the cancellation of positive and negative walkers in the Hilbert space of Slater determinants. While the method has been primarily used for quantum chemistry by A. Alavi and his co-workers [1,2], its application to lattice models in solid state physics has not been tested. We propose an improvement in the form of choosing a basis to make the wavefunction more localized in Fock space, which potentially also reduces the sign problem. We perform calculations on the 4x4 and 8x8 Hubbard models in real and momentum space and in a basis motivated by the reduced density matrix of a 2x2 real-space patch obtained from the exact diagonalization of a larger system in which it is embedded. We discuss our results for a range of fillings and U/t and compare them with previous auxiliary-field QMC and fixed-node Green's function Monte Carlo calculations. [1] George Booth, Alex Thom, Ali Alavi, J. Chem. Phys. 131, 050106 (2009); [2] D. Cleland, G. H. Booth, Ali Alavi, J. Chem. Phys. 132, 041103 (2010)

  8. Treatment planning for a small animal using Monte Carlo simulation

    SciTech Connect

    Chow, James C. L.; Leung, Michael K. K.

    2007-12-15

    The development of a small animal model for radiotherapy research requires a complete setup of customized imaging equipment, irradiators, and planning software that matches the sizes of the subjects. The purpose of this study is to develop and demonstrate the use of a flexible in-house research environment for treatment planning on small animals. The software package, called DOSCTP, provides a user-friendly platform for DICOM computed tomography-based Monte Carlo dose calculation using the EGSnrcMP-based DOSXYZnrc code. Validation of the treatment planning was performed by comparing the dose distributions for simple photon beam geometries calculated through the Pinnacle3 treatment planning system and measurements. A treatment plan for a mouse based on a CT image set by a 360-deg photon arc is demonstrated. It is shown that it is possible to create 3D conformal treatment plans for small animals with consideration of inhomogeneities using small photon beam field sizes in the diameter range of 0.5-5 cm, with conformal dose covering the target volume while sparing the surrounding critical tissue. It is also found that Monte Carlo simulation is suitable for treatment planning dose calculation in small animal anatomy, with voxel sizes about one order of magnitude smaller than those used for human treatment planning.

  9. Longitudinal functional principal component modeling via Stochastic Approximation Monte Carlo

    PubMed Central

    Martinez, Josue G.; Liang, Faming; Zhou, Lan; Carroll, Raymond J.

    2010-01-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented. PMID:20689648

  10. Infinite variance in fermion quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  11. Monte Carlo track structure for radiation biology and space applications

    NASA Technical Reports Server (NTRS)

    Nikjoo, H.; Uehara, S.; Khvostunov, I. G.; Cucinotta, F. A.; Wilson, W. E.; Goodhead, D. T.

    2001-01-01

    Over the past two decades, event-by-event Monte Carlo track structure codes have increasingly been used for biophysical modelling and radiotherapy. The advent of these codes has helped shed light on many aspects of microdosimetry and the mechanisms of damage by ionising radiation in the cell. These codes have continuously been modified to include new, improved cross sections and computational techniques. This paper provides a summary of input data for ionization, excitation and elastic scattering cross sections for event-by-event Monte Carlo track structure simulations for electrons and ions, in the form of parametric equations that make it easy to reproduce the data. Stopping power and radial distribution of dose are presented for ions and compared with experimental data. A model is described for simulation of the full slowing down of proton tracks in water in the range 1 keV to 1 MeV. Modelling and calculations are presented for the response of a TEPC proportional counter irradiated with 5 MeV alpha-particles. Distributions are presented for the wall and wall-less counters. The data show the contribution of indirect effects to the lineal energy distribution of the wall counter response even at such a low ion energy.

  12. High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin

    2014-06-01

    Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. The advance in computer technology allows the calculation of detailed flux distributions in both space and energy. In most cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte-Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of the temperature dependence of the continuous energy nuclear data has been investigated.

  13. Path integral hybrid Monte Carlo algorithm for correlated Bose fluids.

    PubMed

    Miura, Shinichi; Tanaka, Junji

    2004-02-01

    A path integral hybrid Monte Carlo (PIHMC) algorithm for strongly correlated Bose fluids has been developed. This is an extended version of our previous method [S. Miura and S. Okazaki, Chem. Phys. Lett. 308, 115 (1999)] applied to a model system consisting of noninteracting bosons. Our PIHMC method for correlated Bose fluids consists of two trial moves: sampling the path variables describing the system coordinates along imaginary time, and sampling a permutation of particle labels giving the boundary condition with respect to imaginary time. The path variables for a given permutation are generated by a hybrid Monte Carlo method based on path integral molecular dynamics techniques. Equations of motion for the path variables are formulated on the basis of a collective coordinate representation of the path, staging variables, to enhance the sampling efficiency. The permutation sampling to satisfy Bose-Einstein statistics is performed using the multilevel Metropolis method developed by Ceperley and Pollock [Phys. Rev. Lett. 56, 351 (1986)]. Our PIHMC method has been successfully applied to liquid helium-4 at a state point where the system is in a superfluid phase. Parameters determining the sampling efficiency are optimized such that the correlation among successive PIHMC steps is minimized. PMID:15268354

  14. Monte Carlo source convergence and the Whitesides problem

    SciTech Connect

    Blomquist, R. N.

    2000-02-25

    The issue of fission source convergence in Monte Carlo eigenvalue calculations is of interest because of the potential consequences of erroneous criticality safety calculations. In this work, the authors compare two different techniques to improve the source convergence behavior of standard Monte Carlo calculations applied to challenging source convergence problems. The first method, superhistory powering, attempts to avoid discarding important fission sites between generations by delaying stochastic sampling of the fission site bank until after several generations of multiplication. The second method, stratified sampling of the fission site bank, explicitly keeps the important sites even if conventional sampling would have eliminated them. The test problems are variants of Whitesides' Criticality of the World problem in which the fission site phase space was intentionally undersampled in order to induce marginally intolerable variability in local fission site populations. Three variants of the problem were studied, each with a different degree of coupling between fissionable pieces. Both the superhistory powering method and the stratified sampling method were shown to improve convergence behavior, although stratified sampling is more robust for the extreme case of no coupling. Neither algorithm completely eliminates the loss of the most important fissionable piece, and if coupling is absent, the lost piece cannot be recovered unless its sites from earlier generations have been retained. Finally, criteria for measuring source convergence reliability are proposed and applied to the test problems.

  15. Application of Monte Carlo codes to neutron dosimetry

    SciTech Connect

    Prevo, C.T.

    1982-06-15

    In neutron dosimetry, calculations enable one to predict the response of a proposed dosimeter before effort is expended to design and fabricate the neutron instrument or dosimeter. The nature of these calculations requires the use of computer programs that implement mathematical models representing the transport of radiation through attenuating media. Numerical, and in some cases analytical, solutions of these models can be obtained by one of several calculational techniques. All of these techniques are either approximate solutions to the well-known Boltzmann equation or are based on kernels obtained from solutions to the equation. The Boltzmann equation is a precise mathematical description of neutron behavior in terms of position, energy, direction, and time. The solution of the transport equation represents the average value of the particle flux density. Integral forms of the transport equation are generally regarded as the formal basis for the Monte Carlo method, the results of which can in principle be made to approach the exact solution. This paper focuses on the Monte Carlo technique.
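    As a minimal illustration of the Monte Carlo technique for transport problems (a sketch, not a dosimetry code): the uncollided transmission of particles through a slab can be estimated by sampling exponential free paths, and checked against the analytic answer exp(-Σ_t d). The cross section and thickness below are illustrative values.

```python
import math
import random

def transmission_mc(sigma_t, thickness, n_particles=200_000, seed=1):
    """Estimate uncollided transmission through a slab by Monte Carlo.

    Each particle's free path is sampled from the exponential
    distribution p(s) = sigma_t * exp(-sigma_t * s); the particle is
    transmitted if the sampled path exceeds the slab thickness."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        s = -math.log(rng.random()) / sigma_t  # sampled free path
        if s > thickness:
            transmitted += 1
    return transmitted / n_particles

estimate = transmission_mc(sigma_t=1.0, thickness=2.0)
exact = math.exp(-2.0)  # analytic uncollided transmission for comparison
```

With 200,000 histories the statistical error is below one percent, illustrating how the Monte Carlo estimate approaches the exact solution as the number of histories grows.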

  16. Stratified source-sampling techniques for Monte Carlo eigenvalue analysis.

    SciTech Connect

    Mohamed, A.

    1998-07-10

    In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo ''Eigenvalue of the World'' problem. Argonne presented a paper, at that session, in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different Eigenvalue of the World configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely-coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result over that if conventional source-sampling methods are used. However, this gain in reliability is substantially less than that observed in the model-problem results.
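    The difference between conventional and stratified source-sampling can be sketched with a toy fission-site bank (an illustration, not the code used in the paper): conventional resampling with replacement can lose every site of a weakly populated piece, while stratified sampling explicitly keeps at least one site per piece.

```python
import random

def conventional_resample(bank, n, rng):
    """Sample n fission sites with replacement from the whole bank."""
    return [rng.choice(bank) for _ in range(n)]

def stratified_resample(bank, n, rng):
    """Keep at least one site per fissionable piece (stratum), then
    fill the remainder by conventional sampling."""
    pieces = sorted(set(bank))
    kept = list(pieces)  # one guaranteed site per piece
    kept += [rng.choice(bank) for _ in range(n - len(pieces))]
    return kept

rng = random.Random(0)
# Piece 1 holds only 1% of the fission sites in this toy bank.
bank = [0] * 99 + [1]
lost = sum(1 not in conventional_resample(bank, 100, rng)
           for _ in range(1000))
kept_always = all(1 in stratified_resample(bank, 100, rng)
                  for _ in range(1000))
```

In this toy setup, conventional resampling loses the weakly populated piece in roughly a third of the generations (probability 0.99^100 per generation), while the stratified scheme never does.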

  17. Utilizing Monte Carlo Simulations to Optimize Institutional Empiric Antipseudomonal Therapy

    PubMed Central

    Tennant, Sarah J.; Burgess, Donna R.; Rybak, Jeffrey M.; Martin, Craig A.; Burgess, David S.

    2015-01-01

    Pseudomonas aeruginosa is a common pathogen implicated in nosocomial infections with increasing resistance to a limited arsenal of antibiotics. Monte Carlo simulation provides antimicrobial stewardship teams with an additional tool to guide empiric therapy. We modeled empiric therapies with antipseudomonal β-lactam antibiotic regimens to determine which were most likely to achieve a probability of target attainment (PTA) of ≥90%. Microbiological data for P. aeruginosa were reviewed for 2012. Antibiotics modeled for intermittent and prolonged infusion were aztreonam, cefepime, meropenem, and piperacillin/tazobactam. Using minimum inhibitory concentrations (MICs) from institution-specific isolates, and pharmacokinetic and pharmacodynamic parameters from previously published studies, a 10,000-subject Monte Carlo simulation was performed for each regimen to determine PTA. MICs from 272 isolates were included in this analysis. No intermittent infusion regimens achieved PTA ≥90%. Prolonged infusions of cefepime 2000 mg Q8 h, meropenem 1000 mg Q8 h, and meropenem 2000 mg Q8 h demonstrated PTA of 93%, 92%, and 100%, respectively. Prolonged infusions of piperacillin/tazobactam 4.5 g Q6 h and aztreonam 2 g Q8 h failed to achieve PTA ≥90% but demonstrated PTA of 81% and 73%, respectively. Standard doses of β-lactam antibiotics as intermittent infusion did not achieve 90% PTA against P. aeruginosa isolated at our institution; however, some prolonged infusions were able to achieve these targets. PMID:27025644
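    A hedged sketch of the PTA calculation described above, using a hypothetical one-compartment IV-bolus model: clearance and volume are drawn from assumed log-normal distributions, an MIC is drawn from an illustrative isolate list, and the target of ≥60% free time above MIC is an assumption. None of these values are the study's published pharmacokinetic inputs.

```python
import math
import random

def pta_monte_carlo(dose_mg, tau_h, mics, n=10_000, target=0.6, seed=7):
    """Probability of target attainment (PTA) for a hypothetical
    one-compartment IV-bolus beta-lactam regimen.

    Patient clearance and volume are drawn from assumed log-normal
    distributions; an MIC is drawn from the isolate list; the target
    is time above MIC >= 60% of the dosing interval (assumed)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        cl = rng.lognormvariate(math.log(8.0), 0.3)   # L/h (assumed)
        v = rng.lognormvariate(math.log(18.0), 0.25)  # L (assumed)
        k = cl / v                                    # elimination rate
        mic = rng.choice(mics)
        c0 = dose_mg / v                              # peak concentration
        # Time until C(t) = c0 * exp(-k t) falls below the MIC
        t_above = math.log(c0 / mic) / k if c0 > mic else 0.0
        if min(t_above, tau_h) / tau_h >= target:
            hits += 1
    return hits / n

# Hypothetical institutional MIC distribution (mg/L)
pta = pta_monte_carlo(dose_mg=2000, tau_h=8, mics=[1, 2, 4, 8])
```

Each of the 10,000 virtual subjects gets its own pharmacokinetic draw and MIC, and PTA is simply the fraction of subjects attaining the pharmacodynamic target.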

  18. Quantum Monte Carlo of ThO2

    NASA Astrophysics Data System (ADS)

    Hu, Shuming; Mitas, Lubos

    2012-02-01

    Solid thorium dioxide is a unique optical and heat-resistant actinide material with a large gap and cohesion. It is a diamagnet, unlike a number of other similar actinide oxides. We investigate the electronic structure of ThO2 using Density Functional Theory (DFT) and quantum Monte Carlo (QMC) methods. We adopt Stuttgart RLC and RSC effective core potentials (pseudopotentials) for the Th atom. In the DFT calculations, some of the properties are verified in all-electron calculations using the FLAPW techniques. Using fixed-node diffusion Monte Carlo we calculate the ground state and several excited states, from which we estimate the cohesion and the band gap. Simulation cells of several sizes are used to estimate and reduce the finite size effects. We compare the QMC results with recent DFT calculations using several types of functionals, including hybrids such as PBE0 and HSE. Insights from the QMC calculations provide an understanding of correlations beyond the DFT approaches and pave the way for accurate electronic structure calculations of other actinide materials.

  19. Monte Carlo Simulation of Sudden Death Bearing Testing

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2003-01-01

    Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life, and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30 960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not of the total number of bearings tested. Variations in L10 life as a function of the number of bearings failed were similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or the L50 life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
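    A minimal sketch of the statistical idea behind sudden-death testing (assumed Weibull parameters, not the study's bearing data): the first failure of a group of m bearings is itself Weibull distributed with the scale reduced by a factor m^(1/β), which a Monte Carlo simulation of group minima reproduces.

```python
import math
import random

def weibull_life(rng, beta, eta):
    """Sample one bearing life from a two-parameter Weibull distribution
    via inverse-transform sampling."""
    return eta * (-math.log(rng.random())) ** (1.0 / beta)

def sudden_death_minima(rng, beta, eta, group_size, n_groups):
    """First-failure life of each sudden-death group of bearings."""
    return [min(weibull_life(rng, beta, eta) for _ in range(group_size))
            for _ in range(n_groups)]

rng = random.Random(42)
beta, eta, m = 1.5, 100.0, 8  # illustrative Weibull slope, scale, group size
minima = sudden_death_minima(rng, beta, eta, m, 5000)

# Theory: the group minimum is Weibull(beta, eta / m**(1/beta)), so its
# mean is (eta / m**(1/beta)) * Gamma(1 + 1/beta).
theory_mean = eta / m ** (1 / beta) * math.gamma(1 + 1 / beta)
sample_mean = sum(minima) / len(minima)
```

This scale relation is what lets a sudden-death analysis recover the population Weibull parameters from first failures only, at the cost of higher variance per failure observed.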

  20. Dynamical Monte Carlo methods for plasma-surface reactions

    NASA Astrophysics Data System (ADS)

    Guerra, Vasco; Marinov, Daniil

    2016-08-01

    Different dynamical Monte Carlo algorithms to investigate molecule formation on surfaces are developed, evaluated and compared with the deterministic approach based on reaction-rate equations. These include a null event algorithm, the n-fold way/BKL algorithm and a ‘hybrid’ variant of the latter. NO2 formation by NO oxidation on Pyrex and O recombination on silica with the formation of O2 are taken as case studies. The influence of the grid size on the CPU calculation time and the accuracy of the results is analysed. The role of Langmuir–Hinshelwood recombination involving two physisorbed atoms and the effect of back diffusion and its inclusion in a deterministic formulation are investigated and discussed. It is shown that dynamical Monte Carlo schemes are flexible, simple to implement, easily describe elementary processes that are not straightforward to include in deterministic simulations, can run very efficiently if appropriately chosen and give highly reliable results. Moreover, the present approach provides a relatively simple procedure to describe fully coupled surface and gas phase chemistries.
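    A toy sketch of the n-fold way/BKL dynamical Monte Carlo scheme discussed above (illustrative adsorption/desorption rates, not the paper's NO2 or O2 surface chemistry): a process class is selected with probability proportional to its total rate and time is advanced by an exponentially distributed waiting time, so the steady-state coverage can be checked against the rate-equation result k_ads/(k_ads + k_des).

```python
import math
import random

def kmc_coverage(n_sites, k_ads, k_des, t_end, seed=3):
    """n-fold-way (BKL) kinetic Monte Carlo for a toy surface: empty
    sites adsorb at rate k_ads, occupied sites desorb at rate k_des.
    Returns the time-averaged coverage over the run."""
    rng = random.Random(seed)
    occupied, t = 0, 0.0
    cov_time = 0.0
    while t < t_end:
        r_ads = k_ads * (n_sites - occupied)  # total adsorption rate
        r_des = k_des * occupied              # total desorption rate
        total = r_ads + r_des
        dt = -math.log(rng.random()) / total  # exponential waiting time
        cov_time += (occupied / n_sites) * min(dt, t_end - t)
        t += dt
        # Pick the process class with probability proportional to its rate
        if rng.random() * total < r_ads:
            occupied += 1
        else:
            occupied -= 1
    return cov_time / t_end

cov = kmc_coverage(n_sites=500, k_ads=1.0, k_des=3.0, t_end=200.0)
steady = 1.0 / (1.0 + 3.0)  # rate-equation steady state k_ads/(k_ads+k_des)
```

Because every step executes an event, no computer time is wasted on rejected null events, which is the efficiency advantage of the n-fold way over naive fixed-timestep schemes.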

  1. Monte Carlo QSAR models for predicting organophosphate inhibition of acetylcholinesterase.

    PubMed

    Veselinović, J B; Nikolić, G M; Trutić, N V; Živković, J V; Veselinović, A M

    2015-06-01

    A series of 278 organophosphate compounds acting as acetylcholinesterase inhibitors has been studied. The Monte Carlo method was used as a tool for building one-variable quantitative structure-activity relationship (QSAR) models for acetylcholinesterase inhibition activity, based on the principle that the target endpoint is treated as a random event. Bimolecular rate constants were used as the activity. The QSAR models were based on optimal descriptors obtained from the Simplified Molecular Input-Line Entry System (SMILES) used for the representation of molecular structure. Two modelling approaches were examined: (1) a 'classic' training-test system, where the QSAR model was built with one random split into a training, test and validation set; and (2) correlation-balance-based QSAR models, built with two random splits into a sub-training, calibration, test and validation set. The DModX method was used for defining the applicability domain. The obtained results suggest that the studied activity can be determined by applying QSAR models calculated with the Monte Carlo method, since the statistical quality of all the models built was very good. Finally, structural indicators for the increase and the decrease of the bimolecular rate constant are defined. The possibility of using these results for the computer-aided design of new organophosphate compounds is presented.

  2. On the time scale associated with Monte Carlo simulations

    SciTech Connect

    Bal, Kristof M. Neyts, Erik C.

    2014-11-28

    Uniform-acceptance force-bias Monte Carlo (fbMC) methods have been shown to be a powerful technique to access longer timescales in atomistic simulations allowing, for example, phase transitions and growth. Recently, a new fbMC method, the time-stamped force-bias Monte Carlo (tfMC) method, was derived with inclusion of an estimated effective timescale; this timescale, however, does not seem able to explain some of the successes of the method. In this contribution, we therefore explicitly quantify the effective timescale tfMC is able to access for a variety of systems, namely a simple single-particle, one-dimensional model system, the Lennard-Jones liquid, an adatom on the Cu(100) surface, a silicon crystal with point defects and a highly defected graphene sheet, in order to gain new insights into the mechanisms by which tfMC operates. It is found that considerable boosts, up to three orders of magnitude compared to molecular dynamics, can be achieved for solid state systems by lowering the apparent activation barrier of occurring processes, while not requiring any system-specific input or modifications of the method. We furthermore address the pitfalls of using the method as a replacement or complement of molecular dynamics simulations, its ability to explicitly describe correct dynamics and reaction mechanisms, and the association of timescales with MC simulations in general.

  3. A pure-sampling quantum Monte Carlo algorithm

    SciTech Connect

    Ospadov, Egor; Rothstein, Stuart M.

    2015-01-14

    The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof of principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems.

  4. Simulating oblique incident irradiation using the BEAMnrc Monte Carlo code.

    PubMed

    Downes, P; Spezi, E

    2009-04-01

    A new source for the simulation of oblique incident irradiation has been developed for the BEAMnrc Monte Carlo code. In this work, we describe a method for the simulation of any component that is rotated at some angle relative to the central axis of the modelled radiation unit. The performance of the new BEAMnrc source was validated against experimental measurements. The comparison with ion chamber data showed very good agreement between experiments and calculation for a number of oblique irradiation angles ranging from 0 to 30 degrees. The routine was also cross-validated, in geometrically equivalent conditions, against a different radiation source available in the DOSXYZnrc code. The test showed excellent consistency between the two routines. The new radiation source can be particularly useful for the Monte Carlo simulation of radiation units in which the radiation beam is tilted with respect to the unit's central axis. To highlight this, a modern cone-beam CT unit is modelled using this new source and validated against measurements.

  5. Condensed history Monte Carlo methods for photon transport problems

    PubMed Central

    Bhan, Katherine; Spanier, Jerome

    2007-01-01

    We study methods for accelerating Monte Carlo simulations that retain most of the accuracy of conventional Monte Carlo algorithms. These methods – called Condensed History (CH) methods – have been very successfully used to model the transport of ionizing radiation in turbid systems. Our primary objective is to determine whether or not such methods might apply equally well to the transport of photons in biological tissue. In an attempt to unify the derivations, we invoke results obtained first by Lewis, Goudsmit and Saunderson and later improved by Larsen and Tolar. We outline how two of the most promising of the CH models – one based on satisfying certain similarity relations and the second making use of a scattering phase function that permits only discrete directional changes – can be developed using these approaches. The main idea is to exploit the connection between the space-angle moments of the radiance and the angular moments of the scattering phase function. We compare the results obtained when the two CH models studied are used to simulate an idealized tissue transport problem. The numerical results support our findings based on the theoretical derivations and suggest that CH models should play a useful role in modeling light-tissue interactions. PMID:18548128

  6. Improved criticality convergence via a modified Monte Carlo iteration method

    SciTech Connect

    Booth, Thomas E; Gubernatis, James E

    2009-01-01

    Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is a convergence rate to the dominant eigenfunction of |k3|/k1 instead of |k2|/k1. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must sum somehow to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this sometimes can be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has some stability problems. We also show that a simple method deals with this in an effective, but ad hoc manner.
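    The convergence-rate argument can be illustrated deterministically (a sketch on a diagonal toy operator with an assumed spectrum, not the Monte Carlo implementation): subtracting the second eigenfunction each iteration changes the error decay from (k2/k1)^n to (k3/k1)^n.

```python
def power_iteration(diag, x, n_steps, deflate_second=False):
    """Power iteration on a diagonal operator with eigenvalues `diag`
    (sorted descending).  If deflate_second is True, the component along
    the second eigenfunction is subtracted exactly each step, so the
    error decays as (k3/k1)**n instead of (k2/k1)**n."""
    for _ in range(n_steps):
        x = [d * xi for d, xi in zip(diag, x)]
        if deflate_second:
            x[1] = 0.0  # exact subtraction of the second eigenfunction
        norm = max(abs(xi) for xi in x)
        x = [xi / norm for xi in x]
    return x

k = [1.0, 0.6, 0.3]          # assumed spectrum: k1 > k2 > k3
x0 = [1.0, 1.0, 1.0]
plain = power_iteration(k, x0, 20)
deflated = power_iteration(k, x0, 20, deflate_second=True)

# Error relative to the dominant eigenfunction [1, 0, 0]:
err_plain = max(abs(plain[1]), abs(plain[2]))  # decays as (0.6)**n
err_deflated = abs(deflated[2])                # decays as (0.3)**n
```

In the Monte Carlo setting the subtraction is only stochastic and requires the positive/negative-weight cancellation described above, but the rate comparison is the same.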

  7. Energy Modulated Photon Radiotherapy: A Monte Carlo Feasibility Study

    PubMed Central

    Zhang, Ying; Feng, Yuanming; Ming, Xin

    2016-01-01

    A novel treatment modality termed energy modulated photon radiotherapy (EMXRT) was investigated. The first step of EMXRT was to determine the beam energy for each gantry angle/anatomy configuration from a pool of photon energy beams (2 to 10 MV) with a newly developed energy selector. An inverse planning system using a gradient search algorithm was then employed to optimize the photon beam intensity of various beam energies based on presimulated Monte Carlo pencil beam dose distributions in the patient anatomy. Finally, 3D dose distributions in six patients with different tumor sites were simulated with the Monte Carlo method and compared between EMXRT plans and clinical IMRT plans. Compared to the current IMRT technique, the proposed EMXRT method could offer a better paradigm for the radiotherapy of lung cancers and pediatric brain tumors in terms of normal tissue sparing and integral dose. For prostate, head and neck, spine, and thyroid lesions, the EMXRT plans were generally comparable to the IMRT plans. Our feasibility study indicated that lower energy (<6 MV) photon beams could be considered in modern radiotherapy treatment planning to achieve more personalized care for individual patients with dosimetric gains. PMID:26977413

  8. CMS Monte Carlo production in the WLCG computing grid

    NASA Astrophysics Data System (ADS)

    Hernández, J. M.; Kreuzer, P.; Mohapatra, A.; Filippis, N. D.; Weirdt, S. D.; Hof, C.; Wakefield, S.; Guan, W.; Khomitch, A.; Fanfani, A.; Evans, D.; Flossdorf, A.; Maes, J.; Mulders, P. v.; Villella, I.; Pompili, A.; My, S.; Abbrescia, M.; Maggi, G.; Donvito, G.; Caballero, J.; Sanches, J. A.; Kavka, C.; Lingen, F. v.; Bacchi, W.; Codispoti, G.; Elmer, P.; Eulisse, G.; Lazaridis, C.; Kalini, S.; Sarkar, S.; Hammad, G.

    2008-07-01

    Monte Carlo production in CMS has received a major boost in performance and scale since the previous CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, and registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of production scale by about an order of magnitude, capable of running in parallel on the order of ten thousand jobs and yielding more than two million events per day.

  9. Modeling multileaf collimators with the PEREGRINE Monte Carlo

    SciTech Connect

    Albright, N; Fujino, D H; J Wieczorek

    1999-03-01

    Multileaf collimators (MLCs) are becoming increasingly important for beam shaping and intensity modulated radiation therapy (IMRT). Their unique design can introduce subtle effects in the patient/phantom dose distribution. The PEREGRINE 3D Monte Carlo dose calculation system predicts dose by implementing a full Monte Carlo simulation of the beam delivery and patient/phantom system. As such, it provides a powerful tool to explore dosimetric effects of MLC designs. We have installed a new MLC modeling package into PEREGRINE. This package simulates full photon and electron transport in the MLC and includes tongue-and-groove construction and curved or straight leaf ends in the leaf shape geometry. We tested the accuracy of the PEREGRINE MLC package by comparing PEREGRINE predictions with ion chamber, diode, and photographic film measurements taken with a Varian 2100C using 6 and 18 MV photon beams. Profile and depth dose measurements were made for the MLC configured into annulus and comb patterns. In all cases, PEREGRINE modeled these measurements to within experimental uncertainties. Our results demonstrate PEREGRINE's accuracy for modeling MLC characteristics, and suggest that PEREGRINE would be an ideal tool to explore issues such as (1) underdosing between leaves due to the "tongue-and-groove" effect when dose from multiple MLC patterns is added together, (2) radiation leakage in the bullnose region, and (3) dose under a single leaf due to scatter in the patient.

  10. On the time scale associated with Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Bal, Kristof M.; Neyts, Erik C.

    2014-11-01

    Uniform-acceptance force-bias Monte Carlo (fbMC) methods have been shown to be a powerful technique to access longer timescales in atomistic simulations, allowing, for example, the study of phase transitions and growth. Recently, a new fbMC method, the time-stamped force-bias Monte Carlo (tfMC) method, was derived with inclusion of an estimated effective timescale; this timescale, however, does not seem able to explain some of the successes of the method. In this contribution, we therefore explicitly quantify the effective timescale tfMC is able to access for a variety of systems, namely a simple single-particle, one-dimensional model system, the Lennard-Jones liquid, an adatom on the Cu(100) surface, a silicon crystal with point defects and a highly defected graphene sheet, in order to gain new insights into the mechanisms by which tfMC operates. It is found that considerable boosts, up to three orders of magnitude compared to molecular dynamics, can be achieved for solid state systems by lowering the apparent activation barrier of occurring processes, while not requiring any system-specific input or modifications of the method. We furthermore address the pitfalls of using the method as a replacement or complement of molecular dynamics simulations, its ability to explicitly describe correct dynamics and reaction mechanisms, and the association of timescales to MC simulations in general.
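
The core idea of uniform-acceptance force-bias MC can be illustrated in one dimension. This is a minimal sketch, not the authors' tfMC code: trial displacements xi in [-d, d] are drawn with weight exp(beta*f*xi/2), i.e. biased along the instantaneous force, and every move is accepted; for small d this approximately reproduces Boltzmann sampling of a harmonic well:

```python
import math, random

random.seed(1)
beta, k, d = 1.0, 1.0, 0.2          # inverse temperature, spring constant, max step
x, samples = 0.0, []
for step in range(400000):
    f = -k * x                       # force of the harmonic potential U = k x^2 / 2
    while True:                      # rejection-sample xi ~ exp(beta*f*xi/2) on [-d, d]
        xi = random.uniform(-d, d)
        if random.random() < math.exp(0.5 * beta * (f * xi - abs(f) * d)):
            break
    x += xi                          # uniform acceptance: every biased move is taken
    if step > 50000:
        samples.append(x * x)

mean_x2 = sum(samples) / len(samples)
# For small d this reproduces the Boltzmann value <x^2> = 1/(beta*k) = 1.
assert abs(mean_x2 - 1.0) < 0.1
```

The bias toward the force direction is what lets fbMC take productive steps where unbiased Metropolis moves would mostly be rejected.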

  11. Variational Monte Carlo investigation of SU(N) Heisenberg chains

    NASA Astrophysics Data System (ADS)

    Dufour, Jérôme; Nataf, Pierre; Mila, Frédéric

    2015-05-01

    Motivated by recent experimental progress in the context of ultracold multicolor fermionic atoms in optical lattices, we have investigated the properties of the SU(N) Heisenberg chain with totally antisymmetric irreducible representations, the effective model of Mott phases with m particles per site. Using variational Monte Carlo based on Gutzwiller projected fermionic wave functions, we have been able to verify the predictions of Abelian bosonization for a representative number of cases with N ≤ 10 and m ≤ N/2, and we have shown that the opening of a gap is associated with a spontaneous dimerization or trimerization depending on the value of m and N. We have also investigated the marginal cases where Abelian bosonization did not lead to any prediction. In these cases, variational Monte Carlo predicts that the ground state is critical with exponents consistent with conformal field theory.

  12. Dynamical Monte Carlo methods for plasma-surface reactions

    NASA Astrophysics Data System (ADS)

    Guerra, Vasco; Marinov, Daniil

    2016-08-01

    Different dynamical Monte Carlo algorithms to investigate molecule formation on surfaces are developed, evaluated and compared with the deterministic approach based on reaction-rate equations. These include a null event algorithm, the n-fold way/BKL algorithm and a 'hybrid' variant of the latter. NO2 formation by NO oxidation on Pyrex and O recombination on silica with the formation of O2 are taken as case studies. The influence of the grid size on the CPU calculation time and the accuracy of the results is analysed. The role of Langmuir-Hinshelwood recombination involving two physisorbed atoms and the effect of back diffusion and its inclusion in a deterministic formulation are investigated and discussed. It is shown that dynamical Monte Carlo schemes are flexible, simple to implement, easily describe elementary processes that are not straightforward to include in deterministic simulations, can run very efficiently if appropriately chosen and give highly reliable results. Moreover, the present approach provides a relatively simple procedure to describe fully coupled surface and gas phase chemistries.
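
The n-fold way/BKL algorithm mentioned above can be sketched with a toy adsorption/desorption model (invented rates, not the paper's NO2 or O-recombination chemistry): at each step one event class is chosen with probability proportional to its total rate, and the clock advances by an exponential waiting time:

```python
import math, random

random.seed(2)
n_sites, k_ads, k_des = 1000, 1.0, 3.0   # toy rate constants
occupied = 0
t, t_end, area = 0.0, 200.0, 0.0
while t < t_end:
    r_ads = k_ads * (n_sites - occupied)  # total adsorption rate (empty sites)
    r_des = k_des * occupied              # total desorption rate (occupied sites)
    r_tot = r_ads + r_des
    dt = -math.log(random.random()) / r_tot        # exponential waiting time
    area += occupied / n_sites * min(dt, t_end - t)  # time-average the coverage
    t += dt
    if random.random() * r_tot < r_ads:   # pick the event class by its rate
        occupied += 1
    else:
        occupied -= 1

coverage = area / t_end
# Detailed balance gives the Langmuir coverage k_ads / (k_ads + k_des) = 0.25.
assert abs(coverage - 0.25) < 0.02
```

Because every step executes an event, the scheme has no rejected moves, which is why BKL-type algorithms run efficiently when rates span many orders of magnitude.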

  13. Monte Carlo field-theoretic simulations of a homopolymer blend

    NASA Astrophysics Data System (ADS)

    Spencer, Russell; Matsen, Mark

    Fluctuation corrections to the macrophase segregation transition (MST) in a symmetric homopolymer blend are examined using Monte Carlo field-theoretic simulations (MC-FTS). This technique involves treating interactions between unlike monomers using standard Monte Carlo techniques, while enforcing incompressibility as is done in mean-field theory. When using MC-FTS, we need to account for a UV divergence. This is done by renormalizing the Flory-Huggins interaction parameter to incorporate the divergent part of the Hamiltonian. We compare different ways of calculating this effective interaction parameter. Near the MST, the length scale of compositional fluctuations becomes large; however, the high computational requirements of MC-FTS restrict us to small system sizes. We account for these finite size effects using the method of Binder cumulants, allowing us to locate the MST with high precision. We examine fluctuation corrections to the mean field MST, χN = 2, as they vary with the invariant degree of polymerization, N̄ = ρ²a⁶N. These results are compared with particle-based simulations as well as analytical calculations using the renormalized one-loop theory. This research was funded by the Center for Sustainable Polymers.
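
The Binder-cumulant method used above to locate the transition rests on the limiting values of U = 1 - <m^4>/(3<m^2>^2). A minimal sketch with synthetic order-parameter samples (not MC-FTS data) shows the two limits between which size-dependent curves cross at the transition:

```python
import numpy as np

# U -> 0 for Gaussian (disordered) fluctuations and U -> 2/3 for a
# symmetry-broken, double-peaked order-parameter distribution.
rng = np.random.default_rng(3)

def binder(m):
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)

m_disordered = rng.normal(0.0, 1.0, 200000)                     # pure Gaussian noise
m_ordered = rng.choice([-1.0, 1.0], 200000) + rng.normal(0.0, 0.01, 200000)

assert abs(binder(m_disordered)) < 0.01           # Gaussian: <m^4> = 3 <m^2>^2
assert abs(binder(m_ordered) - 2.0 / 3.0) < 0.01  # two sharp peaks: <m^4> ~ <m^2>^2
```

Because U is dimensionless, curves of U versus χN for different system sizes intersect near the transition, which is what makes the cumulant a precise locator despite small simulation boxes.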

  14. Monte Carlo simulation of zinc protoporphyrin fluorescence in the retina

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoyan; Lane, Stephen

    2010-02-01

    We have used Monte Carlo simulation of autofluorescence in the retina to determine that noninvasive detection of nutritional iron deficiency is possible. Nutritional iron deficiency (which leads to iron deficiency anemia) affects more than 2 billion people worldwide, and there is an urgent need for a simple, noninvasive diagnostic test. Zinc protoporphyrin (ZPP) is a fluorescent compound that accumulates in red blood cells and is used as a biomarker for nutritional iron deficiency. We developed a computational model of the eye, using parameters that were identified either by literature search or by direct experimental measurement, to test the possibility of detecting ZPP noninvasively in the retina. By incorporating fluorescence into Steven Jacques' original code for multi-layered tissue, we performed Monte Carlo simulation of fluorescence in the retina and determined that if the beam is not focused on a blood vessel in the neural retina layer, or if only part of the light hits the vessel, ZPP fluorescence will be 10-200 times higher than background lipofuscin fluorescence coming from the retinal pigment epithelium (RPE) layer directly below. In addition, we found that if the light can be focused entirely onto a blood vessel in the neural retina layer, the fluorescence signal comes only from ZPP; the fluorescence from layers below does not contribute to the signal. Therefore, the prospect of building a device to detect ZPP fluorescence in the retina looks very promising.

  15. Monte Carlo simulations and dosimetric studies of an irradiation facility

    NASA Astrophysics Data System (ADS)

    Belchior, A.; Botelho, M. L.; Vaz, P.

    2007-09-01

    There is an increasing utilization of ionizing radiation for industrial applications. Additionally, radiation technology offers a variety of advantages in areas such as sterilization and food preservation. For these applications, dosimetric tests are of crucial importance in order to assess the dose distribution throughout the sample being irradiated. The use of Monte Carlo methods and computational tools in support of the assessment of the dose distributions in irradiation facilities can prove to be economically effective, representing savings in the utilization of dosemeters, among other benefits. One of the purposes of this study is the development of a Monte Carlo simulation, using a state-of-the-art computational tool—MCNPX—in order to determine the dose distribution inside a cobalt-60 irradiation facility. This irradiation facility is currently in operation at the ITN campus and will feature an automation and robotics component, which will allow its remote utilization by an external user, under the REEQ/996/BIO/2005 project. The detailed geometrical description of the irradiation facility has been implemented in MCNPX, which features an accurate and full simulation of the electron-photon processes involved. The validation of the simulation results obtained was performed by chemical dosimetry methods, namely a Fricke solution. The Fricke dosimeter is a standard dosimeter and is widely used in radiation processing for calibration purposes.

  16. Monte Carlo parameter studies and uncertainty analyses with MCNP5

    SciTech Connect

    Brown, F. B.; Sweezy, J. E.; Hayes, R. B.

    2004-01-01

    A software tool called mcnp-pstudy has been developed to automate the setup, execution, and collection of results from a series of MCNP5 Monte Carlo calculations. This tool provides a convenient means of performing parameter studies, total uncertainty analyses, parallel job execution on clusters, stochastic geometry modeling, and other types of calculations where a series of MCNP5 jobs must be performed with varying problem input specifications. Monte Carlo codes are being used for a wide variety of applications today due to their accurate physical modeling and the speed of today's computers. In most applications for design work, experiment analysis, and benchmark calculations, it is common to run many calculations, not just one, to examine the effects of design tolerances, experimental uncertainties, or variations in modeling features. We have developed a software tool for use with MCNP5 to automate this process. The tool, mcnp-pstudy, is used to automate the operations of preparing a series of MCNP5 input files, running the calculations, and collecting the results. Using this tool, parameter studies, total uncertainty analyses, or repeated (possibly parallel) calculations with MCNP5 can be performed easily. Essentially no extra user setup time is required beyond that of preparing a single MCNP5 input file.
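
The kind of workflow mcnp-pstudy automates can be sketched in a few lines. This is a hedged illustration, not mcnp-pstudy's actual syntax: a templated input deck (here a hypothetical, heavily truncated fragment) is expanded over a parameter grid, writing one input file per case for later job submission:

```python
import pathlib, tempfile

# Hypothetical template; real MCNP5 decks are far longer. Only the cell density
# {rho} is varied in this toy tolerance study.
template = """test problem  density={rho}
1  1  -{rho}  -1      imp:n=1
"""

workdir = pathlib.Path(tempfile.mkdtemp())
cases = []
for i, rho in enumerate([0.9, 1.0, 1.1]):       # e.g. a density tolerance study
    deck = workdir / f"case{i:03d}.inp"
    deck.write_text(template.format(rho=rho))   # expand the template for this case
    cases.append(deck)                          # hand these to the job scheduler

assert len(cases) == 3 and all(p.exists() for p in cases)
assert "-1.1" in cases[2].read_text()
```

Collecting tallies from the resulting output files and combining them into a total uncertainty is the other half of the workflow, which follows the same loop-over-cases pattern.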

  17. Monte Carlo study of electron transport in monolayer silicene

    NASA Astrophysics Data System (ADS)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2016-11-01

    Electron mobility and diffusion coefficients in monolayer silicene are calculated by Monte Carlo simulations using a simplified band structure with linear energy bands. Results demonstrate reasonable agreement with the full-band Monte Carlo method in low applied electric field conditions. Negative differential resistivity is observed and an explanation of the origin of this effect is proposed. We demonstrate that a comparison of the mobility and diffusion coefficient values can provide a good check that the calculation is correct. Low-field mobility in silicene exhibits a T^-3 temperature dependence for nondegenerate electron gas conditions and T^-1 for higher electron concentrations, when degenerate conditions are imposed. It is demonstrated that to explain the relation between mobility and temperature in a nondegenerate electron gas the linearity of the band structure has to be taken into account. It is also found that electron-electron scattering only slightly modifies the low-field electron mobility in degenerate electron gas conditions.

  18. Monte Carlo Simulation of Massive Absorbers for Cryogenic Calorimeters

    SciTech Connect

    Brandt, D.; Asai, M.; Brink, P.L.; Cabrera, B.; Silva, E.do Couto e; Kelsey, M.; Leman, S.W.; McArthy, K.; Resch, R.; Wright, D.; Figueroa-Feliciano, E.; /MIT

    2012-06-12

    There is a growing interest in cryogenic calorimeters with macroscopic absorbers for applications such as dark matter direct detection and rare event search experiments. The physics of energy transport in calorimeters with absorber masses exceeding several grams is made complex by the anisotropic nature of the absorber crystals as well as the changing mean free paths as phonons decay to progressively lower energies. We present a Monte Carlo model capable of simulating anisotropic phonon transport in cryogenic crystals. We have initiated the validation process and discuss the level of agreement between our simulation and experimental results reported in the literature, focusing on heat pulse propagation in germanium. The simulation framework is implemented using Geant4, a toolkit originally developed for high-energy physics Monte Carlo simulations. Geant4 has also been used for nuclear and accelerator physics, and applications in medical and space sciences. We believe that our current work may open up new avenues for applications in material science and condensed matter physics.

  19. The ATLAS Fast Monte Carlo Production Chain Project

    NASA Astrophysics Data System (ADS)

    Jansky, Roland

    2015-12-01

    During the last years ATLAS has successfully deployed a new integrated simulation framework (ISF) which allows a flexible mixture of full and fast detector simulation techniques within the processing of one event. The resulting speed-up in detector simulation of up to a factor of 100 makes subsequent digitization and reconstruction the dominant contributions to the Monte Carlo (MC) production CPU cost. The slowest components of both digitization and reconstruction are inside the Inner Detector: digitization because of the complex signal modeling needed to emulate the detector readout, and reconstruction because of the combinatorial nature of the pattern recognition problem. Alternative fast approaches have been developed for these components: for the silicon-based detectors a simpler geometrical clustering approach has been deployed, replacing the charge drift emulation in the standard digitization modules, which achieves a very high accuracy in describing the standard output. For the Inner Detector track reconstruction, a trajectory building based on Monte Carlo generator information has been deployed with the aim of bypassing the CPU-intensive pattern recognition. Together with the ISF, all components have been integrated into a new fast MC production chain, aiming to produce fast MC simulated data in sufficient agreement with fully simulated and reconstructed data at a processing time of seconds per event, compared to several minutes for full simulation.

  20. Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Dichmann, Donald J.; Alberding, Cassandra M.; Yu, Wayne H.

    2014-01-01

    The James Webb Space Telescope (JWST) is scheduled to launch in 2018 into a Libration Point Orbit (LPO) around the Sun-Earth/Moon (SEM) L2 point, with a planned mission lifetime of 10.5 years after a six-month transfer to the mission orbit. This paper discusses our approach to Stationkeeping (SK) maneuver planning to determine an adequate SK delta-V budget. The SK maneuver planning for JWST is made challenging by two factors: JWST has a large Sunshield, and JWST will be repointed regularly producing significant changes in Solar Radiation Pressure (SRP). To accurately model SRP we employ the Solar Pressure and Drag (SPAD) tool, which uses ray tracing to accurately compute SRP force as a function of attitude. As an additional challenge, the future JWST observation schedule will not be known at the time of SK maneuver planning. Thus there will be significant variation in SRP between SK maneuvers, and the future variation in SRP is unknown. We have enhanced an earlier SK simulation to create a Monte Carlo simulation that incorporates random draws for uncertainties that affect the budget, including random draws of the observation schedule. Each SK maneuver is planned to optimize delta-V magnitude, subject to constraints on spacecraft pointing. We report the results of the Monte Carlo simulations and discuss possible improvements during flight operations to reduce the SK delta-V budget.
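
The budgeting idea described above — random draws over the uncertainties, then a budget set at a high percentile of the simulated totals — can be sketched with invented numbers (these are toy values, not JWST data):

```python
import numpy as np

# Each trial draws random per-maneuver delta-V costs; the mission total is
# summed and the budget set at the 99th percentile of the distribution.
rng = np.random.default_rng(4)
n_trials, n_maneuvers = 20000, 21     # e.g. one SK maneuver per interval (made up)
# Per-maneuver cost: a base cost plus variation from SRP/attitude uncertainty,
# clipped at zero. All numbers are illustrative only.
dv = np.maximum(rng.normal(0.5, 0.2, (n_trials, n_maneuvers)), 0.0)  # m/s, toy
totals = dv.sum(axis=1)
budget = np.percentile(totals, 99)    # budget covering 99% of simulated missions

assert budget > totals.mean()         # a percentile budget exceeds the mean cost
```

The real simulation replaces the Gaussian draws with optimized maneuvers under randomly drawn observation schedules and navigation errors, but the percentile-of-totals structure is the same.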

  1. Monte Carlo simulation of quantum Zeno effect in the brain

    NASA Astrophysics Data System (ADS)

    Georgiev, Danko

    2015-12-01

    Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved a theorem according to which local projections cannot decrease the von Neumann entropy of the unconditional brain density matrix. The latter theorem establishes that Stapp's model is physically implausible but leaves a door open for future development of quantum mind theories provided the brain has a decoherence-free subspace.
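
The quantum Zeno effect at the heart of the abstract above can be demonstrated with a minimal Monte Carlo sketch for a two-level system (not the paper's multi-well tunneling model): between measurements the state undergoes Rabi rotation, each projective measurement collapses it with the Born probabilities, and frequent measurement freezes the evolution:

```python
import math, random

random.seed(5)

def survival(n_meas, total_angle=math.pi, n_traj=20000):
    """Fraction of trajectories still in |0> after a total Rabi rotation angle,
    interrupted by n_meas equally spaced projective measurements."""
    dt_angle = total_angle / n_meas
    p_flip = math.sin(dt_angle / 2) ** 2   # Rabi flip probability per interval
    stay = 0
    for _ in range(n_traj):
        state = 0
        for _ in range(n_meas):
            if random.random() < p_flip:
                state = 1 - state           # measurement collapses to the other level
        stay += state == 0
    return stay / n_traj

# Without measurement, a pi rotation fully transfers |0> -> |1>; with many
# interposed measurements the population stays mostly in |0> (the Zeno effect).
assert survival(2) < 0.6 and survival(50) > 0.9
```

Decoherence enters such models as an effective measurement by the environment, which is why the paper's breakdown of the Zeno effect beyond the decoherence time is a statement about who "measures" first.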

  2. A pure-sampling quantum Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Ospadov, Egor; Rothstein, Stuart M.

    2015-01-01

    The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof in principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground-states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems.

  3. Infinite variance in fermion quantum Monte Carlo calculations.

    PubMed

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling. PMID:27078480
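
Why an infinite-variance estimator breaks error bars can be shown with a stand-in distribution (this is a sketch, not the authors' QMC observable): a Pareto variable with tail index 1.5 has a finite mean but infinite variance, so the sample variance never settles and the usual sigma/sqrt(N) error estimate is meaningless:

```python
import numpy as np

rng = np.random.default_rng(6)
alpha = 1.5
x = (1.0 - rng.random(1_000_000)) ** (-1.0 / alpha)  # inverse-CDF Pareto samples

# The sample variance is dominated by the few largest draws: for alpha = 1.5 it
# grows roughly like N^(1/3) instead of converging to a constant.
v_small = x[:1000].var()
v_large = x.var()
print(f"sample variance: N=1e3 -> {v_small:.1f}, N=1e6 -> {v_large:.1f}")

assert abs(x.mean() - alpha / (alpha - 1.0)) < 2.0   # the mean is finite (= 3)
assert v_large > 10.0                                # the variance estimate keeps growing
```

The paper's "bridge link" modification removes the analogous heavy tail from the QMC estimator so that the central limit theorem applies again.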

  4. Modeling and Computer Simulation: Molecular Dynamics and Kinetic Monte Carlo

    SciTech Connect

    Wirth, B.D.; Caturla, M.J.; Diaz de la Rubia, T.

    2000-10-10

    Recent years have witnessed tremendous advances in the realistic multiscale simulation of complex physical phenomena, such as irradiation and aging effects of materials, made possible by the enormous progress achieved in computational physics for calculating reliable, yet tractable interatomic potentials and the vast improvements in computational power and parallel computing. As a result, computational materials science is emerging as an important complement to theory and experiment to provide fundamental materials science insight. This article describes the atomistic modeling techniques of molecular dynamics (MD) and kinetic Monte Carlo (KMC), and an example of their application to radiation damage production and accumulation in metals. It is important to note at the outset that the primary objective of atomistic computer simulation should be obtaining physical insight into atomic-level processes. Classical molecular dynamics is a powerful method for obtaining insight about the dynamics of physical processes that occur on relatively short time scales. Current computational capability allows treatment of atomic systems containing as many as 10^9 atoms for times on the order of 100 ns (10^-7 s). The main limitation of classical MD simulation is the relatively short times accessible. Kinetic Monte Carlo provides the ability to reach macroscopic times by modeling diffusional processes and time-scales rather than individual atomic vibrations. Coupling MD and KMC has developed into a powerful, multiscale tool for the simulation of radiation damage in metals.
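
The MD half of the MD/KMC pairing rests on symplectic integrators. A minimal velocity-Verlet sketch (a 1D harmonic oscillator, not a radiation-damage simulation) shows the defining property that makes classical MD trustworthy over long runs: bounded energy error rather than drift:

```python
import math

k, m, dt = 1.0, 1.0, 0.01             # spring constant, mass, time step
x, v = 1.0, 0.0
e0 = 0.5 * m * v * v + 0.5 * k * x * x
f = -k * x
for _ in range(100000):                # ~1000 oscillation periods
    v += 0.5 * dt * f / m              # half kick
    x += dt * v                        # drift
    f = -k * x                         # recompute the force at the new position
    v += 0.5 * dt * f / m              # half kick
e1 = 0.5 * m * v * v + 0.5 * k * x * x

assert abs(e1 - e0) / e0 < 1e-3        # energy error stays bounded, no drift
```

A KMC step, by contrast, replaces these femtosecond-scale vibrations with rate-based event selection, which is what lets the coupled scheme reach macroscopic times.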

  5. Path integral Monte Carlo on a lattice. II. Bound states

    NASA Astrophysics Data System (ADS)

    O'Callaghan, Mark; Miller, Bruce N.

    2016-07-01

    The equilibrium properties of a single quantum particle (qp) interacting with a classical gas for a wide range of temperatures that explore the system's behavior in the classical as well as in the quantum regime are investigated. Both the qp and the atoms are restricted to sites on a one-dimensional lattice. A path integral formalism developed within the context of the canonical ensemble is utilized, where the qp is represented by a closed, variable-step random walk on the lattice. Monte Carlo methods are employed to determine the system's properties. To test the usefulness of the path integral formalism, the Metropolis algorithm is employed to determine the equilibrium properties of the qp in the context of a square well potential, forcing the qp to occupy bound states. We consider a one-dimensional square well potential where all sites on the lattice are occupied by one atom with an on-site potential, except for a contiguous set of sites of various lengths centered at the middle of the lattice. Comparisons of the potential energy, the energy fluctuations, and the correlation function are made between the results of the Monte Carlo simulations and the numerical calculations.

  6. Path integral Monte Carlo on a lattice. II. Bound states.

    PubMed

    O'Callaghan, Mark; Miller, Bruce N

    2016-07-01

    The equilibrium properties of a single quantum particle (qp) interacting with a classical gas for a wide range of temperatures that explore the system's behavior in the classical as well as in the quantum regime are investigated. Both the qp and the atoms are restricted to sites on a one-dimensional lattice. A path integral formalism developed within the context of the canonical ensemble is utilized, where the qp is represented by a closed, variable-step random walk on the lattice. Monte Carlo methods are employed to determine the system's properties. To test the usefulness of the path integral formalism, the Metropolis algorithm is employed to determine the equilibrium properties of the qp in the context of a square well potential, forcing the qp to occupy bound states. We consider a one-dimensional square well potential where all sites on the lattice are occupied by one atom with an on-site potential, except for a contiguous set of sites of various lengths centered at the middle of the lattice. Comparisons of the potential energy, the energy fluctuations, and the correlation function are made between the results of the Monte Carlo simulations and the numerical calculations. PMID:27575090
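
The Metropolis-on-a-lattice machinery underlying the two records above can be sketched classically (the paper's path-integral walkers are more involved): a particle hops between sites of a 1D square-well potential, and the chain samples the Boltzmann distribution, which we can verify against the exact lattice partition sum:

```python
import math, random

random.seed(7)
n_sites, beta = 21, 1.0
V = [0.0 if 8 <= s <= 12 else 2.0 for s in range(n_sites)]  # well in the middle

site, e_sum, n_samp = 10, 0.0, 0
for step in range(1000000):
    trial = (site + random.choice([-1, 1])) % n_sites        # nearest-neighbor hop
    if random.random() < math.exp(-beta * (V[trial] - V[site])):
        site = trial                                         # Metropolis acceptance
    if step > 50000:                                         # discard burn-in
        e_sum += V[site]
        n_samp += 1

z = sum(math.exp(-beta * v) for v in V)                      # exact lattice sum
e_exact = sum(v * math.exp(-beta * v) for v in V) / z
assert abs(e_sum / n_samp - e_exact) < 0.05
```

In the quantum version each configuration is a closed random-walk path rather than a single site, but the acceptance rule has exactly this form.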

  7. Monte Carlo analysis of energy dependent anisotropy of bremsstrahlung x-ray spectra

    SciTech Connect

    Kakonyi, Robert; Erdelyi, Miklos; Szabo, Gabor

    2009-09-15

    The energy-resolved emission angle dependence of x-ray spectra was analyzed with the MCNPX (Monte Carlo N-Particle eXtended) code. It was shown that the spectral photon flux had a maximum at a well-defined emission angle due to the anisotropy of the bremsstrahlung process. The higher the relative photon energy, the smaller the emission angle at which this maximum occurred. The trends predicted by the Monte Carlo simulations were experimentally verified. The Monte Carlo results were compared to both the Institute of Physics and Engineering in Medicine spectra table and the SPEKCALC V1.0 code.

  8. Extension of the fully coupled Monte Carlo/S sub N response matrix method to problems including upscatter and fission

    SciTech Connect

    Baker, R.S.; Filippone, W.F. . Dept. of Nuclear and Energy Engineering); Alcouffe, R.E. )

    1991-01-01

    The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S{sub N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S{sub N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The fully coupled Monte Carlo/S{sub N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S{sub N} calculation is to be performed. The Monte Carlo and S{sub N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new way of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S{sub N} is well suited for by itself. The fully coupled Monte Carlo/S{sub N} method has been implemented in the S{sub N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S{sub N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S{sub N} calculations. The Monte Carlo routines have been successfully vectorized, with approximately a factor of five increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly multiplying systems. 8 refs., 8 figs., 1 tab.

  9. Continuous-time quantum Monte Carlo impurity solvers

    NASA Astrophysics Data System (ADS)

    Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias

    2011-04-01

    Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states. Program summaryProgram title: dmft Catalogue identifier: AEIL_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: ALPS LIBRARY LICENSE version 1.1 No. of lines in distributed program, including test data, etc.: 899 806 No. of bytes in distributed program, including test data, etc.: 32 153 916 Distribution format: tar.gz Programming language: C++ Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher), and Intel C++ Compiler (icc version 7.0 and higher) MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0) IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers Compaq Tru64 UNIX with Compq C++ Compiler (cxx) SGI IRIX with MIPSpro C++ Compiler (CC) HP-UX with HP C++ Compiler (aCC) Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher) RAM: 10 MB-1 GB Classification: 7.3 External routines: ALPS [1], BLAS/LAPACK, HDF5 Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as

  10. Fast Monte Carlo, slow protein kinetics and perfect loop closure

    NASA Astrophysics Data System (ADS)

    Wedemeyer, William Joseph

    This thesis presents experimental studies of proteins and computational methods that may help in simulations of proteins. The experimental chapters focus on the folding and unfolding of bovine pancreatic ribonuclease A. Methods are developed for tracking the cis-trans isomerization of individual prolines under folding and unfolding conditions, and for identifying critical folding structures by assessing the effects of individual incorrect X-Pro isomers on the conformational folding. The major β-hairpin region is identified as more critical than the C-terminal hydrophobic core. Site-directed mutagenesis of three nearby tyrosines to phenylalanine indicates that tyrosyl hydrogen bonds are essential to rapid conformational folding. Another experimental chapter presents an analytic solution of the kinetics of competitive binding, which is applied to estimating the association and dissociation rate constants of hirudin and thrombin. An extension of this method is proposed to obtain kinetic rate constants for the conformational folding and unfolding of individual parts of a protein. The analytic solution is found to be roughly one-hundred-fold more efficient than the best numerical integrators. The theoretical chapters present methods potentially useful in protein simulations. The loop closure problem is solved geometrically, allowing the protein to be broken into segments which move quasi-independently. Two bootstrap Monte Carlo methods are developed for sampling functions that are characterized by high anisotropy, e.g. long, narrow valleys. Two chapters are devoted to smoothing methods; the first develops a method for exploiting smoothing to evaluate the energy in order N (not N²) time, while the second examines the limitations of one smoothing method, the Diffusion Equation Method, and suggests improvements to its smoothing transformation and reversing procedure.
One chapter develops a highly optimized simulation package for lattice heteropolymers by careful choice

  11. "Full Model" Nuclear Data and Covariance Evaluation Process Using TALYS, Total Monte Carlo and Backward-forward Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bauge, E.

    2015-01-01

    The "Full model" evaluation process, which is used at CEA DAM DIF to evaluate nuclear data in the continuum region, makes extended use of nuclear models implemented in the TALYS code to account for experimental data (both differential and integral) by varying the parameters of these models until a satisfactory description of these experimental data is reached. For the evaluation of the covariance data associated with these evaluated data, the Backward-forward Monte Carlo (BFMC) method was devised in such a way that it mirrors the process of the "Full model" evaluation method. When coupled with the Total Monte Carlo method via the T6 system developed by NRG Petten, the BFMC method allows one to make use of integral experiments to constrain the distribution of model parameters, and hence the distribution of derived observables and their covariance matrix. Together, TALYS, TMC, BFMC, and T6 constitute a powerful integrated tool for nuclear data evaluation that allows for the evaluation of nuclear data and the associated covariance matrix all at once, making good use of all the available experimental information to drive the distribution of the model parameters and the derived observables.
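
    The backward step of BFMC — weighting sampled model parameters by how well each parameter set reproduces the experimental data — can be illustrated with a toy sketch. Everything here (the one-parameter "model" y = p², the measurement, the prior) is a hypothetical stand-in, not the actual TALYS/T6 workflow:

```python
import math
import random

def bfmc_toy(n_samples=2000, seed=0):
    """Toy backward weighting step: draw a model parameter from a prior,
    weight each draw by exp(-chi2/2) against 'experimental' data, and return
    the weighted mean and variance of the parameter."""
    rng = random.Random(seed)
    y_exp, sigma = 4.0, 0.2          # hypothetical measurement of y = p**2
    params, weights = [], []
    for _ in range(n_samples):
        p = rng.gauss(2.0, 0.5)      # prior draw of the model parameter
        chi2 = ((p * p - y_exp) / sigma) ** 2
        params.append(p)
        weights.append(math.exp(-0.5 * chi2))
    wsum = sum(weights)
    mean = sum(w * p for w, p in zip(weights, params)) / wsum
    var = sum(w * (p - mean) ** 2 for w, p in zip(weights, params)) / wsum
    return mean, var
```

    The same weighted-moments construction generalizes to a vector of parameters, in which case the second moment becomes the covariance matrix discussed above.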

  12. A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison

    NASA Astrophysics Data System (ADS)

    Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.

    2014-06-01

    Physical analyses of potential LWR performance with regard to fuel utilization require that an important part of the work be dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. As a consequence, a 3D pin-by-pin model with a consistent number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile zones (depleted uranium). Furthermore, a tight-pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum than in a standard PWR assembly. In these conditions two main subjects will be discussed: the Monte Carlo variance calculation and the assessment of the two-energy-group diffusion operator for the core calculation.

  13. Sequential Monte Carlo samplers for semi-linear inverse problems and application to magnetoencephalography

    NASA Astrophysics Data System (ADS)

    Sommariva, Sara; Sorrentino, Alberto

    2014-11-01

    We discuss the use of a recent class of sequential Monte Carlo methods for solving inverse problems characterized by a semi-linear structure, i.e. where the data depend linearly on a subset of variables and nonlinearly on the remaining ones. In problems of this type, under proper Gaussian assumptions one can marginalize the linear variables. This means that the Monte Carlo procedure needs only to be applied to the nonlinear variables, while the linear ones can be treated analytically; as a result, the Monte Carlo variance and/or the computational cost decrease. We use this approach to solve the inverse problem of magnetoencephalography, with a multi-dipole model for the sources. Here, the data depend nonlinearly on the number of sources and their locations, and linearly on their current vectors. The semi-analytic approach enables us to estimate the number of dipoles and their locations from a whole time-series, rather than a single time point, while keeping a low computational cost.
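
    A minimal sketch of the semi-analytic idea — solving the linear amplitude in closed form while the Monte Carlo search explores only the nonlinear variable — assuming a toy forward model y_t = x·sin(θt) + noise. All names and values below are hypothetical; this is not the MEG dipole model:

```python
import math
import random

def marginal_misfit(theta, ts, ys):
    """For data y_t = x * f(theta, t) with f nonlinear in theta, the linear
    amplitude x has a closed-form least-squares solution, so the Monte Carlo
    search only has to explore theta (the 'semi-analytic' idea)."""
    f = [math.sin(theta * t) for t in ts]              # toy nonlinear forward model
    x_hat = sum(v * y for v, y in zip(f, ys)) / sum(v * v for v in f)
    return sum((y - x_hat * v) ** 2 for v, y in zip(f, ys)), x_hat

rng = random.Random(42)
ts = [0.1 * i for i in range(50)]
true_theta, true_x = 1.3, 2.5
ys = [true_x * math.sin(true_theta * t) + rng.gauss(0.0, 0.05) for t in ts]

# Monte Carlo search over the nonlinear variable only; x comes out analytically
best_theta, best_x, best_err = None, None, float("inf")
for _ in range(500):
    theta = 0.5 + 2.0 * rng.random()                   # crude draw on [0.5, 2.5]
    err, x_hat = marginal_misfit(theta, ts, ys)
    if err < best_err:
        best_theta, best_x, best_err = theta, x_hat, err
```

    Because x never has to be sampled, the effective dimension of the Monte Carlo problem shrinks, which is the source of the variance and cost reduction described above.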

  14. Monte Carlo method of radiative transfer applied to a turbulent flame modeling with LES

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Gicquel, Olivier; Veynante, Denis; Taine, Jean

    2009-06-01

    Radiative transfer plays an important role in the numerical simulation of turbulent combustion. However, because combustion and radiation are characterized by different time scales and require different spatial and chemical treatments, radiative effects are often neglected or only roughly modelled. The coupling of a large eddy simulation combustion solver and a radiation solver through a dedicated language, CORBA, is investigated. Two formulations of the Monte Carlo method (Forward Method and Emission Reciprocity Method) employed to solve the RTE have been compared in a one-dimensional flame test case using three-dimensional calculation grids with absorbing and emitting media, in order to validate the Monte Carlo radiative solver and to choose the most efficient model for coupling. Then the results obtained using two different RTE solvers (Reciprocity Monte Carlo method and Discrete Ordinate Method) applied to a three-dimensional flame holder set-up, with a correlated-k distribution model describing the real-gas spectral radiative properties, are compared not only in terms of the physical behavior of the flame, but also in computational performance (storage requirement, CPU time and parallelization efficiency). To cite this article: J. Zhang et al., C. R. Mecanique 337 (2009).

  15. Monte Carlo simulation of high-field transport equations

    SciTech Connect

    Abdolsalami, F.

    1989-01-01

    The author has studied the importance of the intracollisional field effect in the quantum transport equation derived by Khan, Davies and Wilkins (Phys. Rev. B 36, 2578 (1987)) via Monte Carlo simulations. This transport equation is identical to the integral form of the Boltzmann transport equation except that the scattering-in rates contain an auxiliary function of energy width √|α| instead of the sharp delta function of the semiclassical theory, where α = (πℏ²e/m*) E·q. Here, E is the electric field, q is the phonon wave vector, and m* is the effective mass. The transport equation studied corresponds to a single parabolic band of infinite width and is valid in the field-dominated limit, i.e. √|α| ≫ h/τ_sc, where τ_sc⁻¹ is the electron scattering-out rate. In his simulation, he takes the single parabolic band to be the central valley of GaAs with transitions to higher valleys shut off. Electrons are assumed to scatter with polar optic and acoustic phonons, with the scattering parameters chosen to simulate GaAs. The loss of the intervalley scattering mechanism at high electric fields is compensated for by increasing each of the four scattering rates relative to the real values in GaAs by a factor γ. The transport equation studied contains the auxiliary function, which is not positive definite and therefore cannot represent a probability of scattering in a Monte Carlo simulation. The question of whether or not the intracollisional field effect is important can be resolved by replacing the non-positive-definite auxiliary function with a test positive-definite function of width √|α| and comparing the results of the Monte Carlo simulation of this quantum transport equation with those of the Boltzmann transport equation. If the results are identical, the intracollisional field effect is not important.

  16. Benchmarking of Proton Transport in Super Monte Carlo Simulation Program

    NASA Astrophysics Data System (ADS)

    Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican

    2014-06-01

    The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed with SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy range can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, which is a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamical 3D datasets and geometry models is supported by RVIS, which is a nuclear radiation virtual simulation and assessment system. Continuous-energy cross section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronic fixed-source and criticality design-parameter calculations for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, had already been achieved in the former version of SuperMC. Recently, proton transport has also been integrated into SuperMC for energies up to 10 GeV. The physical processes considered for proton transport include electromagnetic and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes.
In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat the intermediate energy nuclear

  17. Brachytherapy structural shielding calculations using Monte Carlo generated, monoenergetic data

    SciTech Connect

    Zourari, K.; Peppa, V.; Papagiannis, P.; Ballester, Facundo; Siebert, Frank-André

    2014-04-15

    Purpose: To provide a method for calculating the transmission of any broad photon beam with a known energy spectrum in the range of 20–1090 keV, through concrete and lead, based on the superposition of corresponding monoenergetic data obtained from Monte Carlo simulation. Methods: MCNP5 was used to calculate broad photon beam transmission data through varying thicknesses of lead and concrete, for monoenergetic point sources of energy in the range pertinent to brachytherapy (20–1090 keV, in 10 keV intervals). The three-parameter empirical model introduced by Archer et al. [“Diagnostic x-ray shielding design based on an empirical model of photon attenuation,” Health Phys. 44, 507–517 (1983)] was used to describe the transmission curve for each of the 216 energy-material combinations. These three parameters, and hence the transmission curve, for any polyenergetic spectrum can then be obtained by superposition along the lines of Kharrati et al. [“Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities,” Med. Phys. 34, 1398–1404 (2007)]. A simple program, incorporating a graphical user interface, was developed to facilitate the superposition of monoenergetic data, the graphical and tabular display of broad photon beam transmission curves, and the calculation of material thickness required for a given transmission from these curves. Results: Polyenergetic broad photon beam transmission curves of this work, calculated from the superposition of monoenergetic data, are compared to corresponding results in the literature. A good agreement is observed with results in the literature obtained from Monte Carlo simulations for the photon spectra emitted from bare point sources of various radionuclides. Differences are observed with corresponding results in the literature for x-ray spectra at various tube potentials, mainly due to the different broad beam conditions or x-ray spectra assumed. Conclusions
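
    As a sketch of the superposition approach, the Archer et al. three-parameter model B(x) = [(1 + β/α)·exp(αγx) − β/α]^(−1/γ) can be combined over a spectrum and inverted for a required thickness by bisection. The (weight, (α, β, γ)) values below are illustrative placeholders, not fitted MCNP5 data:

```python
import math

def archer(x, a, b, g):
    """Archer et al. transmission model B(x) = [(1+b/a)exp(a*g*x) - b/a]**(-1/g)."""
    return ((1.0 + b / a) * math.exp(a * g * x) - b / a) ** (-1.0 / g)

def spectrum_transmission(x, lines):
    """Superpose monoenergetic transmission curves weighted by the spectrum."""
    total = sum(w for w, _ in lines)
    return sum(w * archer(x, *p) for w, p in lines) / total

# Hypothetical (weight, (alpha, beta, gamma)) fits for a two-line spectrum
lines = [(0.7, (2.0, 3.0, 0.8)), (0.3, (1.2, 1.5, 0.6))]

def thickness_for(target, lines, lo=0.0, hi=20.0):
    """Bisect for the thickness giving the target broad-beam transmission."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if spectrum_transmission(mid, lines) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    Note that B(0) = 1 by construction, and the superposed curve is monotonically decreasing, so the bisection for a required thickness is well defined.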

  18. Enhanced Monte-Carlo-Linked Depletion Capabilities in MCNPX

    SciTech Connect

    Fensin, Michael L.; Hendricks, John S.; Anghaie, Samim

    2006-07-01

    As advanced reactor concepts challenge the accuracy of current modeling technologies, a higher-fidelity depletion calculation is necessary to model time-dependent core reactivity properly for accurate cycle length and safety margin determinations. The recent integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a completely self-contained Monte-Carlo-linked depletion capability. Two advances have been made in the latest MCNPX capability based on problems observed in pre-released versions: continuous energy collision density tracking and proper fission yield selection. Pre-released versions of the MCNPX depletion code calculated the reaction rates for (n,2n), (n,3n), (n,p), (n,α), and (n,γ) by matching the MCNPX steady-state 63-group flux with 63-group cross sections inherent in the CINDER90 library and then collapsing to one-group collision densities for the depletion calculation. This procedure led to inaccuracies due to the miscalculation of the reaction rates resulting from the collapsed multi-group approach. The current version of MCNPX eliminates this problem by using collapsed one-group collision densities generated from continuous energy reaction rates determined during the MCNPX steady-state calculation. MCNPX also now explicitly determines the proper fission yield to be used by the CINDER90 code for the depletion calculation. The CINDER90 code offers a thermal, fast, and high-energy fission yield for each fissile isotope contained in the CINDER90 data file. MCNPX determines which fission yield to use for a specified problem by calculating the integral fission rate for the defined energy boundaries (thermal, fast, and high energy), determining which energy range contains the majority of fissions, and then selecting the appropriate fission yield for the energy range containing the majority of fissions. The MCNPX depletion capability enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code
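
    The yield-selection rule described above — integrate the fission rate over the thermal, fast, and high-energy ranges, then take the yield set for the range holding the majority of fissions — can be sketched as follows. The energy boundaries and rates are illustrative, not the actual CINDER90/MCNPX values:

```python
def integral_rates(energies, fission_rates, thermal_cut=1.0, fast_cut=1.0e6):
    """Bin pointwise fission rates (energies in eV) into thermal, fast and
    high-energy totals; the cut-off values here are illustrative."""
    groups = {"thermal": 0.0, "fast": 0.0, "high": 0.0}
    for e, r in zip(energies, fission_rates):
        if e < thermal_cut:
            groups["thermal"] += r
        elif e < fast_cut:
            groups["fast"] += r
        else:
            groups["high"] += r
    return groups

def select_fission_yield(rates_by_group):
    """Pick the yield set whose energy range contains the majority of fissions."""
    return max(rates_by_group, key=rates_by_group.get)
```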

  19. The macro response Monte Carlo method for electron transport

    SciTech Connect

    Svatos, M M

    1998-09-01

    The main goal of this thesis was to prove the feasibility of basing electron depth dose calculations in a phantom on first-principles single scatter physics, in an amount of time that is equal to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that are on the order of conventional electron transport methods such as condensed history, with the potential to be much faster. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, in this case, single scatter calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or "kugel". A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV - 8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside of the Peregrine Monte Carlo code. It was compared to Peregrine's class II condensed history electron transport package, EGS4, and MCNP for depth dose in simple phantoms having density inhomogeneities. Since the kugels completed in the library were of relatively small size, the zoning of the phantoms was scaled down from a clinical size, so that the energy deposition algorithms for spreading dose across 5-10 zones per kugel could be tested.
Most

  20. Monte Carlo and analytical model predictions of leakage neutron exposures from passively scattered proton therapy

    SciTech Connect

    Pérez-Andújar, Angélica; Zhang, Rui; Newhauser, Wayne

    2013-12-15

    Purpose: Stray neutron radiation is of concern after radiation therapy, especially in children, because of the high risk it might carry for secondary cancers. Several previous studies predicted the stray neutron exposure from proton therapy, mostly using Monte Carlo simulations. Promising attempts to develop analytical models have also been reported, but these were limited to only a few proton beam energies. The purpose of this study was to develop an analytical model to predict leakage neutron equivalent dose from passively scattered proton beams in the 100-250 MeV interval. Methods: To develop and validate the analytical model, the authors used values of equivalent dose per therapeutic absorbed dose (H/D) predicted with Monte Carlo simulations. The authors also characterized the behavior of the mean neutron radiation-weighting factor, w_R, as a function of depth in a water phantom and distance from the beam central axis. Results: The simulated and analytical predictions agreed well. On average, the percentage difference between the analytical model and the Monte Carlo simulations was 10% for the energies and positions studied. The authors found that w_R was highest at the shallowest depth and decreased with depth until around 10 cm, where it started to increase slowly with depth. This was consistent among all energies. Conclusion: Simple analytical methods are promising alternatives to complex and slow Monte Carlo simulations to predict H/D values. The authors' results also provide improved understanding of the behavior of w_R, which strongly depends on depth, but is nearly independent of lateral distance from the beam central axis.

  1. Improving multivariate Horner schemes with Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.

    2013-11-01

    Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by a factor of up to two.
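
    The greedy most-occurring-variable-first baseline that the MCTS search improves upon can be sketched as a recursive Horner evaluation. The dictionary term representation below is a hypothetical simplification of what a computer algebra system actually manipulates:

```python
def eval_horner(terms, x, ops):
    """Evaluate a multivariate polynomial given as {exponent-tuple: coeff}
    with a greedy Horner scheme: factor out the most-occurring variable,
    recurse on quotient and remainder, and count multiplications in ops[0]."""
    if not terms:
        return 0.0
    counts = [sum(1 for e in terms if e[i] > 0) for i in range(len(x))]
    if max(counts) == 0:
        return sum(terms.values())
    i = counts.index(max(counts))            # most-occurring variable first
    q, r = {}, {}
    for e, c in terms.items():
        if e[i] > 0:
            e2 = e[:i] + (e[i] - 1,) + e[i + 1:]
            q[e2] = q.get(e2, 0.0) + c
        else:
            r[e] = r.get(e, 0.0) + c
    ops[0] += 1                              # the multiplication by x_i
    return x[i] * eval_horner(q, x, ops) + eval_horner(r, x, ops)
```

    For example, x²y + xy + y² + 3 factors as y·(x·(x + 1) + y) + 3 under this greedy ordering, using four multiplications. MCTS explores alternative variable orderings in the same recursion to reduce that count further.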

  2. Markov Chain Monte-Carlo Models of Starburst Clusters

    NASA Astrophysics Data System (ADS)

    Melnick, Jorge

    2015-01-01

    There are a number of stochastic effects that must be considered when comparing models to observations of starburst clusters: the IMF is never fully populated; the stars can never be strictly coeval; stars rotate and their photometric properties depend on orientation; a significant fraction of massive stars are in interacting binaries; and the extinction varies from star to star. The probability distributions of each of these effects are not a priori known, but must be extracted from the observations. Markov Chain Monte-Carlo methods appear to provide the best statistical approach. Here I present an example of stochastic age effects upon the upper mass limit of the IMF of the Arches cluster as derived from near-IR photometry.
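
    The Markov Chain Monte-Carlo machinery underlying such fits can be illustrated with a minimal random-walk Metropolis sampler on a toy one-dimensional posterior; a standard normal stands in here for the real likelihood over cluster parameters:

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, seed=7):
    """Random-walk Metropolis: propose x' = x + N(0, step); accept with
    probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x, lp, chain = x0, log_post(x0), []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy posterior: a standard normal stands in for the real cluster likelihood
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
```

    In an application like the one above, x would be a vector of cluster parameters (age spread, rotation, binarity, extinction) and log_post the log-likelihood of the photometry given those parameters.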

  3. Monte Carlo estimation of the number of tatami tilings

    NASA Astrophysics Data System (ADS)

    Kimura, Kenji; Higuchi, Saburo

    2016-04-01

    Motivated by the way Japanese tatami mats are placed on the floor, we consider domino tilings with a constraint and estimate the number of such tilings of plane regions. We map the system onto a monomer-dimer model with a novel local interaction on the dual lattice. We make use of a variant of the Hamiltonian replica exchange Monte Carlo method where data for ferromagnetic and anti-ferromagnetic models are combined to make a single family of histograms. The properties of the density of states are studied beyond exact enumeration and combinatorial methods. The logarithm of the number of the tilings is linear in the boundary length of the region for all the regions studied.

  4. Monte Carlo mitochondrial dosimetry and microdosimetry of 131I.

    PubMed

    Carrillo-Cázares, Tomás A; Torres-García, Eugenio

    2013-01-01

    A mitochondrion is an organelle found in most eukaryotic cells, which produces most of the energy needed by a living cell. It has been shown that ionising radiation causes mitochondrial damage leading to apoptosis or cell death. The aim of this work was to calculate, by Monte Carlo simulation, the specific energy (z) into the mitochondria, due to Auger electrons, conversion electrons and beta emission from (131)I, where the radionuclide was carried by a vector to the cell surface and the surrounding environment. A concentric spherical geometry represents a cell and its nucleus. Three different volumes were used to represent the mitochondria; they were placed in random positions within the cytoplasm. The z produced by a single event is due to low-energy electrons (76 %) and beta particles (24 %) and the mitochondria receive a total mean z two orders of magnitude higher than that of the cell nucleus.
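
    The random placement of organelles within the cytoplasm can be sketched by rejection sampling in the spherical shell between the nucleus and the cell membrane. The radii below are illustrative, not the paper's cell geometry:

```python
import math
import random

def place_mitochondria(n, r_nucleus, r_cell, seed=3):
    """Uniformly place n organelle centres in the cytoplasm, modelled as the
    spherical shell r_nucleus < r < r_cell, by rejection sampling in a cube."""
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        x, y, z = (rng.uniform(-r_cell, r_cell) for _ in range(3))
        r = math.sqrt(x * x + y * y + z * z)
        if r_nucleus < r < r_cell:
            pts.append((x, y, z))
    return pts
```

    Rejection sampling keeps the positions uniform in volume, which a naive uniform draw of the radius would not; in the actual simulation each accepted centre would then host a mitochondrion volume for the energy-deposition tally.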

  5. Optimization of Monte Carlo transport simulations in stochastic media

    SciTech Connect

    Liang, C.; Ji, W.

    2012-07-01

    This paper presents an accurate and efficient approach to optimize radiation transport simulations in a stochastic medium of high heterogeneity, like the Very High Temperature Gas-cooled Reactor (VHTR) configurations packed with TRISO fuel particles. Based on a fast nearest neighbor search algorithm, a modified fast Random Sequential Addition (RSA) method is first developed to speed up the generation of the stochastic media systems packed with both mono-sized and poly-sized spheres. A fast neutron tracking method is then developed to optimize the next sphere boundary search in the radiation transport procedure. In order to investigate their accuracy and efficiency, the developed sphere packing and neutron tracking methods are implemented into an in-house continuous energy Monte Carlo code to solve an eigenvalue problem in VHTR unit cells. Comparison with the MCNP benchmark calculations for the same problem indicates that the new methods show considerably higher computational efficiency. (authors)
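
    A minimal sketch of grid-accelerated Random Sequential Addition for mono-sized spheres: each trial sphere is tested only against spheres in neighboring grid cells, which is the idea behind the fast nearest-neighbor search. The parameters are illustrative and this is far simpler than the reported implementation:

```python
import math
import random

def rsa_pack(n, radius, box=1.0, seed=5, max_tries=200000):
    """Random Sequential Addition of non-overlapping equal spheres in a box,
    with a uniform cell grid so each trial only checks nearby spheres."""
    rng = random.Random(seed)
    ncell = max(1, int(box / (2.0 * radius)))  # cell width >= sphere diameter
    width = box / ncell
    grid, centres, tries = {}, [], 0
    while len(centres) < n and tries < max_tries:
        tries += 1
        p = tuple(rng.uniform(radius, box - radius) for _ in range(3))
        key = tuple(min(int(c / width), ncell - 1) for c in p)
        ok = True
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    cell = (key[0] + dx, key[1] + dy, key[2] + dz)
                    for q in grid.get(cell, ()):
                        if sum((a - b) ** 2 for a, b in zip(p, q)) < (2 * radius) ** 2:
                            ok = False
        if ok:
            centres.append(p)
            grid.setdefault(key, []).append(p)
    return centres
```

    Because the cell width is at least one sphere diameter, any overlapping pair of centres must lie in the same or adjacent cells, so the 3×3×3 neighborhood check is sufficient; the same grid idea accelerates the next-sphere-boundary search during particle tracking.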

  6. Last-passage Monte Carlo algorithm for mutual capacitance.

    PubMed

    Hwang, Chi-Ok; Given, James A

    2006-08-01

    We develop and test the last-passage diffusion algorithm, a charge-based Monte Carlo algorithm, for the mutual capacitance of a system of conductors. The first-passage algorithm is highly efficient because it is charge based and incorporates importance sampling; it averages over the properties of Brownian paths that initiate outside the conductor and terminate on its surface. However, this algorithm does not seem to generalize to mutual capacitance problems. The last-passage algorithm, in a sense, is the time reversal of the first-passage algorithm; it involves averages over particles that initiate on an absorbing surface, leave that surface, and diffuse away to infinity. To validate this algorithm, we calculate the mutual capacitance matrix of the circular-disk parallel-plate capacitor and compare with the known numerical results. Good agreement is obtained.

  7. Radiographic Capabilities of the MERCURY Monte Carlo Code

    SciTech Connect

    McKinley, M S; von Wittenau, A

    2008-04-07

    MERCURY is a modern, parallel, general-purpose Monte Carlo code being developed at the Lawrence Livermore National Laboratory. Recently, a radiographic capability has been added. MERCURY can create a source of diagnostic, virtual particles that are aimed at pixels in an image tally. This new feature is compared to the radiography code, HADES, for verification and timing. Comparisons for accuracy were made using the French Test Object, and comparisons for timing were made by tracking through an unstructured mesh. In addition, self-consistency tests were run in MERCURY for the British Test Object and a scattering test problem. MERCURY and HADES were found to agree to the precision of the input data. HADES appears to run around eight times faster than MERCURY in the timing study. Profiling the MERCURY code has turned up several differences in the algorithms which account for this. These differences will be addressed in a future release of MERCURY.

  8. Effect of doping of graphene structure: A Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Masrour, R.; Jabar, A.

    2016-10-01

    In this work, we have studied the effect of magnetic atom doping of the graphene structure using Monte Carlo simulation. The reduced critical temperature as a function of the magnetic atom doping x has been deduced from the thermal variation of the magnetization and magnetic susceptibility. The variation of the magnetization versus the crystal field of the graphene structure for different x and for different reduced temperatures has been established. We have also computed the coercive field (hC) as a function of x in the graphene structure, finding that hC increases with increasing x, as found experimentally. Doping thus induces magnetism in graphene. Magnetically doped graphene systems are potential candidates for application in future spintronic devices, although magnetometry requires macroscopic quantities of graphene to detect magnetic moments directly.
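
    The Monte Carlo approach can be illustrated with a site-diluted Ising model on a small square lattice as a stand-in for magnetic-atom doping at concentration x. The actual study uses a graphene (honeycomb) lattice and crystal-field terms; everything below is a simplified sketch:

```python
import math
import random

def diluted_ising(L=8, x=0.9, T=1.0, sweeps=300, seed=11):
    """Metropolis simulation of a site-diluted Ising model on an L x L square
    lattice with periodic boundaries; a fraction x of sites carries a spin.
    Returns |magnetization| per magnetic site."""
    rng = random.Random(seed)
    occ = [[1 if rng.random() < x else 0 for _ in range(L)] for _ in range(L)]
    spin = [[1] * L for _ in range(L)]               # start fully aligned
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        if not occ[i][j]:
            continue                                 # vacancy: nothing to flip
        h = sum(occ[a][b] * spin[a][b]
                for a, b in (((i + 1) % L, j), ((i - 1) % L, j),
                             (i, (j + 1) % L), (i, (j - 1) % L)))
        dE = 2.0 * spin[i][j] * h                    # J = 1 ferromagnetic coupling
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spin[i][j] *= -1
    nmag = sum(map(sum, occ))
    m = sum(spin[i][j] for i in range(L) for j in range(L) if occ[i][j]) / nmag
    return abs(m)
```

    Repeating this over a temperature grid and locating the drop in |m| (or the peak in susceptibility) is how the reduced critical temperature versus x would be extracted in such a simulation.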

  9. bhlight: GENERAL RELATIVISTIC RADIATION MAGNETOHYDRODYNAMICS WITH MONTE CARLO TRANSPORT

    SciTech Connect

    Ryan, B. R.; Gammie, C. F.; Dolence, J. C.

    2015-07-01

    We present bhlight, a numerical scheme for solving the equations of general relativistic radiation magnetohydrodynamics using a direct Monte Carlo solution of the frequency-dependent radiative transport equation. bhlight is designed to evolve black hole accretion flows at intermediate accretion rate, in the regime between the classical radiatively efficient disk and the radiatively inefficient accretion flow (RIAF), in which global radiative effects play a sub-dominant but non-negligible role in disk dynamics. We describe the governing equations, numerical method, idiosyncrasies of our implementation, and a suite of test and convergence results. We also describe example applications to radiative Bondi accretion and to a slowly accreting Kerr black hole in axisymmetry.

  10. bhlight: General Relativistic Radiation Magnetohydrodynamics with Monte Carlo Transport

    DOE PAGES

    Ryan, Benjamin R; Dolence, Joshua C.; Gammie, Charles F.

    2015-06-25

    We present bhlight, a numerical scheme for solving the equations of general relativistic radiation magnetohydrodynamics using a direct Monte Carlo solution of the frequency-dependent radiative transport equation. bhlight is designed to evolve black hole accretion flows at intermediate accretion rate, in the regime between the classical radiatively efficient disk and the radiatively inefficient accretion flow (RIAF), in which global radiative effects play a sub-dominant but non-negligible role in disk dynamics. We describe the governing equations, numerical method, idiosyncrasies of our implementation, and a suite of test and convergence results. We also describe example applications to radiative Bondi accretion and to a slowly accreting Kerr black hole in axisymmetry.

  11. Bold Diagrammatic Monte Carlo Method Applied to Fermionized Frustrated Spins

    NASA Astrophysics Data System (ADS)

    Kulagin, S. A.; Prokof'ev, N.; Starykh, O. A.; Svistunov, B.; Varney, C. N.

    2013-02-01

    We demonstrate, by considering the triangular lattice spin-1/2 Heisenberg model, that Monte Carlo sampling of skeleton Feynman diagrams within the fermionization framework offers a universal first-principles tool for strongly correlated lattice quantum systems. We observe the fermionic sign blessing—cancellation of higher order diagrams leading to a finite convergence radius of the series. We calculate the magnetic susceptibility of the triangular-lattice quantum antiferromagnet in the correlated paramagnet regime and reveal a surprisingly accurate microscopic correspondence with its classical counterpart at all accessible temperatures. The extrapolation of the observed relation to zero temperature suggests the absence of the magnetic order in the ground state. We critically examine the implications of this unusual scenario.

  12. Monte-Carlo Continuous Energy Burnup Code System.

    2007-08-31

    Version 00 MCB is a Monte Carlo Continuous Energy Burnup Code for a general-purpose use to calculate a nuclide density time evolution with burnup or decay. It includes eigenvalue calculations of critical and subcritical systems as well as neutron transport calculations in fixed source mode or k-code mode to obtain reaction rates and energy deposition that are necessary for burnup calculations. The MCB-1C patch file and data packages as distributed by the NEADB are very well organized and are being made available through RSICC as received. The RSICC package includes the MCB-1C patch and MCB data libraries. Installation of MCB requires MCNP4C source code and utility programs, which are not included in this MCB distribution. They were provided with the now obsolete CCC-700/MCNP-4C package.

  13. Monte Carlo studies of ordering in nitride ternary alloys

    NASA Astrophysics Data System (ADS)

    Łopuszyński, Michał; Majewski, Jacek A.

    2013-12-01

We present exemplary results of extensive theoretical studies that shed light on the longstanding problem of composition fluctuations in nitride alloys. We analyze short- and long-range ordering (SRO and LRO) on the zinc-blende cationic sublattice in the bulk nitride ternary alloys GaInN, AlInN, and AlGaN. This comprehensive analysis is based on Monte Carlo calculations within the NVT ensemble and covers the whole range of concentrations and temperatures from 873 K up to 1673 K. It turns out that for the In-containing alloys (i.e., GaInN and AlInN), a considerable degree of SRO occurs, as quantified by the Warren-Cowley short-range order parameter. By contrast, for the AlGaN alloy any kind of short-range order is negligible. We also do not observe long-range ordering in any of the three alloys studied.
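The Warren-Cowley parameter mentioned in the abstract has a simple operational definition: alpha_1 = 1 - P_AB / c_B, where P_AB is the probability that a nearest neighbor of an A atom is a B atom and c_B is the B concentration. The sketch below computes it on a periodic 2D square lattice, a simplified hypothetical stand-in for the zinc-blende cationic sublattice the paper actually uses:

```python
import numpy as np

def warren_cowley_alpha(lattice, c_b):
    """First-shell Warren-Cowley parameter alpha_1 = 1 - P_AB / c_B on a
    periodic 2D square lattice.  `lattice` holds 0 for A atoms and 1 for
    B atoms; P_AB is the fraction of B atoms among the four nearest
    neighbors of A sites."""
    n = lattice.shape[0]
    b_neighbors = total = 0
    for x, y in np.argwhere(lattice == 0):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            total += 1
            b_neighbors += lattice[(x + dx) % n, (y + dy) % n]
    return 1.0 - (b_neighbors / total) / c_b

# alpha ~ 0 for a random alloy, alpha < 0 for unlike-neighbor ordering,
# alpha > 0 for clustering of like atoms.
checkerboard = np.indices((8, 8)).sum(axis=0) % 2   # perfectly ordered, c_B = 0.5
print(warren_cowley_alpha(checkerboard, 0.5))        # -> -1.0
```

A random 0/1 lattice gives a value near zero, which is the baseline against which the paper's "considerable degree of SRO" for GaInN and AlInN is judged.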

  14. A study of Monte Carlo radiative transfer through fractal clouds

    SciTech Connect

    Gautier, C.; Lavallec, D.; O`Hirok, W.; Ricchiazzi, P.

    1996-04-01

An understanding of radiation transport (RT) through clouds is fundamental to studies of the earth's radiation budget and climate dynamics. The transmission through horizontally homogeneous clouds has been studied thoroughly using accurate, discrete-ordinates radiative transfer models. However, the applicability of these results to general problems of the global radiation budget is limited by the plane-parallel assumption and the fact that real cloud fields show variability, both vertically and horizontally, on all size scales. To understand how radiation interacts with realistic clouds, we have used a Monte Carlo radiative transfer model to compute the details of the photon-cloud interaction on synthetic cloud fields. The synthetic cloud fields, generated by a cascade model, reproduce the scaling behavior as well as the cloud variability observed in and estimated from satellite cloud data.
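The paper's cascade model is not specified here; a minimal sketch of the general idea, in one dimension and with assumed parameters, is a bounded multiplicative cascade: each cell's mass is split unevenly between two children, with the imbalance shrinking at each level so the field stays positive and mean-conserving while acquiring variability on all scales:

```python
import numpy as np

def bounded_cascade(levels, f=0.5, rng=None):
    """1D multiplicative bounded cascade, a common way to synthesize
    scaling cloud liquid-water fields (the paper's 2D model is analogous).
    Each cell of mass m becomes two children m*w and m*(2-w), with the
    random imbalance |w - 1| <= f * 2**(-k/2) shrinking per level k, so
    every value stays positive and the field mean is conserved."""
    rng = rng or np.random.default_rng()
    field = np.array([1.0])
    for k in range(levels):
        w = 1.0 + f * 2.0 ** (-0.5 * k) * rng.choice([-1.0, 1.0], size=field.size)
        child = np.empty(2 * field.size)
        child[0::2] = field * w
        child[1::2] = field * (2.0 - w)
        field = child
    return field

field = bounded_cascade(10, rng=np.random.default_rng(42))  # 1024-cell field
```

The resulting field would then serve as the optical-depth input to a photon-tracking Monte Carlo pass; the cascade step itself is cheap, so large ensembles of synthetic cloud scenes are easy to generate.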

  15. Combining four Monte Carlo estimators for radiation momentum deposition

    SciTech Connect

    Urbatsch, Todd J; Hykes, Joshua M

    2010-11-18

Using four distinct Monte Carlo estimators for momentum deposition - analog, absorption, collision, and track-length estimators - we compute a combined estimator. In the wide range of problems tested, the combined estimator always has a figure of merit (FOM) equal to or better than the other estimators. In some instances the gain in FOM is only a few percent higher than the FOM of the best solo estimator, the track-length estimator, while in one instance it is better by a factor of 2.5. Over the majority of configurations, the combined estimator's FOM is 10-20% greater than that of any solo estimator. In addition, the numerical results show that the track-length estimator is the most important term in computing the combined estimator, followed far behind by the analog estimator. The absorption and collision estimators make negligible contributions.
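The abstract does not spell out the combination rule; a standard choice, sketched below on synthetic data, is the minimum-variance unbiased combination w = C^-1 e / (e^T C^-1 e), where C is the covariance matrix of the estimator means (the estimators are correlated because they score the same histories). The three "tallies" here are hypothetical stand-ins, not the paper's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-history scores from three estimators of the same
# quantity (stand-ins for analog, collision, and track-length tallies,
# correlated through the shared per-history physics `shared`).
n = 200_000
shared = rng.normal(1.0, 0.2, n)
scores = np.stack([
    shared + rng.normal(0, 0.8, n),   # analog-like: highest variance
    shared + rng.normal(0, 0.3, n),   # collision-like
    shared + rng.normal(0, 0.1, n),   # track-length-like: tightest
])

# Minimum-variance unbiased combination of the three sample means.
C = np.cov(scores) / n                 # covariance of the estimator means
e = np.ones(3)
cinv_e = np.linalg.solve(C, e)
w = cinv_e / cinv_e.sum()              # weights sum to 1 -> unbiased
combined_mean = w @ scores.mean(axis=1)
combined_var = 1.0 / (e @ cinv_e)      # never worse than any solo estimator
```

Because the unit weight vectors (use one estimator, ignore the rest) are feasible, the combined variance is bounded above by the best solo variance, which mirrors the paper's finding that the combined FOM is never worse than the track-length estimator alone.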

  16. Vector and parallel Monte Carlo radiative heat transfer simulation

    SciTech Connect

Burns, P.J.; Pryor, D.V.

    1989-01-01

    A fully vectorized version of a Monte Carlo algorithm of radiative heat transfer in two-dimensional geometries is presented. This algorithm differs from previous applications in that its capabilities are more extensive, with arbitrary numbers of surfaces, arbitrary numbers of material properties, and surface characteristics that include transmission, specular reflection, and diffuse reflection (all of which may be functions of the angle of incidence). The algorithm is applied to an irregular, experimental geometry and implemented on a Cyber 205. A speedup factor of approximately 16, for this combination of geometry and material properties, is achieved for the vector version over the scalar code. Issues related to the details of vectorization, including heavy use of bit addressability, the maintaining of long vector lengths, and gather/scatter use, are discussed. The parallel application of this algorithm is straightforward and is discussed in light of architectural differences among several current supercomputers.

  17. Residual entropy of ice III from Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Kolafa, Jiří

    2016-03-01

We calculated the residual entropy of ice III as a function of the occupation probabilities of hydrogen positions α and β, assuming equal energies of all configurations. To do this, a discrete ice model with a Bjerrum-defect energy penalty and harmonic terms to constrain the occupation probabilities was simulated by the Metropolis Monte Carlo method for a range of temperatures and sizes, followed by thermodynamic integration and extrapolation to N = ∞. As for other ices, the residual entropies are slightly higher than the mean-field (no-loop) approximation. However, the corrections caused by fluctuation of energies of ice samples calculated using molecular models of water are too large for accurate determination of the chemical potential and phase equilibria.

  18. Stationarity and source convergence in monte carlo criticality calculation.

    SciTech Connect

    Ueki, T.; Brown, F. B.

    2002-01-01

In Monte Carlo (MC) criticality calculations, source error propagation through the stationary cycles and source convergence in the settling (inactive) cycles are both dominated by the dominance ratio (DR) of the fission kernel, i.e., the ratio of the second-largest to the largest eigenvalue. For symmetric two-fissile-component systems with DR close to unity, the extinction of fission source sites can occur in one of the components even when the initial source is symmetric and the number of histories per cycle is larger than one thousand. When such a system is made slightly asymmetric, the neutron effective multiplication factor (keff) over the inactive cycles does not reflect the convergence to the stationary source distribution. To overcome this problem, relative entropy (Kullback-Leibler distance) is applied to a slightly asymmetric two-fissile-component problem with a dominance ratio of 0.9925. Numerical results show that relative entropy is effective as an a posteriori diagnostic tool.
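The diagnostic itself is straightforward to compute: bin the fission source sites each cycle and evaluate D(p || q) against a reference distribution; the quantity decays toward zero as the source converges. The sketch below uses a hypothetical 10-bin slab and a mock relaxation step in place of an actual cycle-to-cycle source iteration:

```python
import numpy as np

def relative_entropy(p, q):
    """Kullback-Leibler distance D(p || q) between two binned fission
    source distributions; bins where p is zero contribute nothing
    (q is assumed strictly positive, as for a smoothed reference)."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Toy settling cycles (hypothetical 10-bin slab): the binned source
# starts biased toward one end and relaxes toward stationarity.
stationary = np.full(10, 0.1)
source = np.array([5, 4, 3, 2, 1, 1, 1, 1, 1, 1], float)
for cycle in range(5):
    d = relative_entropy(source, stationary)
    print(f"cycle {cycle}: D = {d:.4f}")
    source = 0.5 * source / source.sum() + 0.5 * stationary  # mock relaxation
```

In practice the true stationary distribution is unknown, so the diagnostic is typically applied between successive cycles (or against the accumulated source), declaring convergence once D flattens out near its statistical floor.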

  19. Monte Carlo Studies of Matrix Theory Correlation Functions

    SciTech Connect

    Hanada, Masanori; Nishimura, Jun; Sekino, Yasuhiro; Yoneya, Tamiaki

    2010-04-16

    We study correlation functions in (0+1)-dimensional maximally supersymmetric U(N) gauge theory, which represents the low-energy effective theory of D0-branes. In the large-N limit, the gauge-gravity duality predicts power-law behaviors in the infrared region for the two-point correlation functions of operators corresponding to supergravity modes. We evaluate such correlation functions on the gauge theory side by the Monte Carlo method. Clear power-law behaviors are observed at N=3, and the predicted exponents are confirmed consistently. Our results suggest that the agreement extends to the M-theory regime, where the supergravity analysis in 10 dimensions may not be justified a priori.

  20. Faking and the validity of conscientiousness: a Monte Carlo investigation.

    PubMed

    Komar, Shawn; Brown, Douglas J; Komar, Jennifer A; Robie, Chet

    2008-01-01

    The article reports the findings from a Monte Carlo investigation examining the impact of faking on the criterion-related validity of Conscientiousness for predicting supervisory ratings of job performance. Based on a review of faking literature, 6 parameters were manipulated in order to model 4,500 distinct faking conditions (5 [magnitude] x 5 [proportion] x 4 [variability] x 3 [faking-Conscientiousness relationship] x 3 [faking-performance relationship] x 5 [selection ratio]). Overall, the results indicated that validity change is significantly affected by all 6 faking parameters, with the relationship between faking and performance, the proportion of fakers in the sample, and the magnitude of faking having the strongest effect on validity change. Additionally, the association between several of the parameters and changes in criterion-related validity was conditional on the faking-performance relationship. The results are discussed in terms of their practical and theoretical implications for using personality testing for employee selection. PMID:18211141
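The core mechanism the study manipulates can be illustrated in a few lines: when a subset of applicants inflate their observed Conscientiousness scores and the inflation is unrelated to performance, the added score variance attenuates the criterion-related validity. The condition below (25% fakers, 1 SD inflation, faking uncorrelated with performance, operational validity set to .30 for illustration) is one hypothetical cell out of the study's 4,500, not the authors' actual design:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical applicant pool: true Conscientiousness predicts job
# performance with a modest validity (.30 here purely for illustration).
consc = rng.normal(0.0, 1.0, n)
perf = 0.30 * consc + rng.normal(0.0, 1.0, n)

# One faking condition: 25% of applicants inflate their observed
# score by a full standard deviation, independently of performance.
fakers = rng.random(n) < 0.25
observed = consc + np.where(fakers, 1.0, 0.0)

r_true = np.corrcoef(consc, perf)[0, 1]
r_faked = np.corrcoef(observed, perf)[0, 1]
print(f"validity without faking: {r_true:.3f}, with faking: {r_faked:.3f}")
```

The study's stronger result, that validity change depends most on the faking-performance relationship, corresponds to replacing the independent `fakers` indicator with one correlated (positively or negatively) with `perf`.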