Science.gov

Sample records for combining monte carlo

  1. Successful combination of the stochastic linearization and Monte Carlo methods

    NASA Technical Reports Server (NTRS)

    Elishakoff, I.; Colombi, P.

    1993-01-01

    A combination of stochastic linearization and Monte Carlo techniques is presented for the first time in the literature. A system with separable nonlinear damping and a nonlinear restoring force is considered. The proposed combination of energy-wise linearization with the Monte Carlo method yields an error under 5 percent, a reduction of the error of conventional stochastic linearization by a factor of 4.6.

  2. A Monte Carlo method for combined segregation and linkage analysis

    SciTech Connect

    Guo, S.W.; Thompson, E.A.

    1992-11-01

    The authors introduce a Monte Carlo approach to combined segregation and linkage analysis of a quantitative trait observed in an extended pedigree. In conjunction with the Monte Carlo method of likelihood-ratio evaluation proposed by Thompson and Guo, the method provides for estimation and hypothesis testing. The greatest attraction of this approach is its ability to handle complex genetic models and large pedigrees. Two examples illustrate the practicality of the method. One is of simulated data on a large pedigree; the other is a reanalysis of published data previously analyzed by other methods. 40 refs., 5 figs., 5 tabs.

  3. Monte Carlo Benchmark

    SciTech Connect

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
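    The particle life cycle the benchmark exercises (creation, tracking, tallying, destruction) can be illustrated with a serial sketch. The 1-D slab random walk below is purely illustrative and is not the MCB code itself; the cross-section and absorption values are assumptions for the example.

```python
import math
import random

def track_particles(n_particles, sigma_t=1.0, absorb_prob=0.3, slab=5.0, seed=1):
    """Create, track, tally, and destroy particles in a 1-D slab
    (an illustrative toy, not the MCB itself)."""
    rng = random.Random(seed)
    tallies = {"absorbed": 0, "leaked": 0, "reflected": 0}
    for _ in range(n_particles):
        x, mu = 0.0, 1.0                                         # creation at the left face
        while True:
            x += mu * (-math.log(1.0 - rng.random()) / sigma_t)  # sample free-flight length
            if x < 0.0:
                tallies["reflected"] += 1                        # destruction: escaped left
                break
            if x > slab:
                tallies["leaked"] += 1                           # destruction: escaped right
                break
            if rng.random() < absorb_prob:
                tallies["absorbed"] += 1                         # destruction: absorbed
                break
            mu = rng.uniform(-1.0, 1.0)                          # isotropic scatter
    return tallies

print(track_particles(10_000))
```

    A parallel version of the same loop is embarrassingly parallel per particle, which is why tallying and particle exchange dominate the communication cost.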

  4. A combined approach of variance-reduction techniques for the efficient Monte Carlo simulation of linacs

    NASA Astrophysics Data System (ADS)

    Rodriguez, M.; Sempau, J.; Brualla, L.

    2012-05-01

    A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named ‘splitting-roulette’, was implemented in the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented in any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two splitting modes: simple splitting and ‘selective splitting’. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45.
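    The two variance-reduction games combined here are simple to state. The following sketch shows the standard unbiased weight bookkeeping for splitting and Russian roulette; it is a generic illustration, not PENELOPE's implementation, and the particle fields are assumed for the example.

```python
import random

def split(particle, n):
    """Particle splitting: replace one particle by n copies, each carrying
    1/n of the weight, so the total weight is conserved."""
    w = particle["weight"] / n
    return [dict(particle, weight=w) for _ in range(n)]

def russian_roulette(particle, survival_prob, rng=random):
    """Russian roulette: kill the particle with probability 1 - p; survivors
    have their weight divided by p, so the game is unbiased in expectation."""
    if rng.random() < survival_prob:
        survivor = dict(particle)
        survivor["weight"] /= survival_prob
        return survivor
    return None

# Usage: split a unit-weight particle four ways; total weight is unchanged.
p = {"weight": 1.0, "energy_mev": 6.0}    # energy value is illustrative
copies = split(p, 4)
print(sum(c["weight"] for c in copies))   # 1.0
```

    Splitting is applied where particles are headed toward the region of interest; roulette removes low-weight particles elsewhere, so computing time concentrates where it improves the tally.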

  5. Monte Carlo Example Programs

    SciTech Connect

    Kalos, M.

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo technique for calculating the ground state energy of the hydrogen atom.
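    The original FORTRAN sources are not reproduced in this record. A generic variational Monte Carlo sketch of the same calculation (Metropolis sampling of |ψ|² for the assumed trial wavefunction ψ = e^(−αr), in atomic units) looks like the following; at α = 1 the trial function is exact and the estimate equals the true ground-state energy of −0.5 hartree.

```python
import math
import random

def vmc_hydrogen(alpha=1.0, n_steps=20_000, step=0.5, seed=2):
    """Variational Monte Carlo for hydrogen with trial wavefunction exp(-alpha*r).
    Metropolis sampling of |psi|^2; local energy E_L = -alpha^2/2 + (alpha-1)/r."""
    rng = random.Random(seed)
    r = [0.5, 0.5, 0.5]
    energy, count = 0.0, 0
    for i in range(n_steps):
        trial = [x + step * (rng.random() - 0.5) for x in r]
        r_old = math.sqrt(sum(x * x for x in r))
        r_new = math.sqrt(sum(x * x for x in trial))
        # acceptance ratio |psi(new)/psi(old)|^2 = exp(-2*alpha*(r_new - r_old))
        if rng.random() < math.exp(-2.0 * alpha * (r_new - r_old)):
            r, r_old = trial, r_new
        if i > n_steps // 10:                    # discard burn-in
            energy += -0.5 * alpha ** 2 + (alpha - 1.0) / r_old
            count += 1
    return energy / count

print(vmc_hydrogen())   # -0.5 (the local energy is constant when alpha = 1)
```

    Varying α away from 1 raises the energy estimate, which is the variational principle the programs illustrate.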

  6. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.

  7. Multiplatform application for calculating a combined standard uncertainty using a Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Niewinski, Marek; Gurnecki, Pawel

    2016-12-01

    The paper presents a new computer program for calculating a combined standard uncertainty. It implements the algorithm described in JCGM 101:2008, which is concerned with the use of a Monte Carlo method as an implementation of the propagation of distributions for uncertainty evaluation. The accuracy of the calculation is ensured by using high-quality random number generators. The paper describes the main principles of the program and compares the obtained results with example problems presented in JCGM Supplement 1.
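    The propagation-of-distributions idea from JCGM 101 can be sketched in a few lines: draw the inputs from their assigned distributions, push each draw through the measurement model, and summarize the output sample with a mean, a standard uncertainty, and a coverage interval. This is a minimal illustration, not the program described in the paper; the model V = I·R and its input distributions are assumptions for the example.

```python
import random
import statistics

def propagate(model, input_samplers, n=100_000, coverage=0.95, seed=3):
    """JCGM 101-style propagation of distributions: sample the inputs, push
    them through the measurement model, and summarize the output sample."""
    rng = random.Random(seed)
    ys = sorted(model(*(draw(rng) for draw in input_samplers)) for _ in range(n))
    lo = ys[int(n * (1 - coverage) / 2)]
    hi = ys[int(n * (1 + coverage) / 2) - 1]
    return statistics.fmean(ys), statistics.stdev(ys), (lo, hi)

# Example: V = I * R with I ~ N(1.0, 0.01) A and R ~ N(100, 2) ohm
mean, u, ci = propagate(lambda i, r: i * r,
                        [lambda g: g.gauss(1.0, 0.01),
                         lambda g: g.gauss(100.0, 2.0)])
print(mean, u, ci)   # mean near 100 V, combined standard uncertainty near 2.2 V
```

    The sample standard deviation here plays the role of the combined standard uncertainty that the GUM's law of propagation of uncertainty would otherwise approximate analytically.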

  8. Development of a multi-modal Monte-Carlo radiation treatment planning system combined with PHITS

    NASA Astrophysics Data System (ADS)

    Kumada, Hiroaki; Nakamura, Takemi; Komeda, Masao; Matsumura, Akira

    2009-07-01

    A new multi-modal Monte-Carlo radiation treatment planning system is under development at the Japan Atomic Energy Agency (JAEA). This system (development code: JCDS-FX) builds on the fundamental technologies of JCDS. JCDS was developed by JAEA to perform treatment planning for boron neutron capture therapy (BNCT), which is being conducted at JRR-4 in JAEA. JCDS has many advantages based on practical accomplishments in actual clinical trials of BNCT at JRR-4, and these advantages have been carried over to JCDS-FX. One of the features of JCDS-FX is that PHITS has been applied to the particle transport calculation. PHITS is a multipurpose particle Monte-Carlo transport code; application of PHITS thus enables dose evaluation not only for BNCT but also for several other radiotherapies such as proton therapy. To verify the calculation accuracy of JCDS-FX with PHITS for BNCT, treatment planning of an actual BNCT session conducted at JRR-4 was performed retrospectively. The verification results demonstrated that the new system is applicable to BNCT clinical trials in practical use. Within the framework of R&D for laser-driven proton therapy, we have begun studying the application of JCDS-FX combined with PHITS to proton therapy in addition to BNCT. Several features and performance results of the new multimodal Monte-Carlo radiotherapy planning system are presented.

  9. Development of a multi-modal Monte-Carlo radiation treatment planning system combined with PHITS

    SciTech Connect

    Kumada, Hiroaki; Nakamura, Takemi; Komeda, Masao; Matsumura, Akira

    2009-07-25

    A new multi-modal Monte-Carlo radiation treatment planning system is under development at the Japan Atomic Energy Agency (JAEA). This system (development code: JCDS-FX) builds on the fundamental technologies of JCDS. JCDS was developed by JAEA to perform treatment planning for boron neutron capture therapy (BNCT), which is being conducted at JRR-4 in JAEA. JCDS has many advantages based on practical accomplishments in actual clinical trials of BNCT at JRR-4, and these advantages have been carried over to JCDS-FX. One of the features of JCDS-FX is that PHITS has been applied to the particle transport calculation. PHITS is a multipurpose particle Monte-Carlo transport code; application of PHITS thus enables dose evaluation not only for BNCT but also for several other radiotherapies such as proton therapy. To verify the calculation accuracy of JCDS-FX with PHITS for BNCT, treatment planning of an actual BNCT session conducted at JRR-4 was performed retrospectively. The verification results demonstrated that the new system is applicable to BNCT clinical trials in practical use. Within the framework of R&D for laser-driven proton therapy, we have begun studying the application of JCDS-FX combined with PHITS to proton therapy in addition to BNCT. Several features and performance results of the new multimodal Monte-Carlo radiotherapy planning system are presented.

  10. Combining reactive and configurational-bias Monte Carlo: Confinement influence on the propene metathesis reaction system in various zeolites

    NASA Astrophysics Data System (ADS)

    Jakobtorweihen, S.; Hansen, N.; Keil, F. J.

    2006-12-01

    In order to efficiently calculate chemical equilibria of large molecules in a confined environment, the reactive Monte Carlo technique is combined with the configurational-bias Monte Carlo approach. To prove that detailed balance is fulfilled, the acceptance rule for this combination of Monte Carlo techniques is derived in detail. Notably, all other acceptance rules for Monte Carlo trial moves usually carried out in combination with the configurational-bias Monte Carlo approach can be deduced from this derivation. As an application of the combination of reactive and configurational-bias Monte Carlo, the influence of different zeolitic confinements (MFI, TON, LTL, and FER) on the reaction equilibrium and the selectivity of the propene metathesis reaction system was investigated. Compared to the bulk phase, the conversion is increased significantly. The authors study this reaction system in the temperature range between 300 and 600 K and the pressure range from 1 to 7 bars. In contrast to the bulk phase, pressure and temperature have a strong influence on the composition of the reaction mixture in confinement. At low pressures and temperatures both conversion and selectivity are highest. Furthermore, the equilibrium composition is strongly dependent on the type of zeolite. This demonstrates the important role of the host structure in catalytic systems.

  11. Influence of combining rules on the cavity occupancy of clathrate hydrates by Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Papadimitriou, Nikolaos I.; Tsimpanogiannis, Ioannis N.; Economou, Ioannis G.; Stubos, Athanassios K.

    2014-09-01

    Assessing the exact amount of gas stored in clathrate-hydrate structures can be addressed by either molecular-level simulations (e.g. Monte Carlo) or continuum-level modelling (e.g. van der Waals-Platteeuw-theory-based models). In either case, the Lorentz-Berthelot (LB) combining rules are by far the most common approach for the evaluation of the parameters between the different types of atoms that form the hydrate structure. The effect of combining rules on the calculations has not been addressed adequately in the hydrate-related literature. Only recently has the use of the LB combining rules in hydrate studies been questioned. In the current study, we report an extensive series of Grand Canonical Monte Carlo simulations along the three-phase (H-Lw-V) equilibrium curve. The exact geometry of hydrate crystals is known from diffraction experiments and, therefore, the formation of hydrates can be simulated as a process of gas adsorption in a solid porous material. We examine the effect of deviations from the LB combining rules on the cavity occupancy of argon hydrates and work towards quantifying it. The specific system is selected as a result of the characteristic behaviour of argon to form hydrates of different structures depending on the prevailing pressure. In particular, an sII hydrate is formed at lower pressures, while an sI hydrate is formed at intermediate pressures, and finally an sH hydrate is formed at higher pressures.
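    The Lorentz-Berthelot combining rules in question are simple to state: the arithmetic mean of the collision diameters and the geometric mean of the well depths. A minimal sketch follows; the Lennard-Jones parameters in the usage line are illustrative values, not those used in the study.

```python
import math

def lorentz_berthelot(sigma_i, sigma_j, eps_i, eps_j):
    """Lorentz rule: arithmetic mean of the LJ diameters sigma.
    Berthelot rule: geometric mean of the LJ well depths epsilon."""
    sigma_ij = 0.5 * (sigma_i + sigma_j)
    eps_ij = math.sqrt(eps_i * eps_j)
    return sigma_ij, eps_ij

# Illustrative LJ parameters (sigma in angstrom, epsilon/k_B in K)
sigma, eps = lorentz_berthelot(3.405, 3.166, 119.8, 78.2)
print(sigma, eps)
```

    Deviations from LB are usually introduced by scaling the cross epsilon (and sometimes sigma) by an adjustable factor, which is the kind of perturbation the study quantifies.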

  12. Monte Carlo Reliability Analysis.

    DTIC Science & Technology

    1987-10-01

    References cited in the record include: E. Cinlar, Introduction to Stochastic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1975; R. E. Barlow and F. Proschan, Statistical Theory of Reliability and Life Testing; E. Lewis and Z. Tu, "Monte Carlo Reliability Modeling by Inhomogeneous Markov Processes," Reliab. Engr. 16, 277-296 (1986).

  13. Fundamentals of Monte Carlo

    SciTech Connect

    Wollaber, Allan Benton

    2016-06-16

    This is a PowerPoint presentation that serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background; a simple example: estimating π), Why does this even work? (the Law of Large Numbers, the Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
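    Two of the outline's examples, estimating π and inverse transform sampling, can be sketched briefly. This is a generic illustration, not the presentation's own code.

```python
import math
import random

def estimate_pi(n=1_000_000, seed=4):
    """Estimate pi: the fraction of uniform points in the unit square that
    land inside the quarter circle converges to pi/4 (Law of Large Numbers)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

def sample_exponential(lam, rng):
    """Inverse transform sampling: if U ~ Uniform(0,1), then -ln(1-U)/lam
    has the exponential CDF F(x) = 1 - exp(-lam*x)."""
    return -math.log(1.0 - rng.random()) / lam

print(estimate_pi())   # close to 3.1416; the error shrinks like 1/sqrt(n)
```

    The Central Limit Theorem is what turns the 1/sqrt(n) observation into a quantitative error bar on the estimate.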

  14. MCFET - A MICROSTRUCTURAL LATTICE MODEL FOR STRAIN ORIENTED PROBLEMS: A COMBINED MONTE CARLO FINITE ELEMENT TECHNIQUE

    NASA Technical Reports Server (NTRS)

    Gayda, J.

    1994-01-01

    A specialized, microstructural lattice model, termed MCFET for combined Monte Carlo Finite Element Technique, has been developed to simulate microstructural evolution in material systems where modulated phases occur and the directionality of the modulation is influenced by internal and external stresses. Since many of the physical properties of materials are determined by microstructure, it is important to be able to predict and control microstructural development. MCFET uses a microstructural lattice model that can incorporate all relevant driving forces and kinetic considerations. Unlike molecular dynamics, this approach was developed specifically to predict macroscopic behavior, not atomistic behavior. In this approach, the microstructure is discretized into a fine lattice. Each element in the lattice is labeled in accordance with its microstructural identity. Diffusion of material at elevated temperatures is simulated by allowing exchanges of neighboring elements if the exchange lowers the total energy of the system. A Monte Carlo approach is used to select the exchange site, while the change in energy associated with stress fields is computed using a finite element technique. The MCFET analysis has been validated by comparing this approach with a closed-form, analytical method for stress-assisted shape changes of a single particle in an infinite matrix. Sample MCFET analyses for multiparticle problems have also been run and, in general, the resulting microstructural changes associated with the application of an external stress are similar to those observed in Ni-Al-Cr alloys at elevated temperatures. This program is written in FORTRAN for use on a 370 series IBM mainframe. It has been implemented on an IBM 370 running VM/SP and an IBM 3084 running MVS. It requires the IMSL math library and 220K of RAM for execution. The standard distribution medium for this program is a 9-track 1600 BPI magnetic tape in EBCDIC format.

  15. Monte Carlo eikonal scattering

    NASA Astrophysics Data System (ADS)

    Gibbs, W. R.; Dedonder, J. P.

    2012-08-01

    Background: The eikonal approximation is commonly used to calculate heavy-ion elastic scattering. However, the full evaluation has only been done (without the use of Monte Carlo techniques or additional approximations) for α-α scattering. Purpose: Develop, improve, and test the Monte Carlo eikonal method for elastic scattering over a wide range of nuclei, energies, and angles. Method: Monte Carlo evaluation is used to calculate heavy-ion elastic scattering for heavy nuclei, including the center-of-mass correction introduced in this paper and the Coulomb interaction in terms of a partial-wave expansion. A technique for the efficient expansion of the Glauber amplitude in partial waves is developed. Results: Angular distributions are presented for a number of nuclear pairs over a wide energy range, using nucleon-nucleon scattering parameters taken from phase-shift analyses and densities from independent sources. We present the first calculations of the Glauber amplitude, without further approximation and with realistic densities, for nuclei heavier than helium. These densities respect the center-of-mass constraints. The Coulomb interaction is included in these calculations. Conclusion: The center-of-mass and Coulomb corrections are essential. Angular distributions can be predicted only up to certain critical angles, which vary with the nuclear pair and the energy, but we point out that all critical angles correspond to a momentum transfer near 1 fm⁻¹.

  16. Optimized molecular reconstruction procedure combining hybrid reverse Monte Carlo and molecular dynamics

    SciTech Connect

    Bousige, Colin; Boţan, Alexandru; Coasne, Benoît; Ulm, Franz-Josef; Pellenq, Roland J.-M.

    2015-03-21

    We report an efficient atom-scale reconstruction method that consists of combining the Hybrid Reverse Monte Carlo algorithm (HRMC) with Molecular Dynamics (MD) in the framework of a simulated annealing technique. In the spirit of the experimentally constrained molecular relaxation technique [Biswas et al., Phys. Rev. B 69, 195207 (2004)], this modified procedure offers a refined strategy in the field of reconstruction techniques, with special interest for heterogeneous and disordered solids such as amorphous porous materials. While the HRMC method generates physical structures, thanks to the use of energy penalties, the combination with MD makes the method at least one order of magnitude faster than HRMC simulations to obtain structures of similar quality. Furthermore, in order to ensure the transferability of this technique, we provide rational arguments to select the various input parameters such as the relative weight ω of the energy penalty with respect to the structure optimization. By applying the method to disordered porous carbons, we show that adsorption properties provide data to test the global texture of the reconstructed sample but are only weakly sensitive to the presence of defects. In contrast, the vibrational properties such as the phonon density of states are found to be very sensitive to the local structure of the sample.

  17. Monte Carlo fluorescence microtomography

    NASA Astrophysics Data System (ADS)

    Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge

    2011-07-01

    Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense light scattering significantly degrades the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an l0-regularized tomography model and provides an excellent solution. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probes accurately and reliably.

  18. Fitting complex population models by combining particle filters with Markov chain Monte Carlo.

    PubMed

    Knape, Jonas; de Valpine, Perry

    2012-02-01

    We show how a recent framework combining Markov chain Monte Carlo (MCMC) with particle filters (PFMCMC) may be used to estimate population state-space models. With the purpose of utilizing the strengths of each method, PFMCMC explores hidden states by particle filters, while process and observation parameters are estimated using an MCMC algorithm. PFMCMC is exemplified by analyzing time series data on a red kangaroo (Macropus rufus) population in New South Wales, Australia, using MCMC over model parameters based on an adaptive Metropolis-Hastings algorithm. We fit three population models to these data: a density-dependent logistic diffusion model with environmental variance, an unregulated stochastic exponential growth model, and a random-walk model. Bayes factors and posterior model probabilities show that there is little support for density dependence and that the random-walk model is the most parsimonious model. The particle filter Metropolis-Hastings algorithm is a brute-force method that may be used to fit a range of complex population models. Implementation is straightforward and less involved than standard MCMC for many models, and marginal densities for model selection can be obtained with little additional effort. The cost is mainly computational, resulting in long running times that may be improved by parallelizing the algorithm.
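    The PFMCMC construction, a particle filter estimating the likelihood inside a Metropolis-Hastings loop, can be sketched on a toy Gaussian random-walk state-space model. This is an illustrative sketch under assumed model and tuning choices, not the authors' kangaroo analysis or their adaptive algorithm.

```python
import math
import random

def pf_loglik(y, sigma_proc, sigma_obs=1.0, n_part=200, rng=None):
    """Bootstrap particle filter for the model x_t = x_{t-1} + N(0, sigma_proc),
    y_t = x_t + N(0, sigma_obs): returns a Monte Carlo estimate of the
    log-likelihood of the observations y."""
    rng = rng or random.Random()
    parts = [y[0] + rng.gauss(0.0, sigma_obs) for _ in range(n_part)]
    loglik = 0.0
    for obs in y:
        parts = [p + rng.gauss(0.0, sigma_proc) for p in parts]       # propagate
        ws = [math.exp(-0.5 * ((obs - p) / sigma_obs) ** 2) for p in parts]
        loglik += math.log(sum(ws) / n_part / (sigma_obs * math.sqrt(2.0 * math.pi)))
        parts = rng.choices(parts, weights=ws, k=n_part)              # resample
    return loglik

def pfmcmc(y, n_iter=200, seed=5):
    """Metropolis-Hastings over the process noise s.d., with the intractable
    likelihood replaced by the particle-filter estimate (toy PFMCMC sketch)."""
    rng = random.Random(seed)
    theta = 1.0
    ll = pf_loglik(y, theta, rng=rng)
    chain = []
    for _ in range(n_iter):
        prop = abs(theta + rng.gauss(0.0, 0.2))          # random-walk proposal
        ll_prop = pf_loglik(y, prop, rng=rng)
        if math.log(rng.random()) < ll_prop - ll:        # flat prior on theta > 0
            theta, ll = prop, ll_prop
        chain.append(theta)
    return chain
```

    Re-estimating the likelihood at every proposal is what makes the method "brute-force": the chain is valid, but each iteration costs a full particle-filter pass.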

  19. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  20. Mechanism of Kinetically Controlled Capillary Condensation in Nanopores: A Combined Experimental and Monte Carlo Approach.

    PubMed

    Hiratsuka, Tatsumasa; Tanaka, Hideki; Miyahara, Minoru T

    2017-01-24

    We find the rule of capillary condensation from the metastable state in nanoscale pores based on the transition state theory. The conventional thermodynamic theories cannot achieve it because the metastable capillary condensation inherently includes an activated process. We thus compute argon adsorption isotherms on cylindrical pore models and atomistic silica pore models mimicking the MCM-41 materials by the grand canonical Monte Carlo and the gauge cell Monte Carlo methods and evaluate the rate constant for the capillary condensation by the transition state theory. The results reveal that the rate drastically increases with a small increase in the chemical potential of the system, and the metastable capillary condensation occurs for any mesopores when the rate constant reaches a universal critical value. Furthermore, a careful comparison between experimental adsorption isotherms and the simulated ones on the atomistic silica pore models reveals that the rate constant of the real system also has a universal value. With this finding, we can successfully estimate the experimental capillary condensation pressure over a wide range of temperatures and pore sizes by simply applying the critical rate constant.

  1. Quantum Gibbs ensemble Monte Carlo

    SciTech Connect

    Fantoni, Riccardo; Moroni, Saverio

    2014-09-21

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.

  2. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551

  3. Wormhole Hamiltonian Monte Carlo.

    PubMed

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2014-07-31

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function.

  4. Isotropic Monte Carlo Grain Growth

    SciTech Connect

    Mason, J.

    2013-04-25

    IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
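    Potts-style Monte Carlo grain growth of the kind IMCGG performs can be sketched compactly. The fragment below uses a square grid with periodic boundaries for brevity (IMCGG itself uses a hexagonal grid) and an isotropic boundary energy; it is a generic illustration, not the IMCGG code.

```python
import math
import random

def grain_growth_step(grid, rng, kT=0.0):
    """One Monte Carlo sweep of isotropic grain growth: each trial picks a
    site, proposes adopting a random neighbour's orientation, and accepts if
    the boundary energy (count of unlike neighbour bonds) does not increase;
    at kT > 0 uphill moves are accepted with Boltzmann probability."""
    n = len(grid)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nbrs = [grid[(i - 1) % n][j], grid[(i + 1) % n][j],
                grid[i][(j - 1) % n], grid[i][(j + 1) % n]]
        new = rng.choice(nbrs)
        d_e = sum(v != new for v in nbrs) - sum(v != grid[i][j] for v in nbrs)
        if d_e <= 0 or (kT > 0.0 and rng.random() < math.exp(-d_e / kT)):
            grid[i][j] = new

rng = random.Random(6)
grid = [[rng.randrange(8) for _ in range(32)] for _ in range(32)]
for _ in range(20):
    grain_growth_step(grid, rng)   # grains coarsen; orientations only disappear
```

    Because a site can only adopt an orientation already present among its neighbours, the number of distinct grains is non-increasing, which is the signature of normal grain growth.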

  5. Single scatter electron Monte Carlo

    SciTech Connect

    Svatos, M.M.

    1997-03-01

    A single scatter electron Monte Carlo code (SSMC), CREEP, has been written that bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic, and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL), which has data for all elements with an atomic number between 1 and 100, over an energy range from a few eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.

  6. Robust Scale Adaptive Tracking by Combining Correlation Filters with Sequential Monte Carlo

    PubMed Central

    Ma, Junkai; Luo, Haibo; Hui, Bin; Chang, Zheng

    2017-01-01

    A robust and efficient object tracking algorithm is required in a variety of computer vision applications. Although various modern trackers have impressive performance, some challenges such as occlusion and target scale variation are still intractable, especially in complex scenarios. This paper proposes a robust scale adaptive tracking algorithm that predicts target scale by a sequential Monte Carlo method and determines the target location by the correlation filter simultaneously. By analyzing the response map of the target region, the completeness of the target can be measured by the peak-to-sidelobe rate (PSR), i.e., the lower the PSR, the more likely the target is being occluded. A strict template update strategy is designed to accommodate the appearance change and avoid template corruption. If occlusion occurs, a retention scheme is invoked so that the tracker refrains from drifting away. Additionally, feature integration is incorporated to guarantee the robustness of the proposed approach. The experimental results show that our method outperforms other state-of-the-art trackers in terms of both distance precision and overlap precision on the publicly available TB-50 dataset. PMID:28273840
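    The peak-to-sidelobe rate used for the occlusion test can be sketched directly from its definition; the size of the exclusion window around the peak is an assumption here, not a value taken from the paper.

```python
import statistics

def psr(response, excl=5):
    """Peak-to-sidelobe rate of a 2-D response map: (peak - mean(sidelobe))
    / std(sidelobe), with the sidelobe taken as everything outside a
    (2*excl+1) x (2*excl+1) window centred on the peak."""
    h, w = len(response), len(response[0])
    pi_, pj = max(((i, j) for i in range(h) for j in range(w)),
                  key=lambda ij: response[ij[0]][ij[1]])
    sidelobe = [response[i][j] for i in range(h) for j in range(w)
                if abs(i - pi_) > excl or abs(j - pj) > excl]
    peak = response[pi_][pj]
    return (peak - statistics.fmean(sidelobe)) / statistics.pstdev(sidelobe)
```

    A sharp, isolated correlation peak gives a high PSR; when the target is occluded the response map flattens and the PSR drops, which is the trigger for the retention scheme.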

  7. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  8. Quantum speedup of Monte Carlo methods.

    PubMed

    Montanaro, Ashley

    2015-09-08

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.

  9. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  10. Multilevel sequential Monte Carlo samplers

    SciTech Connect

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
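    The telescoping identity at the heart of MLMC can be illustrated on a toy problem where level l is a truncated-series approximation of the quantity of interest; coupling (the same random draw on adjacent levels) is what makes the correction terms cheap to estimate. This sketch is generic and unrelated to the PDE setting of the paper.

```python
import math
import random

def p_level(u, level):
    """Level-l approximation of exp(u): Taylor series with 2^l + 1 terms."""
    return sum(u ** k / math.factorial(k) for k in range(2 ** level + 1))

def mlmc(n0=100_000, levels=4, seed=7):
    """Multilevel Monte Carlo estimate of E[exp(U)], U ~ Uniform(0,1), via
    the telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]; the sample
    budget shrinks geometrically because the level corrections do too."""
    rng = random.Random(seed)
    est = sum(p_level(rng.random(), 0) for _ in range(n0)) / n0
    for lev in range(1, levels + 1):
        n = max(n0 // 4 ** lev, 100)
        corr = 0.0
        for _ in range(n):
            u = rng.random()                 # coupled: same u on both levels
            corr += p_level(u, lev) - p_level(u, lev - 1)
        est += corr / n
    return est

print(mlmc())   # close to e - 1 = 1.71828...
```

    The cheap, high-variance level 0 gets most of the samples, while the expensive, low-variance corrections get few; this is the cost reduction the abstract refers to.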

  11. Multilevel sequential Monte Carlo samplers

    DOE PAGES

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, which leads to a discretisation bias with step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort needed to estimate expectations for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h0 > h1 > ... > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.

  12. Suitable Candidates for Monte Carlo Solutions.

    ERIC Educational Resources Information Center

    Lewis, Jerome L.

    1998-01-01

    Discusses Monte Carlo methods, powerful and useful techniques that rely on random numbers to solve deterministic problems whose solutions may be too difficult to obtain using conventional mathematics. Reviews two excellent candidates for the application of Monte Carlo methods. (ASK)

  13. A Classroom Note on Monte Carlo Integration.

    ERIC Educational Resources Information Center

    Kolpas, Sid

    1998-01-01

    The Monte Carlo method provides approximate solutions to a variety of mathematical problems by performing random sampling simulations with a computer. Presents a program written in Quick BASIC simulating the steps of the Monte Carlo method. (ASK)
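The cited classroom program is written in Quick BASIC; the same random-sampling steps can be sketched in Python (function and variable names are illustrative):

```python
import random

def monte_carlo_integrate(f, a, b, n, seed=1):
    # Average f at n uniformly random points in [a, b] and scale by the
    # interval width; by the law of large numbers the average converges
    # to the integral's value.
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

approx = monte_carlo_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
# exact answer: the integral of x**2 over [0, 1] is 1/3
```

The statistical error shrinks like 1/sqrt(n), independent of the dimension of the integral, which is why the method is attractive for problems where quadrature rules become impractical.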

  14. Applications of Monte Carlo Methods in Calculus.

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Gordon, Florence S.

    1990-01-01

    Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)

  15. PARAMETRIC STUDY OF TISSUE OPTICAL CLEARING BY LOCALIZED MECHANICAL COMPRESSION USING COMBINED FINITE ELEMENT AND MONTE CARLO SIMULATION.

    PubMed

    Vogt, William C; Shen, Haiou; Wang, Ge; Rylander, Christopher G

    2010-07-01

    Tissue Optical Clearing Devices (TOCDs) have been shown to increase light transmission through mechanically compressed regions of naturally turbid biological tissues. We hypothesize that zones of high compressive strain induced by TOCD pins produce localized water displacement and reversible changes in tissue optical properties. In this paper, we demonstrate a novel combined mechanical finite element model and optical Monte Carlo model which simulates TOCD pin compression of an ex vivo porcine skin sample and modified spatial photon fluence distributions within the tissue. Results of this simulation qualitatively suggest that light transmission through the skin can be significantly affected by changes in compressed tissue geometry as well as concurrent changes in tissue optical properties. The development of a comprehensive multi-domain model of TOCD application to tissues such as skin could ultimately be used as a framework for optimizing future design of TOCDs.

  16. Combination of the pair density approximation and the Takahashi–Imada approximation for path integral Monte Carlo simulations

    SciTech Connect

    Zillich, Robert E.

    2015-11-15

    We construct an accurate imaginary time propagator for path integral Monte Carlo simulations of heterogeneous systems consisting of a mixture of atoms and molecules. We combine the pair density approximation, which is highly accurate but feasible only for the isotropic interactions between atoms, with the Takahashi–Imada approximation for general interactions. We present finite-temperature simulation results for the energy and structure of molecule–helium clusters X(4He)20 (X = HCCH and LiH) which show a marked improvement over the Trotter approximation, which has a 2nd-order time step bias. We show that the 4th-order corrections of the Takahashi–Imada approximation can also be applied perturbatively to a 2nd-order simulation.

  17. Monte Carlo Simulation of Plumes Spectral Emission

    DTIC Science & Technology

    2005-06-07

    Henyey-Greenstein scattering indicatrix SUBROUTINE Calculation of spectral (group) phase function of Monte Carlo Simulation of Plumes...calculations; b) Computing code SRT-RTMC-NSM intended for narrow band Spectral Radiation Transfer Ray Tracing Simulation by the Monte Carlo method with...project) Computing codes for random (Monte Carlo) simulation of molecular lines with reference to a problem of radiation transfer

  18. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    The meaningful investigation of many problems in statistics can be solved through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…

  19. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  20. The D0 Monte Carlo

    SciTech Connect

    Womersley, J. (Dept. of Physics)

    1992-10-01

    The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb^-1 data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.

  1. Combination of the Metropolis Monte Carlo and Lattice Statics method for geometry optimization of H-(Al)-ZSM-5.

    PubMed

    Pongsai, Suchaya

    2010-07-30

    In this article, the combination of the Metropolis Monte Carlo and Lattice Statics (MMC-LS) method is applied to the geometry optimization of a crystalline aluminosilicate zeolite system in the presence of cationic species (H(+)), i.e., H-(Al)-ZSM-5. The MMC-LS method proves very useful in allowing H(+) ions in the (Al)-ZSM-5 extra-framework to approach the global-minimum-energy sites. The crucial advantage of the combined MMC-LS method is that, instead of simulating thousands of random configurations via the LS method alone, only one configuration is needed for the MMC-LS simulation to reach the lowest-energy configuration. The calculation time can therefore be substantially reduced by the MMC-LS method relative to the LS method alone. The results obtained from the MMC-LS and LS-only methods are compared in terms of thermodynamic and structural properties.
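As a rough illustration of the Metropolis step that drives methods like MMC-LS, here is a generic Metropolis minimizer in Python, applied to a hypothetical one-dimensional toy energy (not the zeolite lattice energy of the paper):

```python
import math
import random

def metropolis_minimize(energy, x0, steps, temperature=0.5, step_size=0.5, seed=2):
    # Metropolis Monte Carlo: propose a random move, always accept
    # downhill moves, and accept uphill moves with probability
    # exp(-dE / T), which lets the walk escape local minima.
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for _ in range(steps):
        x_new = x + rng.uniform(-step_size, step_size)
        e_new = energy(x_new)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / temperature):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Hypothetical rugged 1D energy with several local minima; the global
# minimum lies near x = 3.4.
toy_energy = lambda x: (x - 3.0) ** 2 + math.sin(5.0 * x)
x_min, e_min = metropolis_minimize(toy_energy, x0=0.0, steps=20_000)
```

In MMC-LS the random walk plays exactly this role of hopping between basins, while the lattice statics step relaxes each visited configuration.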

  2. Recovering intrinsic fluorescence by Monte Carlo modeling.

    PubMed

    Müller, Manfred; Hendriks, Benno H W

    2013-02-01

    We present a novel way to recover intrinsic fluorescence in turbid media based on Monte Carlo generated look-up tables and making use of a diffuse reflectance measurement taken at the same location. The method has been validated on various phantoms with known intrinsic fluorescence and is benchmarked against photon-migration methods. This new method combines more flexibility in the probe design with fast reconstruction and showed similar reconstruction accuracy as found in other reconstruction methods.

  3. Semistochastic Projector Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Petruzielo, F. R.; Holmes, A. A.; Changlani, Hitesh J.; Nightingale, M. P.; Umrigar, C. J.

    2012-12-01

    We introduce a semistochastic implementation of the power method to compute, for very large matrices, the dominant eigenvalue and expectation values involving the corresponding eigenvector. The method is semistochastic in that the matrix multiplication is partially implemented numerically exactly and partially stochastically with respect to expectation values only. Compared to a fully stochastic method, the semistochastic approach significantly reduces the computational time required to obtain the eigenvalue to a specified statistical uncertainty. This is demonstrated by the application of the semistochastic quantum Monte Carlo method to systems with a sign problem: the fermion Hubbard model and the carbon dimer.
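For reference, the fully deterministic power method that the semistochastic approach builds on can be sketched as follows (illustrative Python, not the authors' code; the semistochastic variant would replace part of the matrix-vector product with stochastic sampling):

```python
def dominant_eigenpair(matrix, iters=200):
    # Plain power method: repeatedly apply the matrix and renormalise;
    # the iterate converges to the dominant eigenvector, and the
    # Rayleigh quotient to the dominant eigenvalue.
    n = len(matrix)
    v = [1.0] + [0.0] * (n - 1)
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    # Rayleigh quotient v.A.v / v.v estimates the dominant eigenvalue.
    av = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(av[i] * v[i] for i in range(n)) / sum(v[i] * v[i] for i in range(n))
    return lam, v

A = [[2.0, 1.0], [1.0, 2.0]]  # symmetric; eigenvalues are 3 and 1
lam, vec = dominant_eigenpair(A)
# lam converges to the dominant eigenvalue, 3
```

For the very large matrices of the abstract, storing `w` densely is impossible; that is where the stochastic part of the multiplication comes in.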

  4. CO adsorption on W(100) during temperature-programmed desorption: A combined density functional theory and kinetic Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Albao, Marvin A.; Padama, Allan Abraham B.

    2017-02-01

    Using combined density functional theory (DFT) and kinetic Monte Carlo (KMC) simulations, we study the adsorption of CO on W(100) at 800 K and its subsequent desorption at higher temperatures. The resulting TPD profiles are known experimentally to exhibit three desorption peaks, β1, β2, and β3, at 930 K, 1070 K, and 1375 K, respectively. Unlike more recent theoretical studies that propose that all three peaks correspond to molecular rather than associative desorption, our KMC analyses support the latter, since at 800 K dissociation is facile and CO exists as the dissociation fragments C and O. We show that these peaks arise from desorption from the same adsorption site, whose binding energy varies with the local environment, that is, the presence of nearby CO as well as of the dissociation fragments C and O. Furthermore, we show that several key parameters, such as the desorption, dissociation, and recombination barriers, all play a key role in the TPD spectra: these parameters effectively control not only the location of the TPD peaks but also the shape and width of the desorption peaks. Moreover, our KMC simulations reveal that varying the heating rate shifts the peaks but leaves their shape intact.

  5. Application of Mathematical Models in Combination with Monte Carlo Simulation for Prediction of Isoflurane Concentration in an Operation Room Theater

    PubMed Central

    ZARE SAKHVIDI, Mohammad Javad; BARKHORDARI, Abolfazl; SALEHI, Maryam; BEHDAD, Shekoofeh; FALLAHZADEH, Hossein

    2013-01-01

    The applicability of two mathematical models for inhalation exposure prediction (the well-mixed room model and the near-field/far-field model) was validated against a standard sampling method for isoflurane in one operating room. Ninety-six air samples were collected from the near and far fields of the room and quantified by gas chromatography with flame ionization detection. Isoflurane concentration was also predicted by the models, and Monte Carlo simulation was used to incorporate parameter variability. The models gave somewhat more conservative results than the measurements, but there was no significant difference between the model predictions and the direct measurements, nor between the predictions of the well-mixed room model and the near-field/far-field model, suggesting that the dispersion regime in the room was close to well mixed. Direct sampling showed that exposure in the same room for the same type of operation could vary by up to a factor of 17, variability that can be captured by Monte Carlo simulation. Mathematical models are a valuable option for predicting exposure in operating rooms. Our results also suggest that incorporating parameter variability through Monte Carlo simulation can strengthen predictions in occupational hygiene decision making. PMID:23912206
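A minimal sketch of the well-mixed room model combined with Monte Carlo sampling of parameter variability; the generation and ventilation ranges below are assumed, illustrative numbers, not the study's measured values:

```python
import random

def well_mixed_concentration(gen_rate, vent_rate):
    # Steady-state well-mixed room model: C = G / Q, where G is the
    # contaminant generation rate and Q the room ventilation rate.
    return gen_rate / vent_rate

def mc_exposure_percentiles(n=10_000, seed=3):
    # Propagate parameter variability through the model by sampling the
    # generation and ventilation rates from assumed uniform ranges.
    rng = random.Random(seed)
    conc = sorted(
        well_mixed_concentration(rng.uniform(50.0, 150.0),   # G (assumed units)
                                 rng.uniform(10.0, 30.0))    # Q (assumed units)
        for _ in range(n)
    )
    return conc[n // 2], conc[int(0.95 * n)]  # median and 95th percentile

median, p95 = mc_exposure_percentiles()
```

Reporting a percentile of the simulated distribution, rather than a single point estimate, is what lets the Monte Carlo step reflect the measured room-to-room variability.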

  6. Combining Total Monte Carlo and Benchmarks for Nuclear Data Uncertainty Propagation on a Lead Fast Reactor's Safety Parameters

    NASA Astrophysics Data System (ADS)

    Alhassan, E.; Sjöstrand, H.; Duan, J.; Gustavsson, C.; Koning, A. J.; Pomp, S.; Rochman, D.; Österlund, M.

    2014-04-01

    Analyses are carried out to assess the impact of nuclear data uncertainties on keff for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of 239Pu random ENDF-formatted libraries generated using the TALYS-based system were processed into ACE format with the NJOY-99.336 code and used as input to the Serpent Monte Carlo neutron transport code to obtain a distribution in keff. The mean of the keff distribution obtained was compared with the major nuclear data libraries JEFF-3.1.1, ENDF/B-VII.1 and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach. Finally, an accept/reject criterion was investigated based on χ2 values obtained using the 239Pu Jezebel criticality benchmark. It was observed that nuclear data uncertainties in keff were reduced considerably, from 748 to 443 pcm, by applying a more rigid criterion for accepting random files.

  7. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1, E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1, E2).

  8. A Markov chain Monte Carlo technique for identification of combinations of allelic variants underlying complex diseases in humans.

    PubMed

    Favorov, Alexander V; Andreewski, Timophey V; Sudomoina, Marina A; Favorova, Olga O; Parmigiani, Giovanni; Ochs, Michael F

    2005-12-01

    In recent years, the number of studies focusing on the genetic basis of common disorders with a complex mode of inheritance, in which multiple genes of small effect are involved, has been steadily increasing. An improved methodology to identify the cumulative contribution of several polymorphous genes would accelerate our understanding of their importance in disease susceptibility and our ability to develop new treatments. A critical bottleneck is the inability of standard statistical approaches, developed for relatively modest predictor sets, to achieve power in the face of the enormous growth in our knowledge of genomics. The inability is due to the combinatorial complexity arising in searches for multiple interacting genes. Similar "curse of dimensionality" problems have arisen in other fields, and Bayesian statistical approaches coupled to Markov chain Monte Carlo (MCMC) techniques have led to significant improvements in understanding. We present here an algorithm, APSampler, for the exploration of potential combinations of allelic variations positively or negatively associated with a disease or with a phenotype. The algorithm relies on the rank comparison of phenotype for individuals with and without specific patterns (i.e., combinations of allelic variants) isolated in genetic backgrounds matched for the remaining significant patterns. It constructs a Markov chain to sample only potentially significant variants, minimizing the potential of large data sets to overwhelm the search. We tested APSampler on a simulated data set and on a case-control MS (multiple sclerosis) study for ethnic Russians. For the simulated data, the algorithm identified all the phenotype-associated allele combinations coded into the data and, for the MS data, it replicated the previously known findings.

  9. Combining the Monotonic Lagrangian Grid with Direct Simulation Monte Carlo; a New Approach for Low-Density Flows.

    NASA Astrophysics Data System (ADS)

    Cybyk, Bohdan Zynowij

    Accurate and affordable numerical methods play a vital role in the design and development of practical hypersonic vehicles. Existing analysis tools that provide aerothermodynamic properties for the high-altitude portion of a hypersonic vehicle's flight path are capable but expensive. This dissertation describes the development, validation, and application of a new tool aimed at expanding the practical analysis envelope of high Knudsen number flows. The present work is the first to combine the Direct Simulation Monte Carlo (DSMC) methodology with a Lagrangian data structure, the Monotonic Lagrangian Grid (MLG). The result is a numerical approach for transition regime flows that automatically adjusts grid resolution to time-varying densities in the flow. The DSMC method is a direct particle simulation technique widely used in predicting flows of dilute gases. The MLG algorithm combines an efficient tracking and sorting scheme with a monotonic data structure for indexing and storing physical attributes of the moving particles. Monotonicity features of the MLG ensure that particles close in physical space are stored in adjacent array locations so that particle interactions may be restricted to a 'template' of near neighbors. The MLG templates provide a time-varying grid network that automatically adapts to local number densities within the flowfield. The majority of MLG and DSMC logic is inherently parallel, enabling extremely efficient application on parallel computer architectures. Computational advantages and disadvantages of this new MLG-based implementation are demonstrated by a series of test problems. The effects of MLG sorting parameters on computational performance are investigated, and results from sensitivity studies on grid size and template size are presented. The combination of DSMC and MLG results in a new tool with several significant benefits, including an automatically adapting grid, improved prediction accuracy for a given grid size, and decreased

  10. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.

    1980-05-01

    Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.

  11. Parallel tempering Monte Carlo combined with clustering Euclidean metric analysis to study the thermodynamic stability of Lennard-Jones nanoclusters

    NASA Astrophysics Data System (ADS)

    Cezar, Henrique M.; Rondina, Gustavo G.; Da Silva, Juarez L. F.

    2017-02-01

    A basic requirement for an atom-level understanding of nanoclusters is the knowledge of their atomic structure. This understanding is incomplete if it does not take into account temperature effects, which play a crucial role in phase transitions and changes in the overall stability of the particles. Finite size particles present intricate potential energy surfaces, and rigorous descriptions of temperature effects are best achieved by exploiting extended ensemble algorithms, such as the Parallel Tempering Monte Carlo (PTMC). In this study, we employed the PTMC algorithm, implemented from scratch, to sample configurations of LJn (n = 38, 55, 98, 147) particles at a wide range of temperatures. The heat capacities and phase transitions obtained with our PTMC implementation are consistent with all the expected features for the LJ nanoclusters, e.g., solid to solid and solid to liquid. To identify the known phase transitions and assess the prevalence of various structural motifs available at different temperatures, we propose a combination of a Leader-like clustering algorithm based on a Euclidean metric with the PTMC sampling. This combined approach is further compared with the more computationally demanding bond order analysis, typically employed for this kind of problem. We show that the clustering technique yields the same results in most cases, with the advantage that it requires no previous knowledge of the parameters defining each geometry. Being simple to implement, we believe that this straightforward clustering approach is a valuable data analysis tool that can provide insights into the physics of finite size particles with a few to thousands of atoms at a relatively low cost.
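The replica-exchange swap test at the heart of PTMC is compact; the following is the generic textbook form of the criterion, not the authors' implementation:

```python
import math
import random

def pt_swap_accepted(energy_i, energy_j, beta_i, beta_j, rng):
    # Parallel tempering swap criterion: exchanging the configurations
    # of replicas i and j (inverse temperatures beta_i, beta_j) is
    # accepted with probability
    #   min(1, exp((beta_i - beta_j) * (energy_i - energy_j))),
    # which preserves detailed balance in the extended ensemble.
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0.0 or rng.random() < math.exp(delta)

rng = random.Random(4)
# When the hot replica (small beta) holds the lower-energy configuration,
# delta >= 0 and the swap is always accepted: the cold replica inherits
# the better configuration.
always = pt_swap_accepted(energy_i=-5.0, energy_j=-1.0, beta_i=0.2, beta_j=2.0, rng=rng)
```

Swaps between adjacent temperatures let configurations trapped in one basin at low temperature escape via the high-temperature replicas, which is what makes PTMC effective on the rugged LJ landscapes described above.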

  12. Denoising techniques combined to Monte Carlo simulations for the prediction of high-resolution portal images in radiotherapy treatment verification

    NASA Astrophysics Data System (ADS)

    Lazaro, D.; Barat, E.; Le Loirec, C.; Dautremer, T.; Montagu, T.; Guérin, L.; Batalla, A.

    2013-05-01

    This work investigates the possibility of combining Monte Carlo (MC) simulations with a denoising algorithm for the accurate prediction of images acquired using amorphous silicon (a-Si) electronic portal imaging devices (EPIDs). An accurate MC model of the Siemens OptiVue1000 EPID was first developed using the penelope code, integrating a non-uniform backscatter modelling. Two already existing denoising algorithms were then applied on simulated portal images, namely the iterative reduction of noise (IRON) method and the locally adaptive Savitzky-Golay (LASG) method. A third denoising method, based on a nonparametric Bayesian framework and called DPGLM (for Dirichlet process generalized linear model) was also developed. Performances of the IRON, LASG and DPGLM methods, in terms of smoothing capabilities and computation time, were compared for portal images computed for different values of the RMS pixel noise (up to 10%) in three different configurations: a heterogeneous phantom irradiated by a non-conformal 15 × 15 cm2 field, a conformal beam from a pelvis treatment plan, and an IMRT beam from a prostate treatment plan. For all configurations, DPGLM outperforms both IRON and LASG by providing better smoothing performances and demonstrating a better robustness with respect to noise. Additionally, no parameter tuning is required by DPGLM, which makes the denoising step very generic and easy to handle for any portal image. Concerning the computation time, the denoising of 1024 × 1024 images takes about 1 h 30 min, 2 h and 5 min using DPGLM, IRON, and LASG, respectively. This paper shows the feasibility of predicting, within a few hours and at the same resolution as real images, accurate portal images by combining MC simulations with the DPGLM denoising algorithm.
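As a point of reference for the Savitzky-Golay smoothing mentioned above, here is a fixed five-point quadratic filter in plain Python; this is the classical non-adaptive form, a stand-in for the locally adaptive LASG algorithm, not the LASG algorithm itself:

```python
def savgol_5pt(signal):
    # Classic 5-point quadratic Savitzky-Golay smoother with the fixed
    # convolution coefficients (-3, 12, 17, 12, -3)/35. The two edge
    # samples at each end are copied through unchanged.
    coeff = (-3.0, 12.0, 17.0, 12.0, -3.0)
    out = list(signal)
    for i in range(2, len(signal) - 2):
        out[i] = sum(c * signal[i - 2 + k] for k, c in enumerate(coeff)) / 35.0
    return out

# Savitzky-Golay reproduces polynomials up to the fit degree exactly,
# so a noiseless quadratic passes through the filter unchanged.
quadratic = [x * x for x in range(10)]
smoothed = savgol_5pt(quadratic)
```

The "locally adaptive" variant of the paper adjusts the window to the local image content, avoiding the over-smoothing of sharp field edges that a fixed window like this one would cause.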

  13. Electronic structure quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bajdich, Michal; Mitas, Lubos

    2009-04-01

    Quantum Monte Carlo (QMC) is an advanced simulation methodology for studies of many-body quantum systems. The QMC approaches combine analytical insights with stochastic computational techniques for efficient solution of several classes of important many-body problems such as the stationary Schrödinger equation. QMC methods of various flavors have been applied to a great variety of systems spanning continuous and lattice quantum models, molecular and condensed systems, BEC-BCS ultracold condensates, nuclei, etc. In this review, we focus on the electronic structure QMC, i.e., methods relevant for systems described by the electron-ion Hamiltonians. Some of the key QMC achievements include direct treatment of electron correlation, accuracy in predicting energy differences and favorable scaling in the system size. Calculations of atoms, molecules, clusters and solids have demonstrated QMC applicability to real systems with hundreds of electrons while providing 90-95% of the correlation energy and energy differences typically within a few percent of experiments. Advances in accuracy beyond these limits are hampered by the so-called fixed-node approximation which is used to circumvent the notorious fermion sign problem. Many-body nodes of fermion states and their properties have therefore become one of the important topics for further progress in predictive power and efficiency of QMC calculations. Some of our recent results on the wave function nodes and related nodal domain topologies will be briefly reviewed. This includes analysis of few-electron systems and descriptions of exact and approximate nodes using transformations and projections of the highly-dimensional nodal hypersurfaces into the 3D space. Studies of fermion nodes offer new insights into topological properties of eigenstates such as explicit demonstrations that generic fermionic ground states exhibit the minimal number of two nodal domains. Recently proposed trial wave functions based on Pfaffians with

  14. A Combined Density Functional Theory and Monte Carlo Study of Manganites for Magnetic Refrigeration

    NASA Astrophysics Data System (ADS)

    Korotana, Romi; Mallia, Giuseppe; Gercsi, Zsolt; Harrison, Nicholas

    2015-03-01

    Perovskite oxides are considered to be strong candidates for applications in magnetic refrigeration technology, due to their remarkable properties, in addition to low processing costs. Manganites with the general formula R1-xAxMnO3, particularly for A=Ca and 0.2 < x < 0.5, undergo a field-driven transition from a paramagnetic to ferromagnetic state, which is accompanied by changes in the lattice and electronic structure. Therefore, one may anticipate a large entropy change across the phase transition due to its first-order nature. The present work aims to achieve an understanding of the relevant structural, magnetic, and electronic entropy contributions in the doped compound La0.75Ca0.25MnO3. A combination of thermodynamics and first principles theory is applied to determine individual contributions to the total entropy change of the system. Hybrid-exchange density functional (B3LYP) calculations for La0.75Ca0.25MnO3 predict an anti-Jahn-Teller polaron in the localised hole state, which is influenced by long-range cooperative Jahn-Teller distortions. Through the analysis of individual entropy contributions, it is identified that the electronic and vibrational terms have a deleterious effect on the total entropy change.

  15. Combined modulated electron and photon beams planned by a Monte-Carlo-based optimization procedure for accelerated partial breast irradiation.

    PubMed

    Palma, Bianey Atriana; Sánchez, Ana Ureba; Salguero, Francisco Javier; Arráns, Rafael; Sánchez, Carlos Míguez; Zurita, Amadeo Walls; Hermida, María Isabel Romero; Leal, Antonio

    2012-03-07

    The purpose of this study was to present a Monte-Carlo (MC)-based optimization procedure to improve conventional treatment plans for accelerated partial breast irradiation (APBI) using modulated electron beams alone or combined with modulated photon beams, to be delivered by a single collimation device, i.e. a photon multi-leaf collimator (xMLC) already installed in a standard hospital. Five left-sided breast cases were retrospectively planned using modulated photon and/or electron beams with an in-house treatment planning system (TPS), called CARMEN, and based on MC simulations. For comparison, the same cases were also planned by a PINNACLE TPS using conventional inverse intensity modulated radiation therapy (IMRT). Normal tissue complication probability for pericarditis, pneumonitis and breast fibrosis was calculated. CARMEN plans showed similar acceptable planning target volume (PTV) coverage as conventional IMRT plans with 90% of PTV volume covered by the prescribed dose (D(p)). Heart and ipsilateral lung receiving 5% D(p) and 15% D(p), respectively, was 3.2-3.6 times lower for CARMEN plans. Ipsilateral breast receiving 50% D(p) and 100% D(p) was an average of 1.4-1.7 times lower for CARMEN plans. Skin and whole body low-dose volume was also reduced. Modulated photon and/or electron beams planned by the CARMEN TPS improve APBI treatments by increasing normal tissue sparing maintaining the same PTV coverage achieved by other techniques. The use of the xMLC, already installed in the linac, to collimate photon and electron beams favors the clinical implementation of APBI with the highest efficiency.

  16. Challenges of Monte Carlo Transport

    SciTech Connect

    Long, Alex Roberts

    2016-06-10

    These are slides from a presentation for Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. Open SHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.

  17. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    EPA Science Inventory

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...
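    The basic BMC recipe, drawing parameters from a subjective prior and weighting each draw by the data likelihood, can be sketched on a toy 1-D problem. The prior, likelihood, and target value below are hypothetical, not the Lagrangian photochemical model:

```python
import math
import random

def bayesian_monte_carlo(prior_sampler, log_likelihood, n_samples, seed=0):
    """Weight draws from the subjective prior by the data likelihood and
    return the posterior mean (the basic BMC recipe)."""
    rng = random.Random(seed)
    samples = [prior_sampler(rng) for _ in range(n_samples)]
    weights = [math.exp(log_likelihood(s)) for s in samples]
    total = sum(weights)
    return sum(s * w for s, w in zip(samples, weights)) / total

post_mean = bayesian_monte_carlo(
    prior_sampler=lambda rng: rng.uniform(0.0, 1.0),         # "prior" uncertainty
    log_likelihood=lambda x: -0.5 * ((x - 0.5) / 0.1) ** 2,  # data favor 0.5
    n_samples=50_000,
)
```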

  18. Combined Monte Carlo and quantum mechanics study of the hydration of the guanine-cytosine base pair.

    PubMed

    Coutinho, Kaline; Ludwig, Valdemir; Canuto, Sylvio

    2004-06-01

    We present a computer simulation study of the hydration of the guanine-cytosine (GC) hydrogen-bonded complex. Using first-principles density-functional theory with gradient-corrected exchange-correlation and Monte Carlo simulation, we include thermal contributions, structural effects, solvent polarization, and the water-water and water-GC hydrogen-bond interactions to show that the GC interaction in an aqueous environment is weakened to about 70% of the value obtained for an isolated complex. We also analyze in detail the preferred hydration sites of the GC pair and show that, on average, it makes around five hydrogen bonds with water.

  19. Combining ab initio computations, neural networks, and diffusion Monte Carlo: An efficient method to treat weakly bound molecules

    NASA Astrophysics Data System (ADS)

    Brown, David F. R.; Gibbs, Mark N.; Clary, David C.

    1996-11-01

    We describe a new method to calculate the vibrational ground state properties of weakly bound molecular systems and apply it to (HF)2 and HF-HCl. A Bayesian Inference neural network is used to fit an analytic function to a set of ab initio data points, which may then be employed by the quantum diffusion Monte Carlo method to produce ground state vibrational wave functions and properties. The method is general and relatively simple to implement and will be attractive for calculations on systems for which no analytic potential energy surface exists.

  20. Multiscale Monte Carlo equilibration: Pure Yang-Mills theory

    NASA Astrophysics Data System (ADS)

    Endres, Michael G.; Brower, Richard C.; Detmold, William; Orginos, Kostas; Pochinsky, Andrew V.

    2015-12-01

    We present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.

  1. Mesoscale simulation of polymer reaction equilibrium: Combining dissipative particle dynamics with reaction ensemble Monte Carlo. II. Supramolecular diblock copolymers

    NASA Astrophysics Data System (ADS)

    Lísal, Martin; Brennan, John K.; Smith, William R.

    2009-03-01

    We present an alternative formulation of the reaction ensemble dissipative particle dynamics (RxDPD) method [M. Lísal, J. K. Brennan, and W. R. Smith, J. Chem. Phys. 125, 16490 (2006)], a mesoscale simulation technique for studying polymer systems in reaction equilibrium. The RxDPD method combines elements of dissipative particle dynamics (DPD) and reaction ensemble Monte Carlo (RxMC), and is primarily targeted at predicting the system composition, thermodynamic properties, and phase behavior of reaction equilibrium polymer systems. The alternative formulation is demonstrated by considering a supramolecular diblock copolymer (SDC) melt in which two homopolymers, An and Bm, can reversibly bond at terminal binding sites to form a diblock copolymer, AnBm. We consider the effects of the terminal binding sites and of the chemical incompatibility between A- and B-segments on the phase behavior; both effects are found to strongly influence the resulting phase behavior. Due to the reversible nature of the binding, the SDC melt can be treated as the reaction equilibrium system An+Bm⇌AnBm. To simulate the An+Bm⇌AnBm melt, the system contains, in addition to full An, Bm, and AnBm polymers, two fractional polymers: one of either fAn or fBm, and one fAnBm, each having fractional particles at the ends of the polymer chains. These fractional particles are coupled to the system via a coupling parameter. The time evolution of the system is governed by the DPD equations of motion, accompanied by random changes in the coupling parameter. Random changes in the coupling parameter mimic forward and reverse reaction steps, as in the RxMC approach, and are accepted with a probability derived from the expanded ensemble grand canonical partition function. Unlike the original RxDPD method, which considers coupling of entire fractional polymers to the system, the expanded ensemble framework allows a stepwise coupling, thus

  2. Monte Carlo Shower Counter Studies

    NASA Technical Reports Server (NTRS)

    Snyder, H. David

    1991-01-01

    Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs are provided along with example data plots.

  3. Combining the diffusion approximation and Monte Carlo modeling in analysis of diffuse reflectance spectra from human skin

    NASA Astrophysics Data System (ADS)

    Naglič, Peter; Vidovič, Luka; Milanič, Matija; Randeberg, Lise L.; Majaron, Boris

    2014-03-01

    Light propagation in highly scattering biological tissues is often treated in the so-called diffusion approximation (DA). Although the analytical solutions derived within the DA are known to be inaccurate near tissue boundaries and absorbing layers, their use in quantitative analysis of diffuse reflectance spectra (DRS) is quite common. We analyze the artifacts in assessed tissue properties which occur in fitting of numerically simulated DRS with the DA solutions for a three-layer skin model. In addition, we introduce an original procedure which significantly improves the accuracy of such an inverse analysis of DRS. This procedure involves a single comparison run of a Monte Carlo (MC) numerical model, yet avoids the need to implement and run an inverse MC. This approach is tested also in analysis of experimental DRS from human skin.

  4. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Refined modeling in the analysis of nuclear reactors can exceed the memory available to a single processor core. One method to solve this problem is 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  5. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments Database

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  6. Extra Chance Generalized Hybrid Monte Carlo

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.; Sanz-Serna, J. M.

    2015-01-01

    We study a method, Extra Chance Generalized Hybrid Monte Carlo, to avoid rejections in the Hybrid Monte Carlo method and related algorithms. In the spirit of delayed rejection, whenever a rejection would occur, extra work is done to find a fresh proposal that, hopefully, may be accepted. We present experiments that clearly indicate that the additional work per sample carried out in the extra chance approach pays off in terms of the quality of the samples generated.

  7. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Lazopoulos, Achilleas

    2006-07-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
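    The idea of a stochastic error estimate can be illustrated with randomized QMC: random shifts of one low-discrepancy pointset yield an ensemble of estimates whose spread serves as an empirical error bar. The shift construction below is a standard stand-in, not the discrepancy-conditioned ensemble the authors propose:

```python
import math
import random
import statistics

def van_der_corput(n, base=2):
    """First n points of the base-2 van der Corput low-discrepancy sequence."""
    points = []
    for k in range(n):
        x, denom = 0.0, 1.0
        while k:
            k, rem = divmod(k, base)
            denom *= base
            x += rem / denom
        points.append(x)
    return points

def shifted_qmc_estimate(f, points, shift):
    # A random shift modulo 1 preserves low discrepancy while making the
    # estimator a random variable, so its spread can be measured.
    return sum(f((p + shift) % 1.0) for p in points) / len(points)

rng = random.Random(0)
points = van_der_corput(1024)
estimates = [shifted_qmc_estimate(math.sin, points, rng.random())
             for _ in range(20)]
mean = statistics.mean(estimates)
stderr = statistics.stdev(estimates) / math.sqrt(len(estimates))
# Target: the integral of sin(x) over [0, 1] is 1 - cos(1).
```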

  8. Monte Carlo docking with ubiquitin.

    PubMed Central

    Cummings, M. D.; Hart, T. N.; Read, R. J.

    1995-01-01

    The development of general strategies for the performance of docking simulations is a prerequisite to the exploitation of this powerful computational method. Comprehensive strategies can only be derived from docking experiences with a diverse array of biological systems, and we have chosen the ubiquitin/diubiquitin system as a learning tool for this process. Using our multiple-start Monte Carlo docking method, we have reconstructed the known structure of diubiquitin from its two halves as well as from two copies of the uncomplexed monomer. For both of these cases, our relatively simple potential function ranked the correct solution among the lowest energy configurations. In the experiments involving the ubiquitin monomer, various structural modifications were made to compensate for the lack of flexibility and for the lack of a covalent bond in the modeled interaction. Potentially flexible regions could be identified using available biochemical and structural information. A systematic conformational search ruled out the possibility that the required covalent bond could be formed in one family of low-energy configurations, which was distant from the observed dimer configuration. A variety of analyses was performed on the low-energy dockings obtained in the experiment involving structurally modified ubiquitin. Characterization of the size and chemical nature of the interface surfaces was a powerful adjunct to our potential function, enabling us to distinguish more accurately between correct and incorrect dockings. Calculations with the structure of tetraubiquitin indicated that the dimer configuration in this molecule is much less favorable than that observed in the diubiquitin structure, for a simple monomer-monomer pair.
Based on the analysis of our results, we draw conclusions regarding some of the approximations involved in our simulations, the use of diverse chemical and biochemical information in experimental design and the analysis of docking results, as well as

  9. Multilevel Monte Carlo simulation of Coulomb collisions

    SciTech Connect

    Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.

    2014-10-01

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^−2) or O(ε^−2(ln ε)^2), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^−3) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^−5. We discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
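    The telescoping construction behind multilevel Monte Carlo can be sketched on a toy problem. The Ornstein-Uhlenbeck process, Euler-Maruyama discretization, and fixed per-level sample counts below are illustrative simplifications (real MLMC allocates samples per level optimally), not the Coulomb-collision solver:

```python
import math
import random

def coupled_ou_paths(rng, T, level, x0=1.0, sigma=0.5):
    """One fine Euler-Maruyama path (2**level steps) for
    dX = -X dt + sigma dW, plus its coarse partner (half the steps)
    driven by the same Brownian increments."""
    n_fine = 2 ** level
    h = T / n_fine
    xf = xc = x0
    pending = []
    for step in range(n_fine):
        dw = rng.gauss(0.0, math.sqrt(h))
        xf += -xf * h + sigma * dw          # fine grid
        pending.append(dw)
        if step % 2 == 1:                   # coarse step sums two increments
            xc += -xc * (2.0 * h) + sigma * sum(pending)
            pending = []
    return xf, xc

def mlmc_mean(T=1.0, max_level=6, samples_per_level=20_000, seed=0):
    """Telescoping MLMC estimate of E[X_T]; the exact answer is exp(-T)."""
    rng = random.Random(seed)
    # Level 0: plain single-step Euler-Maruyama (x0 = 1, sigma = 0.5).
    total = 0.0
    for _ in range(samples_per_level):
        total += 1.0 * (1.0 - T) + 0.5 * rng.gauss(0.0, math.sqrt(T))
    mlmc_estimate = total / samples_per_level
    # Correction levels: cheap-to-sample differences between resolutions.
    for level in range(1, max_level + 1):
        diff = 0.0
        for _ in range(samples_per_level):
            xf, xc = coupled_ou_paths(rng, T, level)
            diff += xf - xc
        mlmc_estimate += diff / samples_per_level
    return mlmc_estimate

mlmc_estimate = mlmc_mean()
```

    Because the fine/coarse differences shrink with the level, most samples can be spent on coarse grids; this is the source of the cost reduction quoted in the abstract.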

  10. CosmoMC: Cosmological MonteCarlo

    NASA Astrophysics Data System (ADS)

    Lewis, Antony; Bridle, Sarah

    2011-06-01

    We present a fast Markov chain Monte Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent CMB experiments and provide parameter constraints, including sigma_8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae Ia and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_nu < 0.3 eV), the equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendices we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints, the effect of the prior, the assessment of goodness of fit and consistency, and the use of analytic marginalization over normalization parameters.
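    The importance-sampling trick of re-using an existing chain under new data amounts to reweighting samples by the likelihood ratio. A minimal 1-D Gaussian sketch, not the CosmoMC implementation:

```python
import math
import random

def importance_reweight(samples, old_loglike, new_loglike):
    """Re-use existing posterior samples under new data by reweighting
    with the likelihood ratio (importance sampling)."""
    log_w = [new_loglike(s) - old_loglike(s) for s in samples]
    shift = max(log_w)                       # stabilize the exponentials
    weights = [math.exp(lw - shift) for lw in log_w]
    total = sum(weights)
    return sum(s * w for s, w in zip(samples, weights)) / total

rng = random.Random(0)
samples = [rng.gauss(0.0, 1.0) for _ in range(100_000)]  # "old" chain: N(0, 1)

def old_loglike(x):
    return -0.5 * x * x                      # old posterior N(0, 1)

def new_loglike(x):
    return -0.5 * (x - 0.5) ** 2             # new data shift the mean to 0.5

mean_new = importance_reweight(samples, old_loglike, new_loglike)
```

    The reweighted mean recovers the new posterior without re-running the chain, at the cost of a reduced effective sample size when the two posteriors differ strongly.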

  11. Multilevel Monte Carlo simulation of Coulomb collisions

    DOE PAGES

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; ...

    2014-05-29

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^−2) or O(ε^−2(ln ε)^2), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^−3) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^−5. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.

  12. Multilevel Monte Carlo simulation of Coulomb collisions

    SciTech Connect

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.

    2014-05-29

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^−2) or O(ε^−2(ln ε)^2), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^−3) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^−5. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.

  13. Development of Monte Carlo Capability for Orion Parachute Simulations

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    Parachute test programs employ Monte Carlo simulation techniques to plan testing and make critical decisions related to parachute loads, rate-of-descent, or other parameters. This paper describes the development and use of a MATLAB-based Monte Carlo tool for three parachute drop test simulations currently used by NASA. The Decelerator System Simulation (DSS) is a legacy 6 Degree-of-Freedom (DOF) simulation used to predict parachute loads and descent trajectories. The Decelerator System Simulation Application (DSSA) is a 6-DOF simulation that is well suited for modeling aircraft extraction and descent of pallet-like test vehicles. The Drop Test Vehicle Simulation (DTVSim) is a 2-DOF trajectory simulation that is convenient for quick turn-around analysis tasks. These three tools have significantly different software architectures and do not share common input files or output data structures. Separate Monte Carlo tools were initially developed for each simulation. A recently-developed simulation output structure enables the use of the more sophisticated DSSA Monte Carlo tool with any of the core-simulations. The task of configuring the inputs for the nominal simulation is left to the existing tools. Once the nominal simulation is configured, the Monte Carlo tool perturbs the input set according to dispersion rules created by the analyst. These rules define the statistical distribution and parameters to be applied to each simulation input. Individual dispersed parameters are combined to create a dispersed set of simulation inputs. The Monte Carlo tool repeatedly executes the core-simulation with the dispersed inputs and stores the results for analysis. The analyst may define conditions on one or more output parameters at which to collect data slices. The tool provides a versatile interface for reviewing output of large Monte Carlo data sets while preserving the capability for detailed examination of individual dispersed trajectories. 
The Monte Carlo tool described in
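    The perturb-and-rerun loop the paper describes can be sketched generically. The drag-balance "core simulation" and the dispersion rules below are hypothetical stand-ins, not DSS, DSSA, or DTVSim:

```python
import math
import random

def run_core_sim(inputs):
    """Stand-in core simulation (hypothetical): terminal rate of descent
    from a simple drag balance."""
    return math.sqrt(2.0 * inputs["mass"] * 9.81 /
                     (inputs["rho"] * inputs["cd_area"]))

def monte_carlo_dispersed(nominal, rules, n_runs, seed=0):
    """Perturb each input per its dispersion rule, run the core
    simulation with the dispersed input set, and store the results."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        dispersed = {key: rules[key](rng, value) if key in rules else value
                     for key, value in nominal.items()}
        results.append(run_core_sim(dispersed))
    return results

nominal = {"mass": 100.0, "rho": 1.225, "cd_area": 50.0}
rules = {
    "mass": lambda rng, v: rng.gauss(v, 2.0),               # normal dispersion
    "cd_area": lambda rng, v: v * rng.uniform(0.95, 1.05),  # uniform dispersion
}
descent_rates = monte_carlo_dispersed(nominal, rules, 2000)
```

    Keeping the dispersion rules separate from the core simulation is what lets one Monte Carlo driver wrap several simulations, as the paper's common output structure does.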

  14. Risk Assessment and Prediction of Flyrock Distance by Combined Multiple Regression Analysis and Monte Carlo Simulation of Quarry Blasting

    NASA Astrophysics Data System (ADS)

    Armaghani, Danial Jahed; Mahdiyar, Amir; Hasanipanah, Mahdi; Faradonbeh, Roohollah Shirani; Khandelwal, Manoj; Amnieh, Hassan Bakhshandeh

    2016-09-01

    Flyrock is considered one of the main causes of human injury, fatalities, and structural damage among all undesirable environmental impacts of blasting. Therefore, proper prediction/simulation of flyrock is essential, especially in order to determine the blast safety area. If proper control measures are taken, then the flyrock distance can be controlled and, in return, the risk of damage can be reduced or eliminated. The first objective of this study was to develop a predictive model for flyrock estimation based on multiple regression (MR) analyses; using the developed MR model, the flyrock phenomenon was then simulated by the Monte Carlo (MC) approach. In order to achieve the objectives of this study, 62 blasting operations were investigated in the Ulu Tiram quarry, Malaysia, and some controllable and uncontrollable factors were carefully recorded/calculated. The obtained results of MC modeling indicated that this approach is capable of simulating flyrock ranges with a good level of accuracy. The mean of the flyrock simulated by MC was 236.3 m, while the measured value was 238.6 m. Furthermore, a sensitivity analysis was conducted to investigate the effects of model inputs on the output of the system. The analysis demonstrated that powder factor is the most influential parameter on flyrock among all model inputs. Note that the proposed MR and MC models should be utilized only in the studied area, and their direct use in other conditions is not recommended.
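    The MR + MC idea, fitting a regression surrogate and then propagating input distributions through it, can be sketched as follows. The coefficients and input distributions are illustrative inventions, NOT the fitted model from the Ulu Tiram data:

```python
import random

def flyrock_mr(powder_factor, burden, stemming):
    """Hypothetical linear MR surrogate for flyrock distance (m);
    the coefficients are illustrative only."""
    return 150.0 + 120.0 * powder_factor - 20.0 * burden - 10.0 * stemming

def simulate_flyrock(n_trials, seed=0):
    """Sample the blast-design inputs from assumed distributions and
    propagate them through the regression model."""
    rng = random.Random(seed)
    distances = []
    for _ in range(n_trials):
        powder_factor = rng.uniform(0.4, 1.0)   # assumed ranges, for
        burden = rng.gauss(3.0, 0.3)            # illustration only
        stemming = rng.gauss(2.5, 0.25)
        distances.append(flyrock_mr(powder_factor, burden, stemming))
    return distances

distances = simulate_flyrock(20_000)
```

    The spread of `distances` is what would inform a blast safety radius; a sensitivity analysis would vary one input distribution at a time.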

  15. Combined Monte Carlo and molecular dynamics simulation of hydrated 18:0 sphingomyelin-cholesterol lipid bilayers

    NASA Astrophysics Data System (ADS)

    Khelashvili, George A.; Scott, H. L.

    2004-05-01

    We have carried out atomic level molecular dynamics and Monte Carlo simulations of hydrated 18:0 sphingomyelin (SM)-cholesterol (CHOL) bilayers at temperatures of 20 and 50 °C. The simulated systems each contained 266 SM, 122 CHOL, and 11861 water molecules. Each simulation was run for 10 ns under semi-isotropic pressure boundary conditions. The particle-mesh Ewald method was used for long-range electrostatic interactions. Properties of the systems were calculated over the final 3 ns. We compare the properties of 20 and 50 °C bilayer systems with each other, with experimental data, and with experimental and simulated properties of pure SM bilayers and dipalmitoyl phosphatidyl choline (DPPC)-CHOL bilayers. The simulations reveal an overall similarity of both systems, despite the 30 °C temperature difference which brackets the pure SM main phase transition. The area per molecule, lipid chain order parameter profiles, atom distributions, and electron density profiles are all very similar for the two simulated systems. Consistent with simulations from our lab and others, we find strong intramolecular hydrogen bonding in SM molecules between the phosphate ester oxygen and the hydroxyl hydrogen atoms. We also find that cholesterol hydroxyl groups tend to form hydrogen bonds primarily with SM carbonyl, methyl, and amide moieties and to a lesser extent methyl and hydroxyl oxygens.

  16. Self-learning Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang

    2017-01-01

    Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.
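    The SLMC pipeline, trial simulation, learning an effective model, then fast global updates corrected by an exact acceptance test, can be sketched on a continuous 1-D toy target rather than the spin model of the paper. Everything below (the quartic energy, the Gaussian effective model) is an illustrative assumption:

```python
import math
import random

def true_energy(x):
    # Target p(x) ∝ exp(-E(x)): Gaussian plus a quartic term the
    # learned model will not capture exactly.
    return 0.5 * x * x + 0.05 * x ** 4

def self_learning_mc(n_train=2000, n_samples=50_000, seed=0):
    rng = random.Random(seed)
    # Stage 1 (trial simulation): slow local Metropolis updates.
    x, train = 0.0, []
    for _ in range(n_train):
        xp = x + rng.uniform(-1.0, 1.0)
        if rng.random() < math.exp(min(0.0, true_energy(x) - true_energy(xp))):
            x = xp
        train.append(x)
    # Stage 2 (learning): fit an effective Gaussian model to the trial data.
    sigma = math.sqrt(sum(t * t for t in train) / len(train))
    # Stage 3 (actual simulation): global proposals drawn from the learned
    # model, corrected by the exact Metropolis-Hastings ratio so the
    # simulation stays unbiased even if the learned model is imperfect.
    def log_q(y):
        return -0.5 * (y / sigma) ** 2
    samples = []
    for _ in range(n_samples):
        xp = rng.gauss(0.0, sigma)
        log_ratio = (true_energy(x) - true_energy(xp)) + (log_q(x) - log_q(xp))
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = xp
        samples.append(x)
    return samples

samples = self_learning_mc()
```

    The speedup in SLMC comes from stage 3: the learned model proposes large, decorrelated moves that a local update scheme cannot.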

  17. Adiabatic optimization versus diffusion Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Jarret, Michael; Jordan, Stephen P.; Lackey, Brad

    2016-10-01

    Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1 and L2 normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.

  18. Modeling the reflectance of the lunar regolith by a new method combining Monte Carlo Ray tracing and Hapke's model with application to Chang'E-1 IIM data.

    PubMed

    Wong, Un-Hong; Wu, Yunzhao; Wong, Hon-Cheng; Liang, Yanyan; Tang, Zesheng

    2014-01-01

    In this paper, we model the reflectance of the lunar regolith by a new method combining Monte Carlo ray tracing and Hapke's model. Existing modeling methods exploit either a radiative transfer model or a geometric optical model. However, the measured data from an Interference Imaging spectrometer (IIM) on an orbiter are affected not only by the composition of minerals but also by environmental factors. These factors cannot be well addressed by a single model alone. Our method implements Monte Carlo ray tracing to simulate large-scale effects, such as reflection from the topography of the lunar soil, and Hapke's model to calculate the reflection intensity arising from internal scattering by particles of the lunar soil. Therefore, both the large-scale and microscale effects are considered in our method, providing a more accurate modeling of the reflectance of the lunar regolith. Simulation results using the Lunar Soil Characterization Consortium (LSCC) data and the Chang'E-1 elevation map show that our method is effective and useful. We have also applied our method to Chang'E-1 IIM data to remove the influence of lunar topography on the reflectance of the lunar soil and to generate more realistic visualizations of the lunar surface.

  19. The Ehrlich-Schwoebel barrier on an oxide surface: a combined Monte-Carlo and in situ scanning tunneling microscopy approach.

    PubMed

    Gianfrancesco, Anthony G; Tselev, Alexander; Baddorf, Arthur P; Kalinin, Sergei V; Vasudevan, Rama K

    2015-11-13

    The controlled growth of epitaxial films of complex oxides requires an atomistic understanding of key parameters determining final film morphology, such as termination dependence on adatom diffusion, and height of the Ehrlich-Schwoebel (ES) barrier. Here, through an in situ scanning tunneling microscopy study of mixed-terminated La5/8Ca3/8MnO3 (LCMO) films, we image adatoms and observe pile-up at island edges. Image analysis allows determination of the population of adatoms at the edge of islands and fractions on A-site and B-site terminations. A simple Monte-Carlo model, simulating the random walk of adatoms on a sinusoidal potential landscape using Boltzmann statistics is used to reproduce the experimental data, and provides an estimate of the ES barrier as ∼0.18 ± 0.04 eV at T = 1023 K, similar to those of metal adatoms on metallic surfaces. These studies highlight the utility of in situ imaging, in combination with basic Monte-Carlo methods, in elucidating the factors which control the final film growth in complex oxides.
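    The core of such a model, a random walk on a sinusoidal potential landscape accepted with Boltzmann statistics, fits in a few lines. This sketch omits the island-edge ES barrier term and uses illustrative numbers; it is not the authors' simulation:

```python
import math
import random

K_B = 8.617e-5  # Boltzmann constant, eV/K

def metropolis_walk(n_steps, n_sites=8, amplitude=0.05, temperature=1023.0,
                    seed=0):
    """Metropolis random walk of a single adatom on a periodic sinusoidal
    potential, accepted with Boltzmann statistics."""
    rng = random.Random(seed)
    energy = [amplitude * math.sin(2.0 * math.pi * i / n_sites)
              for i in range(n_sites)]
    beta = 1.0 / (K_B * temperature)
    site, visits = 0, [0] * n_sites
    for _ in range(n_steps):
        proposal = (site + rng.choice((-1, 1))) % n_sites
        delta = energy[proposal] - energy[site]
        if delta <= 0.0 or rng.random() < math.exp(-beta * delta):
            site = proposal
        visits[site] += 1
    return energy, visits

energy, visits = metropolis_walk(1_000_000)
low = energy.index(min(energy))
high = energy.index(max(energy))
# Stationary occupancies follow exp(-E/kT), so this ratio should approach
# exp((E_max - E_min) / kT) at T = 1023 K.
occupancy_ratio = visits[low] / visits[high]
```

    Adding an extra energy cost for hops that cross a step edge is the natural extension that would let the walk probe an ES barrier.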

  20. Uncertainty Analyses for Localized Tallies in Monte Carlo Eigenvalue Calculations

    SciTech Connect

    Mervin, Brenden T.; Maldonado, G Ivan; Mosher, Scott W; Wagner, John C

    2011-01-01

    It is well known that statistical estimates obtained from Monte Carlo criticality simulations can be adversely affected by cycle-to-cycle correlations in the fission source. In addition there are several other more fundamental issues that may lead to errors in Monte Carlo results. These factors can have a significant impact on the calculated eigenvalue, localized tally means and their associated standard deviations. In fact, modern Monte Carlo computational tools may generate standard deviation estimates that are a factor of five or more lower than the true standard deviation for a particular tally due to the inter-cycle correlations in the fission source. The magnitude of this under-prediction can climb as high as one hundred when combined with an ill-converged fission source or poor sampling techniques. Since Monte Carlo methods are widely used in reactor analysis (as a benchmarking tool) and criticality safety applications, an in-depth understanding of the effects of these issues must be developed in order to support the practical use of Monte Carlo software packages. A rigorous statistical analysis of localized tally results in eigenvalue calculations is presented using the SCALE/KENO-VI and MCNP Monte Carlo codes. The purpose of this analysis is to investigate the under-prediction in the uncertainty and its sensitivity to problem characteristics and calculational parameters, and to provide a comparative study between the two codes with respect to this under-prediction. It is shown herein that adequate source convergence along with proper specification of Monte Carlo parameters can reduce the magnitude of under-prediction in the uncertainty to reasonable levels; below a factor of 2 when inter-cycle correlations in the fission source are not a significant factor. In addition, through the use of a modified sampling procedure, the effects of inter-cycle correlations on both the mean value and standard deviation estimates can be isolated.
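    The under-prediction caused by inter-cycle correlation can be demonstrated with a synthetic tally: an AR(1) series stands in for correlated cycle estimates (illustrative; not an actual KENO or MCNP tally), and comparing a naive standard error against a batch-means estimate exposes the bias:

```python
import math
import random
import statistics

def correlated_tallies(n_cycles, rho=0.8, seed=0):
    """AR(1) series standing in for cycle tallies with inter-cycle
    correlation of strength rho; unit stationary variance."""
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(n_cycles):
        x = rho * x + rng.gauss(0.0, math.sqrt(1.0 - rho * rho))
        series.append(x)
    return series

def naive_stderr(data):
    # Treats cycles as independent: under-predicts when they are correlated.
    return statistics.stdev(data) / math.sqrt(len(data))

def batch_stderr(data, batch_size):
    # Batches longer than the correlation length restore an honest estimate.
    means = [statistics.mean(data[i:i + batch_size])
             for i in range(0, len(data) - batch_size + 1, batch_size)]
    return statistics.stdev(means) / math.sqrt(len(means))

tallies = correlated_tallies(20_000)
underprediction = batch_stderr(tallies, 200) / naive_stderr(tallies)
```

    For rho = 0.8 the true variance of the mean is inflated by (1 + rho)/(1 - rho) = 9, so the naive standard error is roughly a factor of 3 too small, the same order of under-prediction the abstract reports.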

  1. Monte Carlo inversion of seismic data

    NASA Technical Reports Server (NTRS)

    Wiggins, R. A.

    1972-01-01

    The analytic solution to the linear inverse problem provides estimates of the uncertainty of the solution in terms of standard deviations of corrections to a particular solution, resolution of parameter adjustments, and information distribution among the observations. It is shown that Monte Carlo inversion, when properly executed, can provide all the same kinds of information for nonlinear problems. Proper execution requires a relatively uniform sampling of all possible models. The expense of performing Monte Carlo inversion generally requires strategies to improve the probability of finding passing models. Such strategies can lead to a very strong bias in the distribution of models examined unless great care is taken in their application.
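    The accept-passing-models loop with uniform sampling of model space can be sketched on a toy two-parameter (intercept, slope) problem; the forward model and tolerance here are illustrative, not a seismic inversion:

```python
import random

def mc_inversion(forward, observed, bounds, tol, n_trials, seed=0):
    """Uniformly sample model space and keep 'passing' models whose
    predictions match the observations within tol, the relatively
    uniform sampling the abstract calls for."""
    rng = random.Random(seed)
    passing = []
    for _ in range(n_trials):
        model = [rng.uniform(lo, hi) for lo, hi in bounds]
        prediction = forward(model)
        if max(abs(p - o) for p, o in zip(prediction, observed)) < tol:
            passing.append(model)
    return passing

xs = [0.0, 1.0, 2.0, 3.0]
observed = [1.0 + 0.5 * x for x in xs]      # data from the "true" model

def forward(model):
    intercept, slope = model
    return [intercept + slope * x for x in xs]

models = mc_inversion(forward, observed,
                      bounds=[(0.0, 2.0), (0.0, 1.0)],
                      tol=0.2, n_trials=20_000)
```

    The spread of the passing models plays the role of the uncertainty estimates that the analytic linear theory provides; any strategy that samples model space non-uniformly biases that spread, which is the caution the abstract ends on.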

  2. Parallel Markov chain Monte Carlo simulations.

    PubMed

    Ren, Ruichao; Orkoulas, G

    2007-06-07

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with an increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.

  3. Interaction picture density matrix quantum Monte Carlo

    SciTech Connect

    Malone, Fionn D. Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  4. The Rational Hybrid Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Clark, Michael

    2006-12-01

The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithm developments. We conclude with an update of the Berlin wall plot comparing costs of all popular fermion formulations.

  5. Geodesic Monte Carlo on Embedded Manifolds

    PubMed Central

    Byrne, Simon; Girolami, Mark

    2013-01-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
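A minimal single-chain sketch of a geodesic Metropolis sampler on the 2-sphere, assuming a von Mises-Fisher target (the concentration kappa and mean direction mu below are made up for the demo, and this is a plain Metropolis variant rather than the paper's Hamiltonian scheme). Proposals follow exact great-circle geodesics, so every iterate stays on the manifold by construction.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([0.0, 0.0, 1.0])             # mean direction (assumed)
kappa = 4.0                                # concentration (assumed)

def log_density(x):
    """von Mises-Fisher log-density on the 2-sphere, up to a constant."""
    return kappa * (mu @ x)

def geodesic_step(x, step=0.5):
    """Propose a move along a random great circle through x."""
    v = rng.normal(size=3)
    v -= (v @ x) * x                       # project into the tangent plane at x
    v /= np.linalg.norm(v)
    s = step * rng.normal()                # symmetric proposal along the geodesic
    return x * np.cos(s) + v * np.sin(s)   # exact geodesic flow on the sphere

x = np.array([1.0, 0.0, 0.0])
samples = []
for it in range(20_000):
    y = geodesic_step(x)
    if np.log(rng.random()) < log_density(y) - log_density(x):
        x = y
    if it >= 5_000:                        # discard burn-in
        samples.append(x)

mean_z = np.mean([s[2] for s in samples])
print(mean_z)  # should approach the vMF mean resultant length coth(k) - 1/k, about 0.75
```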

  6. Parallel Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Ren, Ruichao; Orkoulas, G.

    2007-06-01

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.

  7. Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.

    1995-12-31

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.

  8. Accelerated GPU based SPECT Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99m Tc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  9. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m)Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  10. An unbiased Hessian representation for Monte Carlo PDFs.

    PubMed

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Latorre, José Ignacio; Rojo, Juan

    We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then, that if applied to Hessian PDF set (MMHT14) which was transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available together with (through LHAPDF6) a Hessian representations of the NNPDF3.0 set, and the MC-H PDF set.

  11. Markov chain Monte Carlo linkage analysis of complex quantitative phenotypes.

    PubMed

    Hinrichs, A; Reich, T

    2001-01-01

    We report a Markov chain Monte Carlo analysis of the five simulated quantitative traits in Genetic Analysis Workshop 12 using the Loki software. Our objectives were to determine the efficacy of the Markov chain Monte Carlo method and to test a new scoring technique. Our initial blind analysis, on replicate 42 (the "best replicate") successfully detected four out of the five disease loci and found no false positives. A power analysis shows that the software could usually detect 4 of the 10 trait/gene combinations at an empirical point-wise p-value of 1.5 x 10(-4).

  12. Scalable Domain Decomposed Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  13. Monte Carlo Simulation of Counting Experiments.

    ERIC Educational Resources Information Center

    Ogden, Philip M.

    A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…

  14. A comparison of Monte Carlo generators

    SciTech Connect

    Golan, Tomasz

    2015-05-15

A comparison of the GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the π+ two-dimensional energy vs. cosine distribution.

  15. Monte Carlo studies of uranium calorimetry

    SciTech Connect

    Brau, J.; Hargis, H.J.; Gabriel, T.A.; Bishop, B.L.

    1985-01-01

    Detailed Monte Carlo calculations of uranium calorimetry are presented which reveal a significant difference in the responses of liquid argon and plastic scintillator in uranium calorimeters. Due to saturation effects, neutrons from the uranium are found to contribute only weakly to the liquid argon signal. Electromagnetic sampling inefficiencies are significant and contribute substantially to compensation in both systems. 17 references.

  16. Structural Reliability and Monte Carlo Simulation.

    ERIC Educational Resources Information Center

    Laumakis, P. J.; Harlow, G.

    2002-01-01

    Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)
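The comparison the abstract describes can be reproduced with a stripped-down load-versus-strength model (the normal parameters below are invented for illustration, not the boom structure in the article): the Monte Carlo failure-probability estimate is checked against the closed-form normal-theory result.

```python
import random, math

rng = random.Random(4)
mu_S, sig_S = 10.0, 1.0                   # member strength (illustrative units)
mu_L, sig_L = 7.0, 1.0                    # applied load

# Analytic reliability for normal load and strength: P_fail = Phi(-beta),
# where beta is the classical reliability index.
beta = (mu_S - mu_L) / math.hypot(sig_S, sig_L)
p_theory = 0.5 * math.erfc(beta / math.sqrt(2))

# Monte Carlo: sample load and strength, count failures (load > strength).
n, fails = 200_000, 0
for _ in range(n):
    if rng.gauss(mu_L, sig_L) > rng.gauss(mu_S, sig_S):
        fails += 1
p_mc = fails / n
print(p_theory, p_mc)   # the two estimates agree to a few parts in a thousand
```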

  17. Search and Rescue Monte Carlo Simulation.

    DTIC Science & Technology

    1985-03-01

    confidence interval ) of the number of lives saved. A single page output and computer graphic present the information to the user in an easily understood...format. The confidence interval can be reduced by making additional runs of this Monte Carlo model. (Author)

  18. Monte Carlo methods in genetic analysis

    SciTech Connect

    Lin, Shili

    1996-12-31

    Many genetic analyses require computation of probabilities and likelihoods of pedigree data. With more and more genetic marker data deriving from new DNA technologies becoming available to researchers, exact computations are often formidable with standard statistical methods and computational algorithms. The desire to utilize as much available data as possible, coupled with complexities of realistic genetic models, push traditional approaches to their limits. These methods encounter severe methodological and computational challenges, even with the aid of advanced computing technology. Monte Carlo methods are therefore increasingly being explored as practical techniques for estimating these probabilities and likelihoods. This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis. Three areas of applications are presented to demonstrate the versatility of Markov chain Monte Carlo for different types of genetic problems. A multilocus linkage analysis example is also presented to illustrate the sequential imputation method. Finally, important statistical issues of Markov chain Monte Carlo and sequential imputation, some of which are unique to genetic data, are discussed, and current solutions are outlined. 72 refs.

  19. Monte Carlo studies of ARA detector optimization

    NASA Astrophysics Data System (ADS)

    Stockham, Jessica

    2013-04-01

    The Askaryan Radio Array (ARA) is a neutrino detector deployed in the Antarctic ice sheet near the South Pole. The array is designed to detect ultra high energy neutrinos in the range of 0.1-10 EeV. Detector optimization is studied using Monte Carlo simulations.

  20. Rapid calculation of diffuse reflectance from a multilayered model by combination of the white Monte Carlo and adding-doubling methods

    PubMed Central

    Yoshida, Kenichiro; Nishidate, Izumi

    2014-01-01

    To rapidly derive a result for diffuse reflectance from a multilayered model that is equivalent to that of a Monte-Carlo simulation (MCS), we propose a combination of a layered white MCS and the adding-doubling method. For slabs with various scattering coefficients assuming a certain anisotropy factor and without absorption, we calculate the transition matrices for light flow with respect to the incident and exit angles. From this series of precalculated transition matrices, we can calculate the transition matrices for the multilayered model with the specific anisotropy factor. The relative errors of the results of this method compared to a conventional MCS were less than 1%. We successfully used this method to estimate the chromophore concentration from the reflectance spectrum of a numerical model of skin and in vivo human skin tissue. PMID:25426319

  1. MontePython: Implementing Quantum Monte Carlo using Python

    NASA Astrophysics Data System (ADS)

    Nilsen, Jon Kristian

    2007-11-01

We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system to which QMC is applied, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and we describe how to implement these methods in pure C++ and C++/Python. Furthermore, we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible. Program summary. Program title: MontePython. Catalogue identifier: ADZP_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZP_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 49 519. No. of bytes in distributed program, including test data, etc.: 114 484. Distribution format: tar.gz. Programming language: C++, Python. Computer: PC, IBM RS6000/320, HP, ALPHA. Operating system: LINUX. Has the code been vectorised or parallelized?: Yes, parallelized with MPI. Number of processors used: 1-96. RAM: Depends on physical system to be simulated. Classification: 7.6; 16.1. Nature of problem: Investigating ab initio quantum mechanical systems, specifically Bose-Einstein condensation in dilute gases of 87Rb. Solution method: Quantum Monte Carlo. Running time: 225 min with 20 particles (with 4800 walkers moved in 1750 time steps) on 1 AMD Opteron TM Processor 2218 processor; a production run for, e.g., 200 particles takes around 24 hours on 32 such processors.

  2. Reaction mechanism and tautomeric equilibrium of 2-mercaptopyrimidine in the gas phase and in aqueous solution: a combined Monte Carlo and quantum mechanics study.

    PubMed

    Lima, Maria Carolina P; Coutinho, Kaline; Canuto, Sylvio; Rocha, Willian R

    2006-06-08

A combined Monte Carlo and quantum mechanical study was carried out to analyze the tautomeric equilibrium of 2-mercaptopyrimidine in the gas phase and in aqueous solution. Second- and fourth-order Møller-Plesset perturbation theory calculations indicate that in the gas phase the thiol (Pym-SH) is more stable than the thione (Pym-NH) by ca. 8 kcal/mol. In aqueous solution, thermodynamic perturbation theory implemented on a Monte Carlo NpT simulation indicates that both the differential enthalpy and Gibbs free energy favor the thione form. The calculated differential enthalpy is ΔH(solv, SH→NH) = -1.7 kcal/mol and the differential Gibbs free energy is ΔG(solv, SH→NH) = -1.9 kcal/mol. Analysis is made of the contribution of the solute-solvent hydrogen bonds, and it is noted that the SH group in the thiol and the NH group in the thione tautomer act exclusively as hydrogen bond donors in aqueous solution. The proton transfer reaction between the tautomeric forms was also investigated in the gas phase and in aqueous solution. Two distinct mechanisms were considered: a direct intramolecular transfer and a water-assisted mechanism. In the gas phase, the intramolecular transfer leads to a large energy barrier of 34.4 kcal/mol, passing through a three-center transition state. The proton transfer with the assistance of one water molecule decreases the energy barrier to 17.2 kcal/mol. In solution, these calculated activation barriers are, respectively, 32.0 and 14.8 kcal/mol. The solvent effect is found to be sizable, but it is considerably more important as a participant in the water-assisted mechanism than as the solvent field of the solute-solvent interaction. Finally, the calculated total Gibbs free energy is used to estimate the equilibrium constant.
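The final step, estimating an equilibrium constant from a Gibbs free energy difference, follows K = exp(-ΔG/RT). Using the ΔG of -1.9 kcal/mol quoted above and assuming T = 298.15 K (the paper's exact conditions may differ):

```python
import math

R = 1.987e-3          # gas constant in kcal/(mol K)
T = 298.15            # temperature in K (assumed)
dG = -1.9             # kcal/mol, thiol -> thione in aqueous solution (from the abstract)

K = math.exp(-dG / (R * T))   # K = exp(-DeltaG / RT)
print(K)              # roughly 25: the thione form dominates about 25:1 in water
```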

  3. Nonequilibrium Candidate Monte Carlo Simulations with Configurational Freezing Schemes.

    PubMed

    Giovannelli, Edoardo; Gellini, Cristina; Pietraperzia, Giangaetano; Cardini, Gianni; Chelli, Riccardo

    2014-10-14

Nonequilibrium Candidate Monte Carlo simulation [Nilmeier et al., Proc. Natl. Acad. Sci. U.S.A. 2011, 108, E1009-E1018] is a tool devised to design Monte Carlo moves with high acceptance probabilities that connect uncorrelated configurations. Such moves are generated through nonequilibrium driven dynamics, producing candidate configurations accepted with a Monte Carlo-like criterion that preserves the equilibrium distribution. The probability of accepting a candidate configuration as the next sample in the Markov chain basically depends on the work performed on the system during the nonequilibrium trajectory and increases as that work decreases. It is thus strategically relevant to find ways of producing nonequilibrium moves with low work, namely moves where dissipation is as low as possible. This is the goal of our methodology, in which we combine Nonequilibrium Candidate Monte Carlo with Configurational Freezing schemes developed by Nicolini et al. (J. Chem. Theory Comput. 2011, 7, 582-593). The idea is to limit the configurational sampling to particles of a well-established region of the simulation sample, namely the region where dissipation occurs, while leaving the other particles fixed. This allows the system to relax faster around the region perturbed by the finite-time switching move and hence reduces the dissipated work, ultimately enhancing the probability of accepting the generated move. Our combined approach significantly enhances configurational sampling, as shown by the case of a bistable dimer immersed in a dense fluid.

  4. Monte Carlo Particle Transport: Algorithm and Performance Overview

    SciTech Connect

    Gentile, N; Procassini, R; Scott, H

    2005-06-02

    Monte Carlo methods are frequently used for neutron and radiation transport. These methods have several advantages, such as relative ease of programming and dealing with complex meshes. Disadvantages include long run times and statistical noise. Monte Carlo photon transport calculations also often suffer from inaccuracies in matter temperature due to the lack of implicitness. In this paper we discuss the Monte Carlo algorithm as it is applied to neutron and photon transport, detail the differences between neutron and photon Monte Carlo, and give an overview of the ways the numerical method has been modified to deal with issues that arise in photon Monte Carlo simulations.
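A bare-bones illustration of the particle-tracking loop common to these methods (a 1D monoenergetic slab with invented cross sections, far simpler than any production neutron or photon code): each history alternates sampled free flights with absorption-or-scatter collisions until the particle leaks out or is absorbed, and the tallies over many histories carry the familiar statistical noise.

```python
import random, math

rng = random.Random(5)
sigma_t, c = 1.0, 0.7        # total cross section and scattering ratio (assumed)
thickness = 3.0              # slab width in mean free paths

def history():
    """One particle history in a 1D slab: fly, then absorb or scatter."""
    x, mu_dir = 0.0, 1.0     # born on the left face, moving right
    while True:
        x += mu_dir * (-math.log(rng.random()) / sigma_t)   # sampled free flight
        if x < 0.0:
            return "reflected"
        if x > thickness:
            return "transmitted"
        if rng.random() > c:
            return "absorbed"
        mu_dir = rng.uniform(-1.0, 1.0)                     # isotropic scatter

n = 100_000
tally = {"reflected": 0, "transmitted": 0, "absorbed": 0}
for _ in range(n):
    tally[history()] += 1
print({k: v / n for k, v in tally.items()})
```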

  5. Atomistic Monte Carlo Simulation of Lipid Membranes

    PubMed Central

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

Biological membranes are complex assemblies of many different molecules whose analysis demands a variety of experimental and computational approaches. In this article, we explain the challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction to the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion-angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314

  6. Monte Carlo simulations of Protein Adsorption

    NASA Astrophysics Data System (ADS)

    Sharma, Sumit; Kumar, Sanat K.; Belfort, Georges

    2008-03-01

Amyloidogenic diseases, such as Alzheimer's, are caused by adsorption and aggregation of partially unfolded proteins. Adsorption of proteins is a concern in the design of biomedical devices, such as dialysis membranes. Protein adsorption is often accompanied by conformational rearrangements in protein molecules. Such conformational rearrangements are thought to affect many properties of adsorbed protein molecules, such as their adhesion strength to the surface, biological activity, and aggregation tendency. It has been experimentally shown that many naturally occurring proteins, upon adsorption to hydrophobic surfaces, undergo a helix-to-sheet or helix-to-random-coil secondary structural rearrangement. However, to better understand the equilibrium structural complexities of this phenomenon, we have performed Monte Carlo (MC) simulations of adsorption of a four-helix bundle, modeled as a lattice protein, and studied the adsorption behavior and equilibrium protein conformations at different temperatures and degrees of surface hydrophobicity. To study the free energy and entropic effects on adsorption, canonical ensemble MC simulations have been combined with the Weighted Histogram Analysis Method (WHAM). Conformational transitions of proteins on surfaces will be discussed as a function of surface hydrophobicity and compared to analogous bulk transitions.

  7. An enhanced Monte Carlo outlier detection method.

    PubMed

    Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi

    2015-09-30

    Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the value of validation by Kovats retention indices and the root mean square error of prediction decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.
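The cross-prediction idea can be sketched as follows (a toy linear-calibration version, not the authors' actual chemometric models): many random train/test splits are drawn, each held-out sample accumulates its own distribution of prediction errors, and a sample whose mean error stands far outside the rest is flagged as an outlier.

```python
import random

random.seed(6)
# Synthetic calibration data: y = 2x + noise, plus one corrupted sample (index 25).
xs = [i / 10 for i in range(40)]
ys = [2 * x + random.gauss(0, 0.1) for x in xs]
ys[25] += 3.0                              # injected outlier

def fit_line(pts):
    """Ordinary least-squares fit y = a + b*x."""
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    b = sum((p[0] - mx) * (p[1] - my) for p in pts) / sum((p[0] - mx) ** 2 for p in pts)
    return b, my - b * mx

# Monte Carlo resampling of train/test splits; each held-out sample
# accumulates its own distribution of prediction errors.
errors = {i: [] for i in range(len(xs))}
for _ in range(500):
    idx = list(range(len(xs)))
    random.shuffle(idx)
    train, test = idx[:30], idx[30:]
    b, a = fit_line([(xs[i], ys[i]) for i in train])
    for i in test:
        errors[i].append(abs(ys[i] - (a + b * xs[i])))

mean_err = {i: sum(e) / len(e) for i, e in errors.items() if e}
worst = max(mean_err, key=mean_err.get)
print(worst, round(mean_err[worst], 2))    # the corrupted sample stands out
```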

  8. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.

    1980-01-01

    At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.

  9. Monte Carlo simulations on SIMD computer architectures

    SciTech Connect

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-03-01

Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next-nearest, and long-range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.

  10. Fission Matrix Capability for MCNP Monte Carlo

    NASA Astrophysics Data System (ADS)

    Brown, Forrest; Carney, Sean; Kiedrowski, Brian; Martin, William

    2014-06-01

    We describe recent experience and results from implementing a fission matrix capability into the MCNP Monte Carlo code. The fission matrix can be used to provide estimates of the fundamental mode fission distribution, the dominance ratio, the eigenvalue spectrum, and higher mode forward and adjoint eigenfunctions of the fission neutron source distribution. It can also be used to accelerate the convergence of the power method iterations and to provide basis functions for higher-order perturbation theory. The higher-mode fission sources can be used in MCNP to determine higher-mode forward fluxes and tallies, and work is underway to provide higher-mode adjoint-weighted fluxes and tallies. Past difficulties and limitations of the fission matrix approach are overcome with a new sparse representation of the matrix, permitting much larger and more accurate fission matrix representations. The new fission matrix capabilities provide a significant advance in the state-of-the-art for Monte Carlo criticality calculations.
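The fission-matrix quantities mentioned above can be illustrated with a small dense matrix (the 5-region kernel below is invented, and the actual MCNP capability tallies a much larger sparse matrix during the random walk): power iteration yields the fundamental-mode source and eigenvalue, and the full spectrum gives the dominance ratio that governs the convergence rate of source iteration.

```python
import numpy as np

# Toy "fission matrix" for a 1D reactor discretized into 5 regions:
# F[i, j] ~ expected fission neutrons born in region i per fission neutron
# born in region j (values are illustrative, not from any real problem).
n = 5
F = np.array([[np.exp(-abs(i - j)) for j in range(n)] for i in range(n)])

def power_iteration(F, iters=200):
    """Power method: converges to the fundamental eigenpair of F."""
    s = np.ones(F.shape[0])
    for _ in range(iters):
        s = F @ s
        k = np.linalg.norm(s)
        s /= k
    return k, s

k_eff, source = power_iteration(F)

# The full eigenvalue spectrum gives the dominance ratio |k2|/|k1|, which
# controls how quickly power iteration (and MC source iteration) converges.
eig = np.sort(np.abs(np.linalg.eigvals(F)))[::-1]
dominance_ratio = eig[1] / eig[0]
print(k_eff, dominance_ratio)              # source peaks in the central region
```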

  11. Quantum Monte Carlo applied to solids

    SciTech Connect

    Shulenburger, Luke; Mattsson, Thomas R.

    2013-12-01

    We apply diffusion quantum Monte Carlo to a broad set of solids, benchmarking the method by comparing bulk structural properties (equilibrium volume and bulk modulus) to experiment and density functional theory (DFT) based theories. The test set includes materials with many different types of binding including ionic, metallic, covalent, and van der Waals. We show that, on average, the accuracy is comparable to or better than that of DFT when using the new generation of functionals, including one hybrid functional and two dispersion corrected functionals. The excellent performance of quantum Monte Carlo on solids is promising for its application to heterogeneous systems and high-pressure/high-density conditions. Important to the results here is the application of a consistent procedure with regards to the several approximations that are made, such as finite-size corrections and pseudopotential approximations. This test set allows for any improvements in these methods to be judged in a systematic way.

  12. Applications of Maxent to quantum Monte Carlo

    SciTech Connect

    Silver, R.N.; Sivia, D.S.; Gubernatis, J.E. ); Jarrell, M. . Dept. of Physics)

    1990-01-01

We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.

  13. Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy

    NASA Astrophysics Data System (ADS)

    Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James

    2012-03-01

    Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution affect the diffuse spectral remittance (300-1000 nm). Skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis and dermis) as well as laterally asymmetric features (e.g., melanocytic invasion) were modeled in an inhomogeneous Monte Carlo model.

  14. Monte Carlo approach to Estrada index

    NASA Astrophysics Data System (ADS)

    Gutman, Ivan; Radenković, Slavko; Graovac, Ante; Plavšić, Dejan

    2007-09-01

    Let G be a graph on n vertices, and let λ1, λ2, …, λn be its eigenvalues. The Estrada index of G is a recently introduced molecular structure descriptor, defined as EE = ∑_{i=1}^{n} e^{λi}. Using a Monte Carlo approach, and treating the graph eigenvalues as random variables, we deduce approximate expressions for EE, in terms of the number of vertices and number of edges, of very high accuracy.
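
    The definition used in this abstract is easy to evaluate directly for a small graph; the example below computes EE for the 4-cycle C4 (a graph chosen here for illustration, not taken from the paper):

    ```python
    import numpy as np

    # Adjacency matrix of the 4-cycle C4: vertices 0-1-2-3-0.
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)

    lam = np.linalg.eigvalsh(A)       # graph eigenvalues λ1..λn
    EE = np.exp(lam).sum()            # Estrada index EE = Σ exp(λi)
    print(EE)
    ```

    For C4 the eigenvalues are 2, 0, 0, -2, so EE = e² + 2 + e⁻² ≈ 9.52; the paper's Monte Carlo approximation replaces this exact spectrum with random variables constrained by the vertex and edge counts.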

  15. Accelerated Monte Carlo by Embedded Cluster Dynamics

    NASA Astrophysics Data System (ADS)

    Brower, R. C.; Gross, N. A.; Moriarty, K. J. M.

    1991-07-01

    We present an overview of the new methods for embedding Ising spins in continuous fields to achieve accelerated cluster Monte Carlo algorithms. The methods of Brower and Tamayo and Wolff are summarized and variations are suggested for the O(N) models based on multiple embedded Z2 spin components and/or correlated projections. Topological features are discussed for the XY model and numerical simulations presented for d=2, d=3 and mean field theory lattices.

  16. Path Integral Monte Carlo Methods for Fermions

    NASA Astrophysics Data System (ADS)

    Ethan, Ethan; Dubois, Jonathan; Ceperley, David

    2014-03-01

    In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with a discussion of extensions to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10-ERD-058, and the Lawrence Scholar program.

  17. Harnessing graphical structure in Markov chain Monte Carlo learning

    SciTech Connect

    Stolorz, P.E.; Chew, P.C.

    1996-12-31

    The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many data mining problems. Generalized Hidden Markov Models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar in spirit, but focuses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.

  18. Theoretical study of the ammonia nitridation rate on an Fe (100) surface: A combined density functional theory and kinetic Monte Carlo study

    SciTech Connect

    Yeo, Sang Chul; Lee, Hyuck Mo; Lo, Yu Chieh; Li, Ju

    2014-10-07

    Ammonia (NH₃) nitridation on an Fe surface was studied by combining density functional theory (DFT) and kinetic Monte Carlo (kMC) calculations. A DFT calculation was performed to obtain the energy barriers (E_b) of the relevant elementary processes. The full mechanism of the exact reaction path was divided into five steps (adsorption, dissociation, surface migration, penetration, and diffusion) on an Fe (100) surface pre-covered with nitrogen. The energy barrier (E_b) depended on the N surface coverage. The DFT results were subsequently employed as a database for the kMC simulations. We then evaluated the NH₃ nitridation rate on the N pre-covered Fe surface. To determine the conditions necessary for a rapid NH₃ nitridation rate, eight reaction events were considered in the kMC simulations: adsorption, desorption, dissociation, reverse dissociation, surface migration, penetration, reverse penetration, and diffusion. This study provides a real-time-scale simulation of NH₃ nitridation influenced by nitrogen surface coverage that allowed us to theoretically determine a nitrogen coverage (0.56 ML) suitable for rapid NH₃ nitridation. In this way, we were able to reveal the coverage dependence of the nitridation reaction using the combined DFT and kMC simulations.
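
    A minimal rejection-free kMC loop of the kind this abstract describes can be sketched as follows; the barrier values, attempt frequency, and temperature below are placeholder assumptions, not the paper's DFT results:

    ```python
    import math, random

    kB = 8.617e-5            # Boltzmann constant, eV/K
    T = 700.0                # temperature, K
    nu = 1.0e13              # attempt frequency (1/s), a common assumption

    barriers = {             # hypothetical E_b values for a few event types (eV)
        "adsorption": 0.2,
        "dissociation": 0.9,
        "migration": 0.5,
        "penetration": 1.1,
    }
    # Arrhenius rates from the barriers
    rates = {e: nu * math.exp(-Eb / (kB * T)) for e, Eb in barriers.items()}

    random.seed(0)
    t = 0.0
    counts = {e: 0 for e in rates}
    total = sum(rates.values())
    for _ in range(10000):
        # choose an event with probability proportional to its rate
        r = random.random() * total
        for e, k in rates.items():
            r -= k
            if r <= 0.0:
                counts[e] += 1
                break
        # advance the clock by an exponentially distributed waiting time
        t += -math.log(1.0 - random.random()) / total
    print(counts, t)
    ```

    In a real DFT+kMC study the rate catalog would also depend on the local nitrogen coverage, which is what produces the coverage dependence reported above.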

  19. Multiple-time-stepping generalized hybrid Monte Carlo methods

    SciTech Connect

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC additionally uses a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
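
    For context, the plain hybrid (Hamiltonian) Monte Carlo building block that GHMC and GSHMC refine can be sketched for a 1D standard Gaussian target; none of the MTS, mollification, or shadow-Hamiltonian machinery from the paper is included:

    ```python
    import math, random

    # Target: standard Gaussian, U(q) = q^2/2, so grad U = q.
    def grad_U(q):
        return q

    def leapfrog(q, p, eps, n_steps):
        # velocity-Verlet / leapfrog integration of Hamilton's equations
        p -= 0.5 * eps * grad_U(q)
        for _ in range(n_steps - 1):
            q += eps * p
            p -= eps * grad_U(q)
        q += eps * p
        p -= 0.5 * eps * grad_U(q)
        return q, p

    random.seed(1)
    q, samples = 0.0, []
    for _ in range(20000):
        p = random.gauss(0.0, 1.0)                         # momentum refresh
        H0 = 0.5 * q * q + 0.5 * p * p
        q_new, p_new = leapfrog(q, p, eps=0.2, n_steps=10)
        H1 = 0.5 * q_new * q_new + 0.5 * p_new * p_new
        if random.random() < math.exp(min(0.0, H0 - H1)):  # Metropolis test
            q = q_new
        samples.append(q)

    mean = sum(samples) / len(samples)
    var = sum(s * s for s in samples) / len(samples)
    print(mean, var)
    ```

    GHMC replaces the full momentum refresh with a partial one, and GSHMC replaces H in the accept/reject test with a shadow Hamiltonian that the integrator conserves more accurately, raising acceptance rates.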

  20. Four decades of implicit Monte Carlo

    SciTech Connect

    Wollaber, Allan B.

    2016-02-23

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. We also consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  2. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; Hurtado, John E.

    2011-01-01

    The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amounts of data combined with the diminished available time of engineers motivate the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
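
    The classification step described above (a k-nearest-neighbor classifier on dispersed design parameters with pass/fail labels) can be illustrated with a toy stand-in; the two-parameter failure rule below is invented for illustration, and the KDE and feature-selection stages are omitted:

    ```python
    import random

    # Label synthetic Monte Carlo runs pass/fail; the failure rule
    # (x0 + x1 > 1.2) is a made-up placeholder for a real system failure.
    random.seed(42)
    runs = [(random.random(), random.random()) for _ in range(500)]
    labels = [int(x0 + x1 > 1.2) for x0, x1 in runs]   # 1 = failure

    def knn_predict(point, data, labels, k=5):
        # classify by majority vote among the k nearest runs
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(point, d)), lab)
            for d, lab in zip(data, labels)
        )
        votes = [lab for _, lab in dists[:k]]
        return int(sum(votes) > k // 2)

    # A point deep in the failure region classifies as a failure,
    # one far from it as a pass.
    print(knn_predict((0.9, 0.9), runs, labels))   # expected 1
    print(knn_predict((0.1, 0.1), runs, labels))   # expected 0
    ```

    Mapping the classifier's decision boundary back to the input parameters is what identifies the parameter combinations to avoid.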

  3. Mammography X-Ray Spectra Simulated with Monte Carlo

    SciTech Connect

    Vega-Carrillo, H. R.; Gonzalez, J. Ramirez; Manzanares-Acuna, E.; Hernandez-Davila, V. M.; Villasana, R. Hernandez; Mercado, G. A.

    2008-08-11

    Monte Carlo calculations have been carried out to obtain the x-ray spectra of various target-filter combinations for a mammography unit. Mammography is widely used to diagnose breast cancer. In addition to the Mo target with Mo filter combination, Rh/Rh, Mo/Rh, Mo/Al, Rh/Al, and W/Rh combinations are also utilized. In this work Monte Carlo calculations, using the MCNP 4C code, were carried out to estimate the x-ray spectra produced when a beam of 28 keV electrons collides with Mo, Rh and W targets. The resulting x-ray spectra show characteristic x-rays and continuous bremsstrahlung. Spectra were also calculated including filters.

  4. Monte Carlo simulation of intercalated carbon nanotubes.

    PubMed

    Mykhailenko, Oleksiy; Matsui, Denis; Prylutskyy, Yuriy; Le Normand, Francois; Eklund, Peter; Scharff, Peter

    2007-01-01

    Monte Carlo simulations of the single- and double-walled carbon nanotubes (CNT) intercalated with different metals have been carried out. The interrelation between the length of a CNT, the number and type of metal atoms has also been established. This research is aimed at studying intercalated systems based on CNTs and d-metals such as Fe and Co. Factors influencing the stability of these composites have been determined theoretically by the Monte Carlo method with the Tersoff potential. The modeling of CNTs intercalated with metals by the Monte Carlo method has proved that there is a correlation between the length of a CNT and the number of endo-atoms of specific type. Thus, in the case of a metallic CNT (9,0) with length 17 bands (3.60 nm), in contrast to Co atoms, Fe atoms are extruded out of the CNT if the number of atoms in the CNT is not less than eight. Thus, this paper shows that a CNT of a certain size can be intercalated with no more than eight Fe atoms. The systems investigated are stabilized by coordination of 3d-atoms close to the CNT wall with a radius-vector of (0.18-0.20) nm. Another characteristic feature is that, within the temperature range of (400-700) K, small systems exhibit ground-state stabilization which is not characteristic of the higher ones. The behavior of Fe and Co endo-atoms between the walls of a double-walled carbon nanotube (DW CNT) is explained by a dominating van der Waals interaction between the Co atoms themselves, which is not true for the Fe atoms.

  5. Quantum Monte Carlo for vibrating molecules

    SciTech Connect

    Brown, W.R.

    1996-08-01

    Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schrödinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H₂O and C₃ vibrational states, using 7 PESs, 3 trial wavefunction forms, two methods of non-linear basis function parameter optimization, and both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H₂O and C₃. For C₃, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C₃ PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.

  6. A Monte Carlo approach to water management

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, D.

    2012-04-01

    Common methods for making optimal decisions in water management problems are insufficient. Linear programming methods are inappropriate because hydrosystems are nonlinear with respect to their dynamics, operation constraints and objectives. Dynamic programming methods are inappropriate because water management problems cannot be divided into sequential stages. Also, these deterministic methods cannot properly deal with the uncertainty of future conditions (inflows, demands, etc.). Even stochastic extensions of these methods (e.g. linear-quadratic-Gaussian control) necessitate such drastic oversimplifications of hydrosystems that may make the obtained results irrelevant to the real world problems. However, a Monte Carlo approach is feasible and can form a general methodology applicable to any type of hydrosystem. This methodology uses stochastic simulation to generate system inputs, either unconditional or conditioned on a prediction, if available, and represents the operation of the entire system through a simulation model as faithful as possible, without demanding a specific mathematical form that would imply oversimplifications. Such representation fully respects the physical constraints, while at the same time it evaluates the system operation constraints and objectives in probabilistic terms, and derives their distribution functions and statistics through Monte Carlo simulation. As the performance criteria of a hydrosystem operation will generally be highly nonlinear and highly nonconvex functions of the control variables, a second Monte Carlo procedure, implementing stochastic optimization, is necessary to optimize system performance and evaluate the control variables of the system. The latter is facilitated if the entire representation is parsimonious, i.e. if the number of control variables is kept at a minimum by involving a suitable system parameterization. 
The approach is illustrated through three examples for (a) a hypothetical system of two reservoirs
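
    A bare-bones version of the simulation side of this methodology, for a single hypothetical reservoir with synthetic lognormal inflows (all numbers below are illustrative assumptions), might look like:

    ```python
    import random

    random.seed(7)

    def simulate(capacity, demand, years=100):
        # one synthetic operation record: monthly inflows, fixed demand,
        # storage bounded by reservoir capacity
        storage, failures = capacity / 2.0, 0
        for _ in range(12 * years):
            inflow = random.lognormvariate(0.0, 0.5)   # synthetic monthly inflow
            storage = min(capacity, storage + inflow)
            release = min(demand, storage)
            if release < demand:
                failures += 1
            storage -= release
        return 1.0 - failures / (12.0 * years)

    # Monte Carlo over many synthetic inflow records gives the distribution
    # of reliability for a candidate design (summarized here by its mean).
    reliabilities = [simulate(capacity=10.0, demand=1.0) for _ in range(200)]
    mean_rel = sum(reliabilities) / len(reliabilities)
    print(mean_rel)
    ```

    The stochastic-optimization layer described in the abstract would wrap a search over control variables (here, e.g., capacity or release rule parameters) around this Monte Carlo evaluation.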

  7. Status of Monte-Carlo Event Generators

    SciTech Connect

    Hoeche, Stefan (SLAC)

    2011-08-11

    Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off initial- and final-state partons and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.

  8. Monte Carlo calculations for r-process nucleosynthesis

    SciTech Connect

    Mumpower, Matthew Ryan

    2015-11-12

    A Monte Carlo framework is developed for exploring the impact of nuclear model uncertainties on the formation of the heavy elements. Mass measurements tightly constrain the macroscopic sector of FRDM2012. For r-process nucleosynthesis, it is necessary to understand the microscopic physics of the nuclear model employed. A combined approach of measurements and a deeper understanding of the microphysics is thus warranted to elucidate the site of the r-process.

  9. A combined XRF/Monte Carlo simulation study of multilayered Peruvian metal artifacts from the tomb of the Priestess of Chornancap

    NASA Astrophysics Data System (ADS)

    Brunetti, Antonio; Fabian, Julio; La Torre, Carlos Wester; Schiavon, Nick

    2016-06-01

    An innovative methodological approach based on XRF measurements using a polychromatic X-ray beam combined with simulation tests based on an ultra-fast custom-made Monte Carlo code has been used to characterize the bulk chemical composition of restored (i.e., cleaned) and unrestored multilayered Peruvian metallic artifacts belonging to the twelfth- and thirteenth-century AD funerary complex of Chornancap-Chotuna in northern Peru. The multilayered structure was represented by a metal substrate covered by surface corrosion patinas and/or a layer from past protective treatments. The aim of the study was to assess whether this new approach could be used to overcome some of the limitations highlighted in previous research performed using monochromatic X-ray beam on patina-free and protective treatment-free metal artifacts in obtaining reliable data both on the composition on the bulk metals and on surface layers thickness. Results from the analytical campaign have led to a reformulation of previous hypotheses about the structure and composition of the metal used to create the Peruvian artifacts under investigation.

  10. Self-Assembly of AB Diblock Copolymer Confined in a Soft Nano-Droplet: A Combination Study by Monte Carlo Simulation and Experiment.

    PubMed

    Yan, Nan; Zhu, Yutian; Jiang, Wei

    2016-11-23

    The self-assembly of AB-type diblock copolymers confined in a three-dimensional (3D) soft nanodroplet is investigated by the combination of Monte Carlo simulation and experiment. The influences of two critical factors, i.e., confinement degree of the imposed confinement space and the interfacial interaction between each individual block and boundary interface, on the 3D soft confined self-assembly are examined systematically. The simulation results reveal that block copolymer chains become more and more folded as the confinement degree (it can be monitored by the ratio of D/L, where L is the length of polymer chain and D is the reduced diameter of the final polymeric particle) is enhanced, causing a series of morphological transitions. Based on the simulation prediction, we perform the corresponding experiments by the 3D confined self-assembly of both symmetric and asymmetric block copolymers within the emulsion droplets. The experimental results well reproduce the confinement degree induced morphological transitions predicted by the simulations, such as the transition from segmented pupa-like particle to hamburger particle and the transition from raspberry-like particle to triangle-like particle, and then to hamburger particle. The current study implies that self-assembled nanostructures under 3D soft confinement can be simply controlled by tuning the confinement degree and interfacial property, i.e., the ratio of D/L and the interfacial interaction between each individual block and boundary interface.

  11. Combined ab initio and kinetic Monte Carlo simulations of C diffusion on the √(3)×√(3) β-SiC (111) surface

    NASA Astrophysics Data System (ADS)

    Righi, M. C.; Pignedoli, C. A.; di Felice, R.; Bertoni, C. M.; Catellani, A.

    2005-02-01

    We investigate the kinetic behavior of a single C adatom on the √3×√3 β-SiC(111) surface by means of combined ab initio and kinetic Monte Carlo simulations. After identifying the metastable binding locations, we calculate the energy barriers the adatom must overcome when jumping among them. The presence of the √3×√3 reconstruction creates considerable differences among the diffusion mechanisms that can be thermally activated. This has important implications for the C mobility on the surface, and therefore for SiC growth. The kinetic simulation at realistic temperatures and time scales revealed that C diffusion occurs mostly around the Si adatoms forming the √3×√3 reconstruction. A reduced adatom mobility, as observed in many studies of surfactant-mediated growth, can favor the formation of a high density of nuclei, and thus promote layer-by-layer growth. As a further result of the kinetic simulation we obtained the adatom diffusion coefficient, a macroscopic quantity accessible in experiments.

  12. Monte Carlo algorithm for free energy calculation.

    PubMed

    Bi, Sheng; Tong, Ning-Hua

    2015-07-01

    We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows an excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.
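
    For context, a plain Metropolis sampler for the square-lattice Ising model, the system used in the paper, is sketched below; note this only samples energies at fixed T and does not implement the authors' configuration-space F(T) temperature scan:

    ```python
    import math, random

    def ising_energy_per_site(L, T, sweeps, seed):
        # Metropolis sampling of the 2D Ising model (J = 1) on a periodic
        # L x L lattice, starting from the all-up configuration.
        rng = random.Random(seed)
        spins = [[1] * L for _ in range(L)]
        beta = 1.0 / T
        for _ in range(sweeps):
            for _ in range(L * L):
                i, j = rng.randrange(L), rng.randrange(L)
                nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                      + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
                dE = 2 * spins[i][j] * nb       # energy cost of flipping spin (i, j)
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    spins[i][j] *= -1
        # final energy per site (each bond counted once)
        E = 0
        for i in range(L):
            for j in range(L):
                E -= spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
        return E / (L * L)

    e_low = ising_energy_per_site(8, 1.0, 400, seed=0)    # ordered phase, near -2
    e_high = ising_energy_per_site(8, 5.0, 400, seed=0)   # disordered phase
    print(e_low, e_high)
    ```

    The algorithm in the abstract goes further: by scanning T while sampling configurations it reconstructs the free energy F(T) itself, which plain energy sampling like this cannot provide directly.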

  13. MBR Monte Carlo Simulation in PYTHIA8

    NASA Astrophysics Data System (ADS)

    Ciesielski, R.

    We present the MBR (Minimum Bias Rockefeller) Monte Carlo simulation of (anti)proton-proton interactions and its implementation in the PYTHIA8 event generator. We discuss the total, elastic, and total-inelastic cross sections, and three contributions from diffraction dissociation processes that contribute to the latter: single diffraction, double diffraction, and central diffraction or double-Pomeron exchange. The event generation follows a renormalized-Regge-theory model, successfully tested using CDF data. Based on the MBR-enhanced PYTHIA8 simulation, we present cross-section predictions for the LHC and beyond, up to collision energies of 50 TeV.

  14. Monte Carlo procedure for protein design

    NASA Astrophysics Data System (ADS)

    Irbäck, Anders; Peterson, Carsten; Potthast, Frank; Sandelin, Erik

    1998-11-01

    A method for sequence optimization in protein models is presented. The approach, which has inherited its basic philosophy from recent work by Deutsch and Kurosky [Phys. Rev. Lett. 76, 323 (1996)] by maximizing conditional probabilities rather than minimizing energy functions, is based upon a different and very efficient multisequence Monte Carlo scheme. By construction, the method ensures that the designed sequences represent good folders thermodynamically. A bootstrap procedure for the sequence space search is devised making very large chains feasible. The algorithm is successfully explored on the two-dimensional HP model [K. F. Lau and K. A. Dill, Macromolecules 32, 3986 (1989)] with chain lengths N=16, 18, and 32.

  15. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  16. Markov chain Monte Carlo without likelihoods.

    PubMed

    Marjoram, Paul; Molitor, John; Plagnol, Vincent; Tavare, Simon

    2003-12-23

    Many stochastic simulation approaches for generating observations from a posterior distribution depend on knowing a likelihood function. However, for many complex probability models, such likelihoods are either impossible or computationally prohibitive to obtain. Here we present a Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods. It can also be used in frequentist applications, in particular for maximum-likelihood estimation. The approach is illustrated by an example of ancestral inference in population genetics. A number of open problems are highlighted in the discussion.
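
    The likelihood-free MCMC idea can be sketched on a toy problem, inferring a Gaussian mean from its sample mean; with a flat prior and symmetric proposal the acceptance step reduces to checking whether data simulated under the proposal matches the observed summary within a tolerance (the numbers below are illustrative assumptions, not the paper's example):

    ```python
    import random

    random.seed(3)
    obs_mean = 2.0          # observed summary statistic
    n, eps = 50, 0.2        # sample size and tolerance

    def simulate_summary(theta):
        # simulate a dataset under theta and return its summary statistic
        return sum(random.gauss(theta, 1.0) for _ in range(n)) / n

    theta, chain = obs_mean, []   # initialized at the observed summary
    for _ in range(20000):
        proposal = theta + random.gauss(0.0, 0.5)
        # flat (improper) prior and symmetric proposal: the usual Metropolis
        # ratio reduces to the simulation-based indicator check
        if abs(simulate_summary(proposal) - obs_mean) < eps:
            theta = proposal
        chain.append(theta)

    burn = chain[5000:]
    post_mean = sum(burn) / len(burn)
    print(post_mean)
    ```

    No likelihood is ever evaluated: the forward simulator stands in for it, which is exactly what makes the approach usable for models where the likelihood is intractable.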

  17. Discovering correlated fermions using quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Wagner, Lucas K.; Ceperley, David M.

    2016-09-01

    It has become increasingly feasible to use quantum Monte Carlo (QMC) methods to study correlated fermion systems for realistic Hamiltonians. We give a summary of these techniques targeted at researchers in the field of correlated electrons, focusing on the fundamentals, capabilities, and current status of this technique. The QMC methods often offer the highest accuracy solutions available for systems in the continuum, and, since they address the many-body problem directly, the simulations can be analyzed to obtain insight into the nature of correlated quantum behavior.

  18. Exascale Monte Carlo R&D

    SciTech Connect

    Marcus, Ryan C.

    2012-07-24

    Overview of this presentation is (1) Exascale computing - different technologies, getting there; (2) high-performance proof-of-concept MCMini - features and results; and (3) OpenCL toolkit - Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL seems like a good, hardware agnostic tool. MCMini demonstrates the possibility for GPGPU-based Monte Carlo methods - it shows great scaling for HPC application and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.

  19. Quantum Monte Carlo calculations for light nuclei

    SciTech Connect

    Wiringa, R.B.

    1998-08-01

    Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A ≤ 8 are made using a realistic Hamiltonian that fits NN scattering data. Results for more than 30 different (Jπ, T) states, plus isobaric analogs, are obtained and the known excitation spectra are reproduced reasonably well. Various density and momentum distributions and electromagnetic form factors and moments have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.

  20. Introduction to Cluster Monte Carlo Algorithms

    NASA Astrophysics Data System (ADS)

    Luijten, E.

    This chapter provides an introduction to cluster Monte Carlo algorithms for classical statistical-mechanical systems. A brief review of the conventional Metropolis algorithm is given, followed by a detailed discussion of the lattice cluster algorithm developed by Swendsen and Wang and the single-cluster variant introduced by Wolff. For continuum systems, the geometric cluster algorithm of Dress and Krauth is described. It is shown how their geometric approach can be generalized to incorporate particle interactions beyond hardcore repulsions, thus forging a connection between the lattice and continuum approaches. Several illustrative examples are discussed.
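
    As a concrete illustration of the single-cluster (Wolff) variant discussed in this chapter, the following minimal Python sketch grows one cluster on a 2D Ising lattice with periodic boundaries and flips it. Function and variable names are illustrative, not taken from the chapter.

```python
import math
import random

def wolff_update(spins, L, beta, rng):
    """One Wolff single-cluster update for the 2D Ising model (J = 1)
    with periodic boundaries -- a minimal sketch, not production code."""
    p_add = 1.0 - math.exp(-2.0 * beta)          # bond-activation probability
    seed = rng.randrange(L * L)
    s0 = spins[seed]
    cluster = {seed}
    stack = [seed]
    while stack:
        i = stack.pop()
        x, y = i % L, i // L
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            j = (nx % L) + (ny % L) * L          # periodic wrap
            if j not in cluster and spins[j] == s0 and rng.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:                            # flip the whole cluster
        spins[i] = -spins[i]

L, beta = 16, 0.6      # beta above the critical ~0.4407: ordered phase
rng = random.Random(0)
spins = [1] * (L * L)
for _ in range(2000):
    wolff_update(spins, L, beta, rng)
m = abs(sum(spins)) / (L * L)   # magnetization per spin
```

Because entire clusters flip at once, the autocorrelation near the critical point is far shorter than for single-spin Metropolis updates, which is the central point of the chapter.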

  1. Cluster hybrid Monte Carlo simulation algorithms.

    PubMed

    Plascak, J A; Ferrenberg, Alan M; Landau, D P

    2002-06-01

    We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.

  2. Cluster hybrid Monte Carlo simulation algorithms

    NASA Astrophysics Data System (ADS)

    Plascak, J. A.; Ferrenberg, Alan M.; Landau, D. P.

    2002-06-01

    We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.

  3. Monte Carlo simulation for the transport beamline

    NASA Astrophysics Data System (ADS)

    Romano, F.; Attili, A.; Cirrone, G. A. P.; Carpinelli, M.; Cuttone, G.; Jia, S. B.; Marchetto, F.; Russo, G.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Varisano, A.

    2013-07-01

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.

  4. State-of-the-art Monte Carlo 1988

    SciTech Connect

    Soran, P.D.

    1988-06-28

    Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.

  5. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    SciTech Connect

    Densmore, Jeffrey D; Kelly, Thompson G; Urbatish, Todd J

    2010-11-17

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.

  6. Monte Carlo modeling of spatial coherence: free-space diffraction

    PubMed Central

    Fischer, David G.; Prahl, Scott A.; Duncan, Donald D.

    2008-01-01

    We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions. PMID:18830335

  7. Monte Carlo simulations within avalanche rescue

    NASA Astrophysics Data System (ADS)

    Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg

    2016-04-01

    Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
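
    A toy version of the probing-depth calculation might look as follows; the burial-depth distribution and its parameters are invented for illustration and are not the values used in the study.

```python
import math
import random

def probing_depth_for_coverage(target=0.9, n=50000, seed=0):
    """Draw random burial depths (lognormal, parameters invented for
    illustration) and return the shallowest probing depth that would
    reach at least the target fraction of buried victims."""
    rng = random.Random(seed)
    depths = sorted(math.exp(rng.gauss(0.0, 0.5)) for _ in range(n))
    return depths[int(target * n)]

d90 = probing_depth_for_coverage(0.9)   # metres, for this toy distribution
```

The same pattern generalizes to the resuscitation-time question: propagate each unknown "random" parameter through the rescue model and read the optimal setting off the resulting empirical distribution.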

  8. Calculating Pi Using the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Williamson, Timothy

    2013-11-01

    During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the Center for Sustainable Energy at Notre Dame University (RET @ cSEND) working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10²¹ antineutrinos per second with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo. Further investigation led me to the Monte Carlo method page of Wikipedia, where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations or purely mathematically. It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
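
    The rice-on-a-square activity has a direct computational analog: sample random points in the unit square and count the fraction falling inside the quarter circle of radius 1, which approaches π/4. A minimal Python version:

```python
import random

def estimate_pi(n, seed=0):
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square that lands inside the quarter circle approaches pi/4."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n

pi_hat = estimate_pi(100_000)
```

The statistical error shrinks like 1/√n, so each extra decimal digit of π costs roughly 100 times more samples.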

  9. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect

    Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.

    2014-10-19

    Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  10. Quantum Monte Carlo methods for nuclear physics

    DOE PAGES

    Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; ...

    2014-10-19

    Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  11. Geometrical Monte Carlo simulation of atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Yuksel, Demet; Yuksel, Heba

    2013-09-01

    Atmospheric turbulence has a significant impact on the quality of a laser beam propagating through the atmosphere over long distances. Turbulence causes intensity scintillation and beam wander from propagation through turbulent eddies of varying sizes and refractive index. This can severely impair the operation of target designation and Free-Space Optical (FSO) communications systems. In addition, experimenting on an FSO communication system is rather tedious and difficult: interference from many environmental factors affects the results and gives the experimental outcomes larger error variance margins than expected. In the stronger turbulence regimes especially, the simulation and analysis of turbulence-induced beams require delicate attention. We propose a new geometrical model to assess the phase shift of a laser beam propagating through turbulence. The atmosphere along the laser beam propagation path will be modeled as a spatial distribution of spherical bubbles with refractive index discontinuity calculated from a Gaussian distribution with the mean value being the index of air. For each statistical representation of the atmosphere, the path of rays will be analyzed using geometrical optics. These Monte Carlo techniques will assess the phase shift as a summation of the phases that arrive at the same point at the receiver. Accordingly, there would be dark and bright spots at the receiver that give an idea of the intensity pattern without having to solve the wave equation. The Monte Carlo analysis will be compared with the predictions of wave theory.

  12. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node. • Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently. • Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain. • Supporting algorithms: visualizing constructive solid geometry, sourcing particles, deciding when particle-streaming communication is complete, and spatial redecomposition. These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
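
    The "global particle find" idea can be illustrated in one dimension: given sorted domain boundaries, a binary search maps a particle coordinate to the rank that owns it. This serial Python sketch (names hypothetical) omits the MPI communication and 3-D constructive solid geometry of the real algorithm.

```python
import bisect

def find_owner(x, boundaries):
    """Map a particle coordinate to the rank owning its interval by
    binary search over sorted 1-D domain boundaries -- a serial sketch
    of a 'global particle find'; real codes resolve ownership across
    MPI ranks in 3-D geometry."""
    return bisect.bisect_right(boundaries, x)

# four ranks owning [0, 2.5), [2.5, 5), [5, 7.5), [7.5, 10)
boundaries = [2.5, 5.0, 7.5]
ranks = [find_owner(x, boundaries) for x in (1.0, 2.5, 9.9)]
```

The point of such a lookup is that any processor can compute a particle's owner from coordinates alone, without a global broadcast of particle lists.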

  13. Discrete range clustering using Monte Carlo methods

    NASA Technical Reports Server (NTRS)

    Chatterji, G. B.; Sridhar, B.

    1993-01-01

    For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
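
    The comparison between plain Monte Carlo and simulated annealing can be made concrete with a toy version of the clustering problem: assign sparse 1-D range points to groups so as to minimize total within-cluster squared deviation, using a Metropolis acceptance rule with a cooling temperature. All names, the cost function, and the cooling schedule below are illustrative, not the paper's.

```python
import math
import random

def anneal_cluster(points, k, n_steps=20000, seed=0):
    """Group 1-D range points into k clusters by simulated annealing on
    the total within-cluster squared deviation (a stand-in cost)."""
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in points]

    def cost(lab):
        total = 0.0
        for c in range(k):
            members = [p for p, l in zip(points, lab) if l == c]
            if members:
                mean = sum(members) / len(members)
                total += sum((p - mean) ** 2 for p in members)
        return total

    cur = cost(labels)
    for step in range(n_steps):
        temp = (1.0 - step / n_steps) + 1e-6     # linear cooling schedule
        i = rng.randrange(len(points))
        old = labels[i]
        labels[i] = rng.randrange(k)             # propose a relabeling
        new = cost(labels)
        # Metropolis rule: always accept downhill, sometimes uphill
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new
        else:
            labels[i] = old
    return labels, cur

# two well-separated groups of range returns should be recovered
points = [1.0, 1.2, 0.9, 1.1, 10.0, 10.3, 9.8, 10.1]
labels, final_cost = anneal_cluster(points, k=2)
```

Setting the temperature to zero from the start recovers the basic greedy Monte Carlo method the paper uses as its baseline.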

  14. Quantum Monte Carlo methods for nuclear physics

    DOE PAGES

    Carlson, J.; Gandolfi, S.; Pederiva, F.; ...

    2015-09-09

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  15. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-09-09

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  16. Quantum Monte Carlo methods for nuclear physics

    NASA Astrophysics Data System (ADS)

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-07-01

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  17. Quantum Monte Carlo for atoms and molecules

    SciTech Connect

    Barnett, R.N.

    1989-11-01

    The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1-4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H2, LiH, Li2, and H2O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li2, and H2O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time, and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90-100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.

  18. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

    SciTech Connect

    WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.

    2007-01-10

    MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

  19. Chemical application of diffusion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1983-10-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. As an example the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on our VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX is discussed. Since CH2 has only eight electrons, most of the loops in this application are fairly short. The longest inner loops run over the set of atomic basis functions. The CPU time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures. Finally, preliminary work on restructuring the algorithm to compute the separate Monte Carlo realizations in parallel is discussed.

  20. Design of composite laminates by a Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Fang, Chin; Springer, George S.

    1993-01-01

    A Monte Carlo procedure was developed for optimizing symmetric fiber reinforced composite laminates such that the weight is minimum and the Tsai-Wu strength failure criterion is satisfied in each ply. The laminate may consist of several materials including an idealized core, and may be subjected to several sets of combined in-plane and bending loads. The procedure yields the number of plies, the fiber orientation, and the material of each ply and the material and thickness of the core. A user friendly computer code was written for performing the numerical calculations. Laminates optimized by the code were compared to laminates resulting from existing optimization methods. These comparisons showed that the present Monte Carlo procedure is a useful and efficient tool for the design of composite laminates.

  1. Monte Carlo analysis of satellite debris footprint dispersion

    NASA Technical Reports Server (NTRS)

    Rao, P. P.; Woeste, M. A.

    1979-01-01

    A comprehensive study is performed to investigate satellite debris impact point dispersion using a combination of Monte Carlo statistical analysis and parametric methods. The Monte Carlo technique accounts for nonlinearities in the entry point dispersion, which is represented by a covariance matrix of position and velocity errors. Because downrange distance of impact is a monotonic function of debris ballistic coefficient, a parametric method is useful for determining dispersion boundaries. The scheme is applied in the present analysis to estimate the Skylab footprint dispersions for a controlled reentry. A significant increase in the footprint dispersion is noticed for satellite breakup above a 200,000-ft altitude. A general discussion of the method used for analysis is presented together with some typical results obtained for the Skylab deboost mission, which was designed before NASA abandoned plans for a Skylab controlled reentry.
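
    A heavily simplified sketch of the combined approach: sample entry-state errors, exploit the monotonic dependence of downrange distance on ballistic coefficient, and read footprint bounds off the empirical distribution. All numbers and the toy downrange model below are invented for illustration and bear no relation to the actual Skylab analysis.

```python
import random

def footprint_dispersion(n=20000, seed=0):
    """Sample entry-state errors and a ballistic-coefficient spread,
    push them through a toy monotonic downrange model, and read 99%
    footprint bounds off the empirical distribution (all values
    illustrative only)."""
    rng = random.Random(seed)
    downrange = []
    for _ in range(n):
        v_err = rng.gauss(0.0, 30.0)     # entry velocity error, m/s
        g_err = rng.gauss(0.0, 0.1)      # flight-path-angle error, deg
        beta = rng.uniform(50.0, 500.0)  # ballistic coefficient, kg/m^2
        # toy model: downrange grows monotonically with beta (km)
        downrange.append(2000.0 + 1.5 * beta + 0.5 * v_err - 40.0 * g_err)
    downrange.sort()
    return downrange[int(0.005 * n)], downrange[int(0.995 * n)]

lo, hi = footprint_dispersion()    # 99% footprint bounds, km
```

Because downrange is monotonic in the ballistic coefficient, the extreme beta values bound the footprint parametrically, while the Monte Carlo sampling captures the nonlinear entry-error dispersion around those bounds.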

  2. Computer Monte Carlo simulation in quantitative resource estimation

    USGS Publications Warehouse

    Root, D.H.; Menzie, W.D.; Scott, W.A.

    1992-01-01

    The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. © 1992 Oxford University Press.
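
    Step 3 of the method can be sketched as follows; the deposit-number weights and lognormal grade/tonnage parameters are hypothetical, and the grade-tonnage dependencies that the actual program handles are ignored here for brevity.

```python
import math
import random

def contained_metal(n_trials=20000, seed=0):
    """Combine an estimated number-of-deposits distribution with
    lognormal grade and tonnage draws per deposit to build a probability
    distribution of contained metal (all parameters hypothetical)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        # analyst's estimate: 0-3 undiscovered deposits with these odds
        n_dep = rng.choices([0, 1, 2, 3], weights=[0.3, 0.4, 0.2, 0.1])[0]
        metal = 0.0
        for _ in range(n_dep):
            tonnage = math.exp(rng.gauss(13.0, 1.5))  # tonnes of ore
            grade = math.exp(rng.gauss(-6.0, 0.8))    # metal fraction
            metal += tonnage * grade
        totals.append(metal)
    totals.sort()
    return totals

totals = contained_metal()
p_zero = sum(1 for t in totals if t == 0.0) / len(totals)  # no-deposit odds
median_metal = totals[len(totals) // 2]
```

The sorted totals give the full probability distribution, so any quantile needed for economic analysis can be read off directly.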

  3. Cluster Monte Carlo methods for the FePt Hamiltonian

    NASA Astrophysics Data System (ADS)

    Lyberatos, A.; Parker, G. J.

    2016-02-01

    Cluster Monte Carlo methods for the classical spin Hamiltonian of FePt with long range exchange interactions are presented. We use a combination of the Swendsen-Wang (or Wolff) and Metropolis algorithms that satisfies the detailed balance condition and ergodicity. The algorithms are tested by calculating the temperature dependence of the magnetization, susceptibility and heat capacity of L10-FePt nanoparticles in a range including the critical region. The cluster models yield numerical results in good agreement within statistical error with the standard single-spin flipping Monte Carlo method. The variation of the spin autocorrelation time with grain size is used to deduce the dynamic exponent of the algorithms. Our cluster models do not provide a more accurate estimate of the magnetic properties at equilibrium.

  4. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    Lubos Mitas

    2011-01-26

    NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects in application of these tools to the forefront electronic structure problems in molecular and solid systems; expanding the impact of QMC methods and approaches; explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and at present has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the period of the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculations of pfaffians and introduction of backflow coordinates together with overall organization of the code and random walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing and verified efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc and partially three graduate students over the period of the grant duration, it has resulted in 13

  5. Coherent Scattering Imaging Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Hassan, Laila Abdulgalil Rafik

    Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low-dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal-to-noise ratio (SNR). Contrast also increased as the source voltage increased. Increasing the grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to the noise level. The optimal source-to-sample distance was such that the source is located at the focal distance of the grid. A carcinoma lump of 0.5x0.5x0.5 cm3 in size was detectable, which is reasonable considering the high noise due to the relatively small number of incident photons used for computational reasons. Further study is needed to assess the effects of breast density and breast thickness

  6. Monte Carlo-Minimization and Monte Carlo Recursion Approaches to Structure and Free Energy.

    NASA Astrophysics Data System (ADS)

    Li, Zhenqin

    1990-08-01

    Biological systems are intrinsically "complex", involving many degrees of freedom, heterogeneity, and strong interactions among components. For the simplest of biological substances, e.g., biomolecules, which obey the laws of thermodynamics, we may attempt a statistical mechanical investigation. Even for these simplest many-body systems, assuming microscopic interactions are completely known, current computational methods for characterizing the overall structure and free energy face the fundamental challenge of computation that grows exponentially with the number of degrees of freedom. In an attempt to surmount such problems, two computational procedures, the Monte Carlo-minimization and Monte Carlo recursion methods, have been developed as general approaches to the determination of structure and free energy of a complex thermodynamic system. We describe, in Chapter 2, the Monte Carlo-minimization method, which attempts to simulate natural protein folding processes and to overcome the multiple-minima problem. The Monte Carlo-minimization procedure has been applied to a pentapeptide, Met-enkephalin, leading consistently to the lowest-energy structure, which is most likely the global minimum structure for Met-enkephalin in the absence of water, given the ECEPP energy parameters. In Chapter 3 of this thesis, we develop a Monte Carlo recursion method to compute the free energy of a given physical system with known interactions, which has been applied to a 32-particle Lennard-Jones fluid. In Chapter 4, we describe an efficient implementation of the recursion procedure for the computation of the free energy of liquid water, with both MCY and TIP4P potential parameters for water. As a further demonstration of the power of the recursion method for calculating free energy, a general formalism of cluster formation from monatomic vapor is developed in Chapter 5. 
The Gibbs free energy of constrained clusters can be computed efficiently using the

  7. Development of a Monte Carlo-Based Electron Beam Treatment Planning System: Clinical Application in Optimization of a Combined-Electron Technique for Treatment of Retinoblastoma.

    NASA Astrophysics Data System (ADS)

    Al-Beteri, Abdulkarim A.

    1990-06-01

    Development of a new three-dimensional Monte Carlo code for simulating electron transport in heterogeneous media for the purpose of electron-beam treatment planning is described. It involved devising improved mathematical representations for the probability distributions governing the processes of electron multiple scattering and bremsstrahlung production. An efficient technique is used for random sampling of the probability distributions, based on a modified acceptance-rejection sampling method that employs an envelope-type rejection function. In addition to predicting correct electron fluence differential in energy, angle, and both energy and angle at different depths in a variety of materials, the developed code is capable of predicting the following: (1) one-cubic-millimeter-resolution electron absorbed-dose distributions in a heterogeneous phantom irradiated by electrons through circular, square, and rectangular fields at different SSDs; (2) perturbation patterns in the absorbed-dose distributions caused by the three most common perturbing agents encountered in the human body (body surface obliquity, bone heterogeneities, and air cavities); (3) dose enhancement anterior to a bone heterogeneity; (4) absorbed-dose perturbations caused by an air cavity adjacent to an irradiated volume but outside the radiation field. This study investigated the effect of heterogeneities on absorbed-dose distributions in phantoms irradiated by electron beams. Perturbation patterns are correctly predicted in depth-dose curves and in absorbed-dose distributions throughout an irradiated volume. For example, the code correctly predicts the doubling of the maximum absorbed dose along the beam central axis when a long narrow air cavity is centered in the field, and the isodose-level perturbations associated with surface obliquity and the presence of heterogeneities. The developed code is used in optimization of a combined-electron-beams irradiation technique for treatment of

  8. The design of a peptide sequence to inhibit HIV replication: a search algorithm combining Monte Carlo and self-consistent mean field techniques.

    PubMed

    Xiao, Xingqing; Hall, Carol K; Agris, Paul F

    2014-01-01

    We developed a search algorithm combining Monte Carlo (MC) and self-consistent mean field techniques to evolve a peptide sequence that has good binding capability to the anticodon stem and loop (ASL) of human lysine tRNA species, tRNA(Lys3), with the ultimate purpose of breaking the replication cycle of human immunodeficiency virus-1. The starting point is the 15-amino-acid sequence, RVTHHAFLGAHRTVG, found experimentally by Agris and co-workers to bind selectively to hypermodified tRNA(Lys3). The peptide backbone conformation is determined via atomistic simulation of the peptide-ASL(Lys3) complex and then held fixed throughout the search. The proportion of amino acids of various types (hydrophobic, polar, charged, etc.) is varied to mimic different peptide hydration properties. Three different sets of hydration properties were examined in the search algorithm to see how this affects evolution to the best-binding peptide sequences. Certain amino acids are commonly found at fixed sites for all three hydration states, some necessary for binding affinity and some necessary for binding specificity. Analysis of the binding structure and the various contributions to the binding energy shows that: 1) two hydrophilic residues (asparagine at site 11 and cysteine at site 12) "recognize" the ASL(Lys3) through the VDW energy, and thereby contribute to its binding specificity, and 2) the positively charged arginines at sites 4 and 13 preferentially attract the negatively charged sugar rings and phosphate linkages, and thereby contribute to the binding affinity.
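
    The Monte Carlo half of such a search can be sketched as a Metropolis walk over sequence space: propose a point mutation, score the candidate, and accept or reject by the Metropolis criterion. The scoring function below is an invented stand-in (the paper evaluates binding with atomistic peptide-ASL(Lys3) energies plus a mean field over side-chain states); only the search loop itself is the point.

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def binding_score(seq):
    # Toy stand-in energy, NOT the paper's model: an arbitrary per-site
    # background term plus a reward for arginine (R) at sites 4 and 13
    # (1-indexed), echoing the charged-residue preference reported above.
    score = sum(0.1 * (ord(aa) % 7) for aa in seq)
    if seq[3] == "R":
        score -= 2.0
    if seq[12] == "R":
        score -= 2.0
    return score

def metropolis_search(start, steps=5000, temperature=1.0, seed=0):
    rng = random.Random(seed)
    current, e_cur = list(start), binding_score(start)
    best, e_best = list(current), e_cur
    for _ in range(steps):
        trial = list(current)
        # Propose a point mutation at a random site.
        trial[rng.randrange(len(trial))] = rng.choice(AMINO_ACIDS)
        e_trial = binding_score(trial)
        # Metropolis criterion: always accept downhill, sometimes uphill.
        if e_trial <= e_cur or rng.random() < math.exp(-(e_trial - e_cur) / temperature):
            current, e_cur = trial, e_trial
            if e_cur < e_best:
                best, e_best = list(current), e_cur
    return "".join(best), e_best

seq, score = metropolis_search("RVTHHAFLGAHRTVG")
```

    The temperature controls how often uphill mutations are accepted; annealing it downward during the run is a common variant.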

  9. SU-E-T-256: Optimizing the Combination of Targeted Radionuclide Therapy Agents Using a Multi-Scale Patient-Specific Monte Carlo Dosimetry Platform

    SciTech Connect

    Besemer, A; Bednarz, B; Titz, B; Grudzinski, J; Weichert, J; Hall, L

    2014-06-01

    Purpose: Combination targeted radionuclide therapy (TRT) is appealing because it can potentially exploit different mechanisms of action from multiple radionuclides as well as the variable dose rates due to the different radionuclide half-lives. This work describes the development of a multi-objective optimization algorithm to calculate the optimal ratio of radionuclide injection activities for delivery of combination TRT. Methods: The ‘diapeutic’ (diagnostic and therapeutic) agent, CLR1404, was used as a proof-of-principle compound in this work. Isosteric iodine substitution in CLR1404 creates a molecular imaging agent when labeled with I-124 or a targeted radiotherapeutic agent when labeled with I-125 or I-131. PET/CT images of high grade glioma patients were acquired at 4.5, 24, and 48 hours post injection of 124I-CLR1404. The therapeutic 131I-CLR1404 and 125I-CLR1404 absorbed dose (AD) and biological effective dose (BED) were calculated for each patient using a patient-specific Monte Carlo dosimetry platform. The optimal ratio of injection activities for each radionuclide was calculated with a multi-objective optimization algorithm using the weighted sum method. Objective functions such as the tumor dose heterogeneity and the ratio of the normal tissue to tumor doses were minimized and the relative importance weights of each optimization function were varied. Results: For each optimization function, the program outputs a Pareto surface map representing all possible combinations of radionuclide injection activities so that values that minimize the objective function can be visualized. A Pareto surface map of the weighted sum given a set of user-specified importance weights is also displayed. Additionally, the ratio of optimal injection activities as a function of all possible importance weights is generated so that the user can select the optimal ratio based on the desired weights. Conclusion: Multi-objective optimization of radionuclide injection activities
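
    The weighted sum method named in the abstract combines the objectives into a single scalar, w1*f1 + w2*f2, and re-minimizes for each choice of importance weights; sweeping the weights traces out points on the Pareto front. The sketch below uses invented one-parameter dose objectives purely to show the mechanics (the paper's objectives come from patient-specific Monte Carlo dose maps).

```python
# Hypothetical objectives of the I-131 activity fraction f131 in [0, 1];
# both are placeholders, not the paper's dosimetric models.
def heterogeneity(f131):
    # Toy objective 1: tumor dose heterogeneity, minimized near f131 = 0.7.
    return (f131 - 0.7) ** 2

def normal_to_tumor(f131):
    # Toy objective 2: normal-tissue-to-tumor dose ratio, rising with f131.
    return 0.2 + 0.3 * f131

def weighted_sum(f131, w1, w2):
    # Scalarize the two objectives with user-specified importance weights.
    return w1 * heterogeneity(f131) + w2 * normal_to_tumor(f131)

def best_ratio(w1, w2, n=1001):
    # Grid-scan the activity fraction and keep the minimizer of the
    # weighted sum; one such minimization per weight pair.
    candidates = [i / (n - 1) for i in range(n)]
    return min(candidates, key=lambda f: weighted_sum(f, w1, w2))

# Varying the weights maps out the trade-off between the two objectives.
ratios = [best_ratio(w1, w2) for w1, w2 in [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]]
```

    With weight entirely on heterogeneity the optimum sits at f131 = 0.7; with weight entirely on normal-tissue sparing it drops to 0, and mixed weights interpolate between the two.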

  10. Monte Carlo Simulation of Endlinking Oligomers

    NASA Technical Reports Server (NTRS)

    Hinkley, Jeffrey A.; Young, Jennifer A.

    1998-01-01

    This report describes initial efforts to model the endlinking reaction of phenylethynyl-terminated oligomers. Several different molecular weights were simulated using the Bond Fluctuation Monte Carlo technique on a 20 x 20 x 20 unit lattice with periodic boundary conditions. After a monodisperse "melt" was equilibrated, chain ends were linked whenever they came within the allowed bond distance. Ends remained reactive throughout, so that multiple links were permitted. Even under these very liberal crosslinking assumptions, geometrical factors limited the degree of crosslinking. Average crosslink functionalities were 2.3 to 2.6; surprisingly, they did not depend strongly on the chain length. These results agreed well with the degrees of crosslinking inferred from experiment in a cured phenylethynyl-terminated polyimide oligomer.
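
    The geometric limit on crosslinking can be illustrated with a much simpler calculation than the Bond Fluctuation model used above: scatter reactive chain ends in a periodic 20 x 20 x 20 box and link every pair closer than a bond cutoff. All numbers below (end count, cutoff) are arbitrary illustration values, not the report's parameters.

```python
import random

def average_functionality(n_ends=400, box=20.0, cutoff=3.0, seed=2):
    # Scatter reactive ends uniformly in a periodic box and count, for each
    # end, how many partners lie within the bond cutoff. Purely geometric:
    # no chain connectivity or excluded volume, unlike the real simulation.
    rng = random.Random(seed)
    ends = [(rng.uniform(0, box), rng.uniform(0, box), rng.uniform(0, box))
            for _ in range(n_ends)]
    links = [0] * n_ends
    for i in range(n_ends):
        for j in range(i + 1, n_ends):
            d2 = 0.0
            for a, b in zip(ends[i], ends[j]):
                d = abs(a - b)
                d = min(d, box - d)          # minimum-image periodic distance
                d2 += d * d
            if d2 < cutoff * cutoff:         # "react" this pair of ends
                links[i] += 1
                links[j] += 1
    return sum(links) / n_ends

avg = average_functionality()
```

    Even with every geometrically possible link formed, the average functionality is set by the local density of ends within the bond cutoff, which is the flavor of the geometric limitation the report observes.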

  11. Exploring theory space with Monte Carlo reweighting

    DOE PAGES

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; ...

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
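
    The core of event reweighting is attaching a per-event weight w = p_new(x) / p_old(x) so that a sample generated under one model yields expectation values under another. A minimal sketch with toy exponential densities standing in for matrix elements (the densities and rates are invented for illustration):

```python
import math
import random

def generate(n, lam, seed=1):
    # Generate events once under the benchmark model: x ~ Exp(lam).
    rng = random.Random(seed)
    return [rng.expovariate(lam) for _ in range(n)]

def reweight_mean(events, lam_old, lam_new):
    # p(x) = lam * exp(-lam * x); weight each event by the density ratio
    # p_new / p_old, then form the self-normalized weighted mean.
    weights = [(lam_new / lam_old) * math.exp(-(lam_new - lam_old) * x)
               for x in events]
    return sum(w * x for w, x in zip(weights, events)) / sum(weights)

events = generate(200000, lam=2.0)            # benchmark-model sample
est = reweight_mean(events, 2.0, 3.0)         # re-used for the new model
```

    The same sample now estimates the new model's mean (1/3 for Exp(3)) without regenerating or re-simulating events, which is exactly the economy the abstract advocates; the price is a loss of effective statistics where the two densities differ strongly.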

  12. Chemical application of diffusion quantum Monte Carlo

    NASA Technical Reports Server (NTRS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1984-01-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speeds of the codes relative to one another as a function of C, and relative to the VAX, are discussed. The dependence of computational time on the number of basis functions is also discussed and compared with that of traditional quantum chemistry codes on traditional computer architectures.

  13. Monte Carlo calculation for microplanar beam radiography.

    PubMed

    Company, F Z; Allen, B J; Mino, C

    2000-09-01

    In radiography the scattered radiation from the off-target region decreases the contrast of the target image. We propose that a bundle of collimated, closely spaced, microplanar beams can reduce the scattered radiation and eliminate the effect of secondary electron dose, thus increasing the image dose contrast in the detector. The lateral and depth dose distributions of 20-200 keV microplanar beams are investigated using the EGS4 Monte Carlo code to calculate the depth doses and dose profiles in a 6 cm x 6 cm x 6 cm tissue phantom. The maximum dose on the primary beam axis (peak) and the minimum inter-beam scattered dose (valley) are compared at different photon energies and the optimum energy range for microbeam radiography is found. Results show that a bundle of closely spaced microplanar beams can give superior contrast imaging to a single macrobeam of the same overall area.

  14. Lunar Regolith Albedos Using Monte Carlos

    NASA Technical Reports Server (NTRS)

    Wilson, T. L.; Andersen, V.; Pinsky, L. S.

    2003-01-01

    The analysis of planetary regoliths for their backscatter albedos produced by cosmic rays (CRs) is important for space exploration and its potential contributions to science investigations in fundamental physics and astrophysics. Albedos affect all such experiments and the personnel that operate them. Groups have analyzed the production rates of various particles and elemental species by planetary surfaces when bombarded with Galactic CR fluxes, both theoretically and by means of various transport codes, some of which have emphasized neutrons. Here we report on the preliminary results of our current Monte Carlo investigation into the production of charged particles, neutrons, and neutrinos by the lunar surface using FLUKA. In contrast to previous work, the effects of charm are now included.

  15. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

    The general accuracy law that governs the Monte Carlo ray-tracing algorithms commonly used for the calculation of radiative entities in the thermal analysis of spacecraft is presented. These entities involve transfer of radiative energy either from a single source to a target (e.g., the configuration factors), or from several sources to a target (e.g., the absorbed heat fluxes). In fact, the former is just a particular case of the latter. The accuracy model is later applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered throughout the discussion, similar results can be derived for the absolute error.

  16. Noncovalent Interactions by Quantum Monte Carlo.

    PubMed

    Dubecký, Matúš; Mitas, Lubos; Jurečka, Petr

    2016-05-11

    Quantum Monte Carlo (QMC) is a family of stochastic methods for solving quantum many-body problems such as the stationary Schrödinger equation. This review introduces the basic notions of electronic-structure QMC based on random walks in real space, as well as its advances and adaptations to systems with noncovalent interactions. Specific issues such as fixed-node error cancellation, construction of trial wave functions, and efficiency considerations that allow for benchmark-quality QMC energy differences are described in detail. A comprehensive overview of articles covers QMC applications to systems with noncovalent interactions over the last three decades. The current status of QMC with regard to efficiency, applicability, and usability by nonexperts, together with further considerations about QMC developments, limitations, and unsolved challenges, is discussed as well.
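
    The "random walks in real space" at the heart of electronic-structure QMC can be illustrated with the simplest member of the family, variational Monte Carlo: a Metropolis walk samples |psi|^2 for a trial wavefunction and averages the local energy. The sketch below (illustrative, not from the review) uses the hydrogen atom with psi(r) = exp(-alpha*r), for which alpha = 1 gives a zero-variance local energy of exactly -0.5 Hartree.

```python
import math
import random

def local_energy(r, alpha):
    # E_L = (H psi)/psi for psi = exp(-alpha r) in the hydrogen atom:
    # E_L = -alpha^2/2 + alpha/r - 1/r  (Hartree atomic units).
    return -0.5 * alpha * alpha + alpha / r - 1.0 / r

def vmc_energy(alpha, steps=20000, step_size=0.5, seed=3):
    rng = random.Random(seed)
    x, y, z = 0.5, 0.5, 0.5
    total = 0.0
    for _ in range(steps):
        # Propose a uniform displacement of the walker.
        xn = x + rng.uniform(-step_size, step_size)
        yn = y + rng.uniform(-step_size, step_size)
        zn = z + rng.uniform(-step_size, step_size)
        r_old = math.sqrt(x * x + y * y + z * z)
        r_new = math.sqrt(xn * xn + yn * yn + zn * zn)
        # Metropolis acceptance on |psi|^2 = exp(-2 alpha r).
        if rng.random() < math.exp(-2.0 * alpha * (r_new - r_old)):
            x, y, z = xn, yn, zn
        r = math.sqrt(x * x + y * y + z * z)
        total += local_energy(r, alpha)
    return total / steps

e = vmc_energy(1.0)   # exact trial function: every sample gives -0.5
```

    For alpha away from 1 the estimator acquires statistical variance, and minimizing the variational energy (or variance) over alpha recovers the exact value; fixed-node diffusion Monte Carlo builds on the same walker machinery.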

  17. Hybrid algorithms in quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kim, Jeongnim; Esler, Kenneth P.; McMinis, Jeremy; Morales, Miguel A.; Clark, Bryan K.; Shulenburger, Luke; Ceperley, David M.

    2012-12-01

    With advances in algorithms and growing computing powers, quantum Monte Carlo (QMC) methods have become a leading contender for high accuracy calculations for the electronic structure of realistic systems. The performance gain on recent HPC systems is largely driven by increasing parallelism: the number of compute cores of a SMP and the number of SMPs have been going up, as the Top500 list attests. However, the available memory as well as the communication and memory bandwidth per element has not kept pace with the increasing parallelism. This severely limits the applicability of QMC and the problem size it can handle. OpenMP/MPI hybrid programming provides applications with simple but effective solutions to overcome efficiency and scalability bottlenecks on large-scale clusters based on multi/many-core SMPs. We discuss the design and implementation of hybrid methods in QMCPACK and analyze its performance on current HPC platforms characterized by various memory and communication hierarchies.

  18. Monte Carlo applications to acoustical field solutions

    NASA Technical Reports Server (NTRS)

    Haviland, J. K.; Thanedar, B. D.

    1973-01-01

    The Monte Carlo technique is proposed for the determination of the acoustical pressure-time history at chosen points in a partial enclosure, the central idea of this technique being the tracing of acoustical rays. A statistical model is formulated and an algorithm for pressure is developed, the conformity of which is examined by two approaches and is shown to give the known results. The concepts that are developed are applied to the determination of the transient field due to a sound source in a homogeneous medium in a rectangular enclosure with perfect reflecting walls, and the results are compared with those presented by Mintzer based on the Laplace transform approach, as well as with a normal mode solution.

  19. Monte Carlo modeling and meteor showers

    NASA Technical Reports Server (NTRS)

    Kulikova, N. V.

    1987-01-01

    Prediction of short-lived increases in the cosmic dust influx and in the concentration of atoms and ions of meteor origin in the lower thermosphere, and determination of the frequency of micrometeor impacts on spacecraft, are all of scientific and practical interest, and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well-known meteor streams was simulated and the possibility of a genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.

  20. Green's function Monte Carlo in nuclear physics

    SciTech Connect

    Carlson, J.

    1990-01-01

    We review the status of Green's Function Monte Carlo (GFMC) methods as applied to problems in nuclear physics. New methods have been developed to handle the spin and isospin degrees of freedom that are a vital part of any realistic nuclear physics problem, whether at the level of quarks or nucleons. We discuss these methods and then summarize results obtained recently for light nuclei, including ground state energies, three-body forces, charge form factors and the Coulomb sum. As an illustration of the applicability of GFMC to quark models, we also consider the possible existence of bound exotic multi-quark states within the framework of flux-tube quark models. 44 refs., 8 figs., 1 tab.

  1. Resist develop prediction by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sohn, Dong-Soo; Jeon, Kyoung-Ah; Sohn, Young-Soo; Oh, Hye-Keun

    2002-07-01

    Various resist development models, from Dill's pioneering model in 1975 to Shipley's recent enhanced notch model, have been suggested to describe the phenomena. The statistical Monte Carlo method can be applied to processes such as development and post-exposure bake. The motions of the developer during the development process were traced by using this method. We have considered that the surface edge roughness of the resist depends on the weight percentage of protected and de-protected polymer in the resist. The results agree well with those of other papers. This study can be helpful for developing new photoresists and developers that can be used to pattern device features smaller than 100 nm.

  2. Parallel tempering Monte Carlo in LAMMPS.

    SciTech Connect

    Rintoul, Mark Daniel; Plimpton, Steven James; Sears, Mark P.

    2003-11-01

    We present here the details of the implementation of the parallel tempering Monte Carlo technique in LAMMPS, a heavily used massively parallel molecular dynamics code at Sandia. This technique allows many replicas of a system to be run at different simulation temperatures. At various points in the simulation, configurations can be swapped between different temperature environments and then continued. This allows large regions of energy space to be sampled very quickly, and allows minimum-energy configurations to emerge in very complex systems, such as large biomolecular systems. By including this algorithm in an existing code, we immediately gain all of the previous work that had been put into LAMMPS, and make this technique quickly available to the entire Sandia and international LAMMPS community. Finally, we present an example of this code applied to folding a small protein.
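
    The swap move described above follows the standard replica-exchange rule: a swap between neighboring temperatures is accepted with probability min(1, exp(dBeta * dE)). A schematic serial sketch on a one-dimensional double-well potential (illustrative only, not the LAMMPS implementation, which runs replicas on separate processor partitions):

```python
import math
import random

def energy(x):
    # Double well with minima at x = -1 and x = +1; cold replicas get
    # trapped in one well without the tempering swaps.
    return (x * x - 1.0) ** 2

def parallel_tempering(temps, sweeps=2000, seed=7):
    rng = random.Random(seed)
    xs = [1.0 for _ in temps]                 # one replica per temperature
    for _ in range(sweeps):
        # Ordinary Metropolis move within each replica at its own T.
        for i, T in enumerate(temps):
            trial = xs[i] + rng.uniform(-0.5, 0.5)
            dE = energy(trial) - energy(xs[i])
            if dE <= 0.0 or rng.random() < math.exp(-dE / T):
                xs[i] = trial
        # Attempt a configuration swap between a random neighboring pair.
        i = rng.randrange(len(temps) - 1)
        d_beta = 1.0 / temps[i] - 1.0 / temps[i + 1]
        dE = energy(xs[i]) - energy(xs[i + 1])
        if rng.random() < min(1.0, math.exp(d_beta * dE)):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

replicas = parallel_tempering([0.1, 0.5, 1.0, 2.0])
```

    Hot replicas cross the barrier freely and hand well-explored configurations down the temperature ladder, which is how the method reaches minimum-energy states in rugged landscapes.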

  3. Geometric Monte Carlo and black Janus geometries

    NASA Astrophysics Data System (ADS)

    Bak, Dongsu; Kim, Chanju; Kim, Kyung Kiu; Min, Hyunsoo; Song, Jeong-Pil

    2017-04-01

    We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three and five dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.

  4. Monte Carlo simulations of medical imaging modalities

    SciTech Connect

    Estes, G.P.

    1998-09-01

    Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.

  5. Markov Chain Monte Carlo from Lagrangian Dynamics

    PubMed Central

    Lan, Shiwei; Stathopoulos, Vasileios; Shahbaba, Babak; Girolami, Mark

    2014-01-01

    Hamiltonian Monte Carlo (HMC) improves the computational efficiency of the Metropolis-Hastings algorithm by reducing its random walk behavior. Riemannian HMC (RHMC) further improves the performance of HMC by exploiting the geometric properties of the parameter space. However, the geometric integrator used for RHMC involves implicit equations that require fixed-point iterations. In some cases, the computational overhead for solving implicit equations undermines RHMC's benefits. In an attempt to circumvent this problem, we propose an explicit integrator that replaces the momentum variable in RHMC by velocity. We show that the resulting transformation is equivalent to transforming Riemannian Hamiltonian dynamics to Lagrangian dynamics. Experimental results suggest that our method improves RHMC's overall computational efficiency in the cases considered. All computer programs and data sets are available online (http://www.ics.uci.edu/~babaks/Site/Codes.html) in order to allow replication of the results reported in this paper. PMID:26240515
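
    The baseline that RHMC and the Lagrangian approach generalize is standard Euclidean HMC with an explicit leapfrog integrator. A minimal sketch for a 1-D standard normal target (illustrative, not the paper's position-dependent-metric method):

```python
import math
import random

def U(q):
    # Potential energy = -log target density (up to a constant);
    # here the target is a standard normal.
    return 0.5 * q * q

def grad_U(q):
    return q

def hmc(n_samples, eps=0.2, n_leapfrog=20, seed=11):
    rng = random.Random(seed)
    q, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                # resample auxiliary momentum
        q_new, p_new = q, p
        p_new -= 0.5 * eps * grad_U(q_new)     # leapfrog: half step in p
        for _ in range(n_leapfrog - 1):
            q_new += eps * p_new               # full step in q
            p_new -= eps * grad_U(q_new)       # full step in p
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad_U(q_new)     # final half step in p
        # Metropolis accept/reject on the total energy H = U(q) + p^2/2,
        # which corrects the integrator's discretization error.
        dH = U(q_new) + 0.5 * p_new * p_new - U(q) - 0.5 * p * p
        if dH <= 0.0 or rng.random() < math.exp(-dH):
            q = q_new
        samples.append(q)
    return samples

draws = hmc(20000)
```

    Because leapfrog for a Euclidean metric is explicit, each step is one gradient evaluation; the fixed-point iterations that motivate the paper only appear when the mass matrix depends on position.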

  6. Exploring theory space with Monte Carlo reweighting

    SciTech Connect

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.

  7. The Monte Carlo Method. Popular Lectures in Mathematics.

    ERIC Educational Resources Information Center

    Sobol', I. M.

    The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…

  8. Economic Risk Analysis: Using Analytical and Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    O'Donnell, Brendan R.; Hickner, Michael A.; Barna, Bruce A.

    2002-01-01

    Describes the development and instructional use of a Microsoft Excel spreadsheet template that facilitates analytical and Monte Carlo risk analysis of investment decisions. Discusses a variety of risk assessment methods followed by applications of the analytical and Monte Carlo methods. Uses a case study to illustrate use of the spreadsheet tool…

  9. A Primer in Monte Carlo Integration Using Mathcad

    ERIC Educational Resources Information Center

    Hoyer, Chad E.; Kegerreis, Jeb S.

    2013-01-01

    The essentials of Monte Carlo integration are presented for use in an upper-level physical chemistry setting. A Mathcad document that aids in the dissemination and utilization of this information is described and is available in the Supporting Information. A brief outline of Monte Carlo integration is given, along with ideas and pedagogy for…
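
    The essentials in question fit in a few lines: approximate an integral by the average of the integrand at uniformly random points, with a statistical error bar that shrinks as 1/sqrt(N). A language-agnostic sketch of what the Mathcad document implements (the function and interval here are arbitrary examples):

```python
import math
import random

def mc_integrate(f, a, b, n=100000, seed=5):
    # Estimate integral of f over [a, b] as (b - a) times the sample mean
    # of f at n uniform points, with the standard error of that mean.
    rng = random.Random(seed)
    vals = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    estimate = (b - a) * mean
    stderr = (b - a) * math.sqrt(var / n)
    return estimate, stderr

est, err = mc_integrate(math.exp, 0.0, 1.0)   # exact value: e - 1
```

    The reported stderr is what makes the method pedagogically useful: students can verify that quadrupling n halves the error bar, independent of dimension.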

  10. Morphological evolution of growing crystals - A Monte Carlo simulation

    NASA Technical Reports Server (NTRS)

    Xiao, Rong-Fu; Alexander, J. Iwan D.; Rosenberger, Franz

    1988-01-01

    The combined effects of nutrient diffusion and surface kinetics on the crystal morphology were investigated using a Monte Carlo model to simulate the evolving morphology of a crystal growing from a two-component gaseous nutrient phase. The model combines nutrient diffusion, based on a modified diffusion-limited aggregation process, with anisotropic surface-attachment kinetics and surface diffusion. A variety of conditions, ranging from kinetic-controlled to diffusion-controlled growth, were examined. Successive transitions from compact faceted (dominant surface kinetics) to open dendritic morphologies (dominant volume diffusion) were obtained.
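
    The volume-diffusion ingredient of such models is classic diffusion-limited aggregation: walkers diffuse in from far away and stick on first contact with the cluster. The sketch below shows only that purely diffusion-controlled limit on a 2-D lattice (the paper's model adds anisotropic attachment kinetics and surface diffusion, and works from a two-component nutrient phase).

```python
import math
import random

def dla(n_particles=100, seed=9):
    rng = random.Random(seed)
    cluster = {(0, 0)}                       # seed "crystal" at the origin
    rmax = 0                                 # current cluster radius
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_particles):
        launch, kill = rmax + 5, rmax + 20
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x = round(launch * math.cos(theta))
        y = round(launch * math.sin(theta))
        while True:
            dx, dy = rng.choice(moves)       # unbiased lattice random walk
            x, y = x + dx, y + dy
            if x * x + y * y > kill * kill:  # wandered too far: relaunch
                theta = rng.uniform(0.0, 2.0 * math.pi)
                x = round(launch * math.cos(theta))
                y = round(launch * math.sin(theta))
                continue
            # Stick on first contact with the cluster (diffusion control).
            if any((x + mx, y + my) in cluster for mx, my in moves):
                cluster.add((x, y))
                rmax = max(rmax, round(math.sqrt(x * x + y * y)))
                break
    return cluster

aggregate = dla()
```

    Pure first-contact sticking produces the open dendritic limit; lowering the sticking probability or allowing attached particles to hop along the surface pushes the morphology toward the compact faceted limit, which is the transition the simulation above explores.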

  11. Monte Carlo scatter correction for SPECT

    NASA Astrophysics Data System (ADS)

    Liu, Zemei

    The goal of this dissertation is to present a quantitatively accurate and computationally fast scatter correction method that is robust and easily accessible for routine applications in SPECT imaging. A Monte Carlo based scatter estimation method is investigated and developed further. The Monte Carlo simulation program SIMIND (Simulating Medical Imaging Nuclear Detectors) was specifically developed to simulate clinical SPECT systems. The SIMIND scatter estimation (SSE) method was developed further using a multithreading technique to distribute the scatter estimation task across multiple threads running concurrently on multi-core CPUs to accelerate the scatter estimation process. An analytical collimator model, which produces less noise, was used during SSE. The research includes the addition to SIMIND of charge transport modeling in cadmium zinc telluride (CZT) detectors. Phenomena associated with radiation-induced charge transport, including charge trapping, charge diffusion, charge sharing between neighboring detector pixels, as well as uncertainties in the detection process, are addressed. Experimental measurements and simulation studies were designed for scintillation-crystal-based SPECT and CZT-based SPECT systems to verify and evaluate the expanded SSE method. Jaszczak Deluxe and Anthropomorphic Torso Phantoms (Data Spectrum Corporation, Hillsborough, NC, USA) were used for experimental measurements, and digital versions of the same phantoms were employed during simulations to mimic experimental acquisitions. This study design enabled easy comparison of experimental and simulated data. The results have consistently shown that the SSE method performed similarly to or better than the triple energy window (TEW) and effective scatter source estimation (ESSE) methods for experiments on all the clinical SPECT systems. The SSE method is proven to be a viable method for scatter estimation for routine clinical use.

  12. Parton shower Monte Carlo event generators

    NASA Astrophysics Data System (ADS)

    Webber, Bryan

    2011-12-01

    A parton shower Monte Carlo event generator is a computer program designed to simulate the final states of high-energy collisions in full detail down to the level of individual stable particles. The aim is to generate a large number of simulated collision events, each consisting of a list of final-state particles and their momenta, such that the probability to produce an event with a given list is proportional (approximately) to the probability that the corresponding actual event is produced in the real world. The Monte Carlo method makes use of pseudorandom numbers to simulate the event-to-event fluctuations intrinsic to quantum processes. The simulation normally begins with a hard subprocess, shown as a black blob in Figure 1, in which constituents of the colliding particles interact at a high momentum scale to produce a few outgoing fundamental objects: Standard Model quarks, leptons and/or gauge or Higgs bosons, or hypothetical particles of some new theory. The partons (quarks and gluons) involved, as well as any new particles with colour, radiate virtual gluons, which can themselves emit further gluons or produce quark-antiquark pairs, leading to the formation of parton showers (brown). During parton showering the interaction scale falls and the strong interaction coupling rises, eventually triggering the process of hadronization (yellow), in which the partons are bound into colourless hadrons. On the same scale, the initial-state partons in hadronic collisions are confined in the incoming hadrons. In hadron-hadron collisions, the other constituent partons of the incoming hadrons undergo multiple interactions which produce the underlying event (green). Many of the produced hadrons are unstable, so the final stage of event generation is the simulation of the hadron decays.
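
The use of pseudorandom numbers to drive the falling interaction scale can be illustrated with a toy Sudakov form factor. The power-law form Delta(t0, t) = (t/t0)**alpha below is an illustrative assumption, not the evolution kernel of any particular generator; inverting Delta = r for a uniform deviate r gives the next branching scale.

```python
import random

def sample_emission_scales(t_start, t_cut, alpha=0.5, seed=3):
    """Toy parton-shower scale sequence: with the illustrative Sudakov
    form factor Delta(t0, t) = (t / t0)**alpha, setting Delta equal to a
    uniform deviate r and solving gives t_next = t * r**(1 / alpha)."""
    rng = random.Random(seed)
    scales = []
    t = t_start
    while True:
        t = t * rng.random() ** (1.0 / alpha)
        if t < t_cut:
            break  # evolution stops at the hadronization cutoff scale
        scales.append(t)
    return scales
```

Each call produces a strictly decreasing chain of scales, mirroring how the interaction scale falls during showering until hadronization takes over.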

  13. Fission Matrix Capability for MCNP Monte Carlo

    SciTech Connect

    Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.

    2012-09-05

In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems with poor neutron communication between regions are also slow to converge. The fission matrix method, implemented in MCNP [1], addresses these problems. When the Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods apply approximations/discretization in energy, space, and direction to the kernel; consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we use the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just as in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
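
The linear-algebra analogy can be made concrete with a plain power iteration on a small, hypothetical spatially discretized fission matrix (the 2x2 kernel used in the example below is invented for illustration; a tallied MCNP fission matrix would be far larger and statistically noisy).

```python
def power_iteration(F, iters=100):
    """Apply the fission matrix F repeatedly to a source vector: the
    vector converges to the fundamental eigenvector (the fission source)
    and the normalization constant to the fundamental eigenvalue k_eff."""
    n = len(F)
    s = [1.0 / n] * n  # flat initial source guess
    k = 1.0
    for _ in range(iters):
        new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(new)          # eigenvalue estimate (source normalized to 1)
        s = [x / k for x in new]
    return k, s
```

For a symmetric kernel such as [[2, 1], [1, 2]] the iteration converges to k = 3 with a flat source, and the convergence rate is governed by the dominance ratio, exactly as described above.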

  14. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  15. Reconstruction of Monte Carlo replicas from Hessian parton distributions

    NASA Astrophysics Data System (ADS)

    Hou, Tie-Jiun; Gao, Jun; Huston, Joey; Nadolsky, Pavel; Schmidt, Carl; Stump, Daniel; Wang, Bo-Ting; Xie, Ke Ping; Dulat, Sayipjamal; Pumplin, Jon; Yuan, C. P.

    2017-03-01

We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte Carlo representation are derived. A correction is proposed to address a bias in asymmetric uncertainties introduced by the Taylor series approximation. A numerical program is made available for conversion of Hessian PDFs into Monte Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
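
A minimal sketch of the Hessian-to-replica idea, assuming a common prescription in which each replica displaces the central PDF value along every eigenvector direction by a Gaussian deviate, using the plus or minus displacement according to the deviate's sign so that asymmetry survives. The function and its inputs are illustrative, not the paper's master formulas.

```python
import random

def hessian_to_replica(f0, f_plus, f_minus, rng):
    """Build one Monte Carlo replica of a PDF value f0 from Hessian error
    sets: f_plus[i] / f_minus[i] are the values displaced along eigenvector
    direction i.  The sign of each Gaussian deviate selects the matching
    displacement, so asymmetric uncertainties are retained."""
    f = f0
    for fp, fm in zip(f_plus, f_minus):
        r = rng.gauss(0.0, 1.0)
        f += abs(r) * ((fp - f0) if r >= 0.0 else (fm - f0))
    return f
```

Repeating the draw many times yields an ensemble whose asymmetric standard deviations can then be computed with the master formulas discussed in the abstract.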

  16. Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.

  17. Monte Carlo Estimate to Improve Photon Energy Spectrum Reconstruction

    NASA Astrophysics Data System (ADS)

    Sawchuk, S.

Improvements to planning radiation treatment for cancer patients and quality control of medical linear accelerators (linacs) can be achieved with explicit knowledge of the photon energy spectrum. Monte Carlo (MC) simulations of linac treatment heads and experimental attenuation analysis are among the most popular ways of obtaining these spectra. Attenuation methods, which combine measurements under narrow beam geometry with the associated calculation techniques to reconstruct the spectrum from the acquired data, are very practical in a clinical setting and can also serve to validate MC simulations. A novel reconstruction method [1], which has been modified [2], utilizes Simpson's rule (SR) to approximate and discretize (1)
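
Composite Simpson's rule itself is standard; a minimal implementation (independent of the attenuation formalism above) shows how an integral such as the transmission integral can be discretized.

```python
def simpson(f, a, b, n):
    """Composite Simpson's rule on n (even) subintervals of [a, b]."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return s * h / 3.0
```

The rule is exact for polynomials up to degree three, which is why coarse energy grids already give usable discretizations of smooth attenuation data.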

  18. Positronic molecule calculations using Monte Carlo configuration interaction

    NASA Astrophysics Data System (ADS)

    Coe, Jeremy P.; Paterson, Martin J.

    2016-02-01

    We modify the Monte Carlo configuration interaction procedure to model atoms and molecules combined with a positron. We test this method with standard quantum chemistry basis sets on a number of positronic systems and compare results with the literature and full configuration interaction when appropriate. We consider positronium hydride, positronium hydroxide, lithium positride and a positron interacting with lithium, magnesium or lithium hydride. We demonstrate that we can capture much of the full configuration interaction results, but often require less than 10% of the configurations of these multireference wavefunctions. The effect of the number of frozen orbitals is also discussed.

  19. Sequential Monte Carlo hydraulic state estimation of an irrigation canal

    NASA Astrophysics Data System (ADS)

    Sau, Jacques; Malaterre, Pierre-Olivier; Baume, Jean-Pierre

    2010-04-01

    The estimation in real time of the hydraulic state of irrigation canals is becoming one of the major concerns of network managers. With this end in view, this Note presents a new approach based on the combination of a numerical solution of the open channel Saint-Venant PDE with a sequential Monte Carlo state-space estimation. We shall show that discharges and elevations along the canal are successfully estimated, and also that, concurrently, model parameters identification, such as the Manning-Strickler friction coefficient, can be performed.
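
The sequential Monte Carlo update can be sketched generically; the `propagate` and `likelihood` callables below stand in for the Saint-Venant solver and the water-level observation model, which are of course far more elaborate in the actual application.

```python
import random

def particle_filter_step(particles, weights, propagate, likelihood, rng):
    """One bootstrap-filter update: propagate each particle through the
    state model, reweight by the likelihood of the new observation, then
    resample back to uniform weights."""
    particles = [propagate(p, rng) for p in particles]
    weights = [w * likelihood(p) for w, p in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # multinomial resampling concentrates particles in high-likelihood states
    particles = rng.choices(particles, weights=weights, k=len(particles))
    n = len(particles)
    return particles, [1.0 / n] * n
```

Model parameters such as a Manning-Strickler friction coefficient can be appended to each particle's state vector and identified by the same recursion.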

  20. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    SciTech Connect

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

  1. On-surface synthesis of two-dimensional imine polymers with a tunable band gap: a combined STM, DFT and Monte Carlo investigation

    NASA Astrophysics Data System (ADS)

    Xu, Lirong; Yu, Yanxia; Lin, Jianbin; Zhou, Xin; Tian, Wei Quan; Nieckarz, Damian; Szabelski, Pawel; Lei, Shengbin

    2016-04-01

Two-dimensional polymers are of great interest for many potential applications in nanotechnology. The preparation of crystalline 2D polymers with a tunable band gap is critical for their applications in nano-electronics and optoelectronics. In this work, we try to tune the band gap of 2D imine polymers by expanding the conjugation of the backbone of aromatic diamines both laterally and longitudinally. STM characterization reveals that the regularity of the 2D polymers can be affected by the existence of lateral bulky groups. Density functional theory (DFT) simulations discovered a significant narrowing of the band gap of imine 2D polymers upon the expansion of the conjugation of the monomer backbone, which has been confirmed experimentally by UV absorption measurements. Monte Carlo simulations help us to gain further insight into the controlling factors of the formation of regular 2D polymers, which demonstrated that based on the all rigid assumption, the coexistence of different conformations of the imine moiety has a significant effect on the regularity of the imine 2D polymers.

  2. Multiscale Monte Carlo equilibration: Pure Yang-Mills theory

    SciTech Connect

    Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; Detmold, William; Pochinsky, Andrew V.

    2015-12-29

    In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.

  3. Sequential Monte-Carlo Based Framework for Dynamic Data-Driven Event Reconstruction for Atmospheric Release

    SciTech Connect

    Johannesson, G; Chow, F K; Glascoe, L; Glaser, R E; Hanley, W G; Kosovic, B; Krnjajic, M; Larsen, S C; Lundquist, J K; Mirin, A A; Nitao, J J; Sugiyama, G A

    2005-11-16

    Atmospheric releases of hazardous materials are highly effective means to impact large populations. We propose an atmospheric event reconstruction framework that couples observed data and predictive computer-intensive dispersion models via Bayesian methodology. Due to the complexity of the model framework, a sampling-based approach is taken for posterior inference that combines Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) strategies.

  4. Monte Carlo simulation of chromatin stretching

    NASA Astrophysics Data System (ADS)

    Aumann, Frank; Lankas, Filip; Caudron, Maïwen; Langowski, Jörg

    2006-04-01

    We present Monte Carlo (MC) simulations of the stretching of a single 30nm chromatin fiber. The model approximates the DNA by a flexible polymer chain with Debye-Hückel electrostatics and uses a two-angle zigzag model for the geometry of the linker DNA connecting the nucleosomes. The latter are represented by flat disks interacting via an attractive Gay-Berne potential. Our results show that the stiffness of the chromatin fiber strongly depends on the linker DNA length. Furthermore, changing the twisting angle between nucleosomes from 90° to 130° increases the stiffness significantly. An increase in the opening angle from 22° to 34° leads to softer fibers for small linker lengths. We observe that fibers containing a linker histone at each nucleosome are stiffer compared to those without the linker histone. The simulated persistence lengths and elastic moduli agree with experimental data. Finally, we show that the chromatin fiber does not behave as an isotropic elastic rod, but its rigidity depends on the direction of deformation: Chromatin is much more resistant to stretching than to bending.

  5. Finding Planet Nine: a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-06-01

    Planet Nine is a hypothetical planet located well beyond Pluto that has been proposed in an attempt to explain the observed clustering in physical space of the perihelia of six extreme trans-Neptunian objects or ETNOs. The predicted approximate values of its orbital elements include a semimajor axis of 700 au, an eccentricity of 0.6, an inclination of 30°, and an argument of perihelion of 150°. Searching for this putative planet is already under way. Here, we use a Monte Carlo approach to create a synthetic population of Planet Nine orbits and study its visibility statistically in terms of various parameters and focusing on the aphelion configuration. Our analysis shows that, if Planet Nine exists and is at aphelion, it might be found projected against one out of the four specific areas in the sky. Each area is linked to a particular value of the longitude of the ascending node and two of them are compatible with an apsidal anti-alignment scenario. In addition and after studying the current statistics of ETNOs, a cautionary note on the robustness of the perihelia clustering is presented.
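
A Monte Carlo synthetic population of this kind is easy to sketch; the element ranges below are illustrative brackets around the quoted nominal values (a ~ 700 au, e ~ 0.6, i ~ 30°), not the distributions actually used in the study.

```python
import random

def synthetic_orbits(n, seed=0):
    """Draw n Planet Nine-like orbit candidates; element ranges are
    illustrative only."""
    rng = random.Random(seed)
    population = []
    for _ in range(n):
        a = rng.uniform(600.0, 800.0)    # semimajor axis [au]
        e = rng.uniform(0.5, 0.7)        # eccentricity
        inc = rng.uniform(20.0, 40.0)    # inclination [deg]
        node = rng.uniform(0.0, 360.0)   # longitude of ascending node [deg]
        population.append({"a": a, "e": e, "i": inc, "node": node,
                           "aphelion": a * (1.0 + e)})  # Q = a(1 + e)
    return population
```

Each synthetic orbit can then be evaluated at aphelion and its sky position binned, which is how the candidate search areas are mapped statistically.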

  6. Classical Trajectory and Monte Carlo Techniques

    NASA Astrophysics Data System (ADS)

    Olson, Ronald

The classical trajectory Monte Carlo (CTMC) method originated with Hirschfelder, who studied the H + D2 exchange reaction using a mechanical calculator [58.1]. With the availability of computers, the CTMC method was actively applied to a large number of chemical systems to determine reaction rates, and final state vibrational and rotational populations (see, e.g., Karplus et al. [58.2]). For atomic physics problems, a major step was introduced by Abrines and Percival [58.3], who employed Kepler's equations and the Bohr-Sommerfeld model for atomic hydrogen to investigate electron capture and ionization in intermediate velocity collisions of H+ + H. An excellent description is given by Percival and Richards [58.4]. The CTMC method has a wide range of applicability to strongly-coupled systems, such as collisions by multiply-charged ions [58.5]. In such systems, perturbation methods fail, and basis set limitations of coupled-channel molecular- and atomic-orbital techniques have difficulty in representing the multitude of active excitation, electron capture, and ionization channels. Vector- and parallel-processors now allow increasingly detailed study of the dynamics of the heavy projectile and target, along with the active electrons.

  7. Monte Carlo Simulation of Surface Reactions

    NASA Astrophysics Data System (ADS)

    Brosilow, Benjamin J.

A Monte Carlo study of the catalytic reaction of CO and O_2 over transition metal surfaces is presented, using generalizations of a model proposed by Ziff, Gulari and Barshad (ZGB). A new "constant-coverage" algorithm is described and applied to the model in order to elucidate the behavior near the model's first-order transition, and to draw an analogy between this transition and first-order phase transitions in equilibrium systems. The behavior of the model is then compared to the behavior of CO oxidation systems over Pt single-crystal catalysts. This comparison leads to the introduction of a new variation of the model in which one of the reacting species requires a large ensemble of vacant surface sites in order to adsorb. Further, it is shown that precursor adsorption and an effective Eley-Rideal mechanism must also be included in the model in order to obtain detailed agreement with experiment. Finally, variations of the model on finite and two component lattices are studied as models for low temperature CO oxidation over Noble Metal/Reducible Oxide and alloy catalysts.

  8. Markov Chain Monte Carlo and Irreversibility

    NASA Astrophysics Data System (ADS)

    Ottobre, Michela

    2016-06-01

    Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and we discuss some analytical methods to approach the study of continuous-time irreversible diffusions; ii) most of the rigorous results on irreversible diffusions are available for continuous-time processes; however, for computational purposes one needs to discretize such dynamics. It is well known that the resulting discretized chain will not, in general, retain all the good properties of the process that it is obtained from. In particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible. Therefore iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of a nonreversible dynamics.
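
The detailed balance condition mentioned above is exactly what the classical random-walk Metropolis algorithm enforces; a minimal one-dimensional sketch (reversible, in contrast to the nonreversible schemes the paper studies):

```python
import math
import random

def metropolis(logpi, x0, step_sd, n, seed=0):
    """Random-walk Metropolis with symmetric Gaussian proposals: accepting
    with probability min(1, pi(y) / pi(x)) satisfies detailed balance with
    respect to the target pi, so the resulting chain is reversible."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n):
        y = x + rng.gauss(0.0, step_sd)
        if math.log(rng.random()) < logpi(y) - logpi(x):
            x = y  # accept; otherwise the chain stays at x
        chain.append(x)
    return chain
```

Breaking the symmetry of this accept/reject step, as the irreversible methods discussed above do, can reduce the asymptotic variance at the cost of a more delicate analysis.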

  9. Commensurabilities between ETNOs: a Monte Carlo survey

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-07-01

    Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nine hypothesis; in particular, a number of objects may be trapped in the 5:3 and 3:1 mean motion resonances with a putative Planet Nine with semimajor axis ˜700 au.
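
Checking a pair of semimajor axes for a near commensurability reduces to Kepler's third law: the period ratio (a1/a2)^{3/2} is compared against small-integer fractions. A minimal sketch, in which the tolerance and denominator cutoff are arbitrary choices rather than those of the survey:

```python
from fractions import Fraction

def commensurability(a1, a2, max_den=10, tol=0.01):
    """Return the small-integer period ratio implied by two semimajor
    axes if one lies within tol, else None.  Kepler's third law gives
    P1 / P2 = (a1 / a2)**1.5."""
    ratio = (a1 / a2) ** 1.5
    nearest = Fraction(ratio).limit_denominator(max_den)
    return nearest if abs(float(nearest) - ratio) < tol else None
```

For example, an ETNO with a ≈ 336.5 au would sit near the 3:1 commensurability with a putative planet at a ≈ 700 au; propagating the semimajor-axis uncertainties through such a test with Monte Carlo draws yields the statistical patterns discussed above.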

  10. Europium Structural Effect on a Borosilicate Glass of Nuclear Interest: Combining Experimental Techniques with Reverse Monte Carlo Modelling to Investigate Short to Medium Range Order

    NASA Astrophysics Data System (ADS)

    Bouty, O.; Delaye, J. M.; Peuget, S.; Charpentier, T.

In-depth understanding of the effects of actinides in borosilicate glass matrices used for nuclear waste disposal is of great importance for the spent nuclear fuel reprocessing cycle and fission products immobilization. This work, carried out on ternary simplified glasses (Si, B, Na) doped respectively with 1 mol. % and 3.85 mol. % europium, presents a comprehensive study of the behaviour of trivalent europium taken as a surrogate of trivalent actinides. Neutron scattering, Wide Angle X-ray Scattering, Nuclear Magnetic Resonance, Raman Spectroscopy and Reverse Monte Carlo simulations were performed. For both glasses, it was found that the europium coordination number was around 6 ± 0.2, revealing an octahedral spatial configuration. Europium accommodates in both silicate and borate site distributions, but preferentially in the silicate network. Europium induces a IVB/IIIB ratio decrease and a silicate network polymerization according to the 29Si NMR chemical shift and Raman spectra evolution.

  11. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    SciTech Connect

    Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I

    2014-06-15

Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10{sup 7} x rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the

  12. Anisotropic seismic inversion using a multigrid Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Mewes, Armin; Kulessa, Bernd; McKinley, John D.; Binley, Andrew M.

    2010-10-01

We propose a new approach for the inversion of anisotropic P-wave data based on Monte Carlo methods combined with a multigrid approach. Simulated annealing facilitates objective minimization of the functional characterizing the misfit between observed and predicted traveltimes, as controlled by the Thomsen anisotropy parameters (ɛ, δ). Cycling between finer and coarser grids enhances the computational efficiency of the inversion process, thus accelerating the convergence of the solution while acting as a regularization technique of the inverse problem. Multigrid perturbation samples the probability density function without requiring the user to adjust tuning parameters. This increases the probability that the preferred global, rather than a poor local, minimum is attained. Undertaking multigrid refinement and Monte Carlo search in parallel produces more robust convergence than does the initially more intuitive approach of completing them sequentially. We demonstrate the usefulness of the new multigrid Monte Carlo (MGMC) scheme by applying it to (a) synthetic, noise-contaminated data reflecting an isotropic subsurface of constant slowness, horizontally layered geologic media and discrete subsurface anomalies; and (b) a crosshole seismic data set acquired by previous authors at the Reskajeage test site in Cornwall, UK. Inverted distributions of slowness (s) and the Thomsen anisotropy parameters (ɛ, δ) compare favourably with those obtained previously using a popular matrix-based method. Reconstruction of the Thomsen ɛ parameter is particularly robust compared to that of slowness and the Thomsen δ parameter, even in the face of complex subsurface anomalies. The Thomsen ɛ and δ parameters have enhanced sensitivities to bulk-fabric and fracture-based anisotropies in the TI medium at Reskajeage. Because reconstruction of slowness (s) is intimately linked to that of ɛ and δ in the MGMC scheme, inverted images of phase velocity reflect the integrated
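
The simulated annealing component can be sketched in isolation; the misfit functional below is a stand-in scalar function, not the traveltime misfit over (s, ɛ, δ) fields, and the schedule parameters are illustrative.

```python
import math
import random

def simulated_annealing(misfit, x0, step, t0, cooling, n, seed=0):
    """Minimize misfit(x) by accepting every downhill move and uphill
    moves with probability exp(-delta / T), while the temperature T is
    geometrically lowered; early high-T moves help escape local minima."""
    rng = random.Random(seed)
    x, fx, T = x0, misfit(x0), t0
    best, fbest = x, fx
    for _ in range(n):
        y = x + rng.uniform(-step, step)
        fy = misfit(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling  # geometric cooling schedule
    return best, fbest
```

In the multigrid variant described above, the same accept/reject loop runs while the model grid is cycled between coarse and fine resolutions, which regularizes the inverse problem.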

  13. Properties of reactive oxygen species by quantum Monte Carlo.

    PubMed

    Zen, Andrea; Trout, Bernhardt L; Guidoni, Leonardo

    2014-07-07

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulties to correctly describe the statical and dynamical correlation effects in presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has been recently shown to effectively describe the statical and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N(3) - N(4), where N is the number of electrons. This work is therefore opening the way to the accurate study of the energetics and of the reactivity of large and complex oxygen species by first principles.

  14. Properties of reactive oxygen species by quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo

    2014-07-01

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulties to correctly describe the statical and dynamical correlation effects in presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has been recently shown to effectively describe the statical and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully-optimised basis sets and with a computational cost which scales as N3 - N4, where N is the number of electrons. This work is therefore opening the way to the accurate study of the energetics and of the reactivity of large and complex oxygen species by first principles.

  15. Quantum Monte Carlo: Faster, More Reliable, And More Accurate

    NASA Astrophysics Data System (ADS)

    Anderson, Amos Gerald

    2010-06-01

    combination of Generalized Valence Bond wavefunctions, improved correlation functions, and stabilized weighting techniques for calculations run on graphics cards, represents a new way for using Quantum Monte Carlo to study arbitrarily sized molecules.

  16. Properties of reactive oxygen species by quantum Monte Carlo

    SciTech Connect

    Zen, Andrea; Trout, Bernhardt L.; Guidoni, Leonardo

    2014-07-07

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamic correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice-regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamic correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully optimised basis sets and with a computational cost that scales as N^3 - N^4, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species by first principles.

  17. Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept

    NASA Technical Reports Server (NTRS)

    Thipphavong, David

    2010-01-01

    Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
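
    The hybrid idea, enumerating component-failure combinations exactly (the fault-tree part) and estimating each branch's conditional risk by sampling, can be sketched in miniature. All probabilities and the toy "resolution layers" model below are hypothetical placeholders, not values or models from the study.

```python
import random
from itertools import combinations

random.seed(0)

# Hypothetical failure probabilities for three safety layers (illustrative only).
P_FAIL = {"transponder": 1e-3, "pilot_visual": 5e-2, "conflict_detect": 1e-4}
P_RESOLVE = {"transponder": 0.99, "pilot_visual": 0.90, "conflict_detect": 0.999}

def encounter_unresolved(failed):
    # An encounter escalates only if no working layer resolves it.
    for layer, p in P_RESOLVE.items():
        if layer not in failed and random.random() < p:
            return False
    return True

def brute_force(n):
    # Standard Monte Carlo: sample failures and encounters together.
    hits = 0
    for _ in range(n):
        failed = {c for c, p in P_FAIL.items() if random.random() < p}
        hits += encounter_unresolved(failed)
    return hits / n

def fault_tree_mc(n_per_branch):
    # Hybrid: weight each enumerated failure combination by its exact
    # probability, then Monte Carlo only the conditional outcome.
    comps = list(P_FAIL)
    total = 0.0
    for r in range(len(comps) + 1):
        for combo in combinations(comps, r):
            p_branch = 1.0
            for c in comps:
                p_branch *= P_FAIL[c] if c in combo else 1.0 - P_FAIL[c]
            hits = sum(encounter_unresolved(set(combo))
                       for _ in range(n_per_branch))
            total += p_branch * hits / n_per_branch
    return total
```

The speedup in the study comes from the same structure: rare failure combinations never need to be waited for, because their probabilities enter analytically.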

  18. Application of Monte Carlo Methods in Molecular Targeted Radionuclide Therapy

    SciTech Connect

    Hartmann Siantar, C; Descalle, M-A; DeNardo, G L; Nigg, D W

    2002-02-19

    Targeted radionuclide therapy promises to expand the role of radiation beyond the treatment of localized tumors. This novel form of therapy targets metastatic cancers by combining radioactive isotopes with tumor-seeking molecules such as monoclonal antibodies and custom-designed synthetic agents. Ultimately, like conventional radiotherapy, the effectiveness of targeted radionuclide therapy is limited by the maximum dose that can be given to a critical normal tissue, such as bone marrow, kidneys, or lungs. Because radionuclide therapy relies on biological delivery of radiation, its optimization and characterization are necessarily different from those for conventional radiation therapy. We have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA treatment planning system. This system calculates patient-specific radiation dose estimates using a set of computed tomography scans to describe the 3D patient anatomy, combined with 2D (planar) and 3D (SPECT, or single photon emission computed tomography) images to describe the time-dependent radiation source. The accuracy of such a dose calculation is limited primarily by the accuracy of the initial radiation source distribution overlaid on the patient's anatomy. This presentation provides an overview of MINERVA functionality for molecular targeted radiation therapy, and describes early validation and implementation results of Monte Carlo simulations.

  19. Bayesian phylogeny analysis via stochastic approximation Monte Carlo.

    PubMed

    Cheon, Sooyoung; Liang, Faming

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode when simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees with the highest similarity to the true trees and the model parameter estimates with the smallest mean square errors, while costing the least CPU time.
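
    The mode-escaping mechanism of SAMC can be sketched on a toy double-well density: a self-adjusting weight is learned for each energy subregion, which flattens the barrier that traps plain Metropolis-Hastings. The potential, binning, and gain sequence below are illustrative assumptions, not the paper's phylogenetic setup.

```python
import math
import random

random.seed(1)

def U(x):
    # Toy double-well potential: two modes at x = -1 and x = +1.
    return 8.0 * (x * x - 1.0) ** 2

EDGES = [0.5 * i for i in range(1, 20)]  # energy bin edges defining subregions

def region(x):
    u = U(x)
    for i, e in enumerate(EDGES):
        if u < e:
            return i
    return len(EDGES)

def samc(n_iter, t0=1000.0):
    theta = [0.0] * (len(EDGES) + 1)      # learned log-weights per subregion
    target = 1.0 / len(theta)             # desired visiting frequency
    x, visits = 1.0, [0] * (len(EDGES) + 1)
    for t in range(n_iter):
        y = x + random.gauss(0.0, 0.5)
        # Metropolis ratio for the weighted density exp(-U(x) - theta[region(x)]).
        log_r = (U(x) - U(y)) + (theta[region(x)] - theta[region(y)])
        if math.log(max(random.random(), 1e-300)) < log_r:
            x = y
        j = region(x)
        visits[j] += 1
        gamma = t0 / max(t0, t + 1.0)     # decreasing gain sequence
        for i in range(len(theta)):
            theta[i] += gamma * ((1.0 if i == j else 0.0) - target)
    return x, visits
```

Because theta is raised wherever the sampler lingers, high-energy barrier regions stop being penalized, and the chain crosses between modes far more often than under the unweighted target.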

  20. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    SciTech Connect

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.

  1. OBJECT KINETIC MONTE CARLO SIMULATIONS OF CASCADE ANNEALING IN TUNGSTEN

    SciTech Connect

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.

    2014-03-31

    The objective of this work is to study the annealing of primary cascade damage created by primary knock-on atoms (PKAs) of various energies, at various temperatures in bulk tungsten using the object kinetic Monte Carlo (OKMC) method.

  2. Monte Carlo simulations: Hidden errors from "good" random number generators

    NASA Astrophysics Data System (ADS)

    Ferrenberg, Alan M.; Landau, D. P.; Wong, Y. Joanna

    1992-12-01

    The Wolff algorithm is now accepted as the best cluster-flipping Monte Carlo algorithm for beating "critical slowing down." We show how this method can yield incorrect answers due to subtle correlations in "high quality" random number generators.
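
    The danger of hidden generator structure can be reproduced with RANDU, a classic multiplicative congruential generator (not one of the generators tested in the paper): its successive triples satisfy an exact linear relation, so in 3D they fall on a small number of planes while passing many simple uniformity tests.

```python
# RANDU: x_{n+1} = 65539 * x_n mod 2^31. Because 65539 = 2^16 + 3,
# 65539^2 = 6*65539 - 9 (mod 2^31), so every triple obeys
# x[k+2] = 6*x[k+1] - 9*x[k] (mod 2^31) exactly.
M = 2 ** 31

def randu(seed, n):
    out = []
    for _ in range(n):
        seed = (65539 * seed) % M
        out.append(seed)
    return out

xs = randu(1, 1000)
violations = sum((xs[k + 2] - 6 * xs[k + 1] + 9 * xs[k]) % M != 0
                 for k in range(len(xs) - 2))
# violations == 0: the correlation is exact, yet invisible to simple tests.
```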

  3. Monte Carlo next-event estimates from thermal collisions

    SciTech Connect

    Hendricks, J.S.; Prael, R.E.

    1990-01-01

    A new approximate method has been developed by Richard E. Prael to allow S(α,β) thermal collision contributions to next-event estimators in Monte Carlo calculations. The new technique is generally applicable to next-event estimator contributions from any discrete probability distribution. The method has been incorporated into Version 4 of the production Monte Carlo neutron and photon radiation transport code MCNP. 9 refs.
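
    In miniature, a next-event (point-detector) estimator scores an analytic contribution toward the detector at every collision instead of waiting for rare direct hits. The 2D isotropic-scattering random walk below is an illustrative sketch of that idea, not the MCNP implementation.

```python
import math
import random

random.seed(7)

SIGMA_T = 1.0            # total macroscopic cross section (illustrative)
DETECTOR = (5.0, 0.0)    # point detector location

def next_event_tally(n_histories, n_collisions=10):
    score = 0.0
    for _ in range(n_histories):
        x, y = 0.0, 0.0
        for _ in range(n_collisions):
            # Score the analytic probability density of reaching the
            # detector from this collision: isotropic emission (1/2*pi*d
            # in 2D) times exponential attenuation along the way.
            d = math.hypot(DETECTOR[0] - x, DETECTOR[1] - y)
            score += math.exp(-SIGMA_T * d) / (2.0 * math.pi * d)
            # Then sample the actual next collision site as usual.
            theta = random.uniform(0.0, 2.0 * math.pi)
            step = -math.log(max(random.random(), 1e-300)) / SIGMA_T
            x += step * math.cos(theta)
            y += step * math.sin(theta)
    return score / n_histories
```

The Prael technique addresses the analogous scoring step when the outgoing distribution is discrete (thermal S(α,β) scattering) rather than a smooth analytic density.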

  4. Improved Collision Modeling for Direct Simulation Monte Carlo Methods

    DTIC Science & Technology

    2011-03-01

    The Knudsen number is a measure of the rarefaction of a gas, and will be explained more thoroughly in the following chapter. Continuum solvers based on the Navier-Stokes equations reach the limits of their mathematical models near Kn = 0.1, and the flow can be considered rarefied above that value. Direct Simulation Monte Carlo (DSMC) is a stochastic method which utilizes the Monte Carlo statistical model to simulate gas behavior, which is very useful for rarefied-atmosphere hypersonic flows.

  5. Study of the Transition Flow Regime using Monte Carlo Methods

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    1999-01-01

    This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.

  6. Confidence and efficiency scaling in variational quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Delyon, F.; Bernu, B.; Holzmann, Markus

    2017-02-01

    Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time-discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by variational Monte Carlo calculations on the two-dimensional electron gas.
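
    A blocked error estimate together with a two-sample Kolmogorov-Smirnov check can be sketched as follows. The AR(1) series stands in for a correlated Monte Carlo time series of an observable; the blocking scheme and half-run split are illustrative, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# AR(1) chain as a stand-in for a correlated VMC time series.
n = 20000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.normal()

def blocked_error(series, block):
    # Effective error bar: the variance of block means absorbs the
    # autocorrelation once blocks are longer than the correlation time.
    m = len(series) // block
    means = series[: m * block].reshape(m, block).mean(axis=1)
    return means.std(ddof=1) / np.sqrt(m)

naive = x.std(ddof=1) / np.sqrt(n)   # ignores correlation: too optimistic
robust = blocked_error(x, 500)       # effective-variance estimate

def ks_two_sample(a, b):
    # Kolmogorov-Smirnov statistic comparing two empirical CDFs, used here
    # to check that both halves of the run sample the same distribution.
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

d = ks_two_sample(x[: n // 2], x[n // 2:])
```

For this chain the naive error bar underestimates the true one by roughly the square root of the integrated autocorrelation time, which is exactly the failure mode the effective-variance analysis guards against.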

  7. CosmoPMC: Cosmology sampling with Population Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kilbinger, Martin; Benabed, Karim; Cappé, Olivier; Coupon, Jean; Cardoso, Jean-François; Fort, Gersende; McCracken, Henry Joy; Prunet, Simon; Robert, Christian P.; Wraith, Darren

    2012-12-01

    CosmoPMC is a Monte Carlo sampling code to explore the likelihood of various cosmological probes. The sampling engine, implemented with the package pmclib, is Population Monte Carlo (PMC), an adaptive importance-sampling technique which iteratively improves the proposal to approximate the posterior. The code has been introduced, tested and applied to various cosmology data sets.
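
    The PMC idea, refitting the proposal to the importance-weighted sample at each iteration, can be sketched with a single Gaussian proposal and a toy 1D "posterior". pmclib itself uses richer mixture proposals; everything below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(x):
    # Toy unnormalized posterior: a two-component Gaussian mixture.
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def pmc(n_iter=10, n_samples=2000):
    mu, sigma = 0.0, 5.0                      # initial wide Gaussian proposal
    for _ in range(n_iter):
        x = rng.normal(mu, sigma, n_samples)
        log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
        log_w = log_post(x) - log_q           # importance weights
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Adapt the proposal to the weighted sample (moment matching).
        mu = float(np.sum(w * x))
        sigma = float(np.sqrt(np.sum(w * (x - mu) ** 2))) + 1e-3
    return mu, sigma, x, w
```

Each iteration the proposal moves toward the posterior's moments, so later samples carry more even weights and the estimator variance drops, which is the "iteratively improves the proposal" step of the abstract.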

  8. Green's function Monte Carlo calculations of ⁴He

    SciTech Connect

    Carlson, J.A.

    1988-01-01

    Green's Function Monte Carlo methods have been developed to study the ground state properties of light nuclei. These methods are shown to reproduce results of Faddeev calculations for A = 3, and are then used to calculate ground state energies, one- and two-body distribution functions, and the D-state probability for the alpha particle. Results are compared to variational Monte Carlo calculations for several nuclear interaction models. 31 refs.

  9. de Finetti Priors using Markov chain Monte Carlo computations.

    PubMed

    Bacallado, Sergio; Diaconis, Persi; Holmes, Susan

    2015-07-01

    Recent advances in Monte Carlo methods allow us to revisit work by de Finetti, who suggested the use of approximate exchangeability in the analysis of contingency tables. This paper gives examples of computational implementations using Metropolis-Hastings, Langevin and Hamiltonian Monte Carlo to compute posterior distributions for test statistics relevant for testing independence, reversible or three-way models for discrete exponential families using polynomial priors and Gröbner bases.
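
    A minimal Metropolis-Hastings sampler for a 1D posterior with a polynomial prior gives the flavor of such computations. The binomial toy target is an assumption for illustration, not the contingency-table models of the paper.

```python
import math
import random

random.seed(4)

SUCCESSES, TRIALS = 7, 10   # hypothetical data

def log_target(p):
    # Unnormalized log posterior: binomial likelihood p^7 (1-p)^3
    # times a polynomial prior p^2 (1-p)^2.
    if not 0.0 < p < 1.0:
        return -math.inf
    return ((SUCCESSES + 2) * math.log(p)
            + (TRIALS - SUCCESSES + 2) * math.log(1.0 - p))

def metropolis_hastings(n, step=0.1):
    p, chain = 0.5, []
    for _ in range(n):
        q = p + random.uniform(-step, step)   # symmetric random-walk proposal
        if math.log(max(random.random(), 1e-300)) < log_target(q) - log_target(p):
            p = q                             # accept
        chain.append(p)
    return chain
```

Here the posterior is Beta(10, 6) with mean 10/16, so the chain average after burn-in should settle near 0.625; the same accept/reject skeleton underlies the Langevin and Hamiltonian variants, which only change the proposal.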

  10. de Finetti Priors using Markov chain Monte Carlo computations

    PubMed Central

    Bacallado, Sergio; Diaconis, Persi; Holmes, Susan

    2015-01-01

    Recent advances in Monte Carlo methods allow us to revisit work by de Finetti, who suggested the use of approximate exchangeability in the analysis of contingency tables. This paper gives examples of computational implementations using Metropolis-Hastings, Langevin and Hamiltonian Monte Carlo to compute posterior distributions for test statistics relevant for testing independence, reversible or three-way models for discrete exponential families using polynomial priors and Gröbner bases. PMID:26412947

  11. Monte Carlo methods and applications in nuclear physics

    SciTech Connect

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

  12. Event-chain Monte Carlo for classical continuous spin models

    NASA Astrophysics Data System (ADS)

    Michel, Manon; Mayer, Johannes; Krauth, Werner

    2015-10-01

    We apply the event-chain Monte Carlo algorithm to classical continuum spin models on a lattice and clarify the condition for its validity. In the two-dimensional XY model, it outperforms the local Monte Carlo algorithm by two orders of magnitude, although it remains slower than the Wolff cluster algorithm. In the three-dimensional XY spin glass model at low temperature, the event-chain algorithm is far superior to the other algorithms.

  13. DPEMC: A Monte Carlo for double diffraction

    NASA Astrophysics Data System (ADS)

    Boonekamp, M.; Kúcs, T.

    2005-05-01

    We extend the POMWIG Monte Carlo generator developed by B. Cox and J. Forshaw to include new models of central production through inclusive and exclusive double Pomeron exchange in proton-proton collisions. Double photon exchange processes are described as well, both in proton-proton and heavy-ion collisions. In all contexts, various models have been implemented, allowing for comparisons and uncertainty evaluation and enabling detailed experimental simulations. Program summary: Title of the program: DPEMC, version 2.4 Catalogue identifier: ADVF Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVF Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: any computer with the FORTRAN 77 compiler under the UNIX or Linux operating systems Operating system: UNIX; Linux Programming language used: FORTRAN 77 High speed storage required: <25 MB No. of lines in distributed program, including test data, etc.: 71 399 No. of bytes in distributed program, including test data, etc.: 639 950 Distribution format: tar.gz Nature of the physical problem: Proton diffraction at hadron colliders can manifest itself in many forms, and a variety of models exist that attempt to describe it [A. Bialas, P.V. Landshoff, Phys. Lett. B 256 (1991) 540; A. Bialas, W. Szeremeta, Phys. Lett. B 296 (1992) 191; A. Bialas, R.A. Janik, Z. Phys. C 62 (1994) 487; M. Boonekamp, R. Peschanski, C. Royon, Phys. Rev. Lett. 87 (2001) 251806; Nucl. Phys. B 669 (2003) 277; R. Enberg, G. Ingelman, A. Kissavos, N. Timneanu, Phys. Rev. Lett. 89 (2002) 081801; R. Enberg, G. Ingelman, L. Motyka, Phys. Lett. B 524 (2002) 273; R. Enberg, G. Ingelman, N. Timneanu, Phys. Rev. D 67 (2003) 011301; B. Cox, J. Forshaw, Comput. Phys. Comm. 144 (2002) 104; B. Cox, J. Forshaw, B. Heinemann, Phys. Lett. B 540 (2002) 26; V. Khoze, A. Martin, M. Ryskin, Phys. Lett. B 401 (1997) 330; Eur. Phys. J. C 14 (2000) 525; Eur. Phys. J. C 19 (2001) 477; Erratum, Eur. Phys. J. C 20 (2001) 599; Eur

  14. kmos: A lattice kinetic Monte Carlo framework

    NASA Astrophysics Data System (ADS)

    Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten

    2014-07-01

    Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems, where site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites, can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a most user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a maximum runtime performance which is essentially independent of lattice size by generating code for the efficiency-determining local update of available events that is optimized for a defined kMC model. For this model definition and the control of all runtime and evaluation aspects kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or a graphical user interface, which visualizes the model geometry, the lattice occupations and rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with the application to kMC models for surface catalytic processes, where for given operation conditions (temperature and partial pressures of all reactants) central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.
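
    The core of any lattice kMC run, building the list of enabled events, picking one with probability proportional to its rate, and advancing the clock by an exponential waiting time, can be sketched as follows. The 1D adsorption/desorption model and its rates are hypothetical, not a kmos-generated model.

```python
import math
import random

random.seed(5)

L = 50                       # number of lattice sites
RATE_ADS, RATE_DES = 1.0, 0.5  # illustrative event rates

def kmc(n_steps):
    lattice = [0] * L        # 0 = empty site, 1 = occupied site
    t = 0.0
    for _ in range(n_steps):
        # Enabled events: adsorption on empty sites, desorption on occupied.
        events = [(i, RATE_DES if occ else RATE_ADS)
                  for i, occ in enumerate(lattice)]
        total = sum(r for _, r in events)
        # Select an event with probability proportional to its rate.
        u = random.random() * total
        acc = 0.0
        for i, r in events:
            acc += r
            if u <= acc:
                lattice[i] ^= 1          # execute the event
                break
        # Advance simulated time by an exponential waiting time.
        t += -math.log(max(random.random(), 1e-300)) / total
    return lattice, t
```

At steady state the coverage fluctuates around RATE_ADS/(RATE_ADS + RATE_DES) = 2/3; the efficiency trick behind kmos is to update only the locally affected entries of the event list rather than rebuilding it each step, which is what makes the runtime independent of lattice size.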

  15. Lattice Monte Carlo simulations of polymer melts

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping

    2014-12-01

    We use Monte Carlo simulations to study polymer melts consisting of fully flexible and moderately stiff chains in the bond fluctuation model at a volume fraction 0.5. In order to reduce the local density fluctuations, we test a pre-packing process for the preparation of the initial configurations of the polymer melts, before the excluded volume interaction is switched on completely. This process leads to a significantly faster decrease of the number of overlapping monomers on the lattice. This is useful for simulating very large systems, where the statistical properties of the model with a marginally incomplete elimination of excluded volume violations are the same as those of the model with strictly excluded volume. We find that the internal mean square end-to-end distance for moderately stiff chains in a melt can be very well described by a freely rotating chain model with a precise estimate of the bond-bond orientational correlation between two successive bond vectors in equilibrium. The plot of the probability distributions of the reduced end-to-end distance of chains of different stiffness also shows that the data collapse is excellent and described very well by the Gaussian distribution for ideal chains. However, while our results confirm the systematic deviations between Gaussian statistics for the chain structure factor Sc(q) [minimum in the Kratky-plot] found by Wittmer et al. [EPL 77, 56003 (2007)] for fully flexible chains in a melt, we show that for the available chain length these deviations are no longer visible, when the chain stiffness is included. The mean square bond length and the compressibility estimated from collective structure factors depend slightly on the stiffness of the chains.

  16. Monte-Carlo simulation of Callisto's exosphere

    NASA Astrophysics Data System (ADS)

    Vorburger, A.; Wurz, P.; Lammer, H.; Barabash, S.; Mousis, O.

    2015-12-01

    We model Callisto's exosphere based on its ice as well as non-ice surface via the use of a Monte-Carlo exosphere model. For the ice component we implement two putative compositions that have been computed from two possible extreme formation scenarios of the satellite. One composition represents the oxidizing state and is based on the assumption that the building blocks of Callisto were formed in the protosolar nebula and the other represents the reducing state of the gas, based on the assumption that the satellite accreted from solids condensed in the jovian sub-nebula. For the non-ice component we implemented the compositions of typical CI as well as L type chondrites. Both chondrite types have been suggested to represent Callisto's non-ice composition best. As release processes we consider surface sublimation, ion sputtering and photon-stimulated desorption. Particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction, return to the surface, are ionized, or are fragmented. Our density profiles show that whereas the sublimated species dominate close to the surface on the sun-lit side, their density profiles (with the exception of H and H2) decrease much more rapidly than the sputtered particles. The Neutral gas and Ion Mass (NIM) spectrometer, which is part of the Particle Environment Package (PEP), will investigate Callisto's exosphere during the JUICE mission. Our simulations show that NIM will be able to detect sublimated and sputtered particles from both the ice and non-ice surface. NIM's measured chemical composition will allow us to distinguish between different formation scenarios.

  17. Monte Carlo implementation of polarized hadronization

    NASA Astrophysics Data System (ADS)

    Matevosyan, Hrayr H.; Kotzinian, Aram; Thomas, Anthony W.

    2017-01-01

    We study the polarized quark hadronization in a Monte Carlo (MC) framework based on the recent extension of the quark-jet framework, where a self-consistent treatment of the quark polarization transfer in a sequential hadronization picture has been presented. Here, we first adopt this approach for MC simulations of the hadronization process with a finite number of produced hadrons, expressing the relevant probabilities in terms of the eight leading twist quark-to-quark transverse-momentum-dependent (TMD) splitting functions (SFs) for elementary q →q'+h transition. We present explicit expressions for the unpolarized and Collins fragmentation functions (FFs) of unpolarized hadrons emitted at rank 2. Further, we demonstrate that all the current spectator-type model calculations of the leading twist quark-to-quark TMD SFs violate the positivity constraints, and we propose a quark model based ansatz for these input functions that circumvents the problem. We validate our MC framework by explicitly proving the absence of unphysical azimuthal modulations of the computed polarized FFs, and by precisely reproducing the earlier derived explicit results for rank-2 pions. Finally, we present the full results for pion unpolarized and Collins FFs, as well as the corresponding analyzing powers from high statistics MC simulations with a large number of produced hadrons for two different model input elementary SFs. The results for both sets of input functions exhibit the same general features of an opposite signed Collins function for favored and unfavored channels at large z and, at the same time, demonstrate the flexibility of the quark-jet framework by producing significantly different dependences of the results at mid to low z for the two model inputs.

  18. Quantum Monte Carlo with directed loops.

    PubMed

    Syljuåsen, Olav F; Sandvik, Anders W

    2002-10-01

    We introduce the concept of directed loops in stochastic series expansion and path-integral quantum Monte Carlo methods. Using the detailed balance rules for directed loops, we show that it is possible to smoothly connect generally applicable simulation schemes (in which it is necessary to include backtracking processes in the loop construction) to more restricted loop algorithms that can be constructed only for a limited range of Hamiltonians (where backtracking can be avoided). The "algorithmic discontinuities" between general and special points (or regions) in parameter space can hence be eliminated. As a specific example, we consider the anisotropic S=1/2 Heisenberg antiferromagnet in an external magnetic field. We show that directed-loop simulations are very efficient for the full range of magnetic fields (zero to the saturation point) and anisotropies. In particular, for weak fields and anisotropies, the autocorrelations are significantly reduced relative to those of previous approaches. The backtracking probability vanishes continuously as the isotropic Heisenberg point is approached. For the XY model, we show that backtracking can be avoided for all fields extending up to the saturation field. The method is hence particularly efficient in this case. We use directed-loop simulations to study the magnetization process in the two-dimensional Heisenberg model at very low temperatures. For L×L lattices with L up to 64, we utilize the step structure in the magnetization curve to extract gaps between different spin sectors. Finite-size scaling of the gaps gives an accurate estimate of the transverse susceptibility in the thermodynamic limit: χ⊥ = 0.0659 ± 0.0002.

  19. Monte Carlo Volcano Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogenous velocity and density model is justified by long wavelength of VLP data. The nonlinear inversion reveals well resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.

  20. Application of Monte Carlo methods in tomotherapy and radiation biophysics

    NASA Astrophysics Data System (ADS)

    Hsiao, Ya-Yun

    Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The MC-calculated percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this dissertation, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published

  1. Perturbation Monte Carlo methods for tissue structure alterations.

    PubMed

    Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Spanier, Jerome

    2013-01-01

    This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients; however, the phase function cannot be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers: whole nuclei, organelles such as lysosomes and mitochondria, and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied, the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ∼15-25% of the scattering parameters.
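
    For the absorption coefficient, the reweighting at the heart of perturbation Monte Carlo is easy to sketch: store path lengths from one baseline simulation and re-score them under a perturbed coefficient. The 1D slab random walk and coefficients below are illustrative assumptions; the paper's phase-function extension is not shown.

```python
import math
import random

random.seed(6)

MU_S = 10.0                  # scattering coefficient, per mm (illustrative)
MU_A, MU_A_NEW = 1.0, 1.5    # baseline and perturbed absorption, per mm
THICKNESS = 1.0              # slab thickness, mm

def simulate_paths(n):
    # Baseline simulation: 1D scattering random walk through the slab.
    # Absorption is NOT sampled; each stored path keeps its total length
    # so absorption can be applied analytically afterwards.
    paths = []
    for _ in range(n):
        depth, length, direction = 0.0, 0.0, 1.0
        while 0.0 <= depth <= THICKNESS:
            step = -math.log(max(random.random(), 1e-300)) / MU_S
            depth += direction * step
            length += step
            direction = random.choice([-1.0, 1.0])  # crude isotropic scatter
        paths.append((length, depth > THICKNESS))   # (path length, transmitted?)
    return paths

def transmittance(paths, mu_a):
    # Score each stored path with the absorption weight exp(-mu_a * L);
    # changing mu_a reweights the SAME paths -- no new transport run.
    return sum(math.exp(-mu_a * l) for l, through in paths if through) / len(paths)

paths = simulate_paths(5000)
base = transmittance(paths, MU_A)
perturbed = transmittance(paths, MU_A_NEW)   # perturbation MC reweighting
```

Equivalently, each path weight is multiplied by exp(-(μa' − μa)·L), which is why a single baseline run serves a whole family of nearby problems.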

  2. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

    SciTech Connect

    Pevey, Ronald E.

    2005-09-15

    Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
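
    The trade-off can be made concrete with the normal-distribution model the abstract assumes: the probability that a truly critical configuration nevertheless passes the "k_calc + n·σ < USL" acceptance test, as a function of the calculational standard deviation. All numeric inputs below are illustrative, not values from any benchmark suite.

```python
import math

# Illustrative validation numbers: benchmarking bias, its uncertainty,
# the upper subcritical limit, and the sigma multiplier in the test.
BIAS, SIGMA_BIAS = 0.005, 0.003
USL, N_SIGMA = 0.95, 2.0

def phi(z):
    # Standard normal CDF.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_false_subcritical(sigma_calc, true_k=1.0):
    # Calculated k_eff ~ Normal(true_k - BIAS, sqrt(SIGMA_BIAS^2 + sigma_calc^2));
    # the configuration is (wrongly) accepted if k_calc + N_SIGMA*sigma_calc < USL.
    spread = math.sqrt(SIGMA_BIAS ** 2 + sigma_calc ** 2)
    threshold = USL - N_SIGMA * sigma_calc
    return phi((threshold - (true_k - BIAS)) / spread)

# Scan candidate calculational standard deviations.
risks = {s: p_false_subcritical(s) for s in (0.001, 0.005, 0.01, 0.02, 0.05)}
```

Scanning this function over σ, under a given bias model and USL, is exactly the kind of optimization the paper carries out to locate the risk-minimizing calculational standard deviation.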

  3. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    David Ceperley

    2011-03-02

    CUDA GPU platform. We restructured the CPU algorithms to express additional parallelism, minimize GPU-CPU communication, and efficiently utilize the GPU memory hierarchy. Using mixed precision on GT200 GPUs and MPI for intercommunication and load balancing, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core Xeon CPUs alone, while reproducing the double-precision CPU results within statistical error. We developed an all-electron quantum Monte Carlo (QMC) method for solids that does not rely on pseudopotentials, and used it to construct a primary ultra-high-pressure calibration based on the equation of state of cubic boron nitride. We computed the static contribution to the free energy with the QMC method and obtained the phonon contribution from density functional theory, yielding a high-accuracy calibration up to 900 GPa usable directly in experiment. We computed the anharmonic Raman frequency shift with QMC simulations as a function of pressure and temperature, allowing optical pressure calibration. In contrast to present experimental approaches, small systematic errors in the theoretical EOS do not increase with pressure, and no extrapolation is needed. This all-electron method is applicable to first-row solids, providing a new reference for ab initio calculations of solids and benchmarks for pseudopotential accuracy. We compared experimental and theoretical results on the momentum distribution and the quasiparticle renormalization factor in sodium. From an x-ray Compton-profile measurement of the valence-electron momentum density, we derived its discontinuity at the Fermi wavevector finding an accurate measure of the renormalization factor that we compared with quantum-Monte-Carlo and G0W0 calculations performed both on crystalline sodium and on the homogeneous electron gas. Our calculated results are in good agreement with the experiment. 
We have been studying the heat of formation for various Kubas complexes of molecular

  4. A pure-sampling quantum Monte Carlo algorithm.

    PubMed

    Ospadov, Egor; Rothstein, Stuart M

    2015-01-14

    The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof in principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground-states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems.

  5. Testing random number generators for Monte Carlo applications.

    PubMed

    Sim, L H; Nitschke, K N

    1993-03-01

    Central to any system for modelling radiation transport phenomena using Monte Carlo techniques is the method by which pseudo random numbers are generated. This method is commonly referred to as the Random Number Generator (RNG). It is usually a computer implemented mathematical algorithm which produces a series of numbers uniformly distributed on the interval [0,1). If this series satisfies certain statistical tests for randomness, then for practical purposes the pseudo random numbers in the series can be considered to be random. Tests of this nature are important not only for new RNGs but also for testing the implementation of known RNG algorithms in different computer environments. Six RNGs have been tested using six statistical tests and one visual test. The statistical tests are the moments, frequency (digit and number), serial, gap, and poker tests. The visual test is a simple two-dimensional ordered pair display. In addition, the RNGs have been tested in a specific Monte Carlo application. This type of test is often overlooked; however, it is important that, in addition to satisfactory performance in statistical tests, the RNG be able to perform effectively in the applications of interest. The RNGs tested here are based on a variety of algorithms, including multiplicative and linear congruential, lagged Fibonacci, and combination arithmetic and lagged Fibonacci. The effect of the Bays-Durham shuffling algorithm on the output of a known "bad" RNG has also been investigated.
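
Two of the statistical tests mentioned, the moments test and a frequency (chi-square) test, can be sketched as follows for Python's built-in Mersenne Twister generator; the pass thresholds are illustrative, not those of the paper.

```python
import random

def moments_test(samples, k_max=3):
    """Deviation of the first k sample moments of U(0,1) draws from the
    theoretical values E[U^k] = 1/(k+1)."""
    n = len(samples)
    return [abs(sum(u ** k for u in samples) / n - 1.0 / (k + 1))
            for k in range(1, k_max + 1)]

def frequency_test(samples, bins=10):
    """Chi-square statistic for uniformity over equal-width bins."""
    n = len(samples)
    counts = [0] * bins
    for u in samples:
        counts[min(int(u * bins), bins - 1)] += 1
    expected = n / bins
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(12345)
draws = [rng.random() for _ in range(100_000)]
print(moments_test(draws))    # deviations should be small (order 1e-3)
print(frequency_test(draws))  # chi-square with 9 degrees of freedom, mean 9
```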

  6. Stochastic Kinetic Monte Carlo algorithms for long-range Hamiltonians

    SciTech Connect

    Mason, D R; Rudd, R E; Sutton, A P

    2003-10-13

    We present a higher order kinetic Monte Carlo methodology suitable to model the evolution of systems in which the transition rates are non-trivial to calculate or in which Monte Carlo moves are likely to be non-productive flicker events. The second order residence time algorithm first introduced by Athenes et al. [1] is rederived from the n-fold way algorithm of Bortz et al. [2] as a fully stochastic algorithm. The second order algorithm can be dynamically called when necessary to eliminate unproductive flickering between a metastable state and its neighbors. An algorithm combining elements of the first order and second order methods is shown to be more efficient, in terms of the number of rate calculations, than the first order or second order methods alone while remaining statistically identical. This efficiency is of prime importance when dealing with computationally expensive rate functions such as those arising from long-range Hamiltonians. Our algorithm has been developed for use when considering simulations of vacancy diffusion under the influence of elastic stress fields. We demonstrate the improved efficiency of the method over that of the n-fold way in simulations of vacancy diffusion in alloys. Our algorithm is seen to be an order of magnitude more efficient than the n-fold way in these simulations. We show that when magnesium is added to an Al-2at.%Cu alloy, this has the effect of trapping vacancies. When trapping occurs, we see that our algorithm performs thousands of events for each rate calculation performed.
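
The first order method this work builds on, the n-fold way (residence-time) algorithm of Bortz et al., can be sketched as follows; the rate values are illustrative.

```python
import math
import random

def nfold_way_step(rates, rng):
    """One n-fold way (residence-time) step: pick event i with probability
    rate_i / R, where R is the total rate, and advance the clock by an
    exponentially distributed increment with mean 1/R."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(rng.random()) / total
    return i, dt

rng = random.Random(7)
rates = [0.1, 5.0, 0.1]   # the fast middle event is a flicker candidate
counts, t = [0, 0, 0], 0.0
for _ in range(10_000):
    i, dt = nfold_way_step(rates, rng)
    counts[i] += 1
    t += dt
print(counts)  # event 1 dominates, roughly in ratio 0.1 : 5.0 : 0.1
```

Every step executes an event, so no moves are rejected; the second order variant described in the abstract goes further by treating a flickering pair of states as a single composite event, saving rate evaluations.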

  7. Monte Carlo Simulation of Sudden Death Bearing Testing

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2003-01-01

    Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30 960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not of the total number of bearings tested. Variations in L10 life as a function of the number of bearings failed were similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or the L50 life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
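
A minimal version of such virtual bearing testing can be sketched by drawing lives from a two-parameter Weibull distribution and reading off the L10 estimate from each virtual test; the Weibull scale and slope used here are illustrative placeholders, not the paper's values.

```python
import math
import random

def weibull_life(u, eta=100.0, beta=1.5):
    """Invert the two-parameter Weibull CDF F(L) = 1 - exp(-(L/eta)**beta);
    eta (characteristic life) and beta (Weibull slope) are illustrative."""
    return eta * (-math.log(1.0 - u)) ** (1.0 / beta)

def estimated_l10(n_bearings, rng):
    """L10 estimate (life exceeded by 90% of the population) from one
    virtual test in which all n_bearings are run to failure."""
    lives = sorted(weibull_life(rng.random()) for _ in range(n_bearings))
    return lives[max(0, int(0.10 * n_bearings) - 1)]

rng = random.Random(2003)
estimates = [estimated_l10(144, rng) for _ in range(500)]
mean_l10 = sum(estimates) / len(estimates)
analytic_l10 = 100.0 * (-math.log(0.9)) ** (1.0 / 1.5)
print(round(mean_l10, 1), round(analytic_l10, 1))  # estimates scatter around the analytic 10th percentile
```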

  8. Monte Carlo dose verification for intensity-modulated arc therapy

    NASA Astrophysics Data System (ADS)

    Li, X. Allen; Ma, Lijun; Naqvi, Shahid; Shih, Rompin; Yu, Cedric

    2001-09-01

    Intensity-modulated arc therapy (IMAT), a technique which combines beam rotation and dynamic multileaf collimation, has been implemented in our clinic. Dosimetric errors can be created by the inability of the planning system to accurately account for the effects of tissue inhomogeneities and physical characteristics of the multileaf collimator (MLC). The objective of this study is to explore the use of Monte Carlo (MC) simulation for IMAT dose verification. The BEAM/DOSXYZ Monte Carlo system was implemented to perform dose verification for the IMAT treatment. The implementation includes the simulation of the linac head/MLC (Elekta SL20), the conversion of patient CT images and beam arrangement for 3D dose calculation, the calculation of gantry rotation and leaf motion by a series of static beams and the development of software to automate the entire MC process. The MC calculations were verified by measurements for conventional beam settings. The agreement was within 2%. The IMAT dose distributions generated by a commercial forward planning system (RenderPlan, Elekta) were compared with those calculated by the MC package. For the cases studied, discrepancies of over 10% were found between the MC and the RenderPlan dose calculations. These discrepancies were due in part to the inaccurate dose calculation of the RenderPlan system. The computation time for the IMAT MC calculation was in the range of 20-80 min on 15 Pentium-III computers. The MC method was also useful in verifying the beam apertures used in the IMAT treatments.

  9. A multi-scale Monte Carlo method for electrolytes

    NASA Astrophysics Data System (ADS)

    Liang, Yihao; Xu, Zhenli; Xing, Xiangjun

    2015-08-01

    Artifacts arise in simulations of electrolytes using periodic boundary conditions (PBCs). We show that these artifacts originate from the periodic image charges and the constraint of charge neutrality inside the simulation box, both of which are unphysical from the viewpoint of real systems. To cure these problems, we introduce a multi-scale Monte Carlo (MC) method, where ions inside a spherical cavity are simulated explicitly, while ions outside are treated implicitly using a continuum theory. Using the method of Debye charging, we explicitly derive the effective interactions between ions inside the cavity, arising from the fluctuations of ions outside. We find that these effective interactions consist of two types: (1) a constant cavity potential due to the asymmetry of the electrolyte, and (2) a reaction potential that depends on the positions of all ions inside. Combining grand canonical Monte Carlo (GCMC) with a recently developed fast algorithm based on the image charge method, we perform a multi-scale MC simulation of symmetric electrolytes and compare it with other simulation methods, including the PBC + GCMC method, as well as large-scale MC simulation. We demonstrate that our multi-scale MC method is capable of capturing the correct physics of a large system using a small-scale simulation.

  10. Bayesian Monte Carlo and Maximum Likelihood Approach for ...

    EPA Pesticide Factsheets

    Model uncertainty estimation and risk assessment are essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology that combines Bayesian Monte Carlo simulation and Maximum Likelihood estimation (BMCML) to calibrate a lake oxygen recovery model. We first derive an analytical solution of the differential equation governing lake-averaged oxygen dynamics as a function of time-variable wind speed. Statistical inferences on model parameters and predictive uncertainty are then drawn by Bayesian conditioning of the analytical solution on observed daily wind speed and oxygen concentration data obtained from an earlier study during two recovery periods on a eutrophic lake in upstate New York. The model was calibrated using oxygen recovery data for one year, and statistical inferences were validated using recovery data for another year. Compared with an essentially two-step regression and optimization approach, the BMCML results are more comprehensive and performed relatively better in predicting the observed temporal dissolved oxygen (DO) levels in the lake. BMCML also produced calibration and validation results comparable with those obtained using the popular Markov chain Monte Carlo (MCMC) technique, and is computationally simpler and easier to implement than MCMC. Next, using the calibrated model, we derive an optimal relationship between liquid film-transfer coefficien
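
The Bayesian Monte Carlo half of such a calibration can be sketched as prior sampling plus Gaussian likelihood weighting. The exponential recovery model and all parameter values below are illustrative stand-ins, not the paper's analytical solution or data.

```python
import math
import random

def bayesian_monte_carlo(model, times, observed, prior_draws, noise_sd):
    """Weight each prior draw by its Gaussian likelihood against the
    observations and return the posterior-mean parameter estimate."""
    weights = []
    for theta in prior_draws:
        sse = sum((model(theta, t) - y) ** 2 for t, y in zip(times, observed))
        weights.append(math.exp(-0.5 * sse / noise_sd ** 2))
    total = sum(weights)
    return sum(w * theta for w, theta in zip(weights, prior_draws)) / total

# Toy oxygen-recovery model DO(t) = DO_sat * (1 - exp(-k t)) with the
# recovery rate k to be calibrated (illustrative, not the paper's model).
rng = random.Random(42)
do_sat, k_true, noise_sd = 8.0, 0.3, 0.1
model = lambda k, t: do_sat * (1.0 - math.exp(-k * t))
times = [0.5 * i for i in range(20)]
observed = [model(k_true, t) + rng.gauss(0.0, noise_sd) for t in times]
prior_draws = [rng.uniform(0.05, 1.0) for _ in range(5000)]
k_hat = bayesian_monte_carlo(model, times, observed, prior_draws, noise_sd)
print(round(k_hat, 2))  # posterior mean recovers a rate near k_true
```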

  11. A pure-sampling quantum Monte Carlo algorithm

    SciTech Connect

    Ospadov, Egor; Rothstein, Stuart M.

    2015-01-14

    The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof in principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground-states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems.

  12. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    SciTech Connect

    Brown, Forrest B.

    2016-11-29

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. 
The class includes lectures and hands-on computer use for a variety of Monte Carlo calculations.
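
The core loop of an analogue particle transport simulation of the kind these lectures cover (random sampling of free paths, boundary checks, collision physics, tallying) can be sketched as follows; the 1D slab geometry and cross sections are illustrative.

```python
import math
import random

def transmit_slab(thickness, sigma_t, absorb_frac, n_particles, rng):
    """Analogue Monte Carlo transport through a 1D slab: sample free paths
    from the exponential distribution with total cross section sigma_t,
    then absorb or scatter isotropically at each collision."""
    transmitted = 0
    for _ in range(n_particles):
        x, mu = 0.0, 1.0                  # position and direction cosine
        while True:
            s = -math.log(rng.random()) / sigma_t
            x += mu * s
            if x >= thickness:
                transmitted += 1          # tally: escaped the far face
                break
            if x < 0.0:
                break                     # leaked back out the front face
            if rng.random() < absorb_frac:
                break                     # absorbed at the collision site
            mu = 2.0 * rng.random() - 1.0 # isotropic scatter
    return transmitted / n_particles

rng = random.Random(0)
# Pure absorber: transmission should follow exp(-sigma_t * thickness).
print(transmit_slab(2.0, 1.0, 1.0, 50_000, rng), math.exp(-2.0))
```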

  13. A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT

    SciTech Connect

    Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Loeffler, Frank; Schnetter, Erik

    2012-08-20

    Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.

  14. Scoring methods for implicit Monte Carlo radiation transport

    SciTech Connect

    Edwards, A.L.

    1981-01-01

    Analytical and numerical tests were made of a number of possible methods for scoring the energy exchange between radiation and matter in the implicit Monte Carlo (IMC) radiation transport scheme of Fleck and Cummings. The interactions considered were effective absorption, elastic scattering, and Compton scattering. The scoring methods tested were limited to simple combinations of analogue, linear expected value, and exponential expected value scoring. Only two scoring methods were found that produced the same results as a pure analogue method. These are a combination of exponential expected value absorption and deposition and analogue Compton scattering of the particle, with either linear expected value Compton deposition or analogue Compton deposition. In both methods, the collision distance is based on the total scattering cross section.
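
The difference between analogue and expected-value scoring can be illustrated on a single-region pure absorber, where the exponential expected-value estimator is exact and has zero variance. This toy is an assumption-laden sketch, not the IMC scheme of the paper.

```python
import math
import random

def analogue_absorption(sigma_a, length, n, rng):
    """Analogue scoring: a particle deposits its full weight (1) if its
    sampled flight distance ends inside the pure absorber, else 0."""
    scores = [1.0 if -math.log(rng.random()) / sigma_a < length else 0.0
              for _ in range(n)]
    mean = sum(scores) / n
    var = sum((v - mean) ** 2 for v in scores) / (n - 1)
    return mean, var

def expected_value_absorption(sigma_a, length):
    """Exponential expected-value scoring: every particle deposits the
    expected absorbed fraction 1 - exp(-sigma_a * length); for this
    single-region problem the estimator has zero variance."""
    return 1.0 - math.exp(-sigma_a * length), 0.0

rng = random.Random(4)
a_mean, a_var = analogue_absorption(1.0, 1.0, 100_000, rng)
e_mean, e_var = expected_value_absorption(1.0, 1.0)
print(round(a_mean, 3), round(e_mean, 3), a_var > e_var)
```

In multi-region problems the expected-value estimator is no longer zero-variance, but the same mean-preserving, variance-reducing trade-off is what the scoring-method comparison above is about.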

  15. Finding organic vapors - a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Vuollekoski, Henri; Boy, Michael; Kerminen, Veli-Matti; Kulmala, Markku

    2010-05-01

    drawbacks in accuracy, the inability to find diurnal variation and the lack of size resolution. Here, we aim to shed some light onto the problem by applying an ad hoc Monte Carlo algorithm to a well established aerosol dynamical model, the University of Helsinki Multicomponent Aerosol model (UHMA). By performing a side-by-side comparison with measurement data within the algorithm, this approach has the significant advantage of decreasing the amount of manual labor. But more importantly, by basing the comparison on particle number size distribution data - a quantity that can be quite reliably measured - the accuracy of the results is good.

  16. Dynamic 99mTc-MAG3 renography: images for quality control obtained by combining pharmacokinetic modelling, an anthropomorphic computer phantom and Monte Carlo simulated scintillation camera imaging

    NASA Astrophysics Data System (ADS)

    Brolin, Gustav; Sjögreen Gleisner, Katarina; Ljungberg, Michael

    2013-05-01

    In dynamic renal scintigraphy, the main interest is the radiopharmaceutical redistribution as a function of time. Quality control (QC) of renal procedures often relies on phantom experiments to compare image-based results with the measurement setup. A phantom with a realistic anatomy and time-varying activity distribution is therefore desirable. This work describes a pharmacokinetic (PK) compartment model for 99mTc-MAG3, used for defining a dynamic whole-body activity distribution within a digital phantom (XCAT) for accurate Monte Carlo (MC)-based images for QC. Each phantom structure is assigned a time-activity curve provided by the PK model, employing parameter values consistent with MAG3 pharmacokinetics. This approach ensures that the total amount of tracer in the phantom is preserved between time points, and it allows for modifications of the pharmacokinetics in a controlled fashion. By adjusting parameter values in the PK model, different clinically realistic scenarios can be mimicked, regarding, e.g., the relative renal uptake and renal transit time. Using the MC code SIMIND, a complete set of renography images including effects of photon attenuation, scattering, limited spatial resolution and noise, are simulated. The obtained image data can be used to evaluate quantitative techniques and computer software in clinical renography.
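
A minimal compartment chain of this kind can be sketched as follows. The rate constants are illustrative placeholders, not fitted MAG3 values, but the construction shows how conservation of the total tracer amount between time points is guaranteed.

```python
def activity_curves(k_pk=0.04, k_kb=0.01, dt=0.1, t_end=30.0):
    """Toy two-compartment chain (plasma -> kidneys -> bladder) integrated
    with explicit Euler steps; every amount leaving one compartment enters
    the next, so total activity is conserved by construction."""
    plasma, kidney, bladder = 1.0, 0.0, 0.0
    curves = []
    t = 0.0
    while t <= t_end:
        curves.append((t, plasma, kidney, bladder))
        flow_pk = k_pk * plasma * dt    # plasma -> kidneys this step
        flow_kb = k_kb * kidney * dt    # kidneys -> bladder this step
        plasma -= flow_pk
        kidney += flow_pk - flow_kb
        bladder += flow_kb
        t += dt
    return curves

curves = activity_curves()
_, p, k, b = curves[-1]
print(round(p + k + b, 6))  # conserved total, approximately 1.0
```

Each compartment's time-activity curve would then drive the activity assigned to the corresponding phantom structure at each simulated time point.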

  17. Dynamic (99m)Tc-MAG3 renography: images for quality control obtained by combining pharmacokinetic modelling, an anthropomorphic computer phantom and Monte Carlo simulated scintillation camera imaging.

    PubMed

    Brolin, Gustav; Gleisner, Katarina Sjögreen; Ljungberg, Michael

    2013-05-21

    In dynamic renal scintigraphy, the main interest is the radiopharmaceutical redistribution as a function of time. Quality control (QC) of renal procedures often relies on phantom experiments to compare image-based results with the measurement setup. A phantom with a realistic anatomy and time-varying activity distribution is therefore desirable. This work describes a pharmacokinetic (PK) compartment model for (99m)Tc-MAG3, used for defining a dynamic whole-body activity distribution within a digital phantom (XCAT) for accurate Monte Carlo (MC)-based images for QC. Each phantom structure is assigned a time-activity curve provided by the PK model, employing parameter values consistent with MAG3 pharmacokinetics. This approach ensures that the total amount of tracer in the phantom is preserved between time points, and it allows for modifications of the pharmacokinetics in a controlled fashion. By adjusting parameter values in the PK model, different clinically realistic scenarios can be mimicked, regarding, e.g., the relative renal uptake and renal transit time. Using the MC code SIMIND, a complete set of renography images including effects of photon attenuation, scattering, limited spatial resolution and noise, are simulated. The obtained image data can be used to evaluate quantitative techniques and computer software in clinical renography.

  18. On-surface synthesis of two-dimensional imine polymers with a tunable band gap: a combined STM, DFT and Monte Carlo investigation.

    PubMed

    Xu, Lirong; Yu, Yanxia; Lin, Jianbin; Zhou, Xin; Tian, Wei Quan; Nieckarz, Damian; Szabelski, Pawel; Lei, Shengbin

    2016-04-28

    Two-dimensional polymers are of great interest for many potential applications in nanotechnology. The preparation of crystalline 2D polymers with a tunable band gap is critical for their applications in nano-electronics and optoelectronics. In this work, we try to tune the band gap of 2D imine polymers by expanding the conjugation of the backbone of aromatic diamines both laterally and longitudinally. STM characterization reveals that the regularity of the 2D polymers can be affected by the existence of lateral bulky groups. Density functional theory (DFT) simulations revealed a significant narrowing of the band gap of imine 2D polymers upon expansion of the conjugation of the monomer backbone, which has been confirmed experimentally by UV absorption measurements. Monte Carlo simulations provide further insight into the factors controlling the formation of regular 2D polymers, demonstrating that, under the all-rigid assumption, the coexistence of different conformations of the imine moiety has a significant effect on the regularity of the imine 2D polymers.

  19. Monte Carlo analysis of transient electron transport in wurtzite Zn{sub 1−x}Mg{sub x}O combined with first principles calculations

    SciTech Connect

    Wang, Ping; Hu, Linlin; Shan, Xuefei; Yang, Yintang; Song, Jiuxu; Guo, Lixin; Zhang, Zhiyong

    2015-01-15

    Transient characteristics of wurtzite Zn{sub 1−x}Mg{sub x}O are investigated using a three-valley Ensemble Monte Carlo model, verified by the agreement between the simulated low-field mobility and the reported experimental result. The electronic structures are obtained by first principles calculations with density functional theory. The results show that the peak electron drift velocities of Zn{sub 1−x}Mg{sub x}O (x = 11.1%, 16.7%, 19.4%, 25%) at 3000 kV/cm are 3.735 × 10{sup 7}, 2.133 × 10{sup 7}, 1.889 × 10{sup 7}, and 1.295 × 10{sup 7} cm/s, respectively. With the increase of Mg concentration, a higher electric field is required for the onset of velocity overshoot. When the applied field exceeds 2000 kV/cm and 2500 kV/cm, a phenomenon of velocity undershoot is observed in Zn{sub 0.889}Mg{sub 0.111}O and Zn{sub 0.833}Mg{sub 0.167}O, respectively, while it is not observed for Zn{sub 0.806}Mg{sub 0.194}O and Zn{sub 0.75}Mg{sub 0.25}O even at 3000 kV/cm, which is especially important for high-frequency devices.

  20. The Monte Carlo code MCPTV--Monte Carlo dose calculation in radiation therapy with carbon ions.

    PubMed

    Karg, Juergen; Speer, Stefan; Schmidt, Manfred; Mueller, Reinhold

    2010-07-07

    The Monte Carlo code MCPTV is presented. MCPTV is designed for dose calculation in treatment planning in radiation therapy with particles and especially carbon ions. MCPTV has a voxel-based concept and can perform a fast calculation of the dose distribution on patient CT data. Material and density information from CT are taken into account. Electromagnetic and nuclear interactions are implemented. Furthermore the algorithm gives information about the particle spectra and the energy deposition in each voxel. This can be used to calculate the relative biological effectiveness (RBE) for each voxel. Depth dose distributions are compared to experimental data giving good agreement. A clinical example is shown to demonstrate the capabilities of the MCPTV dose calculation.

  1. Frequency domain optical tomography using a Monte Carlo perturbation method

    NASA Astrophysics Data System (ADS)

    Yamamoto, Toshihiro; Sakamoto, Hiroki

    2016-04-01

    A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. The frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions, even for inverse problems that are ill-posed due to cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been analyzed mostly for the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with the results obtained from conventional deterministic or Monte Carlo methods.
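
The complex-valued particle weights mentioned above can be sketched in a 1D analogue: each particle accumulates a phase factor exp(-iωt) from its time of flight, and the modulated (AC) transmission is damped relative to the unmodulated (DC) one by phase dispersion. Geometry, coefficients, and the unit light speed are illustrative assumptions.

```python
import cmath
import math
import random

def fd_transmission(sigma_t, albedo, length, omega, c, n, rng):
    """1D frequency-domain analogue transport: each particle carries a
    complex weight exp(-1j * omega * t) built from its time of flight t;
    survival at collisions is handled by implicit capture (w *= albedo)."""
    total = 0.0 + 0.0j
    for _ in range(n):
        x, d, w, t = 0.0, 1.0, 1.0 + 0.0j, 0.0
        while True:
            s = -math.log(rng.random()) / sigma_t
            x_new = x + d * s
            if d > 0.0 and x_new >= length:      # transmitted
                t += (length - x) / (d * c)
                total += w * cmath.exp(-1j * omega * t)
                break
            if d < 0.0 and x_new <= 0.0:         # escaped the front face
                break
            t += s / c
            x = x_new
            w *= albedo                           # implicit capture
            d = 2.0 * rng.random() - 1.0          # isotropic scatter
    return total / n

# Identical random paths, with and without source modulation:
dc = fd_transmission(2.0, 0.9, 1.0, 0.0, 1.0, 20_000, random.Random(3))
ac = fd_transmission(2.0, 0.9, 1.0, 1.0, 1.0, 20_000, random.Random(3))
print(abs(ac) < abs(dc))  # phase dispersion damps the AC amplitude
```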

  2. Low-energy sputterings with the Monte Carlo Program ACAT

    NASA Astrophysics Data System (ADS)

    Yamamura, Y.; Mizuno, Y.

    1985-05-01

    The Monte Carlo program ACAT was developed to determine the total sputtering yields and angular distributions of sputtered atoms in physical processes. From computer results for the incident-energy-dependent sputtering of various ion-target combinations, the mass-ratio dependence and the bombarding-angle dependence of sputtering thresholds were obtained with the help of the Matsunami empirical formula for sputtering yields. The mass-ratio dependence of sputtering thresholds is in good agreement with recent theoretical results. The threshold energy of light-ion sputtering is a slightly increasing function of the angle of incidence, while that of heavy-ion sputtering has a minimum value near theta = 60 deg. The angular distributions of sputtered atoms are also calculated for heavy ions, medium ions, and light ions, and reasonable agreement between calculated angular distributions and experimental results is obtained.

  3. Uncovering mental representations with Markov chain Monte Carlo.

    PubMed

    Sanborn, Adam N; Griffiths, Thomas L; Shiffrin, Richard M

    2010-03-01

    A key challenge for cognitive psychology is the investigation of mental representations, such as object categories, subjective probabilities, choice utilities, and memory traces. In many cases, these representations can be expressed as a non-negative function defined over a set of objects. We present a behavioral method for estimating these functions. Our approach uses people as components of a Markov chain Monte Carlo (MCMC) algorithm, a sophisticated sampling method originally developed in statistical physics. Experiments 1 and 2 verified the MCMC method by training participants on various category structures and then recovering those structures. Experiment 3 demonstrated that the MCMC method can be used to estimate the structures of the real-world animal shape categories of giraffes, horses, dogs, and cats. Experiment 4 combined the MCMC method with multidimensional scaling to demonstrate how different accounts of the structure of categories, such as prototype and exemplar models, can be tested, producing samples from the categories of apples, oranges, and grapes.
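
The sampling engine behind such experiments, a Metropolis MCMC chain targeting a density p(x), can be sketched as follows; the Gaussian target here is an illustrative stand-in for a category structure.

```python
import math
import random

def metropolis(log_p, x0, step, n, rng):
    """Metropolis sampler: propose a symmetric Gaussian move and accept
    with probability min(1, p(x') / p(x)), evaluated in log space."""
    x, samples = x0, []
    for _ in range(n):
        x_new = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_p(x_new) - log_p(x):
            x = x_new
        samples.append(x)
    return samples

# Target: a 1D Gaussian "category" density centred at 2 with sd 0.5.
rng = random.Random(10)
log_p = lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2
samples = metropolis(log_p, 0.0, 0.8, 50_000, rng)
burn = samples[5_000:]          # discard burn-in
mean = sum(burn) / len(burn)
print(round(mean, 2))  # close to the target mean of 2.0
```

In the behavioral version, the accept/reject step is replaced by a participant's forced choice between the current and proposed stimuli, which under certain choice models leaves the chain's stationary distribution proportional to the mental representation.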

  4. Markov Chain Monte Carlo Bayesian Learning for Neural Networks

    NASA Technical Reports Server (NTRS)

    Goodrich, Michael S.

    2011-01-01

    Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian-based learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we develop a powerful methodology for estimating the full residual uncertainty in network weights, and therefore network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov chain Monte Carlo method.

  5. Monte Carlo estimation of the number of tatami tilings

    NASA Astrophysics Data System (ADS)

    Kimura, Kenji; Higuchi, Saburo

    2016-04-01

    Motivated by the way Japanese tatami mats are placed on the floor, we consider domino tilings with a constraint and estimate the number of such tilings of plane regions. We map the system onto a monomer-dimer model with a novel local interaction on the dual lattice. We make use of a variant of the Hamiltonian replica exchange Monte Carlo method where data for ferromagnetic and anti-ferromagnetic models are combined to make a single family of histograms. The properties of the density of states are studied beyond exact enumeration and combinatorial methods. The logarithm of the number of tilings is linear in the boundary length of the region for all the regions studied.

  6. Quantum Monte Carlo Calculations in Solids with Downfolded Hamiltonians

    NASA Astrophysics Data System (ADS)

    Ma, Fengjie; Purwanto, Wirawan; Zhang, Shiwei; Krakauer, Henry

    2015-06-01

    We present a combination of a downfolding many-body approach with auxiliary-field quantum Monte Carlo (AFQMC) calculations for extended systems. Many-body calculations operate on a simpler Hamiltonian which retains material-specific properties. The Hamiltonian is systematically improvable and allows one to dial, in principle, between the simplest model and the original Hamiltonian. As a by-product, pseudopotential errors are essentially eliminated using frozen orbitals constructed adaptively from the solid environment. The computational cost of the many-body calculation is dramatically reduced without sacrificing accuracy. Excellent accuracy is achieved for a range of solids, including semiconductors, ionic insulators, and metals. We apply the method to calculate the equation of state of cubic BN under ultrahigh pressure, and determine the spin gap in NiO, a challenging prototypical material with strong electron correlation effects.

  7. Monte Carlo dose calculations in advanced radiotherapy

    NASA Astrophysics Data System (ADS)

    Bush, Karl Kenneth

    The remarkable accuracy of Monte Carlo (MC) dose calculation algorithms has led to the widely accepted view that these methods should and will play a central role in the radiotherapy treatment verification and planning of the future. The advantages of using MC clinically are particularly evident for radiation fields passing through inhomogeneities, such as lung and air cavities, and for small fields, including those used in today's advanced intensity modulated radiotherapy techniques. Many investigators have reported significant dosimetric differences between MC and conventional dose calculations in such complex situations, and have demonstrated experimentally the unmatched ability of MC calculations in modeling charged particle disequilibrium. The advantages of using MC dose calculations do come at a cost. The nature of MC dose calculations requires a highly detailed, in-depth representation of the physical system (accelerator head geometry/composition, anatomical patient geometry/composition and particle interaction physics) to allow accurate modeling of external beam radiation therapy treatments. Performing such simulations is computationally demanding and has only recently become feasible within mainstream radiotherapy practices. In addition, the output of the accelerator head simulation can be highly sensitive to inaccuracies within a model that may not be known with sufficient detail. The goal of this dissertation is to both improve and advance the implementation of MC dose calculations in modern external beam radiotherapy. To begin, a novel method is proposed to fine-tune the output of an accelerator model to better represent the measured output. In this method, an intensity distribution of the electron beam incident on the model is inferred by employing a simulated annealing algorithm. The method allows an investigation of arbitrary electron beam intensity distributions and is not restricted to the commonly assumed Gaussian intensity.
In a second component of

  8. Monte Carlo studies of model Langmuir monolayers.

    PubMed

    Opps, S B; Yang, B; Gray, C G; Sullivan, D E

    2001-04-01

    This paper examines some of the basic properties of a model Langmuir monolayer, consisting of surfactant molecules deposited onto a water subphase. The surfactants are modeled as rigid rods composed of a head and tail segment of diameters sigma(hh) and sigma(tt), respectively. The tails consist of n(t) approximately 4-7 effective monomers representing methylene groups. These rigid rods interact via site-site Lennard-Jones potentials with different interaction parameters for the tail-tail, head-tail, and head-head interactions. In a previous paper, we studied the ground-state properties of this system using a Landau approach. In the present paper, Monte Carlo simulations were performed in the canonical ensemble to elucidate the finite-temperature behavior of this system. Simulation techniques, incorporating a system of dynamic filters, allow us to decrease CPU time with negligible statistical error. This paper focuses on several of the key parameters, such as density, head-tail diameter mismatch, and chain length, responsible for driving transitions from uniformly tilted to untilted phases and between different tilt-ordered phases. Upon varying the density of the system, with sigma(hh)=sigma(tt), we observe a transition from a tilted (NNN)-condensed phase to an untilted-liquid phase and, upon comparison with recent experiments with fatty acid-alcohol and fatty acid-ester mixtures [M. C. Shih, M. K. Durbin, A. Malik, P. Zschack, and P. Dutta, J. Chem. Phys. 101, 9132 (1994); E. Teer, C. M. Knobler, C. Lautz, S. Wurlitzer, J. Kildae, and T. M. Fischer, J. Chem. Phys. 106, 1913 (1997)], we identify this as the L'(2)/Ov-L1 phase boundary. By varying the head-tail diameter ratio, we observe a decrease in T(c) with increasing mismatch. However, as the chain length was increased we observed that the transition temperatures increased and differences in T(c) due to head-tail diameter mismatch were diminished. 
In most of the present research, the water was treated as a hard

  9. Monte Carlo studies of model Langmuir monolayers

    NASA Astrophysics Data System (ADS)

    Opps, S. B.; Yang, B.; Gray, C. G.; Sullivan, D. E.

    2001-04-01

    This paper examines some of the basic properties of a model Langmuir monolayer, consisting of surfactant molecules deposited onto a water subphase. The surfactants are modeled as rigid rods composed of a head and tail segment of diameters σhh and σtt, respectively. The tails consist of nt~4-7 effective monomers representing methylene groups. These rigid rods interact via site-site Lennard-Jones potentials with different interaction parameters for the tail-tail, head-tail, and head-head interactions. In a previous paper, we studied the ground-state properties of this system using a Landau approach. In the present paper, Monte Carlo simulations were performed in the canonical ensemble to elucidate the finite-temperature behavior of this system. Simulation techniques, incorporating a system of dynamic filters, allow us to decrease CPU time with negligible statistical error. This paper focuses on several of the key parameters, such as density, head-tail diameter mismatch, and chain length, responsible for driving transitions from uniformly tilted to untilted phases and between different tilt-ordered phases. Upon varying the density of the system, with σhh=σtt, we observe a transition from a tilted (NNN)-condensed phase to an untilted-liquid phase and, upon comparison with recent experiments with fatty acid-alcohol and fatty acid-ester mixtures [M. C. Shih, M. K. Durbin, A. Malik, P. Zschack, and P. Dutta, J. Chem. Phys. 101, 9132 (1994); E. Teer, C. M. Knobler, C. Lautz, S. Wurlitzer, J. Kildae, and T. M. Fischer, J. Chem. Phys. 106, 1913 (1997)], we identify this as the L'2/Ov-L1 phase boundary. By varying the head-tail diameter ratio, we observe a decrease in Tc with increasing mismatch. However, as the chain length was increased we observed that the transition temperatures increased and differences in Tc due to head-tail diameter mismatch were diminished. In most of the present research, the water was treated as a hard surface, whereby the surfactants are only

  10. Constraining Runoff Source Fraction Contributions from Alpine Glaciers through the Combined Application of Geochemical Proxies and Bayesian Monte Carlo Isotope Mixing Models

    NASA Astrophysics Data System (ADS)

    Arendt, C. A.; Aciego, S.; Hetland, E.

    2015-12-01

    Processes that drive glacial ablation directly impact surrounding ecosystems and communities that are dependent on glacial meltwater as a freshwater reservoir: crucially, freshwater runoff from alpine and Arctic glaciers has large implications for watershed ecosystems and contingent economies. Furthermore, glacial hydrology processes are a complex and fundamental part of understanding high-latitude environments in the modern era and predicting how they might change in the future. Specifically, developing better estimates of the origin of freshwater discharge, as well as the duration and amplitude of extreme melting and precipitation events, could provide crucial constraints on these processes and allow glacial watershed systems to be modeled more effectively. In order to investigate the universality of the temporal and spatial melt relationships that exist in glacial systems, I investigate the isotopic composition of glacial meltwater and proximal seawater, including the stable isotopes δ18O and δD, which have been measured in glacial water samples I collected from the alpine Athabasca Glacier in the Canadian Rockies. This abstract is focused on extrapolating the relative contributions of meltwater sources - snowmelt, ice melt, and summer precipitation - using a coupled statistical-chemical model (Arendt et al., 2015). I apply δ18O and δD measurements of Athabasca Glacier subglacial water samples to a Bayesian Monte Carlo (BMC) estimation scheme. Importantly, this BMC model also assesses the uncertainties associated with these meltwater fractional contribution estimates, which provides an assessment of how well the system is constrained. By defining the proportion of overall melt that is coming from snow versus ice using stable isotopes, the volume of water generated by ablation can be calculated. This water volume has two important implications. First, communities that depend on glacial water for aquifer recharge can start assessing future water resources, as

  11. A Monte Carlo investigation of the Hamiltonian mean field model

    NASA Astrophysics Data System (ADS)

    Pluchino, Alessandro; Andronico, Giuseppe; Rapisarda, Andrea

    2005-04-01

    We present a Monte Carlo numerical investigation of the Hamiltonian mean field (HMF) model. We begin by discussing canonical Metropolis Monte Carlo calculations, in order to check the caloric curve of the HMF model and study finite size effects. In the second part of the paper, we present numerical simulations obtained by means of a modified Monte Carlo procedure with the aim of testing the stability of those states at minimum temperature and zero magnetization (homogeneous quasi-stationary states), which exist in the condensed phase of the model just below the critical point. For energy densities smaller than the limiting value U∼0.68, we find that these states are unstable, confirming a recent result on the Vlasov stability analysis applied to the HMF model.
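The canonical Metropolis calculations mentioned above can be sketched for the HMF configurational energy V = (N/2)(1 − m²), with m the magnetization of the XY rotators. The system size, temperature, proposal width, and aligned starting configuration below are illustrative assumptions, not the paper's settings.

```python
import math
import random

random.seed(2)

def hmf_metropolis(n_spins=100, beta=5.0, sweeps=400):
    """Canonical Metropolis sampling of the HMF configurational energy
    V = (N/2)(1 - m^2); each move rotates one spin and the change in m
    is tracked incrementally through the component sums mx, my."""
    theta = [0.0] * n_spins           # start aligned: magnetized phase
    mx = sum(math.cos(t) for t in theta)
    my = sum(math.sin(t) for t in theta)

    def potential(mx, my):
        m2 = (mx * mx + my * my) / n_spins ** 2
        return 0.5 * n_spins * (1.0 - m2)

    for _ in range(sweeps * n_spins):
        i = random.randrange(n_spins)
        old, new = theta[i], theta[i] + random.gauss(0.0, 0.5)
        mx_new = mx - math.cos(old) + math.cos(new)
        my_new = my - math.sin(old) + math.sin(new)
        dV = potential(mx_new, my_new) - potential(mx, my)
        if math.log(random.random()) < -beta * dV:  # Metropolis test
            theta[i], mx, my = new, mx_new, my_new

    return math.hypot(mx, my) / n_spins

m_final = hmf_metropolis()
# Deep in the condensed phase (here T = 0.2, well below the critical
# point) the rotators stay strongly aligned, so m remains close to 1.
print(round(m_final, 2))
```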

  12. Monte Carlo simulation in statistical physics: an introduction

    NASA Astrophysics Data System (ADS)

    Binder, K.; Heermann, D. W.

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields (physics, chemistry, and beyond, extending to traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. This fourth edition has been updated and a new chapter on Monte Carlo simulation of quantum-mechanical problems has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was the winner of the Berni J. Alder CECAM Award for Computational Physics 2001.

  13. Monte Carlo simulation of laser attenuation characteristics in fog

    NASA Astrophysics Data System (ADS)

    Wang, Hong-Xia; Sun, Chao; Zhu, You-zhang; Sun, Hong-hui; Li, Pan-shi

    2011-06-01

    Based on the Mie scattering theory and the gamma size distribution model, the scattering extinction parameter of a spherical fog drop is calculated. For the transmission attenuation of laser light in fog, a Monte Carlo simulation model is established, and the dependence of the attenuation ratio on visibility and field angle is computed and analysed using a program developed in the MATLAB language. The results of the Monte Carlo method in this paper are compared with the results of the single-scattering method. The results show that the influence of multiple scattering needs to be considered when the visibility is low, since single-scattering calculations then have larger errors. The phenomenon of multiple scattering can be better interpreted when the Monte Carlo method is used to calculate the attenuation ratio of laser light transmitted through fog.
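The gap between single scattering and a full Monte Carlo treatment can be sketched with a toy photon walk through a fog slab, written here in Python rather than MATLAB. The isotropic phase function, optical depth, and albedo are made-up stand-ins for the Mie-derived values.

```python
import math
import random

random.seed(3)

def transmit(n_photons=20000, tau=2.0, albedo=0.9):
    """Monte Carlo photon transport through a fog slab of optical depth
    tau: free paths are exponential, scattering is isotropic with the
    given single-scattering albedo. Returns the fraction of photons
    emerging from the far side."""
    escaped = 0
    for _ in range(n_photons):
        depth, mu = 0.0, 1.0   # optical depth travelled, direction cosine
        while True:
            depth += mu * (-math.log(random.random()))  # sample free path
            if depth >= tau:
                escaped += 1          # transmitted through the slab
                break
            if depth < 0.0:
                break                 # scattered back out the front face
            if random.random() > albedo:
                break                 # absorbed by the droplet
            mu = random.uniform(-1.0, 1.0)  # isotropic re-direction
    return escaped / n_photons

mc = transmit()
beer_lambert = math.exp(-2.0)  # unscattered (single-path) transmission only
print(round(mc, 3), round(beer_lambert, 3))
```

The Monte Carlo estimate exceeds the Beer-Lambert value because multiply scattered photons still reach the far side, which is exactly the low-visibility regime where single-scattering calculations underestimate transmission.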

  14. Classical Perturbation Theory for Monte Carlo Studies of System Reliability

    SciTech Connect

    Lewins, Jeffrey D.

    2001-03-15

    A variational principle for a Markov system allows the derivation of perturbation theory for models of system reliability, with prospects of extension to generalized Markov processes of a wide nature. It is envisaged that Monte Carlo or stochastic simulation will supply the trial functions for such a treatment, which obviates the standard difficulties of direct analog Monte Carlo perturbation studies. The development is given in the specific mode for first- and second-order theory, using an example with known analytical solutions. The adjoint equation is identified with the importance function and a discussion given as to how both the forward and backward (adjoint) fields can be obtained from a single Monte Carlo study, with similar interpretations for the additional functions required by second-order theory. Generalized Markov models with age-dependence are identified as coming into the scope of this perturbation theory.

  15. BACKWARD AND FORWARD MONTE CARLO METHOD IN POLARIZED RADIATIVE TRANSFER

    SciTech Connect

    Yong, Huang; Guo-Dong, Shi; Ke-Yong, Zhu

    2016-03-20

    In general, the Stokes vector cannot be calculated in reverse in vector radiative transfer. This paper presents a novel backward and forward Monte Carlo simulation strategy to study vector radiative transfer in a participating medium. A backward Monte Carlo process is used to calculate the ray trajectory and the endpoint of the ray. The Stokes vector is then obtained by a forward Monte Carlo process. A one-dimensional graded-index semi-transparent medium is taken as the physical model, and thermal emission with consideration of polarization is studied in the medium. The solution process for non-scattering, isotropic-scattering, and anisotropic-scattering media, respectively, is discussed. The influence of the optical thickness and albedo on the Stokes vector is studied. The results show that the U- and V-components of the apparent Stokes vector are very small, but the Q-component of the apparent Stokes vector is relatively large and cannot be ignored.

  16. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphics processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  17. Monte Carlo tests of the ELIPGRID-PC algorithm

    SciTech Connect

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
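The flavor of such a Monte Carlo validation can be sketched with a circular (rather than elliptical) hot spot, for which the detection probability has a simple area argument to check against. The radius, grid spacing, and trial count are illustrative, not taken from the report.

```python
import math
import random

random.seed(4)

def detection_probability(radius, spacing, n_trials=100000):
    """Monte Carlo estimate of the chance that a square sampling grid
    hits a circular hot spot whose centre falls uniformly within one
    grid cell -- the kind of quantity ELIPGRID computes analytically
    for elliptical hot spots."""
    hits = 0
    for _ in range(n_trials):
        x, y = random.uniform(0, spacing), random.uniform(0, spacing)
        # distance from the hot-spot centre to the nearest grid node
        d = math.hypot(min(x, spacing - x), min(y, spacing - y))
        if d <= radius:
            hits += 1
    return hits / n_trials

p_mc = detection_probability(radius=0.3, spacing=1.0)
p_exact = math.pi * 0.3 ** 2  # area argument, valid while radius < spacing/2
print(round(p_mc, 3), round(p_exact, 3))
```

With 100,000 trials the statistical uncertainty is a few tenths of a percent, comfortably inside the ±0.5% agreement band quoted in the abstract.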

  18. SPQR: a Monte Carlo reactor kinetics code. [LMFBR

    SciTech Connect

    Cramer, S.N.; Dodds, H.L.

    1980-02-01

    The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations.

  19. Photon beam description in PEREGRINE for Monte Carlo dose calculations

    SciTech Connect

    Cox, L. J., LLNL

    1997-03-04

    The goal of PEREGRINE is to provide the capability for accurate, fast Monte Carlo calculation of radiation therapy dose distributions for routine clinical use and for research into the efficacy of improved dose calculation. An accurate, efficient method of describing and sampling radiation sources is needed, and a simple, flexible solution is provided. The teletherapy source package for PEREGRINE, coupled with state-of-the-art Monte Carlo simulations of treatment heads, makes it possible to describe any teletherapy photon beam to the precision needed for highly accurate Monte Carlo dose calculations in complex clinical configurations that use standard patient modifiers such as collimator jaws, wedges, blocks, and/or multi-leaf collimators. Generic beam descriptions for a class of treatment machines can readily be adjusted to yield dose calculations that match specific clinical sites.

  20. Brachytherapy structural shielding calculations using Monte Carlo generated, monoenergetic data

    SciTech Connect

    Zourari, K.; Peppa, V.; Papagiannis, P.; Ballester, Facundo; Siebert, Frank-André

    2014-04-15

    Purpose: To provide a method for calculating the transmission of any broad photon beam with a known energy spectrum in the range of 20–1090 keV, through concrete and lead, based on the superposition of corresponding monoenergetic data obtained from Monte Carlo simulation. Methods: MCNP5 was used to calculate broad photon beam transmission data through varying thicknesses of lead and concrete, for monoenergetic point sources of energy in the range pertinent to brachytherapy (20–1090 keV, in 10 keV intervals). The three-parameter empirical model introduced by Archer et al. [“Diagnostic x-ray shielding design based on an empirical model of photon attenuation,” Health Phys. 44, 507–517 (1983)] was used to describe the transmission curve for each of the 216 energy-material combinations. These three parameters, and hence the transmission curve, for any polyenergetic spectrum can then be obtained by superposition along the lines of Kharrati et al. [“Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities,” Med. Phys. 34, 1398–1404 (2007)]. A simple program, incorporating a graphical user interface, was developed to facilitate the superposition of monoenergetic data, the graphical and tabular display of broad photon beam transmission curves, and the calculation of material thickness required for a given transmission from these curves. Results: Polyenergetic broad photon beam transmission curves of this work, calculated from the superposition of monoenergetic data, are compared to corresponding results in the literature. A good agreement is observed with results in the literature obtained from Monte Carlo simulations for the photon spectra emitted from bare point sources of various radionuclides. Differences are observed with corresponding results in the literature for x-ray spectra at various tube potentials, mainly due to the different broad beam conditions or x-ray spectra assumed. Conclusions
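The Archer three-parameter transmission model and the superposition step can be sketched as follows. The spectral weights and the fitted (alpha, beta, gamma) values below are made-up placeholders, not the paper's tabulated monoenergetic data.

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Archer et al. three-parameter transmission model:
    T(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]**(-1/gamma),
    so that T(0) = 1 and T decays with barrier thickness x."""
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

def broad_beam_transmission(x, spectrum):
    """Superpose monoenergetic transmission curves, weighted by the
    relative intensity of each spectral line (the Kharrati-style step)."""
    total = sum(w for w, _ in spectrum)
    return sum(w * archer_transmission(x, *params)
               for w, params in spectrum) / total

# Illustrative two-line spectrum; each entry is
# (relative weight, (alpha, beta, gamma)) with parameters in 1/cm.
spectrum = [
    (0.7, (2.0, 1.0, 0.8)),
    (0.3, (0.5, 0.2, 1.0)),
]

for thickness in (0.0, 1.0, 5.0):
    print(thickness, round(broad_beam_transmission(thickness, spectrum), 4))
```

The weighted sum is monotone decreasing and equals 1 at zero thickness; inverting it numerically gives the barrier thickness required for a target transmission, which is what the described GUI program does from its fitted parameter tables.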

  1. Implementation of Monte Carlo Simulations for the Gamma Knife System

    NASA Astrophysics Data System (ADS)

    Xiong, W.; Huang, D.; Lee, L.; Feng, J.; Morris, K.; Calugaru, E.; Burman, C.; Li, J.; Ma, C.-M.

    2007-06-01

    Currently the Gamma Knife system is accompanied by a treatment planning system, Leksell GammaPlan (LGP), which is a standard, computer-based treatment planning system for Gamma Knife radiosurgery. In LGP, the dose calculation algorithm does not consider the scatter dose contributions and the inhomogeneity effect due to the skull and air cavities. To improve the dose calculation accuracy, Monte Carlo simulations have been implemented for the Gamma Knife planning system. In this work, the 201 Cobalt-60 sources in the Gamma Knife unit are considered to have the same activity. Each Cobalt-60 source is contained in a cylindrical stainless steel capsule. The particle phase space information is stored in four beam data files, which are collected on the inner sides of the 4 treatment helmets, after the Cobalt beam passes through the stationary and helmet collimators. Patient geometries are rebuilt from patient CT data. Twenty-two patients are included in the Monte Carlo simulation for this study. The dose is calculated using Monte Carlo in both homogeneous and inhomogeneous geometries with identical beam parameters. To investigate the attenuation effect of the skull bone, the dose in a 16 cm diameter spherical QA phantom is measured with and without a 1.5 mm lead covering and also simulated using Monte Carlo. The dose ratios with and without the 1.5 mm lead covering are 89.8% based on measurements and 89.2% according to Monte Carlo for an 18 mm collimator helmet. For patient geometries, the Monte Carlo results show that although the relative isodose lines remain almost the same with and without inhomogeneity corrections, the difference in the absolute dose is clinically significant. The average inhomogeneity correction is (3.9 ± 0.90)% for the 22 patients investigated. These results suggest that the inhomogeneity effect should be considered in the dose calculation for Gamma Knife treatment planning.

  2. Parallel Monte Carlo Simulation for control system design

    NASA Technical Reports Server (NTRS)

    Schubert, Wolfgang M.

    1995-01-01

    The research during the 1993/94 academic year addressed the design of parallel algorithms for stochastic robustness synthesis (SRS). SRS uses Monte Carlo simulation to compute probabilities of system instability and other design-metric violations. The probabilities form a cost function which is used by a genetic algorithm (GA). The GA searches for the stochastic optimal controller. The existing sequential algorithm was analyzed and modified to execute in a distributed environment. For this, parallel approaches to Monte Carlo simulation and genetic algorithms were investigated. Initial empirical results are available for the KSR1.

  3. A review of best practices for Monte Carlo criticality calculations

    SciTech Connect

    Brown, Forrest B

    2009-01-01

    Monte Carlo methods have been used to compute keff and the fundamental mode eigenfunction of critical systems since the 1950s. While such calculations have become routine using standard codes such as MCNP and SCALE/KENO, there still remain three concerns that must be addressed to perform calculations correctly: convergence of keff and the fission distribution, bias in keff and tally results, and bias in statistics on tally results. This paper provides a review of the fundamental problems inherent in Monte Carlo criticality calculations. To provide guidance to practitioners, suggested best practices for avoiding these problems are discussed and illustrated by examples.
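One widely recommended diagnostic for the first concern, convergence of the fission source, is the Shannon entropy of the binned source distribution. A toy sketch of the diagnostic follows; the drifting random-walk "source iteration" is purely illustrative, not an actual eigenvalue calculation.

```python
import math
import random

random.seed(5)

def shannon_entropy(sites, n_bins=10):
    """Shannon entropy H = -sum(p ln p) of a binned fission-source
    distribution; H rising to a stable plateau across generations is
    the standard signal that the source has converged."""
    counts = [0] * n_bins
    for x in sites:
        counts[min(int(x * n_bins), n_bins - 1)] += 1
    n = len(sites)
    return -sum(c / n * math.log(c / n) for c in counts if c)

# Toy "source iteration": sites start badly converged (piled near one
# edge) and random-walk toward a spread-out distribution.
sites = [0.05 * random.random() for _ in range(2000)]
history = []
for generation in range(30):
    history.append(shannon_entropy(sites))
    sites = [min(max(x + random.gauss(0.0, 0.15), 0.0), 0.999)
             for x in sites]

print(round(history[0], 2), round(history[-1], 2))
```

In practice, generations before the entropy plateau are discarded as inactive cycles, which addresses the convergence bias in keff and tallies that the paper reviews.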

  4. Monte Carlo Simulations of Phosphate Polyhedron Connectivity in Glasses

    SciTech Connect

    ALAM,TODD M.

    1999-12-21

    Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest neighbor connectivities between phosphate polyhedra for random, alternating, and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments on phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.

  5. PEPSI — a Monte Carlo generator for polarized leptoproduction

    NASA Astrophysics Data System (ADS)

    Mankiewicz, L.; Schäfer, A.; Veltri, M.

    1992-09-01

    We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross-section at first order in αS. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.

  6. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    NASA Astrophysics Data System (ADS)

    Rubenstein, Brenda M.

    Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures.
While
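The stochastic power-method idea, estimating a dominant eigenvalue while forming matrix-vector products only by sampling, can be sketched on a small matrix. The 2x2 matrix, sample counts, and the particular sampling scheme are illustrative assumptions, not the thesis algorithm.

```python
import random

random.seed(6)

A = [[2.0, 1.0],
     [1.0, 2.0]]          # dominant eigenvalue 3, eigenvector (1, 1)

def sampled_matvec(A, x, n_samples=5000):
    """Estimate y = A x by sampling column indices j with probability
    |x_j| / sum|x|. The full product is never formed, so A only needs
    to be accessible one column at a time -- the point of a Monte
    Carlo power method for matrices too large to store."""
    total = sum(abs(v) for v in x)
    y = [0.0] * len(A)
    for _ in range(n_samples):
        # inverse-CDF draw of a column index proportional to |x_j|
        u, acc, j = random.random() * total, 0.0, 0
        for j, v in enumerate(x):
            acc += abs(v)
            if u <= acc:
                break
        s = total * (1 if x[j] >= 0 else -1) / n_samples
        for i in range(len(A)):
            y[i] += s * A[i][j]
    return y

x = [1.0, 0.0]
for _ in range(8):                      # stochastic power iterations
    y = sampled_matvec(A, x)
    norm = max(abs(v) for v in y)
    x = [v / norm for v in y]

Ax = sampled_matvec(A, x)
rayleigh = sum(xi * yi for xi, yi in zip(x, Ax)) / sum(v * v for v in x)
print(round(rayleigh, 1))  # approaches the dominant eigenvalue, 3
```

Deflating out the converged eigenvector and repeating would give access to the second eigenvalue, the quantity the thesis uses to measure how fast a Monte Carlo transition matrix equilibrates.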

  7. Complexity of Monte Carlo and deterministic dose-calculation methods.

    PubMed

    Börgers, C

    1998-03-01

    Grid-based deterministic dose-calculation methods for radiotherapy planning require the use of six-dimensional phase space grids. Because of the large number of phase space dimensions, a growing number of medical physicists appear to believe that grid-based deterministic dose-calculation methods are not competitive with Monte Carlo methods. We argue that this conclusion may be premature. Our results do suggest, however, that finite difference or finite element schemes with orders of accuracy greater than one will probably be needed if such methods are to compete well with Monte Carlo methods for dose calculations.

  8. Hybrid Monte Carlo/deterministic methods for radiation shielding problems

    NASA Astrophysics Data System (ADS)

    Becker, Troy L.

    For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations.
Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods

  9. Parton distribution functions in Monte Carlo factorisation scheme

    NASA Astrophysics Data System (ADS)

    Jadach, S.; Płaczek, W.; Sapeta, S.; Siódmok, A.; Skrzypek, M.

    2016-12-01

    A next step in the development of the KrkNLO method of including complete NLO QCD corrections to hard processes in a LO parton-shower Monte Carlo is presented. It consists of a generalisation of the method, previously used for the Drell-Yan process, to Higgs-boson production. This extension is accompanied by the complete description of parton distribution functions in a dedicated, Monte Carlo factorisation scheme, applicable to any process of production of one or more colour-neutral particles in hadron-hadron collisions.

  10. Towards Fast, Scalable Hard Particle Monte Carlo Simulations on GPUs

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Irrgang, M. Eric; Glaser, Jens; Harper, Eric S.; Engel, Michael; Glotzer, Sharon C.

    2014-03-01

    Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. We discuss the implementation of Monte Carlo for arbitrary hard shapes in HOOMD-blue, a GPU-accelerated particle simulation tool, to enable million particle simulations in a field where thousands is the norm. In this talk, we discuss our progress on basic parallel algorithms, optimizations that maximize GPU performance, and communication patterns for scaling to multiple GPUs. Research applications include colloidal assembly and other uses in materials design, biological aggregation, and operations research.

  11. Kinetic Monte Carlo method applied to nucleic acid hairpin folding.

    PubMed

    Sauerwine, Ben; Widom, Michael

    2011-12-01

    Kinetic Monte Carlo on coarse-grained systems, such as nucleic acid secondary structure, is advantageous for being able to access behavior at long time scales, even minutes or hours. Transition rates between coarse-grained states depend upon intermediate barriers, which are not directly simulated. We propose an Arrhenius rate model and an intermediate energy model that incorporates the effects of the barrier between simulated states without enlarging the state space itself. Applying our Arrhenius rate model to DNA hairpin folding, we demonstrate improved agreement with experiment compared to the usual kinetic Monte Carlo model. Further improvement results from including rigidity of single-stranded stacking.
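
The barrier-aware Arrhenius rate idea described above can be illustrated with a generic rejection-free (Gillespie-type) kinetic Monte Carlo loop. The two-state "open"/"folded" model, the barrier heights, and the unit rate prefactor below are illustrative assumptions for a sketch, not the authors' DNA hairpin model:

```python
import math
import random

def arrhenius_rate(barrier, temperature, prefactor=1.0):
    """Transition rate k = A * exp(-E_b / T), with k_B folded into the units."""
    return prefactor * math.exp(-barrier / temperature)

def kmc_step(state, rates, rng):
    """One rejection-free kinetic Monte Carlo step.

    rates: dict mapping target state -> rate out of `state`.
    Returns (next_state, waiting_time)."""
    total = sum(rates.values())
    # Exponentially distributed waiting time with mean 1/total
    dt = -math.log(rng.random()) / total
    # Pick a transition with probability proportional to its rate
    r = rng.random() * total
    acc = 0.0
    for target, k in rates.items():
        acc += k
        if r <= acc:
            return target, dt
    return target, dt  # numerical safety

rng = random.Random(1)
# Two coarse-grained states with illustrative barriers (arbitrary units, T = 1)
barriers = {("open", "folded"): 2.0, ("folded", "open"): 5.0}
state, t, folded_time = "open", 0.0, 0.0
for _ in range(20000):
    rates = {b: arrhenius_rate(e, 1.0) for (a, b), e in barriers.items() if a == state}
    new_state, dt = kmc_step(state, rates, rng)
    if state == "folded":
        folded_time += dt
    t += dt
    state = new_state
# Detailed balance predicts a folded time fraction 1/(1 + exp(-3)) ~ 0.95
print(folded_time / t)
```

The intermediate-barrier model of the paper modifies the effective `barrier` entering each rate; the sampling loop itself is unchanged.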

  12. Bayesian model comparison in cosmology with Population Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kilbinger, Martin; Wraith, Darren; Robert, Christian P.; Benabed, Karim; Cappé, Olivier; Cardoso, Jean-François; Fort, Gersende; Prunet, Simon; Bouchet, François R.

    2010-07-01

    We use Bayesian model selection techniques to test extensions of the standard flat Λ cold dark matter (ΛCDM) paradigm. Dark-energy and curvature scenarios, and primordial perturbation models are considered. To that end, we calculate the Bayesian evidence in favour of each model using Population Monte Carlo (PMC), a new adaptive sampling technique which was recently applied in a cosmological context. In contrast to the case of other sampling-based inference techniques such as Markov chain Monte Carlo (MCMC), the Bayesian evidence is immediately available from the PMC sample used for parameter estimation without further computational effort, and it comes with an associated error evaluation. Also, it provides an unbiased estimator of the evidence after any fixed number of iterations and it is naturally parallelizable, in contrast with MCMC and nested sampling methods. By comparison with analytical predictions for simulated data, we show that our results obtained with PMC are reliable and robust. The variability in the evidence evaluation and the stability for various cases are estimated both from simulations and from data. For the cases we consider, the log-evidence is calculated with a precision of better than 0.08. Using a combined set of recent cosmic microwave background, type Ia supernovae and baryonic acoustic oscillation data, we find inconclusive evidence between flat ΛCDM and simple dark-energy models. A curved universe is moderately to strongly disfavoured with respect to a flat cosmology. Using physically well-motivated priors within the slow-roll approximation of inflation, we find a weak preference for a running spectral index. A Harrison-Zel'dovich spectrum is weakly disfavoured. With the current data, tensor modes are not detected; the large prior volume on the tensor-to-scalar ratio r results in moderate evidence in favour of r = 0.
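
The property exploited above, that PMC delivers the evidence directly from the importance weights of the sample used for parameter estimation, can be sketched on a toy one-dimensional problem. The unnormalised Gaussian "posterior" and the simple moment-matching proposal adaptation below are illustrative assumptions, not the mixture proposals of the actual cosmological analysis:

```python
import math
import random

def log_target(x):
    # Unnormalised "posterior": a standard Gaussian without its constant.
    # Its integral (the "evidence") is sqrt(2*pi) ~ 2.5066.
    return -0.5 * x * x

def pmc_evidence(n_iter=5, n_samples=4000, seed=0):
    """Population Monte Carlo sketch: adapt a Gaussian proposal by weighted
    moments, then read the evidence off the mean importance weight."""
    rng = random.Random(seed)
    mu, sigma = 2.0, 3.0          # deliberately poor initial proposal
    evidence = None
    for _ in range(n_iter):
        xs = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        ws = []
        for x in xs:
            # w_i = target(x_i) / proposal(x_i), proposal being Gaussian
            log_q = -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
            ws.append(math.exp(log_target(x) - log_q))
        total = sum(ws)
        evidence = total / n_samples   # unbiased after any fixed iteration
        # Adapt the proposal to the weighted mean / std of the population
        mu = sum(w * x for w, x in zip(ws, xs)) / total
        var = sum(w * (x - mu) ** 2 for w, x in zip(ws, xs)) / total
        sigma = max(math.sqrt(var), 1e-3)
    return evidence

print(pmc_evidence())  # close to sqrt(2*pi) ~ 2.5
```

Note that no extra computation beyond the weights is needed for the evidence, which is the contrast with MCMC drawn in the abstract.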

  13. Monte Carlo ICRH simulations in fully shaped anisotropic plasmas

    SciTech Connect

    Jucker, M.; Graves, J. P.; Cooper, W. A.; Mellet, N.; Brunner, S.

    2008-11-01

    In order to numerically study the effects of Ion Cyclotron Resonant Heating (ICRH) on the fast particle distribution function in general plasma geometries, three codes have been coupled: VMEC generates a general (2D or 3D) MHD equilibrium including full shaping and pressure anisotropy. This equilibrium is then mapped into Boozer coordinates. The full-wave code LEMan then calculates the power deposition and electromagnetic field strength of a wave field generated by a chosen antenna using a warm model. Finally, the single particle Hamiltonian code VENUS combines the outputs of the two previous codes in order to calculate the evolution of the distribution function. Within VENUS, Monte Carlo operators for Coulomb collisions of the fast particles with the background plasma have been implemented, accounting for pitch angle and energy scattering. Also, ICRH is simulated using Monte Carlo operators on the Doppler shifted resonant layer. The latter operators act in velocity space and induce a change of perpendicular and parallel velocity depending on the electric field strength and the corresponding wave vector. Eventually, the change in the distribution function will then be fed into VMEC for generating a new equilibrium and thus a self-consistent solution can be found. This model is an enhancement of previous studies in that it is able to include full 3D effects such as magnetic ripple, treat the effects of non-zero orbit width consistently and include the generation and effects of pressure anisotropy. Here, first results of coupling the three codes will be shown in 2D tokamak geometries.

  14. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    EPA Science Inventory

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  15. Analytical positron range modelling in heterogeneous media for PET Monte Carlo simulation.

    PubMed

    Lehnert, Wencke; Gregoire, Marie-Claude; Reilhac, Anthonin; Meikle, Steven R

    2011-06-07

    Monte Carlo simulation codes that model positron interactions along their tortuous path are expected to be accurate but are usually slow. A simpler and potentially faster approach is to model positron range from analytical annihilation density distributions. The aims of this paper were to efficiently implement and validate such a method, with the addition of medium heterogeneity representing a further challenge. The analytical positron range model was evaluated by comparing annihilation density distributions with those produced by the Monte Carlo simulator GATE and by quantitatively analysing the final reconstructed images of Monte Carlo simulated data. In addition, the influence of positronium formation on positron range and hence on the performance of Monte Carlo simulation was investigated. The results demonstrate that 1D annihilation density distributions for different isotope-media combinations can be fitted with Gaussian functions and hence be described by simple look-up-tables of fitting coefficients. Together with the method developed for simulating positron range in heterogeneous media, this allows for efficient modelling of positron range in Monte Carlo simulation. The level of agreement of the analytical model with GATE depends somewhat on the simulated scanner and the particular research task, but appears to be suitable for lower energy positron emitters, such as (18)F or (11)C. No reliable conclusion about the influence of positronium formation on positron range and simulation accuracy could be drawn.
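
The look-up-table idea above can be sketched as follows: store one Gaussian width per isotope-medium pair and sample annihilation displacements from it instead of tracking the positron. The table values and names below are illustrative placeholders, not fitted coefficients from the paper:

```python
import math
import random

# Hypothetical look-up table of Gaussian widths (sigma, mm) for the 1D
# annihilation density of each isotope-medium pair; values are placeholders.
RANGE_SIGMA_MM = {
    ("F18", "water"): 0.25,
    ("F18", "lung"): 0.9,
    ("C11", "water"): 0.45,
}

def sample_annihilation_offset(isotope, medium, rng):
    """Draw a 3D annihilation displacement from an isotropic Gaussian whose
    per-axis width comes from the look-up table."""
    s = RANGE_SIGMA_MM[(isotope, medium)]
    return tuple(rng.gauss(0.0, s) for _ in range(3))

rng = random.Random(42)
offsets = [sample_annihilation_offset("F18", "water", rng) for _ in range(50000)]
mean_r = sum(math.sqrt(x*x + y*y + z*z) for x, y, z in offsets) / len(offsets)
# For an isotropic Gaussian the mean radius is sigma * sqrt(8/pi)
print(mean_r)
```

Handling heterogeneous media, as in the paper, would additionally require switching tables at material boundaries along the sampled displacement.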

  16. Quantitative Monte Carlo-based holmium-166 SPECT reconstruction

    SciTech Connect

    Elschot, Mattijs; Smits, Maarten L. J.; Nijsen, Johannes F. W.; Lam, Marnix G. E. H.; Zonnenberg, Bernard A.; Bosch, Maurice A. A. J. van den; Jong, Hugo W. A. M. de; Viergever, Max A.

    2013-11-15

    Purpose: Quantitative imaging of the radionuclide distribution is of increasing interest for microsphere radioembolization (RE) of liver malignancies, to aid treatment planning and dosimetry. For this purpose, holmium-166 ({sup 166}Ho) microspheres have been developed, which can be visualized with a gamma camera. The objective of this work is to develop and evaluate a new reconstruction method for quantitative {sup 166}Ho SPECT, including Monte Carlo-based modeling of photon contributions from the full energy spectrum.Methods: A fast Monte Carlo (MC) simulator was developed for simulation of {sup 166}Ho projection images and incorporated in a statistical reconstruction algorithm (SPECT-fMC). Photon scatter and attenuation for all photons sampled from the full {sup 166}Ho energy spectrum were modeled during reconstruction by Monte Carlo simulations. The energy- and distance-dependent collimator-detector response was modeled using precalculated convolution kernels. Phantom experiments were performed to quantitatively evaluate image contrast, image noise, count errors, and activity recovery coefficients (ARCs) of SPECT-fMC in comparison with those of an energy window-based method for correction of down-scattered high-energy photons (SPECT-DSW) and a previously presented hybrid method that combines MC simulation of photopeak scatter with energy window-based estimation of down-scattered high-energy contributions (SPECT-ppMC+DSW). Additionally, the impact of SPECT-fMC on whole-body recovered activities (A{sup est}) and estimated radiation absorbed doses was evaluated using clinical SPECT data of six {sup 166}Ho RE patients.Results: At the same noise level, SPECT-fMC images showed substantially higher contrast than SPECT-DSW and SPECT-ppMC+DSW in spheres ≥17 mm in diameter. The count error was reduced from 29% (SPECT-DSW) and 25% (SPECT-ppMC+DSW) to 12% (SPECT-fMC). ARCs in five spherical volumes of 1.96–106.21 ml were improved from 32%–63% (SPECT-DSW) and 50%–80

  17. Combined experimental and Monte Carlo verification of brachytherapy plans for vaginal applicators

    NASA Astrophysics Data System (ADS)

    Sloboda, Ron S.; Wang, Ruqing

    1998-12-01

    Dose rates in a phantom around a shielded and an unshielded vaginal applicator containing Selectron low-dose-rate sources were determined by experiment and Monte Carlo simulation. Measurements were performed with thermoluminescent dosimeters in a white polystyrene phantom using an experimental protocol geared for precision. Calculations for the same set-up were done using a version of the EGS4 Monte Carlo code system modified for brachytherapy applications into which a new combinatorial geometry package developed by Bielajew was recently incorporated. Measured dose rates agree with Monte Carlo estimates to within 5% (1 SD) for the unshielded applicator, while highlighting some experimental uncertainties for the shielded applicator. Monte Carlo calculations were also done to determine a value for the effective transmission of the shield required for clinical treatment planning, and to estimate the dose rate in water at points in axial and sagittal planes transecting the shielded applicator. Comparison with dose rates generated by the planning system indicates that agreement is better than 5% (1 SD) at most positions. The precision thermoluminescent dosimetry protocol and modified Monte Carlo code are effective complementary tools for brachytherapy applicator dosimetry.

  18. Monte Carlo simulations of the HP model (the "Ising model" of protein folding)

    NASA Astrophysics Data System (ADS)

    Li, Ying Wai; Wüst, Thomas; Landau, David P.

    2011-09-01

    Using Wang-Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined) we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. For free chains these proteins are known to first undergo a collapse "transition" to a globule state followed by a second "transition" into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of "transitions". These transitions depend upon the relative interaction strengths and are largely inaccessible to "standard" Monte Carlo methods.
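
Wang-Landau sampling itself can be sketched on a much simpler system than the HP model: n independent up/down spins, whose exact density of states is binomial, so the estimate can be checked. The toy system, parameters, and flatness criterion below are illustrative assumptions; the paper's trial moves (pull and bond-rebridging) are specific to lattice proteins and are not reproduced here:

```python
import math
import random

def wang_landau(n_spins=10, ln_f_final=1e-4, flatness=0.8, seed=3):
    """Wang-Landau sketch: estimate ln g(E) for n independent spins, where the
    'energy' E is the number of up spins and the exact g(E) = C(n, E)."""
    rng = random.Random(seed)
    n_levels = n_spins + 1
    ln_g = [0.0] * n_levels
    hist = [0] * n_levels
    spins = [rng.randint(0, 1) for _ in range(n_spins)]
    e = sum(spins)
    ln_f = 1.0
    for _ in range(2000):                      # bounded number of flatness checks
        if ln_f <= ln_f_final:
            break
        for _ in range(2000):
            i = rng.randrange(n_spins)
            e_new = e + (1 - 2 * spins[i])     # energy after flipping spin i
            # Accept with probability min(1, g(E)/g(E_new))
            if math.log(rng.random() + 1e-300) < ln_g[e] - ln_g[e_new]:
                spins[i] ^= 1
                e = e_new
            ln_g[e] += ln_f                    # update g at the current level
            hist[e] += 1
        if min(hist) > flatness * sum(hist) / n_levels:
            hist = [0] * n_levels              # histogram flat: halve the
            ln_f *= 0.5                        # modification factor
    # Normalise so the total number of states equals 2**n_spins
    top = max(ln_g)
    shift = top + math.log(sum(math.exp(x - top) for x in ln_g))
    return [x - shift + n_spins * math.log(2) for x in ln_g]

ln_g = wang_landau()
err = abs(ln_g[5] - math.log(math.comb(10, 5)))  # compare with exact ln C(10,5)
print(err)
```

Once ln g(E) is known, thermodynamic properties at any temperature follow from reweighting, which is why the method can resolve the collapse and folding "transitions" described above.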

  19. Monte Carlo simulations of the HP model (the "Ising model" of protein folding).

    PubMed

    Li, Ying Wai; Wüst, Thomas; Landau, David P

    2011-09-01

    Using Wang-Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined) we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. For free chains these proteins are known to first undergo a collapse "transition" to a globule state followed by a second "transition" into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of "transitions". These transitions depend upon the relative interaction strengths and are largely inaccessible to "standard" Monte Carlo methods.

  20. A New Method for the Calculation of Diffusion Coefficients with Monte Carlo

    NASA Astrophysics Data System (ADS)

    Dorval, Eric

    2014-06-01

    This paper presents a new Monte Carlo-based method for the calculation of diffusion coefficients. One distinctive feature of this method is that it does not resort to the computation of transport cross sections directly, although their functional form is retained. Instead, a special type of tally derived from a deterministic estimate of Fick's Law is used for tallying the total cross section, which is then combined with a set of other standard Monte Carlo tallies. Some properties of this method are presented by means of numerical examples for a multi-group 1-D implementation. Calculated diffusion coefficients are in general good agreement with values obtained by other methods.

  1. Monte Carlo based angular distribution estimation method of multiply scattered photons for underwater imaging

    NASA Astrophysics Data System (ADS)

    Li, Shengfu; Chen, Guanghua; Wang, Rongbo; Luo, Zhengxiong; Peng, Qixian

    2016-12-01

    This paper proposes a Monte Carlo (MC) based angular distribution estimation method of multiply scattered photons for underwater imaging. This method targets turbid waters. Our method is based on applying typical Monte Carlo ideas to the present problem by combining all the points on a spherical surface. The proposed method is validated against the numerical solution of the radiative transfer equation (RTE). The simulation results, based on typical optical parameters of turbid waters, show that the proposed method is effective in terms of computational speed and sensitivity.

  2. An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis

    SciTech Connect

    William R. Martin; John C. Lee

    2009-12-30

    Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.

  3. A proposal for a standard interface between Monte Carlo tools and one-loop programs

    SciTech Connect

    Binoth, T.; Boudjema, F.; Dissertori, G.; Lazopoulos, A.; Denner, A.; Dittmaier, S.; Frederix, R.; Greiner, N.; Hoche, S.; Giele, W.; Skands, P.

    2010-01-01

    Many highly developed Monte Carlo tools for the evaluation of cross sections based on tree matrix elements exist and are used by experimental collaborations in high energy physics. As the evaluation of one-loop matrix elements has recently been undergoing enormous progress, the combination of one-loop matrix elements with existing Monte Carlo tools is on the horizon. This would lead to phenomenological predictions at the next-to-leading order level. This note summarizes the discussion of the next-to-leading order multi-leg (NLM) working group on this issue which has been taking place during the workshop on Physics at TeV colliders at Les Houches, France, in June 2009. The result is a proposal for a standard interface between Monte Carlo tools and one-loop matrix element programs.

  4. A Proposal for a Standard Interface Between Monte Carlo Tools And One-Loop Programs

    SciTech Connect

    Binoth, T.; Boudjema, F.; Dissertori, G.; Lazopoulos, A.; Denner, A.; Dittmaier, S.; Frederix, R.; Greiner, N.; Hoeche, Stefan; Giele, W.; Skands, P.; Winter, J.; Gleisberg, T.; Archibald, J.; Heinrich, G.; Krauss, F.; Maitre, D.; Huber, M.; Huston, J.; Kauer, N.; Maltoni, F.; /Louvain U., CP3 /Milan Bicocca U. /INFN, Turin /Turin U. /Granada U., Theor. Phys. Astrophys. /CERN /NIKHEF, Amsterdam /Heidelberg U. /Oxford U., Theor. Phys.

    2011-11-11

    Many highly developed Monte Carlo tools for the evaluation of cross sections based on tree matrix elements exist and are used by experimental collaborations in high energy physics. As the evaluation of one-loop matrix elements has recently been undergoing enormous progress, the combination of one-loop matrix elements with existing Monte Carlo tools is on the horizon. This would lead to phenomenological predictions at the next-to-leading order level. This note summarises the discussion of the next-to-leading order multi-leg (NLM) working group on this issue which has been taking place during the workshop on Physics at TeV Colliders at Les Houches, France, in June 2009. The result is a proposal for a standard interface between Monte Carlo tools and one-loop matrix element programs.

  5. Monte Carlo Analysis of Quantum Transport and Fluctuations in Semiconductors.

    DTIC Science & Technology

    1986-02-18

    The present report contains technical matter related to the research performed on two different subjects. The first part concerns quantum methods to quantum transport within the Liouville formulation. The second part concerns fluctuations of carrier velocities and energies both in...interactions) on the transport properties. Keywords: Monte Carlo; Charge Transport; Quantum Transport; Fluctuations; Semiconductor Physics; Master Equation.

  6. Monte Carlo simulation by computer for life-cycle costing

    NASA Technical Reports Server (NTRS)

    Gralow, F. H.; Larson, W. J.

    1969-01-01

    Prediction of behavior and support requirements during the entire life cycle of a system enables accurate cost estimates by using the Monte Carlo simulation by computer. The system reduces the ultimate cost to the procuring agency because it takes into consideration the costs of initial procurement, operation, and maintenance.
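
The approach above can be sketched as a simple cost simulation: draw uncertain procurement, operation, and maintenance costs over the life cycle and repeat many times to obtain the cost distribution. All figures and failure rates below are illustrative placeholders, not data from the report:

```python
import random

def life_cycle_cost(rng, years=15):
    """One Monte Carlo draw of total life-cycle cost: procurement up front,
    annual operating cost, plus randomly occurring maintenance events."""
    cost = rng.uniform(90, 110)                           # procurement
    for _ in range(years):
        cost += rng.uniform(8, 12)                        # operation per year
        failures = sum(rng.random() < 0.3 for _ in range(4))  # quarterly checks
        cost += failures * rng.uniform(2, 6)              # repair per failure
    return cost

rng = random.Random(0)
draws = [life_cycle_cost(rng) for _ in range(20000)]
mean = sum(draws) / len(draws)
p90 = sorted(draws)[int(0.9 * len(draws))]
print(mean, p90)
```

The percentile, not just the mean, is the useful output here: it is what lets a procuring agency budget against unfavourable maintenance histories.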

  7. MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD

    EPA Science Inventory

    A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...

  8. The Metropolis Monte Carlo Method in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Landau, David P.

    2003-11-01

    A brief overview is given of some of the advances in statistical physics that have been made using the Metropolis Monte Carlo method. By complementing theory and experiment, these have increased our understanding of phase transitions and other phenomena in condensed matter systems. A brief description of a new method, commonly known as "Wang-Landau sampling," will also be presented.
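
The Metropolis method surveyed above can be sketched in a few lines for the 2D Ising model, the workhorse system for the phase-transition studies mentioned. Lattice size, temperatures, and sweep counts below are arbitrary choices for a minimal demonstration:

```python
import math
import random

def metropolis_ising(L=8, T=1.0, sweeps=400, seed=7):
    """Metropolis sampling of the 2D Ising model on an L x L periodic lattice.
    Returns the mean absolute magnetisation per spin."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]
    def local_field(i, j):
        return (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                + s[i][(j + 1) % L] + s[i][(j - 1) % L])
    m_acc = 0.0
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            dE = 2 * s[i][j] * local_field(i, j)  # energy change for a flip
            # Metropolis criterion: accept with probability min(1, exp(-dE/T))
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
        if sweep >= sweeps // 2:                   # discard burn-in
            m_acc += abs(sum(map(sum, s))) / (L * L)
    return m_acc / (sweeps - sweeps // 2)

# Below T_c ~ 2.27 the lattice is ordered; far above it is disordered.
m_cold, m_hot = metropolis_ising(T=1.0), metropolis_ising(T=5.0)
print(m_cold, m_hot)
```

Wang-Landau sampling, mentioned at the end of the abstract, replaces the fixed-temperature acceptance ratio with one built from a running density-of-states estimate.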

  9. Quantum Monte Carlo simulation of topological phase transitions

    NASA Astrophysics Data System (ADS)

    Yamamoto, Arata; Kimura, Taro

    2016-12-01

    We study the electron-electron interaction effects on topological phase transitions by the ab initio quantum Monte Carlo simulation. We analyze two-dimensional class A topological insulators and three-dimensional Weyl semimetals with the long-range Coulomb interaction. The direct computation of the Chern number shows the electron-electron interaction modifies or extinguishes topological phase transitions.

  10. The Use of Monte Carlo Techniques to Teach Probability.

    ERIC Educational Resources Information Center

    Newell, G. J.; MacFarlane, J. D.

    1985-01-01

    Presents sports-oriented examples (cricket and football) in which Monte Carlo methods are used on microcomputers to teach probability concepts. Both examples include computer programs (with listings) which utilize the microcomputer's random number generator. Instructional strategies, with further challenges to help students understand the role of…

  11. Error estimations and their biases in Monte Carlo eigenvalue calculations

    SciTech Connect

    Ueki, Taro; Mori, Takamasa; Nakagawa, Masayuki

    1997-01-01

    In the Monte Carlo eigenvalue calculation of neutron transport, the eigenvalue is calculated as the average of multiplication factors from cycles, which are called the cycle k_eff's. Biases in the estimators of the variance and intercycle covariances in Monte Carlo eigenvalue calculations are analyzed. The relations among the real and apparent values of variances and intercycle covariances are derived, where real refers to a true value that is calculated from independently repeated Monte Carlo runs and apparent refers to the expected value of estimates from a single Monte Carlo run. Next, iterative methods based on the foregoing relations are proposed to estimate the standard deviation of the eigenvalue. The methods work well for the cases in which the ratios of the real to apparent values of variances are between 1.4 and 3.1. Even in the case where the foregoing ratio is >5, >70% of the standard deviation estimates fall within 40% from the true value.
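
The real-versus-apparent variance gap described above can be demonstrated numerically. As a stand-in for intercycle correlation, the sketch below generates cycle k_eff estimates from an AR(1) process (an assumption for illustration, not the paper's transport model) and compares the naive single-run error estimate with the spread over independent runs:

```python
import math
import random

def simulate_cycle_keffs(n_cycles, rho, seed):
    """Correlated cycle k_eff estimates as an AR(1) sequence with unit
    marginal variance, scaled to fluctuate around k_eff = 1.0."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n_cycles):
        x = rho * x + rng.gauss(0.0, math.sqrt(1 - rho * rho))
        out.append(1.0 + 0.001 * x)
    return out

def apparent_sd_of_mean(ks):
    """Naive estimator that treats cycles as independent (biased low here)."""
    n = len(ks)
    m = sum(ks) / n
    var = sum((k - m) ** 2 for k in ks) / (n - 1)
    return math.sqrt(var / n)

n, rho = 500, 0.8
apparent = apparent_sd_of_mean(simulate_cycle_keffs(n, rho, seed=0))
# "Real" SD: spread of the mean over many independent runs
means = [sum(simulate_cycle_keffs(n, rho, seed=s)) / n for s in range(1, 201)]
mbar = sum(means) / len(means)
real = math.sqrt(sum((m - mbar) ** 2 for m in means) / (len(means) - 1))
print(real / apparent)  # well above 1 when cycles are correlated
```

For an AR(1) sequence the variance ratio is (1+rho)/(1-rho), i.e. a factor of 9 here, so the naive standard deviation is roughly three times too small, in the spirit of the biases the paper quantifies.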

  12. Diffuse photon density wave measurements and Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kuzmin, Vladimir L.; Neidrauer, Michael T.; Diaz, David; Zubkov, Leonid A.

    2015-10-01

    Diffuse photon density wave (DPDW) methodology is widely used in a number of biomedical applications. Here, we present results of Monte Carlo simulations that employ an effective numerical procedure based upon a description of radiative transfer in terms of the Bethe-Salpeter equation. A multifrequency noncontact DPDW system was used to measure aqueous solutions of intralipid at a wide range of source-detector separation distances, at which the diffusion approximation of the radiative transfer equation is generally considered to be invalid. We find that the signal-noise ratio is larger for the considered algorithm in comparison with the conventional Monte Carlo approach. Experimental data are compared to the Monte Carlo simulations using several values of scattering anisotropy and to the diffusion approximation. Both the Monte Carlo simulations and diffusion approximation were in very good agreement with the experimental data for a wide range of source-detector separations. In addition, measurements with different wavelengths were performed to estimate the size and scattering anisotropy of scatterers.

  13. Calculating coherent pair production with Monte Carlo methods

    SciTech Connect

    Bottcher, C.; Strayer, M.R.

    1989-01-01

    We discuss calculations of the coherent electromagnetic pair production in ultra-relativistic hadron collisions. This type of production, in lowest order, is obtained from three diagrams which contain two virtual photons. We discuss simple Monte Carlo methods for evaluating these classes of diagrams without recourse to involved algebraic reduction schemes. 19 refs., 11 figs.
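
The core tool here, direct Monte Carlo evaluation of a multidimensional integral without algebraic reduction, can be sketched generically. The integrand below is a toy stand-in for a squared matrix element, chosen so the exact answer is known; it is not the two-photon diagram of the paper:

```python
import math
import random

def mc_integrate(f, dim, n, rng):
    """Plain Monte Carlo estimate of the integral of f over the unit
    hypercube, with a one-sigma statistical error estimate."""
    total = total_sq = 0.0
    for _ in range(n):
        v = f([rng.random() for _ in range(dim)])
        total += v
        total_sq += v * v
    mean = total / n
    var = total_sq / n - mean * mean
    return mean, math.sqrt(var / n)

# Toy integrand; the exact 4D integral factorises into
# (integral of exp(-x^2) over [0,1])**4 = (sqrt(pi)/2 * erf(1))**4
f = lambda x: math.exp(-sum(xi * xi for xi in x))
rng = random.Random(0)
est, err = mc_integrate(f, dim=4, n=200000, rng=rng)
print(est, err)
```

The attraction for diagram evaluation is that the cost grows only mildly with dimension, so sums over two virtual photon momenta can be handled without symbolic reduction.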

  14. A Monte Carlo simulation of a supersaturated sodium chloride solution

    NASA Astrophysics Data System (ADS)

    Schwendinger, Michael G.; Rode, Bernd M.

    1989-03-01

    A simulation of a supersaturated sodium chloride solution with the Monte Carlo statistical thermodynamic method is reported. The water-water interactions are described by the Matsuoka-Clementi-Yoshimine (MCY) potential, while the ion-water potentials have been derived from ab initio calculations. Structural features of the solution have been evaluated, special interest being focused on possible precursors of nucleation.

  15. Monte Carlo capabilities of the SCALE code system

    DOE PAGES

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  16. Monte Carlo capabilities of the SCALE code system

    SciTech Connect

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; Bekar, Kursat B.; Wiarda, Dorothea; Celik, Cihangir; Perfetti, Christopher M.; Ibrahim, Ahmad M.; Hart, S. W. D.; Dunn, Michael E.; Marshall, William J.

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  17. CMS Monte Carlo production operations in a distributed computing environment

    SciTech Connect

    Mohapatra, A.; Lazaridis, C.; Hernandez, J.M.; Caballero, J.; Hof, C.; Kalinin, S.; Flossdorf, A.; Abbrescia, M.; De Filippis, N.; Donvito, G.; Maggi, G.; /Bari U. /INFN, Bari /INFN, Pisa /Vrije U., Brussels /Brussels U. /Imperial Coll., London /CERN /Princeton U. /Fermilab

    2008-01-01

    Monte Carlo production for the CMS experiment is carried out in a distributed computing environment; the goal of producing 30M simulated events per month in the first half of 2007 has been reached. A brief overview of the production operations and statistics is presented.

  18. Bayesian internal dosimetry calculations using Markov Chain Monte Carlo.

    PubMed

    Miller, G; Martz, H F; Little, T T; Guilmette, R

    2002-01-01

    A new numerical method for solving the inverse problem of internal dosimetry is described. The new method uses Markov Chain Monte Carlo and the Metropolis algorithm. Multiple intake amounts, biokinetic types, and times of intake are determined from bioassay data by integrating over the Bayesian posterior distribution. The method appears definitive, but its application requires a large amount of computing time.
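
The Metropolis-based approach above can be sketched for the simplest version of the inverse problem: a single unknown intake amount inferred from bioassay data. The exponential retention function, the flat prior, and all numbers below are illustrative assumptions, not the biokinetic models of the paper:

```python
import math
import random

def log_posterior(intake, data, sigma):
    """Log posterior for one intake amount with an assumed retention function
    r(t) = exp(-0.1 t) and a flat positive prior (illustrative model only)."""
    if intake <= 0:
        return -math.inf
    lp = 0.0
    for t, y in data:
        pred = intake * math.exp(-0.1 * t)
        lp -= 0.5 * ((y - pred) / sigma) ** 2
    return lp

def metropolis(data, sigma, n_steps=20000, step=0.5, seed=1):
    """Random-walk Metropolis sampling of the intake posterior."""
    rng = random.Random(seed)
    x = 1.0
    lp = log_posterior(x, data, sigma)
    samples = []
    for i in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_posterior(x_new, data, sigma)
        if math.log(rng.random() + 1e-300) < lp_new - lp:
            x, lp = x_new, lp_new
        if i >= n_steps // 2:          # discard the first half as burn-in
            samples.append(x)
    return samples

# Synthetic bioassay data generated with a true intake of 5.0
rng = random.Random(2)
data = [(t, 5.0 * math.exp(-0.1 * t) + rng.gauss(0, 0.2)) for t in range(0, 30, 3)]
samples = metropolis(data, sigma=0.2)
post_mean = sum(samples) / len(samples)
print(post_mean)
```

The paper's harder setting, in which intake times and biokinetic types are also unknown, extends the state vector of the same walk; that enlarged integration is what drives the computing time noted above.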

  19. Play It Again: Teaching Statistics with Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Sigal, Matthew J.; Chalmers, R. Philip

    2016-01-01

    Monte Carlo simulations (MCSs) provide important information about statistical phenomena that would be impossible to assess otherwise. This article introduces MCS methods and their applications to research and statistical pedagogy using a novel software package for the R Project for Statistical Computing constructed to lessen the often steep…

  20. Observations on variational and projector Monte Carlo methods.

    PubMed

    Umrigar, C J

    2015-10-28

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.
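
Variational Monte Carlo, the starting point of the unified presentation above, can be sketched on the 1D harmonic oscillator with a Gaussian trial wavefunction, a standard textbook example rather than anything specific to this paper:

```python
import math
import random

def local_energy(x, a):
    """Local energy for trial wavefunction psi = exp(-a x^2) in the 1D
    harmonic oscillator (hbar = m = omega = 1): E_L = a + x^2 (1/2 - 2 a^2)."""
    return a + x * x * (0.5 - 2.0 * a * a)

def vmc_energy(a, n_steps=20000, step=1.0, seed=5):
    """Metropolis walk distributed as |psi|^2, averaging the local energy."""
    rng = random.Random(seed)
    x = 0.0
    e_acc, n_kept = 0.0, 0
    for i in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # |psi(x_new)|^2 / |psi(x)|^2 = exp(-2a (x_new^2 - x^2))
        if math.log(rng.random() + 1e-300) < -2.0 * a * (x_new**2 - x**2):
            x = x_new
        if i >= n_steps // 2:          # discard the first half as burn-in
            e_acc += local_energy(x, a)
            n_kept += 1
    return e_acc / n_kept

# a = 0.5 is the exact ground state: energy 0.5 with zero variance.
# A suboptimal parameter gives a strictly higher variational energy.
print(vmc_energy(0.5), vmc_energy(0.3))
```

Projector methods discussed in the paper go further: instead of averaging over a fixed trial distribution, the walk itself is evolved so that it projects onto the ground state, which is where the sign problem and fixed-node issues enter.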

  1. Automated Monte Carlo Simulation of Proton Therapy Treatment Plans.

    PubMed

    Verburg, Joost Mathijs; Grassberger, Clemens; Dowdell, Stephen; Schuemann, Jan; Seco, Joao; Paganetti, Harald

    2016-12-01

    Simulations of clinical proton radiotherapy treatment plans using general purpose Monte Carlo codes have been proven to be a valuable tool for basic research and clinical studies. They have been used to benchmark dose calculation methods, to study radiobiological effects, and to develop new technologies such as in vivo range verification methods. Advancements in the availability of computational power have made it feasible to perform such simulations on large sets of patient data, resulting in a need for automated and consistent simulations. A framework called MCAUTO was developed for this purpose. Both passive scattering and pencil beam scanning delivery are supported. The code handles the data exchange between the treatment planning system and the Monte Carlo system, which requires not only transfer of plan and imaging information but also translation of institutional procedures, such as output factor definitions. Simulations are performed on a high-performance computing infrastructure. The simulation methods were designed to use the full capabilities of Monte Carlo physics models, while also ensuring consistency in the approximations that are common to both pencil beam and Monte Carlo dose calculations. Although some methods need to be tailored to institutional planning systems and procedures, the described procedures show a general road map that can be easily translated to other systems.

  2. Monte Carlo method for magnetic impurities in metals

    NASA Technical Reports Server (NTRS)

    Hirsch, J. E.; Fye, R. M.

    1986-01-01

The paper discusses a Monte Carlo algorithm to study properties of dilute magnetic alloys; the method can treat a small number of magnetic impurities interacting with the conduction electrons in a metal. Results for the susceptibility of a single Anderson impurity in the symmetric case show the expected universal behavior at low temperatures. Some results for two Anderson impurities are also discussed.

  3. An Overview of the Monte Carlo Methods, Codes, & Applications Group

    SciTech Connect

    Trahan, Travis John

    2016-08-30

    This report sketches the work of the Group to deliver first-principle Monte Carlo methods, production quality codes, and radiation transport-based computational and experimental assessments using the codes MCNP and MCATK for such applications as criticality safety, non-proliferation, nuclear energy, nuclear threat reduction and response, radiation detection and measurement, radiation health protection, and stockpile stewardship.

  4. Parallel Monte Carlo simulation of multilattice thin film growth

    NASA Astrophysics Data System (ADS)

    Shu, J. W.; Lu, Qin; Wong, Wai-on; Huang, Han-chen

    2001-07-01

This paper describes a new parallel algorithm for the multi-lattice Monte Carlo atomistic simulator for thin film deposition (ADEPT), implemented on a parallel computer using the PVM (Parallel Virtual Machine) message passing library. This parallel algorithm is based on domain decomposition with overlapping and asynchronous communication. Multiple lattices are represented by a single reference lattice through one-to-one mappings, with resulting computational demands being comparable to those in the single-lattice Monte Carlo model. Asynchronous communication and domain overlapping techniques are used to reduce the waiting time and communication time among parallel processors. Results show that the algorithm is highly efficient with a large number of processors. The algorithm was implemented on a parallel machine with 50 processors, and it is suitable for parallel Monte Carlo simulation of thin film growth on either a distributed memory parallel computer or a shared memory machine with message passing libraries. In this paper, the significant communication time in parallel MC simulation of thin film growth is effectively reduced by adopting domain decomposition with overlapping between sub-domains and asynchronous communication among processors. The communication overhead does not increase appreciably, and the speedup rises as the number of processors increases. A near-linear increase in computing speed was achieved as the number of processors increased, and there is no theoretical limit on the number of processors to be used. The techniques developed in this work are also suitable for the implementation of the Monte Carlo code on other parallel systems.

  5. Monte Carlo methods for multidimensional integration for European option pricing

    NASA Astrophysics Data System (ADS)

    Todorov, V.; Dimov, I. T.

    2016-10-01

In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; then the average of independent samples of this random variable is used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk neutral valuation formula. Then, with an appropriate change of variables, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. Then we compare a specific type of lattice rule with one of the best low-discrepancy sequences, the Sobol sequence, for numerical integration. Quasi-Monte Carlo methods are compared with Adaptive and Crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is of interest to know which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
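The crude Monte Carlo estimator of the risk-neutral expectation can be sketched in Python. This is illustrative only (the paper's lattice-rule and Sobol comparisons are not reproduced), and the market parameters below are hypothetical.

```python
import math
import random

def mc_european_call(S0, K, r, sigma, T, n_paths, seed=1):
    """Crude Monte Carlo price of a European call under risk-neutral
    geometric Brownian motion: average the discounted payoff over paths."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma * sigma) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n_paths):
        s_t = S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(s_t - K, 0.0)
    return math.exp(-r * T) * total / n_paths

# Hypothetical parameters; the Black-Scholes value here is about 10.45.
price = mc_european_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                         n_paths=200000)
```

The error of this estimator falls as O(n^(-1/2)) regardless of dimension, which is the property that quasi-Monte Carlo sequences aim to beat.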

  6. Monte Carlo study of the atmospheric spread function

    NASA Technical Reports Server (NTRS)

    Pearce, W. A.

    1986-01-01

    Monte Carlo radiative transfer simulations are used to study the atmospheric spread function appropriate to satellite-based sensing of the earth's surface. The parameters which are explored include the nadir angle of view, the size distribution of the atmospheric aerosol, and the aerosol vertical profile.

  7. Diffuse photon density wave measurements and Monte Carlo simulations.

    PubMed

    Kuzmin, Vladimir L; Neidrauer, Michael T; Diaz, David; Zubkov, Leonid A

    2015-10-01

Diffuse photon density wave (DPDW) methodology is widely used in a number of biomedical applications. Here, we present results of Monte Carlo simulations that employ an effective numerical procedure based upon a description of radiative transfer in terms of the Bethe–Salpeter equation. A multifrequency noncontact DPDW system was used to measure aqueous solutions of intralipid at a wide range of source–detector separation distances, at which the diffusion approximation of the radiative transfer equation is generally considered to be invalid. We find that the signal-to-noise ratio is larger for the considered algorithm in comparison with the conventional Monte Carlo approach. Experimental data are compared to the Monte Carlo simulations using several values of scattering anisotropy and to the diffusion approximation. Both the Monte Carlo simulations and the diffusion approximation were in very good agreement with the experimental data for a wide range of source–detector separations. In addition, measurements with different wavelengths were performed to estimate the size and scattering anisotropy of scatterers.

  8. Monte Carlo Approach for Reliability Estimations in Generalizability Studies.

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.

    A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…
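The kind of probabilistic score generation described can be sketched with a Rasch-type response model, shown here in Python rather than SAS. The model form, the person-ability distribution, and the item difficulties are all assumptions for illustration, not taken from the study.

```python
import math
import random

def simulate_scores(abilities, difficulties, rng):
    """Generate number-correct scores from a Rasch-type model:
    P(person answers item correctly) = 1 / (1 + exp(-(ability - difficulty)))."""
    scores = []
    for theta in abilities:
        correct = 0
        for b in difficulties:
            p = 1.0 / (1.0 + math.exp(-(theta - b)))
            if rng.random() < p:
                correct += 1
        scores.append(correct)
    return scores

rng = random.Random(1)
abilities = [rng.gauss(0.0, 1.0) for _ in range(500)]   # person abilities
difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]              # item difficulties
scores = simulate_scores(abilities, difficulties, rng)
# With items symmetric about 0 and abilities ~ N(0, 1), the expected
# number-correct score is about half the test length, i.e. about 2.5.
mean_score = sum(scores) / len(scores)
```

Reliability coefficients are then estimated by repeating such generation many times and analyzing the variance components of the simulated scores.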

  9. Testing Dependent Correlations with Nonoverlapping Variables: A Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Silver, N. Clayton; Hittner, James B.; May, Kim

    2004-01-01

The authors conducted a Monte Carlo simulation of 4 test statistics for comparing dependent correlations with no variables in common. Empirical Type I error rates and power estimates were determined for K. Pearson and L. N. G. Filon's (1898) z, O. J. Dunn and V. A. Clark's (1969) z, J. H. Steiger's (1980) original modification of Dunn and Clark's…

  10. Exploring Mass Perception with Markov Chain Monte Carlo

    ERIC Educational Resources Information Center

    Cohen, Andrew L.; Ross, Michael G.

    2009-01-01

Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…

  11. Monte Carlo Estimation of the Electric Field in Stellarators

    NASA Astrophysics Data System (ADS)

    Bauer, F.; Betancourt, O.; Garabedian, P.; Ng, K. C.

    1986-10-01

    The BETA computer codes have been developed to study ideal magnetohydrodynamic equilibrium and stability of stellarators and to calculate neoclassical transport for electrons as well as ions by the Monte Carlo method. In this paper a numerical procedure is presented to select resonant terms in the electric potential so that the distribution functions and confinement times of the ions and electrons become indistinguishable.

  12. Impact of random numbers on parallel Monte Carlo application

    SciTech Connect

    Pandey, Ras B.

    2002-10-22

A number of graduate students are involved at various levels of research in this project. We investigate the basic issues in materials using Monte Carlo simulations with specific interest in heterogeneous materials. Attempts have been made to seek collaborations with the DOE laboratories. Specific details are given.

  13. Improved geometry representations for Monte Carlo radiation transport.

    SciTech Connect

    Martin, Matthew Ryan

    2004-08-01

    ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.

  14. SABRINA: an interactive solid geometry modeling program for Monte Carlo

    SciTech Connect

    West, J.T.

    1985-01-01

    SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body geometry, or surface geometry models, and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo Analysis.

  15. A Monte Carlo photocurrent/photoemission computer program

    NASA Technical Reports Server (NTRS)

    Chadsey, W. L.; Ragona, C.

    1972-01-01

    A Monte Carlo computer program was developed for the computation of photocurrents and photoemission in gamma (X-ray)-irradiated materials. The program was used for computation of radiation-induced surface currents on space vehicles and the computation of radiation-induced space charge environments within space vehicles.

  16. On a full Monte Carlo approach to quantum mechanics

    NASA Astrophysics Data System (ADS)

    Sellier, J. M.; Dimov, I.

    2016-12-01

The Monte Carlo approach to numerical problems has been shown to be remarkably efficient in performing very large computational tasks since it is an embarrassingly parallel technique. Additionally, Monte Carlo methods are well known to maintain performance and accuracy as the dimensionality of a given problem increases, a rather counterintuitive peculiarity not shared by any known deterministic method. Motivated by these very peculiar and desirable computational features, in this work we depict a full Monte Carlo approach to the problem of simulating single- and many-body quantum systems by means of signed particles. In particular we introduce a stochastic technique, based on the strategy known as importance sampling, for the computation of the Wigner kernel which, so far, has represented the main bottleneck of this method (it is equivalent to the calculation of a multi-dimensional integral, a problem whose complexity is known to grow exponentially with the dimensions of the problem). The introduction of this stochastic technique for the kernel is twofold: firstly, it reduces the complexity of a quantum many-body simulation from non-linear to linear; secondly, it introduces an embarrassingly parallel approach to this very demanding problem. To conclude, we perform concise but indicative numerical experiments which clearly illustrate how a full Monte Carlo approach to many-body quantum systems is not only possible but also advantageous. This paves the way towards practical time-dependent, first-principle simulations of relatively large quantum systems by means of affordable computational resources.
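The importance-sampling strategy mentioned, concentrating samples where the integrand is large and reweighting by the proposal density, can be demonstrated on a 1-D toy integral in Python. This sketch is unrelated to the Wigner kernel itself; the peaked Gaussian integrand and proposal parameters are assumptions chosen for illustration.

```python
import math
import random

def integrand(x):
    # Sharply peaked near x = 0.5, essentially zero elsewhere on [0, 1].
    return math.exp(-50.0 * (x - 0.5) ** 2) if 0.0 <= x <= 1.0 else 0.0

def crude_mc(n, rng):
    """Uniform sampling: most points land where the integrand is tiny."""
    return sum(integrand(rng.random()) for _ in range(n)) / n

def importance_mc(n, rng, mu=0.5, sig=0.15):
    """Sample from a Gaussian concentrated under the peak and reweight
    each draw by the proposal density q(x)."""
    norm = 1.0 / (sig * math.sqrt(2.0 * math.pi))
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sig)
        q = norm * math.exp(-0.5 * ((x - mu) / sig) ** 2)
        total += integrand(x) / q
    return total / n

rng = random.Random(2)
crude = crude_mc(20000, rng)
est = importance_mc(20000, rng)
# Exact value of the integral is sqrt(pi/50), about 0.2507; the importance
# estimator has much lower variance because the weights stay bounded.
```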

  17. Dark Photon Monte Carlo at SeaQuest

    NASA Astrophysics Data System (ADS)

    Hicks, Caleb; SeaQuest/E906 Collaboration

    2016-09-01

Fermi National Accelerator Laboratory's E906/SeaQuest is an experiment primarily designed to study the ratio of anti-down to anti-up quarks in the nucleon quark sea as a function of Bjorken x. SeaQuest's measurement is obtained by measuring the muon pairs produced by the Drell-Yan process. The experiment can also search for muon pair vertices past the target and beam dump, which would be a signature of Dark Photon decay. It is therefore necessary to run Monte Carlo simulations to determine how a changed Z vertex affects the detection and distribution of muon pairs using SeaQuest's detectors. SeaQuest has an existing Monte Carlo program that has been used for simulations of the Drell-Yan process as well as J/psi decay and other processes. The Monte Carlo program was modified to use a fixed Z vertex when generating muon pairs. Events were generated with varying Z vertices and the resulting simulations were then analyzed. This work focuses on the results of the Monte Carlo simulations and the effects on Dark Photon detection. This research was supported by US DOE MENP Grant DE-FG02-03ER41243.

  18. Parallel canonical Monte Carlo simulations through sequential updating of particles

    NASA Astrophysics Data System (ADS)

    O'Keeffe, C. J.; Orkoulas, G.

    2009-04-01

    In canonical Monte Carlo simulations, sequential updating of particles is equivalent to random updating due to particle indistinguishability. In contrast, in grand canonical Monte Carlo simulations, sequential implementation of the particle transfer steps in a dense grid of distinct points in space improves both the serial and the parallel efficiency of the simulation. The main advantage of sequential updating in parallel canonical Monte Carlo simulations is the reduction in interprocessor communication, which is usually a slow process. In this work, we propose a parallelization method for canonical Monte Carlo simulations via domain decomposition techniques and sequential updating of particles. Each domain is further divided into a middle and two outer sections. Information exchange is required after the completion of the updating of the outer regions. During the updating of the middle section, communication does not occur unless a particle moves out of this section. Results on two- and three-dimensional Lennard-Jones fluids indicate a nearly perfect improvement in parallel efficiency for large systems.
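Sequential updating in a canonical-ensemble simulation can be sketched in Python for non-interacting particles in harmonic wells, a minimal stand-in (my assumption, for brevity) for the Lennard-Jones systems studied; the particle count and sweep counts are arbitrary.

```python
import math
import random

def sweep_sequential(xs, beta, step, rng):
    """One canonical Monte Carlo sweep visiting particles in fixed order.
    Each (non-interacting) particle sits in a harmonic well U(x) = x^2/2."""
    for i in range(len(xs)):
        x_new = xs[i] + rng.uniform(-step, step)
        dU = 0.5 * (x_new * x_new - xs[i] * xs[i])
        # Standard Metropolis acceptance for the canonical ensemble.
        if dU <= 0.0 or rng.random() < math.exp(-beta * dU):
            xs[i] = x_new

rng = random.Random(3)
xs = [0.0] * 50
for _ in range(500):                      # equilibration sweeps
    sweep_sequential(xs, beta=1.0, step=1.0, rng=rng)
u_total, n_meas = 0.0, 2000
for _ in range(n_meas):                   # measurement sweeps
    sweep_sequential(xs, beta=1.0, step=1.0, rng=rng)
    u_total += sum(0.5 * x * x for x in xs)
mean_u = u_total / n_meas  # equipartition: <U> = N/(2*beta) = 25
```

Because the visit order is fixed, a parallel version can assign contiguous blocks of the sweep to different domains, which is the communication-saving structure the abstract describes.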

  20. A Variational Monte Carlo Approach to Atomic Structure

    ERIC Educational Resources Information Center

    Davis, Stephen L.

    2007-01-01

The practicality and usefulness of variational Monte Carlo calculations for atomic structure are demonstrated. The method is found to succeed in quantitatively illustrating electron shielding, effective nuclear charge, the l-dependence of the orbital energies, and singlet-triplet energy splitting and ionization energy trends in atomic structure theory.

  1. Determining MTF of digital detector system with Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Jeong, Eun Seon; Lee, Hyung Won; Nam, Sang Hee

    2005-04-01

We have designed a detector based on a-Se (amorphous selenium) and simulated the detector with the Monte Carlo method. We will apply cascaded linear system theory to determine the MTF of the whole detector system. For direct comparison with experiment, we have simulated a 139 um pixel pitch and used a simulated X-ray tube spectrum.

  2. A Monte Carlo Approach for Adaptive Testing with Content Constraints

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander

    2008-01-01

    This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…

  3. Monte Carlo Capabilities of the SCALE Code System

    NASA Astrophysics Data System (ADS)

    Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.

    2014-06-01

    SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  4. Testing the Intervention Effect in Single-Case Experiments: A Monte Carlo Simulation Study

    ERIC Educational Resources Information Center

    Heyvaert, Mieke; Moeyaert, Mariola; Verkempynck, Paul; Van den Noortgate, Wim; Vervloet, Marlies; Ugille, Maaike; Onghena, Patrick

    2017-01-01

    This article reports on a Monte Carlo simulation study, evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test "p" values (RTcombiP). Four factors were manipulated: mean intervention effect,…

  5. A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis

    ERIC Educational Resources Information Center

    Edwards, Michael C.

    2010-01-01

    Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…

  6. Incorporation of Monte-Carlo Computer Techniques into Science and Mathematics Education.

    ERIC Educational Resources Information Center

    Danesh, Iraj

    1987-01-01

Described is a Monte-Carlo method for modeling physical systems with a computer. Also discussed are ways to incorporate Monte-Carlo simulation techniques into introductory science and mathematics teaching and to enrich computer and simulation courses. (RH)
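A classic classroom example of the Monte-Carlo method (a standard illustration, not taken from the article): estimating pi from the fraction of random points in the unit square that fall inside the quarter circle.

```python
import random

def estimate_pi(n, seed=0):
    """The area of the quarter circle is pi/4, so the hit fraction
    times 4 estimates pi."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

pi_hat = estimate_pi(100000)
```

The estimate converges like n^(-1/2), which itself makes a good classroom exercise: quadrupling the sample size only halves the error.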

  7. Fast Monte Carlo for radiation therapy: the PEREGRINE Project

    SciTech Connect

    Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.

    1997-11-11

The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently-used algorithms reveal significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.

  8. Global Monte Carlo Simulation with High Order Polynomial Expansions

    SciTech Connect

    William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin

    2007-12-13

The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as “local” piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi’s method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence.
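The core of the FET, estimating expansion coefficients as sample averages of basis functions over the random walk, can be sketched for a Legendre basis in Python. A simple 1-D density (my choice, for illustration) stands in for a fission source distribution.

```python
import math
import random

def legendre(n, x):
    """P_n(x) via the Bonnet recurrence:
    (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    p_prev, p_cur = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_cur = p_cur, ((2 * k + 1) * x * p_cur - k * p_prev) / (k + 1)
    return p_cur

def fet_tally(samples, order):
    """Estimate coefficients a_n = E[P_n(x)] as sample averages; the density
    is then reconstructed as sum_n (2n+1)/2 * a_n * P_n(x) on [-1, 1].
    The n = 0 coefficient is just the total normalization (flat mode)."""
    n_s = len(samples)
    return [sum(legendre(n, x) for x in samples) / n_s
            for n in range(order + 1)]

# Draw from f(x) = (1 + x)/2 on [-1, 1] by inverting its CDF, (x + 1)^2 / 4.
rng = random.Random(4)
samples = [2.0 * math.sqrt(rng.random()) - 1.0 for _ in range(100000)]
coeffs = fet_tally(samples, order=3)  # exact coefficients: 1, 1/3, 0, 0
```

For this linear density the expansion terminates at first order, so the higher estimated coefficients should be statistically consistent with zero.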

  9. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the

  10. A simple new way to help speed up Monte Carlo convergence rates: Energy-scaled displacement Monte Carlo

    NASA Astrophysics Data System (ADS)

    Goldman, Saul

    1983-10-01

A method we call energy-scaled displacement Monte Carlo (ESDMC), whose purpose is to improve sampling efficiency and thereby speed up convergence rates in Monte Carlo calculations, is presented. The method involves scaling the maximum displacement a particle may make on a trial move to the particle's configurational energy. The scaling is such that, on average, the most stable particles make the smallest moves and the most energetic particles the largest moves. The method is compared to Metropolis Monte Carlo (MMC) and Force Bias Monte Carlo (FBMC) by applying all three methods to a dense Lennard-Jones fluid at two temperatures, and to hot ST2 water. The functions monitored as the Markov chains developed were, for the Lennard-Jones case: melting, radial distribution functions, internal energies, and heat capacities. For hot ST2 water, we monitored energies and heat capacities. The results suggest that ESDMC samples configuration space more efficiently than either MMC or FBMC in these systems for the biasing parameters used here. The benefit from using ESDMC seemed greatest for the Lennard-Jones systems.
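The idea of scaling the trial displacement with the particle's configurational energy can be sketched in Python for a single particle in a harmonic well. Note that this generic version enforces detailed balance with a Metropolis-Hastings correction for the state-dependent (hence asymmetric) proposal; the actual ESDMC biasing prescription may differ, and the step parameters are assumptions.

```python
import math
import random

def esdmc_move(x, beta, rng, d_min=0.4, c=1.0):
    """One energy-scaled displacement move for U(x) = x^2/2: the maximum
    step grows with the particle's energy, so low-energy (stable) states
    make small moves and high-energy states make large ones."""
    step = d_min + c * 0.5 * x * x
    x_new = x + rng.uniform(-step, step)
    step_new = d_min + c * 0.5 * x_new * x_new
    if abs(x - x_new) > step_new:     # reverse move impossible: reject
        return x
    dU = 0.5 * (x_new * x_new - x * x)
    # Accept with min(1, e^{-beta*dU} * q(x_new->x) / q(x->x_new)),
    # where q(a->b) = 1 / (2 * step(a)) on the proposal interval.
    if rng.random() < min(1.0, math.exp(-beta * dU) * step / step_new):
        return x_new
    return x

rng = random.Random(6)
x = 0.0
for _ in range(5000):                 # equilibration
    x = esdmc_move(x, 1.0, rng)
n, acc = 400000, 0.0
for _ in range(n):
    x = esdmc_move(x, 1.0, rng)
    acc += x * x
mean_x2 = acc / n   # Boltzmann statistics give <x^2> = 1/beta
```

Without the proposal-ratio factor, the state-dependent step size would bias the sampled distribution, which is why any energy-scaled scheme needs a compensating acceptance rule.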

  11. A Preliminary Study of In-House Monte Carlo Simulations: An Integrated Monte Carlo Verification System

    SciTech Connect

    Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hidek; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki

    2009-10-01

    Purpose: To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. Methods and Materials: The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. Results: The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Conclusions: Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.

  12. Monte Carlo simulations for generic granite repository studies

    SciTech Connect

    Chu, Shaoping; Lee, Joon H; Wang, Yifeng

    2010-12-08

In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of a near-field and a far-field submodel for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated systems (GoldSim) near-field model. Integrated Monte Carlo model runs with the combined near- and far-field transport models were performed, and the parameter sensitivities were evaluated for the combined system. In addition, a subset of radionuclides that are potentially important to repository performance was identified and evaluated in a series of model runs. The analyses were conducted with different waste inventory scenarios. Analyses were also conducted for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of a strategy for long-term disposal of high-level radioactive waste in a granite repository.

  13. Gas-surface interactions using accommodation coefficients for a dilute and a dense gas in a micro- or nanochannel: heat flux predictions using combined molecular dynamics and Monte Carlo techniques.

    PubMed

    Nedea, S V; van Steenhoven, A A; Markvoort, A J; Spijker, P; Giordano, D

    2014-05-01

The influence of gas-surface interactions of a dilute gas confined between two parallel walls on heat flux predictions is investigated using a combined Monte Carlo (MC) and molecular dynamics (MD) approach. The accommodation coefficients are computed from the temperatures of incident and reflected molecules in molecular dynamics and used as effective coefficients in Maxwell-like boundary conditions in Monte Carlo simulations. Hydrophobic and hydrophilic wall interactions are studied, and the effect of the gas-surface interaction potential on the heat flux and other characteristic parameters, like density and temperature, is shown. The heat flux dependence on the accommodation coefficient is shown for different fluid-wall mass ratios. We find that the accommodation coefficient increases considerably as the mass ratio decreases. An effective map of the heat flux as a function of the accommodation coefficient is given, and we show that MC heat flux predictions using Maxwell boundary conditions based on the accommodation coefficient give good results when compared to pure molecular dynamics heat flux predictions. The accommodation coefficients computed for a dilute gas for different gas-wall interaction parameters and mass ratios are transferred to compute heat flux predictions for a dense gas. Comparisons of the heat fluxes derived using explicit MD, MC with Maxwell-like boundary conditions based on the accommodation coefficients, and pure Maxwell boundary conditions are discussed. A map of the heat flux dependence on the accommodation coefficients for a dense gas, and the effective accommodation coefficients for different gas-wall interactions, are given. Finally, this approach is applied to study the gas-surface interactions of argon and xenon molecules on a platinum surface. The derived accommodation coefficients are compared with experimental values.
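The MD-to-MC coupling described above can be sketched minimally, assuming the standard definition of the thermal accommodation coefficient and a simple Maxwell-type wall model; the function names and temperatures are illustrative, not from the paper:

```python
import numpy as np

def accommodation_coefficient(T_incident, T_reflected, T_wall):
    """Standard thermal accommodation coefficient computed from the mean
    temperatures of incident and reflected molecules (as extracted from MD)."""
    return (T_incident - T_reflected) / (T_incident - T_wall)

def maxwell_reflected_temperature(T_incident, T_wall, alpha, rng):
    """Maxwell-like boundary condition for the MC stage: with probability
    alpha the molecule is re-emitted diffusely at the wall temperature,
    otherwise it is reflected specularly with its energy unchanged."""
    if rng.random() < alpha:
        return T_wall
    return T_incident

rng = np.random.default_rng(0)
# full accommodation (alpha = 1) means reflected molecules leave at T_wall
alpha = accommodation_coefficient(450.0, 300.0, 300.0)
print(alpha)  # 1.0
```

In the paper's workflow, alpha would be fitted per gas-wall interaction potential and mass ratio in MD, then reused in much cheaper MC runs.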

  14. MONITOR- MONTE CARLO INVESTIGATION OF TRAJECTORY OPERATIONS AND REQUIREMENTS

    NASA Technical Reports Server (NTRS)

    Glass, A. B.

    1994-01-01

    The Monte Carlo Investigation of Trajectory Operations and Requirements (MONITOR) program was developed to perform spacecraft mission maneuver simulations for geosynchronous, single maneuver, and comet encounter type trajectories. MONITOR is a multifaceted program which enables the modeling of various orbital sequences and missions, the generation of Monte Carlo simulation statistics, and the parametric scanning of user requested variables over specified intervals. The MONITOR program has been used primarily to study geosynchronous missions and has the capability to model Space Shuttle deployed satellite trajectories. The ability to perform a Monte Carlo error analysis of user specified orbital parameters using predicted maneuver execution errors can make MONITOR a significant part of any mission planning and analysis system. The MONITOR program can be executed in four operational modes. In the first mode, analytic state covariance matrix propagation is performed using state transition matrices for the coasting and powered burn phases of the trajectory. A two-body central force field is assumed throughout the analysis. Histograms of the final orbital elements and other state dependent variables may be evaluated by a Monte Carlo analysis. In the second mode, geosynchronous missions can be simulated from parking orbit injection through station acquisition. A two-body central force field is assumed throughout the simulation. Nominal mission studies can be conducted; however, the main use of this mode lies in evaluating the behavior of pertinent orbital trajectory parameters by making use of a Monte Carlo analysis. In the third mode, MONITOR performs parametric scans of user-requested variables for a nominal mission. Various orbital sequences may be specified; however, primary use is devoted to geosynchronous missions. A maximum of five variables may be scanned at a time. The fourth mode simulates a mission from orbit injection through comet encounter with optional
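Mode one's Monte Carlo error analysis can be illustrated with a toy two-body example: sample maneuver execution errors and examine the dispersion of a resulting orbital element. The 1% magnitude-error model and all numbers below are hypothetical, not MONITOR's:

```python
import numpy as np

mu_earth = 398600.4418        # km^3/s^2, Earth gravitational parameter

def sma_after_burn(r, v, dv):
    """Semi-major axis from the vis-viva equation after a tangential burn
    applied at radius r to a circular orbit of speed v (two-body field)."""
    v_new = v + dv
    return 1.0 / (2.0 / r - v_new ** 2 / mu_earth)

rng = np.random.default_rng(42)
r = 6678.0                    # km, ~300 km circular parking orbit
v = np.sqrt(mu_earth / r)     # circular orbital speed
dv_nominal = 2.425            # km/s, toy transfer-injection burn
# hypothetical execution errors: 1% magnitude error (1-sigma)
dv = dv_nominal * (1.0 + 0.01 * rng.standard_normal(10_000))
sma = sma_after_burn(r, v, dv)
print(sma.mean(), sma.std())  # dispersion of the post-burn semi-major axis
```

Histogramming `sma` gives exactly the kind of final-orbital-element distribution MONITOR's first mode evaluates.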

  15. Kernel density estimator methods for Monte Carlo radiation transport

    NASA Astrophysics Data System (ADS)

    Banerjee, Kaushik

In this dissertation, the Kernel Density Estimator (KDE), a nonparametric probability density estimator, is studied and used to represent global Monte Carlo (MC) tallies. KDE is also employed to remove the singularities from two important Monte Carlo tallies, namely point detector and surface crossing flux tallies. Finally, KDE is applied to accelerate the Monte Carlo fission source iteration for criticality problems. In conventional MC calculations, histograms are used to represent global tallies, dividing the phase space into multiple bins. Partitioning the phase space into bins can add significant overhead to the MC simulation, and the histogram provides only a first-order approximation to the underlying distribution. The KDE method is attractive because it can estimate MC tallies at any location within the required domain without any particular bin structure. Post-processing of the KDE tallies is sufficient to extract detailed, higher-order tally information for an arbitrary grid. The quantitative and numerical convergence properties of KDE tallies are also investigated, and they are shown to be superior to conventional histograms as well as to the functional expansion tally developed by Griesheimer. Monte Carlo point detector and surface crossing flux tallies are two widely used tallies, but they suffer from an unbounded variance. As a result, the central limit theorem cannot be used for these tallies to estimate confidence intervals. By construction, KDE tallies can be directly used to estimate flux at a point, but the variance of this point estimate does not converge as 1/N, which is not unexpected for a point quantity. However, an improved approach is to modify both point detector and surface crossing flux tallies directly by using KDE within a variance reduction approach, taking advantage of the fact that KDE estimates the underlying probability density function. This methodology is demonstrated by several numerical examples and demonstrates that
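The contrast between a binned histogram tally and a bin-free KDE can be sketched with a hand-rolled Gaussian KDE and Silverman's bandwidth rule, applied here to stand-in Gaussian samples rather than real MC collision sites:

```python
import numpy as np

def kde(samples, xs, h):
    """Gaussian kernel density estimate at points xs with bandwidth h."""
    u = (xs[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, 5000)               # stand-in for MC samples
xs = np.linspace(-3, 3, 61)
h = 1.06 * samples.std() * len(samples) ** (-0.2)  # Silverman's rule
f = kde(samples, xs, h)

# A histogram tally needs a bin structure fixed in advance; the KDE can be
# evaluated anywhere in the domain after the fact.
hist, edges = np.histogram(samples, bins=12, range=(-3, 3), density=True)
print(f[30])  # KDE estimate at x = 0; true N(0,1) density is ~0.3989
```

The post-processing step mentioned in the abstract corresponds to re-evaluating `kde` on whatever grid is needed, with no re-run of the simulation.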

  16. Monte Carlo simulation of particle acceleration at astrophysical shocks

    NASA Technical Reports Server (NTRS)

    Campbell, Roy K.

    1989-01-01

A Monte Carlo code was developed for the simulation of particle acceleration at astrophysical shocks. The code is implemented in Turbo Pascal on a PC. It is modularized and structured in such a way that modification and maintenance are relatively painless. Monte Carlo simulations of particle acceleration at shocks follow the trajectories of individual particles as they scatter repeatedly across the shock front, gaining energy with each crossing. The particles are assumed to scatter from magnetohydrodynamic (MHD) turbulence on both sides of the shock. A scattering law is used which is related to the assumed form of the turbulence and to the particle and shock parameters. High-energy cosmic ray spectra derived from Monte Carlo simulations show the same power-law behavior as spectra derived from analytic calculations based on a diffusion equation. This high-energy behavior is not sensitive to the scattering law used. In contrast with Monte Carlo calculations, diffusive calculations rely on the initial injection of supra-thermal particles into the shock environment. Monte Carlo simulations are the only known way to describe the extraction of particles directly from the thermal pool; this has been the triumph of the Monte Carlo approach. The question of acceleration efficiency is an important one in the shock acceleration game. Whether shock waves are efficient enough to account for the observed flux of high-energy galactic cosmic rays was examined. The efficiency of the acceleration process depends in detail on the thermal-particle pick-up and hence on the low-energy scattering. One of the goals is the self-consistent derivation of the accelerated particle spectra and the MHD turbulence spectra. Presumably the upstream turbulence, which scatters the particles so they can be accelerated, is excited by the streaming accelerated particles, and the needed downstream turbulence is convected from the upstream region. The present code is to be modified to include a better
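The repeated-crossing picture reduces to a classic toy model: assume a fixed fractional energy gain per shock-crossing cycle and a fixed per-cycle escape probability (both numbers below are illustrative, not physical), and a power-law spectrum emerges:

```python
import numpy as np

rng = np.random.default_rng(7)

def accelerate(n_particles, gain=1.1, p_escape=0.1):
    """Each cycle multiplies a particle's energy by `gain`; after each
    cycle the particle escapes downstream with probability `p_escape`."""
    E = np.ones(n_particles)
    alive = np.ones(n_particles, dtype=bool)
    while alive.any():
        E[alive] *= gain
        alive &= rng.random(n_particles) >= p_escape
    return E

E = accelerate(20000)
# integral spectrum N(>E) ~ E^s with s = ln(1 - p_escape)/ln(gain) ~ -1.1
Es = np.sort(E)
slope = np.polyfit(np.log(Es), np.log(np.arange(len(Es), 0, -1)), 1)[0]
print(slope)
```

A real shock code replaces the fixed gain and escape probability with scattering off the turbulence on each side of the front, which is what lets it follow particles out of the thermal pool.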

  17. Automatic determination of primary electron beam parameters in Monte Carlo simulation

    SciTech Connect

    Pena, Javier; Gonzalez-Castano, Diego M.; Gomez, Faustino; Sanchez-Doblado, Francisco; Hartmann, Guenther H.

    2007-03-15

In order to obtain realistic and reliable Monte Carlo simulations of medical linac photon beams, an accurate determination of the parameters that define the primary electron beam that hits the target is a fundamental step. In this work we propose a new methodology to commission photon beams in Monte Carlo simulations that ensures the reproducibility of a wide range of clinically useful fields. For this purpose, accelerated Monte Carlo simulations of 2x2, 10x10, and 20x20 cm² fields at SSD=100 cm are carried out for several combinations of the primary electron beam mean energy and radial FWHM. Then, by performing a simultaneous comparison with the corresponding measurements for these same fields, the best combination is selected. This methodology has been employed to determine the characteristics of the primary electron beams that best reproduce a Siemens PRIMUS and a Varian 2100 CD machine in the Monte Carlo simulations. Excellent agreement was obtained between simulations and measurements for a wide range of field sizes. Because precalculated profiles are stored in databases, the whole commissioning process can be fully automated, avoiding manual fine-tuning. These databases can also be used to characterize accelerators of the same model from different sites.
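The automated selection step amounts to a grid search over (mean energy, radial FWHM) minimizing the discrepancy with measured profiles. In this sketch a toy analytic profile stands in for the precalculated BEAMnrc database entries, so all names and numbers are placeholders:

```python
import numpy as np

def simulated_profile(x, energy, fwhm):
    """Hypothetical stand-in for a precalculated Monte Carlo lateral dose
    profile; in reality these come from accelerated MC runs stored in a
    database, one per (energy, FWHM) combination."""
    sigma = fwhm / 2.355 + 0.05 * energy   # toy penumbra model
    return np.exp(-0.5 * (x / sigma) ** 2)

def commission(x, measured, energies, fwhms):
    """Pick the (energy, FWHM) pair whose profile best matches the
    measurement in a least-squares sense."""
    best, best_err = None, np.inf
    for E in energies:
        for w in fwhms:
            err = np.sum((simulated_profile(x, E, w) - measured) ** 2)
            if err < best_err:
                best, best_err = (E, w), err
    return best

x = np.linspace(-3, 3, 121)
measured = simulated_profile(x, 6.0, 1.2)          # pretend measurement
found = commission(x, measured, [5.5, 6.0, 6.5], [1.0, 1.2, 1.4])
print(found)  # (6.0, 1.2)
```

Because the candidate profiles are precomputed, the search itself is cheap, which is what makes the fully automated commissioning practical.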

  18. Non-adiabatic molecular dynamics by accelerated semiclassical Monte Carlo

    SciTech Connect

    White, Alexander J.; Gorshkov, Vyacheslav N.; Tretiak, Sergei; Mozyrsky, Dmitry

    2015-07-07

Non-adiabatic dynamics, where systems non-radiatively transition between electronic states, plays a crucial role in many photo-physical processes, such as fluorescence, phosphorescence, and photoisomerization. Methods for the simulation of non-adiabatic dynamics are typically either numerically impractical, highly complex, or based on approximations which can result in failure for even simple systems. Recently, the Semiclassical Monte Carlo (SCMC) approach was developed in an attempt to combine the accuracy of rigorous semiclassical methods with the efficiency and simplicity of widely used surface hopping methods. However, while SCMC was found to be more efficient than other semiclassical methods, it is not yet efficient enough for large molecular systems. Here, we have developed two new methods: the accelerated-SCMC and the accelerated-SCMC with re-Gaussianization, which reduce the cost of the SCMC algorithm by up to two orders of magnitude for certain systems. In many cases shown here, the new procedures are nearly as efficient as the commonly used surface hopping schemes, with little to no loss of accuracy. This implies that these modified SCMC algorithms will provide practical numerical solutions for simulating non-adiabatic dynamics in realistic molecular systems.

  19. Monte Carlo estimation of stage structured development from cohort data.

    PubMed

    Knape, Jonas; De Valpine, Perry

    2016-04-01

    Cohort data are frequently collected to study stage-structured development and mortalities of many organisms, particularly arthropods. Such data can provide information on mean stage durations, among-individual variation in stage durations, and on mortality rates. Current statistical methods for cohort data lack flexibility in the specification of stage duration distributions and mortality rates. In this paper, we present a new method for fitting models of stage-duration distributions and mortality to cohort data. The method is based on a Monte Carlo within MCMC algorithm and provides Bayesian estimates of parameters of stage-structured cohort models. The algorithm is computationally demanding but allows for flexible specifications of stage-duration distributions and mortality rates. We illustrate the algorithm with an application to data from a previously published experiment on the development of brine shrimp from Mono Lake, California, through nine successive stages. In the experiment, three different food supply and temperature combination treatments were studied. We compare the mean duration of the stages among the treatments while simultaneously estimating mortality rates and among-individual variance of stage durations. The method promises to enable more detailed studies of development of both natural and experimental cohorts. An R package implementing the method and which allows flexible specification of stage duration distributions is provided.
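The kind of cohort data the method is fitted to can be generated by a forward simulation with flexible (here gamma-distributed) stage durations and a constant mortality hazard; every parameter value below is illustrative, not from the brine-shrimp experiment:

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate_cohort(n, mean_durations, shape=4.0, mortality=0.02, t_max=60):
    """Forward-simulate n individuals through successive stages with
    gamma-distributed stage durations (mean mean_durations[s], shape
    parameter `shape`) and a constant per-day mortality hazard. Returns
    daily counts per stage, the raw form of stage-structured cohort data."""
    n_stages = len(mean_durations)
    durations = rng.gamma(shape, np.asarray(mean_durations) / shape, (n, n_stages))
    exits = np.cumsum(durations, axis=1)          # time each stage is left
    death = rng.exponential(1.0 / mortality, n)
    counts = np.zeros((t_max, n_stages + 1), dtype=int)
    for t in range(t_max):
        alive = death > t
        stage = (exits <= t).sum(axis=1)          # 0..n_stages (last = done)
        for s in range(n_stages + 1):
            counts[t, s] = np.count_nonzero(alive & (stage == s))
    return counts

# nine stages, echoing the brine-shrimp example (durations invented)
counts = simulate_cohort(500, [3, 4, 5, 4, 3, 4, 5, 4, 3])
print(counts[0, 0], counts[-1].sum())
```

The paper's Monte Carlo within MCMC algorithm runs essentially this simulation inside the likelihood evaluation, which is why it can accommodate arbitrary duration distributions at a computational cost.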

  20. Non-adiabatic molecular dynamics by accelerated semiclassical Monte Carlo

    DOE PAGES

    White, Alexander J.; Gorshkov, Vyacheslav N.; Tretiak, Sergei; ...

    2015-07-07

Non-adiabatic dynamics, where systems non-radiatively transition between electronic states, plays a crucial role in many photo-physical processes, such as fluorescence, phosphorescence, and photoisomerization. Methods for the simulation of non-adiabatic dynamics are typically either numerically impractical, highly complex, or based on approximations which can result in failure for even simple systems. Recently, the Semiclassical Monte Carlo (SCMC) approach was developed in an attempt to combine the accuracy of rigorous semiclassical methods with the efficiency and simplicity of widely used surface hopping methods. However, while SCMC was found to be more efficient than other semiclassical methods, it is not yet efficient enough for large molecular systems. Here, we have developed two new methods: the accelerated-SCMC and the accelerated-SCMC with re-Gaussianization, which reduce the cost of the SCMC algorithm by up to two orders of magnitude for certain systems. In many cases shown here, the new procedures are nearly as efficient as the commonly used surface hopping schemes, with little to no loss of accuracy. This implies that these modified SCMC algorithms will provide practical numerical solutions for simulating non-adiabatic dynamics in realistic molecular systems.

  1. Household water use and conservation models using Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Cahill, R.; Lund, J. R.; DeOreo, B.; Medellín-Azuara, J.

    2013-04-01

The increased availability of water end use measurement studies allows for more mechanistic and detailed approaches to estimating household water demand and conservation potential. This study uses probability distributions for parameters affecting water use, estimated from end use studies and randomly sampled in Monte Carlo iterations, to simulate water use in a single-family residential neighborhood. This model represents existing conditions and is calibrated to metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California, in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate the cost-effectiveness of water conservation programs.

  2. Household water use and conservation models using Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Cahill, R.; Lund, J. R.; DeOreo, B.; Medellín-Azuara, J.

    2013-10-01

The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006-2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California, in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate the cost-effectiveness of water conservation programs.
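The Monte Carlo sampling step can be sketched as follows. The end-use distributions below are invented placeholders, whereas the study derives them from metered end-use data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_households = 1000

# hypothetical end-use parameter distributions (events/day, volumes/event)
occupants   = rng.integers(1, 6, n_households)
shower_min  = rng.normal(7.8, 2.0, n_households).clip(1)     # min/shower
shower_gpm  = rng.normal(2.1, 0.4, n_households).clip(0.5)   # gal/min
toilet_gpf  = rng.choice([1.28, 1.6, 3.5], n_households, p=[0.3, 0.5, 0.2])
flushes_day = rng.normal(5.0, 1.0, n_households).clip(0)     # flushes/person

# per-household indoor use from two end uses (showers + toilets)
daily_use = occupants * (shower_min * shower_gpm + flushes_day * toilet_gpf)
print(daily_use.mean())   # gallons/household/day across the neighborhood
```

Repeating the draw per household produces the neighborhood-level distribution that the study calibrates against metered totals; conservation actions are then modeled by shifting individual parameter distributions (e.g., replacing 3.5 gal/flush toilets).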

  3. A Monte Carlo simulation approach for flood risk assessment

    NASA Astrophysics Data System (ADS)

    Agili, Hachem; Chokmani, Karem; Oubennaceur, Khalid; Poulin, Jimmy; Marceau, Pascal

    2016-04-01

Floods are the most frequent natural disaster in Canada and the most damaging. The issue of assessing and managing the risk related to this disaster has become increasingly crucial for both local and national authorities. Brigham, a municipality located in southern Quebec Province, is one of the regions most heavily affected by this disaster because of frequent overflows of the Yamaska River, which occur two to three times per year. Since Hurricane Irene hit the region in 2011, causing considerable socio-economic damage, the implementation of mitigation measures has become a major priority for this municipality. To do this, a preliminary study to evaluate the risk to which this region is exposed is essential. Conventionally, approaches based only on the characterization of the hazard (e.g., floodplain extent, flood depth) are adopted to study flood risk. In order to improve the knowledge of this risk, a Monte Carlo simulation approach combining information on the hazard with vulnerability-related aspects of buildings has been developed. This approach integrates three main components, namely hydrological modeling through flow-probability functions, hydraulic modeling using flow-submersion height functions, and the study of building damage based on damage functions adapted to the Quebec habitat. The application of this approach allows estimation of the average annual cost of flood damage to buildings. The obtained results will be useful for local authorities to support their decisions on risk management and prevention against this disaster.
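The three components compose naturally in a Monte Carlo loop: draw an annual peak flow, convert it to a submersion depth, and apply a depth-damage curve. Every function and number below is a made-up placeholder for the calibrated Quebec-specific curves:

```python
import numpy as np

rng = np.random.default_rng(5)

def annual_peak_flow(u):
    """Flow-probability function: a toy Gumbel quantile curve (m^3/s)."""
    return 80.0 - 25.0 * np.log(-np.log(u))

def depth_from_flow(q):
    """Flow-submersion-height function: a made-up rating curve (m)."""
    return np.maximum(0.0, 0.02 * (q - 100.0))

def damage_fraction(depth):
    """Depth-damage function: fraction of building value lost."""
    return np.clip(depth / 3.0, 0.0, 1.0)

n_years, value = 100_000, 250_000.0          # simulated years, building value
u = rng.uniform(size=n_years).clip(1e-12, 1 - 1e-12)
damage = damage_fraction(depth_from_flow(annual_peak_flow(u))) * value
print(damage.mean())    # expected annual damage for this one building
```

Summing the expected annual damage over all exposed buildings gives the municipality-level figure the abstract refers to.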

  4. Parallel Performance Optimization of the Direct Simulation Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gao, Da; Zhang, Chonglin; Schwartzentruber, Thomas

    2009-11-01

    Although the direct simulation Monte Carlo (DSMC) particle method is more computationally intensive compared to continuum methods, it is accurate for conditions ranging from continuum to free-molecular, accurate in highly non-equilibrium flow regions, and holds potential for incorporating advanced molecular-based models for gas-phase and gas-surface interactions. As available computer resources continue their rapid growth, the DSMC method is continually being applied to increasingly complex flow problems. Although processor clock speed continues to increase, a trend of increasing multi-core-per-node parallel architectures is emerging. To effectively utilize such current and future parallel computing systems, a combined shared/distributed memory parallel implementation (using both Open Multi-Processing (OpenMP) and Message Passing Interface (MPI)) of the DSMC method is under development. The parallel implementation of a new state-of-the-art 3D DSMC code employing an embedded 3-level Cartesian mesh will be outlined. The presentation will focus on performance optimization strategies for DSMC, which includes, but is not limited to, modified algorithm designs, practical code-tuning techniques, and parallel performance optimization. Specifically, key issues important to the DSMC shared memory (OpenMP) parallel performance are identified as (1) granularity (2) load balancing (3) locality and (4) synchronization. Challenges and solutions associated with these issues as they pertain to the DSMC method will be discussed.

  5. Monte Carlo Simulations of the Inside Intron Recombination

    NASA Astrophysics Data System (ADS)

Cebrat, Stanisław; Pękalski, Andrzej; Scharf, Fabian

Biological genomes are divided into coding and non-coding regions. Introns are non-coding parts within genes, while the remaining non-coding parts are intergenic sequences. To study the evolutionary significance of inside intron recombination we have used two models based on the Monte Carlo method. In our computer simulations we have implemented the internal structure of genes by declaring the probability of recombination between exons. One situation in which inside intron recombination is advantageous is the recovery of functional genes by combining proper exons dispersed in the genetic pool of the population after a long period without selection for the function of the gene. Populations then have to pass through a bottleneck. These events are rather rare, and we expected that there should be other phenomena conferring benefits from inside intron recombination. In fact, we have found that inside intron recombination is advantageous only in the case when, after recombination, parental haplotypes are available besides the recombinant forms and selection already acts on gametes.

  6. Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Radak, Brian K.; Roux, Benoît

    2016-10-01

Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant-pH MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Finally, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.
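The linear-response flavor of such an analysis can be checked numerically: for Gaussian-distributed protocol work obeying the fluctuation-dissipation relation σ² = 2⟨W⟩/β, the mean Metropolis acceptance of the neMD switch has a closed form, erfc(√(β⟨W⟩)/2). This is a generic sketch in reduced units, not the paper's full framework:

```python
import math
import numpy as np

rng = np.random.default_rng(11)
beta, mean_W = 1.0, 2.0                      # reduced units, illustrative
# linear response: work fluctuations satisfy sigma^2 = 2 <W> / beta
W = rng.normal(mean_W, math.sqrt(2.0 * mean_W / beta), 100_000)
# mean Metropolis acceptance of the nonequilibrium switch, min(1, e^-beta*W)
acc = np.minimum(1.0, np.exp(-beta * W)).mean()
theory = math.erfc(math.sqrt(beta * mean_W) / 2.0)
print(acc, theory)   # both ~0.317 for these parameters
```

Minimizing ⟨W⟩ per unit switching time is then the knob an optimal (linear) work protocol turns.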

  7. Relaxation dynamics in small clusters: A modified Monte Carlo approach

    SciTech Connect

    Pal, Barnana

    2008-02-01

Relaxation dynamics in two-dimensional atomic clusters consisting of mono-atomic particles interacting through the Lennard-Jones (L-J) potential has been investigated using Monte Carlo simulation. A modification of the conventional Metropolis algorithm is proposed to introduce realistic thermal motion of the particles moving in the interacting L-J potential field. The proposed algorithm leads to quick equilibration from the nonequilibrium cluster configuration in a certain temperature regime, where the relaxation time τ, measured in Monte Carlo Steps (MCS) per particle, varies inversely with the square root of the system temperature (√T) and with the pressure (P): τ ∝ 1/(P√T). From this, a realistic correlation between MCS and time has been predicted.

  8. Monte Carlo Methods for Bridging the Timescale Gap

    NASA Astrophysics Data System (ADS)

    Wilding, Nigel; Landau, David P.

    We identify the origin, and elucidate the character of the extended time-scales that plague computer simulation studies of first and second order phase transitions. A brief survey is provided of a number of new and existing techniques that attempt to circumvent these problems. Attention is then focused on two novel methods with which we have particular experience: “Wang-Landau sampling” and Phase Switch Monte Carlo. Detailed case studies are made of the application of the Wang-Landau approach to calculate the density of states of the 2D Ising model and the Edwards-Anderson spin glass. The principles and operation of Phase Switch Monte Carlo are described and its utility in tackling ‘difficult’ first order phase transitions is illustrated via a case study of hard-sphere freezing. We conclude with a brief overview of promising new methods for the improvement of deterministic, spin dynamics simulations.
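The flavor of Wang-Landau sampling is easy to capture on a tiny 4x4 Ising lattice: the random walk in energy is biased by the running estimate of the density of states g(E), which converges as the modification factor shrinks. This sketch halves ln f on a fixed schedule instead of checking histogram flatness, as a production code would:

```python
import numpy as np

rng = np.random.default_rng(2)
L = 4
spins = rng.choice([-1, 1], (L, L))

def energy(s):
    """Nearest-neighbor Ising energy with periodic boundaries (J = 1)."""
    return -int((s * np.roll(s, 1, 0)).sum() + (s * np.roll(s, 1, 1)).sum())

levels = {E: i for i, E in enumerate(range(-2 * L * L, 2 * L * L + 1, 4))}
logg = np.zeros(len(levels))       # running estimate of ln g(E)
E, lnf = energy(spins), 1.0
while lnf > 1e-3:
    for _ in range(10000):
        i, j = rng.integers(L), rng.integers(L)
        dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        a, b = levels[E], levels[E + dE]
        # accept moves toward rarely visited energies
        if logg[a] >= logg[b] or rng.random() < np.exp(logg[a] - logg[b]):
            spins[i, j] *= -1
            E += dE
        logg[levels[E]] += lnf
    lnf /= 2.0                     # a real code first checks flatness

# both E = -2L^2 and E = +2L^2 have degeneracy 2, so their ln g should agree
print(logg[0] - logg[-1])
```

With ln g(E) in hand, thermodynamic averages at any temperature follow from a single reweighting pass, which is the appeal of the method for first-order transitions.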

  9. Monte Carlo Ground State Energy for Trapped Boson Systems

    NASA Astrophysics Data System (ADS)

    Rudd, Ethan; Mehta, N. P.

    2012-06-01

Diffusion Monte Carlo (DMC) and Green's Function Monte Carlo (GFMC) algorithms were implemented to obtain numerical approximations for the ground state energies of systems of bosons in a harmonic trap potential. Gaussian pairwise particle interactions of the form V₀ exp(−|rᵢ − rⱼ|²/r₀²) were implemented in the DMC code. These results were verified for small values of V₀ via a first-order perturbation theory approximation, for which the N-particle matrix element evaluates to N² V₀ (1 + 1/r₀²)^(3/2). By obtaining the scattering length from the 2-body potential in the perturbative regime (V₀ ≪ 1), ground state energy results were compared to modern renormalized models by P.R. Johnson et al., New J. Phys. 11, 093022 (2009).
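The core DMC machinery (diffusion, branching, population control) can be shown on the simplest trapped case, a single particle in a 1D harmonic well with exact ground-state energy 0.5 in oscillator units. This bare-bones sketch omits importance sampling, which the dissertation's many-body code would need:

```python
import numpy as np

rng = np.random.default_rng(4)

def dmc_1d_ho(n_walkers=2000, dt=0.01, n_steps=2000):
    """Diffusion Monte Carlo for one particle in a 1D harmonic trap
    (hbar = m = omega = 1); the exact ground-state energy is 0.5."""
    x = rng.normal(0.0, 1.0, n_walkers)
    E_ref, energies = 0.5, []
    for step in range(n_steps):
        x = x + np.sqrt(dt) * rng.standard_normal(x.size)   # diffusion
        V = 0.5 * x ** 2
        w = np.exp(-dt * (V - E_ref))                       # branching weight
        copies = (w + rng.random(x.size)).astype(int)       # birth/death
        x = np.repeat(x, copies)
        E_ref += 0.1 * np.log(n_walkers / max(len(x), 1))   # population control
        if step > n_steps // 2:
            energies.append(E_ref)
    return float(np.mean(energies))

E0 = dmc_1d_ho()
print(E0)   # ~0.5
```

Adding the Gaussian pair interaction would simply add its value to `V` for each walker configuration, with walkers living in 3N dimensions.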

  10. Application of Monte Carlo simulations to improve basketball shooting strategy

    NASA Astrophysics Data System (ADS)

    Min, Byeong June

    2016-10-01

    The underlying physics of basketball shooting seems to be a straightforward example of Newtonian mechanics that can easily be traced by using numerical methods. However, a human basketball player does not make use of all the possible basketball trajectories. Instead, a basketball player will build up a database of successful shots and select the trajectory that has the greatest tolerance to the small variations of the real world. We simulate the basketball player's shooting training as a Monte Carlo sequence to build optimal shooting strategies, such as the launch speed and angle of the basketball, and whether to take a direct shot or a bank shot, as a function of the player's court position and height. The phase-space volume Ω that belongs to the successful launch velocities generated by Monte Carlo simulations is then used as the criterion to optimize a shooting strategy that incorporates not only mechanical, but also human, factors.
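The phase-space volume Ω can be estimated by sampling launch conditions and testing each projectile trajectory against the rim. This drag-free, direct-shot-only sketch with invented court numbers is an illustration of the idea, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(9)
g, rim_h, rim_r, ball_r = 9.81, 3.05, 0.2286, 0.12   # SI units

def success(d, h0, speed, angle):
    """Direct shot: the ball crosses rim height while descending within
    (rim radius - ball radius) of the hoop center. Drag, spin, and the
    backboard are ignored."""
    vx, vy = speed * np.cos(angle), speed * np.sin(angle)
    disc = vy ** 2 - 2.0 * g * (rim_h - h0)
    reach = disc > 0                                  # rim height reachable
    t = (vy + np.sqrt(np.where(reach, disc, 0.0))) / g  # descending crossing
    return reach & (np.abs(vx * t - d) < (rim_r - ball_r))

d, h0 = 4.2, 2.0              # hoop distance and release height (toy values)
speed = rng.uniform(6.0, 9.0, 200_000)
angle = rng.uniform(np.deg2rad(30.0), np.deg2rad(70.0), 200_000)
ok = success(d, h0, speed, angle)
print(ok.mean())   # fraction of sampled launch phase space that scores
```

The fraction `ok.mean()` plays the role of Ω: a strategy maximizing it tolerates the largest execution errors, which is the paper's optimization criterion.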

  11. A tetrahedron-based inhomogeneous Monte Carlo optical simulator.

    PubMed

    Shen, H; Wang, G

    2010-02-21

    Optical imaging has been widely applied in preclinical and clinical applications. Fifteen years ago, an efficient Monte Carlo program 'MCML' was developed for use with multi-layered turbid media and has gained popularity in the field of biophotonics. Currently, there is an increasingly pressing need for simulating tools more powerful than MCML in order to study light propagation phenomena in complex inhomogeneous objects, such as the mouse. Here we report a tetrahedron-based inhomogeneous Monte Carlo optical simulator (TIM-OS) to address this issue. By modeling an object as a tetrahedron-based inhomogeneous finite-element mesh, TIM-OS can determine the photon-triangle interaction recursively and rapidly. In numerical simulation, we have demonstrated the correctness and efficiency of TIM-OS.
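The "photon-triangle interaction" at the heart of such a tracer is a ray-triangle intersection test; the standard Moller-Trumbore algorithm is sketched below (the tetrahedral bookkeeping around it, which TIM-OS adds, is omitted):

```python
import numpy as np

def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray-triangle intersection: returns the distance t
    along the ray direction d, or None if there is no hit. A tetrahedral
    photon tracer applies this to the faces of the current tetrahedron
    to find the exit face."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to triangle plane
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

# photon travelling along +z hits the unit triangle in the plane z = 2
t = ray_triangle(np.array([0.2, 0.2, 0.0]), np.array([0.0, 0.0, 1.0]),
                 np.array([0.0, 0.0, 2.0]), np.array([1.0, 0.0, 2.0]),
                 np.array([0.0, 1.0, 2.0]))
print(t)  # 2.0
```

Because each tetrahedron stores its four face triangles and neighbors, the recursion the abstract mentions is just "intersect four faces, step into the neighbor across the nearest hit."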

  12. Radiotherapy Monte Carlo simulation using cloud computing technology.

    PubMed

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-12-01

Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation, without the need for dedicated local computer hardware, as a proof of principle.
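The cost result follows from per-instance hourly billing, assuming here that each machine's runtime is rounded up to a whole hour (a common cloud billing model, not stated in the abstract):

```python
import math

def completion_time(total_cpu_hours, n_machines):
    """Wall-clock completion time assuming ideal 1/n scaling."""
    return total_cpu_hours / n_machines

def billed_cost(total_cpu_hours, n_machines, rate_per_hour=1.0):
    """With hourly rounding per instance, cost is minimal exactly when
    n divides the total simulation time in hours."""
    hours = math.ceil(completion_time(total_cpu_hours, n_machines))
    return n_machines * hours * rate_per_hour

total = 12   # hours of simulation on a single machine
for n in (1, 4, 5, 12):
    print(n, completion_time(total, n), billed_cost(total, n))
```

For a 12-hour job, n = 4 or n = 12 costs the same 12 machine-hours, while n = 5 wastes partial hours (2.4 h rounds up to 3 per machine, so 15 machine-hours), matching the "n is a factor of the total simulation time" observation.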

  13. Monte Carlo Simulations of Arterial Imaging with Optical Coherence Tomography

    SciTech Connect

    Amendt, P.; Estabrook, K.; Everett, M.; London, R.A.; Maitland, D.; Zimmerman, G.; Colston, B.; da Silva, L.; Sathyam, U.

    2000-02-01

The laser-tissue interaction code LATIS [London et al., Appl. Optics 36, 9068 (1998)] is used to analyze photon scattering histories representative of optical coherence tomography (OCT) experiments performed at Lawrence Livermore National Laboratory. Monte Carlo photonics with Henyey-Greenstein anisotropic scattering is implemented and used to simulate signal discrimination of intravascular structure. An analytic model is developed and used to obtain a scaling-law relation for optimization of the OCT signal and to validate the Monte Carlo photonics. The appropriateness of the Henyey-Greenstein phase function is studied by direct comparison with more detailed Mie scattering theory using an ensemble of spherical dielectric scatterers. Modest differences are found between the two prescriptions for describing photon angular scattering in tissue. In particular, the Mie scattering phase functions provide less overall reflectance signal but more signal contrast compared to the Henyey-Greenstein formulation.
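Henyey-Greenstein scattering is popular in tissue Monte Carlo partly because its cumulative distribution inverts in closed form, so each scattering-angle cosine costs one uniform random number (the anisotropy value below is a typical tissue figure, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)

def sample_hg_costheta(g, u):
    """Inverse-CDF sample of the scattering-angle cosine for the
    Henyey-Greenstein phase function with anisotropy g (0 < |g| < 1)."""
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)

g = 0.9                       # forward-peaked, typical of soft tissue
mu = sample_hg_costheta(g, rng.uniform(size=200_000))
print(mu.mean())              # the mean cosine converges to g
```

A Mie phase function, by contrast, must be tabulated and sampled numerically per particle-size distribution, which is the extra cost weighed against the modest signal differences reported above.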

  14. Large-cell Monte Carlo renormalization of irreversible growth processes

    NASA Technical Reports Server (NTRS)

    Nakanishi, H.; Family, F.

    1985-01-01

    Monte Carlo sampling is applied to a recently formulated direct-cell renormalization method for irreversible, disorderly growth processes. Large-cell Monte Carlo renormalization is carried out for various nonequilibrium problems based on the formulation dealing with relative probabilities. Specifically, the method is demonstrated by application to the 'true' self-avoiding walk and the Eden model of growing animals for d = 2, 3, and 4 and to the invasion percolation problem for d = 2 and 3. The results are asymptotically in agreement with expectations; however, unexpected complications arise, suggesting the possibility of crossovers, and in any case demonstrating the danger of using small cells alone, because of the very slow convergence as the cell size b is extrapolated to infinity. The difficulty of applying the present method to the diffusion-limited-aggregation model is commented on.

  15. Monte Carlo Study of Real Time Dynamics on the Lattice

    NASA Astrophysics Data System (ADS)

    Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo F.; Vartak, Sohan; Warrington, Neill C.

    2016-08-01

    Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from a highly oscillatory phase of the path integral. In this Letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and, in principle, applicable to quantum field theory albeit very slow. We discuss some possible improvements that should speed up the algorithm.

  16. Minimising biases in full configuration interaction quantum Monte Carlo.

    PubMed

    Vigor, W A; Spencer, J S; Bearpark, M J; Thom, A J W

    2015-03-14

    We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is a Markov chain in its present form. We construct the Markov matrix of FCIQMC for a two determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups to, in general, minimise the bias. We show that a reweighting scheme to remove the bias caused by population control, commonly used in diffusion Monte Carlo [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)], is effective and recommend its use as a post-processing step.

  17. Estimation of beryllium ground state energy by Monte Carlo simulation

    SciTech Connect

    Kabir, K. M. Ariful; Halder, Amal

    2015-05-15

    The quantum Monte Carlo method is a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schrödinger equation for atoms, molecules, solids, and a variety of model systems. Using the variational Monte Carlo method we have calculated the ground state energy of the beryllium atom. Our calculations are based on a modified four-parameter trial wave function, which leads to a good result compared with the few-parameter trial wave functions presented previously. Based on random numbers, we generate a large sample of electron locations to estimate the ground state energy of beryllium. Our calculation gives a good estimate of the ground state energy of the beryllium atom compared with the corresponding exact data.
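The paper's four-parameter beryllium wave function is not reproduced here, but the variational Monte Carlo procedure itself can be sketched on a simpler system: the 1D harmonic oscillator with trial function psi(x) = exp(-alpha x^2) (an illustrative stand-in, not the authors' calculation):

```python
import random, math

def vmc_energy(alpha, steps=50_000, delta=1.0, rng=random):
    """Variational Monte Carlo for the 1D harmonic oscillator
    (hbar = m = omega = 1) with trial function psi(x) = exp(-alpha x^2):
    Metropolis sampling of |psi|^2, averaging the local energy
    E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)."""
    x, e_sum, n = 0.0, 0.0, 0
    for i in range(steps):
        x_new = x + delta * (rng.random() - 0.5)
        # accept with probability |psi(x_new)/psi(x)|^2
        if rng.random() < math.exp(-2.0 * alpha * (x_new * x_new - x * x)):
            x = x_new
        if i >= steps // 10:               # discard burn-in
            e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
            n += 1
    return e_sum / n

random.seed(1)
e_exact = vmc_energy(0.5)   # alpha = 1/2 is the exact ground state, E = 1/2
e_off = vmc_energy(0.4)     # variationally above 1/2 (about 0.5125)
```

At alpha = 1/2 the local energy is constant, so the variance vanishes; detuning alpha raises both the mean energy and its variance, which is the signal used to optimize trial-function parameters.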

  18. Monte Carlo methods for light propagation in biological tissues.

    PubMed

    Vinckenbosch, Laura; Lacaux, Céline; Tindel, Samy; Thomassin, Magalie; Obara, Tiphaine

    2015-11-01

    Light propagation in turbid media is driven by the equation of radiative transfer. We give a formal probabilistic representation of its solution in the framework of biological tissues and we implement algorithms based on Monte Carlo methods in order to estimate the quantity of light that is received by a homogeneous tissue when emitted by an optic fiber. A variance reduction method is studied and implemented, as well as a Markov chain Monte Carlo method based on the Metropolis-Hastings algorithm. The resulting estimating methods are then compared to the so-called Wang-Prahl (or Wang) method. Finally, the formal representation allows us to derive a non-linear optimization algorithm close to Levenberg-Marquardt that is used for the estimation of the scattering and absorption coefficients of the tissue from measurements.
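A minimal random-walk Metropolis-Hastings sampler of the kind mentioned above can be sketched as follows (the exponential attenuation target with mu = 2.0 is an assumed toy example, not the paper's tissue model):

```python
import random, math

def metropolis_hastings(log_target, x0, steps, scale, rng=random):
    """Random-walk Metropolis-Hastings over an unnormalized log-density."""
    samples, x, lp = [], x0, log_target(x0)
    for _ in range(steps):
        x_new = x + scale * (rng.random() - 0.5)   # symmetric proposal
        lp_new = log_target(x_new)
        if rng.random() < math.exp(min(0.0, lp_new - lp)):
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# Toy target: photon path length under attenuation coefficient mu,
# p(s) proportional to exp(-mu s) for s >= 0, with mean 1/mu.
mu = 2.0
log_p = lambda s: -mu * s if s >= 0.0 else float("-inf")
random.seed(2)
chain = metropolis_hastings(log_p, 0.5, 200_000, 2.0)
mean_s = sum(chain[20_000:]) / (len(chain) - 20_000)   # approaches 1/mu = 0.5
```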

  19. Monte Carlo Methods in ICF (LIRPP Vol. 13)

    NASA Astrophysics Data System (ADS)

    Zimmerman, George B.

    2016-10-01

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved roughly 50% in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  20. Monte Carlo modeling of coherent scattering: Influence of interference

    SciTech Connect

    Leliveld, C.J.; Maas, J.G.; Bom, V.R.; Eijk, C.W.E. van

    1996-12-01

    In this study, the authors present Monte Carlo (MC) simulation results for the intensity and angular distribution of scattered radiation from cylindrical absorbers. For coherent scattering the authors have taken into account the effects of interference by using new molecular form factor data for the AAPM plastic materials and water. The form factor data were compiled from X-ray diffraction measurements. The new data have been implemented in the authors' Electron Gamma Shower (EGS4) Monte Carlo system. The hybrid MC simulation results show a significant influence on the intensity and the angular distribution of coherently scattered photons. They conclude that MC calculations are significantly in error when interference effects are ignored in the model for coherent scattering. Especially for simulation studies of scattered radiation in collimated geometries, where small angle scattering will prevail, the coherent scatter contribution is highly overestimated when conventional form factor data are used.

  1. Condensed Matter Applications of Quantum Monte Carlo at the Petascale

    NASA Astrophysics Data System (ADS)

    Ceperley, David

    2014-03-01

    Applications of the Quantum Monte Carlo method have a number of advantages that make them well suited to high performance computation. The method scales well in particle number and can treat complex systems with weak or strong correlation, including disordered systems and systems with large thermal and zero-point effects of the nuclei. The methods are adaptable to a variety of computer architectures and have multiple parallelization strategies. Most errors are under control, so that increases in computer resources allow a systematic increase in accuracy. We will discuss a number of recent applications of Quantum Monte Carlo, including dense hydrogen and transition metal systems, and suggest future directions. Support from DOE grants DE-FG52-09NA29456, SCIDAC DE-SC0008692, the Network for Ab Initio Many-Body Methods and INCITE allocation.

  2. Monte Carlo Strategies for Selecting Parameter Values in Simulation Experiments.

    PubMed

    Leigh, Jessica W; Bryant, David

    2015-09-01

    Simulation experiments are used widely throughout evolutionary biology and bioinformatics to compare models, promote methods, and test hypotheses. The biggest practical constraint on simulation experiments is the computational demand, particularly as the number of parameters increases. Given the extraordinary success of Monte Carlo methods for conducting inference in phylogenetics, and indeed throughout the sciences, we investigate ways in which the Monte Carlo framework can be used to carry out simulation experiments more efficiently. The key idea is to sample parameter values for the experiments, rather than iterate through them exhaustively. Exhaustive analyses become completely infeasible when the number of parameters gets too large, whereas sampled approaches can fare better in higher dimensions. We illustrate the framework with applications to phylogenetics and genetic archaeology.
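The core idea, sampling parameter vectors instead of iterating an exhaustive grid, can be sketched briefly (the bounds and budget below are arbitrary illustrative values):

```python
import random

def exhaustive_grid_size(levels, n_params):
    """Runs needed for a full factorial sweep: levels ** n_params."""
    return levels ** n_params

def sample_parameters(bounds, n_samples, rng=random):
    """Monte Carlo alternative: draw each parameter vector uniformly
    from its (low, high) bounds, with a budget independent of dimension."""
    return [[lo + (hi - lo) * rng.random() for lo, hi in bounds]
            for _ in range(n_samples)]

# Ten levels over six parameters already needs a million exhaustive runs;
# a sampled design keeps a fixed budget (here 500) in any dimension.
random.seed(3)
bounds = [(0.0, 1.0), (1e-3, 1e-2), (2.0, 8.0)]   # illustrative ranges
design = sample_parameters(bounds, 500)
```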

  3. Lattice-switch Monte Carlo: the fcc—bcc problem

    NASA Astrophysics Data System (ADS)

    Underwood, T. L.; Ackland, G. J.

    2015-09-01

    Lattice-switch Monte Carlo is an efficient method for calculating the free energy difference between two solid phases, or a solid and a fluid phase. Here, we provide a brief introduction to the method, and list its applications since its inception. We then describe a lattice switch for the fcc and bcc phases based on the Bain orientation relationship. Finally, we present preliminary results regarding our application of the method to the fcc and bcc phases in the Lennard-Jones system. Our initial calculations reveal that the bcc phase is unstable, quickly degenerating into some as yet undetermined metastable solid phase. This renders conventional lattice-switch Monte Carlo intractable for this phase. Possible solutions to this problem are discussed.

  4. Fast Off-Lattice Monte Carlo Simulations with Soft Potentials

    NASA Astrophysics Data System (ADS)

    Zong, Jing; Yang, Delian; Yin, Yuhua; Zhang, Xinghua; Wang, Qiang (David)

    2011-03-01

    Fast off-lattice Monte Carlo simulations with soft repulsive potentials that allow particle overlapping give orders of magnitude faster/better sampling of the configurational space than conventional molecular simulations with hard-core repulsions (such as the hard-sphere or Lennard-Jones repulsion). Here we present our fast off-lattice Monte Carlo simulations ranging from small-molecule soft spheres and liquid crystals to polymeric systems including homopolymers and rod-coil diblock copolymers. The simulation results are compared with various theories based on the same Hamiltonian as in the simulations (thus without any parameter-fitting) to quantitatively reveal the consequences of approximations in these theories. Q. Wang and Y. Yin, J. Chem. Phys., 130, 104903 (2009).

  5. Quantum Monte Carlo calculations with chiral effective field theory interactions.

    PubMed

    Gezerlis, A; Tews, I; Epelbaum, E; Gandolfi, S; Hebeler, K; Nogga, A; Schwenk, A

    2013-07-19

    We present the first quantum Monte Carlo (QMC) calculations with chiral effective field theory (EFT) interactions. To achieve this, we remove all sources of nonlocality, which hamper the inclusion in QMC calculations, in nuclear forces to next-to-next-to-leading order. We perform auxiliary-field diffusion Monte Carlo (AFDMC) calculations for the neutron matter energy up to saturation density based on local leading-order, next-to-leading order, and next-to-next-to-leading order nucleon-nucleon interactions. Our results exhibit a systematic order-by-order convergence in chiral EFT and provide nonperturbative benchmarks with theoretical uncertainties. For the softer interactions, perturbative calculations are in excellent agreement with the AFDMC results. This work paves the way for QMC calculations with systematic chiral EFT interactions for nuclei and nuclear matter, for testing the perturbativeness of different orders, and allows for matching to lattice QCD results by varying the pion mass.

  6. Drag coefficient modeling for grace using Direct Simulation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Mehta, Piyush M.; McLaughlin, Craig A.; Sutton, Eric K.

    2013-12-01

    The drag coefficient is a major source of uncertainty in predicting the orbit of a satellite in low Earth orbit (LEO). Computational methods like Test Particle Monte Carlo (TPMC) and Direct Simulation Monte Carlo (DSMC) are important tools in accurately computing physical drag coefficients. However, the methods are computationally expensive and cannot be employed in real time. Therefore, modeling of the physical drag coefficient is required. This work presents a technique for developing parameterized drag coefficient models using the DSMC method. The technique is validated by developing a model for the Gravity Recovery and Climate Experiment (GRACE) satellite. Results show that drag coefficients computed using the developed model for GRACE agree to within 1% with those computed using DSMC.

  7. Fixed-node diffusion Monte Carlo method for lithium systems

    NASA Astrophysics Data System (ADS)

    Rasch, K. M.; Mitas, L.

    2015-07-01

    We study lithium systems over a range of sizes, specifically the atomic anion, the dimer, a metallic cluster, and the body-centered-cubic crystal, using the fixed-node diffusion Monte Carlo method. The focus is on analysis of the fixed-node errors of each system, and for that purpose we test several orbital sets in order to provide the most accurate nodal hypersurfaces. The calculations include both core and valence electrons in order to avoid any possible impact by pseudopotentials. To quantify the fixed-node errors, we compare our results to other highly accurate calculations and, wherever available, to experimental observations. The results for these Li systems show that the fixed-node diffusion Monte Carlo method achieves accurate total energies, recovers 96-99% of the correlation energy, and estimates binding energies with errors bounded by 0.1 eV/atom.

  8. Accelerated Monte Carlo simulations with restricted Boltzmann machines

    NASA Astrophysics Data System (ADS)

    Huang, Li; Wang, Lei

    2017-01-01

    Despite their exceptional flexibility and popularity, Monte Carlo methods often suffer from slow mixing times for challenging statistical physics problems. We present a general strategy to overcome this difficulty by adopting ideas and techniques from the machine learning community. We fit the unnormalized probability of the physical model to a feed-forward neural network and reinterpret the architecture as a restricted Boltzmann machine. Then, exploiting its feature detection ability, we utilize the restricted Boltzmann machine to propose efficient Monte Carlo updates to speed up the simulation of the original physical system. We implement these ideas for the Falicov-Kimball model and demonstrate an improved acceptance ratio and autocorrelation time near the phase transition point.

  9. Sign problem and Monte Carlo calculations beyond Lefschetz thimbles

    SciTech Connect

    Alexandru, Andrei; Basar, Gokce; Bedaque, Paulo F.; Ridgway, Gregory W.; Warrington, Neill C.

    2016-05-10

    We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action ("Lefschetz thimble"). We describe a family of such manifolds that interpolate between the tangent space at one critical point (where the sign problem is milder compared to the real plane but in some cases still severe) and the union of relevant thimbles (where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling). We exemplify this approach using a simple 0+1 dimensional fermion model previously used in sign problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.

  11. Adaptive Mesh and Algorithm Refinement Using Direct Simulation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Garcia, Alejandro L.; Bell, John B.; Crutchfield, William Y.; Alder, Berni J.

    1999-09-01

    Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.

  12. Monte Carlo calculation of patient organ doses from computed tomography.

    PubMed

    Oono, Takeshi; Araki, Fujio; Tsuduki, Shoya; Kawasaki, Keiichi

    2014-01-01

    In this study, we aimed to evaluate quantitatively the patient organ dose from computed tomography (CT) using Monte Carlo calculations. A multidetector CT unit (Aquilion 16, TOSHIBA Medical Systems) was modeled with the GMctdospp (IMPS, Germany) software based on the EGSnrc Monte Carlo code. The X-ray spectrum and the configuration of the bowtie filter for the Monte Carlo modeling were determined from chamber measurements of the half-value layer (HVL) of aluminum and the dose profile (off-center ratio, OCR) in air. The calculated HVL and OCR were compared with measured values for body irradiation at 120 kVp. The Monte Carlo-calculated patient dose distribution was converted to the absorbed dose measured by a Farmer chamber with a (60)Co calibration factor at the center of a CT water phantom. The patient dose was evaluated from dose-volume histograms for the internal organs in the pelvis. The calculated Al HVL was in agreement within 0.3% with the measured value of 5.2 mm. The calculated dose profile in air matched the measured value within 5% in a range of 15 cm from the central axis. The mean doses for soft tissues were 23.5, 23.8, and 27.9 mGy for the prostate, rectum, and bladder, respectively, under exposure conditions of 120 kVp, 200 mA, a beam pitch of 0.938, and beam collimation of 32 mm. For bones of the femur and pelvis, the mean doses were 56.1 and 63.6 mGy, respectively. The doses for bone were up to 2-3 times those for soft tissue, corresponding to the ratio of their mass-energy absorption coefficients.
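The half-value layer used above to validate the modeled spectrum follows from narrow-beam exponential attenuation; a small sketch of the relation (the 5.2 mm Al value is taken from the abstract, while the monoenergetic approximation is an assumption for illustration):

```python
import math

def transmitted_fraction(mu, x):
    """Narrow-beam exponential attenuation: I/I0 = exp(-mu x)."""
    return math.exp(-mu * x)

def hvl_from_mu(mu):
    """Half-value layer, the thickness that halves the beam: ln(2)/mu."""
    return math.log(2.0) / mu

# A measured 5.2 mm Al HVL implies mu of about 0.133 per mm, and by
# construction one HVL transmits 50% and two HVLs transmit 25%.
mu = math.log(2.0) / 5.2
half = transmitted_fraction(mu, 5.2)
quarter = transmitted_fraction(mu, 2 * 5.2)
```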

  13. Monte Carlo simulation of virtual Compton scattering below pion threshold

    NASA Astrophysics Data System (ADS)

    Janssens, P.; Van Hoorebeke, L.; Fonvieille, H.; D'Hose, N.; Bertin, P. Y.; Bensafa, I.; Degrande, N.; Distler, M.; Di Salvo, R.; Doria, L.; Friedrich, J. M.; Friedrich, J.; Hyde-Wright, Ch.; Jaminion, S.; Kerhoas, S.; Laveissière, G.; Lhuillier, D.; Marchand, D.; Merkel, H.; Roche, J.; Tamas, G.; Vanderhaeghen, M.; Van de Vyver, R.; Van de Wiele, J.; Walcher, Th.

    2006-10-01

    This paper describes the Monte Carlo simulation developed specifically for the Virtual Compton Scattering (VCS) experiments below pion threshold that have been performed at MAMI and JLab. This simulation generates events according to the (Bethe-Heitler + Born) cross-section behaviour and takes into account all relevant resolution-deteriorating effects. It determines the "effective" solid angle for the various experimental settings which are used for the precise determination of the photon electroproduction absolute cross-section.

  14. Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model

    SciTech Connect

    D.P. Stotler

    2005-06-09

    The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.

  15. Procedure for Adapting Direct Simulation Monte Carlo Meshes

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.

    1992-01-01

    A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.

  16. Instantaneous GNSS attitude determination: A Monte Carlo sampling approach

    NASA Astrophysics Data System (ADS)

    Sun, Xiucong; Han, Chao; Chen, Pei

    2017-04-01

    A novel instantaneous GNSS ambiguity resolution approach which makes use of only single-frequency carrier phase measurements for ultra-short baseline attitude determination is proposed. The Monte Carlo sampling method is employed to obtain the probability density function of ambiguities from a quaternion-based GNSS-attitude model and the LAMBDA method strengthened with a screening mechanism is then utilized to fix the integer values. Experimental results show that 100% success rate could be achieved for ultra-short baselines.

  17. Improved numerical techniques for processing Monte Carlo thermal scattering data

    SciTech Connect

    Schmidt, E; Rose, P

    1980-01-01

    As part of a Thermal Benchmark Validation Program sponsored by the Electric Power Research Institute (EPRI), the National Nuclear Data Center has been calculating thermal reactor lattices using the SAM-F Monte Carlo Computer Code. As part of this program a significant improvement has been made in the adequacy of the numerical procedures used to process the thermal differential scattering cross sections for hydrogen bound in H2O.

  18. Monte Carlo Methods and Applications for the Nuclear Shell Model

    SciTech Connect

    Dean, D.J.; White, J.A.

    1998-08-10

    The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sd-pf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems.

  19. Monte Carlo simulation of the ELIMED beamline using Geant4

    NASA Astrophysics Data System (ADS)

    Pipek, J.; Romano, F.; Milluzzo, G.; Cirrone, G. A. P.; Cuttone, G.; Amico, A. G.; Margarone, D.; Larosa, G.; Leanza, R.; Petringa, G.; Schillaci, F.; Scuderi, V.

    2017-03-01

    In this paper, we present a Geant4-based Monte Carlo application for ELIMED beamline [1-6] simulation, including its features and several preliminary results. We have developed the application to aid the design of the beamline, to estimate various beam characteristics, and to assess the amount of secondary radiation. In the future, an enhanced version of this application will support the beamline users when preparing their experiments.

  20. Monte Carlo Studies of the Fcc Ising Model.

    NASA Astrophysics Data System (ADS)

    Polgreen, Thomas Lee

    Monte Carlo simulations are performed on the antiferromagnetic fcc Ising model, which is relevant to the binary alloy CuAu. The model exhibits a first-order ordering transition as a function of temperature. The lattice free energy of the model is determined for all temperatures. By matching free energies of the ordered and disordered phases, the transition temperature is determined to be T_t = 1.736 J, where J is the coupling constant of the model. The free energy as determined by series expansion and the Kikuchi cluster variation method is compared with the Monte Carlo results. These methods work well for the ordered phase, but not for the disordered phase. A determination of the pair correlation in the disordered phase along the {100} direction indicates a correlation length of approximately 2.5a at the phase transition. The correlation length exhibits mean-field-like temperature dependence. The Cowley-Warren short range order parameters are determined as a function of temperature for the first twelve nearest-neighbor shells of this model. The Monte Carlo results are used to determine the free parameter in a mean-field-like class of theories described by Clapp and Moss. The ability of these theories to predict ratios between pair potentials is tested with these results. In addition, evidence of a region of heterophase fluctuations is presented, in agreement with x-ray diffuse scattering measurements on Cu3Au. The growth of order following a rapid quench from disorder is studied by means of a dynamic Monte Carlo simulation. The results compare favorably with the Landau theory proposed by Chan for temperatures near the first-order phase transition. For lower temperatures, the results are in agreement with the theories of Lifshitz and of Allen and Chan. In the intermediate temperature range, our extension of Chan's theory is able to explain our simulation results and recent experimental results.
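A single-spin-flip Metropolis update of the kind underlying such simulations can be sketched as follows (shown for a 2D square-lattice ferromagnet for brevity; the paper's antiferromagnetic fcc model would instead need the 12-neighbour fcc geometry):

```python
import random, math

def metropolis_sweep(spins, L, beta, J=1.0, rng=random):
    """One Metropolis sweep (L*L single-spin-flip attempts) of a 2D Ising
    ferromagnet on an L x L periodic lattice; flip cost dE = 2 J s_ij sum(nn)."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * J * spins[i][j] * nn
        if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

random.seed(4)
L = 16
spins = [[1] * L for _ in range(L)]
for _ in range(200):
    metropolis_sweep(spins, L, beta=1.0)   # deep in the ordered phase (beta_c ~ 0.44)
magnetization = abs(sum(map(sum, spins))) / (L * L)   # stays near 1 when ordered
```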

  1. Monte Carlo approach to nuclei and nuclear matter

    SciTech Connect

    Fantoni, Stefano; Gandolfi, Stefano; Illarionov, Alexey Yu.; Schmidt, Kevin E.; Pederiva, Francesco

    2008-10-13

    We report on the most recent applications of the Auxiliary Field Diffusion Monte Carlo (AFDMC) method. The equation of state (EOS) for pure neutron matter in both normal and BCS phase and the superfluid gap in the low-density regime are computed, using a realistic Hamiltonian containing the Argonne AV8' plus Urbana IX three-nucleon interaction. Preliminary results for the EOS of isospin-asymmetric nuclear matter are also presented.

  2. Monte Carlo Simulations: Number of Iterations and Accuracy

    DTIC Science & Technology

    2015-07-01

    Monte Carlo, confidence interval, central limit theorem, number of iterations, Wilson score method, Wald method, normal probability plot. …for normality can be performed to quantify the confidence level of a normality assumption. The basic idea of an NPP is to plot the sample data in…
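The Wilson score method named in the keywords gives a binomial confidence interval for a Monte Carlo success rate; a minimal sketch (not the report's MATLAB code):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (z = 1.96 for ~95%); generally preferred over the Wald interval
    for Monte Carlo success-rate estimates, especially near 0 or 1."""
    p = successes / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2.0 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n))
    return centre - half, centre + half

# With 0 successes in 30 trials the Wald interval collapses to [0, 0];
# the Wilson interval still gives a sensible nonzero upper bound.
lo0, hi0 = wilson_interval(0, 30)
lo_mid, hi_mid = wilson_interval(15, 30)
```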

  3. Monte Carlo verification of IMRT treatment plans on grid.

    PubMed

    Gómez, Andrés; Fernández Sánchez, Carlos; Mouriño Gallego, José Carlos; López Cacheiro, Javier; González Castaño, Francisco J; Rodríguez-Silva, Daniel; Domínguez Carrera, Lorena; González Martínez, David; Pena García, Javier; Gómez Rodríguez, Faustino; González Castaño, Diego; Pombar Cameán, Miguel

    2007-01-01

    The eIMRT project is producing new remote computational tools for helping radiotherapists to plan and deliver treatments. The first available tool will be the IMRT treatment verification using Monte Carlo, which is a computational expensive problem that can be executed remotely on a GRID. In this paper, the current implementation of this process using GRID and SOA technologies is presented, describing the remote execution environment and the client.

  4. Mcfast, a Parameterized Fast Monte Carlo for Detector Studies

    NASA Astrophysics Data System (ADS)

    Boehnlein, Amber S.

    McFast is a modularized and parameterized fast Monte Carlo program which is designed to generate physics analysis information for different detector configurations and subdetector designs. McFast is based on simple geometrical object definitions and includes hit generation, parameterized track generation, vertexing, a muon system, electromagnetic calorimetry, and a trigger framework for physics studies. Auxiliary tools include a geometry editor, visualization, and an I/O system.

  5. Direct Monte Carlo Simulations of Hypersonic Viscous Interactions Including Separation

    NASA Technical Reports Server (NTRS)

    Moss, James N.; Rault, Didier F. G.; Price, Joseph M.

    1993-01-01

    Results of calculations obtained using the direct simulation Monte Carlo method for Mach 25 flow over a control surface are presented. The numerical simulations are for a 35-deg compression ramp at a low-density wind-tunnel test condition. Calculations obtained using both two- and three-dimensional solutions are reviewed, and a qualitative comparison is made with oil flow pictures that highlight separation and three-dimensional flow structure.

  6. Distributional monte carlo methods for the boltzmann equation

    NASA Astrophysics Data System (ADS)

    Schrock, Christopher R.

    Stochastic particle methods (SPMs) for the Boltzmann equation, such as the Direct Simulation Monte Carlo (DSMC) technique, have gained popularity for the prediction of flows in which the assumptions behind the continuum equations of fluid mechanics break down; however, there are still a number of issues that make SPMs computationally challenging for practical use. In traditional SPMs, simulated particles may possess only a single velocity vector, even though they may represent an extremely large collection of actual particles. This limits the method to converge only in law to the Boltzmann solution. This document details the development of new SPMs that allow the velocity of each simulated particle to be distributed. This approach has been termed Distributional Monte Carlo (DMC). A technique is described which applies kernel density estimation to Nanbu's DSMC algorithm. It is then proven that the method converges not just in law, but also in solution for L∞(R³) solutions of the space homogeneous Boltzmann equation. This provides for direct evaluation of the velocity density function. The derivation of a general Distributional Monte Carlo method is given which treats collision interactions between simulated particles as a relaxation problem. The framework is proven to converge in law to the solution of the space homogeneous Boltzmann equation, as well as in solution for L∞(R³) solutions. An approach based on the BGK simplification is presented which computes collision outcomes deterministically. Each technique is applied to the well-studied Bobylev-Krook-Wu solution as a numerical test case. Accuracy and variance of the solutions are examined as functions of various simulation parameters. Significantly improved accuracy and reduced variance are observed in the normalized moments for the Distributional Monte Carlo technique employing discrete BGK collision modeling.

  7. Testing trivializing maps in the Hybrid Monte Carlo algorithm

    PubMed Central

    Engel, Georg P.; Schaefer, Stefan

    2011-01-01

    We test a recent proposal to use approximate trivializing maps in a field theory to speed up Hybrid Monte Carlo simulations. Simulating the CP^(N-1) model, we find a small improvement with the leading order transformation, which is however compensated by the additional computational overhead. The scaling of the algorithm towards the continuum is not changed. In particular, the effect of the topological modes on the autocorrelation times is studied. PMID:21969733

  8. Monte-Carlo Spray Cooling Model

    NASA Astrophysics Data System (ADS)

    Kreitzer, Paul J.; Kuhlman, John M.

    2010-01-01

    Spray cooling is a tremendously complex phenomenon that has yet to be completely and successfully modeled. This is due to the complexity of the detailed droplet impingement processes and the subsequent heat transfer process. Numerous assumptions must be made in order to accurately model spray behavior. Current computational limitations restrict CFD simulations to single droplet simulations. Additional complexity due to droplet interactions negates the possibility of combining multiple single droplet studies to represent the complete spray process. Therefore, a need has been established for the development of a comprehensive spray impingement simulation with adequate physical complexity to yield accurate results within a relatively short run time. The present work attempts to develop such a model using modeling assumptions from the best available literature, and to combine them into a single spray impingement simulation. Initial flow parameters that have been chosen include flow rate of 10 GPH with a velocity of 12 m/s and average droplet diameter of 48 μm. These values produce the following non-dimensional number ranges: We 100-1800, Re 200-4500, Oh 0.01-0.05. Numerical and experimental correlations have been identified that represent crater formation, splashing, film thickness, and droplet size and spatial flux distributions. A combination of these methods has resulted in an initial spray impingement simulation that is capable of simulating 100,000 drops or an actual simulation time of 0.0167 seconds. Comparisons of results from this code with experimental results show a similar trend in surface behavior.

  9. Valence-bond quantum Monte Carlo algorithms defined on trees.

    PubMed

    Deschner, Andreas; Sørensen, Erik S

    2014-09-01

    We present a class of algorithms for performing valence-bond quantum Monte Carlo of quantum spin models. Valence-bond quantum Monte Carlo is a projective T=0 Monte Carlo method based on sampling of a set of operator strings that can be viewed as forming a treelike structure. The algorithms presented here utilize the notion of a worm that moves up and down this tree and changes the associated operator string. In quite general terms, we derive a set of equations whose solutions correspond to a whole class of algorithms. As specific examples of this class of algorithms, we focus on two cases. The bouncing worm algorithm, for which updates are always accepted by allowing the worm to bounce up and down the tree, and the driven worm algorithm, where a single parameter controls how far up the tree the worm reaches before turning around. The latter algorithm involves only a single bounce where the worm turns from going up the tree to going down. The presence of the control parameter necessitates the introduction of an acceptance probability for the update.

  10. Comparison of deterministic and Monte Carlo methods in shielding design.

    PubMed

    Oliveira, A D; Oliveira, C

    2005-01-01

    In shielding calculations, deterministic methods have advantages and disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages stem from the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions and can therefore introduce significant errors into shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding using attenuation coefficients and build-up factor corrections. Commercial software MicroShield 5.05 has been used as the deterministic code while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shield have been defined allowing comparison between the capability of both Monte Carlo and deterministic methods in a day-by-day shielding calculation using sensitivity analysis of significant parameters, such as energy and geometrical conditions.
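
    The build-up-corrected attenuation being assessed has a simple deterministic form. A minimal sketch follows; the linear build-up B = 1 + μx and all coefficient values are hypothetical placeholders for the tabulated, energy-dependent factors that real codes such as MicroShield interpolate.

```python
import math

def shielded_dose_rate(d0, mu, x, buildup=None):
    """Deterministic slab attenuation with an optional build-up factor.

    d0: unshielded dose rate, mu: linear attenuation coefficient (1/cm),
    x: shield thickness (cm). With buildup=None, a simple linear form
    B = 1 + mu*x is assumed (illustrative only; real codes interpolate
    tabulated build-up factors in energy and mu*x)."""
    b = buildup if buildup is not None else 1.0 + mu * x
    return d0 * b * math.exp(-mu * x)

# 10 cm of shield with mu = 0.5 /cm: narrow-beam vs build-up-corrected.
narrow = shielded_dose_rate(100.0, 0.5, 10.0, buildup=1.0)
broad = shielded_dose_rate(100.0, 0.5, 10.0)
print(narrow, broad)
```

    Extrapolating B outside its tabulated energy range is exactly where such deterministic estimates diverge from a Monte Carlo reference.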

  11. A New Approach to Monte Carlo Simulations in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Landau, David P.

    2002-08-01

    Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions: critical slowing down near second-order transitions and metastability near first-order transitions limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E64, 056101-1 (2001).
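
    The random walk in energy space can be sketched on a toy system whose density of states is known exactly: N non-interacting spins in a field, where the energy level is the number of up spins and g(E) is a binomial coefficient. This is a minimal Wang-Landau-style sketch with illustrative parameters, not the published implementation.

```python
import math
import random

def wang_landau(n_spins=12, ln_f_final=1e-4, flatness=0.8, seed=1):
    """Random walk in energy space estimating ln g(E) for N
    non-interacting spins; exact answer is ln C(N, k)."""
    rng = random.Random(seed)
    spins = [1] * n_spins
    k = n_spins                      # current energy level (up-spin count)
    ln_g = [0.0] * (n_spins + 1)     # running estimate of ln g(E)
    hist = [0] * (n_spins + 1)
    ln_f = 1.0                       # modification factor, halved each stage
    while ln_f > ln_f_final:
        for _ in range(1000):
            i = rng.randrange(n_spins)
            k_new = k - spins[i]     # flipping spin i shifts the level by +/-1
            # accept with prob min(1, g(E_old)/g(E_new)) -> flat histogram
            if math.log(rng.random() + 1e-300) < ln_g[k] - ln_g[k_new]:
                spins[i] = -spins[i]
                k = k_new
            ln_g[k] += ln_f
            hist[k] += 1
        if min(hist) > flatness * sum(hist) / len(hist):
            hist = [0] * (n_spins + 1)   # histogram flat: refine the walk
            ln_f *= 0.5
    return ln_g

ln_g = wang_landau()
# Relative to the lowest level, ln g should approach ln C(12, k).
print(ln_g[6] - ln_g[0], math.log(math.comb(12, 6)))
```

    Because the walk is biased by 1/g(E), it visits all energies roughly uniformly, which is exactly how the method evades critical slowing down and metastability.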

  12. Autocorrelation and Dominance Ratio in Monte Carlo Criticality Calculations

    SciTech Connect

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Kornreich, Drew E.

    2003-11-15

    The cycle-to-cycle correlation (autocorrelation) in Monte Carlo criticality calculations is analyzed concerning the dominance ratio of fission kernels. The mathematical analysis focuses on how the eigenfunctions of a fission kernel decay if operated on by the cycle-to-cycle error propagation operator of the Monte Carlo stationary source distribution. The analytical results obtained can be summarized as follows: When the dominance ratio of a fission kernel is close to unity, autocorrelation of the k-effective tallies is weak and may be negligible, while the autocorrelation of the source distribution is strong and decays slowly. The practical implication is that when one analyzes a critical reactor with a large dominance ratio by Monte Carlo methods, the confidence interval estimation of the fission rate and other quantities at individual locations must account for the strong autocorrelation. Numerical results are presented for sample problems with a dominance ratio of 0.85-0.99, where Shannon and relative entropies are utilized to exclude the influence of initial nonstationarity.
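
    The practical implication, that confidence intervals must account for cycle-to-cycle correlation, can be sketched with a standard inflation-factor correction. The AR(1) tallies and the lag cutoff below are illustrative assumptions, not the paper's reactor data.

```python
import numpy as np

def autocorr_inflated_stderr(cycle_tallies, max_lag=20):
    """Standard error of the mean of cycle-wise tallies, corrected for
    autocorrelation: the naive i.i.d. variance is inflated by
    1 + 2*sum_k rho_k over positive lags."""
    x = np.asarray(cycle_tallies, dtype=float)
    n = len(x)
    xc = x - x.mean()
    var = xc @ xc / n
    rho = [(xc[:-k] @ xc[k:]) / (n * var) for k in range(1, max_lag + 1)]
    inflation = 1.0 + 2.0 * sum(r for r in rho if r > 0)
    return np.sqrt(var / n * inflation)

# AR(1) tallies with strong positive correlation: the corrected standard
# error should substantially exceed the naive i.i.d. estimate.
rng = np.random.default_rng(2)
x = np.empty(5000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = 0.9 * x[i - 1] + rng.normal()
naive = x.std() / np.sqrt(len(x))
corrected = autocorr_inflated_stderr(x)
print(naive, corrected)
```

    For a large dominance ratio, source-distribution tallies behave like the strongly correlated series here, so an uncorrected interval would be far too narrow.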

  13. Chemical accuracy from quantum Monte Carlo for the benzene dimer

    SciTech Connect

    Azadi, Sam; Cohen, R. E.

    2015-09-14

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of −2.3(4) and −2.7(3) kcal/mol, respectively. The best estimate from coupled-cluster theory with perturbative triples at the complete basis set limit is −2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  14. Monte Carlo simulations of electron transport in strongly attaching gases

    NASA Astrophysics Data System (ADS)

    Petrovic, Zoran; Miric, Jasmina; Simonovic, Ilija; Bosnjakovic, Danko; Dujko, Sasa

    2016-09-01

    Extensive loss of electrons in strongly attaching gases imposes significant difficulties in Monte Carlo simulations at low electric field strengths. In order to compensate for such losses, some kind of rescaling procedures must be used. In this work, we discuss two rescaling procedures for Monte Carlo simulations of electron transport in strongly attaching gases: (1) discrete rescaling, and (2) continuous rescaling. The discrete rescaling procedure is based on duplication of electrons randomly chosen from the remaining swarm at certain discrete time steps. The continuous rescaling procedure employs a dynamically defined fictitious ionization process with the constant collision frequency chosen to be equal to the attachment collision frequency. These procedures should not in any way modify the distribution function. Monte Carlo calculations of transport coefficients for electrons in SF6 and CF3I are performed in a wide range of electric field strengths. However, special emphasis is placed upon the analysis of transport phenomena in the limit of lower electric fields where the transport properties are strongly affected by electron attachment. Two important phenomena arise: (1) the reduction of the mean energy with increasing E/N for electrons in SF6, and (2) the occurrence of negative differential conductivity in the bulk drift velocity of electrons in both SF6 and CF3I.
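
    The discrete rescaling procedure described above can be sketched directly: electrons lost to attachment are compensated by duplicating survivors chosen uniformly at random, which leaves the swarm's velocity distribution statistically unchanged. The particle representation below is an illustrative assumption.

```python
import random

def discrete_rescale(swarm, n_target, rng=random):
    """Discrete rescaling for swarms depleted by attachment: duplicate
    electrons drawn uniformly (with replacement) from the survivors
    until the swarm is back at its target size. Uniform sampling does
    not bias the swarm's velocity distribution."""
    if not swarm:
        raise ValueError("swarm fully attached; nothing to rescale")
    while len(swarm) < n_target:
        swarm.append(dict(rng.choice(swarm)))   # copy, don't alias
    return swarm

# After heavy attachment only 40 of 1000 electrons remain:
rng = random.Random(3)
survivors = [{"v": rng.gauss(0.0, 1.0)} for _ in range(40)]
rescaled = discrete_rescale(survivors, 1000, rng)
print(len(rescaled))
```

    The continuous variant replaces these discrete duplication steps with a fictitious ionization process whose collision frequency matches the attachment frequency, achieving the same compensation without discontinuities.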

  15. VARIANCE ESTIMATION IN DOMAIN DECOMPOSED MONTE CARLO EIGENVALUE CALCULATIONS

    SciTech Connect

    Mervin, Brenden T; Maldonado, G. Ivan; Mosher, Scott W; Evans, Thomas M; Wagner, John C

    2012-01-01

    The number of tallies performed in a given Monte Carlo calculation is limited in most modern Monte Carlo codes by the amount of memory that can be allocated on a single processor. By using domain decomposition, the calculation is now limited by the total amount of memory available on all processors, allowing for significantly more tallies to be performed. However, decomposing the problem geometry introduces significant issues with the way tally statistics are conventionally calculated. In order to deal with the issue of calculating tally variances in domain decomposed environments for the Shift hybrid Monte Carlo code, this paper presents an alternative approach for reactor scenarios in which an assumption is made that once a particle leaves a domain, it does not reenter the domain. Particles that reenter the domain are instead treated as separate independent histories. This assumption introduces a bias that inevitably leads to under-prediction of the calculated variances for tallies within a few mean free paths of the domain boundaries. However, through the use of different decomposition strategies, primarily overlapping domains, the negative effects of such an assumption can be significantly reduced to within reasonable levels.

  16. Improved diffusion coefficients generated from Monte Carlo codes

    SciTech Connect

    Herman, B. R.; Forget, B.; Smith, K.; Aviles, B. N.

    2013-07-01

    Monte Carlo codes are becoming more widely used for reactor analysis. Some of these applications involve the generation of diffusion theory parameters including macroscopic cross sections and diffusion coefficients. Two approximations used to generate diffusion coefficients are assessed using the Monte Carlo code MC21. The first is the method of homogenization; whether to weight either fine-group transport cross sections or fine-group diffusion coefficients when collapsing to few-group diffusion coefficients. The second is a fundamental approximation made to the energy-dependent P1 equations to derive the energy-dependent diffusion equations. Standard Monte Carlo codes usually generate a flux-weighted transport cross section with no correction to the diffusion approximation. Results indicate that this causes noticeable tilting in reconstructed pin powers in simple test lattices with L2 norm error of 3.6%. This error is reduced significantly to 0.27% when weighting fine-group diffusion coefficients by the flux and applying a correction to the diffusion approximation. Noticeable tilting in reconstructed fluxes and pin powers was reduced when applying these corrections. (authors)
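
    The first approximation assessed, which quantity to weight when collapsing, can be shown in a minimal sketch. Collapsing fine-group diffusion coefficients by the flux is the variant the study found more accurate; group boundaries and values here are illustrative.

```python
def collapse_diffusion(flux, diff_fine, groups):
    """Flux-weighted collapse of fine-group diffusion coefficients to
    few-group values: D_G = sum_g(phi_g * D_g) / sum_g(phi_g) over the
    fine groups g in coarse group G. (The assessed alternative collapses
    the transport cross section and takes D = 1/(3*Sigma_tr) afterwards;
    the two do not agree in general.)"""
    coarse = []
    for lo, hi in groups:                    # half-open fine-group ranges
        phi = flux[lo:hi]
        d = diff_fine[lo:hi]
        coarse.append(sum(p * x for p, x in zip(phi, d)) / sum(phi))
    return coarse

# Four fine groups collapsed to two coarse groups.
flux = [1.0, 2.0, 3.0, 1.0]
d_fine = [1.4, 1.2, 0.5, 0.3]
few_group = collapse_diffusion(flux, d_fine, [(0, 2), (2, 4)])
print(few_group)
```

    In the study, making this choice together with the P1 correction reduced the pin-power reconstruction error from 3.6% to 0.27%.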

  17. Accelerating Monte Carlo power studies through parametric power estimation.

    PubMed

    Ueckert, Sebastian; Karlsson, Mats O; Hooker, Andrew C

    2016-04-01

    Estimating the power for a non-linear mixed-effects model-based analysis is challenging due to the lack of a closed form analytic expression. Often, computationally intensive Monte Carlo studies need to be employed to evaluate the power of a planned experiment. This is especially time consuming if full power versus sample size curves are to be obtained. A novel parametric power estimation (PPE) algorithm utilizing the theoretical distribution of the alternative hypothesis is presented in this work. The PPE algorithm estimates the unknown non-centrality parameter in the theoretical distribution from a limited number of Monte Carlo simulations and estimations. The estimated parameter scales linearly with study size, allowing quick generation of the full power versus study size curve. A comparison of the PPE with the classical, purely Monte Carlo-based power estimation (MCPE) algorithm for five diverse pharmacometric models showed an excellent agreement between both algorithms, with a low bias of less than 1.2 % and higher precision for the PPE. The power extrapolated from a specific study size was in very good agreement with power curves obtained with the MCPE algorithm. PPE represents a promising approach to accelerate the power calculation for non-linear mixed effect models.
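
    The core idea, estimate a non-centrality parameter from a small pilot Monte Carlo study and scale it with study size, can be sketched as follows. A normal approximation stands in here for the chi-square distribution of a likelihood-ratio statistic, and the effect size and pilot design are illustrative assumptions.

```python
import math
import random

def normal_cdf(x):
    """Standard normal CDF (stdlib-only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def parametric_power_curve(z_stats, n_pilot, sizes):
    """Parametric power estimation sketch: pilot test statistics are
    assumed normal with a non-centrality growing like sqrt(n); power at
    any study size then follows from the theoretical alternative
    distribution, without rerunning the full simulation at every size."""
    z_crit = 1.959964                  # two-sided 5% critical value
    ncp_pilot = sum(z_stats) / len(z_stats)
    return {n: normal_cdf(ncp_pilot * math.sqrt(n / n_pilot) - z_crit)
            for n in sizes}

# Pilot: 30 simulated trials at size 50 with a true effect of d = 0.5.
rng = random.Random(4)
d, n_pilot = 0.5, 50
pilot = [d * math.sqrt(n_pilot) + rng.gauss(0.0, 1.0) for _ in range(30)]
curve = parametric_power_curve(pilot, n_pilot, [25, 50, 100, 200])
print(curve)
```

    The full power-versus-size curve thus costs only the 30 pilot simulations, rather than a separate Monte Carlo study at each candidate size.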

  18. Monte Carlo simulations of particle acceleration at oblique shocks

    NASA Technical Reports Server (NTRS)

    Baring, Matthew G.; Ellison, Donald C.; Jones, Frank C.

    1994-01-01

    The Fermi shock acceleration mechanism may be responsible for the production of high-energy cosmic rays in a wide variety of environments. Modeling of this phenomenon has largely focused on plane-parallel shocks, and one of the most promising techniques for its study is the Monte Carlo simulation of particle transport in shocked fluid flows. One of the principal problems in shock acceleration theory is the mechanism and efficiency of injection of particles from the thermal gas into the accelerated population. The Monte Carlo technique is ideally suited to addressing the injection problem directly, and previous applications of it to the quasi-parallel Earth bow shock led to very successful modeling of proton and heavy ion spectra, as well as other observed quantities. Recently this technique has been extended to oblique shock geometries, in which the upstream magnetic field makes a significant angle Theta(sub B1) to the shock normal. Spectral results from test particle Monte Carlo simulations of cosmic-ray acceleration at oblique, nonrelativistic shocks are presented. The results show that low Mach number shocks have injection efficiencies that are relatively insensitive to (though not independent of) the shock obliquity, but that there is a dramatic drop in efficiency for shocks of Mach number 30 or more as the obliquity increases above 15 deg. Cosmic-ray distributions just upstream of the shock reveal prominent bumps at energies below the thermal peak; these disappear far upstream but might be observable features close to astrophysical shocks.

  19. Progress on coupling UEDGE and Monte-Carlo simulation codes

    SciTech Connect

    Rensink, M.E.; Rognlien, T.D.

    1996-08-28

    Our objective is to develop an accurate self-consistent model for plasma and neutrals in the edge of tokamak devices such as DIII-D and ITER. The two-dimensional fluid model in the UEDGE code has been used successfully for simulating a wide range of experimental plasma conditions. However, when the neutral mean free path exceeds the gradient scale length of the background plasma, the validity of the diffusive and inertial fluid models in UEDGE is questionable. In the long mean free path regime, neutrals can be accurately and efficiently described by a Monte Carlo neutrals model. Coupling of the fluid plasma model in UEDGE with a Monte Carlo neutrals model should improve the accuracy of our edge plasma simulations. The results described here used the EIRENE Monte Carlo neutrals code, but since information is passed to and from the UEDGE plasma code via formatted text files, any similar neutrals code such as DEGAS2 or NIMBUS could, in principle, be used.

  20. Chemical accuracy from quantum Monte Carlo for the benzene dimer.

    PubMed

    Azadi, Sam; Cohen, R E

    2015-09-14

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate from coupled-cluster theory with perturbative triples at the complete basis set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  1. Estimating return period of landslide triggering by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Peres, D. J.; Cancelliere, A.

    2016-10-01

    Assessment of landslide hazard is a crucial step in landslide mitigation planning. Estimation of the return period of slope instability represents a quantitative method to map landslide triggering hazard over a catchment. The most common approach to estimating return periods consists of coupling a triggering-threshold equation, derived from a hydrological and slope stability process-based model, with a rainfall intensity-duration-frequency (IDF) curve. This traditional approach generally neglects the effect of rainfall intensity variability within events, as well as the variability of initial conditions, which depend on antecedent rainfall. We propose a Monte Carlo approach for estimating the return period of shallow landslide triggering which accounts for both sources of variability. Synthetic hourly rainfall-landslide data generated by Monte Carlo simulations are analysed to compute return periods as the mean interarrival time of a factor of safety less than one. Applications are first conducted to map landslide triggering hazard in the Loco catchment, located in a highly landslide-prone area of the Peloritani Mountains, Sicily, Italy. A set of additional simulations is then performed to evaluate the traditional IDF-based method by comparison with the Monte Carlo one. Results show that the return period is affected significantly by the variability of both rainfall intensity within events and initial conditions, and that the traditional IDF-based approach may lead to an overestimation of the return period of landslide triggering, or, in other words, a non-conservative assessment of landslide hazard.
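
    The definition of the return period as the mean interarrival time of a factor of safety below one can be sketched with a toy event generator. The stability model, the event rate, and every parameter below are illustrative placeholders, not the paper's process-based hydrological model.

```python
import random

def return_period_mc(years=10000, events_per_year=20, seed=5):
    """Monte Carlo return period of triggering (sketch): generate
    synthetic storm events, evaluate a toy factor of safety that falls
    with event intensity and a random antecedent-wetness initial
    condition, and take the return period as the mean interarrival time
    of FS < 1."""
    rng = random.Random(seed)
    trigger_times = []
    t = 0.0
    for _ in range(years * events_per_year):
        t += rng.expovariate(events_per_year)      # event times (years)
        intensity = rng.expovariate(1.0 / 10.0)    # mm/h, mean 10
        wetness = rng.random()                     # antecedent condition
        fs = 2.0 / (1.0 + 0.02 * intensity + 0.5 * wetness)
        if fs < 1.0:
            trigger_times.append(t)
    gaps = [b - a for a, b in zip(trigger_times, trigger_times[1:])]
    return sum(gaps) / len(gaps)

print(return_period_mc())
```

    Because wetness varies from event to event, the same storm intensity may or may not trigger failure, which is exactly the variability the IDF-based approach averages away.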

  2. MONTE CARLO RADIATION-HYDRODYNAMICS WITH IMPLICIT METHODS

    SciTech Connect

    Roth, Nathaniel; Kasen, Daniel

    2015-03-15

    We explore the application of Monte Carlo transport methods to solving coupled radiation-hydrodynamics (RHD) problems. We use a time-dependent, frequency-dependent, three-dimensional radiation transport code that is special relativistic and includes some detailed microphysical interactions such as resonant line scattering. We couple the transport code to two different one-dimensional (non-relativistic) hydrodynamics solvers: a spherical Lagrangian scheme and an Eulerian Godunov solver. The gas–radiation energy coupling is treated implicitly, allowing us to take hydrodynamical time-steps that are much longer than the radiative cooling time. We validate the code and assess its performance using a suite of radiation hydrodynamical test problems, including ones in the radiation energy dominated regime. We also develop techniques that reduce the noise of the Monte Carlo estimated radiation force by using the spatial divergence of the radiation pressure tensor. The results suggest that Monte Carlo techniques hold promise for simulating the multi-dimensional RHD of astrophysical systems.

  3. Monte Carlo Methodology Serves Up a Software Success

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.

  4. Monte Carlo modelling of positron transport in real world applications

    NASA Astrophysics Data System (ADS)

    Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj

    2014-05-01

    Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interaction with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.

  5. Monte Carlo simulations of parapatric speciation

    NASA Astrophysics Data System (ADS)

    Schwämmle, V.; Sousa, A. O.; de Oliveira, S. M.

    2006-06-01

    Parapatric speciation is studied using an individual-based model with sexual reproduction. We combine the theory of mutation accumulation for biological ageing with an environmental selection pressure that varies according to the individuals' geographical positions and phenotypic traits. Fluctuations and genetic diversity of large populations are crucial ingredients to model the features of evolutionary branching and are intrinsic properties of the model. Its implementation on a spatial lattice gives interesting insights into the population dynamics of speciation on a geographical landscape and the disruptive selection that leads to the divergence of phenotypes. Our results suggest that assortative mating is not an obligatory ingredient to obtain speciation in large populations at low gene flow.

  6. Predicting the orientation of protein G B1 on hydrophobic surfaces using Monte Carlo simulations

    PubMed Central

    Harrison, Elisa T.; Weidner, Tobias; Castner, David G.; Interlandi, Gianluca

    2016-01-01

    A Monte Carlo algorithm was developed to predict the most likely orientations of protein G B1, an immunoglobulin G (IgG) antibody-binding domain of protein G, adsorbed onto a hydrophobic surface. At each Monte Carlo step, the protein was rotated and translated as a rigid body. The assumption about rigidity was supported by quartz crystal microbalance with dissipation monitoring experiments, which indicated that protein G B1 adsorbed on a polystyrene surface with its native structure conserved and showed that its IgG antibody-binding activity was retained. The Monte Carlo simulations predicted that protein G B1 is likely adsorbed onto a hydrophobic surface in two different orientations, characterized as two mutually exclusive sets of amino acids contacting the surface. This was consistent with sum frequency generation (SFG) vibrational spectroscopy results. In fact, theoretical SFG spectra calculated from an equal combination of the two predicted orientations exhibited reasonable agreement with measured spectra of protein G B1 on polystyrene surfaces. Also, in explicit solvent molecular dynamics simulations, protein G B1 maintained its predicted orientation in three out of four runs. This work shows that using a Monte Carlo approach can provide an accurate estimate of a protein orientation on a hydrophobic surface, which complements experimental surface analysis techniques and provides an initial system to study the interaction between a protein and a surface in molecular dynamics simulations. PMID:27923271
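
    The rigid-body Monte Carlo step, rotate and translate the whole protein, accept or reject by an adsorption energy, can be sketched on a toy geometry. Beads on a rigid ring stand in for surface residues, and the contact energy, move size, and temperature below are illustrative assumptions, not the published force field.

```python
import math
import random

def orient_rigid_body(hydrophobicity, steps=20000, beta=2.0, seed=6):
    """Rigid-body Monte Carlo orientation sampling (sketch): beads sit on
    a rigid unit ring above a flat surface, and each bead close to the
    surface contributes -h_i to the energy. Metropolis moves rotate the
    whole body, so sampled angles concentrate where the most hydrophobic
    patch faces the surface."""
    rng = random.Random(seed)
    n = len(hydrophobicity)

    def energy(theta):
        # bead i contacts the surface when its height sin(...) < -0.8
        return -sum(h for i, h in enumerate(hydrophobicity)
                    if math.sin(theta + 2.0 * math.pi * i / n) < -0.8)

    theta = 0.0
    e = energy(theta)
    samples = []
    for step in range(steps):
        t_new = theta + rng.gauss(0.0, 0.3)
        e_new = energy(t_new)
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            theta, e = t_new, e_new
        if step >= steps // 2:               # keep the equilibrated half
            samples.append(theta % (2.0 * math.pi))
    return samples

# One strongly hydrophobic bead (index 0): it should face the surface.
angles = orient_rigid_body([3.0] + [0.0] * 9)
facing = sum(1 for a in angles if math.sin(a) < -0.8) / len(angles)
print(facing)
```

    A bimodal angle histogram from such a sampler corresponds to the two mutually exclusive contact orientations the study reports for protein G B1.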

  7. Quantum-trajectory Monte Carlo method for study of electron-crystal interaction in STEM.

    PubMed

    Ruan, Z; Zeng, R G; Ming, Y; Zhang, M; Da, B; Mao, S F; Ding, Z J

    2015-07-21

    In this paper, a novel quantum-trajectory Monte Carlo simulation method is developed to study electron beam interaction with a crystalline solid for application to electron microscopy and spectroscopy. The method combines the Bohmian quantum trajectory method, which treats electron elastic scattering and diffraction in a crystal, with a Monte Carlo sampling of electron inelastic scattering events along quantum trajectory paths. We study in this work the electron scattering and secondary electron generation process in crystals for a focused incident electron beam, leading to understanding of the imaging mechanism behind the atomic resolution secondary electron image that has been recently achieved in experiment with a scanning transmission electron microscope. According to this method, the Bohmian quantum trajectories have been calculated at first through a wave function obtained via a numerical solution of the time-dependent Schrödinger equation with a multislice method. The impact parameter-dependent inner-shell excitation cross section then enables the Monte Carlo sampling of ionization events produced by incident electron trajectories travelling along atom columns for excitation of high energy knock-on secondary electrons. Following cascade production, transportation and emission processes of true secondary electrons of very low energies are traced by a conventional Monte Carlo simulation method to present image signals. Comparison of the simulated image for a Si(110) crystal with the experimental image indicates that the dominant mechanism of atomic resolution of secondary electron image is the inner-shell ionization events generated by a high-energy electron beam.

  8. Use of Monte Carlo simulations in the assessment of calibration strategies-Part I: an introduction to Monte Carlo mathematics.

    PubMed

    Burrows, John

    2013-04-01

    An introduction to the use of the mathematical technique of Monte Carlo simulations to evaluate least squares regression calibration is described. Monte Carlo techniques involve the repeated sampling of data from a population that may be derived from real (experimental) data, but is more conveniently generated by a computer using a model of the analytical system and a randomization process to produce a large database. Datasets are selected from this population and fed into the calibration algorithms under test, thus providing a facile way of producing a sufficiently large number of assessments of the algorithm to enable a statistically valid appraisal of the calibration process to be made. This communication provides a description of the technique that forms the basis of the results presented in Parts II and III of this series, which follow in this issue, and also highlights the issues arising from the use of small data populations in bioanalysis.
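
    The workflow described, simulate many calibration datasets, fit each, and assess the back-calculated results, can be sketched as follows. The straight-line model, noise level, and design points are illustrative assumptions standing in for a real analytical system.

```python
import random

def mc_calibration_bias(n_datasets=2000, n_points=6, sigma=0.05, seed=7):
    """Monte Carlo appraisal of least-squares calibration (sketch):
    simulate many calibration datasets from a known line, fit each by
    ordinary least squares, back-calculate an unknown at x_true, and
    summarize the bias of the estimates."""
    rng = random.Random(seed)
    slope_true, icept_true, x_true = 2.0, 0.1, 3.0
    xs = [float(i) for i in range(1, n_points + 1)]
    estimates = []
    for _ in range(n_datasets):
        ys = [icept_true + slope_true * x + rng.gauss(0.0, sigma) for x in xs]
        # ordinary least squares fit of y = a + b*x
        n = len(xs)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ys))
        b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        a = (sy - b * sx) / n
        y_meas = icept_true + slope_true * x_true + rng.gauss(0.0, sigma)
        estimates.append((y_meas - a) / b)        # back-calculated unknown
    mean = sum(estimates) / len(estimates)
    return mean - x_true, estimates

bias, est = mc_calibration_bias()
print(bias)
```

    Rerunning with a small n_datasets shows how unstable the appraisal becomes, the small-population issue the communication highlights.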

  9. SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Baes, M.; Camps, P.

    2015-09-01

    The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
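
    The decorator pattern described, building blocks with random-position generators wrapped by components that alter their output, can be sketched in a few lines. Class names and the Gaussian-ball building block are illustrative; they are not SKIRT's actual API.

```python
import random

class GaussianBall:
    """A basic building block: a toy spherical density with a simple
    random-position generator (a Gaussian ball stands in for an
    analytical profile such as a Plummer sphere)."""
    def __init__(self, scale):
        self.scale = scale

    def random_position(self, rng):
        return tuple(rng.gauss(0.0, self.scale) for _ in range(3))

class ShiftDecorator:
    """A decorator in the SKIRT sense: it wraps any component and alters
    the positions it generates (a rigid offset here; spiral perturbations
    or clumpiness follow the same pattern). Decorators expose the same
    interface as building blocks, so they chain freely."""
    def __init__(self, inner, offset):
        self.inner, self.offset = inner, offset

    def random_position(self, rng):
        x, y, z = self.inner.random_position(rng)
        dx, dy, dz = self.offset
        return (x + dx, y + dy, z + dz)

# Chain two decorators around one building block.
rng = random.Random(8)
model = ShiftDecorator(ShiftDecorator(GaussianBall(1.0), (5, 0, 0)), (0, 5, 0))
pts = [model.random_position(rng) for _ in range(4000)]
mx = sum(p[0] for p in pts) / len(pts)
my = sum(p[1] for p in pts) / len(pts)
print(mx, my)
```

    Because every decorator is itself a valid component, arbitrarily deep chains compose without code duplication, the maintainability advantage the paper emphasizes.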

  10. Independent pixel and Monte Carlo estimates of stratocumulus albedo

    NASA Technical Reports Server (NTRS)

    Cahalan, Robert F.; Ridgway, William; Wiscombe, Warren J.; Gollmer, Steven; HARSHVARDHAN

    1994-01-01

    Monte Carlo radiative transfer methods are employed here to estimate the plane-parallel albedo bias for marine stratocumulus clouds. This is the bias in estimates of the mesoscale-average albedo, which arises from the assumption that cloud liquid water is uniformly distributed. The authors compare such estimates with those based on a more realistic distribution generated from a fractal model of marine stratocumulus clouds belonging to the class of 'bounded cascade' models. In this model the cloud top and base are fixed, so that all variations in cloud shape are ignored. The model generates random variations in liquid water along a single horizontal direction, forming fractal cloud streets while conserving the total liquid water in the cloud field. The model reproduces the mean, variance, and skewness of the vertically integrated cloud liquid water, as well as its observed wavenumber spectrum, which is approximately a power law. The Monte Carlo method keeps track of the three-dimensional paths solar photons take through the cloud field, using a vectorized implementation of a direct technique. The simplifications in the cloud field studied here allow the computations to be accelerated. The Monte Carlo results are compared to those of the independent pixel approximation, which neglects net horizontal photon transport. Differences between the Monte Carlo and independent pixel estimates of the mesoscale-average albedo are on the order of 1% for conservative scattering, while the plane-parallel bias itself is an order of magnitude larger. As cloud absorption increases, the independent pixel approximation agrees even more closely with the Monte Carlo estimates. This result holds for a wide range of sun angles and aspect ratios. Thus, horizontal photon transport can be safely neglected in estimates of the area-average flux for such cloud models. This result relies on the rapid falloff of the wavenumber spectrum of stratocumulus, which ensures that the smaller
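The bounded-cascade construction can be illustrated with a toy 1D generator: each cell splits in two and a fraction of its liquid water is moved to one randomly chosen half, conserving the column total. This is a simplified sketch; the published model also lets the transferred fraction fall off with cascade level to reproduce the observed power-law wavenumber spectrum, and the parameters here are illustrative.

```python
import random

def bounded_cascade(levels, f=0.3, total=1.0, seed=None):
    """Generate a 1D bounded-cascade liquid-water field.

    At each level every cell splits in two; a fraction f of its water
    is shifted to one randomly chosen half, so the field total is
    conserved and fluctuations stay bounded (requires f < 1).
    """
    rng = random.Random(seed)
    field = [total]
    for _ in range(levels):
        new = []
        for w in field:
            sign = rng.choice((-1, 1))
            new.append(0.5 * w * (1 + sign * f))
            new.append(0.5 * w * (1 - sign * f))
        field = new
    return field

# 2**8 = 256 cells of fractal "cloud streets" with conserved total water.
field = bounded_cascade(levels=8, f=0.3, total=1.0, seed=42)
```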

  11. Monte Carlo study of a Cyberknife stereotactic radiosurgery system

    SciTech Connect

    Araki, Fujio

    2006-08-15

    This study investigated small-field dosimetry for a Cyberknife stereotactic radiosurgery system using Monte Carlo simulations. The EGSnrc/BEAMnrc Monte Carlo code was used to simulate the Cyberknife treatment head, and the DOSXYZnrc code was implemented to calculate central axis depth-dose curves, off-axis dose profiles, and relative output factors for various circular collimator sizes of 5 to 60 mm. Water-to-air stopping power ratios necessary for clinical reference dosimetry of the Cyberknife system were also evaluated by Monte Carlo simulations. Additionally, a beam quality conversion factor, k{sub Q}, for the Cyberknife system was evaluated for cylindrical ion chambers with different wall materials. The accuracy of the simulated beam was validated by agreement within 2% between the Monte Carlo calculated and measured central axis depth-dose curves and off-axis dose profiles. The calculated output factors were compared with those measured by a diode detector and an ion chamber in water. The diode output factors agreed within 1% with the calculated values down to a 10 mm collimator. The output factors with the ion chamber decreased rapidly for collimators below 20 mm. These results were confirmed by the comparison to those from Monte Carlo methods with voxel sizes and materials corresponding to both detectors. It was demonstrated that the discrepancy in the 5 and 7.5 mm collimators for the diode detector is due to the water nonequivalence of the silicon material, and the dose fall-off for the ion chamber is due to its large active volume against collimators below 20 mm. The calculated stopping power ratios of the 60 mm collimator from the Cyberknife system (without a flattening filter) agreed within 0.2% with those of a 10x10 cm{sup 2} field from a conventional linear accelerator with a heavy flattening filter and an incident electron energy of 6 MeV. The difference in the stopping power ratios between 5 and 60 mm collimators was within 0.5% at a 10 cm depth in

  12. Experiences with different parallel programming paradigms for Monte Carlo particle transport leads to a portable toolkit for parallel Monte Carlo

    SciTech Connect

    Martin, W.R.; Majumdar, A.; Rathkopf, J.A.; Litvin, M.

    1993-04-01

    Monte Carlo particle transport is easy to implement on massively parallel computers relative to other methods of transport simulation. This paper describes experiences of implementing a realistic demonstration Monte Carlo code on a variety of parallel architectures. Our "pool of tasks" technique, which allows reproducibility from run to run regardless of the number of processors, is discussed. We present detailed timing studies of simulations performed on the 128 processor BBN-ACI TC2000 and preliminary timing results for the 32 processor Kendall Square Research KSR-1. Given sufficient workload to distribute across many computational nodes, the BBN achieves nearly linear speedup for a large number of nodes. The KSR, with which we have had less experience, performs poorly with more than ten processors. A simple model incorporating known causes of overhead accurately predicts observed behavior. A general-purpose communication and control package to facilitate the implementation of existing Monte Carlo packages is described together with timings on the BBN. This package adds insignificantly to the computational costs of parallel simulations.
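The run-to-run reproducibility of a pool-of-tasks scheme comes from seeding each task by its index rather than by the processor that happens to execute it, so any assignment of tasks to workers yields identical totals. A schematic sketch (not the authors' code; the tally and seed offset are invented for illustration):

```python
import random

def simulate_batch(task_id, n_particles=1000):
    """One task = one particle batch with its own fixed seed, so the
    result does not depend on which processor runs it or in what order."""
    rng = random.Random(12345 + task_id)  # per-task seed, illustrative offset
    # Toy tally: count particles whose 10-step random walk ends past x = 1.
    escaped = 0
    for _ in range(n_particles):
        x = 0.0
        for _ in range(10):
            x += rng.uniform(-0.5, 1.0)
        if x > 1.0:
            escaped += 1
    return escaped

tasks = list(range(8))
# Any scheduling order (here: forward vs. reversed) gives the same total.
total_in_order = sum(simulate_batch(t) for t in tasks)
total_reversed = sum(simulate_batch(t) for t in reversed(tasks))
```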

  14. Longitudinal development of extensive air showers: Hybrid code SENECA and full Monte Carlo

    NASA Astrophysics Data System (ADS)

    Ortiz, Jeferson A.; Medina-Tanco, Gustavo; de Souza, Vitor

    2005-06-01

    New experiments, exploring the ultra-high energy tail of the cosmic ray spectrum with unprecedented detail, are exerting severe pressure on extensive air shower modelling. Fast, detailed codes are needed in order to extract and understand the richness of information now available. Some hybrid simulation codes have been proposed recently to this effect (e.g., the combination of the traditional Monte Carlo scheme with a system of cascade equations, or pre-simulated air showers). In this context, we explore the potential of SENECA, an efficient three-dimensional hybrid simulation code, as a valid practical alternative to full Monte Carlo simulations of extensive air showers generated by ultra-high energy cosmic rays. We extensively compare the hybrid method with the traditional, but time-consuming, full Monte Carlo code CORSIKA, which is the de facto standard in the field. The hybrid scheme of the SENECA code is based on the simulation of each particle with the traditional Monte Carlo method at two steps of the shower development: the first step predicts the large fluctuations in the very first particle interactions at high energies, while the second step provides a well detailed lateral distribution simulation of the final stages of the air shower. Both Monte Carlo simulation steps are connected by a cascade equation system which correctly reproduces the hadronic and electromagnetic longitudinal profiles. We study the influence of this approach on the main longitudinal characteristics of proton-, iron nucleus- and gamma-induced air showers and compare with the predictions of the well-known CORSIKA code using the QGSJET hadronic interaction model.

  15. Searching for convergence in phylogenetic Markov chain Monte Carlo.

    PubMed

    Beiko, Robert G; Keith, Jonathan M; Harlow, Timothy J; Ragan, Mark A

    2006-08-01

    Markov chain Monte Carlo (MCMC) is a methodology that is gaining widespread use in the phylogenetics community and is central to phylogenetic software packages such as MrBayes. An important issue for users of MCMC methods is how to select appropriate values for adjustable parameters such as the length of the Markov chain or chains, the sampling density, the proposal mechanism, and, if Metropolis-coupled MCMC is being used, the number of heated chains and their temperatures. Although some parameter settings have been examined in detail in the literature, others are frequently chosen with more regard to computational time or personal experience with other data sets. Such choices may lead to inadequate sampling of tree space or an inefficient use of computational resources. We performed a detailed study of convergence and mixing for 70 randomly selected, putatively orthologous protein sets with different sizes and taxonomic compositions. Replicated runs from multiple random starting points permit a more rigorous assessment of convergence, and we developed two novel statistics, delta and epsilon, for this purpose. Although likelihood values invariably stabilized quickly, adequate sampling of the posterior distribution of tree topologies took considerably longer. Our results suggest that multimodality is common for data sets with 30 or more taxa and that this results in slow convergence and mixing. However, we also found that the pragmatic approach of combining data from several short, replicated runs into a "metachain" to estimate bipartition posterior probabilities provided good approximations, and that such estimates were no worse in approximating a reference posterior distribution than those obtained using a single long run of the same length as the metachain. Precision appears to be best when heated Markov chains have low temperatures, whereas chains with high temperatures appear to sample trees with high posterior probabilities only rarely.
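The "metachain" estimate described above amounts to pooling post-burn-in tree samples from several short replicated runs and reading bipartition posterior probabilities off their sample frequencies. A schematic sketch, with trees represented simply as sets of bipartitions (the data structures are invented for illustration):

```python
from collections import Counter

def bipartition_frequencies(sampled_trees):
    """Posterior probability of each bipartition, estimated as its
    frequency among the sampled trees."""
    counts = Counter()
    for tree in sampled_trees:   # here a tree is just a set of bipartitions
        counts.update(tree)
    n = len(sampled_trees)
    return {bp: c / n for bp, c in counts.items()}

def metachain(runs):
    """Pool the samples from several short replicated runs into one
    'metachain' before estimating bipartition posteriors."""
    pooled = [tree for run in runs for tree in run]
    return bipartition_frequencies(pooled)

# Two toy "runs"; bipartitions are labelled by frozensets of taxa.
ab = frozenset("AB")
cd = frozenset("CD")
run1 = [{ab, cd}, {ab}, {ab, cd}]
run2 = [{cd}, {ab, cd}, {ab, cd}]
est = metachain([run1, run2])   # ab and cd each appear in 5 of 6 trees
```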

  16. Mesoscopic kinetic Monte Carlo modeling of organic photovoltaic device characteristics

    NASA Astrophysics Data System (ADS)

    Kimber, Robin G. E.; Wright, Edward N.; O'Kane, Simon E. J.; Walker, Alison B.; Blakesley, James C.

    2012-12-01

    Measured mobility and current-voltage characteristics of single layer and photovoltaic (PV) devices composed of poly{9,9-dioctylfluorene-co-bis[N,N'-(4-butylphenyl)]bis(N,N'-phenyl-1,4-phenylene)diamine} (PFB) and poly(9,9-dioctylfluorene-co-benzothiadiazole) (F8BT) have been reproduced by a mesoscopic model employing the kinetic Monte Carlo (KMC) approach. Our aim is to show how to avoid the uncertainties common in electrical transport models arising from the need to fit a large number of parameters when little information is available, for example, a single current-voltage curve. Here, simulation parameters are derived from a series of measurements using a self-consistent “building-blocks” approach, starting from data on the simplest systems. We found that site energies show disorder and that correlations in the site energies and a distribution of deep traps must be included in order to reproduce measured charge mobility-field curves at low charge densities in bulk PFB and F8BT. The parameter set from the mobility-field curves reproduces the unipolar current in single layers of PFB and F8BT and allows us to deduce charge injection barriers. Finally, by combining these disorder descriptions and injection barriers with an optical model, the external quantum efficiency and current densities of blend and bilayer organic PV devices can be successfully reproduced across a voltage range encompassing reverse and forward bias, with the recombination rate the only parameter to be fitted, found to be 1 × 10^7 s^-1. These findings demonstrate an approach that removes some of the arbitrariness present in transport models of organic devices, which validates the KMC as an accurate description of organic optoelectronic systems, and provides information on the microscopic origins of the device behavior.
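A single kinetic Monte Carlo step of the kind used in such lattice models can be sketched as follows. Miller-Abrahams hop rates are a common choice in this literature, though this sketch does not reproduce the paper's exact rate expression, and all parameter values are illustrative:

```python
import math
import random

def miller_abrahams_rate(dE, nu0=1e12, kT=0.025):
    """Hop rate in s^-1: downhill hops occur at the attempt frequency
    nu0; uphill hops are Boltzmann-suppressed. Parameters illustrative."""
    return nu0 * math.exp(-dE / kT) if dE > 0 else nu0

def kmc_step(rates, rng):
    """One kinetic Monte Carlo (BKL/Gillespie) step: choose an event with
    probability proportional to its rate, and advance the clock by an
    exponentially distributed waiting time."""
    total = sum(rates)
    r = rng.random() * total
    chosen = len(rates) - 1          # fallback guards float rounding
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / total
    return chosen, dt

rng = random.Random(1)
# Candidate hops with assorted site-energy differences (in eV, illustrative).
rates = [miller_abrahams_rate(dE) for dE in (-0.1, 0.0, 0.05, 0.2)]
event, dt = kmc_step(rates, rng)
```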

  17. CSnrc: Correlated sampling Monte Carlo calculations using EGSnrc

    SciTech Connect

    Buckley, Lesley A.; Kawrakow, I.; Rogers, D.W.O.

    2004-12-01

    CSnrc, a new user-code for the EGSnrc Monte Carlo system is described. This user-code improves the efficiency when calculating ratios of doses from similar geometries. It uses a correlated sampling variance reduction technique. CSnrc is developed from an existing EGSnrc user-code CAVRZnrc and improves upon the correlated sampling algorithm used in an earlier version of the code written for the EGS4 Monte Carlo system. Improvements over the EGS4 version of the algorithm avoid repetition of sections of particle tracks. The new code includes a rectangular phantom geometry not available in other EGSnrc cylindrical codes. Comparison to CAVRZnrc shows gains in efficiency of up to a factor of 64 for a variety of test geometries when computing the ratio of doses to the cavity for two geometries. CSnrc is well suited to in-phantom calculations and is used to calculate the central electrode correction factor P{sub cel} in high-energy photon and electron beams. Current dosimetry protocols base the value of P{sub cel} on earlier Monte Carlo calculations. The current CSnrc calculations achieve 0.02% statistical uncertainties on P{sub cel}, much lower than those previously published. The current values of P{sub cel} compare well with the values used in dosimetry protocols for photon beams. For electron beams, CSnrc calculations are reported at the reference depth used in recent protocols and show up to a 0.2% correction for a graphite electrode, a correction currently ignored by dosimetry protocols. The calculations show that for a 1 mm diameter aluminum central electrode, the correction factor differs somewhat from the values used in both the IAEA TRS-398 code of practice and the AAPM's TG-51 protocol.
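The correlated-sampling idea, driving both geometries with identical particle histories so that statistical noise largely cancels in the dose ratio, can be illustrated with a toy tally (this is a sketch of the general technique, not the CSnrc algorithm; the slab model and parameters are invented):

```python
import math
import random

def dose_estimate(attenuation, histories, seed):
    """Toy dose tally through a slab: each history samples a path length
    and scores exp(-attenuation * path)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(histories):
        path = rng.expovariate(1.0)
        total += math.exp(-attenuation * path)
    return total / histories

# Correlated sampling: the SAME seed (hence the same particle histories)
# is used for both geometries, so the noise largely cancels in the ratio.
d1 = dose_estimate(0.50, 5000, seed=7)
d2 = dose_estimate(0.55, 5000, seed=7)
ratio_correlated = d1 / d2

# Uncorrelated version for contrast: independent histories per geometry
# give a much noisier ratio at the same number of histories.
d2_indep = dose_estimate(0.55, 5000, seed=8)
ratio_independent = d1 / d2_indep
```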

  18. Monte Carlo simulation of light propagation in the adult brain

    NASA Astrophysics Data System (ADS)

    Mudra, Regina M.; Nadler, Andreas; Keller, Emanuella; Niederer, Peter

    2004-06-01

    When near infrared spectroscopy (NIRS) is applied noninvasively to the adult head for brain monitoring, extra-cerebral bone and surface tissue exert a substantial influence on the cerebral signal. Most attempts to subtract extra-cerebral contamination involve spatially resolved spectroscopy (SRS). However, inter-individual variability of anatomy restricts the reliability of SRS. We simulated the light propagation with Monte Carlo techniques on the basis of anatomical structures determined from 3D magnetic resonance imaging (MRI) exhibiting a voxel resolution of 0.8 x 0.8 x 0.8 mm^3 for three different pairs of T1/T2 values each. The MRI data were used to define the material light absorption and dispersion coefficient for each voxel. The resulting spatial matrix was applied in the Monte Carlo simulation to determine the light propagation in the cerebral cortex and overlaying structures. The accuracy of the Monte Carlo simulation was furthermore increased by using a constant optical path length for the photons which was less than the median optical path length of the different materials. Based on our simulations we found a differential pathlength factor (DPF) of 6.15, which is close to the value of 5.9 found in the literature for a distance of 4.5 cm between the external sensors. Furthermore, we weighted the spatial probability distribution of the photons within the different tissues with the probabilities of the relative blood volume within the tissue. The results show that 50% of the NIRS signal is determined by the grey matter of the cerebral cortex, which allows us to conclude that NIRS can produce meaningful cerebral blood flow measurements provided that the necessary corrections for extracerebral contamination are included.
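The core of such a photon Monte Carlo can be sketched with a toy 1D walk using implicit capture, where each interaction deposits a fraction of the photon weight instead of killing the photon outright. The actual simulation is fully 3D on an MRI-derived voxel grid with per-voxel coefficients; the 1D geometry and coefficients below are illustrative only:

```python
import math
import random

def photon_path(mu_a, mu_s, max_steps=1000, rng=random):
    """Toy 1D photon random walk with implicit-capture weighting:
    free path lengths are sampled from exp(mu_t); at each interaction
    the absorbed fraction (1 - albedo) of the weight is tallied and
    the photon continues with reduced weight."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    weight, depth, absorbed = 1.0, 0.0, 0.0
    direction = 1.0
    for _ in range(max_steps):
        step = -math.log(rng.random()) / mu_t   # sampled free path
        depth += direction * step
        if depth < 0.0:                         # escaped back out the surface
            break
        absorbed += weight * (1.0 - albedo)
        weight *= albedo
        direction = rng.choice((-1.0, 1.0))     # "isotropic" scatter in 1D
        if weight < 1e-4:                       # terminate negligible weights
            break
    return absorbed

rng = random.Random(3)
# Absorbed fraction averaged over 200 photons (mu_a, mu_s in 1/cm, illustrative).
mean_absorbed = sum(photon_path(0.1, 10.0, rng=rng) for _ in range(200)) / 200
```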

  19. Vibrational studies and Monte Carlo simulations of the sorption of aromatic carbonyls in faujasitic zeolites

    NASA Astrophysics Data System (ADS)

    Brémard, C.; Buntinx, G.; Ginestet, G.

    1997-06-01

    Combined experimental spectroscopy (Raman and DRIFT), Monte Carlo simulations and geometry optimizations were used to investigate the location and conformation of benzophenone and benzil molecules incorporated into faujasitic Na56FAU zeolite. The benzophenone and benzil molecules are located within the supercage, the CO fragment pointing towards the extraframework Na+ cations. The geometry of the incorporated molecules is found to be slightly modified relative to the free molecule. At high coverage, the benzil molecules are associated in pairs in the supercage.

  20. Calculating rovibrationally excited states of H2D+ and HD2+ by combination of fixed node and multi-state rotational diffusion Monte Carlo

    NASA Astrophysics Data System (ADS)

    Ford, Jason E.; McCoy, Anne B.

    2016-02-01

    In this work the efficacy of a combined approach for capturing rovibrational coupling is investigated. Specifically, the multi-state rotational DMC method is used in combination with fixed-node DMC in a study of the rotation vibration energy levels of H2D+ and HD2+. Analysis of the results of these calculations shows very good agreement between the calculated energies and previously reported values. Where differences are found, they can be attributed to Coriolis couplings, which are large in these ions and which are not fully accounted for in this approach.

  1. Monte-Carlo histories of refractory interstellar dust

    NASA Technical Reports Server (NTRS)

    Clayton, D. D.; Liffman, K.

    1988-01-01

    Monte Carlo histories of 6 × 10^6 individual dust particles injected uniformly from stars into the interstellar medium during a 6 × 10^9 year history are calculated. The particles are given a two-phase internal structure of successive thermal condensates, and are distributed in initial radius as 1/a^3 for a between 0.01 and 0.1 micron. The evolution of this system illustrates the distinction between several different lifetimes for interstellar dust. Most are destroyed, but some grow in size. Several important consequences for interstellar dust are described.

  2. Novel extrapolation method in the Monte Carlo shell model

    SciTech Connect

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2010-12-15

    We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of {sup 56}Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g{sub 9/2}-shell calculation of {sup 64}Ge.

  3. Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates

    SciTech Connect

    Perfetti, Christopher M.; Rearden, Bradley T.

    2015-01-01

    This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies and their potential for identifying undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the potential to accurately predict the behavior of undersampling biases in the responses examined in this study.

  4. More about Zener drag studies with Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Di Prinzio, Carlos L.; Druetta, Esteban; Nasello, Olga Beatriz

    2013-03-01

    Grain growth (GG) processes in the presence of second-phase and stationary particles have been widely studied but the results found are inconsistent. We present new GG simulations in two- and three-dimensional (2D and 3D) polycrystalline samples with second phase stationary particles, using the Monte Carlo technique. Simulations using values of particle concentration greater than 15% and particle radii different from 1 or 3 are performed, thus covering a range of particle radii and concentrations not previously studied. It is shown that only the results for 3D samples follow Zener's law.

  5. Current status of the PSG Monte Carlo neutron transport code

    SciTech Connect

    Leppaenen, J.

    2006-07-01

    PSG is a new Monte Carlo neutron transport code, developed at the Technical Research Centre of Finland (VTT). The code is mainly intended for fuel assembly-level reactor physics calculations, such as group constant generation for deterministic reactor simulator codes. This paper presents the current status of the project and the essential capabilities of the code. Although the main application of PSG is in lattice calculations, the geometry is not restricted to two dimensions. This paper presents the validation of PSG against the experimental results of the three-dimensional MOX fuelled VENUS-2 reactor dosimetry benchmark. (authors)

  6. Monte Carlo simulations of charge transport in heterogeneous organic semiconductors

    NASA Astrophysics Data System (ADS)

    Aung, Pyie Phyo; Khanal, Kiran; Luettmer-Strathmann, Jutta

    2015-03-01

    The efficiency of organic solar cells depends on the morphology and electronic properties of the active layer. Research teams have been experimenting with different conducting materials to achieve more efficient solar panels. In this work, we perform Monte Carlo simulations to study charge transport in heterogeneous materials. We have developed a coarse-grained lattice model of polymeric photovoltaics and use it to generate active layers with ordered and disordered regions. We determine carrier mobilities for a range of conditions to investigate the effect of the morphology on charge transport.

  7. Continuous-Estimator Representation for Monte Carlo Criticality Diagnostics

    SciTech Connect

    Kiedrowski, Brian C.; Brown, Forrest B.

    2012-06-18

    An alternate means of computing diagnostics for Monte Carlo criticality calculations is proposed. Overlapping spherical regions or estimators are placed covering the fissile material with a minimum center-to-center separation of the 'fission distance', which is defined herein, and a radius that is some multiple thereof. Fission neutron production is recorded based upon a weighted average of proximities to centers for all the spherical estimators. These scores are used to compute the Shannon entropy, and shown to reproduce the value, to within an additive constant, determined from a well-placed mesh by a user. The spherical estimators are also used to assess statistical coverage.
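The Shannon entropy that the spherical estimators reproduce is computed from the binned fission-source distribution as H = -Σ p_i log2 p_i; a stationary entropy from cycle to cycle is the usual sign of source convergence. A minimal sketch (the binning and counts are illustrative):

```python
import math

def shannon_entropy(source_counts):
    """Shannon entropy (in bits) of a binned fission-source distribution.
    Empty bins contribute nothing; a perfectly uniform source over n bins
    gives log2(n), and a source concentrated in one bin gives 0."""
    total = sum(source_counts)
    h = 0.0
    for c in source_counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h

uniform = [25, 25, 25, 25]   # source spread evenly over 4 mesh bins
peaked = [97, 1, 1, 1]       # source clumped into one bin
h_uniform = shannon_entropy(uniform)   # log2(4) = 2 bits
h_peaked = shannon_entropy(peaked)     # well below 2 bits
```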

  8. 3D Monte Carlo radiation transfer modelling of photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Campbell, C. Louise; Christison, Craig; Brown, C. Tom A.; Wood, Kenneth; Valentine, Ronan M.; Moseley, Harry

    2015-06-01

    The effects of ageing and skin type on Photodynamic Therapy (PDT) for different treatment methods have been theoretically investigated. A multilayered Monte Carlo Radiation Transfer model is presented where both daylight activated PDT and conventional PDT are compared. It was found that light penetrates deeper through older skin with a lighter complexion, which translates into a deeper effective treatment depth. The effect of ageing was found to be larger for darker skin types. The investigation further strengthens the usage of daylight as a potential light source for PDT where effective treatment depths of about 2 mm can be achieved.

  9. PREFACE: First European Workshop on Monte Carlo Treatment Planning

    NASA Astrophysics Data System (ADS)

    Reynaert, Nick

    2007-07-01

    The "First European Workshop on Monte Carlo treatment planning" was an initiative of the European working group on Monte Carlo treatment planning (EWG-MCTP). It was organised at Ghent University (Belgium) on 22-25 October 2006. The meeting was very successful and was attended by 150 participants. The impressive list of invited speakers and the scientific contributions (posters and oral presentations) led to a very interesting program that was well appreciated by all attendees. In addition, the presence of seven vendors of commercial MCTP software systems provided serious added value to the workshop. For each vendor, a representative gave a presentation in a dedicated session, explaining the current status of their system. It is clear that, for "traditional" radiotherapy applications (using photon or electron beams), Monte Carlo dose calculations have become the state of the art, and are being introduced into almost all commercial treatment planning systems. Invited lectures illustrated that scientific challenges are currently associated with 4D applications (e.g. respiratory motion) and the introduction of MC dose calculations in inverse planning. But it was striking that the Monte Carlo technique is also becoming very important in more novel treatment modalities such as BNCT, hadron therapy, stereotactic radiosurgery, Tomotherapy, etc. This emphasizes the continuously growing interest in MCTP. The people who attended the dosimetry session will certainly remember the high-level discussion on the determination of correction factors for different ion chambers used in small fields. The following proceedings will certainly confirm the high scientific level of the meeting. I would like to thank the members of the local organizing committee for all the hard work done before, during and after this meeting. The organisation of such an event is not a trivial task and it would not have been possible without the help of all my colleagues. I would also like to thank

  10. Dynamic Load Balancing of Parallel Monte Carlo Transport Calculations

    SciTech Connect

    O'Brien, M; Taylor, J; Procassini, R

    2004-12-22

    The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle work load varies over the course of the simulation, this algorithm determines each cycle if dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations.
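The per-cycle balancing decision can be sketched as a simple imbalance test followed by a greedy redistribution of particles from the most to the least loaded processors. This is a toy criterion for illustration; the actual algorithm weighs the predicted speedup against the cost of the particle communications:

```python
def needs_balancing(particle_counts, threshold=1.2):
    """Decide each cycle whether to rebalance: trigger when the busiest
    processor holds more than `threshold` times the average workload
    (threshold is an illustrative knob)."""
    avg = sum(particle_counts) / len(particle_counts)
    return max(particle_counts) > threshold * avg

def rebalance(particle_counts):
    """Greedy rebalance: repeatedly move particles from the most loaded
    to the least loaded processor until counts are within one particle."""
    counts = list(particle_counts)
    target = sum(counts) // len(counts)
    while max(counts) - min(counts) > 1:
        i = counts.index(max(counts))
        j = counts.index(min(counts))
        move = max(min(counts[i] - target, target - counts[j]), 1)
        counts[i] -= move
        counts[j] += move
    return counts

loads = [1000, 100, 100, 100]    # one heavily worked spatial domain
balanced = rebalance(loads) if needs_balancing(loads) else loads
```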

  11. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.

  12. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulations.

  13. Computational radiology and imaging with the MCNP Monte Carlo code

    SciTech Connect

    Estes, G.P.; Taylor, W.M.

    1995-05-01

    MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.

  14. Implicit Monte Carlo Radiation Transport Simulations of Four Test Problems

    SciTech Connect

    Gentile, N

    2007-08-01

    Radiation transport codes, like almost all codes, are difficult to develop and debug. It is helpful to have small, easy to run test problems with known answers to use in development and debugging. It is also prudent to re-run test problems periodically during development to ensure that previous code capabilities have not been lost. We describe four radiation transport test problems with analytic or approximate analytic answers. These test problems are suitable for use in debugging and testing radiation transport codes. We also give results of simulations of these test problems performed with an Implicit Monte Carlo photonics code.

  15. Communication: Water on hexagonal boron nitride from diffusion Monte Carlo

    SciTech Connect

    Al-Hamdani, Yasmine S.; Ma, Ming; Michaelides, Angelos; Alfè, Dario; Lilienfeld, O. Anatole von

    2015-05-14

    Despite a recent flurry of experimental and simulation studies, an accurate estimate of the interaction strength of water molecules with hexagonal boron nitride is lacking. Here, we report quantum Monte Carlo results for the adsorption of a water monomer on a periodic hexagonal boron nitride sheet, which yield a water monomer interaction energy of −84 ± 5 meV. We use the results to evaluate the performance of several widely used density functional theory (DFT) exchange correlation functionals and find that they all deviate substantially. Differences in interaction energies between different adsorption sites are however better reproduced by DFT.

  16. Disorder-induced genetic divergence: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Schulz, Michael; Pȩkalski, Andrzej

    2002-10-01

    We present a Monte Carlo simulation of a system composed of several populations, each living in a possibly different habitat. We show the influence of landscape disorder on the genetic pool of finite populations. We demonstrate that a strongly disordered environment generates an increase of the genetic distance between populations on identical islands. The distance becomes permanent in the limit of infinitely long times. On the contrary, landscapes with weak disorder offer only a temporary allelic divergence, which vanishes in the long-time limit. Similarities between these phenomena and the well-known first-order phase transitions in thermodynamics are analyzed.

  17. Cluster Monte Carlo simulations of the nematic-isotropic transition

    NASA Astrophysics Data System (ADS)

    Priezjev, N. V.; Pelcovits, Robert A.

    2001-06-01

    We report the results of simulations of the three-dimensional Lebwohl-Lasher model of the nematic-isotropic transition using a single cluster Monte Carlo algorithm. The algorithm, first introduced by Kunz and Zumbach to study two-dimensional nematics, is a modification of the Wolff algorithm for spin systems, and greatly reduces critical slowing down. We calculate the free energy in the neighborhood of the transition for systems up to linear size 70. We find a double well structure with a barrier that grows with increasing system size. We thus obtain an upper estimate of the value of the transition temperature in the thermodynamic limit.
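
    The Kunz-Zumbach update generalizes the Wolff cluster move to nematics; the cluster-growth idea itself can be illustrated with a minimal Wolff sketch for the 2D Ising model (a stand-in for the Lebwohl-Lasher case, which additionally involves reflections of the director; parameters here are illustrative):

```python
import math
import random

def wolff_step(spins, L, beta, rng):
    """One Wolff single-cluster update on an L x L periodic Ising lattice."""
    p_add = 1.0 - math.exp(-2.0 * beta)          # bond-activation probability
    seed = (rng.randrange(L), rng.randrange(L))
    s0 = spins[seed]
    cluster = {seed}
    stack = [seed]
    while stack:
        x, y = stack.pop()
        for n in (((x + 1) % L, y), ((x - 1) % L, y),
                  (x, (y + 1) % L), (x, (y - 1) % L)):
            if n not in cluster and spins[n] == s0 and rng.random() < p_add:
                cluster.add(n)
                stack.append(n)
    for site in cluster:                         # flip the whole cluster at once
        spins[site] = -spins[site]
    return len(cluster)

rng = random.Random(0)
L, beta = 16, 0.5                                # above the critical coupling ~0.4407
spins = {(x, y): 1 for x in range(L) for y in range(L)}
sizes = [wolff_step(spins, L, beta, rng) for _ in range(200)]
```

    Because entire correlated clusters flip in one move, critical slowing down is strongly suppressed relative to single-spin updates.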

  18. Monte Carlo study of neutrino acceleration in supernova shocks

    NASA Technical Reports Server (NTRS)

    Kazanas, D.; Ellison, D. C.

    1981-01-01

    The first order Fermi acceleration mechanism of cosmic rays in shocks may be at work for neutrinos in supernova shocks when the latter are at densities greater than 10 to the 13th g/cu cm, at which the core material is opaque to neutrinos. A Monte Carlo approach to study this effect is employed, and the emerging neutrino power law spectra are presented. The increased energy acquired by the neutrinos may facilitate their detection in supernova explosions and provide information about the physics of collapse.

  19. Markov chain Monte Carlo method without detailed balance.

    PubMed

    Suwa, Hidemaro; Todo, Synge

    2010-09-17

    We present a specific algorithm that generally satisfies the balance condition without imposing the detailed balance in the Markov chain Monte Carlo. In our algorithm, the average rejection rate is minimized, and even reduced to zero in many relevant cases. The absence of the detailed balance also introduces a net stochastic flow in a configuration space, which further boosts up the convergence. We demonstrate that the autocorrelation time of the Potts model becomes more than 6 times shorter than that by the conventional Metropolis algorithm. Based on the same concept, a bounce-free worm algorithm for generic quantum spin models is formulated as well.
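
    The autocorrelation comparison above is against the conventional Metropolis algorithm; for reference, a minimal single-site Metropolis sampler for the q-state Potts model on a torus (the baseline of the comparison, not the Suwa-Todo update itself) might look like:

```python
import math
import random

def metropolis_potts(L=16, q=4, beta=0.8, sweeps=100, seed=1):
    """Single-site Metropolis updates for the q-state Potts model on an L x L torus."""
    rng = random.Random(seed)
    spin = [[rng.randrange(q) for _ in range(L)] for _ in range(L)]

    def local_energy(x, y, s):
        # -1 for each of the four neighbor bonds in the same state
        return -sum(s == spin[(x + dx) % L][(y + dy) % L]
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    accepted = 0
    for _ in range(sweeps * L * L):
        x, y = rng.randrange(L), rng.randrange(L)
        new = rng.randrange(q)
        dE = local_energy(x, y, new) - local_energy(x, y, spin[x][y])
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spin[x][y] = new
            accepted += 1
    return spin, accepted / (sweeps * L * L)

spin, acc = metropolis_potts()
```

    The rejections counted here are exactly what the Suwa-Todo geometric-allocation construction minimizes (to zero in many cases).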

  20. Exploring mass perception with Markov chain Monte Carlo.

    PubMed

    Cohen, Andrew L; Ross, Michael G

    2009-12-01

    Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal interparticipant differences and a qualitative distinction between the perception of 1:1 and 1:2 ratios. The results strongly suggest that participants' perceptions of 1:1 collisions are described by simple heuristics. The evidence for 1:2 collisions favors heuristic perception models that are sensitive to the sign but not the magnitude of perceived mass differences.

  1. Multidimensional master equation and its Monte-Carlo simulation.

    PubMed

    Pang, Juan; Bai, Zhan-Wu; Bao, Jing-Dong

    2013-02-28

    We derive an integral form of the multidimensional master equation for a Markovian process, in which the transition function is obtained in terms of a set of discrete Langevin equations. The solution of the master equation, namely the probability density function, is calculated by using the Monte-Carlo composite sampling method. In comparison with the usual Langevin-trajectory simulation, the present approach effectively decreases the coarse-grained error. We apply the master equation to investigate the time-dependent barrier escape rate of a particle from a two-dimensional metastable potential and show the advantage of this approach in the calculation of quantities that depend on the probability density function.
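
    The Langevin-trajectory route that the composite-sampling method is compared against can be sketched as an ensemble of Euler-Maruyama trajectories whose endpoints estimate the probability density (a one-dimensional harmonic toy with assumed parameters, not the authors' two-dimensional barrier-escape problem):

```python
import math
import random

def langevin_endpoints(n_traj=2000, n_steps=400, dt=0.01, gamma=1.0, kT=1.0, seed=2):
    """Overdamped Langevin dynamics in a harmonic well V(x) = x^2/2, integrated
    with the Euler-Maruyama scheme; endpoint samples approach the Boltzmann
    density, whose variance is kT."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * kT * dt / gamma)     # noise amplitude per step
    xs = []
    for _ in range(n_traj):
        x = 0.0
        for _ in range(n_steps):
            x += -x * dt / gamma + sigma * rng.gauss(0.0, 1.0)  # force = -V'(x) = -x
        xs.append(x)
    return xs

xs = langevin_endpoints()
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
```

    A histogram of `xs` is the trajectory-based density estimate whose coarse-grained error the master-equation approach is designed to reduce.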

  2. Parallelized quantum Monte Carlo algorithm with nonlocal worm updates.

    PubMed

    Masaki-Kato, Akiko; Suzuki, Takafumi; Harada, Kenji; Todo, Synge; Kawashima, Naoki

    2014-04-11

    Based on the worm algorithm in the path-integral representation, we propose a general quantum Monte Carlo algorithm suitable for parallelizing on a distributed-memory computer by domain decomposition. Of particular importance is its application to large lattice systems of bosons and spins. A large number of worms are introduced and their population is controlled by a fictitious transverse field. For a benchmark, we study the size dependence of the Bose-condensation order parameter of the hard-core Bose-Hubbard model with L×L×βt=10240×10240×16, using 3200 computing cores, which shows good parallelization efficiency.

  3. Communication: Variation after response in quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Neuscamman, Eric

    2016-08-01

    We present a new method for modeling electronically excited states that overcomes a key failing of linear response theory by allowing the underlying ground state ansatz to relax in the presence of an excitation. The method is variational, has a cost similar to ground state variational Monte Carlo, and admits both open and periodic boundary conditions. We present preliminary numerical results showing that, when paired with the Jastrow antisymmetric geminal power ansatz, the variation-after-response formalism delivers accuracies for valence and charge transfer single excitations on par with equation of motion coupled cluster, while surpassing coupled cluster's accuracy for excitations with significant doubly excited character.

  4. Active neutron multiplicity analysis and Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Krick, M. S.; Ensslin, N.; Langner, D. G.; Miller, M. C.; Siebelist, R.; Stewart, J. E.; Ceo, R. N.; May, P. K.; Collins, L. L., Jr.

    Active neutron multiplicity measurements of high-enrichment uranium metal and oxide samples have been made at Los Alamos and Y-12. The data from the measurements of standards at Los Alamos were analyzed to obtain values for neutron multiplication and source-sample coupling. These results are compared to equivalent results obtained from Monte Carlo calculations. An approximate relationship between coupling and multiplication is derived and used to correct doubles rates for multiplication and coupling. The utility of singles counting for uranium samples is also examined.

  5. Non-Boltzmann Ensembles and Monte Carlo Simulations

    NASA Astrophysics Data System (ADS)

    Murthy, K. P. N.

    2016-10-01

    Boltzmann sampling based on the Metropolis algorithm has been extensively used for simulating a canonical ensemble and for calculating macroscopic properties of a closed system at desired temperatures. An estimate of a mechanical property, like energy, of an equilibrium system is made by averaging over a large number of microstates generated by Boltzmann Monte Carlo methods. This is possible because we can assign a numerical value for energy to each microstate. However, a thermal property like entropy is not easily accessible to these methods. The reason is simple: we cannot assign a numerical value for entropy to a microstate. Entropy is not a property associated with any single microstate. It is a collective property of all the microstates. Toward calculating entropy and other thermal properties, a non-Boltzmann Monte Carlo technique called umbrella sampling was proposed some forty years ago. Umbrella sampling has since undergone several metamorphoses, and we now have multicanonical Monte Carlo, entropic sampling, flat-histogram methods, the Wang-Landau algorithm, etc. This class of methods generates non-Boltzmann ensembles, which are unphysical. However, physical quantities can be calculated as follows. First un-weight a microstate of the entropic ensemble; then re-weight it to the desired physical ensemble. Carry out a weighted average over the entropic ensemble to estimate physical quantities. In this talk I shall tell you of the most recent non-Boltzmann Monte Carlo method and show how to calculate free energy for a few systems. We first consider estimation of free energy as a function of energy at different temperatures to characterize the phase transition in a hairpin DNA in the presence of an unzipping force. Next we consider free energy as a function of order parameter, and to this end we estimate the density of states g(E, M), as a function of both energy E and order parameter M. This is carried out in two stages. We estimate g(E) in the first stage. Employing g
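
    A flat-histogram method of the Wang-Landau type can be sketched on a toy model whose density of states is known exactly: N independent two-state spins with E equal to the number of up spins, so g(E) = C(N, E). This is a didactic stand-in chosen so the estimate can be checked, not the DNA-hairpin application of the talk:

```python
import math
import random

def wang_landau(N=10, f_final=1e-4, flat=0.8, seed=3):
    """Wang-Landau estimate of ln g(E) for N independent two-state spins,
    where E = number of 'up' spins and exactly g(E) = C(N, E)."""
    rng = random.Random(seed)
    spins = [0] * N
    E = 0
    lng = [0.0] * (N + 1)                # running ln g estimate
    hist = [0] * (N + 1)
    lnf = 1.0                            # modification factor, halved when flat
    while lnf > f_final:
        for _ in range(1000):
            i = rng.randrange(N)
            E_new = E + (1 if spins[i] == 0 else -1)
            # accept flip with min(1, g(E)/g(E_new)) to flatten the E histogram
            diff = lng[E] - lng[E_new]
            if diff >= 0 or rng.random() < math.exp(diff):
                spins[i] ^= 1
                E = E_new
            lng[E] += lnf
            hist[E] += 1
        if min(hist) > flat * (sum(hist) / len(hist)):   # flatness check
            hist = [0] * (N + 1)
            lnf /= 2.0
    # normalize so that sum_E g(E) = 2^N
    m = max(lng)
    shift = math.log(sum(math.exp(v - m) for v in lng)) + m - N * math.log(2)
    return [v - shift for v in lng]

lng = wang_landau()
exact = [math.log(math.comb(10, k)) for k in range(11)]
```

    The returned ln g(E) can be re-weighted to any temperature, which is precisely how the entropic ensemble yields free energies.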

  6. Graphics Processing Unit Accelerated Hirsch-Fye Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Moore, Conrad; Abu Asal, Sameer; Rajagoplan, Kaushik; Poliakoff, David; Caprino, Joseph; Tomko, Karen; Thakur, Bhupender; Yang, Shuxiang; Moreno, Juana; Jarrell, Mark

    2012-02-01

    In Dynamical Mean Field Theory and its cluster extensions, such as the Dynamic Cluster Algorithm, the bottleneck of the algorithm is solving the self-consistency equations with an impurity solver. Hirsch-Fye Quantum Monte Carlo is one of the most commonly used impurity and cluster solvers. This work implements optimizations of the algorithm, such as enabling large data re-use, suitable for the Graphics Processing Unit (GPU) architecture. The GPU's sheer number of concurrent parallel computations and large bandwidth to many shared memories take advantage of the inherent parallelism in the Green function update and measurement routines, and can substantially improve the efficiency of the Hirsch-Fye impurity solver.

  7. MONTE CARLO ADVANCES FOR THE EOLUS ASCI PROJECT

    SciTech Connect

    J. S. HENDRICK; G. W. MCKINNEY; L. J. COX

    2000-01-01

    The Eolus ASCI project includes parallel, 3-D transport simulation for various nuclear applications. The codes developed within this project provide neutral and charged particle transport, detailed interaction physics, numerous source and tally capabilities, and general geometry packages. One such code is MCNPW, which is a general-purpose, 3-dimensional, time-dependent, continuous-energy Monte Carlo fully-coupled N-Particle transport code. Significant advances are also being made in the areas of modern software engineering and parallel computing. These advances are described in detail.

  8. Monte Carlo simulation of vibrational relaxation in nitrogen

    NASA Technical Reports Server (NTRS)

    Olynick, David P.; Hassan, H. A.; Moss, James N.

    1990-01-01

    Monte Carlo simulation of nonequilibrium vibrational relaxation of (rotationless) N2 using transition probabilities from an extended SSH theory is presented. For the range of temperatures considered, 4000-8000 K, the vibrational levels were found to be reasonably close to an equilibrium distribution at an average vibrational temperature based on the vibrational energy of the gas. As a result, they do not show any statistically significant evidence of the bottleneck observed in earlier studies of N2. Based on this finding, it appears that, for the temperature range considered, dissociation commences after all vibrational levels equilibrate at the translational temperature.

  9. Analysis of real-time networks with monte carlo methods

    NASA Astrophysics Data System (ADS)

    Mauclair, C.; Durrieu, G.

    2013-12-01

    Communication networks in embedded systems are ever larger and more complex. A better understanding of the dynamics of these networks is necessary to make the best use of them and to lower costs. Today's tools are able to compute upper bounds on the end-to-end delays that a packet sent through the network could suffer. However, in the case of asynchronous networks, those worst end-to-end delay (WEED) cases are rarely observed in practice or through simulations, owing to the rarity of the situations that lead to worst-case scenarios. A novel approach based on Monte Carlo methods is suggested to study the effects of asynchrony on performance.

  10. Monte Carlo simulation of photon scattering in biological tissue models.

    PubMed

    Kumar, D; Chacko, S; Singh, M

    1999-10-01

    Monte Carlo simulation of photon scattering, with and without abnormal tissue placed at various locations in the rectangular, semi-circular and semi-elliptical tissue models, has been carried out. The absorption coefficient of the tissue considered as abnormal is high and its scattering coefficient low compared to that of the control tissue. The placement of the abnormality at various locations within the models affects the transmission and surface emission of photons at various locations. The scattered photons originating from deeper layers make the maximum contribution at farther distances from the beam entry point. The contribution of various layers to photon scattering provides valuable data on variability of internal composition.
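
    A generic weighted photon random walk of the kind used in such tissue simulations can be sketched for a homogeneous slab with isotropic scattering (coefficients are assumed values; the authors' rectangular, semi-circular and semi-elliptical geometries are not reproduced):

```python
import math
import random

def photon_walk(mu_a=0.1, mu_s=10.0, thickness=1.0, n_photons=5000, seed=5):
    """Weighted photon random walk in a homogeneous slab (units of 1/length for
    the coefficients). Returns (diffuse reflectance, transmittance, absorbed
    fraction), which should sum to ~1."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    R = T = A = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0                 # start at surface, heading inward
        while True:
            z += uz * (-math.log(rng.random()) / mu_t)  # sample free path length
            if z < 0.0:
                R += w; break                    # escaped back through the surface
            if z > thickness:
                T += w; break                    # transmitted through the slab
            A += w * (1.0 - albedo)              # deposit the absorbed weight
            w *= albedo
            if w < 1e-4:                         # Russian roulette on tiny weights
                if rng.random() < 0.1:
                    w /= 0.1
                else:
                    break
            uz = 2.0 * rng.random() - 1.0        # isotropic scattering: new cos(theta)
    return R / n_photons, T / n_photons, A / n_photons

R, T, A = photon_walk()
```

    Replacing part of the slab with a high-absorption, low-scattering region is then a matter of making `mu_a` and `mu_s` position dependent.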

  11. Error propagation in first-principles kinetic Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Döpking, Sandra; Matera, Sebastian

    2017-04-01

    First-principles kinetic Monte Carlo models allow for the modeling of catalytic surfaces with predictive quality. This comes at the price of non-negligible errors induced by the underlying approximate density functional calculation. Using the example of CO oxidation on RuO2(110), we demonstrate a novel, efficient approach to global sensitivity analysis, with which we address error propagation in these multiscale models. We find that we can still derive the most important atomistic factors for reactivity, even though the errors in the simulation results are sizable. The presented approach might also be applied in hierarchical model construction or computational catalyst screening.

  12. Studying the information content of TMDs using Monte Carlo generators

    SciTech Connect

    Avakian, H.; Matevosyan, H.; Pasquini, B.; Schweitzer, P.

    2015-02-05

    Theoretical advances in studies of the nucleon structure have been spurred by recent measurements of spin and/or azimuthal asymmetries worldwide. One of the main challenges still remaining is the extraction of the parton distribution functions, generalized to describe transverse momentum and spatial distributions of partons from these observables with no or minimal model dependence. In this topical review we present the latest developments in the field with emphasis on requirements for Monte Carlo event generators, indispensable for studies of the complex 3D nucleon structure, and discuss examples of possible applications.

  13. Cluster Monte Carlo simulations of the nematic-isotropic transition.

    PubMed

    Priezjev, N V; Pelcovits, R A

    2001-06-01

    We report the results of simulations of the three-dimensional Lebwohl-Lasher model of the nematic-isotropic transition using a single cluster Monte Carlo algorithm. The algorithm, first introduced by Kunz and Zumbach to study two-dimensional nematics, is a modification of the Wolff algorithm for spin systems, and greatly reduces critical slowing down. We calculate the free energy in the neighborhood of the transition for systems up to linear size 70. We find a double well structure with a barrier that grows with increasing system size. We thus obtain an upper estimate of the value of the transition temperature in the thermodynamic limit.

  14. Bond-updating mechanism in cluster Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Heringa, J. R.; Blöte, H. W. J.

    1994-03-01

    We study a cluster Monte Carlo method with an adjustable parameter: the number of energy levels of a demon mediating the exchange of bond energy with the heat bath. The efficiency of the algorithm in the case of the three-dimensional Ising model is studied as a function of the number of such levels. The optimum is found in the limit of an infinite number of levels, where the method reproduces the Wolff or the Swendsen-Wang algorithm. In this limit the size distribution of flipped clusters approximates a power law more closely than that for a finite number of energy levels.

  15. MCNP{trademark} Monte Carlo: A precis of MCNP

    SciTech Connect

    Adams, K.J.

    1996-06-01

    MCNP{trademark} is a general purpose three-dimensional time-dependent neutron, photon, and electron transport code. It is highly portable and user-oriented, and backed by stringent software quality assurance practices and extensive experimental benchmarks. The cross section database is based upon the best evaluations available. MCNP incorporates state-of-the-art analog and adaptive Monte Carlo techniques. The code is documented in a 600 page manual which is augmented by numerous Los Alamos technical reports which detail various aspects of the code. MCNP represents over a megahour of development and refinement over the past 50 years and an ongoing commitment to excellence.

  16. FZ2MC: A Tool for Monte Carlo Transport Code Geometry Manipulation

    SciTech Connect

    Hackel, B M; Nielsen Jr., D E; Procassini, R J

    2009-02-25

    The process of creating and validating combinatorial geometry representations of complex systems for use in Monte Carlo transport simulations can be both time consuming and error prone. To simplify this process, a tool has been developed which employs extensions of the Form-Z commercial solid modeling tool. The resultant FZ2MC (Form-Z to Monte Carlo) tool permits users to create, modify and validate Monte Carlo geometry and material composition input data. Plugin modules that export this data to an input file, as well as parse data from existing input files, have been developed for several Monte Carlo codes. The FZ2MC tool is envisioned as a 'universal' tool for the manipulation of Monte Carlo geometry and material data. To this end, collaboration on the development of plug-in modules for additional Monte Carlo codes is desired.

  17. Modeling focusing Gaussian beams in a turbid medium with Monte Carlo simulations.

    PubMed

    Hokr, Brett H; Bixler, Joel N; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J; Yakovlev, Vladislav V; Scully, Marlan O

    2015-04-06

    Monte Carlo techniques are the gold standard for studying light propagation in turbid media. Traditional Monte Carlo techniques are unable to include wave effects, such as diffraction; thus, these methods are unsuitable for exploring focusing geometries where a significant ballistic component remains at the focal plane. Here, a method is presented for accurately simulating photon propagation at the focal plane, in the context of a traditional Monte Carlo simulation. This is accomplished by propagating ballistic photons along trajectories predicted by Gaussian optics until they undergo an initial scattering event, after which they are propagated through the medium by a traditional Monte Carlo technique. Solving a known problem by building upon an existing Monte Carlo implementation allows this method to be easily implemented in a wide variety of existing Monte Carlo simulations, greatly improving the accuracy of those models for studying dynamics in a focusing geometry.
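
    The hybrid idea, ballistic propagation along geometric-optics rays until the first sampled scattering event and a conventional Monte Carlo walk afterwards, can be sketched as follows (straight converging rays stand in for the Gaussian-optics trajectories, and all parameter values are assumptions):

```python
import math
import random

def ballistic_fraction(mu_s=5.0, z_f=0.2, lens_r=0.05, n=20000, seed=6):
    """Photons launched across a lens aperture converge along straight rays to a
    focus at depth z_f; each travels ballistically until its sampled first
    scattering event. Scattered photons would be handed to a standard Monte
    Carlo walk (omitted); here we tally the fraction still ballistic at the
    focal plane."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        r0 = lens_r * math.sqrt(rng.random())    # uniform over the aperture disk
        path = math.hypot(z_f, r0)               # ray length from surface to focus
        s1 = -math.log(rng.random()) / mu_s      # first-scattering free path
        if s1 >= path:
            hits += 1
    return hits / n

frac = ballistic_fraction()
axial = math.exp(-5.0 * 0.2)   # Beer-Lambert attenuation of the on-axis ray
```

    Off-axis rays travel slightly farther than the axial ray, so the sampled ballistic fraction sits just below the on-axis Beer-Lambert value.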

  18. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.

    PubMed

    Renner, F; Wulff, J; Kapsch, R-P; Zink, K

    2015-10-07

    the energy of the radiation source and the underlying photon cross sections as well as the I-value of media involved in the simulation. The combined standard uncertainty of the Monte Carlo calculation yields 0.78% as a conservative estimation. The result of the calculation is close to the experimental result and, with each combined standard uncertainty <1%, the accuracy of EGSnrc is confirmed. The setup and methodology of this study can be employed to benchmark other Monte Carlo codes for the calculation of absorbed dose in radiotherapy.

  19. A Monte Carlo Method for Multi-Objective Correlated Geometric Optimization

    DTIC Science & Technology

    2014-05-01

    ...requiring computationally intensive algorithms for optimization. This report presents a method developed for solving such systems using a Monte Carlo... performs a Monte Carlo optimization to provide geospatial intelligence on entity placement using the OpenCL framework. The solutions for optimal

  20. A Monte Carlo Radiation Model for Simulating Rarefied Multiphase Plume Flows

    DTIC Science & Technology

    2005-05-01

    ...Engineering, University of Michigan, Ann Arbor, MI 48109. A Monte Carlo ray trace radiation model is presented for the determination of radiative

  1. Automated Monte Carlo biasing for photon-generated electrons near surfaces.

    SciTech Connect

    Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick

    2009-09-01

    This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
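
    The flavor of weight-window biasing can be conveyed with a toy one-dimensional attenuation problem in which survival probability is carried as a particle weight, copies crossing importance planes are split, and low-weight copies are rouletted. This is a didactic sketch with assumed numbers; the report's adjoint-derived windows for coupled electron-photon transport are far more elaborate:

```python
import math
import random

def transmit_ww(mu=2.0, L=3.0, n=20000, w_lo=0.3, seed=7):
    """Estimate the deep-penetration transmission exp(-mu*L) through K cells.
    Particles survive each cell with probability exp(-mu*dx) (analog capture);
    splitting planes every 10 cells double the deep population at half weight,
    and Russian roulette removes copies that fall below the window floor w_lo.
    Both moves preserve the expected weight, so the estimator stays unbiased."""
    rng = random.Random(seed)
    K = 30
    p_cell = math.exp(-mu * L / K)
    total = 0.0
    for _ in range(n):
        bank = [1.0]                              # weights of live copies
        for k in range(K):
            nxt = []
            for w in bank:
                if rng.random() >= p_cell:
                    continue                      # captured in this cell
                copies = [w / 2, w / 2] if (k + 1) % 10 == 0 else [w]
                for c in copies:
                    if c < w_lo:                  # roulette low-weight copies
                        if rng.random() < c / w_lo:
                            nxt.append(w_lo)
                    else:
                        nxt.append(c)
            bank = nxt
            if not bank:
                break
        total += sum(bank)
    return total / n

est = transmit_ww()
exact = math.exp(-2.0 * 3.0)   # about 0.00248
```

    Splitting spends extra samples in the important deep region while roulette prunes the unimportant ones, which is exactly the trade the adjoint-flux-derived windows in the report automate.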

  2. The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units

    SciTech Connect

    Hall, Clifford; Ji, Weixiao; Blaisten-Barojas, Estela

    2014-02-01

    We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets.

    Highlights:
    • We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet.
    • The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation.
    • Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles.
    • The testbed involves a polymeric system of oligopyrroles in the condensed phase.
    • The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.

  3. Combined MRI-PET scanner: A Monte Carlo evaluation of the improvements in PET resolution due to the effects of a static homogeneous magnetic field

    SciTech Connect

    Raylman, R.R.; Hammer, B.E.; Christensen, N.L.

    1996-08-01

    Positron emission tomography (PET) relies upon the detection of photons resulting from the annihilation of positrons emitted by a radiopharmaceutical. The combination of images obtained with PET and magnetic resonance imaging (MRI) have begun to greatly enhance the study of many physiological processes. A combined MRI-PET scanner could alleviate much of the spatial and temporal coregistration difficulties currently encountered in utilizing images from these complementary imaging modalities. In addition, the resolution of the PET scanner could be improved by the effects of the magnetic field. In this computer study, the utilization of a strong static homogeneous magnetic field to increase PET resolution by reducing the effects of positron range and photon noncollinearity was investigated. The results reveal that significant enhancement of resolution can be attained. For example, an approximately 27% increase in resolution is predicted for a PET scanner incorporating a 10-Tesla magnetic field. Most of this gain in resolution is due to magnetic confinement of the emitted positrons. Although the magnetic field does mix some positronium states resulting in slightly less photon noncollinearity, this reduction does not significantly affect resolution. Photon noncollinearity remains as the fundamental limiting factor of large PET scanner resolution.
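
    The dominant effect reported here, confinement of the transverse positron excursion to the gyroradius scale, can be illustrated with a simple kinematic sketch (isotropic emission, exponential path lengths; the numbers are assumptions for illustration, not PET physics):

```python
import math
import random

def transverse_spread(b_field, r_gyro_at_unit_b=0.5, mean_path=1.0, n=20000, seed=8):
    """RMS transverse (x-y) displacement of isotropically emitted particles with
    exponentially distributed path lengths. A field along z rolls the transverse
    motion onto a circle of gyroradius r_g, capping the displacement at 2*r_g."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        cos_t = 2.0 * rng.random() - 1.0                 # isotropic direction
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        s_perp = sin_t * rng.expovariate(1.0 / mean_path)  # transverse path length
        if b_field == 0.0:
            d = s_perp                                   # field-free straight line
        else:
            r_g = r_gyro_at_unit_b / b_field             # gyroradius shrinks as 1/B
            d = 2.0 * r_g * abs(math.sin(s_perp / (2.0 * r_g)))  # chord of the arc
        acc += d * d
    return math.sqrt(acc / n)

free = transverse_spread(0.0)        # no field
confined = transverse_spread(10.0)   # strong field: r_g = 0.05
```

    The longitudinal range along the field axis is unaffected, which is why the resolution gain in the abstract comes mainly from the transverse confinement of positron range.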

  4. MC 93 - Proceedings of the International Conference on Monte Carlo Simulation in High Energy and Nuclear Physics

    NASA Astrophysics Data System (ADS)

    Dragovitsch, Peter; Linn, Stephan L.; Burbank, Mimi

    1994-01-01

    The Table of Contents for the book is as follows: * Preface * Heavy Fragment Production for Hadronic Cascade Codes * Monte Carlo Simulations of Space Radiation Environments * Merging Parton Showers with Higher Order QCD Monte Carlos * An Order-αs Two-Photon Background Study for the Intermediate Mass Higgs Boson * GEANT Simulation of Hall C Detector at CEBAF * Monte Carlo Simulations in Radioecology: Chernobyl Experience * UNIMOD2: Monte Carlo Code for Simulation of High Energy Physics Experiments; Some Special Features * Geometrical Efficiency Analysis for the Gamma-Neutron and Gamma-Proton Reactions * GISMO: An Object-Oriented Approach to Particle Transport and Detector Modeling * Role of MPP Granularity in Optimizing Monte Carlo Programming * Status and Future Trends of the GEANT System * The Binary Sectioning Geometry for Monte Carlo Detector Simulation * A Combined HETC-FLUKA Intranuclear Cascade Event Generator * The HARP Nucleon Polarimeter * Simulation and Data Analysis Software for CLAS * TRAP -- An Optical Ray Tracing Program * Solutions of Inverse and Optimization Problems in High Energy and Nuclear Physics Using Inverse Monte Carlo * FLUKA: Hadronic Benchmarks and Applications * Electron-Photon Transport: Always so Good as We Think? Experience with FLUKA * Simulation of Nuclear Effects in High Energy Hadron-Nucleus Collisions * Monte Carlo Simulations of Medium Energy Detectors at COSY Jülich * Complex-Valued Monte Carlo Method and Path Integrals in the Quantum Theory of Localization in Disordered Systems of Scatterers * Radiation Levels at the SSCL Experimental Halls as Obtained Using the CLOR89 Code System * Overview of Matrix Element Methods in Event Generation * Fast Electromagnetic Showers * GEANT Simulation of the RMC Detector at TRIUMF and Neutrino Beams for KAON * Event Display for the CLAS Detector * Monte Carlo Simulation of High Energy Electrons in Toroidal Geometry * GEANT 3.14 vs. EGS4: A Comparison Using the DØ Uranium/Liquid Argon

  5. A Monte Carlo Dispersion Analysis of the X-33 Simulation Software

    NASA Technical Reports Server (NTRS)

    Williams, Peggy S.

    2001-01-01

    A Monte Carlo dispersion analysis has been completed on the X-33 software simulation. The simulation is based on a preliminary version of the software and is primarily used in an effort to define and refine how a Monte Carlo dispersion analysis would have been done on the final flight-ready version of the software. This report gives an overview of the processes used in the implementation of the dispersions and describes the methods used to accomplish the Monte Carlo analysis. Selected results from 1000 Monte Carlo runs are presented with suggestions for improvements in future work.

  6. Computation of electron diode characteristics by Monte Carlo method including effect of collisions.

    NASA Technical Reports Server (NTRS)

    Goldstein, C. M.

    1964-01-01

    Consistent-field Monte Carlo calculation of collision effects on electron-ion diode characteristics, including hard-sphere electron-neutral collisions, for monoenergetic thermionic emission.

  7. Monte Carlo analysis of energy dependent anisotropy of bremsstrahlung x-ray spectra

    SciTech Connect

    Kakonyi, Robert; Erdelyi, Miklos; Szabo, Gabor

    2009-09-15

    The energy-resolved emission-angle dependence of x-ray spectra was analyzed with the MCNPX (Monte Carlo N-Particle eXtended) simulator. It was shown that the spectral photon flux had a maximum at a well-defined emission angle due to the anisotropy of the bremsstrahlung process. The higher the relative photon energy, the smaller the emission angle at which the maximum occurred. The trends predicted by the Monte Carlo simulations were experimentally verified. The Monte Carlo results were compared to both the Institute of Physics and Engineering in Medicine spectra table and the SPEKCALCV1.0 code.

  8. Monte Carlo study of Siemens PRIMUS photoneutron production.

    PubMed

    Pena, J; Franco, L; Gómez, F; Iglesias, A; Pardo, J; Pombar, M

    2005-12-21

    Neutron production in radiotherapy facilities has been studied from the early days of modern linacs. Detailed studies are now possible using photoneutron capabilities of general-purpose Monte Carlo codes at energies of interest in medical physics. The present work studies the effects of modelling different accelerator head and room geometries on the neutron fluence and spectra predicted via Monte Carlo. The results from the simulation of a 15 MV Siemens PRIMUS linac show an 80% increase in the fluence scored at the isocentre when, besides modelling the components necessary for electron/photon simulations, other massive accelerator head components are included. Neutron fluence dependence on inner treatment room volume is analysed showing that thermal neutrons have a 'gaseous' behaviour and then a 1/V dependence. Neutron fluence maps for three energy ranges, fast (E > 0.1 MeV), epithermal (1 eV < E < 0.1 MeV) and thermal (E < 1 eV), are also presented and the influence of the head components on them is discussed.

  9. Monte Carlo study of Siemens PRIMUS photoneutron production

    NASA Astrophysics Data System (ADS)

    Pena, J.; Franco, L.; Gómez, F.; Iglesias, A.; Pardo, J.; Pombar, M.

    2005-12-01

    Neutron production in radiotherapy facilities has been studied from the early days of modern linacs. Detailed studies are now possible using photoneutron capabilities of general-purpose Monte Carlo codes at energies of interest in medical physics. The present work studies the effects of modelling different accelerator head and room geometries on the neutron fluence and spectra predicted via Monte Carlo. The results from the simulation of a 15 MV Siemens PRIMUS linac show an 80% increase in the fluence scored at the isocentre when, besides modelling the components necessary for electron/photon simulations, other massive accelerator head components are included. Neutron fluence dependence on inner treatment room volume is analysed showing that thermal neutrons have a 'gaseous' behaviour and then a 1/V dependence. Neutron fluence maps for three energy ranges, fast (E > 0.1 MeV), epithermal (1 eV < E < 0.1 MeV) and thermal (E < 1 eV), are also presented and the influence of the head components on them is discussed.

  10. On the time scale associated with Monte Carlo simulations

    SciTech Connect

    Bal, Kristof M. Neyts, Erik C.

    2014-11-28

    Uniform-acceptance force-bias Monte Carlo (fbMC) methods have been shown to be a powerful technique to access longer timescales in atomistic simulations, allowing, for example, the study of phase transitions and growth. Recently, a new fbMC method, the time-stamped force-bias Monte Carlo (tfMC) method, was derived with inclusion of an estimated effective timescale; this timescale, however, does not seem able to explain some of the successes of the method. In this contribution, we therefore explicitly quantify the effective timescale tfMC is able to access for a variety of systems, namely a simple single-particle, one-dimensional model system, the Lennard-Jones liquid, an adatom on the Cu(100) surface, a silicon crystal with point defects and a highly defected graphene sheet, in order to gain new insights into the mechanisms by which tfMC operates. It is found that considerable boosts, up to three orders of magnitude compared to molecular dynamics, can be achieved for solid state systems by lowering of the apparent activation barrier of occurring processes, while not requiring any system-specific input or modifications of the method. We furthermore address the pitfalls of using the method as a replacement or complement of molecular dynamics simulations, its ability to explicitly describe correct dynamics and reaction mechanisms, and the association of timescales to MC simulations in general.

  11. Quantum Monte Carlo finite temperature electronic structure of quantum dots

    NASA Astrophysics Data System (ADS)

    Leino, Markku; Rantala, Tapio T.

    2002-08-01

    Quantum Monte Carlo methods allow a straightforward procedure for evaluation of electronic structures with a proper treatment of electronic correlations. This can be done even at finite temperatures [1]. We test the Path Integral Monte Carlo (PIMC) simulation method [2] for one and two electrons in one and three dimensional harmonic oscillator potentials and apply it in evaluation of finite temperature effects of single and coupled quantum dots. Our simulations show the correct finite temperature excited state populations including degeneracy in cases of one and three dimensional harmonic oscillators. The simulated one and two electron distributions of single and coupled quantum dots are compared to those from experiments and other theoretical (0 K) methods [3]. Distributions are shown to agree and the finite temperature effects are discussed. Computational capacity is found to become the limiting factor in simulations with increasing accuracy. Other essential aspects of PIMC and its capability in this type of calculation are also discussed. [1] R.P. Feynman: Statistical Mechanics, Addison Wesley, 1972. [2] D.M. Ceperley, Rev.Mod.Phys. 67, 279 (1995). [3] M. Pi, A. Emperador and M. Barranco, Phys.Rev.B 63, 115316 (2001).
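The PIMC machinery named in the abstract can be sketched for the simplest case it mentions, a single particle in a 1D harmonic well. This is a minimal illustration with ħ = m = ω = 1 and parameters invented for the example, not the authors' code:

```python
import math
import random

def pimc_harmonic_x2(beta=2.0, slices=16, sweeps=20000, seed=5):
    """Path integral Monte Carlo for one particle in a 1D harmonic well:
    the particle is represented by a closed imaginary-time path of
    `slices` beads, sampled with single-bead Metropolis moves using the
    primitive discretized action.  Returns the thermal estimate of <x^2>."""
    rng = random.Random(seed)
    dtau = beta / slices
    x = [0.0] * slices

    def local_action(i, xi):
        # Kinetic springs to the two neighbouring beads plus the potential term
        left, right = x[(i - 1) % slices], x[(i + 1) % slices]
        spring = ((xi - left) ** 2 + (right - xi) ** 2) / (2.0 * dtau)
        return spring + dtau * 0.5 * xi * xi

    total, count = 0.0, 0
    for sweep in range(sweeps):
        for i in range(slices):
            trial = x[i] + rng.uniform(-0.4, 0.4)
            ds = local_action(i, x[i]) - local_action(i, trial)
            if ds >= 0 or rng.random() < math.exp(ds):
                x[i] = trial
        if sweep >= sweeps // 2:          # discard the first half as burn-in
            total += sum(v * v for v in x) / slices
            count += 1
    return total / count

x2 = pimc_harmonic_x2()
```

For beta = 2 the exact thermal expectation is ⟨x²⟩ = coth(beta/2)/2 ≈ 0.66, which the estimate approaches as the number of beads and sweeps grows.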

  12. High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin

    2014-06-01

    Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. The advance in computer technology allows the calculation of detailed flux distributions in both space and energy. In most of the cases however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte-Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of temperature dependence of the continuous energy nuclear data has been investigated.
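The feedback loop described above (neutronics supplying power, thermal-hydraulics returning temperatures that modify the nuclear data) is, at its core, a Picard fixed-point iteration. A toy sketch with invented linear stand-ins for the MCNP and SUBCHANFLOW solves, not the actual coupling scheme:

```python
def coupled_fixed_point(p0=100.0, t_in=290.0, t_ref=300.0,
                        alpha=0.002, c=0.5, tol=1e-10, max_iter=200):
    """Alternate a stubbed 'neutronics' solve and a stubbed
    'thermal-hydraulics' update until the power/temperature pair stops
    changing.  All coefficients are illustrative placeholders: alpha is a
    linear temperature feedback, c a linear coolant heat-up per unit power."""
    p = p0
    t = t_in
    for _ in range(max_iter):
        t = t_in + c * p                          # thermal-hydraulics update
        p_new = p0 * (1.0 - alpha * (t - t_ref))  # neutronics with feedback
        if abs(p_new - p) < tol:
            return p_new, t
        p = p_new
    return p, t

power, temp = coupled_fixed_point()
```

The iteration contracts whenever the loop gain |p0 · alpha · c| is below one; production coupled codes add under-relaxation for exactly this reason.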

  13. Monte Carlo simulation of classical spin models with chaotic billiards.

    PubMed

    Suzuki, Hideyuki

    2013-11-01

    It has recently been shown that the computing abilities of Boltzmann machines, or Ising spin-glass models, can be implemented by chaotic billiard dynamics without any use of random numbers. In this paper, we further numerically investigate the capabilities of the chaotic billiard dynamics as a deterministic alternative to random Monte Carlo methods by applying it to classical spin models in statistical physics. First, we verify that the billiard dynamics can yield samples that converge to the true distribution of the Ising model on a small lattice, and we show that it appears to have the same convergence rate as random Monte Carlo sampling. Second, we apply the billiard dynamics to finite-size scaling analysis of the critical behavior of the Ising model and show that the phase-transition point and the critical exponents are correctly obtained. Third, we extend the billiard dynamics to spins that take more than two states and show that it can be applied successfully to the Potts model. We also discuss the possibility of extensions to continuous-valued models such as the XY model.
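For reference, the random-number Monte Carlo baseline that the billiard dynamics is benchmarked against is the standard Metropolis sampler for the 2D Ising model; a minimal sketch (lattice size, temperature and sweep count are arbitrary choices of ours):

```python
import math
import random

def metropolis_ising(size=8, beta=0.3, sweeps=200, seed=1):
    """Random-number Metropolis sampling of the 2D Ising model with
    periodic boundaries -- the stochastic method that deterministic
    billiard dynamics is compared against.  Returns the magnetization
    per spin of the final configuration."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(size)] for _ in range(size)]
    for _ in range(sweeps):
        for _ in range(size * size):
            i, j = rng.randrange(size), rng.randrange(size)
            # Energy cost of flipping spin (i, j): dE = 2 * s_ij * sum(neighbours)
            nb = (spins[(i + 1) % size][j] + spins[(i - 1) % size][j]
                  + spins[i][(j + 1) % size] + spins[i][(j - 1) % size])
            de = 2 * spins[i][j] * nb
            if de <= 0 or rng.random() < math.exp(-beta * de):
                spins[i][j] *= -1
    return sum(sum(row) for row in spins) / (size * size)

m = metropolis_ising()
```

The billiard approach replaces the `rng` calls with deterministic chaotic dynamics while aiming for the same stationary distribution.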

  14. Improved criticality convergence via a modified Monte Carlo iteration method

    SciTech Connect

    Booth, Thomas E; Gubernatis, James E

    2009-01-01

    Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is a convergence rate to the dominant eigenfunction of |k_3|/k_1 instead of |k_2|/k_1. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must somehow be summed to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this sometimes can be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has some stability problems. We also show that a simple method deals with this in an effective, but ad hoc manner.
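The unmodified method that the abstract starts from is plain power iteration, whose error decays as (|k_2|/k_1)^n; a minimal sketch on a 2x2 matrix with eigenvalues 3 and 1 (our toy example, not a criticality solver):

```python
def power_iteration(matrix, vec, iters=100):
    """Plain power iteration: repeated matrix-vector products with
    normalization converge to the dominant eigenvector at a rate set by
    the ratio |k_2|/k_1 of the two largest eigenvalues."""
    for _ in range(iters):
        vec = [sum(a * x for a, x in zip(row, vec)) for row in matrix]
        norm = max(abs(v) for v in vec)
        vec = [v / norm for v in vec]
    # Rayleigh quotient estimate of the dominant eigenvalue
    av = [sum(a * x for a, x in zip(row, vec)) for row in matrix]
    k1 = sum(a * x for a, x in zip(av, vec)) / sum(x * x for x in vec)
    return k1, vec

# Symmetric 2x2 example: eigenvalues 3 (eigenvector [1, 1]) and 1
k1, v = power_iteration([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```

Subtracting the second eigenfunction, as the modified method in the abstract does, replaces the |k_2|/k_1 convergence factor with the smaller |k_3|/k_1.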

  15. Path integral Monte Carlo on a lattice. II. Bound states

    NASA Astrophysics Data System (ADS)

    O'Callaghan, Mark; Miller, Bruce N.

    2016-07-01

    The equilibrium properties of a single quantum particle (qp) interacting with a classical gas are investigated over a wide range of temperatures that explores the system's behavior in the classical as well as in the quantum regime. Both the qp and the atoms are restricted to sites on a one-dimensional lattice. A path integral formalism developed within the context of the canonical ensemble is utilized, where the qp is represented by a closed, variable-step random walk on the lattice. Monte Carlo methods are employed to determine the system's properties. To test the usefulness of the path integral formalism, the Metropolis algorithm is employed to determine the equilibrium properties of the qp in the context of a square well potential, forcing the qp to occupy bound states. We consider a one-dimensional square well potential where all sites on the lattice are occupied by an atom with an on-site potential, except for a contiguous set of sites of various lengths centered at the middle of the lattice. Comparison of the potential energy, the energy fluctuations, and the correlation function are made between the results of the Monte Carlo simulations and the numerical calculations.

  16. Treatment planning for a small animal using Monte Carlo simulation

    SciTech Connect

    Chow, James C. L.; Leung, Michael K. K.

    2007-12-15

    The development of a small animal model for radiotherapy research requires a complete setup of customized imaging equipment, irradiators, and planning software that matches the sizes of the subjects. The purpose of this study is to develop and demonstrate the use of a flexible in-house research environment for treatment planning on small animals. The software package, called DOSCTP, provides a user-friendly platform for DICOM computed tomography-based Monte Carlo dose calculation using the EGSnrcMP-based DOSXYZnrc code. Validation of the treatment planning was performed by comparing the dose distributions for simple photon beam geometries calculated through the Pinnacle3 treatment planning system and measurements. A treatment plan for a mouse based on a CT image set by a 360-deg photon arc is demonstrated. It is shown that it is possible to create 3D conformal treatment plans for small animals with consideration of inhomogeneities using small photon beam field sizes in the diameter range of 0.5-5 cm, with conformal dose covering the target volume while sparing the surrounding critical tissue. It is also found that Monte Carlo simulation is suitable to carry out treatment planning dose calculation for small animal anatomy with voxel size about one order of magnitude smaller than that of the human.

  17. Treatment planning for a small animal using Monte Carlo simulation.

    PubMed

    Chow, James C L; Leung, Michael K K

    2007-12-01

    The development of a small animal model for radiotherapy research requires a complete setup of customized imaging equipment, irradiators, and planning software that matches the sizes of the subjects. The purpose of this study is to develop and demonstrate the use of a flexible in-house research environment for treatment planning on small animals. The software package, called DOSCTP, provides a user-friendly platform for DICOM computed tomography-based Monte Carlo dose calculation using the EGSnrcMP-based DOSXYZnrc code. Validation of the treatment planning was performed by comparing the dose distributions for simple photon beam geometries calculated through the Pinnacle3 treatment planning system and measurements. A treatment plan for a mouse based on a CT image set by a 360-deg photon arc is demonstrated. It is shown that it is possible to create 3D conformal treatment plans for small animals with consideration of inhomogeneities using small photon beam field sizes in the diameter range of 0.5-5 cm, with conformal dose covering the target volume while sparing the surrounding critical tissue. It is also found that Monte Carlo simulation is suitable to carry out treatment planning dose calculation for small animal anatomy with voxel size about one order of magnitude smaller than that of the human.

  18. Multicanonical Monte Carlo for Simulation of Optical Links

    NASA Astrophysics Data System (ADS)

    Bononi, Alberto; Rusch, Leslie A.

    Multicanonical Monte Carlo (MMC) is a simulation-acceleration technique for the estimation of the statistical distribution of a desired system output variable, given the known distribution of the system input variables. MMC, similarly to the powerful and well-studied method of importance sampling (IS) [1], is a useful method to efficiently simulate events occurring with probabilities smaller than ~10^-6, such as bit error rate (BER) and system outage probability. Modern telecommunications systems often employ forward error correcting (FEC) codes that allow pre-decoded channel error rates higher than 10^-3; these systems are well served by traditional Monte-Carlo error counting. MMC and IS are, nonetheless, fundamental tools to both understand the statistics of the decision variable (as well as of any physical parameter of interest) and to validate any analytical or semianalytical BER calculation model. Several examples of such use will be provided in this chapter. As a case in point, outage probabilities are routinely below 10^-6, a sweet spot where MMC and IS provide the most efficient (sometimes the only) solution to estimate outages.
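A minimal illustration of the importance-sampling idea mentioned above: estimating a ~10^-9 Gaussian tail probability that plain error counting could not reach in reasonable time. The threshold and shifted proposal are our own toy choices, not from the chapter:

```python
import math
import random

def rare_event_is(threshold=6.0, shift=6.0, n=100_000, seed=7):
    """Importance sampling for P(X > threshold) with X ~ N(0, 1): draw
    from a proposal N(shift, 1) centred on the rare region, and reweight
    each hit by the likelihood ratio f(y)/g(y) = exp(-shift*y + shift^2/2).
    Plain Monte Carlo would need ~1e9 samples to see a single event here."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(shift, 1.0)
        if y > threshold:
            total += math.exp(-shift * y + 0.5 * shift * shift)
    return total / n

p = rare_event_is()
```

The exact value is P(X > 6) ≈ 9.9e-10; the IS estimate lands within a few percent of it with only 1e5 samples, which is the efficiency gain both IS and MMC trade on.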

  19. Utilizing Monte Carlo Simulations to Optimize Institutional Empiric Antipseudomonal Therapy

    PubMed Central

    Tennant, Sarah J.; Burgess, Donna R.; Rybak, Jeffrey M.; Martin, Craig A.; Burgess, David S.

    2015-01-01

    Pseudomonas aeruginosa is a common pathogen implicated in nosocomial infections with increasing resistance to a limited arsenal of antibiotics. Monte Carlo simulation provides antimicrobial stewardship teams with an additional tool to guide empiric therapy. We modeled empiric therapies with antipseudomonal β-lactam antibiotic regimens to determine which were most likely to achieve probability of target attainment (PTA) of ≥90%. Microbiological data for P. aeruginosa was reviewed for 2012. Antibiotics modeled for intermittent and prolonged infusion were aztreonam, cefepime, meropenem, and piperacillin/tazobactam. Using minimum inhibitory concentrations (MICs) from institution-specific isolates, and pharmacokinetic and pharmacodynamic parameters from previously published studies, a 10,000-subject Monte Carlo simulation was performed for each regimen to determine PTA. MICs from 272 isolates were included in this analysis. No intermittent infusion regimens achieved PTA ≥90%. Prolonged infusions of cefepime 2000 mg Q8 h, meropenem 1000 mg Q8 h, and meropenem 2000 mg Q8 h demonstrated PTA of 93%, 92%, and 100%, respectively. Prolonged infusions of piperacillin/tazobactam 4.5 g Q6 h and aztreonam 2 g Q8 h failed to achieve PTA ≥90% but demonstrated PTA of 81% and 73%, respectively. Standard doses of β-lactam antibiotics as intermittent infusion did not achieve 90% PTA against P. aeruginosa isolated at our institution; however, some prolonged infusions were able to achieve these targets. PMID:27025644
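The PTA calculation described above can be sketched as follows. Everything here is a hypothetical one-compartment model with invented PK distributions and a fixed MIC, purely to show the shape of the simulation; it is not the study's actual model or parameters:

```python
import math
import random

def pta_simulation(mic=8.0, dose=2000.0, tau=8.0, inf_time=3.0,
                   target=0.6, n=10000, seed=3):
    """Toy Monte Carlo probability-of-target-attainment estimate for a
    prolonged-infusion beta-lactam: sample one-compartment PK parameters
    per simulated subject, compute the fraction of the dosing interval
    with concentration above the MIC (fT>MIC), and count subjects that
    meet the target.  All PK distributions are invented placeholders."""
    rng = random.Random(seed)
    hits = 0
    steps = 200
    rate = dose / inf_time                              # infusion rate, mg/h
    for _ in range(n):
        cl = rng.lognormvariate(math.log(8.0), 0.30)    # clearance, L/h (assumed)
        vol = rng.lognormvariate(math.log(18.0), 0.25)  # volume, L (assumed)
        k = cl / vol
        above = 0
        for s in range(steps):
            t = tau * (s + 0.5) / steps
            if t <= inf_time:
                # rising phase during the infusion
                conc = (rate / cl) * (1.0 - math.exp(-k * t))
            else:
                # exponential decay after the infusion ends
                c_end = (rate / cl) * (1.0 - math.exp(-k * inf_time))
                conc = c_end * math.exp(-k * (t - inf_time))
            if conc > mic:
                above += 1
        if above / steps >= target:
            hits += 1
    return hits / n

pta = pta_simulation()
```

In the real analysis this inner loop is repeated per regimen and per isolate MIC, and the 60% fT>MIC target would be chosen per drug class.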

  20. Monte Carlo track structure for radiation biology and space applications

    NASA Technical Reports Server (NTRS)

    Nikjoo, H.; Uehara, S.; Khvostunov, I. G.; Cucinotta, F. A.; Wilson, W. E.; Goodhead, D. T.

    2001-01-01

    Over the past two decades, event-by-event Monte Carlo track structure codes have increasingly been used for biophysical modelling and radiotherapy. The advent of these codes has helped to shed light on many aspects of microdosimetry and the mechanism of damage by ionising radiation in the cell. These codes have continuously been modified to include new improved cross sections and computational techniques. This paper provides a summary of input data for ionization, excitation and elastic scattering cross sections for event-by-event Monte Carlo track structure simulations for electrons and ions in the form of parametric equations, which makes it easy to reproduce the data. Stopping power and radial distribution of dose are presented for ions and compared with experimental data. A model is described for simulation of full slowing down of proton tracks in water in the range 1 keV to 1 MeV. Modelling and calculations are presented for the response of a TEPC proportional counter irradiated with 5 MeV alpha-particles. Distributions are presented for the wall and wall-less counters. The data show the contribution of indirect effects to the lineal energy distribution for the wall counter responses even at such a low ion energy.