Science.gov

Sample records for quasi-monte carlo integration

  1. Quasi-Monte Carlo integration

    SciTech Connect

    Morokoff, W.J.; Caflisch, R.E.

    1995-12-01

    The standard Monte Carlo approach to evaluating multidimensional integrals using (pseudo)-random integration nodes is frequently used when quadrature methods are too difficult or expensive to implement. As an alternative to the random methods, it has been suggested that lower error and improved convergence may be obtained by replacing the pseudo-random sequences with more uniformly distributed sequences known as quasi-random. In this paper quasi-random (Halton, Sobol', and Faure) and pseudo-random sequences are compared in computational experiments designed to determine the effects on convergence of certain properties of the integrand, including variance, variation, smoothness, and dimension. The results show that variation, which plays an important role in the theoretical upper bound given by the Koksma-Hlawka inequality, does not affect convergence, while variance, the determining factor in random Monte Carlo, is shown to provide a rough upper bound, but does not accurately predict performance. In general, quasi-Monte Carlo methods are superior to random Monte Carlo, but the advantage may be slight, particularly in high dimensions or for integrands that are not smooth. For discontinuous integrands, we derive a bound which shows that the exponent for algebraic decay of the integration error from quasi-Monte Carlo is only slightly larger than 1/2 in high dimensions. 21 refs., 6 figs., 5 tabs.
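
    The contrast the abstract draws between pseudo-random and quasi-random nodes is easy to reproduce in miniature. The following sketch (an illustration, not code from the paper; it assumes SciPy >= 1.7 for scipy.stats.qmc) integrates a smooth separable test function with both kinds of nodes and prints the absolute errors; the scrambled Sobol' error is typically one to two orders of magnitude smaller at this sample size.

```python
import numpy as np
from scipy.stats import qmc

d, n = 6, 2**12
rng = np.random.default_rng(0)

def f(x):
    # separable test integrand on [0,1]^d with exact integral 1
    return np.prod(2.0 * x, axis=1)

x_mc = rng.random((n, d))                                # pseudo-random nodes
x_qmc = qmc.Sobol(d=d, scramble=True, seed=0).random(n)  # scrambled Sobol' nodes

print("MC  error:", abs(f(x_mc).mean() - 1.0))
print("QMC error:", abs(f(x_qmc).mean() - 1.0))
```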

  2. Quasi-Monte Carlo Integration

    NASA Astrophysics Data System (ADS)

    Morokoff, William J.; Caflisch, Russel E.

    1995-12-01

    The standard Monte Carlo approach to evaluating multidimensional integrals using (pseudo)-random integration nodes is frequently used when quadrature methods are too difficult or expensive to implement. As an alternative to the random methods, it has been suggested that lower error and improved convergence may be obtained by replacing the pseudo-random sequences with more uniformly distributed sequences known as quasi-random. In this paper quasi-random (Halton, Sobol', and Faure) and pseudo-random sequences are compared in computational experiments designed to determine the effects on convergence of certain properties of the integrand, including variance, variation, smoothness, and dimension. The results show that variation, which plays an important role in the theoretical upper bound given by the Koksma-Hlawka inequality, does not affect convergence, while variance, the determining factor in random Monte Carlo, is shown to provide a rough upper bound, but does not accurately predict performance. In general, quasi-Monte Carlo methods are superior to random Monte Carlo, but the advantage may be slight, particularly in high dimensions or for integrands that are not smooth. For discontinuous integrands, we derive a bound which shows that the exponent for algebraic decay of the integration error from quasi-Monte Carlo is only slightly larger than 1/2 in high dimensions.

  3. A quasi-Monte Carlo Metropolis algorithm

    PubMed Central

    Owen, Art B.; Tribble, Seth D.

    2005-01-01

    This work presents a version of the Metropolis–Hastings algorithm using quasi-Monte Carlo inputs. We prove that the method yields consistent estimates in some problems with finite state spaces and completely uniformly distributed inputs. In some numerical examples, the proposed method is much more accurate than ordinary Metropolis–Hastings sampling. PMID:15956207
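
    For intuition, here is a toy sketch of the idea of driving Metropolis with deterministic uniforms. Running a small full-period congruential generator over (part of) its period is one standard stand-in for a completely uniformly distributed (CUD) sequence; this is an illustrative assumption, not Owen and Tribble's exact construction.

```python
import numpy as np

def minstd_stream(seed=1):
    # small full-period multiplicative congruential generator (MINSTD);
    # using such a generator's output stream is a common crude stand-in
    # for a completely uniformly distributed (CUD) driving sequence
    m, a, x = 2**31 - 1, 16807, seed
    while True:
        x = (a * x) % m
        yield x / m

pi_target = np.array([0.1, 0.2, 0.3, 0.4])   # toy target on a 4-state space
u = minstd_stream()
state, counts = 0, np.zeros(4)
for _ in range(200_000):
    prop = int(4 * next(u))                  # uniform (symmetric) proposal
    if next(u) < pi_target[prop] / pi_target[state]:
        state = prop                         # Metropolis accept/reject
    counts[state] += 1
print(counts / counts.sum())                 # should approach pi_target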

  4. Monte Carlo and quasi-Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Caflisch, Russel E.

    Monte Carlo is one of the most versatile and widely used numerical methods. Its convergence rate, O(N^{-1/2}), is independent of dimension, which shows Monte Carlo to be very robust but also slow. This article presents an introduction to Monte Carlo methods for integration problems, including convergence theory, sampling methods and variance reduction techniques. Accelerated convergence for Monte Carlo quadrature is attained using quasi-random (also called low-discrepancy) sequences, which are a deterministic alternative to random or pseudo-random sequences. The points in a quasi-random sequence are correlated to provide greater uniformity. The resulting quadrature method, called quasi-Monte Carlo, has a convergence rate of approximately O((log N)^k N^{-1}). For quasi-Monte Carlo, both theoretical error estimates and practical limitations are presented. Although the emphasis in this article is on integration, Monte Carlo simulation of rarefied gas dynamics is also discussed. In the limit of small mean free path (that is, the fluid dynamic limit), Monte Carlo loses its effectiveness because the collisional distance is much less than the fluid dynamic length scale. Computational examples are presented throughout the text to illustrate the theory. A number of open problems are described.
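
    As a concrete instance of the variance reduction techniques the article surveys, the sketch below applies antithetic variates to a one-dimensional integral; the integrand and sample size are illustrative choices, not taken from the article.

```python
import numpy as np

# Sketch of antithetic variates: estimate the integral of e^x over [0,1],
# whose exact value is e - 1.
rng = np.random.default_rng(1)
u = rng.random(100_000)

plain = np.exp(u)                           # ordinary MC samples
anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))  # average each u with its mirror 1-u

print(plain.mean(), anti.mean(), np.e - 1.0)  # both estimators are unbiased
print(plain.var(), anti.var())  # e^x is monotone, so pairing reduces variance
```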

  5. Precision measurement of the top quark mass in the lepton + jets channel using a matrix element method with Quasi-Monte Carlo integration

    SciTech Connect

    Lujan, Paul Joseph

    2009-12-01

    This thesis presents a measurement of the top quark mass obtained from $p\bar{p}$ collisions at √s = 1.96 TeV at the Fermilab Tevatron using the CDF II detector. The measurement uses a matrix element integration method to calculate a $t\bar{t}$ likelihood, employing a quasi-Monte Carlo integration, which enables us to take into account effects due to finite detector angular resolution and quark mass effects. We calculate a $t\bar{t}$ likelihood as a 2-D function of the top pole mass m_t and ΔJES, where ΔJES parameterizes the uncertainty in our knowledge of the jet energy scale; it is a shift applied to all jet energies in units of the jet-dependent systematic error. By introducing ΔJES into the likelihood, we can use the information contained in W boson decays to constrain ΔJES and reduce the error due to this uncertainty. We use a neural network discriminant to identify events likely to be background, and apply a cut on the peak value of individual event likelihoods to reduce the effect of badly reconstructed events. This measurement uses a total of 4.3 fb⁻¹ of integrated luminosity, requiring events with a lepton, large missing E_T, and exactly four high-energy jets in the pseudorapidity range |η| < 2.0, of which at least one must be tagged as coming from a b quark. In total, we observe 738 events before and 630 events after applying the likelihood cut, and measure m_t = 172.6 ± 0.9 (stat.) ± 0.7 (JES) ± 1.1 (syst.) GeV/c², or m_t = 172.6 ± 1.6 (tot.) GeV/c².

  6. Quasi-Monte Carlo methods for lattice systems: A first look

    NASA Astrophysics Data System (ADS)

    Jansen, K.; Leovey, H.; Ammon, A.; Griewank, A.; Müller-Preussker, M.

    2014-03-01

    We investigate the applicability of quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^{-1/2}, where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this behavior for certain problems to N^{-1}, or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling. Catalogue identifier: AERJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence version 3 No. of lines in distributed program, including test data, etc.: 67759 No. of bytes in distributed program, including test data, etc.: 2165365 Distribution format: tar.gz Programming language: C and C++. Computer: PC. Operating system: Tested on GNU/Linux, should be portable to other operating systems with minimal effort. Has the code been vectorized or parallelized?: No RAM: The memory usage directly scales with the number of samples and dimensions: Bytes used = "number of samples" × "number of dimensions" × 8 Bytes (double precision). Classification: 4.13, 11.5, 23. External routines: FFTW 3 library (http://www.fftw.org) Nature of problem: Certain physical models formulated as a quantum field theory through the Feynman path integral, such as quantum chromodynamics, require a non-perturbative treatment of the path integral. The only known approach that achieves this is the lattice regularization. In this formulation the path integral is discretized to a finite, but very high dimensional integral. So far only Monte

  7. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using the sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently
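
    A stripped-down sketch of the two-step idea, with an ordinary polynomial fit standing in for sparse grid interpolation and a one-parameter toy forward model (all names and values here are illustrative assumptions, not the authors' groundwater code; scipy.stats.qmc is assumed available):

```python
import numpy as np
from scipy.stats import qmc

def forward(k):                       # "expensive" toy forward model
    return np.exp(-k) + 0.1 * k**2

# Step 1: build a cheap surrogate of the forward model on design nodes.
nodes = np.linspace(0.0, 2.0, 9)
surr = np.polynomial.Polynomial.fit(nodes, forward(nodes), deg=8)

# Step 2: evaluate the surrogate posterior on quasi-Monte Carlo draws.
obs, sigma = 0.75, 0.05                                      # synthetic data
k_samp = 2.0 * qmc.Halton(d=1, seed=0).random(4096).ravel()  # QMC samples
w = np.exp(-0.5 * ((obs - surr(k_samp)) / sigma) ** 2)       # posterior weight
w /= w.sum()

# Accumulate the weighted prediction values into an approximate PDF.
pred = surr(k_samp)
hist, edges = np.histogram(pred, bins=40, weights=w, density=True)
```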

  8. [Study of Determination of Oil Mixture Components Content Based on Quasi-Monte Carlo Method].

    PubMed

    Wang, Yu-tian; Xu, Jing; Liu, Xiao-fei; Chen, Meng-han; Wang, Shi-tao

    2015-05-01

    Gasoline, kerosene, and diesel are processed from crude oil over different distillation ranges. The boiling range of gasoline is 35~205 °C, that of kerosene is 140~250 °C, and that of diesel is 180~370 °C. The carbon chain lengths of the different mineral oils also differ: gasoline spans C7 to C11, kerosene C12 to C15, and diesel C15 to C18. The recognition and quantitative measurement of the three kinds of mineral oil is based on the different fluorescence spectra formed by their different carbon-number distributions. Mineral oil pollution occurs frequently, so monitoring mineral oil content in the ocean is very important. A new method for determining the component content of mineral oil mixtures with overlapping spectra is proposed: the characteristic peak power of the three-dimensional fluorescence spectrum is integrated using the quasi-Monte Carlo method, combined with an optimization algorithm that selects the optimal number of characteristic peaks and the integration regions, and the resulting nonlinear equations are solved with the BFGS (Broyden-Fletcher-Goldfarb-Shanno) method. The peak power accumulated over the selected region is sensitive to small changes in the fluorescence spectral line, so small changes in component content can be measured sensitively. At the same time, compared with single-point measurement, sensitivity is improved because averaging over the selected points reduces the influence of random error. Three-dimensional fluorescence spectra and fluorescence contour spectra of the single mineral oils and of the mixture are measured, taking kerosene, diesel, and gasoline as research objects, with each mineral oil treated as a whole rather than as individual components. Six characteristic peaks are

  9. Quasi-Monte Carlo, quasi-random numbers and quasi-error estimates

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald

    We discuss quasi-random number sequences as a basis for numerical integration with potentially better convergence properties than standard Monte Carlo. The importance of the discrepancy as both a measure of smoothness of distribution and an ingredient in the error estimate is reviewed. It is argued that the classical Koksma-Hlawka inequality is not relevant for error estimates in realistic cases, and a new class of error estimates is presented, based on a generalization of the Woźniakowski lemma.
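
    For reference, the Koksma-Hlawka inequality discussed above bounds the quadrature error by the product of the integrand's variation in the sense of Hardy and Krause and the star discrepancy of the node set (standard form):

```latex
\[
\left| \frac{1}{N}\sum_{i=1}^{N} f(x_i) \;-\; \int_{[0,1]^s} f(x)\,dx \right|
\;\le\; V_{\mathrm{HK}}(f)\, D_N^{*}(x_1,\dots,x_N)
\]
```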

  10. Quasi Monte Carlo-based Isotropic Distribution of Gradient Directions for Improved Reconstruction Quality of 3D EPR Imaging

    PubMed Central

    Ahmad, Rizwan; Deng, Yuanmu; Vikram, Deepti S.; Clymer, Bradley; Srinivasan, Parthasarathy; Zweier, Jay L.; Kuppusamy, Periannan

    2007-01-01

    In continuous wave (CW) electron paramagnetic resonance imaging (EPRI), a high-quality reconstructed image along with fast and reliable data acquisition is highly desirable for many biological applications. An accurate representation of a uniform distribution of projection data is necessary to ensure high reconstruction quality. The current techniques for data acquisition suffer from nonuniformities or local anisotropies in the distribution of projection data and present a poor approximation of a truly uniform and isotropic distribution. In this work, we have implemented a technique based on the quasi-Monte Carlo method to acquire projections with a more uniform and isotropic distribution of data over a 3D acquisition space. The proposed technique exhibits improvements in reconstruction quality in terms of both mean-square error and visual judgment. The effectiveness of the suggested technique is demonstrated using computer simulations and 3D EPRI experiments. The technique is robust and exhibits consistent performance for different object configurations and orientations. PMID:17095271
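
    The core geometric idea, distributing directions nearly uniformly over the sphere from a low-discrepancy point set, can be sketched as follows (an illustration using an area-preserving mapping of Halton points; the paper's actual acquisition scheme is not reproduced here):

```python
import numpy as np
from scipy.stats import qmc

n = 256
uv = qmc.Halton(d=2, seed=0).random(n)   # 2-D low-discrepancy points in [0,1)^2
phi = 2.0 * np.pi * uv[:, 0]             # azimuth
cos_theta = 1.0 - 2.0 * uv[:, 1]         # uniform in cos(theta) => uniform area
sin_theta = np.sqrt(1.0 - cos_theta**2)
dirs = np.column_stack([sin_theta * np.cos(phi),
                        sin_theta * np.sin(phi),
                        cos_theta])      # unit gradient directions on the sphere
```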

  11. Path Integral Monte Carlo Methods for Fermions

    NASA Astrophysics Data System (ADS)

    Brown, Ethan; Dubois, Jonathan; Ceperley, David

    2014-03-01

    In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with discussion concerning extension to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10- ERD-058, and the Lawrence Scholar program.

  12. A Primer in Monte Carlo Integration Using Mathcad

    ERIC Educational Resources Information Center

    Hoyer, Chad E.; Kegerreis, Jeb S.

    2013-01-01

    The essentials of Monte Carlo integration are presented for use in an upper-level physical chemistry setting. A Mathcad document that aids in the dissemination and utilization of this information is described and is available in the Supporting Information. A brief outline of Monte Carlo integration is given, along with ideas and pedagogy for…

  13. Path integral Monte Carlo and the electron gas

    NASA Astrophysics Data System (ADS)

    Brown, Ethan W.

    Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite-temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an exponentially decreasing efficiency with decreased temperature and increased system size. In this thesis, we work towards improving this efficiency, both through approximate and exact methods, as specifically applied to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite-temperature before we delve into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems, the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas at a large swath of densities and temperatures in order to map out the warm-dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input into density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation. As a first step, we utilize the variational

  14. Path integral Monte Carlo on a lattice: extended states.

    PubMed

    O'Callaghan, Mark; Miller, Bruce N

    2014-04-01

    The equilibrium properties of a single quantum particle (qp) interacting with a classical gas for a wide range of temperatures that explore the system's behavior in the classical as well as in the quantum regime are investigated. Both the qp and atoms are restricted to the sites of a one-dimensional lattice. A path integral formalism is developed within the context of the canonical ensemble in which the qp is represented by a closed, variable-step random walk on the lattice. Monte Carlo methods are employed to determine the system's properties. For the case of a free particle, analytical expressions for the energy, its fluctuations, and the qp-qp correlation function are derived and compared with the Monte Carlo simulations. To test the usefulness of the path integral formalism, the Metropolis algorithm is employed to determine the equilibrium properties of the qp for a periodic interaction potential, forcing the qp to occupy extended states. We consider a striped potential in one dimension, where every other lattice site is occupied by an atom with potential ε, and every other lattice site is empty. This potential serves as a stress test for the path integral formalism because of its rapid site-to-site variation. An analytical solution was determined in this case by utilizing Bloch's theorem due to the periodicity of the potential. Comparisons of the potential energy, the total energy, the energy fluctuations, and the correlation function are made between the results of the Monte Carlo simulations and the analytical calculations. PMID:24827210

  15. Monte Carlo Integration Using Spatial Structure of Markov Random Field

    NASA Astrophysics Data System (ADS)

    Yasuda, Muneki

    2015-03-01

    Monte Carlo integration (MCI) techniques are important in various fields. In this study, a new MCI technique for Markov random fields (MRFs) is proposed. MCI consists of two successive parts: the first involves sampling using a technique such as the Markov chain Monte Carlo method, and the second involves an averaging operation using the obtained sample points. In the averaging operation, a simple sample averaging technique is often employed. The method proposed in this paper improves the averaging operation by addressing the spatial structure of the MRF and is mathematically guaranteed to statistically outperform standard MCI using the simple sample averaging operation. Moreover, the proposed method can be improved in a systematic manner, and it is verified by numerical simulations using planar Ising models. In the latter part of this paper, the proposed method is applied to the inverse Ising problem and we observe that it outperforms maximum pseudo-likelihood estimation.
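
    A hedged sketch of the flavor of such an improved averaging operation for a planar Ising model: replace the raw spin average by its exact conditional expectation given the neighboring spins (a Rao-Blackwellized estimator). This illustrates how spatial structure can enter the averaging step; it is not necessarily the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta, nsweep = 16, 0.3, 500
s = rng.choice([-1, 1], size=(L, L))

def field(s):
    # sum of the four nearest-neighbor spins at every site (periodic bc)
    return (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
            np.roll(s, 1, 1) + np.roll(s, -1, 1))

raw, rb = 0.0, 0.0
for _ in range(nsweep):
    for _ in range(L * L):                        # one Metropolis sweep
        i, j = rng.integers(L), rng.integers(L)
        h = (s[(i + 1) % L, j] + s[(i - 1) % L, j] +
             s[i, (j + 1) % L] + s[i, (j - 1) % L])
        if rng.random() < np.exp(-beta * 2.0 * s[i, j] * h):
            s[i, j] = -s[i, j]
    raw += abs(s.mean())                          # simple sample average
    rb += abs(np.tanh(beta * field(s)).mean())    # conditional-mean estimator
print(raw / nsweep, rb / nsweep)
```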

  16. Path integral Monte Carlo on a lattice. II. Bound states.

    PubMed

    O'Callaghan, Mark; Miller, Bruce N

    2016-07-01

    The equilibrium properties of a single quantum particle (qp) interacting with a classical gas for a wide range of temperatures that explore the system's behavior in the classical as well as in the quantum regime are investigated. Both the qp and the atoms are restricted to sites on a one-dimensional lattice. A path integral formalism developed within the context of the canonical ensemble is utilized, where the qp is represented by a closed, variable-step random walk on the lattice. Monte Carlo methods are employed to determine the system's properties. To test the usefulness of the path integral formalism, the Metropolis algorithm is employed to determine the equilibrium properties of the qp in the context of a square well potential, forcing the qp to occupy bound states. We consider a one-dimensional square well potential where every lattice site is occupied by an atom with an on-site potential, except for a contiguous set of sites of various lengths centered at the middle of the lattice. Comparison of the potential energy, the energy fluctuations, and the correlation function are made between the results of the Monte Carlo simulations and the numerical calculations. PMID:27575090

  17. Monte Carlo modeling of an integrating sphere reflectometer.

    PubMed

    Prokhorov, Alexander V; Mekhontsev, Sergey N; Hanssen, Leonard M

    2003-07-01

    The Monte Carlo method has been applied to numerical modeling of an integrating sphere designed for hemispherical-directional reflectance factor measurements. It is shown that a conventional algorithm of backward ray tracing used for estimation of characteristics of the radiation field at a given point has slow convergence for small source-to-sphere-diameter ratios. A newly developed algorithm that substantially improves the convergence by calculation of direct source-induced irradiation for every point of diffuse reflection of rays traced is described. The method developed is applied to an integrating sphere reflectometer for the visible and infrared spectral ranges. Parametric studies of hemispherical radiance distributions for radiation incident onto the sample center were performed. The deviations of measured sample reflectance from the actual reflectance as a result of various factors were computed. The accuracy of the results, adequacy of the reflectance model, and other important aspects of the algorithm implementation are discussed. PMID:12868822

  18. Spinor path integral Quantum Monte Carlo for fermions

    NASA Astrophysics Data System (ADS)

    Shin, Daejin; Yousif, Hosam; Shumway, John

    2007-03-01

    We have developed a continuous-space path integral method for spin 1/2 fermions with the fixed-phase approximation. The internal spin degrees of freedom of each particle are represented by four extra dimensions. This effectively maps each spinor onto two of the excited states of a four dimensional harmonic oscillator. The phases that appear in the problem can be treated within the fixed-phase approximation. This mapping preserves rotational invariance and allows us to treat spin interactions and fermionic exchange on equal footing, which may lead to new theoretical insights. The technique is illustrated for a few simple models, including a spin in a magnetic field and interacting electrons in a quantum dot in a magnetic field at finite temperature. We will discuss possible extensions of the method to molecules and solids using variational and diffusion Quantum Monte Carlo.

  19. Monte Carlo Simulations of Background Spectra in Integral Imager Detectors

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.; Dietz, K. L.; Ramsey, B. D.; Weisskopf, M. C.

    1998-01-01

    Predictions of the expected gamma-ray backgrounds in the ISGRI (CdTe) and PiCsIT (CsI) detectors on INTEGRAL due to cosmic-ray interactions and the diffuse gamma-ray background have been made using a coupled set of Monte Carlo radiation transport codes (HETC, FLUKA, EGS4, and MORSE) and a detailed, 3-D mass model of the spacecraft and detector assemblies. The simulations include both the prompt background component from induced hadronic and electromagnetic cascades and the delayed component due to emissions from induced radioactivity. Background spectra have been obtained with and without the use of active (BGO) shielding and charged particle rejection to evaluate the effectiveness of anticoincidence counting on background rejection.

  20. A Preliminary Study of In-House Monte Carlo Simulations: An Integrated Monte Carlo Verification System

    SciTech Connect

    Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hidek; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki

    2009-10-01

    Purpose: To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. Methods and Materials: The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. Results: The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Conclusions: Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.

  1. Initial evaluation of Centroidal Voronoi Tessellation method for statistical sampling and function integration.

    SciTech Connect

    Romero, Vicente Jose; Peterson, Janet S.; Burkhardt, John V.; Gunzburger, Max Donald

    2003-09-01

    A recently developed Centroidal Voronoi Tessellation (CVT) unstructured sampling method is investigated here to assess its suitability for use in statistical sampling and function integration. CVT efficiently generates a highly uniform distribution of sample points over arbitrarily shaped M-dimensional parameter spaces. It has recently been shown on several 2-D test problems to provide superior point distributions for generating locally conforming response surfaces. In this paper, its performance as a statistical sampling and function integration method is compared to that of Latin-Hypercube Sampling (LHS) and Simple Random Sampling (SRS) Monte Carlo methods, and Halton and Hammersley quasi-Monte Carlo sequence methods. Specifically, sampling efficiencies are compared for function integration and for resolving various statistics of response in a 2-D test problem. It is found that on balance CVT performs best of all these sampling methods on our test problems.
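
    CVT point sets of the kind compared here can be approximated in a few lines by running Lloyd's algorithm on a dense uniform sample; this is a standard construction offered as an illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dense = rng.random((50_000, 2))        # dense uniform cloud over [0,1]^2
pts = rng.random((64, 2))              # initial generators

for _ in range(30):                    # Lloyd iterations
    d2 = ((dense[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    lab = d2.argmin(axis=1)            # nearest generator for each cloud point
    for k in range(len(pts)):
        cell = dense[lab == k]
        if len(cell):
            pts[k] = cell.mean(axis=0) # move generator to its cell's centroid
# `pts` now approximates 64 CVT sample points over the unit square.
```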

  2. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo algorithm for high-dimensional numerical integration over tera-scale data points. The implemented algorithm uses Sobol's quasi-random sequences to generate random samples. Sobol's sequence was used to avoid clustering effects in the generated random samples and to produce low-discrepancy random samples which cover the entire integration domain. The performance of the algorithm was tested. The obtained results demonstrate the scalability and accuracy of the implemented algorithms. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using the hybrid MPI and OpenMP programming model to improve the performance of the algorithms. If the mixed model is used, attention should be paid to the scalability and accuracy.
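
    The block-decomposition idea can be sketched as follows: each worker fast-forwards the same Sobol' sequence to its own contiguous block and returns a partial sum. The report's MPI/OpenMP code is not reproduced; this sketch uses Python multiprocessing and scipy.stats.qmc purely for illustration.

```python
import numpy as np
from scipy.stats import qmc
from multiprocessing import Pool

D, BLOCK, NWORKERS = 10, 2**14, 4

def partial_sum(w):
    eng = qmc.Sobol(d=D, scramble=False)
    eng.fast_forward(w * BLOCK)                  # jump to this worker's block
    x = eng.random(BLOCK)
    return np.prod(2.0 * x, axis=1).sum()        # separable test integrand

if __name__ == "__main__":
    with Pool(NWORKERS) as p:
        total = sum(p.map(partial_sum, range(NWORKERS)))
    print(total / (NWORKERS * BLOCK))            # should approach 1.0
```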

  3. Path-integral Monte Carlo study of asymmetric quantum quadrupolar rotors with fourth-order propagators

    NASA Astrophysics Data System (ADS)

    Park, Sungjin; Shin, Hyeondeok; Kwon, Yongkyung

    2012-08-01

    The recently-proposed fourth-order propagator based on the multi-product expansion has been applied to path-integral Monte Carlo calculations for asymmetric quantum quadrupolar rotors fixed at face-centered cubic lattice sites. The rotors are observed to undergo an orientational order-disorder phase transition at a low temperature when the electric quadrupole-quadrupole interaction is strong enough. At intermediate interaction strength, a further decrease of temperature after the first transition to the ordered phase results in a reentrant transition back to the disordered phase. The theoretical phase diagram of these asymmetric rotors determined by using fourth-order path-integral Monte Carlo calculations is found to be in good quantitative agreement with the experimental one for solid hydrogen deuteride. This leads us to conclude that the fourth-order propagator can be effectively implemented for an accurate path-integral Monte Carlo calculation of a quantum many-body system with rotational degrees of freedom.

  4. Solution of the Bartels-Kwiecinski-Praszalowicz equation via Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Chachamis, Grigorios; Sabio Vera, Agustín

    2016-08-01

    We present a method of solution of the Bartels-Kwiecinski-Praszalowicz (BKP) equation based on the numerical integration of iterated integrals in transverse momentum and rapidity space. As an application, our procedure, which makes use of Monte Carlo integration techniques, is applied to obtain the gluon Green function in the Odderon case at leading order. The same approach can be used for more complicated scenarios.

  5. The integration of improved Monte Carlo compton scattering algorithms into the Integrated TIGER Series.

    SciTech Connect

    Quirk, Thomas, J., IV

    2004-08-01

    The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross section (which assumes a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
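
    As background, sampling a Compton scattering angle from the free-electron Klein-Nishina cross section can be done by simple rejection, as sketched below; ITS's production sampling (with incoherent scattering functions and, prospectively, Doppler broadening) is considerably more involved.

```python
import numpy as np

def klein_nishina(cos_t, alpha):
    # unnormalized Klein-Nishina angular distribution;
    # alpha = photon energy in electron rest-mass units (E / 511 keV)
    eps = 1.0 / (1.0 + alpha * (1.0 - cos_t))            # E'/E
    return eps**2 * (eps + 1.0 / eps - (1.0 - cos_t**2))

def sample_angle(alpha, rng):
    grid = np.linspace(-1.0, 1.0, 2001)
    fmax = klein_nishina(grid, alpha).max()              # rejection envelope
    while True:
        c = rng.uniform(-1.0, 1.0)                       # candidate cos(theta)
        if rng.random() * fmax <= klein_nishina(c, alpha):
            return c

rng = np.random.default_rng(0)
angles = [sample_angle(1.0, rng) for _ in range(10_000)]  # 511 keV photons
```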

  6. The Development and Diagnostic Evaluation of the Monte Carlo Integration Computer as a Teaching Aid.

    ERIC Educational Resources Information Center

    Wood, Dean A.

    This document outlines the operation of the Monte Carlo Integration Computer (MCIC), which is capable of simulating several types of chemical processes. Some data obtained through the MCIC simulation of physical processes are presented in graphs. After giving reasons for not using the initially contemplated summative research procedures for…

  7. On the ground state calculation of a many-body system using a self-consistent basis and quasi-Monte Carlo: An application to water hexamer

    NASA Astrophysics Data System (ADS)

    Georgescu, Ionuţ; Jitomirskaya, Svetlana; Mandelshtam, Vladimir A.

    2013-11-01

    Given a quantum many-body system, the Self-Consistent Phonons (SCP) method provides an optimal harmonic approximation by minimizing the free energy. In particular, the SCP estimate for the vibrational ground state (zero temperature) appears to be surprisingly accurate. We explore the possibility of going beyond the SCP approximation by considering the system Hamiltonian evaluated in the harmonic eigenbasis of the SCP Hamiltonian. It appears that the SCP ground state is already uncoupled to all singly- and doubly-excited basis functions. So, in order to improve the SCP result at least triply-excited states must be included, which then reduces the error in the ground state estimate substantially. For a multidimensional system two numerical challenges arise, namely, evaluation of the potential energy matrix elements in the harmonic basis, and handling and diagonalizing the resulting Hamiltonian matrix, whose size grows rapidly with the dimensionality of the system. Using the example of water hexamer we demonstrate that such calculation is feasible, i.e., constructing and diagonalizing the Hamiltonian matrix in a triply-excited SCP basis, without any additional assumptions or approximations. Our results indicate particularly that the ground state energy differences between different isomers (e.g., cage and prism) of water hexamer are already quite accurate within the SCP approximation.

  8. On the ground state calculation of a many-body system using a self-consistent basis and quasi-Monte Carlo: An application to water hexamer

    SciTech Connect

    Georgescu, Ionuţ; Mandelshtam, Vladimir A.; Jitomirskaya, Svetlana

    2013-11-28

    Given a quantum many-body system, the Self-Consistent Phonons (SCP) method provides an optimal harmonic approximation by minimizing the free energy. In particular, the SCP estimate for the vibrational ground state (zero temperature) appears to be surprisingly accurate. We explore the possibility of going beyond the SCP approximation by considering the system Hamiltonian evaluated in the harmonic eigenbasis of the SCP Hamiltonian. It appears that the SCP ground state is already uncoupled to all singly- and doubly-excited basis functions. So, in order to improve the SCP result at least triply-excited states must be included, which then reduces the error in the ground state estimate substantially. For a multidimensional system two numerical challenges arise, namely, evaluation of the potential energy matrix elements in the harmonic basis, and handling and diagonalizing the resulting Hamiltonian matrix, whose size grows rapidly with the dimensionality of the system. Using the example of water hexamer we demonstrate that such calculation is feasible, i.e., constructing and diagonalizing the Hamiltonian matrix in a triply-excited SCP basis, without any additional assumptions or approximations. Our results indicate particularly that the ground state energy differences between different isomers (e.g., cage and prism) of water hexamer are already quite accurate within the SCP approximation.

  9. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    PubMed Central

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231

  10. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics.

    PubMed

    Hey, Jody; Nielsen, Rasmus

    2007-02-20

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231

  11. Worm algorithm and diagrammatic Monte Carlo: A new approach to continuous-space path integral Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Boninsegni, M.; Prokof'ev, N. V.; Svistunov, B. V.

    2006-09-01

    A detailed description is provided of a new worm algorithm, enabling the accurate computation of thermodynamic properties of quantum many-body systems in continuous space, at finite temperature. The algorithm is formulated within the general path integral Monte Carlo (PIMC) scheme, but also allows one to perform quantum simulations in the grand canonical ensemble, as well as to compute off-diagonal imaginary-time correlation functions, such as the Matsubara Green function, simultaneously with diagonal observables. Another important innovation consists of the expansion of the attractive part of the pairwise potential energy into elementary (diagrammatic) contributions, which are then statistically sampled. This affords a complete microscopic account of the long-range part of the potential energy, while keeping the computational complexity of all updates independent of the size of the simulated system. The computational scheme allows for efficient calculations of the superfluid fraction and off-diagonal correlations in space-time, for system sizes which are orders of magnitude larger than those accessible to conventional PIMC. We present illustrative results for the superfluid transition in bulk liquid ⁴He in two and three dimensions, as well as the calculation of the chemical potential of hcp ⁴He.

  12. Permutation blocking path integral Monte Carlo approach to the uniform electron gas at finite temperature.

    PubMed

    Dornheim, Tobias; Schoof, Tim; Groth, Simon; Filinov, Alexey; Bonitz, Michael

    2015-11-28

    The uniform electron gas (UEG) at finite temperature is of high current interest due to its key relevance for many applications including dense plasmas and laser excited solids. In particular, density functional theory heavily relies on accurate thermodynamic data for the UEG. Until recently, the only existing first-principle results had been obtained for N = 33 electrons with restricted path integral Monte Carlo (RPIMC), for low to moderate density, r_s = r̄/a_B ≳ 1. These data have been complemented by configuration path integral Monte Carlo (CPIMC) simulations for r_s ≤ 1 that substantially deviate from RPIMC towards smaller r_s and low temperature. In this work, we present results from an independent third method, the recently developed permutation blocking path integral Monte Carlo (PB-PIMC) approach [T. Dornheim et al., New J. Phys. 17, 073017 (2015)], which we extend to the UEG. Interestingly, PB-PIMC allows us to perform simulations over the entire density range down to half the Fermi temperature (θ = k_B T/E_F = 0.5) and, therefore, to compare our results to both aforementioned methods. While we find excellent agreement with CPIMC, where results are available, we observe deviations from RPIMC that are beyond the statistical errors and increase with density. PMID:26627944

  13. First Results From GLAST-LAT Integrated Towers Cosmic Ray Data Taking And Monte Carlo Comparison

    SciTech Connect

    Brigida, M.; Caliandro, A.; Favuzzi, C.; Fusco, P.; Gargano, F.; Giordano, F.; Giglietto, N.; Loparco, F.; Marangelli, B.; Mazziotta, M.N.; Mirizzi, N.; Raino, S.; Spinelli, P.; /Bari U. /INFN, Bari

    2007-02-15

    GLAST Large Area Telescope (LAT) is a gamma ray telescope instrumented with silicon-strip detector planes and sheets of converter, followed by a calorimeter (CAL) and surrounded by an anticoincidence system (ACD). This instrument is sensitive to gamma rays in the energy range between 20 MeV and 300 GeV. At present, the first towers have been integrated and pre-launch data taking with cosmic ray muons is being performed. The results from the data analysis carried out during LAT integration will be discussed and a comparison with the predictions from the Monte Carlo simulation will be shown.

  14. Lévy-Ciesielski random series as a useful platform for Monte Carlo path integral sampling.

    PubMed

    Predescu, Cristian

    2005-04-01

    We demonstrate that the Lévy-Ciesielski implementation of Lie-Trotter products enjoys several properties that make it extremely suitable for path-integral Monte Carlo simulations: fast computation of paths, fast Monte Carlo sampling, and the ability to use different numbers of time slices for the different degrees of freedom, commensurate with the quantum effects. It is demonstrated that a Monte Carlo simulation for which particles or small groups of variables are updated in a sequential fashion has a statistical efficiency that is always comparable to or better than that of an all-particle or all-variable update sampler. The sequential sampler results in significant computational savings if updating a variable costs only a fraction of the cost for updating all variables simultaneously or if the variables are independent. In the Lévy-Ciesielski representation, the path variables are grouped in a small number of layers, with the variables from the same layer being statistically independent. The superior performance of the fast sampling algorithm is shown to be a consequence of these observations. Both mathematical arguments and numerical simulations are employed in order to quantify the computational advantages of the sequential sampler, the Lévy-Ciesielski implementation of path integrals, and the fast sampling algorithm. PMID:15903818
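
    The layered structure described above can be sketched for a Brownian bridge: each Lévy-Ciesielski layer displaces the midpoints of the current intervals by independent Gaussians, so all variables within one layer can be updated independently. A minimal illustration (not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
levels = 8
n = 2**levels
B = np.zeros(n + 1)                    # Brownian bridge pinned at both ends
step = n
while step > 1:                        # one pass per Lévy-Ciesielski layer
    half = step // 2
    idx = np.arange(half, n, step)     # midpoints refined in this layer
    sd = np.sqrt(half / n / 2.0)       # conditional std dev = sqrt(dt/4)
    B[idx] = (0.5 * (B[idx - half] + B[idx + half])
              + sd * rng.standard_normal(idx.size))
    step = half
# All variables within one layer (one `idx` batch) are mutually independent,
# which is what permits the layer-by-layer sequential sampling discussed above.
```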

  15. Quantum Mechanical Single Molecule Partition Function from Path Integral Monte Carlo Simulations

    SciTech Connect

    Chempath, Shaji; Bell, Alexis T.; Predescu, Cristian

    2006-10-01

    An algorithm for calculating the partition function of a molecule with the path integral Monte Carlo method is presented. Staged thermodynamic perturbation with respect to a reference harmonic potential is utilized to evaluate the ratio of partition functions. Parallel tempering and a new Monte Carlo estimator for the ratio of partition functions are implemented here to achieve well converged simulations that give an accuracy of 0.04 kcal/mol in the reported free energies. The method is applied to various test systems, including a catalytic system composed of 18 atoms. Absolute free energies calculated by this method lead to corrections as large as 2.6 kcal/mol at 300 K for some of the examples presented.
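
    The ratio of partition functions evaluated by such staged perturbation schemes rests, in its classical analogue, on the standard free-energy-perturbation identity (stated here as background, not as a detail of the paper's quantum estimator):

```latex
\[
\frac{Z}{Z_{\mathrm{ref}}}
  = \frac{\int e^{-\beta V(\mathbf{x})}\,d\mathbf{x}}
         {\int e^{-\beta V_{\mathrm{ref}}(\mathbf{x})}\,d\mathbf{x}}
  = \left\langle e^{-\beta\left[V(\mathbf{x}) - V_{\mathrm{ref}}(\mathbf{x})\right]}
    \right\rangle_{\mathrm{ref}}
\]
```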

  16. Quantum mechanical single molecule partition function from path integral Monte Carlo simulations.

    PubMed

    Chempath, Shaji; Predescu, Cristian; Bell, Alexis T

    2006-06-21

    An algorithm for calculating the partition function of a molecule with the path integral Monte Carlo method is presented. Staged thermodynamic perturbation with respect to a reference harmonic potential is utilized to evaluate the ratio of partition functions. Parallel tempering and a new Monte Carlo estimator for the ratio of partition functions are implemented here to achieve well converged simulations that give an accuracy of 0.04 kcal/mol in the reported free energies. The method is applied to various test systems, including a catalytic system composed of 18 atoms. Absolute free energies calculated by this method lead to corrections as large as 2.6 kcal/mol at 300 K for some of the examples presented. PMID:16821901

  17. Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Agarwal, Ravi P.; Shaykhian, Gholam Ali

    2007-01-01

    We discuss here the relative merits of the golden ratio and pi as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive of using a random sequence is to solve real world problems, it is more desirable if we compare the quality of the sequences based on their performances for these problems in terms of quality/accuracy of the output. We also compare these sources against those generated by a popular pseudo-random generator, viz., the Matlab rand, and the quasi-random generator halton, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. Also, it reveals that pi is a better source of a random sequence than the golden ratio when the accuracy of the integration is concerned.
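
    A sketch of the digit-block construction the study describes, using mpmath (assumed available) to supply the digits of pi; the block length and sample count are illustrative choices.

```python
import numpy as np
from mpmath import mp

mp.dps = 100_015                       # enough precision for 100,000 digits
digits = mp.nstr(mp.pi, 100_010).replace(".", "")[:100_000]

# consecutive 10-digit blocks -> uniform variates in [0, 1)
u = np.array([int(digits[i:i + 10]) / 1e10
              for i in range(0, len(digits), 10)])

# use the variates for a simple MC integral: int_0^1 4/(1+x^2) dx = pi
print(np.mean(4.0 / (1.0 + u**2)))
```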

  18. CAD-based Monte Carlo Program for Integrated Simulation of Nuclear System SuperMC

    NASA Astrophysics Data System (ADS)

    Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin

    2014-06-01

    Monte Carlo (MC) method has distinct advantages to simulate complicated nuclear systems and is envisioned as a routine method for nuclear design and analysis in the future. High fidelity simulation with the MC method coupled with multi-physical phenomenon simulation has significant impact on the safety, economy and sustainability of nuclear systems. However, great challenges to current MC methods and codes prevent their application in real engineering projects. SuperMC is a CAD-based Monte Carlo program for integrated simulation of nuclear systems developed by FDS Team, China, making use of hybrid MC-deterministic methods and advanced computer technologies. The design aim, architecture and main methodology of SuperMC are presented in this paper. SuperMC 2.1, the latest version for neutron, photon and coupled neutron-photon transport calculation, has been developed and validated by using a series of benchmarking cases such as the fusion reactor ITER model and the fast reactor BN-600 model. SuperMC is still in its evolution process toward a general and routine tool for nuclear systems.

  19. Path-integral Monte Carlo simulation of the second layer of 4He adsorbed on graphite

    NASA Astrophysics Data System (ADS)

    Pierce, Marlon; Manousakis, Efstratios

    1999-02-01

    We have developed a path-integral Monte Carlo method for simulating helium films and apply it to the second layer of helium adsorbed on graphite. We use helium-helium and helium-graphite interactions that are found from potentials which realistically describe the interatomic interactions. The Monte Carlo sampling is over both particle positions and permutations of particle labels. From the particle configurations and static structure factor calculations, we find that this layer possesses, in order of increasing density, a superfluid liquid phase, a √7 × √7 commensurate solid phase that is registered with respect to the first layer, and an incommensurate solid phase. By applying the Maxwell construction to the dependence of the low-temperature total energy on the coverage, we are able to identify coexistence regions between the phases. From these, we deduce an effectively zero-temperature phase diagram. Our phase boundaries are in agreement with heat capacity and torsional oscillator measurements, and demonstrate that the experimentally observed disruption of the superfluid phase is caused by the growth of the commensurate phase. We further observe that the superfluid phase has a transition temperature consistent with the two-dimensional value. Promotion to the third layer occurs for densities above 0.212 atoms/Å², in good agreement with experiment. Finally, we calculate the specific heat for each phase and obtain peaks at temperatures in general agreement with experiment.

  20. An integrated Monte Carlo dosimetric verification system for radiotherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Yamamoto, T.; Mizowaki, T.; Miyabe, Y.; Takegawa, H.; Narita, Y.; Yano, S.; Nagata, Y.; Teshima, T.; Hiraoka, M.

    2007-04-01

    An integrated Monte Carlo (MC) dose calculation system, MCRTV (Monte Carlo for radiotherapy treatment plan verification), has been developed for clinical treatment plan verification, especially for routine quality assurance (QA) of intensity-modulated radiotherapy (IMRT) plans. The MCRTV system consists of the EGS4/PRESTA MC codes originally written for particle transport through the accelerator, the multileaf collimator (MLC), and the patient/phantom, which run on a 28-CPU Linux cluster, and the associated software developed for the clinical implementation. MCRTV has an interface with a commercial treatment planning system (TPS) (Eclipse, Varian Medical Systems, Palo Alto, CA, USA) and reads the information needed for MC computation transferred in DICOM-RT format. The key features of MCRTV have been presented in detail in this paper. The phase-space data of our 15 MV photon beam from a Varian Clinac 2300C/D have been developed and several benchmarks have been performed under homogeneous and several inhomogeneous conditions (including water, aluminium, lung and bone media). The MC results agreed with the ionization chamber measurements to within 1% and 2% for homogeneous and inhomogeneous conditions, respectively. The MC calculation for a clinical prostate IMRT treatment plan validated the implementation of the beams and the patient/phantom configuration in MCRTV.

  1. Path integral Monte Carlo with importance sampling for excitons interacting with an arbitrary phonon bath.

    PubMed

    Shim, Sangwoo; Aspuru-Guzik, Alán

    2012-12-14

    The reduced density matrix of excitons coupled to a phonon bath at a finite temperature is studied using the path integral Monte Carlo method. Appropriate choices of estimators and importance sampling schemes are crucial to the performance of the Monte Carlo simulation. We show that by choosing the population-normalized estimator for the reduced density matrix, an efficient and physically-meaningful sampling function can be obtained. In addition, the nonadiabatic phonon probability density is obtained as a byproduct during the sampling procedure. For importance sampling, we adopted the Metropolis-adjusted Langevin algorithm. The analytic expression for the gradient of the target probability density function associated with the population-normalized estimator cannot be obtained in closed form without a matrix power series. An approximated gradient that can be efficiently calculated is explored to achieve better computational scaling and efficiency. Application to a simple one-dimensional model system from the previous literature confirms the correctness of the method developed in this manuscript. The displaced harmonic model system within the single exciton manifold shows the numerically exact temperature dependence of the coherence and population of the excitonic system. The sampling scheme can be applied to an arbitrary anharmonic environment, such as multichromophoric systems embedded in the protein complex. The result of this study is expected to stimulate further development of real time propagation methods that satisfy the detailed balance condition for exciton populations. PMID:23249075
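
    For readers unfamiliar with the sampler the authors adopt, here is a generic Metropolis-adjusted Langevin algorithm (MALA) sketch for a toy one-dimensional target; the exciton-specific density and its approximated gradient are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
log_p = lambda x: -0.25 * x**4               # toy target (unnormalized)
grad  = lambda x: -x**3                      # gradient of log_p
eps = 0.5

x, chain = 0.0, []
for _ in range(20_000):
    mu_x = x + 0.5 * eps**2 * grad(x)        # Langevin drift toward high density
    y = mu_x + eps * rng.standard_normal()   # Gaussian proposal around the drift
    mu_y = y + 0.5 * eps**2 * grad(y)
    # Metropolis-Hastings correction for the asymmetric proposal q(.|.)
    log_a = (log_p(y) - log_p(x)
             - ((x - mu_y)**2 - (y - mu_x)**2) / (2.0 * eps**2))
    if np.log(rng.random()) < log_a:
        x = y
    chain.append(x)                          # samples from the target density
```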

  2. Excitonic effects in two-dimensional semiconductors: Path integral Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Velizhanin, Kirill A.; Saxena, Avadh

    2015-11-01

    One of the most striking features of novel two-dimensional semiconductors (e.g., transition metal dichalcogenide monolayers or phosphorene) is a strong Coulomb interaction between charge carriers resulting in large excitonic effects. In particular, this leads to the formation of multicarrier bound states upon photoexcitation (e.g., excitons, trions, and biexcitons), which could remain stable at near-room temperatures and contribute significantly to the optical properties of such materials. In the present work we have used the path integral Monte Carlo methodology to numerically study properties of multicarrier bound states in two-dimensional semiconductors. Specifically, we have accurately investigated and tabulated the dependence of single-exciton, trion, and biexciton binding energies on the strength of dielectric screening, including the limiting cases of very strong and very weak screening. The results of this work are potentially useful in the analysis of experimental data and benchmarking of theoretical and computational models.

  3. Excitonic effects in two-dimensional semiconductors: Path integral Monte Carlo approach

    SciTech Connect

    Velizhanin, Kirill A.; Saxena, Avadh

    2015-11-11

    One of the most striking features of novel two-dimensional semiconductors (e.g., transition metal dichalcogenide monolayers or phosphorene) is a strong Coulomb interaction between charge carriers resulting in large excitonic effects. In particular, this leads to the formation of multicarrier bound states upon photoexcitation (e.g., excitons, trions, and biexcitons), which could remain stable at near-room temperatures and contribute significantly to the optical properties of such materials. In our work we have used the path integral Monte Carlo methodology to numerically study properties of multicarrier bound states in two-dimensional semiconductors. Specifically, we have accurately investigated and tabulated the dependence of single-exciton, trion, and biexciton binding energies on the strength of dielectric screening, including the limiting cases of very strong and very weak screening. The results of this work are potentially useful in the analysis of experimental data and benchmarking of theoretical and computational models.

  4. Excitonic effects in two-dimensional semiconductors: Path integral Monte Carlo approach

    DOE PAGES

    Velizhanin, Kirill A.; Saxena, Avadh

    2015-11-11

    One of the most striking features of novel two-dimensional semiconductors (e.g., transition metal dichalcogenide monolayers or phosphorene) is a strong Coulomb interaction between charge carriers resulting in large excitonic effects. In particular, this leads to the formation of multicarrier bound states upon photoexcitation (e.g., excitons, trions, and biexcitons), which could remain stable at near-room temperatures and contribute significantly to the optical properties of such materials. In our work we have used the path integral Monte Carlo methodology to numerically study properties of multicarrier bound states in two-dimensional semiconductors. Specifically, we have accurately investigated and tabulated the dependence of single-exciton, trion, and biexciton binding energies on the strength of dielectric screening, including the limiting cases of very strong and very weak screening. The results of this work are potentially useful in the analysis of experimental data and benchmarking of theoretical and computational models.

  5. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    SciTech Connect

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
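
    Of the optimizations listed, the replacement of linear searches with binary versions is the easiest to illustrate. The sketch below is a generic Python analogue of a sorted-grid lookup, not the ITS FORTRAN itself, and the grid is hypothetical.

```python
import numpy as np

def find_bin_linear(grid, value):
    """O(n) scan of a sorted grid, as in the original search loops."""
    for i in range(len(grid) - 1):
        if grid[i] <= value < grid[i + 1]:
            return i
    return len(grid) - 2

def find_bin_binary(grid, value):
    """O(log n) bisection; searchsorted returns the insertion index."""
    i = int(np.searchsorted(grid, value, side="right")) - 1
    return min(max(i, 0), len(grid) - 2)

energy_grid = np.logspace(0, 5, 4096)  # hypothetical 1 keV - 100 MeV grid
assert find_bin_linear(energy_grid, 37.0) == find_bin_binary(energy_grid, 37.0)
```

    For a table of a few thousand entries, bisection turns each inner-loop lookup from thousands of comparisons into about a dozen, the kind of saving that profiling-directed acceleration targets.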

  6. Integrated layout based Monte-Carlo simulation for design arc optimization

    NASA Astrophysics Data System (ADS)

    Shao, Dongbing; Clevenger, Larry; Zhuang, Lei; Liebmann, Lars; Wong, Robert; Culp, James

    2016-03-01

    Design rules are created by considering a wafer fail mechanism with the relevant design levels under various design cases, and the values are set to cover the worst-case scenario. Because of this simplification and generalization, design rules hinder, rather than help, dense device scaling. As an example, SRAM designs always need extensive ground rule waivers. Furthermore, dense design also often involves a "design arc", a collection of design rules the sum of which equals the critical pitch defined by the technology. In a design arc, a single rule change can lead to a chain reaction of other rule violations. In this talk we present a methodology using Layout Based Monte-Carlo Simulation (LBMCS) with multiple integrated ground rule checks. We apply this methodology to an SRAM word line contact, and the result is a layout that has balanced wafer fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div., Semiconductor Research and Development Center, Hopewell Junction, NY 12533.

  7. Torsional path integral Monte Carlo method for the quantum simulation of large molecules

    NASA Astrophysics Data System (ADS)

    Miller, Thomas F.; Clary, David C.

    2002-05-01

    A molecular application is introduced for calculating quantum statistical mechanical expectation values of large molecules at nonzero temperatures. The Torsional Path Integral Monte Carlo (TPIMC) technique applies an uncoupled winding number formalism to the torsional degrees of freedom in molecular systems. The internal energies of the molecules ethane, n-butane, n-octane, and enkephalin are calculated at standard temperature using the TPIMC technique and compared to the expectation values obtained using the harmonic oscillator approximation and a variational technique. All studied molecules exhibited significant quantum mechanical contributions to their internal energy expectation values according to the TPIMC technique. The harmonic oscillator approximation performs well for the molecules presented in this study but is limited by its neglect of both anharmonicity effects and the potential coupling of intramolecular torsions.

  8. Fermionic path-integral Monte Carlo results for the uniform electron gas at finite temperature.

    PubMed

    Filinov, V S; Fortov, V E; Bonitz, M; Moldabekov, Zh

    2015-03-01

    The uniform electron gas (UEG) at finite temperature has recently attracted substantial interest due to the experimental progress in the field of warm dense matter. To explain the experimental data, accurate theoretical models for high-density plasmas are needed that depend crucially on the quality of the thermodynamic properties of the quantum degenerate nonideal electrons and of the treatment of their interaction with the positive background. Recent restricted (fixed-node) path-integral Monte Carlo (RPIMC) data are believed to be the most accurate for the UEG at finite temperature, but they become questionable at high degeneracy, when the Brueckner parameter rs=a/aB--the ratio of the mean interparticle distance to the Bohr radius--approaches 1. The validity range of these simulations and their predictive capabilities for the UEG are presently unknown. This is due to the unknown quality of the fixed nodes used and of the finite-size scaling from N=33 simulated particles (per spin projection) to the macroscopic limit. To analyze these questions, we present alternative direct fermionic path integral Monte Carlo (DPIMC) simulations that are independent of RPIMC. Our simulations take into account quantum effects not only in the electron system but also in their interaction with the uniform positive background. Also, we use substantially larger particle numbers (up to three times more) and perform an extrapolation to the macroscopic limit. We observe very good agreement with RPIMC for the polarized electron gas up to moderate densities around rs=4, and larger deviations for the unpolarized case at low temperatures. For higher densities (high electron degeneracy), rs≲1.5, both RPIMC and DPIMC are problematic due to the increased fermion sign problem. PMID:25871225

  9. Fermionic path-integral Monte Carlo results for the uniform electron gas at finite temperature

    NASA Astrophysics Data System (ADS)

    Filinov, V. S.; Fortov, V. E.; Bonitz, M.; Moldabekov, Zh.

    2015-03-01

    The uniform electron gas (UEG) at finite temperature has recently attracted substantial interest due to the experimental progress in the field of warm dense matter. To explain the experimental data, accurate theoretical models for high-density plasmas are needed that depend crucially on the quality of the thermodynamic properties of the quantum degenerate nonideal electrons and of the treatment of their interaction with the positive background. Recent restricted (fixed-node) path-integral Monte Carlo (RPIMC) data are believed to be the most accurate for the UEG at finite temperature, but they become questionable at high degeneracy, when the Brueckner parameter rs=a/aB--the ratio of the mean interparticle distance to the Bohr radius--approaches 1. The validity range of these simulations and their predictive capabilities for the UEG are presently unknown. This is due to the unknown quality of the fixed nodes used and of the finite-size scaling from N=33 simulated particles (per spin projection) to the macroscopic limit. To analyze these questions, we present alternative direct fermionic path integral Monte Carlo (DPIMC) simulations that are independent of RPIMC. Our simulations take into account quantum effects not only in the electron system but also in their interaction with the uniform positive background. Also, we use substantially larger particle numbers (up to three times more) and perform an extrapolation to the macroscopic limit. We observe very good agreement with RPIMC for the polarized electron gas up to moderate densities around rs=4, and larger deviations for the unpolarized case at low temperatures. For higher densities (high electron degeneracy), rs≲1.5, both RPIMC and DPIMC are problematic due to the increased fermion sign problem.

  10. Efficient numerical evaluation of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran

    2016-03-01

    Feynman loop integrals are a key ingredient in the calculation of higher-order radiative effects and are essential for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both Euclidean and physical kinematic regions in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated in less than half a minute with good accuracy, which makes the direct numerical approach viable for precise investigation of higher-order effects in multi-loop processes, e.g. the next-to-leading-order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), and Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098).
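
    Stripped of the sector-decomposition machinery, the quasi-Monte Carlo step amounts to averaging the integrand over a low-discrepancy point set instead of pseudo-random nodes. A minimal CPU-only sketch using SciPy's scrambled Sobol' generator on a smooth test integrand (not one of the paper's Feynman integrals):

```python
import numpy as np
from scipy.stats import qmc

def qmc_integrate(f, dim, m, seed=0):
    """Average f over 2**m scrambled Sobol' points in [0, 1]**dim."""
    points = qmc.Sobol(d=dim, scramble=True, seed=seed).random_base2(m=m)
    return float(np.mean(f(points)))

# Separable test integrand with exact value (2/pi)**4 ~ 0.1642.
f = lambda x: np.prod(np.sin(np.pi * x), axis=1)
print(qmc_integrate(f, dim=4, m=14))  # error decays roughly like 1/N
```

    Scrambling also allows an empirical error estimate from a handful of independent randomizations, which is convenient when quoting integration uncertainties.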

  11. Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV

    SciTech Connect

    Miller, S.G.

    1988-08-01

    Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend their ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.

  12. Monte Carlo simulation of small electron fields collimated by the integrated photon MLC

    NASA Astrophysics Data System (ADS)

    Mihaljevic, Josip; Soukup, Martin; Dohm, Oliver; Alber, Markus

    2011-02-01

    In this study, a Monte Carlo (MC)-based beam model for an ELEKTA linear accelerator was established. The beam model is based on the EGSnrc Monte Carlo code, whereby electron beams with nominal energies of 10, 12 and 15 MeV were considered. For collimation of the electron beam, only the integrated photon multi-leaf-collimators (MLCs) were used. No additional secondary or tertiary add-ons like applicators, cutouts or dedicated electron MLCs were included. The source parameters of the initial electron beam were derived semi-automatically from measurements of depth-dose curves and lateral profiles in a water phantom. A routine to determine the initial electron energy spectra was developed which fits a Gaussian spectrum to the most prominent features of depth-dose curves. The comparisons of calculated and measured depth-dose curves demonstrated agreement within 1%/1 mm. The source divergence angle of initial electrons was fitted to lateral dose profiles beyond the range of electrons, where the imparted dose is mainly due to bremsstrahlung produced in the scattering foils. For accurate modelling of narrow beam segments, the influence of air density on dose calculation was studied. The air density for simulations was adjusted to local values (433 m above sea level) and compared with the standard air supplied by the ICRU data set. The results indicate that the air density is an influential parameter for dose calculations. Furthermore, the default value of the BEAMnrc parameter 'skin depth' for the boundary crossing algorithm was found to be inadequate for the modelling of small electron fields. A higher value for this parameter eliminated discrepancies that appeared as too-broad dose profiles and an increased dose along the central axis. The beam model was validated with measurements, whereby an agreement mostly within 3%/3 mm was found.

  13. The Monte Carlo Integration Computer as an Instructional Model for the Simulation of Equilibrium and Kinetic Chemical Processes: The Development and Evaluation of a Teaching Aid.

    ERIC Educational Resources Information Center

    Wood, Dean Arthur

    A special purpose digital computer which utilizes the Monte Carlo integration method of obtaining simulations of chemical processes was developed and constructed. The computer, designated as the Monte Carlo Integration Computer (MCIC), was designed as an instructional model for the illustration of kinetic and equilibrium processes, and was…

  14. Excitonic effects in 2D semiconductors: Path Integral Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Velizhanin, Kirill; Saxena, Avadh

    One of the most striking features of novel 2D semiconductors (e.g., transition metal dichalcogenide monolayers or phosphorene) is a strong Coulomb interaction between charge carriers resulting in large excitonic effects. In particular, this leads to the formation of multi-carrier bound states (e.g., excitons, trions, and biexcitons), which could remain stable at near-room temperatures and contribute significantly to the optical properties of such materials. In my talk, I will report on our recent progress in using the Path Integral Monte Carlo methodology to numerically study properties of multi-carrier bound states in 2D semiconductors. Incorporating the effect of dielectric confinement (via the Keldysh potential), we have investigated and tabulated the dependence of single-exciton, trion, and biexciton binding energies on the strength of dielectric screening, including the limiting cases of very strong and very weak screening. The implications of the obtained results and the possible limitations of the model used will be discussed. The results of this work are potentially useful in the analysis of experimental data and benchmarking of theoretical and computational models.

  15. Monte Carlo simulation studies of lipid order parameter profiles near integral membrane proteins.

    PubMed Central

    Sperotto, M M; Mouritsen, O G

    1991-01-01

    Monte Carlo simulation techniques have been applied to a statistical mechanical lattice model in order to study the coherence length for the spatial fluctuations of the lipid order parameter profiles around integral membrane proteins in dipalmitoyl phosphatidylcholine bilayers. The model, which provides a detailed description of the pure lipid bilayer main transition, incorporates hydrophobic matching between the lipid and protein hydrophobic thicknesses as a major contribution to the lipid-protein interactions in lipid membranes. The model is studied at low protein-to-lipid ratios. The temperature dependence of the coherence length is found to have a dramatic peak at the phase transition temperature. The dependence on protein circumference as well as hydrophobic length is determined and it is concluded that in some cases the coherence length is much longer than previously anticipated. The long coherence length provides a mechanism for indirect lipid-mediated protein-protein long-range attraction and hence plays an important role in regulating protein segregation. PMID:2009352

  16. Path-Integral Monte Carlo and the Squeezed Trapped Bose-Einstein Gas

    SciTech Connect

    Fernandez, Juan Pablo; Mullin, William J.

    2006-09-07

    Bose-Einstein condensation has been experimentally found to take place in finite trapped systems when one of the confining frequencies is increased until the gas becomes effectively two-dimensional (2D). We confirm the plausibility of this result by performing path-integral Monte Carlo (PIMC) simulations of trapped Bose gases of increasing anisotropy and comparing them to the predictions of finite-temperature many-body theory. PIMC simulations provide an essentially exact description of these systems; they yield the density profile directly and provide two different estimates for the condensate fraction. For the ideal gas, we find that the PIMC column density of the squeezed gas corresponds quite accurately to that of the exact analytic solution and, moreover, is well mimicked by the density of a 2D gas at the same temperature; the two estimates for the condensate fraction bracket the exact result. For the interacting case, we find 2D Hartree-Fock solutions whose density profiles coincide quite well with the PIMC column densities and whose predictions for the condensate fraction are again bracketed by the PIMC estimates.

  17. Integrated TIGER Series of Coupled Electron/Photon Monte Carlo Transport Codes System.

    Energy Science and Technology Software Center (ESTSC)

    2012-11-30

    Version: 00 Distribution is restricted to US Government Agencies and Their Contractors Only. The Integrated Tiger Series (ITS) is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. The goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 95. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  18. Integrated TIGER Series of Coupled Electron/Photon Monte Carlo Transport Codes System.

    SciTech Connect

    VALDEZ, GREG D.

    2012-11-30

    Version: 00 Distribution is restricted to US Government Agencies and Their Contractors Only. The Integrated Tiger Series (ITS) is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. The goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 95. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  19. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    SciTech Connect

    Snyder, Abigail C.; Jiao, Yu

    2010-10-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6 to 10^12 data points). Hence, traditional data analysis tools running on a single CPU take too long to be practical, and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
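
    The nesting of one-dimensional solvers into a four-dimensional integrator can be sketched with SciPy standing in for the GNU Scientific Library; the separable integrand below is a stand-in for the SNS intensity model, which the report does not give.

```python
import numpy as np
from scipy.integrate import quad

def integrate_4d(f, bounds, eps=1e-6):
    """Evaluate a 4-D integral by nesting four adaptive 1-D quadratures.
    The cost multiplies across dimensions, which is why quasi-Monte
    Carlo becomes attractive as the dimension grows."""
    def g3(a, b, c):
        return quad(lambda d: f(a, b, c, d), *bounds[3], epsabs=eps)[0]
    def g2(a, b):
        return quad(lambda c: g3(a, b, c), *bounds[2], epsabs=eps)[0]
    def g1(a):
        return quad(lambda b: g2(a, b), *bounds[1], epsabs=eps)[0]
    return quad(g1, *bounds[0], epsabs=eps)[0]

# Separable check: the exact value is (1 - cos(1))**4 ~ 0.04466.
val = integrate_4d(lambda a, b, c, d: np.sin(a) * np.sin(b) * np.sin(c) * np.sin(d),
                   [(0.0, 1.0)] * 4)
print(val)
```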

  20. Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations

    NASA Astrophysics Data System (ADS)

    Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.

    2014-06-01

    Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate their accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings, where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.

  1. Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419

    SciTech Connect

    Hulett, David T.; Nosbisch, Michael R.

    2012-07-01

    This discussion of the recommended practice (RP) 57R-09 of AACE International defines the integrated analysis of schedule and cost risk to estimate the appropriate level of cost and schedule contingency reserve on projects. The main contribution of this RP is to include the impact of schedule risk on cost risk and hence on the need for cost contingency reserves. Additional benefits include the prioritizing of the risks to cost, some of which are risks to schedule, so that risk mitigation may be conducted in a cost-effective way; scatter diagrams of time-cost pairs for developing joint targets of time and cost; and probabilistic cash flow, which shows cash flow at different levels of certainty. Integrating cost and schedule risk into one analysis based on the project schedule loaded with costed resources from the cost estimate (1) provides more accurate cost estimates than if the schedule risk were ignored or incorporated only partially, and (2) illustrates the importance of schedule risk to cost risk when the durations of activities using labor-type (time-dependent) resources are risky. Many activities such as detailed engineering, construction, or software development are mainly conducted by people who need to be paid even if their work takes longer than scheduled. Level-of-effort resources, such as the project management team, are extreme examples of time-dependent resources, since if the project duration exceeds its planned duration the cost of these resources will increase over their budgeted amount. The integrated cost-schedule risk analysis is based on: a high-quality CPM schedule with logic tight enough that it will provide the correct dates and critical paths during simulation automatically, without manual intervention; a contingency-free estimate of project costs that is loaded on the activities of the schedule; and resolution of inconsistencies between the cost estimate and schedule that often creep into those documents as project execution proceeds. A toy numerical version is sketched below.
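
    A minimal numerical illustration of the integrated idea, with an invented three-activity network, triangular duration distributions, and made-up rates (none of this is from the RP): sample durations, run the CPM forward pass, and let time-dependent resource costs follow the sampled durations so that cost risk inherits schedule risk.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000  # Monte Carlo iterations

# Toy network: A precedes both B and C; the project ends when B and C do.
dur_a = rng.triangular(8, 10, 15, n)   # optimistic/most-likely/pessimistic days
dur_b = rng.triangular(18, 20, 30, n)
dur_c = rng.triangular(12, 15, 25, n)

finish = dur_a + np.maximum(dur_b, dur_c)      # CPM forward pass per iteration
labor_cost = 2_000 * (dur_a + dur_b + dur_c)   # time-dependent (labor) resources
loe_cost = 500 * finish                        # level-of-effort tracks duration
total_cost = labor_cost + loe_cost + 50_000    # plus time-independent costs

# P80 targets for setting joint schedule and cost contingency reserves.
print(np.percentile(finish, 80), np.percentile(total_cost, 80))
```

    Because `loe_cost` is a function of the sampled finish date, the scatter of (finish, total_cost) pairs reproduces the schedule-driven cost risk the RP emphasizes.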

  2. Conformational Transition Pathways Explored by Monte Carlo Simulation Integrated with Collective Modes

    PubMed Central

    Kantarci-Carsibasi, Nigar; Haliloglu, Turkan; Doruker, Pemra

    2008-01-01

    Conformational transitions between open/closed or free/bound states in proteins possess functional importance. We propose a technique in which the collective modes obtained from an anisotropic network model (ANM) are used in conjunction with a Monte Carlo (MC) simulation approach to investigate conformational transition pathways and pathway intermediates. The ANM-MC technique is applied to adenylate kinase (AK) and hemoglobin. The iterative method, in which normal modes are continuously updated during the simulation, proves successful in accomplishing the transition between the open and closed conformations of AK and the tense and relaxed forms of hemoglobin (Cα root-mean-square deviations between the two end structures of 7.13 Å and 3.55 Å, respectively). Target conformations are reached to within root-mean-square deviations of 2.27 Å and 1.90 Å for AK and hemoglobin, respectively. The intermediate conformations overlap with crystal structures from the AK family within a 3.0-Å root-mean-square deviation. In the case of hemoglobin, the tense-to-relaxed transition passes through the relaxed state. In both cases, the lowest-frequency modes are effective during transitions. The targeted Monte Carlo approach is used without the application of collective modes. Both the ANM-MC and targeted Monte Carlo techniques can explore sequences of events in transition pathways with an efficient yet realistic conformational search. PMID:18676657

  3. Variational path integral molecular dynamics and hybrid Monte Carlo algorithms using a fourth order propagator with applications to molecular systems.

    PubMed

    Kamibayashi, Yuki; Miura, Shinichi

    2016-08-21

    In the present study, variational path integral molecular dynamics and associated hybrid Monte Carlo (HMC) methods have been developed on the basis of a fourth-order approximation of the density operator. To reveal the dependence of physical quantities on various parameters, we analytically solve one-dimensional harmonic oscillators by the variational path integral; as a byproduct, we obtain the analytical expression of the discretized density matrix using the fourth-order approximation for the oscillators. Then, we apply our methods to realistic systems like a water molecule and a para-hydrogen cluster. In the HMC, we adopt a two-level description to avoid the time-consuming Hessian evaluation. For the systems examined in this paper, the HMC method is found to be about three times more efficient than the molecular dynamics method if appropriate HMC parameters are adopted; the advantage of the HMC method is suggested to be more evident for systems described by many-body interactions. PMID:27544094
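
    For readers unfamiliar with hybrid Monte Carlo, a generic single update is sketched below; the fourth-order propagator and the two-level scheme of the paper are not reproduced, and all names are illustrative.

```python
import numpy as np

def hmc_step(q, log_pi, grad_log_pi, eps, n_leap, rng):
    """One HMC update: draw a momentum, run leapfrog, accept or reject."""
    p = rng.standard_normal(q.shape)
    q_new, p_new = q.copy(), p.copy()
    # Leapfrog integration of the fictitious Hamiltonian dynamics.
    p_new += 0.5 * eps * grad_log_pi(q_new)       # initial half kick
    for i in range(n_leap):
        q_new += eps * p_new                      # drift
        if i < n_leap - 1:
            p_new += eps * grad_log_pi(q_new)     # full kick
    p_new += 0.5 * eps * grad_log_pi(q_new)       # final half kick
    # Metropolis test on the energy error of the whole trajectory.
    h_old = -log_pi(q) + 0.5 * float(p @ p)
    h_new = -log_pi(q_new) + 0.5 * float(p_new @ p_new)
    return q_new if np.log(rng.uniform()) < h_old - h_new else q
```

    Using a cheaper surrogate force inside the trajectory (loosely, the role of the two-level description) only alters the proposal; the accept/reject step keeps the sampled distribution exact.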

  4. Mercedes–Benz water molecules near hydrophobic wall: Integral equation theories vs Monte Carlo simulations

    PubMed Central

    Urbic, T.; Holovko, M. F.

    2011-01-01

    An associative version of the Henderson-Abraham-Barker theory is applied to the study of the Mercedes–Benz model of water near a hydrophobic surface. We calculated density profiles and adsorption coefficients using the Percus-Yevick and soft mean spherical associative approximations. The results are compared with Monte Carlo simulation data. It is shown that at higher temperatures both approximations satisfactorily reproduce the simulation data. For lower temperatures, the soft mean spherical approximation gives good agreement at low and at high densities, while at mid-range densities the prediction is only qualitative. The formation of a depletion layer between water and the hydrophobic surface was also demonstrated and studied. PMID:21992334

  5. Mercedes-Benz water molecules near hydrophobic wall: Integral equation theories vs Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Urbic, T.; Holovko, M. F.

    2011-10-01

    An associative version of the Henderson-Abraham-Barker theory is applied to the study of the Mercedes-Benz model of water near a hydrophobic surface. We calculated density profiles and adsorption coefficients using the Percus-Yevick and soft mean spherical associative approximations. The results are compared with Monte Carlo simulation data. It is shown that at higher temperatures both approximations satisfactorily reproduce the simulation data. For lower temperatures, the soft mean spherical approximation gives good agreement at low and at high densities, while at mid-range densities the prediction is only qualitative. The formation of a depletion layer between water and the hydrophobic surface was also demonstrated and studied.

  6. Mercedes-Benz water molecules near hydrophobic wall: integral equation theories vs Monte Carlo simulations.

    PubMed

    Urbic, T; Holovko, M F

    2011-10-01

    An associative version of the Henderson-Abraham-Barker theory is applied to the study of the Mercedes-Benz model of water near a hydrophobic surface. We calculated density profiles and adsorption coefficients using the Percus-Yevick and soft mean spherical associative approximations. The results are compared with Monte Carlo simulation data. It is shown that at higher temperatures both approximations satisfactorily reproduce the simulation data. For lower temperatures, the soft mean spherical approximation gives good agreement at low and at high densities, while at mid-range densities the prediction is only qualitative. The formation of a depletion layer between water and the hydrophobic surface was also demonstrated and studied. PMID:21992334

  7. Computational investigations of low-discrepancy point sets

    NASA Technical Reports Server (NTRS)

    Warnock, T. T.

    1971-01-01

    The quasi-Monte Carlo method of integration offers an attractive solution to the problem of evaluating integrals in a large number of dimensions; however, the associated error bounds are difficult to obtain theoretically. Since these bounds are associated with the L2 discrepancy of the set of points used in the integration, numerical calculations of the L2 discrepancy for several types of quasi-Monte Carlo formulae are presented.
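
    The L2 star discrepancy referred to above has a closed form obtained by expanding its defining integral (Warnock's formula), so it can be evaluated directly in O(N^2 d) operations; a straightforward NumPy version:

```python
import numpy as np

def l2_star_discrepancy(x):
    """Warnock's closed form for the L2 star discrepancy of an (N, d)
    array of points in the unit cube."""
    n, d = x.shape
    term1 = 3.0 ** (-d)
    term2 = (2.0 ** (1 - d) / n) * np.sum(np.prod(1.0 - x ** 2, axis=1))
    # Pairwise products over coordinates of (1 - max(x_ik, x_jk)).
    pair = np.prod(1.0 - np.maximum(x[:, None, :], x[None, :, :]), axis=2)
    term3 = np.sum(pair) / n ** 2
    return float(np.sqrt(term1 - term2 + term3))

rng = np.random.default_rng(1)
print(l2_star_discrepancy(rng.random((256, 3))))  # pseudo-random baseline
```

    Low-discrepancy sets (Halton, Sobol', and the like) give markedly smaller values than the pseudo-random baseline at the same N, which is the comparison the report carries out.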

  8. Quantum effects in a free-standing graphene lattice: Path-integral against classical Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Brito, B. G. A.; Cândido, Ladir; Hai, G.-Q.; Peeters, F. M.

    2015-11-01

    In order to study quantum effects in a two-dimensional crystal lattice of a free-standing monolayer graphene, we have performed both path-integral Monte Carlo (PIMC) and classical Monte Carlo (MC) simulations for temperatures up to 2000 K. The REBO potential is used for the interatomic interaction. The total energy, interatomic distance, root-mean-square displacement of the atom vibrations, and the free energy of the graphene layer are calculated. The obtained lattice vibrational energy per atom from the classical MC simulation is very close to the energy of a three-dimensional harmonic oscillator, 3kBT. The PIMC simulation shows that quantum effects due to zero-point vibrations are significant for temperatures T < 1000 K. The quantum contribution to the lattice vibrational energy becomes larger than that of the classical lattice for T < 400 K. The lattice expansion due to the zero-point motion causes an increase of 0.53% in the lattice parameter. A minimum in the lattice parameter appears at T ≃ 500 K. Quantum effects on the atomic vibration amplitude of the graphene lattice and its free energy are investigated.
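
    For reference, the classical value quoted above is the equipartition limit of the standard result for a three-dimensional harmonic mode of frequency \omega (a textbook expression, not taken from the paper):

    E(T) = \frac{3}{2}\,\hbar\omega\,\coth\!\left(\frac{\hbar\omega}{2 k_B T}\right) \;\longrightarrow\; 3 k_B T \quad (k_B T \gg \hbar\omega),

    so the 3kBT benchmark is recovered only at high temperature, while the zero-point energy 3\hbar\omega/2 dominates once k_B T falls below \hbar\omega, consistent with the quantum effects the simulations report below 1000 K.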

  9. Mercury + VisIt: Integration of a Real-Time Graphical Analysis Capability into a Monte Carlo Transport Code

    SciTech Connect

    O'Brien, M J; Procassini, R J; Joy, K I

    2009-03-09

    Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the results as the simulation is running.

  10. Path-Integral Monte Carlo Study on a Droplet of a Dipolar Bose–Einstein Condensate Stabilized by Quantum Fluctuation

    NASA Astrophysics Data System (ADS)

    Saito, Hiroki

    2016-05-01

    Motivated by recent experiments [H. Kadau et al., Nature (London) 530, 194 (2016); I. Ferrier-Barbut et al., arXiv:1601.03318] and a theoretical prediction [F. Wächtler and L. Santos, arXiv:1601.04501], the ground state of a dysprosium Bose-Einstein condensate with strong dipole-dipole interaction is studied by the path-integral Monte Carlo method. It is shown that quantum fluctuations can stabilize the condensate against dipolar collapse.

  11. Development of Path Integral Monte Carlo Simulations with Localized Nodal Surfaces for Second-Row Elements.

    PubMed

    Militzer, Burkhard; Driver, Kevin P

    2015-10-23

    We extend the applicability range of fermionic path integral Monte Carlo simulations to heavier elements and lower temperatures by introducing various localized nodal surfaces. Hartree-Fock nodes yield the most accurate prediction for pressure and internal energy, which we combine with the results from density functional molecular dynamics simulations to obtain a consistent equation of state for hot, dense silicon under plasma conditions and in the regime of warm dense matter (2.3-18.6 g cm^-3, 5.0×10^5-1.3×10^8 K). The shock Hugoniot curve is derived and the structure of the fluid is characterized with various pair correlation functions. PMID:26551129

  12. Integration of SimSET photon history generator in GATE for efficient Monte Carlo simulations of pinhole SPECT

    PubMed Central

    Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.

    2008-01-01

    The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: the simulation system for emission tomography (SimSET) and the GEANT4 application for emission tomography (GATE). Specifically, we retained the strength of GATE in describing complex collimator/detector configurations to meet the anticipated needs for studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE in simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations, and, more importantly, a significant computational speedup (up to ~10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data was also observed. In conclusion, the authors have successfully integrated the SimSET photon history generator in GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques. PMID:18697552

  13. Monte Carlo ray-tracing simulations of luminescent solar concentrators for building integrated photovoltaics

    NASA Astrophysics Data System (ADS)

    Leow, Shin Woei; Corrado, Carley; Osborn, Melissa; Carter, Sue A.

    2013-09-01

    Luminescent solar concentrators (LSCs) have the ability to receive light from a wide range of angles, concentrating the captured light onto small photoactive areas. This enables greater incorporation of LSCs into building designs as windows, skylights, and wall claddings, in addition to rooftop installations of current solar panels. Using relatively cheap luminescent dyes and acrylic waveguides to effect light concentration onto smaller photovoltaic (PV) cells, there is potential for this technology to approach grid price parity. We employ a panel design in which the front-facing PV cells collect both direct and concentrated light, ensuring a gain factor greater than one. This also allows for flexibility in determining the placement and percentage coverage of PV cells during the design process, to balance reabsorption losses against the power output and level of light concentration desired. To aid in design optimization, a Monte-Carlo ray tracing program was developed to study the transport of photons and loss mechanisms in LSC panels. The program imports measured absorption/emission spectra and transmission coefficients as simulation parameters, with interactions of photons in the panel determined by comparing calculated probabilities with random number generators. LSC panels with multiple dyes or layers can also be simulated. Analysis of the results reveals optimal panel dimensions and PV cell layouts for maximum power output for a given dye concentration, absorption/emission spectrum, and quantum efficiency.
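
    The decision kernel of such a ray tracer is exactly this comparison of tabulated probabilities with uniform random numbers to pick each photon's fate. A deliberately crude sketch (the probabilities are invented; a real code tracks wavelength, position, and direction):

```python
import numpy as np

rng = np.random.default_rng(42)

def trace_photon(abs_prob, quantum_yield, trap_prob, max_events=50):
    """Follow one photon through absorption/re-emission cycles and
    return its fate: 'collected', 'escaped', or 'nonradiative'."""
    for _ in range(max_events):
        if rng.uniform() >= abs_prob:       # passes through without absorption
            return "escaped"
        if rng.uniform() >= quantum_yield:  # absorbed but not re-emitted
            return "nonradiative"
        # Re-emitted isotropically; total internal reflection guides the
        # photon toward a PV edge with probability trap_prob.
        if rng.uniform() < trap_prob:
            return "collected"
        # Otherwise it stays in the slab and may be reabsorbed.
    return "nonradiative"

# trap_prob ~ 0.75 is roughly the trapped fraction for an n ~ 1.5 waveguide.
fates = [trace_photon(0.9, 0.95, 0.75) for _ in range(10_000)]
print({f: fates.count(f) for f in set(fates)})
```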

  14. ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    SciTech Connect

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2008-04-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  15. Finding linear dependencies in integration-by-parts equations: A Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Kant, Philipp

    2014-05-01

    The reduction of a large number of scalar integrals to a small set of master integrals via Laporta’s algorithm is common practice in multi-loop calculations. It is also a major bottleneck in terms of running time and memory consumption. It involves solving a large set of linear equations where many of the equations are linearly dependent. We propose a simple algorithm that eliminates all linearly dependent equations from a given system, reducing the time and space requirements of a subsequent run of Laporta’s algorithm.
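
    The idea can be sketched simply: substituting random values modulo a machine-sized prime for all symbolic parameters turns each equation into a numeric row, and Gaussian elimination over that finite field flags any row that reduces to zero as linearly dependent with probability close to one. A toy Python version on an invented three-row system (real IBP systems are far larger and sparse):

```python
P = 2_147_483_647  # prime modulus; all arithmetic is exact mod P

def independent_rows(rows):
    """Indices of a maximal linearly independent subset of rows (mod P)."""
    pivots, kept = {}, []              # pivot column -> reduced row
    for idx, row in enumerate(rows):
        r = [c % P for c in row]
        for col, prow in pivots.items():
            if r[col]:                 # eliminate this pivot column
                factor = r[col] * pow(prow[col], -1, P) % P
                r = [(a - factor * b) % P for a, b in zip(r, prow)]
        lead = next((c for c, a in enumerate(r) if a), None)
        if lead is not None:           # row survived elimination: independent
            pivots[lead] = r
            kept.append(idx)
    return kept

# Row 2 equals row 0 plus row 1, so it is detected as dependent.
print(independent_rows([[1, 2, 3], [0, 1, 4], [1, 3, 7]]))  # -> [0, 1]
```

    Only the surviving equations then need to be fed to the expensive symbolic run of Laporta's algorithm.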

  16. Path Integral Quantum Monte Carlo Study of Coupling and Proximity Effects in Superfluid Helium-4

    NASA Astrophysics Data System (ADS)

    Graves, Max T.

    When bulk helium-4 is cooled below T = 2.18 K, it undergoes a phase transition to a superfluid, which is characterized by a complex wave function with a macroscopic phase and exhibits inviscid, quantized flow. The macroscopic phase coherence can be probed in a container filled with helium-4 by reducing one or more of its dimensions until they are smaller than the coherence length, the spatial distance over which order propagates. As this dimensional reduction occurs, enhanced thermal and quantum fluctuations push the transition to the superfluid state to lower temperatures. However, this trend can be countered via the proximity effect, where a bulk three-dimensional (3d) superfluid is coupled to a low-dimensional (2d) superfluid via a weak link, producing superfluid correlations in the film at temperatures above the Kosterlitz-Thouless temperature. Recent experiments probing the coupling between 3d and 2d superfluid helium-4 have uncovered an anomalously large proximity effect, leading to an enhanced superfluid density that cannot be explained using the correlation length alone. In this work, we have determined the origin of this enhanced proximity effect via large-scale quantum Monte Carlo simulations of helium-4 in a topologically non-trivial geometry that incorporates the important aspects of the experiments. We find that due to the bosonic symmetry of helium-4, identical-particle permutations lead to correlations between contiguous spatial regions at a length scale greater than the coherence length. We show that quantum exchange plays a large role in explaining the anomalous experimental results while simultaneously showing how classical arguments fall short of this task.

  17. Extraction of diffuse correlation spectroscopy flow index by integration of Nth-order linear model with Monte Carlo simulation

    SciTech Connect

    Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal

    2014-05-12

    The conventional semi-infinite solution for extracting a blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restriction. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noise to the DCS data resulted in αD_B variations, the mean values of the errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.

  18. Quantum partition functions of composite particles in a hydrogen-helium plasma via path integral Monte Carlo

    SciTech Connect

    Wendland, D.; Ballenegger, V.; Alastuey, A.

    2014-11-14

    We compute two- and three-body cluster functions that describe the contributions of composite entities, like hydrogen atoms, ions H^-, H_2^+, and helium atoms, as well as charge-charge and atom-charge interactions, to the equation of state of a hydrogen-helium mixture at low density. A cluster function has the structure of a truncated virial coefficient and behaves, at low temperatures, like a usual partition function for the composite entity. Our path integral Monte Carlo calculations use importance sampling to sample the cluster partition functions efficiently even at low temperatures, where bound-state contributions dominate. We also employ a new and efficient adaptive discretization scheme that allows one not only to eliminate Coulomb divergencies in discretized path integrals, but also to direct the computational effort to where particles are close and thus strongly interacting. The numerical results for the two-body function agree with the analytically known quantum second virial coefficient. The three-body cluster functions are compared at low temperatures with familiar partition functions for composite entities.

  19. Monte Carlo Simulations of Luminescent Solar Concentrators with Front-Facing Photovoltaic Cells for Building Integrated Photovoltaics

    NASA Astrophysics Data System (ADS)

    Leow, Shin; Corrado, Carley; Osborn, Melissa; Carter, Sue

    2013-03-01

    Luminescent solar concentrators (LSCs) have the ability to receive light from a wide range of angles and concentrate the captured light onto small photoactive areas. This enables LSCs to be integrated more extensively into buildings, as windows and wall claddings, in addition to rooftop installations. LSCs with front-facing PV cells collect both direct and concentrated light, ensuring a gain factor greater than one. This also allows for flexibility in determining the placement and percentage coverage of PV cells when designing panels, to balance reabsorption losses, power output, and the level of concentration desired. A Monte-Carlo ray tracing program was developed to study the transport of photons and loss mechanisms in LSC panels and to aid in design optimization. The program imports measured absorption/emission spectra and transmission coefficients as simulation parameters. Interactions of photons with the LSC panel are determined by comparing calculated probabilities with random number generators. Simulation results reveal optimal panel dimensions and PV cell layouts to achieve maximum power output.

  20. Characterization and Monte Carlo simulation of single ion Geiger mode avalanche diodes integrated with a quantum dot nanostructure

    NASA Astrophysics Data System (ADS)

    Sharma, Peter; Abraham, J. B. S.; Ten Eyck, G.; Childs, K. D.; Bielejec, E.; Carroll, M. S.

    Detection of single ion implantation within a nanostructure is necessary for the high-yield fabrication of implanted donor-based quantum computing architectures. Single ion Geiger mode avalanche (SIGMA) diodes with a laterally integrated nanostructure capable of forming a quantum dot were fabricated and characterized using photon pulses. The detection efficiency of this design was measured as a function of wavelength, lateral position, and varying delay times between the photon pulse and the overbias detection window. Monte Carlo simulations based only on the random diffusion of photo-generated carriers and the geometrical placement of the avalanche region agree qualitatively with the device characterization. Based on these results, SIGMA detection efficiency appears to be determined solely by the diffusion of photo-generated electron-hole pairs into a buried avalanche region. Device performance is then highly dependent on the uniformity of the underlying silicon substrate and the proximity of photo-generated carriers to the silicon-silicon dioxide interface, which are the most important limiting factors for reaching the single ion detection limit with SIGMA detectors. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  1. Permutation blocking path integral Monte Carlo: a highly efficient approach to the simulation of strongly degenerate non-ideal fermions

    NASA Astrophysics Data System (ADS)

    Dornheim, Tobias; Groth, Simon; Filinov, Alexey; Bonitz, Michael

    2015-07-01

    Correlated fermions are of high interest in condensed matter (Fermi liquids, Wigner molecules), cold atomic gases, and dense plasmas. Here we propose a novel approach to path integral Monte Carlo (PIMC) simulations of strongly degenerate non-ideal fermions at finite temperature by combining a fourth-order factorization of the density matrix with antisymmetric propagators, i.e., determinants, between all imaginary time slices. To efficiently run through the modified configuration space, we introduce a modification of the widely used continuous-space worm algorithm, which allows for efficient sampling at arbitrary system parameters. We demonstrate how the application of determinants achieves an effective blocking of permutations with opposite signs, leading to significant relief of the fermion sign problem. To benchmark the capability of our method regarding the simulation of degenerate fermions, we consider multiple electrons in a quantum dot and compare our results with other ab initio techniques, where they are available. The present permutation blocking PIMC approach allows us to obtain accurate results even for N = 20 electrons at low temperature and arbitrary coupling, where no other ab initio results have been reported so far.

  2. Estimation of optical properties of neuroendocrine pancreas tumor with double-integrating-sphere system and inverse Monte Carlo model.

    PubMed

    Saccomandi, Paola; Larocca, Enza Stefania; Rendina, Veneranda; Schena, Emiliano; D'Ambrosio, Roberto; Crescenzi, Anna; Di Matteo, Francesco Maria; Silvestri, Sergio

    2016-08-01

    The investigation of laser-tissue interaction is crucial for diagnostics and therapeutics. In particular, the estimation of tissue optical properties allows the development of predictive models for defining organ-specific treatment planning tools. With regard to laser ablation (LA), optical properties are among the main factors responsible for therapy efficacy, as they globally affect the heating process of the tissue through its capability to absorb and scatter laser energy. The recent introduction of LA for pancreatic tumor treatment in clinical studies has fostered the need to assess the laser-pancreas interaction and hence to find its optical properties at the wavelength of interest. This work aims at estimating the optical properties (i.e., the absorption coefficient μ_a, scattering coefficient μ_s, and anisotropy factor g) of neuroendocrine pancreas tumor at 1064 nm. Experiments were performed using two popular sample storage methods; the optical properties of frozen and paraffin-embedded neuroendocrine tumor of the pancreas are estimated by employing a double-integrating-sphere system and an inverse Monte Carlo algorithm. Results show that paraffin-embedded tissue is characterized by absorption and scattering coefficients significantly higher than those of frozen samples (μ_a of 56 cm^-1 vs 0.9 cm^-1, μ_s of 539 cm^-1 vs 130 cm^-1, respectively). Simulations show that such different optical features strongly influence the pancreas temperature distribution during LA. This result may affect the prediction of the therapeutic outcome. Therefore, the choice of the appropriate preparation technique of samples for optical property estimation is crucial for the performance of the mathematical models which predict the LA thermal outcome on the tissue and guide the selection of optimal LA settings. PMID:27147075

  3. Path Integral Monte Carlo Simulations of Solid Molecular Hydrogen Surfaces and Thin HELIUM-4 Films on Molecular Hydrogen Substrates

    NASA Astrophysics Data System (ADS)

    Wagner, Marcus

    Based on Richard P. Feynman's formulation of quantum mechanics, path integral Monte Carlo is a computational ab-initio method to calculate finite-temperature equilibrium properties of quantum many-body systems. As input, only fundamental physical constants and pair potentials are required. We carry out the first ab-initio particle simulations of three related physical systems. First, the bare H_2 substrate is simulated between 0.5 and 1.3 K, because a liquid H_2 film is a candidate for a new superfluid. We find evidence of quantum exchange in surface terraces for up to 1 K. Second, the melting of the H_2 surface between 3 and 15 K is examined, since this is the cleanest example of quantum surface melting. Third, atomically thin superfluid ^4He films on H_2 surfaces are simulated, calculating binding energies per ^4He atom and third sound, an important experimental probe for superfluid ^4He films. For all systems we compute density profiles perpendicular and parallel to the surface and compare to experiment. We treat both H_2 molecules and ^4He atoms on the same footing, as spherical particles. For simulations of bulk/vapor interfaces and surface adsorption, a realistic representation of the macroscopic surface is crucial. Therefore, we introduce an external potential to account for arbitrarily layered substrates and long-range corrections. Two algorithms for parallel computers with independent processors are introduced: one to manage concurrent simulations of entire phase diagrams, and one to improve input/output speed for files shared by all processors.

  4. Path integral Monte Carlo simulations of H2 adsorbed to lithium-doped benzene: A model for hydrogen storage materials.

    PubMed

    Lindoy, Lachlan P; Kolmann, Stephen J; D'Arcy, Jordan H; Crittenden, Deborah L; Jordan, Meredith J T

    2015-11-21

    Finite temperature quantum and anharmonic effects are studied in H2-Li(+)-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H2. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H2 molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔUads, and enthalpy, ΔHads, for H2 adsorption onto Li(+)-benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling; coupling between the intermolecular degrees of freedom becomes less important as temperature increases, whereas anharmonicity becomes more important. The most anharmonic motions in H2-Li(+)-benzene are the "helicopter" and "ferris wheel" H2 rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔUads and ΔHads are -13.3 ± 0.1 and -14.5 ± 0.1 kJ mol(-1), respectively. PMID:26590532

  5. New integrated Monte Carlo code for the simulation of high-resolution scanning electron microscopy images for metrology in microlithography

    NASA Astrophysics Data System (ADS)

    Ilgüsatiroglu, Emre; Illarionov, Alexey Yu.; Ciappa, Mauro; Pfäffli, Paul; Bomholt, Lars

    2014-04-01

    A new Monte Carlo code is presented that includes, among other features, the definition of arbitrary geometries with sub-nanometer resolution, high-performance parallel computing capabilities, trapped charge, electric field calculation, electron tracking in electrostatic fields, and calculation of 3D dose distributions. These functionalities are efficiently implemented thanks to the coupling of the Monte Carlo simulator with a TCAD environment. The applications shown are the synthesis of SEM linescans and images, focusing on the evaluation of the impact of proximity effects and self-charging on the quantitative extraction of critical dimensions in dense photoresist structures.

  6. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over two decades, the Monte Carlo technique has become a gold standard in the simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general-purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach to porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator reduces the computational time of MC simulations, with speed-ups comparable to those obtained with a GPU. We demonstrate the performance of the developed code by simulating light transport in the human head and determining the measurement volume in near-infrared spectroscopy brain sensing. PMID:26249663
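
    The computational core being ported is a weighted photon random walk, sketched below for a homogeneous slab. To stay short, the sketch scatters isotropically and omits refractive boundaries and Russian roulette; a tissue code such as the one described would sample the Henyey-Greenstein phase function instead. All parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    mu_a, mu_s, d = 0.1, 10.0, 1.0            # absorption, scattering (1/mm); slab (mm)
    mu_t, albedo = mu_a + mu_s, mu_s / (mu_a + mu_s)
    tally = {"reflected": 0.0, "transmitted": 0.0, "absorbed": 0.0}
    n_photons = 20000

    for _ in range(n_photons):
        pos, direc, w = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0
        while True:
            pos = pos + direc * (-np.log(rng.random()) / mu_t)   # sample free path
            if pos[2] < 0.0:
                tally["reflected"] += w; break
            if pos[2] > d:
                tally["transmitted"] += w; break
            tally["absorbed"] += w * (1.0 - albedo)   # deposit absorbed weight
            w *= albedo
            if w < 1e-4:                              # cheap cutoff (no roulette)
                break
            cos_t = 2.0 * rng.random() - 1.0          # isotropic scattering
            phi = 2.0 * np.pi * rng.random()
            sin_t = np.sqrt(1.0 - cos_t**2)
            direc = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

    for key, val in tally.items():
        print(key, val / n_photons)

    Because each photon history is independent, the loop parallelizes naturally across coprocessor cores or GPU threads, which is precisely what makes accelerators attractive here.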

  7. Assessment of radiation shield integrity of DD/DT fusion neutron generator facilities by Monte Carlo and experimental methods

    NASA Astrophysics Data System (ADS)

    Srinivasan, P.; Priya, S.; Patel, Tarun; Gopalakrishnan, R. K.; Sharma, D. N.

    2015-01-01

    DD/DT fusion neutron generators are used as sources of 2.5 MeV/14.1 MeV neutrons in experimental laboratories for various applications. Detailed knowledge of the radiation dose rates around the neutron generators is essential for ensuring radiological protection of the personnel involved with the operation. This work describes the experimental and Monte Carlo studies carried out in the Purnima Neutron Generator facility of the Bhabha Atomic Research Center (BARC), Mumbai. Verification and validation of the shielding adequacy were carried out by measuring the neutron and gamma dose rates at various locations inside and outside the neutron generator hall during different operational conditions, both for 2.5-MeV and 14.1-MeV neutrons, and comparing them with theoretical simulations. The calculated and experimental dose rates were found to agree within a maximum deviation of 20% at certain locations. This study has served to benchmark the Monte Carlo simulation methods adopted for the shield design of such facilities. It has also helped in augmenting the existing shield thickness to reduce the neutron and associated gamma dose rates for radiological protection of personnel during operation of the generators at higher source neutron yields, up to 1 × 10^10 n/s.

  8. A functional–structural kiwifruit vine model integrating architecture, carbon dynamics and effects of the environment

    PubMed Central

    Cieslak, Mikolaj; Seleznyova, Alla N.; Hanan, Jim

    2011-01-01

    Background and Aims Functional–structural modelling can be used to increase our understanding of how different aspects of plant structure and function interact, identify knowledge gaps and guide priorities for future experimentation. By integrating existing knowledge of the different aspects of the kiwifruit (Actinidia deliciosa) vine's architecture and physiology, our aim is to develop conceptual and mathematical hypotheses on several of the vine's features: (a) plasticity of the vine's architecture; (b) effects of organ position within the canopy on its size; (c) effects of environment and horticultural management on shoot growth, light distribution and organ size; and (d) role of carbon reserves in early shoot growth. Methods Using the L-system modelling platform, a functional–structural plant model of a kiwifruit vine was created that integrates architectural development, mechanistic modelling of carbon transport and allocation, and environmental and management effects on vine and fruit growth. The branching pattern was captured at the individual shoot level by modelling axillary shoot development using a discrete-time Markov chain. An existing carbon transport resistance model was extended to account for several source/sink components of individual plant elements. A quasi-Monte Carlo path-tracing algorithm was used to estimate the absorbed irradiance of each leaf. Key Results Several simulations were performed to illustrate the model's potential to reproduce the major features of the vine's behaviour. The model simulated vine growth responses that were qualitatively similar to those observed in experiments, including the plastic response of shoot growth to local carbon supply, the branching patterns of two Actinidia species, the effect of carbon limitation and topological distance on fruit size and the complex behaviour of sink competition for carbon. Conclusions The model is able to reproduce differences in vine and fruit growth arising from various

  9. Path-Integral Monte Carlo Determination of the Fourth-Order Virial Coefficient for a Unitary Two-Component Fermi Gas with Zero-Range Interactions

    NASA Astrophysics Data System (ADS)

    Yan, Yangqian; Blume, D.

    2016-06-01

    The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b4 of such a strongly interacting Fermi gas using a customized ab initio path-integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b4 , our b4 agrees within error bars with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly antisymmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions.

  10. Path-Integral Monte Carlo Determination of the Fourth-Order Virial Coefficient for a Unitary Two-Component Fermi Gas with Zero-Range Interactions.

    PubMed

    Yan, Yangqian; Blume, D

    2016-06-10

    The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b_{4} of such a strongly interacting Fermi gas using a customized ab initio path-integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b_{4}, our b_{4} agrees within error bars with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly antisymmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions. PMID:27341213

  11. Integration of a particle-particle-particle-mesh algorithm with the ensemble Monte Carlo method for the simulation of ultra-small semiconductor devices

    SciTech Connect

    Wordelman, C.J.; Ravaioli, U.

    2000-02-01

    A particle-particle-particle-mesh (P{sup 3}M) algorithm is integrated with the ensemble Monte Carlo (EMC) method for the treatment of carrier-impurity (c-i) and carrier-carrier (c-c) effects in semiconductor device simulation. Ionized impurities and charge carriers are treated granularly as opposed to the normal continuum methods and c-i and c-c interactions are calculated in three dimensions. The combined P{sup 3}M-EMC method follows the approach of Hockney, but is modified to treat nonuniform rectilinear meshes with arbitrary boundary conditions. Bulk mobility results are obtained for a three-dimensional (3-D) resistor and are compared with previously reported experimental and numerical results.

  12. Path integral Monte Carlo determination of the fourth-order virial coefficient for unitary two-component Fermi gas with zero-range interactions

    NASA Astrophysics Data System (ADS)

    Yan, Yangqian; Blume, D.

    2016-05-01

    The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b4 of such a strongly interacting Fermi gas using a customized ab initio path integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b4, our b4 agrees with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly antisymmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions. We gratefully acknowledge support by the NSF.

  13. Novel Monte Carlo approach quantifies data assemblage utility and reveals power of integrating molecular and clinical information for cancer prognosis

    PubMed Central

    Verleyen, Wim; Langdon, Simon P.; Faratian, Dana; Harrison, David J.; Smith, V. Anne

    2015-01-01

    Current clinical practice in cancer stratifies patients based on tumour histology to determine prognosis. Molecular profiling has been hailed as the path towards personalised care, but molecular data are still typically analysed independently of known clinical information. Conventional clinical and histopathological data, if used, are added only to improve a molecular prediction, placing a high burden upon molecular data to be informative in isolation. Here, we develop a novel Monte Carlo analysis to evaluate the usefulness of data assemblages. We applied our analysis to varying assemblages of clinical data and molecular data in an ovarian cancer dataset, evaluating their ability to discriminate one-year progression-free survival (PFS) and three-year overall survival (OS). We found that Cox proportional hazard regression models based on both data types together provided greater discriminative ability than either alone. In particular, we show that proteomics data assemblages that alone were uninformative (p = 0.245 for PFS, p = 0.526 for OS) became informative when combined with clinical information (p = 0.022 for PFS, p = 0.048 for OS). Thus, concurrent analysis of clinical and molecular data enables exploitation of prognosis-relevant information that may not be accessible from independent analysis of these data types. PMID:26503707

  14. ITS version 5.0: the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    SciTech Connect

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2004-06-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  15. Integration and evaluation of automated Monte Carlo simulations in the clinical practice of scanned proton and carbon ion beam therapy

    NASA Astrophysics Data System (ADS)

    Bauer, J.; Sommerer, F.; Mairani, A.; Unholtz, D.; Farook, R.; Handrack, J.; Frey, K.; Marcelos, T.; Tessonnier, T.; Ecker, S.; Ackermann, B.; Ellerbrock, M.; Debus, J.; Parodi, K.

    2014-08-01

    Monte Carlo (MC) simulations of beam interaction and transport in matter are increasingly considered as essential tools to support several aspects of radiation therapy. Despite the vast application of MC to photon therapy and scattered proton therapy, clinical experience in scanned ion beam therapy is still scarce. This is especially the case for ions heavier than protons, which pose additional issues like nuclear fragmentation and varying biological effectiveness. In this work, we present the evaluation of a dedicated framework which has been developed at the Heidelberg Ion Beam Therapy Center to provide automated FLUKA MC simulations of clinical patient treatments with scanned proton and carbon ion beams. Investigations on the number of transported primaries and the dimension of the geometry and scoring grids have been performed for a representative class of patient cases in order to provide recommendations on the simulation settings, showing that recommendations derived from the experience in proton therapy cannot be directly translated to the case of carbon ion beams. The MC results with the optimized settings have been compared to the calculations of the analytical treatment planning system (TPS), showing that regardless of the consistency of the two systems (in terms of beam model in water and range calculation in different materials) relevant differences can be found in dosimetric quantities and range, especially in the case of heterogeneous and deep seated treatment sites depending on the ion beam species and energies, homogeneity of the traversed tissue and size of the treated volume. The analysis of typical TPS speed-up approximations highlighted effects which deserve accurate treatment, in contrast to adequate beam model simplifications for scanned ion beam therapy. In terms of biological dose calculations, the investigation of the mixed field components in realistic anatomical situations confirmed the findings of previous groups so far reported only in

  16. An integrated Markov chain Monte Carlo algorithm for upscaling hydrological and geochemical parameters from column to field scale.

    PubMed

    Arora, Bhavna; Mohanty, Binayak P; McGuire, Jennifer T

    2015-04-15

    Predicting and controlling the concentrations of redox-sensitive elements are primary concerns for environmental remediation of contaminated sites. These predictions are complicated by dynamic flow processes as hydrologic variability is a governing control on conservative and reactive chemical concentrations. Subsurface heterogeneity in the form of layers and lenses further complicates the flow dynamics of the system impacting chemical concentrations including redox-sensitive elements. In response to these complexities, this study investigates the role of heterogeneity and hydrologic processes in an effective parameter upscaling scheme from the column to the landfill scale. We used a Markov chain Monte Carlo (MCMC) algorithm to derive upscaling coefficients for hydrological and geochemical parameters, which were tested for variations across heterogeneous systems (layers and lenses) and interaction of flow processes based on the output uncertainty of dominant biogeochemical concentrations at the Norman Landfill site, a closed municipal landfill with prevalent organic and trace metal contamination. The results from MCMC analysis indicated that geochemical upscaling coefficients based on effective concentration ratios incorporating local heterogeneity across layered and lensed systems produced better estimates of redox-sensitive biogeochemistry at the field scale. MCMC analysis also suggested that inclusion of hydrological parameters in the upscaling scheme reduced the output uncertainty of effective mean geochemical concentrations by orders of magnitude at the Norman Landfill site. This was further confirmed by posterior density plots of the scaling coefficients that revealed unimodal characteristics when only geochemical processes were involved, but produced multimodal distributions when hydrological parameters were included. The multimodality again suggests the effect of heterogeneity and lithologic variability on the distribution of redox-sensitive elements at the

  17. A Monte Carlo Approach to Modeling the Breakup of the Space Launch System EM-1 Core Stage with an Integrated Blast and Fragment Catalogue

    NASA Technical Reports Server (NTRS)

    Richardson, Erin; Hays, M. J.; Blackwood, J. M.; Skinner, T.

    2014-01-01

    The Liquid Propellant Fragment Overpressure Acceleration Model (L-FOAM) is a tool developed by Bangham Engineering Incorporated (BEi) that produces a representative debris cloud from an exploding liquid-propellant launch vehicle. Here it is applied to the Core Stage (CS) of the National Aeronautics and Space Administration (NASA) Space Launch System (SLS) launch vehicle. A combination of Probability Density Functions (PDF) based on empirical data from rocket accidents and applicable tests, as well as SLS-specific geometry, are combined in a MATLAB script to create a unique fragment catalogue each time L-FOAM is run, tailored for a Monte Carlo approach to risk analysis. By accelerating the debris catalogue with the BEi blast model for liquid hydrogen/liquid oxygen explosions, the result is a fully integrated code that models the destruction of the CS at a given point in its trajectory and generates hundreds of individual fragment catalogues with initial imparted velocities. The BEi blast model provides the blast size (radius) and strength (overpressure) as probabilities based on empirical data and anchored with analytical work. The coupling of the L-FOAM catalogue with the BEi blast model is validated with a simulation of the Project PYRO S-IV destruct test. When running a Monte Carlo simulation, L-FOAM can accelerate all catalogues with the same blast (mean blast, 2σ blast, etc.), or vary the blast size and strength based on their respective probabilities. L-FOAM then propagates these fragments until impact with the earth. Results from L-FOAM include a description of each fragment (dimensions, weight, ballistic coefficient, type, and initial location on the rocket), imparted velocity from the blast, and impact data depending on the user's desired application. L-FOAM applications cover both near-field (fragment impact on the escaping crew capsule) and far-field (fragment ground-impact footprint) safety considerations. The user is thus able to use statistics from a Monte Carlo

  18. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
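
    The scaling step lends itself to a compact sketch: once the zero-absorption curve and the per-layer path fractions are in hand, absorption enters as a weighted Beer-Lambert factor. In the Python sketch below, all names, the constant path-fraction split, and the stand-in reflectance curve are illustrative assumptions; only the exponential weighting mirrors the method described above.

    import numpy as np

    c_tissue = 0.214   # speed of light in tissue, mm/ps (n ~ 1.4); illustrative

    def scale_reflectance(t, R0, mu_a, frac):
        """Weighted Beer-Lambert scaling of a zero-absorption reflectance curve.
        t: detection times (ps); R0: zero-absorption reflectance at those times;
        mu_a: per-layer absorption coefficients (1/mm), length L;
        frac: (len(t), L) array, frac[k, i] = fraction of the mean photon path
        spent in layer i at time t[k] (supplied by the closed-form average-path
        expression in the paper; here it is simply an input)."""
        path = c_tissue * t                       # mean total path length at each t
        eff_mu_a = frac @ np.asarray(mu_a)        # path-weighted absorption
        return R0 * np.exp(-eff_mu_a * path)

    # two-layer example: 70%/30% path split, constant in time for simplicity
    t = np.linspace(10.0, 1000.0, 100)
    R0 = t**-1.5 * np.exp(-50.0 / t)              # stand-in zero-absorption curve
    frac = np.tile([0.7, 0.3], (t.size, 1))
    R = scale_reflectance(t, R0, [0.002, 0.01], frac)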

  19. Path integral Monte Carlo simulations of H{sub 2} adsorbed to lithium-doped benzene: A model for hydrogen storage materials

    SciTech Connect

    Lindoy, Lachlan P.; Kolmann, Stephen J.; D’Arcy, Jordan H.; Jordan, Meredith J. T.; Crittenden, Deborah L.

    2015-11-21

    Finite temperature quantum and anharmonic effects are studied in H{sub 2}–Li{sup +}-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H{sub 2}. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H{sub 2} molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔU{sub ads}, and enthalpy, ΔH{sub ads}, for H{sub 2} adsorption onto Li{sup +}-benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling—coupling between the intermolecular degrees of freedom becomes less important as temperature increases whereas anharmonicity becomes more important. The most anharmonic motions in H{sub 2}–Li{sup +}-benzene are the “helicopter” and “ferris wheel” H{sub 2} rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔU{sub ads} and ΔH{sub ads} are −13.3 ± 0.1 and −14.5 ± 0.1 kJ mol{sup −1}, respectively.

  20. Monte Carlo Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  1. Modelling personal exposure to particulate air pollution: an assessment of time-integrated activity modelling, Monte Carlo simulation & artificial neural network approaches.

    PubMed

    McCreddin, A; Alam, M S; McNabola, A

    2015-01-01

    An experimental assessment of personal exposure to PM10 in 59 office workers was carried out in Dublin, Ireland. 255 samples of 24-h personal exposure were collected in real time over a 28 month period. A series of modelling techniques were subsequently assessed for their ability to predict 24-h personal exposure to PM10. Artificial neural network modelling, Monte Carlo simulation and time-activity based models were developed and compared. The results of the investigation showed that using the Monte Carlo technique to randomly select concentrations from statistical distributions of exposure concentrations in typical microenvironments encountered by office workers produced the most accurate results, based on 3 statistical measures of model performance. The Monte Carlo simulation technique was also shown to have the greatest potential utility over the other techniques, in terms of predicting personal exposure without the need for further monitoring data. Over the 28 month period only a very weak correlation was found between background air quality and personal exposure measurements, highlighting the need for accurate models of personal exposure in epidemiological studies. PMID:25260856
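
    The winning technique reduces to a few lines of Monte Carlo: draw each microenvironment's concentration from its fitted distribution and form the time-weighted 24-h average. In the sketch below, the lognormal parameters and the time budget are invented placeholders, not the Dublin measurements.

    import numpy as np

    rng = np.random.default_rng(7)
    # hypothetical lognormal PM10 distributions per microenvironment:
    # (geometric mean in ug/m3, geometric standard deviation)
    micro = {"home": (18.0, 1.9), "office": (25.0, 1.7),
             "commute": (60.0, 2.2), "outdoor": (30.0, 2.0)}
    hours = {"home": 14, "office": 8, "commute": 1, "outdoor": 1}  # 24-h budget

    n = 10000
    total = np.zeros(n)
    for env, (gm, gsd) in micro.items():
        conc = rng.lognormal(np.log(gm), np.log(gsd), n)  # sampled concentration
        total += conc * hours[env]
    exposure = total / 24.0                               # time-weighted average

    print(np.percentile(exposure, [5, 50, 95]))           # ug/m3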

  2. PREFACE: Polycrystal Modelling with Experimental Integration: A Symposium Honoring Carlos Tomé (San Diego, CA, USA, February 27-March 3 2011) Polycrystal Modelling with Experimental Integration: A Symposium Honoring Carlos Tomé (San Diego, CA, USA, February 27-March 3 2011)

    NASA Astrophysics Data System (ADS)

    Lebensohn, Ricardo A.

    2012-03-01

    This special issue contains selected contributions from invited speakers to the 'Polycrystal Modelling with Experimental Integration: A Symposium Honoring Carlos Tomé', held as part of the 2011 TMS Annual Meeting and Exhibition, that took place on February 27-March 3, 2011 in San Diego, CA, USA. This symposium honored the remarkable contributions of Dr Carlos N Tomé to the field of mechanical behavior of polycrystalline materials, on the occasion of his 60th birthday. Throughout his career, Dr Tomé has pioneered the theoretical and numerical development of models of polycrystal mechanical behavior, with emphasis on the role played by texture and microstructure on the anisotropic behavior of engineering materials. His many contributions have been critical in establishing a strong connection between models and experiments, and in bridging different scales in the pursuit of robust multiscale models with experimental integration. Among his achievements, the numerical codes that Dr Tomé and co-workers have developed are extensively used in the materials science and engineering community as predictive tools for parameter identification, interpretation of experiments, and multiscale calculations in academia, national laboratories and industry. The symposium brought together materials scientists and engineers to address current theoretical, computational and experimental issues related to microstructure-property relationships in polycrystalline materials deforming in different regimes, including the effects of single crystal anisotropy, texture and microstructure evolution. Synergetic studies, involving different crystal plasticity-based models, including multiscale implementations of the latter, and measurements of global and local textures, internal strains, dislocation structures, twinning, phase distribution, etc, were discussed in more than 90 presentations. The papers in this issue are representative of the different length-scales, materials, and experimental and

  3. Determination of Component Contents of Blend Oil Based on Characteristics Peak Value Integration.

    PubMed

    Xu, Jing; Hou, Pei-guo; Wang, Yu-tian; Pan, Zhao

    2016-01-01

    The edible blend oil market is currently in disarray, with confused concepts, arbitrary naming, poor-quality products and, above all, vague standards for the compositions and ratios of blend oils. The national standard has failed to appear after eight years, fundamentally because qualitative and quantitative methods for detecting the vegetable oils in blend oil are lacking. Edible blend oil is mixed from different vegetable oils in certain proportions, is nutritionally rich, and is eaten frequently in daily life. Each vegetable oil contains certain components, and mixing oils can make full use of their nutrients and balance the nutrient profile of the blend, which is conducive to health. Accurate determination of the content of each single vegetable oil in blend oil is therefore an effective way to monitor the blend oil market. The types of oil in a blend are known, so only their contents need to be determined accurately. Three-dimensional fluorescence spectra are used to determine the contents of blend oil. A new data-processing method is proposed that integrates characteristic peak values over a chosen characteristic area based on the quasi-Monte Carlo method, combined with a neural network method to solve the resulting nonlinear equations for the single vegetable oil contents. Peanut oil, soybean oil and sunflower oil were used as the research objects and blended into edible oil, with each single oil regarded as a whole rather than resolved into its components. Recovery rates for 10 formulations of edible blend oil were measured to verify the validity of the characteristic-peak-value integration method. The approach provides an effective, highly sensitive way to detect the component contents of complex mixtures, and improves the accuracy of recovery rates compared with the common method of solving linear equations for mixture contents. It can be used to test the kinds and contents of edible vegetable oils in blend oil for food quality detection.
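
    A rough sketch of the two ingredients named above follows. The quasi-Monte Carlo peak-area integration uses a Sobol' rule; the recovery of oil fractions assumes a linear mixing model and substitutes ordinary least squares for the paper's neural-network solution of the nonlinear equations. All names and bounds are illustrative.

    import numpy as np
    from scipy.stats import qmc

    def peak_integral(F, lo, hi, m=12):
        """Quasi-Monte Carlo estimate of the integral of a fluorescence
        surface F(ex, em) over the characteristic rectangle [lo, hi]."""
        pts = qmc.scale(qmc.Sobol(d=2, seed=0).random_base2(m), lo, hi)
        vol = float(np.prod(np.asarray(hi) - np.asarray(lo)))
        return vol * F(pts[:, 0], pts[:, 1]).mean()

    def unmix(A, b):
        """Given A[k, i] = integral of pure oil i over characteristic region k
        and b[k] = the same integral measured on the blend, recover the oil
        fractions under an assumed linear mixing model."""
        frac, *_ = np.linalg.lstsq(A, b, rcond=None)
        frac = np.clip(frac, 0.0, None)
        return frac / frac.sum()

    # e.g. area = peak_integral(F, lo=(350.0, 400.0), hi=(400.0, 480.0)),
    # where F interpolates the measured three-dimensional fluorescence spectrum.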

  4. Monte Carlo Example Programs

    Energy Science and Technology Software Center (ESTSC)

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo mathematical technique for calculating the ground-state energy of the hydrogen atom.
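
    In the same didactic spirit, a variational Monte Carlo estimate of the hydrogen ground-state energy fits in a few lines of Python (a generic textbook sketch, not the ESTSC FORTRAN source). With the trial wavefunction psi = exp(-alpha*r), the local energy is E_L = -alpha^2/2 + (alpha-1)/r in atomic units, and Metropolis sampling of |psi|^2 yields the variational energy E(alpha) = alpha^2/2 - alpha.

    import numpy as np

    rng = np.random.default_rng(3)
    alpha, step = 0.9, 0.6                 # trial exponent; Metropolis step (bohr)
    r_vec = np.array([1.0, 0.0, 0.0])
    energies = []

    def log_psi2(rv):                      # ln |psi|^2 for psi = exp(-alpha*r)
        return -2.0 * alpha * np.linalg.norm(rv)

    for i in range(200000):
        trial = r_vec + step * rng.uniform(-1.0, 1.0, 3)
        if np.log(rng.random()) < log_psi2(trial) - log_psi2(r_vec):
            r_vec = trial                  # Metropolis sampling of |psi|^2
        if i >= 5000:                      # discard burn-in
            r = np.linalg.norm(r_vec)
            energies.append(-0.5 * alpha**2 + (alpha - 1.0) / r)  # local energy

    print(np.mean(energies))  # ~ -0.495 hartree; the exact -0.5 requires alpha = 1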

  5. Integrated Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (f{sub NL}) in the recent CMB data

    SciTech Connect

    Kim, Jaiseung

    2011-04-01

    We have made a Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (f{sub NL}) using the WMAP bispectrum and power spectrum. In our analysis, we have simultaneously constrained f{sub NL} and the cosmological parameters so that the uncertainties of the cosmological parameters can properly propagate into the f{sub NL} estimation. Investigating the parameter likelihoods deduced from the MCMC samples, we find a slight deviation from a Gaussian shape, which makes a Fisher matrix estimation less accurate. Therefore, we have estimated the confidence interval of f{sub NL} by exploring the parameter likelihood without using the Fisher matrix. We find that the best-fit values of our analysis are in good agreement with other results, but the confidence interval is slightly different.
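
    The confidence-interval step is easy to sketch: run a Metropolis chain over the likelihood and read the interval directly from sample quantiles, with no Gaussian or Fisher-matrix assumption. The one-parameter toy likelihood below merely stands in for the full joint WMAP bispectrum plus power-spectrum likelihood over f_NL and the cosmological parameters.

    import numpy as np

    rng = np.random.default_rng(5)

    def log_like(theta):
        # toy skewed, non-Gaussian log-likelihood (illustrative stand-in)
        return -0.5 * theta**2 - 0.1 * theta**3 * np.exp(-theta**2)

    theta, chain = 0.0, []
    for _ in range(100000):
        prop = theta + 0.8 * rng.normal()                 # random-walk proposal
        if np.log(rng.random()) < log_like(prop) - log_like(theta):
            theta = prop                                  # Metropolis acceptance
        chain.append(theta)

    lo, hi = np.percentile(chain[2000:], [16, 84])        # 68% interval from samples
    print(lo, hi)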

  6. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.

  7. Shell model Monte Carlo methods

    SciTech Connect

    Koonin, S.E.; Dean, D.J.

    1996-10-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of {gamma}-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

  8. Efficient methods for including quantum effects in Monte Carlo calculations of large systems: Extension of the displaced points path integral method and other effective potential methods to calculate properties and distributions

    NASA Astrophysics Data System (ADS)

    Mielke, Steven L.; Dinpajooh, Mohammadhasan; Siepmann, J. Ilja; Truhlar, Donald G.

    2013-01-01

    We present a procedure to calculate ensemble averages, thermodynamic derivatives, and coordinate distributions by effective classical potential methods. In particular, we consider the displaced-points path integral (DPPI) method, which yields exact quantal partition functions and ensemble averages for a harmonic potential and approximate quantal ones for general potentials, and we discuss the implementation of the new procedure in two Monte Carlo simulation codes, one that uses uncorrelated samples to calculate absolute free energies, and another that employs Metropolis sampling to calculate relative free energies. The results of the new DPPI method are compared to those from accurate path integral calculations as well as to results of two other effective classical potential schemes for the case of an isolated water molecule. In addition to the partition function, we consider the heat capacity and expectation values of the energy, the potential energy, the bond angle, and the OH distance. We also consider coordinate distributions. The DPPI scheme performs best among the three effective potential schemes considered and achieves very good accuracy for all of the properties considered. A key advantage of the effective potential schemes is that they display much lower statistical sampling variances than those for accurate path integral calculations. The method presented here shows great promise for including quantum effects in calculations on large systems.

  9. Direct Comparisons among Fast Off-Lattice Monte Carlo Simulations, Integral Equation Theories, and Gaussian Fluctuation Theory for Disordered Symmetric Diblock Copolymers

    NASA Astrophysics Data System (ADS)

    Yang, Delian; Zong, Jing; Wang, Qiang

    2012-02-01

    Based on the same model system of symmetric diblock copolymers as discrete Gaussian chains with soft, finite-range repulsions as commonly used in dissipative-particle dynamics simulations, we directly compare, without any parameter-fitting, the thermodynamic and structural properties of the disordered phase obtained from fast off-lattice Monte Carlo (FOMC) simulations^1, reference interaction site model (RISM) and polymer reference interaction site model (PRISM) theories, and Gaussian fluctuation theory. The disordered phase ranges from homopolymer melts (i.e., where the Flory-Huggins parameter χ=0) all the way to the order-disorder transition point determined in FOMC simulations, and the compared quantities include the internal energy, entropy, Helmholtz free energy, excess pressure, constant-volume heat capacity, chain/block dimensions, and various structure factors and correlation functions in the system. Our comparisons unambiguously and quantitatively reveal the consequences of various theoretical approximations and the validity of these theories in describing the fluctuations/correlations in disordered diblock copolymers. [1] Q. Wang and Y. Yin, J. Chem. Phys., 130, 104903 (2009).

  10. ITS version 5.0: the integrated TIGER series of coupled electron/photon Monte Carlo transport codes with CAD geometry.

    SciTech Connect

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2005-09-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  11. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    SciTech Connect

    Densmore, Jeffery D; Thompson, Kelly G; Urbatsch, Todd J

    2010-11-17

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.

  12. Compressible generalized hybrid Monte Carlo

    NASA Astrophysics Data System (ADS)

    Fang, Youhan; Sanz-Serna, J. M.; Skeel, Robert D.

    2014-05-01

    One of the most demanding calculations is to generate random samples from a specified probability distribution (usually with an unknown normalizing prefactor) in a high-dimensional configuration space. One often has to resort to using a Markov chain Monte Carlo method, which converges only in the limit to the prescribed distribution. Such methods typically inch through configuration space step by step, with acceptance of a step based on a Metropolis(-Hastings) criterion. An acceptance rate of 100% is possible in principle by embedding configuration space in a higher dimensional phase space and using ordinary differential equations. In practice, numerical integrators must be used, lowering the acceptance rate. This is the essence of hybrid Monte Carlo methods. Presented is a general framework for constructing such methods under relaxed conditions: the only geometric property needed is (weakened) reversibility; volume preservation is not needed. The possibilities are illustrated by deriving a couple of explicit hybrid Monte Carlo methods, one based on barrier-lowering variable-metric dynamics and another based on isokinetic dynamics.
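
    For reference, the conventional starting point that this paper generalizes looks as follows: a separable Hamiltonian H(q, p) = U(q) + p.p/2 integrated with the volume-preserving leapfrog scheme, followed by a Metropolis test. The unit-Gaussian target and step parameters in this sketch are illustrative; the paper's barrier-lowering and isokinetic variants are not shown.

    import numpy as np

    rng = np.random.default_rng(11)

    def U(q):                                  # potential = -log(target density)
        return 0.5 * q @ q                     # standard normal target

    def grad_U(q):
        return q

    def hmc_step(q, eps=0.15, n_leap=20):
        p = rng.normal(size=q.shape)           # fresh Gaussian momenta
        h0 = U(q) + 0.5 * p @ p
        qn, pn = q.copy(), p - 0.5 * eps * grad_U(q)   # leapfrog: half kick
        for _ in range(n_leap):
            qn = qn + eps * pn                 # drift
            pn = pn - eps * grad_U(qn)         # kick
        pn = pn + 0.5 * eps * grad_U(qn)       # restore final half kick
        h1 = U(qn) + 0.5 * pn @ pn
        # Metropolis test removes the bias from the integrator's energy error
        return qn if np.log(rng.random()) < h0 - h1 else q

    q, samples = np.zeros(2), []
    for _ in range(5000):
        q = hmc_step(q)
        samples.append(q)
    samples = np.array(samples)
    print(samples.mean(axis=0), samples.var(axis=0))   # ~0 and ~1 per coordinate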

  13. MORSE Monte Carlo code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  14. Monte Carlo variance reduction

    NASA Technical Reports Server (NTRS)

    Byrn, N. R.

    1980-01-01

    Computer program incorporates technique that reduces variance of forward Monte Carlo method for given amount of computer time in determining radiation environment in complex organic and inorganic systems exposed to significant amounts of radiation.

  15. Susceptibility of clinical isolates of Pseudomonas aeruginosa in the Northern Kyushu district of Japan to carbapenem antibiotics, determined by an integrated concentration method: evaluation of the method based on Monte Carlo simulation.

    PubMed

    Nagasawa, Zenzo; Kusaba, Koji; Aoki, Yosuke

    2008-06-01

    In empirical antibacterial therapy, regional surveillance is expected to yield important information for the determination of the class and dosage regimen of antibacterial agents to be used when dealing with infections with organisms such as Pseudomonas aeruginosa, in which strains resistant to antibacterial agents have been increasing. The minimal inhibitory concentrations (MICs) of five carbapenem antibiotics against P. aeruginosa strains isolated in the Northern Kyushu district of Japan between 2005 and 2006 were measured, and 100 strains for which carbapenem MICs were < or =0.5-32 microg/ml were selected. In this study, MIC was measured by two methods, i.e., the common serial twofold dilution method and an integrated concentration method, in which the concentration was changed, in increments of 2 microg/ml, from 2 to 16 microg/ml. The MIC(50)/MIC(90) values for imipenem, meropenem, biapenem, doripenem, and panipenem, respectively, with the former method were 8/16, 4/16, 4/16, 2/8, and 16/16 microg/ml; and the values were 6/10, 4/12, 4/10, 2/6, and 10/16 microg/ml with the latter method. The MIC data obtained with both methods were subjected to pharmacokinetic/pharmacodynamic (PK/PD) analysis with Monte Carlo simulation to calculate the probability of achieving the target of time above MIC (T>MIC) with each carbapenem. The probability of achieving 25% time above the MIC (T>MIC; % of T>MIC for dosing intervals) and 40% T>MIC against P. aeruginosa with any dosage regimen was higher with doripenem than with any other carbapenem tested. When the two sets of MIC data were subjected to PK/PD analysis, the difference between the two methods in the probability of achieving each % T>MIC was small, thus endorsing the validity of the serial twofold dilution method. PMID:18574662
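
    The PK/PD Monte Carlo step can be sketched as follows: sample patient pharmacokinetic parameters and MICs, compute the steady-state concentration profile over one dosing interval under an assumed one-compartment IV-bolus model, and score the fraction of simulated patients attaining each %T>MIC target. Every number below (dose, clearance, volume, MIC distribution) is an invented placeholder, not the paper's data.

    import numpy as np

    rng = np.random.default_rng(42)
    n, dose, tau_h = 20000, 500.0, 8.0            # patients; mg per dose; interval (h)
    CL = rng.lognormal(np.log(10.0), 0.3, n)      # clearance (L/h), between-patient
    V = rng.lognormal(np.log(20.0), 0.2, n)       # volume of distribution (L)
    mic = rng.choice([2.0, 4.0, 8.0, 16.0], n, p=[0.3, 0.3, 0.25, 0.15])

    t = np.linspace(0.0, tau_h, 200)
    k = CL / V
    # steady-state concentration over one dosing interval, repeated IV bolus
    C = (dose / V)[:, None] * np.exp(-k[:, None] * t) / (1.0 - np.exp(-k * tau_h))[:, None]
    fT = (C > mic[:, None]).mean(axis=1)          # fraction of interval above MIC

    print("P(25% T>MIC):", (fT >= 0.25).mean())
    print("P(40% T>MIC):", (fT >= 0.40).mean())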

  16. Quasi-Random Sequence Generators.

    Energy Science and Technology Software Center (ESTSC)

    1994-03-01

    Version 00 of LPTAU generates quasi-random sequences. The sequences are uniformly distributed sets of L=2**30 points in the N-dimensional unit cube I**N = [0,1]**N. The sequences are used as nodes for multidimensional integration, as searching points in global optimization, as trial points in multicriteria decision making, and as quasi-random points for quasi-Monte Carlo algorithms.
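
    The LP-tau construction belongs to the Sobol' family, so a modern stand-in for generating the same kind of integration nodes is scipy's Sobol' generator (assumed here; the ESTSC FORTRAN itself is not shown).

    import numpy as np
    from scipy.stats import qmc

    sampler = qmc.Sobol(d=5, scramble=False)     # LP-tau / Sobol' construction
    nodes = sampler.random_base2(m=14)           # 2**14 quasi-random nodes in [0,1)^5

    f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)   # smooth test integrand
    print(f(nodes).mean())                       # QMC estimate; exact integral is 1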

  17. Monte Carlo Event Generators

    NASA Astrophysics Data System (ADS)

    Dytman, Steven

    2011-10-01

    Every neutrino experiment requires a Monte Carlo event generator for various purposes. Historically, each series of experiments developed its own code, tuned to its needs. Modern experiments would benefit from a universal code (e.g., PYTHIA) which would allow more direct comparison between experiments. GENIE attempts to be that code. This paper compares the most commonly used codes and provides some details of GENIE.

  18. Improved geometry representations for Monte Carlo radiation transport.

    SciTech Connect

    Martin, Matthew Ryan

    2004-08-01

    ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.

  19. Monte Carlo portal dosimetry

    SciTech Connect

    Chin, P.W. (E-mail: mary.chin@physics.org)

    2005-10-15

    This project developed a solution for verifying external photon beam radiotherapy. The solution is based on a calibration chain for deriving portal dose maps from acquired portal images, and a calculation framework for predicting portal dose maps. Quantitative comparison between acquired and predicted portal dose maps accomplishes both geometric (patient positioning with respect to the beam) and dosimetric (two-dimensional fluence distribution of the beam) verifications. A disagreement would indicate that beam delivery had not been according to plan. The solution addresses the clinical need for verifying radiotherapy both pretreatment (without the patient in the beam) and on treatment (with the patient in the beam). Medical linear accelerators mounted with electronic portal imaging devices (EPIDs) were used to acquire portal images. Two types of EPIDs were investigated: the amorphous silicon (a-Si) and the scanning liquid ion chamber (SLIC). The EGSnrc family of Monte Carlo codes were used to predict portal dose maps by computer simulation of radiation transport in the beam-phantom-EPID configuration. Monte Carlo simulations have been implemented on several levels of high throughput computing (HTC), including the grid, to reduce computation time. The solution has been tested across the entire clinical range of gantry angle, beam size (5 cm × 5 cm to 20 cm × 20 cm), and beam-patient and patient-EPID separations (4 to 38 cm). In these tests of known beam-phantom-EPID configurations, agreement between acquired and predicted portal dose profiles was consistently within 2% of the central axis value. This Monte Carlo portal dosimetry solution therefore achieved combined versatility, accuracy, and speed not readily achievable by other techniques.

  20. Quantum Monte Carlo calculations for carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Luu, Thomas; Lähde, Timo A.

    2016-04-01

    We show how lattice quantum Monte Carlo can be applied to the electronic properties of carbon nanotubes in the presence of strong electron-electron correlations. We employ the path-integral formalism and use methods developed within the lattice QCD community for our numerical work. Our lattice Hamiltonian is closely related to the hexagonal Hubbard model augmented by a long-range electron-electron interaction. We apply our method to the single-quasiparticle spectrum of the (3,3) armchair nanotube configuration, and consider the effects of strong electron-electron correlations. Our approach is equally applicable to other nanotubes, as well as to other carbon nanostructures. We benchmark our Monte Carlo calculations against the two- and four-site Hubbard models, where a direct numerical solution is feasible.

  1. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility of Monte Carlo neutron transport using OpenCL, with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  2. Quantum Monte Carlo simulations of tunneling in quantum adiabatic optimization

    NASA Astrophysics Data System (ADS)

    Brady, Lucas T.; van Dam, Wim

    2016-03-01

    We explore to what extent path-integral quantum Monte Carlo methods can efficiently simulate quantum adiabatic optimization algorithms during a quantum tunneling process. Specifically we look at symmetric cost functions defined over n bits with a single potential barrier that a successful quantum adiabatic optimization algorithm will have to tunnel through. The height and width of this barrier depend on n , and by tuning these dependencies, we can make the optimization algorithm succeed or fail in polynomial time. In this article we compare the strength of quantum adiabatic tunneling with that of path-integral quantum Monte Carlo methods. We find numerical evidence that quantum Monte Carlo algorithms will succeed in the same regimes where quantum adiabatic optimization succeeds.

  3. Parallelizing Monte Carlo with PMC

    SciTech Connect

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.

  4. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551

  5. Quantum Monte Carlo simulation with a black hole

    NASA Astrophysics Data System (ADS)

    Benić, Sanjin; Yamamoto, Arata

    2016-05-01

    We perform quantum Monte Carlo simulations in the background of a classical black hole. The lattice discretized path integral is numerically calculated in the Schwarzschild metric and in its approximated metric. We study spontaneous symmetry breaking of a real scalar field theory. We observe inhomogeneous symmetry breaking induced by an inhomogeneous gravitational field.

  6. A separable shadow Hamiltonian hybrid Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Sweet, Christopher R.; Hampton, Scott S.; Skeel, Robert D.; Izaguirre, Jesús A.

    2009-11-01

    Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).

  7. Innovation Lecture Series - Carlos Dominguez

    NASA Video Gallery

    Carlos Dominguez is a Senior Vice President at Cisco Systems and a technology evangelist, speaking to and motivating audiences worldwide about how technology is changing how we communicate, collabo...

  8. Isotropic Monte Carlo Grain Growth

    Energy Science and Technology Software Center (ESTSC)

    2013-04-25

    IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.

  9. Carlos Chagas: biographical sketch.

    PubMed

    Moncayo, Alvaro

    2010-01-01

    Carlos Chagas was born on 9 July 1878 on the farm "Bon Retiro", located close to the city of Oliveira in the interior of the State of Minas Gerais, Brazil. He started his medical studies in 1897 at the School of Medicine of Rio de Janeiro. In the late nineteenth century, the works of Louis Pasteur and Robert Koch induced a change in the medical paradigm, with emphasis on experimental demonstrations of the causal link between microbes and disease. During the same years, the pathological concept of disease, linking organic lesions with symptoms, appeared in Germany. All these innovations were adopted by the reforms of the medical schools in Brazil and influenced the scientific formation of Chagas. Chagas completed his medical studies between 1897 and 1903, and his examinations during these years were always ranked with high grades. Oswaldo Cruz accepted Chagas as a doctoral candidate and directed his thesis on "Hematological studies of Malaria", which was received with honors by the examiners. In 1903 the director appointed Chagas as research assistant at the Institute. In those years, the Institute of Manguinhos, under the direction of Oswaldo Cruz, initiated a process of institutional growth and gathered a distinguished group of Brazilian and foreign scientists. In 1907, he was requested to investigate and control a malaria outbreak in Lassance, Minas Gerais. At that moment Chagas could not have imagined that this field research was the beginning of one of the most notable medical discoveries. Chagas was, at the age of 28, a research assistant at the Institute of Manguinhos, studying a new flagellate parasite isolated from triatomine insects captured in the State of Minas Gerais. Chagas made his discoveries in this order: first the causal agent, then the vector and finally the human cases. These notable discoveries were carried out by Chagas in twenty months. At the age of 33 Chagas had completed his discoveries and published the scientific articles that gave him world

  10. Angular biasing in implicit Monte-Carlo

    SciTech Connect

    Zimmerman, G.B.

    1994-10-20

    Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two-dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even smaller weight photons at the polar regions of the capsule where small mass zones are most sensitive to statistical noise.
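
    The weight bookkeeping behind any such angular biasing can be sketched generically: sample the direction cosine from a biased density and carry the ratio of analog to biased densities as a statistical weight. The linear biased density and its parameter b below are illustrative assumptions, not the scheme used in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def sample_biased_mu(b):
            # Biased pdf on mu in [-1, 1]: p_b(mu) = (1 + b*mu)/2 with
            # 0 < b < 1, favoring directions toward the capsule (mu -> +1).
            # Sampled by inverting the CDF; the analog pdf is uniform, 1/2.
            u = rng.random()
            mu = (np.sqrt(1.0 + 2.0 * b * (2.0 * u - 1.0) + b * b) - 1.0) / b
            weight = 0.5 / ((1.0 + b * mu) / 2.0)   # analog / biased
            return mu, weight

        # The weighted tally stays unbiased: the analog mean of mu is 0,
        # and so is the weighted mean under biasing.
        samples = [sample_biased_mu(0.8) for _ in range(100000)]
        print(np.mean([mu * w for mu, w in samples]))   # ~0.0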

  11. Modelling cerebral blood oxygenation using Monte Carlo XYZ-PA

    NASA Astrophysics Data System (ADS)

    Zam, Azhar; Jacques, Steven L.; Alexandrov, Sergey; Li, Youzhi; Leahy, Martin J.

    2013-02-01

    Continuous monitoring of cerebral blood oxygenation is critically important for the management of many life-threatening conditions. Non-invasive monitoring of cerebral blood oxygenation with a photoacoustic technique offers advantages over current invasive and non-invasive methods. We introduce Monte Carlo XYZ-PA to model the energy deposition in 3D and the time-resolved pressures and velocity potential based on the energy absorbed by the biological tissue. This paper outlines the benefits of using Monte Carlo XYZ-PA for optimization of photoacoustic measurement and imaging. To the best of our knowledge this is the first fully integrated tool for photoacoustic modelling.

  12. Application of biasing techniques to the contributon Monte Carlo method

    SciTech Connect

    Dubi, A.; Gerstl, S.A.W.

    1980-01-01

    Recently, a new Monte Carlo method called the contributon Monte Carlo method was developed. The method is based on the theory of contributons, and uses a new recipe for estimating target responses by a volume integral over the contributon current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfactory results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table.

  13. SPQR: a Monte Carlo reactor kinetics code. [LMFBR

    SciTech Connect

    Cramer, S.N.; Dodds, H.L.

    1980-02-01

    The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from those of deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations.

  14. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  15. Monte Carlo calculations of nuclei

    SciTech Connect

    Pieper, S.C.

    1997-10-01

    Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.

  16. Multilevel sequential Monte Carlo samplers

    DOE PAGES Beta

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
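
    A minimal sketch of the plain MLMC telescoping estimator (without the SMC layer this paper adds) may help fix ideas. The "discretised" quantity P_level below is a hypothetical stand-in for a PDE solve, chosen so its bias halves with each level.

        import numpy as np

        rng = np.random.default_rng(0)

        def P_level(x, level):
            # Hypothetical discretised payoff: approximates exp(x) with a
            # bias factor (1 - h) that shrinks as the level grows.
            h = 2.0 ** -level
            return np.exp(x) * (1.0 - h)

        def mlmc(L, n_per_level):
            # Telescoping identity: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
            # each correction term estimated from coupled samples.
            est = 0.0
            for level in range(L + 1):
                x = rng.standard_normal(n_per_level)
                if level == 0:
                    est += P_level(x, 0).mean()
                else:
                    est += (P_level(x, level) - P_level(x, level - 1)).mean()
            return est

        # Approaches E[exp(X)] = exp(0.5) ~ 1.649 as L grows.
        print(mlmc(L=6, n_per_level=20000))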

  17. Synchronous Parallel Kinetic Monte Carlo

    SciTech Connect

    Martínez, E; Marian, J; Kalos, M H

    2006-12-14

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood diffusion problems.
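
    The serial building block that such algorithms parallelize is the standard residence-time (BKL) kMC step: pick an event with probability proportional to its rate, then advance the clock by an exponential waiting time. A minimal sketch with illustrative rates follows.

        import numpy as np

        rng = np.random.default_rng(0)

        def kmc_step(rates, t):
            # Pick an event proportional to its rate, then advance time by
            # an exponential waiting time governed by the total rate.
            total = rates.sum()
            event = rng.choice(len(rates), p=rates / total)
            return event, t + rng.exponential(1.0 / total)

        rates = np.array([1.0, 2.5, 0.5])      # three competing events
        t, counts = 0.0, np.zeros(3)
        for _ in range(10000):
            event, t = kmc_step(rates, t)
            counts[event] += 1
        print(counts / counts.sum(), t)        # frequencies ~ rates/sum(rates)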

  18. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…

  19. Calculations of pair production by Monte Carlo methods

    SciTech Connect

    Bottcher, C.; Strayer, M.R.

    1991-01-01

    We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. Answering these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs.

  20. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    NASA Astrophysics Data System (ADS)

    Rubenstein, Brenda M.

    Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While
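
    The flavor of a Monte Carlo power method can be conveyed by a toy stochastic power iteration for a small nonnegative matrix. This sketch is not the thesis' algorithm, and the simple resampling used for population control is one choice among many.

        import numpy as np

        rng = np.random.default_rng(0)

        def mc_power_method(A, n_walkers=20000, n_steps=300):
            # Walkers hop between states; leaving column i costs a weight
            # factor c_i (the column sum), and the mean weight growth per
            # step converges to the dominant eigenvalue, as in power
            # iteration. A is assumed nonnegative here.
            n = A.shape[0]
            c = A.sum(axis=0)
            cum = np.cumsum(A / c, axis=0)          # column-wise CDFs
            states = rng.integers(0, n, n_walkers)
            log_growth = 0.0
            for step in range(n_steps):
                w = c[states]
                if step >= n_steps // 2:            # skip the transient
                    log_growth += np.log(w.mean())
                states = rng.choice(states, n_walkers, p=w / w.sum())
                u = rng.random(n_walkers)
                states = (u[None, :] < cum[:, states]).argmax(axis=0)
            return np.exp(log_growth / (n_steps - n_steps // 2))

        A = np.array([[2.0, 1.0], [1.0, 2.0]])      # eigenvalues 3 and 1
        print(mc_power_method(A))                   # ~3.0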

  1. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  2. The D0 Monte Carlo

    SciTech Connect

    Womersley, J. (Dept. of Physics)

    1992-10-01

    The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb^-1 data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.

  3. The use of the inverse Monte Carlo method in nuclear engineering

    SciTech Connect

    Dunn, W.L.

    1988-01-01

    The inverse Monte Carlo (IMC) method was introduced in 1981 in an attempt to apply Monte Carlo to the solution of inverse problems. It was argued that if direct Monte Carlo could be used to estimate expected values, which in the continuous case assume the form of definite integrals, then perhaps a variant could be used to solve inverse problems of the type that are posed as integral equations. The IMC method actually converts the inverse problem, through a noniterative simulation technique, into a system of algebraic equations that can be solved by standard analytical or numerical techniques. The principal merits of IMC are that, like direct Monte Carlo, the method can be applied to complex and multivariable problems, and variance reduction procedures can be applied.

  4. Effective discrepancy and numerical experiments

    NASA Astrophysics Data System (ADS)

    Varet, Suzanne; Lefebvre, Sidonie; Durand, Gérard; Roblin, Antoine; Cohen, Serge

    2012-12-01

    Many problems require the computation of a high dimensional integral, typically with a few tens of input factors, with a low number of integrand evaluations. To avoid the curse of dimensionality, we reduce the dimension before applying the Quasi-Monte Carlo method. We will show how to reduce the dimension by computing approximate Sobol indices of the variables with a two-level fractional factorial design. Then, we will use the Sobol indices to define the effective discrepancy, which turns out to be correlated with the QMC error and thus enables one to choose a good sequence for the integral estimation.
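
    The QMC ingredient of such a study is easy to reproduce with SciPy's scrambled Sobol' generator. The integrand below is a hypothetical product function whose exact integral over the unit cube is 1, not the authors' test case; for smooth integrands like this, the Sobol' estimate typically lands closer to the truth than plain Monte Carlo at equal sample size.

        import numpy as np
        from scipy.stats import qmc

        d, n = 8, 2 ** 12                             # dimension, points (power of 2)
        f = lambda x: np.prod(3.0 * x ** 2, axis=1)   # integrates to 1 exactly

        sobol = qmc.Sobol(d=d, scramble=True, seed=7).random(n)
        pseudo = np.random.default_rng(7).random((n, d))
        print("QMC estimate:", f(sobol).mean())
        print("MC  estimate:", f(pseudo).mean())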

  5. Extending canonical Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Velazquez, L.; Curilef, S.

    2010-02-01

    In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods on the basis of the consideration of the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities C < 0. The resulting framework appears to be a suitable generalization of the methodology associated with the so-called dynamical ensemble, which is applied to the extension of two well-known Monte Carlo methods: the Metropolis importance sampling and the Swendsen-Wang cluster algorithm. These Monte Carlo algorithms are employed to study the anomalous thermodynamic behavior of the Potts models with many spin states q defined on a d-dimensional hypercubic lattice with periodic boundary conditions; they successfully reduce the exponential divergence of the decorrelation time τ with increasing system size N to a weak power-law divergence τ ∝ N^α with α ≈ 0.2 for the particular case of the 2D ten-state Potts model.

  6. Genetic algorithms: An evolution from Monte Carlo Methods for strongly non-linear geophysical optimization problems

    NASA Astrophysics Data System (ADS)

    Gallagher, Kerry; Sambridge, Malcolm; Drijkoningen, Guy

    In providing a method for solving non-linear optimization problems, Monte Carlo techniques avoid the need for linearization but, in practice, are often prohibitive because of the large number of models that must be considered. A new class of methods known as Genetic Algorithms has recently been devised in the field of Artificial Intelligence. We outline the basic concept of genetic algorithms and discuss three examples. We show that, in locating an optimal model, the new technique is far superior in performance to Monte Carlo techniques in all cases considered. However, Monte Carlo integration is still regarded as an effective method for the subsequent model appraisal.
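
    A minimal binary-coded genetic algorithm of the kind outlined here is sketched below; the population size, mutation rate, and one-dimensional multimodal test function are arbitrary illustrative choices, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def ga_minimize(f, n_pop=60, n_gen=120, n_bits=16, lo=-5.0, hi=5.0):
            # Tournament selection, one-point crossover, bit-flip mutation.
            decode = lambda g: lo + (hi - lo) * g.dot(2 ** np.arange(n_bits)) \
                                    / (2 ** n_bits - 1)
            pop = rng.integers(0, 2, (n_pop, n_bits))
            for _ in range(n_gen):
                fit = np.array([f(decode(g)) for g in pop])
                def pick():                          # tournament of two
                    i, j = rng.integers(0, n_pop, 2)
                    return pop[i] if fit[i] < fit[j] else pop[j]
                new = []
                for _ in range(n_pop):
                    a, b = pick(), pick()
                    cut = rng.integers(1, n_bits)    # one-point crossover
                    child = np.concatenate([a[:cut], b[cut:]])
                    flip = rng.random(n_bits) < 0.02
                    new.append(np.where(flip, 1 - child, child))
                pop = np.array(new)
            return decode(min(pop, key=lambda g: f(decode(g))))

        # Multimodal test function; the GA homes in on the global minimum.
        print(ga_minimize(lambda x: x * x + 2.0 * np.sin(5.0 * x) + 2.0))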

  7. Monte Carlo calculation of monitor unit for electron arc therapy

    SciTech Connect

    Chow, James C. L.; Jiang Runqing

    2010-04-15

    for electron arc therapy. Since Monte Carlo simulations can generate a precalculated database of ROF, SSD offset, and DF for the MU calculation, with a reduction in human effort and linac beam-on time, it is recommended that Monte Carlo simulations be partially or completely integrated into the commissioning of electron arc therapy.

  8. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and in-flight calibration data with MGEANT simulations.

  9. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.

  10. Monte Carlo modelling of TRIGA research reactor

    NASA Astrophysics Data System (ADS)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous-energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents all components of the core in detail, with literally no physical approximation. Continuous-energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3), as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code, were used. The cross-section libraries were generated by using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.

  11. Multiple-time-stepping generalized hybrid Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  12. Multiple-time-stepping generalized hybrid Monte Carlo methods

    SciTech Connect

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  13. The MCLIB library: Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.

    1995-09-01

    Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, one first selects a neutron from the source distribution and projects it through the instrument, using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something, and then (if it hits the detector) tallies it in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for the design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.

  14. Monte Carlo methods: Application to hydrogen gas and hard spheres

    NASA Astrophysics Data System (ADS)

    Dewing, Mark Douglas

    2001-08-01

    Quantum Monte Carlo (QMC) methods are among the most accurate for computing ground state properties of quantum systems. The two major types of QMC we use are Variational Monte Carlo (VMC), which evaluates integrals arising from the variational principle, and Diffusion Monte Carlo (DMC), which stochastically projects to the ground state from a trial wave function. These methods are applied to a system of boson hard spheres to get exact, infinite-system-size results for the ground state at several densities. The kinds of problems that can be simulated with Monte Carlo methods are expanded through the development of new algorithms for combining a QMC simulation with a classical Monte Carlo simulation, which we call Coupled Electronic-Ionic Monte Carlo (CEIMC). The new CEIMC method is applied to a system of molecular hydrogen at temperatures ranging from 2800 K to 4500 K and densities from 0.25 to 0.46 g/cm^3. VMC requires optimizing a parameterized wave function to find the minimum energy. We examine several techniques for optimizing VMC wave functions, focusing on the ability to optimize parameters appearing in the Slater determinant. Classical Monte Carlo simulations use an empirical interatomic potential to compute equilibrium properties of various states of matter. The CEIMC method replaces the empirical potential with a QMC calculation of the electronic energy. This is similar in spirit to the Car-Parrinello technique, which uses Density Functional Theory for the electrons and molecular dynamics for the nuclei. The challenges in constructing an efficient CEIMC simulation center mostly around the noisy results generated from the QMC computations of the electronic energy. We introduce two complementary techniques, one for tolerating the noise and the other for reducing it. The penalty method modifies the Metropolis acceptance ratio to tolerate noise without introducing a bias in the simulation of the nuclei. For reducing the noise, we introduce the two-sided energy
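
    The penalty method mentioned at the end admits a compact sketch: if the measured energy difference carries Gaussian noise of known variance sigma^2, accepting with probability exp(-(delta + sigma^2/2)), capped at 1, removes the noise-induced bias on average. The toy target and noise level below are hypothetical, not the thesis' systems.

        import numpy as np

        rng = np.random.default_rng(0)

        def penalty_accept(delta_noisy, sigma2):
            # Noise penalty: subtracting the extra sigma2/2 in the exponent
            # keeps detailed balance exactly on average for Gaussian noise.
            return rng.random() < np.exp(-(delta_noisy + 0.5 * sigma2))

        # Toy: sample exp(-U) with U(x) = x^2/2, adding synthetic Gaussian
        # noise of variance sigma2 to every energy difference.
        x, sigma2, xs = 0.0, 0.25, []
        for _ in range(50000):
            y = x + rng.normal(0.0, 0.8)
            delta = 0.5 * (y * y - x * x) + rng.normal(0.0, np.sqrt(sigma2))
            if penalty_accept(delta, sigma2):
                x = y
            xs.append(x)
        print(np.mean(xs), np.var(xs))   # ~0 and ~1 despite the noise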

  15. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    As part of a discussion on Monte Carlo methods, this paper outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
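
    The paper's examples are written in Visual Basic; the same technique rendered in Python, with an assumed integrand, takes only a few lines: estimate I, the integral of f(x) over [0, 1], as the expectation E[f(U)] for U uniform on (0, 1), and attach a standard error.

        import numpy as np

        rng = np.random.default_rng(42)
        f = lambda x: np.exp(-x * x)       # no elementary antiderivative

        u = rng.random(100000)             # U ~ Uniform(0, 1)
        vals = f(u)
        est = vals.mean()                  # Monte Carlo estimate of I
        se = vals.std(ddof=1) / np.sqrt(len(u))
        print(f"{est:.4f} +/- {se:.4f}")   # true value ~ 0.7468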

  16. Multidimensional stochastic approximation Monte Carlo.

    PubMed

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2). PMID:27415383

  17. Monte Carlo surface flux tallies

    SciTech Connect

    Favorite, Jeffrey A

    2010-11-19

    Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied over the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
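
    The practice being revisited can be sketched directly: each surface crossing scores 1/|mu|, except that grazing crossings below the cutoff score 1/(cutoff/2) instead. The isotropic-flux test at the end illustrates why the half-cutoff substitute is exactly unbiased when the crossing density is linear in the cosine; the cutoff value is illustrative.

        import numpy as np

        def surface_flux_score(mu, cutoff=0.1):
            # Standard surface-crossing flux estimator: score 1/|mu| per
            # crossing; inside the grazing band, substitute 1/(cutoff/2).
            # (The paper argues 1/(2*cutoff/3) suits some configurations.)
            amu = abs(mu)
            return 1.0 / amu if amu >= cutoff else 2.0 / cutoff

        # Crossing cosines for an isotropic flux have density 2*mu on (0,1);
        # the exact tally mean is then 2, and the half-cutoff rule
        # reproduces it with no bias for this linear angular flux.
        rng = np.random.default_rng(3)
        mus = np.sqrt(rng.random(200000))
        print(np.mean([surface_flux_score(m) for m in mus]))   # ~2.0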

  18. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2).

  19. 1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

    SciTech Connect

    Evans, T.; et al.

    2000-08-01

    We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

  20. A hybrid transport-diffusion Monte Carlo method for frequency-dependent radiative-transfer simulations

    SciTech Connect

    Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.

    2012-08-15

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations in optically thick media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many smaller Monte Carlo steps, thus improving the efficiency of the simulation. In this paper, we present an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold, as optical thickness is typically a decreasing function of frequency. Above this threshold we employ standard Monte Carlo, which results in a hybrid transport-diffusion scheme. With a set of frequency-dependent test problems, we confirm the accuracy and increased efficiency of our new DDMC method.

  1. Path Integral Simulations of Graphene

    NASA Astrophysics Data System (ADS)

    Yousif, Hosam

    2007-10-01

    Some properties of graphene are explored using a path integral approach. The path integral method allows us to simulate relatively large systems using Monte Carlo techniques and extract thermodynamic quantities. We simulate the effects of screening a large external charge potential, as well as conductivity and charge distributions in graphene sheets.

  2. Use of GEANE for tracking in virtual Monte Carlo

    NASA Astrophysics Data System (ADS)

    Fontana, A.; Genova, P.; Lavezzi, L.; Panzarasa, A.; Rotondi, A.; Al-Turany, M.; Bertini, D.

    2008-07-01

    The concept of Virtual Monte Carlo (VMC) allows one to use different Monte Carlo programs to simulate particle physics detectors without changing the geometry definition and the detector response simulation. In this context, to study the reconstruction capabilities of a detector, the availability of a tool to extrapolate the track parameters and their associated errors due to magnetic field, straggling in energy loss and Coulomb multiple scattering plays a central role: GEANE is an old program, written in Fortran 15 years ago, that performs this task through dense materials and that is still successfully used by many modern experiments in its native form. Among its features is the capability to read the geometry and the magnetic field map directly from the simulation and to use different track representations. In this work we have 'rediscovered' GEANE in the context of the Virtual Monte Carlo: we will show how GEANE has been integrated in the FairROOT framework, firmly based on the VMC, by keeping the old features in the new ROOT geometry modeler. Moreover, new features have been added to GEANE that allow one to use it also for low-density materials, i.e. for gaseous detectors, and preliminary results will be shown and discussed. The tool is now used by the PANDA and CBM collaborations at GSI as the first step for the global reconstruction algorithms, based on a Kalman filter which is currently under development.

  3. A Monte Carlo multimodal inversion of surface waves

    NASA Astrophysics Data System (ADS)

    Maraschini, Margherita; Foti, Sebastiano

    2010-09-01

    The analysis of surface wave propagation is often used to estimate the S-wave velocity profile at a site. In this paper, we propose a stochastic approach for the inversion of surface waves, which allows apparent dispersion curves to be inverted. The inversion method is based on the integrated use of two misfit functions: one based on the determinant of the Haskell-Thomson matrix, and a classical Euclidean distance between the dispersion curves. The former allows all the modes of the dispersion curve to be taken into account with a very limited computational cost, because it avoids the explicit calculation of the dispersion curve for each tentative model. It is used in a Monte Carlo inversion with a large population of profiles. In a subsequent step, the selection of representative models is obtained by applying a Fisher test, based on the Euclidean distance between the experimental and the synthetic dispersion curves, to the best models of the Monte Carlo inversion. This procedure allows the set of selected models to be identified on the basis of the data quality. It also mitigates the influence of local minima that can affect the Monte Carlo results. The effectiveness of the procedure is shown for synthetic and real experimental data sets, where the advantages of the two-stage procedure are highlighted. In particular, the determinant misfit allows the computation of large populations in stochastic algorithms with a limited computational cost.

  4. Quantitative PET Imaging Using A Comprehensive Monte Carlo System Model

    SciTech Connect

    Southekal, S.; Purschke, M.L.; Schlyer, D.J.; Vaska, P.

    2011-10-01

    We present the complete image generation methodology developed for the RatCAP PET scanner, which can be extended to other PET systems for which a Monte Carlo-based system model is feasible. The miniature RatCAP presents a unique set of advantages as well as challenges for image processing, and a combination of conventional methods and novel ideas developed specifically for this tomograph have been implemented. The crux of our approach is a low-noise Monte Carlo-generated probability matrix with integrated corrections for all physical effects that impact PET image quality. The generation and optimization of this matrix are discussed in detail, along with the estimation of correction factors and their incorporation into the reconstruction framework. Phantom studies and Monte Carlo simulations are used to evaluate the reconstruction as well as individual corrections for random coincidences, photon scatter, attenuation, and detector efficiency variations in terms of bias and noise. Finally, a realistic rat brain phantom study reconstructed using this methodology is shown to recover >90% of the contrast for hot as well as cold regions. The goal has been to realize the potential of quantitative neuroreceptor imaging with the RatCAP.

  5. Burnup calculation methodology in the serpent 2 Monte Carlo code

    SciTech Connect

    Leppaenen, J.; Isotalo, A.

    2012-07-01

    This paper presents two topics related to the burnup calculation capabilities in the Serpent 2 Monte Carlo code: advanced time-integration methods and improved memory management, accomplished by the use of different optimization modes. The development of the introduced methods is an important part of re-writing the Serpent source code, carried out for the purpose of extending the burnup calculation capabilities from 2D assembly-level calculations to large 3D reactor-scale problems. The progress is demonstrated by repeating a PWR test case, originally carried out in 2009 for the validation of the newly-implemented burnup calculation routines in Serpent 1. (authors)

  6. Experimental validation of plutonium ageing by Monte Carlo correlated sampling

    SciTech Connect

    Litaize, O.; Bernard, D.; Santamarina, A.

    2006-07-01

    Integral measurements of Plutonium Ageing were performed in two homogeneous MOX cores (MISTRAL2 and MISTRAL3) of the French MISTRAL Programme between 1996 and 2000. The analysis of the MISTRAL2 experiment with the JEF-2.2 nuclear data library highlighted an underestimation of the 241Am capture cross section. The next experiment (MISTRAL3) did not lead to the same conclusion. This paper presents a new analysis performed with the recent JEFF-3.1 library and a Monte Carlo perturbation method (correlated sampling) available in the French TRIPOLI4 code. (authors)

  7. Monte Carlo treatment planning with modulated electron radiotherapy: framework development and application

    NASA Astrophysics Data System (ADS)

    Alexander, Andrew William

    Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP) provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. A developing treatment modality, energy- and intensity-modulated electron radiotherapy (MERT), is promising because it has the fundamental capability to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform-independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. The structure of the MERT planning toolkit software and

  8. Monte Carlo Shower Counter Studies

    NASA Technical Reports Server (NTRS)

    Snyder, H. David

    1991-01-01

    Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs are provided along with example data plots.

  9. Using Nuclear Theory, Data and Uncertainties in Monte Carlo Transport Applications

    SciTech Connect

    Rising, Michael Evan

    2015-11-03

    These are slides for a presentation on using nuclear theory, data and uncertainties in Monte Carlo transport applications. The following topics are covered: nuclear data (experimental data versus theoretical models, data evaluation and uncertainty quantification), fission multiplicity models (fixed source applications, criticality calculations), uncertainties and their impact (integral quantities, sensitivity analysis, uncertainty propagation).

  10. Monte carlo sampling of fission multiplicity.

    SciTech Connect

    Hendricks, J. S.

    2004-01-01

    Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly 3He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in k_eff of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the non-integrable error function. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
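
    The Gaussian-tail bias and the offset correction are easy to demonstrate on continuous samples (integer rounding, which a production code must also handle, is omitted); nubar and sigma below are illustrative values, not evaluated nuclear data.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        nubar, sigma = 2.16, 1.1
        x = rng.normal(nubar, sigma, 500000)
        x = x[x >= 0]                        # rejecting the negative tail...
        print("biased mean:   ", x.mean())   # ...pulls the mean above nubar

        # Analytic mean shift of the zero-truncated Gaussian, subtracted
        # from every sample to restore the correct average multiplicity.
        alpha = -nubar / sigma
        offset = sigma * norm.pdf(alpha) / (1.0 - norm.cdf(alpha))
        print("corrected mean:", (x - offset).mean())   # ~nubar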

  11. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments Database

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  12. Monte Carlo Ion Transport Analysis Code.

    Energy Science and Technology Software Center (ESTSC)

    2009-04-15

    Version: 00 TRIPOS is a versatile Monte Carlo ion transport analysis code. It has been applied to the treatment of both surface and bulk radiation effects. The media considered are composed of multilayer polyatomic materials.

  13. Monte Carlo Transport for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  14. Extra Chance Generalized Hybrid Monte Carlo

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.; Sanz-Serna, J. M.

    2015-01-01

    We study a method, Extra Chance Generalized Hybrid Monte Carlo, to avoid rejections in the Hybrid Monte Carlo method and related algorithms. In the spirit of delayed rejection, whenever a rejection would occur, extra work is done to find a fresh proposal that, hopefully, may be accepted. We present experiments that clearly indicate that the additional work per sample carried out in the extra chance approach pays off in terms of the quality of the samples generated.

  15. The ATLAS Fast Monte Carlo Production Chain Project

    NASA Astrophysics Data System (ADS)

    Jansky, Roland

    2015-12-01

    During the last years ATLAS has successfully deployed a new integrated simulation framework (ISF) which allows a flexible mixture of full and fast detector simulation techniques within the processing of one event. The speed-up in detector simulation of up to a factor of 100 thereby achievable makes subsequent digitization and reconstruction the dominant contributions to the Monte Carlo (MC) production CPU cost. The slowest components of both digitization and reconstruction lie inside the Inner Detector, owing to the complex signal modeling needed to emulate the detector readout and to the combinatorial nature of the pattern-recognition problem, respectively. Alternative fast approaches have been developed for these components: for the silicon-based detectors, a simpler geometrical clustering approach has been deployed, replacing the charge-drift emulation in the standard digitization modules, which achieves very high accuracy in describing the standard output. For the Inner Detector track reconstruction, trajectory building based on Monte Carlo generator information has been deployed with the aim of bypassing the CPU-intensive pattern recognition. Together with the ISF, all components have been integrated into a new fast MC production chain, aiming to produce fast MC simulated data in sufficient agreement with fully simulated and reconstructed data at a processing time of seconds per event, compared to several minutes for full simulation.

  16. Monte Carlo algorithm for simulating fermions on Lefschetz thimbles

    NASA Astrophysics Data System (ADS)

    Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo

    2016-01-01

    A possible solution of the notorious sign problem preventing direct Monte Carlo calculations for systems with nonzero chemical potential is to deform the integration region in the complex plane to a Lefschetz thimble. We investigate this approach for a simple fermionic model. We introduce an easy-to-implement Monte Carlo algorithm to sample the dominant thimble. Our algorithm relies only on the integration of the gradient flow in the numerically stable direction, which gives it a distinct advantage over the other proposed algorithms. We demonstrate the stability and efficiency of the algorithm by applying it to an exactly solvable fermionic model and compare our results with the analytical ones. We report a very good agreement for a certain region in the parameter space where the dominant contribution comes from a single thimble, including a region where standard methods suffer from a severe sign problem. However, we find that there are also regions in the parameter space where the contribution from multiple thimbles is important, even in the continuum limit.

  17. Monte Carlo simulations of random non-commutative geometries

    NASA Astrophysics Data System (ADS)

    Barrett, John W.; Glaser, Lisa

    2016-06-01

    Random non-commutative geometries are introduced by integrating over the space of Dirac operators that form a spectral triple with a fixed algebra and Hilbert space. The cases with the simplest types of Clifford algebra are investigated using Monte Carlo simulations to compute the integrals. Various qualitatively different types of behaviour of these random Dirac operators are exhibited. Some features are explained in terms of the theory of random matrices but other phenomena remain mysterious. Some of the models with a quartic action of symmetry-breaking type display a phase transition. Close to the phase transition the spectrum of a typical Dirac operator shows manifold-like behaviour for the eigenvalues below a cut-off scale.

  18. Independent pixel and Monte Carlo estimates of stratocumulus albedo

    NASA Technical Reports Server (NTRS)

    Cahalan, Robert F.; Ridgway, William; Wiscombe, Warren J.; Gollmer, Steven; HARSHVARDHAN

    1994-01-01

    Monte Carlo radiative transfer methods are employed here to estimate the plane-parallel albedo bias for marine stratocumulus clouds. This is the bias in estimates of the mesoscale-average albedo, which arises from the assumption that cloud liquid water is uniformly distributed. The authors compare such estimates with those based on a more realistic distribution generated from a fractal model of marine stratocumulus clouds belonging to the class of 'bounded cascade' models. In this model the cloud top and base are fixed, so that all variations in cloud shape are ignored. The model generates random variations in liquid water along a single horizontal direction, forming fractal cloud streets while conserving the total liquid water in the cloud field. The model reproduces the mean, variance, and skewness of the vertically integrated cloud liquid water, as well as its observed wavenumber spectrum, which is approximately a power law. The Monte Carlo method keeps track of the three-dimensional paths solar photons take through the cloud field, using a vectorized implementation of a direct technique. The simplifications in the cloud field studied here allow the computations to be accelerated. The Monte Carlo results are compared to those of the independent pixel approximation, which neglects net horizontal photon transport. Differences between the Monte Carlo and independent pixel estimates of the mesoscale-average albedo are on the order of 1% for conservative scattering, while the plane-parallel bias itself is an order of magnitude larger. As cloud absorption increases, the independent pixel approximation agrees even more closely with the Monte Carlo estimates. This result holds for a wide range of sun angles and aspect ratios. Thus, horizontal photon transport can be safely neglected in estimates of the area-average flux for such cloud models. This result relies on the rapid falloff of the wavenumber spectrum of stratocumulus, which ensures that the smaller

  19. FREYA-a new Monte Carlo code for improved modeling of fission chains

    SciTech Connect

    Hagmann, C A; Randrup, J; Vogt, R L

    2012-06-12

    A new simulation capability for modeling of individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events, providing correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy, angular, and gamma-neutron energy-sharing correlations. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general-purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses, including correlations, and the development of SNM detectors with increased sensitivity.

  20. Monte Carlo simulation with fixed steplength for diffusion processes in nonhomogeneous media

    NASA Astrophysics Data System (ADS)

    Ruiz Barlett, V.; Hoyuelos, M.; Mártin, H. O.

    2013-04-01

    Monte Carlo simulation is one of the most important tools in the study of diffusion processes. For constant diffusion coefficients, an appropriate Gaussian distribution of particle step lengths generates exact results when compared with integration of the diffusion equation. The same method, however, becomes erroneous when the diffusion coefficient depends on position. A simple alternative, jumping at a fixed step length with appropriate transition probabilities, produces correct results. Here, a model for diffusion of calcium ions in the neuromuscular junction of the crayfish is used as a test case to compare Monte Carlo simulations with fixed and Gaussian step lengths.
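
    A minimal sketch of the fixed-steplength alternative, assuming a made-up diffusivity D(x) and one common interface-centered transition rule; the paper's calcium model and its exact transition probabilities may differ.

      import numpy as np
      rng = np.random.default_rng(1)

      D = lambda x: 1.0 + 0.9*np.sin(x)        # hypothetical D(x), bounded by Dmax
      dx, Dmax = 0.05, 1.9                     # fixed steplength and bound on D
      dt = dx**2/(2.0*Dmax)                    # time advanced per Monte Carlo step

      x = np.zeros(10_000)                     # all walkers start at the origin
      for _ in range(2_000):                   # evolve to t = 2000*dt
          pR = D(x + dx/2)/(2.0*Dmax)          # right-jump probability
          pL = D(x - dx/2)/(2.0*Dmax)          # left-jump probability
          u = rng.random(x.size)
          x = np.where(u < pR, x + dx, np.where(u < pR + pL, x - dx, x))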

  1. Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport

    SciTech Connect

    McKinley, M S; Brooks III, E D; Daffin, F

    2004-12-13

    Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games, or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time-dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations.

  2. Approaching chemical accuracy with quantum Monte Carlo.

    PubMed

    Petruzielo, F R; Toulouse, Julien; Umrigar, C J

    2012-03-28

    A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreement between diffusion Monte Carlo and experiment, reducing the mean absolute deviation to 2.1 kcal/mol. Moving beyond a single determinant Slater-Jastrow trial wavefunction, diffusion Monte Carlo with a small complete active space Slater-Jastrow trial wavefunction results in near chemical accuracy. In this case, the mean absolute deviation from experimental atomization energies is 1.2 kcal/mol. It is shown from calculations on systems containing phosphorus that the accuracy can be further improved by employing a larger active space. PMID:22462844

  3. A Monte Carlo method for 3D thermal infrared radiative transfer

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Liou, K. N.

    2006-09-01

    A 3D Monte Carlo model for specific application to the broadband thermal radiative transfer has been developed in which the emissivities for gases and cloud particles are parameterized by using a single cubic element as the building block in 3D space. For spectral integration in the thermal infrared, the correlated k-distribution method has been used for the sorting of gaseous absorption lines in multiple-scattering atmospheres involving 3D clouds. To check the Monte Carlo simulation, we compare a variety of 1D broadband atmospheric fluxes and heating rates to those computed from the conventional plane-parallel (PP) model and demonstrate excellent agreement between the two. Comparisons of the Monte Carlo results for broadband thermal cooling rates in 3D clouds to those computed from the delta-diffusion approximation for 3D radiative transfer and the independent pixel-by-pixel approximation are subsequently carried out to understand the relative merits of these approaches.

  4. Quantum Monte Carlo calculations of light nuclei

    SciTech Connect

    Pieper, S.C.

    1998-12-01

    Quantum Monte Carlo calculations using realistic two- and three-nucleon interactions are presented for nuclei with up to eight nucleons. We have computed the ground and a few excited states of all such nuclei with Green's function Monte Carlo (GFMC) and all of the experimentally known excited states using variational Monte Carlo (VMC). The GFMC calculations show that for a given Hamiltonian, the VMC calculations of excitation spectra are reliable, but the VMC ground-state energies are significantly above the exact values. We find that the Hamiltonian we are using (which was developed based on {sup 3}H, {sup 4}He, and nuclear matter calculations) underpredicts the binding energy of p-shell nuclei. However, our results for excitation spectra are very good and one can see both shell-model and collective spectra resulting from fundamental many-nucleon calculations. Possible improvements in the three-nucleon potential are also discussed. {copyright} {ital 1998 American Institute of Physics.}

  5. Quantum Monte Carlo calculations of light nuclei

    SciTech Connect

    Pieper, Steven C.

    1998-12-21

    Quantum Monte Carlo calculations using realistic two- and three-nucleon interactions are presented for nuclei with up to eight nucleons. We have computed the ground and a few excited states of all such nuclei with Green's function Monte Carlo (GFMC) and all of the experimentally known excited states using variational Monte Carlo (VMC). The GFMC calculations show that for a given Hamiltonian, the VMC calculations of excitation spectra are reliable, but the VMC ground-state energies are significantly above the exact values. We find that the Hamiltonian we are using (which was developed based on {sup 3}H, {sup 4}He, and nuclear matter calculations) underpredicts the binding energy of p-shell nuclei. However, our results for excitation spectra are very good and one can see both shell-model and collective spectra resulting from fundamental many-nucleon calculations. Possible improvements in the three-nucleon potential are also discussed.

  6. Quantum Monte Carlo calculations of light nuclei.

    SciTech Connect

    Pieper, S. C.

    1998-08-25

    Quantum Monte Carlo calculations using realistic two- and three-nucleon interactions are presented for nuclei with up to eight nucleons. We have computed the ground and a few excited states of all such nuclei with Green's function Monte Carlo (GFMC) and all of the experimentally known excited states using variational Monte Carlo (VMC). The GFMC calculations show that for a given Hamiltonian, the VMC calculations of excitation spectra are reliable, but the VMC ground-state energies are significantly above the exact values. We find that the Hamiltonian we are using (which was developed based on {sup 3}H, {sup 4}He, and nuclear matter calculations) underpredicts the binding energy of p-shell nuclei. However, our results for excitation spectra are very good and one can see both shell-model and collective spectra resulting from fundamental many-nucleon calculations. Possible improvements in the three-nucleon potential are also discussed.

  7. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  8. Spatial Correlations in Monte Carlo Criticality Simulations

    NASA Astrophysics Data System (ADS)

    Dumonteil, E.; Malvagi, F.; Zoia, A.; Mazzolo, A.; Artusio, D.; Dieudonné, C.; De Mulatier, C.

    2014-06-01

    Temporal correlations arising in Monte Carlo criticality codes have focused the attention of both developers and practitioners for a long time. These correlations affect the evaluation of tallies in loosely coupled systems, where the system's typical size is very large compared to the diffusion/absorption length scale of the neutrons. Time correlations are closely related to spatial correlations, the two being linked by the transport equation. This paper therefore addresses the question of diagnosing spatial correlations in Monte Carlo criticality simulations. To that end, we propose a spatial correlation function well suited to Monte Carlo simulations and demonstrate its use in the simulation of a fuel pin-cell. The results are discussed, modeled, and interpreted using the tools of branching processes from statistical mechanics. A mechanism called "neutron clustering", which affects such simulations, is discussed in this framework.

  9. Modelling of electron contamination in clinical photon beams for Monte Carlo dose calculation

    NASA Astrophysics Data System (ADS)

    Yang, J.; Li, J. S.; Qin, L.; Xiong, W.; Ma, C.-M.

    2004-06-01

    The purpose of this work is to model electron contamination in clinical photon beams and to commission the source model using measured data for Monte Carlo treatment planning. In this work, a planar source is used to represent the contaminant electrons at a plane above the upper jaws. The source size depends on the dimensions of the field size at the isocentre. The energy spectra of the contaminant electrons are predetermined using Monte Carlo simulations for photon beams from different clinical accelerators. A 'random creep' method is employed to derive the weight of the electron contamination source by matching Monte Carlo calculated monoenergetic photon and electron percent depth-dose (PDD) curves with measured PDD curves. We have integrated this electron contamination source into a previously developed multiple source model and validated the model for photon beams from Siemens PRIMUS accelerators. The EGS4-based Monte Carlo user codes BEAM and MCSIM were used for linac head simulation and dose calculation. The Monte Carlo calculated dose distributions were compared with measured data. Our results showed good agreement (less than 2% or 2 mm) for 6, 10 and 18 MV photon beams.

  10. Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement

    PubMed Central

    Siswantoro, Joko; Idrus, Bahari

    2014-01-01

    Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of irregularly shaped food products based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost, and some of the volume measurement methods based on it have low accuracy. An alternative is the Monte Carlo method, which measures volume using random points: it only requires information on whether each random point falls inside or outside the object, with no 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provided high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method. PMID:24892069
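
    A minimal sketch of the Monte Carlo volume estimate described above. The unit sphere stands in for the inside/outside test that the paper extracts from five binary camera images; the bounding box and sample count are arbitrary choices.

      import numpy as np
      rng = np.random.default_rng(0)

      def inside(p):
          # hypothetical stand-in for the "does this point lie inside the
          # object?" test; here: the unit sphere
          return (p**2).sum(axis=1) <= 1.0

      n = 1_000_000
      lo, hi = -1.0, 1.0                        # bounding box around the object
      pts = rng.uniform(lo, hi, size=(n, 3))
      vol = (hi - lo)**3 * inside(pts).mean()   # fraction inside times box volume
      print(vol, 4/3*np.pi)                     # ~4.19 for the unit sphere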

  11. Moments of spectral functions: Monte Carlo evaluation and verification.

    PubMed

    Predescu, Cristian

    2005-11-01

    The subject of the present study is the Monte Carlo path-integral evaluation of the moments of spectral functions. Such moments can be computed by formal differentiation of certain estimating functionals that are infinitely differentiable against time whenever the potential function is arbitrarily smooth. Here, I demonstrate that the numerical differentiation of the estimating functionals can be more successfully implemented by means of pseudospectral methods (e.g., exact differentiation of a Chebyshev polynomial interpolant), which utilize information from the entire interval. The algorithmic detail that leads to robust numerical approximations is the fact that it is the path-integral action, and not the estimating functional itself, that is interpolated. Although the resulting approximation to the estimating functional is nonlinear, the derivatives can be computed from it in a fast and stable way by contour integration in the complex plane, with the help of the Cauchy integral formula (e.g., by Lyness' method). An interesting aspect of the present development is that Hamburger's conditions for a finite sequence of numbers to be a moment sequence provide the necessary and sufficient criteria for the computed data to be compatible with the existence of an inversion algorithm. Finally, the issue of appearance of the sign problem in the computation of moments, albeit in a milder form than for other quantities, is addressed. PMID:16383787
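
    The pseudospectral ingredient, exact differentiation of a Chebyshev interpolant, can be sketched as follows; the sampled function is a smooth stand-in, not the paper's path-integral action, and the node count is arbitrary.

      import numpy as np
      from numpy.polynomial import chebyshev as C

      n = 32
      x = np.cos(np.pi*np.arange(n + 1)/n)   # Chebyshev-Lobatto nodes on [-1, 1]
      f = np.exp(-x**2)                      # smooth stand-in for the sampled data
      c = C.chebfit(x, f, n)                 # coefficients of the interpolant
      dc = C.chebder(c)                      # exact derivative of the interpolant
      print(C.chebval(0.3, dc), -0.6*np.exp(-0.09))   # both give about -0.548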

  12. Fast quantum Monte Carlo on a GPU

    NASA Astrophysics Data System (ADS)

    Lutsyshyn, Y.

    2015-02-01

    We present a scheme for the parallelization of the quantum Monte Carlo method on graphics processing units, focusing on variational Monte Carlo simulation of bosonic systems. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent utilization of the accelerator. The CUDA code is provided along with a package that simulates liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including the Fermi GTX560 and M2090 and the Kepler-architecture K20. Special optimizations, discussed in the paper, were developed for the Kepler cards, including placement of data structures in the register space of the Kepler GPUs.

  13. Interaction picture density matrix quantum Monte Carlo

    SciTech Connect

    Malone, Fionn D.; Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  14. Geodesic Monte Carlo on Embedded Manifolds.

    PubMed

    Byrne, Simon; Girolami, Mark

    2013-12-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton-Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024

  15. Monte Carlo dose computation for IMRT optimization*

    NASA Astrophysics Data System (ADS)

    Laub, W.; Alber, M.; Birkner, M.; Nüsslin, F.

    2000-07-01

    A method which combines the accuracy of Monte Carlo dose calculation with a finite-size pencil-beam based intensity modulation optimization is presented. The pencil-beam algorithm is employed to compute the fluence element updates for a converging sequence of Monte Carlo dose distributions. The combination is shown to improve results over the pencil-beam based optimization in a lung tumour case and a head and neck case. Inhomogeneity effects like a broader penumbra and dose build-up regions can be compensated for by intensity modulation.

  16. Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.

    1995-12-31

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.

  17. Monte Carlo simulation of an expanding gas

    NASA Technical Reports Server (NTRS)

    Boyd, Iain D.

    1989-01-01

    By application of simple computer graphics techniques, the statistical performance of two Monte Carlo methods used in the simulation of rarefied gas flows is assessed. Specifically, two direct simulation Monte Carlo (DSMC) methods developed by Bird and Nanbu are considered. The graphics techniques are found to be of great benefit in the reduction and interpretation of the large volume of data generated, thus enabling important conclusions to be drawn about the simulation results. Hence, it is discovered that the method of Nanbu suffers from increased statistical fluctuations, thereby prohibiting its use in the solution of practical problems.

  18. Geodesic Monte Carlo on Embedded Manifolds

    PubMed Central

    Byrne, Simon; Girolami, Mark

    2013-01-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024

  19. Infinite variance in fermion quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  20. Kinetic Monte Carlo with fields: diffusion in heterogeneous systems

    NASA Astrophysics Data System (ADS)

    Caro, Jose Alfredo

    2011-03-01

    It is commonly perceived that achieving breakthrough scientific discoveries in the 21st century will require the integration of world-leading experimental capabilities with theory, computational modeling, and high-performance computer simulations. Lying between the atomic and the macro scales, the meso scale is crucial for advancing materials research. Deterministic methods are computationally too heavy to cover the length and time scales relevant to this scale, so stochastic approaches are one of the options of choice. In this talk I will describe recent progress in efficient parallelization schemes for Metropolis and kinetic Monte Carlo [1-2], and the combination of these ideas into a new hybrid Molecular Dynamics-kinetic Monte Carlo algorithm developed to study the basic mechanisms of diffusion in concentrated alloys under the action of chemical and stress fields, thereby incorporating the actual driving force emerging from chemical potential gradients. Applications are shown for precipitation and segregation in nanostructured materials. Work in collaboration with E. Martinez, LANL, and with B. Sadigh, P. Erhart and A. Stukowsky, LLNL. Supported by the Center for Materials at Irradiation and Mechanical Extremes, an Energy Frontier Research Center funded by the U.S. Department of Energy (Award # 2008LANL1026) at Los Alamos National Laboratory

  1. Energy Modulated Photon Radiotherapy: A Monte Carlo Feasibility Study

    PubMed Central

    Zhang, Ying; Feng, Yuanming; Ming, Xin

    2016-01-01

    A novel treatment modality termed energy modulated photon radiotherapy (EMXRT) was investigated. The first step of EMXRT was to determine beam energy for each gantry angle/anatomy configuration from a pool of photon energy beams (2 to 10 MV) with a newly developed energy selector. An inverse planning system using a gradient search algorithm was then employed to optimize photon beam intensity of various beam energies based on presimulated Monte Carlo pencil beam dose distributions in patient anatomy. Finally, 3D dose distributions in six patients with different tumor sites were simulated with the Monte Carlo method and compared between EMXRT plans and clinical IMRT plans. Compared to the current IMRT technique, the proposed EMXRT method could offer a better paradigm for the radiotherapy of lung cancers and pediatric brain tumors in terms of normal tissue sparing and integral dose. For prostate, head and neck, spine, and thyroid lesions, the EMXRT plans were generally comparable to the IMRT plans. Our feasibility study indicated that lower energy (<6 MV) photon beams could be considered in modern radiotherapy treatment planning to achieve more personalized care for individual patients, with dosimetric gains. PMID:26977413

  2. Energy Modulated Photon Radiotherapy: A Monte Carlo Feasibility Study.

    PubMed

    Zhang, Ying; Feng, Yuanming; Ming, Xin; Deng, Jun

    2016-01-01

    A novel treatment modality termed energy modulated photon radiotherapy (EMXRT) was investigated. The first step of EMXRT was to determine beam energy for each gantry angle/anatomy configuration from a pool of photon energy beams (2 to 10 MV) with a newly developed energy selector. An inverse planning system using a gradient search algorithm was then employed to optimize photon beam intensity of various beam energies based on presimulated Monte Carlo pencil beam dose distributions in patient anatomy. Finally, 3D dose distributions in six patients with different tumor sites were simulated with the Monte Carlo method and compared between EMXRT plans and clinical IMRT plans. Compared to the current IMRT technique, the proposed EMXRT method could offer a better paradigm for the radiotherapy of lung cancers and pediatric brain tumors in terms of normal tissue sparing and integral dose. For prostate, head and neck, spine, and thyroid lesions, the EMXRT plans were generally comparable to the IMRT plans. Our feasibility study indicated that lower energy (<6 MV) photon beams could be considered in modern radiotherapy treatment planning to achieve more personalized care for individual patients, with dosimetric gains. PMID:26977413

  3. Monte Carlo methods in genetic analysis

    SciTech Connect

    Lin, Shili

    1996-12-31

    Many genetic analyses require computation of probabilities and likelihoods of pedigree data. With more and more genetic marker data deriving from new DNA technologies becoming available to researchers, exact computations are often formidable with standard statistical methods and computational algorithms. The desire to utilize as much available data as possible, coupled with complexities of realistic genetic models, push traditional approaches to their limits. These methods encounter severe methodological and computational challenges, even with the aid of advanced computing technology. Monte Carlo methods are therefore increasingly being explored as practical techniques for estimating these probabilities and likelihoods. This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis. Three areas of applications are presented to demonstrate the versatility of Markov chain Monte Carlo for different types of genetic problems. A multilocus linkage analysis example is also presented to illustrate the sequential imputation method. Finally, important statistical issues of Markov chain Monte Carlo and sequential imputation, some of which are unique to genetic data, are discussed, and current solutions are outlined. 72 refs.

  4. Juan Carlos D'Olivo: A portrait

    NASA Astrophysics Data System (ADS)

    Aguilar-Arévalo, Alexis A.

    2013-06-01

    This report attempts to give a brief biographical sketch of the academic life of Juan Carlos D'Olivo, researcher and teacher at the Instituto de Ciencias Nucleares of UNAM, devoted to advancing the fields of High Energy Physics and Astroparticle Physics in Mexico and Latin America.

  5. Structural Reliability and Monte Carlo Simulation.

    ERIC Educational Resources Information Center

    Laumakis, P. J.; Harlow, G.

    2002-01-01

    Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte Carlo simulation by showing that such a simulation can be implemented more readily, with results that compare favorably to the theoretical calculations. (Author/MM)

  6. MCMAC: Monte Carlo Merger Analysis Code

    NASA Astrophysics Data System (ADS)

    Dawson, William A.

    2014-07-01

    Monte Carlo Merger Analysis Code (MCMAC) aids in the study of merging clusters. It takes observed priors on each subcluster's mass, radial velocity, and projected separation, draws randomly from those priors, and uses them in an analytic model to get posterior PDFs for merger dynamical properties of interest (e.g. collision velocity, time since collision).
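
    A hedged sketch of the sampling pattern (draw from the observed priors, push the draws through an analytic relation, read the posterior PDF off the resulting samples). The priors, units, and the toy de-projection formula below are invented for illustration and are not MCMAC's dynamical model.

      import numpy as np
      rng = np.random.default_rng(42)

      n = 100_000
      v_rad = rng.normal(3000.0, 300.0, n)     # prior on radial velocity [km/s]
      cos_a = rng.random(n)                    # isotropic merger-axis orientation
      # toy de-projection; clipped to tame the unphysical edge-on tail
      v_3d = v_rad/np.clip(cos_a, 0.05, None)

      # the posterior PDF of the derived quantity is simply the set of draws
      print(np.percentile(v_3d, [16, 50, 84]))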

  7. A comparison of Monte Carlo generators

    NASA Astrophysics Data System (ADS)

    Golan, Tomasz

    2015-05-01

    A comparison of GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the π+ two-dimensional energy vs cosine distribution.

  8. Monte Carlo simulations of lattice gauge theories

    SciTech Connect

    Rebbi, C

    1980-02-01

    Monte Carlo simulations done for four-dimensional lattice gauge systems are described, where the gauge group is one of the following: U(1); SU(2); Z_N, i.e., the subgroup of U(1) consisting of the elements e^(2πin/N) with integer n; the eight-element group of quaternions, Q; and the 24- and 48-element subgroups of SU(2), denoted by T and O, which reduce to the rotation groups of the tetrahedron and the octahedron when their centers Z_2 are factored out. All of these groups can be considered subgroups of SU(2) and a common normalization was used for the action. The following types of Monte Carlo experiments are considered: simulations of a thermal cycle, where the temperature of the system is varied slightly every few Monte Carlo iterations and the internal energy is measured; mixed-phase runs, where several Monte Carlo iterations are done at a few temperatures near a phase transition, starting with a lattice which is half ordered and half disordered; and measurements of averages of Wilson factors for loops of different shape. 5 figures, 1 table. (RWR)
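
    The thermal-cycle bookkeeping can be sketched on a much smaller system. Below, a 2D Ising model stands in for the 4D gauge theory (an illustrative substitution, not the paper's setup): Metropolis sweeps are run while the inverse temperature is ramped up and back down and the internal energy is recorded; a sharp change (or hysteresis, for a first-order transition) marks the transition region.

      import numpy as np
      rng = np.random.default_rng(7)

      L = 32
      s = rng.choice([-1, 1], size=(L, L))     # Ising spins stand in for links

      def sweep(s, beta):
          for _ in range(s.size):              # one Metropolis sweep
              i, j = rng.integers(L), rng.integers(L)
              nb = s[(i+1)%L, j] + s[(i-1)%L, j] + s[i, (j+1)%L] + s[i, (j-1)%L]
              dE = 2*s[i, j]*nb                # energy change of a single flip
              if dE <= 0 or rng.random() < np.exp(-beta*dE):
                  s[i, j] *= -1

      # thermal cycle: ramp beta up, then back down, recording the energy
      betas = np.concatenate([np.linspace(0.2, 0.6, 40), np.linspace(0.6, 0.2, 40)])
      energy = []
      for beta in betas:
          for _ in range(5):                   # a few sweeps per temperature
              sweep(s, beta)
          energy.append(-np.mean(s*(np.roll(s, 1, 0) + np.roll(s, 1, 1))))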

  9. A comparison of Monte Carlo generators

    SciTech Connect

    Golan, Tomasz

    2015-05-15

    A comparison of GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the π{sup +} two-dimensional energy vs cosine distribution.

  10. Scalable Domain Decomposed Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  11. HepMCAnalyser: A tool for Monte Carlo generator validation

    NASA Astrophysics Data System (ADS)

    Ay, C.; Johnert, S.; Katzy, J.; Qin, Zhonghua

    2010-04-01

    HepMCAnalyser is a tool for Monte Carlo (MC) generator validation and comparisons. It is a stable, easy-to-use and extendable framework allowing easy access to and integration of generator-level analysis. It comprises a class library with benchmark physics processes to analyse MC generator HepMC output and to fill root histograms. A web interface is provided to display all or selected histograms, compare them to references, and validate the results based on Kolmogorov tests. Steerable example programs can be used for event generation. The default steering is tuned to optimally align the distributions of the different MC generators. The tool will be used for MC generator validation by the Generator Services (GENSER) LCG project, e.g. for version upgrades. It is supported on the same platforms as the GENSER libraries and is already in use at ATLAS.

  12. Parallelization of KENO-Va Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Ramón, Javier; Peña, Jorge

    1995-07-01

    KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation through the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared-memory machines and another for distributed-memory systems using the message-passing library PVM. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared-memory version. An FDDI network of 6 HP9000/735 workstations was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.

  13. Acceleration of a Monte Carlo radiation transport code

    SciTech Connect

    Hochstedler, R.D.; Smith, L.M.

    1996-03-01

    Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. {copyright} {ital 1996 American Institute of Physics.}

  14. Development of a Space Radiation Monte Carlo Computer Simulation

    NASA Technical Reports Server (NTRS)

    Pinsky, Lawrence S.

    1997-01-01

    The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially those which will not have the shielding protection of the earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.

  15. Enhanced Monte-Carlo-Linked Depletion Capabilities in MCNPX

    SciTech Connect

    Fensin, Michael L.; Hendricks, John S.; Anghaie, Samim

    2006-07-01

    As advanced reactor concepts challenge the accuracy of current modeling technologies, a higher-fidelity depletion calculation is necessary to model time-dependent core reactivity properly for accurate cycle length and safety margin determinations. The recent integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a completely self-contained Monte-Carlo-linked depletion capability. Two advances have been made in the latest MCNPX capability based on problems observed in pre-released versions: continuous energy collision density tracking and proper fission yield selection. Pre-released versions of the MCNPX depletion code calculated the reaction rates for (n,2n), (n,3n), (n,p), (n,α), and (n,γ) by matching the MCNPX steady-state 63-group flux with 63-group cross sections inherent in the CINDER90 library and then collapsing to one-group collision densities for the depletion calculation. This procedure led to inaccuracies due to the miscalculation of the reaction rates resulting from the collapsed multi-group approach. The current version of MCNPX eliminates this problem by using collapsed one-group collision densities generated from continuous energy reaction rates determined during the MCNPX steady-state calculation. MCNPX also now explicitly determines the proper fission yield to be used by the CINDER90 code for the depletion calculation. The CINDER90 code offers a thermal, fast, and high-energy fission yield for each fissile isotope contained in the CINDER90 data file. MCNPX determines which fission yield to use for a specified problem by calculating the integral fission rate for the defined energy boundaries (thermal, fast, and high energy), determining which energy range contains the majority of fissions, and then selecting the appropriate fission yield for the energy range containing the majority of fissions. The MCNPX depletion capability enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code.
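
    The fission-yield selection rule described above amounts to picking the energy range that tallies the largest integral fission rate; a small sketch with a hypothetical interface (the function name and dictionary layout are invented):

      def select_fission_yield(fission_rate):
          # `fission_rate` maps 'thermal' / 'fast' / 'high' to the integral
          # fission rate tallied in that energy range; return the yield set
          # for the range containing the majority of fissions
          return max(fission_rate, key=fission_rate.get)

      print(select_fission_yield({'thermal': 0.72, 'fast': 0.25, 'high': 0.03}))
      # -> 'thermal'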

  16. Monte Carlo Study of Real Time Dynamics on the Lattice.

    PubMed

    Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo F; Vartak, Sohan; Warrington, Neill C

    2016-08-19

    Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from a highly oscillatory phase of the path integral. In this Letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and, in principle, applicable to quantum field theory albeit very slow. We discuss some possible improvements that should speed up the algorithm. PMID:27588844

  17. Markov chain Monte Carlo solution of BK equation through Newton-Kantorovich method

    NASA Astrophysics Data System (ADS)

    Bożek, Krzysztof; Kutak, Krzysztof; Placzek, Wieslaw

    2013-07-01

    We propose a new method for the Monte Carlo solution of non-linear integral equations by combining the Newton-Kantorovich method for solving non-linear equations with the Markov chain Monte Carlo (MCMC) method for solving linear equations. The Newton-Kantorovich method allows one to express the non-linear equation as a system of linear equations, which can then be treated by the MCMC (random-walk) algorithm. We apply this method to the Balitsky-Kovchegov (BK) equation describing the evolution of the gluon density at low x. Results of numerical computations show that the MCMC method is both precise and efficient. The presented algorithm may be particularly suited for solving more complicated and higher-dimensional non-linear integral equations, for which traditional methods become unfeasible.
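
    The random-walk building block can be sketched on a tiny linear system x = Hx + b, standing in for one Newton-Kantorovich linearization of the BK equation. The von Neumann-Ulam estimator below is a standard scheme and is only assumed to resemble the authors' algorithm.

      import numpy as np
      rng = np.random.default_rng(3)

      H = np.array([[0.2, 0.1],
                    [0.3, 0.1]])               # spectral radius safely below 1
      b = np.array([1.0, 2.0])

      def walk(i, p_stop=0.3):
          # one random-walk estimate of x_i for the system x = Hx + b
          est, w, s = 0.0, 1.0, i
          while True:
              est += w*b[s]                    # collect terms of the Neumann series
              if rng.random() < p_stop:
                  return est
              t = rng.integers(len(b))         # uniform proposal for the next index
              w *= H[s, t]*len(b)/(1.0 - p_stop)   # importance-weight correction
              s = t

      x_mc = [np.mean([walk(i) for _ in range(50_000)]) for i in range(2)]
      print(x_mc, np.linalg.solve(np.eye(2) - H, b))   # ~ [1.594, 2.754]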

  18. 75 FR 53332 - San Carlos Irrigation Project, Arizona

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-31

    ... Bureau of Reclamation San Carlos Irrigation Project, Arizona AGENCY: Bureau of Reclamation, Interior... of San Carlos Irrigation Project (SCIP) water delivery facilities near the communities of Casa Grande... and Central Arizona Project (CAP) to agricultural lands in the San Carlos Irrigation and...

  19. Monte Carlo calculations for metal-semiconductor hot-electron injection via tunnel-junction emission

    NASA Astrophysics Data System (ADS)

    Appelbaum, Ian; Narayanamurti, V.

    2005-01-01

    We present a detailed description of a scheme to calculate the injection current for metal-semiconductor systems using tunnel-junction electron emission. We employ a Monte Carlo framework for integrating over initial free-electron states in a metallic emitter and use interfacial scattering at the metal-semiconductor interface as an independent parameter. These results have implications for modeling metal-base transistors and ballistic electron emission microscopy and spectroscopy.

  20. Quantum Monte Carlo calculations of light nuclei.

    SciTech Connect

    Pieper, S. C.; Physics

    2008-01-01

    Variational Monte Carlo and Green's function Monte Carlo are powerful tools for calculations of properties of light nuclei using realistic two-nucleon (NN) and three-nucleon (NNN) potentials. Recently the GFMC method has been extended to multiple states with the same quantum numbers. The combination of the Argonne v18 two-nucleon and Illinois-2 three-nucleon potentials gives a good prediction of many energies of nuclei up to 12C. A number of other recent results are presented: comparison of binding energies with those obtained by the no-core shell model; the incompatibility of modern nuclear Hamiltonians with a bound tetra-neutron; difficulties in computing RMS radii of very weakly bound nuclei, such as 6He; center-of-mass effects on spectroscopic factors; and the possible use of an artificial external well in calculations of neutron-rich isotopes.

  1. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.

    1980-01-01

    At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.

  2. An enhanced Monte Carlo outlier detection method.

    PubMed

    Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi

    2015-09-30

    Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the root mean square error of prediction for validation against Kovats retention indices decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc. PMID:26226927
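
    A hedged sketch of the general Monte Carlo outlier-detection pattern (repeated random splits, then per-sample prediction-error distributions), using synthetic data and a plain least-squares model; the paper's cross-prediction models and its decision rule for dubious samples are more elaborate.

      import numpy as np
      rng = np.random.default_rng(5)

      n = 60                                    # toy data with one planted outlier
      X = rng.normal(size=(n, 3))
      y = X @ np.array([1.0, -2.0, 0.5]) + 0.1*rng.normal(size=n)
      y[0] += 5.0

      errors = [[] for _ in range(n)]
      for _ in range(500):                      # Monte Carlo resampling rounds
          train = rng.choice(n, size=int(0.7*n), replace=False)
          test = np.setdiff1d(np.arange(n), train)
          coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
          for i, e in zip(test, y[test] - X[test] @ coef):
              errors[i].append(e)

      mean_abs_err = np.array([np.mean(np.abs(e)) for e in errors])
      print(np.argmax(mean_abs_err))            # flags sample 0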

  3. Interaction picture density matrix quantum Monte Carlo.

    PubMed

    Malone, Fionn D; Blunt, N S; Shepherd, James J; Lee, D K K; Spencer, J S; Foulkes, W M C

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible. PMID:26233116

  4. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.

    1980-05-01

    Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.

  5. Monte Carlo Methods in the Physical Sciences

    SciTech Connect

    Kalos, M H

    2007-06-06

    I will review the role that Monte Carlo methods play in the physical sciences. They are very widely used for a number of reasons: they permit the rapid and faithful transformation of a natural or model stochastic process into a computer code. They are powerful numerical methods for treating the many-dimensional problems that derive from important physical systems. Finally, many of the methods naturally permit the use of modern parallel computers in efficient ways. In the presentation, I will emphasize four aspects of the computations: whether or not the computation derives from a natural or model stochastic process; whether the system under study is highly idealized or realistic; whether the Monte Carlo methodology is straightforward or mathematically sophisticated; and finally, the scientific role of the computation.

  6. Fast Lattice Monte Carlo Simulations of Polymers

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Zhang, Pengfei

    2014-03-01

    The recently proposed fast lattice Monte Carlo (FLMC) simulations (with multiple occupancy of lattice sites (MOLS) and Kronecker δ-function interactions) give much faster/better sampling of configuration space than both off-lattice molecular simulations (with pair-potential calculations) and conventional lattice Monte Carlo simulations (with self- and mutual-avoiding walk and nearest-neighbor interactions) of polymers.[1] Quantitative coarse-graining of polymeric systems can also be performed using lattice models with MOLS.[2] Here we use several model systems, including polymer melts, solutions, blends, as well as confined and/or grafted polymers, to demonstrate the great advantages of FLMC simulations in the study of equilibrium properties of polymers.

  7. Monte-Carlo Opening Books for Amazons

    NASA Astrophysics Data System (ADS)

    Kloetzer, Julien

    Automatically creating opening books is a natural step towards the building of strong game-playing programs, especially when there is little available knowledge about the game. However, while recent popular Monte-Carlo Tree-Search programs showed strong results for various games, we show here that programs based on such methods cannot efficiently use opening books created using algorithms based on minimax. To overcome this issue, we propose to use an MCTS-based technique, Meta-MCTS, to create such opening books. This method, while requiring some tuning to arrive at the best opening book possible, shows promising results in creating an opening book for the game of Amazons, even if this comes at the cost of removing its Monte-Carlo part.

  8. Monte Carlo modeling of exospheric bodies - Mercury

    NASA Technical Reports Server (NTRS)

    Smith, G. R.; Broadfoot, A. L.; Wallace, L.; Shemansky, D. E.

    1978-01-01

    In order to study the interaction with the surface, a Monte Carlo program is developed to determine the distribution with altitude as well as the global distribution of density at the surface in a single operation. The analysis presented shows that the appropriate source distribution should be Maxwell-Boltzmann flux if the particles in the distribution are to be treated as components of flux. Monte Carlo calculations with a Maxwell-Boltzmann flux source are compared with Mariner 10 UV spectrometer data. Results indicate that the presently operating models are not capable of fitting the observed Mercury exosphere. It is suggested that an atmosphere calculated with a barometric source distribution is suitable for more realistic future exospheric models.
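
    Sampling a Maxwell-Boltzmann flux source can be sketched directly; the speed scale a and the cosine-weighted emission angle are generic assumptions, not the parameters of the Mercury model above.

      import numpy as np
      rng = np.random.default_rng(11)

      a, n = 1.0, 100_000                      # a = sqrt(2kT/m); placeholder value

      # flux-weighted speed distribution p(v) ~ v^3 exp(-v^2/a^2):
      # with t = v^2/a^2 this is a Gamma(2) variable, sampled as -ln(u1*u2)
      u1, u2 = rng.random(n), rng.random(n)
      v = a*np.sqrt(-np.log(u1*u2))

      # polar emission angle for flux through a surface: p(theta) ~ cos*sin
      theta = np.arcsin(np.sqrt(rng.random(n)))

      print(v.mean(), 0.75*np.sqrt(np.pi)*a)   # flux-weighted mean speed check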

  9. Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy

    NASA Astrophysics Data System (ADS)

    Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James

    2012-03-01

    Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). Skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis and dermis) as well as laterally asymmetric features (e.g., melanocytic invasion) were modeled in an inhomogeneous Monte Carlo model.

  10. Monte Carlo simulation of Alaska wolf survival

    NASA Astrophysics Data System (ADS)

    Feingold, S. J.

    1996-02-01

    Alaskan wolves live in a harsh climate and are hunted intensively. Penna's biological aging code, using Monte Carlo methods, has been adapted to simulate wolf survival. It was run on the case in which hunting causes the disruption of wolves' social structure. Social disruption was shown to increase the number of deaths occurring at a given level of hunting. For high levels of social disruption, the population did not survive.

  11. Obituary: Wayne Carlos Hendrickson, 1953-2007

    NASA Astrophysics Data System (ADS)

    Trimble, Virginia

    2007-12-01

    Wayne Carlos Hendrickson was born on 4 December 1953 in Alhambra, California. He earned a BA/BS in Physics from the University of California at Irvine circa 1975. His PhD in astrophysics was earned in 1984 at the University of Texas at Austin. Hendrickson worked at Raytheon Corporation, for most of the rest of his life, on classified research. He died 8 August 2007.

  12. Linear-scaling quantum Monte Carlo calculations.

    PubMed

    Williamson, A J; Hood, R Q; Grossman, J C

    2001-12-10

    A method is presented for using truncated, maximally localized Wannier functions to introduce sparsity into the Slater determinant part of the trial wave function in quantum Monte Carlo calculations. When combined with an efficient numerical evaluation of these localized orbitals, the dominant cost in the calculation, namely, the evaluation of the Slater determinant, scales linearly with system size. This technique is applied to accurate total energy calculations of hydrogenated silicon clusters and carbon fullerenes containing 20-1000 valence electrons. PMID:11736525

  13. Applications of Maxent to quantum Monte Carlo

    SciTech Connect

    Silver, R.N.; Sivia, D.S.; Gubernatis, J.E.; Jarrell, M.

    1990-01-01

    We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.

  14. Carlos Castillo-Chavez: a century ahead.

    PubMed

    Schatz, James

    2013-01-01

    When the opportunity to contribute a short essay about Dr. Carlos Castillo-Chavez presented itself in the context of this wonderful birthday celebration my immediate reaction was por supuesto que sí! (of course!). Sixteen years ago, I travelled to Cornell University with my colleague at the National Security Agency (NSA) Barbara Deuink to meet Carlos and hear about his vision to expand the talent pool of mathematicians in our country. Our motivation was very simple. First of all, the Agency relies heavily on mathematicians to carry out its mission. If the U.S. mathematics community is not healthy, NSA is not healthy. Keeping our country safe requires a team of the sharpest minds in the nation to tackle amazing intellectual challenges on a daily basis. Second, the Agency cares deeply about diversity. Within the mathematical sciences, students with advanced degrees from the Chicano, Latino, Native American, and African-American communities are underrepresented. It was clear that addressing this issue would require visionary leadership and a long-term commitment. Carlos had the vision for a program that would provide promising undergraduates from minority communities with an opportunity to gain confidence and expertise through meaningful research experiences while sharing in the excitement of mathematical and scientific discovery. His commitment to the venture was unquestionable and that commitment has not wavered since the inception of the Mathematics and Theoretical Biology Institute (MTBI) in 1996. PMID:24245638

  15. Numerical reproducibility for implicit Monte Carlo simulations

    SciTech Connect

    Cleveland, M.; Brunner, T.; Gentile, N.

    2013-07-01

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double-precision arithmetic is not associative. In [1], a way of eliminating this roundoff error using integer tallies was described. This approach successfully achieves reproducibility, at the cost of lost accuracy, by rounding double-precision numbers to fewer significant digits. This integer approach, and other extended-reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations; the non-arbitrary-precision approaches required a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double-precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time step. (authors)
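
    The integer-tally idea of [1] can be sketched directly: floating-point sums depend on summation order, while fixed-point integer tallies do not. The scale factor below is an arbitrary choice, and the rounding it implies is exactly the accuracy cost described above.

      import numpy as np
      rng = np.random.default_rng(9)

      scores = rng.random(100_000).tolist()    # per-history tally contributions

      # double-precision addition is not associative: order matters
      print(sum(scores) == sum(reversed(scores)))          # typically False

      def fixed_point_sum(values, scale=2**40):
          # round each contribution to integer units of 1/scale; integer
          # addition is associative, so every summation order agrees
          return sum(int(round(v*scale)) for v in values)/scale

      print(fixed_point_sum(scores) == fixed_point_sum(reversed(scores)))  # True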

  16. jTracker and Monte Carlo Comparison

    NASA Astrophysics Data System (ADS)

    Selensky, Lauren; SeaQuest/E906 Collaboration

    2015-10-01

    SeaQuest is designed to observe the characteristics and behavior of `sea-quarks' in a proton by reconstructing them from the subatomic particles produced in a collision. The 120 GeV beam from the main injector collides with a fixed target and then passes through a series of detectors which record information about the particles produced in the collision. However, this data becomes meaningful only after it has been processed, stored, analyzed, and interpreted. Several programs are involved in this process. jTracker (sqerp) reads wire or hodoscope hits and reconstructs the tracks of potential dimuon pairs from a run, and Geant4 Monte Carlo simulates dimuon production and background noise from the beam. During track reconstruction, an event must meet the criteria set by the tracker to be considered a viable dimuon pair; this ensures that relevant data is retained. As a check, a comparison between a new version of jTracker and Monte Carlo was made in order to see how accurately jTracker could reconstruct the events created by Monte Carlo. In this presentation, the results of this comparison and their potential effects on the programming will be shown. This work is supported by U.S. DOE MENP Grant DE-FG02-03ER41243.

  17. Monte Carlo dose mapping on deforming anatomy

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Siebers, Jeffrey V.

    2009-10-01

    This paper proposes a Monte Carlo-based energy and mass congruent mapping (EMCM) method to calculate the dose on deforming anatomy. Unlike dose interpolation methods, EMCM separately maps each voxel's deposited energy and mass from a source image to a reference image with a displacement vector field (DVF) generated by deformable image registration (DIR). EMCM was compared with other dose mapping methods: energy-based dose interpolation (EBDI) and trilinear dose interpolation (TDI). These methods were implemented in EGSnrc/DOSXYZnrc, validated using a numerical deformable phantom and compared for clinical CT images. On the numerical phantom with an analytically invertible deformation map, EMCM mapped the dose exactly the same as its analytic solution, while EBDI and TDI had average dose errors of 2.5% and 6.0%. For a lung patient's IMRT treatment plan, EBDI and TDI differed from EMCM by 1.96% and 7.3% in the lung patient's entire dose region, respectively. As a 4D Monte Carlo dose calculation technique, EMCM is accurate and its speed is comparable to 3D Monte Carlo simulation. This method may serve as a valuable tool for accurate dose accumulation as well as for 4D dosimetry QA.
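
    A minimal sketch of the energy/mass congruent idea, reduced to a 1-D voxel grid with an invented displacement field and linear weight splitting (the actual EMCM operates on 3-D CT grids inside EGSnrc/DOSXYZnrc):

        # Map each source voxel's deposited energy AND mass to the reference
        # grid through the displacement field, then form dose as their ratio.
        # All values here are illustrative placeholders.
        import numpy as np

        energy = np.array([1.0, 2.0, 4.0, 2.0])   # deposited energy per source voxel
        mass   = np.array([0.8, 1.0, 1.2, 1.0])   # voxel mass
        dvf    = np.array([0.3, 1.4, 2.6, 3.1])   # source voxel centre -> reference position

        n_ref = 4
        e_ref = np.zeros(n_ref)
        m_ref = np.zeros(n_ref)
        for e, m, x in zip(energy, mass, dvf):
            lo = int(np.floor(x))
            w = x - lo                              # linear split between neighbours
            for idx, wt in ((lo, 1.0 - w), (lo + 1, w)):
                if 0 <= idx < n_ref:                # contributions past the grid are dropped
                    e_ref[idx] += wt * e            # map energy and mass congruently,
                    m_ref[idx] += wt * m            # then divide to get dose

        dose_ref = np.divide(e_ref, m_ref, out=np.zeros(n_ref), where=m_ref > 0)
        print(dose_ref)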

  18. A configuration space Monte Carlo algorithm for solving the nuclear pairing problem

    NASA Astrophysics Data System (ADS)

    Lingle, Mark

    Nuclear pairing correlations using Quantum Monte Carlo are studied in this dissertation. We start by defining the nuclear pairing problem and discussing several historical methods developed to solve this problem, paying special attention to the applicability of such methods. A numerical example discussing pairing correlations in several calcium isotopes using the BCS and Exact Pairing solutions is presented. The ground state energies, correlation energies, and occupation numbers are compared to determine the applicability of each approach to realistic cases. Next we discuss some generalities related to the theory of Markov Chains and Quantum Monte Carlo with regard to nuclear structure. Finally we present our configuration space Monte Carlo algorithm, starting from a discussion of a path integral approach by the authors. Some general features of the Pairing Hamiltonian that boost the effectiveness of a configuration space Monte Carlo approach are mentioned. The full details of our method are presented and special attention is paid to convergence and error control. We present a series of examples illustrating the effectiveness of our approach. These include situations with non-constant pairing strengths, limits when pairing correlations are weak, the computation of excited states, and problems when the relevant configuration space is large. We conclude with a chapter examining some of the effects of continuum states in ²⁴O.

  19. Neutron stimulated emission computed tomography: a Monte Carlo simulation approach.

    PubMed

    Sharma, A C; Harrawood, B P; Bender, J E; Tourassi, G D; Kapadia, A J

    2007-10-21

    A Monte Carlo simulation has been developed for neutron stimulated emission computed tomography (NSECT) using the GEANT4 toolkit. NSECT is a new approach to biomedical imaging that allows spectral analysis of the elements present within the sample. In NSECT, a beam of high-energy neutrons interrogates a sample and the nuclei in the sample are stimulated to an excited state by inelastic scattering of the neutrons. The characteristic gammas emitted by the excited nuclei are captured in a spectrometer to form multi-energy spectra. Currently, a tomographic image is formed using a collimated neutron beam to define the line integral paths for the tomographic projections. These projection data are reconstructed to form a representation of the distribution of individual elements in the sample. To facilitate the development of this technique, a Monte Carlo simulation model has been constructed from the GEANT4 toolkit. This simulation includes modeling of the neutron beam source and collimation, the samples, the neutron interactions within the samples, the emission of characteristic gammas, and the detection of these gammas in a Germanium crystal. In addition, the model allows the absorbed radiation dose to be calculated for internal components of the sample. NSECT presents challenges not typically addressed in Monte Carlo modeling of high-energy physics applications. In order to address issues critical to the clinical development of NSECT, this paper will describe the GEANT4 simulation environment and three separate simulations performed to accomplish three specific aims. First, comparison of a simulation to a tomographic experiment will verify the accuracy of both the gamma energy spectra produced and the positioning of the beam relative to the sample. Second, parametric analysis of simulations performed with different user-defined variables will determine the best way to effectively model low energy neutrons in tissue, which is a concern with the high hydrogen content in

  20. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.

  1. Quantum Monte Carlo Algorithms for Diagrammatic Vibrational Structure Calculations

    NASA Astrophysics Data System (ADS)

    Hermes, Matthew; Hirata, So

    2015-06-01

    Convergent hierarchies of theories for calculating many-body vibrational ground and excited-state wave functions, such as Møller-Plesset perturbation theory or coupled cluster theory, tend to rely on matrix-algebraic manipulations of large, high-dimensional arrays of anharmonic force constants, tasks which require large amounts of computer storage space and which are very difficult to implement in a parallel-scalable fashion. On the other hand, existing quantum Monte Carlo (QMC) methods for vibrational wave functions tend to lack robust techniques for obtaining excited-state energies, especially for large systems. By exploiting analytical identities for matrix elements of position operators in a harmonic oscillator basis, we have developed stochastic implementations of the size-extensive vibrational self-consistent field (MC-XVSCF) and size-extensive vibrational Møller-Plesset second-order perturbation (MC-XVMP2) theories which do not require storing the potential energy surface (PES). The programmable equations of MC-XVSCF and MC-XVMP2 take the form of a small number of high-dimensional integrals evaluated using Metropolis Monte Carlo techniques. The associated integrands require independent evaluations of only the value, not the derivatives, of the PES at many points, a task which is trivial to parallelize. However, unlike existing vibrational QMC methods, MC-XVSCF and MC-XVMP2 can calculate anharmonic frequencies directly, rather than as a small difference between two noisy total energies, and do not require user-selected coordinates or nodal surfaces. MC-XVSCF and MC-XVMP2 can also directly sample the PES in a given approximation without analytical or grid-based approximations, enabling us to quantify the errors induced by such approximations.

  2. Monte Carlo autofluorescence modeling of cervical intraepithelial neoplasm progression

    NASA Astrophysics Data System (ADS)

    Chu, S. C.; Chiang, H. K.; Wu, C. E.; He, S. Y.; Wang, D. Y.

    2006-02-01

    A Monte Carlo fluorescence model has been developed to estimate the autofluorescence spectra associated with the progression of the Exo-Cervical Intraepithelial Neoplasm (CIN). We used a double integrating sphere system and a tunable light source system, 380 to 600 nm, to measure the reflection and transmission spectra of a 50 μm thick tissue sample, and used the Inverse Adding-Doubling (IAD) method to estimate the absorption (μa) and scattering (μs) coefficients. Human cervical tissue samples were sliced vertically (longitudinally) by the frozen section method. The results show that the absorption and scattering coefficients of cervical neoplasia are 2~3 times higher than those of normal tissue. We applied the Monte Carlo method to estimate the photon distribution and fluorescence emission in the tissue. By combining the intrinsic fluorescence information (collagen, NADH, and FAD), the anatomical information of the epithelium, CIN, and stroma layers, and the fluorescence escape function, the autofluorescence spectra of CIN at different development stages were obtained. We have observed that the progression of the CIN results in a gradual decrease of the collagen peak intensity in the autofluorescence. In addition, the CIN layer formed a barrier that blocks the autofluorescence escaping from the stroma layer due to the strong extinction (scattering and absorption) of the CIN layer. To our knowledge, this is the first study measuring the CIN optical properties in the visible range; it also successfully demonstrates a fluorescence model for estimating the autofluorescence spectra of cervical tissue associated with the progression of CIN; this model is very important in assisting CIN diagnosis and treatment in clinical medicine.
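
    The core Monte Carlo photon moves implied by the abstract can be sketched in a few lines; the coefficients below are placeholders, not the measured cervical-tissue values, and the scattering is simplified to isotropic:

        # Free paths are drawn from the total attenuation coefficient; at each
        # collision the albedo decides absorption versus scattering.
        import math, random

        random.seed(0)
        mu_a, mu_s = 1.0, 20.0          # illustrative coefficients (1/mm)
        mu_t = mu_a + mu_s

        def propagate_photon():
            """Return depth (mm) of absorption in a semi-infinite slab, or None if escaped."""
            z, cos_theta = 0.0, 1.0
            while True:
                step = -math.log(1.0 - random.random()) / mu_t   # path ~ Exp(mu_t)
                z += step * cos_theta
                if z < 0.0:
                    return None                                  # escaped the surface
                if random.random() < mu_a / mu_t:
                    return z                                     # absorbed (may fluoresce here)
                cos_theta = random.uniform(-1.0, 1.0)            # isotropic scatter (simplified)

        depths = [d for d in (propagate_photon() for _ in range(20000)) if d is not None]
        print(f"mean absorption depth: {sum(depths) / len(depths):.3f} mm")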

  3. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and for improved statistics, as more particle tracks can be simulated in a short response time.

  4. Monte Carlo simulations for generic granite repository studies

    SciTech Connect

    Chu, Shaoping; Lee, Joon H; Wang, Yifeng

    2010-12-08

    In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of a near- and a far-field submodel for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated systems (GoldSim) near-field model. Integrated Monte Carlo model runs with the combined near- and far-field transport models were performed, and the parameter sensitivities were evaluated for the combined system. In addition, a sub-set of radionuclides that are potentially important to repository performance were identified and evaluated for a series of model runs. The analyses were conducted with different waste inventory scenarios. Analyses were also conducted for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of a strategy for long-term disposal of high-level radioactive waste in a granite repository.

  5. Path Integrals and Supersolids

    NASA Astrophysics Data System (ADS)

    Ceperley, D. M.

    2008-11-01

    Recent experiments by Kim and Chan on solid 4He have been interpreted as discovery of a supersolid phase of matter. Arguments based on wavefunctions have shown that such a phase exists, but do not necessarily apply to solid 4He. Imaginary time path integrals, implemented using Monte Carlo methods, provide a definitive answer; a clean system of solid 4He should be a normal quantum solid, not one with superfluid properties. The Kim-Chan phenomena must be due to defects introduced when the solid is formed.

  6. Implicit Monte Carlo diffusion - an acceleration method for Monte Carlo time dependent radiative transfer simulations

    SciTech Connect

    Gentile, N A

    2000-10-01

    We present a method for accelerating time dependent Monte Carlo radiative transfer calculations by using a discretization of the diffusion equation to calculate probabilities that are used to advance particles in regions with small mean free path. The method is demonstrated on problems with 1- and 2-dimensional orthogonal grids. It results in decreases in run time of more than an order of magnitude on these problems, while producing answers with accuracy comparable to pure IMC simulations. We call the method Implicit Monte Carlo Diffusion, which we abbreviate IMD.

  7. Improving dynamical lattice QCD simulations through integrator tuning using Poisson brackets and a force-gradient integrator

    SciTech Connect

    Clark, Michael A.; Joo, Balint; Kennedy, Anthony D.; Silva, Paolo J.

    2011-10-01

    We show how the integrators used for the molecular dynamics step of the Hybrid Monte Carlo algorithm can be further improved. These integrators not only approximately conserve some Hamiltonian H but conserve exactly a nearby shadow Hamiltonian H̃. This property allows for a new tuning method of the molecular dynamics integrator and also allows for a new class of integrators (force-gradient integrators) which is expected to significantly reduce the computational cost of future large-scale gauge field ensemble generation.
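
    The shadow-Hamiltonian property is easy to see numerically: for a leapfrog integrator on a toy 1-D oscillator, the energy error stays bounded and oscillatory instead of drifting, precisely because a nearby modified Hamiltonian is conserved exactly. A sketch (toy problem only, not lattice QCD molecular dynamics):

        # Leapfrog for H = p^2/2 + q^2/2.  |H - H0| stays O(dt^2) for any
        # number of steps: the integrator exactly conserves a shadow H-tilde.
        def leapfrog(q, p, dt, n_steps, dVdq=lambda q: q):
            for _ in range(n_steps):
                p -= 0.5 * dt * dVdq(q)
                q += dt * p
                p -= 0.5 * dt * dVdq(q)
            return q, p

        dt = 0.1
        H0 = 0.5 * (0.0 ** 2 + 1.0 ** 2)   # energy at (q, p) = (1, 0)
        for n in (10, 100, 1000, 10000):
            qn, pn = leapfrog(1.0, 0.0, dt, n)
            H = 0.5 * (pn * pn + qn * qn)
            print(f"steps = {n:6d}  |H - H0| = {abs(H - H0):.2e}")   # bounded, no drift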

  8. Fission Matrix Capability for MCNP Monte Carlo

    SciTech Connect

    Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.

    2012-09-05

    In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k_eff). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP [1], addresses these problems. When the Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
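
    The convergence issue can be illustrated with ordinary power iteration on a toy 2 x 2 "fission matrix" (the matrix is invented; MCNP instead tallies the spatially discretized kernel during the random walk):

        # Power iteration converges like (k1/k0)^n, so a dominance ratio near
        # one means many generations before the source shape settles.
        import numpy as np

        F = np.array([[0.90, 0.15],
                      [0.10, 0.85]])                 # toy spatially discretized kernel
        lams = np.sort(np.abs(np.linalg.eigvals(F)))[::-1]
        print(f"dominance ratio k1/k0 = {lams[1] / lams[0]:.3f}")

        s = np.array([1.0, 0.0])                     # deliberately poor source guess
        for generation in range(50):
            s = F @ s
            s /= s.sum()                             # renormalize each generation
        print("converged source shape:", s)          # error shrinks like (k1/k0)^n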

  9. “Full Model” Nuclear Data and Covariance Evaluation Process Using TALYS, Total Monte Carlo and Backward-forward Monte Carlo

    SciTech Connect

    Bauge, E.

    2015-01-15

    The “Full model” evaluation process, which is used at CEA DAM DIF to evaluate nuclear data in the continuum region, makes extended use of nuclear models implemented in the TALYS code to account for experimental data (both differential and integral) by varying the parameters of these models until a satisfactory description of these experimental data is reached. For the evaluation of the covariance data associated with the evaluated data, the Backward-forward Monte Carlo (BFMC) method was devised in such a way that it mirrors the process of the “Full model” evaluation method. When coupled with the Total Monte Carlo method via the T6 system developed by NRG Petten, the BFMC method makes it possible to use integral experiments to constrain the distribution of model parameters, and hence the distribution of derived observables and their covariance matrix. Together, TALYS, TMC, BFMC, and T6 constitute a powerful integrated tool for nuclear data evaluation, which allows for the evaluation of nuclear data and the associated covariance matrix, all at once, making good use of all the available experimental information to drive the distribution of the model parameters and the derived observables.

  10. A Monte Carlo approach to water management

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, D.

    2012-04-01

    Common methods for making optimal decisions in water management problems are insufficient. Linear programming methods are inappropriate because hydrosystems are nonlinear with respect to their dynamics, operation constraints and objectives. Dynamic programming methods are inappropriate because water management problems cannot be divided into sequential stages. Also, these deterministic methods cannot properly deal with the uncertainty of future conditions (inflows, demands, etc.). Even stochastic extensions of these methods (e.g. linear-quadratic-Gaussian control) necessitate such drastic oversimplifications of hydrosystems that the obtained results may be irrelevant to real-world problems. However, a Monte Carlo approach is feasible and can form a general methodology applicable to any type of hydrosystem. This methodology uses stochastic simulation to generate system inputs, either unconditional or conditioned on a prediction, if available, and represents the operation of the entire system through a simulation model as faithful as possible, without demanding a specific mathematical form that would imply oversimplifications. Such a representation fully respects the physical constraints, while at the same time it evaluates the system operation constraints and objectives in probabilistic terms, and derives their distribution functions and statistics through Monte Carlo simulation. As the performance criteria of a hydrosystem operation will generally be highly nonlinear and highly nonconvex functions of the control variables, a second Monte Carlo procedure, implementing stochastic optimization, is necessary to optimize system performance and evaluate the control variables of the system. The latter is facilitated if the entire representation is parsimonious, i.e. if the number of control variables is kept at a minimum by involving a suitable system parameterization. The approach is illustrated through three examples for (a) a hypothetical system of two reservoirs
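
    A hedged sketch of the two nested Monte Carlo loops described here: an inner simulation propagating synthetic inflows through a (single-reservoir) water-balance model, and an outer stochastic search over a control variable. All numbers and the lognormal inflow model are illustrative only:

        # Inner loop: Monte Carlo estimate of supply reliability for one policy.
        # Outer loop: search over the control variable (target release).
        import numpy as np

        rng = np.random.default_rng(42)
        CAPACITY, DEMAND = 500.0, 40.0

        def reliability(release_target, n_runs=200, horizon=120):
            """P(monthly demand met) under synthetic lognormal inflows."""
            met = 0
            for _ in range(n_runs):
                storage = CAPACITY / 2.0
                for _ in range(horizon):
                    inflow = rng.lognormal(mean=3.5, sigma=0.6)
                    storage = min(storage + inflow, CAPACITY)   # spill above capacity
                    release = min(release_target, storage)      # physical constraint
                    storage -= release
                    met += release >= DEMAND
            return met / (n_runs * horizon)

        candidates = np.linspace(30.0, 80.0, 11)
        best = max(candidates, key=reliability)
        print(f"best target release: {best:.1f}, reliability: {reliability(best):.3f}")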

  11. Quantum Monte Carlo for vibrating molecules

    SciTech Connect

    Brown, W.R.

    1996-08-01

    Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schrödinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H₂O and C₃ vibrational states, using 7 PESs, 3 trial wavefunction forms, two methods of non-linear basis function parameter optimization, and both serial and parallel computers. Constructing accurate trial wavefunctions required different wavefunction forms for H₂O and C₃. For C₃, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks in order to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C₃ PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.
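
    The blocking device mentioned for stabilizing the C₃ error estimates is standard and easy to sketch: correlated samples are merged into ever larger blocks until the block means decorrelate, at which point the naive error bar stops growing. Synthetic AR(1) data stand in for CFQMC output here:

        # Repeatedly merge neighbouring blocks; the "naive" error of the mean
        # rises until the block length exceeds the correlation time, then
        # plateaus at the honest error bar.
        import numpy as np

        rng = np.random.default_rng(11)
        n, rho = 2 ** 16, 0.9
        noise = rng.standard_normal(n)
        data = np.empty(n)
        data[0] = noise[0]
        for i in range(1, n):                     # correlated chain, mean 0
            data[i] = rho * data[i - 1] + noise[i]

        x = data
        while x.size >= 32:
            err = x.std(ddof=1) / np.sqrt(x.size)
            print(f"blocks: {x.size:6d}  naive error: {err:.4f}")
            x = 0.5 * (x[0::2] + x[1::2])         # merge neighbouring blocks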

  12. Monte Carlo simulation of intercalated carbon nanotubes.

    PubMed

    Mykhailenko, Oleksiy; Matsui, Denis; Prylutskyy, Yuriy; Le Normand, Francois; Eklund, Peter; Scharff, Peter

    2007-01-01

    Monte Carlo simulations of single- and double-walled carbon nanotubes (CNT) intercalated with different metals have been carried out. The interrelation between the length of a CNT and the number and type of metal atoms has also been established. This research is aimed at studying intercalated systems based on CNTs and d-metals such as Fe and Co. Factors influencing the stability of these composites have been determined theoretically by the Monte Carlo method with the Tersoff potential. The modeling of CNTs intercalated with metals by the Monte Carlo method has proved that there is a correlation between the length of a CNT and the number of endo-atoms of a specific type. Thus, in the case of a metallic CNT (9,0) with length 17 bands (3.60 nm), in contrast to Co atoms, Fe atoms are extruded out of the CNT if the number of atoms in the CNT is not less than eight. Thus, this paper shows that a CNT of a certain size can be intercalated with no more than eight Fe atoms. The systems investigated are stabilized by coordination of 3d-atoms close to the CNT wall with a radius-vector of (0.18-0.20) nm. Another characteristic feature is that, within the temperature range of (400-700) K, small systems exhibit ground-state stabilization which is not characteristic of the larger ones. The behavior of Fe and Co endo-atoms between the walls of a double-walled carbon nanotube (DW CNT) is explained by a dominating van der Waals interaction between the Co atoms themselves, which is not true for the Fe atoms. PMID:17033783

  13. Status of Monte-Carlo Event Generators

    SciTech Connect

    Hoeche, Stefan (SLAC)

    2011-08-11

    Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off initial- and final-state particles and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.

  14. Ab initio quantum Monte Carlo simulations of the uniform electron gas without fixed nodes: The unpolarized case

    NASA Astrophysics Data System (ADS)

    Dornheim, T.; Groth, S.; Schoof, T.; Hann, C.; Bonitz, M.

    2016-05-01

    In a recent publication [S. Groth et al., Phys. Rev. B 93, 085102 (2016), 10.1103/PhysRevB.93.085102], we have shown that the combination of two complementary quantum Monte Carlo approaches, namely configuration path integral Monte Carlo [T. Schoof et al., Phys. Rev. Lett. 115, 130402 (2015), 10.1103/PhysRevLett.115.130402] and permutation blocking path integral Monte Carlo [T. Dornheim et al., New J. Phys. 17, 073017 (2015), 10.1088/1367-2630/17/7/073017], allows for the accurate computation of thermodynamic properties of the spin-polarized uniform electron gas over a wide range of temperatures and densities without the fixed-node approximation. In the present work, we extend this concept to the unpolarized case, which requires nontrivial enhancements that we describe in detail. We compare our simulation results with recent restricted path integral Monte Carlo data [E. W. Brown et al., Phys. Rev. Lett. 110, 146405 (2013), 10.1103/PhysRevLett.110.146405] for different energy contributions and pair distribution functions and find, for the exchange correlation energy, overall better agreement than for the spin-polarized case, while the separate kinetic and potential contributions substantially deviate.

  15. Monte Carlo procedure for protein design

    NASA Astrophysics Data System (ADS)

    Irbäck, Anders; Peterson, Carsten; Potthast, Frank; Sandelin, Erik

    1998-11-01

    A method for sequence optimization in protein models is presented. The approach, which inherits its basic philosophy from recent work by Deutsch and Kurosky [Phys. Rev. Lett. 76, 323 (1996)] in maximizing conditional probabilities rather than minimizing energy functions, is based upon a different and very efficient multisequence Monte Carlo scheme. By construction, the method ensures that the designed sequences represent good folders thermodynamically. A bootstrap procedure for the sequence space search is devised, making very large chains feasible. The algorithm is successfully explored on the two-dimensional HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] with chain lengths N=16, 18, and 32.

  16. Discovering correlated fermions using quantum Monte Carlo.

    PubMed

    Wagner, Lucas K; Ceperley, David M

    2016-09-01

    It has become increasingly feasible to use quantum Monte Carlo (QMC) methods to study correlated fermion systems for realistic Hamiltonians. We give a summary of these techniques targeted at researchers in the field of correlated electrons, focusing on the fundamentals, capabilities, and current status of this technique. The QMC methods often offer the highest accuracy solutions available for systems in the continuum, and, since they address the many-body problem directly, the simulations can be analyzed to obtain insight into the nature of correlated quantum behavior. PMID:27518859

  17. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  18. Monte Carlo radiation transport and parallelism

    SciTech Connect

    Cox, L. J.; Post, S. E.

    2002-01-01

    This talk summarizes the main aspects of the LANL ASCI Eolus project and its major unclassified code project, MCNP. The MCNP code provides state-of-the-art Monte Carlo radiation transport to approximately 3000 users world-wide. Almost all hardware platforms are supported because we strictly adhere to the FORTRAN-90/95 standard. For parallel processing, MCNP uses a mixture of OpenMP combined with either MPI or PVM (shared and distributed memory). This talk summarizes our experiences on various platforms using MPI with and without OpenMP. These platforms include PC-Windows, Intel-LINUX, BlueMountain, Frost, ASCI-Q and others.

  19. Monte Carlo simulation for the transport beamline

    SciTech Connect

    Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A.; Attili, A.; Marchetto, F.; Russo, G.; Cirrone, G. A. P.; Schillaci, F.; Scuderi, V.; Carpinelli, M.

    2013-07-26

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning system in order to optimize the number of shots and the dose delivery.

  20. Quantum Monte Carlo calculations for light nuclei.

    SciTech Connect

    Wiringa, R. B.

    1998-10-23

    Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A ≤ 8 are made using a realistic Hamiltonian that fits NN scattering data. Results for more than 40 different (Jπ, T) states, plus isobaric analogs, are obtained and the known excitation spectra are reproduced reasonably well. Various density and momentum distributions and electromagnetic form factors and moments have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.

  1. Exascale Monte Carlo R&D

    SciTech Connect

    Marcus, Ryan C.

    2012-07-24

    This presentation gives an overview of (1) exascale computing - different technologies and how to get there; (2) the high-performance proof-of-concept MCMini - features and results; and (3) the OpenCL toolkit Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL seems like a good, hardware-agnostic tool. MCMini demonstrates the possibility for GPGPU-based Monte Carlo methods - it shows great scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.

  2. Modulated pulse bathymetric lidar Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Luo, Tao; Wang, Yabo; Wang, Rong; Du, Peng; Min, Xia

    2015-10-01

    A typical modulated pulse bathymetric lidar system is investigated by simulation using a modulated pulse lidar simulation system. In the simulation, the return signal is generated by a Monte Carlo method with a modulated pulse propagation model and processed by mathematical tools such as cross-correlation and digital filtering. Computer simulation results incorporating the modulation detection scheme reveal a significant suppression of the water backscattering signal and a corresponding target contrast enhancement. More simulation experiments are performed with various modulation and reception variables to investigate their effect on bathymetric system performance.
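
    The signal-processing step is the familiar matched-filter idea; a minimal sketch with an invented waveform and noise level recovers the delay of a weak return by cross-correlation:

        # Cross-correlate the noisy return with the transmitted waveform; the
        # correlation peak marks the round-trip delay (i.e. the depth).
        import numpy as np

        rng = np.random.default_rng(13)
        n, delay = 4096, 1500
        t = np.arange(n)
        tx = np.sin(2 * np.pi * 0.05 * t) * (t < 1024)            # modulated pulse
        rx = 0.2 * np.roll(tx, delay) + 0.5 * rng.standard_normal(n)  # weak return + noise

        corr = np.correlate(rx, tx, mode="full")[n - 1:]          # lags >= 0
        print(f"true delay: {delay}, recovered: {int(np.argmax(corr))}")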

  3. A Monte Carlo algorithm for degenerate plasmas

    SciTech Connect

    Turrell, A.E.; Sherlock, M.; Rose, S.J.

    2013-09-15

    A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
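
    One ingredient named in the abstract, initialising particles according to a Fermi–Dirac distribution, can be sketched with rejection sampling (reduced units and a hand-picked chemical potential; this is an illustration, not the authors' code):

        # Rejection-sample E from g(E) ~ sqrt(E) / (exp(E - mu) + 1), the
        # Fermi-Dirac energy distribution with a sqrt(E) density of states.
        import math, random

        random.seed(9)
        mu = 5.0                          # chemical potential in units of kT (illustrative)

        def sample_energy(e_max=25.0):
            g_max = math.sqrt(mu)         # valid envelope bound for this g(E)
            while True:
                e = random.uniform(0.0, e_max)
                g = math.sqrt(e) / (math.exp(e - mu) + 1.0)
                if random.random() * g_max < g:
                    return e

        energies = [sample_energy() for _ in range(20000)]
        print(f"mean energy/kT: {sum(energies) / len(energies):.2f} "
              f"(T=0 mean would be 3*mu/5 = {0.6 * mu})")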

  4. Monte Carlo simulation of the enantioseparation process

    NASA Astrophysics Data System (ADS)

    Bustos, V. A.; Acosta, G.; Gomez, M. R.; Pereyra, V. D.

    2012-09-01

    By means of Monte Carlo simulation, a study of enantioseparation by capillary electrophoresis has been carried out. A simplified system consisting of two enantiomers S (R) and a chiral selector C, which reacts with the enantiomers to form complexes RC (SC), has been considered. The dependence of Δμ (enantioseparation) on the concentration of the chiral selector and on temperature has been analyzed by simulation. The effects of the binding constant and the charge of the complexes are also analyzed. The results are qualitatively satisfactory, despite the simplicity of the model.

  5. Discovering correlated fermions using quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Wagner, Lucas K.; Ceperley, David M.

    2016-09-01

    It has become increasingly feasible to use quantum Monte Carlo (QMC) methods to study correlated fermion systems for realistic Hamiltonians. We give a summary of these techniques targeted at researchers in the field of correlated electrons, focusing on the fundamentals, capabilities, and current status of this technique. The QMC methods often offer the highest accuracy solutions available for systems in the continuum, and, since they address the many-body problem directly, the simulations can be analyzed to obtain insight into the nature of correlated quantum behavior.

  6. Kinetic Monte Carlo simulations of proton conductivity

    NASA Astrophysics Data System (ADS)

    Masłowski, T.; Drzewiński, A.; Ulner, J.; Wojtkiewicz, J.; Zdanowska-Frączek, M.; Nordlund, K.; Kuronen, A.

    2014-07-01

    The kinetic Monte Carlo method is used to model the dynamic properties of proton diffusion in anhydrous proton conductors. The results have been discussed with reference to a two-step process called the Grotthuss mechanism. There is a widespread belief that this mechanism is responsible for fast proton mobility. We show in detail that the relative frequency of reorientation and diffusion processes is crucial for the conductivity. Moreover, the dependence of the current on proton concentration has been analyzed. In order to test our microscopic model, the proton transport in polymer electrolyte membranes based on benzimidazole C₇H₆N₂ molecules is studied.
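
    A deliberately minimal kinetic Monte Carlo sketch of the two-step picture, with a proton alternating between reorientation and transfer events at invented rates; it shows how the slower step limits the drift velocity:

        # Exponential waiting times are drawn for each event; the mean time per
        # site advance is 1/w_reor + 1/w_diff, so the slower rate dominates.
        import math, random

        random.seed(3)
        w_reor, w_diff = 5.0, 1.0          # illustrative rates (1/ps)

        x, oriented, t = 0, False, 0.0
        for _ in range(100000):
            rate = w_diff if oriented else w_reor
            t += -math.log(1.0 - random.random()) / rate   # exponential waiting time
            if oriented:
                x += 1                     # proton transfer to the neighbouring site
                oriented = False
            else:
                oriented = True            # bond reorientation enables the next hop

        expected = 1.0 / (1.0 / w_reor + 1.0 / w_diff)
        print(f"drift velocity ~ {x / t:.3f} sites/ps (expected {expected:.3f})")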

  7. Monte Carlo analysis of magnetic aftereffect phenomena

    NASA Astrophysics Data System (ADS)

    Andrei, Petru; Stancu, Alexandru

    2006-04-01

    Magnetic aftereffect phenomena are analyzed by using the Monte Carlo technique. This technique has the advantage that it can be applied to any model of hysteresis. It is shown that a log t-type dependence of the magnetization can be qualitatively predicted even in the framework of hysteresis models with local history, such as the Jiles-Atherton model. These models are computationally much more efficient than models with global history, such as the Preisach model. Numerical results related to the decay of the magnetization as a function of time, as well as to the viscosity coefficient, are presented.

  8. MILAGRO IMPLICIT MONTE CARLO: NEW CAPABILITIES AND RESULTS

    SciTech Connect

    T. URBATSCH; T. EVANS

    2000-12-01

    Milagro is a stand-alone, radiation-only code that performs nonlinear radiative transfer calculations using the Fleck and Cummings method of Implicit Monte Carlo (IMC). Milagro is an object-oriented, C++ code that utilizes classes in our group's (CCS-4) radiation transport library. Milagro and its underlying classes have been significantly upgraded since 1998, when results from Milagro were first presented. Most notably, the object-oriented design has been revised to allow for optimal stand-alone parallel efficiency and rapid integration of new classes. For example, the better design, coupled with stringent component testing, allowed for immediate integration of the full domain decomposition parallel scheme. (It is a simple philosophy: spend time on the design, and debug early and once.) Milagro's classes are templated on mesh type. Currently, it runs on an orthogonal, structured, not-necessarily-uniform, Cartesian mesh of up to three dimensions, an RZ-Wedge mesh, and soon a tetrahedral mesh. Milagro considers one-frequency, or "grey," radiation with isotropic scattering, user-defined analytic opacities and equation-of-state, and various source types: surface, material, and radiation. Tallies produced by Milagro include energy and momentum deposition. In parallel, Milagro can run on a mesh that is fully replicated on all processors or on a mesh that is fully decomposed in the spatial domain. Milagro is reproducible, regardless of number of processors or parallel topology, and it now exactly conserves energy both globally and locally. Milagro has the capability for EnSight graphics and restarting. Finally, Milagro has been well verified with its use of Design-by-Contract™, component tests, and regression tests, and with its agreement to results of analytic test problems. By successfully running analytic and benchmark problems, Milagro serves to integrally verify all of its underlying classes, thus paving the way for other service packages based on these

  9. Quantum Monte Carlo : not just for energy levels.

    SciTech Connect

    Nollett, K. M.

    2007-01-01

    Quantum Monte Carlo and realistic interactions can provide well-motivated vertices and overlaps for DWBA analyses of reactions. Given an interaction in vacuum, there are several computational approaches to nuclear systems, as you have been hearing: the no-core shell model with Lee-Suzuki or Bloch-Horowitz effective Hamiltonians; coupled clusters with a G-matrix interaction; density functional theory, granted an energy functional derived from the interaction; and quantum Monte Carlo, namely variational Monte Carlo and Green's function Monte Carlo. The last two work directly with a bare interaction and bare operators and describe the wave function without expanding in basis functions, so they have rather different sets of advantages and disadvantages from the others. Variational Monte Carlo (VMC) is built on a sophisticated Ansatz for the wave function: shell-model-like structure modified by operator correlations. Green's function Monte Carlo (GFMC) uses an operator method to project the true ground state out of a reasonable guess wave function.
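
    The VMC idea reduces to a few lines for a textbook problem: Metropolis-sample |ψ|² for a trial wavefunction ψ = exp(-a x²) of the 1-D harmonic oscillator and average the local energy. This is purely an illustration of the method, not a realistic nuclear Hamiltonian:

        # E_L(x) = -psi''/(2 psi) + x^2/2 = a + x^2 (1/2 - 2 a^2) for psi = exp(-a x^2);
        # the variational energy is minimized (and exactly 1/2) at a = 1/2.
        import math, random

        random.seed(5)

        def local_energy(x, a):
            return a + x * x * (0.5 - 2.0 * a * a)

        def vmc_energy(a, n_steps=200000, step=1.0):
            x, e_sum = 0.0, 0.0
            for _ in range(n_steps):
                x_new = x + random.uniform(-step, step)
                # Metropolis acceptance for p(x) ~ exp(-2 a x^2)
                if random.random() < math.exp(-2.0 * a * (x_new ** 2 - x ** 2)):
                    x = x_new
                e_sum += local_energy(x, a)
            return e_sum / n_steps

        for a in (0.3, 0.5, 0.7):
            print(f"a = {a}: <E> ~ {vmc_energy(a):.4f}")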

  10. 3. Photographic copy of map. San Carlos Project, Arizona. Irrigation ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Photographic copy of map. San Carlos Project, Arizona. Irrigation System. Department of the Interior. United States Indian Service. No date. Circa 1939. (Source: Henderson, Paul. U.S. Indian Irrigation Service. Supplemental Storage Reservoir, Gila River. November 10, 1939, RG 115, San Carlos Project, National Archives, Rocky Mountain Region, Denver, CO.) - San Carlos Irrigation Project, Lands North & South of Gila River, Coolidge, Pinal County, AZ

  11. Electron density of states of Fe-based superconductors: Quantum trajectory Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Kashurnikov, V. A.; Krasavin, A. V.; Zhumagulov, Ya. V.

    2016-03-01

    The spectral and total electron densities of states in two-dimensional FeAs clusters, which simulate iron-based superconductors, have been calculated using the generalized quantum Monte Carlo algorithm within the full two-orbital model. Spectra have been reconstructed by solving the integral equation relating the Matsubara Green's function and spectral density by the method combining the gradient descent and Monte Carlo algorithms. The calculations have been performed for clusters with dimensions up to 10 × 10 FeAs cells. The profiles of the Fermi surface for the entire Brillouin zone have been presented in the quasiparticle approximation. Data for the total density of states near the Fermi level have been obtained. The effect of the interaction parameter, size of the cluster, and temperature on the spectrum of excitations has been studied.

  12. A step beyond the Monte Carlo method in economics: Application of multivariate normal distribution

    NASA Astrophysics Data System (ADS)

    Kabaivanov, S.; Malechkova, A.; Marchev, A.; Milev, M.; Markovska, V.; Nikolova, K.

    2015-11-01

    In this paper we discuss the numerical algorithm of Milev-Tagliani [25] used for pricing of discrete double barrier options. The problem can be reduced to the accurate valuation of an n-dimensional path integral with the probability density function of a multivariate normal distribution. The efficient solution of this problem with the Milev-Tagliani algorithm is a step beyond the classical application of Monte Carlo for option pricing. We explore continuous and discrete monitoring of asset path pricing, compare the error of frequently applied quantitative methods such as the Monte Carlo method, and finally analyze the accuracy of the Milev-Tagliani algorithm by presenting the profound research and important results of Hong, S. Lee, and T. Li [16].
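
    For reference, the plain Monte Carlo benchmark for a discretely monitored double knock-out call, the quantity the Milev-Tagliani algorithm evaluates as an n-dimensional multivariate-normal integral, looks as follows (all market parameters are illustrative):

        # Geometric Brownian motion sampled at the monitoring dates; the option
        # pays like a call only if every monitored price stays inside (L, U).
        import numpy as np

        rng = np.random.default_rng(7)
        S0, K, L, U = 100.0, 100.0, 80.0, 120.0      # spot, strike, barriers
        r, sigma, T, n_mon = 0.05, 0.2, 1.0, 12      # rate, vol, maturity, monitors
        dt = T / n_mon

        n_paths = 200000
        z = rng.standard_normal((n_paths, n_mon))
        log_paths = np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z, axis=1)
        S = S0 * np.exp(log_paths)

        alive = np.all((S > L) & (S < U), axis=1)    # knocked out on any breach
        payoff = np.where(alive, np.maximum(S[:, -1] - K, 0.0), 0.0)
        price = np.exp(-r * T) * payoff.mean()
        stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
        print(f"double knock-out call ~ {price:.4f} +/- {stderr:.4f}")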

  13. Raga: Monte Carlo simulations of gravitational dynamics of non-spherical stellar systems

    NASA Astrophysics Data System (ADS)

    Vasiliev, Eugene

    2014-11-01

    Raga (Relaxation in Any Geometry) is a Monte Carlo simulation method for gravitational dynamics of non-spherical stellar systems. It is based on the SMILE software (ascl:1308.001) for orbit analysis. It can simulate stellar systems with a much smaller number of particles N than the number of stars in the actual system, represent an arbitrary non-spherical potential with a basis-set or spline spherical-harmonic expansion with the coefficients of expansion computed from particle trajectories, and compute particle trajectories independently and in parallel using a high-accuracy adaptive-timestep integrator. Raga can also model two-body relaxation by local (position-dependent) velocity diffusion coefficients (as in Spitzer's Monte Carlo formulation) and adjust the magnitude of relaxation to the actual number of stars in the target system, and model the effect of a central massive black hole.

  14. Liquid crystal free energy relaxation by a theoretically informed Monte Carlo method using a finite element quadrature approach.

    PubMed

    Armas-Pérez, Julio C; Hernández-Ortiz, Juan P; de Pablo, Juan J

    2015-12-28

    A theoretically informed Monte Carlo method is proposed for Monte Carlo simulation of liquid crystals on the basis of theoretical representations in terms of coarse-grained free energy functionals. The free energy functional is described in the framework of the Landau-de Gennes formalism. A piecewise finite element discretization is used to approximate the alignment field, thereby providing an excellent geometrical representation of curved interfaces and accurate integration of the free energy. The method is suitable for situations where the free energy functional includes highly non-linear terms, including chirality or high-order deformation modes. The validity of the method is established by comparing the results of Monte Carlo simulations to traditional Ginzburg-Landau minimizations of the free energy using a finite difference scheme, and its usefulness is demonstrated in the context of simulations of chiral liquid crystal droplets with and without nanoparticle inclusions. PMID:26723642

  15. Liquid crystal free energy relaxation by a theoretically informed Monte Carlo method using a finite element quadrature approach

    NASA Astrophysics Data System (ADS)

    Armas-Pérez, Julio C.; Hernández-Ortiz, Juan P.; de Pablo, Juan J.

    2015-12-01

    A theoretically informed Monte Carlo method is proposed for Monte Carlo simulation of liquid crystals on the basis of theoretical representations in terms of coarse-grained free energy functionals. The free energy functional is described in the framework of the Landau-de Gennes formalism. A piecewise finite element discretization is used to approximate the alignment field, thereby providing an excellent geometrical representation of curved interfaces and accurate integration of the free energy. The method is suitable for situations where the free energy functional includes highly non-linear terms, including chirality or high-order deformation modes. The validity of the method is established by comparing the results of Monte Carlo simulations to traditional Ginzburg-Landau minimizations of the free energy using a finite difference scheme, and its usefulness is demonstrated in the context of simulations of chiral liquid crystal droplets with and without nanoparticle inclusions.

  16. Monte Carlo simulations within avalanche rescue

    NASA Astrophysics Data System (ADS)

    Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg

    2016-04-01

    Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.

  17. Four decades of implicit Monte Carlo

    DOE PAGESBeta

    Wollaber, Allan B.

    2016-04-25

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  18. Multilevel Monte Carlo simulation of Coulomb collisions

    DOE PAGESBeta

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.

    2014-05-29

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.

  19. Multilevel Monte Carlo simulation of Coulomb collisions

    SciTech Connect

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.

    2014-05-29

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
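
    The multilevel trick can be sketched compactly for a toy SDE: coupled coarse/fine Euler–Maruyama pairs share the same Brownian increments, so the correction terms have small variance and most samples are taken on the cheap coarse levels. The SDE and sample allocation below are illustrative, not the Landau–Fokker–Planck problem of the paper:

        # MLMC telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
        # for the payoff P = X_T of dX = -X dt + 0.5 dW with X_0 = 1
        # (exact mean: exp(-1)).
        import numpy as np

        rng = np.random.default_rng(0)

        def level_estimator(level, n_samples, T=1.0):
            nf = 2 ** level
            dt_f = T / nf
            dW = rng.normal(0.0, np.sqrt(dt_f), size=(n_samples, nf))
            xf = np.full(n_samples, 1.0)
            for i in range(nf):                      # fine Euler-Maruyama walk
                xf += -xf * dt_f + 0.5 * dW[:, i]
            if level == 0:
                return xf.mean()                     # coarsest level: plain estimate
            dt_c = T / (nf // 2)
            dWc = dW[:, 0::2] + dW[:, 1::2]          # same Brownian path, coarser steps
            xc = np.full(n_samples, 1.0)
            for i in range(nf // 2):
                xc += -xc * dt_c + 0.5 * dWc[:, i]
            return (xf - xc).mean()                  # small-variance correction term

        samples = [100000, 25000, 6000, 1500, 400, 100]   # most work on coarse levels
        estimate = sum(level_estimator(l, n) for l, n in enumerate(samples))
        print(f"MLMC estimate of E[X_T]: {estimate:.4f} (exact {np.exp(-1.0):.4f})")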

  20. Composite biasing in Monte Carlo radiative transfer

    NASA Astrophysics Data System (ADS)

    Baes, Maarten; Gordon, Karl D.; Lunttila, Tuomas; Bianchi, Simone; Camps, Peter; Juvela, Mika; Kuiper, Rolf

    2016-05-01

    Biasing or importance sampling is a powerful technique in Monte Carlo radiative transfer, and can be applied in different forms to increase the accuracy and efficiency of simulations. One of the drawbacks of the use of biasing is the potential introduction of large weight factors. We discuss a general strategy, composite biasing, to suppress the appearance of large weight factors. We use this composite biasing approach for two different problems faced by current state-of-the-art Monte Carlo radiative transfer codes: the generation of photon packages from multiple components, and the penetration of radiation through high optical depth barriers. In both cases, the implementation of the relevant algorithms is trivial and does not interfere with any other optimisation techniques. Through simple test models, we demonstrate the general applicability, accuracy and efficiency of the composite biasing approach. In particular, for the penetration of high optical depths, the gain in efficiency is spectacular for the specific problems that we consider: in simulations with composite path length stretching, high accuracy results are obtained even for simulations with modest numbers of photon packages, while simulations without biasing cannot reach convergence, even with a huge number of photon packages.
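
    As a flavor of the path-length component, the sketch below biases the optical-depth sampling toward deep penetrations and compensates with a weight factor. It shows plain path length stretching only, not the authors' composite scheme that mixes several biased distributions to bound the weights, and the slab depth and stretch parameter are arbitrary choices.

        import math, random

        def stretched_path_length(stretch):
            # sample optical depth from the biased pdf q(tau) = stretch * exp(-stretch*tau)
            tau = -math.log(1.0 - random.random()) / stretch
            # weight restores the physical pdf p(tau) = exp(-tau)
            weight = math.exp(-tau) / (stretch * math.exp(-stretch * tau))
            return tau, weight

        # transmission through a slab of optical depth 10: analog sampling almost
        # never reaches tau > 10, while stretching with stretch = 0.1 does routinely
        n, tau_slab, stretch = 100_000, 10.0, 0.1
        total = 0.0
        for _ in range(n):
            tau, w = stretched_path_length(stretch)
            if tau > tau_slab:
                total += w
        print(total / n, math.exp(-tau_slab))  # both ~4.5e-5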

  1. Calculating Pi Using the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Williamson, Timothy

    2013-11-01

    During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the Center for Sustainable Energy at Notre Dame University (RET @ cSEND), working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10²¹ antineutrinos per second with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo. Further investigation led me to the Monte Carlo method page of Wikipedia² where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations² or purely mathematical.³ It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
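
    The classroom activity has a direct computational analogue; a minimal sketch (the sample count is arbitrary) that scores random points in the unit square against the quarter circle:

        import random

        def estimate_pi(n):
            # fraction of uniform points in the unit square inside the quarter
            # circle, times 4, converges to pi at the usual 1/sqrt(n) rate
            hits = sum(1 for _ in range(n)
                       if random.random() ** 2 + random.random() ** 2 <= 1.0)
            return 4.0 * hits / n

        print(estimate_pi(1_000_000))  # ~3.1416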

  2. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

    SciTech Connect

    WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.

    2007-01-10

    MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

  3. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-09-01

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  4. Quantum Monte Carlo methods for nuclear physics

    DOE PAGESBeta

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-09-01

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  5. Quantum Monte Carlo methods for nuclear physics

    DOE PAGESBeta

    Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.

    2014-10-19

    Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  6. Quantum Monte Carlo for atoms and molecules

    SciTech Connect

    Barnett, R.N.

    1989-11-01

    The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy-eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H{sub 2}, LiH, Li{sub 2}, and H{sub 2}O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li{sub 2}, and H{sub 2}O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.

  7. Metallic lithium by quantum Monte Carlo

    SciTech Connect

    Sugiyama, G.; Zerah, G.; Alder, B.J.

    1986-12-01

    Lithium was chosen as the simplest known metal for the first application of quantum Monte Carlo methods in order to evaluate the accuracy of conventional one-electron band theories. Lithium has been extensively studied using such techniques. Band theory calculations have certain limitations in general and specifically in their application to lithium. Results depend on such factors as charge shape approximations (muffin tins), pseudopotentials (a special problem for lithium where the lack of p core states requires a strong pseudopotential), and the form and parameters chosen for the exchange potential. The calculations are all one-electron methods in which the correlation effects are included in an ad hoc manner. This approximation may be particularly poor in the high compression regime, where the core states become delocalized. Furthermore, band theory provides only self-consistent results rather than strict limits on the energies. The quantum Monte Carlo method is a totally different technique using a many-body rather than a mean field approach which yields an upper bound on the energies. 18 refs., 4 figs., 1 tab.

  8. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
    • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
    • Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
    • Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain.
    • Visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is completed, and spatial redecomposition.
    These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.

  9. Chemical application of diffusion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1983-10-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. As an example the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on our VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX is discussed. Since CH2 has only eight electrons, most of the loops in this application are fairly short. The longest inner loops run over the set of atomic basis functions. The CPU time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures. Finally, preliminary work on restructuring the algorithm to compute the separate Monte Carlo realizations in parallel is discussed.

  10. Discrete range clustering using Monte Carlo methods

    NASA Technical Reports Server (NTRS)

    Chatterji, G. B.; Sridhar, B.

    1993-01-01

    For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
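
    For readers unfamiliar with the simulated-annealing variant, the sketch below anneals cluster assignments of hypothetical 1-D range samples; the cost function (within-cluster squared scatter), cooling schedule, and cluster count are illustrative assumptions, not the formulation compared in the paper.

        import math, random

        def cost(points, labels, k):
            # within-cluster sum of squared deviations from the cluster means
            total = 0.0
            for c in range(k):
                members = [p for p, l in zip(points, labels) if l == c]
                if members:
                    m = sum(members) / len(members)
                    total += sum((p - m) ** 2 for p in members)
            return total

        def anneal(points, k, steps=20000, t0=1.0, cooling=0.9995):
            labels = [random.randrange(k) for _ in points]
            best = cost(points, labels, k)
            t = t0
            for _ in range(steps):
                i = random.randrange(len(points))
                old = labels[i]
                labels[i] = random.randrange(k)
                trial = cost(points, labels, k)
                # accept downhill moves always, uphill with Boltzmann probability
                if trial <= best or random.random() < math.exp((best - trial) / t):
                    best = trial
                else:
                    labels[i] = old
                t *= cooling
            return labels

        # hypothetical sparse ranges from two nearby objects plus one distant point
        ranges = [10.1, 10.4, 9.8, 25.2, 24.9, 25.6, 40.0]
        print(anneal(ranges, k=3))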

  11. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-09-09

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  12. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect

    Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.

    2014-10-19

    Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  13. Monte Carlo methods in lattice gauge theories

    SciTech Connect

    Otto, S.W.

    1983-01-01

    The mass of the 0{sup +} glueball for SU(2) gauge theory in 4 dimensions is calculated. This computation was done on a prototype parallel processor and the implementation of gauge theories on this system is described in detail. Using an action of the purely Wilson form (trace of plaquette in the fundamental representation), results with high statistics are obtained. These results are not consistent with scaling according to the continuum renormalization group. Using actions containing higher representations of the group, a search is made for one which is closer to the continuum limit. The choice is based upon the phase structure of these extended theories and also upon the Migdal-Kadanoff approximation to the renormalization group on the lattice. The mass of the 0{sup +} glueball for this improved action is obtained and the mass divided by the square root of the string tension is a constant as the lattice spacing is varied. The other topic studied is the inclusion of dynamical fermions into Monte Carlo calculations via the pseudo fermion technique. Monte Carlo results obtained with this method are compared with those from an exact algorithm based on Gauss-Seidel inversion. The methods were first applied to the Schwinger model and SU(3) theory.

  14. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    SciTech Connect

    Grimes, Joshua; Celler, Anna

    2014-09-15

    Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with {sup 99m}Tc-hydrazinonicotinamide-Tyr{sup 3}-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate {sup 99m}Tc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for {sup 131}I, {sup 177}Lu, and {sup 90}Y assuming the same biological half-lives as the {sup 99m}Tc labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation and voxel level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for {sup 99m}Tc, {sup 131}I, {sup 177}Lu, and {sup 90}Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume (D90

  15. Monte Carlo techniques for analyzing deep-penetration problems

    SciTech Connect

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1986-02-01

    Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications.

  16. Monte Carlo modeling of spatial coherence: free-space diffraction.

    PubMed

    Fischer, David G; Prahl, Scott A; Duncan, Donald D

    2008-10-01

    We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions. PMID:18830335
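
    As a rough illustration of synthesizing a source with a prescribed spatial coherence function, the sketch below uses a Cholesky factorization of the mutual coherence matrix (a simpler construction than the authors' Gaussian copula) to draw field realizations whose ensemble-averaged coherence reproduces a Gaussian target; the grid size and coherence width are arbitrary.

        import numpy as np

        # prescribed Gaussian spatial coherence over a 1-D grid of source points
        x = np.linspace(-1.0, 1.0, 64)
        width = 0.3
        J = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * width ** 2))

        # draw realizations E = L z with J = L L^H and z circular complex Gaussian,
        # so the ensemble average <E E^H> reproduces the mutual coherence J
        L = np.linalg.cholesky(J + 1e-10 * np.eye(x.size))
        rng = np.random.default_rng(0)
        n = 5000
        z = (rng.normal(size=(n, x.size)) + 1j * rng.normal(size=(n, x.size))) / np.sqrt(2)
        E = z @ L.T  # each row is one random field realization

        J_est = E.T @ E.conj() / n
        print(np.abs(J_est - J).max())  # small, and shrinks like 1/sqrt(n)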

  17. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    Lubos Mitas

    2011-01-26

    NCSU research group has been focused on accomplishing the key goals of this initiative: establishing new generation of quantum Monte Carlo (QMC) computational tools as a part of Endstation petaflop initiative for use at the DOE ORNL computational facilities and for use by computational electronic structure community at large; carrying out high accuracy quantum Monte Carlo demonstration projects in application of these tools to the forefront electronic structure problems in molecular and solid systems; expanding the impact of QMC methods and approaches; explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed quantum Monte Carlo code (QWalk, www.qwalk.org) which was significantly expanded and optimized using funds from this support and at present became an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the period of the last decade. The code was optimized and tested extensively on a number of parallel architectures including petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculations of pfaffians and introduction of backflow coordinates together with overall organization of the code and random walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing and verified efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high quality object-oriented C++ and includes also interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc and partially three graduate students over the period of the grant duration, it has resulted in 13

  18. MC 93 - Proceedings of the International Conference on Monte Carlo Simulation in High Energy and Nuclear Physics

    NASA Astrophysics Data System (ADS)

    Dragovitsch, Peter; Linn, Stephan L.; Burbank, Mimi

    1994-01-01

    The Table of Contents for the book is as follows: * Preface * Heavy Fragment Production for Hadronic Cascade Codes * Monte Carlo Simulations of Space Radiation Environments * Merging Parton Showers with Higher Order QCD Monte Carlos * An Order-αs Two-Photon Background Study for the Intermediate Mass Higgs Boson * GEANT Simulation of Hall C Detector at CEBAF * Monte Carlo Simulations in Radioecology: Chernobyl Experience * UNIMOD2: Monte Carlo Code for Simulation of High Energy Physics Experiments; Some Special Features * Geometrical Efficiency Analysis for the Gamma-Neutron and Gamma-Proton Reactions * GISMO: An Object-Oriented Approach to Particle Transport and Detector Modeling * Role of MPP Granularity in Optimizing Monte Carlo Programming * Status and Future Trends of the GEANT System * The Binary Sectioning Geometry for Monte Carlo Detector Simulation * A Combined HETC-FLUKA Intranuclear Cascade Event Generator * The HARP Nucleon Polarimeter * Simulation and Data Analysis Software for CLAS * TRAP -- An Optical Ray Tracing Program * Solutions of Inverse and Optimization Problems in High Energy and Nuclear Physics Using Inverse Monte Carlo * FLUKA: Hadronic Benchmarks and Applications * Electron-Photon Transport: Always so Good as We Think? Experience with FLUKA * Simulation of Nuclear Effects in High Energy Hadron-Nucleus Collisions * Monte Carlo Simulations of Medium Energy Detectors at COSY Jülich * Complex-Valued Monte Carlo Method and Path Integrals in the Quantum Theory of Localization in Disordered Systems of Scatterers * Radiation Levels at the SSCL Experimental Halls as Obtained Using the CLOR89 Code System * Overview of Matrix Element Methods in Event Generation * Fast Electromagnetic Showers * GEANT Simulation of the RMC Detector at TRIUMF and Neutrino Beams for KAON * Event Display for the CLAS Detector * Monte Carlo Simulation of High Energy Electrons in Toroidal Geometry * GEANT 3.14 vs. EGS4: A Comparison Using the DØ Uranium/Liquid Argon

  19. The INTEGRAL scatterometer SPI

    NASA Technical Reports Server (NTRS)

    Mandrou, P.; Vedrenne, G.; Jean, P.; Kandel, B.; vonBallmoos, P.; Albernhe, F.; Lichti, G.; Schoenfelder, V.; Diehl, R.; Georgii, R.; Teegarden, B.; Mandrou, P.; Vedrenne, G.; Kirchner, T.; Durouchoux, P.; Cordier, B.; Diallo, N.; Sanchez, F.; Payne, B.; Leleux, P.; Caraveo, P.; Matteson, J.; Slassi-Sennon, S.; Lin, R. P.; Skinner, G.

    1997-01-01

    The INTErnational Gamma Ray Astrophysics Laboratory (INTEGRAL) mission's onboard spectrometer, the INTEGRAL spectrometer (SPI), is described. The SPI constitutes one of the four main mission instruments. It is optimized for detailed measurements of gamma ray lines and for the mapping of diffuse sources. It combines a coded aperture mask with an array of large volume, high purity germanium detectors. The detectors make precise measurements of the gamma ray energies over the 20 keV to 8 MeV range. The instrument's characteristics are described and the Monte Carlo simulation of its performance is outlined. It will be possible to study gamma ray emission from compact objects or line profiles with a high energy resolution and a high angular resolution.

  20. Theory and Applications of Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Deible, Michael John

    With the development of peta-scale computers and exa-scale only a few years away, the quantum Monte Carlo (QMC) method, with favorable scaling and inherent parallelizability, is poised to increase its impact on the electronic structure community. The most widely used variation of QMC is the diffusion Monte Carlo (DMC) method. The accuracy of the DMC method is only limited by the trial wave function that it employs. The effect of the trial wave function is studied here by initially developing correlation-consistent Gaussian basis sets for use in DMC calculations. These basis sets give a low variance in variational Monte Carlo calculations and improved convergence in DMC. The orbital type used in the trial wave function is then investigated, and it is shown that Brueckner orbitals result in a DMC energy comparable to a DMC energy with orbitals from density functional theory and significantly lower than orbitals from Hartree-Fock theory. Three large weakly interacting systems are then studied; a water-16 isomer, a methane clathrate, and a carbon dioxide clathrate. The DMC method is seen to be in good agreement with MP2 calculations and provides reliable benchmarks. Several strongly correlated systems are then studied. An H4 model system that allows for a fine tuning of the multi-configurational character of the wave function shows when the accuracy of the DMC method with a single Slater-determinant trial function begins to deviate from multi-reference benchmarks. The weakly interacting face-to-face ethylene dimer is studied with and without a rotation around the pi bond, which is used to increase the multi-configurational nature of the wave function. This test shows that the effect of a multi-configurational wave function in weakly interacting systems causes DMC with a single Slater-determinant to be unable to achieve sub-chemical accuracy. The beryllium dimer is studied, and it is shown that a very large determinant expansion is required for DMC to predict a binding

  1. Coupling between the Liouville equation and a classical Monte Carlo solver for the simulation of electron transport in resonant tunneling diodes

    NASA Astrophysics Data System (ADS)

    Martín, F.; García-García, J.; Oriols, X.; Suñé, J.

    1999-02-01

    A coupling model between a classical Monte Carlo simulator and a Liouville equation solver has been proposed with application to the simulation of vertical transport quantum devices in which extensive regions of the simulation domain behave classically. These devices can be partitioned into regions in which either a classical (Monte Carlo) or a quantum (Wigner formalism) treatment of carrier transport is required, making a coupling scheme between adjacent regions necessary. To this end, the boundary conditions inferred from the Monte Carlo solver for the integration of the Liouville equation in the quantum regions, as well as the injection scheme into the Monte Carlo regions provided by the Wigner distribution function at the boundaries, have been established. The results of this work, using a resonant tunneling diode as a reference device, show that the proposed technique is promising for the simulation of electron transport in quantum devices.

  2. Monte Carlo simulation of light fluence calculation during pleural PDT

    NASA Astrophysics Data System (ADS)

    Meo, Julia L.; Zhu, Timothy

    2013-03-01

    A thorough understanding of light distribution in the desired tissue is necessary for accurate light dosimetry in PDT. Solving the problem of light dose depends, in part, on the geometry of the tissue to be treated. When considering PDT in the thoracic cavity for treatment of malignant, localized tumors such as those observed in malignant pleural mesothelioma (MPM), changes in light dose caused by the cavity geometry should be accounted for in order to improve treatment efficacy. Cavity-like geometries demonstrate what is known as the "integrating sphere effect" where multiple light scattering off the cavity walls induces an overall increase in light dose in the cavity. We present a Monte Carlo simulation of light fluence based on a spherical and an elliptical cavity geometry with various dimensions. The tissue optical properties as well as the non-scattering medium (air and water) varies. We have also introduced small absorption inside the cavity to simulate the effect of blood absorption. We expand the MC simulation to track photons both within the cavity and in the surrounding cavity walls. Simulations are run for a variety of cavity optical properties determined using spectroscopic methods. We concluded from the MC simulation that the light fluence inside the cavity is inversely proportional to the surface area.

  3. Scalable Metropolis Monte Carlo for simulation of hard shapes

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.

    2016-07-01

    We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.

  4. A Monte Carlo simulation approach for flood risk assessment

    NASA Astrophysics Data System (ADS)

    Agili, Hachem; Chokmani, Karem; Oubennaceur, Khalid; Poulin, Jimmy; Marceau, Pascal

    2016-04-01

    Floods are the most frequent natural disaster and the most damaging in Canada. The issue of assessing and managing the risk related to this disaster has become increasingly crucial for both local and national authorities. Brigham, a municipality located in southern Quebec Province, is one of the regions most heavily affected by this disaster, with overflows of the Yamaska River occurring two to three times per year. Since Hurricane Irene hit the region in 2011, causing considerable socio-economic damage, the implementation of mitigation measures has become a major priority for this municipality. To do this, a preliminary study to evaluate the risk to which this region is exposed is essential. Conventionally, approaches based only on the characterization of the hazard (e.g., floodplain extent, flood depth) are adopted to study the risk of flooding. In order to improve the knowledge of this risk, a Monte Carlo simulation approach combining information on the hazard with vulnerability-related aspects of buildings has been developed. This approach integrates three main components, namely hydrological modeling through flow-probability functions, hydraulic modeling using flow-submersion height functions, and the study of building damage based on damage functions adapted to the Quebec habitat. The application of this approach allows estimating the annual average cost of damage caused by floods on buildings. The results will be useful to local authorities in supporting their decisions on risk management and prevention against this disaster.
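
    A toy version of such a simulation chain (annual peak flow, then submersion depth, then building damage) might look like the sketch below; the Gumbel flow distribution, rating curve, and damage function are entirely hypothetical stand-ins for the calibrated, Quebec-specific functions used in the study.

        import numpy as np

        rng = np.random.default_rng(1)

        def depth_from_flow(q):
            # hypothetical rating curve: submersion depth (m) versus peak flow (m^3/s)
            return np.maximum(0.0, 0.8 * np.log(q / 150.0))

        def damage_from_depth(d, value=200_000.0):
            # hypothetical damage function: fraction of building value lost, capped at 60%
            return value * np.minimum(0.6, 0.25 * d)

        # sample annual peak flows from a hypothetical Gumbel fit and propagate
        flows = np.maximum(rng.gumbel(loc=180.0, scale=60.0, size=100_000), 1.0)
        damages = damage_from_depth(depth_from_flow(flows))
        print("expected annual damage per building: $%.0f" % damages.mean())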

  5. Non-analog Monte Carlo estimators for radiation momentum deposition

    SciTech Connect

    Densmore, Jeffery D; Hykes, Joshua M

    2008-01-01

    The standard method for calculating radiation momentum deposition in Monte Carlo simulations is the analog estimator, which tallies the change in a particle's momentum at each interaction with the matter. Unfortunately, the analog estimator can suffer from large amounts of statistical error. In this paper, we present three new non-analog techniques for estimating momentum deposition. Specifically, we use absorption, collision, and track-length estimators to evaluate a simple integral expression for momentum deposition that does not contain terms that can cause large amounts of statistical error in the analog scheme. We compare our new non-analog estimators to the analog estimator with a set of test problems that encompass a wide range of material properties and both isotropic and anisotropic scattering. In nearly all cases, the new non-analog estimators outperform the analog estimator. The track-length estimator consistently yields the highest performance gains, improving upon the analog-estimator figure of merit by factors of up to two orders of magnitude.
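
    The distinction between estimator families can be seen in a setting much simpler than momentum deposition; the sketch below compares collision and track-length estimators of the volume-integrated flux in a purely absorbing 1-D slab (cross section and thickness arbitrary), both unbiased for the same integral.

        import math, random

        sigma_t, slab = 1.0, 2.0  # total cross section (1/cm) and slab thickness (cm)
        n = 200_000
        track = collision = 0.0
        for _ in range(n):
            s = -math.log(1.0 - random.random()) / sigma_t  # distance to collision
            track += min(s, slab)            # track-length estimator: path length in slab
            if s < slab:
                collision += 1.0 / sigma_t   # collision estimator: 1/sigma_t per collision
        track /= n
        collision /= n
        exact = (1.0 - math.exp(-sigma_t * slab)) / sigma_t
        print(track, collision, exact)  # all three agree for this absorbing slab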

  6. Quantum Monte Carlo simulations for disordered Bose systems

    SciTech Connect

    Trivedi, N.

    1992-03-01

    Interacting bosons in a random potential can be used to model {sup 3}He adsorbed in porous media, universal aspects of the superconductor-insulator transition in disordered films, and vortices in disordered type II superconductors. We study a model of bosons on a 2D square lattice with a random potential of strength V and on-site repulsion U. We first describe the path integral Monte Carlo algorithm used to simulate this system. The 2D quantum problem (at T=0) gets mapped onto a classical problem of strings or directed polymers moving in 3D with each string representing the world line of a boson. We discuss efficient ways of sampling the polymer configurations as well as the permutations between the bosons. We calculate the superfluid density and the excitation spectrum. Using these results we distinguish between a superfluid, a localized or "Bose glass" insulator with gapless excitations, and a Mott insulator with a finite gap to excitations (found only at commensurate densities). We discover novel effects arising from the interplay between V and U and present preliminary results for the phase diagram at incommensurate and commensurate densities.

  7. Quantum Monte Carlo simulations for disordered Bose systems

    SciTech Connect

    Trivedi, N.

    1992-03-01

    Interacting bosons in a random potential can be used to model {sup 3}He adsorbed in porous media, universal aspects of the superconductor-insulator transition in disordered films, and vortices in disordered type II superconductors. We study a model of bosons on a 2D square lattice with a random potential of strength V and on-site repulsion U. We first describe the path integral Monte Carlo algorithm used to simulate this system. The 2D quantum problem (at T=0) gets mapped onto a classical problem of strings or directed polymers moving in 3D with each string representing the world line of a boson. We discuss efficient ways of sampling the polymer configurations as well as the permutations between the bosons. We calculate the superfluid density and the excitation spectrum. Using these results we distinguish between a superfluid, a localized or "Bose glass" insulator with gapless excitations, and a Mott insulator with a finite gap to excitations (found only at commensurate densities). We discover novel effects arising from the interplay between V and U and present preliminary results for the phase diagram at incommensurate and commensurate densities.

  8. Hydrodynamic shock wave studies within a kinetic Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Sagert, Irina; Bauer, Wolfgang; Colbry, Dirk; Howell, Jim; Pickett, Rodney; Staber, Alec; Strother, Terrance

    2014-06-01

    We introduce a massively parallelized test-particle based kinetic Monte Carlo code that is capable of modeling the phase space evolution of an arbitrarily sized system that is free to move in and out of the continuum limit. Our code combines advantages of the DSMC and the Point of Closest Approach techniques for solving the collision integral. With that, it achieves high spatial accuracy in simulations of large particle systems while maintaining computational feasibility. Using particle mean free paths which are small with respect to the characteristic length scale of the simulated system, we reproduce hydrodynamic behavior. To demonstrate that our code can retrieve continuum solutions, we perform a test-suite of classic hydrodynamic shock problems consisting of the Sod, the Noh, and the Sedov tests. We find that the results of our simulations which apply millions of test-particles match the analytic solutions well. In addition, we take advantage of the ability of kinetic codes to describe matter out of the continuum regime when applying large particle mean free paths. With that, we study and compare the evolution of shock waves in the hydrodynamic limit and in a regime which is not reachable by hydrodynamic codes.

  9. Markov chain Monte Carlo methods: an introductory example

    NASA Astrophysics Data System (ADS)

    Klauenberg, Katy; Elster, Clemens

    2016-02-01

    When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms may currently be hindered by the difficulty of assessing the convergence of MCMC output and thus of assuring the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
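
    In the spirit of the "few lines of software code" the authors mention, a minimal random-walk Metropolis sketch; the target density, starting point, and step size are arbitrary illustrations.

        import math, random

        def metropolis_hastings(log_target, x0, n, step=1.0):
            # random-walk Metropolis: symmetric Gaussian proposal,
            # accept with probability min(1, target ratio)
            chain, x, lp = [], x0, log_target(x0)
            for _ in range(n):
                y = x + random.gauss(0.0, step)
                lpy = log_target(y)
                if math.log(1.0 - random.random()) < lpy - lp:
                    x, lp = y, lpy
                chain.append(x)
            return chain

        # example target: standard normal (log density up to a constant)
        draws = metropolis_hastings(lambda t: -0.5 * t * t, 0.0, 50_000)
        print(sum(draws) / len(draws))  # ~0, the target mean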

  10. Bond breaking with auxiliary-field quantum Monte Carlo.

    PubMed

    Al-Saidi, W A; Zhang, Shiwei; Krakauer, Henry

    2007-10-14

    Bond stretching mimics different levels of electron correlation and provides a challenging test bed for approximate many-body computational methods. Using the recently developed phaseless auxiliary-field quantum Monte Carlo (AF QMC) method, we examine bond stretching in the well-studied molecules BH and N(2) and in the H(50) chain. To control the sign/phase problem, the phaseless AF QMC method constrains the paths in the auxiliary-field path integrals with an approximate phase condition that depends on a trial wave function. With single Slater determinants from unrestricted Hartree-Fock as trial wave function, the phaseless AF QMC method generally gives better overall accuracy and a more uniform behavior than the coupled cluster CCSD(T) method in mapping the potential-energy curve. In both BH and N(2), we also study the use of multiple-determinant trial wave functions from multiconfiguration self-consistent-field calculations. The increase in computational cost versus the gain in statistical and systematic accuracy are examined. With such trial wave functions, excellent results are obtained across the entire region between equilibrium and the dissociation limit. PMID:17935380

  11. Resist develop prediction by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sohn, Dong-Soo; Jeon, Kyoung-Ah; Sohn, Young-Soo; Oh, Hye-Keun

    2002-07-01

    Various resist develop models have been suggested to express the phenomena, from Dill's pioneering model of 1975 to the recent Shipley enhanced notch model. The statistical Monte Carlo method can be applied to processes such as development and post-exposure bake. The motions of the developer during the development process were traced using this method. We have considered that the surface edge roughness of the resist depends on the weight percentage of protected and de-protected polymer in the resist. The results agree well with other published work. This study can be helpful for the development of new photoresists and developers that can be used to pattern device features smaller than 100 nm.

  12. Exploring theory space with Monte Carlo reweighting

    SciTech Connect

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
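
    The core of such reweighting is multiplying each simulated event's weight by the likelihood ratio of the new model to the generation model. A schematic sketch with a toy one-dimensional stand-in for the models; the densities and the shifted-mean target are hypothetical placeholders for actual matrix-element weights.

        import numpy as np

        rng = np.random.default_rng(2)

        def reweight(x, w, log_p_gen, log_p_new):
            # multiply each event weight by the new/generation likelihood ratio
            return w * np.exp(log_p_new(x) - log_p_gen(x))

        # toy "events" from a benchmark model (unit Gaussian), re-used to predict
        # an observable under a hypothetical shifted model without regenerating
        x = rng.normal(0.0, 1.0, 100_000)
        w = np.ones_like(x)
        w_new = reweight(x, w, lambda s: -0.5 * s ** 2,
                         lambda s: -0.5 * (s - 0.5) ** 2)
        print((w_new * x).sum() / w_new.sum())  # ~0.5, the mean under the new model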

  13. Monte Carlo modeling and meteor showers

    NASA Technical Reports Server (NTRS)

    Kulikova, N. V.

    1987-01-01

    Prediction of short lived increases in the cosmic dust influx, the concentration in lower thermosphere of atoms and ions of meteor origin and the determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.

  14. Noncovalent Interactions by Quantum Monte Carlo.

    PubMed

    Dubecký, Matúš; Mitas, Lubos; Jurečka, Petr

    2016-05-11

    Quantum Monte Carlo (QMC) is a family of stochastic methods for solving quantum many-body problems such as the stationary Schrödinger equation. The review introduces basic notions of electronic structure QMC based on random walks in real space as well as its advances and adaptations to systems with noncovalent interactions. Specific issues such as fixed-node error cancellation, construction of trial wave functions, and efficiency considerations that allow for benchmark quality QMC energy differences are described in detail. Comprehensive overview of articles covers QMC applications to systems with noncovalent interactions over the last three decades. The current status of QMC with regard to efficiency, applicability, and usability by nonexperts together with further considerations about QMC developments, limitations, and unsolved challenges are discussed as well. PMID:27081724

  15. Coherent scatter imaging Monte Carlo simulation.

    PubMed

    Hassan, Laila; MacDonald, Carolyn A

    2016-07-01

    Conventional mammography can suffer from poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter slot scan imaging is an imaging technique which provides additional information and is compatible with conventional mammography. A Monte Carlo simulation of coherent scatter slot scan imaging was performed to assess its performance and provide system optimization. Coherent scatter could be exploited using a system similar to conventional slot scan mammography system with antiscatter grids tilted at the characteristic angle of cancerous tissues. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The simulated carcinomas were detectable for tumors as small as 5 mm in diameter, so coherent scatter analysis using a wide-slot setup could be promising as an enhancement for screening mammography. Employing coherent scatter information simultaneously with conventional mammography could yield a conventional high spatial resolution image with additional coherent scatter information. PMID:27610397

  16. Green's function Monte Carlo in nuclear physics

    SciTech Connect

    Carlson, J.

    1990-01-01

    We review the status of Green's Function Monte Carlo (GFMC) methods as applied to problems in nuclear physics. New methods have been developed to handle the spin and isospin degrees of freedom that are a vital part of any realistic nuclear physics problem, whether at the level of quarks or nucleons. We discuss these methods and then summarize results obtained recently for light nuclei, including ground state energies, three-body forces, charge form factors and the coulomb sum. As an illustration of the applicability of GFMC to quark models, we also consider the possible existence of bound exotic multi-quark states within the framework of flux-tube quark models. 44 refs., 8 figs., 1 tab.

  17. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

    The general accuracy law that governs the Monte Carlo ray-tracing algorithms commonly used for the calculation of radiative entities in the thermal analysis of spacecraft is presented. These entities involve the transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes). In fact, the former is just a particular case of the latter. The accuracy model is later applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered throughout the discussion, similar results can be derived for the absolute error.
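
    For the single-source case the 1/√N law is easy to exhibit directly; the sketch below estimates a configuration factor by cosine-weighted ray casting between two hypothetical parallel unit squares one unit apart, and reports the binomial standard error alongside the estimate.

        import math, random

        def config_factor(n):
            # fraction of cosine-weighted rays leaving a unit square that strike
            # a hypothetical parallel unit square one unit above it
            hits = 0
            for _ in range(n):
                x0, y0 = random.random(), random.random()
                phi = 2.0 * math.pi * random.random()
                sin_t = math.sqrt(random.random())   # cosine-weighted polar angle
                cos_t = math.sqrt(1.0 - sin_t * sin_t)
                t = 1.0 / cos_t                      # travel to the plane z = 1
                x = x0 + t * sin_t * math.cos(phi)
                y = y0 + t * sin_t * math.sin(phi)
                hits += (0.0 <= x <= 1.0) and (0.0 <= y <= 1.0)
            p = hits / n
            return p, math.sqrt(p * (1.0 - p) / n)   # estimate and 1/sqrt(N) error

        print(config_factor(100_000))  # ~(0.20, 0.0013) for this geometry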

  18. MORSE Monte Carlo radiation transport code system

    SciTech Connect

    Emmett, M.B.

    1983-02-01

    This report is an addendum to the MORSE report, ORNL-4972, originally published in 1975. This addendum contains descriptions of several modifications to the MORSE Monte Carlo Code, replacement pages containing corrections, Part II of the report which was previously unpublished, and a new Table of Contents. The modifications include a Klein Nishina estimator for gamma rays. Use of such an estimator required changing the cross section routines to process pair production and Compton scattering cross sections directly from ENDF tapes and writing a new version of subroutine RELCOL. Another modification is the use of free form input for the SAMBO analysis data. This required changing subroutines SCORIN and adding new subroutine RFRE. References are updated, and errors in the original report have been corrected. (WHK)

  19. Exploring theory space with Monte Carlo reweighting

    DOE PAGESBeta

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.

  20. Monte Carlo modeling and meteor showers

    NASA Astrophysics Data System (ADS)

    Kulikova, N. V.

    1987-08-01

    Prediction of short lived increases in the cosmic dust influx, the concentration in lower thermosphere of atoms and ions of meteor origin and the determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.

  1. Quantum Monte Carlo simulations in novel geometries

    NASA Astrophysics Data System (ADS)

    Iglovikov, Vladimir

    Quantum Monte Carlo simulations are giving increasing insight into the physics of strongly interacting bosons, spins, and fermions. Initial work focused on the simplest geometries, like a 2D square lattice. Increasingly, modern research is turning to richer structures such as the honeycomb lattice of graphene, the Lieb lattice of the CuO2 planes of cuprate superconductors, the triangular lattice, and coupled layers. These new geometries possess unique features which affect the physics in profound ways, e.g., the vanishing density of states and relativistic dispersion ("Dirac point") of a honeycomb lattice, frustration on a triangular lattice, and flat bands on a Lieb lattice. This thesis concerns both exploring the performance of QMC algorithms on different geometries (primarily via the "sign problem") and applying those algorithms to several interesting open problems.

  2. Optimized trial functions for quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Huang, Sheng-Yu; Sun, Zhiwei; Lester, William A., Jr.

    1990-01-01

    An algorithm to optimize trial functions for fixed-node quantum Monte Carlo calculations has been developed based on variational random walks. The approach is applied to wave functions that are products of a simple Slater determinant and a correlation factor explicitly dependent on interelectronic distance, and is found to provide improved ground-state total energies. A modification of the method for ground states that makes use of a projection operator technique is shown to make possible the calculation of more accurate excited-state energies. In this optimization method the Young tableaux of the permutation group are used to facilitate the treatment of fermion properties and multiplets. Applications to the ground states of H2, Li2, H3, and H3+, and to the first excited singlets of H2, H3, and H4 are presented and discussed.

  3. Optimized trial functions for quantum Monte Carlo

    SciTech Connect

    Huang, S.; Sun, Z.; Lester, W.A. Jr.

    1990-01-01

    An algorithm to optimize trial functions for fixed-node quantum Monte Carlo calculations has been developed based on variational random walks. The approach is applied to wave functions that are products of a simple Slater determinant and a correlation factor explicitly dependent on interelectronic distance, and is found to provide improved ground-state total energies. A modification of the method for ground states that makes use of a projection operator technique is shown to make possible the calculation of more accurate excited-state energies. In this optimization method the Young tableaux of the permutation group are used to facilitate the treatment of fermion properties and multiplets. Applications to the ground states of H{sub 2}, Li{sub 2}, H{sub 3}, H{sub 3}{sup +}, and to the first-excited singlets of H{sub 2}, H{sub 3}, and H{sub 4} are presented and discussed.

  4. Chemical application of diffusion quantum Monte Carlo

    NASA Technical Reports Server (NTRS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1984-01-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, is discussed. The computational time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes running on traditional computer architectures.

  5. San Carlos Apache Tribe - Energy Organizational Analysis

    SciTech Connect

    Rapp, James; Albert, Steve

    2012-04-01

    The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late 2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: the analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"); start-up staffing and other costs associated with the Phase 1 SCAT energy organization; an intern program; staff training; and tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.

  6. Monte Carlo simulations of medical imaging modalities

    SciTech Connect

    Estes, G.P.

    1998-09-01

    Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.

  7. Monte-Carlo Simulation Balancing in Practice

    NASA Astrophysics Data System (ADS)

    Huang, Shih-Chieh; Coulom, Rémi; Lin, Shun-Shii

    Simulation balancing is a new technique to tune parameters of a playout policy for a Monte-Carlo game-playing program. So far, this algorithm had only been tested in a very artificial setting: it was limited to 5×5 and 6×6 Go, and required a stronger external program that served as a supervisor. In this paper, the effectiveness of simulation balancing is demonstrated in a more realistic setting. A state-of-the-art program, Erica, learned an improved playout policy on the 9×9 board, without requiring any external expert to provide position evaluations. The evaluations were collected by letting the program analyze positions by itself. The previous version of Erica learned pattern weights with the minorization-maximization algorithm. Thanks to simulation balancing, its playing strength was improved from a winning rate of 69% to 78% against Fuego 0.4.

  8. Monte Carlo simulations in Nuclear Medicine

    SciTech Connect

    Loudos, George K.

    2007-11-26

    Molecular imaging technologies provide unique abilities to localise signs of disease before symptoms appear, assist in drug testing, optimize and personalize therapy, and assess the efficacy of treatment regimes for different types of cancer. Monte Carlo simulation packages are used as an important tool for the optimal design of detector systems. In addition, they have demonstrated potential to improve image quality and acquisition protocols. Many general-purpose (MCNP, Geant4, etc.) or dedicated (SimSET, etc.) codes have been developed, aiming to provide accurate and fast results. Special emphasis will be given to the GATE toolkit. The GATE code, currently under development by the OpenGATE collaboration, is the most accurate and promising code for performing realistic simulations. The purpose of this article is to introduce the non-expert reader to the current status of MC simulations in nuclear medicine, briefly provide examples of currently simulated systems, and present future challenges that include the simulation of clinical studies and dosimetry applications.

  9. Monte Carlo simulations in Nuclear Medicine

    NASA Astrophysics Data System (ADS)

    Loudos, George K.

    2007-11-01

    Molecular imaging technologies provide unique abilities to localise signs of disease before symptoms appear, assist in drug testing, optimize and personalize therapy, and assess the efficacy of treatment regimes for different types of cancer. Monte Carlo simulation packages are used as an important tool for the optimal design of detector systems. In addition, they have demonstrated potential to improve image quality and acquisition protocols. Many general-purpose (MCNP, Geant4, etc.) or dedicated (SimSET, etc.) codes have been developed, aiming to provide accurate and fast results. Special emphasis will be given to the GATE toolkit. The GATE code, currently under development by the OpenGATE collaboration, is the most accurate and promising code for performing realistic simulations. The purpose of this article is to introduce the non-expert reader to the current status of MC simulations in nuclear medicine, briefly provide examples of currently simulated systems, and present future challenges that include the simulation of clinical studies and dosimetry applications.

  10. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m)Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  11. Accelerated GPU based SPECT Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  12. Quantum Monte Carlo studies on small molecules

    NASA Astrophysics Data System (ADS)

    Galek, Peter T. A.; Handy, Nicholas C.; Lester, William A., Jr.

    The Variational Monte Carlo (VMC) and Fixed-Node Diffusion Monte Carlo (FNDMC) methods have been examined through studies on small molecules. New programs have been written which implement the (by now) standard algorithms for VMC and FNDMC. Throughout our studies we have employed, and investigated the accuracy of, the common Slater-Jastrow trial wave function. Firstly, we have studied a range of sizes of the Jastrow correlation function of the Boys-Handy form, obtained using our optimization program with analytical derivatives of the central moments in the local energy. Secondly, we have studied the effects of Slater-type orbitals (STOs) that display the exact cusp behaviour at nuclei. The orbitals make up the all-important trial determinant, which determines the fixed nodal surface. We report all-electron calculations for the ground state energies of Li2, Be2, H2O, NH3, CH4 and H2CO, in all cases but one with accuracy in excess of 95%. Finally, we report an investigation of the ground state energies, dissociation energies and ionization potentials of NH and NH+. Recent attention paid in the literature to these species allows for an extensive comparison with other ab initio methods. We obtain accurate properties for the species and reveal a favourable tendency for fixed-node and other systematic errors to cancel. As a result of our accurate predictions, we are able to obtain a value for the heat of formation of NH, which agrees to within 1 kcal mol-1 with other ab initio techniques and to within 0.2 kcal mol-1 of the experimental value.

  13. Monte Carlo scatter correction for SPECT

    NASA Astrophysics Data System (ADS)

    Liu, Zemei

    The goal of this dissertation is to present a quantitatively accurate and computationally fast scatter correction method that is robust and easily accessible for routine applications in SPECT imaging. A Monte Carlo based scatter estimation method is investigated and developed further. The Monte Carlo simulation program SIMIND (Simulating Medical Imaging Nuclear Detectors) was specifically developed to simulate clinical SPECT systems. The SIMIND scatter estimation (SSE) method was developed further using a multithreading technique to distribute the scatter estimation task across multiple threads running concurrently on multi-core CPUs to accelerate the scatter estimation process. An analytical collimator that yields less noise was used during SSE. The research includes the addition to SIMIND of charge transport modeling in cadmium zinc telluride (CZT) detectors. Phenomena associated with radiation-induced charge transport, including charge trapping, charge diffusion, charge sharing between neighboring detector pixels, as well as uncertainties in the detection process, are addressed. Experimental measurements and simulation studies were designed for scintillation crystal based SPECT and CZT based SPECT systems to verify and evaluate the expanded SSE method. Jaszczak Deluxe and Anthropomorphic Torso Phantoms (Data Spectrum Corporation, Hillsborough, NC, USA) were used for experimental measurements, and digital versions of the same phantoms were employed during simulations to mimic experimental acquisitions. This study design enabled easy comparison of experimental and simulated data. The results have consistently shown that the SSE method performed similarly to or better than the triple energy window (TEW) and effective scatter source estimation (ESSE) methods for experiments on all the clinical SPECT systems. The SSE method is proven to be a viable method for scatter estimation for routine clinical use.

  14. Monte Carlo Test Assembly for Item Pool Analysis and Extension

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.

    2005-01-01

    A new test assembly algorithm based on a Monte Carlo random search is presented in this article. A major advantage of the Monte Carlo test assembly over other approaches (integer programming or enumerative heuristics) is that it performs a uniform sampling from the item pool, which provides every feasible item combination (test) with an equal…
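
    The uniform-sampling property described here can be reproduced with plain rejection sampling: draw fixed-length item combinations uniformly and keep those satisfying the constraints, so every feasible test is retained with equal probability. A hedged sketch with a single made-up difficulty constraint (the `difficulty` field and its bounds are illustrative, not from the article):

```python
import random

def feasible(test, pool):
    # Hypothetical constraint: total test difficulty within bounds.
    total = sum(pool[i]["difficulty"] for i in test)
    return 40.0 <= total <= 60.0

def monte_carlo_assembly(pool, test_length, n_draws=10_000):
    """Uniformly sample feasible tests by rejection: each draw is a
    uniform random item combination, so every feasible combination
    is accepted with equal probability."""
    tests = []
    for _ in range(n_draws):
        candidate = tuple(sorted(random.sample(range(len(pool)), test_length)))
        if feasible(candidate, pool):
            tests.append(candidate)
    return tests

pool = [{"difficulty": random.uniform(0.5, 2.0)} for _ in range(200)]
print(len(monte_carlo_assembly(pool, 40)))
```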

  15. 33 CFR 117.267 - Big Carlos Pass.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Big Carlos Pass. 117.267 Section 117.267 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Florida § 117.267 Big Carlos Pass. The draw of...

  16. 33 CFR 117.267 - Big Carlos Pass.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Big Carlos Pass. 117.267 Section 117.267 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Florida § 117.267 Big Carlos Pass. The draw of...

  17. 33 CFR 117.267 - Big Carlos Pass.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Big Carlos Pass. 117.267 Section 117.267 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Florida § 117.267 Big Carlos Pass. The draw of...

  18. 33 CFR 117.267 - Big Carlos Pass.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Big Carlos Pass. 117.267 Section 117.267 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Florida § 117.267 Big Carlos Pass. The draw of...

  19. 33 CFR 117.267 - Big Carlos Pass.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Big Carlos Pass. 117.267 Section 117.267 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Florida § 117.267 Big Carlos Pass. The draw of...

  20. Economic Risk Analysis: Using Analytical and Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    O'Donnell, Brendan R.; Hickner, Michael A.; Barna, Bruce A.

    2002-01-01

    Describes the development and instructional use of a Microsoft Excel spreadsheet template that facilitates analytical and Monte Carlo risk analysis of investment decisions. Discusses a variety of risk assessment methods followed by applications of the analytical and Monte Carlo methods. Uses a case study to illustrate use of the spreadsheet tool…

  1. abcpmc: Approximate Bayesian Computation for Population Monte-Carlo code

    NASA Astrophysics Data System (ADS)

    Akeret, Joel

    2015-04-01

    abcpmc is a Python Approximate Bayesian Computing (ABC) Population Monte Carlo (PMC) implementation based on Sequential Monte Carlo (SMC) with Particle Filtering techniques. It is extendable with k-nearest neighbour (KNN) or optimal local covariance matrix (OLCM) perturbation kernels and has built-in support for massively parallelized sampling on a cluster using MPI.
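
    At the core of any ABC scheme, including the PMC variant abcpmc implements, is the rejection step: draw parameters from a proposal, simulate data, and keep draws whose simulated data land within a tolerance of the observation. A generic sketch of that step (this is not the abcpmc API, and the Gaussian toy model is illustrative):

```python
import numpy as np

def abc_rejection(observed, prior_sample, simulate, distance, eps, n=10_000):
    """Basic ABC rejection: keep parameter draws whose simulated data
    fall within eps of the observation. PMC variants reuse accepted
    draws as a weighted proposal with a shrinking eps schedule."""
    accepted = []
    for _ in range(n):
        theta = prior_sample()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a Gaussian with known sigma = 1.
rng = np.random.default_rng(0)
observed = rng.normal(2.0, 1.0, size=100)
post = abc_rejection(
    observed,
    prior_sample=lambda: rng.uniform(-5, 5),
    simulate=lambda mu: rng.normal(mu, 1.0, size=100),
    distance=lambda a, b: abs(a.mean() - b.mean()),
    eps=0.1,
)
print(post.mean(), post.size)
```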

  2. 4. Photographic copy of map. San Carlos Irrigation Project, Gila ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. Photographic copy of map. San Carlos Irrigation Project, Gila River Indian Reservation, Pinal County, Arizona. Department of the Interior. Office of Indian Affairs. 1940. (Source: SCIP Office, Coolidge, AZ) Photograph is an 8"x10" enlargement from a 4"x5" negative. - San Carlos Irrigation Project, Lands North & South of Gila River, Coolidge, Pinal County, AZ

  3. The Monte Carlo Method. Popular Lectures in Mathematics.

    ERIC Educational Resources Information Center

    Sobol', I. M.

    The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…

  4. Integration and Integrity.

    ERIC Educational Resources Information Center

    Cassano, Paul; Antol, Rayna A.

    2001-01-01

    Explains how two middle school teachers cooperated in integrating regular and gifted students with disabled students. Focuses on the disabled students' collaboration with their peers and on their social skill development rather than their academic development. (YDS)

  5. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  6. A novel pretreatment method of three-dimensional fluorescence data for quantitative measurement of component contents in mixture.

    PubMed

    Xu, Jing; Wang, Yu-Tian; Liu, Xiao-Fei

    2015-04-01

    Three-dimensional fluorescence technique is commonly used for the determination of component contents in a mixture. Fluorescence intensity data are used directly in the fluorescent spectrum data processing method, and the relationship between fluorescence intensity values and concentrations is linear. Random noise from the fluorescence spectrometer is inevitable in the measurement process, and its presence reduces the measurement accuracy. To reduce random noise and improve the measurement sensitivity, a novel pretreatment method of three-dimensional fluorescence data is proposed. The method is based on quasi-Monte Carlo integration. Because integration increases the slope of the fluorescence intensity data, the measurement sensitivity is improved. At the same time, summing different exponentials of the fluorescence intensity at the integration points reduces the random noise, improving the measurement sensitivity further. The recovery rates of a mixture of gasoline, kerosene and diesel oil are calculated to validate the effectiveness of the method. PMID:25638431
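
    The quasi-Monte Carlo integration underlying the proposed pretreatment can be illustrated with a self-contained Halton-sequence integrator; the smooth test integrand below is a stand-in, not the paper's fluorescence model:

```python
import numpy as np

def halton(n, dims):
    """First n points of the Halton low-discrepancy sequence in [0,1)^dims."""
    primes = [2, 3, 5, 7, 11, 13][:dims]
    pts = np.empty((n, dims))
    for d, base in enumerate(primes):
        for i in range(n):
            f, x, k = 1.0, 0.0, i + 1
            while k > 0:
                f /= base
                x += f * (k % base)
                k //= base
            pts[i, d] = x
    return pts

def qmc_integrate(f, n, dims):
    """Estimate the integral of f over the unit cube with Halton nodes."""
    return f(halton(n, dims)).mean()

# Example: a smooth 2-D test integrand; the exact value is
# (integral of exp(-x^2) over [0,1])^2, approximately 0.5577.
f = lambda u: np.exp(-(u ** 2).sum(axis=1))
print(qmc_integrate(f, 4096, 2))
```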

  7. Top Quark Mass Measurement in the lepton+jets Channel Using a Matrix Element Method and in situ Jet Energy Calibration

    NASA Astrophysics Data System (ADS)

    Aaltonen, T.; Álvarez González, B.; Amerio, S.; Amidei, D.; Anastassov, A.; Annovi, A.; Antos, J.; Apollinari, G.; Appel, J. A.; Apresyan, A.; Arisawa, T.; Artikov, A.; Asaadi, J.; Ashmanskas, W.; Auerbach, B.; Aurisano, A.; Azfar, F.; Badgett, W.; Barbaro-Galtieri, A.; Barnes, V. E.; Barnett, B. A.; Barria, P.; Bartos, P.; Bauce, M.; Bauer, G.; Bedeschi, F.; Beecher, D.; Behari, S.; Bellettini, G.; Bellinger, J.; Benjamin, D.; Beretvas, A.; Bhatti, A.; Binkley, M.; Bisello, D.; Bizjak, I.; Bland, K. R.; Blumenfeld, B.; Bocci, A.; Bodek, A.; Bortoletto, D.; Boudreau, J.; Boveia, A.; Brau, B.; Brigliadori, L.; Brisuda, A.; Bromberg, C.; Brucken, E.; Bucciantonio, M.; Budagov, J.; Budd, H. S.; Budd, S.; Burkett, K.; Busetto, G.; Bussey, P.; Buzatu, A.; Calancha, C.; Camarda, S.; Campanelli, M.; Campbell, M.; Canelli, F.; Canepa, A.; Carls, B.; Carlsmith, D.; Carosi, R.; Carrillo, S.; Carron, S.; Casal, B.; Casarsa, M.; Castro, A.; Catastini, P.; Cauz, D.; Cavaliere, V.; Cavalli-Sforza, M.; Cerri, A.; Cerrito, L.; Chen, Y. C.; Chertok, M.; Chiarelli, G.; Chlachidze, G.; Chlebana, F.; Cho, K.; Chokheli, D.; Chou, J. P.; Chung, W. H.; Chung, Y. S.; Ciobanu, C. I.; Ciocci, M. A.; Clark, A.; Compostella, G.; Convery, M. E.; Conway, J.; Corbo, M.; Cordelli, M.; Cox, C. A.; Cox, D. J.; Crescioli, F.; Cuenca Almenar, C.; Cuevas, J.; Culbertson, R.; Dagenhart, D.; D'Ascenzo, N.; Datta, M.; de Barbaro, P.; de Cecco, S.; de Lorenzo, G.; Dell'Orso, M.; Deluca, C.; Demortier, L.; Deng, J.; Deninno, M.; Devoto, F.; D'Errico, M.; di Canto, A.; di Ruzza, B.; Dittmann, J. R.; D'Onofrio, M.; Donati, S.; Dong, P.; Dorigo, T.; Ebina, K.; Elagin, A.; Eppig, A.; Erbacher, R.; Errede, D.; Errede, S.; Ershaidat, N.; Eusebi, R.; Fang, H. C.; Farrington, S.; Feindt, M.; Fernandez, J. P.; Ferrazza, C.; Field, R.; Flanagan, G.; Forrest, R.; Frank, M. J.; Franklin, M.; Freeman, J. C.; Furic, I.; Gallinaro, M.; Galyardt, J.; Garcia, J. E.; Garfinkel, A. F.; Garosi, P.; Gerberich, H.; Gerchtein, E.; Giagu, S.; Giakoumopoulou, V.; Giannetti, P.; Gibson, K.; Ginsburg, C. M.; Giokaris, N.; Giromini, P.; Giunta, M.; Giurgiu, G.; Glagolev, V.; Glenzinski, D.; Gold, M.; Goldin, D.; Goldschmidt, N.; Golossanov, A.; Gomez, G.; Gomez-Ceballos, G.; Goncharov, M.; González, O.; Gorelov, I.; Goshaw, A. T.; Goulianos, K.; Gresele, A.; Grinstein, S.; Grosso-Pilcher, C.; Group, R. C.; Guimaraes da Costa, J.; Gunay-Unalan, Z.; Haber, C.; Hahn, S. R.; Halkiadakis, E.; Hamaguchi, A.; Han, J. Y.; Happacher, F.; Hara, K.; Hare, D.; Hare, M.; Harr, R. F.; Hatakeyama, K.; Hays, C.; Heck, M.; Heinrich, J.; Herndon, M.; Hewamanage, S.; Hidas, D.; Hocker, A.; Hopkins, W.; Horn, D.; Hou, S.; Hughes, R. E.; Hurwitz, M.; Husemann, U.; Hussain, N.; Hussein, M.; Huston, J.; Introzzi, G.; Iori, M.; Ivanov, A.; James, E.; Jang, D.; Jayatilaka, B.; Jeon, E. J.; Jha, M. K.; Jindariani, S.; Johnson, W.; Jones, M.; Joo, K. K.; Jun, S. Y.; Junk, T. R.; Kamon, T.; Karchin, P. E.; Kato, Y.; Ketchum, W.; Keung, J.; Khotilovich, V.; Kilminster, B.; Kim, D. H.; Kim, H. S.; Kim, H. W.; Kim, J. E.; Kim, M. J.; Kim, S. B.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kirby, M.; Klimenko, S.; Kondo, K.; Kong, D. J.; Konigsberg, J.; Kotwal, A. V.; Kreps, M.; Kroll, J.; Krop, D.; Krumnack, N.; Kruse, M.; Krutelyov, V.; Kuhr, T.; Kurata, M.; Kwang, S.; Laasanen, A. T.; Lami, S.; Lammel, S.; Lancaster, M.; Lander, R. L.; Lannon, K.; Lath, A.; Latino, G.; Lazzizzera, I.; Lecompte, T.; Lee, E.; Lee, H. S.; Lee, J. S.; Lee, S. W.; Leo, S.; Leone, S.; Lewis, J. D.; Lin, C.-J.; Linacre, J.; Lindgren, M.; Lipeles, E.; Lister, A.; Litvintsev, D. O.; Liu, C.; Liu, Q.; Liu, T.; Lockwitz, S.; Lockyer, N. S.; Loginov, A.; Lucchesi, D.; Lueck, J.; Lujan, P.; Lukens, P.; Lungu, G.; Lys, J.; Lysak, R.; Madrak, R.; Maeshima, K.; Makhoul, K.; Maksimovic, P.; Malik, S.; Manca, G.; Manousakis-Katsikakis, A.; Margaroli, F.; Marino, C.; Martínez, M.; Martínez-Ballarín, R.; Mastrandrea, P.; Mathis, M.; Mattson, M. E.; Mazzanti, P.; McFarland, K. S.; McIntyre, P.; McNulty, R.; Mehta, A.; Mehtala, P.; Menzione, A.; Mesropian, C.; Miao, T.; Mietlicki, D.; Mitra, A.; Miyake, H.; Moed, S.; Moggi, N.; Mondragon, M. N.; Moon, C. S.; Moore, R.; Morello, M. J.; Morlock, J.; Movilla Fernandez, P.; Mukherjee, A.; Muller, Th.; Murat, P.; Mussini, M.; Nachtman, J.; Nagai, Y.; Naganoma, J.; Nakano, I.; Napier, A.; Nett, J.; Neu, C.; Neubauer, M. S.; Nielsen, J.; Nodulman, L.; Norniella, O.; Nurse, E.; Oakes, L.; Oh, S. H.; Oh, Y. D.; Oksuzian, I.; Okusawa, T.; Orava, R.; Ortolan, L.; Pagan Griso, S.; Pagliarone, C.; Palencia, E.; Papadimitriou, V.; Paramonov, A. A.; Patrick, J.; Pauletta, G.; Paulini, M.; Paus, C.; Pellett, D. E.; Penzo, A.; Phillips, T. J.; Piacentino, G.; Pianori, E.; Pilot, J.; Pitts, K.; Plager, C.; Pondrom, L.

    2010-12-01

    A precision measurement of the top quark mass mt is obtained using a sample of tt¯ events from pp¯ collisions at the Fermilab Tevatron with the CDF II detector. Selected events require an electron or muon, large missing transverse energy, and exactly four high-energy jets, at least one of which is tagged as coming from a b quark. A likelihood is calculated using a matrix element method with quasi-Monte Carlo integration, taking into account finite detector resolution and jet mass effects. The event likelihood is a function of mt and a parameter ΔJES used to calibrate the jet energy scale in situ. Using a total of 1087 events in 5.6 fb-1 of integrated luminosity, a value of mt = 173.0 ± 1.2 GeV/c2 is measured.
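
    The structure of such a likelihood evaluation can be sketched with a toy one-dimensional stand-in: a Breit-Wigner "matrix element" integrated against a Gaussian transfer function over quasi-Monte Carlo (Sobol) nodes. Everything here (widths, resolutions, the event list) is illustrative rather than CDF's actual calculation, and scipy's qmc module is assumed available:

```python
import numpy as np
from scipy.stats import qmc, norm, cauchy

def event_likelihood(m_obs, m_top, width=1.5, sigma_det=15.0, n_qmc=2048):
    """Toy matrix-element-style likelihood: integrate a Breit-Wigner
    "matrix element" against a Gaussian transfer function describing
    detector resolution, using quasi-Monte Carlo (Sobol) nodes."""
    u = qmc.Sobol(d=1, scramble=True, seed=1).random(n_qmc).ravel()
    # Map uniform QMC nodes to parton-level masses through the Gaussian
    # transfer function centered on the observed mass (importance map).
    m_parton = norm.ppf(u, loc=m_obs, scale=sigma_det)
    weights = cauchy.pdf(m_parton, loc=m_top, scale=width / 2)
    return weights.mean()

events = [168.0, 175.2, 171.4, 180.9]  # hypothetical observed masses (GeV)
grid = np.linspace(165, 180, 61)
nll = [-sum(np.log(event_likelihood(m, mt)) for m in events) for mt in grid]
print("best m_t ~", grid[int(np.argmin(nll))], "GeV")
```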

  8. Top Quark Mass Measurement in the lepton+jets Channel Using a Matrix Element Method and in situ Jet Energy Calibration

    SciTech Connect

    Aaltonen, T.; Brucken, E.; Devoto, F.; Mehtala, P.; Orava, R.; Alvarez Gonzalez, B.; Casal, B.; Gomez, G.; Palencia, E.; Rodrigo, T.; Ruiz, A.; Scodellaro, L.; Vila, I.; Vilar, R.; Amerio, S.; Dorigo, T.; Gresele, A.; Lazzizzera, I.; Amidei, D.; Campbell, M.

    2010-12-17

    A precision measurement of the top quark mass m{sub t} is obtained using a sample of tt events from pp collisions at the Fermilab Tevatron with the CDF II detector. Selected events require an electron or muon, large missing transverse energy, and exactly four high-energy jets, at least one of which is tagged as coming from a b quark. A likelihood is calculated using a matrix element method with quasi-Monte Carlo integration taking into account finite detector resolution and jet mass effects. The event likelihood is a function of m{sub t} and a parameter {Delta}{sub JES} used to calibrate the jet energy scale in situ. Using a total of 1087 events in 5.6 fb{sup -1} of integrated luminosity, a value of m{sub t}=173.0{+-}1.2 GeV/c{sup 2} is measured.

  9. Ab initio quantum Monte Carlo simulations of the uniform electron gas without fixed nodes

    NASA Astrophysics Data System (ADS)

    Groth, S.; Schoof, T.; Dornheim, T.; Bonitz, M.

    2016-02-01

    The uniform electron gas (UEG) at finite temperature is of key relevance for many applications in the warm dense matter regime, e.g., dense plasmas and laser excited solids. Also, the quality of density functional theory calculations crucially relies on the availability of accurate data for the exchange-correlation energy. Recently, results for N = 33 spin-polarized electrons at high density, rs = r̄/aB ≲ 4, and low temperature have been obtained with the configuration path integral Monte Carlo (CPIMC) method [T. Schoof et al., Phys. Rev. Lett. 115, 130402 (2015), 10.1103/PhysRevLett.115.130402]. To achieve these results, the original CPIMC algorithm [T. Schoof et al., Contrib. Plasma Phys. 51, 687 (2011), 10.1002/ctpp.201100012] had to be further optimized to cope with the fermion sign problem (FSP). It is the purpose of this paper to give detailed information on the manifestation of the FSP in CPIMC simulations of the UEG and to demonstrate how it can be turned into a controllable convergence problem. In addition, we present new thermodynamic results for higher temperatures. Finally, to overcome the limitations of CPIMC towards strong coupling, we invoke an independent method, the recently developed permutation blocking path integral Monte Carlo approach [T. Dornheim et al., J. Chem. Phys. 143, 204101 (2015), 10.1063/1.4936145]. The combination of both approaches is able to yield ab initio data for the UEG over the entire density range, above a temperature of about one half of the Fermi temperature. Comparison with restricted path integral Monte Carlo data [E. W. Brown et al., Phys. Rev. Lett. 110, 146405 (2013), 10.1103/PhysRevLett.110.146405] allows us to quantify the systematic error arising from the free particle nodes.

  10. On scale invariant features and sequential Monte Carlo sampling for bronchoscope tracking

    NASA Astrophysics Data System (ADS)

    Luó, Xióngbiao; Feuerstein, Marco; Kitasaka, Takayuki; Natori, Hiroshi; Takabatake, Hirotsugu; Hasegawa, Yoshinori; Mori, Kensaku

    2011-03-01

    This paper presents an improved bronchoscope tracking method for bronchoscopic navigation using scale invariant features and sequential Monte Carlo sampling. Although image-based methods are widely discussed in the community of bronchoscope tracking, they are still limited to characteristic information such as bronchial bifurcations or folds and cannot automatically resume the tracking procedure after failures, which usually result from problematic bronchoscopic video frames or airway deformation. To overcome these problems, we propose a new approach that integrates scale invariant feature-based camera motion estimation into sequential Monte Carlo sampling to achieve accurate and robust tracking. In our approach, sequential Monte Carlo sampling is employed to recursively estimate the posterior probability densities of the bronchoscope camera motion parameters according to the observation model based on scale invariant feature-based camera motion recovery. We evaluate our proposed method on patient datasets. Experimental results illustrate that our proposed method can track a bronchoscope more accurately and robustly than the current state-of-the-art method, in particular increasing the tracking performance by 38.7% without using an additional position sensor.
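
    The sequential Monte Carlo machinery referred to here is, at its core, a bootstrap particle filter: predict particles through a motion model, weight them by the observation likelihood, and resample. A minimal 1-D sketch under stand-in Gaussian models (not the paper's camera-motion observation model):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, proc_std=0.5, obs_std=1.0):
    """Bootstrap (sequential Monte Carlo) filter for a 1-D random-walk
    state observed in Gaussian noise: predict, weight, resample."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, proc_std, n_particles)   # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)   # weight
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)       # resample
        particles = particles[idx]
        estimates.append(particles.mean())                    # posterior mean
    return np.array(estimates)

true_path = np.cumsum(rng.normal(0, 0.5, 100))
obs = true_path + rng.normal(0, 1.0, 100)
est = particle_filter(obs)
print("RMS error:", np.sqrt(np.mean((est - true_path) ** 2)))
```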

  11. Protecting marine biodiversity to preserve ecosystem functioning: A tribute to Carlo Heip

    NASA Astrophysics Data System (ADS)

    Herman, Peter; Warwick, Richard; Aller, Robert; Arvanitidis, Christos; Hewitt, Judi; Stal, Lucas; Vincx, Magda

    2015-04-01

    Carlo Heip was the highly respected Editor-In-Chief of the Journal of Sea Research until his untimely death on 15 February 2013. As a tribute, the Journal wished to organize a special volume in his honour, the scope of which would provide an overview of the current state of affairs and the future outlook of marine biodiversity, a field of research to which Carlo made a major contribution. The volume places special emphasis on how marine biodiversity links to ecosystem functioning. Authors were invited to address such issues as: Which ecosystem functions are vulnerable to loss of biodiversity and how is the relation causally structured? How do trophic and non-trophic networks in ecosystems function and how do they depend on biodiversity? What is the role of spatial structuring for biodiversity? What is the role of biodiversity in biogeochemical fluxes at different scales? What are the new frontiers in the study of marine biodiversity and how can functional aspects be integrated in them? In this approach we wanted to cover a broad range of organisms reflecting Carlo's interests, the whole marine area from coastal systems to the deep sea, and spatial scales from single locations to worldwide databases.

  12. Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo

    By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix approach (QTM). Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) root distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE in which, besides the local integrals of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.

  13. Temperature-extrapolation method for Implicit Monte Carlo - Radiation hydrodynamics calculations

    SciTech Connect

    McClarren, R. G.; Urbatsch, T. J.

    2013-07-01

    We present a method for implementing temperature extrapolation in Implicit Monte Carlo solutions to radiation hydrodynamics problems. The method is based on a BDF-2 type integration to estimate a change in material temperature over a time step. We present results for radiation only problems in an infinite medium and for a 2-D Cartesian hohlraum problem. Additionally, radiation hydrodynamics simulations are presented for an RZ hohlraum problem and a related 3D problem. Our results indicate that improvements in noise and general behavior are possible. We present considerations for future investigations and implementations. (authors)
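
    The abstract does not spell out the discretization, so the following is a sketch under assumptions rather than the authors' actual scheme: a standard BDF-2 step for the material temperature equation, together with the extrapolated temperature built from the two previous time levels.

```latex
% A sketch, not the paper's actual discretization: the standard BDF-2
% step for a material temperature equation dT/dt = f(T),
\[
  \frac{3\,T^{n+1} - 4\,T^{n} + T^{n-1}}{2\,\Delta t} = f\!\left(T^{n+1}\right),
\]
% from which the change in temperature over the next step can be
% estimated by extrapolating the two previous time levels:
\[
  T^{n+1}_{\text{extrap}} = 2\,T^{n} - T^{n-1}.
\]
```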

  14. Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.

    2014-10-01

    We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble gas solid where van der Waals binding is challenging for traditional DFT methods. The calculations show agreement with data below 20 GPa and that the high-pressure melt curve is well described by a Lindemann behavior up to at least 80 GPa, in contrast to DAC data.

  15. Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo

    SciTech Connect

    Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.

    2014-10-01

    We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments of the system. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble gas solid where van der Waals binding is challenging for traditional DFT methods. The calculations show excellent agreement with data below 20 GPa and that the high-pressure melt curve is well described by a Lindemann behavior up to at least 80 GPa, a finding in stark contrast to DAC data.

  16. Transitions between imperfectly ordered crystalline structures: A phase switch Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Wilms, Dorothea; Wilding, Nigel B.; Binder, Kurt

    2012-05-01

    A model for two-dimensional colloids confined laterally by “structured boundaries” (i.e., ones that impose a periodicity along the slit) is studied by Monte Carlo simulations. When the distance D between the confining walls is reduced at constant particle number from an initial value D0, for which a crystalline structure commensurate with the imposed periodicity fits, to smaller values, a succession of phase transitions to imperfectly ordered structures occur. These structures have a reduced number of rows parallel to the boundaries (from n to n-1 to n-2, etc.) and are accompanied by an almost periodic strain pattern, due to “soliton staircases” along the boundaries. Since standard simulation studies of such transitions are hampered by huge hysteresis effects, we apply the phase switch Monte Carlo method to estimate the free energy difference between the structures as a function of the misfit between D and D0, thereby locating where the transitions occur in equilibrium. For comparison, we also obtain this free energy difference from a thermodynamic integration method: The results agree, but the effort required to obtain the same accuracy as provided by phase switch Monte Carlo would be at least three orders of magnitude larger. We also show for a situation where several “candidate structures” exist for a phase, that phase switch Monte Carlo can clearly distinguish the metastable structures from the stable one. Finally, applying the method in the conjugate statistical ensemble (where the normal pressure conjugate to D is taken as an independent control variable), we show that the standard equivalence between the conjugate ensembles of statistical mechanics is violated.

  17. Recent advances and future prospects for Monte Carlo

    SciTech Connect

    Brown, Forrest B

    2010-01-01

    The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.

  18. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    SciTech Connect

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

  19. Exponential Monte Carlo Convergence on a Homogeneous Right Parallelepiped Using the Reduced Source Method with Legendre Expansion

    SciTech Connect

    Favorite, J.A.

    1999-09-01

    In previous work, exponential convergence of Monte Carlo solutions using the reduced source method with Legendre expansion has been achieved only in one-dimensional rod and slab geometries. In this paper, the method is applied to three-dimensional (right parallelepiped) problems, with resulting evidence suggesting success. As implemented in this paper, the method approximates an angular integral of the flux with a discrete-ordinates numerical quadrature. It is possible that this approximation introduces an inconsistency that must be addressed.
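
    The quadrature approximation mentioned here replaces the angular integral of the flux by a weighted sum over discrete ordinates; a minimal sketch with Gauss-Legendre nodes (the flux below is a stand-in test function, not the paper's):

```python
import numpy as np

# Discrete-ordinates style quadrature: approximate the angular integral
# of a flux over mu in [-1, 1] by a Gauss-Legendre weighted sum.
def angular_integral(psi, n_ordinates=8):
    mu, w = np.polynomial.legendre.leggauss(n_ordinates)
    return np.sum(w * psi(mu))

# Check against a flux with a known integral: psi(mu) = 1 + mu**2
# integrates to 2 + 2/3 over [-1, 1].
print(angular_integral(lambda mu: 1.0 + mu ** 2))  # ~2.6667
```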

  20. A radiating shock evaluated using Implicit Monte Carlo Diffusion

    SciTech Connect

    Cleveland, M.; Gentile, N.

    2013-07-01

    Implicit Monte Carlo [1] (IMC) has been shown to be very expensive when used to evaluate a radiation field in opaque media. Implicit Monte Carlo Diffusion (IMD) [2], which evaluates a spatially discretized diffusion equation using a Monte Carlo algorithm, can be used to reduce the cost of evaluating the radiation field in opaque media [2]. This work couples IMD to the hydrodynamics equations to evaluate opaque diffusive radiating shocks. The Lowrie semi-analytic diffusive radiating shock benchmark is used to verify our implementation of the coupled system of equations. (authors)

  1. Variance reduction in Monte Carlo analysis of rarefied gas diffusion.

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.

    1972-01-01

    The problem of rarefied diffusion between parallel walls is solved using the Monte Carlo method. The diffusing molecules are evaporated or emitted from one of the two parallel walls and diffuse through another molecular species. The Monte Carlo analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.
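
    The biasing idea can be sketched in a few lines: sample the walk under modified transition probabilities, and multiply each payoff by the likelihood ratio of the true to the biased dynamics, so the expectation is unchanged while the variance of a tail estimate shrinks. A toy 1-D version under made-up parameters, not the paper's parallel-wall model:

```python
import numpy as np

rng = np.random.default_rng(1)

def walk_payoff(p_right, bias=None, n_steps=30, target=10, n_walks=20_000):
    """Expected payoff (probability of reaching +target) for a 1-D random
    walk. Biasing the step probabilities and re-weighting each walk by its
    likelihood ratio keeps the expectation but can shrink the variance."""
    q = p_right if bias is None else bias
    payoffs = np.empty(n_walks)
    for k in range(n_walks):
        pos, logw = 0, 0.0
        for _ in range(n_steps):
            step_right = rng.random() < q
            pos += 1 if step_right else -1
            # Accumulate the likelihood ratio of true vs biased dynamics.
            logw += np.log(p_right / q) if step_right else np.log((1 - p_right) / (1 - q))
        payoffs[k] = np.exp(logw) * (pos >= target)
    return payoffs.mean(), payoffs.std(ddof=1) / np.sqrt(n_walks)

print(walk_payoff(0.5))            # plain Monte Carlo estimate and its error
print(walk_payoff(0.5, bias=0.7))  # biased walk, re-weighted payoff
```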

  2. Finding Planet Nine: a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-06-01

    Planet Nine is a hypothetical planet located well beyond Pluto that has been proposed in an attempt to explain the observed clustering in physical space of the perihelia of six extreme trans-Neptunian objects or ETNOs. The predicted approximate values of its orbital elements include a semimajor axis of 700 au, an eccentricity of 0.6, an inclination of 30°, and an argument of perihelion of 150°. Searching for this putative planet is already under way. Here, we use a Monte Carlo approach to create a synthetic population of Planet Nine orbits and study its visibility statistically in terms of various parameters, focusing on the aphelion configuration. Our analysis shows that, if Planet Nine exists and is at aphelion, it might be found projected against one of four specific areas in the sky. Each area is linked to a particular value of the longitude of the ascending node, and two of them are compatible with an apsidal anti-alignment scenario. In addition, after studying the current statistics of ETNOs, a cautionary note on the robustness of the perihelia clustering is presented.

  3. Accelerated Monte Carlo Methods for Coulomb Collisions

    NASA Astrophysics Data System (ADS)

    Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce

    2014-03-01

    We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ε in the numerical solution to collisional plasma problems from O(ε⁻³) - for the standard state-of-the-art Langevin and binary collision algorithms - to a theoretically optimal O(ε⁻²) scaling, when used in conjunction with an underlying Milstein discretization to the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.
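
    To make the telescoping idea concrete, here is a hedged sketch of the two MLMC ingredients: coupled coarse/fine Milstein paths sharing Brownian increments, summed over levels. Geometric Brownian motion stands in for the paper's Langevin dynamics, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def milstein_gbm(n_paths, n_steps, T=1.0, x0=1.0, mu=0.05, sigma=0.2, dW=None):
    """Milstein paths of geometric Brownian motion; dW can be supplied
    so coarse and fine levels share the same Brownian increments."""
    dt = T / n_steps
    if dW is None:
        dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    x = np.full(n_paths, x0)
    for n in range(n_steps):
        x = x + mu * x * dt + sigma * x * dW[:, n] \
              + 0.5 * sigma**2 * x * (dW[:, n] ** 2 - dt)
    return x

def mlmc_estimate(levels=4, n_paths=20_000, T=1.0):
    """Multi-level Monte Carlo telescoping estimator of E[X_T]:
    E[P_L] = E[P_0] + sum over l of E[P_l - P_(l-1)]."""
    est = milstein_gbm(n_paths, 1, T).mean()  # coarsest level
    for l in range(1, levels + 1):
        nf = 2 ** l
        dt_f = T / nf
        dW_f = rng.normal(0.0, np.sqrt(dt_f), (n_paths, nf))
        dW_c = dW_f.reshape(n_paths, nf // 2, 2).sum(axis=2)  # coupled coarse increments
        fine = milstein_gbm(n_paths, nf, T, dW=dW_f)
        coarse = milstein_gbm(n_paths, nf // 2, T, dW=dW_c)
        est += (fine - coarse).mean()
    return est

print(mlmc_estimate(), "vs exact", np.exp(0.05))  # E[X_T] = x0 * exp(mu * T)
```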

  4. Monte Carlo Simulation of Critical Casimir Forces

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg A.

    2015-03-01

    In the vicinity of the second order phase transition point, long-range critical fluctuations of the order parameter appear. The second order phase transition in a critical binary mixture in the vicinity of the demixing point belongs to the universality class of the Ising model. The superfluid transition in liquid He belongs to the universality class of the XY model. The confinement of long-range fluctuations causes critical Casimir forces acting on confining surfaces or particles immersed in the critical substance. In the last decade, critical Casimir forces in binary mixtures and liquid helium were studied experimentally. The critical Casimir force in a film of a given thickness scales as a universal scaling function of the ratio of the film thickness to the bulk correlation length, divided by the cube of the film thickness. Using Monte Carlo simulations we can compute critical Casimir forces and their scaling functions for lattice Ising and XY models, which correspond to experimental results for the binary mixture and liquid helium, respectively. This chapter provides the description of numerical methods for computation of critical Casimir interactions for lattice models for plane-plane, plane-particle, and particle-particle geometries.
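
    In formula form, that scaling statement reads as follows; the symbols (ϑ for the universal scaling function, ξ for the bulk correlation length) are our notation for this sketch, not necessarily the chapter's:

```latex
% Finite-size scaling ansatz for the critical Casimir force per unit
% area in a film of thickness L (spatial dimension d = 3):
\[
  f_{\mathrm{C}}(T, L) = \frac{k_{\mathrm{B}} T}{L^{3}}\,
  \vartheta\!\left(\frac{L}{\xi(T)}\right),
\]
% where the universal function \vartheta depends only on the
% universality class (Ising, XY) and the boundary conditions.
```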

  5. Commensurabilities between ETNOs: a Monte Carlo survey

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-04-01

    Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nine hypothesis; in particular, a number of objects may be trapped in the 5:3 and 3:1 mean motion resonances with a putative Planet Nine with semimajor axis ~700 au.

  6. Markov Chain Monte Carlo and Irreversibility

    NASA Astrophysics Data System (ADS)

    Ottobre, Michela

    2016-06-01

    Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and we discuss some analytical methods to approach the study of continuous-time irreversible diffusions; ii) most of the rigorous results on irreversible diffusions are available for continuous-time processes; however, for computational purposes one needs to discretize such dynamics. It is well known that the resulting discretized chain will not, in general, retain all the good properties of the process that it is obtained from. In particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible. Therefore iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of a nonreversible dynamics.
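
    As a concrete reference point for the reversible case the paper starts from, here is a minimal random-walk Metropolis sampler; the Gaussian target and step size are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def metropolis(log_pi, x0=0.0, n_samples=50_000, step=1.0):
    """Random-walk Metropolis: a reversible MCMC chain. The symmetric
    proposal plus this accept rule enforces detailed balance,
    pi(x) p(x -> y) = pi(y) p(y -> x)."""
    x, chain = x0, np.empty(n_samples)
    for i in range(n_samples):
        y = x + rng.normal(0.0, step)
        if np.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y
        chain[i] = x
    return chain

# Sample a standard Gaussian target.
chain = metropolis(lambda x: -0.5 * x * x)
print(chain.mean(), chain.var())  # ~0, ~1
```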

  7. Error modes in implicit Monte Carlo

    SciTech Connect

    Martin, William Russell,; Brown, F. B.

    2001-01-01

    The Implicit Monte Carlo (IMC) method of Fleck and Cummings [1] has been used for years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Larsen and Mercier [2] have shown that the IMC method violates a maximum principle that is satisfied by the exact solution to the radiative transfer equation. Except for [2] and related papers regarding the maximum principle, there have been no other published results regarding the analysis of errors or convergence properties for the IMC method. This work presents an exact error analysis for the IMC method by using the analytical solutions for infinite medium geometry (0-D) to determine closed form expressions for the errors. The goal is to gain insight regarding the errors inherent in the IMC method by relating the exact 0-D errors to multi-dimensional geometry. Additional work (not described herein) has shown that adding a leakage term (i.e., a 'buckling' term) to the 0-D equations has relatively little effect on the IMC errors analyzed in this paper, so that the 0-D errors should provide useful guidance for the errors observed in multi-dimensional simulations.

  8. Improved method for implicit Monte Carlo

    SciTech Connect

    Brown, F. B.; Martin, W. R.

    2001-01-01

    The Implicit Monte Carlo (IMC) method has been used for over 30 years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Reference [2] provided an exact error analysis of IMC for 0-D problems and demonstrated that IMC can exhibit substantial errors when timesteps are large. These temporal errors are inherent in the method and are in addition to spatial discretization errors and approximations that address nonlinearities (due to variation of physical constants). In Reference [3], IMC and four other methods were analyzed in detail and compared on both theoretical grounds and the accuracy of numerical tests. As discussed in [3], two alternative schemes for solving the radiative transfer equations, the Carter-Forest (C-F) method and the Ahrens-Larsen (A-L) method, do not exhibit the errors found in IMC; for 0-D, both of these methods are exact for all time, while for 3-D, A-L is exact for all time and C-F is exact within a timestep. These methods can yield substantially superior results to IMC.

  9. Monte Carlo Production Management at CMS

    NASA Astrophysics Data System (ADS)

    Boudoul, G.; Franzoni, G.; Norkus, A.; Pol, A.; Srimanobhas, P.; Vlimant, J.-R.

    2015-12-01

    The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile up). In order to aggregate the information needed for the configuration and prioritization of the events production, assure the book-keeping of all the processing requests placed by the physics analysis groups, and to interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) has been developed and put in production in 2013. McM is based on recent server infrastructure technology (CherryPy + AngularJS) and relies on a CouchDB database back-end. This contribution covers the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities and the extension of its capability to monitor the status and advancement of the events production.

  10. Atomistic Monte Carlo Simulation of Lipid Membranes

    PubMed Central

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules, whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction to the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314

  11. Monte Carlo simulation of chromatin stretching.

    PubMed

    Aumann, Frank; Lankas, Filip; Caudron, Maïwen; Langowski, Jörg

    2006-04-01

    We present Monte Carlo (MC) simulations of the stretching of a single chromatin fiber. The model approximates the DNA by a flexible polymer chain with Debye-Hückel electrostatics and uses a two-angle zigzag model for the geometry of the linker DNA connecting the nucleosomes. The latter are represented by flat disks interacting via an attractive Gay-Berne potential. Our results show that the stiffness of the chromatin fiber strongly depends on the linker DNA length. Furthermore, changing the twisting angle between nucleosomes from 90 degrees to 130 degrees increases the stiffness significantly. An increase in the opening angle from 22 degrees to 34 degrees leads to softer fibers for small linker lengths. We observe that fibers containing a linker histone at each nucleosome are stiffer compared to those without the linker histone. The simulated persistence lengths and elastic moduli agree with experimental data. Finally, we show that the chromatin fiber does not behave as an isotropic elastic rod, but its rigidity depends on the direction of deformation: Chromatin is much more resistant to stretching than to bending. PMID:16711856

  12. Monte Carlo simulation of chromatin stretching

    NASA Astrophysics Data System (ADS)

    Aumann, Frank; Lankas, Filip; Caudron, Maïwen; Langowski, Jörg

    2006-04-01

    We present Monte Carlo (MC) simulations of the stretching of a single 30 nm chromatin fiber. The model approximates the DNA by a flexible polymer chain with Debye-Hückel electrostatics and uses a two-angle zigzag model for the geometry of the linker DNA connecting the nucleosomes. The latter are represented by flat disks interacting via an attractive Gay-Berne potential. Our results show that the stiffness of the chromatin fiber strongly depends on the linker DNA length. Furthermore, changing the twisting angle between nucleosomes from 90° to 130° increases the stiffness significantly. An increase in the opening angle from 22° to 34° leads to softer fibers for small linker lengths. We observe that fibers containing a linker histone at each nucleosome are stiffer compared to those without the linker histone. The simulated persistence lengths and elastic moduli agree with experimental data. Finally, we show that the chromatin fiber does not behave as an isotropic elastic rod, but its rigidity depends on the direction of deformation: Chromatin is much more resistant to stretching than to bending.

  13. Extending canonical Monte Carlo methods: II

    NASA Astrophysics Data System (ADS)

    Velazquez, L.; Curilef, S.

    2010-04-01

    We have previously presented a methodology for extending canonical Monte Carlo methods inspired by a suitable extension of the canonical fluctuation relation C = β²⟨δE²⟩ compatible with negative heat capacities, C < 0. Now, we improve this methodology by including the finite size effects that reduce the precision of a direct determination of the microcanonical caloric curve β(E) = ∂S(E)/∂E, as well as by carrying out a better implementation of the MC schemes. We show that, despite the modifications considered, the extended canonical MC methods lead to an impressive overcoming of the so-called supercritical slowing down observed close to the region of the temperature-driven first-order phase transition. In this case, the size dependence of the decorrelation time τ is reduced from an exponential growth to a weak power-law behavior, τ(N) ∝ N^α, as is shown in the particular case of the 2D seven-state Potts model, where the exponent α = 0.14-0.18.

  14. Computing Entanglement Entropy in Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Melko, Roger

    2012-02-01

    The scaling of entanglement entropy in quantum many-body wavefunctions is expected to be a fruitful resource for studying quantum phases and phase transitions in condensed matter. However, until the recent development of estimators for Renyi entropy in quantum Monte Carlo (QMC), we have been in the dark about the behaviour of entanglement in all but the simplest two-dimensional models. In this talk, I will outline the measurement techniques that allow access to the Renyi entropies in several different QMC methodologies. I will then discuss recent simulation results demonstrating the richness of entanglement scaling in 2D, including: the prevalence of the ``area law''; topological entanglement entropy in a gapped spin liquid; anomalous subleading logarithmic terms due to Goldstone modes; universal scaling at critical points; and examples of emergent conformal-like scaling in several gapless wavefunctions. Finally, I will explore the idea that ``long range entanglement'' may complement the notion of ``long range order'' for quantum phases and phase transitions which lack a conventional order parameter description.

  15. Linear Scaling Quantum Monte Carlo Calculations

    NASA Astrophysics Data System (ADS)

    Williamson, Andrew

    2002-03-01

    New developments to the quantum Monte Carlo approach are presented that improve the scaling of the time required to calculate the total energy of a configuration of electronic coordinates from N^3 to nearly linear [1]. The first factor of N is achieved by applying a unitary transform to the set of single particle orbitals used to construct the Slater determinant, creating a set of maximally localized Wannier orbitals. These localized functions are then truncated beyond a given cutoff radius to introduce sparsity into the Slater determinant. The second factor of N is achieved by evaluating the maximally localized Wannier orbitals on a cubic spline grid, which removes the size dependence of the basis set (e.g. plane waves, Gaussians) typically used to expand the orbitals. Application of this method to the calculation of the binding energy of carbon fullerenes and silicon nanostructures will be presented. An extension of the approach to deal with excited states of systems will also be presented in the context of the calculation of the excitonic gap of a variety of systems. This work was performed under the auspices of the U.S. Dept. of Energy at the University of California/LLNL under contract no. W-7405-Eng-48. [1] A.J. Williamson, R.Q. Hood and J.C. Grossman, Phys. Rev. Lett. 87, 246406 (2001)

  16. Monte Carlo simulation framework for TMT

    NASA Astrophysics Data System (ADS)

    Vogiatzis, Konstantinos; Angeli, George Z.

    2008-07-01

    This presentation describes a strategy for assessing the performance of the Thirty Meter Telescope (TMT). A Monte Carlo Simulation Framework has been developed to combine optical modeling with Computational Fluid Dynamics simulations (CFD), Finite Element Analysis (FEA) and controls to model the overall performance of TMT. The framework consists of a two-year record of observed environmental parameters such as atmospheric seeing, site wind speed and direction, ambient temperature and local sunset and sunrise times, along with telescope azimuth and elevation with a given sampling rate. The modeled optical, dynamic and thermal seeing aberrations are available in a matrix form for distinct values within the range of influencing parameters. These parameters are either part of the framework parameter set or can be derived from them at each time-step. As time advances, the aberrations are interpolated and combined based on the current value of their parameters. Different scenarios can be generated based on operating parameters such as venting strategy, optical calibration frequency and heat source control. Performance probability distributions are obtained and provide design guidance. The sensitivity of the system to design, operating and environmental parameters can be assessed in order to maximize the percentage of time the system meets the performance specifications.
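
    The core interpolate-and-combine loop of such a framework can be pictured as table interpolation driven by an environmental time series. The Python sketch below is our guess at the structure from the abstract (toy grids, a random stand-in record, and a hypothetical spec threshold, not TMT data or code):

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      rng = np.random.default_rng(1)

      # Aberrations tabulated on a grid of two influencing parameters
      # (toy values standing in for the precomputed optical/thermal results).
      wind = np.linspace(0.0, 20.0, 5)      # wind speed grid [m/s]
      elev = np.linspace(15.0, 90.0, 6)     # telescope elevation grid [deg]
      table = rng.random((wind.size, elev.size))
      interp = RegularGridInterpolator((wind, elev), table)

      # A stand-in environmental record (the framework uses a two-year record).
      record = np.column_stack([rng.uniform(0, 20, 1000),
                                rng.uniform(15, 90, 1000)])
      series = interp(record)               # aberration at each time step

      # Performance probability distribution -> design guidance:
      print("fraction of time within spec:", (series < 0.5).mean())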

  17. Commensurabilities between ETNOs: a Monte Carlo survey

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-07-01

    Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known extreme trans-Neptunian objects (ETNOs) using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nine hypothesis; in particular, a number of objects may be trapped in the 5:3 and 3:1 mean motion resonances with a putative Planet Nine with semimajor axis ˜700 au.

  18. Integrated Means Integrity

    ERIC Educational Resources Information Center

    Odegard, John D.

    1978-01-01

    Describes the operation of the Cessna Pilot Center (CPC) flight training systems. The program is based on a series of integrated activities involving stimulus, response, reinforcement and association components. Results show that the program can significantly reduce in-flight training time. (CP)

  19. Extension of the fully coupled Monte Carlo/S sub N response matrix method to problems including upscatter and fission

    SciTech Connect

    Baker, R.S.; Filippone, W.F. . Dept. of Nuclear and Energy Engineering); Alcouffe, R.E. )

    1991-01-01

    The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S{sub N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S{sub N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The fully coupled Monte Carlo/S{sub N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S{sub N} calculation is to be performed. The Monte Carlo and S{sub N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new method of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S{sub N} is well suited for by itself. The fully coupled Monte Carlo/S{sub N} method has been implemented in the S{sub N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S{sub N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S{sub N} calculations. The Monte Carlo routines have been successfully vectorized, with approximately a factor of five increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly multiplying systems. 8 refs., 8 figs., 1 tab.

  20. DETERMINING UNCERTAINTY IN PHYSICAL PARAMETER MEASUREMENTS BY MONTE CARLO SIMULATION

    EPA Science Inventory

    A statistical approach, often called Monte Carlo Simulation, has been used to examine propagation of error with measurement of several parameters important in predicting environmental transport of chemicals. These parameters are vapor pressure, water solubility, octanol-water par...
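
    The general recipe behind such error-propagation studies is simple to sketch: sample the uncertain inputs from their assumed distributions, push each sample through the derived-quantity model, and read the spread of the output. The Python example below is illustrative only (hypothetical lognormal uncertainties and a toy derived ratio, not the EPA's measured parameters):

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # Hypothetical input uncertainties: vapor pressure [Pa] and water
      # solubility [mol/m^3] modeled as lognormals (illustrative values).
      vp  = rng.lognormal(mean=np.log(1e-3), sigma=0.3, size=n)
      sol = rng.lognormal(mean=np.log(1e-2), sigma=0.5, size=n)

      # Toy derived quantity: a Henry's-law-type ratio H = vp / sol.
      H = vp / sol

      # The sampled distribution of H characterizes the propagated error.
      print("median H:", np.median(H))
      print("95% interval:", np.percentile(H, [2.5, 97.5]))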

  1. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when refined modeling is required. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  2. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    SciTech Connect

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.
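
    Both splitting and Russian roulette preserve expected weight, which is the property that keeps a tally built on them unbiased. The generic weight-window sketch below makes this concrete; it is our illustration of the two base operations, not Booth's specific non-Boltzmann schemes:

      import numpy as np

      rng = np.random.default_rng(0)

      def russian_roulette(w, w_survive, rng):
          # Survive with probability w / w_survive, carrying weight w_survive;
          # expected weight out = (w / w_survive) * w_survive = w.
          return [w_survive] if rng.random() < w / w_survive else []

      def split(w, w_target):
          # Split into n copies of weight w / n; total weight is unchanged.
          n = max(int(round(w / w_target)), 2)
          return [w / n] * n

      def apply_window(w, w_low=0.25, w_high=4.0, w_target=1.0):
          if w < w_low:
              return russian_roulette(w, w_target, rng)
          if w > w_high:
              return split(w, w_target)
          return [w]

      print([apply_window(w) for w in (0.05, 0.5, 10.0)])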

  3. COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT

    SciTech Connect

    W. R. MARTIN; F. B. BROWN

    2001-03-01

    Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo method (IMC) developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an ''exact'' method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.

  4. OBJECT KINETIC MONTE CARLO SIMULATIONS OF CASCADE ANNEALING IN TUNGSTEN

    SciTech Connect

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.

    2014-03-31

    The objective of this work is to study the annealing of primary cascade damage created by primary knock-on atoms (PKAs) of various energies, at various temperatures in bulk tungsten using the object kinetic Monte Carlo (OKMC) method.

  5. Monte Carlo techniques for analyzing deep penetration problems

    SciTech Connect

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs.
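
    Of the techniques reviewed above, the exponential transformation is the easiest to state compactly: stretch the effective cross section along a preferred direction and compensate the particle weight by the ratio of true to sampling densities. A minimal sketch under gray-medium assumptions (our notation, not the review's):

      import numpy as np

      rng = np.random.default_rng(1)

      def stretched_flight(sigma, p, mu, rng):
          # sigma: total cross section; p in [0, 1): biasing strength;
          # mu: cosine between flight and preferred (penetration) direction.
          sigma_star = sigma * (1.0 - p * mu)       # stretched cross section
          s = -np.log(rng.random()) / sigma_star    # biased free flight
          # Weight factor = true pdf / sampling pdf at the sampled distance:
          w = (sigma * np.exp(-sigma * s)) / (sigma_star * np.exp(-sigma_star * s))
          return s, w

      s, w = stretched_flight(sigma=1.0, p=0.7, mu=1.0, rng=rng)
      print(f"flight of {s:.3f} mfp carries weight factor {w:.3f}")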

  6. Enhancements in Continuous-Energy Monte Carlo Capabilities in SCALE

    SciTech Connect

    Bekar, Kursat B; Celik, Cihangir; Wiarda, Dorothea; Peplow, Douglas E.; Rearden, Bradley T; Dunn, Michael E

    2013-01-01

    Monte Carlo tools in SCALE are commonly used in criticality safety calculations as well as sensitivity and uncertainty analysis, depletion, and criticality alarm system analyses. Recent improvements in the continuous-energy data generated by the AMPX code system and significant advancements in the continuous-energy treatment in the KENO Monte Carlo eigenvalue codes facilitate the use of SCALE Monte Carlo codes to model geometrically complex systems with enhanced solution fidelity. The addition of continuous-energy treatment to the SCALE Monaco code, which can be used with automatic variance reduction in the hybrid MAVRIC sequence, provides significant enhancements, especially for criticality alarm system modeling. This paper describes some of the advancements in continuous-energy Monte Carlo codes within the SCALE code system.

  7. DPEMC: A Monte Carlo for double diffraction

    NASA Astrophysics Data System (ADS)

    Boonekamp, M.; Kúcs, T.

    2005-05-01

    We extend the POMWIG Monte Carlo generator developed by B. Cox and J. Forshaw to include new models of central production through inclusive and exclusive double Pomeron exchange in proton-proton collisions. Double photon exchange processes are described as well, both in proton-proton and heavy-ion collisions. In all contexts, various models have been implemented, allowing for comparisons and uncertainty evaluation and enabling detailed experimental simulations. Program summary: Title of the program: DPEMC, version 2.4. Catalogue identifier: ADVF. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVF. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: any computer with the FORTRAN 77 compiler under the UNIX or Linux operating systems. Operating system: UNIX; Linux. Programming language used: FORTRAN 77. High speed storage required: <25 MB. No. of lines in distributed program, including test data, etc.: 71 399. No. of bytes in distributed program, including test data, etc.: 639 950. Distribution format: tar.gz. Nature of the physical problem: Proton diffraction at hadron colliders can manifest itself in many forms, and a variety of models exist that attempt to describe it [A. Bialas, P.V. Landshoff, Phys. Lett. B 256 (1991) 540; A. Bialas, W. Szeremeta, Phys. Lett. B 296 (1992) 191; A. Bialas, R.A. Janik, Z. Phys. C 62 (1994) 487; M. Boonekamp, R. Peschanski, C. Royon, Phys. Rev. Lett. 87 (2001) 251806; Nucl. Phys. B 669 (2003) 277; R. Enberg, G. Ingelman, A. Kissavos, N. Timneanu, Phys. Rev. Lett. 89 (2002) 081801; R. Enberg, G. Ingelman, L. Motyka, Phys. Lett. B 524 (2002) 273; R. Enberg, G. Ingelman, N. Timneanu, Phys. Rev. D 67 (2003) 011301; B. Cox, J. Forshaw, Comput. Phys. Comm. 144 (2002) 104; B. Cox, J. Forshaw, B. Heinemann, Phys. Lett. B 540 (2002) 26; V. Khoze, A. Martin, M. Ryskin, Phys. Lett. B 401 (1997) 330; Eur. Phys. J. C 14 (2000) 525; Eur. Phys. J. C 19 (2001) 477; Erratum, Eur. Phys. J. C 20 (2001) 599; Eur

  8. Shift: A Massively Parallel Monte Carlo Radiation Transport Package

    SciTech Connect

    Pandya, Tara M; Johnson, Seth R; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P

    2015-01-01

    This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.

  9. Study of the Transition Flow Regime using Monte Carlo Methods

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    1999-01-01

    This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.

  10. Development of Monte Carlo Capability for Orion Parachute Simulations

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    Parachute test programs employ Monte Carlo simulation techniques to plan testing and make critical decisions related to parachute loads, rate-of-descent, or other parameters. This paper describes the development and use of a MATLAB-based Monte Carlo tool for three parachute drop test simulations currently used by NASA. The Decelerator System Simulation (DSS) is a legacy 6 Degree-of-Freedom (DOF) simulation used to predict parachute loads and descent trajectories. The Decelerator System Simulation Application (DSSA) is a 6-DOF simulation that is well suited for modeling aircraft extraction and descent of pallet-like test vehicles. The Drop Test Vehicle Simulation (DTVSim) is a 2-DOF trajectory simulation that is convenient for quick turn-around analysis tasks. These three tools have significantly different software architectures and do not share common input files or output data structures. Separate Monte Carlo tools were initially developed for each simulation. A recently-developed simulation output structure enables the use of the more sophisticated DSSA Monte Carlo tool with any of the core-simulations. The task of configuring the inputs for the nominal simulation is left to the existing tools. Once the nominal simulation is configured, the Monte Carlo tool perturbs the input set according to dispersion rules created by the analyst. These rules define the statistical distribution and parameters to be applied to each simulation input. Individual dispersed parameters are combined to create a dispersed set of simulation inputs. The Monte Carlo tool repeatedly executes the core-simulation with the dispersed inputs and stores the results for analysis. The analyst may define conditions on one or more output parameters at which to collect data slices. The tool provides a versatile interface for reviewing output of large Monte Carlo data sets while preserving the capability for detailed examination of individual dispersed trajectories. The Monte Carlo tool described in

  11. SCALE Monte Carlo Eigenvalue Methods and New Advancements

    SciTech Connect

    Goluoglu, Sedat; Leppanen, Jaakko; Petrie Jr, Lester M; Dunn, Michael E

    2010-01-01

    SCALE code system is developed and maintained by Oak Ridge National Laboratory to perform criticality safety, reactor analysis, radiation shielding, and spent fuel characterization for nuclear facilities and transportation/storage package designs. SCALE is a modular code system that includes several codes which use either Monte Carlo or discrete ordinates solution methodologies for solving relevant neutral particle transport equations. This paper describes some of the key capabilities of the Monte Carlo criticality safety codes within the SCALE code system.

  12. A Particle Population Control Method for Dynamic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Sweezy, Jeremy; Nolen, Steve; Adams, Terry; Zukaitis, Anthony

    2014-06-01

    A general particle population control method has been derived from splitting and Russian Roulette for dynamic Monte Carlo particle transport. A well-known particle population control method, known as the particle population comb, has been shown to be a special case of this general method. This general method has been incorporated in Los Alamos National Laboratory's Monte Carlo Application Toolkit (MCATK), and examples of its use are shown for both super-critical and sub-critical systems.
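
    One common form of the comb (a sketch of the general idea, not necessarily MCATK's implementation) resamples N weighted particles to exactly m equal-weight survivors while preserving total weight:

      import numpy as np

      rng = np.random.default_rng(2)

      def comb(weights, m, rng):
          # Lay m evenly spaced teeth, sharing one random offset, over the
          # cumulative weight ladder; each tooth selects one survivor.
          W = weights.sum()
          edges = np.cumsum(weights)
          teeth = (rng.random() + np.arange(m)) * W / m
          idx = np.searchsorted(edges, teeth)   # particle hit by each tooth
          return idx, np.full(m, W / m)         # survivors, equal new weights

      weights = np.array([0.1, 2.0, 0.5, 3.0, 0.05])
      idx, new_w = comb(weights, m=4, rng=rng)
      print(idx, new_w, "total before/after:", weights.sum(), new_w.sum())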

  13. Monte Carlo methods and applications in nuclear physics

    SciTech Connect

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

  14. Monte Carlo Hybrid Applied to Binary Stochastic Mixtures

    Energy Science and Technology Software Center (ESTSC)

    2008-08-11

    The purpose of this set of codes is to use an inexpensive, approximate deterministic flux distribution to generate weight windows, which will then be used to bound particle weights for the Monte Carlo code run. The process is not automated; the user must run the deterministic code and use the output file as a command-line argument for the Monte Carlo code. Two sets of text input files are included as test problems/templates.

  15. Monte-Carlo simulation of Callisto's exosphere

    NASA Astrophysics Data System (ADS)

    Vorburger, A.; Wurz, P.; Lammer, H.; Barabash, S.; Mousis, O.

    2015-12-01

    We model Callisto's exosphere based on its ice as well as non-ice surface via the use of a Monte-Carlo exosphere model. For the ice component we implement two putative compositions that have been computed from two possible extreme formation scenarios of the satellite. One composition represents the oxidizing state and is based on the assumption that the building blocks of Callisto were formed in the protosolar nebula and the other represents the reducing state of the gas, based on the assumption that the satellite accreted from solids condensed in the jovian sub-nebula. For the non-ice component we implemented the compositions of typical CI as well as L type chondrites. Both chondrite types have been suggested to represent Callisto's non-ice composition best. As release processes we consider surface sublimation, ion sputtering and photon-stimulated desorption. Particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction, return to the surface, are ionized, or are fragmented. Our density profiles show that whereas the sublimated species dominate close to the surface on the sun-lit side, their density profiles (with the exception of H and H2) decrease much more rapidly than the sputtered particles. The Neutral gas and Ion Mass (NIM) spectrometer, which is part of the Particle Environment Package (PEP), will investigate Callisto's exosphere during the JUICE mission. Our simulations show that NIM will be able to detect sublimated and sputtered particles from both the ice and non-ice surface. NIM's measured chemical composition will allow us to distinguish between different formation scenarios.

  16. Quantum Monte Carlo calculations on positronium compounds

    NASA Astrophysics Data System (ADS)

    Jiang, Nan

    The stability of compounds containing one or more positrons in addition to electrons and nuclei has been the focus of extensive scientific investigations. Interest in these compounds stems from the important role they play in the process of positron annihilation, which has become a useful technique in material science studies. Knowledge of these compounds comes mostly from calculations which are presently less difficult than laboratory experiments. Owing to the small binding energies of these compounds, quantum chemistry methods beyond the molecular orbital approximation must be used. Among them, the quantum Monte Carlo (QMC) method is most appealing because it is easy to implement, gives exact results within the fixed nodes approximation, and makes good use of existing approximate wavefunctions. Applying QMC to small systems like PsH for binding energy calculation is straightforward. To apply it to systems with heavier atoms, to systems for which the center-of-mass motion needs to be separated, and to calculate annihilation rates, special techniques must be developed. In this project a detailed study and several advancements to the QMC method are carried out. Positronium compounds PsH, Ps2, PsO, and Ps2O are studied with algorithms we developed. Results for PsH and Ps2 agree with the best accepted to date. Results for PsO confirm the stability of this compound, and are in fair agreement with an earlier calculation. Results for Ps2O establish the stability of this compound and give an approximate annihilation rate for the first time. Discussions will include an introduction to QMC methods, an in-depth discussion on the QMC formalism, presentation of new algorithms developed in this study, and procedures and results of QMC calculations on the above mentioned positronium compounds.

  17. Extending Diffusion Monte Carlo to Internal Coordinates

    NASA Astrophysics Data System (ADS)

    Petit, Andrew S.; McCoy, Anne B.

    2013-06-01

    Diffusion Monte Carlo (DMC) is a powerful technique for studying the properties of molecules and clusters that undergo large-amplitude, zero-point vibrational motions. However, the overall applicability of the method is limited by the need to work in Cartesian coordinates and therefore have available a full-dimensional potential energy surface (PES). As a result, the development of a reduced-dimensional DMC methodology has the potential to significantly extend the range of problems that DMC can address by allowing the calculations to be performed in the subset of coordinates that is physically relevant to the questions being asked, thereby eliminating the need for a full-dimensional PES. As a first step towards this goal, we describe here an internal coordinate extension of DMC that places no constraints on the choice of internal coordinates other than requiring them all to be independent. Using H_3^+ and its isotopologues as model systems, we demonstrate that the methodology is capable of successfully describing the ground state properties of highly fluxional molecules as well as, in conjunction with the fixed-node approximation, the ν=1 vibrationally excited states. The calculations of the fundamentals of H_3^+ and its isotopologues provided general insights into the properties of the nodal surfaces of vibrationally excited states. Specifically, we will demonstrate that analysis of ground state probability distributions can point to the set of coordinates that are less strongly coupled and therefore more suitable for use as nodal coordinates in the fixed-node approximation. In particular, we show that nodal surfaces defined in terms of the curvilinear normal mode coordinates are reasonable for the fundamentals of H_2D^+ and D_2H^+ despite both molecules being highly fluxional.
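
    For readers unfamiliar with the underlying machinery, the essence of unguided DMC is free diffusion of walkers plus birth/death branching against a reference energy. The toy Cartesian sketch below (a 1-D harmonic oscillator in atomic units; our illustration, not the internal-coordinate extension described above) recovers the ground-state energy:

      import numpy as np

      rng = np.random.default_rng(0)

      def dmc(n_target=5000, n_step=2000, dt=0.01):
          x = rng.normal(0.0, 1.0, n_target)    # initial walker ensemble
          e_ref = 0.5                           # reference energy guess
          for _ in range(n_step):
              x = x + rng.normal(0.0, np.sqrt(dt), x.size)   # free diffusion
              v = 0.5 * x**2                                 # V(x) = x^2 / 2
              w = np.exp(-(v - e_ref) * dt)                  # branching weights
              copies = (w + rng.random(x.size)).astype(int)  # stochastic rounding
              x = np.repeat(x, copies)                       # birth / death
              # Gentle population feedback steers e_ref toward E_0:
              e_ref += 0.1 * (1.0 - x.size / n_target)
          return e_ref

      print("DMC ground-state energy ~", dmc(), "(exact: 0.5)")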

  18. Monte Carlo simulation of scenario probability distributions

    SciTech Connect

    Glaser, R.

    1996-10-23

    Suppose a scenario of interest can be represented as a series of events. A final result R may be viewed then as the intersection of three events, A, B, and C. The probability of the result P(R) in this case is the product P(R) = P(A) P(B {vert_bar} A) P(C {vert_bar} A {intersection} B). An expert may be reluctant to estimate P(R) as a whole yet agree to supply his notions of the component probabilities in the form of prior distributions. Each component prior distribution may be viewed as the stochastic characterization of the expert`s uncertainty regarding the true value of the component probability. Mathematically, the component probabilities are treated as independent random variables and P(R) as their product; the induced prior distribution for P(R) is determined which characterizes the expert`s uncertainty regarding P(R). It may be both convenient and adequate to approximate the desired distribution by Monte Carlo simulation. Software has been written for this task that allows a variety of component priors that experts with good engineering judgment might feel comfortable with. The priors are mostly based on so-called likelihood classes. The software permits an expert to choose for a given component event probability one of six types of prior distributions, and the expert specifies the parameter value(s) for that prior. Each prior is unimodal. The expert essentially decides where the mode is, how the probability is distributed in the vicinity of the mode, and how rapidly it attenuates away. Limiting and degenerate applications allow the expert to be vague or precise.
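
    The construction is easy to reproduce. In the Python sketch below, Beta densities stand in for the six prior families the software offers (the shape parameters are invented for illustration, not elicited values); the induced prior for P(R) is simply the sampled product:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 200_000

      # Component priors (illustrative shapes, not elicited values):
      p_a   = rng.beta(8, 2, n)    # P(A): the expert thinks A is likely
      p_ba  = rng.beta(5, 5, n)    # P(B | A): centered near 0.5
      p_cab = rng.beta(2, 8, n)    # P(C | A and B): thought unlikely

      # Independent components -> induced prior for the product:
      p_r = p_a * p_ba * p_cab

      print("mean P(R):", p_r.mean())
      print("90% interval:", np.percentile(p_r, [5, 95]))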

  19. Lattice Monte Carlo simulations of polymer melts

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping

    2014-12-01

    We use Monte Carlo simulations to study polymer melts consisting of fully flexible and moderately stiff chains in the bond fluctuation model at a volume fraction 0.5. In order to reduce the local density fluctuations, we test a pre-packing process for the preparation of the initial configurations of the polymer melts, before the excluded volume interaction is switched on completely. This process leads to a significantly faster decrease of the number of overlapping monomers on the lattice. This is useful for simulating very large systems, where the statistical properties of the model with a marginally incomplete elimination of excluded volume violations are the same as those of the model with strictly excluded volume. We find that the internal mean square end-to-end distance for moderately stiff chains in a melt can be very well described by a freely rotating chain model with a precise estimate of the bond-bond orientational correlation between two successive bond vectors in equilibrium. The plot of the probability distributions of the reduced end-to-end distance of chains of different stiffness also shows that the data collapse is excellent and described very well by the Gaussian distribution for ideal chains. However, while our results confirm the systematic deviations between Gaussian statistics for the chain structure factor Sc(q) [minimum in the Kratky-plot] found by Wittmer et al. [EPL 77, 56003 (2007)] for fully flexible chains in a melt, we show that for the available chain length these deviations are no longer visible, when the chain stiffness is included. The mean square bond length and the compressibility estimated from collective structure factors depend slightly on the stiffness of the chains.

  20. Monte Carlo Simulations for Spinodal Decomposition

    NASA Astrophysics Data System (ADS)

    Sander, Evelyn; Wanner, Thomas

    1999-06-01

    This paper addresses the phenomenon of spinodal decomposition for the Cahn-Hilliard equation. Namely, we are interested in why most solutions to the Cahn-Hilliard equation which start near a homogeneous equilibrium u0 ≡ μ in the spinodal interval exhibit phase separation with a characteristic wavelength when exiting a ball of radius R in a Hilbert space centered at u0. There are two mathematical explanations for spinodal decomposition, due to Grant and to Maier-Paape and Wanner. In this paper, we numerically compare these two mathematical approaches. In fact, we are able to synthesize the understanding we gain from our numerics with the approach of Maier-Paape and Wanner, leading to a better understanding of the underlying mechanism for this behavior. With this new approach, we can explain spinodal decomposition for a longer time and larger radius than either of the previous two approaches. A rigorous mathematical explanation is contained in a separate paper. Our approach is to use Monte Carlo simulations to examine the dependence of R, the radius to which spinodal decomposition occurs, as a function of the parameter ɛ of the governing equation. We give a description of the dominating regions on the surface of the ball by estimating certain densities of the distributions of the exit points. We observe, and can show rigorously, that the behavior of most solutions originating near the equilibrium is determined completely by the linearization for an unexpectedly long time. We explain the mechanism for this unexpectedly linear behavior, and show that for some exceptional solutions this cannot be observed. We also describe the dynamics of these exceptional solutions.

  1. Monte Carlo simulations for spinodal decomposition

    SciTech Connect

    Sander, E.; Wanner, T.

    1999-06-01

    This paper addresses the phenomenon of spinodal decomposition for the Cahn-Hilliard equation. Namely, the authors are interested in why most solutions to the Cahn-Hilliard equation which start near a homogeneous equilibrium u{sub 0} {equivalent_to} {mu} in the spinodal interval exhibit phase separation with a characteristic wavelength when exiting a ball of radius R in a Hilbert space centered at u{sub 0}. There are two mathematical explanations for spinodal decomposition, due to Grant and to Maier-Paape and Wanner. In this paper, the authors numerically compare these two mathematical approaches. In fact, they are able to synthesize the understanding they gain from the numerics with the approach of Maier-Paape and Wanner, leading to a better understanding of the underlying mechanism for this behavior. With this new approach, they can explain spinodal decomposition for a longer time and larger radius than either of the previous two approaches. A rigorous mathematical explanation is contained in a separate paper. The approach is to use Monte Carlo simulations to examine the dependence of R, the radius to which spinodal decomposition occurs, as a function of the parameter {var_epsilon} of the governing equation. The authors give a description of the dominating regions on the surface of the ball by estimating certain densities of the distributions of the exit points. They observe, and can show rigorously, that the behavior of most solutions originating near the equilibrium is determined completely by the linearization for an unexpectedly long time. They explain the mechanism for this unexpectedly linear behavior, and show that for some exceptional solutions this cannot be observed. They also describe the dynamics of these exceptional solutions.

  2. Monte Carlo study of microdosimetric diamond detectors.

    PubMed

    Solevi, Paola; Magrin, Giulio; Moro, Davide; Mayer, Ramona

    2015-09-21

    Ion-beam therapy provides a high dose conformity and increased radiobiological effectiveness with respect to conventional radiation-therapy. Strict constraints on the maximum uncertainty on the biological weighted dose and consequently on the biological weighting factor require the determination of the radiation quality, defined as the types and energy spectra of the radiation at a specific point. However the experimental determination of radiation quality, in particular for an internal target, is not simple and the features of ion interactions and treatment delivery require dedicated and optimized detectors. Recently chemical vapor deposition (CVD) diamond detectors have been suggested as ion-beam therapy microdosimeters. Diamond detectors can be manufactured with small cross sections and thin shapes, ideal to cope with the high fluence rate. However the sensitive volume of solid state detectors significantly deviates from conventional microdosimeters, with a diameter that can be up to 1000 times the height. This difference requires a redefinition of the concept of sensitive thickness and a deep study of the secondary to primary radiation, of the wall effects and of the impact of the orientation of the detector with respect to the radiation field. The present work intends to study through Monte Carlo simulations the impact of the detector geometry on the determination of radiation quality quantities, in particular on the relative contribution of primary and secondary radiation. The dependence of microdosimetric quantities such as the unrestricted linear energy L and the lineal energy y are investigated for different detector cross sections, by varying the particle type (carbon ions and protons) and its energy. PMID:26309235

  3. Monte Carlo study of microdosimetric diamond detectors

    NASA Astrophysics Data System (ADS)

    Solevi, Paola; Magrin, Giulio; Moro, Davide; Mayer, Ramona

    2015-09-01

    Ion-beam therapy provides a high dose conformity and increased radiobiological effectiveness with respect to conventional radiation-therapy. Strict constraints on the maximum uncertainty on the biological weighted dose and consequently on the biological weighting factor require the determination of the radiation quality, defined as the types and energy spectra of the radiation at a specific point. However the experimental determination of radiation quality, in particular for an internal target, is not simple and the features of ion interactions and treatment delivery require dedicated and optimized detectors. Recently chemical vapor deposition (CVD) diamond detectors have been suggested as ion-beam therapy microdosimeters. Diamond detectors can be manufactured with small cross sections and thin shapes, ideal to cope with the high fluence rate. However the sensitive volume of solid state detectors significantly deviates from conventional microdosimeters, with a diameter that can be up to 1000 times the height. This difference requires a redefinition of the concept of sensitive thickness and a deep study of the secondary to primary radiation, of the wall effects and of the impact of the orientation of the detector with respect to the radiation field. The present work intends to study through Monte Carlo simulations the impact of the detector geometry on the determination of radiation quality quantities, in particular on the relative contribution of primary and secondary radiation. The dependence of microdosimetric quantities such as the unrestricted linear energy L and the lineal energy y are investigated for different detector cross sections, by varying the particle type (carbon ions and protons) and its energy.

  4. Monte Carlo Volcano Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.

  5. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    David Ceperley

    2011-03-02

    CUDA GPU platform. We restructured the CPU algorithms to express additional parallelism, minimize GPU-CPU communication, and efficiently utilize the GPU memory hierarchy. Using mixed precision on GT200 GPUs and MPI for intercommunication and load balancing, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core Xeon CPUs alone, while reproducing the double-precision CPU results within statistical error. We developed an all-electron quantum Monte Carlo (QMC) method for solids that does not rely on pseudopotentials, and used it to construct a primary ultra-high-pressure calibration based on the equation of state of cubic boron nitride. We computed the static contribution to the free energy with the QMC method and obtained the phonon contribution from density functional theory, yielding a high-accuracy calibration up to 900 GPa usable directly in experiment. We computed the anharmonic Raman frequency shift with QMC simulations as a function of pressure and temperature, allowing optical pressure calibration. In contrast to present experimental approaches, small systematic errors in the theoretical EOS do not increase with pressure, and no extrapolation is needed. This all-electron method is applicable to first-row solids, providing a new reference for ab initio calculations of solids and benchmarks for pseudopotential accuracy. We compared experimental and theoretical results on the momentum distribution and the quasiparticle renormalization factor in sodium. From an x-ray Compton-profile measurement of the valence-electron momentum density, we derived its discontinuity at the Fermi wavevector finding an accurate measure of the renormalization factor that we compared with quantum-Monte-Carlo and G0W0 calculations performed both on crystalline sodium and on the homogeneous electron gas. Our calculated results are in good agreement with the experiment. We have been studying the heat of formation for various Kubas complexes of molecular

  6. Concepts for fast large scale Monte Carlo production for the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Debenedetti, C.; Atlas Collaboration

    2014-06-01

    The huge success of Run 1 of the LHC would not have been possible without detailed detector simulation of the experiments. The outstanding performance of the accelerator with a delivered integrated luminosity of 25 fb-1 has created an unprecedented demand for large simulated event samples. This has stretched the possibilities of the experiments due to the constraint of their computing infrastructure and available resources. Modern, concurrent computing techniques optimised for new processor hardware are being exploited to boost future computing resources, but even the most optimistic scenarios predict that additional action needs to be taken to guarantee sufficient Monte Carlo production statistics for high quality physics results during Run 2. In recent years, the ATLAS collaboration has put dedicated effort in the development of a new Integrated Simulation Framework (ISF) that allows running full and fast simulation approaches in parallel and even within one event. We present the main concepts of the ISF, which allows a fine-tuned detector simulation targeted at specific physics cases with a decrease in CPU time per event by orders of magnitude. Additionally, we will discuss the implications of a customised simulation in terms of validity and accuracy and will present new concepts in digitization and reconstruction to achieve a fast Monte Carlo chain with a per event execution time of a few seconds.

  7. Radiative transfer and spectroscopic databases: A line-sampling Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Galtier, Mathieu; Blanco, Stéphane; Dauchet, Jérémi; El Hafi, Mouna; Eymet, Vincent; Fournier, Richard; Roger, Maxime; Spiesser, Christophe; Terrée, Guillaume

    2016-03-01

    Dealing with molecular-state transitions for radiative transfer purposes involves two successive steps that both reach the complexity level at which physicists start thinking about statistical approaches: (1) constructing line-shaped absorption spectra as the result of very numerous state-transitions, (2) integrating over optical-path domains. For the first time, we show here how these steps can be addressed simultaneously using the null-collision concept. This opens the door to the design of Monte Carlo codes directly estimating radiative transfer observables from spectroscopic databases. The intermediate step of producing accurate high-resolution absorption spectra is no longer required. A Monte Carlo algorithm is proposed and applied to six one-dimensional test cases. It allows the computation of spectrally integrated intensities (over 25 cm-1 bands or the full IR range) in a few seconds, regardless of the retained database and line model. But free parameters need to be selected and they impact the convergence. A first possible selection is provided in full detail. We observe that this selection is highly satisfactory for quite distinct atmospheric and combustion configurations, but a more systematic exploration is still in progress.

  8. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

    SciTech Connect

    Pevey, Ronald E.

    2005-09-15

    Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
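
    The trade-off is easy to reproduce numerically. With invented numbers (a deliberately large bias so the effect is visible; these are not the paper's values), the acceptance test k_calc + n_sig*sigma_calc <= USL gives a mislabeling risk that is minimized at a non-zero calculational standard deviation:

      from math import sqrt
      from scipy.stats import norm

      # Illustrative inputs, not the paper's: a truly critical configuration
      # (k_true = 1), a large negative calculational bias, and its uncertainty.
      k_true, bias, sigma_bias = 1.0, -0.04, 0.01
      usl, n_sig = 0.94, 2.0

      def p_mislabel(sigma_calc):
          # Probability of passing k_calc + n_sig*sigma_calc <= USL, i.e. of
          # labeling this supercritical configuration subcritical.
          mu = k_true + bias
          sd = sqrt(sigma_bias**2 + sigma_calc**2)
          return norm.cdf(usl - n_sig * sigma_calc, loc=mu, scale=sd)

      for s in (0.0, 0.005, 0.010, 0.020):
          print(f"sigma_calc = {s:.3f}:  risk = {p_mislabel(s):.4f}")

    Under these invented numbers the risk is smallest near sigma_calc = 0.01: driving sigma_calc to zero optimizes production but not safety, which is the paper's central observation.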

  9. A detection method of vegetable oils in edible blended oil based on three-dimensional fluorescence spectroscopy technique.

    PubMed

    Xu, Jing; Liu, Xiao-Fei; Wang, Yu-Tian

    2016-12-01

    Edible blended vegetable oils are made from two or more refined oils. Blended oils can provide a wider range of essential fatty acids than single vegetable oils, which helps support good nutrition. Nutritional components in blended oils are related to the type and content of the vegetable oils used, and a new, more accurate method is proposed to identify and quantify the vegetable oils present using cluster analysis and a quasi-Monte Carlo integral. Three-dimensional fluorescence spectra were obtained at 250-400 nm (excitation) and 260-750 nm (emission). Mixtures of sunflower, soybean and peanut oils were used as typical examples to validate the effectiveness of the method. PMID:27374508
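
    For readers new to the quasi-Monte Carlo integral used here, the sketch below shows the idea with SciPy's Sobol' generator on a smooth 3-D stand-in integrand (exact value 1); the paper's actual integrand is its fluorescence model, not this toy function:

      import numpy as np
      from scipy.stats import qmc

      def f(x):
          # Smooth stand-in integrand on [0, 1]^3; each factor integrates to 1.
          return np.prod(1.0 + 0.5 * np.cos(np.pi * x), axis=1)

      d = 3
      x_qmc = qmc.Sobol(d, scramble=True, seed=0).random_base2(m=14)  # 2^14 pts
      x_mc  = np.random.default_rng(0).random((2**14, d))             # same budget

      print("QMC estimate:", f(x_qmc).mean())
      print(" MC estimate:", f(x_mc).mean())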

  10. Top Quark Mass Measurement in the Lepton + Jets Channel Using a Matrix Element Method and \\textit{in situ} Jet Energy Calibration

    SciTech Connect

    Aaltonen, T.; Alvarez Gonzalez, B.; Amerio, S.; Amidei, D.; Anastassov, A.; Annovi, A.; Antos, J.; Apollinari, G.; Appel, J.A.; Apresyan, A.; Arisawa, T.; /Waseda U. /Dubna, JINR

    2010-10-01

    A precision measurement of the top quark mass m{sub t} is obtained using a sample of t{bar t} events from p{bar p} collisions at the Fermilab Tevatron with the CDF II detector. Selected events require an electron or muon, large missing transverse energy, and exactly four high-energy jets, at least one of which is tagged as coming from a b quark. A likelihood is calculated using a matrix element method with quasi-Monte Carlo integration taking into account finite detector resolution and jet mass effects. The event likelihood is a function of m{sub t} and a parameter {Delta}{sub JES} used to calibrate the jet energy scale in situ. Using a total of 1087 events, a value of m{sub t} = 173.0 {+-} 1.2 GeV/c{sup 2} is measured.

  11. Improving multilevel Monte Carlo for stochastic differential equations with application to the Langevin equation

    PubMed Central

    Müller, Eike H.; Scheichl, Rob; Shardlow, Tony

    2015-01-01

    This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy.
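
    A bare-bones version of the standard MLMC telescoping estimator (textbook Euler-Maruyama coupling on a toy scalar SDE; not the paper's modified-equations or splitting integrators) looks like this:

      import numpy as np

      rng = np.random.default_rng(0)
      T, x0 = 1.0, 1.0
      drift = lambda x: -x          # toy SDE: dX = -X dt + 0.5 dW
      diff = 0.5

      def level_diff(l, n, rng):
          # Mean of P_l - P_{l-1} over n coupled paths (P_{-1} := 0).
          nf = 2**l
          dt = T / nf
          dW = rng.normal(0.0, np.sqrt(dt), size=(nf, n))
          xf = np.full(n, x0)
          for i in range(nf):                   # fine path
              xf += drift(xf) * dt + diff * dW[i]
          if l == 0:
              return xf.mean()
          xc = np.full(n, x0)
          for i in range(0, nf, 2):             # coarse path reuses the noise
              xc += drift(xc) * 2 * dt + diff * (dW[i] + dW[i + 1])
          return (xf - xc).mean()

      # Telescoping sum: E[P_L] = sum over l of E[P_l - P_{l-1}]
      estimate = sum(level_diff(l, 20_000, rng) for l in range(6))
      print("MLMC estimate of E[X_T]:", estimate, "(exact:", x0 * np.exp(-T), ")")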

  12. Reverse Monte Carlo ray-tracing for radiative heat transfer in combustion systems

    NASA Astrophysics Data System (ADS)

    Sun, Xiaojing

    Radiative heat transfer is a dominant heat transfer phenomenon in high temperature systems. With the rapid development of massive supercomputers, the Monte Carlo ray tracing (MCRT) method is starting to see applications in combustion systems. This research investigates whether Monte Carlo ray tracing can offer more accurate and efficient calculations than the discrete ordinates method (DOM). The Monte Carlo ray tracing method is a statistical method that traces the history of bundles of rays, and it solves radiative heat transfer with almost no approximation. It can handle nonisotropic scattering and nongray gas mixtures with relative ease compared to conventional methods such as DOM and the spherical harmonics method. There are two schemes in the Monte Carlo ray tracing method: forward and backward/reverse. Case studies and the governing equations demonstrate the advantages of the reverse Monte Carlo ray tracing (RMCRT) method, which can be easily implemented for domain-decomposition parallelism. In this dissertation, different efficiency-improvement techniques for RMCRT are introduced and implemented: the random number generator, stratified sampling, ray-surface intersection calculation, Russian roulette, and importance sampling. There are two major modules in solving the radiative heat transfer problems: the RMCRT RTE solver and the optical property models. RMCRT is first fully verified in gray, scattering, absorbing and emitting media with black/nonblack, diffuse/nondiffuse bounded surface problems. Sensitivity analysis is carried out with regard to the ray numbers, the mesh resolution of the computational domain, the optical thickness of the media and the effects of variance reduction techniques (stratified sampling, Russian roulette). Results are compared with either analytical solutions or benchmark results. The efficiency (the product of error and computation time) of RMCRT has been compared to DOM and suggests great potential for RMCRT's application
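
    Two of the variance-reduction techniques listed above are easy to sketch in isolation (illustrative Python, not the dissertation's RMCRT code): Russian roulette terminates low-weight rays without biasing the estimate, and stratified sampling spreads ray directions evenly over equal-probability bins of the polar angle.

      import numpy as np

      rng = np.random.default_rng(0)

      def russian_roulette(weight, threshold=1e-3, survival=0.5):
          # Terminate a low-weight ray probabilistically; surviving rays are
          # re-weighted so the estimator stays unbiased.
          if weight >= threshold:
              return weight
          if rng.random() < survival:
              return weight / survival    # survivor carries the lost weight
          return 0.0                      # ray terminated

      def stratified_angles(n):
          # One uniform sample per equal-probability stratum of cos(theta):
          # lower variance than n independent uniform draws.
          u = (np.arange(n) + rng.random(n)) / n
          return np.arccos(1.0 - 2.0 * u)  # isotropic polar angles on the sphere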

  13. Monte Carlo simulation methodology for the reliability of aircraft structures under damage tolerance considerations

    NASA Astrophysics Data System (ADS)

    Rambalakos, Andreas

    Current federal aviation regulations in the United States and around the world mandate that aircraft structures meet damage tolerance requirements throughout the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity, which is accomplished by a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process and to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy, comprising elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed-form expressions for the system capacity cumulative distribution function (CDF) are developed by extending the current expression for the capacity CDF of a parallel system of three elements to parallel systems of up to six elements. These newly developed expressions are used to check the accuracy of the implementation of a Monte Carlo simulation algorithm to determine the probability of failure of a parallel system comprising an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme by utilizing the residual strength of the fasteners subjected to various initial load distributions and then subjected to a new unequal load distribution resulting from subsequent sequential fastener failures. The final and main objective of this thesis is to present a methodology for computing the resulting gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach. The uncertainties associated with the time to crack initiation, the probability of crack detection, the
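
    The equal-load-sharing parallel system described above admits a compact direct simulation. The sketch below is a generic illustration with made-up lognormal strength parameters and load, not the thesis code; it uses the classical bundle result that the capacity of an equal-load-sharing system is the maximum over k of (number of survivors) times (strength of the weakest survivor).

      import numpy as np

      rng = np.random.default_rng(42)

      def failure_probability(n_elements, load, n_trials=200_000,
                              mean_strength=1.0, cov=0.2):
          # Direct Monte Carlo estimate of P(system capacity < load) for a
          # parallel system with equal load sharing among surviving elements.
          sigma = np.sqrt(np.log(1.0 + cov**2))       # lognormal strengths
          mu = np.log(mean_strength) - 0.5 * sigma**2
          s = np.sort(rng.lognormal(mu, sigma, size=(n_trials, n_elements)), axis=1)
          survivors = np.arange(n_elements, 0, -1)    # if the k-1 weakest have failed
          capacity = (survivors * s).max(axis=1)      # bundle capacity per trial
          return np.mean(capacity < load)

      print(failure_probability(6, load=4.5))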

  14. Influence of measurement geometry on the estimate of 131I activity in the thyroid: Monte Carlo simulation of a detector and a phantom

    SciTech Connect

    Ulanovsky, A.V.; Minenko, V.F.; Korneev, S.V.

    1997-01-01

    An approach for evaluating the influence of measurement geometry on estimates of 131I in the thyroid from measurements with survey meters was developed using Monte Carlo simulation of radiation transport in the human body and the radiation detector. The modified Monte Carlo code EGS4, including a newly developed mathematical model of the detector, thyroid gland, and neck, was used for the computations. The approach was tested by comparing calculated and measured differential and integral detector characteristics. This procedure was applied to estimate uncertainties in direct thyroid-measurement results due to geometrical errors. 14 refs., 11 figs., 4 tabs.

  15. Coherent Scattering Imaging Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Hassan, Laila Abdulgalil Rafik

    Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing, and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to model the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal-to-noise ratio (SNR). Also, contrast increased as the source voltage increased. Increasing the grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to the noise level. The optimal source-to-sample distance was determined to be such that the source is located at the focal distance of the grid. A carcinoma lump of 0.5 x 0.5 x 0.5 cm3 was detectable, which is reasonable considering the high noise due to the relatively small number of incident photons used for computational reasons. A further study is needed to assess the effect of breast density and breast thickness

  16. Finding organic vapors - a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Vuollekoski, Henri; Boy, Michael; Kerminen, Veli-Matti; Kulmala, Markku

    2010-05-01

    drawbacks in accuracy, the inability to find diurnal variation and the lack of size resolution. Here, we aim to shed some light onto the problem by applying an ad hoc Monte Carlo algorithm to a well-established aerosol dynamical model, the University of Helsinki Multicomponent Aerosol model (UHMA). By performing a side-by-side comparison with measurement data within the algorithm, this approach has the significant advantage of decreasing the amount of manual labor. More importantly, because the comparison is based on particle number size distribution data, a quantity that can be measured quite reliably, the accuracy of the results is good.

  17. Reliability Impact of Stockpile Aging: Stress Voiding

    SciTech Connect

    ROBINSON,DAVID G.

    1999-10-01

    The objective of this research is to statistically characterize the aging of integrated circuit interconnects. This report supersedes the stress void aging characterization presented in SAND99-0975, ''Reliability Degradation Due to Stockpile Aging,'' by the same author. The physics of stress voiding, before and after wafer processing, has recently been characterized by F. G. Yost in SAND99-0601, ''Stress Voiding during Wafer Processing''. The current effort extends this research to account for uncertainties in grain size, storage temperature, void spacing and initial residual stress, and their impact on interconnect failure after wafer processing. The sensitivity of the life estimates to these uncertainties is also investigated. Various methods for characterizing the probability of failure of a conductor line were investigated, including Latin hypercube sampling (LHS), quasi-Monte Carlo sampling (qMC), and various analytical methods such as the advanced mean value (AMV) method. The comparison was aided by the use of the Cassandra uncertainty analysis library. It was found that the only viable uncertainty analysis methods were those based on either LHS or quasi-Monte Carlo sampling. Analytical methods such as AMV could not be applied due to the nature of the stress voiding problem. The qMC method was chosen since it provided smaller estimation error for a given number of samples. The preliminary results indicate that the reliability of integrated circuits with respect to stress voiding is very sensitive to the underlying uncertainties associated with grain size and void spacing. In particular, accurate characterization of IC reliability depends heavily not only on the first and second moments of the uncertainty distribution, but more specifically on the particular form of the underlying distribution.
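
    The sampling comparison described above can be reproduced in miniature with scipy.stats.qmc (a generic sketch with a toy response function, not the stress-void model): the spread of the sample-mean estimator over repeated runs is compared for pseudo-random, Latin hypercube, and scrambled Sobol' sampling at a fixed sample size.

      import numpy as np
      from scipy.stats import qmc

      def response(x):
          # Toy smooth response of four uncertain inputs on [0, 1]^4.
          return np.exp(-x.sum(axis=1)) * (1.0 + x[:, 0] * x[:, 1])

      n, d, reps = 1024, 4, 20
      spread = {"random": [], "lhs": [], "sobol": []}
      for seed in range(reps):
          rng = np.random.default_rng(seed)
          spread["random"].append(response(rng.random((n, d))).mean())
          spread["lhs"].append(response(qmc.LatinHypercube(d=d, seed=seed).random(n)).mean())
          spread["sobol"].append(response(qmc.Sobol(d=d, scramble=True, seed=seed).random(n)).mean())

      for name, vals in spread.items():
          print(f"{name:6s} estimator std over {reps} runs: {np.std(vals):.2e}")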

  18. Frequency domain optical tomography using a Monte Carlo perturbation method

    NASA Astrophysics Data System (ADS)

    Yamamoto, Toshihiro; Sakamoto, Hiroki

    2016-04-01

    A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. The frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions, even for inverse problems that are ill-posed due to cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been analyzed mostly for the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with the results obtained from conventional deterministic or Monte Carlo methods.

  19. Monte Carlo evaluation of kerma in an HDR brachytherapy bunker.

    PubMed

    Pérez-Calatayud, J; Granero, D; Ballester, F; Casal, E; Crispin, V; Puchades, V; León, A; Verdú, G

    2004-12-21

    In recent years, the use of high dose rate (HDR) after-loader machines has greatly increased due to the shift from traditional Cs-137/Ir-192 low dose rate (LDR) brachytherapy to HDR brachytherapy. The method used to calculate the required concrete and, where appropriate, lead shielding in the door is based on analytical methods provided by documents published by the ICRP, the IAEA and the NCRP. The purpose of this study is to perform a more realistic kerma evaluation at the entrance maze door of an HDR bunker using the Monte Carlo code GEANT4. The Monte Carlo results were validated experimentally. The spectrum at the maze entrance door, obtained with Monte Carlo, has an average energy of about 110 keV, maintaining a similar value along the length of the maze. Comparison of the analytical results with the Monte Carlo ones shows that results obtained using the albedo coefficient from the ICRP document more closely match those given by the Monte Carlo method, although the maximum value given by the MC calculations is 30% greater. PMID:15724543

  20. TOPICAL REVIEW: Monte Carlo modelling of external radiotherapy photon beams

    NASA Astrophysics Data System (ADS)

    Verhaegen, Frank; Seuntjens, Jan

    2003-11-01

    An essential requirement for successful radiation therapy is that the discrepancies between dose distributions calculated at the treatment planning stage and those delivered to the patient are minimized. An important component in the treatment planning process is the accurate calculation of dose distributions. The most accurate way to do this is by Monte Carlo calculation of particle transport, first in the geometry of the external or internal source followed by tracking the transport and energy deposition in the tissues of interest. Additionally, Monte Carlo simulations allow one to investigate the influence of source components on beams of a particular type and their contaminant particles. Since the mid 1990s, there has been an enormous increase in Monte Carlo studies dealing specifically with the subject of the present review, i.e., external photon beam Monte Carlo calculations, aided by the advent of new codes and fast computers. The foundations for this work were laid from the late 1970s until the early 1990s. In this paper we will review the progress made in this field over the last 25 years. The review will be focused mainly on Monte Carlo modelling of linear accelerator treatment heads but sections will also be devoted to kilovoltage x-ray units and 60Co teletherapy sources.

  2. Monte Carlo treatment planning for photon and electron beams

    NASA Astrophysics Data System (ADS)

    Reynaert, N.; van der Marck, S. C.; Schaart, D. R.; Van der Zee, W.; Van Vliet-Vroegindeweij, C.; Tomsej, M.; Jansen, J.; Heijmen, B.; Coghe, M.; De Wagter, C.

    2007-04-01

    During the last few decades, accuracy in photon and electron radiotherapy has increased substantially. This is partly due to enhanced linear accelerator technology, providing more flexibility in field definition (e.g. the usage of computer-controlled dynamic multileaf collimators), which led to intensity modulated radiotherapy (IMRT). Important improvements have also been made in the treatment planning process, more specifically in the dose calculations. Originally, dose calculations relied heavily on analytic, semi-analytic and empirical algorithms. The more accurate convolution/superposition codes use pre-calculated Monte Carlo dose "kernels" partly accounting for tissue density heterogeneities. It is generally recognized that the Monte Carlo method is able to increase accuracy even further. Since the second half of the 1990s, several Monte Carlo dose engines for radiotherapy treatment planning have been introduced. To enable the use of a Monte Carlo treatment planning (MCTP) dose engine in clinical circumstances, approximations have been introduced to limit the calculation time. In this paper, the literature on MCTP is reviewed, focussing on patient modeling, approximations in linear accelerator modeling and variance reduction techniques. An overview of published comparisons between MC dose engines and conventional dose calculations is provided for phantom studies and clinical examples, evaluating the added value of MCTP in the clinic. An overview of existing Monte Carlo dose engines and commercial MCTP systems is presented and some specific issues concerning the commissioning of a MCTP system are discussed.

  3. NOTE: A Monte Carlo study of dose rate distribution around the specially asymmetric CSM3-a 137Cs source

    NASA Astrophysics Data System (ADS)

    Pérez-Calatayud, J.; Lliso, F.; Ballester, F.; Serrano, M. A.; Lluch, J. L.; Limami, Y.; Puchades, V.; Casal, E.

    2001-07-01

    The CSM3 137Cs type stainless-steel encapsulated source is widely used in manually afterloaded low dose rate brachytherapy. A specially asymmetric source, CSM3-a, has been designed by CIS Bio International (France) by substituting the eyelet-side seed of the CSM3 source with an inactive material. This modification was made in order to allow a uniform dose level over the upper vaginal surface when this 'linear' source is inserted at the top of dome vaginal applicators. In this study the Monte Carlo GEANT3 simulation code, incorporating the source geometry in detail, was used to investigate the dosimetric characteristics of this special CSM3-a 137Cs brachytherapy source. The absolute dose rate distribution in water around this source was calculated and is presented in the form of an along-away table. Comparison of Sievert integral type calculations with Monte Carlo results is discussed.

  4. A Monte Carlo simulation based two-stage adaptive resonance theory mapping approach for offshore oil spill vulnerability index classification.

    PubMed

    Li, Pu; Chen, Bing; Li, Zelin; Zheng, Xiao; Wu, Hongjing; Jing, Liang; Lee, Kenneth

    2014-09-15

    In this paper, a Monte Carlo simulation based two-stage adaptive resonance theory mapping (MC-TSAM) model was developed to classify a given site into distinguished zones representing different levels of offshore Oil Spill Vulnerability Index (OSVI). It consisted of an adaptive resonance theory (ART) module, an ART Mapping module, and a centroid determination module. Monte Carlo simulation was integrated with the TSAM approach to address uncertainties that widely exist in site conditions. The applicability of the proposed model was validated by classifying a large coastal area, which was surrounded by potential oil spill sources, based on 12 features. Statistical analysis of the results indicated that the classification process was affected by multiple features instead of one single feature. The classification results also provided the least or desired number of zones which can sufficiently represent the levels of offshore OSVI in an area under uncertainty and complexity, saving time and budget in spill monitoring and response. PMID:25044043

  5. Development of a method for calibrating in vivo measurement systems using magnetic resonance imaging and Monte Carlo computations

    SciTech Connect

    Mallett, M.W.; Poston, J.W.; Hickman, D.P.

    1995-06-01

    Research efforts towards developing a new method for calibrating in vivo measurement systems using magnetic resonance imaging (MRI) and Monte Carlo computations are discussed. The method employs the enhanced three-point Dixon technique for producing pure fat and pure water MR images of the human body. The MR images are used to define the geometry and composition of the scattering media for transport calculations using the general-purpose Monte Carlo code MCNP, Version 4. A sample case for developing the new method utilizing an adipose/muscle matrix is compared with laboratory measurements. Verification of the integrated MRI-MCNP method has been done for a specially designed phantom composed of fat, water, air, and a bone-substitute material. Implementation of the MRI-MCNP method is demonstrated for a low-energy, lung counting in vivo measurement system. Limitations and solutions regarding the presented method are discussed. 15 refs., 7 figs., 4 tabs.

  6. A Monte Carlo tool for raster-scanning particle therapy dose computation

    NASA Astrophysics Data System (ADS)

    Jelen, U.; Radon, M.; Santiago, A.; Wittig, A.; Ammazzalorso, F.

    2014-03-01

    The purpose of this work was to implement Monte Carlo (MC) dose computation in realistic patient geometries with raster-scanning, the most advanced ion beam delivery technique, combining magnetic beam deflection with energy variation. FLUKA, a Monte Carlo package well established in particle therapy applications, was extended to simulate raster-scanning delivery with clinical data, which is not available as a built-in feature. A new complex beam source, compatible with FLUKA's public programming interface, was implemented in Fortran to model the specific properties of raster-scanning, i.e. delivery by means of multiple spot sources with variable spatial distributions, energies and numbers of particles. The source was plugged into the MC engine through the user hook system provided by FLUKA. Additionally, routines were provided to populate the beam source with treatment plan data, stored as DICOM RTPlan or TRiP98's RST format, enabling MC recomputation of clinical plans. Finally, facilities were integrated to read computerised tomography (CT) data into FLUKA. The tool was used to recompute two representative carbon ion treatment plans, a skull base and a prostate case, prepared with analytical dose calculation (TRiP98). Selected, clinically relevant issues influencing the dose distributions were investigated: (1) presence of positioning errors, (2) influence of fiducial markers and (3) variations in pencil beam width. Notable differences in the modelling of these challenging situations were observed between the analytical and Monte Carlo results. In conclusion, a tool was developed to support particle therapy research and treatment when high-precision MC calculations are required, e.g. in the presence of severe density heterogeneities or in quality assurance procedures.

  7. Phase coexistence in heterogeneous porous media: a new extension to Gibbs ensemble Monte Carlo simulation method.

    PubMed

    Puibasset, Joël

    2005-04-01

    The effect of confinement on the phase behavior of simple fluids is still an area of intensive research. In between experiment and theory, molecular simulation is a powerful tool to study the effect of confinement in realistic porous materials containing some disorder. Previous simulation works aiming at establishing the phase diagram of a confined Lennard-Jones-type fluid concentrated on simple pore geometries (slits or cylinders). The development of the Gibbs ensemble Monte Carlo technique by Panagiotopoulos [Mol. Phys. 61, 813 (1987)] greatly favored the study of such simple geometries for two reasons. First, the technique is very efficient for calculating the phase diagram, since each run (at a given temperature) converges directly to an equilibrium between a gaslike and a liquidlike phase. Second, due to the volume exchange procedure between the two phases, at least one invariant direction of space is required for applicability of the method, which is the case for slits or cylinders. Generally, the introduction of some disorder in such simple pores breaks the initial invariance in one of the space directions and prevents working in the Gibbs ensemble. The simulation techniques for such disordered systems are numerous (grand canonical Monte Carlo, molecular dynamics, histogram reweighting, N-P-T+test method, Gibbs-Duhem integration procedure, etc.). However, the Gibbs ensemble technique, which gives directly the coexistence between phases, was never generalized to such systems. In this work, we focus on two weakly disordered pores for which a modified Gibbs ensemble Monte Carlo technique can be applied. One of the pores is geometrically undulated, whereas the second is cylindrical but presents a chemical variation which gives rise to a modulation of the wall potential. In the first case almost no change in the phase diagram is observed, whereas in the second strong modifications are reported. PMID:15847492

  8. An Extension of Implicit Monte Carlo Diffusion: Multigroup and The Difference Formulation

    SciTech Connect

    Cleveland, M A; Gentile, N; Palmer, T S

    2010-04-19

    Implicit Monte Carlo (IMC) and Implicit Monte Carlo Diffusion (IMD) are approaches to the numerical solution of the equations of radiative transfer. IMD was previously derived and numerically tested on grey, or frequency-integrated problems. In this research, we extend Implicit Monte Carlo Diffusion (IMD) to account for frequency dependence, and we implement the difference formulation as a source manipulation variance reduction technique. We derive the relevant probability distributions and present the frequency dependent IMD algorithm, with and without the difference formulation. The IMD code with and without the difference formulation was tested using both grey and frequency dependent benchmark problems. The Su and Olson semi-analytic Marshak wave benchmark was used to demonstrate the validity of the code for grey problems. The Su and Olson semi-analytic picket fence benchmark was used for the frequency dependent problems. The frequency dependent IMD algorithm reproduces the results of both Su and Olson benchmark problems. Frequency group refinement studies indicate that the computational cost of refining the group structure is likely less than that of group refinement in deterministic solutions of the radiation diffusion methods. Our results show that applying the difference formulation to the IMD algorithm can result in an overall increase in the figure of merit for frequency dependent problems. However, the creation of negatively weighted particles from the difference formulation can cause significant numerical instabilities in regions of the problem with sharp spatial gradients in the solution. An adaptive implementation of the difference formulation may be necessary to focus its use in regions that are at or near thermal equilibrium.

  10. A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision

    NASA Technical Reports Server (NTRS)

    Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.

    1998-01-01

    We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty, assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (sigma) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the calculated 46% 1-sigma model uncertainty, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring the modeled and observed total O3 trends differ by more than 1 sigma but less than 2 sigma, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation

  11. Using MathCad to Evaluate Exact Integral Formulations of Spacecraft Orbital Heats for Primitive Surfaces at Any Orientation

    NASA Technical Reports Server (NTRS)

    Pinckney, John

    2010-01-01

    With the advent of high-speed computing, Monte Carlo ray tracing techniques have become the preferred method for evaluating spacecraft orbital heats. Monte Carlo has its greatest advantage where there are many interacting surfaces. However, Monte Carlo programs are specialized programs that suffer from some inaccuracy, long calculation times and high purchase cost. A general orbital heating integral is presented here that is accurate, fast and runs on MathCad, a generally available engineering mathematics program. The integral is easy to read, understand and alter. The integral can be applied to unshaded primitive surfaces at any orientation. The method is limited to direct heating calculations. This integral formulation can be used for quick orbit evaluations and for spot-checking Monte Carlo results.

  12. Quantitative Molecular Thermochemistry Based on Path Integrals

    SciTech Connect

    Glaesemann, K R; Fried, L E

    2005-03-14

    The calculation of thermochemical data requires accurate molecular energies and heat capacities. Traditional methods rely upon the standard harmonic normal mode analysis to calculate the vibrational and rotational contributions. We utilize path integral Monte Carlo (PIMC) for going beyond the harmonic analysis, to calculate the vibrational and rotational contributions to ab initio energies. This is an application and extension of a method previously developed in our group.
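
    A primitive path integral Monte Carlo loop for a single one-dimensional harmonic mode illustrates the idea (a pedagogical sketch, not the authors' production code; hbar = m = 1, so the result can be checked against the exact canonical average E = 0.5/tanh(beta/2)).

      import numpy as np

      rng = np.random.default_rng(0)

      def pimc_energy(beta=4.0, n_beads=32, n_sweeps=20_000, step=0.5):
          # Primitive PIMC estimate of <E> for V(x) = x**2 / 2 (hbar = m = 1).
          P, tau = n_beads, beta / n_beads
          kin = P / (2.0 * beta)              # spring coefficient in the action
          x = np.zeros(P)
          samples = []
          for sweep in range(n_sweeps):
              for k in range(P):              # single-bead Metropolis moves
                  x_new = x[k] + step * rng.uniform(-1.0, 1.0)
                  prev, nxt = x[k - 1], x[(k + 1) % P]
                  dS = kin * ((nxt - x_new)**2 + (x_new - prev)**2
                              - (nxt - x[k])**2 - (x[k] - prev)**2) \
                       + tau * 0.5 * (x_new**2 - x[k]**2)
                  if dS < 0.0 or rng.random() < np.exp(-dS):
                      x[k] = x_new
              if sweep >= n_sweeps // 5:      # discard burn-in
                  links = np.sum((np.roll(x, -1) - x)**2)
                  # Thermodynamic energy estimator.
                  samples.append(P / (2 * beta) - kin / beta * links + np.mean(0.5 * x**2))
          return np.mean(samples)

      print("PIMC:", pimc_energy(), " exact:", 0.5 / np.tanh(2.0))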

  13. Improving dynamical lattice QCD simulations through integrator tuning using Poisson brackets and a force-gradient integrator

    SciTech Connect

    Clark, M. A.; Joo, Balint; Kennedy, A. D.; Silva, P. J.

    2011-10-01

    We show how the integrators used for the molecular dynamics step of the Hybrid Monte Carlo algorithm can be further improved. These integrators not only approximately conserve some Hamiltonian H but conserve exactly a nearby shadow Hamiltonian H-tilde. This property allows for a new tuning method of the molecular dynamics integrator and also allows for a new class of integrators (force-gradient integrators) which is expected to reduce significantly the computational cost of future large-scale gauge field ensemble generation.
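
    The shadow Hamiltonian property is easy to observe in a toy setting (a minimal sketch, not the lattice QCD molecular dynamics): a leapfrog trajectory for H = p^2/2 + q^2/2 does not conserve H exactly, but the energy error oscillates without drifting, because the integrator exactly conserves a nearby modified Hamiltonian H-tilde = H + O(dt^2).

      import numpy as np

      def leapfrog(q, p, n_steps, dt, grad=lambda q: q):
          # Leapfrog integration of H = p**2/2 + q**2/2 (grad is dV/dq).
          energies = []
          for _ in range(n_steps):
              p -= 0.5 * dt * grad(q)   # half kick
              q += dt * p               # drift
              p -= 0.5 * dt * grad(q)   # half kick
              energies.append(0.5 * (p**2 + q**2))
          return q, p, np.array(energies)

      q0, p0 = 1.0, 0.0
      _, _, h = leapfrog(q0, p0, n_steps=2000, dt=0.1)
      # Bounded oscillation with no secular drift: the signature of a
      # conserved shadow Hamiltonian.
      print("max |H - H0| =", np.abs(h - 0.5 * (p0**2 + q0**2)).max())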

  14. Backward and Forward Monte Carlo Method in Polarized Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Yong, Huang; Guo-Dong, Shi; Ke-Yong, Zhu

    2016-03-01

    In general, the Stokes vector cannot be calculated in reverse in vector radiative transfer. This paper presents a novel backward and forward Monte Carlo simulation strategy to study vector radiative transfer in a participating medium. A backward Monte Carlo process is used to calculate the ray trajectory and the endpoint of the ray. The Stokes vector is then obtained by a forward Monte Carlo process. A one-dimensional graded-index semi-transparent medium was taken as the physical model, and thermal emission with consideration of polarization was studied in the medium. The solution process for non-scattering, isotropic scattering, and anisotropic scattering media is discussed. The influence of the optical thickness and albedo on the Stokes vector is studied. The results show that the U and V components of the apparent Stokes vector are very small, but the Q component of the apparent Stokes vector is relatively large and cannot be ignored.

  15. Monte Carlo techniques for real-time quantum dynamics

    SciTech Connect

    Dowling, Mark R. (E-mail: dowling@physics.uq.edu.au); Davis, Matthew J.; Drummond, Peter D.; Corney, Joel F.

    2007-01-10

    The stochastic-gauge representation is a method of mapping the equation of motion for the quantum mechanical density operator onto a set of equivalent stochastic differential equations. One of the stochastic variables is termed the 'weight', and its magnitude is related to the importance of the stochastic trajectory. We investigate the use of Monte Carlo algorithms to improve the sampling of the weighted trajectories and thus reduce sampling error in a simulation of quantum dynamics. The method can be applied to calculations in real time, as well as in imaginary time, for which Monte Carlo algorithms are more commonly used. The Monte Carlo algorithms are applicable when the weight is guaranteed to be real, and we demonstrate how to ensure this is the case. Examples are given for the anharmonic oscillator, where large improvements over stochastic sampling are observed.

  16. Skin image reconstruction using Monte Carlo based color generation

    NASA Astrophysics Data System (ADS)

    Aizu, Yoshihisa; Maeda, Takaaki; Kuwahara, Tomohiro; Hirao, Tetsuji

    2010-11-01

    We propose a novel method of skin image reconstruction based on color generation using Monte Carlo simulation of spectral reflectance in the nine-layered skin tissue model. The RGB image and spectral reflectance of human skin are obtained by RGB camera and spectrophotometer, respectively. The skin image is separated into the color component and texture component. The measured spectral reflectance is used to evaluate scattering and absorption coefficients in each of the nine layers which are necessary for Monte Carlo simulation. Various skin colors are generated by Monte Carlo simulation of spectral reflectance in given conditions for the nine-layered skin tissue model. The new color component is synthesized to the original texture component to reconstruct the skin image. The method is promising for applications in the fields of dermatology and cosmetics.

  17. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new parallel version of TRAM, implemented on a graphics processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  18. Monte Carlo tests of the ELIPGRID-PC algorithm

    SciTech Connect

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
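
    The validation idea is straightforward to sketch (illustrative Python, not the ELIPGRID algorithm itself): drop an ellipse of semi-axes a and b at a random position and orientation relative to a square sampling grid, and count how often at least one grid node falls inside it.

      import numpy as np

      rng = np.random.default_rng(7)

      def hit_probability(a, b, spacing=1.0, n_trials=50_000):
          # MC probability that a square grid of the given spacing detects an
          # elliptical hot spot with random center and orientation.
          cx = rng.uniform(0.0, spacing, n_trials)
          cy = rng.uniform(0.0, spacing, n_trials)
          theta = rng.uniform(0.0, np.pi, n_trials)
          # Grid nodes within reach of any ellipse centered in one grid cell.
          r = int(np.ceil(max(a, b) / spacing)) + 1
          offs = np.arange(-r, r + 1) * spacing
          gx, gy = np.meshgrid(offs, offs)
          dx = gx.ravel()[None, :] - cx[:, None]
          dy = gy.ravel()[None, :] - cy[:, None]
          # Rotate node offsets into the ellipse frame, then test the ellipse equation.
          ct, st = np.cos(theta)[:, None], np.sin(theta)[:, None]
          xr, yr = dx * ct + dy * st, -dx * st + dy * ct
          inside = (xr / a)**2 + (yr / b)**2 <= 1.0
          return inside.any(axis=1).mean()

      print(hit_probability(a=0.8, b=0.3))   # thin ellipse on a unit grid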

  19. Monte Carlo simulation in statistical physics: an introduction

    NASA Astrophysics Data System (ADS)

    Binder, K.; Heermann, D. W.

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics, chemistry and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. This fourth edition has been updated and a new chapter on Monte Carlo simulation of quantum-mechanical problems has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was the winner of the Berni J. Alder CECAM Award for Computational Physics 2001.

  20. Macro Monte Carlo for dose calculation of proton beams.

    PubMed

    Fix, Michael K; Frei, Daniel; Volken, Werner; Born, Ernst J; Aebersold, Daniel M; Manser, Peter

    2013-04-01

    Although the Monte Carlo (MC) method allows accurate dose calculation for proton radiotherapy, its usage is limited due to long computing time. In order to gain efficiency, a new macro MC (MMC) technique for proton dose calculations has been developed. The basic principle of the MMC transport is a local to global MC approach. The local simulations using GEANT4 consist of mono-energetic proton pencil beams impinging perpendicularly on slabs of different thicknesses and different materials (water, air, lung, adipose, muscle, spongiosa, cortical bone). During the local simulation multiple scattering, ionization as well as elastic and inelastic interactions have been taken into account and the physical characteristics such as lateral displacement, direction distributions and energy loss have been scored for primary and secondary particles. The scored data from appropriate slabs is then used for the stepwise transport of the protons in the MMC simulation while calculating the energy loss along the path between entrance and exit position. Additionally, based on local simulations the radiation transport of neutrons and the generated ions are included into the MMC simulations for the dose calculations. In order to validate the MMC transport, calculated dose distributions using the MMC transport and GEANT4 have been compared for different mono-energetic proton pencil beams impinging on different phantoms including homogeneous and inhomogeneous situations as well as on a patient CT scan. The agreement of calculated integral depth dose curves is better than 1% or 1 mm for all pencil beams and phantoms considered. For the dose profiles the agreement is within 1% or 1 mm in all phantoms for all energies and depths. The comparison of the dose distribution calculated using either GEANT4 or MMC in the patient also shows an agreement of within 1% or 1 mm. The efficiency of MMC is up to 200 times higher than for GEANT4. The very good level of agreement in the dose comparisons

  2. Improved Convergence Rate of Multi-Group Scattering Moment Tallies for Monte Carlo Neutron Transport Codes

    NASA Astrophysics Data System (ADS)

    Nelson, Adam

    Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to calculate correctly from the measured nuclear data with both deterministic and stochastic methods. Calculating these parameters with deterministic methods requires a set of assumptions which do not hold true in all conditions. These quantities can be calculated accurately with stochastic methods; however, doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. The improved method of tallying the scattering moment matrices is based on recognizing that all of the outgoing particle information is known a priori and can be taken advantage of to increase the tallying efficiency (therefore reducing the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every element of the scattering moment matrices with its share of data. In addition to reducing the uncertainty, this method allows for the use of a track-length estimation process, potentially offering even further improvement to the tallying efficiency. Unfortunately, to produce the needed distributions, the probability functions themselves must undergo an integration over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and therefore must be performed in advance by way of a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than the currently used techniques. The improved method has been implemented in a code system

  3. Monte Carlo Form-Finding Method for Tensegrity Structures

    NASA Astrophysics Data System (ADS)

    Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping

    2010-05-01

    In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.

  4. Bold Diagrammatic Monte Carlo for Fermionic and Fermionized Systems

    NASA Astrophysics Data System (ADS)

    Svistunov, Boris

    2013-03-01

    In three different fermionic cases (the repulsive Hubbard model, resonant fermions, and fermionized spins-1/2 on a triangular lattice) we observe the phenomenon of sign blessing: the Feynman diagrammatic series features a finite convergence radius despite the factorial growth of the number of diagrams with diagram order. The bold diagrammatic Monte Carlo technique allows us to sample millions of skeleton Feynman diagrams. With the universal fermionization trick we can fermionize essentially any (bosonic, spin, mixed, etc.) lattice system. The combination of fermionization and bold diagrammatic Monte Carlo yields a universal first-principles approach to strongly correlated lattice systems, provided the sign blessing is a generic fermionic phenomenon. Supported by NSF and DARPA.

  5. A review of best practices for Monte Carlo criticality calculations

    SciTech Connect

    Brown, Forrest B

    2009-01-01

    Monte Carlo methods have been used to compute k_eff and the fundamental mode eigenfunction of critical systems since the 1950s. While such calculations have become routine using standard codes such as MCNP and SCALE/KENO, there still remain 3 concerns that must be addressed to perform calculations correctly: convergence of k_eff and the fission distribution, bias in k_eff and tally results, and bias in statistics on tally results. This paper provides a review of the fundamental problems inherent in Monte Carlo criticality calculations. To provide guidance to practitioners, suggested best practices for avoiding these problems are discussed and illustrated by examples.

  6. Mesh Optimization for Monte Carlo-Based Optical Tomography

    PubMed Central

    Edmans, Andrew; Intes, Xavier

    2015-01-01

    Mesh-based Monte Carlo techniques for optical imaging allow for accurate modeling of light propagation in complex biological tissues. Recently, they have been developed within an efficient computational framework to be used as a forward model in optical tomography. However, commonly employed adaptive mesh discretization techniques have not yet been implemented for Monte Carlo based tomography. Herein, we propose a methodology to optimize the mesh discretization and analytically rescale the associated Jacobian based on the characteristics of the forward model. We demonstrate that this method maintains the accuracy of the forward model even in the case of temporal data sets while allowing for significant coarsening or refinement of the mesh. PMID:26566523

  7. A Monte Carlo method for combined segregation and linkage analysis.

    PubMed Central

    Guo, S W; Thompson, E A

    1992-01-01

    We introduce a Monte Carlo approach to combined segregation and linkage analysis of a quantitative trait observed in an extended pedigree. In conjunction with the Monte Carlo method of likelihood-ratio evaluation proposed by Thompson and Guo, the method provides for estimation and hypothesis testing. The greatest attraction of this approach is its ability to handle complex genetic models and large pedigrees. Two examples illustrate the practicality of the method. One is of simulated data on a large pedigree; the other is a reanalysis of published data previously analyzed by other methods. PMID:1415253

  8. Enhancements for Multi-Player Monte-Carlo Tree Search

    NASA Astrophysics Data System (ADS)

    Nijssen, J. (Pim) A. M.; Winands, Mark H. M.

    Monte-Carlo Tree Search (MCTS) is becoming increasingly popular for playing multi-player games. In this paper we propose two enhancements for MCTS in multi-player games: (1) Progressive History and (2) Multi-Player Monte-Carlo Tree Search Solver (MP-MCTS-Solver). We analyze the performance of these enhancements in two different multi-player games: Focus and Chinese Checkers. Based on the experimental results we conclude that Progressive History is a considerable improvement in both games and MP-MCTS-Solver, using the standard update rule, is a genuine improvement in Focus.
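
    The selection step that both enhancements act on can be sketched with the standard UCT rule (a generic illustration; the paper's Progressive History adds a history-heuristic bonus term to this score, and MP-MCTS-Solver changes how proven values are backed up).

      import math

      def uct_select(children, total_visits, c=1.4):
          # Standard UCT child selection: exploit mean reward, explore rarely
          # visited children. Progressive History would add a bonus term here.
          def score(ch):
              if ch["visits"] == 0:
                  return float("inf")          # expand unvisited children first
              exploit = ch["reward"] / ch["visits"]
              explore = c * math.sqrt(math.log(total_visits) / ch["visits"])
              return exploit + explore
          return max(children, key=score)

      children = [{"move": "a", "visits": 10, "reward": 6.0},
                  {"move": "b", "visits": 3, "reward": 2.5},
                  {"move": "c", "visits": 0, "reward": 0.0}]
      print(uct_select(children, total_visits=13)["move"])   # -> "c" (unvisited)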

  9. Overview of the MCU Monte Carlo Software Package

    NASA Astrophysics Data System (ADS)

    Kalugin, M. A.; Oleynik, D. S.; Shkarovsky, D. A.

    2014-06-01

    MCU (Monte Carlo Universal) is a project on the development and practical use of a universal computer code for simulation of particle transport (neutrons, photons, electrons, positrons) in three-dimensional systems by means of the Monte Carlo method. This paper provides information on the current state of the project. The developed libraries of constants are briefly described, and the potentialities of the MCU-5 package modules and the executable codes compiled from them are characterized. Examples of important problems of reactor physics solved with the code are presented.

  10. Carlos Leffler Inc. - still growing after more than 50 years

    SciTech Connect

    Not Available

    1993-02-01

    In 1941, Carlos R. Leffler, then 17 years old, bought his first truck with his life savings. He used it to transport fertilizer, coal and milk to the farmers of Lebanon County, PA. With a reputation for reliability, gained from his efforts with this first unit, he was able to expand his activities to fuel-oil delivery. In 1945, Leffler moved the business to Richland, PA, which is still the company's hometown, and embarked on the course of growth which is still the company's hallmark. Today, Carlos R. Leffler, Inc. serves customers in 45 of the 67 counties of Pennsylvania as well as customers in New Jersey, Maryland and Delaware.

  11. Monte Carlo simulations of phosphate polyhedron connectivity in glasses

    SciTech Connect

    ALAM,TODD M.

    2000-01-01

    Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest neighbor connectivities between phosphate polyhedron for random, alternating and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments of phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.

  13. Oversight and Development of a Community Monte Carlo Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Under this grant we have developed a Monte Carlo radiative transfer code that will act as the nucleus for the I3RC Community Monte Carlo Model. All code is written in ANSI-compliant Fortran-95. Many modules define public types and procedures to manipulate them, but do not allow access to the types' internal components. This allows each module to do its own exhaustive error checking up front, then proceed in a streamlined way. Many modules can read and write the state of their objects to persistent files. The code has been tested on a Macintosh running OS 10.2.4 and the Absoft Fortran compiler, and on Sun UltraSparcs running Solaris 5.8 and Forte V8 compilers. The code exposed bugs in the Intel Fortran Compiler (ifc) on the I3RC Linux host, and we are waiting for a resolution of these bugs before finishing the port. The code base is under CVS version control and consists of the core code (nine modules providing the infrastructure), example integrators, and a suite of utilities and examples.

  14. Analysis of Correlated Coupling of Monte Carlo Forward and Adjoint Histories

    SciTech Connect

    Ueki, Taro; Hoogenboom, J.E.; Kloosterman, J. L.

    2001-02-15

    In Monte Carlo correlated coupling, forward and adjoint particle histories are initiated in exactly opposite directions at an arbitrarily placed surface between a physical source and a physical detector. This coupling calculation can become more efficient than standard forward calculations; in many cases, however, its basic form is less efficient. This inherent inefficiency can be overcome by applying a black absorber perturbation to either the forward or the adjoint problem and by processing the product of batch averages as one statistical entity. The usage of the black absorber is based on the invariance of the response flow integral under a material perturbation in either the physical-detector-side volume in the forward problem or the physical-source-side volume in the adjoint problem. The batch-average product processing makes use of a quadratic increase of the nonzero coupled-score probability. All the developments have been done in such a way that improved-efficiency schemes available in widely distributed Monte Carlo codes can be applied to both the forward and adjoint simulations. Also, the physical meaning of the black absorber perturbation is interpreted in terms of surface crossing and is numerically validated. In addition, immediate reflection at the intermediate surface with a controlled direction change is investigated within the invariance framework. This approach can be advantageous for void streaming problems.
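
    A minimal sketch of the batch-average-product idea (our illustration with invented names, not the authors' implementation): each forward batch average is multiplied by the corresponding adjoint batch average, and the mean and standard error are computed on the products, treating each product as one statistical entity.

        import statistics

        def coupled_response(forward_batch_means, adjoint_batch_means):
            # One product per batch; the variance is estimated on the products
            # directly rather than propagated from the two factors separately.
            products = [f * a for f, a in zip(forward_batch_means, adjoint_batch_means)]
            mean = statistics.fmean(products)
            sem = statistics.stdev(products) / len(products) ** 0.5
            return mean, sem

        fwd = [0.52, 0.49, 0.51, 0.50]   # toy forward batch averages
        adj = [0.20, 0.22, 0.19, 0.21]   # toy adjoint batch averages
        print(coupled_response(fwd, adj))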

  15. Wavelet-Monte Carlo Hybrid System for HLW Nuclide Migration Modeling and Sensitivity and Uncertainty Analysis

    SciTech Connect

    Nasif, Hesham; Neyama, Atsushi

    2003-02-26

    This paper presents results of an uncertainty and sensitivity analysis for the performance of the different barriers of high-level radioactive waste repositories. SUA is a tool that performs uncertainty and sensitivity analysis on the output of the Wavelet Integrated Repository System (WIRS) model, which was developed to solve a system of nonlinear partial differential equations arising from the model formulation of radionuclide transport through a repository. SUA performs sensitivity analysis (SA) and uncertainty analysis (UA) on a sample output from Monte Carlo simulation. The sample is generated by WIRS and contains the maximum release rate in the form of a time series, together with the values of the input variables for a set of different simulations (runs) realized by varying the model input parameters. The Monte Carlo sample is generated with SUA as a pure random sample or using the Latin hypercube sampling technique. Tchebycheff and Kolmogorov confidence bounds are computed on the maximum release rate for UA, and effective non-parametric statistics are used to rank the influence of the model input parameters for SA. Based on the results, we point out parameters that have primary influence on the performance of the engineered barrier system of a repository. The parameters found to be key contributors to the release rate are the selenium and cesium distribution coefficients in both the geosphere and the major water-conducting fault (MWCF), the diffusion depth, and the water flow rate in the excavation-disturbed zone (EDZ).
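
    Two generic ingredients named here, Latin hypercube sampling and the distribution-free Tchebycheff bound, can be sketched briefly (a rough Python illustration under our own conventions, not the SUA code):

        import numpy as np

        def latin_hypercube(n_runs, n_params, rng):
            # One stratified draw per run and parameter: each column is a random
            # permutation of the n_runs strata, jittered within its stratum.
            strata = np.tile(np.arange(n_runs), (n_params, 1))
            u = rng.permuted(strata, axis=1).T + rng.random((n_runs, n_params))
            return u / n_runs                     # uniform on (0, 1)^n_params

        def tchebycheff_bound(outputs, alpha=0.05):
            # Distribution-free two-sided bound on the mean output:
            # P(|mean - mu| >= k*s/sqrt(n)) <= 1/k**2, with 1/k**2 = alpha.
            m, s, n = outputs.mean(), outputs.std(ddof=1), len(outputs)
            half = s / np.sqrt(n * alpha)
            return m - half, m + half

        rng = np.random.default_rng(0)
        sample = latin_hypercube(200, 5, rng)     # 200 runs, 5 input parameters
        print(tchebycheff_bound(sample[:, 0]))    # bound on a toy output mean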

  16. GORRAM: Introducing accurate operational-speed radiative transfer Monte Carlo solvers

    NASA Astrophysics Data System (ADS)

    Buras-Schnell, Robert; Schnell, Franziska; Buras, Allan

    2016-06-01

    We present a new approach for solving the radiative transfer equation in horizontally homogeneous atmospheres. The motivation was to develop a fast yet accurate radiative transfer solver to be used in operational retrieval algorithms for next-generation meteorological satellites. The core component is the program GORRAM (Generator Of Really Rapid Accurate Monte-Carlo), which generates solvers individually optimized for the intended task. These solvers consist of a Monte Carlo model capable of path recycling and a representative set of photon paths; the latter is generated using the simulated annealing technique. GORRAM automatically takes advantage of limitations on the variability of the atmosphere. Due to this optimization, the number of photon paths necessary for accurate results can be reduced by several orders of magnitude. For the shown example of a forward model intended for an aerosol satellite retrieval, comparison with an exact yet slow solver shows that a precision of better than 1% can be achieved with only 36 photons. The computational time is at least an order of magnitude shorter than for any other type of radiative transfer solver. Only the lookup-table approach often used in satellite retrieval is faster, but it suffers from limited accuracy. This makes GORRAM-generated solvers an eligible candidate as the forward model in operational-speed retrieval algorithms and data assimilation applications. GORRAM also has the potential to create fast solvers of other integrable equations.

  17. Monte Carlo Mean Field Treatment of Microbunching Instability in the FERMI@Elettra First Bunch Compressor

    SciTech Connect

    Bassi, G.; Ellison, J.A.; Heinemann, K.; Warnock, R.; /SLAC

    2009-05-07

    Bunch compressors, designed to increase the peak current, can lead to a microbunching instability with detrimental effects on the beam quality. This is a major concern for free electron lasers (FELs), where very bright electron beams are required, i.e. beams with low emittance and energy spread. In this paper, we apply our self-consistent, parallel solver to study the microbunching instability in the first bunch compressor system of FERMI@Elettra. Our basic model is a 2D Vlasov-Maxwell system. We treat the beam evolution through a bunch compressor using our Monte Carlo mean field approximation. We randomly generate N points from an initial phase space density. We then calculate the charge density using a smooth density estimation procedure from statistics, based on Fourier series. The electric and magnetic fields are calculated from the smooth charge/current density using a novel field formula that avoids singularities by using the retarded time as a variable of integration. The points are then moved forward in small time steps using the beam frame equations of motion, with the fields frozen during a time step, and a new charge density is determined using our density estimation procedure. We try to choose N large enough so that the charge density is a good approximation to the density that would be obtained from solving the 2D Vlasov-Maxwell system exactly. We call this method the Monte Carlo Particle (MCP) method.
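
    The density-estimation step can be illustrated in one dimension (a toy sketch; the actual solver estimates a 2D phase-space density, and its truncation and smoothing choices are not given in the abstract): averaging Fourier modes over the sampled points yields a smooth density estimate.

        import numpy as np

        def fourier_density(samples, n_modes=16):
            # Orthogonal-series estimator on [0, 1): the Fourier coefficients of
            # the density are estimated as sample averages of cos/sin modes.
            x = np.asarray(samples) % 1.0
            k = np.arange(1, n_modes + 1)[:, None]
            a = np.cos(2 * np.pi * k * x).mean(axis=1)
            b = np.sin(2 * np.pi * k * x).mean(axis=1)
            def density(t):
                t = np.atleast_1d(t)
                modes = (a[:, None] * np.cos(2 * np.pi * k * t)
                         + b[:, None] * np.sin(2 * np.pi * k * t))
                return 1.0 + 2.0 * modes.sum(axis=0)
            return density

        rho = fourier_density(np.random.default_rng(0).beta(4, 2, 50_000))
        print(rho(np.linspace(0.1, 0.9, 5)))   # smooth estimate at test points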

  18. Kinetic Monte Carlo Simulation of Oxygen and Cation Diffusion in Yttria-Stabilized Zirconia

    NASA Technical Reports Server (NTRS)

    Good, Brian

    2011-01-01

    Yttria-stabilized zirconia (YSZ) is of interest to the aerospace community, notably for its application as a thermal barrier coating for turbine engine components. In such an application, diffusion of both oxygen ions and cations is of concern. Oxygen diffusion can lead to deterioration of a coated part, and often necessitates an environmental barrier coating. Cation diffusion in YSZ is much slower than oxygen diffusion. However, such diffusion is a mechanism by which creep takes place, potentially affecting the mechanical integrity and phase stability of the coating. In other applications, the high oxygen diffusivity of YSZ is useful, and makes the material of interest for use as a solid-state electrolyte in fuel cells. The kinetic Monte Carlo (kMC) method offers a number of advantages compared with the more widely known molecular dynamics simulation method. In particular, kMC is much more efficient for the study of processes, such as diffusion, that involve infrequent events. We describe the results of kinetic Monte Carlo computer simulations of oxygen and cation diffusion in YSZ. Using diffusive energy barriers from ab initio calculations and from the literature, we present results on the temperature dependence of oxygen and cation diffusivity, and on the dependence of the diffusivities on yttria concentration and oxygen sublattice vacancy concentration. We also present results of the effect on diffusivity of oxygen vacancies in the vicinity of the barrier cations that determine the oxygen diffusion energy barriers.
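
    A single step of such a kinetic Monte Carlo loop is compact enough to show here. The sketch below is generic (a residence-time, or n-fold-way, step with invented barrier values, not the authors' code): one hop is chosen with probability proportional to its Arrhenius rate, and the clock advances by an exponentially distributed waiting time.

        import math
        import random

        def kmc_step(rates, rng):
            # Choose one event with probability rate_i / sum(rates), then draw
            # the waiting time from an exponential with mean 1 / sum(rates).
            total = sum(rates)
            r = rng.random() * total
            cumulative = 0.0
            for i, rate in enumerate(rates):
                cumulative += rate
                if r < cumulative:
                    break
            dt = -math.log(1.0 - rng.random()) / total
            return i, dt

        kT = 8.617e-5 * 1400.0   # eV at 1400 K
        # hop rates from an Arrhenius form nu0 * exp(-E/kT); toy barriers in eV
        rates = [1e13 * math.exp(-e / kT) for e in (0.9, 1.1, 1.3)]
        print(kmc_step(rates, random.Random(0)))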

  19. In Vivo Measurement System Calibration Using Magnetic Resonance Imaging and Monte Carlo Computations

    NASA Astrophysics Data System (ADS)

    Mallett, Michael Wesley

    1993-01-01

    A new method for calibrating in vivo measurement systems using magnetic resonance imaging and Monte Carlo computations is presented. The method employs the enhanced three-point Dixon technique for producing pure fat and water images of the human body. This information is used to model the scattering media for transport calculations using the current version of the MCNP code (version 4). Development work utilizing a sample fat/water matrix compared well with laboratory measurements. Calibration of an in vivo measurement system using the BOMAB phantom, as compared with Monte Carlo modeling of this procedure, is presented as verification of the MCNP code. Verification of the integrated MRI-MCNP method is shown for a specially designed phantom composed of fat, water, air, and a bone substitute material (acrylic plastic). Implementation of the MRI-MCNP method is demonstrated for an in vivo measurement system. Failures inherent to the current method are discussed, including the inability of the imaging technique to explicitly discriminate between air and bone tissue, and the presence of mismapping errors within the pure fat/water images. Post processing techniques performed on the three-point Dixon images are demonstrated as a potential means of resolving these problems. A modified version of the MCNP code specifically for handling MRI data is also discussed.

  20. Teaching Integrity

    ERIC Educational Resources Information Center

    Saunders, Sue; Butts, Jennifer Lease

    2011-01-01

    Integrity is one of those essential yet highly ambiguous concepts. For the purpose of this chapter, integrity is defined as that combination of both attributes and actions that makes entities appear to be whole and ethical, as well as consistent. Like the concepts of leadership or wisdom or community or collaboration, integrity is a key element of…

  1. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speedup in comparison to the fastest known classical deterministic algorithms and a quadratic speedup in comparison to classical Monte Carlo methods.

  2. Monte Carlo implementation of Schiff's approximation for estimating radiative properties of homogeneous, simple-shaped and optically soft particles: Application to photosynthetic micro-organisms

    NASA Astrophysics Data System (ADS)

    Charon, Julien; Blanco, Stéphane; Cornet, Jean-François; Dauchet, Jérémi; El Hafi, Mouna; Fournier, Richard; Abboud, Mira Kaissar; Weitz, Sebastian

    2016-03-01

    In the present paper, Schiff's approximation is applied to the study of light scattering by large and optically-soft axisymmetric particles, with special attention to cylindrical and spheroidal photosynthetic micro-organisms. This approximation is similar to the anomalous diffraction approximation but includes a description of phase functions. Resulting formulations for the radiative properties are multidimensional integrals, the numerical resolution of which requires close attention. It is here argued that strong benefits can be expected from a statistical resolution by the Monte Carlo method. But designing such efficient Monte Carlo algorithms requires the development of non-standard algorithmic tricks using careful mathematical analysis of the integral formulations: the codes that we develop (and make available) include an original treatment of the nonlinearity in the differential scattering cross-section (squared modulus of the scattering amplitude) thanks to a double sampling procedure. This approach makes it possible to take advantage of recent methodological advances in the field of Monte Carlo methods, illustrated here by the estimation of sensitivities to parameters. Comparison with reference solutions provided by the T-Matrix method is presented whenever possible. Required geometric calculations are closely similar to those used in standard Monte Carlo codes for geometric optics by the computer-graphics community, i.e. calculation of intersections between rays and surfaces, which opens interesting perspectives for the treatment of particles with complex shapes.
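
    The double-sampling trick for the squared modulus can be shown in miniature (a Python toy; the amplitude model is invented for illustration): for independent copies Z and Z' of the sampled amplitude, E[Z conj(Z')] = |E[Z]|^2, so drawing two independent samples per realization gives an unbiased estimate of the squared modulus where squaring a single sample mean would be biased.

        import numpy as np

        def squared_modulus(sampler, n=200_000, seed=0):
            # |E[Z]|**2 = E[Z * conj(Z')] for independent Z, Z', so one pair of
            # independent draws per realization handles the nonlinearity.
            rng = np.random.default_rng(seed)
            z1 = sampler(n, rng)
            z2 = sampler(n, rng)
            return np.mean(z1 * np.conj(z2)).real

        def toy_amplitude(n, rng):
            # hypothetical stand-in for a sampled scattering amplitude
            return np.exp(1j * rng.normal(0.0, 0.3, n))

        print(squared_modulus(toy_amplitude))   # exact value: exp(-0.09) ~ 0.914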

  3. Lattice gauge theories and Monte Carlo algorithms

    SciTech Connect

    Creutz, M.

    1988-10-01

    Lattice gauge theory has become the primary tool for non-perturbative calculations in quantum field theory. These lectures review some of the foundations of this subject. The first lecture reviews the basic definition of the theory in terms of invariant integrals over group elements on lattice bonds. The lattice represents an ultraviolet cutoff, and renormalization group arguments show how the bare coupling must be varied to obtain the continuum limit. Expansions in the inverse of the coupling constant demonstrate quark confinement in the strong coupling limit. The second lecture turns to numerical simulation, which has become an important approach to calculating hadronic properties. Here I discuss the basic algorithms for obtaining appropriately weighted gauge field configurations. The third lecture turns to algorithms for treating fermionic fields, which still require considerably more computer time than needed for purely bosonic simulations. Some particularly promising recent approaches are based on global accept-reject steps and should display a rather favorable dependence of computer time on the system volume. 34 refs.

  4. Integrated Quantum/Classical Modeling of Hydrogenic Materials

    SciTech Connect

    CURRO,JOHN G.; VAN SWOL,FRANK B.; FYE,RICHARD M.; WANG,Q.; JOHNSON,J.K.; PATRA,C.; YETHIRAJ,A.

    1999-11-01

    Path integral Monte Carlo simulations and calculations were performed on molecular hydrogen liquids. The equation of state, internal energies, and vapor-liquid phase diagrams from simulation were found to be in quantitative agreement with experiments. Analytical calculations were performed on H2 liquids using integral equation methods to study the degree of localization of the hydrogen molecules. Very little self-trapping or localization was found as a function of temperature and density. Good qualitative agreement was found between the integral equation calculations and the quantum Monte Carlo simulations for the radius of gyration of the hydrogen molecules. Path integral simulations were also performed on molecular hydrogen on graphite surfaces, in slit pores, and in carbon nanotubes. Significant quantum effects on the adsorption of hydrogen were observed.

  5. Monte Carlo studies of nuclei and quantum liquid drops

    SciTech Connect

    Pandharipande, V.R.; Pieper, S.C.

    1989-01-01

    The progress in application of variational and Green's function Monte Carlo methods to nuclei is reviewed. The nature of single-particle orbitals in correlated quantum liquid drops is discussed, and it is suggested that the difference between quasi-particle and mean-field orbitals may be of importance in nuclear structure physics. 27 refs., 7 figs., 2 tabs.

  6. A Monte Carlo Study of Six Models of Change.

    ERIC Educational Resources Information Center

    Corder-Bolz, Charles R.

    A Monte Carlo study was conducted to evaluate six models commonly used to assess change. The results revealed specific problems with each. Analysis of covariance and analysis of variance of residualized gain scores appeared to substantially and consistently overestimate the change effects. Multiple factor analysis of variance models utilizing…

  7. A Variational Monte Carlo Approach to Atomic Structure

    ERIC Educational Resources Information Center

    Davis, Stephen L.

    2007-01-01

    The practicality and usefulness of variational Monte Carlo calculations for atomic structure are demonstrated. The method is found to succeed in quantitatively illustrating electron shielding, effective nuclear charge, the l-dependence of the orbital energies, singlet-triplet energy splitting, and ionization energy trends in atomic structure theory.
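
    As a concrete illustration of the method (our own minimal example, not taken from the paper), a variational Monte Carlo run for a hydrogen-like atom with trial function psi = exp(-alpha r) fits in a few lines:

        import numpy as np

        def vmc_hydrogen(alpha=0.8, n_steps=200_000, step=0.5, seed=0):
            # Metropolis sampling of |psi|^2 for psi = exp(-alpha r); the local
            # energy is E_L = -alpha**2/2 + (alpha - 1)/r (atomic units), so
            # alpha = 1 gives the exact ground state with zero variance.
            rng = np.random.default_rng(seed)
            pos = np.array([1.0, 0.0, 0.0])
            energies = []
            for i in range(n_steps):
                trial = pos + rng.uniform(-step, step, 3)
                # |psi(trial)/psi(pos)|^2 = exp(-2 alpha (r_trial - r_pos))
                dr = np.linalg.norm(trial) - np.linalg.norm(pos)
                if rng.random() < np.exp(-2.0 * alpha * dr):
                    pos = trial
                if i >= 1000:                  # discard burn-in
                    r = np.linalg.norm(pos)
                    energies.append(-0.5 * alpha**2 + (alpha - 1.0) / r)
            return np.mean(energies)

        print(vmc_hydrogen(alpha=1.0))   # -> -0.5, the exact ground-state energy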

  8. Present Status and Extensions of the Monte Carlo Performance Benchmark

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark started in 2011, aiming to monitor over the years the ability to perform a full-size Monte Carlo reactor core calculation with detailed power production for each fuel pin with axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common-type compute nodes; on true supercomputers, however, the speedup of parallel calculations continues to increase up to large numbers of processor cores. More experience is needed with calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of problems: evaluating fission source convergence for a system with a high dominance ratio, coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition are discussed.

  9. A Conversation with Native Flutist R. Carlos Nakai.

    ERIC Educational Resources Information Center

    Simonelli, Richard

    1992-01-01

    R. Carlos Nakai discusses his personal development as a musician, his interest in keeping the Native flute tradition alive in a modern way, his ethic of service, the purpose of higher education for Indian students, the relation of education to life, and the role of Indian people in the sciences. (SV)

  10. Monte Carlo Capabilities of the SCALE Code System

    NASA Astrophysics Data System (ADS)

    Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.

    2014-06-01

    SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  11. A Monte Carlo photocurrent/photoemission computer program

    NASA Technical Reports Server (NTRS)

    Chadsey, W. L.; Ragona, C.

    1972-01-01

    A Monte Carlo computer program was developed for the computation of photocurrents and photoemission in gamma (X-ray)-irradiated materials. The program was used for computation of radiation-induced surface currents on space vehicles and the computation of radiation-induced space charge environments within space vehicles.

  12. Applications of the Monte Carlo radiation transport toolkit at LLNL

    NASA Astrophysics Data System (ADS)

    Sale, Kenneth E.; Bergstrom, Paul M., Jr.; Buck, Richard M.; Cullen, Dermot; Fujino, D.; Hartmann-Siantar, Christine

    1999-09-01

    Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards, and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e., money) can be realized. In addition, it is possible to separate out and investigate computationally effects that cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful, more problems can be accurately modeled. Second, as computing power becomes cheaper, Monte Carlo methods become more widely accessible. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented, along with a few examples of applications and future directions.

  14. Monte Carlo capabilities of the SCALE code system

    SciTech Connect

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; Bekar, Kursat B.; Wiarda, Dorothea; Celik, Cihangir; Perfetti, Christopher M.; Ibrahim, Ahmad M.; Hart, S. W. D.; Dunn, Michael E.; Marshall, William J.

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  15. Automated variance reduction for Monte Carlo shielding analyses with MCNP

    NASA Astrophysics Data System (ADS)

    Radulescu, Georgeta

    Variance reduction techniques are employed in Monte Carlo analyses to increase the number of particles in the phase-space region of interest and thereby lower the variance of the statistical estimate. Variance reduction parameters are required to perform such Monte Carlo calculations. It is well known that adjoint solutions, even approximate ones, are excellent biasing functions that can significantly increase the efficiency of a Monte Carlo calculation. In this study, an automated method of generating Monte Carlo variance reduction parameters, and of implementing source energy biasing and the weight-window technique in MCNP shielding calculations, has been developed. The method is based on the approach used in the SAS4 module of the SCALE code system, which derives the biasing parameters from an adjoint one-dimensional discrete ordinates calculation. Unlike SAS4, which determines the radial and axial dose rates of a spent fuel cask in separate calculations, the present method provides energy and spatial biasing parameters for the entire system that optimize the simulation of particle transport towards all external surfaces of a spent fuel cask. The energy and spatial biasing parameters are synthesized from the adjoint fluxes of three one-dimensional discrete ordinates adjoint calculations. Additionally, the present method accommodates multiple source regions, such as the photon sources in light-water reactor spent nuclear fuel assemblies, in one calculation. With this automated method, detailed and accurate dose rate maps for photons, neutrons, and secondary photons outside spent fuel casks or other containers can be efficiently determined with minimal effort.
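
    The last step, converting adjoint (importance) fluxes into weight-window bounds, follows a simple reciprocal rule. The sketch below is a simplified, CADIS-style version under our own normalization convention, not the code described in the abstract:

        import numpy as np

        def weight_windows(adjoint_flux, source_cell, ratio=5.0):
            # Lower bounds inversely proportional to the adjoint flux, scaled so
            # that source particles are born at the center of their window;
            # upper bound = ratio * lower bound.
            lower = 1.0 / np.asarray(adjoint_flux, dtype=float)
            center_at_source = 0.5 * (1.0 + ratio) * lower[source_cell]
            lower /= center_at_source
            return lower, ratio * lower

        phi_adj = np.array([12.0, 4.0, 1.5, 0.4, 0.1])   # toy 1-D adjoint flux
        low, high = weight_windows(phi_adj, source_cell=0)
        print(low, high)   # window bounds grow where importance is low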

  16. Reagents for Electrophilic Amination: A Quantum Monte Carlo Study

    SciTech Connect

    Amador-Bedolla, Carlos; Salomon-Ferrer, Romelia; Lester Jr., William A.; Vazquez-Martinez, Jose A.; Aspuru-Guzik, Alan

    2006-11-01

    Electroamination is an appealing synthetic strategy to construct carbon-nitrogen bonds. We explore the use of the quantum Monte Carlo method and a proposed variant of the electron-pair localization function, the electron-pair localization function density, as a measure of the nucleophilicity of nitrogen lone pairs, as a possible screening procedure for electrophilic reagents.

  17. MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD

    EPA Science Inventory

    A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...

  18. Microbial contamination in poultry chillers estimated by Monte Carlo simulations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The risk of microbial contamination during poultry processing may be reduced by the operating characteristics of the chiller. The performance of air chillers and immersion chillers were compared in terms of pre-chill and post-chill contamination using Monte Carlo simulations. Three parameters were u...

  19. Error estimations and their biases in Monte Carlo eigenvalue calculations

    SciTech Connect

    Ueki, Taro; Mori, Takamasa; Nakagawa, Masayuki

    1997-01-01

    In the Monte Carlo eigenvalue calculation of neutron transport, the eigenvalue is calculated as the average of the multiplication factors from cycles, the cycle k_eff's. Biases in the estimators of the variance and intercycle covariances in Monte Carlo eigenvalue calculations are analyzed. The relations among the real and apparent values of variances and intercycle covariances are derived, where real refers to a true value calculated from independently repeated Monte Carlo runs and apparent refers to the expected value of estimates from a single Monte Carlo run. Iterative methods based on these relations are then proposed to estimate the standard deviation of the eigenvalue. The methods work well for cases in which the ratio of the real to the apparent value of the variance is between 1.4 and 3.1. Even when this ratio exceeds 5, more than 70% of the standard deviation estimates fall within 40% of the true value.
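
    The underlying relation is easy to demonstrate (a generic batch-statistics sketch, ours rather than the paper's iterative scheme): positive intercycle correlation inflates the real variance of the mean above the apparent value computed as if cycles were independent.

        import numpy as np

        def apparent_and_corrected_sem(cycle_keff, max_lag=20):
            # Apparent standard error assumes independent cycles; the corrected
            # value folds in lag-k intercycle autocovariances, which positive
            # correlation otherwise causes the apparent value to underestimate.
            x = np.asarray(cycle_keff, dtype=float)
            n = len(x)
            d = x - x.mean()
            var0 = d @ d / (n - 1)
            apparent = np.sqrt(var0 / n)
            rho_sum = sum((d[:-k] @ d[k:]) / ((n - k) * var0)
                          for k in range(1, max_lag + 1))
            corrected = apparent * np.sqrt(max(1.0 + 2.0 * rho_sum, 0.0))
            return apparent, corrected

        # AR(1) toy cycles with positive intercycle correlation (rho = 0.8)
        rng = np.random.default_rng(0)
        keff = [1.0]
        for _ in range(1999):
            keff.append(1.0 + 0.8 * (keff[-1] - 1.0) + rng.normal(0.0, 1e-3))
        print(apparent_and_corrected_sem(np.array(keff)))  # corrected/apparent ~ 3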

  20. Rocket plume radiation base heating by reverse Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Everson, John; Nelson, H. F.

    1993-10-01

    A reverse Monte Carlo radiative transfer code is developed to predict rocket plume base heating. It is more computationally efficient than the forward Monte Carlo method, because only the radiation that strikes the receiving point is considered. The method easily handles both gas and particle emission and particle scattering. Band models are used for the molecular emission spectra, and the Henyey-Greenstein phase function is used for the scattering. Reverse Monte Carlo predictions are presented for (1) a gas-only model of the Space Shuttle main engine plume; (2) a pure-scattering plume with the radiation emitted by a hot disk at the nozzle exit; (3) a nonuniform-temperature, scattering, emitting, and absorbing plume; and (4) a typical solid rocket motor plume. The reverse Monte Carlo method is shown to give good agreement with previous predictions. Typical solid rocket plume results show that (1) CO2 radiation is emitted from near the edge of the plume; (2) H2O gas and Al2O3 particles emit radiation mainly from the center of the plume; and (3) Al2O3 particles emit considerably more radiation than the gases over the 400-17,000 cm^-1 spectral interval.
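
    The bookkeeping at the heart of a reverse calculation can be sketched for a single receiver ray (a deterministic line-of-sight building block under invented names, not the authors' band-model code): march outward from the receiving point and attenuate each segment's emission by the transmittance accumulated so far, so effort is spent only on radiation that actually reaches the receiver.

        import numpy as np

        def reverse_ray_radiance(kappa, source, ds):
            # kappa: absorption coefficient per segment; source: emitted
            # radiance per segment; segments ordered from the receiver outward.
            tau = 0.0
            radiance = 0.0
            for k, s in zip(kappa, source):
                radiance += np.exp(-tau) * (1.0 - np.exp(-k * ds)) * s
                tau += k * ds
            return radiance

        # toy plume: hot emitting core behind cooler absorbing gas
        print(reverse_ray_radiance([0.2, 0.5, 1.0], [0.1, 0.6, 1.0], ds=0.3))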

  1. Monte Carlo study of the atmospheric spread function

    NASA Technical Reports Server (NTRS)

    Pearce, W. A.

    1986-01-01

    Monte Carlo radiative transfer simulations are used to study the atmospheric spread function appropriate to satellite-based sensing of the earth's surface. The parameters which are explored include the nadir angle of view, the size distribution of the atmospheric aerosol, and the aerosol vertical profile.

  2. SABRINA: an interactive solid geometry modeling program for Monte Carlo

    SciTech Connect

    West, J.T.

    1985-01-01

    SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body geometry, or surface geometry models, and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo Analysis.

  3. A Monte Carlo Approach for Adaptive Testing with Content Constraints

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander

    2008-01-01

    This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…

  4. Monte Carlo: in the beginning and some great expectations

    SciTech Connect

    Metropolis, N.

    1985-01-01

    The central theme will be the historical setting and origins of the Monte Carlo Method. The scene was the post-war Los Alamos Scientific Laboratory. There was an inevitability about the Monte Carlo Event: the ENIAC had recently enjoyed its meteoric rise (on a classified Los Alamos problem); Stan Ulam had returned to Los Alamos; John von Neumann was a frequent visitor. Techniques, algorithms, and applications developed rapidly at Los Alamos. Soon, the fascination of the Method reached wider horizons. The first paper was submitted for publication in the spring of 1949. In the summer of 1949, the first open conference was held at the University of California at Los Angeles. Of some interest, perhaps, is an account of Fermi's earlier, independent application in neutron moderation studies while at the University of Rome. The quantum leap expected with the advent of massively parallel processors will provide stimuli for very ambitious applications of the Monte Carlo Method in disciplines ranging from field theories to cosmology, including more realistic models in the neurosciences. A structure of multi-instruction sets for parallel processing is ideally suited for the Monte Carlo approach. One may even hope for a modest hardening of the soft sciences.

  5. Monte Carlo Approach for Reliability Estimations in Generalizability Studies.

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.

    A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…

  6. Searching for Patterns: A Conversation with Carlos Cortes.

    ERIC Educational Resources Information Center

    Carnes, Jim

    1999-01-01

    Reports results of a telephone interview with history professor and social analyst Carlos Cortes, whose current main area of interest is the multicultural education that occurs outside the classroom, with special interest in the role of the media. Explores the concept of the societal curriculum. (SLD)

  7. Bayesian Monte Carlo Method for Nuclear Data Evaluation

    SciTech Connect

    Koning, A.J.

    2015-01-15

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using TALYS. The result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an experiment-based weight.
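
    A sketch of the weighting step as we read it (Python, with hypothetical array names; not the published code): each random file is weighted by exp(-chi2/2) against the experimental points, and weighted moments then give an experiment-informed central value.

        import numpy as np

        def bmc_weights(predictions, exp_values, exp_sigmas):
            # predictions: (n_files, n_points) observables from the random files
            chi2 = (((predictions - exp_values) / exp_sigmas) ** 2).sum(axis=1)
            w = np.exp(-0.5 * (chi2 - chi2.min()))   # shift chi2 for stability
            return w / w.sum()

        pred = np.random.default_rng(0).normal(1.0, 0.1, (100, 5))  # toy files
        w = bmc_weights(pred, exp_values=np.ones(5), exp_sigmas=0.05 * np.ones(5))
        print(w @ pred)   # experiment-weighted central values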

  8. Exploring Mass Perception with Markov Chain Monte Carlo

    ERIC Educational Resources Information Center

    Cohen, Andrew L.; Ross, Michael G.

    2009-01-01

    Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…

  9. Monte Carlo radiation transport: A revolution in science

    SciTech Connect

    Hendricks, J.

    1993-04-01

    When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science.

  10. APS undulator and wiggler sources: Monte-Carlo simulation

    SciTech Connect

    Xu, S.L.; Lai, B.; Viccaro, P.J.

    1992-02-01

    Standard insertion devices will be provided to each sector by the Advanced Photon Source. It is important to define the radiation characteristics of these general-purpose devices. In this document, results of Monte-Carlo simulations are presented. These results, based on the SHADOW program, cover the APS Undulator A (UA), Wiggler A (WA), and Wiggler B (WB).

  11. Recent developments in Monte-Carlo Event Generators

    NASA Astrophysics Data System (ADS)

    Schönherr, Marek

    2016-07-01

    With Run II of the LHC having started, the need for high-precision theory predictions whose uncertainties match those of the data to be taken has necessitated a range of new developments in Monte-Carlo event generators. This talk gives an overview of progress in the field in recent years and of what can and cannot be expected from these newly written tools.

  12. Does standard Monte Carlo give justice to instantons?

    NASA Astrophysics Data System (ADS)

    Fucito, F.; Solomon, S.

    1984-01-01

    The results of the standard local Monte Carlo are changed by offering instantons as candidates in the Metropolis procedure. We also define an O(3) topological charge with no contribution from planar dislocations. The RG behavior is still not recovered.

  13. Monte Carlo event generators for hadron-hadron collisions

    SciTech Connect

    Knowles, I.G.; Protopopescu, S.D.

    1993-06-01

    A brief review of Monte Carlo event generators for simulating hadron-hadron collisions is presented. Particular emphasis is placed on comparisons of the approaches used to describe physics elements and identifying their relative merits and weaknesses. This review summarizes a more detailed report.

  14. Observations on variational and projector Monte Carlo methods

    SciTech Connect

    Umrigar, C. J.

    2015-10-28

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.

  15. Ordinal Hypothesis in ANOVA Designs: A Monte Carlo Study.

    ERIC Educational Resources Information Center

    Braver, Sanford L.; Sheets, Virgil L.

    Numerous designs using analysis of variance (ANOVA) to test ordinal hypotheses were assessed using a Monte Carlo simulation. Each statistic was computed on each of over 10,000 random samples drawn from a variety of population conditions. The number of groups, population variance, and patterns of population means were varied. In the non-null…

  16. Monte Carlo Simulations of Light Propagation in Apples

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports on the investigation of light propagation in fresh apples in the visible and short-wave near-infrared region using Monte Carlo simulations. Optical properties of ‘Golden Delicious’ apples were determined over the spectral range of 500-1100 nm using a hyperspectral imaging method, ...

  17. The Use of Monte Carlo Techniques to Teach Probability.

    ERIC Educational Resources Information Center

    Newell, G. J.; MacFarlane, J. D.

    1985-01-01

    Presents sports-oriented examples (cricket and football) in which Monte Carlo methods are used on microcomputers to teach probability concepts. Both examples include computer programs (with listings) which utilize the microcomputer's random number generator. Instructional strategies, with further challenges to help students understand the role of…
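
    In the same classroom spirit (a toy sketch; the cricket scenario and numbers are invented for illustration), a probability is estimated as a relative frequency over repeated simulated trials driven by the random number generator:

        import random

        def estimate_probability(trial, n=100_000, seed=1):
            # Relative frequency of the event over n simulated trials.
            rng = random.Random(seed)
            return sum(trial(rng) for _ in range(n)) / n

        # toy cricket question: chance a batter survives a 10-ball spell when
        # each delivery carries an independent 5% chance of dismissal
        survives = lambda rng: all(rng.random() > 0.05 for _ in range(10))
        print(estimate_probability(survives))   # ~ 0.95**10 = 0.599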

  18. Topics in structural dynamics: Nonlinear unsteady transonic flows and Monte Carlo methods in acoustics

    NASA Technical Reports Server (NTRS)

    Haviland, J. K.

    1974-01-01

    The results of two unrelated studies are reported. The first was an investigation of the formulation of the equations for non-uniform unsteady flows, by perturbation of an irrotational flow, to obtain the linear Green's equation. The resulting integral equation was found to contain a kernel which could be expressed as the solution of the adjoint flow equation: a linear equation for small perturbations, with non-constant coefficients determined by the steady flow conditions. It is believed that the non-uniform flow effects may prove important in transonic flutter, and that in such cases the use of doublet-type solutions of the wave equation would prove erroneous. The second task covered an initial investigation into the use of the Monte Carlo method for the solution of acoustical field problems. Computed results are given for a rectangular room problem and for a problem involving a circular duct with a source located at the closed end.

  19. GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA

    NASA Astrophysics Data System (ADS)

    Spiechowicz, J.; Kostur, M.; Machura, L.

    2015-06-01

    This work presents an updated and extended guide to methods for properly accelerating the Monte Carlo integration of stochastic differential equations on commonly available NVIDIA Graphics Processing Units using the CUDA programming environment. We outline the general aspects of scientific computing on graphics cards and demonstrate them with two models of the well-known phenomenon of noise-induced transport of Brownian motors in periodic structures. As sources of fluctuations in the considered systems we selected the three most commonly occurring noises: Gaussian white noise, white Poissonian noise, and the dichotomous process, also known as a random telegraph signal. A detailed discussion of various aspects of the applied numerical schemes is also presented. The measured speedup can be of the astonishing order of about 3000 when compared to a typical CPU. This number significantly expands the range of problems solvable by stochastic simulations, even allowing interactive research in some cases.
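
    The numerical scheme itself is an Euler-Maruyama loop repeated over a large ensemble, which is exactly the structure that maps onto one-thread-per-trajectory CUDA kernels. Below is a plain NumPy sketch of the Gaussian-white-noise case with an invented parameter set (a sketch of the model class, not the paper's CUDA code):

        import numpy as np

        def motor_drift_velocity(n_paths=1024, n_steps=20_000, dt=1e-3,
                                 a=1.0, omega=4.0, bias=0.1, kT=0.1, seed=0):
            # Overdamped motor in the periodic potential V(x) = sin(2 pi x)/(2 pi)
            # with ac drive a*cos(omega t) and constant bias (friction set to 1);
            # Euler-Maruyama: dx = drift*dt + sqrt(2 kT dt) N(0,1), one column
            # per trajectory, so all paths advance in lockstep.
            rng = np.random.default_rng(seed)
            x = np.zeros(n_paths)
            noise_amp = np.sqrt(2.0 * kT * dt)
            t = 0.0
            for _ in range(n_steps):
                drift = -np.cos(2 * np.pi * x) + a * np.cos(omega * t) + bias
                x += drift * dt + noise_amp * rng.standard_normal(n_paths)
                t += dt
            return x.mean() / t   # ensemble-average drift velocity

        print(motor_drift_velocity())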

  20. Monte Carlo methods for localization of cones given multielectrode retinal ganglion cell recordings.

    PubMed

    Sadeghi, K; Gauthier, J L; Field, G D; Greschner, M; Agne, M; Chichilnisky, E J; Paninski, L

    2013-01-01

    It has recently become possible to identify cone photoreceptors in primate retina from multi-electrode recordings of ganglion cell spiking driven by visual stimuli of sufficiently high spatial resolution. In this paper we present a statistical approach to the problem of identifying the number, locations, and color types of the cones observed in this type of experiment. We develop an adaptive Markov Chain Monte Carlo (MCMC) method that explores the space of cone configurations, using a Linear-Nonlinear-Poisson (LNP) encoding model of ganglion cell spiking output, while analytically integrating out the functional weights between cones and ganglion cells. This method provides information about our posterior certainty about the inferred cone properties, and additionally leads to improvements in both the speed and quality of the inferred cone maps, compared to earlier "greedy" computational approaches. PMID:23194406