Science.gov

Sample records for quasi-monte carlo integration

  1. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Lazopoulos, Achilleas

    2006-07-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
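    The gap described above can be made concrete in a few lines of code. The sketch below is illustrative only (it is not the ensemble-based estimator the record proposes): it contrasts the standard iid-based Monte Carlo error estimate with the actual error of a quasi-Monte Carlo estimate built from a Halton sequence, for a smooth 2-D integrand. The integrand, sample size, and seed are arbitrary choices.

```python
import math
import random

def van_der_corput(n, base):
    """Radical-inverse of n in the given base, in [0, 1)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

def halton(i):
    """2-D Halton point (bases 2 and 3)."""
    return van_der_corput(i, 2), van_der_corput(i, 3)

f = lambda x, y: math.exp(-(x + y))          # smooth test integrand
true_value = (1.0 - math.exp(-1.0)) ** 2     # exact integral over [0,1]^2

N = 4096
rng = random.Random(42)

# Plain Monte Carlo with the standard iid error estimate s / sqrt(N).
vals = [f(rng.random(), rng.random()) for _ in range(N)]
mc_mean = sum(vals) / N
s2 = sum((v - mc_mean) ** 2 for v in vals) / (N - 1)
mc_stderr = math.sqrt(s2 / N)

# Quasi-Monte Carlo with the same number of points: the actual error is
# typically far below the iid-based estimate, which "fails to account
# for the error improvement" exactly as the abstract says.
qmc_mean = sum(f(*halton(i)) for i in range(1, N + 1)) / N

print(f"MC error estimate : {mc_stderr:.2e}")
print(f"QMC actual error  : {abs(qmc_mean - true_value):.2e}")
```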

  2. Precision measurement of the top quark mass in the lepton + jets channel using a matrix element method with Quasi-Monte Carlo integration

    SciTech Connect

    Lujan, Paul Joseph

    2009-12-01

    This thesis presents a measurement of the top quark mass obtained from $p\bar{p}$ collisions at √s = 1.96 TeV at the Fermilab Tevatron using the CDF II detector. The measurement uses a matrix element integration method to calculate a $t\bar{t}$ likelihood, employing Quasi-Monte Carlo integration, which enables us to take into account effects due to finite detector angular resolution and quark mass effects. We calculate a $t\bar{t}$ likelihood as a 2-D function of the top pole mass mt and ΔJES, where ΔJES parameterizes the uncertainty in our knowledge of the jet energy scale; it is a shift applied to all jet energies in units of the jet-dependent systematic error. By introducing ΔJES into the likelihood, we can use the information contained in W boson decays to constrain ΔJES and reduce the error due to this uncertainty. We use a neural network discriminant to identify events likely to be background, and apply a cut on the peak value of individual event likelihoods to reduce the effect of badly reconstructed events. This measurement uses a total of 4.3 fb⁻¹ of integrated luminosity, requiring events with a lepton, large missing transverse energy, and exactly four high-energy jets in the pseudorapidity range |η| < 2.0, of which at least one must be tagged as coming from a b quark. In total, we observe 738 events before and 630 events after applying the likelihood cut, and measure mt = 172.6 ± 0.9 (stat.) ± 0.7 (JES) ± 1.1 (syst.) GeV/c², or mt = 172.6 ± 1.6 (tot.) GeV/c².


  3. Quasi-Monte Carlo methods for lattice systems: A first look

    NASA Astrophysics Data System (ADS)

    Jansen, K.; Leovey, H.; Ammon, A.; Griewank, A.; Müller-Preussker, M.

    2014-03-01

    We investigate the applicability of quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this behavior for certain problems to N^(-1), or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling. Catalogue identifier: AERJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence version 3 No. of lines in distributed program, including test data, etc.: 67759 No. of bytes in distributed program, including test data, etc.: 2165365 Distribution format: tar.gz Programming language: C and C++. Computer: PC. Operating system: Tested on GNU/Linux, should be portable to other operating systems with minimal effort. Has the code been vectorized or parallelized?: No RAM: The memory usage scales directly with the number of samples and dimensions: Bytes used = “number of samples” × “number of dimensions” × 8 Bytes (double precision). Classification: 4.13, 11.5, 23. External routines: FFTW 3 library (http://www.fftw.org) Nature of problem: Certain physical models formulated as a quantum field theory through the Feynman path integral, such as quantum chromodynamics, require a non-perturbative treatment of the path integral. The only known approach that achieves this is the lattice regularization. In this formulation the path integral is discretized to a finite, but very high dimensional integral. So far only Monte
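    The error-scaling improvement discussed above (from the Monte Carlo rate N^(-1/2) toward N^(-1)) can be demonstrated on a toy 1-D integral. The sketch below is not the lattice code of the record; it simply measures the quasi-Monte Carlo error of a van der Corput point set at two sample sizes, for an integrand and sizes chosen arbitrarily:

```python
import math

def van_der_corput(n, base=2):
    """Radical-inverse of n in the given base, in [0, 1)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

f = lambda x: math.exp(-x)            # smooth 1-D test integrand
true_value = 1.0 - math.exp(-1.0)     # exact integral over [0, 1]

def qmc_error(N):
    est = sum(f(van_der_corput(i)) for i in range(1, N + 1)) / N
    return abs(est - true_value)

err_256, err_4096 = qmc_error(256), qmc_error(4096)
print(f"QMC error, N=256 : {err_256:.2e}")
print(f"QMC error, N=4096: {err_4096:.2e}")
# A 16-fold increase in N cuts the error roughly 16-fold (close to N^-1),
# whereas plain Monte Carlo would only gain a factor of 4 (N^-1/2).
```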

  4. The accuracy of prostate volume measurement from ultrasound images: a quasi-Monte Carlo simulation study using magnetic resonance imaging.

    PubMed

    Azulay, David-Olivier D; Murphy, Philip; Graham, Jim

    2013-01-01

    Prostate volume is an important parameter to guide management of patients with benign prostatic hyperplasia (BPH) and to deliver clinical trial endpoints. Generally, simple 2D ultrasound (US) approaches are favoured despite the potential for greater accuracy afforded by magnetic resonance imaging (MRI) or complex US procedures. In this study, different approaches to estimate prostate size are evaluated with a simulation to select multiple organ cross-sections and diameters from 22 MRI-defined prostate shapes. A quasi-Monte Carlo (qMC) approach is used to simulate multiple probe positions and angles within prescribed limits resulting in a range of dimensions. The basic ellipsoid calculation which uses two scanning planes compares well to the MRI volume across the range of prostate shapes and sizes (R=0.992). However, using an appropriate linear regression model, accurate volume estimates can be made using prostate diameters calculated from a single scanning plane.
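    For reference, the "basic ellipsoid calculation" from two scanning planes mentioned above is conventionally V = π/6 × length × width × height. A minimal sketch, with hypothetical diameters (the values below are not from the study):

```python
import math

def ellipsoid_volume(length_cm, width_cm, height_cm):
    """Standard prolate-ellipsoid approximation: V = pi/6 * L * W * H."""
    return math.pi / 6.0 * length_cm * width_cm * height_cm

# Hypothetical prostate diameters (cm) measured from two scanning planes.
v = ellipsoid_volume(5.0, 4.0, 4.5)
print(f"estimated prostate volume: {v:.1f} mL")
```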

  5. A quasi-Monte-Carlo comparison of parametric and semiparametric regression methods for heavy-tailed and non-normal data: an application to healthcare costs.

    PubMed

    Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel

    2016-10-01

    We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.

  6. Monte Carlo methods for multidimensional integration for European option pricing

    NASA Astrophysics Data System (ADS)

    Todorov, V.; Dimov, I. T.

    2016-10-01

    In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; then the average of independent samples of this random variable is used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk neutral valuation formula. Then, with an appropriate change of variables, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule with one of the best low-discrepancy sequences, that of Sobol, for numerical integration. Quasi-Monte Carlo methods are compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is of interest to know which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
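    The expectation-based formulation above can be sketched for the simplest case, a European call under Black-Scholes dynamics, where a closed form exists to check against. This is not the lattice-rule/Sobol implementation of the record; it uses a crude Monte Carlo estimate and, as a stand-in low-discrepancy rule, a van der Corput sequence mapped through the normal inverse CDF. All parameter values are arbitrary:

```python
import math
import random
from statistics import NormalDist

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes closed form for a European call (the benchmark)."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = NormalDist().cdf
    return S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

def van_der_corput(n, base=2):
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
discount = math.exp(-r * T)
drift = (r - 0.5 * sigma**2) * T
vol = sigma * math.sqrt(T)

def payoff(z):
    """Discounted call payoff for a standard normal draw z."""
    ST = S0 * math.exp(drift + vol * z)
    return discount * max(ST - K, 0.0)

exact = bs_call(S0, K, r, sigma, T)

# Crude Monte Carlo.
rng = random.Random(7)
N = 100_000
mc = sum(payoff(rng.gauss(0.0, 1.0)) for _ in range(N)) / N

# Quasi-Monte Carlo: low-discrepancy points mapped to normal quantiles.
inv = NormalDist().inv_cdf
M = 4096
qmc = sum(payoff(inv(van_der_corput(i))) for i in range(1, M + 1)) / M

print(f"Black-Scholes: {exact:.4f}  MC: {mc:.4f}  QMC: {qmc:.4f}")
```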

  7. Path Integral Monte Carlo Methods for Fermions

    NASA Astrophysics Data System (ADS)

    Ethan, Ethan; Dubois, Jonathan; Ceperley, David

    2014-03-01

    In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with discussion concerning extension to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10- ERD-058, and the Lawrence Scholar program.

  8. A Classroom Note on Monte Carlo Integration.

    ERIC Educational Resources Information Center

    Kolpas, Sid

    1998-01-01

    The Monte Carlo method provides approximate solutions to a variety of mathematical problems by performing random sampling simulations with a computer. Presents a program written in Quick BASIC simulating the steps of the Monte Carlo method. (ASK)
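    The classic classroom example simulated by such programs is hit-or-miss integration. A minimal sketch in Python rather than Quick BASIC (the target area and sample count are arbitrary choices, not taken from the note):

```python
import random

# Hit-or-miss estimate of the area under y = x^2 on [0, 1] (exact: 1/3).
# Throw random darts at the unit square and count those landing under
# the curve; the hit fraction estimates the area.
rng = random.Random(1)
N = 100_000
hits = 0
for _ in range(N):
    x, y = rng.random(), rng.random()
    if y <= x * x:
        hits += 1
area = hits / N
print(f"estimated area: {area:.4f} (exact 1/3 = 0.3333)")
```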

  9. A Primer in Monte Carlo Integration Using Mathcad

    ERIC Educational Resources Information Center

    Hoyer, Chad E.; Kegerreis, Jeb S.

    2013-01-01

    The essentials of Monte Carlo integration are presented for use in an upper-level physical chemistry setting. A Mathcad document that aids in the dissemination and utilization of this information is described and is available in the Supporting Information. A brief outline of Monte Carlo integration is given, along with ideas and pedagogy for…

  10. The Monte Carlo calculation of integral radiation dose in xeromammography.

    PubMed

    Dance, D R

    1980-01-01

    A Monte Carlo computer program has been developed for the computation of integral radiation dose to the breast in xeromammography. The results are given in terms of the integral dose per unit area of the breast per unit incident exposure. The calculations have been made for monoenergetic incident photons and the results integrated over a variety of X-ray spectra from both tungsten and molybdenum targets. This range incorporates qualities used in conventional and xeromammography. The program includes the selenium plate used in xeroradiography; the energy absorbed in this detector has also been investigated. The latter calculations have been used to predict relative values of exposure and of integral dose to the breast for xeromammograms taken at various radiation qualities. The results have been applied to recent work on the reduction of patient exposure in xeromammography by the addition of aluminium filters to the X-ray beam.

  11. Path integral Monte Carlo on a lattice. II. Bound states

    NASA Astrophysics Data System (ADS)

    O'Callaghan, Mark; Miller, Bruce N.

    2016-07-01

    The equilibrium properties of a single quantum particle (qp) interacting with a classical gas are investigated for a wide range of temperatures that explore the system's behavior in the classical as well as the quantum regime. Both the qp and the atoms are restricted to sites on a one-dimensional lattice. A path integral formalism developed within the context of the canonical ensemble is utilized, where the qp is represented by a closed, variable-step random walk on the lattice. Monte Carlo methods are employed to determine the system's properties. To test the usefulness of the path integral formalism, the Metropolis algorithm is employed to determine the equilibrium properties of the qp in the context of a square well potential, forcing the qp to occupy bound states. We consider a one-dimensional square well potential where every site on the lattice is occupied by an atom with an on-site potential, except for a contiguous set of sites of various lengths centered at the middle of the lattice. Comparisons of the potential energy, the energy fluctuations, and the correlation function are made between the results of the Monte Carlo simulations and the numerical calculations.

  12. Monte Carlo Simulations of Background Spectra in Integral Imager Detectors

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.; Dietz, K. L.; Ramsey, B. D.; Weisskopf, M. C.

    1998-01-01

    Predictions of the expected gamma-ray backgrounds in the ISGRI (CdTe) and PICsIT (CsI) detectors on INTEGRAL due to cosmic-ray interactions and the diffuse gamma-ray background have been made using a coupled set of Monte Carlo radiation transport codes (HETC, FLUKA, EGS4, and MORSE) and a detailed, 3-D mass model of the spacecraft and detector assemblies. The simulations include both the prompt background component from induced hadronic and electromagnetic cascades and the delayed component due to emissions from induced radioactivity. Background spectra have been obtained with and without the use of active (BGO) shielding and charged particle rejection to evaluate the effectiveness of anticoincidence counting on background rejection.

  13. A Preliminary Study of In-House Monte Carlo Simulations: An Integrated Monte Carlo Verification System

    SciTech Connect

    Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hidek; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki

    2009-10-01

    Purpose: To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. Methods and Materials: The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. Results: The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display radiotherapy treatment plans created by the MC method and by various treatment planning systems, in formats such as RTOG and DICOM-RT. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Conclusions: Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.

  14. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo algorithm for numerical high-dimensional integration over tera-scale data points. The implemented algorithm uses Sobol's quasi-random sequences to generate the samples. Sobol's sequence was used to avoid clustering effects in the generated samples and to produce low-discrepancy samples which cover the entire integration domain. The performance of the algorithm was tested, and the results obtained demonstrate the scalability and accuracy of the implemented algorithm. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI and OpenMP programming model to improve the performance of the algorithm. If the mixed model is used, attention should be paid to the scalability and accuracy.
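    The parallelization strategy for quasi-Monte Carlo is naturally embarrassingly parallel: workers evaluate disjoint slices of the low-discrepancy index range and the partial sums are combined. The sketch below illustrates only that idea; it is not the report's MPI/OpenMP implementation, and it uses a Halton sequence with Python threads as stand-ins for Sobol's sequence and message passing:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def van_der_corput(n, base):
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

def partial_sum(bounds):
    lo, hi = bounds
    # Each worker owns a disjoint slice of the Halton index range, so the
    # union of all slices reproduces the serial point set exactly.
    return sum(math.exp(-(van_der_corput(i, 2) + van_der_corput(i, 3)))
               for i in range(lo, hi))

N, workers = 1 << 14, 4
step = N // workers
ranges = [(1 + k * step, 1 + (k + 1) * step) for k in range(workers)]
with ThreadPoolExecutor(max_workers=workers) as pool:
    estimate = sum(pool.map(partial_sum, ranges)) / N

true_value = (1.0 - math.exp(-1.0)) ** 2
print(f"parallel QMC estimate: {estimate:.6f} (exact {true_value:.6f})")
```

    Because the point set is deterministic, the parallel result is bit-for-bit reproducible regardless of worker scheduling, unlike pseudo-random Monte Carlo with per-worker generators.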

  15. A Study of Electron Transport in Small Semiconductor Devices: The Monte Carlo Trajectory Integral Method

    NASA Astrophysics Data System (ADS)

    Socha, John Bronn

    The first part of this thesis contains a historical perspective on the last five years of research in hot-electron transport in semiconductors. This perspective serves two purposes. First, it provides a motivation for the second part of this thesis, which deals with calculating the full velocity distribution function of hot electrons. And second, it points out many of the unsolved theoretical problems that might be solved with the techniques developed in the second part. The second part of this thesis contains a derivation of a new method for calculating velocity distribution functions. This method, the Monte Carlo trajectory integral, is well suited for calculating the time evolution of a distribution function in the presence of complicated scattering mechanisms, like scattering with acoustic and optical phonons, inter-valley scattering, Bragg reflections, and even electron-electron scattering. This method uses many of the techniques developed for Monte Carlo transport calculations, but unlike other Monte Carlo methods, the Monte Carlo trajectory integral has very good control over the variance of the calculated distribution function across the entire distribution function. Since the Monte Carlo trajectory integral only needs information on the distribution function at previous times, it is well suited to electron-electron scattering where the distribution function must be known before the scattering rate can be calculated. Finally, this thesis ends with an application of the Monte Carlo trajectory integral to electron transport in SiO2 in the presence of electric fields up to 12 MV/cm, and it includes a number of suggestions for applying the Monte Carlo trajectory integral to other experiments in both SiO2 and GaAs. The Monte Carlo trajectory integral should be of special interest when super-computers are more common since then there will be the computing resources to include electron-electron scattering.
The high-field distribution functions calculated when

  16. The integration of improved Monte Carlo Compton scattering algorithms into the Integrated TIGER Series.

    SciTech Connect

    Quirk, Thomas J., IV

    2004-08-01

    The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening that the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
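    For orientation, the baseline free-electron model that the record improves upon can be sampled with a few lines of rejection sampling. The sketch below draws scattering-angle cosines from the Klein-Nishina angular distribution only; it includes none of the incoherent-scattering-function or impulse-approximation corrections the abstract describes, and the photon energy and sample count are arbitrary:

```python
import random

def kn_sample_cos_theta(k, rng):
    """Rejection-sample the Compton scattering-angle cosine from the
    free-electron Klein-Nishina distribution; k = E_photon / (m_e c^2).
    dsigma/dOmega is proportional to r^2 (1/r + r - sin^2 theta),
    where r = E'/E = 1/(1 + k(1 - cos theta))."""
    while True:
        mu = rng.uniform(-1.0, 1.0)
        r = 1.0 / (1.0 + k * (1.0 - mu))
        f = r * r * (1.0 / r + r - (1.0 - mu * mu))
        # f <= 2 everywhere (its value at mu = 1), so 2 is a valid
        # rejection envelope for uniformly proposed mu.
        if 2.0 * rng.random() <= f:
            return mu

rng = random.Random(3)
k = 1.0   # photon energy ~511 keV
samples = [kn_sample_cos_theta(k, rng) for _ in range(20_000)]
mean_mu = sum(samples) / len(samples)
print(f"mean cos(theta) at k=1: {mean_mu:.3f}")
```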

  17. Monte Carlo Method for Solving the Fredholm Integral Equations of the Second Kind

    NASA Astrophysics Data System (ADS)

    ZhiMin, Hong; ZaiZai, Yan; JianRui, Chen

    2012-12-01

    This article is concerned with a numerical algorithm for computing approximate solutions of Fredholm integral equations of the second kind by random sampling. We use Simpson's rule to discretize the integral equation, which yields a linear system. The Monte Carlo method, based on the simulation of a finite discrete Markov chain, is employed to solve this linear system. Numerical examples show the efficiency of the method and indicate that it is an effective alternative.
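    The two-stage approach described above can be sketched end to end: discretize with Simpson's rule, then estimate a solution component with random walks on the resulting linear system (a von Neumann-Ulam scheme with uniform transitions, which may differ in detail from the article's chain). The test kernel and right-hand side below are my own choice, picked so the exact solution phi(x) = 1.2x is known:

```python
import random

# Test problem: phi(x) = x + (1/2) * int_0^1 x*t*phi(t) dt, exact phi(x) = 1.2x.
lam = 0.5
K = lambda x, t: x * t
f = lambda x: x

m = 10                      # number of Simpson panels (must be even)
h = 1.0 / m
nodes = [j * h for j in range(m + 1)]
w = [(h / 3) * (1 if j in (0, m) else 4 if j % 2 else 2) for j in range(m + 1)]

# Discretization: phi_i = b_i + sum_j A[i][j] phi_j,
# with A[i][j] = lam * w_j * K(x_i, t_j) and b_i = f(x_i).
A = [[lam * w[j] * K(xi, nodes[j]) for j in range(m + 1)] for xi in nodes]
b = [f(xi) for xi in nodes]

def mc_solve(i, walks, steps, rng):
    """Estimate component i of phi = A phi + b by summing the Neumann
    series along random walks with uniform transitions; the factor
    n * A[j][nxt] is the importance weight for the uniform proposal."""
    n = len(b)
    total = 0.0
    for _ in range(walks):
        j, weight, acc = i, 1.0, b[i]
        for _ in range(steps):
            nxt = rng.randrange(n)
            weight *= n * A[j][nxt]
            j = nxt
            acc += weight * b[j]
        total += acc
    return total / walks

rng = random.Random(11)
est = mc_solve(5, walks=20_000, steps=12, rng=rng)   # node 5 is x = 0.5
print(f"phi(0.5) ~ {est:.4f} (exact 1.2 * 0.5 = 0.6)")
```

    Convergence of the walk estimator relies on the Neumann series converging; here the iteration matrix has spectral radius well below 1, so a short truncation depth suffices.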

  18. Color path-integral Monte Carlo simulations of quark-gluon plasma

    NASA Astrophysics Data System (ADS)

    Filinov, V. S.; Ivanov, Yu. B.; Bonitz, M.; Fortov, V. E.; Levashov, P. R.

    2012-02-01

    Thermodynamic properties of a strongly coupled quark-gluon plasma (QGP) of constituent quasiparticles are studied by color path-integral Monte Carlo (CPIMC) simulations. For our simulations we represent the QGP partition function in the form of a color path integral with a new relativistic measure in place of the Gaussian one used in Feynman and Wiener path integrals. For the integration over the color variables we have also developed a procedure for sampling color variables according to the SU(3) group Haar measure. It is shown that this method is able to reproduce the available lattice quantum chromodynamics (QCD) data.

  19. On the ground state calculation of a many-body system using a self-consistent basis and quasi-Monte Carlo: an application to water hexamer.

    PubMed

    Georgescu, Ionuţ; Jitomirskaya, Svetlana; Mandelshtam, Vladimir A

    2013-11-28

    Given a quantum many-body system, the Self-Consistent Phonons (SCP) method provides an optimal harmonic approximation by minimizing the free energy. In particular, the SCP estimate for the vibrational ground state (zero temperature) appears to be surprisingly accurate. We explore the possibility of going beyond the SCP approximation by considering the system Hamiltonian evaluated in the harmonic eigenbasis of the SCP Hamiltonian. It appears that the SCP ground state is already uncoupled to all singly- and doubly-excited basis functions. So, in order to improve the SCP result, at least triply-excited states must be included, which then reduces the error in the ground state estimate substantially. For a multidimensional system two numerical challenges arise, namely, evaluation of the potential energy matrix elements in the harmonic basis, and handling and diagonalizing the resulting Hamiltonian matrix, whose size grows rapidly with the dimensionality of the system. Using the example of water hexamer we demonstrate that such a calculation is feasible, i.e., constructing and diagonalizing the Hamiltonian matrix in a triply-excited SCP basis, without any additional assumptions or approximations. In particular, our results indicate that the ground state energy differences between different isomers (e.g., cage and prism) of water hexamer are already quite accurate within the SCP approximation.

  20. Approximation of Integrals Via Monte Carlo Methods, With An Application to Calculating Radar Detection Probabilities

    DTIC Science & Technology

    2005-03-01

    Approximation of Integrals via Monte Carlo Methods, with an Application to Calculating Radar Detection Probabilities (DSTO-TR-1692), by Graham V. Weinberg and Ross [...]. From the surviving front matter: the authors' research areas include synthetic aperture radar and radar detection, studied using both software modelling and mathematical analysis, together with target radar cross section, digital signal processing, and inverse [...].

  1. Permutation blocking path integral Monte Carlo approach to the uniform electron gas at finite temperature.

    PubMed

    Dornheim, Tobias; Schoof, Tim; Groth, Simon; Filinov, Alexey; Bonitz, Michael

    2015-11-28

    The uniform electron gas (UEG) at finite temperature is of high current interest due to its key relevance for many applications including dense plasmas and laser excited solids. In particular, density functional theory heavily relies on accurate thermodynamic data for the UEG. Until recently, the only existing first-principle results had been obtained for N = 33 electrons with restricted path integral Monte Carlo (RPIMC), for low to moderate density, r_s = r̄/a_B ≳ 1. These data have been complemented by configuration path integral Monte Carlo (CPIMC) simulations for r_s ≤ 1 that substantially deviate from RPIMC towards smaller r_s and low temperature. In this work, we present results from an independent third method, the recently developed permutation blocking path integral Monte Carlo (PB-PIMC) approach [T. Dornheim et al., New J. Phys. 17, 073017 (2015)], which we extend to the UEG. Interestingly, PB-PIMC allows us to perform simulations over the entire density range down to half the Fermi temperature (θ = k_B T/E_F = 0.5) and, therefore, to compare our results to both aforementioned methods. While we find excellent agreement with CPIMC, where results are available, we observe deviations from RPIMC that are beyond the statistical errors and increase with density.

  2. Time-dependent integral equations of neutron transport for calculating the kinetics of nuclear reactors by the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Davidenko, V. D.; Zinchenko, A. S.; Harchenko, I. K.

    2016-12-01

    Integral equations for the shape functions in the adiabatic, quasi-static, and improved quasi-static approximations are presented. The approach to solving these equations by the Monte Carlo method is described.

  3. Quantum Mechanical Single Molecule Partition Function from Path Integral Monte Carlo Simulations

    SciTech Connect

    Chempath, Shaji; Bell, Alexis T.; Predescu, Cristian

    2006-10-01

    An algorithm for calculating the partition function of a molecule with the path integral Monte Carlo method is presented. Staged thermodynamic perturbation with respect to a reference harmonic potential is utilized to evaluate the ratio of partition functions. Parallel tempering and a new Monte Carlo estimator for the ratio of partition functions are implemented here to achieve well converged simulations that give an accuracy of 0.04 kcal/mol in the reported free energies. The method is applied to various test systems, including a catalytic system composed of 18 atoms. Absolute free energies calculated by this method lead to corrections as large as 2.6 kcal/mol at 300 K for some of the examples presented.

  4. Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Agarwal, Ravi P.; Shaykhian, Gholam Ali

    2007-01-01

    We discuss here the relative merits of the golden ratio and pi as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive in using a random sequence is to solve real world problems, it is more desirable to compare the quality of the sequences based on their performance for these problems in terms of the quality/accuracy of the output. We also compare these sources against those generated by a popular pseudo-random generator, viz., the Matlab rand function, and the quasi-random generator halton, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. Also, it reveals that pi is a better source of a random sequence than the golden ratio when the accuracy of the integration is concerned.
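    The "consecutive blocks of digits" construction above is easy to reproduce. The sketch below is not the study's code: it generates the digits of pi with Machin's formula in arbitrary-precision decimal arithmetic (my choice of method), maps consecutive 5-digit blocks to points in [0, 1), and uses them as a sample source for a toy Monte Carlo integral; the block width and integrand are arbitrary:

```python
from decimal import Decimal, getcontext

def pi_digits(n):
    """First n decimal digits of pi (after the '3.') via Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239), in Decimal arithmetic."""
    getcontext().prec = n + 10           # guard digits for rounding

    def arctan_recip(x):
        # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ...
        x = Decimal(x)
        power = total = Decimal(1) / x
        k, sign = 1, 1
        eps = Decimal(10) ** -(n + 8)
        while power > eps:
            power /= x * x
            k += 2
            sign = -sign
            total += sign * power / k
        return total

    pi = 16 * arctan_recip(5) - 4 * arctan_recip(239)
    return str(pi)[2:2 + n]

digits = pi_digits(1000)
# Consecutive 5-digit blocks mapped to points in [0, 1).
samples = [int(digits[i:i + 5]) / 1e5 for i in range(0, 1000, 5)]
estimate = sum(u * u for u in samples) / len(samples)   # int_0^1 x^2 dx = 1/3
print(f"pi-digit MC estimate of the integral of x^2: {estimate:.4f}")
```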

  5. MASSCLEANcolors: Mass-dependent integrated colors for stellar clusters derived from 30 million Monte Carlo simulations

    SciTech Connect

    Popescu, Bogdan; Hanson, M. M.

    2010-04-10

    We present Monte Carlo models of open stellar clusters with the purpose of mapping out the behavior of integrated colors with mass and age. Our cluster simulation package allows for stochastic variations in the stellar mass function to evaluate variations in integrated cluster properties. We find that UBVK colors from our simulations are consistent with simple stellar population (SSP) models, provided the cluster mass is large, M_cluster ≥ 10^6 M_sun. Below this mass, our simulations show two significant effects. First, the mean value of the distribution of integrated colors moves away from the SSP predictions and is less red, in the first 10^7 to 10^8 years in UBV colors, and for all ages in (V - K). Second, the 1σ dispersion of observed colors increases significantly with lower cluster mass. We attribute the former to the reduced number of red luminous stars in most of the lower mass clusters and the latter to the increased stochastic effect of a few of these stars on lower mass clusters. This latter point was always assumed to occur, but we now provide the first public code able to quantify this effect. We are completing a more extensive database of magnitudes and colors as a function of stellar cluster age and mass that will allow the determination of the correlation coefficients among different bands, and improve estimates of cluster age and mass from integrated photometry.
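    The stochastic effect described above, where a few luminous stars dominate low-mass clusters, can be illustrated with a toy sampler. This is not the MASSCLEAN package: the IMF slope, mass limits, and crude L ∝ m^3.5 luminosity proxy below are illustrative assumptions, not the record's stellar models:

```python
import math
import random

# Inverse-transform sampling of a Salpeter-like IMF, dN/dm ~ m^(-2.35),
# on [0.1, 100] solar masses (slope and limits are illustrative choices).
ALPHA, M_LO, M_HI = 2.35, 0.1, 100.0
A_LO, A_HI = M_LO ** (1 - ALPHA), M_HI ** (1 - ALPHA)

def sample_mass(rng):
    u = rng.random()
    return (A_LO + u * (A_HI - A_LO)) ** (1.0 / (1 - ALPHA))

def cluster_light(n_stars, rng):
    # Crude luminosity proxy L ~ m^3.5 (main-sequence mass-luminosity slope).
    return sum(sample_mass(rng) ** 3.5 for _ in range(n_stars))

rng = random.Random(5)

masses = [sample_mass(rng) for _ in range(100_000)]
mean_mass = sum(masses) / len(masses)
print(f"mean sampled mass: {mean_mass:.3f} M_sun (analytic ~0.35)")

def relative_scatter(n_stars, n_clusters=200):
    """Std/mean of total cluster light over an ensemble of clusters."""
    L = [cluster_light(n_stars, rng) for _ in range(n_clusters)]
    mean = sum(L) / n_clusters
    var = sum((x - mean) ** 2 for x in L) / (n_clusters - 1)
    return math.sqrt(var) / mean

rs_small = relative_scatter(100)
rs_big = relative_scatter(10_000)
print(f"relative light scatter, 100-star clusters  : {rs_small:.2f}")
print(f"relative light scatter, 10000-star clusters: {rs_big:.2f}")
```

    The heavy m^3.5 weighting means the brightest one or two stars set the total light of a small cluster, so the relative scatter grows sharply as the cluster mass drops, which is the qualitative effect the record quantifies.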

  6. Path Integral Monte Carlo finite-temperature electronic structure of quantum dots

    NASA Astrophysics Data System (ADS)

    Leino, Markku; Rantala, Tapio T.

    2003-03-01

    Quantum Monte Carlo methods allow a straightforward procedure for evaluation of electronic structures with a proper treatment of electronic correlations. This can be done even at finite temperatures [1]. We apply the Path Integral Monte Carlo (PIMC) simulation method [2] for one and two electrons in a single and double quantum dots. With this approach we evaluate the electronic distributions and correlations, and finite temperature effects on those. Temperature increase broadens the one-electron distribution as expected. This effect is smaller for correlated electrons than for single ones. The simulated one and two electron distributions of a single and two coupled quantum dots are also compared to those from experiments and other theoretical (0 K) methods [3]. Computational capacity is found to become the limiting factor in simulations with increasing accuracy. This and other essential aspects of PIMC and its capability in this type of calculations are also discussed. [1] R.P. Feynman: Statistical Mechanics, Addison Wesley, 1972. [2] D.M. Ceperley, Rev.Mod.Phys. 67, 279 (1995). [3] M. Pi, A. Emperador and M. Barranco, Phys.Rev.B 63, 115316 (2001).

  7. CAD-based Monte Carlo Program for Integrated Simulation of Nuclear System SuperMC

    NASA Astrophysics Data System (ADS)

    Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin

    2014-06-01

    The Monte Carlo (MC) method has distinct advantages for simulating complicated nuclear systems and is envisioned as a routine method for nuclear design and analysis in the future. High-fidelity MC simulation coupled with multi-physics simulation has a significant impact on the safety, economy and sustainability of nuclear systems. However, great challenges to current MC methods and codes prevent their application in real engineering projects. SuperMC is a CAD-based Monte Carlo program for integrated simulation of nuclear systems developed by the FDS Team, China, making use of hybrid MC-deterministic methods and advanced computer technologies. The design aim, architecture and main methodology of SuperMC are presented in this paper. SuperMC 2.1, the latest version for neutron, photon and coupled neutron-photon transport calculations, has been developed and validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. SuperMC is still evolving toward a general and routine tool for nuclear-system simulation.

  8. An integrated Monte Carlo dosimetric verification system for radiotherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Yamamoto, T.; Mizowaki, T.; Miyabe, Y.; Takegawa, H.; Narita, Y.; Yano, S.; Nagata, Y.; Teshima, T.; Hiraoka, M.

    2007-04-01

    An integrated Monte Carlo (MC) dose calculation system, MCRTV (Monte Carlo for radiotherapy treatment plan verification), has been developed for clinical treatment plan verification, especially for routine quality assurance (QA) of intensity-modulated radiotherapy (IMRT) plans. The MCRTV system consists of the EGS4/PRESTA MC codes originally written for particle transport through the accelerator, the multileaf collimator (MLC), and the patient/phantom, which run on a 28-CPU Linux cluster, and the associated software developed for the clinical implementation. MCRTV has an interface with a commercial treatment planning system (TPS) (Eclipse, Varian Medical Systems, Palo Alto, CA, USA) and reads the information needed for MC computation transferred in DICOM-RT format. The key features of MCRTV have been presented in detail in this paper. The phase-space data of our 15 MV photon beam from a Varian Clinac 2300C/D have been developed and several benchmarks have been performed under homogeneous and several inhomogeneous conditions (including water, aluminium, lung and bone media). The MC results agreed with the ionization chamber measurements to within 1% and 2% for homogeneous and inhomogeneous conditions, respectively. The MC calculation for a clinical prostate IMRT treatment plan validated the implementation of the beams and the patient/phantom configuration in MCRTV.

  9. WORM ALGORITHM PATH INTEGRAL MONTE CARLO APPLIED TO THE 3He-4He II SANDWICH SYSTEM

    NASA Astrophysics Data System (ADS)

    Al-Oqali, Amer; Sakhel, Asaad R.; Ghassib, Humam B.; Sakhel, Roger R.

    2012-12-01

    We present a numerical investigation of the thermal and structural properties of the 3He-4He sandwich system adsorbed on a graphite substrate using the worm-algorithm path-integral Monte Carlo (WAPIMC) method [M. Boninsegni, N. Prokof'ev and B. Svistunov, Phys. Rev. E 74, 036701 (2006)]. For this purpose, we have modified a previously written WAPIMC code, originally adapted for 4He on graphite, by including the second, 3He component. To describe the fermions, a temperature-dependent statistical potential has been used; this has proven very effective. The WAPIMC calculations have been conducted in the millikelvin temperature regime. However, because of the heavy computations involved, only 30, 40 and 50 mK have been considered for the time being. The pair correlations, Matsubara Green's function, structure factor, and density profiles have been explored at these temperatures.

  10. Excitonic effects in two-dimensional semiconductors: Path integral Monte Carlo approach

    SciTech Connect

    Velizhanin, Kirill A.; Saxena, Avadh

    2015-11-01

    Among the most striking features of novel two-dimensional semiconductors (e.g., transition metal dichalcogenide monolayers or phosphorene) is the strong Coulomb interaction between charge carriers, which results in large excitonic effects. In particular, this leads to the formation of multicarrier bound states upon photoexcitation (e.g., excitons, trions, and biexcitons), which can remain stable near room temperature and contribute significantly to the optical properties of such materials. In this work we have used the path integral Monte Carlo methodology to numerically study the properties of multicarrier bound states in two-dimensional semiconductors. Specifically, we have accurately investigated and tabulated the dependence of the single-exciton, trion, and biexciton binding energies on the strength of dielectric screening, including the limiting cases of very strong and very weak screening. The results of this work are potentially useful in the analysis of experimental data and for benchmarking theoretical and computational models.


  12. Torsional path integral Monte Carlo method for the quantum simulation of large molecules

    NASA Astrophysics Data System (ADS)

    Miller, Thomas F.; Clary, David C.

    2002-05-01

    A molecular application is introduced for calculating quantum statistical mechanical expectation values of large molecules at nonzero temperatures. The Torsional Path Integral Monte Carlo (TPIMC) technique applies an uncoupled winding-number formalism to the torsional degrees of freedom in molecular systems. The internal energies of the molecules ethane, n-butane, n-octane, and enkephalin are calculated at standard temperature using the TPIMC technique and compared to the expectation values obtained using the harmonic-oscillator approximation and a variational technique. All of the studied molecules exhibit significant quantum mechanical contributions to their internal energy expectation values according to the TPIMC technique. The harmonic-oscillator approach to calculating the internal energy performs well for the molecules presented in this study but is limited by its neglect of both anharmonicity effects and the potential coupling of intramolecular torsions.

  13. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    SciTech Connect

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
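
    One of the accelerations described above, replacing a linear search with a binary version, is easy to illustrate. The sketch below is hypothetical (the function names and the energy grid are not from ITS); it shows the kind of sorted-table lookup, e.g. locating the energy bin for a cross-section interpolation, where the swap pays off.

```python
import bisect

def linear_lookup(grid, x):
    """Linear scan for the bin index i with grid[i] <= x < grid[i+1]; O(n)."""
    for i in range(len(grid) - 1):
        if grid[i] <= x < grid[i + 1]:
            return i
    raise ValueError("x outside grid")

def binary_lookup(grid, x):
    """Binary search via bisect for the same bin index; O(log n)."""
    i = bisect.bisect_right(grid, x) - 1
    if not 0 <= i < len(grid) - 1:
        raise ValueError("x outside grid")
    return i

# hypothetical energy grid (MeV) standing in for a cross-section table
energy_grid = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]
assert linear_lookup(energy_grid, 0.5) == binary_lookup(energy_grid, 0.5) == 2
```

    Both lookups return the same bin, so swapping one for the other changes only the run time, which is the "identical or statistically similar results" property the abstract reports.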

  14. Path Integral Monte Carlo and Density Functional Molecular Dynamics Simulations of Warm Dense Matter

    NASA Astrophysics Data System (ADS)

    Militzer, Burkhard; Driver, Kevin

    2011-10-01

    We analyze the applicability of two first-principles simulation techniques, path integral Monte Carlo (PIMC) and density functional molecular dynamics (DFT-MD), to the regime of warm dense matter. We discuss the advantages as well as the limitations of each method and propose directions for future development. Results for dense, liquid helium, where both methods have been applied, demonstrate the range of each method's applicability. Comparison of the equations of state from simulations with analytical theories and free-energy models shows that DFT-MD is useful for temperatures below 100,000 K, while PIMC provides accurate results at all higher temperatures. We characterize the structure of the liquid in terms of pair-correlation functions and study the closure of the band gap with increasing density and temperature. Finally, we discuss simulations of heavier elements and demonstrate, with preliminary results, the reliability of both methods in such cases.

  15. Fermionic path-integral Monte Carlo results for the uniform electron gas at finite temperature.

    PubMed

    Filinov, V S; Fortov, V E; Bonitz, M; Moldabekov, Zh

    2015-03-01

    The uniform electron gas (UEG) at finite temperature has recently attracted substantial interest due to the experimental progress in the field of warm dense matter. To explain the experimental data, accurate theoretical models for high-density plasmas are needed that depend crucially on the quality of the thermodynamic properties of the quantum degenerate nonideal electrons and of the treatment of their interaction with the positive background. Recent restricted (fixed-node) path-integral Monte Carlo (RPIMC) data are believed to be the most accurate for the UEG at finite temperature, but they become questionable at high degeneracy, when the Brueckner parameter rs = a/aB (the ratio of the mean interparticle distance to the Bohr radius) approaches 1. The validity range of these simulations and their predictive capabilities for the UEG are presently unknown. This is due to the unknown quality of the fixed nodes used and of the finite-size scaling from N = 33 simulated particles (per spin projection) to the macroscopic limit. To analyze these questions, we present alternative direct fermionic path-integral Monte Carlo (DPIMC) simulations that are independent of RPIMC. Our simulations take into account quantum effects not only in the electron system but also in its interaction with the uniform positive background. Also, we use substantially larger particle numbers (up to three times more) and perform an extrapolation to the macroscopic limit. We observe very good agreement with RPIMC for the polarized electron gas up to moderate densities around rs = 4, and larger deviations for the unpolarized case at low temperatures. For higher densities (high electron degeneracy), rs ≲ 1.5, both RPIMC and DPIMC become problematic due to the increased fermion sign problem.

  16. Efficient numerical evaluation of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran

    2016-03-01

    Feynman loop integrals are a key ingredient in the calculation of higher-order radiative effects, and are responsible for reliable and accurate theoretical predictions. We improve the efficiency of the numerical integration in sector decomposition by implementing a quasi-Monte Carlo method combined with the CUDA/GPU technique. As a demonstration we present results for several Feynman integrals up to two loops, in both the Euclidean and the physical kinematic regions, in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated in less than half a minute with good accuracy, which makes the direct numerical approach viable for precise investigation of higher-order effects in multi-loop processes, e.g. the next-to-leading-order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098)
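
    The quasi-Monte Carlo idea referenced in this abstract, replacing pseudo-random points with a low-discrepancy point set, can be sketched in a few lines. This is a toy illustration, not the authors' implementation: it uses a Halton sequence (bases 2 and 3) on a smooth two-dimensional integrand, whereas the paper applies a quasi-Monte Carlo method to sector-decomposed Feynman integrands on GPUs.

```python
import random

def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def integrate(func, n, qmc=False):
    """Estimate the integral of func over the unit square with n points,
    using either pseudo-random (MC) or Halton (quasi-MC) points."""
    total = 0.0
    for k in range(1, n + 1):
        x, y = (halton(k, 2), halton(k, 3)) if qmc else (random.random(), random.random())
        total += func(x, y)
    return total / n

f = lambda x, y: x * y          # exact integral over [0,1]^2 is 1/4
random.seed(0)
mc_err = abs(integrate(f, 4096) - 0.25)
qmc_err = abs(integrate(f, 4096, qmc=True) - 0.25)
# for a smooth integrand the Halton error is typically far below the MC error
```

    For smooth integrands the quasi-Monte Carlo error decays roughly as (log N)^d / N rather than the 1/sqrt(N) of plain Monte Carlo, which is the efficiency gain the abstract exploits.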

  17. Monte Carlo simulation and self-consistent integral equation theory for polymers in quenched random media.

    PubMed

    Sung, Bong June; Yethiraj, Arun

    2005-08-15

    The conformational properties and static structure of freely jointed hard-sphere chains in matrices composed of stationary hard spheres are studied using Monte Carlo simulations and integral equation theory. The simulations show that the chain size is a nonmonotonic function of the matrix density when the matrix spheres are the same size as the monomers. When the matrix spheres are of the order of the chain size the chain size decreases monotonically with increasing matrix volume fraction. The simulations are used to test the replica-symmetric polymer reference interaction site model (RSP) integral equation theory. When the simulation results for the intramolecular correlation functions are input into the theory, the agreement between theoretical predictions and simulation results for the pair-correlation functions is quantitative only at the highest fluid volume fractions and for small matrix sphere sizes. The RSP theory is also implemented in a self-consistent fashion, i.e., the intramolecular and intermolecular correlation functions are calculated self-consistently by combining a field theory with the integral equations. The theory captures qualitative trends observed in the simulations, such as the nonmonotonic dependence of the chain size on media fraction.

  18. Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV

    SciTech Connect

    Miller, S.G.

    1988-08-01

    Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend its ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.

  19. Monte Carlo simulation of small electron fields collimated by the integrated photon MLC

    NASA Astrophysics Data System (ADS)

    Mihaljevic, Josip; Soukup, Martin; Dohm, Oliver; Alber, Markus

    2011-02-01

    In this study, a Monte Carlo (MC)-based beam model for an ELEKTA linear accelerator was established. The beam model is based on the EGSnrc Monte Carlo code, whereby electron beams with nominal energies of 10, 12 and 15 MeV were considered. For collimation of the electron beam, only the integrated photon multi-leaf collimators (MLCs) were used; no additional secondary or tertiary add-ons such as applicators, cutouts or dedicated electron MLCs were included. The source parameters of the initial electron beam were derived semi-automatically from measurements of depth-dose curves and lateral profiles in a water phantom. A routine to determine the initial electron energy spectra was developed, which fits a Gaussian spectrum to the most prominent features of the depth-dose curves. Comparisons of calculated and measured depth-dose curves demonstrated agreement within 1%/1 mm. The source divergence angle of the initial electrons was fitted to lateral dose profiles beyond the range of the electrons, where the imparted dose is mainly due to bremsstrahlung produced in the scattering foils. For accurate modelling of narrow beam segments, the influence of air density on the dose calculation was studied. The air density for the simulations was adjusted to local values (433 m above sea level) and compared with the standard air supplied by the ICRU data set; the results indicate that air density is an influential parameter for dose calculations. Furthermore, the default value of the BEAMnrc parameter 'skin depth' for the boundary-crossing algorithm was found to be inadequate for modelling small electron fields: it produced overly broad dose profiles and an increased dose along the central axis, discrepancies that were eliminated by a higher value of this parameter. The beam model was validated against measurements, with agreement mostly within 3%/3 mm.

  20. Fractional volume integration in two-dimensional NMR spectra: CAKE, a Monte Carlo approach.

    PubMed

    Romano, Rocco; Paris, Debora; Acernese, Fausto; Barone, Fabrizio; Motta, Andrea

    2008-06-01

    Quantitative information from multi-dimensional NMR experiments can be obtained by peak volume integration. The standard procedure (selection of a region around the chosen peak and addition of all values) is often biased by poor peak definition because of peak overlap. Here we describe a simple method, called CAKE, for the volume integration of (partially) overlapping peaks. Assuming the axial symmetry of two-dimensional NMR peaks, as occurs in NOESY and TOCSY spectra when a Lorentz-Gauss transformation of the signals is carried out, CAKE estimates the peak volume by multiplying a fractional volume by a factor R, the proportionality ratio between the total and the fractional volume; the fraction is identified as a slice in an exposed region of the overlapping peaks. The volume fraction is obtained via the Monte Carlo hit-or-miss technique, which proved to be the most efficient choice because of the small region and the limited number of points within the selected area. Tests on simulated and experimental peaks, with different degrees of overlap and signal-to-noise ratios, show that CAKE yields improved volume estimates. A main advantage of CAKE is that the volume fraction can be flexibly chosen so as to minimize the effect of overlap, which is frequently observed in two-dimensional spectra.
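
    The hit-or-miss estimate at the core of CAKE can be sketched as follows. This is an illustrative stand-in, not the CAKE code: it estimates the volume of an axially symmetric (Gaussian) peak lying inside a slice by sampling points uniformly in a bounding box and counting those that fall under the surface; the function names and the slice geometry are assumptions.

```python
import math
import random

def gauss2d(x, y, sx=1.0, sy=1.0):
    """Axially symmetric 2-D Gaussian peak, normalized to unit total volume."""
    return math.exp(-0.5 * (x / sx) ** 2 - 0.5 * (y / sy) ** 2) / (2 * math.pi * sx * sy)

def hit_or_miss_fraction(n=200_000, x0=0.0, x1=1.0, y0=-4.0, y1=4.0):
    """Hit-or-miss estimate of the peak volume inside the slice x0 <= x <= x1:
    sample (x, y, z) uniformly in a bounding box and count points under the surface."""
    zmax = gauss2d(0.0, 0.0)            # bounding-box height = peak maximum
    hits = 0
    for _ in range(n):
        x = random.uniform(x0, x1)
        y = random.uniform(y0, y1)
        z = random.uniform(0.0, zmax)
        if z < gauss2d(x, y):
            hits += 1
    box = (x1 - x0) * (y1 - y0) * zmax  # bounding-box volume
    return hits / n * box

random.seed(1)
frac = hit_or_miss_fraction()
# exact slice volume is (Phi(1) - Phi(0)) * (Phi(4) - Phi(-4)) ~ 0.341
```

    In the CAKE scheme, a fractional volume like `frac` would then be multiplied by the factor R to recover the total peak volume.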

  1. High-Throughput Computation and the Applicability of Monte Carlo Integration in Fatigue Load Estimation of Floating Offshore Wind Turbines

    SciTech Connect

    Graf, Peter A.; Stewart, Gordon; Lackner, Matthew; Dykes, Katherine; Veers, Paul

    2016-05-01

    Long-term fatigue loads for floating offshore wind turbines are hard to estimate because they require the evaluation of the integral of a highly nonlinear function over a wide variety of wind and wave conditions. Current design standards involve scanning over a uniform rectangular grid of metocean inputs (e.g., wind speed and direction and wave height and period), which becomes intractable in high dimensions, as the number of required evaluations grows exponentially with dimension. Monte Carlo integration offers a potentially efficient alternative because its theoretical convergence is proportional to the inverse square root of the number of samples, independent of dimension. In this paper, we first report on the integration of the aeroelastic code FAST into NREL's systems engineering tool, WISDEM, and the development of a high-throughput pipeline capable of sampling from arbitrary distributions, running FAST at scale, and postprocessing the results into estimates of fatigue loads. Second, we use this tool to run a variety of studies aimed at comparing grid-based and Monte Carlo-based approaches to calculating long-term fatigue loads. We observe that for more than a few dimensions the Monte Carlo approach can represent a large improvement in computational efficiency, but that as nonlinearity increases, the effectiveness of Monte Carlo is correspondingly reduced. The present work sets the stage for future research focusing on using advanced statistical methods for the analysis of wind turbine fatigue as well as extreme loads.
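
    The contrast drawn above, grid cost growing exponentially with dimension versus a Monte Carlo sample size that is dimension-independent, can be made concrete with a small sketch. The `damage` function below is a hypothetical stand-in for a fatigue-load response, not part of FAST or WISDEM.

```python
import random

def grid_points(k, d):
    """Evaluations needed by a uniform rectangular grid with k points per dimension."""
    return k ** d

def mc_mean(f, d, n=20_000):
    """Monte Carlo estimate of the mean of f over the unit d-cube.
    The sample size n does not grow with d; the error shrinks as 1/sqrt(n)."""
    total = 0.0
    for _ in range(n):
        total += f([random.random() for _ in range(d)])
    return total / n

# hypothetical stand-in for a fatigue 'damage' response; its exact mean is d/3
damage = lambda u: sum(x * x for x in u)

random.seed(2)
est = mc_mean(damage, d=4)   # expected value 4/3
# grid cost explodes with dimension: grid_points(10, 4) vs grid_points(10, 8)
```

    With 10 points per input, a 4-dimensional grid already needs 10^4 evaluations and an 8-dimensional grid 10^8, while the Monte Carlo estimate above uses the same 20,000 samples in either case.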

  2. Monte Carlo solution of the volume-integral equation of electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Peltoniemi, J.; Muinonen, K.

    2014-07-01

    Electromagnetic scattering is often the main physical process to be understood when interpreting observations of asteroids, comets, and meteors. Modeling the scattering still faces many problems, and one needs to assess several different cases: multiple scattering and shadowing by the rough surface, multiple scattering inside a surface element, and single scattering by a small object. Our specific goal is to extend the electromagnetic techniques to larger and more complicated objects, and to derive approximations taking into account the most important wave effects. Here we experiment with Monte Carlo techniques: can they provide something new for solving the scattering problems? The electromagnetic wave equation in the presence of a scatterer of volume V and refractive index m, with an incident wave E_0, including boundary conditions and the scattering condition at infinity, can be presented in the form of an integral equation E(r)[1 + χ(r)Q(ρ)] - ∫_{V-V_ρ} dr' G(r-r') χ(r') E(r') = E_0, where χ(r) = m(r)^2 - 1, Q(ρ) = -1/3 + O(ρ^2) + O'(m^2 ρ^2), with O and O' second- and higher-order corrections for the finite-size volume V_ρ of radius ρ around the singularity, and G is the dyadic Green's function G(R) = [exp(imkR)/(4πR)] { I (1 + i/(mkR) - 1/(mkR)^2) - R̂R̂ (1 + 3i/(mkR) - 3/(mkR)^2) }. In general, this is solved by expanding the internal field in terms of some simple basis functions, e.g., plane or spherical waves or a cubic grid, approximating the integrals in a clever way, and determining the goodness of the solution somehow, e.g., by moments or least squares. Whatever the choice, the solution usually converges nicely toward a correct enough solution when the scatterer is small and simple, and diverges when the scatterer becomes too complicated. With certain methods one can reach larger scatterers faster, but the memory and CPU needs can be huge. Until today, all successful solutions are based on more or less

  3. A path-integral Monte Carlo study of a small cluster: The Ar trimer

    NASA Astrophysics Data System (ADS)

    Pérez de Tudela, R.; Márquez-Mijares, M.; González-Lezana, T.; Roncero, O.; Miret-Artés, S.; Delgado-Barrio, G.; Villarreal, P.

    2010-06-01

    The Ar3 system has been studied between T = 0 K and T = 40 K by means of a path-integral Monte Carlo (PIMC) method. The behavior of the average energy as a function of temperature has been explained by comparison with results obtained from the thermally averaged rovibrational spectra estimated via: (i) a quantum mechanical method based on distributed Gaussian functions for the interparticle distances and (ii) an analytical model which precisely accounts for the participation of the dissociative continua Ar2+Ar and Ar+Ar+Ar. Beyond T ~ 20 K, the system explores floppier configurations than the rigid equilateral geometry, such as linear and Ar-Ar2-like arrangements, and fragments around T ~ 40 K. A careful investigation of the specific heat in terms of a confining radius in the PIMC calculation seems to rule out a proper phase transition as in larger clusters, in apparent contradiction with previous reports of precise values for a liquid-gas transition. The onset of this noticeable change in the dynamics of the trimer occurs, however, at a remarkably low temperature in comparison with Arn systems formed from more Ar atoms. Quantum mechanical effects are found to be relevant at T ≤ 15 K, with both the energies and the radial distributions obtained with quantum PIMC deviating from the corresponding classical results, thus precluding exclusively classical approaches for a precise description of the system in this low-temperature range.

  4. Path Integral Monte Carlo Simulations of Warm Dense Plasmas with mid-Z Elements

    NASA Astrophysics Data System (ADS)

    Driver, Kevin; Soubiran, Francois; Zhang, Shuai; Militzer, Burkhard

    2016-10-01

    Theoretical studies of warm dense plasmas are crucial for improving our knowledge of giant planets, astrophysics, shock physics, and new plasma energy technologies, such as inertial confinement fusion. Path integral Monte Carlo (PIMC) and density functional theory molecular dynamics (DFT-MD) provide consistent, first-principles descriptions of warm, dense matter over a wide range of density and temperature conditions. Here, we report simulation results for a variety of first- and second-row elements. DFT-MD algorithms are well-suited for low temperatures, while PIMC has been restricted to relatively high temperatures due to the free-particle approximation of the nodal surface. For heavier, second-row elements, we have developed a new, localized nodal surface, which allows us to treat bound states within the PIMC formalism. By combining PIMC and DFT-MD pressures and internal energies, we produce a coherent, first-principles equation of state bridging the entire warm dense matter regime. Pair-correlation functions and the density of electronic states reveal an evolving plasma structure. The degree of ionization is affected by both temperature and density. Finally, shock Hugoniot curves show an increase in compression as the first and second shells are ionized. Funding provided by the DOE (DE-SC0010517). Computational resources provided by the NCAR/CISL, NERSC, and NASA.

  5. Path-integral Monte Carlo simulations for electronic dynamics on molecular chains. II. Transport across impurities

    NASA Astrophysics Data System (ADS)

    Mühlbacher, Lothar; Ankerhold, Joachim

    2005-05-01

    Electron transfer (ET) across molecular chains including an impurity is studied based on a recently improved real-time path-integral Monte Carlo (PIMC) approach [L. Mühlbacher, J. Ankerhold, and C. Escher, J. Chem. Phys. 121, 12696 (2004)]. The reduced electronic dynamics is studied for various bridge lengths and defect site energies. By determining intersite hopping rates from PIMC simulations up to moderate times, the relaxation process in the extreme long-time limit is captured within a sequential transfer model. The total transfer rate is extracted and shown to be enhanced for certain defect site energies. Superexchange turns out to be relevant for extreme gap energies only and then gives rise to different dynamical signatures for high- and low-lying defects. Further, it is revealed that the entire bridge compound approaches a steady state on a much shorter time scale than that related to the total transfer. This allows for a simplified description of ET along donor-bridge-acceptor systems in the long-time range.

  6. Path integral Monte Carlo and density functional molecular dynamics simulations of hot, dense helium

    NASA Astrophysics Data System (ADS)

    Militzer, B.

    2009-04-01

    Two first-principles simulation techniques, path integral Monte Carlo (PIMC) and density functional molecular dynamics (DFT-MD), are applied to study hot, dense helium in the density-temperature range of 0.387-5.35 g cm^-3 and 500 K-1.28×10^8 K. One coherent equation of state is derived by combining DFT-MD data at lower temperatures with PIMC results at higher temperatures. Good agreement between both techniques is found in an intermediate-temperature range. For the highest temperatures, the PIMC results converge to the Debye-Hückel limiting law. In order to derive the entropy, a thermodynamically consistent free-energy fit is used that reproduces the internal energies and pressures derived from the first-principles simulations. The equation of state is presented in the form of a table as well as a fit and is compared with different free-energy models. Pair-correlation functions and the electronic density of states are discussed. Shock Hugoniot curves are compared with recent laser shock-wave experiments.

  7. Integrated TIGER Series of Coupled Electron/Photon Monte Carlo Transport Codes System.

    SciTech Connect

    VALDEZ, GREG D.

    2012-11-30

    Version: 00 Distribution is restricted to US Government Agencies and Their Contractors Only. The Integrated Tiger Series (ITS) is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. The goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 95. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  8. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    SciTech Connect

    Snyder, Abigail C.; Jiao, Yu

    2010-10-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical, and scientists are unable to efficiently analyze all of the data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise in modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can give scientists the opportunity to analyze all experimental data more effectively.
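
    The approach described, composing one-dimensional solvers to evaluate a four-dimensional integral, can be sketched with nested quadratures. This is an illustrative sketch, not the report's code: the GSL-based solvers are replaced by a hand-rolled composite Simpson rule so the example is self-contained, and the cost of (n + 1)^4 integrand evaluations is exactly the scaling that motivates the quasi-Monte Carlo comparison.

```python
def simpson(f, a, b, n=32):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def integrate4d(f, n=8):
    """Four nested 1-D quadratures over the unit 4-cube: (n + 1)**4 evaluations of f."""
    return simpson(lambda w:
           simpson(lambda x:
           simpson(lambda y:
           simpson(lambda z: f(w, x, y, z), 0, 1, n), 0, 1, n), 0, 1, n), 0, 1, n)

# separable test integrand: exact value is (1/2)**4 = 1/16
val = integrate4d(lambda w, x, y, z: w * x * y * z)
```

    Because Simpson's rule is exact for polynomials up to degree three, the nested result matches 1/16 to rounding error here; for a rougher integrand the evaluation count per dimension would have to grow, which is where a dimension-independent Monte Carlo or quasi-Monte Carlo sample becomes competitive.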

  9. Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419

    SciTech Connect

    Hulett, David T.; Nosbisch, Michael R.

    2012-07-01

    This discussion of the recommended practice (RP) 57R-09 of AACE International defines the integrated analysis of schedule and cost risk used to estimate the appropriate level of cost and schedule contingency reserve on projects. The main contribution of this RP is to include the impact of schedule risk on cost risk and hence on the need for cost contingency reserves. Additional benefits include the prioritization of risks to cost, some of which are risks to schedule, so that risk mitigation may be conducted in a cost-effective way; scatter diagrams of time-cost pairs for developing joint targets of time and cost; and probabilistic cash flow, which shows cash flow at different levels of certainty. Integrating cost and schedule risk into one analysis, based on the project schedule loaded with costed resources from the cost estimate, (1) provides more accurate cost estimates than if schedule risk were ignored or incorporated only partially, and (2) illustrates the importance of schedule risk to cost risk when the durations of activities using labor-type (time-dependent) resources are risky. Many activities, such as detailed engineering, construction, or software development, are mainly conducted by people who need to be paid even if their work takes longer than scheduled. Level-of-effort resources, such as the project management team, are extreme examples of time-dependent resources, since if the project duration exceeds its planned duration the cost of these resources will increase over their budgeted amount. The integrated cost-schedule risk analysis is based on: - A high-quality CPM schedule with logic tight enough that it provides the correct dates and critical paths during simulation automatically, without manual intervention. - A contingency-free estimate of project costs that is loaded onto the activities of the schedule. - Resolution of inconsistencies between the cost estimate and the schedule that often creep into those documents as project execution proceeds.
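
    A minimal sketch of the RP's central idea, that schedule risk flows into cost risk through time-dependent resources, might look like the following. The three-activity network, triangular duration distributions, and billing rates are all invented for illustration; a real analysis would simulate the full costed CPM schedule.

    ```python
    # Hedged sketch: integrated cost-schedule Monte Carlo on a toy CPM
    # network. All activity data below are invented placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 20000

    # durations (days) as triangular(low, most-likely, high)
    d_design    = rng.triangular(40, 50, 70, n)
    d_procure   = rng.triangular(30, 35, 60, n)
    d_construct = rng.triangular(80, 90, 130, n)

    # toy CPM logic: construction starts after BOTH design and procurement,
    # so schedule risk on either predecessor path flows into the finish date
    finish = np.maximum(d_design, d_procure) + d_construct

    # time-dependent costs ($k/day rates): labor is paid for as long as the
    # work actually takes; the project management team (level of effort)
    # bills for the entire project duration
    cost = (8.0 * d_design + 2.0 * d_procure + 12.0 * d_construct
            + 1.5 * finish)

    print("P80 finish (days):", round(np.percentile(finish, 80), 1))
    print("P80 cost ($k):", round(np.percentile(cost, 80), 1))
    ```

    Comparing the P80 values against the deterministic plan gives the contingency reserve; the simulated (finish, cost) pairs are exactly the scatter-diagram data the RP recommends for setting joint time-cost targets.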

  10. Path-Integral Monte Carlo Study on a Droplet of a Dipolar Bose-Einstein Condensate Stabilized by Quantum Fluctuation

    NASA Astrophysics Data System (ADS)

    Saito, Hiroki

    2016-05-01

    Motivated by recent experiments [H. Kadau et al., http://doi.org/10.1038/nature16485, Nature (London) 530, 194 (2016); I. Ferrier-Barbut et al., http://arxiv.org/abs/1601.03318, arXiv:1601.03318] and theoretical prediction (F. Wächtler and L. Santos, http://arxiv.org/abs/1601.04501, arXiv:1601.04501), the ground state of a dysprosium Bose-Einstein condensate with strong dipole-dipole interaction is studied by the path-integral Monte Carlo method. It is shown that quantum fluctuation can stabilize the condensate against dipolar collapse.

  11. Path-integral Monte Carlo method for the local Z2 Berry phase.

    PubMed

    Motoyama, Yuichi; Todo, Synge

    2013-02-01

    We present a loop cluster algorithm Monte Carlo method for calculating the local Z2 Berry phase of quantum spin models. The Berry connection, which is given as the inner product of two ground states with different local twist angles, is expressed as a Monte Carlo average on worldlines with fixed spin configurations at the imaginary-time boundaries. The "complex weight problem" caused by the local twist is solved by adopting the meron cluster algorithm. We present the results of simulations on the antiferromagnetic Heisenberg model on an out-of-phase bond-alternating ladder to demonstrate that our method successfully detects the change in the valence bond pattern at the quantum phase transition point. We also propose that the gauge-fixed local Berry connection can be an effective tool for precisely estimating the quantum critical point.

  12. Optimization strategy integrity for watershed agricultural non-point source pollution control based on Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Gong, Y.; Yu, Y. J.; Zhang, W. Y.

    2016-08-01

    This study establishes a methodological system for optimizing watershed non-point source pollution control by simulating pollution loads and analyzing the integrity of optimization strategies. First, the sources of watershed agricultural non-point source pollution are divided into four categories: agricultural land, natural land, livestock breeding, and rural residential land. Second, different pollution control measures are chosen for the source, midway, and ending stages. Third, the optimization effect of pollution load control in the three stages is simulated using Monte Carlo simulation. The method described above is applied to the Ashi River watershed in Heilongjiang Province, China. Case study results indicate that the three types of control measures combined can be implemented only if the government promotes the optimized plan and gradually improves implementation efficiency. This method for assessing optimization strategy integrity in watershed non-point source pollution control has significant reference value.

  13. Variational path integral molecular dynamics and hybrid Monte Carlo algorithms using a fourth order propagator with applications to molecular systems

    NASA Astrophysics Data System (ADS)

    Kamibayashi, Yuki; Miura, Shinichi

    2016-08-01

    In the present study, variational path integral molecular dynamics and associated hybrid Monte Carlo (HMC) methods have been developed on the basis of a fourth order approximation of the density operator. To reveal how physical quantities depend on various parameters, we analytically solve one-dimensional harmonic oscillators by the variational path integral; as a byproduct, we obtain the analytical expression of the discretized density matrix for the oscillators using the fourth order approximation. We then apply our methods to realistic systems such as a water molecule and a para-hydrogen cluster. In the HMC, we adopt a two-level description to avoid the time-consuming Hessian evaluation. For the systems examined in this paper, the HMC method is found to be about three times more efficient than the molecular dynamics method if appropriate HMC parameters are adopted; the advantage of the HMC method is suggested to be more evident for systems described by many-body interactions.

  14. Computational investigations of low-discrepancy point sets

    NASA Technical Reports Server (NTRS)

    Warnock, T. T.

    1971-01-01

    The quasi-Monte Carlo method of integration offers an attractive solution to the problem of evaluating integrals in a large number of dimensions; however, the associated error bounds are difficult to obtain theoretically. Since these bounds are associated with the L2 discrepancy of the set of points used in the integration, numerical calculations of the L2 discrepancy for several types of quasi-Monte Carlo formulae are presented.
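
    The L2 star discrepancy mentioned here has a closed form (due to Warnock) that makes such numerical calculations straightforward. A sketch, with a random point set standing in for the quasi-Monte Carlo formulae studied in the report:

    ```python
    # Hedged sketch: Warnock's closed-form expression for the L2 star
    # discrepancy of a point set in [0,1]^d. The random test set is an
    # invented placeholder for the low-discrepancy sets of the report.
    import numpy as np

    def l2_star_discrepancy(x):
        """L2 star discrepancy of an (n, d) array of points in [0,1]^d."""
        n, d = x.shape
        term1 = 3.0 ** (-d)
        term2 = (2.0 / n) * np.sum(np.prod((1.0 - x**2) / 2.0, axis=1))
        # pairwise products over dimensions of (1 - max(x_ik, x_jk))
        m = np.maximum(x[:, None, :], x[None, :, :])   # shape (n, n, d)
        term3 = np.sum(np.prod(1.0 - m, axis=2)) / n**2
        return np.sqrt(term1 - term2 + term3)

    rng = np.random.default_rng(0)
    pts = rng.random((128, 2))
    print(l2_star_discrepancy(pts))
    ```

    The pairwise double sum makes this O(n^2 d), which is exactly why such calculations were a computational investigation rather than a hand exercise.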

  15. Combination of the pair density approximation and the Takahashi–Imada approximation for path integral Monte Carlo simulations

    SciTech Connect

    Zillich, Robert E.

    2015-11-15

    We construct an accurate imaginary time propagator for path integral Monte Carlo simulations of heterogeneous systems consisting of a mixture of atoms and molecules. We combine the pair density approximation, which is highly accurate but feasible only for isotropic interactions between atoms, with the Takahashi–Imada approximation for general interactions. We present finite temperature simulation results for the energy and structure of molecule–helium clusters X(4He)20 (X = HCCH and LiH), which show a marked improvement over the Trotter approximation, which has a 2nd-order time step bias. We show that the 4th-order corrections of the Takahashi–Imada approximation can also be applied perturbatively to a 2nd-order simulation.

  16. Direct orientation sampling of diatomic molecules for path integral Monte Carlo calculation of fully quantum virial coefficients

    NASA Astrophysics Data System (ADS)

    Subramanian, Ramachandran; Schultz, Andrew J.; Kofke, David A.

    2017-03-01

    We develop an orientation sampling algorithm for rigid diatomic molecules, which allows direct generation of rings of images used for path-integral calculation of nuclear quantum effects. The algorithm treats the diatomic molecule as two independent atoms as opposed to one (quantum) rigid rotor. Configurations are generated according to a solvable approximate distribution that is corrected via the acceptance decision of the Monte Carlo trial. Unlike alternative methods that treat the systems as a quantum rotor, this atom-based approach is better suited for generalization to multi-atomic (more than two atoms) and flexible molecules. We have applied this algorithm in combination with some of the latest ab initio potentials of rigid H2 to compute fully quantum second virial coefficients, for which we observe excellent agreement with both experimental and simulation data from the literature.
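
    The correction device described above, drawing trials from a solvable approximate distribution and repairing the residual error in the Monte Carlo acceptance decision, is an independence-Metropolis scheme. A one-dimensional toy sketch, in which an invented Gaussian target and proposal stand in for the actual orientation distributions of the ring of images:

    ```python
    # Hedged sketch: independence-Metropolis sampling, where an
    # approximate (solvable) proposal q is corrected toward the true
    # target p by the acceptance step. Target and proposal are toys.
    import math, random

    def p(x):          # unnormalized target: narrow Gaussian, sigma = 0.5
        return math.exp(-0.5 * (x / 0.5) ** 2)

    def q(x):          # unnormalized proposal density: wider Gaussian
        return math.exp(-0.5 * x * x)

    def q_sample(rng):  # the proposal is solvable: sample it directly
        return rng.gauss(0.0, 1.0)

    rng = random.Random(0)
    x = 0.0
    samples = []
    for _ in range(50000):
        y = q_sample(rng)
        # acceptance ratio p(y) q(x) / (p(x) q(y)) corrects q toward p
        if rng.random() < min(1.0, (p(y) * q(x)) / (p(x) * q(y))):
            x = y
        samples.append(x)

    var = sum(s * s for s in samples) / len(samples)
    print(round(var, 3))  # should approach the target variance 0.25
    ```

    Because the proposal only has to be roughly right, the same pattern generalizes to proposing whole rings of images at once, with the acceptance step absorbing the approximation error.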

  17. ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    SciTech Connect

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2008-04-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  18. Monte Carlo ray-tracing simulations of luminescent solar concentrators for building integrated photovoltaics

    NASA Astrophysics Data System (ADS)

    Leow, Shin Woei; Corrado, Carley; Osborn, Melissa; Carter, Sue A.

    2013-09-01

    Luminescent solar concentrators (LSCs) have the ability to receive light from a wide range of angles, concentrating the captured light onto small photoactive areas. This enables greater incorporation of LSCs into building designs as windows, skylights, and wall claddings, in addition to rooftop installations of current solar panels. By using relatively cheap luminescent dyes and acrylic waveguides to concentrate light onto a smaller area of photovoltaic (PV) cells, this technology has the potential to approach grid price parity. We employ a panel design in which the front-facing PV cells collect both direct and concentrated light, ensuring a gain factor greater than one. This also allows for flexibility in determining the placement and percentage coverage of PV cells during the design process, balancing reabsorption losses against the power output and level of light concentration desired. To aid in design optimization, a Monte Carlo ray tracing program was developed to study the transport of photons and the loss mechanisms in LSC panels. The program imports measured absorption/emission spectra and transmission coefficients as simulation parameters, with interactions of photons in the panel determined by comparing calculated probabilities with random number generators. LSC panels with multiple dyes or layers can also be simulated. Analysis of the results reveals optimal panel dimensions and PV cell layouts for maximum power output for a given dye concentration, absorption/emission spectrum, and quantum efficiency.
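
    The probability-versus-random-number logic described above can be sketched as a chain of photon fates. All probabilities below are invented placeholders, not the measured spectra and transmission coefficients the actual program imports, and the geometry is collapsed to per-event probabilities.

    ```python
    # Hedged sketch: Monte Carlo photon-fate sampling in an LSC, where
    # each interaction is decided by comparing a probability with a
    # uniform random number. All probabilities are invented.
    import random

    P_ABSORB = 0.7    # photon absorbed by the dye on entry
    QE = 0.95         # dye quantum efficiency (radiative re-emission)
    P_TRAPPED = 0.74  # re-emitted ray trapped by total internal reflection
    P_REABSORB = 0.2  # reabsorbed by another dye molecule per pass

    def trace_photon(rng):
        """Follow one photon; return True if it reaches a PV cell."""
        if rng.random() > P_ABSORB:
            return False              # transmitted straight through the panel
        while True:
            if rng.random() > QE:
                return False          # non-radiative loss in the dye
            if rng.random() > P_TRAPPED:
                return False          # lost through the escape cone
            if rng.random() > P_REABSORB:
                return True           # guided to the PV cell at the edge
            # otherwise reabsorbed: loop back and re-emit

    rng = random.Random(42)
    n = 100000
    collected = sum(trace_photon(rng) for _ in range(n))
    print("optical efficiency ~", collected / n)
    ```

    Tallying which branch terminated each photon gives exactly the loss-mechanism breakdown the ray tracer is used to study; the real program replaces these constants with wavelength- and position-dependent probabilities.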

  19. Extraction of diffuse correlation spectroscopy flow index by integration of Nth-order linear model with Monte Carlo simulation

    SciTech Connect

    Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal

    2014-05-12

    The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for the different tissue models. Although adding random noise to the DCS data resulted in αD_B variations, the mean errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not depend on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.

  20. Quantum partition functions of composite particles in a hydrogen-helium plasma via path integral Monte Carlo

    SciTech Connect

    Wendland, D.; Ballenegger, V.; Alastuey, A.

    2014-11-14

    We compute two- and three-body cluster functions that describe contributions of composite entities, like hydrogen atoms, ions H⁻ and H₂⁺, and helium atoms, and also charge-charge and atom-charge interactions, to the equation of state of a hydrogen-helium mixture at low density. A cluster function has the structure of a truncated virial coefficient and behaves, at low temperatures, like a usual partition function for the composite entity. Our path integral Monte Carlo calculations use importance sampling to sample efficiently the cluster partition functions even at low temperatures where bound state contributions dominate. We also employ a new and efficient adaptive discretization scheme that allows one not only to eliminate Coulomb divergencies in discretized path integrals, but also to direct the computational effort where particles are close and thus strongly interacting. The numerical results for the two-body function agree with the analytically known quantum second virial coefficient. The three-body cluster functions are compared at low temperatures with familiar partition functions for composite entities.

  1. Quantum partition functions of composite particles in a hydrogen-helium plasma via path integral Monte Carlo

    NASA Astrophysics Data System (ADS)

    Wendland, D.; Ballenegger, V.; Alastuey, A.

    2014-11-01

    We compute two- and three-body cluster functions that describe contributions of composite entities, like hydrogen atoms, ions H-, H_2^+, and helium atoms, and also charge-charge and atom-charge interactions, to the equation of state of a hydrogen-helium mixture at low density. A cluster function has the structure of a truncated virial coefficient and behaves, at low temperatures, like a usual partition function for the composite entity. Our path integral Monte Carlo calculations use importance sampling to sample efficiently the cluster partition functions even at low temperatures where bound state contributions dominate. We also employ a new and efficient adaptive discretization scheme that allows one not only to eliminate Coulomb divergencies in discretized path integrals, but also to direct the computational effort where particles are close and thus strongly interacting. The numerical results for the two-body function agree with the analytically known quantum second virial coefficient. The three-body cluster functions are compared at low temperatures with familiar partition functions for composite entities.

  2. Path-integral Monte Carlo simulations for electronic dynamics on molecular chains. I. Sequential hopping and super exchange

    NASA Astrophysics Data System (ADS)

    Mühlbacher, Lothar; Ankerhold, Joachim; Escher, Charlotte

    2004-12-01

    An improved real-time quantum Monte Carlo procedure is presented and applied to describe the electron transfer dynamics along molecular chains. The model consists of discrete electronic sites coupled to a thermal environment which is integrated out exactly within the path integral formulation. The approach is numerically exact, and its results reduce to known analytical findings (Marcus theory, golden rule) in the proper limits. Special attention is paid to the roles of superexchange and sequential hopping at lower temperatures in symmetric donor-bridge-acceptor systems. In contrast to previous approximate studies, superexchange turns out to play a significant role only for extremely high-lying bridges, where the transfer is basically frozen, or for extremely low temperatures, where for weaker dissipation a description in terms of rate constants is no longer feasible. For bridges of increasing length an algebraic decrease of the yield is found for short as well as for long bridges. The approach can be extended to electronic systems with more complicated topologies, including impurities, and in the presence of external time-dependent forces.

  3. Characterization and Monte Carlo simulation of single ion Geiger mode avalanche diodes integrated with a quantum dot nanostructure

    NASA Astrophysics Data System (ADS)

    Sharma, Peter; Abraham, J. B. S.; Ten Eyck, G.; Childs, K. D.; Bielejec, E.; Carroll, M. S.

    Detection of single ion implantation within a nanostructure is necessary for the high-yield fabrication of implanted donor-based quantum computing architectures. Single ion Geiger mode avalanche (SIGMA) diodes with a laterally integrated nanostructure capable of forming a quantum dot were fabricated and characterized using photon pulses. The detection efficiency of this design was measured as a function of wavelength, lateral position, and the delay time between the photon pulse and the overbias detection window. Monte Carlo simulations based only on the random diffusion of photo-generated carriers and the geometrical placement of the avalanche region agree qualitatively with the device characterization. Based on these results, SIGMA detection efficiency appears to be determined solely by the diffusion of photo-generated electron-hole pairs into a buried avalanche region. Device performance is then highly dependent on the uniformity of the underlying silicon substrate and the proximity of photo-generated carriers to the silicon-silicon dioxide interface, which are the most important limiting factors for reaching the single ion detection limit with SIGMA detectors. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  4. Monte Carlo Simulations of Luminescent Solar Concentrators with Front-Facing Photovoltaic Cells for Building Integrated Photovoltaics

    NASA Astrophysics Data System (ADS)

    Leow, Shin; Corrado, Carley; Osborn, Melissa; Carter, Sue

    2013-03-01

    Luminescent solar concentrators (LSCs) have the ability to receive light from a wide range of angles and concentrate the captured light onto small photoactive areas. This enables LSCs to be integrated more extensively into buildings, as windows and wall claddings, in addition to rooftop installations. LSCs with front-facing PV cells collect both direct and concentrated light, ensuring a gain factor greater than one. This also allows for flexibility in determining the placement and percentage coverage of PV cells when designing panels, to balance reabsorption losses, power output, and the level of concentration desired. A Monte Carlo ray tracing program was developed to study the transport of photons and the loss mechanisms in LSC panels and to aid in design optimization. The program imports measured absorption/emission spectra and transmission coefficients as simulation parameters. Interactions of photons with the LSC panel are determined by comparing calculated probabilities with random number generators. Simulation results reveal optimal panel dimensions and PV cell layouts to achieve maximum power output.

  5. Path Integral Monte Carlo Study Confirms a Highly Ordered Snowball in 4He Nanodroplets Doped with an Ar+ Ion

    NASA Astrophysics Data System (ADS)

    Tramonto, F.; Salvestrini, P.; Nava, M.; Galli, D. E.

    2015-07-01

    By means of the path integral Monte Carlo method, we have performed a detailed microscopic study of 4He nanodroplets doped with an argon ion, Ar⁺. We have computed density profiles, energies, and dissociation energies, and characterized the local order around the ion for nanodroplets with a number of 4He atoms ranging from 10 to 64, and also 128. We have found the formation of a stable solid structure around the ion, a "snowball", consisting of three concentric shells in which the 4He atoms are placed at the vertices of Platonic solids: the first, inner shell is an icosahedron (12 atoms); the second is a dodecahedron with 20 atoms placed on the faces of the icosahedron of the first shell; the third shell is again an icosahedron, composed of 12 atoms placed on the faces of the dodecahedron of the second shell. The "magic numbers" implied by this structure, 12, 32, and 44 helium atoms, have been observed in a recent experimental study (Bartl et al., J Phys Chem A 118:8050, 2014) of these complexes; the dissociation energy curve computed in the present work shows jumps in correspondence with those found in the nanodroplet abundance distribution measured in that experiment, strengthening the agreement between theory and experiment. The same structures were predicted in Galli et al. (J Phys Chem A 115:7300, 2011) in a study regarding Na⁺@4He complexes; a comparison between Ar⁺@4He and Na⁺@4He complexes is also presented.

  6. Finite temperature path integral Monte Carlo simulations of structural and dynamical properties of ArN-CO2 clusters

    NASA Astrophysics Data System (ADS)

    Wang, Lecheng; Xie, Daiqian

    2012-08-01

    We report finite temperature quantum mechanical simulations of structural and dynamical properties of ArN-CO2 clusters using a path integral Monte Carlo algorithm. The simulations are based on a newly developed analytical Ar-CO2 interaction potential obtained by fitting ab initio results to an anisotropic two-dimensional Morse/Long-range function. The calculated distributions of argon atoms around the CO2 molecule in ArN-CO2 clusters of different sizes are consistent with previous studies of the configurations of the clusters. A first-order perturbation theory is used to quantitatively predict the CO2 vibrational frequency shift in different clusters. The first solvation shell is completed at N = 17. Interestingly, our simulations for larger ArN-CO2 clusters showed several different structures of the argon shell around the doped CO2 molecule. The observed two distinct peaks (2338.8 and 2344.5 cm-1) in the υ3 band of CO2 may be due to the different arrangements of argon atoms around the dopant molecule.

  7. Path integral Monte Carlo simulation of global and local superfluidity in liquid 4He reservoirs separated by nanoscale apertures

    NASA Astrophysics Data System (ADS)

    Volkoff, Tyler; Kwon, Yongkyung; Whaley, K. Birgitta

    2016-10-01

    We present a path integral Monte Carlo study of the global superfluid fraction and local superfluid density in cylindrically symmetric reservoirs of liquid 4He separated by nanoaperture arrays. The superfluid response to both translations along the axis of symmetry (longitudinal response) and rotations about the cylinder axis (transverse response) are computed, together with radial and axial density distributions that reveal the microscopic inhomogeneity arising from the combined effects of the confining external potential and the 4He-4He interatomic potentials. We make a microscopic determination of the length scale of decay of superfluidity at the radial boundaries of the system by analyzing the local superfluid density distribution to extract a displacement length that quantifies the superfluid mass displacement away from the boundary. We find that the longitudinal superfluid response is reduced in reservoirs separated by a septum containing sufficiently small apertures compared to a cylinder with no intervening aperture array, for all temperatures below Tλ. For a single aperture in the septum, a significant drop in the longitudinal superfluid response is seen when the aperture diameter is made smaller than twice the empirical temperature-dependent 4He healing length, consistent with the formation of a weak link between the reservoirs. Increasing the diameter of a single aperture or the number of apertures in the array results in an increase of the superfluid density toward the expected bulk value.

  8. Comparative Monte Carlo study on the performance of integration- and list-mode detector configurations for carbon ion computed tomography

    NASA Astrophysics Data System (ADS)

    Meyer, Sebastian; Gianoli, Chiara; Magallanes, Lorena; Kopp, Benedikt; Tessonnier, Thomas; Landry, Guillaume; Dedes, George; Voss, Bernd; Parodi, Katia

    2017-02-01

    Ion beam therapy offers the possibility of a highly conformal tumor-dose distribution; however, this technique is extremely sensitive to inaccuracies in the treatment procedures. Ambiguities in the conversion of Hounsfield units of the treatment planning x-ray CT to relative stopping power (RSP) can cause uncertainties in the estimated ion range of up to several millimeters. Ion CT (iCT) represents a favorable solution, allowing direct assessment of the RSP. In this simulation study we investigate the performance of the integration-mode configuration for carbon iCT, in comparison with a single-particle approach under the same set-up. The experimental detector consists of a stack of 61 air-filled parallel-plate ionization chambers, interleaved with 3 mm thick PMMA absorbers. By means of Monte Carlo simulations, this design was applied to acquire iCTs of phantoms of tissue-equivalent materials. An optimization of the acquisition parameters was performed to reduce the dose exposure, and the implications of a reduced absorber thickness were assessed. In order to overcome limitations of integration-mode detection in the presence of lateral tissue heterogeneities, a dedicated post-processing method using a linear decomposition of the detector signal was developed, and its performance was compared to the list-mode acquisition. For the current set-up, the phantom dose could be reduced to below 30 mGy with only minor image quality degradation. By using the decomposition method, a correct identification of the components and an RSP accuracy improvement of around 2.0% were obtained. The comparison of integration- and list-mode indicated a slightly better image quality for the latter, with average median RSP errors below 1.8% and 1.0%, respectively. With a decreased absorber thickness, a reduced RSP error was observed. Overall, these findings support the potential of iCT for low dose RSP estimation, showing that integration-mode detectors with dedicated post-processing strategies

  9. Comparative Monte Carlo study on the performance of integration- and list-mode detector configurations for carbon ion computed tomography.

    PubMed

    Meyer, Sebastian; Gianoli, Chiara; Magallanes, Lorena; Kopp, Benedikt; Tessonnier, Thomas; Landry, Guillaume; Dedes, George; Voss, Bernd; Parodi, Katia

    2017-02-07

    Ion beam therapy offers the possibility of a highly conformal tumor-dose distribution; however, this technique is extremely sensitive to inaccuracies in the treatment procedures. Ambiguities in the conversion of Hounsfield units of the treatment planning x-ray CT to relative stopping power (RSP) can cause uncertainties in the estimated ion range of up to several millimeters. Ion CT (iCT) represents a favorable solution, allowing direct assessment of the RSP. In this simulation study we investigate the performance of the integration-mode configuration for carbon iCT, in comparison with a single-particle approach under the same set-up. The experimental detector consists of a stack of 61 air-filled parallel-plate ionization chambers, interleaved with 3 mm thick PMMA absorbers. By means of Monte Carlo simulations, this design was applied to acquire iCTs of phantoms of tissue-equivalent materials. An optimization of the acquisition parameters was performed to reduce the dose exposure, and the implications of a reduced absorber thickness were assessed. In order to overcome limitations of integration-mode detection in the presence of lateral tissue heterogeneities, a dedicated post-processing method using a linear decomposition of the detector signal was developed, and its performance was compared to the list-mode acquisition. For the current set-up, the phantom dose could be reduced to below 30 mGy with only minor image quality degradation. By using the decomposition method, a correct identification of the components and an RSP accuracy improvement of around 2.0% were obtained. The comparison of integration- and list-mode indicated a slightly better image quality for the latter, with average median RSP errors below 1.8% and 1.0%, respectively. With a decreased absorber thickness, a reduced RSP error was observed. Overall, these findings support the potential of iCT for low dose RSP estimation, showing that integration-mode detectors with dedicated post-processing strategies

  10. Color path-integral Monte-Carlo simulations of quark-gluon plasma: Thermodynamic and transport properties

    NASA Astrophysics Data System (ADS)

    Filinov, V. S.; Ivanov, Yu. B.; Fortov, V. E.; Bonitz, M.; Levashov, P. R.

    2013-03-01

    Based on the quasiparticle model of the quark-gluon plasma (QGP), a color quantum path-integral Monte Carlo (PIMC) method for the calculation of thermodynamic properties and, closely related to the latter, a Wigner dynamics method for the calculation of transport properties of the QGP are formulated. The QGP partition function is presented in the form of a color path integral with a new relativistic measure instead of the Gaussian one traditionally used in the Feynman-Wiener path integral. A procedure for sampling color variables according to the SU(3) group Haar measure is developed for integration over the color variable. It is shown that the PIMC method is able to reproduce the lattice QCD equation of state at zero baryon chemical potential at realistic model parameters (i.e., quasiparticle masses and coupling constant) and also yields valuable insight into the internal structure of the QGP. Our results indicate that the QGP reveals quantum liquidlike (rather than gaslike) properties up to the highest considered temperature of 525 MeV. The pair distribution functions clearly reflect the existence of gluon-gluon bound states, i.e., glueballs, at temperatures just above the phase transition, while mesonlike qq̄ bound states are not found. The calculated self-diffusion coefficient agrees well with some estimates of the heavy-quark diffusion constant available from recent lattice data and with an analysis of heavy-quark quenching in experiments on ultrarelativistic heavy-ion collisions, but appreciably exceeds other estimates. The lattice and heavy-quark-quenching results on heavy-quark diffusion are still rather diverse. The obtained results for the shear viscosity are in the range of those deduced from an analysis of the experimental elliptic flow in ultrarelativistic heavy-ion collisions, i.e., in terms of the viscosity-to-entropy ratio, 1/4π ≲ η/S < 2.5/4π, in the temperature range from 170 to 440 MeV.

  11. Path integral Monte Carlo simulations of H2 adsorbed to lithium-doped benzene: A model for hydrogen storage materials

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Kolmann, Stephen J.; D'Arcy, Jordan H.; Crittenden, Deborah L.; Jordan, Meredith J. T.

    2015-11-01

    Finite temperature quantum and anharmonic effects are studied in H2-Li+-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H2. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H2 molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔUads, and enthalpy, ΔHads, for H2 adsorption onto Li+-benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling—coupling between the intermolecular degrees of freedom becomes less important as temperature increases whereas anharmonicity becomes more important. The most anharmonic motions in H2-Li+-benzene are the "helicopter" and "ferris wheel" H2 rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔUads and ΔHads are -13.3 ± 0.1 and -14.5 ± 0.1 kJ mol-1, respectively.
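The one-dimensional free-rotor correction mentioned above can be made concrete with a direct Boltzmann sum over rotor levels E_m = B·m². The sketch below is our own illustration; the rotational constant B ≈ 60.8 cm⁻¹ for free H2 is an assumed input, not a value taken from the paper, and the code simply recovers the classical RT/2 internal energy in the high-temperature limit.

```python
import math

def free_rotor_U(B_cm, T):
    """Internal energy (kJ/mol) of a 1-D quantum free rotor with
    rotational constant B (cm^-1), from a direct Boltzmann sum over
    the m = 0, +/-1, +/-2, ... levels E_m = B*m**2."""
    kB_cm = 0.6950348            # Boltzmann constant, cm^-1 per K
    beta = 1.0 / (kB_cm * T)
    z = 0.0
    e_sum = 0.0
    for m in range(-200, 201):   # converged at these temperatures
        e = B_cm * m * m
        w = math.exp(-beta * e)
        z += w
        e_sum += e * w
    return (e_sum / z) * 0.01196266   # cm^-1 -> kJ/mol

# Quantum internal energy at 150 K vs. the classical equipartition value.
u_150 = free_rotor_U(60.8, 150.0)
u_classical_150 = 0.5 * 8.3145e-3 * 150.0   # RT/2 in kJ/mol
```

At 150 K the quantum sum already sits close to the classical RT/2 value, consistent with the abstract's point that the largest corrections come from anharmonicity rather than from the rotor treatment itself.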

  12. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-07

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
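The non-uniform spot sampling at the core of the APS scheme can be sketched in plain Python. The interface below is hypothetical (the actual library is a GPU code), but it shows the key idea: a fixed Monte Carlo history budget is distributed across pencil-beam spots in proportion to their current optimization intensities, so high-intensity spots receive finer statistics.

```python
import random

def allocate_histories(spot_intensities, total_histories, rng):
    """Split a fixed MC history budget across spots in proportion to
    their current intensities. Integer shares are assigned
    deterministically; the rounding remainder is distributed by
    weighted random draws."""
    total = sum(spot_intensities)
    alloc = [int(total_histories * w / total) for w in spot_intensities]
    probs = [w / total for w in spot_intensities]
    for _ in range(total_histories - sum(alloc)):
        alloc[rng.choices(range(len(alloc)), weights=probs)[0]] += 1
    return alloc

rng = random.Random(42)
alloc = allocate_histories([5.0, 1.0, 0.5, 10.0], 10000, rng)
```

In an optimization loop, this allocation would be recomputed whenever the spot weights change, which is how the expensive MC simulation is used "more effectively" than a one-shot precomputation of all spot doses.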

  13. Identification of Critical Molecular Components in a Multiscale Cancer Model Based on the Integration of Monte Carlo, Resampling, and ANOVA.

    PubMed

    Wang, Zhihui; Bordas, Veronika; Deisboeck, Thomas S

    2011-01-01

    To date, parameters defining biological properties in multiscale disease models are commonly obtained from a variety of sources. It is thus important to examine the influence of parameter perturbations on system behavior, rather than to limit the model to a specific set of parameters. Such sensitivity analysis can be used to investigate how changes in input parameters affect model outputs. However, multiscale cancer models require special attention because they generally take longer to run than does a series of signaling pathway analysis tasks. In this article, we propose a global sensitivity analysis method based on the integration of Monte Carlo, resampling, and analysis of variance. This method provides solutions to (1) how to render the large number of parameter variation combinations computationally manageable, and (2) how to effectively quantify the sampling distribution of the sensitivity index to address the inherent computational intensity issue. We exemplify the feasibility of this method using a two-dimensional molecular-microscopic agent-based model previously developed for simulating non-small cell lung cancer; in this model, an epidermal growth factor (EGF)-induced, EGF receptor-mediated signaling pathway was implemented at the molecular level. Here, the cross-scale effects of molecular parameters on two tumor growth evaluation measures, i.e., tumor volume and expansion rate, at the microscopic level are assessed. Analysis finds that ERK, a downstream molecule of the EGF receptor signaling pathway, has the most important impact on regulating both measures. The potential to apply this method to therapeutic target discovery is discussed.
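The Monte Carlo/resampling combination can be illustrated on a toy response function. The sketch below is entirely hypothetical (a two-input linear stand-in, not the agent-based lung cancer model): it computes a crude first-order sensitivity index for each input as a binned correlation ratio, then uses bootstrap resampling to quantify the sampling distribution of that index.

```python
import random
import statistics

def model(x1, x2):
    """Toy stand-in for the multiscale model: x1 dominates the output."""
    return 4.0 * x1 + 0.5 * x2

def first_order_index(xs, ys, bins=10):
    """Crude first-order sensitivity index: the variance of the
    bin-conditional means of y along one input, divided by the total
    variance of y (a binned correlation-ratio estimate)."""
    pairs = sorted(zip(xs, ys))
    size = len(pairs) // bins
    bin_means = [statistics.fmean(y for _, y in pairs[b * size:(b + 1) * size])
                 for b in range(bins)]
    return statistics.pvariance(bin_means) / statistics.pvariance(ys)

rng = random.Random(1)
n = 2000
x1 = [rng.random() for _ in range(n)]
x2 = [rng.random() for _ in range(n)]
y = [model(a, b) for a, b in zip(x1, x2)]

s1 = first_order_index(x1, y)   # large: x1 drives the model
s2 = first_order_index(x2, y)   # small

# Bootstrap resampling: the sampling distribution of the index itself.
boot = []
for _ in range(200):
    idx = [rng.randrange(n) for _ in range(n)]
    boot.append(first_order_index([x1[i] for i in idx], [y[i] for i in idx]))
```

The spread of `boot` is exactly the kind of quantity the article's resampling step provides: a confidence measure on the sensitivity ranking without re-running the expensive model for every perturbation combination.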

  14. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.

  15. Effect of surface corrugation on low temperature phases of adsorbed (p-H2)7: A quantum path integral Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Cruz, Anthony; López, Gustavo E.

    2014-04-01

    By using path integral Monte Carlo simulations coupled to Replica Exchange algorithms, various phases of (p-H2)7 physically adsorbed on a model graphite surface were identified at low temperatures. At T=0.5 K, the expected superfluid phase was observed for flat and slightly corrugated surfaces. At intermediate and high corrugations, a "supersolid" phase in C7/16 registry and a solid phase in C1/3 registry were observed, respectively. At higher temperatures, the superfluid is converted to a fluid and the "supersolid" to a solid.

  16. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach of porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing computational time of MC simulation and obtaining simulation speed-up comparable to GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
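The kind of kernel being ported can be sketched as a serial toy. The code below is a minimal photon-migration Monte Carlo of our own construction in a semi-infinite homogeneous medium with implicit capture; the real code adds layered geometry, Henyey-Greenstein anisotropy, detector scoring, and the Xeon Phi parallelization.

```python
import math
import random

def photon_mc(n_photons, mu_a, mu_s, rng):
    """Toy photon-migration Monte Carlo in a semi-infinite homogeneous
    medium. Photons enter at z = 0 heading into the tissue; each step
    is an exponential free path, each collision deposits the mu_a/mu_t
    fraction of the packet weight (implicit capture), and the direction
    cosine is resampled isotropically. Returns the fraction of launched
    weight absorbed in the medium."""
    mu_t = mu_a + mu_s
    absorbed = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0
        while True:
            z += uz * (-math.log(1.0 - rng.random()) / mu_t)
            if z < 0.0:                    # escaped back through the surface
                break
            absorbed += w * (mu_a / mu_t)  # deposit at the collision site
            w *= mu_s / mu_t
            if w < 1e-4:                   # terminate exhausted packets
                absorbed += w              # conserve the residual weight
                break
            uz = 2.0 * rng.random() - 1.0  # isotropic direction cosine
    return absorbed / n_photons

rng = random.Random(7)
absorbed_fraction = photon_mc(2000, mu_a=0.1, mu_s=10.0, rng=rng)
```

Each photon history is independent, which is why the method maps so directly onto many-core accelerators such as the Xeon Phi or GPUs.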

  17. A functional–structural kiwifruit vine model integrating architecture, carbon dynamics and effects of the environment

    PubMed Central

    Cieslak, Mikolaj; Seleznyova, Alla N.; Hanan, Jim

    2011-01-01

    Background and Aims Functional–structural modelling can be used to increase our understanding of how different aspects of plant structure and function interact, identify knowledge gaps and guide priorities for future experimentation. By integrating existing knowledge of the different aspects of the kiwifruit (Actinidia deliciosa) vine's architecture and physiology, our aim is to develop conceptual and mathematical hypotheses on several of the vine's features: (a) plasticity of the vine's architecture; (b) effects of organ position within the canopy on its size; (c) effects of environment and horticultural management on shoot growth, light distribution and organ size; and (d) role of carbon reserves in early shoot growth. Methods Using the L-system modelling platform, a functional–structural plant model of a kiwifruit vine was created that integrates architectural development, mechanistic modelling of carbon transport and allocation, and environmental and management effects on vine and fruit growth. The branching pattern was captured at the individual shoot level by modelling axillary shoot development using a discrete-time Markov chain. An existing carbon transport resistance model was extended to account for several source/sink components of individual plant elements. A quasi-Monte Carlo path-tracing algorithm was used to estimate the absorbed irradiance of each leaf. Key Results Several simulations were performed to illustrate the model's potential to reproduce the major features of the vine's behaviour. The model simulated vine growth responses that were qualitatively similar to those observed in experiments, including the plastic response of shoot growth to local carbon supply, the branching patterns of two Actinidia species, the effect of carbon limitation and topological distance on fruit size and the complex behaviour of sink competition for carbon. Conclusions The model is able to reproduce differences in vine and fruit growth arising from various
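The quasi-Monte Carlo ingredient of the light model can be illustrated independently of the plant architecture. The sketch below is a generic example, not the vine model's path tracer: it estimates a smooth two-dimensional integral with a Halton low-discrepancy sequence and compares the error against plain pseudo-random sampling.

```python
import math
import random

def halton(i, base):
    """Radical inverse of integer i in the given base (van der Corput)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def estimate(points):
    """Sample-mean estimate of the integral of sin(x)*sin(y) over the
    unit square; the exact value is (1 - cos 1)**2."""
    return math.fsum(math.sin(x) * math.sin(y) for x, y in points) / len(points)

n = 4096
qmc_pts = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]
rng = random.Random(3)
mc_pts = [(rng.random(), rng.random()) for _ in range(n)]

exact = (1.0 - math.cos(1.0)) ** 2
qmc_err = abs(estimate(qmc_pts) - exact)
mc_err = abs(estimate(mc_pts) - exact)
```

For smooth integrands the Halton estimate typically converges much faster than the O(N^-1/2) pseudo-random rate, which is why low-discrepancy points are attractive for per-leaf absorbed-irradiance estimates.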

  18. Design and testing of a simulation framework for dosimetric motion studies integrating an anthropomorphic computational phantom into four-dimensional Monte Carlo.

    PubMed

    Riboldi, M; Chen, G T Y; Baroni, G; Paganetti, H; Seco, J

    2008-12-01

We have designed a simulation framework for motion studies in radiation therapy by integrating the anthropomorphic NCAT phantom into a 4D Monte Carlo dose calculation engine based on DPM. Representing an artifact-free environment, the system can be used to identify class solutions as a function of geometric and dosimetric parameters. A pilot dynamic conformal study for three lesions (approximately 2.0 cm) in the right lung was performed (70 Gy prescription dose). Tumor motion changed as a function of tumor location, according to the anthropomorphic deformable motion model. Conformal plans were simulated with 0 to 2 cm margins for the aperture, with an additional 0.5 cm for beam penumbra. The dosimetric effects of intensity modulated radiotherapy (IMRT) vs. conformal treatments were compared in a static case. Results show that the Monte Carlo simulation framework can model tumor tracking in deformable anatomy with high accuracy, providing absolute doses for IMRT and conformal radiation therapy. A target underdosage of up to 3.67 Gy (lower lung) was highlighted in the composite dose distribution mapped at exhale. Such effects depend on tumor location and treatment margin and are affected by lung deformation and ribcage motion. In summary, the complexity in the irradiation of moving targets has been reduced to a controlled simulation environment, where several treatment options can be accurately modeled and quantified. The implemented tools will be utilized for extensive motion studies in lung/liver irradiation.

  19. Path integral Monte Carlo determination of the fourth-order virial coefficient for unitary two-component Fermi gas with zero-range interactions

    NASA Astrophysics Data System (ADS)

    Yan, Yangqian; Blume, D.

    2016-05-01

The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b4 of such a strongly interacting Fermi gas using a customized ab initio path integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b4, our b4 agrees with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly anti-symmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions. We gratefully acknowledge support by the NSF.

  20. Path-Integral Monte Carlo Determination of the Fourth-Order Virial Coefficient for a Unitary Two-Component Fermi Gas with Zero-Range Interactions

    NASA Astrophysics Data System (ADS)

    Yan, Yangqian; Blume, D.

    2016-06-01

    The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b4 of such a strongly interacting Fermi gas using a customized ab initio path-integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b4 , our b4 agrees within error bars with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly antisymmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions.

  1. ITS version 5.0 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    SciTech Connect

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2004-06-01

ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user-friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  2. Integration and evaluation of automated Monte Carlo simulations in the clinical practice of scanned proton and carbon ion beam therapy.

    PubMed

    Bauer, J; Sommerer, F; Mairani, A; Unholtz, D; Farook, R; Handrack, J; Frey, K; Marcelos, T; Tessonnier, T; Ecker, S; Ackermann, B; Ellerbrock, M; Debus, J; Parodi, K

    2014-08-21

    Monte Carlo (MC) simulations of beam interaction and transport in matter are increasingly considered as essential tools to support several aspects of radiation therapy. Despite the vast application of MC to photon therapy and scattered proton therapy, clinical experience in scanned ion beam therapy is still scarce. This is especially the case for ions heavier than protons, which pose additional issues like nuclear fragmentation and varying biological effectiveness. In this work, we present the evaluation of a dedicated framework which has been developed at the Heidelberg Ion Beam Therapy Center to provide automated FLUKA MC simulations of clinical patient treatments with scanned proton and carbon ion beams. Investigations on the number of transported primaries and the dimension of the geometry and scoring grids have been performed for a representative class of patient cases in order to provide recommendations on the simulation settings, showing that recommendations derived from the experience in proton therapy cannot be directly translated to the case of carbon ion beams. The MC results with the optimized settings have been compared to the calculations of the analytical treatment planning system (TPS), showing that regardless of the consistency of the two systems (in terms of beam model in water and range calculation in different materials) relevant differences can be found in dosimetric quantities and range, especially in the case of heterogeneous and deep seated treatment sites depending on the ion beam species and energies, homogeneity of the traversed tissue and size of the treated volume. The analysis of typical TPS speed-up approximations highlighted effects which deserve accurate treatment, in contrast to adequate beam model simplifications for scanned ion beam therapy. 
In terms of biological dose calculations, the investigation of the mixed field components in realistic anatomical situations confirmed the findings of previous groups so far reported only in

  3. A Monte Carlo Approach to Modeling the Breakup of the Space Launch System EM-1 Core Stage with an Integrated Blast and Fragment Catalogue

    NASA Technical Reports Server (NTRS)

    Richardson, Erin; Hays, M. J.; Blackwood, J. M.; Skinner, T.

    2014-01-01

The Liquid Propellant Fragment Overpressure Acceleration Model (L-FOAM) is a tool developed by Bangham Engineering Incorporated (BEi) that produces a representative debris cloud from an exploding liquid-propellant launch vehicle. Here it is applied to the Core Stage (CS) of the National Aeronautics and Space Administration (NASA) Space Launch System (SLS launch vehicle). A combination of Probability Density Functions (PDF) based on empirical data from rocket accidents and applicable tests, as well as SLS-specific geometry, are combined in a MATLAB script to create unique fragment catalogues each time L-FOAM is run, tailored for a Monte Carlo approach to risk analysis. By accelerating the debris catalogue with the BEi blast model for liquid hydrogen / liquid oxygen explosions, the result is a fully integrated code that models the destruction of the CS at a given point in its trajectory and generates hundreds of individual fragment catalogues with initial imparted velocities. The BEi blast model provides the blast size (radius) and strength (overpressure) as probabilities based on empirical data and anchored with analytical work. The coupling of the L-FOAM catalogue with the BEi blast model is validated with a simulation of the Project PYRO S-IV destruct test. When running a Monte Carlo simulation, L-FOAM can accelerate all catalogues with the same blast (mean blast, 2σ blast, etc.), or vary the blast size and strength based on their respective probabilities. L-FOAM then propagates these fragments until impact with the earth. Results from L-FOAM include a description of each fragment (dimensions, weight, ballistic coefficient, type and initial location on the rocket), imparted velocity from the blast, and impact data depending on the user's desired application. L-FOAM applies to both near-field (fragment impact to escaping crew capsule) and far-field (fragment ground impact footprint) safety considerations. 
The user is thus able to use statistics from a Monte Carlo

  4. Quantum Gibbs ensemble Monte Carlo

    SciTech Connect

    Fantoni, Riccardo; Moroni, Saverio

    2014-09-21

We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.

  5. Thermodynamics of hydrogen adsorption in slit-like carbon nanopores at 77 K. Classical versus path-integral Monte Carlo simulations.

    PubMed

    Kowalczyk, Piotr; Gauden, Piotr A; Terzyk, Artur P; Bhatia, Suresh K

    2007-03-27

Hydrogen in slit-like carbon nanopores at 77 K represents a quantum fluid in strong confinement. We have used path-integral grand canonical Monte Carlo and classical grand canonical Monte Carlo simulations for the investigation of the "quantumness" of hydrogen at 77 K adsorbed in slit-like carbon nanopores up to 1 MPa. We find that classical simulations overpredict the hydrogen uptake in carbon nanopores due to neglect of quantum delocalization. The disagreement between the two simulation methods depends on the slit-like carbon pore size. However, the differences between the final uptakes of hydrogen computed from the classical and quantum simulations are not large, due to a similar effective size of quantum/classical hydrogen molecules in carbon nanospaces. For both types of molecular simulations, the volumetric density of stored energy in optimal carbon nanopores exceeds 6.4 MJ dm(-3) (i.e., 45 kg m(-3); Department of Energy target for 2010). In contrast to the hydrogen adsorption isotherms, we found a large reduction of the isosteric enthalpy of adsorption computed from the quantum Feynman path-integral simulations in comparison to the classical values at 77 K and pressures up to 1 MPa. The depression of the quantum isosteric enthalpy of adsorption depends on the slit-like carbon pore size. For narrow pores (pore width H in [0.59-0.7] nm), the reduction of the quantum isosteric enthalpy of adsorption at zero coverage is around 50% in comparison to the classical one. We observed a new phenomenon, which we call quantum confinement-induced polymer shrinking: in carbon nanospaces, the quantum cyclic polymers shrink, in comparison to their bulk-phase counterparts, due to a strong confinement effect. At the considered storage conditions, this complex phenomenon depends on the size of the slit-like carbon nanopore and the density of hydrogen volumetric energy. 
For the smallest nanopores and a low density of hydrogen volumetric energy, the reduction of the polymer effective size

  6. GATE Monte Carlo simulations for variations of an integrated PET/MR hybrid imaging system based on the Biograph mMR model

    NASA Astrophysics Data System (ADS)

    Aklan, B.; Jakoby, B. W.; Watson, C. C.; Braun, H.; Ritt, P.; Quick, H. H.

    2015-06-01

    A simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop an accurate Monte Carlo (MC) simulation of a fully integrated 3T PET/MR hybrid imaging system (Siemens Biograph mMR). The PET/MR components of the Biograph mMR were simulated in order to allow a detailed study of variations of the system design on the PET performance, which are not easy to access and measure on a real PET/MR system. The 3T static magnetic field of the MR system was taken into account in all Monte Carlo simulations. The validation of the MC model was carried out against actual measurements performed on the PET/MR system by following the NEMA (National Electrical Manufacturers Association) NU 2-2007 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction, and count rate capability. The validated system model was then used for two different applications. The first application focused on investigating the effect of an extension of the PET field-of-view on the PET performance of the PET/MR system. The second application deals with simulating a modified system timing resolution and coincidence time window of the PET detector electronics in order to simulate time-of-flight (TOF) PET detection. A dedicated phantom was modeled to investigate the impact of TOF on overall PET image quality. Simulation results showed that the overall divergence between simulated and measured data was found to be less than 10%. Varying the detector geometry showed that the system sensitivity and noise equivalent count rate of the PET/MR system increased progressively with an increasing number of axial detector block rings, as to be expected. TOF-based PET reconstructions of the modeled phantom showed an improvement in signal-to-noise ratio and image contrast to the conventional non-TOF PET reconstructions. In conclusion, the validated MC simulation model of an integrated PET/MR system with an overall

  7. General polarizability and hyperpolarizability estimators for the path-integral Monte Carlo method applied to small atoms, ions, and molecules at finite temperatures

    NASA Astrophysics Data System (ADS)

    Tiihonen, Juha; Kylänpää, Ilkka; Rantala, Tapio T.

    2016-09-01

The nonlinear optical properties of matter have a broad relevance and many methods have been invented to compute them from first principles. However, the effects of electronic correlation, finite temperature, and breakdown of the Born-Oppenheimer approximation have turned out to be challenging and tedious to model. Here we propose a straightforward approach and derive general field-free polarizability and hyperpolarizability estimators for the path-integral Monte Carlo method. The estimators are applied to small atoms, ions, and molecules with one or two electrons. With the adiabatic, i.e., Born-Oppenheimer, approximation we obtain accurate tensorial ground-state polarizabilities, while the nonadiabatic simulation adds in considerable rovibrational effects and thermal coupling. In both cases, the 0 K, or ground-state, limit is in excellent agreement with the literature. Furthermore, we report here the internal dipole moment of the PsH molecule, the temperature dependence of the polarizabilities of H-, and the average dipole polarizabilities and the ground-state hyperpolarizabilities of HeH+ and H3+.

  8. Optical properties measurement of laser coagulated tissues with double integrating sphere and inverse Monte Carlo technique in the wavelength range from 350 to 2100 nm

    NASA Astrophysics Data System (ADS)

    Honda, Norihiro; Nanjo, Takuya; Ishii, Katsunori; Awazu, Kunio

    2012-03-01

In laser medicine, accurate knowledge of the optical properties (absorption coefficient μa, scattering coefficient μs, anisotropy factor g) of laser-irradiated tissues is important for the prediction of light propagation in tissues, since the efficacy of laser treatment depends on the photon propagation within the irradiated tissues. The optical properties of tissues at near-ultraviolet, visible, and near-infrared wavelengths are thus likely to become more important as more biomedical applications of lasers are developed. For improvement of laser-induced thermotherapy, the change in optical properties during laser treatment should be considered over a wide wavelength range. To estimate the optical properties of biological tissues, a measurement system combining a double integrating sphere setup with an inverse Monte Carlo technique was developed. The optical properties of chicken muscle tissue were measured in the native state and after laser coagulation using this system in the wavelength range from 350 to 2100 nm. A CO2 laser was used for laser coagulation. After laser coagulation, the reduced scattering coefficient of the tissue increased and the optical penetration depth decreased. For improvement of the treatment depth during laser coagulation, a quantitative procedure that uses the treated-tissue optical properties to account for the decrease in light penetration might be important in the clinic.
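The inverse Monte Carlo loop can be sketched with a stand-in forward model. In the code below the forward reflectance is a simple monotone analytic expression (our placeholder; in the real method this step is a photon-transport simulation of the double-integrating-sphere geometry), and the optical property is recovered by bisection until the simulated and measured reflectance agree.

```python
def forward_reflectance(mu_a, mu_s_prime):
    """Stand-in forward model: a monotone analytic function of the
    transport albedo a = mu_s'/(mu_a + mu_s'). In the real inverse
    Monte Carlo method this step is a photon-transport simulation of
    the double-integrating-sphere measurement geometry."""
    a = mu_s_prime / (mu_a + mu_s_prime)
    return a ** 2 / (2.0 - a)        # strictly increasing in a on (0, 1)

def invert_mu_s(measured_r, mu_a, lo=0.01, hi=1000.0, tol=1e-10):
    """Inverse loop: bisect on the reduced scattering coefficient until
    the forward model reproduces the measured diffuse reflectance."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if forward_reflectance(mu_a, mid) < measured_r:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round trip: synthesize a "measurement" at known optical properties,
# then recover the reduced scattering coefficient from it.
true_mu_s = 12.0    # cm^-1, hypothetical value
mu_a = 0.3          # cm^-1, hypothetical value
r_meas = forward_reflectance(mu_a, true_mu_s)
recovered = invert_mu_s(r_meas, mu_a)
```

The real inverse problem iterates over both μa and μs (and uses measured transmittance as well as reflectance), but the structure is the same: repeatedly run the forward simulation and adjust the optical properties until the measured signals are reproduced.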

  9. Monte Carlo Benchmark

    SciTech Connect

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  10. Path integral Monte Carlo simulations of H{sub 2} adsorbed to lithium-doped benzene: A model for hydrogen storage materials

    SciTech Connect

    Lindoy, Lachlan P.; Kolmann, Stephen J.; D’Arcy, Jordan H.; Jordan, Meredith J. T.; Crittenden, Deborah L.

    2015-11-21

    Finite temperature quantum and anharmonic effects are studied in H{sub 2}–Li{sup +}-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H{sub 2}. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H{sub 2} molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔU{sub ads}, and enthalpy, ΔH{sub ads}, for H{sub 2} adsorption onto Li{sup +}-benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling—coupling between the intermolecular degrees of freedom becomes less important as temperature increases whereas anharmonicity becomes more important. The most anharmonic motions in H{sub 2}–Li{sup +}-benzene are the “helicopter” and “ferris wheel” H{sub 2} rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔU{sub ads} and ΔH{sub ads} are −13.3 ± 0.1 and −14.5 ± 0.1 kJ mol{sup −1}, respectively.

  11. Monte Carlo Example Programs

    SciTech Connect

    Kalos, M.

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo mathematical technique for calculating the ground state energy of the hydrogen atom.

  12. Modelling personal exposure to particulate air pollution: an assessment of time-integrated activity modelling, Monte Carlo simulation & artificial neural network approaches.

    PubMed

    McCreddin, A; Alam, M S; McNabola, A

    2015-01-01

    An experimental assessment of personal exposure to PM10 in 59 office workers was carried out in Dublin, Ireland. 255 samples of 24-h personal exposure were collected in real time over a 28-month period. A series of modelling techniques were subsequently assessed for their ability to predict 24-h personal exposure to PM10. Artificial neural network modelling, Monte Carlo simulation and time-activity based models were developed and compared. The results of the investigation showed that using the Monte Carlo technique to randomly select concentrations from statistical distributions of exposure concentrations in typical microenvironments encountered by office workers produced the most accurate results, based on 3 statistical measures of model performance. The Monte Carlo simulation technique was also shown to have the greatest potential utility over the other techniques, in terms of predicting personal exposure without the need for further monitoring data. Over the 28-month period only a very weak correlation was found between background air quality and personal exposure measurements, highlighting the need for accurate models of personal exposure in epidemiological studies.
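
    The microenvironment-based Monte Carlo approach described above can be sketched as follows. The microenvironments, lognormal parameters, and time budgets here are illustrative assumptions for the sketch, not values from the study:

```python
import math
import random

# Hypothetical microenvironment PM10 distributions (geometric mean and
# geometric SD, in ug/m3) and daily time budgets -- illustrative values only.
MICROENVIRONMENTS = {
    "home":    {"gm": 15.0, "gsd": 1.8, "hours": 14.0},
    "office":  {"gm": 20.0, "gsd": 1.6, "hours": 8.0},
    "transit": {"gm": 35.0, "gsd": 2.0, "hours": 2.0},
}

def simulate_daily_exposure(rng):
    """One Monte Carlo draw of the 24-h time-weighted average PM10 exposure:
    sample a concentration for each microenvironment, weight by time spent."""
    total = 0.0
    for me in MICROENVIRONMENTS.values():
        conc = rng.lognormvariate(math.log(me["gm"]), math.log(me["gsd"]))
        total += conc * me["hours"]
    return total / 24.0

def exposure_distribution(n_draws=10_000, seed=1):
    """Repeat the draw to build a predicted distribution of personal exposure."""
    rng = random.Random(seed)
    return [simulate_daily_exposure(rng) for _ in range(n_draws)]
```

    Repeating the draw many times yields a predicted distribution of 24-h personal exposure without further monitoring data, which is the practical utility the abstract highlights.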

  13. Determination of Component Contents of Blend Oil Based on Characteristics Peak Value Integration.

    PubMed

    Xu, Jing; Hou, Pei-guo; Wang, Yu-tian; Pan, Zhao

    2016-01-01

    The edible blend oil market is currently in disarray, with problems such as muddled concepts, arbitrary naming, adulteration, and, above all, vague standards for the compositions and ratios of blend oils; the national standard has failed to appear even after eight years. The root cause is the lack of qualitative and quantitative methods for detecting the constituent vegetable oils in a blend. Edible blend oil is mixed from different vegetable oils in certain proportions; it is nutritionally rich and eaten frequently in daily life. Because each vegetable oil contains characteristic components, blending makes fuller and more balanced use of their nutrients, which benefits health. Accurate determination of the content of each single vegetable oil in a blend is therefore an effective way to monitor the blend oil market. Since the constituent oil types are known, only their contents need to be determined accurately. Three-dimensional fluorescence spectra are used to measure the contents in blend oil. A new data-processing method is proposed: the characteristic peak values are integrated over a chosen characteristic area using a Quasi-Monte Carlo method, and a neural network is then used to solve the resulting nonlinear equations for the content of each single vegetable oil. Peanut oil, soybean oil, and sunflower oil were used to prepare edible blend oils, with each single oil treated as a whole rather than decomposed into its individual components. Recovery rates for 10 configurations of edible blend oil were measured to verify the validity of the characteristic peak value integration method. The approach provides an effective, highly sensitive means of detecting the component contents of a complex mixture, with improved recovery-rate accuracy compared with the common approach of solving linear equations. It can be used to test the types and contents of edible vegetable oils in blend oil for food quality inspection.
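
    A minimal sketch of the Quasi-Monte Carlo integration step, using a 2-D Halton low-discrepancy sequence and a hypothetical Gaussian "characteristic peak"; the choice of sequence and the integrand are assumptions for illustration, since the abstract does not specify the QMC construction:

```python
import math

def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def qmc_integrate_2d(f, x_range, y_range, n=4096):
    """Quasi-Monte Carlo estimate of the integral of f over a rectangle,
    using a 2-D Halton sequence (bases 2 and 3)."""
    (x0, x1), (y0, y1) = x_range, y_range
    area = (x1 - x0) * (y1 - y0)
    total = 0.0
    for i in range(1, n + 1):
        x = x0 + (x1 - x0) * halton(i, 2)
        y = y0 + (y1 - y0) * halton(i, 3)
        total += f(x, y)
    return area * total / n

# Illustrative "characteristic peak": a Gaussian centered in the unit square.
peak = lambda x, y: math.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)
```

    Low-discrepancy points cover the characteristic area more evenly than pseudo-random points, so for a smooth peak the integral converges faster than the O(n^-1/2) rate of plain Monte Carlo.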

  14. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
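
    As a small example of the "random sampling" topic covered in such notes, inverse-transform sampling of an exponential flight distance (the standard distance-to-collision distribution in transport theory; the rate constant here is illustrative):

```python
import math
import random

def sample_exponential(rate, u):
    """Inverse-transform sample: solve F(x) = 1 - exp(-rate * x) = u for x,
    mapping a uniform variate u in [0, 1) to an Exp(rate) variate."""
    return -math.log(1.0 - u) / rate

def mean_free_path_estimate(rate=2.0, n=100_000, seed=7):
    """Monte Carlo estimate of the mean flight distance; the exact mean
    of an Exp(rate) distribution is 1/rate."""
    rng = random.Random(seed)
    total = sum(sample_exponential(rate, rng.random()) for _ in range(n))
    return total / n
```

    Inverse-transform sampling works whenever the cumulative distribution function can be inverted in closed form; otherwise rejection sampling (also covered in such courses) is the usual fallback.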

  15. Integrated Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (f{sub NL}) in the recent CMB data

    SciTech Connect

    Kim, Jaiseung

    2011-04-01

    We have made a Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (f{sub NL}) using the WMAP bispectrum and power spectrum. In our analysis, we have simultaneously constrained f{sub NL} and cosmological parameters so that the uncertainties of cosmological parameters can properly propagate into the f{sub NL} estimation. Investigating the parameter likelihoods deduced from MCMC samples, we find slight deviation from Gaussian shape, which makes a Fisher matrix estimation less accurate. Therefore, we have estimated the confidence interval of f{sub NL} by exploring the parameter likelihood without using the Fisher matrix. We find that the best-fit values of our analysis make a good agreement with other results, but the confidence interval is slightly different.

  16. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    SciTech Connect

    Densmore, Jeffrey D; Thompson, Kelly G; Urbatsch, Todd J

    2010-11-17

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.

  17. Direct Comparisons among Fast Off-Lattice Monte Carlo Simulations, Integral Equation Theories, and Gaussian Fluctuation Theory for Disordered Symmetric Diblock Copolymers

    NASA Astrophysics Data System (ADS)

    Yang, Delian; Zong, Jing; Wang, Qiang

    2012-02-01

    Based on the same model system of symmetric diblock copolymers as discrete Gaussian chains with soft, finite-range repulsions as commonly used in dissipative-particle dynamics simulations, we directly compare, without any parameter-fitting, the thermodynamic and structural properties of the disordered phase obtained from fast off-lattice Monte Carlo (FOMC) simulations^1, reference interaction site model (RISM) and polymer reference interaction site model (PRISM) theories, and Gaussian fluctuation theory. The disordered phase ranges from homopolymer melts (i.e., where the Flory-Huggins parameter χ=0) all the way to the order-disorder transition point determined in FOMC simulations, and the compared quantities include the internal energy, entropy, Helmholtz free energy, excess pressure, constant-volume heat capacity, chain/block dimensions, and various structure factors and correlation functions in the system. Our comparisons unambiguously and quantitatively reveal the consequences of various theoretical approximations and the validity of these theories in describing the fluctuations/correlations in disordered diblock copolymers. [1] Q. Wang and Y. Yin, J. Chem. Phys., 130, 104903 (2009).

  18. ITS version 5.0: the Integrated TIGER Series of coupled electron/photon Monte Carlo transport codes with CAD geometry.

    SciTech Connect

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2005-09-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  19. A virtual photon source model of an Elekta linear accelerator with integrated mini MLC for Monte Carlo based IMRT dose calculation.

    PubMed

    Sikora, M; Dohm, O; Alber, M

    2007-08-07

    A dedicated, efficient Monte Carlo (MC) accelerator head model for intensity modulated stereotactic radiosurgery treatment planning is needed to afford a highly accurate simulation of tiny IMRT fields. A virtual source model (VSM) of a mini multi-leaf collimator (MLC) (the Elekta Beam Modulator (EBM)) is presented, allowing efficient generation of particles even for small fields. The VSM of the EBM is based on a previously published virtual photon energy fluence model (VEF) (Fippel et al 2003 Med. Phys. 30 301) commissioned with large field measurements in air and in water. The original commissioning procedure of the VEF, based on large field measurements only, leads to inaccuracies for small fields. In order to improve the VSM, it was necessary to change the VEF model by developing (1) a method to determine the primary photon source diameter, relevant for output factor calculations, (2) a model of the influence of the flattening filter on the secondary photon spectrum and (3) a more realistic primary photon spectrum. The VSM model is used to generate the source phase space data above the mini-MLC. The particles are then transmitted through the mini-MLC by a passive filter function, which significantly speeds up generation of the phase space data after the mini-MLC, used for calculation of the dose distribution in the patient. The improved VSM model was commissioned for 6 and 15 MV beams. The results of MC simulation are in very good agreement with measurements: a local difference of less than 2% between the MC simulation and the diamond detector measurement of the output factors in water was achieved. The X, Y and Z profiles measured in water with an ion chamber (V = 0.125 cm(3)) and a diamond detector were used to validate the models. An overall agreement of 2%/2 mm for high dose regions and 3%/2 mm in low dose regions between measurement and MC simulation for field sizes from 0.8 x 0.8 cm(2) to 16 x 21 cm(2) was achieved. 
An IMRT plan film verification

  20. Monte Carlo Reliability Analysis.

    DTIC Science & Technology

    1987-10-01

    to Stochastic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1975. (5) R. E. Barlow and F. Proschan, Statistical Theory of Reliability and Life...Lewis and Z. Tu, "Monte Carlo Reliability Modeling by Inhomogeneous Markov Processes," Reliab. Engr. 16, 277-296 (1986). (4) E. Cinlar, Introduction

  1. The Monte Carlo Method of Evaluating Integrals

    DTIC Science & Technology

    1975-02-01

    order to give the reader some idea of what is involved, as well as some guidance to the literature, we have included a brief appendix on this subject...as Cartesian coordinates in an N-dimensional hyperspace. Any point in this hyperspace specifies through the values of its coordinates a...numbered in order from right to left. Let Q be defined as the set of all points (x1, x2, ..., xN) in the N-dimensional configuration space which do satisfy

  2. Fundamentals of Monte Carlo

    SciTech Connect

    Wollaber, Allan Benton

    2016-06-16

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
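
    The π-estimation example mentioned in the outline, together with a Central Limit Theorem error bar, can be sketched as:

```python
import math
import random

def estimate_pi(n, seed=42):
    """Estimate pi from the fraction of uniform points in the unit square
    that fall inside the inscribed quarter circle (Law of Large Numbers),
    with a one-standard-deviation error bar from the Central Limit Theorem."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    p = hits / n                               # fraction inside, estimates pi/4
    sigma = 4.0 * math.sqrt(p * (1.0 - p) / n)  # CLT standard error of 4*p
    return 4.0 * p, sigma
```

    The error bar shrinks as 1/sqrt(n), the hallmark Monte Carlo convergence rate that motivates both variance-reduction techniques and the Quasi-Monte Carlo methods discussed elsewhere on this page.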

  3. Monte Carlo eikonal scattering

    NASA Astrophysics Data System (ADS)

    Gibbs, W. R.; Dedonder, J. P.

    2012-08-01

    Background: The eikonal approximation is commonly used to calculate heavy-ion elastic scattering. However, the full evaluation has only been done (without the use of Monte Carlo techniques or additional approximations) for α-α scattering. Purpose: Develop, improve, and test the Monte Carlo eikonal method for elastic scattering over a wide range of nuclei, energies, and angles. Method: Monte Carlo evaluation is used to calculate heavy-ion elastic scattering for heavy nuclei including the center-of-mass correction introduced in this paper and the Coulomb interaction in terms of a partial-wave expansion. A technique for the efficient expansion of the Glauber amplitude in partial waves is developed. Results: Angular distributions are presented for a number of nuclear pairs over a wide energy range using nucleon-nucleon scattering parameters taken from phase-shift analyses and densities from independent sources. We present the first calculations of the Glauber amplitude, without further approximation, and with realistic densities for nuclei heavier than helium. These densities respect the center-of-mass constraints. The Coulomb interaction is included in these calculations. Conclusion: The center-of-mass and Coulomb corrections are essential. Angular distributions can be predicted only up to certain critical angles which vary with the nuclear pairs and the energy, but we point out that all critical angles correspond to a momentum transfer near 1 fm⁻¹.

  4. Integration

    ERIC Educational Resources Information Center

    Kalyn, Brenda

    2006-01-01

    Integrated learning is an exciting adventure for both teachers and students. It is not uncommon to observe the integration of academic subjects such as math, science, and language arts. However, educators need to recognize that movement experiences in physical education can also be linked to academic curricula and may even lead the…

  5. Improved geometry representations for Monte Carlo radiation transport.

    SciTech Connect

    Martin, Matthew Ryan

    2004-08-01

    ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.

  6. Monte Carlo fluorescence microtomography

    NASA Astrophysics Data System (ADS)

    Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge

    2011-07-01

    Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense scattering of light significantly degrades the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an l0-regularized tomography model and provides an excellent solution. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probes accurately and reliably.

  7. Parallel CARLOS-3D code development

    SciTech Connect

    Putnam, J.M.; Kotulski, J.D.

    1996-02-01

    CARLOS-3D is a three-dimensional scattering code which was developed under the sponsorship of the Electromagnetic Code Consortium, and is currently used by over 80 aerospace companies and government agencies. The code has been extensively validated and runs on both serial workstations and parallel supercomputers such as the Intel Paragon. CARLOS-3D is a three-dimensional surface integral equation scattering code based on a Galerkin method of moments formulation employing Rao-Wilton-Glisson roof-top basis functions for triangular faceted surfaces. Fully arbitrary 3D geometries composed of multiple conducting and homogeneous bulk dielectric materials can be modeled. This presentation describes some of the extensions to the CARLOS-3D code, and how the operator structure of the code facilitated these improvements. Body of revolution (BOR) and two-dimensional geometries were incorporated by simply including new input routines, and the appropriate Galerkin matrix operator routines. Some additional modifications were required in the combined field integral equation matrix generation routine due to the symmetric nature of the BOR and 2D operators. Quadrilateral patched surfaces with linear roof-top basis functions were also implemented in the same manner. Quadrilateral facets and triangular facets can be used in combination to more efficiently model geometries with both large smooth surfaces and surfaces with fine detail such as gaps and cracks. Since the parallel implementation in CARLOS-3D is at a high level, these changes were independent of the computer platform being used. This approach minimizes code maintenance, while providing capabilities with little additional effort. Results are presented showing the performance and accuracy of the code for some large scattering problems. Comparisons between triangular faceted and quadrilateral faceted geometry representations will be shown for some complex scatterers.

  8. MCMini: Monte Carlo on GPGPU

    SciTech Connect

    Marcus, Ryan C.

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  9. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551

  10. Wormhole Hamiltonian Monte Carlo.

    PubMed

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2014-07-31

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function.

  11. Bayesian internal dosimetry calculations using Markov Chain Monte Carlo.

    PubMed

    Miller, G; Martz, H F; Little, T T; Guilmette, R

    2002-01-01

    A new numerical method for solving the inverse problem of internal dosimetry is described. The new method uses Markov Chain Monte Carlo and the Metropolis algorithm. Multiple intake amounts, biokinetic types, and times of intake are determined from bioassay data by integrating over the Bayesian posterior distribution. The method appears definitive, but its application requires a large amount of computing time.
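
    A minimal sketch of the Metropolis approach for inferring a single intake amount from bioassay data; the one-compartment retention function, lognormal noise model, and all numerical values are illustrative assumptions, not the biokinetic models used in the paper:

```python
import math
import random

def log_posterior(intake, data, retention, sigma=0.3):
    """Log posterior for a single intake amount (flat prior on intake > 0);
    bioassay measurements are assumed lognormally distributed about the
    model prediction intake * retention(t)."""
    if intake <= 0:
        return -math.inf
    lp = 0.0
    for t, measured in data:
        predicted = intake * retention(t)
        lp += -0.5 * ((math.log(measured) - math.log(predicted)) / sigma) ** 2
    return lp

def metropolis(data, retention, n_steps=20_000, step=0.5, seed=3):
    """Random-walk Metropolis sampling of the intake posterior."""
    rng = random.Random(seed)
    x, lp = 1.0, log_posterior(1.0, data, retention)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.gauss(0.0, 1.0)      # symmetric proposal
        lp_prop = log_posterior(prop, data, retention)
        if math.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Hypothetical retention function and noise-free bioassay data consistent
# with a true intake of 5.0 (illustrative only).
retention = lambda t: math.exp(-0.1 * t)
data = [(t, 5.0 * retention(t)) for t in (1.0, 5.0, 10.0, 30.0)]
```

    Posterior means and intervals for the intake follow directly from the chain (after discarding burn-in), which is the "integrating over the Bayesian posterior distribution" step the abstract describes, at the cost of the long computing times it notes.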

  12. Isotropic Monte Carlo Grain Growth

    SciTech Connect

    Mason, J.

    2013-04-25

    IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.

  13. Conversation with Juan Carlos Negrete.

    PubMed

    Negrete, Juan Carlos

    2013-08-01

    Juan Carlos Negrete is Emeritus Professor of Psychiatry, McGill University; Founding Director, Addictions Unit, Montreal General Hospital; former President, Canadian Society of Addiction Medicine; and former WHO/PAHO Consultant on Alcoholism, Drug Addiction and Mental Health.

  14. Innovation Lecture Series - Carlos Dominguez

    NASA Video Gallery

    Carlos Dominguez is a Senior Vice President at Cisco Systems and a technology evangelist, speaking to and motivating audiences worldwide about how technology is changing how we communicate, collabo...

  15. Markov Chain Monte Carlo from Lagrangian Dynamics

    PubMed Central

    Lan, Shiwei; Stathopoulos, Vasileios; Shahbaba, Babak; Girolami, Mark

    2014-01-01

    Hamiltonian Monte Carlo (HMC) improves the computational efficiency of the Metropolis-Hastings algorithm by reducing its random walk behavior. Riemannian HMC (RHMC) further improves the performance of HMC by exploiting the geometric properties of the parameter space. However, the geometric integrator used for RHMC involves implicit equations that require fixed-point iterations. In some cases, the computational overhead for solving implicit equations undermines RHMC's benefits. In an attempt to circumvent this problem, we propose an explicit integrator that replaces the momentum variable in RHMC by velocity. We show that the resulting transformation is equivalent to transforming Riemannian Hamiltonian dynamics to Lagrangian dynamics. Experimental results suggest that our method improves RHMC's overall computational efficiency in the cases considered. All computer programs and data sets are available online (http://www.ics.uci.edu/~babaks/Site/Codes.html) in order to allow replication of the results reported in this paper. PMID:26240515

  16. SPQR: a Monte Carlo reactor kinetics code. [LMFBR

    SciTech Connect

    Cramer, S.N.; Dodds, H.L.

    1980-02-01

    The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations.

  17. Proton Upset Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  18. Multilevel sequential Monte Carlo samplers

    SciTech Connect

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods and leading to a discretisation bias, with the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h0 > h1 > ... > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
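
    The telescoping identity can be illustrated with a toy example: MLMC estimation of E[S_T] for geometric Brownian motion under Euler-Maruyama discretisation, with level l using 2^l time steps. The SDE and all parameters are illustrative assumptions; the paper's setting is PDE-based Bayesian inverse problems:

```python
import math
import random

def euler_pair(rng, level, S0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """One coupled sample (P_fine, P_coarse) of S_T under Euler-Maruyama:
    2**level fine steps, with the coarse path (half as many steps) driven
    by the summed pairs of the same Brownian increments."""
    n_f = 2 ** level
    h_f = T / n_f
    s_f, s_c = S0, S0
    dw_pair = 0.0
    for i in range(n_f):
        dw = math.sqrt(h_f) * rng.gauss(0.0, 1.0)
        s_f += mu * s_f * h_f + sigma * s_f * dw
        dw_pair += dw
        if i % 2 == 1:                 # every two fine steps = one coarse step
            s_c += mu * s_c * (2.0 * h_f) + sigma * s_c * dw_pair
            dw_pair = 0.0
    return s_f, s_c

def mlmc_estimate(max_level=5, n_per_level=20_000, seed=11):
    """Telescoping MLMC estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(max_level + 1):
        acc = 0.0
        for _ in range(n_per_level):
            fine, coarse = euler_pair(rng, level)
            acc += fine if level == 0 else fine - coarse
        total += acc / n_per_level
    return total
```

    The key point is that each correction term E[P_l − P_{l−1}] is estimated from coupled fine/coarse paths driven by the same Brownian increments, so its variance shrinks with level and most of the sampling effort can be spent on the cheap coarse levels.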

  19. Multilevel sequential Monte Carlo samplers

    DOE PAGES

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods and leading to a discretisation bias, with the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h0 > h1 > ... > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.

  20. Suitable Candidates for Monte Carlo Solutions.

    ERIC Educational Resources Information Center

    Lewis, Jerome L.

    1998-01-01

    Discusses Monte Carlo methods, powerful and useful techniques that rely on random numbers to solve deterministic problems whose solutions may be too difficult to obtain using conventional mathematics. Reviews two excellent candidates for the application of Monte Carlo methods. (ASK)

  1. Applications of Monte Carlo Methods in Calculus.

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Gordon, Florence S.

    1990-01-01

    Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
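
As a concrete illustration of the Riemann-sum application mentioned above, here is a minimal Python sketch (my own example, not taken from the article) that estimates a definite integral as a scaled sample mean:

```python
import random

random.seed(42)

def mc_integral(f, a, b, n):
    """Estimate the integral of f over [a, b] as (b - a) times the mean of f at n uniform points."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# Integral of x^2 over [0, 1] is 1/3; the estimate converges at the usual O(n^-1/2) Monte Carlo rate
est = mc_integral(lambda x: x * x, 0.0, 1.0, 100_000)
```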

  2. Monte Carlo Simulation of Plumes Spectral Emission

    DTIC Science & Technology

    2005-06-07

    Henyey-Greenstein scattering indicatrix SUBROUTINE; calculation of the spectral (group) phase function for Monte Carlo simulation of plumes...calculations; b) Computing code SRT-RTMC-NSM intended for narrow-band Spectral Radiation Transfer Ray Tracing simulation by the Monte Carlo method with...project) Computing codes for random (Monte Carlo) simulation of molecular lines with reference to a problem of radiation transfer

  3. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…

  4. Monte Carlo Study of Real Time Dynamics on the Lattice

    NASA Astrophysics Data System (ADS)

    Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo F.; Vartak, Sohan; Warrington, Neill C.

    2016-08-01

    Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from a highly oscillatory phase of the path integral. In this Letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and, in principle, applicable to quantum field theory albeit very slow. We discuss some possible improvements that should speed up the algorithm.

  5. Monte Carlo Methods and Applications for the Nuclear Shell Model

    SciTech Connect

    Dean, D.J.; White, J.A.

    1998-08-10

    The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sd-pf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems.

  6. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    NASA Astrophysics Data System (ADS)

    Rubenstein, Brenda M.

    Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures.

  7. Monte Carlo methods in ICF

    SciTech Connect

    Zimmerman, G.B.

    1997-06-24

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burns and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  8. The D0 Monte Carlo

    SciTech Connect

    Womersley, J. (Dept. of Physics)

    1992-10-01

    The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb⁻¹ data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.

  9. Effective discrepancy and numerical experiments

    NASA Astrophysics Data System (ADS)

    Varet, Suzanne; Lefebvre, Sidonie; Durand, Gérard; Roblin, Antoine; Cohen, Serge

    2012-12-01

    Many problems require the computation of a high-dimensional integral, typically with a few tens of input factors, with a low number of integrand evaluations. To avoid the curse of dimensionality, we reduce the dimension before applying the Quasi-Monte Carlo method. We show how to reduce the dimension by computing approximate Sobol indices of the variables with a two-level fractional factorial design. Then, we use the Sobol indices to define the effective discrepancy, which turns out to be correlated with the QMC error and thus enables one to choose a good sequence for the integral estimation.
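
The Quasi-Monte Carlo step referred to above replaces pseudo-random points with a low-discrepancy sequence. A minimal Python sketch using a Halton sequence (a generic illustration; the paper's sequence choice and dimension-reduction step are not reproduced here):

```python
def van_der_corput(n, base):
    """n-th element of the van der Corput sequence in the given base."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def halton(n_points, bases=(2, 3)):
    """First n_points of the 2-D Halton sequence (coprime bases, one per coordinate)."""
    return [[van_der_corput(i, b) for b in bases] for i in range(1, n_points + 1)]

# Estimate the integral of f(x, y) = x + y over the unit square (exact value 1.0);
# for smooth integrands the QMC error decays roughly like (log N)^d / N
pts = halton(4096)
est = sum(x + y for x, y in pts) / len(pts)
```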

  10. On a full Monte Carlo approach to quantum mechanics

    NASA Astrophysics Data System (ADS)

    Sellier, J. M.; Dimov, I.

    2016-12-01

    The Monte Carlo approach to numerical problems has been shown to be remarkably efficient in performing very large computational tasks, since it is an embarrassingly parallel technique. Additionally, Monte Carlo methods are well known to maintain performance and accuracy with the increase of dimensionality of a given problem, a rather counterintuitive peculiarity not shared by any known deterministic method. Motivated by these very peculiar and desirable computational features, in this work we depict a full Monte Carlo approach to the problem of simulating single- and many-body quantum systems by means of signed particles. In particular, we introduce a stochastic technique, based on the strategy known as importance sampling, for the computation of the Wigner kernel, which, so far, has represented the main bottleneck of this method (it is equivalent to the calculation of a multi-dimensional integral, a problem whose complexity is known to grow exponentially with the dimension of the problem). The benefit of introducing this stochastic technique for the kernel is twofold: firstly, it reduces the complexity of a quantum many-body simulation from non-linear to linear; secondly, it introduces an embarrassingly parallel approach to this very demanding problem. To conclude, we perform concise but indicative numerical experiments which clearly illustrate how a full Monte Carlo approach to many-body quantum systems is not only possible but also advantageous. This paves the way towards practical time-dependent, first-principle simulations of relatively large quantum systems by means of affordable computational resources.
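
Importance sampling, the strategy the authors apply to the Wigner kernel, can be sketched generically in a few lines of Python (a toy example of the technique, unrelated to the Wigner formalism itself): samples are drawn from a broad proposal density and reweighted by the target-to-proposal density ratio:

```python
import math
import random

random.seed(1)

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Estimate E_{N(0,1)}[x^2] = 1 using samples from the proposal N(0, 2^2):
# each sample is weighted by target_pdf / proposal_pdf
n = 200_000
samples = [random.gauss(0.0, 2.0) for _ in range(n)]
weights = [normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 0.0, 2.0) for x in samples]
est = sum(w * x * x for w, x in zip(weights, samples)) / n
```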

  11. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.

  12. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulations.

  13. Semistochastic Projector Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Petruzielo, F. R.; Holmes, A. A.; Changlani, Hitesh J.; Nightingale, M. P.; Umrigar, C. J.

    2012-12-01

    We introduce a semistochastic implementation of the power method to compute, for very large matrices, the dominant eigenvalue and expectation values involving the corresponding eigenvector. The method is semistochastic in that the matrix multiplication is partially implemented numerically exactly and partially stochastically with respect to expectation values only. Compared to a fully stochastic method, the semistochastic approach significantly reduces the computational time required to obtain the eigenvalue to a specified statistical uncertainty. This is demonstrated by the application of the semistochastic quantum Monte Carlo method to systems with a sign problem: the fermion Hubbard model and the carbon dimer.
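
The deterministic core of the method described above can be sketched as plain power iteration (the semistochastic refinement of the paper, which implements part of the matrix multiplication stochastically, is not reproduced in this illustration):

```python
import numpy as np

def power_method(A, n_iter=200, seed=0):
    """Power iteration: dominant eigenvalue (largest in magnitude) and its eigenvector."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = A @ v
        v = w / np.linalg.norm(w)   # repeated multiplication aligns v with the dominant eigenvector
    return v @ (A @ v), v           # Rayleigh quotient and normalised eigenvector

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # eigenvalues 5 and 2
lam, v = power_method(A)
```

In the semistochastic variant, the matrix-vector product `A @ v` is evaluated exactly on a small "deterministic space" and sampled stochastically elsewhere, which is what makes very large matrices tractable.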

  14. Multiple-time-stepping generalized hybrid Monte Carlo methods

    SciTech Connect

    Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  15. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded, powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or, equivalently, densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes the method as a systematic way of coarse graining a model system or, in other words, of performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2).
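
Flat-histogram estimation of a density of states can be illustrated with a closely related Wang-Landau-style sketch (SAMC proper uses a prescribed gain-factor schedule rather than the stage-wise halving below; the toy system, the sum of two dice, is my own illustration and not from the paper):

```python
import math
import random

random.seed(3)

# Estimate the density of states g(E) for the "energy" E = sum of two dice
# (true multiplicities 1,2,3,4,5,6,5,4,3,2,1 for E = 2..12).
ln_g = {e: 0.0 for e in range(2, 13)}
state = (1, 1)
f = 1.0  # modification factor for ln g, halved after each stage
while f > 1e-4:
    for _ in range(50_000):
        proposal = (random.randint(1, 6), random.randint(1, 6))
        e_old, e_new = sum(state), sum(proposal)
        # accept with min(1, g(E_old)/g(E_new)), which drives the visits to E flat
        if random.random() < math.exp(min(0.0, ln_g[e_old] - ln_g[e_new])):
            state = proposal
        ln_g[sum(state)] += f  # update the current bin every step
    f /= 2.0  # histogram-flatness check omitted for brevity

# g is determined only up to a constant; compare a ratio (true g(7)/g(2) = 6)
ratio = math.exp(ln_g[7] - ln_g[2])
```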

  16. Single scatter electron Monte Carlo

    SciTech Connect

    Svatos, M.M.

    1997-03-01

    A single scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly-used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL) which has data for all elements with an atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.

  17. A detection method of vegetable oils in edible blended oil based on three-dimensional fluorescence spectroscopy technique.

    PubMed

    Xu, Jing; Liu, Xiao-Fei; Wang, Yu-Tian

    2016-12-01

    Edible blended vegetable oils are made from two or more refined oils. Blended oils can provide a wider range of essential fatty acids than single vegetable oils, which helps support good nutrition. Nutritional components in blended oils are related to the type and content of the vegetable oils used, and a new, more accurate method is proposed to identify and quantify the vegetable oils present using cluster analysis and a Quasi-Monte Carlo integral. Three-dimensional fluorescence spectra were obtained at 250-400 nm (excitation) and 260-750 nm (emission). Mixtures of sunflower, soybean and peanut oils were used as typical examples to validate the effectiveness of the method.

  18. Challenges of Monte Carlo Transport

    SciTech Connect

    Long, Alex Roberts

    2016-06-10

    These are slides from a presentation for Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. Open SHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.

  19. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…

  20. Monte Carlo treatment planning with modulated electron radiotherapy: framework development and application

    NASA Astrophysics Data System (ADS)

    Alexander, Andrew William

    Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP), provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. A developing treatment modality called energy and intensity modulated electron radiotherapy (MERT) is a promising modality, which has the fundamental capabilities to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. The structure of the MERT planning toolkit software and

  1. Multidimensional master equation and its Monte-Carlo simulation.

    PubMed

    Pang, Juan; Bai, Zhan-Wu; Bao, Jing-Dong

    2013-02-28

    We derive an integral form of the multidimensional master equation for a Markovian process, in which the transition function is obtained in terms of a set of discrete Langevin equations. The solution of the master equation, namely the probability density function, is calculated by using the Monte-Carlo composite sampling method. In comparison with the usual Langevin-trajectory simulation, the present approach effectively decreases the coarse-graining error. We apply the master equation to investigate the time-dependent barrier escape rate of a particle from a two-dimensional metastable potential and show the advantage of this approach in the calculation of quantities that depend on the probability density function.
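
The discrete Langevin trajectories underlying the transition function can be sketched with a standard Euler-Maruyama integrator (a generic one-dimensional example in a harmonic potential, not the two-dimensional metastable potential of the paper):

```python
import math
import random

random.seed(7)

def langevin_trajectory(n_steps, dt=0.01, temp=0.5, x0=0.0):
    """Euler-Maruyama integration of overdamped Langevin dynamics
    dx = -U'(x) dt + sqrt(2 T) dW for the harmonic potential U(x) = x^2 / 2."""
    x = x0
    xs = []
    for _ in range(n_steps):
        x += -x * dt + math.sqrt(2.0 * temp * dt) * random.gauss(0.0, 1.0)
        xs.append(x)
    return xs

# The stationary density is N(0, T), so the sample variance should approach temp = 0.5
xs = langevin_trajectory(500_000)
var = sum(x * x for x in xs) / len(xs)
```

A histogram of `xs` is exactly the kind of trajectory-based density estimate that the paper's composite-sampling approach improves upon.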

  2. Parallelized quantum Monte Carlo algorithm with nonlocal worm updates.

    PubMed

    Masaki-Kato, Akiko; Suzuki, Takafumi; Harada, Kenji; Todo, Synge; Kawashima, Naoki

    2014-04-11

    Based on the worm algorithm in the path-integral representation, we propose a general quantum Monte Carlo algorithm suitable for parallelization on a distributed-memory computer by domain decomposition. Of particular importance is its application to large lattice systems of bosons and spins. A large number of worms are introduced, and their population is controlled by a fictitious transverse field. As a benchmark, we study the size dependence of the Bose-condensation order parameter of the hard-core Bose-Hubbard model with L×L×βt = 10240×10240×16, using 3200 computing cores, which shows good parallelization efficiency.

  3. "Full Model" Nuclear Data and Covariance Evaluation Process Using TALYS, Total Monte Carlo and Backward-forward Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bauge, E.

    2015-01-01

    The "Full model" evaluation process, that is used in CEA DAM DIF to evaluate nuclear data in the continuum region, makes extended use of nuclear models implemented in the TALYS code to account for experimental data (both differential and integral) by varying the parameters of these models until a satisfactory description of these experimental data is reached. For the evaluation of the covariance data associated with this evaluated data, the Backward-forward Monte Carlo (BFMC) method was devised in such a way that it mirrors the process of the "Full model" evaluation method. When coupled with the Total Monte Carlo method via the T6 system developed by NRG Petten, the BFMC method allows to make use of integral experiments to constrain the distribution of model parameters, and hence the distribution of derived observables and their covariance matrix. Together, TALYS, TMC, BFMC, and T6, constitute a powerful integrated tool for nuclear data evaluation, that allows for evaluation of nuclear data and the associated covariance matrix, all at once, making good use of all the available experimental information to drive the distribution of the model parameters and the derived observables.

  4. Monte Carlo Shower Counter Studies

    NASA Technical Reports Server (NTRS)

    Snyder, H. David

    1991-01-01

    Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs are provided along with example data plots.

  5. Using Nuclear Theory, Data and Uncertainties in Monte Carlo Transport Applications

    SciTech Connect

    Rising, Michael Evan

    2015-11-03

    These are slides for a presentation on using nuclear theory, data and uncertainties in Monte Carlo transport applications. The following topics are covered: nuclear data (experimental data versus theoretical models, data evaluation and uncertainty quantification), fission multiplicity models (fixed source applications, criticality calculations), uncertainties and their impact (integral quantities, sensitivity analysis, uncertainty propagation).

  6. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments Database

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  7. Quantum Monte Carlo with directed loops.

    PubMed

    Syljuåsen, Olav F; Sandvik, Anders W

    2002-10-01

    We introduce the concept of directed loops in stochastic series expansion and path-integral quantum Monte Carlo methods. Using the detailed balance rules for directed loops, we show that it is possible to smoothly connect generally applicable simulation schemes (in which it is necessary to include backtracking processes in the loop construction) to more restricted loop algorithms that can be constructed only for a limited range of Hamiltonians (where backtracking can be avoided). The "algorithmic discontinuities" between general and special points (or regions) in parameter space can hence be eliminated. As a specific example, we consider the anisotropic S=1/2 Heisenberg antiferromagnet in an external magnetic field. We show that directed-loop simulations are very efficient for the full range of magnetic fields (zero to the saturation point) and anisotropies. In particular, for weak fields and anisotropies, the autocorrelations are significantly reduced relative to those of previous approaches. The backtracking probability vanishes continuously as the isotropic Heisenberg point is approached. For the XY model, we show that backtracking can be avoided for all fields extending up to the saturation field. The method is hence particularly efficient in this case. We use directed-loop simulations to study the magnetization process in the two-dimensional Heisenberg model at very low temperatures. For L×L lattices with L up to 64, we utilize the step structure in the magnetization curve to extract gaps between different spin sectors. Finite-size scaling of the gaps gives an accurate estimate of the transverse susceptibility in the thermodynamic limit: χ⊥ = 0.0659 ± 0.0002.

  8. Extra Chance Generalized Hybrid Monte Carlo

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.; Sanz-Serna, J. M.

    2015-01-01

    We study a method, Extra Chance Generalized Hybrid Monte Carlo, to avoid rejections in the Hybrid Monte Carlo method and related algorithms. In the spirit of delayed rejection, whenever a rejection would occur, extra work is done to find a fresh proposal that, hopefully, may be accepted. We present experiments that clearly indicate that the additional work per sample carried out in the extra chance approach pays off in terms of the quality of the samples generated.
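
For reference, plain Hybrid Monte Carlo, the baseline that the extra-chance method augments with delayed-rejection retries, can be sketched for a standard normal target (this minimal Python version is my own illustration, not the authors' code):

```python
import math
import random

random.seed(11)

def hmc(n_samples, eps=0.2, n_leap=10):
    """Hybrid Monte Carlo for the target exp(-x^2/2), i.e. U(x) = x^2/2 and dU/dx = x."""
    x, out = 0.0, []
    for _ in range(n_samples):
        p = random.gauss(0.0, 1.0)          # resample momentum
        x_new, p_new = x, p
        p_new -= 0.5 * eps * x_new          # leapfrog: initial half step for momentum
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new -= eps * x_new
        x_new += eps * p_new
        p_new -= 0.5 * eps * x_new          # final half step
        dH = 0.5 * (x_new**2 + p_new**2 - x**2 - p**2)
        if random.random() < math.exp(min(0.0, -dH)):   # Metropolis accept/reject
            x = x_new                        # an extra-chance variant would retry here instead
        out.append(x)
    return out

xs = hmc(20_000)
mean = sum(xs) / len(xs)                    # should be close to 0
var = sum(x * x for x in xs) / len(xs)      # should be close to 1
```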

  9. Monte Carlo docking with ubiquitin.

    PubMed Central

    Cummings, M. D.; Hart, T. N.; Read, R. J.

    1995-01-01

    The development of general strategies for the performance of docking simulations is prerequisite to the exploitation of this powerful computational method. Comprehensive strategies can only be derived from docking experiences with a diverse array of biological systems, and we have chosen the ubiquitin/diubiquitin system as a learning tool for this process. Using our multiple-start Monte Carlo docking method, we have reconstructed the known structure of diubiquitin from its two halves as well as from two copies of the uncomplexed monomer. For both of these cases, our relatively simple potential function ranked the correct solution among the lowest energy configurations. In the experiments involving the ubiquitin monomer, various structural modifications were made to compensate for the lack of flexibility and for the lack of a covalent bond in the modeled interaction. Potentially flexible regions could be identified using available biochemical and structural information. A systematic conformational search ruled out the possibility that the required covalent bond could be formed in one family of low-energy configurations, which was distant from the observed dimer configuration. A variety of analyses was performed on the low-energy dockings obtained in the experiment involving structurally modified ubiquitin. Characterization of the size and chemical nature of the interface surfaces was a powerful adjunct to our potential function, enabling us to distinguish more accurately between correct and incorrect dockings. Calculations with the structure of tetraubiquitin indicated that the dimer configuration in this molecule is much less favorable than that observed in the diubiquitin structure, for a simple monomer-monomer pair. 
Based on the analysis of our results, we draw conclusions regarding some of the approximations involved in our simulations, the use of diverse chemical and biochemical information in experimental design and the analysis of docking results, as well as

  10. Independent pixel and Monte Carlo estimates of stratocumulus albedo

    NASA Technical Reports Server (NTRS)

    Cahalan, Robert F.; Ridgway, William; Wiscombe, Warren J.; Gollmer, Steven; HARSHVARDHAN

    1994-01-01

    Monte Carlo radiative transfer methods are employed here to estimate the plane-parallel albedo bias for marine stratocumulus clouds. This is the bias in estimates of the mesoscale-average albedo, which arises from the assumption that cloud liquid water is uniformly distributed. The authors compare such estimates with those based on a more realistic distribution generated from a fractal model of marine stratocumulus clouds belonging to the class of 'bounded cascade' models. In this model the cloud top and base are fixed, so that all variations in cloud shape are ignored. The model generates random variations in liquid water along a single horizontal direction, forming fractal cloud streets while conserving the total liquid water in the cloud field. The model reproduces the mean, variance, and skewness of the vertically integrated cloud liquid water, as well as its observed wavenumber spectrum, which is approximately a power law. The Monte Carlo method keeps track of the three-dimensional paths solar photons take through the cloud field, using a vectorized implementation of a direct technique. The simplifications in the cloud field studied here allow the computations to be accelerated. The Monte Carlo results are compared to those of the independent pixel approximation, which neglects net horizontal photon transport. Differences between the Monte Carlo and independent pixel estimates of the mesoscale-average albedo are on the order of 1% for conservative scattering, while the plane-parallel bias itself is an order of magnitude larger. As cloud absorption increases, the independent pixel approximation agrees even more closely with the Monte Carlo estimates. This result holds for a wide range of sun angles and aspect ratios. Thus, horizontal photon transport can be safely neglected in estimates of the area-average flux for such cloud models. This result relies on the rapid falloff of the wavenumber spectrum of stratocumulus, which ensures that the smaller

  11. Electronic structure quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bajdich, Michal; Mitas, Lubos

    2009-04-01

    Quantum Monte Carlo (QMC) is an advanced simulation methodology for studies of manybody quantum systems. The QMC approaches combine analytical insights with stochastic computational techniques for efficient solution of several classes of important many-body problems such as the stationary Schrödinger equation. QMC methods of various flavors have been applied to a great variety of systems spanning continuous and lattice quantum models, molecular and condensed systems, BEC-BCS ultracold condensates, nuclei, etc. In this review, we focus on the electronic structure QMC, i.e., methods relevant for systems described by the electron-ion Hamiltonians. Some of the key QMC achievements include direct treatment of electron correlation, accuracy in predicting energy differences and favorable scaling in the system size. Calculations of atoms, molecules, clusters and solids have demonstrated QMC applicability to real systems with hundreds of electrons while providing 90-95% of the correlation energy and energy differences typically within a few percent of experiments. Advances in accuracy beyond these limits are hampered by the so-called fixed-node approximation which is used to circumvent the notorious fermion sign problem. Many-body nodes of fermion states and their properties have therefore become one of the important topics for further progress in predictive power and efficiency of QMC calculations. Some of our recent results on the wave function nodes and related nodal domain topologies will be briefly reviewed. This includes analysis of few-electron systems and descriptions of exact and approximate nodes using transformations and projections of the highly-dimensional nodal hypersurfaces into the 3D space. Studies of fermion nodes offer new insights into topological properties of eigenstates such as explicit demonstrations that generic fermionic ground states exhibit the minimal number of two nodal domains. Recently proposed trial wave functions based on Pfaffians with

  12. Monte Carlo simulation with fixed steplength for diffusion processes in nonhomogeneous media

    NASA Astrophysics Data System (ADS)

    Ruiz Barlett, V.; Hoyuelos, M.; Mártin, H. O.

    2013-04-01

    Monte Carlo simulation is one of the most important tools in the study of diffusion processes. For constant diffusion coefficients, an appropriate Gaussian distribution of particle step lengths can generate exact results when compared with integration of the diffusion equation. The same method, however, is completely erroneous when applied to nonhomogeneous diffusion coefficients. A simple alternative, jumping at fixed step lengths with appropriate transition probabilities, produces correct results. Here, a model for diffusion of calcium ions in the neuromuscular junction of the crayfish is used as a test to compare Monte Carlo simulations with fixed and Gaussian step lengths.
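    The fixed-steplength scheme can be sketched for the simplest case of a constant diffusion coefficient, where the exact mean-square displacement 2Dt is available as a check; for space-dependent D(x) the jump probabilities would instead have to be built from interface values of D. All parameter values below are illustrative assumptions, not the paper's calcium-diffusion setup.

```python
import random

def simulate_walkers(D, n_walkers, n_steps, dx=0.1, dt=None, seed=1):
    """Fixed-steplength Monte Carlo for 1D diffusion.

    Each walker jumps exactly +/-dx with probability p = D*dt/dx**2 per
    direction (staying put otherwise), which recovers the diffusion
    equation in the limit of small dx. D is taken constant here for
    simplicity.
    """
    if dt is None:
        dt = 0.25 * dx * dx / D   # keep the total jump probability at 0.5
    p = D * dt / (dx * dx)
    rng = random.Random(seed)
    xs = [0.0] * n_walkers
    for _ in range(n_steps):
        for i in range(n_walkers):
            r = rng.random()
            if r < p:
                xs[i] += dx
            elif r < 2 * p:
                xs[i] -= dx
    t = n_steps * dt
    msd = sum(x * x for x in xs) / n_walkers
    return msd, 2.0 * D * t   # measured vs theoretical mean-square displacement
```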

  13. Anisotropic seismic inversion using a multigrid Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Mewes, Armin; Kulessa, Bernd; McKinley, John D.; Binley, Andrew M.

    2010-10-01

    We propose a new approach for the inversion of anisotropic P-wave data based on Monte Carlo methods combined with a multigrid approach. Simulated annealing facilitates objective minimization of the functional characterizing the misfit between observed and predicted traveltimes, as controlled by the Thomsen anisotropy parameters (ɛ, δ). Cycling between finer and coarser grids enhances the computational efficiency of the inversion process, thus accelerating the convergence of the solution while acting as a regularization technique for the inverse problem. Multigrid perturbation samples the probability density function without requiring the user to adjust tuning parameters. This increases the probability that the preferred global, rather than a poor local, minimum is attained. Undertaking multigrid refinement and Monte Carlo search in parallel produces more robust convergence than does the initially more intuitive approach of completing them sequentially. We demonstrate the usefulness of the new multigrid Monte Carlo (MGMC) scheme by applying it to (a) synthetic, noise-contaminated data reflecting an isotropic subsurface of constant slowness, horizontally layered geologic media and discrete subsurface anomalies; and (b) a crosshole seismic data set acquired by previous authors at the Reskajeage test site in Cornwall, UK. Inverted distributions of slowness (s) and the Thomsen anisotropy parameters (ɛ, δ) compare favourably with those obtained previously using a popular matrix-based method. Reconstruction of the Thomsen ɛ parameter is particularly robust compared to that of slowness and the Thomsen δ parameter, even in the face of complex subsurface anomalies. The Thomsen ɛ and δ parameters have enhanced sensitivities to bulk-fabric and fracture-based anisotropies in the TI medium at Reskajeage. Because reconstruction of slowness (s) is intimately linked to that of ɛ and δ in the MGMC scheme, inverted images of phase velocity reflect the integrated

  14. Quantum speedup of Monte Carlo methods.

    PubMed

    Montanaro, Ashley

    2015-09-08

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.

  15. Self-learning Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang

    2017-01-01

    Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large systems close to a phase transition, where local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.

  16. Adiabatic optimization versus diffusion Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Jarret, Michael; Jordan, Stephen P.; Lackey, Brad

    2016-10-01

    Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1 and L2 normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice, however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.

  17. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  18. Monte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement.

    PubMed

    Siswantoro, Joko; Prabuwono, Anton Satria; Abdullah, Azizi; Idrus, Bahari

    2014-01-01

    Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes with a high-priced computational cost. Furthermore, some of the volume measurement methods based on 3D reconstruction have a low accuracy. Another method for measuring volume of objects uses Monte Carlo method. Monte Carlo method performs volume measurements using random points. Monte Carlo method only requires information regarding whether random points fall inside or outside an object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products without 3D reconstruction based on Monte Carlo method with heuristic adjustment. Five images of food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from binary images. The experimental results show that the proposed method provided high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
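    The core of the method, needing only an inside/outside test rather than a 3D reconstruction, can be sketched with a simple geometric predicate standing in for the binary-image test. The sphere, bounding box, and point count below are illustrative assumptions, not the paper's camera-based setup.

```python
import math
import random

def monte_carlo_volume(inside, bounds, n_points, seed=42):
    """Estimate an object's volume from an inside/outside predicate.

    `inside(x, y, z)` plays the role of the binary-image test: the
    method only needs to know whether a random point falls inside the
    object. The estimate is the bounding-box volume times the hit rate.
    """
    rng = random.Random(seed)
    (x0, x1), (y0, y1), (z0, z1) = bounds
    box_volume = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = 0
    for _ in range(n_points):
        x = rng.uniform(x0, x1)
        y = rng.uniform(y0, y1)
        z = rng.uniform(z0, z1)
        if inside(x, y, z):
            hits += 1
    return box_volume * hits / n_points

# Example: unit sphere inside a 2x2x2 bounding box (true volume 4*pi/3).
sphere = lambda x, y, z: x * x + y * y + z * z <= 1.0
estimate = monte_carlo_volume(sphere, [(-1.0, 1.0)] * 3, 200_000)
```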

  19. Monte Carlo inversion of seismic data

    NASA Technical Reports Server (NTRS)

    Wiggins, R. A.

    1972-01-01

    The analytic solution to the linear inverse problem provides estimates of the uncertainty of the solution in terms of standard deviations of corrections to a particular solution, resolution of parameter adjustments, and information distribution among the observations. It is shown that Monte Carlo inversion, when properly executed, can provide all the same kinds of information for nonlinear problems. Proper execution requires a relatively uniform sampling of all possible models. The expense of performing Monte Carlo inversion generally requires strategies to improve the probability of finding passing models. Such strategies can lead to a very strong bias in the distribution of models examined unless great care is taken in their application.

  20. Parallel Markov chain Monte Carlo simulations.

    PubMed

    Ren, Ruichao; Orkoulas, G

    2007-06-07

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.

  1. Interaction picture density matrix quantum Monte Carlo

    SciTech Connect

    Malone, Fionn D. Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.

    2015-07-28

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.

  2. The Rational Hybrid Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Clark, Michael

    2006-12-01

    The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.

  3. Geodesic Monte Carlo on Embedded Manifolds

    PubMed Central

    Byrne, Simon; Girolami, Mark

    2013-01-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024

  4. Parallel Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Ren, Ruichao; Orkoulas, G.

    2007-06-01

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.

  5. Monte Carlo simulation of neutron scattering instruments

    SciTech Connect

    Seeger, P.A.

    1995-12-31

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.

  6. Quantum Monte Carlo finite temperature electronic structure of quantum dots

    NASA Astrophysics Data System (ADS)

    Leino, Markku; Rantala, Tapio T.

    2002-08-01

    Quantum Monte Carlo methods allow a straightforward procedure for evaluation of electronic structures with a proper treatment of electronic correlations. This can be done even at finite temperatures [1]. We test the Path Integral Monte Carlo (PIMC) simulation method [2] for one and two electrons in one and three dimensional harmonic oscillator potentials and apply it in evaluation of finite temperature effects of single and coupled quantum dots. Our simulations show the correct finite temperature excited state populations including degeneracy in cases of one and three dimensional harmonic oscillators. The simulated one and two electron distributions of a single and coupled quantum dots are compared to those from experiments and other theoretical (0 K) methods [3]. Distributions are shown to agree and the finite temperature effects are discussed. Computational capacity is found to become the limiting factor in simulations with increasing accuracy. Other essential aspects of PIMC and its capability in this type of calculations are also discussed. [1] R.P. Feynman: Statistical Mechanics, Addison Wesley, 1972. [2] D.M. Ceperley, Rev.Mod.Phys. 67, 279 (1995). [3] M. Pi, A. Emperador and M. Barranco, Phys.Rev.B 63, 115316 (2001).

  7. Infinite variance in fermion quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  8. Energy Modulated Photon Radiotherapy: A Monte Carlo Feasibility Study

    PubMed Central

    Zhang, Ying; Feng, Yuanming; Ming, Xin

    2016-01-01

    A novel treatment modality termed energy modulated photon radiotherapy (EMXRT) was investigated. The first step of EMXRT was to determine the beam energy for each gantry angle/anatomy configuration from a pool of photon energy beams (2 to 10 MV) with a newly developed energy selector. An inverse planning system using a gradient search algorithm was then employed to optimize photon beam intensity of various beam energies based on presimulated Monte Carlo pencil beam dose distributions in patient anatomy. Finally, 3D dose distributions in six patients with tumors at different sites were simulated with the Monte Carlo method and compared between EMXRT plans and clinical IMRT plans. Compared to the current IMRT technique, the proposed EMXRT method could offer a better paradigm for the radiotherapy of lung cancers and pediatric brain tumors in terms of normal tissue sparing and integral dose. For prostate, head and neck, spine, and thyroid lesions, the EMXRT plans were generally comparable to the IMRT plans. Our feasibility study indicated that lower energy (<6 MV) photon beams could be considered in modern radiotherapy treatment planning to achieve a more personalized care for individual patients with dosimetric gains. PMID:26977413

  9. Infinite variance in fermion quantum Monte Carlo calculations.

    PubMed

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  10. Monte Carlo model for electron degradation in methane gas

    NASA Astrophysics Data System (ADS)

    Bhardwaj, Anil; Mukundan, Vrinda

    2015-06-01

    We present a Monte Carlo model for the degradation of 1-10,000 eV electrons in an atmosphere of methane. The electron impact cross sections for CH4 are compiled, and analytical representations of these cross sections are used as input to the model. "Yield spectra", which provide information about the number of inelastic events that have taken place in each energy bin, are used to calculate the yield (or population) of various inelastic processes. The numerical yield spectra obtained from the Monte Carlo simulations are represented analytically, thus generating the Analytical Yield Spectra (AYS). The AYS are employed to obtain the mean energy per ion pair and the efficiencies of various inelastic processes. The mean energy per ion pair for neutral CH4 is found to be 26 (27.8) eV at 10 (0.1) keV. The efficiency calculation showed that ionization is the dominant process at energies >50 eV, where more than 50% of the incident electron energy is used. Above 25 eV, dissociation has an efficiency of ∼27%. Below 10 eV, vibrational excitation dominates. The contribution of emission is around 1.2% at 10 keV. The efficiency of the attachment process is ∼0.1% at 8 eV and falls to negligibly small values at energies greater than 15 eV. The efficiencies can be used to calculate volume production rates in planetary atmospheres by folding with the electron production rate and integrating over energy.

  11. Scalable Domain Decomposed Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  12. Monte Carlo Simulation of Counting Experiments.

    ERIC Educational Resources Information Center

    Ogden, Philip M.

    A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
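    The derivation described, subdividing the time interval so finely that no two counts share a subinterval, can be sketched as a binomial simulation whose mean and variance should both approach rate*T, the Poisson limit. The rate, interval, and trial counts below are illustrative assumptions.

```python
import random

def simulate_counts(rate, T, n_sub, n_trials, seed=7):
    """Monte Carlo counting experiment via subdivided time intervals.

    The interval [0, T] is split into n_sub subintervals, each so short
    that it registers at most one count; each subinterval then counts
    with probability p = rate*T/n_sub. This binomial model approaches
    the Poisson distribution as n_sub grows.
    """
    rng = random.Random(seed)
    p = rate * T / n_sub
    counts = []
    for _ in range(n_trials):
        c = sum(1 for _ in range(n_sub) if rng.random() < p)
        counts.append(c)
    return counts

counts = simulate_counts(rate=3.0, T=1.0, n_sub=500, n_trials=4000)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# For Poisson statistics, mean and variance should both approach rate*T.
```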

  13. A comparison of Monte Carlo generators

    SciTech Connect

    Golan, Tomasz

    2015-05-15

    A comparison of the GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the π⁺ two-dimensional energy versus cosine distribution.

  14. Monte Carlo studies of uranium calorimetry

    SciTech Connect

    Brau, J.; Hargis, H.J.; Gabriel, T.A.; Bishop, B.L.

    1985-01-01

    Detailed Monte Carlo calculations of uranium calorimetry are presented which reveal a significant difference in the responses of liquid argon and plastic scintillator in uranium calorimeters. Due to saturation effects, neutrons from the uranium are found to contribute only weakly to the liquid argon signal. Electromagnetic sampling inefficiencies are significant and contribute substantially to compensation in both systems. 17 references.

  15. Structural Reliability and Monte Carlo Simulation.

    ERIC Educational Resources Information Center

    Laumakis, P. J.; Harlow, G.

    2002-01-01

    Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte Carlo simulation by showing that such a simulation can be readily implemented, with results that compare favorably to the theoretical calculations. (Author/MM)
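    A minimal version of such a reliability simulation takes normally distributed load and resistance, so that a closed-form failure probability exists for comparison. The distributions and numbers below are illustrative assumptions, not the boom structure of the article.

```python
import math
import random

def failure_probability(mu_r, sigma_r, mu_s, sigma_s, n, seed=3):
    """Monte Carlo structural reliability: P(load S exceeds resistance R).

    R and S are independent normals, so the exact answer is Phi(-beta)
    with beta = (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2), giving a
    closed form to check the simulation against.
    """
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n)
        if rng.gauss(mu_s, sigma_s) > rng.gauss(mu_r, sigma_r)
    )
    return failures / n

beta = (30.0 - 20.0) / math.sqrt(3.0 ** 2 + 4.0 ** 2)   # reliability index = 2
exact = 0.5 * math.erfc(beta / math.sqrt(2.0))           # Phi(-beta)
estimate = failure_probability(30.0, 3.0, 20.0, 4.0, n=200_000)
```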

  16. Search and Rescue Monte Carlo Simulation.

    DTIC Science & Technology

    1985-03-01

    confidence interval ) of the number of lives saved. A single page output and computer graphic present the information to the user in an easily understood...format. The confidence interval can be reduced by making additional runs of this Monte Carlo model. (Author)

  17. Monte Carlo methods in genetic analysis

    SciTech Connect

    Lin, Shili

    1996-12-31

    Many genetic analyses require computation of probabilities and likelihoods of pedigree data. With more and more genetic marker data deriving from new DNA technologies becoming available to researchers, exact computations are often formidable with standard statistical methods and computational algorithms. The desire to utilize as much available data as possible, coupled with complexities of realistic genetic models, push traditional approaches to their limits. These methods encounter severe methodological and computational challenges, even with the aid of advanced computing technology. Monte Carlo methods are therefore increasingly being explored as practical techniques for estimating these probabilities and likelihoods. This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis. Three areas of applications are presented to demonstrate the versatility of Markov chain Monte Carlo for different types of genetic problems. A multilocus linkage analysis example is also presented to illustrate the sequential imputation method. Finally, important statistical issues of Markov chain Monte Carlo and sequential imputation, some of which are unique to genetic data, are discussed, and current solutions are outlined. 72 refs.

  18. Monte Carlo studies of ARA detector optimization

    NASA Astrophysics Data System (ADS)

    Stockham, Jessica

    2013-04-01

    The Askaryan Radio Array (ARA) is a neutrino detector deployed in the Antarctic ice sheet near the South Pole. The array is designed to detect ultra high energy neutrinos in the range of 0.1-10 EeV. Detector optimization is studied using Monte Carlo simulations.

  19. Monte carlo calculations of light scattering from clouds.

    PubMed

    Plass, G N; Kattawar, G W

    1968-03-01

    The scattering of visible light by clouds is calculated from an efficient Monte Carlo code which follows the multiply scattered path of the photon. The single-scattering function is obtained from Mie theory by integration over a particle size distribution appropriate for cumulus clouds at 0.7-μm wavelength. The photons are followed through a sufficient number of collisions and reflections from the lower surface (which may have any desired albedo) until they make a negligible contribution to the intensity. Various variance reduction techniques are used to improve the statistics. The cloud albedo and the mean optical path of the transmitted and reflected photons are given as functions of the solar zenith angle, optical thickness, and surface albedo. The numerous small-angle scatterings of the photon in the direction of the incident beam are followed accurately and produce a greater penetration into the cloud than is obtained with a more isotropic and less realistic phase function.
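    The path-sampling machinery underlying such photon Monte Carlo codes can be checked in its simplest limit, a purely absorbing slab with no scattering, where the transmission must reduce to Beer-Lambert, T = exp(-tau). This is a deliberately stripped-down sketch, not the multiple-scattering cloud code of the paper; the optical depth and photon count are illustrative.

```python
import math
import random

def slab_transmission(tau, n_photons, seed=11):
    """Monte Carlo photon transport through a purely absorbing slab.

    Optical depths to first collision are sampled from the exponential
    distribution; a photon is transmitted if that depth exceeds the slab
    thickness tau. With no scattering the result must reproduce
    Beer-Lambert, T = exp(-tau).
    """
    rng = random.Random(seed)
    transmitted = sum(
        1 for _ in range(n_photons)
        if -math.log(1.0 - rng.random()) > tau   # sampled optical depth
    )
    return transmitted / n_photons

t = slab_transmission(2.0, 100_000)
```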

  20. Parallelization of KENO-Va Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Ramón, Javier; Peña, Jorge

    1995-07-01

    KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation through the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared-memory machines and another for distributed-memory systems using the message-passing library PVM. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared-memory version. An FDDI network of 6 HP9000/735 workstations was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.

  1. Residual entropy of ice III from Monte Carlo simulation.

    PubMed

    Kolafa, Jiří

    2016-03-28

    We calculated the residual entropy of ice III as a function of the occupation probabilities of hydrogen positions α and β, assuming equal energies of all configurations. To do this, a discrete ice model with a Bjerrum-defect energy penalty and harmonic terms to constrain the occupation probabilities was simulated by the Metropolis Monte Carlo method for a range of temperatures and sizes, followed by thermodynamic integration and extrapolation to N = ∞. As with other ices, the residual entropies are slightly higher than the mean-field (no-loop) approximation. However, the corrections caused by fluctuation of energies of ice samples calculated using molecular models of water are too large for accurate determination of the chemical potential and phase equilibria.
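The Metropolis-plus-thermodynamic-integration recipe can be illustrated on a toy system where the answer is known exactly: a single spin in a field, using ln Z(β) = ln Z(0) − ∫₀^β ⟨E⟩ dβ′ and S(β) = β⟨E⟩ + ln Z(β). The spin model, sample counts, and β grid are demo assumptions; the ice-model simulation itself is far more involved.

```python
import math
import random

def mc_energy(beta, h, n_samples, rng):
    """Metropolis estimate of <E> for one spin s = ±1 with E = -h*s
    (a toy stand-in for the discrete ice model)."""
    s = 1
    e_sum = 0.0
    for _ in range(n_samples):
        dE = 2.0 * h * s                    # energy change for flipping s -> -s
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s = -s
        e_sum += -h * s
    return e_sum / n_samples

rng = random.Random(1)
h = 1.0
betas = [0.05 * i for i in range(21)]       # beta grid from 0 to 1
energies = [mc_energy(b, h, 20000, rng) for b in betas]

# Thermodynamic integration: ln Z(beta) = ln 2 - integral of <E> d(beta),
# then S(beta) = beta*<E(beta)> + ln Z(beta).
integral = 0.0
for i in range(1, len(betas)):
    integral += 0.5 * (energies[i] + energies[i - 1]) * (betas[i] - betas[i - 1])
beta = betas[-1]
S = beta * energies[-1] + math.log(2.0) - integral
S_exact = math.log(2.0 * math.cosh(beta * h)) - beta * h * math.tanh(beta * h)
```

The Monte Carlo entropy should track the closed-form S(β) = ln(2 cosh βh) − βh tanh βh to within statistical error.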

  2. Residual entropy of ices and clathrates from Monte Carlo simulation

    SciTech Connect

    Kolafa, Jiří

    2014-05-28

    We calculated the residual entropy of ices (Ih, Ic, III, V, VI) and clathrates (I, II, H), assuming the same energy of all configurations satisfying the Bernal–Fowler ice rules. The Metropolis Monte Carlo simulations in the range of temperatures from infinity to a size-dependent threshold were followed by the thermodynamic integration. Convergence of the simulation and the finite-size effects were analyzed using the quasichemical approximation and the Debye–Hückel theory applied to the Bjerrum defects. The leading finite-size error terms, ln N/N, 1/N, and for the two-dimensional square ice model also 1/N^(3/2), were used for an extrapolation to the thermodynamic limit. Finally, we discuss the influence of unequal energies of proton configurations.

  3. Acceleration of a Monte Carlo radiation transport code

    SciTech Connect

    Hochstedler, R.D.; Smith, L.M.

    1996-03-01

    Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.

  4. Multistream and Monte Carlo calculations of the sun's aureole

    NASA Technical Reports Server (NTRS)

    Furman, D. R.; Green, A. E. S.; Mo, T.

    1976-01-01

    The scattered UV intensities in the sun's aureole are calculated using both a multi-stream scattering method and a Monte Carlo approach. Angular distributions for both Rayleigh and aerosol scattering are obtained with a realistic atmospheric model. Moderately and sharply forward-peaked phase functions of aerosol scattering, corresponding to realistic analytic size distributions, are incorporated. Results from the two independent calculations are in reasonable agreement for a realistic atmospheric model. The results indicate that the scattered UV intensity in the sun's aureole is about four orders of magnitude smaller than the transmitted intensity, while the scattered intensity for pure Rayleigh scattering is only 1-10 millionths of the transmitted intensity. Based on these intensity ratios, we estimate that the integrated scattered contributions from the aureole to a well-collimated sun photometer of acceptance aperture 2-3 deg are below about 1% of the direct contribution.

  5. GPM Satellite Sees Heavy Rainfall in Tropical Cyclone Carlos

    NASA Video Gallery

    The GPM core observatory satellite flew above tropical cyclone Carlos on February 7, 2017 at 1056 UTC and measured a few downpours in the bands west of the Carlos' center of circulation dropping ra...

  6. Development of a Space Radiation Monte Carlo Computer Simulation

    NASA Technical Reports Server (NTRS)

    Pinsky, Lawrence S.

    1997-01-01

    The ultimate purpose of this effort is to undertake the development of a computer simulation of the radiation environment encountered in spacecraft which is based upon the Monte Carlo technique. The current plan is to adapt and modify a Monte Carlo calculation code known as FLUKA, which is presently used in high energy and heavy ion physics, to simulate the radiation environment present in spacecraft during missions. The initial effort would be directed towards modeling the MIR and Space Shuttle environments, but the long range goal is to develop a program for the accurate prediction of the radiation environment likely to be encountered on future planned endeavors such as the Space Station, a Lunar Return Mission, or a Mars Mission. The longer the mission, especially those which will not have the shielding protection of the earth's magnetic field, the more critical the radiation threat will be. The ultimate goal of this research is to produce a code that will be useful to mission planners and engineers who need to have detailed projections of radiation exposures at specified locations within the spacecraft and for either specific times during the mission or integrated over the entire mission. In concert with the development of the simulation, it is desired to integrate it with a state-of-the-art interactive 3-D graphics-capable analysis package known as ROOT, to allow easy investigation and visualization of the results. The efforts reported on here include the initial development of the program and the demonstration of the efficacy of the technique through a model simulation of the MIR environment. This information was used to write a proposal to obtain follow-on permanent funding for this project.

  7. Monte Carlo Study of Real Time Dynamics on the Lattice.

    PubMed

    Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo F; Vartak, Sohan; Warrington, Neill C

    2016-08-19

    Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from a highly oscillatory phase of the path integral. In this Letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and, in principle, applicable to quantum field theory albeit very slow. We discuss some possible improvements that should speed up the algorithm.

  8. An enhanced Monte Carlo outlier detection method.

    PubMed

    Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi

    2015-09-30

    Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the root mean square error of prediction for validation against Kovats retention indices decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.
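The cross-prediction idea — fit models on many random subsets and examine each sample's own distribution of prediction errors — can be sketched as follows. The linear toy data, the planted outlier, the 70/30 split, and the number of resamplings are illustrative assumptions, not the authors' protocol.

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

rng = random.Random(7)
n = 40
xs = [rng.uniform(0, 10) for _ in range(n)]
ys = [2.0 * x + 1.0 + rng.gauss(0, 0.2) for x in xs]
ys[5] += 8.0                     # plant one gross outlier at index 5

# Monte Carlo cross-prediction: many random 70/30 splits; record a sample's
# absolute prediction error whenever it falls in the held-out set.
errors = {i: [] for i in range(n)}
for _ in range(500):
    idx = list(range(n))
    rng.shuffle(idx)
    train, test = idx[:28], idx[28:]
    a, b = fit_line([xs[i] for i in train], [ys[i] for i in train])
    for i in test:
        errors[i].append(abs(ys[i] - (a + b * xs[i])))

mean_err = {i: sum(v) / len(v) for i, v in errors.items()}
suspect = max(mean_err, key=mean_err.get)   # sample with the largest mean error
```

The planted outlier's mean held-out error dwarfs everyone else's, so it is flagged for removal before the final model is fit.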

  9. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.

    1980-01-01

    At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time.

  10. Monte Carlo simulations on SIMD computer architectures

    SciTech Connect

    Burmester, C.P.; Gronsky, R.; Wille, L.T.

    1992-03-01

    Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
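Lattice partitioning of the kind described can be illustrated with a checkerboard decomposition of the nearest-neighbor 2D Ising model: sites of one color have neighbors only of the other color, so all updates within a half-sweep are independent and could run in lockstep on a SIMD array. The lattice size, temperature, and sweep count below are arbitrary demo choices (this serial sketch only shows the decomposition, not actual MasPar code).

```python
import math
import random

def checkerboard_sweep(spins, L, beta, rng):
    """One Metropolis sweep of the 2D Ising model done in two sublattices.
    Sites of a given color share no bonds, so each half-sweep is
    embarrassingly parallel -- the SIMD-style lattice partitioning."""
    for color in (0, 1):
        for i in range(L):
            for j in range(L):
                if (i + j) % 2 != color:
                    continue
                nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                      + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
                dE = 2.0 * spins[i][j] * nb          # cost of flipping this spin
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    spins[i][j] *= -1

rng = random.Random(3)
L = 16
spins = [[1] * L for _ in range(L)]
beta = 1.0                       # well below T_c (beta_c ~ 0.44): ordered phase
for _ in range(200):
    checkerboard_sweep(spins, L, beta, rng)
m = abs(sum(sum(row) for row in spins)) / (L * L)
```

Starting from the ordered state at low temperature, the magnetization per site stays close to 1, a quick sanity check that the two-color update preserves the physics.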

  11. Status of Monte Carlo at Los Alamos

    SciTech Connect

    Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.

    1980-05-01

    Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.

  12. Fission Matrix Capability for MCNP Monte Carlo

    NASA Astrophysics Data System (ADS)

    Brown, Forrest; Carney, Sean; Kiedrowski, Brian; Martin, William

    2014-06-01

    We describe recent experience and results from implementing a fission matrix capability into the MCNP Monte Carlo code. The fission matrix can be used to provide estimates of the fundamental mode fission distribution, the dominance ratio, the eigenvalue spectrum, and higher mode forward and adjoint eigenfunctions of the fission neutron source distribution. It can also be used to accelerate the convergence of the power method iterations and to provide basis functions for higher-order perturbation theory. The higher-mode fission sources can be used in MCNP to determine higher-mode forward fluxes and tallies, and work is underway to provide higher-mode adjoint-weighted fluxes and tallies. Past difficulties and limitations of the fission matrix approach are overcome with a new sparse representation of the matrix, permitting much larger and more accurate fission matrix representations. The new fission matrix capabilities provide a significant advance in the state-of-the-art for Monte Carlo criticality calculations.
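The fission-matrix idea reduces eigenvalue estimation to linear algebra on an explicit matrix. A toy 2×2 "fission matrix" (a stand-in of our own, not MCNP's sparse representation) shows power iteration recovering the fundamental eigenvalue and, after deflating the fundamental mode, the next eigenvalue whose ratio to the first is the dominance ratio that governs source convergence.

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, v0, n_iter=500):
    """Power iteration: the analogue of generation-to-generation source
    iteration, made explicit by the fission matrix."""
    v, lam = v0[:], 0.0
    for _ in range(n_iter):
        w = matvec(A, v)
        lam = max(abs(x) for x in w)     # infinity-norm eigenvalue estimate
        v = [x / lam for x in w]
    return lam, v

# Toy symmetric matrix with eigenvalues 3 and 1 by construction.
F = [[2.0, 1.0], [1.0, 2.0]]
k, mode = power_iteration(F, [1.0, 0.5])

# Deflate the fundamental mode, then iterate again for the second eigenvalue.
n2 = sum(x * x for x in mode)
Fd = [[F[i][j] - k * mode[i] * mode[j] / n2 for j in range(2)] for i in range(2)]
k2, _ = power_iteration(Fd, [1.0, 0.0])
dominance_ratio = k2 / k
```

Here k converges to 3 and the dominance ratio to 1/3; a ratio close to 1 would signal the slow source convergence the fission matrix is used to accelerate.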

  13. Quantum Monte Carlo applied to solids

    SciTech Connect

    Shulenburger, Luke; Mattsson, Thomas R.

    2013-12-01

    We apply diffusion quantum Monte Carlo to a broad set of solids, benchmarking the method by comparing bulk structural properties (equilibrium volume and bulk modulus) to experiment and density functional theory (DFT) based theories. The test set includes materials with many different types of binding including ionic, metallic, covalent, and van der Waals. We show that, on average, the accuracy is comparable to or better than that of DFT when using the new generation of functionals, including one hybrid functional and two dispersion corrected functionals. The excellent performance of quantum Monte Carlo on solids is promising for its application to heterogeneous systems and high-pressure/high-density conditions. Important to the results here is the application of a consistent procedure with regards to the several approximations that are made, such as finite-size corrections and pseudopotential approximations. This test set allows for any improvements in these methods to be judged in a systematic way.

  14. Monte Carlo Particle Transport: Algorithm and Performance Overview

    SciTech Connect

    Gentile, N; Procassini, R; Scott, H

    2005-06-02

    Monte Carlo methods are frequently used for neutron and radiation transport. These methods have several advantages, such as relative ease of programming and dealing with complex meshes. Disadvantages include long run times and statistical noise. Monte Carlo photon transport calculations also often suffer from inaccuracies in matter temperature due to the lack of implicitness. In this paper we discuss the Monte Carlo algorithm as it is applied to neutron and photon transport, detail the differences between neutron and photon Monte Carlo, and give an overview of the ways the numerical method has been modified to deal with issues that arise in photon Monte Carlo simulations.

  15. Applications of Maxent to quantum Monte Carlo

    SciTech Connect

    Silver, R.N.; Sivia, D.S.; Gubernatis, J.E. ); Jarrell, M. . Dept. of Physics)

    1990-01-01

    We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.

  16. Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy

    NASA Astrophysics Data System (ADS)

    Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James

    2012-03-01

    Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). Skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis, and dermis) as well as laterally asymmetric features (e.g., melanocytic invasion) were modeled in an inhomogeneous Monte Carlo model.

  17. Recovering intrinsic fluorescence by Monte Carlo modeling.

    PubMed

    Müller, Manfred; Hendriks, Benno H W

    2013-02-01

    We present a novel way to recover intrinsic fluorescence in turbid media based on Monte Carlo-generated look-up tables, making use of a diffuse reflectance measurement taken at the same location. The method has been validated on various phantoms with known intrinsic fluorescence and is benchmarked against photon-migration methods. This new method combines greater flexibility in probe design with fast reconstruction, and showed reconstruction accuracy similar to that of other reconstruction methods.

  18. Monte Carlo approach to Estrada index

    NASA Astrophysics Data System (ADS)

    Gutman, Ivan; Radenković, Slavko; Graovac, Ante; Plavšić, Dejan

    2007-09-01

    Let G be a graph on n vertices, and let λ1, λ2, …, λn be its eigenvalues. The Estrada index of G is a recently introduced molecular structure descriptor, defined as EE = Σ_{i=1}^{n} e^{λ_i}. Using a Monte Carlo approach, and treating the graph eigenvalues as random variables, we deduce approximate expressions for EE, in terms of the number of vertices and number of edges, of very high accuracy.
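The abstract derives closed-form approximations by treating the eigenvalues as random variables; a different but related Monte Carlo route, shown here for concreteness, is to estimate EE = tr(exp(A)) directly by Hutchinson-style stochastic trace estimation. The path graph P3 and the sample count are demo assumptions: its eigenvalues √2, 0, −√2 give EE = 2 cosh √2 + 1 exactly, so the estimate can be checked.

```python
import math
import random

def expm_times(A, v, terms=30):
    """exp(A) @ v via the Taylor series (fine for small spectral radius)."""
    result = v[:]
    term = v[:]
    for k in range(1, terms):
        term = [sum(a * x for a, x in zip(row, term)) / k for row in A]
        result = [r + t for r, t in zip(result, term)]
    return result

def estrada_mc(A, n_samples, rng):
    """Hutchinson estimator: E[v^T exp(A) v] = tr(exp(A)) for Rademacher v."""
    n = len(A)
    total = 0.0
    for _ in range(n_samples):
        v = [rng.choice((-1.0, 1.0)) for _ in range(n)]   # random probe vector
        w = expm_times(A, v)
        total += sum(vi * wi for vi, wi in zip(v, w))
    return total / n_samples

# Adjacency matrix of the path graph P3.
A = [[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
rng = random.Random(11)
ee_est = estrada_mc(A, 4000, rng)
ee_exact = 2.0 * math.cosh(math.sqrt(2.0)) + 1.0
```

The stochastic estimate converges to the exact Estrada index at the usual 1/√N Monte Carlo rate.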

  19. Accelerated Monte Carlo by Embedded Cluster Dynamics

    NASA Astrophysics Data System (ADS)

    Brower, R. C.; Gross, N. A.; Moriarty, K. J. M.

    1991-07-01

    We present an overview of the new methods for embedding Ising spins in continuous fields to achieve accelerated cluster Monte Carlo algorithms. The methods of Brower and Tamayo and of Wolff are summarized, and variations are suggested for the O(N) models based on multiple embedded Z2 spin components and/or correlated projections. Topological features are discussed for the XY model and numerical simulations presented for d=2, d=3 and mean field theory lattices.

  20. Exclusive Multiple Emission Cross Sections in the Hybrid Monte Carlo Pre-equilibrium Model and in EMPIRE-3.1

    NASA Astrophysics Data System (ADS)

    Carlson, B. V.; Brito, L.; Mega, D. F.; Capote, R.; Herman, M.; Rego, M. E.

    2014-04-01

    We discuss the general concept of exclusive emission cross sections and spectra and the exclusive spectra of the ENDF library. We briefly review the exclusive hybrid Monte Carlo simulation model and show how its exclusive cross sections can be integrated into the reaction code EMPIRE-3.1. We close by discussing several examples.

  1. A Monte Carlo Metropolis-Hastings algorithm for sampling from distributions with intractable normalizing constants.

    PubMed

    Liang, Faming; Jin, Ick-Hoon

    2013-08-01

    Simulating from distributions with intractable normalizing constants has been a long-standing problem in machine learning. In this letter, we propose a new algorithm, the Monte Carlo Metropolis-Hastings (MCMH) algorithm, for tackling this problem. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm. It replaces the unknown normalizing constant ratio by a Monte Carlo estimate in simulations, while still converging, as shown in the letter, to the desired target distribution under mild conditions. The MCMH algorithm is illustrated with spatial autologistic models and exponential random graph models. Unlike other auxiliary variable Markov chain Monte Carlo (MCMC) algorithms, such as the Møller and exchange algorithms, the MCMH algorithm avoids the requirement for perfect sampling, and thus can be applied to many statistical models for which perfect sampling is not available or very expensive. The MCMH algorithm can also be applied to Bayesian inference for random effect models and missing data problems that involve simulations from a distribution with intractable integrals.
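A minimal sketch of the replace-the-ratio idea, under simplifying assumptions of our own: a one-parameter Gaussian-kernel model whose normalizing constant we pretend is intractable, a single observation, and exact model draws standing in for the inner sampler a genuinely intractable model would need. Because Z(θ) is in fact known here, the chain can be checked against quadrature.

```python
import math
import random

rng = random.Random(5)
y = 1.5                    # one observation from p(y|theta) = f(y;theta)/Z(theta)
lo, hi = 0.1, 5.0          # uniform prior support for theta

def f(x, theta):           # unnormalized kernel; Z(theta) treated as unknown
    return math.exp(-theta * x * x)

def z_ratio(theta, theta_new, m=100):
    """Monte Carlo estimate of Z(theta_new)/Z(theta) from draws x ~ p(.|theta):
    E[f(x;theta_new)/f(x;theta)] = Z(theta_new)/Z(theta)."""
    sd = 1.0 / math.sqrt(2.0 * theta)      # exact sampler for this toy model
    total = 0.0
    for _ in range(m):
        x = rng.gauss(0.0, sd)
        total += f(x, theta_new) / f(x, theta)
    return total / m

theta, samples = 1.0, []
for _ in range(10000):
    prop = theta + rng.gauss(0.0, 0.5)
    if lo < prop < hi:                     # flat prior: reject outside support
        # MH ratio with the intractable Z-ratio replaced by its MC estimate.
        alpha = (f(y, prop) / f(y, theta)) / z_ratio(theta, prop)
        if rng.random() < alpha:
            theta = prop
    samples.append(theta)
post_mean = sum(samples[2000:]) / len(samples[2000:])

# Quadrature reference, using the actually-known Z(theta) = sqrt(pi/theta):
# posterior density is proportional to sqrt(theta) * exp(-theta * y^2) on [lo, hi].
grid = [lo + (hi - lo) * i / 4000.0 for i in range(4001)]
w = [math.sqrt(t) * math.exp(-t * y * y) for t in grid]
ref_mean = sum(t * wi for t, wi in zip(grid, w)) / sum(w)
```

With a reasonable inner sample size m, the chain's posterior mean agrees with the quadrature reference, illustrating the convergence claim of the letter in miniature.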

  2. Neutron stimulated emission computed tomography: a Monte Carlo simulation approach.

    PubMed

    Sharma, A C; Harrawood, B P; Bender, J E; Tourassi, G D; Kapadia, A J

    2007-10-21

    A Monte Carlo simulation has been developed for neutron stimulated emission computed tomography (NSECT) using the GEANT4 toolkit. NSECT is a new approach to biomedical imaging that allows spectral analysis of the elements present within the sample. In NSECT, a beam of high-energy neutrons interrogates a sample and the nuclei in the sample are stimulated to an excited state by inelastic scattering of the neutrons. The characteristic gammas emitted by the excited nuclei are captured in a spectrometer to form multi-energy spectra. Currently, a tomographic image is formed using a collimated neutron beam to define the line integral paths for the tomographic projections. These projection data are reconstructed to form a representation of the distribution of individual elements in the sample. To facilitate the development of this technique, a Monte Carlo simulation model has been constructed from the GEANT4 toolkit. This simulation includes modeling of the neutron beam source and collimation, the samples, the neutron interactions within the samples, the emission of characteristic gammas, and the detection of these gammas in a Germanium crystal. In addition, the model allows the absorbed radiation dose to be calculated for internal components of the sample. NSECT presents challenges not typically addressed in Monte Carlo modeling of high-energy physics applications. In order to address issues critical to the clinical development of NSECT, this paper will describe the GEANT4 simulation environment and three separate simulations performed to accomplish three specific aims. First, comparison of a simulation to a tomographic experiment will verify the accuracy of both the gamma energy spectra produced and the positioning of the beam relative to the sample. 
Second, parametric analysis of simulations performed with different user-defined variables will determine the best way to effectively model low energy neutrons in tissue, which is a concern with the high hydrogen content in

  3. Quantum Monte Carlo Algorithms for Diagrammatic Vibrational Structure Calculations

    NASA Astrophysics Data System (ADS)

    Hermes, Matthew; Hirata, So

    2015-06-01

    Convergent hierarchies of theories for calculating many-body vibrational ground and excited-state wave functions, such as Møller-Plesset perturbation theory or coupled cluster theory, tend to rely on matrix-algebraic manipulations of large, high-dimensional arrays of anharmonic force constants, tasks which require large amounts of computer storage space and which are very difficult to implement in a parallel-scalable fashion. On the other hand, existing quantum Monte Carlo (QMC) methods for vibrational wave functions tend to lack robust techniques for obtaining excited-state energies, especially for large systems. By exploiting analytical identities for matrix elements of position operators in a harmonic oscillator basis, we have developed stochastic implementations of the size-extensive vibrational self-consistent field (MC-XVSCF) and size-extensive vibrational Møller-Plesset second-order perturbation (MC-XVMP2) theories which do not require storing the potential energy surface (PES). The programmable equations of MC-XVSCF and MC-XVMP2 take the form of a small number of high-dimensional integrals evaluated using Metropolis Monte Carlo techniques. The associated integrands require independent evaluations of only the value, not the derivatives, of the PES at many points, a task which is trivial to parallelize. However, unlike existing vibrational QMC methods, MC-XVSCF and MC-XVMP2 can calculate anharmonic frequencies directly, rather than as a small difference between two noisy total energies, and do not require user-selected coordinates or nodal surfaces. MC-XVSCF and MC-XVMP2 can also directly sample the PES in a given approximation without analytical or grid-based approximations, enabling us to quantify the errors induced by such approximations.
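The core numerical task described — high-dimensional expectation-value integrals evaluated by Metropolis Monte Carlo using only point values of the weight and integrand — can be sketched on a toy problem with a known answer. The three uncoupled harmonic modes below are an assumption for the demo, not the MC-XVSCF or MC-XVMP2 working equations.

```python
import math
import random

def metropolis_expectation(log_weight, f, dim, n_steps, step, rng):
    """Estimate <f> = integral of f*w / integral of w, w = exp(log_weight),
    by Metropolis sampling. Only point values of log_weight and f are ever
    needed -- no derivatives and no stored grid."""
    x = [0.0] * dim
    lw = log_weight(x)
    total, count = 0.0, 0
    burn = n_steps // 5
    for n in range(n_steps):
        i = rng.randrange(dim)                 # move one coordinate at a time
        y = x[:]
        y[i] += rng.uniform(-step, step)
        lw_new = log_weight(y)
        if lw_new >= lw or rng.random() < math.exp(lw_new - lw):
            x, lw = y, lw_new
        if n >= burn:                          # accumulate after burn-in
            total += f(x)
            count += 1
    return total / count

# Toy check: three uncoupled harmonic modes, weight |psi_0|^2 ~ exp(-sum x_i^2),
# observable V = sum x_i^2 / 2. Exactly, <x_i^2> = 1/2, so <V> = 3/4.
rng = random.Random(9)
v_mean = metropolis_expectation(lambda x: -sum(t * t for t in x),
                                lambda x: 0.5 * sum(t * t for t in x),
                                dim=3, n_steps=100000, step=1.2, rng=rng)
```

Each step touches one coordinate, so the cost per step is independent of dimension except for the integrand evaluation itself, which is the property that makes the approach attractive for on-the-fly potential surfaces.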

  4. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53 minute long simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-power computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.

  5. STS-97 MS Carlos Noriega suits up for launch

    NASA Technical Reports Server (NTRS)

    2000-01-01

    STS-97 Mission Specialist Carlos Noriega appears relaxed as he dons his launch and entry suit. This is his second Shuttle flight. Mission STS-97 is the sixth construction flight to the International Space Station. It is transporting the P6 Integrated Truss Structure that comprises Solar Array Wing-3 and the Integrated Electronic Assembly, to be installed on the Space Station. The solar arrays are mounted on a "blanket" that can be folded like an accordion for delivery. Once in orbit, astronauts will deploy the blankets to their full size. The 11-day mission includes two spacewalks to complete the solar array connections. The Station's electrical power system will use eight photovoltaic solar arrays, each 112 feet long by 39 feet wide, to convert sunlight to electricity. Gimbals will be used to rotate the arrays so that they will face the Sun to provide maximum power to the Space Station. Launch is scheduled for Nov. 30 at 10:06 p.m. EST.

  6. Monte Carlo simulations for generic granite repository studies

    SciTech Connect

    Chu, Shaoping; Lee, Joon H; Wang, Yifeng

    2010-12-08

    In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of a near- and a far-field submodel for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated systems (GoldSim) near-field model. Integrated Monte Carlo model runs with the combined near- and far-field transport models were performed, and the parameter sensitivities were evaluated for the combined system. In addition, a sub-set of radionuclides that are potentially important to repository performance were identified and evaluated for a series of model runs. The analyses were conducted with different waste inventory scenarios. Analyses were also conducted for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of a strategy for long-term disposal of high-level radioactive waste in a granite repository.

  7. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators were also integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and improve statistics as more particle tracks can be simulated in low response time.
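The role of a parallel random-number library such as SPRNG or DCMT — giving each worker its own reproducible, independent stream — can be sketched in a few lines. The sequential "workers" below stand in for MPI processes, and the per-worker seeds and sample counts are demo choices, not the libraries' actual stream-splitting schemes.

```python
import random

def worker(stream_seed, n_samples):
    """One 'process': it owns a private RNG stream, so its result is
    reproducible no matter how the work is distributed -- the guarantee
    parallel RNG libraries provide."""
    rng = random.Random(stream_seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1.0:            # point falls inside the quarter circle
            hits += 1
    return hits

# Independent per-worker seeds stand in for a parallel RNG library's streams.
seeds = [1000 + 17 * w for w in range(8)]
n_per = 50000
pi_est = 4.0 * sum(worker(s, n_per) for s in seeds) / (8 * n_per)
```

Because each stream is seeded independently, re-running any worker reproduces its contribution exactly, which is what makes parallel results bitwise repeatable.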

  8. Numerical integration using Wang Landau sampling

    NASA Astrophysics Data System (ADS)

    Li, Y. W.; Wüst, T.; Landau, D. P.; Lin, H. Q.

    2007-09-01

    We report a new application of Wang-Landau sampling to numerical integration that is straightforward to implement. It is applicable to a wide variety of integrals without restrictions and is readily generalized to higher-dimensional problems. The feasibility of the method results from a reinterpretation of the density of states in statistical physics to an appropriate measure for numerical integration. The properties of this algorithm as a new kind of Monte Carlo integration scheme are investigated with some simple integrals, and a potential application of the method is illustrated by the evaluation of integrals arising in perturbation theory of quantum many-body systems.
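A simplified sketch in this spirit: estimate the "density of states" g(y) of integrand values y = f(x) by a Wang-Landau flat-histogram random walk, then recover the integral from the normalized g. The bin count, the fixed-length stages (used in place of the usual histogram-flatness test), and the modification-factor schedule are demo simplifications, not the authors' implementation.

```python
import math
import random

def wl_integrate(f, a, b, n_bins, rng, ln_f_final=1e-4):
    """Wang-Landau numerical integration (simplified sketch):
    integral of f over [a, b] ~ (b - a) * sum(y_i * g_i) / sum(g_i),
    with g_i the estimated measure of {x : f(x) in bin i}."""
    # Rough range of f from a coarse scan (crude, but keeps the demo short).
    ys = [f(a + (b - a) * i / 1000.0) for i in range(1001)]
    ymin, ymax = min(ys), max(ys)
    width = (ymax - ymin) / n_bins
    def bin_of(y):
        return min(int((y - ymin) / width), n_bins - 1)

    ln_g = [0.0] * n_bins
    ln_f = 1.0                      # modification factor, halved each stage
    x = a + (b - a) * rng.random()
    bx = bin_of(f(x))
    while ln_f > ln_f_final:
        for _ in range(20000):      # fixed-length stage (no flatness test)
            x_new = a + (b - a) * rng.random()
            b_new = bin_of(f(x_new))
            # Accept with min(1, g(current)/g(proposed)): flat visit histogram.
            if ln_g[b_new] <= ln_g[bx] or rng.random() < math.exp(ln_g[bx] - ln_g[b_new]):
                x, bx = x_new, b_new
            ln_g[bx] += ln_f
        ln_f *= 0.5
    gmax = max(ln_g)
    g = [math.exp(v - gmax) for v in ln_g]
    mids = [ymin + (i + 0.5) * width for i in range(n_bins)]
    return (b - a) * sum(m * gi for m, gi in zip(mids, g)) / sum(g)

rng = random.Random(2)
estimate = wl_integrate(lambda x: x * x, 0.0, 1.0, 40, rng)   # exact value: 1/3
```

For f(x) = x² on [0, 1], g(y) is proportional to 1/(2√y), and the reconstructed integral converges to 1/3 as the modification factor shrinks.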

  9. Monte Carlo simulation of intercalated carbon nanotubes.

    PubMed

    Mykhailenko, Oleksiy; Matsui, Denis; Prylutskyy, Yuriy; Le Normand, Francois; Eklund, Peter; Scharff, Peter

    2007-01-01

    Monte Carlo simulations of the single- and double-walled carbon nanotubes (CNT) intercalated with different metals have been carried out. The interrelation between the length of a CNT, the number and type of metal atoms has also been established. This research is aimed at studying intercalated systems based on CNTs and d-metals such as Fe and Co. Factors influencing the stability of these composites have been determined theoretically by the Monte Carlo method with the Tersoff potential. The modeling of CNTs intercalated with metals by the Monte Carlo method has proved that there is a correlation between the length of a CNT and the number of endo-atoms of specific type. Thus, in the case of a metallic CNT (9,0) with length 17 bands (3.60 nm), in contrast to Co atoms, Fe atoms are extruded out of the CNT if the number of atoms in the CNT is not less than eight. This paper thus shows that a CNT of a certain size can be intercalated with no more than eight Fe atoms. The systems investigated are stabilized by coordination of 3d-atoms close to the CNT wall with a radius-vector of (0.18-0.20) nm. Another characteristic feature is that, within the temperature range of (400-700) K, small systems exhibit ground-state stabilization which is not characteristic of the larger ones. The behavior of Fe and Co endo-atoms between the walls of a double-walled carbon nanotube (DW CNT) is explained by a dominating van der Waals interaction between the Co atoms themselves, which is not true for the Fe atoms.

  10. Quantum Monte Carlo for vibrating molecules

    SciTech Connect

    Brown, W.R.

    1996-08-01

    Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H₂O and C₃ vibrational states, using 7 PESs, 3 trial wavefunction forms, and two methods of non-linear basis function parameter optimization, on both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H₂O and C₃. For C₃, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both the serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C₃ PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.

  11. A Monte Carlo approach to water management

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, D.

    2012-04-01

    Common methods for making optimal decisions in water management problems are insufficient. Linear programming methods are inappropriate because hydrosystems are nonlinear with respect to their dynamics, operation constraints and objectives. Dynamic programming methods are inappropriate because water management problems cannot be divided into sequential stages. Also, these deterministic methods cannot properly deal with the uncertainty of future conditions (inflows, demands, etc.). Even stochastic extensions of these methods (e.g. linear-quadratic-Gaussian control) necessitate such drastic oversimplifications of hydrosystems that the obtained results may be irrelevant to real-world problems. However, a Monte Carlo approach is feasible and can form a general methodology applicable to any type of hydrosystem. This methodology uses stochastic simulation to generate system inputs, either unconditional or conditioned on a prediction, if available, and represents the operation of the entire system through a simulation model as faithful as possible, without demanding a specific mathematical form that would imply oversimplifications. Such a representation fully respects the physical constraints, while at the same time it evaluates the system operation constraints and objectives in probabilistic terms, and derives their distribution functions and statistics through Monte Carlo simulation. As the performance criteria of a hydrosystem operation will generally be highly nonlinear and highly nonconvex functions of the control variables, a second Monte Carlo procedure, implementing stochastic optimization, is necessary to optimize system performance and evaluate the control variables of the system. The latter is facilitated if the entire representation is parsimonious, i.e. if the number of control variables is kept at a minimum by involving a suitable system parameterization.
The approach is illustrated through three examples for (a) a hypothetical system of two reservoirs
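
    A toy version of the two nested Monte Carlo loops described in this abstract — an inner stochastic simulation of a hypothetical single reservoir, and an outer search over one control variable — might look like the following sketch. All inflow, capacity, and demand figures are invented for illustration and are not taken from the paper.

```python
import random

def simulate_reliability(release_target, n_months=6000, capacity=300.0, seed=7):
    """Monte Carlo evaluation of one candidate operating policy for a
    hypothetical single-reservoir system (all numbers illustrative).

    Policy: release `release_target` per month when water allows,
    otherwise release whatever is available. Performance measure:
    reliability, the fraction of months the target is fully met."""
    rng = random.Random(seed)
    storage, months_met = capacity / 2.0, 0
    for _ in range(n_months):
        inflow = rng.lognormvariate(3.0, 0.6)     # synthetic monthly inflow
        water = storage + inflow
        release = min(release_target, water)
        if release >= release_target:
            months_met += 1
        storage = min(capacity, water - release)  # excess spills
    return months_met / n_months

# Outer Monte Carlo search over the single control variable, a crude
# stand-in for the stochastic optimization step: the largest steady
# release that still achieves 99% reliability.
candidates = [10 + 2 * k for k in range(11)]      # targets 10, 12, ..., 30
feasible = [r for r in candidates if simulate_reliability(r) >= 0.99]
best_release = max(feasible)
```

    Because the system is evaluated only through simulation, the same outer loop works unchanged if the inner model is replaced by a detailed multi-reservoir simulator.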

  12. Status of Monte-Carlo Event Generators

    SciTech Connect

    Hoeche, Stefan; /SLAC

    2011-08-11

    Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off the initial- and final-state particles and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.

  13. Monte Carlo algorithm for free energy calculation.

    PubMed

    Bi, Sheng; Tong, Ning-Hua

    2015-07-01

    We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows an excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.

  14. MBR Monte Carlo Simulation in PYTHIA8

    NASA Astrophysics Data System (ADS)

    Ciesielski, R.

    We present the MBR (Minimum Bias Rockefeller) Monte Carlo simulation of (anti)proton-proton interactions and its implementation in the PYTHIA8 event generator. We discuss the total, elastic, and total-inelastic cross sections, and three contributions from diffraction dissociation processes that contribute to the latter: single diffraction, double diffraction, and central diffraction or double-Pomeron exchange. The event generation follows a renormalized-Regge-theory model, successfully tested using CDF data. Based on the MBR-enhanced PYTHIA8 simulation, we present cross-section predictions for the LHC and beyond, up to collision energies of 50 TeV.

  15. Monte Carlo procedure for protein design

    NASA Astrophysics Data System (ADS)

    Irbäck, Anders; Peterson, Carsten; Potthast, Frank; Sandelin, Erik

    1998-11-01

    A method for sequence optimization in protein models is presented. The approach, which has inherited its basic philosophy from recent work by Deutsch and Kurosky [Phys. Rev. Lett. 76, 323 (1996)] by maximizing conditional probabilities rather than minimizing energy functions, is based upon a different and very efficient multisequence Monte Carlo scheme. By construction, the method ensures that the designed sequences represent good folders thermodynamically. A bootstrap procedure for the sequence-space search is devised, making very large chains feasible. The algorithm is successfully explored on the two-dimensional HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] with chain lengths N=16, 18, and 32.

  16. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while large differences appear in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  17. Markov chain Monte Carlo without likelihoods.

    PubMed

    Marjoram, Paul; Molitor, John; Plagnol, Vincent; Tavare, Simon

    2003-12-23

    Many stochastic simulation approaches for generating observations from a posterior distribution depend on knowing a likelihood function. However, for many complex probability models, such likelihoods are either impossible or computationally prohibitive to obtain. Here we present a Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods. It can also be used in frequentist applications, in particular for maximum-likelihood estimation. The approach is illustrated by an example of ancestral inference in population genetics. A number of open problems are highlighted in the discussion.
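
    The likelihood-free acceptance rule is easy to sketch for a toy model: Normal data with unknown mean, sample mean as summary statistic, flat prior, and a symmetric random-walk proposal. This illustrates the general mechanism only, not the population-genetics application of the paper.

```python
import random
import statistics

def abc_mcmc(observed, n_iter=20000, eps=0.1, step=0.5, seed=42):
    """Likelihood-free MCMC: instead of evaluating a likelihood,
    simulate a dataset from the proposed parameter and accept only if
    its summary statistic lands within eps of the observed one.

    Toy model: data ~ Normal(theta, 1), summary = sample mean,
    flat prior on [-10, 10], symmetric random-walk proposal."""
    rng = random.Random(seed)
    n = len(observed)
    s_obs = statistics.fmean(observed)
    theta = s_obs                 # start near the data to shorten burn-in
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.uniform(-step, step)   # symmetric proposal
        if -10.0 <= prop <= 10.0:                 # flat prior: MH ratio is 1
            sim_mean = statistics.fmean(rng.gauss(prop, 1.0) for _ in range(n))
            if abs(sim_mean - s_obs) < eps:       # likelihood-free acceptance
                theta = prop
        chain.append(theta)
    return chain

rng0 = random.Random(0)
data = [rng0.gauss(2.0, 1.0) for _ in range(50)]
chain = abc_mcmc(data)
posterior_mean = statistics.fmean(chain[len(chain) // 2:])
```

    With a non-flat prior or an asymmetric proposal, the acceptance step also includes the usual Metropolis-Hastings ratio of priors and proposal densities; the likelihood itself is never evaluated.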

  18. Discovering correlated fermions using quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Wagner, Lucas K.; Ceperley, David M.

    2016-09-01

    It has become increasingly feasible to use quantum Monte Carlo (QMC) methods to study correlated fermion systems for realistic Hamiltonians. We give a summary of these techniques targeted at researchers in the field of correlated electrons, focusing on the fundamentals, capabilities, and current status of this technique. The QMC methods often offer the highest accuracy solutions available for systems in the continuum, and, since they address the many-body problem directly, the simulations can be analyzed to obtain insight into the nature of correlated quantum behavior.

  19. Exascale Monte Carlo R&D

    SciTech Connect

    Marcus, Ryan C.

    2012-07-24

    This presentation covers (1) exascale computing - different technologies and getting there; (2) the high-performance proof-of-concept MCMini - features and results; and (3) the OpenCL toolkit Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL seems like a good, hardware-agnostic tool. MCMini demonstrates the possibility for GPGPU-based Monte Carlo methods - it shows great scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.

  20. Quantum Monte Carlo calculations for light nuclei

    SciTech Connect

    Wiringa, R.B.

    1998-08-01

    Quantum Monte Carlo calculations of ground and low-lying excited states for nuclei with A ≤ 8 are made using a realistic Hamiltonian that fits NN scattering data. Results for more than 30 different (J′, T) states, plus isobaric analogs, are obtained and the known excitation spectra are reproduced reasonably well. Various density and momentum distributions and electromagnetic form factors and moments have also been computed. These are the first microscopic calculations that directly produce nuclear shell structure from realistic NN interactions.

  1. Introduction to Cluster Monte Carlo Algorithms

    NASA Astrophysics Data System (ADS)

    Luijten, E.

    This chapter provides an introduction to cluster Monte Carlo algorithms for classical statistical-mechanical systems. A brief review of the conventional Metropolis algorithm is given, followed by a detailed discussion of the lattice cluster algorithm developed by Swendsen and Wang and the single-cluster variant introduced by Wolff. For continuum systems, the geometric cluster algorithm of Dress and Krauth is described. It is shown how their geometric approach can be generalized to incorporate particle interactions beyond hardcore repulsions, thus forging a connection between the lattice and continuum approaches. Several illustrative examples are discussed.
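
    As a concrete reference point for the lattice algorithms discussed in this chapter, here is a compact sketch of Wolff single-cluster updates for the 2D Ising model (lattice size, temperatures, and run length chosen purely for illustration):

```python
import math
import random

def wolff_mean_abs_m(L=12, T=5.0, n_clusters=1500, seed=3):
    """Wolff single-cluster updates for the 2D Ising model (J = kB = 1)
    on an L x L periodic lattice; returns the mean |magnetization| per
    spin over the second half of the run. A bond to an aligned neighbor
    joins the cluster with probability p = 1 - exp(-2/T)."""
    rng = random.Random(seed)
    p_add = 1.0 - math.exp(-2.0 / T)
    spins = [[1] * L for _ in range(L)]
    mags = []
    for _ in range(n_clusters):
        i, j = rng.randrange(L), rng.randrange(L)
        s0 = spins[i][j]
        stack, cluster = [(i, j)], {(i, j)}
        while stack:                       # grow the cluster
            x, y = stack.pop()
            for nx, ny in (((x + 1) % L, y), ((x - 1) % L, y),
                           (x, (y + 1) % L), (x, (y - 1) % L)):
                if (spins[nx][ny] == s0 and (nx, ny) not in cluster
                        and rng.random() < p_add):
                    cluster.add((nx, ny))
                    stack.append((nx, ny))
        for x, y in cluster:               # flip the whole cluster at once
            spins[x][y] = -s0
        mags.append(abs(sum(map(sum, spins))) / (L * L))
    return sum(mags[n_clusters // 2:]) / (n_clusters - n_clusters // 2)

m_cold = wolff_mean_abs_m(T=1.5)   # ordered phase, T below Tc ~ 2.269
m_hot = wolff_mean_abs_m(T=5.0)    # disordered phase
```

    Flipping an entire correlated cluster in one move is what suppresses the critical slowing down that plagues single-spin Metropolis updates near Tc.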

  2. Cluster hybrid Monte Carlo simulation algorithms.

    PubMed

    Plascak, J A; Ferrenberg, Alan M; Landau, D P

    2002-06-01

    We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.

  3. Cluster hybrid Monte Carlo simulation algorithms

    NASA Astrophysics Data System (ADS)

    Plascak, J. A.; Ferrenberg, Alan M.; Landau, D. P.

    2002-06-01

    We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.

  4. Monte Carlo simulation for the transport beamline

    NASA Astrophysics Data System (ADS)

    Romano, F.; Attili, A.; Cirrone, G. A. P.; Carpinelli, M.; Cuttone, G.; Jia, S. B.; Marchetto, F.; Russo, G.; Schillaci, F.; Scuderi, V.; Tramontana, A.; Varisano, A.

    2013-07-01

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.

  5. Generating moment matching scenarios using optimization techniques

    SciTech Connect

    Mehrotra, Sanjay; Papp, Dávid

    2013-05-16

    An optimization-based method is proposed to generate moment matching scenarios for numerical integration and its use in stochastic programming. The main advantage of the method is its flexibility: it can generate scenarios matching any prescribed set of moments of the underlying distribution rather than matching all moments up to a certain order, and the distribution can be defined over an arbitrary set. This allows for a reduction in the number of scenarios and allows the scenarios to be better tailored to the problem at hand. The method is based on a semi-infinite linear programming formulation of the problem that is shown to be solvable with polynomial iteration complexity. A practical column generation method is implemented. The column generation subproblems are polynomial optimization problems; however, they need not be solved to optimality. It is found that the columns in the column generation approach can be efficiently generated by random sampling. The number of scenarios generated matches a lower bound of Tchakaloff's. The rate of convergence of the approximation error is established for continuous integrands, and an improved bound is given for smooth integrands. Extensive numerical experiments are presented in which variants of the proposed method are compared to Monte Carlo and quasi-Monte Carlo methods on both numerical integration problems and stochastic optimization problems. The benefits of being able to match any prescribed set of moments, rather than all moments up to a certain order, are also demonstrated using optimization problems with 100-dimensional random vectors. Here, empirical results show that the proposed approach outperforms Monte Carlo and quasi-Monte Carlo based approaches on the tested problems.
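
    For orientation, the classical first-two-moment special case of moment matching takes only a few lines: draw a sample, then apply an affine correction so the prescribed mean and standard deviation are matched exactly. The optimization method of the abstract generalizes this to arbitrary prescribed moment sets and distributions; the sketch below is only the simple baseline.

```python
import random

def moment_matched_scenarios(mean, std, n, seed=11):
    """Generate n scenarios whose sample mean and (population) standard
    deviation match the prescribed values exactly. This is the classical
    two-moment baseline, not the semi-infinite LP method of the paper."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    m = sum(xs) / n
    s = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    # Affine correction: shift and rescale so the moments hold exactly.
    return [mean + std * (x - m) / s for x in xs]

scenarios = moment_matched_scenarios(mean=5.0, std=2.0, n=25)
```

    The affine trick cannot fix third or higher moments simultaneously, which is exactly the gap the optimization-based generation addresses.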

  6. Generating moment matching scenarios using optimization techniques

    DOE PAGES

    Mehrotra, Sanjay; Papp, Dávid

    2013-05-16

    An optimization-based method is proposed to generate moment matching scenarios for numerical integration and its use in stochastic programming. The main advantage of the method is its flexibility: it can generate scenarios matching any prescribed set of moments of the underlying distribution rather than matching all moments up to a certain order, and the distribution can be defined over an arbitrary set. This allows for a reduction in the number of scenarios and allows the scenarios to be better tailored to the problem at hand. The method is based on a semi-infinite linear programming formulation of the problem that is shown to be solvable with polynomial iteration complexity. A practical column generation method is implemented. The column generation subproblems are polynomial optimization problems; however, they need not be solved to optimality. It is found that the columns in the column generation approach can be efficiently generated by random sampling. The number of scenarios generated matches a lower bound of Tchakaloff's. The rate of convergence of the approximation error is established for continuous integrands, and an improved bound is given for smooth integrands. Extensive numerical experiments are presented in which variants of the proposed method are compared to Monte Carlo and quasi-Monte Carlo methods on both numerical integration problems and stochastic optimization problems. The benefits of being able to match any prescribed set of moments, rather than all moments up to a certain order, are also demonstrated using optimization problems with 100-dimensional random vectors. Here, empirical results show that the proposed approach outperforms Monte Carlo and quasi-Monte Carlo based approaches on the tested problems.

  7. State-of-the-art Monte Carlo 1988

    SciTech Connect

    Soran, P.D.

    1988-06-28

    Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.

  8. 3. Photographic copy of map. San Carlos Project, Arizona. Irrigation ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Photographic copy of map. San Carlos Project, Arizona. Irrigation System. Department of the Interior. United States Indian Service. No date. Circa 1939. (Source: Henderson, Paul. U.S. Indian Irrigation Service. Supplemental Storage Reservoir, Gila River. November 10, 1939, RG 115, San Carlos Project, National Archives, Rocky Mountain Region, Denver, CO.) - San Carlos Irrigation Project, Lands North & South of Gila River, Coolidge, Pinal County, AZ

  9. MILAGRO IMPLICIT MONTE CARLO: NEW CAPABILITIES AND RESULTS

    SciTech Connect

    T. URBATSCH; T. EVANS

    2000-12-01

    Milagro is a stand-alone, radiation-only, code that performs nonlinear radiative transfer calculations using the Fleck and Cummings method of Implicit Monte Carlo (IMC). Milagro is an object-oriented, C++ code that utilizes classes in our group's (CCS-4) radiation transport library. Milagro and its underlying classes have been significantly upgraded since 1998, when results from Milagro were first presented. Most notably, the object-oriented design has been revised to allow for optimal stand-alone parallel efficiency and rapid integration of new classes. For example, the better design, coupled with stringent component testing, allowed for immediate integration of the full domain decomposition parallel scheme. (It is a simple philosophy: spend time on the design, and debug early and once.) Milagro's classes are templated on mesh type. Currently, it runs on an orthogonal, structured, not-necessarily-uniform, Cartesian mesh of up to three dimensions, an RZ-Wedge mesh, and soon a tetrahedral mesh. Milagro considers one-frequency, or "grey," radiation with isotropic scattering, user-defined analytic opacities and equation-of-state, and various source types: surface, material, and radiation. Tallies produced by Milagro include energy and momentum deposition. In parallel, Milagro can run on a mesh that is fully replicated on all processors or on a mesh that is fully decomposed in the spatial domain. Milagro is reproducible, regardless of number of processors or parallel topology, and it now exactly conserves energy both globally and locally. Milagro has the capability for EnSight graphics and restarting. Finally, Milagro has been well verified with its use of Design-by-Contract™, component tests, and regression tests, and with its agreement to results of analytic test problems.
By successfully running analytic and benchmark problems, Milagro serves to integrally verify all of its underlying classes, thus paving the way for other service packages based on these

  10. Liquid crystal free energy relaxation by a theoretically informed Monte Carlo method using a finite element quadrature approach.

    PubMed

    Armas-Pérez, Julio C; Hernández-Ortiz, Juan P; de Pablo, Juan J

    2015-12-28

    A theoretically informed Monte Carlo method is proposed for the simulation of liquid crystals on the basis of theoretical representations in terms of coarse-grained free energy functionals. The free energy functional is described in the framework of the Landau-de Gennes formalism. A piecewise finite element discretization is used to approximate the alignment field, thereby providing an excellent geometrical representation of curved interfaces and accurate integration of the free energy. The method is suitable for situations where the free energy functional includes highly non-linear terms, including chirality or high-order deformation modes. The validity of the method is established by comparing the results of Monte Carlo simulations to traditional Ginzburg-Landau minimizations of the free energy using a finite difference scheme, and its usefulness is demonstrated in the context of simulations of chiral liquid crystal droplets with and without nanoparticle inclusions.

  11. A Monte Carlo Simulation for Understanding Energy Measurements of Beta Particles Detected by the UCNb Experiment

    NASA Astrophysics Data System (ADS)

    Feng, Chi; UCNb Collaboration

    2011-10-01

    It is theorized that contributions to the Fierz interference term from scalar interaction beyond the Standard Model could be detectable in the spectrum of neutron beta-decay. The UCNb experiment run at the Los Alamos Neutron Science Center aims to accurately measure the neutron beta-decay energy spectrum to detect a nonzero interference term. The instrument consists of a cubic "integrating sphere" calorimeter with up to 4 photomultiplier tubes attached. The inside of the calorimeter is coated with white paint and a thin UV scintillating layer made of deuterated polystyrene to contain the ultracold neutrons. A Monte Carlo simulation using the Geant4 toolkit is developed in order to provide an accurate method of energy reconstruction. Offline calibration with the Kellogg Radiation Laboratory 140 keV electron gun and conversion electron sources will be used to validate the Monte Carlo simulation to give confidence in the energy reconstruction methods and to better understand systematics in the experiment data.

  12. A step beyond the Monte Carlo method in economics: Application of multivariate normal distribution

    NASA Astrophysics Data System (ADS)

    Kabaivanov, S.; Malechkova, A.; Marchev, A.; Milev, M.; Markovska, V.; Nikolova, K.

    2015-11-01

    In this paper we discuss the numerical algorithm of Milev-Tagliani [25] used for pricing of discrete double barrier options. The problem can be reduced to the accurate valuation of an n-dimensional path integral with the probability density function of a multivariate normal distribution. The efficient solution of this problem with the Milev-Tagliani algorithm is a step beyond the classical application of Monte Carlo for option pricing. We explore continuous and discrete monitoring of asset path pricing, compare the error of frequently applied quantitative methods such as the Monte Carlo method, and finally analyze the accuracy of the Milev-Tagliani algorithm in light of the results of Hong, Lee, and Li [16].

  13. Monte Carlo simulations within avalanche rescue

    NASA Astrophysics Data System (ADS)

    Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg

    2016-04-01

    Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. Here we present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
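
    The probing-depth part of such a calculation can be sketched as follows, with a purely hypothetical lognormal burial-depth distribution (median 1.0 m) standing in for the calibrated inputs the authors use:

```python
import random

def probe_success_probability(probe_depth, n_trials=50000, seed=5):
    """Fraction of simulated burials reached by a probe of the given
    length. Burial depth (m) is drawn from a hypothetical lognormal
    distribution - illustrative only, not the study's calibrated data."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_trials)
               if rng.lognormvariate(0.0, 0.5) <= probe_depth)
    return hits / n_trials

# Shortest probing depth, in 0.1 m steps, that reaches at least 95%
# of the simulated victims.
depth = next(d / 10.0 for d in range(5, 41)
             if probe_success_probability(d / 10.0) >= 0.95)
```

    Swapping in an empirically fitted burial-depth distribution, or adding further random parameters such as search-strip width, changes only the sampling step, which is the flexibility the MC approach provides.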

  14. Calculating Pi Using the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Williamson, Timothy

    2013-11-01

    During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the Center for Sustainable Energy at Notre Dame University (RET @ cSEND), working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10²¹ antineutrinos per second, with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how total antineutrino flux could be obtained from such a small sample, I read about the Monte Carlo simulation method. Further investigation led me to the Monte Carlo method page of Wikipedia, where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically computer simulations or purely mathematical treatments. It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
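
    The rice-sprinkling activity corresponds to a few lines of code: sample points uniformly in the unit square and count the fraction falling under the quarter-circle arc, which estimates pi/4.

```python
import random

def estimate_pi(n_points, seed=2012):
    """Estimate pi by sprinkling random points in the unit square and
    counting the fraction inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n_points)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_points

pi_est = estimate_pi(200000)
```

    The statistical error shrinks like 1/sqrt(n), so quadrupling the number of grains (or sampled points) roughly halves the spread of the estimate.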

  15. Multilevel Monte Carlo simulation of Coulomb collisions

    SciTech Connect

    Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.

    2014-10-01

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. We discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
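
    The telescoping idea behind multilevel Monte Carlo can be illustrated on a toy SDE. This is a sketch of the Euler–Maruyama variant for geometric Brownian motion, not the authors' Coulomb-collision code; for brevity every level uses the same number of samples, whereas a real MLMC run allocates fewer samples to the finer, more expensive levels.

```python
import random
import math

random.seed(1)

def gbm_pair(level, M=2, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """One coupled (fine, coarse) Euler-Maruyama path pair for geometric
    Brownian motion; the coarse path reuses the fine path's Brownian increments."""
    nf = M ** level          # number of fine timesteps at this level
    dtf = T / nf
    xf, xc, dw_sum = x0, x0, 0.0
    for i in range(nf):
        dw = random.gauss(0.0, math.sqrt(dtf))
        xf += mu * xf * dtf + sigma * xf * dw
        dw_sum += dw
        if level > 0 and (i + 1) % M == 0:   # advance coarse path every M fine steps
            xc += mu * xc * (M * dtf) + sigma * xc * dw_sum
            dw_sum = 0.0
    return (xf, xc) if level > 0 else (xf, 0.0)  # level 0 has no coarser path

def mlmc_estimate(levels=5, samples=20_000):
    """Telescoping sum E[P_0] + sum over l of E[P_l - P_(l-1)]."""
    est = 0.0
    for level in range(levels):
        total = 0.0
        for _ in range(samples):
            fine, coarse = gbm_pair(level)
            total += fine - coarse
        est += total / samples
    return est

est = mlmc_estimate()
print(est)  # analytic mean of X_T is exp(mu*T) = exp(0.05), about 1.0513
```

    The level corrections have rapidly shrinking variance, which is what lets MLMC reach a target accuracy at lower cost than single-level sampling.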

  16. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect

    Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; Pieper, Steven C.; Schiavilla, Rocco; Schmidt, K. E,; Wiringa, Robert B.

    2014-10-19

    Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  17. Quantum Monte Carlo methods for nuclear physics

    DOE PAGES

    Carlson, Joseph A.; Gandolfi, Stefano; Pederiva, Francesco; ...

    2014-10-19

    Quantum Monte Carlo methods have proved very valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. We review the nuclear interactions and currents, and describe the continuum Quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. We present a variety of results including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. We also describe low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  18. Geometrical Monte Carlo simulation of atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Yuksel, Demet; Yuksel, Heba

    2013-09-01

    Atmospheric turbulence has a significant impact on the quality of a laser beam propagating through the atmosphere over long distances. Turbulence causes intensity scintillation and beam wander from propagation through turbulent eddies of varying sizes and refractive index. This can severely impair the operation of target designation and Free-Space Optical (FSO) communications systems. In addition, experimenting on an FSO communication system is rather tedious and difficult. Interference from numerous environmental factors affects the results and inflates the error variance of experimental outcomes. In the stronger turbulence regimes especially, simulation and analysis of turbulence-induced beams require careful treatment. We propose a new geometrical model to assess the phase shift of a laser beam propagating through turbulence. The atmosphere along the laser beam propagation path will be modeled as a spatial distribution of spherical bubbles with refractive index discontinuity calculated from a Gaussian distribution with the mean value being the index of air. For each statistical representation of the atmosphere, the path of rays will be analyzed using geometrical optics. These Monte Carlo techniques will assess the phase shift as a summation of the phases that arrive at the same point at the receiver. Accordingly, there would be dark and bright spots at the receiver that give an idea regarding the intensity pattern without having to solve the wave equation. The Monte Carlo analysis will be compared with the predictions of wave theory.
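
    The spirit of the geometrical model, summing the phases of many rays at a single receiver point, can be sketched as follows; the bubble count and per-bubble phase spread are illustrative assumptions, not the paper's calibrated values.

```python
import random
import cmath

random.seed(7)

def received_intensity(n_rays=500, mean_bubbles=20, phase_sigma=0.1):
    """Each ray crosses a random number of refractive 'bubbles', each adding a
    small random phase; the coherent sum of the rays gives the intensity."""
    field = 0 + 0j
    for _ in range(n_rays):
        n_bubbles = max(0, int(random.gauss(mean_bubbles, 5)))
        phase = sum(random.gauss(0.0, phase_sigma) for _ in range(n_bubbles))
        field += cmath.exp(1j * phase)
    # Normalised so that 1.0 corresponds to no turbulence (all phases equal).
    return abs(field) ** 2 / n_rays ** 2

intensity = received_intensity()
print(intensity)  # below 1.0: turbulence partially decoheres the rays
```

    Repeating this over a grid of receiver points would produce the dark and bright spots described in the abstract.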

  19. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain-decomposed Monte Carlo particle transport on large numbers of processors (millions). Previous algorithms were not scalable, and their parallel overhead became more computationally costly than the numerical simulation itself. The main algorithms we consider are:

    • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
    • Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
    • Global particle find: when particles end up on the wrong processor, globally resolves their locations to the correct processor based on particle coordinates and the background domain decomposition.
    • Supporting algorithms: visualizing constructive solid geometry, sourcing particles, deciding when particle-streaming communication is complete, and spatial redecomposition.

    These algorithms are some of the most important parallel algorithms required for domain-decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms on up to 2 million MPI processes on the Sequoia supercomputer.

  20. Discrete range clustering using Monte Carlo methods

    NASA Technical Reports Server (NTRS)

    Chatterji, G. B.; Sridhar, B.

    1993-01-01

    For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
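
    A minimal version of Monte Carlo clustering with simulated annealing can be shown on synthetic 1D range data; the cost function, cooling schedule, and data below are illustrative choices, not the paper's.

```python
import random
import math

random.seed(3)

# Toy range data: two "obstacles" at roughly 10 m and 30 m (synthetic).
points = ([random.gauss(10, 0.5) for _ in range(20)] +
          [random.gauss(30, 0.5) for _ in range(20)])

def cost(labels, k=2):
    """Sum of squared distances of each point to its cluster mean."""
    total = 0.0
    for c in range(k):
        members = [p for p, l in zip(points, labels) if l == c]
        if members:
            m = sum(members) / len(members)
            total += sum((p - m) ** 2 for p in members)
    return total

def anneal(k=2, steps=5000, t0=5.0):
    labels = [random.randrange(k) for _ in points]
    e = cost(labels, k)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-6      # linear cooling schedule
        i = random.randrange(len(points))
        old = labels[i]
        labels[i] = random.randrange(k)          # propose a random relabelling
        e_new = cost(labels, k)
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new                            # accept the move
        else:
            labels[i] = old                      # reject: restore the label
    return labels, e

labels, final_cost = anneal()
print(final_cost)  # near the two-cluster optimum for this data
```

    Setting the temperature to zero throughout recovers the basic (greedy) Monte Carlo variant the paper compares against.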

  1. Quantum Monte Carlo methods for nuclear physics

    DOE PAGES

    Carlson, J.; Gandolfi, S.; Pederiva, F.; ...

    2015-09-09

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  2. CosmoMC: Cosmological MonteCarlo

    NASA Astrophysics Data System (ADS)

    Lewis, Antony; Bridle, Sarah

    2011-06-01

    We present a fast Markov Chain Monte Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent CMB experiments and provide parameter constraints, including sigma_8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae Ia and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_nu < 0.3 eV), equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendices we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints and the effect of the prior, assess goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.
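
    The Markov Chain Monte Carlo engine underlying such parameter exploration is a Metropolis sampler. Here is a sketch on a toy two-parameter Gaussian posterior; the target is purely illustrative, as CosmoMC samples the real CMB likelihood.

```python
import random
import math

random.seed(5)

def log_post(x, y):
    """Toy log-posterior: independent Gaussians for two 'parameters',
    centred at x = 0 and y = 1 (illustrative, not a real likelihood)."""
    return -0.5 * (x ** 2 + (y - 1.0) ** 2 / 0.25)

def metropolis(n=50_000, step=0.5):
    x, y = 0.0, 0.0
    lp = log_post(x, y)
    chain = []
    for _ in range(n):
        xp = x + random.gauss(0, step)           # symmetric random-walk proposal
        yp = y + random.gauss(0, step)
        lpp = log_post(xp, yp)
        if math.log(random.random()) < lpp - lp:  # Metropolis acceptance test
            x, y, lp = xp, yp, lpp
        chain.append((x, y))
    return chain

chain = metropolis()
mean_y = sum(y for _, y in chain) / len(chain)
print(mean_y)  # approaches the posterior mean of y, which is 1.0
```

    Importance sampling, as used in the appendices, amounts to reweighting such a chain by the ratio of a new likelihood to the one that generated it.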

  3. Quantum Monte Carlo methods for nuclear physics

    SciTech Connect

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-09-09

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab-initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. Furthermore, a coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  4. Quantum Monte Carlo methods for nuclear physics

    NASA Astrophysics Data System (ADS)

    Carlson, J.; Gandolfi, S.; Pederiva, F.; Pieper, Steven C.; Schiavilla, R.; Schmidt, K. E.; Wiringa, R. B.

    2015-07-01

    Quantum Monte Carlo methods have proved valuable to study the structure and reactions of light nuclei and nucleonic matter starting from realistic nuclear interactions and currents. These ab initio calculations reproduce many low-lying states, moments, and transitions in light nuclei, and simultaneously predict many properties of light nuclei and neutron matter over a rather wide range of energy and momenta. The nuclear interactions and currents are reviewed along with a description of the continuum quantum Monte Carlo methods used in nuclear physics. These methods are similar to those used in condensed matter and electronic structure but naturally include spin-isospin, tensor, spin-orbit, and three-body interactions. A variety of results are presented, including the low-lying spectra of light nuclei, nuclear form factors, and transition matrix elements. Low-energy scattering techniques, studies of the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter as found in neutron stars are also described. A coherent picture of nuclear structure and dynamics emerges based upon rather simple but realistic interactions and currents.

  5. Quantum Monte Carlo for atoms and molecules

    SciTech Connect

    Barnett, R.N.

    1989-11-01

    The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy-eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H₂, LiH, Li₂, and H₂O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li₂, and H₂O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time, and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
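
    The diffusion QMC idea, random walkers diffusing and branching according to the local potential, can be sketched for the 1D harmonic oscillator, whose exact ground-state energy is 0.5 in natural units. The population-control feedback used here is a simple illustrative choice, not the authors' scheme, and no importance sampling is applied.

```python
import random
import math

random.seed(9)

def dmc(n_walkers=500, n_steps=2000, dt=0.01):
    """Bare diffusion Monte Carlo for V(x) = x^2/2.  Walkers diffuse by a
    Gaussian step, then branch with weight exp(-dt*(V - E_ref)); E_ref is
    steered so the population stays near its target size."""
    walkers = [random.uniform(-1.0, 1.0) for _ in range(n_walkers)]
    e_ref = 0.0
    trace = []
    for step in range(n_steps):
        new = []
        for x in walkers:
            x += random.gauss(0.0, math.sqrt(dt))           # diffusion step
            w = math.exp(-dt * (0.5 * x * x - e_ref))        # branching weight
            new.extend([x] * int(w + random.random()))       # stochastic rounding
        walkers = new or [0.0]
        # Anchor E_ref at the mean potential, plus population-control feedback.
        v_mean = sum(0.5 * x * x for x in walkers) / len(walkers)
        e_ref = v_mean + 0.1 * math.log(n_walkers / len(walkers))
        if step >= n_steps // 2:                             # discard equilibration
            trace.append(e_ref)
    return sum(trace) / len(trace)

e0 = dmc()
print(e0)  # should settle near the exact value 0.5
```

    Fixed-node calculations for electrons add a trial wave function whose nodes constrain the walkers, which is the extra machinery this toy example omits.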

  6. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

    SciTech Connect

    WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.

    2007-01-10

    MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

  7. Chemical application of diffusion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1983-10-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. As an example the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on our VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX is discussed. Since CH2 has only eight electrons, most of the loops in this application are fairly short. The longest inner loops run over the set of atomic basis functions. The CPU time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures. Finally, preliminary work on restructuring the algorithm to compute the separate Monte Carlo realizations in parallel is discussed.

  8. Multilevel Monte Carlo simulation of Coulomb collisions

    DOE PAGES

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; ...

    2014-05-29

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.

  9. Multilevel Monte Carlo simulation of Coulomb collisions

    SciTech Connect

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.

    2014-05-29

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.

  10. Four decades of implicit Monte Carlo

    SciTech Connect

    Wollaber, Allan B.

    2016-02-23

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  11. Four decades of implicit Monte Carlo

    DOE PAGES

    Wollaber, Allan B.

    2016-02-23

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  12. Monte Carlo modeling of spatial coherence: free-space diffraction

    PubMed Central

    Fischer, David G.; Prahl, Scott A.; Duncan, Donald D.

    2008-01-01

    We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions. PMID:18830335
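
    The Gaussian-copula construction for a source with prescribed correlation can be sketched in a few lines: correlate standard normals, then map them to uniform marginals through the normal CDF. Mapping the uniforms to phases on [0, 2π) is an illustrative choice, not the paper's exact construction.

```python
import random
import math

random.seed(11)

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def copula_pair(rho):
    """Draw (u1, u2) with uniform marginals and Gaussian-copula dependence rho."""
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho * rho) * random.gauss(0, 1)
    return norm_cdf(z1), norm_cdf(z2)

pairs = [copula_pair(0.9) for _ in range(200_000)]
# Illustrative use: correlated phases for two source points.
phases = [(2 * math.pi * u, 2 * math.pi * v) for u, v in pairs]

# Empirical correlation of the uniforms (Var(U) = 1/12 for U uniform on [0, 1]).
n = len(pairs)
cov = sum((u - 0.5) * (v - 0.5) for u, v in pairs) / n
corr = cov * 12
print(corr)  # below the Gaussian rho of 0.9, as expected for the uniforms
```

    Extending this to a full spatial coherence function replaces the single rho with a covariance matrix over all source points.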

  13. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    SciTech Connect

    Grimes, Joshua; Celler, Anna

    2014-09-15

    Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc-labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation and voxel level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume (D90
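
    The voxel S value technique amounts to a discrete convolution of the time-integrated activity map with a dose kernel, dose(r) = Σ A(r′) S(r − r′). The kernel values and activity map below are made-up illustrative numbers, not a real radionuclide kernel.

```python
import numpy as np

# Time-integrated activity map (arbitrary units): one active voxel.
tia = np.zeros((16, 16, 16))
tia[8, 8, 8] = 1.0e6

# Toy isotropic 5x5x5 kernel: dose per decay falling off with distance
# from the source voxel (hypothetical S values, peaked at the centre).
g = np.indices((5, 5, 5)) - 2
r2 = (g ** 2).sum(axis=0)
kernel = 1.0 / (1.0 + r2)

# Discrete convolution: shift the activity map by every kernel offset
# and accumulate the weighted contributions.
dose = np.zeros_like(tia)
for dx in range(-2, 3):
    for dy in range(-2, 3):
        for dz in range(-2, 3):
            dose += kernel[dx + 2, dy + 2, dz + 2] * np.roll(
                tia, (dx, dy, dz), axis=(0, 1, 2))

print(dose[8, 8, 8], dose[8, 8, 10])  # self-dose vs cross-dose two voxels away
```

    A full Monte Carlo calculation replaces the precomputed kernel with explicit particle transport through the patient-specific density map, which is why the two methods diverge for cross-organ irradiation.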

  14. MC 93 - Proceedings of the International Conference on Monte Carlo Simulation in High Energy and Nuclear Physics

    NASA Astrophysics Data System (ADS)

    Dragovitsch, Peter; Linn, Stephan L.; Burbank, Mimi

    1994-01-01

    The Table of Contents for the book is as follows: * Preface * Heavy Fragment Production for Hadronic Cascade Codes * Monte Carlo Simulations of Space Radiation Environments * Merging Parton Showers with Higher Order QCD Monte Carlos * An Order-αs Two-Photon Background Study for the Intermediate Mass Higgs Boson * GEANT Simulation of Hall C Detector at CEBAF * Monte Carlo Simulations in Radioecology: Chernobyl Experience * UNIMOD2: Monte Carlo Code for Simulation of High Energy Physics Experiments; Some Special Features * Geometrical Efficiency Analysis for the Gamma-Neutron and Gamma-Proton Reactions * GISMO: An Object-Oriented Approach to Particle Transport and Detector Modeling * Role of MPP Granularity in Optimizing Monte Carlo Programming * Status and Future Trends of the GEANT System * The Binary Sectioning Geometry for Monte Carlo Detector Simulation * A Combined HETC-FLUKA Intranuclear Cascade Event Generator * The HARP Nucleon Polarimeter * Simulation and Data Analysis Software for CLAS * TRAP -- An Optical Ray Tracing Program * Solutions of Inverse and Optimization Problems in High Energy and Nuclear Physics Using Inverse Monte Carlo * FLUKA: Hadronic Benchmarks and Applications * Electron-Photon Transport: Always so Good as We Think? Experience with FLUKA * Simulation of Nuclear Effects in High Energy Hadron-Nucleus Collisions * Monte Carlo Simulations of Medium Energy Detectors at COSY Jülich * Complex-Valued Monte Carlo Method and Path Integrals in the Quantum Theory of Localization in Disordered Systems of Scatterers * Radiation Levels at the SSCL Experimental Halls as Obtained Using the CLOR89 Code System * Overview of Matrix Element Methods in Event Generation * Fast Electromagnetic Showers * GEANT Simulation of the RMC Detector at TRIUMF and Neutrino Beams for KAON * Event Display for the CLAS Detector * Monte Carlo Simulation of High Energy Electrons in Toroidal Geometry * GEANT 3.14 vs. EGS4: A Comparison Using the DØ Uranium/Liquid Argon

  15. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    Lubos Mitas

    2011-01-26

    NCSU research group has been focused on accomplishing the key goals of this initiative: establishing new generation of quantum Monte Carlo (QMC) computational tools as a part of Endstation petaflop initiative for use at the DOE ORNL computational facilities and for use by computational electronic structure community at large; carrying out high accuracy quantum Monte Carlo demonstration projects in application of these tools to the forefront electronic structure problems in molecular and solid systems; expanding the impact of QMC methods and approaches; explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed quantum Monte Carlo code (QWalk, www.qwalk.org) which was significantly expanded and optimized using funds from this support and at present became an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the period of the last decade. The code was optimized and tested extensively on a number of parallel architectures including petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculations of pfaffians and introduction of backflow coordinates together with overall organization of the code and random walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing and verified efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high quality object-oriented C++ and includes also interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported PI for one month during summers, a full-time postdoc and partially three graduate students over the period of the grant duration; it has resulted in 13

  16. The practice of recent radiative transfer Monte Carlo advances and its contribution to the field of microorganisms cultivation in photobioreactors

    NASA Astrophysics Data System (ADS)

    Dauchet, Jérémi; Blanco, Stéphane; Cornet, Jean-François; El Hafi, Mouna; Eymet, Vincent; Fournier, Richard

    2013-10-01

    The present text illustrates the practice of integral formulation, zero-variance approaches and sensitivity evaluations in the field of radiative transfer Monte Carlo simulation, as well as the practical implementation of the corresponding algorithms, for such realistic systems as photobioreactors involving spectral integration, multiple scattering and complex geometries. We try to argue that even in such non-academic contexts, strong benefits can be expected from the effort of translating the considered Monte Carlo algorithm into a rigorously equivalent integral formulation. Modifying the initial algorithm to simultaneously compute sensitivities is then straightforward (except for domain deformation sensitivities) and the question of enhancing convergence is turned into that of modeling a set of well identified physical quantities.

  17. Monte Carlo-Minimization and Monte Carlo Recursion Approaches to Structure and Free Energy.

    NASA Astrophysics Data System (ADS)

    Li, Zhenqin

    1990-08-01

    Biological systems are intrinsically "complex", involving many degrees of freedom, heterogeneity, and strong interactions among components. For the simplest biological substances, e.g., biomolecules, which obey the laws of thermodynamics, we may attempt a statistical mechanical investigation. Even for these simplest many-body systems, and assuming the microscopic interactions are completely known, current computational methods for characterizing the overall structure and free energy face a fundamental challenge: the amount of computation grows exponentially with the number of degrees of freedom. As an attempt to surmount such problems, two computational procedures, the Monte Carlo-minimization and Monte Carlo recursion methods, have been developed as general approaches to the determination of structure and free energy of a complex thermodynamic system. We describe, in Chapter 2, the Monte Carlo-minimization method, which attempts to simulate natural protein folding processes and to overcome the multiple-minima problem. The Monte Carlo-minimization procedure has been applied to a pentapeptide, Met-enkephalin, leading consistently to the lowest-energy structure, which is most likely the global minimum structure for Met-enkephalin in the absence of water, given the ECEPP energy parameters. In Chapter 3 of this thesis, we develop a Monte Carlo recursion method to compute the free energy of a given physical system with known interactions, which has been applied to a 32-particle Lennard-Jones fluid. In Chapter 4, we describe an efficient implementation of the recursion procedure for the computation of the free energy of liquid water, with both MCY and TIP4P potential parameters for water. As a further demonstration of the power of the recursion method for calculating free energy, a general formalism of cluster formation from monatomic vapor is developed in Chapter 5. 
The Gibbs free energy of constrained clusters can be computed efficiently using the

  18. Markov chain Monte Carlo methods: an introductory example

    NASA Astrophysics Data System (ADS)

    Klauenberg, Katy; Elster, Clemens

    2016-02-01

    When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms is currently hindered by the difficulty of assessing the convergence of MCMC output and thus of assuring the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
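
    The step-by-step algorithm described above really does fit in a few lines. Below is a minimal random-walk Metropolis-Hastings sketch; the function names and the standard-normal target are illustrative, not the paper's metrology example:

```python
import math
import random

def metropolis_hastings(log_target, x0, proposal_sd, n_samples, seed=1):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, sd) and accept
    with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x, logp = x0, log_target(x0)
    samples = []
    for _ in range(n_samples):
        x_new = x + rng.gauss(0.0, proposal_sd)
        logp_new = log_target(x_new)
        # Accept or reject; working in log space avoids numerical underflow.
        if rng.random() < math.exp(min(0.0, logp_new - logp)):
            x, logp = x_new, logp_new
        samples.append(x)  # a rejected move repeats the current state
    return samples

# Demo: sample a standard normal from its unnormalized log-density.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0,
                            proposal_sd=1.0, n_samples=50000)
mean = sum(chain) / len(chain)
var = sum((s - mean) ** 2 for s in chain) / len(chain)
```

    The same few lines carry over to any target whose unnormalized density can be evaluated pointwise, which is what makes the method attractive when GUM-style analytic propagation fails.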

  19. Non-analog Monte Carlo estimators for radiation momentum deposition

    SciTech Connect

    Densmore, Jeffery D; Hykes, Joshua M

    2008-01-01

    The standard method for calculating radiation momentum deposition in Monte Carlo simulations is the analog estimator, which tallies the change in a particle's momentum at each interaction with matter. Unfortunately, the analog estimator can suffer from large amounts of statistical error. In this paper, we present three new non-analog techniques for estimating momentum deposition. Specifically, we use absorption, collision, and track-length estimators to evaluate a simple integral expression for momentum deposition that does not contain terms that can cause large amounts of statistical error in the analog scheme. We compare our new non-analog estimators to the analog estimator with a set of test problems that encompass a wide range of material properties and both isotropic and anisotropic scattering. In nearly all cases, the new non-analog estimators outperform the analog estimator. The track-length estimator consistently yields the highest performance gains, improving upon the analog-estimator figure of merit by factors of up to two orders of magnitude.
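
    The variance advantage of a track-length tally over a point-event tally can be illustrated on a deliberately simple toy problem: a purely absorbing slab, scoring the absorption probability rather than momentum deposition. This is a scalar caricature of the comparison, not the authors' momentum-deposition estimators:

```python
import math
import random

def compare_estimators(sigma_a, L, n, seed=7):
    """Pure absorber slab [0, L]; a monoenergetic beam enters at x = 0.
    Analog estimator: score 1 if the particle is absorbed inside the slab.
    Track-length estimator: score sigma_a * (path length inside the slab)."""
    rng = random.Random(seed)
    analog, track = [], []
    for _ in range(n):
        d = -math.log(rng.random()) / sigma_a   # sampled distance to absorption
        analog.append(1.0 if d < L else 0.0)
        track.append(sigma_a * min(d, L))
    def stats(x):
        m = sum(x) / len(x)
        v = sum((s - m) ** 2 for s in x) / (len(x) - 1)
        return m, v
    return stats(analog), stats(track)

(ma, va), (mt, vt) = compare_estimators(sigma_a=1.0, L=1.0, n=100000)
exact = 1.0 - math.exp(-1.0)   # true absorption probability, about 0.632
```

    Both estimators converge to the same answer, but the track-length tally has the smaller variance because every history contributes a nonzero score rather than a 0-or-1 outcome.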

  20. Bayesian adaptive Markov chain Monte Carlo estimation of genetic parameters.

    PubMed

    Mathew, B; Bauer, A M; Koistinen, P; Reetz, T C; Léon, J; Sillanpää, M J

    2012-10-01

    Accurate and fast estimation of genetic parameters that underlie quantitative traits using mixed linear models with additive and dominance effects is of great importance in both natural and breeding populations. Here, we propose a new fast adaptive Markov chain Monte Carlo (MCMC) sampling algorithm for the estimation of genetic parameters in the linear mixed model with several random effects. In the learning phase of our algorithm, we use the hybrid Gibbs sampler to learn the covariance structure of the variance components. In the second phase of the algorithm, we use this covariance structure to formulate an effective proposal distribution for a Metropolis-Hastings algorithm, which uses a likelihood function in which the random effects have been integrated out. Compared with the hybrid Gibbs sampler, the new algorithm had better mixing properties and was approximately twice as fast to run. Our new algorithm was able to detect different modes in the posterior distribution. In addition, the posterior mode estimates from the adaptive MCMC method were close to the REML (residual maximum likelihood) estimates. Moreover, our exponential prior for inverse variance components was vague and enabled the estimated mode of the posterior variance to be practically zero, which was in agreement with the support from the likelihood (in the case of no dominance). The method performance is illustrated using simulated data sets with replicates and field data in barley.

  1. A Monte Carlo simulation approach for flood risk assessment

    NASA Astrophysics Data System (ADS)

    Agili, Hachem; Chokmani, Karem; Oubennaceur, Khalid; Poulin, Jimmy; Marceau, Pascal

    2016-04-01

    Floods are the most frequent and most damaging natural disaster in Canada. The issue of assessing and managing the risk related to this disaster has become increasingly crucial for both local and national authorities. Brigham, a municipality located in southern Quebec Province, is one of the regions most heavily affected by this disaster because of frequent overflows of the Yamaska River, occurring two to three times per year. Since Hurricane Irene hit the region in 2011, causing considerable socio-economic damage, the implementation of mitigation measures has become a major priority for this municipality. To do this, a preliminary study to evaluate the risk to which this region is exposed is essential. Conventionally, approaches based only on the characterization of the hazard (e.g. floodplain extent, flood depth) are adopted to study the risk of flooding. In order to improve the knowledge of this risk, a Monte Carlo simulation approach combining information on the hazard with vulnerability-related aspects of buildings has been developed. This approach integrates three main components, namely hydrological modeling through flow-probability functions, hydraulic modeling using flow-submersion height functions, and the study of building damage based on damage functions adapted to the Quebec habitat. The application of this approach allows estimating the annual average cost of the damage caused by floods to buildings. The results obtained will be useful for local authorities to support their decisions on risk management and prevention against this disaster.
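
    The three-component chain described above (flow probability, then flow-to-submersion height, then depth-to-damage) can be sketched as a single Monte Carlo loop. The Gumbel parameters, rating curve, and damage curve below are illustrative stand-ins, not the calibrated Quebec functions used in the study:

```python
import math
import random

def expected_annual_damage(n_years=200000, seed=3):
    """Monte Carlo chain: annual peak flow -> submersion depth -> building
    damage, averaged over many simulated years. All three relations are
    hypothetical placeholders for the study's fitted functions."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_years):
        # Annual peak flow (m^3/s): Gumbel-distributed, hypothetical parameters.
        u = rng.random()
        flow = 80.0 - 25.0 * math.log(-math.log(u))
        # Flow -> submersion depth (m): hypothetical rating curve;
        # no flooding below 100 m^3/s.
        depth = max(0.0, 0.02 * (flow - 100.0))
        # Depth -> damage ($): hypothetical curve saturating at $150k per building.
        damage = 150000.0 * (1.0 - math.exp(-depth / 0.8))
        total += damage
    return total / n_years

ead = expected_annual_damage()
```

    Averaging the sampled annual damages yields the expected (average) annual damage, which is the decision-support quantity the study reports to local authorities.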

  2. Quantum Monte Carlo simulations for disordered Bose systems

    SciTech Connect

    Trivedi, N.

    1992-03-01

    Interacting bosons in a random potential can be used to model {sup 3}He adsorbed in porous media, universal aspects of the superconductor-insulator transition in disordered films, and vortices in disordered type II superconductors. We study a model of bosons on a 2D square lattice with a random potential of strength V and on-site repulsion U. We first describe the path integral Monte Carlo algorithm used to simulate this system. The 2D quantum problem (at T=0) gets mapped onto a classical problem of strings or directed polymers moving in 3D, with each string representing the world line of a boson. We discuss efficient ways of sampling the polymer configurations as well as the permutations between the bosons. We calculate the superfluid density and the excitation spectrum. Using these results we distinguish between a superfluid, a localized or "Bose glass" insulator with gapless excitations, and a Mott insulator with a finite gap to excitations (found only at commensurate densities). We discover novel effects arising from the interplay between V and U and present preliminary results for the phase diagram at incommensurate and commensurate densities.

  4. Monte Carlo simulation of light fluence calculation during pleural PDT.

    PubMed

    Meo, Julia L; Zhu, Timothy

    2013-02-02

    A thorough understanding of light distribution in the desired tissue is necessary for accurate light dosimetry in PDT. Solving the problem of light dose depends, in part, on the geometry of the tissue to be treated. When considering PDT in the thoracic cavity for treatment of malignant, localized tumors such as those observed in malignant pleural mesothelioma (MPM), changes in light dose caused by the cavity geometry should be accounted for in order to improve treatment efficacy. Cavity-like geometries demonstrate what is known as the "integrating sphere effect", where multiple light scattering off the cavity walls induces an overall increase in light dose in the cavity. We present a Monte Carlo simulation of light fluence based on a spherical and an elliptical cavity geometry with various dimensions. The tissue optical properties as well as the non-scattering medium (air and water) vary. We have also introduced small absorption inside the cavity to simulate the effect of blood absorption. We expand the MC simulation to track photons both within the cavity and in the surrounding cavity walls. Simulations are run for a variety of cavity optical properties determined using spectroscopic methods. We conclude from the MC simulation that the light fluence inside the cavity is inversely proportional to the surface area.
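
    The "integrating sphere effect" has a compact Monte Carlo caricature: if the cavity wall returns a photon with probability R at each impact, the number of wall impacts per photon is geometric with mean 1/(1 - R), so the wall fluence scales as this gain times the injected power divided by the cavity surface area. A sketch under these simplifying assumptions (diffuse walls, no absorption in the cavity medium; not the paper's full photon-transport code):

```python
import random

def mean_wall_hits(reflectance, n_photons=100000, seed=5):
    """Each photon launched into a closed cavity strikes the wall repeatedly;
    at each strike it is re-reflected with probability `reflectance`,
    otherwise absorbed. The mean number of strikes, 1/(1 - R), is the
    'integrating sphere' gain."""
    rng = random.Random(seed)
    total_hits = 0
    for _ in range(n_photons):
        hits = 1                      # first wall impact
        while rng.random() < reflectance:
            hits += 1                 # diffusely reflected to another wall point
        total_hits += hits
    return total_hits / n_photons

gain = mean_wall_hits(0.9)   # expect about 1/(1 - 0.9) = 10
```

    Since the multiply-reflected light spreads over the whole wall, the fluence for a fixed source power goes as gain/area, consistent with the inverse proportionality to surface area reported above.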

  6. Thermally activated repolarization of antiferromagnetic particles: Monte Carlo dynamics

    NASA Astrophysics Data System (ADS)

    Soloviev, S. V.; Popkov, A. F.; Knizhnik, A. A.; Iskandarova, I. M.

    2017-02-01

    Based on the equation of motion of an antiferromagnetic moment, taking into account a random field of thermal fluctuations, we propose a Monte Carlo (MC) scheme for the numerical simulation of the evolutionary dynamics of an antiferromagnetic particle, corresponding to the Langevin dynamics in the Kramers theory for the two-well potential. Conditions for the selection of the sphere of fluctuations of random deviations of the antiferromagnetic vector at an MC time step are found. A good agreement with the theory of Kramers thermal relaxation is demonstrated for varying temperatures and heights of energy barrier over a wide range of integration time steps in an overdamped regime. Based on the developed scheme, we performed illustrative calculations of the temperature drift of the exchange bias under the fast annealing of a ferromagnet-antiferromagnet structure, taking into account the random variation of anisotropy directions in antiferromagnetic grains and their sizes. The proposed approach offers promise for modeling magnetic sensors and spintronic memory devices containing heterostructures with antiferromagnetic layers.
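
    The Kramers-type two-well relaxation discussed above can be caricatured with a scalar Metropolis walker in a double-well potential; this is an illustrative toy, not the paper's vector antiferromagnet scheme. The mean escape time from one well grows sharply as the temperature drops relative to the barrier height:

```python
import math
import random

def mean_escape_steps(temperature, barrier=2.0, n_runs=100, step=0.2,
                      max_steps=200000, seed=11):
    """Metropolis walker in the double well U(x) = barrier * (x^2 - 1)^2,
    started in the left well; count steps until it crosses into the right
    well (x > 0.9), a discrete stand-in for Kramers thermal escape."""
    rng = random.Random(seed)
    def U(x):
        return barrier * (x * x - 1.0) ** 2
    total = 0
    for _ in range(n_runs):
        x = -1.0
        for t in range(1, max_steps + 1):
            x_new = x + rng.uniform(-step, step)
            dU = U(x_new) - U(x)
            # Metropolis acceptance at the given temperature
            if dU <= 0.0 or rng.random() < math.exp(-dU / temperature):
                x = x_new
            if x > 0.9:
                break
        total += t
    return total / n_runs

fast = mean_escape_steps(temperature=1.0)   # small barrier-to-temperature ratio
slow = mean_escape_steps(temperature=0.4)   # larger barrier-to-temperature ratio
```

    The Arrhenius-like growth of the escape time with barrier/temperature is the behavior the MC scheme above is validated against in the overdamped Kramers regime.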

  7. Scalable Metropolis Monte Carlo for simulation of hard shapes

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.

    2016-07-01

    We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.

  8. Monte Carlo Simulation of Endlinking Oligomers

    NASA Technical Reports Server (NTRS)

    Hinkley, Jeffrey A.; Young, Jennifer A.

    1998-01-01

    This report describes initial efforts to model the endlinking reaction of phenylethynyl-terminated oligomers. Several different molecular weights were simulated using the Bond Fluctuation Monte Carlo technique on a 20 x 20 x 20 unit lattice with periodic boundary conditions. After a monodisperse "melt" was equilibrated, chain ends were linked whenever they came within the allowed bond distance. Ends remained reactive throughout, so that multiple links were permitted. Even under these very liberal crosslinking assumptions, geometrical factors limited the degree of crosslinking. Average crosslink functionalities were 2.3 to 2.6; surprisingly, they did not depend strongly on the chain length. These results agreed well with the degrees of crosslinking inferred from experiment in a cured phenylethynyl-terminated polyimide oligomer.

  9. Exploring theory space with Monte Carlo reweighting

    DOE PAGES

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; ...

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.

  10. Chemical application of diffusion quantum Monte Carlo

    NASA Technical Reports Server (NTRS)

    Reynolds, P. J.; Lester, W. A., Jr.

    1984-01-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. This approach is receiving increasing attention in chemical applications as a result of its high accuracy. However, reducing statistical uncertainty remains a priority because chemical effects are often obtained as small differences of large numbers. As an example, the singlet-triplet energy splitting of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on the VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX, is discussed. The computational time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and from traditional computer architectures.

  11. San Carlos Apache Tribe - Energy Organizational Analysis

    SciTech Connect

    Rapp, James; Albert, Steve

    2012-04-01

    The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late 2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: the analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"); start-up staffing and other costs associated with the Phase 1 SCAT energy organization; an intern program; staff training; and tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.

  12. Monte Carlo calculation for microplanar beam radiography.

    PubMed

    Company, F Z; Allen, B J; Mino, C

    2000-09-01

    In radiography the scattered radiation from the off-target region decreases the contrast of the target image. We propose that a bundle of collimated, closely spaced, microplanar beams can reduce the scattered radiation and eliminate the effect of secondary electron dose, thus increasing the image dose contrast in the detector. The lateral and depth dose distributions of 20-200 keV microplanar beams are investigated using the EGS4 Monte Carlo code to calculate the depth doses and dose profiles in a 6 cm x 6 cm x 6 cm tissue phantom. The maximum dose on the primary beam axis (peak) and the minimum inter-beam scattered dose (valley) are compared at different photon energies and the optimum energy range for microbeam radiography is found. Results show that a bundle of closely spaced microplanar beams can give superior contrast imaging to a single macrobeam of the same overall area.

  13. Lunar Regolith Albedos Using Monte Carlos

    NASA Technical Reports Server (NTRS)

    Wilson, T. L.; Andersen, V.; Pinsky, L. S.

    2003-01-01

    The analysis of planetary regoliths for their backscatter albedos produced by cosmic rays (CRs) is important for space exploration and its potential contributions to science investigations in fundamental physics and astrophysics. Albedos affect all such experiments and the personnel that operate them. Groups have analyzed the production rates of various particles and elemental species by planetary surfaces when bombarded with Galactic CR fluxes, both theoretically and by means of various transport codes, some of which have emphasized neutrons. Here we report on the preliminary results of our current Monte Carlo investigation into the production of charged particles, neutrons, and neutrinos by the lunar surface using FLUKA. In contrast to previous work, the effects of charm are now included.

  14. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

    The general accuracy law that governs the Monte Carlo ray-tracing algorithms commonly used for the calculation of radiative entities in the thermal analysis of spacecraft is presented. These entities involve the transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes); in fact, the former is just a particular case of the latter. The accuracy model is then applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered in the discussion, similar results can be derived for the absolute error.

  15. Noncovalent Interactions by Quantum Monte Carlo.

    PubMed

    Dubecký, Matúš; Mitas, Lubos; Jurečka, Petr

    2016-05-11

    Quantum Monte Carlo (QMC) is a family of stochastic methods for solving quantum many-body problems such as the stationary Schrödinger equation. The review introduces basic notions of electronic structure QMC based on random walks in real space as well as its advances and adaptations to systems with noncovalent interactions. Specific issues such as fixed-node error cancellation, construction of trial wave functions, and efficiency considerations that allow for benchmark-quality QMC energy differences are described in detail. A comprehensive overview of articles covers QMC applications to systems with noncovalent interactions over the last three decades. The current status of QMC with regard to efficiency, applicability, and usability by nonexperts is discussed, together with further considerations about QMC developments, limitations, and unsolved challenges.

  16. Hybrid algorithms in quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kim, Jeongnim; Esler, Kenneth P.; McMinis, Jeremy; Morales, Miguel A.; Clark, Bryan K.; Shulenburger, Luke; Ceperley, David M.

    2012-12-01

    With advances in algorithms and growing computing powers, quantum Monte Carlo (QMC) methods have become a leading contender for high accuracy calculations for the electronic structure of realistic systems. The performance gain on recent HPC systems is largely driven by increasing parallelism: the number of compute cores of a SMP and the number of SMPs have been going up, as the Top500 list attests. However, the available memory as well as the communication and memory bandwidth per element has not kept pace with the increasing parallelism. This severely limits the applicability of QMC and the problem size it can handle. OpenMP/MPI hybrid programming provides applications with simple but effective solutions to overcome efficiency and scalability bottlenecks on large-scale clusters based on multi/many-core SMPs. We discuss the design and implementation of hybrid methods in QMCPACK and analyze its performance on current HPC platforms characterized by various memory and communication hierarchies.

  17. Monte Carlo applications to acoustical field solutions

    NASA Technical Reports Server (NTRS)

    Haviland, J. K.; Thanedar, B. D.

    1973-01-01

    The Monte Carlo technique is proposed for the determination of the acoustical pressure-time history at chosen points in a partial enclosure, the central idea of this technique being the tracing of acoustical rays. A statistical model is formulated and an algorithm for pressure is developed, the conformity of which is examined by two approaches and is shown to give the known results. The concepts that are developed are applied to the determination of the transient field due to a sound source in a homogeneous medium in a rectangular enclosure with perfect reflecting walls, and the results are compared with those presented by Mintzer based on the Laplace transform approach, as well as with a normal mode solution.

  18. Monte Carlo modeling and meteor showers

    NASA Technical Reports Server (NTRS)

    Kulikova, N. V.

    1987-01-01

    Prediction of short lived increases in the cosmic dust influx, the concentration in lower thermosphere of atoms and ions of meteor origin and the determination of the frequency of micrometeor impacts on spacecraft are all of scientific and practical interest and all require adequate models of meteor showers at an early stage of their existence. A Monte Carlo model of meteor matter ejection from a parent body at any point of space was worked out by other researchers. This scheme is described. According to the scheme, the formation of ten well known meteor streams was simulated and the possibility of genetic affinity of each of them with the most probable parent comet was analyzed. Some of the results are presented.

  19. Green's function Monte Carlo in nuclear physics

    SciTech Connect

    Carlson, J.

    1990-01-01

    We review the status of Green's Function Monte Carlo (GFMC) methods as applied to problems in nuclear physics. New methods have been developed to handle the spin and isospin degrees of freedom that are a vital part of any realistic nuclear physics problem, whether at the level of quarks or nucleons. We discuss these methods and then summarize results obtained recently for light nuclei, including ground state energies, three-body forces, charge form factors and the Coulomb sum. As an illustration of the applicability of GFMC to quark models, we also consider the possible existence of bound exotic multi-quark states within the framework of flux-tube quark models. 44 refs., 8 figs., 1 tab.

  20. Resist develop prediction by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sohn, Dong-Soo; Jeon, Kyoung-Ah; Sohn, Young-Soo; Oh, Hye-Keun

    2002-07-01

    Various resist develop models have been suggested to express the phenomena, from the pioneering work of Dill's model in 1975 to the recent Shipley enhanced notch model. The statistical Monte Carlo method can be applied to processes such as development and post-exposure bake. The motions of the developer during the development process were traced using this method. We have considered that the surface edge roughness of the resist depends on the weight percentage of protected and de-protected polymer in the resist. The results agree well with those of other papers. This study can be helpful for the development of new photoresists and developers that can be used to pattern device features smaller than 100 nm.

  1. Parallel tempering Monte Carlo in LAMMPS.

    SciTech Connect

    Rintoul, Mark Daniel; Plimpton, Steven James; Sears, Mark P.

    2003-11-01

    We present here the details of the implementation of the parallel tempering Monte Carlo technique into LAMMPS, a heavily used massively parallel molecular dynamics code at Sandia. This technique allows many replicas of a system to be run at different simulation temperatures. At various points in the simulation, configurations can be swapped between different temperature environments and then continued. This allows large regions of energy space to be sampled very quickly, and allows minimum-energy configurations to emerge in very complex systems, such as large biomolecular systems. By including this algorithm in an existing code, we immediately gain all of the previous work that had been put into LAMMPS, and allow this technique to quickly become available to the entire Sandia and international LAMMPS community. Finally, we present an example of this code applied to folding a small protein.
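
    The swap move at the heart of parallel tempering accepts an exchange between neighbouring temperatures with probability min(1, exp((beta_i - beta_j) * (E_i - E_j))). A minimal single-process sketch on a hypothetical 1D energy landscape (illustrative only; LAMMPS runs the replicas in parallel across processors):

```python
import math
import random

def pt_step(states, energies, betas, energy_fn, rng, step=0.5):
    """One parallel-tempering sweep: a Metropolis move in every replica,
    then one swap attempt between a random pair of neighbouring
    temperatures, accepted with min(1, exp((beta_i - beta_j)*(E_i - E_j)))."""
    for i, beta in enumerate(betas):
        x_new = states[i] + rng.uniform(-step, step)
        e_new = energy_fn(x_new)
        if e_new <= energies[i] or rng.random() < math.exp(-beta * (e_new - energies[i])):
            states[i], energies[i] = x_new, e_new
    i = rng.randrange(len(betas) - 1)
    d = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
    if d >= 0.0 or rng.random() < math.exp(d):
        states[i], states[i + 1] = states[i + 1], states[i]
        energies[i], energies[i + 1] = energies[i + 1], energies[i]

# Demo: an asymmetric double well whose global minimum lies near x = +1.
def energy(x):
    return (x * x - 1.0) ** 2 - 0.4 * x

rng = random.Random(13)
betas = [8.0, 4.0, 2.0, 1.0]       # cold -> hot ladder of inverse temperatures
states = [-1.0] * len(betas)       # every replica starts in the shallow well
energies = [energy(x) for x in states]
best_e = min(energies)
for _ in range(20000):
    pt_step(states, energies, betas, energy, rng)
    best_e = min(best_e, min(energies))
```

    Hot replicas cross the barrier easily and hand low-energy configurations down the temperature ladder through swaps, which is how minimum-energy structures emerge quickly in complex systems.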

  2. Geometric Monte Carlo and black Janus geometries

    NASA Astrophysics Data System (ADS)

    Bak, Dongsu; Kim, Chanju; Kim, Kyung Kiu; Min, Hyunsoo; Song, Jeong-Pil

    2017-04-01

    We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three- and five-dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite-temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.
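
    The abstract does not spell out the algorithm, but a classic Monte Carlo route to linear boundary-value problems that likewise needs no initial trial solution and handles arbitrary geometry is the floating random walk (walk on spheres), sketched here for the Laplace equation on the unit disk; the setup is illustrative, not the black-Janus solver itself:

```python
import math
import random

def walk_on_spheres(x, y, boundary_fn, eps=1e-3, n_walks=20000, seed=17):
    """Estimate a harmonic function u at (x, y) inside the unit disk:
    repeatedly jump to a uniform point on the largest circle centered at the
    current point that fits in the domain, and score the boundary value once
    the walker is within eps of the boundary."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while True:
            r = 1.0 - math.hypot(px, py)      # distance to the unit circle
            if r < eps:
                s = math.hypot(px, py)        # project onto the boundary, score
                total += boundary_fn(px / s, py / s)
                break
            theta = rng.uniform(0.0, 2.0 * math.pi)
            px += r * math.cos(theta)
            py += r * math.sin(theta)
    return total / n_walks

# Boundary data g(x, y) = x has the harmonic extension u(x, y) = x.
u = walk_on_spheres(0.3, 0.4, lambda bx, by: bx)   # expect about 0.3
```

    Because a harmonic function equals its average over any circle inside the domain, each jump preserves the expected value, so the estimator is unbiased up to the O(eps) boundary projection.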

  3. Monte Carlo simulations of medical imaging modalities

    SciTech Connect

    Estes, G.P.

    1998-09-01

    Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.

  4. Exploring theory space with Monte Carlo reweighting

    SciTech Connect

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
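
    The event-reweighting idea can be sketched in a few lines: each fully simulated event keeps its kinematics (and any attached detector simulation), and only its weight is rescaled by the ratio of the target model's density to the density it was generated under. The exponential densities below are toy stand-ins for matrix-element weights.

    ```python
    import math
    import random

    def reweight(events, p_gen, p_new):
        # Rescale each event weight by the ratio of target to generation
        # density; the expensive event simulation is re-used unchanged.
        return [(x, w * p_new(x) / p_gen(x)) for x, w in events]

    # Toy example: events generated from Exp(1), reweighted to describe Exp(2).
    rng = random.Random(42)
    events = [(rng.expovariate(1.0), 1.0) for _ in range(20000)]
    new = reweight(events,
                   lambda x: math.exp(-x),
                   lambda x: 2.0 * math.exp(-2.0 * x))
    weighted_mean = sum(w * x for x, w in new) / sum(w for _, w in new)
    ```

    The weighted mean approaches 1/2, the mean of Exp(2), even though no new events were simulated. In practice the method loses statistical efficiency when the two models populate phase space very differently, since a few events then carry very large weights.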

  5. The Monte Carlo Method. Popular Lectures in Mathematics.

    ERIC Educational Resources Information Center

    Sobol', I. M.

    The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…
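
    The booklet's theme, solving problems by simulating random quantities, is captured by the textbook example of estimating pi from random points in the unit square:

    ```python
    import random

    def mc_pi(n, seed=0):
        """Estimate pi by the fraction of uniform random points in the unit
        square that fall inside the quarter circle of radius 1."""
        rng = random.Random(seed)
        hits = sum(1 for _ in range(n)
                   if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
        return 4.0 * hits / n
    ```

    The statistical error shrinks like 1/sqrt(n), so each extra digit of accuracy costs a hundredfold more samples, which is exactly the slow convergence that quasi-Monte Carlo methods aim to improve on.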

  6. Economic Risk Analysis: Using Analytical and Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    O'Donnell, Brendan R.; Hickner, Michael A.; Barna, Bruce A.

    2002-01-01

    Describes the development and instructional use of a Microsoft Excel spreadsheet template that facilitates analytical and Monte Carlo risk analysis of investment decisions. Discusses a variety of risk assessment methods followed by applications of the analytical and Monte Carlo methods. Uses a case study to illustrate use of the spreadsheet tool…
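
    The analysis such a spreadsheet template performs can be sketched in a few lines: draw the uncertain inputs, compute the net present value for each draw, and read risk measures off the resulting distribution. All cash-flow figures and distributions below are invented for illustration.

    ```python
    import random

    def npv(cashflows, rate):
        # Net present value; cashflows[t] is the amount received in year t.
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

    def mc_npv(n=10000, seed=7):
        """Monte Carlo risk analysis of a hypothetical 4-year project:
        250 invested up front, uncertain yearly revenue and discount rate."""
        rng = random.Random(seed)
        results = []
        for _ in range(n):
            revenue = rng.gauss(100.0, 15.0)   # uncertain yearly revenue
            rate = rng.uniform(0.06, 0.10)     # uncertain discount rate
            results.append(npv([-250.0] + [revenue] * 4, rate))
        results.sort()
        median = results[n // 2]
        p_loss = sum(1 for v in results if v < 0.0) / n
        return median, p_loss

    median_npv, prob_of_loss = mc_npv()
    ```

    Unlike a single-point analytical estimate, the simulation yields a whole distribution, so quantities like the probability of a negative NPV fall out directly.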

  7. CARLOS: Computer-Assisted Instruction in Spanish at Dartmouth College.

    ERIC Educational Resources Information Center

    Turner, Ronald C.

    The computer-assisted instruction project in review Spanish, Computer-Assisted Review Lessons on Syntax (CARLOS), initiated at Dartmouth College in 1967-68, is described here. Tables are provided showing the results of the experiment on the basis of aptitude and achievement tests, and the procedure for implementing CARLOS as well as its place in…

  8. 33 CFR 117.267 - Big Carlos Pass.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Big Carlos Pass. 117.267 Section 117.267 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Florida § 117.267 Big Carlos Pass. The draw of...

  9. 4. Photographic copy of map. San Carlos Irrigation Project, Gila ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

4. Photographic copy of map. San Carlos Irrigation Project, Gila River Indian Reservation, Pinal County, Arizona. Department of the Interior. Office of Indian Affairs. 1940. (Source: SCIP Office, Coolidge, AZ) Photograph is an 8" x 10" enlargement from a 4" x 5" negative. - San Carlos Irrigation Project, Lands North & South of Gila River, Coolidge, Pinal County, AZ

  10. Accelerated GPU based SPECT Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-01

Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99m Tc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  11. Monte Carlo scatter correction for SPECT

    NASA Astrophysics Data System (ADS)

    Liu, Zemei

    The goal of this dissertation is to present a quantitatively accurate and computationally fast scatter correction method that is robust and easily accessible for routine applications in SPECT imaging. A Monte Carlo based scatter estimation method is investigated and developed further. The Monte Carlo simulation program SIMIND (Simulating Medical Imaging Nuclear Detectors), was specifically developed to simulate clinical SPECT systems. The SIMIND scatter estimation (SSE) method was developed further using a multithreading technique to distribute the scatter estimation task across multiple threads running concurrently on multi-core CPU's to accelerate the scatter estimation process. An analytical collimator that ensures less noise was used during SSE. The research includes the addition to SIMIND of charge transport modeling in cadmium zinc telluride (CZT) detectors. Phenomena associated with radiation-induced charge transport including charge trapping, charge diffusion, charge sharing between neighboring detector pixels, as well as uncertainties in the detection process are addressed. Experimental measurements and simulation studies were designed for scintillation crystal based SPECT and CZT based SPECT systems to verify and evaluate the expanded SSE method. Jaszczak Deluxe and Anthropomorphic Torso Phantoms (Data Spectrum Corporation, Hillsborough, NC, USA) were used for experimental measurements and digital versions of the same phantoms employed during simulations to mimic experimental acquisitions. This study design enabled easy comparison of experimental and simulated data. The results have consistently shown that the SSE method performed similarly or better than the triple energy window (TEW) and effective scatter source estimation (ESSE) methods for experiments on all the clinical SPECT systems. The SSE method is proven to be a viable method for scatter estimation for routine clinical use.
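
    For reference, the triple energy window (TEW) method that the SSE method is compared against estimates the scatter under the photopeak by trapezoidal interpolation from two narrow flanking energy windows. The window widths and counts below are arbitrary examples, not values from the dissertation.

    ```python
    def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
        """Trapezoidal estimate of scattered counts inside the photopeak
        window, interpolated from the counts per keV in the two narrow
        side windows (widths in keV)."""
        return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

    def tew_primary(c_peak, c_lower, c_upper, w_lower, w_upper, w_peak):
        # Scatter-corrected (primary) counts, clipped at zero for noisy pixels.
        scatter = tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak)
        return max(c_peak - scatter, 0.0)
    ```

    For example, with 4 keV side windows recording 100 and 60 counts and a 20 keV peak window, the scatter estimate is (100/4 + 60/4) * 20/2 = 400 counts.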

  12. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m) Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.

  13. Parton shower Monte Carlo event generators

    NASA Astrophysics Data System (ADS)

    Webber, Bryan

    2011-12-01

    A parton shower Monte Carlo event generator is a computer program designed to simulate the final states of high-energy collisions in full detail down to the level of individual stable particles. The aim is to generate a large number of simulated collision events, each consisting of a list of final-state particles and their momenta, such that the probability to produce an event with a given list is proportional (approximately) to the probability that the corresponding actual event is produced in the real world. The Monte Carlo method makes use of pseudorandom numbers to simulate the event-to-event fluctuations intrinsic to quantum processes. The simulation normally begins with a hard subprocess, shown as a black blob in Figure 1, in which constituents of the colliding particles interact at a high momentum scale to produce a few outgoing fundamental objects: Standard Model quarks, leptons and/or gauge or Higgs bosons, or hypothetical particles of some new theory. The partons (quarks and gluons) involved, as well as any new particles with colour, radiate virtual gluons, which can themselves emit further gluons or produce quark-antiquark pairs, leading to the formation of parton showers (brown). During parton showering the interaction scale falls and the strong interaction coupling rises, eventually triggering the process of hadronization (yellow), in which the partons are bound into colourless hadrons. On the same scale, the initial-state partons in hadronic collisions are confined in the incoming hadrons. In hadron-hadron collisions, the other constituent partons of the incoming hadrons undergo multiple interactions which produce the underlying event (green). Many of the produced hadrons are unstable, so the final stage of event generation is the simulation of the hadron decays.
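
    The falling-scale evolution described above is driven by the Sudakov form factor, the probability of no emission between two scales. In a toy model with a constant emission density c per unit log of the scale, the Sudakov from t1 down to t2 is (t2/t1)^c and successive emission scales can be generated by direct inversion. Real showers use the running coupling and splitting functions, which this sketch deliberately omits; t_start, t_cut, and c below are illustrative.

    ```python
    import random

    def shower_scales(t_start, t_cut, c=0.3, seed=5):
        """Toy parton shower: generate a strongly ordered sequence of
        emission scales. Solving Sudakov(t1, t2) = (t2/t1)**c = r for a
        uniform random r gives the next emission scale t2 = t1 * r**(1/c)."""
        rng = random.Random(seed)
        t, scales = t_start, []
        while True:
            t *= rng.random() ** (1.0 / c)   # invert the Sudakov
            if t <= t_cut:
                break                        # evolution reaches the cutoff
            scales.append(t)
        return scales

    # Shower from a hard scale of 1000 down to a cutoff of 1 (arbitrary units).
    emissions = shower_scales(1000.0, 1.0)
    ```

    The returned list is monotonically decreasing, mirroring how the interaction scale falls emission by emission until hadronization takes over at the cutoff.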

  14. Fission Matrix Capability for MCNP Monte Carlo

    SciTech Connect

    Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.

    2012-09-05

    In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP[1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
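
    The power iteration underlying both the Monte Carlo source iteration and the fission matrix method is simple to state: repeatedly apply the fission operator and renormalize. The 2x2 matrix below is a made-up stand-in for a spatially discretized fission kernel with weak inter-region coupling, not data from MCNP.

    ```python
    def power_iteration(F, src, iters=200):
        """Apply the discretized fission matrix repeatedly; the normalization
        factor converges to the fundamental eigenvalue k_eff and src to the
        fundamental fission source shape."""
        n = len(F)
        k = 1.0
        for _ in range(iters):
            new = [sum(F[i][j] * src[j] for j in range(n)) for i in range(n)]
            k = sum(new) / sum(src)
            src = [x / k for x in new]
        return k, src

    # Hypothetical two-region system with weak neutron communication.
    F = [[0.9, 0.1],
         [0.1, 0.8]]
    k_eff, shape = power_iteration(F, [1.0, 1.0])
    ```

    The convergence rate is governed by the dominance ratio: here the two eigenvalues are (1.7 ± sqrt(0.05))/2, so k_eff is about 0.962 and the error shrinks by roughly a factor 0.77 per iteration, which is exactly why high-dominance-ratio problems converge slowly.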

  15. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  16. Reconstruction of Monte Carlo replicas from Hessian parton distributions

    NASA Astrophysics Data System (ADS)

    Hou, Tie-Jiun; Gao, Jun; Huston, Joey; Nadolsky, Pavel; Schmidt, Carl; Stump, Daniel; Wang, Bo-Ting; Xie, Ke Ping; Dulat, Sayipjamal; Pumplin, Jon; Yuan, C. P.

    2017-03-01

    We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte-Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte-Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte-Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte-Carlo representation are derived. A correction is proposed to address a bias in asymmetric uncertainties introduced by the Taylor series approximation. A numerical program is made available for conversion of Hessian PDFs into Monte-Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
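
    The core of the Hessian-to-replica conversion, in its simplest symmetrized Gaussian form, draws one standard normal number per eigenvector direction and displaces the central PDF set along the corresponding error sets. The paper's actual procedure additionally handles asymmetric errors and enforces positivity, which this sketch omits; the one-direction example values are invented.

    ```python
    import random

    def make_replica(f0, plus, minus, rng):
        """Build one Monte Carlo replica from a central PDF set f0 and the
        plus/minus Hessian eigenvector error sets. One Gaussian random
        number r_i is shared across all PDF values k for direction i."""
        r = [rng.gauss(0.0, 1.0) for _ in plus]
        return [f0[k] + sum(r[i] * (plus[i][k] - minus[i][k]) / 2.0
                            for i in range(len(plus)))
                for k in range(len(f0))]

    # Toy one-value, one-direction example: central value 1.0 with plus/minus
    # excursions 1.3 and 0.8, i.e. a symmetrized uncertainty of 0.25.
    rng = random.Random(11)
    replicas = [make_replica([1.0], [[1.3]], [[0.8]], rng) for _ in range(20000)]
    values = [rep[0] for rep in replicas]
    ```

    Averaged over many replicas, the ensemble mean reproduces the central value and the standard deviation reproduces the symmetrized Hessian uncertainty, which is the basic consistency check for any such conversion.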

  17. Top quark mass measurement in the lepton + jets channel using a matrix element method and in situ jet energy calibration.

    PubMed

    Aaltonen, T; González, B Alvarez; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Appel, J A; Apresyan, A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Auerbach, B; Aurisano, A; Azfar, F; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauce, M; Bauer, G; Bedeschi, F; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Bland, K R; Blumenfeld, B; Bocci, A; Bodek, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Brigliadori, L; Brisuda, A; Bromberg, C; Brucken, E; Bucciantonio, M; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Calancha, C; Camarda, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Compostella, G; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Crescioli, F; Almenar, C Cuenca; Cuevas, J; Culbertson, R; Dagenhart, D; d'Ascenzo, N; Datta, M; de Barbaro, P; De Cecco, S; De Lorenzo, G; Dell'Orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Devoto, F; d'Errico, M; Di Canto, A; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Dorigo, T; Ebina, K; Elagin, A; Eppig, A; Erbacher, R; Errede, D; Errede, S; Ershaidat, N; Eusebi, R; Fang, H C; Farrington, S; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garcia, J E; Garfinkel, A F; Garosi, P; Gerberich, H; Gerchtein, E; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Ginsburg, C M; Giokaris, N; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; 
Goldin, D; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; da Costa, J Guimaraes; Gunay-Unalan, Z; Haber, C; Hahn, S R; Halkiadakis, E; Hamaguchi, A; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harr, R F; Hatakeyama, K; Hays, C; Heck, M; Heinrich, J; Herndon, M; Hewamanage, S; Hidas, D; Hocker, A; Hopkins, W; Horn, D; Hou, S; Hughes, R E; Hurwitz, M; Husemann, U; Hussain, N; Hussein, M; Huston, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kamon, T; Karchin, P E; Kato, Y; Ketchum, W; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, H W; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirby, M; Klimenko, S; Kondo, K; Kong, D J; Konigsberg, J; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kuhr, T; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, E; Lee, H S; Lee, J S; Lee, S W; Leo, S; Leone, S; Lewis, J D; Lin, C-J; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, Q; Liu, T; Lockwitz, S; Lockyer, N S; Loginov, A; Lucchesi, D; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; Madrak, R; Maeshima, K; Makhoul, K; Maksimovic, P; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Martínez, M; Martínez-Ballarín, R; Mastrandrea, P; Mathis, M; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Mesropian, C; Miao, T; Mietlicki, D; Mitra, A; Miyake, H; Moed, S; Moggi, N; Mondragon, M N; Moon, C S; Moore, R; Morello, M J; Morlock, J; Fernandez, P Movilla; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; Nakano, I; Napier, A; Nett, J; Neu, 
C; Neubauer, M S; Nielsen, J; Nodulman, L; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Ortolan, L; Griso, S Pagan; Pagliarone, C; Palencia, E; Papadimitriou, V; Paramonov, A A; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pilot, J; Pitts, K; Plager, C; Pondrom, L; Potamianos, K; Poukhov, O; Prokoshin, F; Pronko, A; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Rescigno, M; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rubbo, F; Ruffini, F; Ruiz, A; Russ, J; Rusu, V; Safonov, A; Sakumoto, W K; Santi, L; Sartori, L; Sato, K; Saveliev, V; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sforza, F; Sfyrla, A; Shalhout, S Z; Shears, T; Shepard, P F; Shimojima, M; Shiraishi, S; Shochet, M; Shreyber, I; Siegrist, J; Simonenko, A; Sinervo, P; Sissakian, A; Sliwa, K; Smith, J R; Snider, F D; Soha, A; Somalwar, S; Sorin, V; Squillacioti, P; Stanitzki, M; Denis, R St; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Sudo, Y; Sukhanov, A; Suslov, I; Takemasa, K; Takeuchi, Y; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thome, J; Thompson, G A; Thomson, E; Ttito-Guzmán, P; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Trovato, M; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vila, I; Vilar, R; Volobouev, I; Vogel, M; Volpi, G; Wagner, P; Wagner, R L; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Wick, F; Williams, H H; Wilson, J S; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; 
Wolfe, H; Wright, T; Wu, X; Wu, Z; Yamamoto, K; Yamaoka, J; Yang, T; Yang, U K; Yang, Y C; Yao, W-M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanetti, A; Zeng, Y; Zucchelli, S

    2010-12-17

    A precision measurement of the top quark mass m(t) is obtained using a sample of tt events from pp collisions at the Fermilab Tevatron with the CDF II detector. Selected events require an electron or muon, large missing transverse energy, and exactly four high-energy jets, at least one of which is tagged as coming from a b quark. A likelihood is calculated using a matrix element method with quasi-Monte Carlo integration taking into account finite detector resolution and jet mass effects. The event likelihood is a function of m(t) and a parameter Δ(JES) used to calibrate the jet energy scale in situ. Using a total of 1087 events in 5.6 fb(-1) of integrated luminosity, a value of m(t)=173.0 ± 1.2 GeV/c(2) is measured.
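
    The quasi-Monte Carlo integration mentioned here replaces pseudorandom points with a low-discrepancy sequence, whose more uniform coverage typically pushes the integration error well below the 1/sqrt(N) of plain Monte Carlo for smooth integrands. A minimal sketch with a 2-D Halton sequence follows; the CDF likelihood itself integrates over many more kinematic dimensions.

    ```python
    def radical_inverse(i, base):
        """Van der Corput radical inverse: reflect the base-b digits of i
        about the radix point to obtain a point in [0, 1)."""
        f, r = 1.0, 0.0
        while i > 0:
            f /= base
            r += f * (i % base)
            i //= base
        return r

    def qmc_integrate(f, n):
        # 2-D Halton sequence: radical inverses in the coprime bases 2 and 3.
        return sum(f(radical_inverse(i, 2), radical_inverse(i, 3))
                   for i in range(1, n + 1)) / n

    # Example: the integral of x*y over the unit square is exactly 1/4.
    estimate = qmc_integrate(lambda x, y: x * y, 4096)
    ```

    As the head of this page notes, the catch is error estimation: the points are not independent, so the usual Monte Carlo variance estimator does not apply directly to such a point set.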

  18. Top Quark Mass Measurement in the lepton+jets Channel Using a Matrix Element Method and in situ Jet Energy Calibration

    SciTech Connect

    Aaltonen, T.; Brucken, E.; Devoto, F.; Mehtala, P.; Orava, R.; Alvarez Gonzalez, B.; Casal, B.; Gomez, G.; Palencia, E.; Rodrigo, T.; Ruiz, A.; Scodellaro, L.; Vila, I.; Vilar, R.; Amerio, S.; Dorigo, T.; Gresele, A.; Lazzizzera, I.; Amidei, D.; Campbell, M.

    2010-12-17

    A precision measurement of the top quark mass m{sub t} is obtained using a sample of tt events from pp collisions at the Fermilab Tevatron with the CDF II detector. Selected events require an electron or muon, large missing transverse energy, and exactly four high-energy jets, at least one of which is tagged as coming from a b quark. A likelihood is calculated using a matrix element method with quasi-Monte Carlo integration taking into account finite detector resolution and jet mass effects. The event likelihood is a function of m{sub t} and a parameter {Delta}{sub JES} used to calibrate the jet energy scale in situ. Using a total of 1087 events in 5.6 fb{sup -1} of integrated luminosity, a value of m{sub t}=173.0{+-}1.2 GeV/c{sup 2} is measured.

  19. Top Quark Mass Measurement in the lepton+jets Channel Using a Matrix Element Method and in situ Jet Energy Calibration

    NASA Astrophysics Data System (ADS)

    Aaltonen, T.; Álvarez González, B.; Amerio, S.; Amidei, D.; Anastassov, A.; Annovi, A.; Antos, J.; Apollinari, G.; Appel, J. A.; Apresyan, A.; Arisawa, T.; Artikov, A.; Asaadi, J.; Ashmanskas, W.; Auerbach, B.; Aurisano, A.; Azfar, F.; Badgett, W.; Barbaro-Galtieri, A.; Barnes, V. E.; Barnett, B. A.; Barria, P.; Bartos, P.; Bauce, M.; Bauer, G.; Bedeschi, F.; Beecher, D.; Behari, S.; Bellettini, G.; Bellinger, J.; Benjamin, D.; Beretvas, A.; Bhatti, A.; Binkley, M.; Bisello, D.; Bizjak, I.; Bland, K. R.; Blumenfeld, B.; Bocci, A.; Bodek, A.; Bortoletto, D.; Boudreau, J.; Boveia, A.; Brau, B.; Brigliadori, L.; Brisuda, A.; Bromberg, C.; Brucken, E.; Bucciantonio, M.; Budagov, J.; Budd, H. S.; Budd, S.; Burkett, K.; Busetto, G.; Bussey, P.; Buzatu, A.; Calancha, C.; Camarda, S.; Campanelli, M.; Campbell, M.; Canelli, F.; Canepa, A.; Carls, B.; Carlsmith, D.; Carosi, R.; Carrillo, S.; Carron, S.; Casal, B.; Casarsa, M.; Castro, A.; Catastini, P.; Cauz, D.; Cavaliere, V.; Cavalli-Sforza, M.; Cerri, A.; Cerrito, L.; Chen, Y. C.; Chertok, M.; Chiarelli, G.; Chlachidze, G.; Chlebana, F.; Cho, K.; Chokheli, D.; Chou, J. P.; Chung, W. H.; Chung, Y. S.; Ciobanu, C. I.; Ciocci, M. A.; Clark, A.; Compostella, G.; Convery, M. E.; Conway, J.; Corbo, M.; Cordelli, M.; Cox, C. A.; Cox, D. J.; Crescioli, F.; Cuenca Almenar, C.; Cuevas, J.; Culbertson, R.; Dagenhart, D.; D'Ascenzo, N.; Datta, M.; de Barbaro, P.; de Cecco, S.; de Lorenzo, G.; Dell'Orso, M.; Deluca, C.; Demortier, L.; Deng, J.; Deninno, M.; Devoto, F.; D'Errico, M.; di Canto, A.; di Ruzza, B.; Dittmann, J. R.; D'Onofrio, M.; Donati, S.; Dong, P.; Dorigo, T.; Ebina, K.; Elagin, A.; Eppig, A.; Erbacher, R.; Errede, D.; Errede, S.; Ershaidat, N.; Eusebi, R.; Fang, H. C.; Farrington, S.; Feindt, M.; Fernandez, J. P.; Ferrazza, C.; Field, R.; Flanagan, G.; Forrest, R.; Frank, M. J.; Franklin, M.; Freeman, J. C.; Furic, I.; Gallinaro, M.; Galyardt, J.; Garcia, J. E.; Garfinkel, A. 
F.; Garosi, P.; Gerberich, H.; Gerchtein, E.; Giagu, S.; Giakoumopoulou, V.; Giannetti, P.; Gibson, K.; Ginsburg, C. M.; Giokaris, N.; Giromini, P.; Giunta, M.; Giurgiu, G.; Glagolev, V.; Glenzinski, D.; Gold, M.; Goldin, D.; Goldschmidt, N.; Golossanov, A.; Gomez, G.; Gomez-Ceballos, G.; Goncharov, M.; González, O.; Gorelov, I.; Goshaw, A. T.; Goulianos, K.; Gresele, A.; Grinstein, S.; Grosso-Pilcher, C.; Group, R. C.; Guimaraes da Costa, J.; Gunay-Unalan, Z.; Haber, C.; Hahn, S. R.; Halkiadakis, E.; Hamaguchi, A.; Han, J. Y.; Happacher, F.; Hara, K.; Hare, D.; Hare, M.; Harr, R. F.; Hatakeyama, K.; Hays, C.; Heck, M.; Heinrich, J.; Herndon, M.; Hewamanage, S.; Hidas, D.; Hocker, A.; Hopkins, W.; Horn, D.; Hou, S.; Hughes, R. E.; Hurwitz, M.; Husemann, U.; Hussain, N.; Hussein, M.; Huston, J.; Introzzi, G.; Iori, M.; Ivanov, A.; James, E.; Jang, D.; Jayatilaka, B.; Jeon, E. J.; Jha, M. K.; Jindariani, S.; Johnson, W.; Jones, M.; Joo, K. K.; Jun, S. Y.; Junk, T. R.; Kamon, T.; Karchin, P. E.; Kato, Y.; Ketchum, W.; Keung, J.; Khotilovich, V.; Kilminster, B.; Kim, D. H.; Kim, H. S.; Kim, H. W.; Kim, J. E.; Kim, M. J.; Kim, S. B.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kirby, M.; Klimenko, S.; Kondo, K.; Kong, D. J.; Konigsberg, J.; Kotwal, A. V.; Kreps, M.; Kroll, J.; Krop, D.; Krumnack, N.; Kruse, M.; Krutelyov, V.; Kuhr, T.; Kurata, M.; Kwang, S.; Laasanen, A. T.; Lami, S.; Lammel, S.; Lancaster, M.; Lander, R. L.; Lannon, K.; Lath, A.; Latino, G.; Lazzizzera, I.; Lecompte, T.; Lee, E.; Lee, H. S.; Lee, J. S.; Lee, S. W.; Leo, S.; Leone, S.; Lewis, J. D.; Lin, C.-J.; Linacre, J.; Lindgren, M.; Lipeles, E.; Lister, A.; Litvintsev, D. O.; Liu, C.; Liu, Q.; Liu, T.; Lockwitz, S.; Lockyer, N. 
S.; Loginov, A.; Lucchesi, D.; Lueck, J.; Lujan, P.; Lukens, P.; Lungu, G.; Lys, J.; Lysak, R.; Madrak, R.; Maeshima, K.; Makhoul, K.; Maksimovic, P.; Malik, S.; Manca, G.; Manousakis-Katsikakis, A.; Margaroli, F.; Marino, C.; Martínez, M.; Martínez-Ballarín, R.; Mastrandrea, P.; Mathis, M.; Mattson, M. E.; Mazzanti, P.; McFarland, K. S.; McIntyre, P.; McNulty, R.; Mehta, A.; Mehtala, P.; Menzione, A.; Mesropian, C.; Miao, T.; Mietlicki, D.; Mitra, A.; Miyake, H.; Moed, S.; Moggi, N.; Mondragon, M. N.; Moon, C. S.; Moore, R.; Morello, M. J.; Morlock, J.; Movilla Fernandez, P.; Mukherjee, A.; Muller, Th.; Murat, P.; Mussini, M.; Nachtman, J.; Nagai, Y.; Naganoma, J.; Nakano, I.; Napier, A.; Nett, J.; Neu, C.; Neubauer, M. S.; Nielsen, J.; Nodulman, L.; Norniella, O.; Nurse, E.; Oakes, L.; Oh, S. H.; Oh, Y. D.; Oksuzian, I.; Okusawa, T.; Orava, R.; Ortolan, L.; Pagan Griso, S.; Pagliarone, C.; Palencia, E.; Papadimitriou, V.; Paramonov, A. A.; Patrick, J.; Pauletta, G.; Paulini, M.; Paus, C.; Pellett, D. E.; Penzo, A.; Phillips, T. J.; Piacentino, G.; Pianori, E.; Pilot, J.; Pitts, K.; Plager, C.; Pondrom, L.; Potamianos, K.; Poukhov, O.; Prokoshin, F.; Pronko, A.; Ptohos, F.; Pueschel, E.; Punzi, G.; Pursley, J.; Rahaman, A.; Ramakrishnan, V.; Ranjan, N.; Redondo, I.; Renton, P.; Rescigno, M.; Rimondi, F.; Ristori, L.; Robson, A.; Rodrigo, T.; Rodriguez, T.; Rogers, E.; Rolli, S.; Roser, R.; Rossi, M.; Rubbo, F.; Ruffini, F.; Ruiz, A.; Russ, J.; Rusu, V.; Safonov, A.; Sakumoto, W. K.; Santi, L.; Sartori, L.; Sato, K.; Saveliev, V.; Savoy-Navarro, A.; Schlabach, P.; Schmidt, A.; Schmidt, E. E.; Schmidt, M. P.; Schmitt, M.; Schwarz, T.; Scodellaro, L.; Scribano, A.; Scuri, F.; Sedov, A.; Seidel, S.; Seiya, Y.; Semenov, A.; Sforza, F.; Sfyrla, A.; Shalhout, S. Z.; Shears, T.; Shepard, P. F.; Shimojima, M.; Shiraishi, S.; Shochet, M.; Shreyber, I.; Siegrist, J.; Simonenko, A.; Sinervo, P.; Sissakian, A.; Sliwa, K.; Smith, J. R.; Snider, F. 
D.; Soha, A.; Somalwar, S.; Sorin, V.; Squillacioti, P.; Stanitzki, M.; Denis, R. St.; Stelzer, B.; Stelzer-Chilton, O.; Stentz, D.; Strologas, J.; Strycker, G. L.; Sudo, Y.; Sukhanov, A.; Suslov, I.; Takemasa, K.; Takeuchi, Y.; Tang, J.; Tecchio, M.; Teng, P. K.; Thom, J.; Thome, J.; Thompson, G. A.; Thomson, E.; Ttito-Guzmán, P.; Tkaczyk, S.; Toback, D.; Tokar, S.; Tollefson, K.; Tomura, T.; Tonelli, D.; Torre, S.; Torretta, D.; Totaro, P.; Trovato, M.; Tu, Y.; Turini, N.; Ukegawa, F.; Uozumi, S.; Varganov, A.; Vataga, E.; Vázquez, F.; Velev, G.; Vellidis, C.; Vidal, M.; Vila, I.; Vilar, R.; Volobouev, I.; Vogel, M.; Volpi, G.; Wagner, P.; Wagner, R. L.; Wakisaka, T.; Wallny, R.; Wang, S. M.; Warburton, A.; Waters, D.; Weinberger, M.; Wester, W. C., III; Whitehouse, B.; Whiteson, D.; Wicklund, A. B.; Wicklund, E.; Wilbur, S.; Wick, F.; Williams, H. H.; Wilson, J. S.; Wilson, P.; Winer, B. L.; Wittich, P.; Wolbers, S.; Wolfe, H.; Wright, T.; Wu, X.; Wu, Z.; Yamamoto, K.; Yamaoka, J.; Yang, T.; Yang, U. K.; Yang, Y. C.; Yao, W.-M.; Yeh, G. P.; Yi, K.; Yoh, J.; Yorita, K.; Yoshida, T.; Yu, G. B.; Yu, I.; Yu, S. S.; Yun, J. C.; Zanetti, A.; Zeng, Y.; Zucchelli, S.

    2010-12-01

A precision measurement of the top quark mass mt is obtained using a sample of tt¯ events from pp¯ collisions at the Fermilab Tevatron with the CDF II detector. Selected events require an electron or muon, large missing transverse energy, and exactly four high-energy jets, at least one of which is tagged as coming from a b quark. A likelihood is calculated using a matrix element method with quasi-Monte Carlo integration taking into account finite detector resolution and jet mass effects. The event likelihood is a function of mt and a parameter ΔJES used to calibrate the jet energy scale in situ. Using a total of 1087 events in 5.6 fb-1 of integrated luminosity, a value of mt = 173.0 ± 1.2 GeV/c2 is measured.

  20. Ab initio quantum Monte Carlo simulations of the uniform electron gas without fixed nodes

    NASA Astrophysics Data System (ADS)

    Groth, S.; Schoof, T.; Dornheim, T.; Bonitz, M.

    2016-02-01

    The uniform electron gas (UEG) at finite temperature is of key relevance for many applications in the warm dense matter regime, e.g., dense plasmas and laser excited solids. Also, the quality of density functional theory calculations crucially relies on the availability of accurate data for the exchange-correlation energy. Recently, results for N = 33 spin-polarized electrons at high density, rs = r̄/aB ≲ 4, and low temperature have been obtained with the configuration path integral Monte Carlo (CPIMC) method [T. Schoof et al., Phys. Rev. Lett. 115, 130402 (2015), 10.1103/PhysRevLett.115.130402]. To achieve these results, the original CPIMC algorithm [T. Schoof et al., Contrib. Plasma Phys. 51, 687 (2011), 10.1002/ctpp.201100012] had to be further optimized to cope with the fermion sign problem (FSP). It is the purpose of this paper to give detailed information on the manifestation of the FSP in CPIMC simulations of the UEG and to demonstrate how it can be turned into a controllable convergence problem. In addition, we present new thermodynamic results for higher temperatures. Finally, to overcome the limitations of CPIMC towards strong coupling, we invoke an independent method, the recently developed permutation blocking path integral Monte Carlo approach [T. Dornheim et al., J. Chem. Phys. 143, 204101 (2015), 10.1063/1.4936145]. The combination of both approaches is able to yield ab initio data for the UEG over the entire density range, above a temperature of about one half of the Fermi temperature. Comparison with restricted path integral Monte Carlo data [E. W. Brown et al., Phys. Rev. Lett. 110, 146405 (2013), 10.1103/PhysRevLett.110.146405] allows us to quantify the systematic error arising from the free particle nodes.

  1. Equation of state of metallic hydrogen from coupled electron-ion Monte Carlo simulations.

    PubMed

    Morales, Miguel A; Pierleoni, Carlo; Ceperley, D M

    2010-02-01

    We present a study of hydrogen at pressures higher than molecular dissociation using the coupled electron-ion Monte Carlo method. These calculations use the accurate reptation quantum Monte Carlo method to estimate the electronic energy and pressure while doing a Monte Carlo simulation of the protons. In addition to presenting simulation results for the equation of state over a large region of the phase diagram, we report the free energy obtained by thermodynamic integration. We find very good agreement with density-functional theory based molecular-dynamics calculations for pressures beyond 600 GPa and densities above ρ = 1.4 g/cm³, both for thermodynamic and structural properties. This agreement provides strong support for the different approximations employed in the density-functional treatment of the system, specifically the approximate exchange-correlation potential and the use of pseudopotentials for the range of densities considered. We find disagreement with chemical models, which suggests that a reinvestigation of planetary models, previously constructed using the Saumon-Chabrier-Van Horn equations of state, might be needed.
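    Thermodynamic integration, as invoked here for the free energy, estimates a free-energy difference by integrating the ensemble average ⟨∂U/∂λ⟩ along a coupling parameter λ. A minimal sketch on a toy system follows: a 1-D harmonic potential with a λ-dependent spring constant, sampled by Metropolis Monte Carlo. All parameter values are illustrative assumptions, unrelated to the hydrogen study.

```python
import math
import random

def metropolis_x2(k, n_steps, rng, step=1.0):
    """Metropolis estimate of <x^2> for U(x) = 0.5*k*x^2 at beta = 1."""
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        xp = x + rng.uniform(-step, step)
        # accept with probability min(1, exp(-dU)); random() < exp works for both cases
        if rng.random() < math.exp(-0.5 * k * (xp * xp - x * x)):
            x = xp
        total += x * x
    return total / n_steps

def free_energy_difference(k0, k1, n_lambda=21, n_steps=20000, seed=1):
    """Thermodynamic integration along k(l) = k0 + l*(k1 - k0):
    dU/dl = 0.5*(k1 - k0)*x^2, averaged at each l and integrated by trapezoid."""
    rng = random.Random(seed)
    lams = [i / (n_lambda - 1) for i in range(n_lambda)]
    dudl = [0.5 * (k1 - k0) * metropolis_x2(k0 + l * (k1 - k0), n_steps, rng)
            for l in lams]
    h = 1.0 / (n_lambda - 1)
    return h * (0.5 * dudl[0] + sum(dudl[1:-1]) + 0.5 * dudl[-1])

if __name__ == "__main__":
    est = free_energy_difference(1.0, 4.0)
    print(est, 0.5 * math.log(4.0))  # exact answer for this toy model
```

    For this toy model the exact answer is ΔF = ½ ln(k1/k0) at β = 1, so the Monte Carlo estimate can be checked directly against an analytic value, mirroring how the paper cross-validates its thermodynamic integration.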

  2. Methods for Monte Carlo simulation of the exospheres of the moon and Mercury

    NASA Technical Reports Server (NTRS)

    Hodges, R. R., Jr.

    1980-01-01

    A general form of the integral equation of exospheric transport on moon-like bodies is derived in a form that permits arbitrary specification of time varying physical processes affecting atom creation and annihilation, atom-regolith collisions, adsorption and desorption, and nonplanetocentric acceleration. Because these processes usually defy analytic representation, the Monte Carlo method of solution of the transport equation, the only viable alternative, is described in detail, with separate discussions of the methods of specification of physical processes as probabilistic functions. Proof of the validity of the Monte Carlo exosphere simulation method is provided in the form of a comparison of analytic and Monte Carlo solutions to three classical, and analytically tractable, exosphere problems. One of the key phenomena in moon-like exosphere simulations, the distribution of velocities of the atoms leaving a regolith, depends mainly on the nature of collisions of free atoms with rocks. It is shown that on the moon and Mercury, elastic collisions of helium atoms with a Maxwellian distribution of vibrating, bound atoms produce a nearly Maxwellian distribution of helium velocities, despite the absence of speeds in excess of escape in the impinging helium velocity distribution.
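    The velocity sampling that underlies such exosphere simulations can be sketched simply: a Maxwellian speed is obtained by drawing three independent Gaussian velocity components. The helium mass and lunar escape speed below are standard textbook values; the daytime surface temperature of 400 K is an illustrative assumption, not a figure from the paper.

```python
import math
import random

K_B = 1.380649e-23      # Boltzmann constant [J/K]
M_HE = 6.6465e-27       # mass of a helium atom [kg]
V_ESC_MOON = 2380.0     # lunar escape speed [m/s]

def sample_speed(temperature, mass, rng):
    """One speed drawn from a Maxwellian: three Gaussian velocity components."""
    sigma = math.sqrt(K_B * temperature / mass)
    vx, vy, vz = [rng.gauss(0.0, sigma) for _ in range(3)]
    return math.sqrt(vx * vx + vy * vy + vz * vz)

def escape_fraction(temperature, mass, n, seed=0):
    """Monte Carlo estimate of the fraction of atoms faster than escape speed."""
    rng = random.Random(seed)
    fast = sum(1 for _ in range(n)
               if sample_speed(temperature, mass, rng) > V_ESC_MOON)
    return fast / n

if __name__ == "__main__":
    # Helium at an assumed daytime lunar surface temperature of 400 K
    print(escape_fraction(400.0, M_HE, 100_000))
```

    For helium at these conditions a non-negligible high-speed tail exceeds the lunar escape speed, which is why helium escapes the Moon thermally while heavier species do not.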

  3. Accelerated Monte Carlo models to simulate fluorescence spectra from layered tissues.

    PubMed

    Swartling, Johannes; Pifferi, Antonio; Enejder, Annika M K; Andersson-Engels, Stefan

    2003-04-01

    Two efficient Monte Carlo models are described, facilitating predictions of complete time-resolved fluorescence spectra from a light-scattering and light-absorbing medium. These are compared with a third, conventional fluorescence Monte Carlo model in terms of accuracy, signal-to-noise statistics, and simulation time. The improved computation efficiency is achieved by means of a convolution technique, justified by the symmetry of the problem. Furthermore, the reciprocity principle for photon paths, employed in one of the accelerated models, is shown to drastically simplify the computations of the distribution of the emitted fluorescence. A so-called white Monte Carlo approach is finally suggested for efficient simulations of one excitation wavelength combined with a wide range of emission wavelengths. The fluorescence is simulated in a purely scattering medium, and the absorption properties are instead taken into account analytically afterward. This approach is applicable to the conventional model as well as to the two accelerated models. Essentially the same absolute values for the fluorescence integrated over the emitting surface and time are obtained for the three models within the accuracy of the simulations. For the accelerated models, the time-resolved and spatially resolved fluorescence exhibits a slight overestimation at short delay times close to the source (within approximately two grid elements), as a result of the discretization and the convolution. The improved efficiency is most prominent for the reverse-emission accelerated model, for which the simulation time can be reduced by up to two orders of magnitude.

  4. Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo

    By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix (QTM) approach. Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) roots distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE, in which, besides the local integrals of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.

  5. Theory and software for large Quantum Monte Carlo super-computer simulations over exponential type orbitals.

    NASA Astrophysics Data System (ADS)

    Hoggan, Philip E.

    2009-03-01

    Slater-type orbitals (STO) are rarely used as atomic basis sets for molecular structure and property calculations, since integrals are expensive to evaluate, reliable basis sets are scarce, and exact properties such as Kato's cusp condition and the correct exponential decay of the electron density are not significantly better described numerically than with commonly used Gaussian basis sets. We describe the systematic, parallelized development of integration routines for multi-centre integrals and of high-quality basis sets over STOs, useful for modern electron correlation calculations via compact low-variance trial wave-functions for QMC (Quantum Monte Carlo). Molecular QMC applications are also rare because the method is comparatively complicated to use; however, it is extremely precise and can be made to include nearly all the correlation energy. It also scales well for large numbers of processors (thousands, at nearly 100 percent efficiency). Applications need to be carried out on a large scale to determine the electronic structure and properties of large (about 100 atoms) molecules of chemical interest, including intermolecular interactions, which are best described using Slater trial wave-functions for QMC. Such functions, combined as hydrogen-like atomic orbitals, possess the correct nodal structure for the high-precision FN-MC (Fixed Node Monte Carlo) methods, which include more than 95 percent of the electron correlation energy.

  6. Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.

    2014-10-01

    We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble gas solid where van der Waals binding is challenging for traditional DFT methods. The calculations show agreement with data below 20 GPa and that the high-pressure melt curve is well described by a Lindemann behavior up to at least 80 GPa, in contrast to DAC data.

  7. Solution of the radiative transfer theory problems by the Monte Carlo method

    NASA Technical Reports Server (NTRS)

    Marchuk, G. I.; Mikhailov, G. A.

    1974-01-01

    The Monte Carlo method is used for two types of problems. First, there are interpretation problems of optical observations from meteorological satellites in the short wave part of the spectrum. The sphericity of the atmosphere, the propagation function, and light polarization are considered. Second, there are problems dealing with the theory of the spreading of narrow light beams. Direct simulation of light scattering and the mathematical form of the medium radiation model representation are discussed, and general integral transfer equations are calculated. The dependent tests method, derivative estimates, and the solution to the inverse problem are also considered.
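    The direct simulation of light scattering mentioned above can be illustrated with a minimal plane-parallel slab model with isotropic scattering. All parameters are illustrative assumptions, and the scheme is far simpler than the polarized, spherical-atmosphere treatment of the paper.

```python
import math
import random

def slab_transmission(tau_total, albedo, n_photons, seed=0):
    """Monte Carlo transmission of a plane-parallel slab of optical depth
    `tau_total`, with isotropic scattering and single-scattering albedo `albedo`."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        tau = 0.0          # optical-depth coordinate of the photon
        mu = 1.0           # direction cosine relative to the slab normal
        while True:
            # free path length in optical-depth units (exponential distribution)
            step = -math.log(rng.random() or 1e-12)
            tau += mu * step
            if tau >= tau_total:
                transmitted += 1           # escaped out the bottom
                break
            if tau < 0.0:
                break                      # escaped back out the top
            if rng.random() > albedo:
                break                      # absorbed
            mu = rng.uniform(-1.0, 1.0)    # isotropic scattering direction
    return transmitted / n_photons

if __name__ == "__main__":
    # Pure absorption should recover Beer-Lambert: exp(-1) for tau = 1
    print(slab_transmission(1.0, 0.0, 200_000))
```

    With the albedo set to zero the estimate should recover the Beer-Lambert transmission exp(-τ), which gives a quick sanity check before scattering is switched on.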

  8. Monte-Carlo particle dynamics in a variable specific impulse magnetoplasma rocket

    SciTech Connect

    Ilin, A.V.; Diaz, F.R.C.; Squire, J.P.

    1999-01-01

    The self-consistent mathematical model in a Variable Specific Impulse Magnetoplasma Rocket (VASIMR) is examined. Of particular importance is the effect of a magnetic nozzle in enhancing the axial momentum of the exhaust. Also, different geometries and rocket symmetries are considered. The magnetic configuration is modeled with an adaptable mesh, which increases accuracy without compromising the speed of the simulation. The single particle trajectories are integrated with an adaptive time-scheme, which can quickly solve extensive Monte-Carlo simulations for systems of hundreds of thousands of particles in a reasonable time (1--2 hours) and without the need for a powerful supercomputer.
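    Trajectory integration of charged particles in a magnetic nozzle is typically done with a scheme that conserves kinetic energy exactly in a pure magnetic field. The paper does not specify its integrator; the Boris pusher below is a standard choice, shown here as an illustrative sketch with placeholder field values and charge-to-mass ratio.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as lists."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_push(v, e_field, b_field, q_over_m, dt):
    """One Boris step: rotate the velocity in B between two half-kicks from E.
    Exactly conserves |v| when E = 0, which makes it robust for long runs."""
    # first half acceleration by E
    v_minus = [v[i] + 0.5 * q_over_m * dt * e_field[i] for i in range(3)]
    # rotation by B
    t = [0.5 * q_over_m * dt * b for b in b_field]
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    v_prime = [v_minus[i] + cross(v_minus, t)[i] for i in range(3)]
    v_plus = [v_minus[i] + cross(v_prime, s)[i] for i in range(3)]
    # second half acceleration by E
    return [v_plus[i] + 0.5 * q_over_m * dt * e_field[i] for i in range(3)]

if __name__ == "__main__":
    # Gyration in a uniform B field along z (all values are placeholders)
    v = [1.0, 0.0, 0.0]
    for _ in range(1000):
        v = boris_push(v, [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 1.0, 0.05)
    print(math.sqrt(sum(c * c for c in v)))  # speed stays 1.0 up to roundoff
```

    The norm conservation is what allows hours-long Monte Carlo runs over many particles without energy drift; an adaptive time-step wrapper, as described in the abstract, would adjust dt from the local gyrofrequency.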

  9. Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo

    SciTech Connect

    Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.

    2014-10-01

    We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments of the system. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble gas solid where van der Waals binding is challenging for traditional DFT methods. The calculations show excellent agreement with data below 20 GPa and that the high-pressure melt curve is well described by a Lindemann behavior up to at least 80 GPa, a finding in stark contrast to DAC data.

  10. Efficient approach to the free energy of crystals via Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Navascués, G.; Velasco, E.

    2015-08-01

    We present a general approach to compute the absolute free energy of a system of particles with constrained center of mass based on the Monte Carlo thermodynamic coupling integral method. The version of the Frenkel-Ladd approach [J. Chem. Phys. 81, 3188 (1984)], 10.1063/1.448024, which uses a harmonic coupling potential, is recovered. Also, we propose a different choice, based on one-particle square-well coupling potentials, which is much simpler, more accurate, and free from some of the difficulties of the Frenkel-Ladd method. We apply our approach to hard spheres and compare with the standard harmonic method.

  11. The INTEGRAL scatterometer SPI

    NASA Technical Reports Server (NTRS)

    Mandrou, P.; Vedrenne, G.; Jean, P.; Kandel, B.; vonBallmoos, P.; Albernhe, F.; Lichti, G.; Schoenfelder, V.; Diehl, R.; Georgii, R.; Teegarden, B.; Mandrou, P.; Vedrenne, G.; Kirchner, T.; Durouchoux, P.; Cordier, B.; Diallo, N.; Sanchez, F.; Payne, B.; Leleux, P.; Caraveo, P.; Matteson, J.; Slassi-Sennon, S.; Lin, R. P.; Skinner, G.

    1997-01-01

    The INTErnational Gamma Ray Astrophysics Laboratory (INTEGRAL) mission's onboard spectrometer, the INTEGRAL spectrometer (SPI), is described. The SPI constitutes one of the four main mission instruments. It is optimized for detailed measurements of gamma ray lines and for the mapping of diffuse sources. It combines a coded aperture mask with an array of large volume, high purity germanium detectors. The detectors make precise measurements of the gamma ray energies over the 20 keV to 8 MeV range. The instrument's characteristics are described and the Monte Carlo simulation of its performance is outlined. It will be possible to study gamma ray emission from compact objects or line profiles with a high energy resolution and a high angular resolution.

  12. Transitions between imperfectly ordered crystalline structures: a phase switch Monte Carlo study.

    PubMed

    Wilms, Dorothea; Wilding, Nigel B; Binder, Kurt

    2012-05-01

    A model for two-dimensional colloids confined laterally by "structured boundaries" (i.e., ones that impose a periodicity along the slit) is studied by Monte Carlo simulations. When the distance D between the confining walls is reduced at constant particle number from an initial value D(0), for which a crystalline structure commensurate with the imposed periodicity fits, to smaller values, a succession of phase transitions to imperfectly ordered structures occurs. These structures have a reduced number of rows parallel to the boundaries (from n to n-1 to n-2, etc.) and are accompanied by an almost periodic strain pattern, due to "soliton staircases" along the boundaries. Since standard simulation studies of such transitions are hampered by huge hysteresis effects, we apply the phase switch Monte Carlo method to estimate the free energy difference between the structures as a function of the misfit between D and D(0), thereby locating where the transitions occur in equilibrium. For comparison, we also obtain this free energy difference from a thermodynamic integration method: The results agree, but the effort required to obtain the same accuracy as provided by phase switch Monte Carlo would be at least three orders of magnitude larger. We also show for a situation where several "candidate structures" exist for a phase, that phase switch Monte Carlo can clearly distinguish the metastable structures from the stable one. Finally, applying the method in the conjugate statistical ensemble (where the normal pressure conjugate to D is taken as an independent control variable), we show that the standard equivalence between the conjugate ensembles of statistical mechanics is violated.

  13. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    SciTech Connect

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

  14. Feynman integral and perturbation theory in quantum tomography

    NASA Astrophysics Data System (ADS)

    Fedorov, Aleksey

    2013-11-01

    We define a tomographic Feynman path integral as a representation of quantum tomograms via the Feynman path integral in phase space. The proposed representation is a potential basis for applying path integral Monte Carlo numerical methods to quantum tomograms. The tomographic Feynman path integral represents the solution of the initial-value problem for the evolution equation of tomograms. A perturbation theory for quantum tomograms is also constructed.

  15. Exponential Monte Carlo Convergence on a Homogeneous Right Parallelepiped Using the Reduced Source Method with Legendre Expansion

    SciTech Connect

    Favorite, J.A.

    1999-09-01

    In previous work, exponential convergence of Monte Carlo solutions using the reduced source method with Legendre expansion has been achieved only in one-dimensional rod and slab geometries. In this paper, the method is applied to three-dimensional (right parallelepiped) problems, with results suggesting that the exponential convergence carries over. As implemented in this paper, the method approximates an angular integral of the flux with a discrete-ordinates numerical quadrature. It is possible that this approximation introduces an inconsistency that must be addressed.

  16. Monte Carlo Simulation in the Optimization of a Free-Air Ionization Chamber for Dosimetric Control in Medical Digital Radiography

    SciTech Connect

    Leyva, A.; Pinera, I.; Abreu, Y.; Cruz, C. M.; Montano, L. M.

    2008-08-11

    During the earliest tests of a free-air ionization chamber, a poor response to the X-rays emitted by several sources was observed. Monte Carlo simulation of X-ray transport in matter was therefore employed to evaluate the chamber's behavior as an X-ray detector. The dependence of photon energy deposition on depth and its integral value over the whole active volume were calculated. The obtained results reveal that the geometry of the designed device can be optimized.

  17. Atomistic Monte Carlo Simulation of Lipid Membranes

    PubMed Central

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314

  18. Monte Carlo simulation of chromatin stretching

    NASA Astrophysics Data System (ADS)

    Aumann, Frank; Lankas, Filip; Caudron, Maïwen; Langowski, Jörg

    2006-04-01

    We present Monte Carlo (MC) simulations of the stretching of a single 30 nm chromatin fiber. The model approximates the DNA by a flexible polymer chain with Debye-Hückel electrostatics and uses a two-angle zigzag model for the geometry of the linker DNA connecting the nucleosomes. The latter are represented by flat disks interacting via an attractive Gay-Berne potential. Our results show that the stiffness of the chromatin fiber strongly depends on the linker DNA length. Furthermore, changing the twisting angle between nucleosomes from 90° to 130° increases the stiffness significantly. An increase in the opening angle from 22° to 34° leads to softer fibers for small linker lengths. We observe that fibers containing a linker histone at each nucleosome are stiffer compared to those without the linker histone. The simulated persistence lengths and elastic moduli agree with experimental data. Finally, we show that the chromatin fiber does not behave as an isotropic elastic rod, but its rigidity depends on the direction of deformation: Chromatin is much more resistant to stretching than to bending.

  19. Monte Carlo simulations of Protein Adsorption

    NASA Astrophysics Data System (ADS)

    Sharma, Sumit; Kumar, Sanat K.; Belfort, Georges

    2008-03-01

    Amyloidogenic diseases, such as Alzheimer's, are caused by adsorption and aggregation of partially unfolded proteins. Adsorption of proteins is a concern in the design of biomedical devices, such as dialysis membranes. Protein adsorption is often accompanied by conformational rearrangements in protein molecules. Such conformational rearrangements are thought to affect many properties of adsorbed protein molecules such as their adhesion strength to the surface, biological activity, and aggregation tendency. It has been experimentally shown that many naturally occurring proteins, upon adsorption to hydrophobic surfaces, undergo a helix to sheet or random coil secondary structural rearrangement. However, to better understand the equilibrium structural complexities of this phenomenon, we have performed Monte Carlo (MC) simulations of adsorption of a four helix bundle, modeled as a lattice protein, and studied the adsorption behavior and equilibrium protein conformations at different temperatures and degrees of surface hydrophobicity. To study the free energy and entropic effects on adsorption, canonical ensemble MC simulations have been combined with the Weighted Histogram Analysis Method (WHAM). Conformational transitions of proteins on surfaces will be discussed as a function of surface hydrophobicity and compared to analogous bulk transitions.

  20. Finding Planet Nine: a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-06-01

    Planet Nine is a hypothetical planet located well beyond Pluto that has been proposed in an attempt to explain the observed clustering in physical space of the perihelia of six extreme trans-Neptunian objects or ETNOs. The predicted approximate values of its orbital elements include a semimajor axis of 700 au, an eccentricity of 0.6, an inclination of 30°, and an argument of perihelion of 150°. Searching for this putative planet is already under way. Here, we use a Monte Carlo approach to create a synthetic population of Planet Nine orbits and study its visibility statistically in terms of various parameters, focusing on the aphelion configuration. Our analysis shows that, if Planet Nine exists and is at aphelion, it might be found projected against one of four specific areas in the sky. Each area is linked to a particular value of the longitude of the ascending node, and two of them are compatible with an apsidal anti-alignment scenario. In addition, and after studying the current statistics of ETNOs, a cautionary note on the robustness of the perihelia clustering is presented.
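    The Monte Carlo construction of a synthetic orbit population amounts to drawing orbital elements from assumed distributions and propagating derived quantities such as the aphelion distance Q = a(1 + e). A minimal sketch follows; the uniform ranges around the quoted nominal elements are illustrative assumptions, not the distributions used in the paper.

```python
import random

def sample_aphelia(n, seed=0):
    """Draw a synthetic population of Planet Nine-like orbits and return
    their aphelion distances Q = a(1 + e) in au."""
    rng = random.Random(seed)
    aphelia = []
    for _ in range(n):
        a = rng.uniform(600.0, 800.0)   # semimajor axis [au], nominal 700
        e = rng.uniform(0.5, 0.7)       # eccentricity, nominal 0.6
        aphelia.append(a * (1.0 + e))
    return aphelia

if __name__ == "__main__":
    qs = sample_aphelia(10_000)
    print(min(qs), max(qs))  # brackets the nominal aphelion 700 * 1.6 = 1120 au
```

    In a full visibility study each sampled orbit would also carry inclination, node, and argument of perihelion, from which sky positions at aphelion are computed and binned, which is how the four candidate sky areas emerge.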

  1. Classical Trajectory and Monte Carlo Techniques

    NASA Astrophysics Data System (ADS)

    Olson, Ronald

    The classical trajectory Monte Carlo (CTMC) method originated with Hirschfelder, who studied the H + D2 exchange reaction using a mechanical calculator [58.1]. With the availability of computers, the CTMC method was actively applied to a large number of chemical systems to determine reaction rates, and final state vibrational and rotational populations (see, e.g., Karplus et al. [58.2]). For atomic physics problems, a major step was introduced by Abrines and Percival [58.3] who employed Kepler's equations and the Bohr-Sommerfeld model for atomic hydrogen to investigate electron capture and ionization for intermediate velocity collisions of H+ + H. An excellent description is given by Percival and Richards [58.4]. The CTMC method has a wide range of applicability to strongly-coupled systems, such as collisions by multiply-charged ions [58.5]. In such systems, perturbation methods fail, and basis set limitations of coupled-channel molecular- and atomic-orbital techniques have difficulty in representing the multitude of active excitation, electron capture, and ionization channels. Vector- and parallel-processors now allow increasingly detailed study of the dynamics of the heavy projectile and target, along with the active electrons.

  2. Monte Carlo Simulation of Surface Reactions

    NASA Astrophysics Data System (ADS)

    Brosilow, Benjamin J.

    A Monte-Carlo study of the catalytic reaction of CO and O_2 over transition metal surfaces is presented, using generalizations of a model proposed by Ziff, Gulari and Barshad (ZGB). A new "constant-coverage" algorithm is described and applied to the model in order to elucidate the behavior near the model's first-order transition, and to draw an analogy between this transition and first-order phase transitions in equilibrium systems. The behavior of the model is then compared to the behavior of CO oxidation systems over Pt single-crystal catalysts. This comparison leads to the introduction of a new variation of the model in which one of the reacting species requires a large ensemble of vacant surface sites in order to adsorb. Further, it is shown that precursor adsorption and an effective Eley-Rideal mechanism must also be included in the model in order to obtain detailed agreement with experiment. Finally, variations of the model on finite and two component lattices are studied as models for low temperature CO oxidation over Noble Metal/Reducible Oxide and alloy catalysts.
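    The original ZGB rule set is simple enough to state in a few lines: CO arrives with probability y and needs one empty site; otherwise O2 arrives and dissociates onto two adjacent empty sites; adjacent CO and O react instantly to CO2, vacating both sites. The sketch below implements this basic model only (lattice size, y, and step count are illustrative; the constant-coverage algorithm and the thesis's model variations are not included).

```python
import random

EMPTY, CO, O = 0, 1, 2

def neighbors_of(L, i, j):
    """Nearest neighbors on an L x L periodic square lattice."""
    return [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]

def react(lattice, i, j, partner, rng):
    """If a reaction partner sits on a neighboring site, form CO2: empty both."""
    L = len(lattice)
    partners = [(ni, nj) for (ni, nj) in neighbors_of(L, i, j)
                if lattice[ni][nj] == partner]
    if partners:
        ni, nj = rng.choice(partners)
        lattice[i][j] = EMPTY
        lattice[ni][nj] = EMPTY

def zgb_step(lattice, y, rng):
    """One ZGB adsorption attempt: CO with probability y, else O2 dissociating
    onto two adjacent empty sites; adjacent CO-O pairs react instantly."""
    L = len(lattice)
    i, j = rng.randrange(L), rng.randrange(L)
    if lattice[i][j] != EMPTY:
        return
    if rng.random() < y:                     # CO adsorption
        lattice[i][j] = CO
        react(lattice, i, j, O, rng)
    else:                                    # O2 adsorption needs a second site
        nbrs = neighbors_of(L, i, j)
        rng.shuffle(nbrs)
        for (ni, nj) in nbrs:
            if lattice[ni][nj] == EMPTY:
                lattice[i][j] = O
                lattice[ni][nj] = O
                react(lattice, i, j, CO, rng)
                if lattice[ni][nj] == O:     # second atom may also react
                    react(lattice, ni, nj, CO, rng)
                return

if __name__ == "__main__":
    rng = random.Random(0)
    L = 32
    lattice = [[EMPTY] * L for _ in range(L)]
    for _ in range(200_000):
        zgb_step(lattice, 0.45, rng)         # y = 0.45: inside the reactive window
    print(sum(row.count(CO) for row in lattice) / L ** 2,
          sum(row.count(O) for row in lattice) / L ** 2)
```

    Because reaction is instantaneous, no CO ever sits next to an O after a completed step; sweeping y reveals the O-poisoned, reactive, and CO-poisoned regimes, the last separated by the first-order transition the abstract discusses.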

  3. Markov Chain Monte Carlo and Irreversibility

    NASA Astrophysics Data System (ADS)

    Ottobre, Michela

    2016-06-01

    Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and we discuss some analytical methods to approach the study of continuous-time irreversible diffusions; ii) most of the rigorous results on irreversible diffusions are available for continuous-time processes; however, for computational purposes one needs to discretize such dynamics. It is well known that the resulting discretized chain will not, in general, retain all the good properties of the process that it is obtained from. In particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible. Therefore iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of a nonreversible dynamics.
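    A concrete reversible baseline helps fix ideas: the random-walk Metropolis-Hastings chain below satisfies detailed balance with respect to a standard Gaussian target, so the Gaussian is invariant and sample moments converge to the target moments. This is a generic textbook sketch, not the SOL-HMC algorithm discussed in the paper.

```python
import math
import random

def metropolis_gaussian(n_samples, step=1.0, seed=0):
    """Reversible Metropolis-Hastings chain targeting a standard Gaussian.
    The symmetric proposal plus the accept rule min(1, pi(x')/pi(x))
    enforces detailed balance, so pi is the invariant measure."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        x_prop = x + rng.uniform(-step, step)
        # log pi(x) = -x^2/2 up to a constant; accept on the density ratio
        if math.log(rng.random() or 1e-300) < 0.5 * (x * x - x_prop * x_prop):
            x = x_prop
        samples.append(x)
    return samples

if __name__ == "__main__":
    s = metropolis_gaussian(200_000)
    mean = sum(s) / len(s)
    var = sum(v * v for v in s) / len(s) - mean * mean
    print(mean, var)  # converge toward the target's mean 0 and variance 1
```

    The irreversible samplers surveyed in the paper modify exactly this construction: they give up detailed balance while keeping pi invariant, which can shorten the autocorrelation time relative to a reversible chain like this one.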

  4. Commensurabilities between ETNOs: a Monte Carlo survey

    NASA Astrophysics Data System (ADS)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-07-01

    Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nine hypothesis; in particular, a number of objects may be trapped in the 5:3 and 3:1 mean motion resonances with a putative Planet Nine with semimajor axis ~700 au.

  5. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    SciTech Connect

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload on a single-core processor when refined modeling is required. One method to solve this problem is domain decomposition. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which has been developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  6. Bayesian phylogeny analysis via stochastic approximation Monte Carlo.

    PubMed

    Cheon, Sooyoung; Liang, Faming

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode when simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees with the highest similarity to the true trees and the model parameter estimates with the smallest mean square errors, while requiring the least CPU time.

  7. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    SciTech Connect

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.

  8. OBJECT KINETIC MONTE CARLO SIMULATIONS OF CASCADE ANNEALING IN TUNGSTEN

    SciTech Connect

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.; Roche, Kenneth J.; Kurtz, Richard J.; Wirth, Brian D.

    2014-03-31

    The objective of this work is to study the annealing of primary cascade damage created by primary knock-on atoms (PKAs) of various energies, at various temperatures in bulk tungsten using the object kinetic Monte Carlo (OKMC) method.

  9. Monte Carlo simulations: Hidden errors from "good" random number generators

    NASA Astrophysics Data System (ADS)

    Ferrenberg, Alan M.; Landau, D. P.; Wong, Y. Joanna

    1992-12-01

    The Wolff algorithm is now accepted as the best cluster-flipping Monte Carlo algorithm for beating "critical slowing down." We show how this method can yield incorrect answers due to subtle correlations in "high quality" random number generators.

  10. Monte Carlo next-event estimates from thermal collisions

    SciTech Connect

    Hendricks, J.S.; Prael, R.E.

    1990-01-01

    A new approximate method has been developed by Richard E. Prael to allow S({alpha},{beta}) thermal collision contributions to next-event estimators in Monte Carlo calculations. The new technique is generally applicable to next-event estimator contributions from any discrete probability distribution. The method has been incorporated into Version 4 of the production Monte Carlo neutron and photon radiation transport code MCNP. 9 refs.

  11. Multiscale Monte Carlo equilibration: Pure Yang-Mills theory

    NASA Astrophysics Data System (ADS)

    Endres, Michael G.; Brower, Richard C.; Detmold, William; Orginos, Kostas; Pochinsky, Andrew V.

    2015-12-01

    We present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.

  12. Development of Monte Carlo Capability for Orion Parachute Simulations

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    Parachute test programs employ Monte Carlo simulation techniques to plan testing and make critical decisions related to parachute loads, rate-of-descent, or other parameters. This paper describes the development and use of a MATLAB-based Monte Carlo tool for three parachute drop test simulations currently used by NASA. The Decelerator System Simulation (DSS) is a legacy 6 Degree-of-Freedom (DOF) simulation used to predict parachute loads and descent trajectories. The Decelerator System Simulation Application (DSSA) is a 6-DOF simulation that is well suited for modeling aircraft extraction and descent of pallet-like test vehicles. The Drop Test Vehicle Simulation (DTVSim) is a 2-DOF trajectory simulation that is convenient for quick turn-around analysis tasks. These three tools have significantly different software architectures and do not share common input files or output data structures. Separate Monte Carlo tools were initially developed for each simulation. A recently-developed simulation output structure enables the use of the more sophisticated DSSA Monte Carlo tool with any of the core-simulations. The task of configuring the inputs for the nominal simulation is left to the existing tools. Once the nominal simulation is configured, the Monte Carlo tool perturbs the input set according to dispersion rules created by the analyst. These rules define the statistical distribution and parameters to be applied to each simulation input. Individual dispersed parameters are combined to create a dispersed set of simulation inputs. The Monte Carlo tool repeatedly executes the core-simulation with the dispersed inputs and stores the results for analysis. The analyst may define conditions on one or more output parameters at which to collect data slices. The tool provides a versatile interface for reviewing output of large Monte Carlo data sets while preserving the capability for detailed examination of individual dispersed trajectories. 
The Monte Carlo tool described in
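    The perturb-and-rerun pattern described above can be sketched in a few lines; the simulation stub, dispersion rules, and all numbers below are hypothetical stand-ins, not the NASA tools.

```python
import random
import statistics

def run_sim(inputs):
    """Stand-in for a core simulation (hypothetical physics): returns a
    parachute 'load' metric from the dispersed inputs."""
    return inputs["mass"] * inputs["drag_mult"] * 9.81

# Dispersion rules in the spirit described: each rule names the statistical
# distribution and parameters applied to one simulation input (all values
# illustrative).
rules = {
    "mass": ("gauss", 1000.0, 50.0),     # nominal 1000 kg, sigma 50 kg
    "drag_mult": ("uniform", 0.9, 1.1),  # +/-10% multiplier on drag
}

def dispersed_inputs(rng):
    """Perturb the nominal input set according to the dispersion rules."""
    out = {}
    for name, (dist, a, b) in rules.items():
        out[name] = rng.gauss(a, b) if dist == "gauss" else rng.uniform(a, b)
    return out

# Repeatedly execute the core simulation with dispersed inputs; store results.
rng = random.Random(42)
results = [run_sim(dispersed_inputs(rng)) for _ in range(2000)]
mean_load = statistics.fmean(results)
```

    The stored ensemble can then be sliced on output conditions, mirroring the data-slice review capability the tool provides.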

  13. Improved Collision Modeling for Direct Simulation Monte Carlo Methods

    DTIC Science & Technology

    2011-03-01

    number is a measure of the rarefaction of a gas, and will be explained more thoroughly in the following chapter. Continuum solvers that use Navier...Limits on Mathematical Models [4] Kn=0.1, and the flow can be considered rarefied above that value. Direct Simulation Monte Carlo (DSMC) is a stochastic...method which utilizes the Monte Carlo statistical model to simulate gas behavior, which is very useful for these rarefied atmosphere hypersonic

  14. Study of the Transition Flow Regime using Monte Carlo Methods

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    1999-01-01

    This NASA Cooperative Agreement presents a study of the transition flow regime using Monte Carlo methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.

  15. Confidence and efficiency scaling in variational quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Delyon, F.; Bernu, B.; Holzmann, Markus

    2017-02-01

    Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time-discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by variational Monte Carlo calculations on the two-dimensional electron gas.
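    As a rough illustration of why an autocorrelation-aware error estimate matters (the paper's effective-variance method is more refined), consider a batch-means sketch on a toy correlated chain; the AR(1) stand-in and all parameters are invented:

```python
import math
import random
import statistics

def batch_means_se(samples, n_batches=50):
    """Effective standard error of the mean via batch means: correlated output
    is grouped into batches long enough that batch averages are roughly
    independent, correcting the naive i.i.d. error estimate."""
    m = len(samples) // n_batches
    means = [statistics.fmean(samples[i * m:(i + 1) * m]) for i in range(n_batches)]
    return statistics.stdev(means) / math.sqrt(n_batches)

# Toy correlated chain: AR(1) with coefficient 0.9 as a stand-in for
# time-discretized diffusion output.
rng = random.Random(7)
x, chain = 0.0, []
for _ in range(100_000):
    x = 0.9 * x + rng.gauss(0.0, 1.0)
    chain.append(x)

naive_se = statistics.stdev(chain) / math.sqrt(len(chain))
effective_se = batch_means_se(chain)  # several times larger than naive_se
```

    For this chain the naive i.i.d. formula understates the error by roughly the square root of the integrated autocorrelation time, which is exactly the kind of discrepancy an effective-variance estimate must capture.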

  16. CosmoPMC: Cosmology sampling with Population Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kilbinger, Martin; Benabed, Karim; Cappé, Olivier; Coupon, Jean; Cardoso, Jean-François; Fort, Gersende; McCracken, Henry Joy; Prunet, Simon; Robert, Christian P.; Wraith, Darren

    2012-12-01

    CosmoPMC is a Monte Carlo sampling code to explore the likelihood of various cosmological probes. The sampling engine, implemented with the package pmclib, uses Population Monte Carlo (PMC), an adaptive importance sampling method that iteratively improves the proposal distribution to approximate the posterior. The code has been introduced, tested, and applied to various cosmology data sets.
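    A minimal illustrative sketch of the PMC idea, iterating draw / importance-weight / refit on a toy 1-D posterior (this is not the CosmoPMC or pmclib API; every name and number is invented):

```python
import math
import random

def log_posterior(x):
    """Toy 1-D 'posterior': a normal with mean 3.0 and sd 0.5."""
    return -0.5 * ((x - 3.0) / 0.5) ** 2

def pmc(n_iter=5, n_samples=4000, seed=3):
    """Population Monte Carlo sketch: at each iteration, draw from the current
    Gaussian proposal, importance-weight against the target, then refit the
    proposal to the weighted population."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 3.0  # deliberately poor initial proposal
    for _ in range(n_iter):
        xs = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        # log weights: log target minus log proposal (constants cancel on
        # normalization)
        logw = [log_posterior(x)
                - (-0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma))
                for x in xs]
        wmax = max(logw)
        w = [math.exp(lw - wmax) for lw in logw]
        wsum = sum(w)
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / wsum
        var = sum(wi * (xi - mu) ** 2 for wi, xi in zip(w, xs)) / wsum
        sigma = max(math.sqrt(var), 1e-3)
    return mu, sigma

mu_hat, sigma_hat = pmc()
```

    After a few iterations the proposal tracks the posterior, which is the adaptation the abstract describes; real PMC uses mixture proposals and perplexity diagnostics on top of this loop.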

  17. Green's function Monte Carlo calculations of 4He

    SciTech Connect

    Carlson, J.A.

    1988-01-01

    Green's Function Monte Carlo methods have been developed to study the ground state properties of light nuclei. These methods are shown to reproduce results of Faddeev calculations for A = 3, and are then used to calculate ground state energies, one- and two-body distribution functions, and the D-state probability for the alpha particle. Results are compared to variational Monte Carlo calculations for several nuclear interaction models. 31 refs.

  18. Successful combination of the stochastic linearization and Monte Carlo methods

    NASA Technical Reports Server (NTRS)

    Elishakoff, I.; Colombi, P.

    1993-01-01

    A combination of stochastic linearization and Monte Carlo techniques is presented for the first time in the literature. A system with separable nonlinear damping and nonlinear restoring force is considered. The proposed combination of the energy-wise linearization with the Monte Carlo method yields an error under 5 percent, reducing the error of the conventional stochastic linearization by a factor of 4.6.

  19. de Finetti Priors using Markov chain Monte Carlo computations.

    PubMed

    Bacallado, Sergio; Diaconis, Persi; Holmes, Susan

    2015-07-01

    Recent advances in Monte Carlo methods allow us to revisit work by de Finetti, who suggested the use of approximate exchangeability in the analyses of contingency tables. This paper gives examples of computational implementations using Metropolis-Hastings, Langevin and Hamiltonian Monte Carlo to compute posterior distributions for test statistics relevant for testing independence, reversible or three-way models for discrete exponential families using polynomial priors and Gröbner bases.

  20. de Finetti Priors using Markov chain Monte Carlo computations

    PubMed Central

    Bacallado, Sergio; Diaconis, Persi; Holmes, Susan

    2015-01-01

    Recent advances in Monte Carlo methods allow us to revisit work by de Finetti, who suggested the use of approximate exchangeability in the analyses of contingency tables. This paper gives examples of computational implementations using Metropolis-Hastings, Langevin and Hamiltonian Monte Carlo to compute posterior distributions for test statistics relevant for testing independence, reversible or three-way models for discrete exponential families using polynomial priors and Gröbner bases. PMID:26412947

  1. Monte Carlo methods and applications in nuclear physics

    SciTech Connect

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis on their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

  2. Event-chain Monte Carlo for classical continuous spin models

    NASA Astrophysics Data System (ADS)

    Michel, Manon; Mayer, Johannes; Krauth, Werner

    2015-10-01

    We apply the event-chain Monte Carlo algorithm to classical continuum spin models on a lattice and clarify the condition for its validity. In the two-dimensional XY model, it outperforms the local Monte Carlo algorithm by two orders of magnitude, although it remains slower than the Wolff cluster algorithm. In the three-dimensional XY spin glass model at low temperature, the event-chain algorithm is far superior to the other algorithms.

  3. DPEMC: A Monte Carlo for double diffraction

    NASA Astrophysics Data System (ADS)

    Boonekamp, M.; Kúcs, T.

    2005-05-01

    We extend the POMWIG Monte Carlo generator developed by B. Cox and J. Forshaw to include new models of central production through inclusive and exclusive double Pomeron exchange in proton-proton collisions. Double photon exchange processes are described as well, both in proton-proton and heavy-ion collisions. In all contexts, various models have been implemented, allowing for comparisons and uncertainty evaluation and enabling detailed experimental simulations. Program summary: Title of the program: DPEMC, version 2.4. Catalogue identifier: ADVF. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVF. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: any computer with the FORTRAN 77 compiler under the UNIX or Linux operating systems. Operating system: UNIX; Linux. Programming language used: FORTRAN 77. High speed storage required: <25 MB. No. of lines in distributed program, including test data, etc.: 71 399. No. of bytes in distributed program, including test data, etc.: 639 950. Distribution format: tar.gz. Nature of the physical problem: Proton diffraction at hadron colliders can manifest itself in many forms, and a variety of models exist that attempt to describe it [A. Bialas, P.V. Landshoff, Phys. Lett. B 256 (1991) 540; A. Bialas, W. Szeremeta, Phys. Lett. B 296 (1992) 191; A. Bialas, R.A. Janik, Z. Phys. C 62 (1994) 487; M. Boonekamp, R. Peschanski, C. Royon, Phys. Rev. Lett. 87 (2001) 251806; Nucl. Phys. B 669 (2003) 277; R. Enberg, G. Ingelman, A. Kissavos, N. Timneanu, Phys. Rev. Lett. 89 (2002) 081801; R. Enberg, G. Ingelman, L. Motyka, Phys. Lett. B 524 (2002) 273; R. Enberg, G. Ingelman, N. Timneanu, Phys. Rev. D 67 (2003) 011301; B. Cox, J. Forshaw, Comput. Phys. Comm. 144 (2002) 104; B. Cox, J. Forshaw, B. Heinemann, Phys. Lett. B 540 (2002) 26; V. Khoze, A. Martin, M. Ryskin, Phys. Lett. B 401 (1997) 330; Eur. Phys. J. C 14 (2000) 525; Eur. Phys. J. C 19 (2001) 477; Erratum, Eur. Phys. J. 
C 20 (2001) 599; Eur

  4. Monte Carlo simulation of energy-dispersive x-ray fluorescence and applications

    NASA Astrophysics Data System (ADS)

    Li, Fusheng

    Four key components of the Monte Carlo Library Least-Squares (MCLLS) approach have been developed by the author. These include: a comprehensive and accurate Monte Carlo simulation code, CEARXRF5, with Differential Operators (DO) and coincidence sampling; a Detector Response Function (DRF); an integrated Monte Carlo - Library Least-Squares (MCLLS) Graphical User Interface (GUI) visualization system (MCLLSPro); and a new reproducible and flexible benchmark experiment setup. All these developments and upgrades make the MCLLS approach a useful and powerful tool for a wide variety of elemental analysis applications. CEARXRF, a comprehensive and accurate Monte Carlo code for simulating the total and individual library spectral responses of all elements, has recently been upgraded to version 5 by the author. The new version has several key improvements: an input file format fully compatible with MCNP5, a new efficient general geometry tracking code, versatile source definitions, various variance reduction techniques (e.g. weight window mesh and splitting, stratified sampling, etc.), a new cross section data storage and accessing method that improves the simulation speed by a factor of four, new cross section data, upgraded Differential Operators (DO) calculation capability, and an updated coincidence sampling scheme that includes K-L and L-L coincidence X-rays, while keeping all the capabilities of the previous version. The new Differential Operators method is powerful for measurement sensitivity studies and system optimization. For our Monte Carlo EDXRF elemental analysis system, it becomes an important technique for quantifying the matrix effect in near real time when combined with the MCLLS approach. An integrated visualization GUI system has been developed by the author to perform elemental analysis using the iterated Library Least-Squares method for various samples when an initial guess is provided. This software was built on the Borland C++ Builder

  5. kmos: A lattice kinetic Monte Carlo framework

    NASA Astrophysics Data System (ADS)

    Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten

    2014-07-01

    Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems, where site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites, can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a most user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a maximum runtime performance which is essentially independent of lattice size by generating code for the efficiency-determining local update of available events that is optimized for a defined kMC model. For this model definition and the control of all runtime and evaluation aspects kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or a graphical user interface, which visualizes the model geometry, the lattice occupations and rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with the application to kMC models for surface catalytic processes, where for given operation conditions (temperature and partial pressures of all reactants) central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.
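    The rejection-free event selection at the heart of lattice kMC can be sketched on a toy adsorption/desorption lattice (illustrative only; this is not the kmos API, and all rates are invented):

```python
import random

def kmc_coverage(n_sites=50, k_ads=1.0, k_des=0.5, n_events=20_000, seed=5):
    """Minimal lattice kinetic Monte Carlo sketch: non-interacting
    adsorption/desorption on a 1-D lattice. Each step draws an exponential
    waiting time from the total rate, then selects one enabled event with
    probability proportional to its rate."""
    rng = random.Random(seed)
    occupied = [False] * n_sites
    t_total, cov_time = 0.0, 0.0
    for _ in range(n_events):
        filled = sum(occupied)
        r_ads = k_ads * (n_sites - filled)  # adsorption enabled on empty sites
        r_des = k_des * filled              # desorption enabled on filled sites
        r_tot = r_ads + r_des
        dt = rng.expovariate(r_tot)         # waiting time until the next event
        cov_time += (filled / n_sites) * dt  # time-weighted coverage
        t_total += dt
        if rng.random() * r_tot < r_ads:
            site = rng.choice([i for i, occ in enumerate(occupied) if not occ])
            occupied[site] = True
        else:
            site = rng.choice([i for i, occ in enumerate(occupied) if occ])
            occupied[site] = False
    return cov_time / t_total

coverage = kmc_coverage()
# Per-site detailed balance predicts coverage k_ads / (k_ads + k_des) = 2/3.
```

    Codes like kmos make the local update of the enabled-event list O(1) per step, which is what keeps runtime independent of lattice size.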

  6. Lattice Monte Carlo simulations of polymer melts

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping

    2014-12-01

    We use Monte Carlo simulations to study polymer melts consisting of fully flexible and moderately stiff chains in the bond fluctuation model at a volume fraction 0.5. In order to reduce the local density fluctuations, we test a pre-packing process for the preparation of the initial configurations of the polymer melts, before the excluded volume interaction is switched on completely. This process leads to a significantly faster decrease of the number of overlapping monomers on the lattice. This is useful for simulating very large systems, where the statistical properties of the model with a marginally incomplete elimination of excluded volume violations are the same as those of the model with strictly excluded volume. We find that the internal mean square end-to-end distance for moderately stiff chains in a melt can be very well described by a freely rotating chain model with a precise estimate of the bond-bond orientational correlation between two successive bond vectors in equilibrium. The plot of the probability distributions of the reduced end-to-end distance of chains of different stiffness also shows that the data collapse is excellent and described very well by the Gaussian distribution for ideal chains. However, while our results confirm the systematic deviations between Gaussian statistics for the chain structure factor Sc(q) [minimum in the Kratky-plot] found by Wittmer et al. [EPL 77, 56003 (2007)] for fully flexible chains in a melt, we show that for the available chain length these deviations are no longer visible, when the chain stiffness is included. The mean square bond length and the compressibility estimated from collective structure factors depend slightly on the stiffness of the chains.

  7. Monte-Carlo simulation of Callisto's exosphere

    NASA Astrophysics Data System (ADS)

    Vorburger, A.; Wurz, P.; Lammer, H.; Barabash, S.; Mousis, O.

    2015-12-01

    We model Callisto's exosphere based on its ice as well as non-ice surface via the use of a Monte-Carlo exosphere model. For the ice component we implement two putative compositions that have been computed from two possible extreme formation scenarios of the satellite. One composition represents the oxidizing state and is based on the assumption that the building blocks of Callisto were formed in the protosolar nebula, and the other represents the reducing state of the gas, based on the assumption that the satellite accreted from solids condensed in the jovian sub-nebula. For the non-ice component we implemented the compositions of typical CI as well as L type chondrites; both chondrite types have been suggested to best represent Callisto's non-ice composition. As release processes we consider surface sublimation, ion sputtering and photon-stimulated desorption. Particles are followed on their individual trajectories until they either escape Callisto's gravitational attraction, return to the surface, are ionized, or are fragmented. Our density profiles show that whereas the sublimated species dominate close to the surface on the sun-lit side, their density profiles (with the exception of H and H2) decrease much more rapidly than those of the sputtered particles. The Neutral gas and Ion Mass (NIM) spectrometer, which is part of the Particle Environment Package (PEP), will investigate Callisto's exosphere during the JUICE mission. Our simulations show that NIM will be able to detect sublimated and sputtered particles from both the ice and non-ice surface. NIM's measured chemical composition will allow us to distinguish between different formation scenarios.

  8. Monte Carlo implementation of polarized hadronization

    NASA Astrophysics Data System (ADS)

    Matevosyan, Hrayr H.; Kotzinian, Aram; Thomas, Anthony W.

    2017-01-01

    We study the polarized quark hadronization in a Monte Carlo (MC) framework based on the recent extension of the quark-jet framework, where a self-consistent treatment of the quark polarization transfer in a sequential hadronization picture has been presented. Here, we first adopt this approach for MC simulations of the hadronization process with a finite number of produced hadrons, expressing the relevant probabilities in terms of the eight leading twist quark-to-quark transverse-momentum-dependent (TMD) splitting functions (SFs) for elementary q →q'+h transition. We present explicit expressions for the unpolarized and Collins fragmentation functions (FFs) of unpolarized hadrons emitted at rank 2. Further, we demonstrate that all the current spectator-type model calculations of the leading twist quark-to-quark TMD SFs violate the positivity constraints, and we propose a quark model based ansatz for these input functions that circumvents the problem. We validate our MC framework by explicitly proving the absence of unphysical azimuthal modulations of the computed polarized FFs, and by precisely reproducing the earlier derived explicit results for rank-2 pions. Finally, we present the full results for pion unpolarized and Collins FFs, as well as the corresponding analyzing powers from high statistics MC simulations with a large number of produced hadrons for two different model input elementary SFs. The results for both sets of input functions exhibit the same general features of an opposite signed Collins function for favored and unfavored channels at large z and, at the same time, demonstrate the flexibility of the quark-jet framework by producing significantly different dependences of the results at mid to low z for the two model inputs.

  9. Monte Carlo Volcano Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.

  10. Quantum Monte Carlo Endstation for Petascale Computing

    SciTech Connect

    David Ceperley

    2011-03-02

    CUDA GPU platform. We restructured the CPU algorithms to express additional parallelism, minimize GPU-CPU communication, and efficiently utilize the GPU memory hierarchy. Using mixed precision on GT200 GPUs and MPI for intercommunication and load balancing, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core Xeon CPUs alone, while reproducing the double-precision CPU results within statistical error. We developed an all-electron quantum Monte Carlo (QMC) method for solids that does not rely on pseudopotentials, and used it to construct a primary ultra-high-pressure calibration based on the equation of state of cubic boron nitride. We computed the static contribution to the free energy with the QMC method and obtained the phonon contribution from density functional theory, yielding a high-accuracy calibration up to 900 GPa usable directly in experiment. We computed the anharmonic Raman frequency shift with QMC simulations as a function of pressure and temperature, allowing optical pressure calibration. In contrast to present experimental approaches, small systematic errors in the theoretical EOS do not increase with pressure, and no extrapolation is needed. This all-electron method is applicable to first-row solids, providing a new reference for ab initio calculations of solids and benchmarks for pseudopotential accuracy. We compared experimental and theoretical results on the momentum distribution and the quasiparticle renormalization factor in sodium. From an x-ray Compton-profile measurement of the valence-electron momentum density, we derived its discontinuity at the Fermi wavevector finding an accurate measure of the renormalization factor that we compared with quantum-Monte-Carlo and G0W0 calculations performed both on crystalline sodium and on the homogeneous electron gas. Our calculated results are in good agreement with the experiment. 
We have been studying the heat of formation for various Kubas complexes of molecular

  11. Perturbation Monte Carlo methods for tissue structure alterations.

    PubMed

    Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Spanier, Jerome

    2013-01-01

    This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients; however, the phase function cannot be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia, where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers: whole nuclei; organelles such as lysosomes and mitochondria; and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied, the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ∼15-25% of the scattering parameters.
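    The single-baseline reuse that gives perturbation Monte Carlo its speedup can be shown on a deliberately simple absorption example (toy geometry and coefficients, not the tissue model of the paper):

```python
import math
import random

# Baseline Monte Carlo: record each photon's total path length through a toy
# scattering medium (slab thickness 1 plus an exponential extra path; purely
# illustrative geometry).
rng = random.Random(9)
paths = [1.0 + rng.expovariate(2.0) for _ in range(50_000)]

mu_a0 = 0.5  # baseline absorption coefficient (arbitrary units)

def detected_fraction(mu_a):
    """Perturbation reuse of the SAME baseline paths: each sample carries the
    baseline absorption weight exp(-mu_a0*L) times the perturbation factor
    exp(-(mu_a - mu_a0)*L), so no new simulation is needed for a new mu_a."""
    return sum(
        math.exp(-mu_a0 * L) * math.exp(-(mu_a - mu_a0) * L) for L in paths
    ) / len(paths)

baseline = detected_fraction(mu_a0)
perturbed = detected_fraction(0.6)  # +20% absorption, no re-simulation
```

    For this toy path distribution the baseline admits a closed form, exp(-0.5) * 2/2.5 ≈ 0.485, so the reweighted estimator can be checked directly; perturbing the phase function, as the paper does, requires reweighting the scattering directions as well and is substantially harder.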

  12. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

    SciTech Connect

    Pevey, Ronald E.

    2005-09-15

    Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes: the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
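    The trade-off can be made concrete with a toy acceptance test of the form k_calc + n*sigma < USL; the error model and every number below are illustrative, not the paper's benchmarking values.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mislabel_risk(sigma_calc, usl=0.95, margin_sigmas=2.0,
                  bias=0.0, sigma_bias=0.005, true_k=1.0):
    """Probability that a truly critical configuration (k = true_k) passes the
    acceptance test k_calc + margin_sigmas*sigma_calc < USL and is thus
    inadvertently labeled subcritical. Calculational error is modeled as
    N(bias, sigma_calc^2 + sigma_bias^2); all numbers are illustrative."""
    sigma = math.sqrt(sigma_calc ** 2 + sigma_bias ** 2)
    return norm_cdf((usl - margin_sigmas * sigma_calc - true_k - bias) / sigma)

risks = {s: mislabel_risk(s) for s in (0.001, 0.005, 0.02, 0.05)}
```

    In this simplified model the risk grows with the calculational standard deviation; the paper's full analysis, which includes the benchmarking bias terms, is what produces a non-zero risk-minimizing value.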

  13. Radiative transfer and spectroscopic databases: A line-sampling Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Galtier, Mathieu; Blanco, Stéphane; Dauchet, Jérémi; El Hafi, Mouna; Eymet, Vincent; Fournier, Richard; Roger, Maxime; Spiesser, Christophe; Terrée, Guillaume

    2016-03-01

    Dealing with molecular-state transitions for radiative transfer purposes involves two successive steps that both reach the complexity level at which physicists start thinking about statistical approaches: (1) constructing line-shaped absorption spectra as the result of very numerous state-transitions, (2) integrating over optical-path domains. For the first time, we show here how these steps can be addressed simultaneously using the null-collision concept. This opens the door to the design of Monte Carlo codes directly estimating radiative transfer observables from spectroscopic databases. The intermediate step of producing accurate high-resolution absorption spectra is no longer required. A Monte Carlo algorithm is proposed and applied to six one-dimensional test cases. It allows the computation of spectrally integrated intensities (over 25 cm-1 bands or the full IR range) in a few seconds, regardless of the retained database and line model. But free parameters need to be selected and they impact the convergence. A first possible selection is provided in full detail. We observe that this selection is highly satisfactory for quite distinct atmospheric and combustion configurations, but a more systematic exploration is still in progress.
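
    The null-collision concept the authors build on can be sketched in a few lines: sample tentative collisions from a constant majorant k_hat, then accept each as real with probability k(x)/k_hat. A minimal 1-D transmission example (illustrative coefficients, not a spectroscopic database):

```python
import math
import random

def transmission_null_collision(k, k_hat, x0, x1, n, seed=0):
    """Estimate transmission through [x0, x1] for absorption coefficient k(x),
    using null-collision (Woodcock) tracking with majorant k_hat >= k(x)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = x0
        while True:
            x += -math.log(rng.random()) / k_hat   # tentative collision
            if x >= x1:
                hits += 1                          # escaped: transmitted
                break
            if rng.random() < k(x) / k_hat:        # real collision: absorbed
                break
            # otherwise a null collision: keep tracking
    return hits / n
```

    For a pure absorber the transmitted fraction converges to exp(-∫k dx), which makes the sketch easy to validate against a known profile.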

  14. Linearity, resolution, and covariance in GRT inversions for anisotropic elastic moduli

    NASA Astrophysics Data System (ADS)

    Spencer, Carl; de Hoop, Maarten V.; Burridge, Robert

    1995-09-01

    This paper is concerned with the linearized inversion of elastic wave data using the Generalized Radon Transformation to give anisotropic medium parameters. Assumptions of linearity are examined by comparing linearized reflection coefficients calculated using the Born approximation with full plane-wave reflection coefficients. In typical sand/shale models we have found that the linearity assumption is valid only to approximately 60 degrees from the normal. Linear dependencies between the scattering patterns produced by individual moduli result in an ill-posed inverse problem. Utilizing P-wave data, we find that for the Transversely Isotropic case similarities in the C55 and C13 scattering directivity mean that they cannot be distinguished. C11 is best observed at wide-angle and hence estimates made using limited aperture data are subject to large error. Quasi-Monte Carlo techniques are adapted to carry out the 4D inversion integral.

  15. Top Quark Mass Measurement in the Lepton + Jets Channel Using a Matrix Element Method and \\textit{in situ} Jet Energy Calibration

    SciTech Connect

    Aaltonen, T.; Alvarez Gonzalez, B.; Amerio, S.; Amidei, D.; Anastassov, A.; Annovi, A.; Antos, J.; Apollinari, G.; Appel, J.A.; Apresyan, A.; Arisawa, T.; /Waseda U. /Dubna, JINR

    2010-10-01

    A precision measurement of the top quark mass m{sub t} is obtained using a sample of t{bar t} events from p{bar p} collisions at the Fermilab Tevatron with the CDF II detector. Selected events require an electron or muon, large missing transverse energy, and exactly four high-energy jets, at least one of which is tagged as coming from a b quark. A likelihood is calculated using a matrix element method with quasi-Monte Carlo integration taking into account finite detector resolution and jet mass effects. The event likelihood is a function of m{sub t} and a parameter {Delta}{sub JES} used to calibrate the jet energy scale in situ. Using a total of 1087 events, a value of m{sub t} = 173.0 {+-} 1.2 GeV/c{sup 2} is measured.
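
    The quasi-Monte Carlo integration named in the abstract replaces pseudorandom points with a low-discrepancy sequence. A minimal sketch using a 2-D Halton point set on a smooth test integrand (the actual likelihood integrand is of course far more involved):

```python
def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_integrate(f, n):
    """2-D quasi-Monte Carlo estimate of the integral of f over the unit
    square, using a Halton (base 2, base 3) point set."""
    return sum(f(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)) / n
```

    For smooth integrands the Halton estimate converges roughly as (log n)^2/n, noticeably faster than the 1/sqrt(n) of plain Monte Carlo.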

  16. HAWK 2.0: A Monte Carlo program for Higgs production in vector-boson fusion and Higgs strahlung at hadron colliders

    NASA Astrophysics Data System (ADS)

    Denner, Ansgar; Dittmaier, Stefan; Kallweit, Stefan; Mück, Alexander

    2015-10-01

    The Monte Carlo integrator HAWK provides precision predictions for Higgs production at hadron colliders in vector-boson fusion and Higgs strahlung, i.e. in production processes where the Higgs boson is Attached to WeaK bosons. The fully differential predictions include the full QCD and electroweak next-to-leading-order corrections. Results are computed as integrated cross sections and as binned distributions for important hadron-collider observables.

  17. Improving multilevel Monte Carlo for stochastic differential equations with application to the Langevin equation.

    PubMed

    Müller, Eike H; Scheichl, Rob; Shardlow, Tony

    2015-04-08

    This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy.
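
    One of the tricks mentioned, replacing Gaussian increments with discrete random variables for weak approximation, can be sketched on its own (without the full MLMC telescope) for geometric Brownian motion; the parameter values are illustrative:

```python
import math
import random

def weak_euler_gbm(mu, sigma, x0, T, steps, n_paths, seed=0):
    """Weak Euler scheme for dX = mu*X dt + sigma*X dW, replacing Gaussian
    increments with two-point variables +/- sqrt(h) that match the
    Gaussian's mean and variance."""
    rng = random.Random(seed)
    h = T / steps
    sqrt_h = math.sqrt(h)
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(steps):
            dw = sqrt_h if rng.random() < 0.5 else -sqrt_h
            x += mu * x * h + sigma * x * dw
        total += x
    return total / n_paths   # estimate of E[X_T] = x0 * exp(mu * T)
```

    The two-point variable reproduces the first two moments of the Gaussian increment, which is all that first-order weak convergence requires; the payoff is a much cheaper random number per step.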

  18. Improving multilevel Monte Carlo for stochastic differential equations with application to the Langevin equation

    PubMed Central

    Müller, Eike H.; Scheichl, Rob; Shardlow, Tony

    2015-01-01

    This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy. PMID:27547075

  19. Monte Carlo simulation methodology for the reliability of aircraft structures under damage tolerance considerations

    NASA Astrophysics Data System (ADS)

    Rambalakos, Andreas

    Current federal aviation regulations in the United States and around the world mandate that aircraft structures meet damage tolerance requirements throughout the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength to sustain its integrity, which is accomplished by a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process and to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed form expressions for the system capacity cumulative distribution function (CDF) are developed by expanding the current expression for the capacity CDF of a parallel system comprised of three elements to a parallel system comprised of up to six elements. These newly developed expressions will be used to check the accuracy of the implementation of a Monte Carlo simulation algorithm that determines the probability of failure of a parallel system comprised of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme by utilizing the residual strength of the fasteners subjected to various initial load distributions and then subjected to a new unequal load distribution resulting from subsequent fastener sequential failures. The final and main objective of this thesis is to present a methodology for computing the resulting gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach. 
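
    The equal-load-sharing parallel system described above has a closed-form capacity given the element strengths: as the applied load rises, elements fail in increasing order of strength and the survivors share the load equally. A minimal sketch of the capacity and a direct Monte Carlo failure probability (the strength distribution and load are illustrative):

```python
import random

def bundle_capacity(strengths):
    """Capacity of an equal-load-sharing parallel system: with strengths
    sorted ascending, when the per-element load reaches s[k] the weakest k
    elements have failed, so the system carries at most s[k] * (n - k)."""
    s = sorted(strengths)
    n = len(s)
    return max(s[k] * (n - k) for k in range(n))

def failure_probability(n_elem, load, n_sim, draw, seed=0):
    """Direct Monte Carlo: fraction of sampled systems whose capacity is
    below the applied load; `draw(rng)` samples one element strength."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n_sim)
                if bundle_capacity([draw(rng) for _ in range(n_elem)]) < load)
    return fails / n_sim
```

    The closed-form capacity makes the Monte Carlo results directly checkable for small systems, which mirrors the verification strategy in the abstract.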
The uncertainties associated with the time to crack initiation, the probability of crack detection, the

  20. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    SciTech Connect

    Brown, Forrest B.

    2016-11-29

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. 
The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations

  1. Coherent Scattering Imaging Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Hassan, Laila Abdulgalil Rafik

    Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing, and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal to noise ratio (SNR). Also, contrast increased as the source voltage increased. Increasing grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to a noise level. The optimal source to sample distance was determined to be such that the source should be located at the focal distance of the grid. A carcinoma lump of 0.5×0.5×0.5 cm³ was detectable, which is reasonable considering the high noise due to the relatively small number of incident photons used for computational reasons. 
Further study is needed to assess the effect of breast density and breast thickness

  2. Finding organic vapors - a Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Vuollekoski, Henri; Boy, Michael; Kerminen, Veli-Matti; Kulmala, Markku

    2010-05-01

    drawbacks in accuracy, the inability to find diurnal variation and the lack of size resolution. Here, we aim to shed some light onto the problem by applying an ad hoc Monte Carlo algorithm to a well established aerosol dynamical model, the University of Helsinki Multicomponent Aerosol model (UHMA). By performing a side-by-side comparison with measurement data within the algorithm, this approach has the significant advantage of decreasing the amount of manual labor. But more importantly, by basing the comparison on particle number size distribution data - a quantity that can be quite reliably measured - the accuracy of the results is good.

  3. Influence of measurement geometry on the estimate of {sup 131}I activity in the thyroid: Monte Carlo simulation of a detector and a phantom

    SciTech Connect

    Ulanovsky, A.V.; Minenko, V.F.; Korneev, S.V.

    1997-01-01

    An approach for evaluating the influence of measurement geometry on estimates of {sup 131}I in the thyroid from measurements with survey meters was developed using Monte Carlo simulation of radiation transport in the human body and the radiation detector. The modified Monte Carlo code, EGS4, including a newly developed mathematical model of detector, thyroid gland, and neck, was used for the computations. The approach was tested by comparing calculated and measured differential and integral detector characteristics. This procedure was applied to estimate uncertainties in direct thyroid-measurement results due to geometrical errors. 14 refs., 11 figs., 4 tabs.

  4. Uncertainty Analyses for Localized Tallies in Monte Carlo Eigenvalue Calculations

    SciTech Connect

    Mervin, Brenden T.; Maldonado, G Ivan; Mosher, Scott W; Wagner, John C

    2011-01-01

    It is well known that statistical estimates obtained from Monte Carlo criticality simulations can be adversely affected by cycle-to-cycle correlations in the fission source. In addition there are several other more fundamental issues that may lead to errors in Monte Carlo results. These factors can have a significant impact on the calculated eigenvalue, localized tally means and their associated standard deviations. In fact, modern Monte Carlo computational tools may generate standard deviation estimates that are a factor of five or more lower than the true standard deviation for a particular tally due to the inter-cycle correlations in the fission source. The magnitude of this under-prediction can climb as high as one hundred when combined with an ill-converged fission source or poor sampling techniques. Since Monte Carlo methods are widely used in reactor analysis (as a benchmarking tool) and criticality safety applications, an in-depth understanding of the effects of these issues must be developed in order to support the practical use of Monte Carlo software packages. A rigorous statistical analysis of localized tally results in eigenvalue calculations is presented using the SCALE/KENO-VI and MCNP Monte Carlo codes. The purpose of this analysis is to investigate the under-prediction in the uncertainty and its sensitivity to problem characteristics and calculational parameters, and to provide a comparative study between the two codes with respect to this under-prediction. It is shown herein that adequate source convergence along with proper specification of Monte Carlo parameters can reduce the magnitude of under-prediction in the uncertainty to reasonable levels; below a factor of 2 when inter-cycle correlations in the fission source are not a significant factor. In addition, through the use of a modified sampling procedure, the effects of inter-cycle correlations on both the mean value and standard deviation estimates can be isolated.
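
    The under-prediction mechanism is easy to demonstrate on synthetic data: when cycle tallies are positively correlated, the naive standard error (which assumes independence) is biased low, while a batch-means estimate recovers most of the inflation. A sketch with an AR(1) surrogate for the correlated tallies (the correlation value is illustrative):

```python
import math
import random

def ar1_tallies(n, rho, seed=0):
    """Synthetic cycle tallies with lag-1 correlation rho (stationary AR(1),
    unit marginal variance), standing in for inter-cycle correlated tallies."""
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)
    out = [x]
    scale = math.sqrt(1.0 - rho * rho)
    for _ in range(n - 1):
        x = rho * x + scale * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def naive_sem(xs):
    """Standard error of the mean assuming independent samples."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return math.sqrt(var / len(xs))

def batch_sem(xs, batch):
    """Batch-means standard error: average within batches long enough for the
    correlation to die out, then treat the batch means as independent."""
    means = [sum(xs[i:i + batch]) / batch for i in range(0, len(xs), batch)]
    return naive_sem(means)
```

    For rho = 0.8 the true standard error is about three times the naive one, the same order of under-prediction discussed in the abstract.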

  5. The Monte Carlo code MCPTV--Monte Carlo dose calculation in radiation therapy with carbon ions.

    PubMed

    Karg, Juergen; Speer, Stefan; Schmidt, Manfred; Mueller, Reinhold

    2010-07-07

    The Monte Carlo code MCPTV is presented. MCPTV is designed for dose calculation in treatment planning in radiation therapy with particles and especially carbon ions. MCPTV has a voxel-based concept and can perform a fast calculation of the dose distribution on patient CT data. Material and density information from CT are taken into account. Electromagnetic and nuclear interactions are implemented. Furthermore the algorithm gives information about the particle spectra and the energy deposition in each voxel. This can be used to calculate the relative biological effectiveness (RBE) for each voxel. Depth dose distributions are compared to experimental data giving good agreement. A clinical example is shown to demonstrate the capabilities of the MCPTV dose calculation.

  6. Gauge Integration

    DTIC Science & Technology

    2002-09-01

    convergence theorems. Lebesgue developed his theory of measure and integration to address these shortcomings. His integral is more powerful in the...This relatively recent integral possesses the intuitive description of the Riemann integral, with the power of the Lebesgue integral. The purpose of this...

  7. Integrated Means Integrity

    ERIC Educational Resources Information Center

    Odegard, John D.

    1978-01-01

    Describes the operation of the Cessna Pilot Center (CPC) flight training systems. The program is based on a series of integrated activities involving stimulus, response, reinforcement and association components. Results show that the program can significantly reduce in-flight training time. (CP)

  8. An unbiased Hessian representation for Monte Carlo PDFs.

    PubMed

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Latorre, José Ignacio; Rojo, Juan

    We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then, that if applied to a Hessian PDF set (MMHT14) which was transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available together with (through LHAPDF6) a Hessian representation of the NNPDF3.0 set, and the MC-H PDF set.

  9. Frequency domain optical tomography using a Monte Carlo perturbation method

    NASA Astrophysics Data System (ADS)

    Yamamoto, Toshihiro; Sakamoto, Hiroki

    2016-04-01

    A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. The frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions, even for inverse problems that are ill-posed due to cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been analyzed mostly for the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with the results obtained from conventional deterministic or Monte Carlo methods.

  10. Monte Carlo dose calculations in advanced radiotherapy

    NASA Astrophysics Data System (ADS)

    Bush, Karl Kenneth

    The remarkable accuracy of Monte Carlo (MC) dose calculation algorithms has led to the widely accepted view that these methods should and will play a central role in the radiotherapy treatment verification and planning of the future. The advantages of using MC clinically are particularly evident for radiation fields passing through inhomogeneities, such as lung and air cavities, and for small fields, including those used in today's advanced intensity modulated radiotherapy techniques. Many investigators have reported significant dosimetric differences between MC and conventional dose calculations in such complex situations, and have demonstrated experimentally the unmatched ability of MC calculations in modeling charged particle disequilibrium. The advantages of using MC dose calculations do come at a cost. The nature of MC dose calculations require a highly detailed, in-depth representation of the physical system (accelerator head geometry/composition, anatomical patient geometry/composition and particle interaction physics) to allow accurate modeling of external beam radiation therapy treatments. To perform such simulations is computationally demanding and has only recently become feasible within mainstream radiotherapy practices. In addition, the output of the accelerator head simulation can be highly sensitive to inaccuracies within a model that may not be known with sufficient detail. The goal of this dissertation is to both improve and advance the implementation of MC dose calculations in modern external beam radiotherapy. To begin, a novel method is proposed to fine-tune the output of an accelerator model to better represent the measured output. In this method an intensity distribution of the electron beam incident on the model is inferred by employing a simulated annealing algorithm. The method allows an investigation of arbitrary electron beam intensity distributions and is not restricted to the commonly assumed Gaussian intensity. 
In a second component of

  11. Monte Carlo studies of model Langmuir monolayers.

    PubMed

    Opps, S B; Yang, B; Gray, C G; Sullivan, D E

    2001-04-01

    This paper examines some of the basic properties of a model Langmuir monolayer, consisting of surfactant molecules deposited onto a water subphase. The surfactants are modeled as rigid rods composed of a head and tail segment of diameters sigma(hh) and sigma(tt), respectively. The tails consist of n(t) approximately 4-7 effective monomers representing methylene groups. These rigid rods interact via site-site Lennard-Jones potentials with different interaction parameters for the tail-tail, head-tail, and head-head interactions. In a previous paper, we studied the ground-state properties of this system using a Landau approach. In the present paper, Monte Carlo simulations were performed in the canonical ensemble to elucidate the finite-temperature behavior of this system. Simulation techniques, incorporating a system of dynamic filters, allow us to decrease CPU time with negligible statistical error. This paper focuses on several of the key parameters, such as density, head-tail diameter mismatch, and chain length, responsible for driving transitions from uniformly tilted to untilted phases and between different tilt-ordered phases. Upon varying the density of the system, with sigma(hh)=sigma(tt), we observe a transition from a tilted (NNN)-condensed phase to an untilted-liquid phase and, upon comparison with recent experiments with fatty acid-alcohol and fatty acid-ester mixtures [M. C. Shih, M. K. Durbin, A. Malik, P. Zschack, and P. Dutta, J. Chem. Phys. 101, 9132 (1994); E. Teer, C. M. Knobler, C. Lautz, S. Wurlitzer, J. Kildae, and T. M. Fischer, J. Chem. Phys. 106, 1913 (1997)], we identify this as the L'(2)/Ov-L1 phase boundary. By varying the head-tail diameter ratio, we observe a decrease in T(c) with increasing mismatch. However, as the chain length was increased we observed that the transition temperatures increased and differences in T(c) due to head-tail diameter mismatch were diminished. 
In most of the present research, the water was treated as a hard

  12. Monte Carlo studies of model Langmuir monolayers

    NASA Astrophysics Data System (ADS)

    Opps, S. B.; Yang, B.; Gray, C. G.; Sullivan, D. E.

    2001-04-01

    This paper examines some of the basic properties of a model Langmuir monolayer, consisting of surfactant molecules deposited onto a water subphase. The surfactants are modeled as rigid rods composed of a head and tail segment of diameters σhh and σtt, respectively. The tails consist of nt~4-7 effective monomers representing methylene groups. These rigid rods interact via site-site Lennard-Jones potentials with different interaction parameters for the tail-tail, head-tail, and head-head interactions. In a previous paper, we studied the ground-state properties of this system using a Landau approach. In the present paper, Monte Carlo simulations were performed in the canonical ensemble to elucidate the finite-temperature behavior of this system. Simulation techniques, incorporating a system of dynamic filters, allow us to decrease CPU time with negligible statistical error. This paper focuses on several of the key parameters, such as density, head-tail diameter mismatch, and chain length, responsible for driving transitions from uniformly tilted to untilted phases and between different tilt-ordered phases. Upon varying the density of the system, with σhh=σtt, we observe a transition from a tilted (NNN)-condensed phase to an untilted-liquid phase and, upon comparison with recent experiments with fatty acid-alcohol and fatty acid-ester mixtures [M. C. Shih, M. K. Durbin, A. Malik, P. Zschack, and P. Dutta, J. Chem. Phys. 101, 9132 (1994); E. Teer, C. M. Knobler, C. Lautz, S. Wurlitzer, J. Kildae, and T. M. Fischer, J. Chem. Phys. 106, 1913 (1997)], we identify this as the L'2/Ov-L1 phase boundary. By varying the head-tail diameter ratio, we observe a decrease in Tc with increasing mismatch. However, as the chain length was increased we observed that the transition temperatures increased and differences in Tc due to head-tail diameter mismatch were diminished. In most of the present research, the water was treated as a hard surface, whereby the surfactants are only

  13. NOTE: A Monte Carlo study of dose rate distribution around the specially asymmetric CSM3-a 137Cs source

    NASA Astrophysics Data System (ADS)

    Pérez-Calatayud, J.; Lliso, F.; Ballester, F.; Serrano, M. A.; Lluch, J. L.; Limami, Y.; Puchades, V.; Casal, E.

    2001-07-01

    The CSM3 137Cs type stainless-steel encapsulated source is widely used in manually afterloaded low dose rate brachytherapy. A specially asymmetric source, CSM3-a, has been designed by CIS Bio International (France), substituting the eyelet side seed with an inactive material in the CSM3 source. This modification has been made in order to allow a uniform dose level over the upper vaginal surface when this `linear' source is inserted at the top of the dome vaginal applicators. In this study the Monte Carlo GEANT3 simulation code, incorporating the source geometry in detail, was used to investigate the dosimetric characteristics of this special CSM3-a 137Cs brachytherapy source. The absolute dose rate distribution in water around this source was calculated and is presented in the form of an along-away table. Comparison of Sievert integral type calculations with Monte Carlo results is discussed.
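
    The Sievert integral mentioned in the comparison is a classical quadrature for filtered line sources, S = ∫ exp(-mu·t / cos(theta)) d(theta) over the angular interval subtended by the source. A minimal midpoint-rule evaluation (a generic sketch, not the authors' code):

```python
import math

def sievert_integral(theta1, theta2, mu_t, n=2000):
    """Midpoint-rule evaluation of the Sievert integral
    S = integral over [theta1, theta2] of exp(-mu_t / cos(theta)) d(theta),
    where mu_t is the filtration thickness in mean free paths."""
    h = (theta2 - theta1) / n
    return h * sum(math.exp(-mu_t / math.cos(theta1 + (i + 0.5) * h))
                   for i in range(n))
```

    With mu_t = 0 the integral reduces to the angular width itself, a convenient sanity check; increasing filtration monotonically decreases the result.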

  14. A Monte Carlo simulation based two-stage adaptive resonance theory mapping approach for offshore oil spill vulnerability index classification.

    PubMed

    Li, Pu; Chen, Bing; Li, Zelin; Zheng, Xiao; Wu, Hongjing; Jing, Liang; Lee, Kenneth

    2014-09-15

    In this paper, a Monte Carlo simulation based two-stage adaptive resonance theory mapping (MC-TSAM) model was developed to classify a given site into distinguished zones representing different levels of offshore Oil Spill Vulnerability Index (OSVI). It consisted of an adaptive resonance theory (ART) module, an ART Mapping module, and a centroid determination module. Monte Carlo simulation was integrated with the TSAM approach to address uncertainties that widely exist in site conditions. The applicability of the proposed model was validated by classifying a large coastal area, which was surrounded by potential oil spill sources, based on 12 features. Statistical analysis of the results indicated that the classification process was affected by multiple features instead of one single feature. The classification results also provided the least or desired number of zones which can sufficiently represent the levels of offshore OSVI in an area under uncertainty and complexity, saving time and budget in spill monitoring and response.

  15. Quantum Monte Carlo Studies of Dense Hydrogen and Two-Dimensional Bose Liquids.

    NASA Astrophysics Data System (ADS)

    Magro, William R.

    Quantum Monte Carlo techniques, in their various incarnations, calculate ground state or finite temperature properties of many-body quantum systems. We apply the path-integral Monte Carlo method to hydrogen at densities and temperatures in the regime of cooperative thermal and pressure dissociation, relevant to structural models of the giant planets' interiors. We treat the protons and electrons as quantum particles, thereby avoiding the Born-Oppenheimer approximation. Fermi-Dirac exchange statistics are treated within the fixed-node approximation, with the nodes specified by the free Fermi gas. In the region of molecular dissociation, we observe properties consistent with and suggestive of a first order phase transition with positive density discontinuity. We also apply quantum Monte Carlo techniques to study the ground state properties of two distinct, but related, two-dimensional systems: the Bose Yukawa liquid and the Bose Coulomb liquid. The Yukawa system is a model for flux line interactions in high temperature superconductors. We determine the phase diagram as a function of mass and density and find a high density scaling relation describing the crossover to Coulomb behavior. We apply our results to a sample superconducting compound, Bi2Sr2CaCu2O8. Next the results of the Coulomb system are presented. We show that the predominance of long wavelength plasmons destroys Bose condensation in this system. The ground state of this system is closely related to the bosonic representation of Laughlin's wave function for the fractional quantum Hall system.

  16. A Monte Carlo tool for raster-scanning particle therapy dose computation

    NASA Astrophysics Data System (ADS)

    Jelen, U.; Radon, M.; Santiago, A.; Wittig, A.; Ammazzalorso, F.

    2014-03-01

    The purpose of this work was to implement Monte Carlo (MC) dose computation in realistic patient geometries with raster scanning, the most advanced ion beam delivery technique, combining magnetic beam deflection with energy variation. FLUKA, a Monte Carlo package well established in particle therapy applications, was extended to simulate raster-scanning delivery driven by clinical data, a feature not available built-in. A new complex beam source, compatible with the FLUKA public programming interface, was implemented in Fortran to model the specific properties of raster scanning, i.e. delivery by means of multiple spot sources with variable spatial distributions, energies and numbers of particles. The source was plugged into the MC engine through the user hook system provided by FLUKA. Additionally, routines were provided to populate the beam source with treatment plan data, stored in DICOM RTPlan or TRiP98's RST format, enabling MC recomputation of clinical plans. Finally, facilities were integrated to read computerised tomography (CT) data into FLUKA. The tool was used to recompute two representative carbon ion treatment plans, a skull base and a prostate case, prepared with analytical dose calculation (TRiP98). Selected, clinically relevant issues influencing the dose distributions were investigated: (1) presence of positioning errors, (2) influence of fiducial markers and (3) variations in pencil beam width. Notable differences in the modelling of these challenging situations were observed between the analytical and Monte Carlo results. In conclusion, a tool was developed to support particle therapy research and treatment when high-precision MC calculations are required, e.g. in the presence of severe density heterogeneities or in quality assurance procedures.
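    The core of such a raster-scanning beam source is sampling primary particles from a weighted list of spots. Below is a minimal sketch of that idea in Python (not FLUKA's actual Fortran user routines); the spot tuples and all numbers are hypothetical stand-ins for data that would be parsed from a DICOM RTPlan or TRiP98 RST file:

    ```python
    import random, math

    # Hypothetical spot list: (x_mm, y_mm, energy_MeV_u, n_particles, sigma_mm).
    # A real plan would supply hundreds of spots per energy slice.
    spots = [
        (-5.0, 0.0, 270.55, 1.2e6, 3.0),
        ( 0.0, 0.0, 270.55, 2.5e6, 3.0),
        ( 5.0, 0.0, 283.41, 1.8e6, 3.0),
    ]

    def sample_primary(spots, rng=random):
        """Pick a spot with probability proportional to its planned particle
        count, then draw a Gaussian-smeared starting position around the
        spot centre to model the finite pencil-beam width."""
        total = sum(s[3] for s in spots)
        r = rng.uniform(0.0, total)
        acc = 0.0
        for x, y, e, n, sigma in spots:
            acc += n
            if r <= acc:
                return (rng.gauss(x, sigma), rng.gauss(y, sigma), e)
        # Floating-point edge case: fall back to the last spot.
        x, y, e, n, sigma = spots[-1]
        return (rng.gauss(x, sigma), rng.gauss(y, sigma), e)
    ```

    Drawing many primaries reproduces the planned spot weights; a full source would also set the beam direction, divergence and momentum spread per spot.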

  17. A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision

    NASA Technical Reports Server (NTRS)

    Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.

    1998-01-01

    We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty, assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (σ) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the 46% calculated 1σ model uncertainty, so there is no significant difference between the modeled and observed trends. In northern hemisphere midlatitude spring the modeled and observed total O3 trends differ by more than 1σ but less than 2σ, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate problems in the model formulation.
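    The Latin Hypercube Sampling used above stratifies each input parameter so that even a few hundred runs cover the full input ranges. A minimal sketch on the unit hypercube (mapping each column to an actual reaction-rate uncertainty range is left out):

    ```python
    import random

    def latin_hypercube(n_samples, n_params, rng=random):
        """Latin Hypercube Sampling on the unit hypercube: each parameter's
        range is split into n_samples equal strata, each stratum is sampled
        exactly once, and the strata are randomly paired across parameters."""
        samples = [[0.0] * n_params for _ in range(n_samples)]
        for j in range(n_params):
            # One random point inside each stratum, then shuffle the column
            # so strata are paired randomly between parameters.
            column = [(i + rng.random()) / n_samples for i in range(n_samples)]
            rng.shuffle(column)
            for i in range(n_samples):
                samples[i][j] = column[i]
        return samples
    ```

    By construction every one-dimensional projection of the sample is evenly stratified, which is why LHS typically needs far fewer runs than plain random sampling for the same coverage.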

  18. A Monte Carlo investigation of the Hamiltonian mean field model

    NASA Astrophysics Data System (ADS)

    Pluchino, Alessandro; Andronico, Giuseppe; Rapisarda, Andrea

    2005-04-01

    We present a Monte Carlo numerical investigation of the Hamiltonian mean field (HMF) model. We begin by discussing canonical Metropolis Monte Carlo calculations, in order to check the caloric curve of the HMF model and study finite size effects. In the second part of the paper, we present numerical simulations obtained by means of a modified Monte Carlo procedure, with the aim of testing the stability of those states at minimum temperature and zero magnetization (homogeneous quasi-stationary states) which exist in the condensed phase of the model just below the critical point. For energy densities smaller than the limiting value U∼0.68, we find that these states are unstable, confirming a recent result from Vlasov stability analysis applied to the HMF model.
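    The canonical Metropolis calculation is compact for the HMF model because the potential energy depends on the angles only through the magnetization modulus M, via V = (N/2)(1 − M²). A minimal Python illustration of that sampling (parameters chosen for speed, not for the paper's finite-size study):

    ```python
    import math, random

    def hmf_metropolis(n=50, beta=2.0, sweeps=400, step=0.5, rng=random):
        """Canonical Metropolis sampling of the HMF potential
        V = (N/2) * (1 - M^2), with M the magnetization modulus.
        Returns the average |M| over the second half of the run."""
        theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
        sx = sum(math.cos(t) for t in theta)
        sy = sum(math.sin(t) for t in theta)
        m_acc, m_cnt = 0.0, 0
        for sweep in range(sweeps):
            for _ in range(n):
                i = rng.randrange(n)
                old, new = theta[i], theta[i] + rng.uniform(-step, step)
                sx2 = sx - math.cos(old) + math.cos(new)
                sy2 = sy - math.sin(old) + math.sin(new)
                # Energy change of the mean-field potential V = N/2 * (1 - M^2).
                m2_old = (sx * sx + sy * sy) / n**2
                m2_new = (sx2 * sx2 + sy2 * sy2) / n**2
                d_e = 0.5 * n * (m2_old - m2_new)
                if d_e <= 0.0 or rng.random() < math.exp(-beta * d_e):
                    theta[i], sx, sy = new, sx2, sy2
            if sweep >= sweeps // 2:
                m_acc += math.sqrt(sx * sx + sy * sy) / n
                m_cnt += 1
        return m_acc / m_cnt
    ```

    Below the critical temperature (β above the critical value) the sampled magnetization is large; in the homogeneous phase it shrinks toward the O(1/√N) finite-size floor, which is the regime where the quasi-stationary states discussed above live.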

  19. Monte Carlo simulation in statistical physics: an introduction

    NASA Astrophysics Data System (ADS)

    Binder, K.; Heermann, D. W.

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics, chemistry and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. This fourth edition has been updated and a new chapter on Monte Carlo simulation of quantum-mechanical problems has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was the winner of the Berni J. Alder CECAM Award for Computational Physics 2001.

  20. Monte Carlo simulation of laser attenuation characteristics in fog

    NASA Astrophysics Data System (ADS)

    Wang, Hong-Xia; Sun, Chao; Zhu, You-zhang; Sun, Hong-hui; Li, Pan-shi

    2011-06-01

    Based on the Mie scattering theory and the gamma size distribution model, the scattering extinction parameter of spherical fog drops is calculated. For the transmission attenuation of the laser in fog, a Monte Carlo simulation model is established, and the dependence of the attenuation ratio on visibility and field angle is computed and analysed using a program developed in MATLAB. The results of the Monte Carlo method in this paper are compared with the results of the single scattering method. They show that the influence of multiple scattering needs to be considered when visibility is low, where single scattering calculations have larger errors. The phenomenon of multiple scattering is better captured when the Monte Carlo method is used to calculate the attenuation ratio of the laser transmitting in fog.
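    The gap between single and multiple scattering can be illustrated with a toy photon-packet simulation. The sketch below uses a plane-parallel slab with isotropic scattering in place of the paper's Mie phase function and gamma drop-size distribution, so it only reproduces the qualitative effect: transmittance above the single-scattering Beer-Lambert value exp(−τ):

    ```python
    import math, random

    def slab_transmittance(tau, albedo, n_photons=20000, rng=random):
        """Monte Carlo transmittance of a plane-parallel slab of optical
        thickness tau with isotropic scattering (a crude stand-in for a
        Mie phase function). The direct beam alone gives exp(-tau);
        multiple scattering adds diffusely transmitted light."""
        transmitted = 0.0
        for _ in range(n_photons):
            depth, mu, weight = 0.0, 1.0, 1.0
            while True:
                depth += mu * -math.log(rng.random())  # free path in optical depth
                if depth >= tau:
                    transmitted += weight              # photon packet exits forward
                    break
                if depth < 0.0:
                    break                              # escaped back out of the fog
                weight *= albedo                       # absorb part of the packet
                if weight < 1e-3:
                    break                              # terminate negligible packets
                mu = rng.uniform(-1.0, 1.0)            # isotropic new direction cosine
        return transmitted / n_photons
    ```

    For an optically thick, weakly absorbing fog the simulated transmittance exceeds exp(−τ) substantially, which is exactly why single-scattering attenuation estimates fail at low visibility.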

  1. Classical Perturbation Theory for Monte Carlo Studies of System Reliability

    SciTech Connect

    Lewins, Jeffrey D.

    2001-03-15

    A variational principle for a Markov system allows the derivation of perturbation theory for models of system reliability, with prospects of extension to generalized Markov processes of a wide nature. It is envisaged that Monte Carlo or stochastic simulation will supply the trial functions for such a treatment, which obviates the standard difficulties of direct analog Monte Carlo perturbation studies. The development is given in the specific mode for first- and second-order theory, using an example with known analytical solutions. The adjoint equation is identified with the importance function and a discussion given as to how both the forward and backward (adjoint) fields can be obtained from a single Monte Carlo study, with similar interpretations for the additional functions required by second-order theory. Generalized Markov models with age-dependence are identified as coming into the scope of this perturbation theory.

  2. BACKWARD AND FORWARD MONTE CARLO METHOD IN POLARIZED RADIATIVE TRANSFER

    SciTech Connect

    Yong, Huang; Guo-Dong, Shi; Ke-Yong, Zhu

    2016-03-20

    In general, the Stokes vector cannot be calculated in reverse in vector radiative transfer. This paper presents a novel backward and forward Monte Carlo simulation strategy to study vector radiative transfer in a participating medium. A backward Monte Carlo process is used to calculate the ray trajectory and the endpoint of the ray; the Stokes vector is then obtained by a forward Monte Carlo process. A one-dimensional graded index semi-transparent medium is taken as the physical model, and thermal emission with polarization is studied in this medium. The solution process for non-scattering, isotropic scattering, and anisotropic scattering media, respectively, is discussed. The influence of the optical thickness and albedo on the Stokes vector is studied. The results show that the U and V components of the apparent Stokes vector are very small, but the Q component is relatively large and cannot be ignored.

  3. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphics processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  4. Monte Carlo tests of the ELIPGRID-PC algorithm

    SciTech Connect

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
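    The Monte Carlo validation idea is straightforward to sketch: randomize the hot-spot position relative to the sampling grid and count how often a grid node falls inside the ellipse. The simplified version below fixes the ellipse orientation and uses a square grid (a full validation would also randomize orientation and handle rectangular grids):

    ```python
    import random

    def detection_probability(grid, semi_major, semi_minor, trials=20000, rng=random):
        """Monte Carlo estimate of the probability that a square sampling grid
        of spacing `grid` hits an axis-aligned elliptical hot spot whose
        centre is uniformly random within one grid cell. Detection means
        at least one grid node lies inside the ellipse."""
        hits = 0
        for _ in range(trials):
            cx, cy = rng.uniform(0, grid), rng.uniform(0, grid)
            # Only nodes within the ellipse's bounding box (plus margin)
            # can possibly be inside it.
            nx = int(semi_major // grid) + 2
            ny = int(semi_minor // grid) + 2
            found = False
            for i in range(-nx, nx + 1):
                for j in range(-ny, ny + 1):
                    dx, dy = i * grid - cx, j * grid - cy
                    if (dx / semi_major) ** 2 + (dy / semi_minor) ** 2 <= 1.0:
                        found = True
                        break
                if found:
                    break
            hits += found
        return hits / trials
    ```

    Two sanity checks follow directly: a hot spot as large as the grid spacing is always detected, and a small hot spot is detected with probability close to its area divided by the grid-cell area.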

  5. Photon beam description in PEREGRINE for Monte Carlo dose calculations

    SciTech Connect

    Cox, L. J., LLNL

    1997-03-04

    The goal of PEREGRINE is to provide the capability for accurate, fast Monte Carlo calculation of radiation therapy dose distributions, for routine clinical use and for research into the efficacy of improved dose calculation. An accurate, efficient method of describing and sampling radiation sources is needed; a simple, flexible solution is provided. The teletherapy source package for PEREGRINE, coupled with state-of-the-art Monte Carlo simulations of treatment heads, makes it possible to describe any teletherapy photon beam to the precision needed for highly accurate Monte Carlo dose calculations in complex clinical configurations that use standard patient modifiers such as collimator jaws, wedges, blocks, and/or multi-leaf collimators. Generic beam descriptions for a class of treatment machines can readily be adjusted to yield dose calculations matched to specific clinical sites.

  6. Implementation of Monte Carlo Simulations for the Gamma Knife System

    NASA Astrophysics Data System (ADS)

    Xiong, W.; Huang, D.; Lee, L.; Feng, J.; Morris, K.; Calugaru, E.; Burman, C.; Li, J.; Ma, C.-M.

    2007-06-01

    Currently the Gamma Knife system is accompanied with a treatment planning system, Leksell GammaPlan (LGP), which is a standard, computer-based treatment planning system for Gamma Knife radiosurgery. In LGP, the dose calculation algorithm does not consider the scatter dose contributions and the inhomogeneity effect due to the skull and air cavities. To improve the dose calculation accuracy, Monte Carlo simulations have been implemented for the Gamma Knife planning system. In this work, the 201 Cobalt-60 sources in the Gamma Knife unit are considered to have the same activity. Each Cobalt-60 source is contained in a cylindrical stainless steel capsule. The particle phase space information is stored in four beam data files, which are collected on the inner sides of the 4 treatment helmets, after the Cobalt beam passes through the stationary and helmet collimators. Patient geometries are rebuilt from patient CT data. Twenty-two patients are included in the Monte Carlo simulation for this study. The dose is calculated using Monte Carlo in both homogeneous and inhomogeneous geometries with identical beam parameters. To investigate the attenuation effect of the skull bone, the dose in a 16 cm diameter spherical QA phantom is measured with and without a 1.5 mm lead covering and also simulated using Monte Carlo. The dose ratios with and without the 1.5 mm lead covering are 89.8% based on measurements and 89.2% according to Monte Carlo for an 18 mm-collimator helmet. For patient geometries, the Monte Carlo results show that although the relative isodose lines remain almost the same with and without inhomogeneity corrections, the difference in the absolute dose is clinically significant. The average inhomogeneity correction is (3.9 ± 0.90)% for the 22 patients investigated. These results suggest that the inhomogeneity effect should be considered in the dose calculation for Gamma Knife treatment planning.

  7. The INTEGRAL experiment

    NASA Astrophysics Data System (ADS)

    Lavigne, J. M.; Jean, P.; Kandel, B.; Borrel, V.; Roques, J. P.; Lichti, G.; Schönfelder, V.; Diehl, R.; Georgii, R.; Kirchner, T.; Durouchoux, Ph.; Cordier, B.; Diallo, N.; Sanchez, F.; Payne, B.; Leleux, P.; Caraveo, P.; Teegarden, B.; Matteson, J.; Slassi-Sennou, S.; Skinner, G.; Connell, P.

    1998-01-01

    The International Gamma-ray Astrophysics Laboratory (INTEGRAL) is conceived as the next logical step in gamma-ray astronomy after the US Compton Gamma-Ray Observatory (CGRO) and the French/Russian SIGMA mission. The INTEGRAL scientific payload consists of two main instruments (Imager and Spectrometer) and two monitor instruments (X-Ray Monitor and Optical Transient Camera). The INTEGRAL spectrometer ``SPI'' is optimized for detailed measurements of gamma-ray lines and mapping of diffuse sources. It combines a coded aperture mask with an array of large volume, high-purity germanium detectors. The detectors make precise measurements of the γ-ray energies over the 20 keV-8 MeV energy range. This paper presents the instrument characteristics; these properties have been evaluated by means of Monte Carlo calculations. With the characteristic features it will be possible to study gamma-ray emission from compact objects or line profiles with a high-energy resolution and a good angular resolution.

  8. Parallel Monte Carlo Simulation for control system design

    NASA Technical Reports Server (NTRS)

    Schubert, Wolfgang M.

    1995-01-01

    The research during the 1993/94 academic year addressed the design of parallel algorithms for stochastic robustness synthesis (SRS). SRS uses Monte Carlo simulation to compute probabilities of system instability and other design-metric violations. The probabilities form a cost function which is used by a genetic algorithm (GA). The GA searches for the stochastic optimal controller. The existing sequential algorithm was analyzed and modified to execute in a distributed environment. For this, parallel approaches to Monte Carlo simulation and genetic algorithms were investigated. Initial empirical results are available for the KSR1.

  9. A review of best practices for Monte Carlo criticality calculations

    SciTech Connect

    Brown, Forrest B

    2009-01-01

    Monte Carlo methods have been used to compute k_eff and the fundamental mode eigenfunction of critical systems since the 1950s. While such calculations have become routine using standard codes such as MCNP and SCALE/KENO, there remain three concerns that must be addressed to perform calculations correctly: convergence of k_eff and the fission distribution, bias in k_eff and tally results, and bias in statistics on tally results. This paper provides a review of the fundamental problems inherent in Monte Carlo criticality calculations. To provide guidance to practitioners, suggested best practices for avoiding these problems are discussed and illustrated by examples.
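    A widely used diagnostic for the first concern, convergence of the fission source distribution, is the Shannon entropy of the binned source, tracked cycle by cycle until it stabilizes before active tallies begin. A minimal sketch of that statistic:

    ```python
    import math

    def shannon_entropy(counts):
        """Shannon entropy (in bits) of a binned fission-source distribution.
        Plotted against cycle number, a flat entropy trace is the standard
        signal that the source has converged and tallies may start."""
        total = sum(counts)
        h = 0.0
        for c in counts:
            if c > 0:
                p = c / total
                h -= p * math.log2(p)
        return h
    ```

    A perfectly uniform source over 2^n bins gives entropy n, while a source collapsed into a single bin gives 0, so drifting entropy between cycles indicates an unconverged source.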

  10. Monte Carlo Simulations of Phosphate Polyhedron Connectivity in Glasses

    SciTech Connect

    ALAM,TODD M.

    1999-12-21

    Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest-neighbor connectivities between phosphate polyhedra for random, alternating, and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments on phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedron connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.

  11. PEPSI — a Monte Carlo generator for polarized leptoproduction

    NASA Astrophysics Data System (ADS)

    Mankiewicz, L.; Schäfer, A.; Veltri, M.

    1992-09-01

    We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross section at first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.

  12. A Monte Carlo method for combined segregation and linkage analysis

    SciTech Connect

    Guo, S.W.; Thompson, E.A.

    1992-11-01

    The authors introduce a Monte Carlo approach to combined segregation and linkage analysis of a quantitative trait observed in an extended pedigree. In conjunction with the Monte Carlo method of likelihood-ratio evaluation proposed by Thompson and Guo, the method provides for estimation and hypothesis testing. The greatest attraction of this approach is its ability to handle complex genetic models and large pedigrees. Two examples illustrate the practicality of the method. One is of simulated data on a large pedigree; the other is a reanalysis of published data previously analyzed by other methods. 40 refs, 5 figs., 5 tabs.

  13. Markov chain Monte Carlo linkage analysis of complex quantitative phenotypes.

    PubMed

    Hinrichs, A; Reich, T

    2001-01-01

    We report a Markov chain Monte Carlo analysis of the five simulated quantitative traits in Genetic Analysis Workshop 12 using the Loki software. Our objectives were to determine the efficacy of the Markov chain Monte Carlo method and to test a new scoring technique. Our initial blind analysis, on replicate 42 (the "best replicate"), successfully detected four out of the five disease loci and found no false positives. A power analysis shows that the software could usually detect 4 of the 10 trait/gene combinations at an empirical point-wise p-value of 1.5 x 10(-4).

  14. Complexity of Monte Carlo and deterministic dose-calculation methods.

    PubMed

    Börgers, C

    1998-03-01

    Grid-based deterministic dose-calculation methods for radiotherapy planning require the use of six-dimensional phase space grids. Because of the large number of phase space dimensions, a growing number of medical physicists appear to believe that grid-based deterministic dose-calculation methods are not competitive with Monte Carlo methods. We argue that this conclusion may be premature. Our results do suggest, however, that finite difference or finite element schemes with orders of accuracy greater than one will probably be needed if such methods are to compete well with Monte Carlo methods for dose calculations.

  15. Hybrid Monte Carlo/deterministic methods for radiation shielding problems

    NASA Astrophysics Data System (ADS)

    Becker, Troy L.

    For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that the theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods distribute particles as intended.
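    The weight-window element common to these methods can be sketched in a few lines: particles whose statistical weight leaves a user-specified window are split or rouletted so that the population follows the desired distribution while the expected total weight is preserved. A simplified, position-independent illustration (particle = (cell, weight) pair; all numbers are illustrative):

    ```python
    import random

    def apply_weight_window(particles, w_low, w_high, rng=random):
        """Weight-window maintenance: particles above the window are split
        into copies of reduced weight; particles below it play Russian
        roulette (survive with probability w/w_surv, at weight w_surv).
        Both operations preserve the expected total weight."""
        w_surv = 0.5 * (w_low + w_high)
        out = []
        for pos, w in particles:
            if w > w_high:
                n = int(w / w_surv) + 1              # number of split copies
                out.extend((pos, w / n) for _ in range(n))
            elif w < w_low:
                if rng.random() < w / w_surv:        # roulette: survive or die
                    out.append((pos, w_surv))
            else:
                out.append((pos, w))
        return out
    ```

    Splitting conserves weight exactly; roulette conserves it only in expectation, which is why weight windows leave tally means unbiased while controlling the variance of the particle population.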

  16. Parton distribution functions in Monte Carlo factorisation scheme

    NASA Astrophysics Data System (ADS)

    Jadach, S.; Płaczek, W.; Sapeta, S.; Siódmok, A.; Skrzypek, M.

    2016-12-01

    A next step in the development of the KrkNLO method of including complete NLO QCD corrections to hard processes in a LO parton-shower Monte Carlo is presented. It consists of a generalisation of the method, previously used for the Drell-Yan process, to Higgs-boson production. This extension is accompanied by the complete description of parton distribution functions in a dedicated, Monte Carlo factorisation scheme, applicable to any process of production of one or more colour-neutral particles in hadron-hadron collisions.

  17. Towards Fast, Scalable Hard Particle Monte Carlo Simulations on GPUs

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Irrgang, M. Eric; Glaser, Jens; Harper, Eric S.; Engel, Michael; Glotzer, Sharon C.

    2014-03-01

    Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. We discuss the implementation of Monte Carlo for arbitrary hard shapes in HOOMD-blue, a GPU-accelerated particle simulation tool, to enable million particle simulations in a field where thousands is the norm. In this talk, we discuss our progress on basic parallel algorithms, optimizations that maximize GPU performance, and communication patterns for scaling to multiple GPUs. Research applications include colloidal assembly and other uses in materials design, biological aggregation, and operations research.

  18. Kinetic Monte Carlo method applied to nucleic acid hairpin folding.

    PubMed

    Sauerwine, Ben; Widom, Michael

    2011-12-01

    Kinetic Monte Carlo on coarse-grained systems, such as nucleic acid secondary structure, is advantageous for being able to access behavior at long time scales, even minutes or hours. Transition rates between coarse-grained states depend upon intermediate barriers, which are not directly simulated. We propose an Arrhenius rate model and an intermediate energy model that incorporates the effects of the barrier between simulated states without enlarging the state space itself. Applying our Arrhenius rate model to DNA hairpin folding, we demonstrate improved agreement with experiment compared to the usual kinetic Monte Carlo model. Further improvement results from including rigidity of single-stranded stacking.
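    The rejection-free kinetic Monte Carlo loop underlying such coarse-grained simulations draws exponential waiting times from the current state's available rate; in an Arrhenius rate model like the one proposed above, barriers enter through rates of the form k = A·exp(−E‡/kBT). A two-state (folded/unfolded) toy version, not the paper's full secondary-structure state space:

    ```python
    import math, random

    def kmc_two_state(k_fold, k_unfold, t_max, rng=random):
        """Kinetic Monte Carlo (Gillespie-style) trajectory of a two-state
        hairpin: state 0 = unfolded, 1 = folded. Waiting times are
        exponential in the current state's escape rate. Returns the
        fraction of time spent folded, which approaches
        k_fold / (k_fold + k_unfold) for long trajectories."""
        t, state, time_folded = 0.0, 0, 0.0
        while t < t_max:
            rate = k_fold if state == 0 else k_unfold
            dt = -math.log(rng.random()) / rate   # exponential waiting time
            dt = min(dt, t_max - t)               # clip the final segment
            if state == 1:
                time_folded += dt
            t += dt
            state = 1 - state                     # hop to the other state
        return time_folded / t_max
    ```

    Because each step jumps directly to the next event, the accessible time scale is set by the rates rather than by a fixed integration step, which is what makes minute-to-hour folding kinetics reachable.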

  19. Trail-Needs pseudopotentials in quantum Monte Carlo calculations with plane-wave/blip basis sets

    NASA Astrophysics Data System (ADS)

    Drummond, N. D.; Trail, J. R.; Needs, R. J.

    2016-10-01

    We report a systematic analysis of the performance of a widely used set of Dirac-Fock pseudopotentials for quantum Monte Carlo (QMC) calculations. We study each atom in the periodic table from hydrogen (Z = 1) to mercury (Z = 80), with the exception of the 4f elements (57 ≤ Z ≤ 70). We demonstrate that ghost states are a potentially serious problem when plane-wave basis sets are used in density functional theory (DFT) orbital-generation calculations, but that this problem can be almost entirely eliminated by choosing the s channel to be local in the DFT calculation; the d channel can then be chosen to be local in subsequent QMC calculations, which generally leads to more accurate results. We investigate the achievable energy variance per electron with different levels of trial wave function and we determine appropriate plane-wave cutoff energies for DFT calculations for each pseudopotential. We demonstrate that the so-called "T-move" scheme in diffusion Monte Carlo is essential for many elements. We investigate the optimal choice of spherical integration rule for pseudopotential projectors in QMC calculations. The information reported here will prove crucial in the planning and execution of QMC projects involving beyond-first-row elements.

  20. GORRAM: Introducing accurate operational-speed radiative transfer Monte Carlo solvers

    NASA Astrophysics Data System (ADS)

    Buras-Schnell, Robert; Schnell, Franziska; Buras, Allan

    2016-06-01

    We present a new approach for solving the radiative transfer equation in horizontally homogeneous atmospheres. The motivation was to develop a fast yet accurate radiative transfer solver to be used in operational retrieval algorithms for next generation meteorological satellites. The core component is the program GORRAM (Generator Of Really Rapid Accurate Monte-Carlo), which generates solvers individually optimized for the intended task. These solvers consist of a Monte Carlo model capable of path recycling and a representative set of photon paths; the latter is generated using the simulated annealing technique. GORRAM automatically takes advantage of limitations on the variability of the atmosphere. Due to this optimization the number of photon paths necessary for accurate results can be reduced by several orders of magnitude. For the shown example of a forward model intended for an aerosol satellite retrieval, comparison with an exact yet slow solver shows that a precision of better than 1% can be achieved with only 36 photons. The computational time is at least an order of magnitude faster than that of any other type of radiative transfer solver. Only the lookup-table approach often used in satellite retrieval is faster, but it suffers from limited accuracy. This makes GORRAM-generated solvers an eligible candidate as forward model in operational-speed retrieval algorithms and data assimilation applications. GORRAM also has the potential to create fast solvers of other integrable equations.
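    Selecting a small representative set by simulated annealing can be illustrated on a toy problem: choose k of N precomputed path contributions whose mean reproduces the full-ensemble mean. This is a generic sketch of the annealing idea, not GORRAM's actual objective function:

    ```python
    import math, random

    def anneal_subset(values, k, steps=20000, t0=1.0, rng=random):
        """Simulated annealing in the spirit of representative-path
        selection: pick k of the values whose mean best reproduces the
        mean of the full set. Cost = |subset mean - full mean|; a random
        swap is accepted via the Metropolis rule at a decaying temperature."""
        target = sum(values) / len(values)
        idx = list(range(len(values)))
        rng.shuffle(idx)
        chosen, rest = idx[:k], idx[k:]

        def cost(sel):
            return abs(sum(values[i] for i in sel) / k - target)

        c = cost(chosen)
        for step in range(steps):
            temp = t0 * (1.0 - step / steps) + 1e-9   # linear cooling schedule
            i, j = rng.randrange(k), rng.randrange(len(rest))
            chosen[i], rest[j] = rest[j], chosen[i]    # propose a swap
            c_new = cost(chosen)
            if c_new <= c or rng.random() < math.exp((c - c_new) / temp):
                c = c_new                              # accept the swap
            else:
                chosen[i], rest[j] = rest[j], chosen[i]  # undo the swap
        return chosen, c
    ```

    Early in the schedule the high temperature lets the search escape poor subsets; as it cools, the selection settles into a subset whose estimate tracks the full ensemble, mirroring how a handful of well-chosen photon paths can stand in for millions.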

  1. Monte Carlo calculations of adsorbate placement and thermodynamics in a micropore: Xe in NaA

    NASA Astrophysics Data System (ADS)

    van Tassel, P. R.; Davis, H. T.; McCormick, A. V.

    The canonical ensemble Monte Carlo technique is used to calculate thermodynamic properties and density distributions of Xe atoms trapped in the alpha cage of zeolite NaA, which is modelled as discrete atoms (or ions) positioned on a truncated cuboctahedron. The addition of Xe atoms to the cage causes a decrease in the calculated potential energy up to a preferred loading. Beyond this, though, further loading becomes energetically unfavourable. The angle averaged density distribution of Xe in the cage exhibits a maximum between the centre of the cage and the wall; both the position and the intensity of this maximum depend strongly on the loading. Detailed examination reveals that localized density maxima exist at discrete points within the cage and that these points move as Xe loading changes. Both the preferred Xe loading and the shape of the Xe density distribution depend strongly on the Si/Al ratio, since this parameter determines the number of charge balancing cations present. For high Si/Al ratios, the angle averaged Xe density distribution can display multiple peaks. The accuracy of the Monte Carlo simulation is demonstrated by comparing the resulting potential energy and density distribution with the numerically integrated solution for the case of one Xe per cage. The numerically computed heat of adsorption for single particle loading compares favourably with experimental values following parameter scaling. The relation of these results to previously published Xe NMR measurements and to the mobility of adsorbates in zeolites is also explored.

  2. Wavelet-Monte Carlo Hybrid System for HLW Nuclide Migration Modeling and Sensitivity and Uncertainty Analysis

    SciTech Connect

    Nasif, Hesham; Neyama, Atsushi

    2003-02-26

    This paper presents results of an uncertainty and sensitivity analysis for the performance of the different barriers of high-level radioactive waste repositories. SUA is a tool to perform uncertainty and sensitivity analysis on the output of the Wavelet Integrated Repository System model (WIRS), which is developed to solve a system of nonlinear partial differential equations arising from the model formulation of radionuclide transport through a repository. SUA performs sensitivity analysis (SA) and uncertainty analysis (UA) on a sample output from Monte Carlo simulation. The sample is generated by WIRS and contains the values of the output variable, the maximum release rate, in the form of time series, together with the values of the input variables for a set of different simulations (runs), which are realized by varying the model input parameters. The Monte Carlo sample is generated with SUA as a pure random sample or using the Latin Hypercube sampling technique. Tchebycheff and Kolmogorov confidence bounds are computed on the maximum release rate for UA, and effective non-parametric statistics are used to rank the influence of the model input parameters for SA. Based on the results, we point out parameters that have primary influences on the performance of the engineered barrier system of a repository. The parameters found to be key contributors to the release rate are the selenium and cesium distribution coefficients in both the geosphere and the major water conducting fault (MWCF), the diffusion depth, and the water flow rate in the excavation-disturbed zone (EDZ).

  3. Analysis of Correlated Coupling of Monte Carlo Forward and Adjoint Histories

    SciTech Connect

    Ueki, Taro; Hoogenboom, J.E.; Kloosterman, J. L.

    2001-02-15

    In Monte Carlo correlated coupling, forward and adjoint particle histories are initiated in exactly opposite directions at an arbitrarily placed surface between a physical source and a physical detector. It is shown that this coupling calculation can become more efficient than standard forward calculations. In many cases, the basic form of correlated coupling is less efficient than standard forward calculations. This inherent inefficiency can be overcome by applying a black absorber perturbation to either the forward or the adjoint problem and by processing the product of batch averages as one statistical entity. The usage of the black absorber is based on the invariance of the response flow integral with a material perturbation in either the physical detector side volume in the forward problem or the physical source side volume in the adjoint problem. The batch-average product processing makes use of a quadratic increase of the nonzero coupled-score probability. All the developments have been done in such a way that improved efficiency schemes available in widely distributed Monte Carlo codes can be applied to both the forward and adjoint simulations. Also, the physical meaning of the black absorber perturbation is interpreted based on surface crossing and is numerically validated. In addition, the immediate reflection at the intermediate surface with a controlled direction change is investigated within the invariance framework. This approach can be advantageous for a void streaming problem.

  4. Oversight and Development of a Community Monte Carlo Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Under this grant we have developed a Monte Carlo radiative transfer code that will act as the nucleus for the I3RC Community Monte Carlo Model. All code is written in ANSI-compliant Fortran-95. Many modules define public types and procedures to manipulate them, but do not allow access to the types' internal components. This allows each module to do its own exhaustive error checking up-front, then proceed in a streamlined way. Many modules can read and write the state of their objects to persistent files. The code has been tested on a Macintosh running OS 10.2.4 and the Absoft Fortran compiler, and on Sun UltraSparcs running Solaris 5.8 and Forte V8 compilers. The code exposed bugs in the Intel Fortran Compiler (ifc) on the I3RC Linux host, and we are waiting for a resolution of these bugs before finishing the port. The code base is under CVS version control. It consists of the core code (nine modules providing the infrastructure), example integrators, and a suite of utilities and examples.

  5. Quantum annealing of an Ising spin-glass by Green's function Monte Carlo.

    PubMed

    Stella, Lorenzo; Santoro, Giuseppe E

    2007-03-01

    We present an implementation of quantum annealing (QA) via lattice Green's function Monte Carlo (GFMC), focusing on its application to the Ising spin glass in transverse field. In particular, we study whether or not such a method is more effective than path-integral Monte Carlo (PIMC) based QA, as well as classical simulated annealing (CA), previously tested on the same optimization problem. We identify the issue of importance sampling, i.e., the necessity of possessing reasonably good (variational) trial wave functions, as the key point of the algorithm. We performed GFMC-QA runs using a Boltzmann-type trial wave function, finding results for the residual energies that are qualitatively similar to those of CA (but at a much larger computational cost), and definitely worse than PIMC-QA. We conclude that, at present, without a serious effort in constructing reliable importance sampling variational wave functions for a quantum glass, GFMC-QA is not a true competitor of PIMC-QA.

  6. Monte carlo simulation of an X-ray pixel beam microirradiation system.

    PubMed

    Schreiber, E C; Chang, S X

    2009-03-01

    Monte Carlo simulations are used in the development of a nanotechnology-based multi-pixel beam array small animal microirradiation system. The microirradiation system uses carbon nanotube field emission technology to generate arrays of individually controllable X-ray pixel beams that electronically form irregular irradiation fields with intensity and temporal modulation, without any mechanical motion. Once developed, the microirradiation system will be integrated with the already developed micro-CT system, which is based on the same nanotechnology, to form an integrated image-guided and intensity-modulated microirradiation system for high-temporal-resolution small animal research. Prospective microirradiation designs were evaluated based on dosimetry calculated using EGSnrc-based Monte Carlo simulations. Design aspects studied included X-ray anode design, collimator design, and dosimetric considerations such as beam energy, dose rate, inhomogeneity correction, and the microirradiation treatment planning strategies. The dosimetric properties of beam energies between 80 and 400 kVp with varying filtration were studied, producing a pixel beam dose rate per current of 0.35-13 Gy per min per mA at the microirradiation isocenter. Using opposing multi-pixel-beam array pairs reduces the dose inhomogeneity between adjacent pixel beams to negligible levels near the isocenter and 20% near the mouse surface.

  7. A kinetic theory for nonanalog Monte Carlo algorithms: Exponential transform with angular biasing

    SciTech Connect

    Ueki, T.; Larsen, E.W.

    1998-11-01

    A new Boltzmann Monte Carlo (BMC) equation is proposed to describe the transport of Monte Carlo particles governed by a set of nonanalog rules for the transition of space, velocity, and weight. The BMC equation is a kinetic equation that includes weight as an extra independent variable. The solution of the BMC equation is the pointwise distribution of velocity and weight throughout the physical system. The BMC equation is derived for the simulation of a transmitted current, utilizing the exponential transform with angular biasing. The weight moments of the solution of the BMC equation are used to predict the score moments of the transmission current. (Also, it is shown that an adjoint BMC equation can be used for this purpose.) Integrating the solution of the forward BMC equation over space, velocity, and weight, the mean number of flights per history is obtained. This is used to determine theoretically the figure of merit for any choice of biasing parameters. Also, a maximum safe value of the exponential transform parameter is proposed, which ensures a finite variance of the variance estimate (sample variance) for any penetration distance. Finally, numerical results that validate the new theory are provided.
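    The exponential transform referred to above stretches sampled free paths by using a biased cross-section while carrying a compensating statistical weight; this weight is exactly the extra variable the BMC equation tracks. A one-dimensional slab-transmission sketch (all parameters illustrative, and without the angular biasing of the paper):

```python
import math
import random

def transmission_exp_transform(sigma, thickness, p, n, rng=random):
    """Estimate slab transmission exp(-sigma * thickness) using the
    exponential transform: free paths are sampled from a stretched
    density with sigma_star = sigma * (1 - p), and surviving histories
    carry the weight exp(-(sigma - sigma_star) * thickness) so the
    estimator remains unbiased."""
    sigma_star = sigma * (1.0 - p)
    score = 0.0
    for _ in range(n):
        x = -math.log(1.0 - rng.random()) / sigma_star  # biased free path
        if x > thickness:
            # true survival prob / biased survival prob, evaluated at x > t
            score += math.exp(-(sigma - sigma_star) * thickness)
    return score / n

random.seed(8)
est = transmission_exp_transform(1.0, 5.0, 0.6, 20000)
```

    With p = 0.6 the biased survival probability is exp(-2) instead of exp(-5), so far more histories contribute, while the weight restores the correct mean; this variance trade-off is precisely what the weight moments of the BMC equation quantify.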

  8. Kinetic Monte Carlo Simulation of Oxygen and Cation Diffusion in Yttria-Stabilized Zirconia

    NASA Technical Reports Server (NTRS)

    Good, Brian

    2011-01-01

    Yttria-stabilized zirconia (YSZ) is of interest to the aerospace community, notably for its application as a thermal barrier coating for turbine engine components. In such an application, diffusion of both oxygen ions and cations is of concern. Oxygen diffusion can lead to deterioration of a coated part, and often necessitates an environmental barrier coating. Cation diffusion in YSZ is much slower than oxygen diffusion. However, such diffusion is a mechanism by which creep takes place, potentially affecting the mechanical integrity and phase stability of the coating. In other applications, the high oxygen diffusivity of YSZ is useful, and makes the material of interest for use as a solid-state electrolyte in fuel cells. The kinetic Monte Carlo (kMC) method offers a number of advantages compared with the more widely known molecular dynamics simulation method. In particular, kMC is much more efficient for the study of processes, such as diffusion, that involve infrequent events. We describe the results of kinetic Monte Carlo computer simulations of oxygen and cation diffusion in YSZ. Using diffusive energy barriers from ab initio calculations and from the literature, we present results on the temperature dependence of oxygen and cation diffusivity, and on the dependence of the diffusivities on yttria concentration and oxygen sublattice vacancy concentration. We also present results of the effect on diffusivity of oxygen vacancies in the vicinity of the barrier cations that determine the oxygen diffusion energy barriers.
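    The efficiency advantage of kMC for infrequent events comes from its rejection-free (residence-time) step: pick an event with probability proportional to its rate and advance the clock by an exponentially distributed waiting time. A minimal sketch of one such step (the two rates below are arbitrary toy values standing in for a fast oxygen hop and a slow cation hop, not the paper's ab initio barriers):

```python
import math
import random

def kmc_step(rates, rng=random):
    """One rejection-free kMC step: choose event i with probability
    rates[i] / sum(rates), and draw the waiting time from an exponential
    distribution with mean 1 / sum(rates)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(rates) - 1
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / total
    return chosen, dt

random.seed(2)
rates = [10.0, 0.1]   # fast (oxygen-like) and slow (cation-like) hop rates
counts = [0, 0]
t = 0.0
for _ in range(2000):
    i, dt = kmc_step(rates)
    counts[i] += 1
    t += dt
```

    Fast events dominate the event count while the clock still advances correctly through the long quiet stretches, which is why no simulation time is wasted waiting for the rare cation hops.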

  9. Distributed processor Monte Carlo: MCNP results on a 16-node IBM cluster

    SciTech Connect

    McKinney, G.W.

    1993-05-01

    The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. Although there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's Law: S(f, P) = 1/(1 - f + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's Law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.
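    The Amdahl's Law expression quoted above is easy to evaluate directly; a small sketch showing the diminishing returns it predicts:

```python
def amdahl_speedup(f, p):
    """Theoretical speedup S(f, P) = 1 / (1 - f + f / P) for a task whose
    fraction f runs in parallel on P processors."""
    return 1.0 / (1.0 - f + f / p)

# For a 16-node cluster and a 95%-parallel Monte Carlo run, the ideal
# 16x speedup shrinks to about 9.1x; the serial 5% dominates.
s_16 = amdahl_speedup(0.95, 16)
```

    Note that even with unlimited processors the speedup is capped at 1/(1 - f), which is why the abstract stresses that the theoretical limit is rarely reached in practice.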

  10. Distributed processor Monte Carlo: MCNP results on a 16-node IBM cluster

    SciTech Connect

    McKinney, G.W.

    1993-01-01

    The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. Although there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's Law: S(f, P) = 1/(1 - f + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's Law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.

  11. Diffusion microscopist simulator: a general Monte Carlo simulation system for diffusion magnetic resonance imaging.

    PubMed

    Yeh, Chun-Hung; Schmitt, Benoît; Le Bihan, Denis; Li-Schlittgen, Jing-Rebecca; Lin, Ching-Po; Poupon, Cyril

    2013-01-01

    This article describes the development and application of an integrated, generalized, and efficient Monte Carlo simulation system for diffusion magnetic resonance imaging (dMRI), named Diffusion Microscopist Simulator (DMS). DMS comprises a random walk Monte Carlo simulator and an MR image synthesizer. The former has the capacity to perform large-scale simulations of Brownian dynamics in the virtual environments of neural tissues at various levels of complexity, and the latter is flexible enough to synthesize dMRI datasets from a variety of simulated MRI pulse sequences. The aims of DMS are to give insights into the link between the fundamental diffusion process in biological tissues and the features observed in dMRI, as well as to provide appropriate ground-truth information for the development, optimization, and validation of dMRI acquisition schemes for different applications. The validity, efficiency, and potential applications of DMS are evaluated through four benchmark experiments, including the simulated dMRI of white matter fibers, multiple scattering diffusion imaging, the biophysical modeling of polar cell membranes, and high angular resolution diffusion imaging and fiber tractography of complex fiber configurations. We expect that this novel software tool will be substantially advantageous for clarifying the interrelationship between dMRI and the microscopic characteristics of brain tissues, and for advancing the biophysical modeling and dMRI methodologies.

  12. Using MathCad to Evaluate Exact Integral Formulations of Spacecraft Orbital Heats for Primitive Surfaces at Any Orientation

    NASA Technical Reports Server (NTRS)

    Pinckney, John

    2010-01-01

    With the advent of high-speed computing, Monte Carlo ray-tracing techniques have become the preferred method for evaluating spacecraft orbital heats. Monte Carlo has its greatest advantage where there are many interacting surfaces. However, Monte Carlo programs are specialized tools that suffer from some inaccuracy, long calculation times, and high purchase cost. A general orbital heating integral is presented here that is accurate, fast, and runs on MathCad, a generally available engineering mathematics program. The integral is easy to read, understand, and alter. The integral can be applied to unshaded primitive surfaces at any orientation. The method is limited to direct heating calculations. This integral formulation can be used for quick orbit evaluations and for spot-checking Monte Carlo results.

  13. Path-integral approach to lattice polarons

    NASA Astrophysics Data System (ADS)

    Kornilovitch, P. E.

    2007-06-01

    The basic principles behind a path integral approach to the lattice polaron are reviewed. Analytical integration of phonons reduces the problem to one self-interacting imaginary-time path, which is then simulated by Metropolis Monte Carlo. Projection operators separate states of different symmetry, which provides access to various excited states. Shifted boundary conditions in imaginary time enable calculation of the polaron mass, spectrum and density of states. Other polaron characteristics accessible by the method include the polaron energy, number of excited phonons and isotope exponent on mass. Monte Carlo updates are formulated in continuous imaginary time on infinite lattices and as such provide statistically unbiased results without finite-size and finite time-step errors. Numerical data are presented for models with short-range and long-range electron-phonon interactions.

  14. Monte Carlo implementation of Schiff's approximation for estimating radiative properties of homogeneous, simple-shaped and optically soft particles: Application to photosynthetic micro-organisms

    NASA Astrophysics Data System (ADS)

    Charon, Julien; Blanco, Stéphane; Cornet, Jean-François; Dauchet, Jérémi; El Hafi, Mouna; Fournier, Richard; Abboud, Mira Kaissar; Weitz, Sebastian

    2016-03-01

    In the present paper, Schiff's approximation is applied to the study of light scattering by large and optically-soft axisymmetric particles, with special attention to cylindrical and spheroidal photosynthetic micro-organisms. This approximation is similar to the anomalous diffraction approximation but includes a description of phase functions. Resulting formulations for the radiative properties are multidimensional integrals, the numerical resolution of which requires close attention. It is here argued that strong benefits can be expected from a statistical resolution by the Monte Carlo method. But designing such efficient Monte Carlo algorithms requires the development of non-standard algorithmic tricks using careful mathematical analysis of the integral formulations: the codes that we develop (and make available) include an original treatment of the nonlinearity in the differential scattering cross-section (squared modulus of the scattering amplitude) thanks to a double sampling procedure. This approach makes it possible to take advantage of recent methodological advances in the field of Monte Carlo methods, illustrated here by the estimation of sensitivities to parameters. Comparison with reference solutions provided by the T-Matrix method is presented whenever possible. Required geometric calculations are closely similar to those used in standard Monte Carlo codes for geometric optics by the computer-graphics community, i.e. calculation of intersections between rays and surfaces, which opens interesting perspectives for the treatment of particles with complex shapes.
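    The double sampling procedure mentioned above rests on a simple identity: for independent draws X1 and X2, E[a(X1) conj(a(X2))] = E[a] conj(E[a]) = |E[a]|^2, so the squared modulus of an integral (here, the scattering amplitude) can be estimated without bias, whereas squaring a single sample would add the sample variance as a bias. A toy sketch with an illustrative amplitude, not the Schiff-approximation integrand:

```python
import cmath
import random

def double_sample_sq_modulus(amp, sampler, n, rng=random):
    """Unbiased estimate of |E[a(X)]|^2 via two independent draws per term:
    E[a(X1) * conj(a(X2))] = |E[a]|^2 when X1, X2 are i.i.d."""
    acc = 0.0
    for _ in range(n):
        x1, x2 = sampler(rng), sampler(rng)
        acc += (amp(x1) * amp(x2).conjugate()).real
    return acc / n

random.seed(3)
# Toy amplitude a(x) = exp(i*x), x uniform on [0, pi/2].
# Then E[a] = (2/pi) * (1 + i), so |E[a]|^2 = 8 / pi**2 ~ 0.8106.
amp = lambda x: cmath.exp(1j * x)
sampler = lambda rng: rng.uniform(0.0, cmath.pi / 2)
est = double_sample_sq_modulus(amp, sampler, 20000)
```

    In the paper's setting the two draws correspond to two independently sampled rays through the particle, which is what makes the nonlinear (squared-modulus) cross-section tractable by Monte Carlo.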

  15. Integrating Art.

    ERIC Educational Resources Information Center

    BCATA Journal for Art Teachers, 1991

    1991-01-01

    These articles focus on art as a component of interdisciplinary integration. (1) "Integrated Curriculum and the Visual Arts" (Anna Kindler) considers various aspects of integration and implications for art education. (2) "Integration: The New Literacy" (Tim Varro) illustrates how the use of technology can facilitate…

  16. Monte Carlo Analysis of Quantum Transport and Fluctuations in Semiconductors.

    DTIC Science & Technology

    1986-02-18

    The present report contains technical matter related to the research performed on two different subjects. The first part concerns quantum ... methods to quantum transport within the Liouville formulation. The second part concerns fluctuations of carrier velocities and energies both in ... interactions) on the transport properties. Keywords: Monte Carlo; Charge Transport; Quantum Transport; Fluctuations; Semiconductor Physics; Master Equation.

  17. Monte Carlo simulation by computer for life-cycle costing

    NASA Technical Reports Server (NTRS)

    Gralow, F. H.; Larson, W. J.

    1969-01-01

    Predicting the behavior and support requirements of a system over its entire life cycle enables accurate cost estimates via computer-based Monte Carlo simulation. The approach reduces the ultimate cost to the procuring agency because it takes into consideration the costs of initial procurement, operation, and maintenance.
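    A life-cycle costing simulation of the kind described draws each cost component from an assumed distribution and aggregates many sampled lifetimes; all figures and distributions below are hypothetical, chosen only to illustrate the structure:

```python
import random

def life_cycle_cost(years, rng=random):
    """One sampled life-cycle cost: uncertain procurement plus uncertain
    yearly operation and maintenance costs (all figures hypothetical)."""
    cost = rng.uniform(90.0, 110.0)          # initial procurement
    for _ in range(years):
        cost += rng.uniform(8.0, 12.0)       # routine operation
        cost += rng.expovariate(1.0 / 3.0)   # maintenance, occasional spikes
    return cost

random.seed(7)
costs = [life_cycle_cost(10) for _ in range(5000)]
mean = sum(costs) / len(costs)
```

    Besides the mean, the sampled distribution gives percentiles of total cost, which is the information a point estimate of procurement cost alone cannot provide.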

  18. MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD

    EPA Science Inventory

    A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...

  19. The Metropolis Monte Carlo Method in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Landau, David P.

    2003-11-01

    A brief overview is given of some of the advances in statistical physics that have been made using the Metropolis Monte Carlo method. By complementing theory and experiment, these have increased our understanding of phase transitions and other phenomena in condensed matter systems. A brief description of a new method, commonly known as "Wang-Landau sampling," will also be presented.
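    Wang-Landau sampling estimates the density of states g(E) by performing a random walk in energy, penalizing each visited energy level by a modification factor until the visit histogram is flat, then shrinking the factor. A minimal sketch for a toy system of noninteracting spins, where the "energy" is the number of up spins and g(E) is known exactly as a binomial coefficient (the real applications are of course interacting models like the Ising ferromagnet):

```python
import math
import random

def wang_landau(L, lnf_final=1e-4, flatness=0.8, rng=random):
    """Wang-Landau estimate of ln g(E) for L noninteracting spins, where E
    is the number of up spins, so exactly g(E) = C(L, E)."""
    lng = [0.0] * (L + 1)          # running estimate of ln g(E)
    hist = [0] * (L + 1)           # visit histogram for the flatness check
    spins = [rng.choice((0, 1)) for _ in range(L)]
    E = sum(spins)
    lnf = 1.0                      # modification factor, reduced when flat
    while lnf > lnf_final:
        for _ in range(1000 * (L + 1)):
            i = rng.randrange(L)
            E_new = E + (1 - 2 * spins[i])  # a single flip changes E by +/-1
            # Accept with probability min(1, g(E) / g(E_new)).
            if lng[E] - lng[E_new] >= 0.0 or rng.random() < math.exp(lng[E] - lng[E_new]):
                spins[i] ^= 1
                E = E_new
            lng[E] += lnf
            hist[E] += 1
        if min(hist) > flatness * sum(hist) / len(hist):
            hist = [0] * (L + 1)
            lnf *= 0.5
    # Normalize so that sum over E of g(E) equals 2**L.
    shift = max(lng)
    total = sum(math.exp(v - shift) for v in lng)
    return [v - shift - math.log(total) + L * math.log(2.0) for v in lng]

random.seed(4)
ln_g = wang_landau(8)
```

    Once ln g(E) is known, thermodynamic averages at any temperature follow from a single reweighting sum, which is the property that made the method so useful for studying phase transitions.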

  20. Quantum Monte Carlo simulation of topological phase transitions

    NASA Astrophysics Data System (ADS)

    Yamamoto, Arata; Kimura, Taro

    2016-12-01

    We study the electron-electron interaction effects on topological phase transitions by the ab initio quantum Monte Carlo simulation. We analyze two-dimensional class A topological insulators and three-dimensional Weyl semimetals with the long-range Coulomb interaction. The direct computation of the Chern number shows the electron-electron interaction modifies or extinguishes topological phase transitions.

  1. The Use of Monte Carlo Techniques to Teach Probability.

    ERIC Educational Resources Information Center

    Newell, G. J.; MacFarlane, J. D.

    1985-01-01

    Presents sports-oriented examples (cricket and football) in which Monte Carlo methods are used on microcomputers to teach probability concepts. Both examples include computer programs (with listings) which utilize the microcomputer's random number generator. Instructional strategies, with further challenges to help students understand the role of…
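    The article's cricket and football programs target 1980s microcomputer BASIC and are not reproduced here; an analogous Monte Carlo probability exercise in Python, using the classic de Mere dice question, might look like:

```python
import random

def prob_at_least_one_six(n_trials, rng=random):
    """Estimate by simulation the probability of rolling at least one six
    in four throws of a fair die (the classic de Mere problem)."""
    hits = 0
    for _ in range(n_trials):
        if any(rng.randint(1, 6) == 6 for _ in range(4)):
            hits += 1
    return hits / n_trials

random.seed(5)
p_hat = prob_at_least_one_six(100000)
# Exact answer for comparison: 1 - (5/6)**4, about 0.5177
```

    Students can compare the simulated estimate against the exact value and watch the error shrink as the number of trials grows, which is the pedagogical point of the Monte Carlo approach.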

  2. Error estimations and their biases in Monte Carlo eigenvalue calculations

    SciTech Connect

    Ueki, Taro; Mori, Takamasa; Nakagawa, Masayuki

    1997-01-01

    In the Monte Carlo eigenvalue calculation of neutron transport, the eigenvalue is calculated as the average of multiplication factors from cycles, which are called the cycle k-eff's. Biases in the estimators of the variance and intercycle covariances in Monte Carlo eigenvalue calculations are analyzed. The relations among the real and apparent values of variances and intercycle covariances are derived, where real refers to a true value that is calculated from independently repeated Monte Carlo runs and apparent refers to the expected value of estimates from a single Monte Carlo run. Next, iterative methods based on the foregoing relations are proposed to estimate the standard deviation of the eigenvalue. The methods work well for the cases in which the ratios of the real to apparent values of variances are between 1.4 and 3.1. Even in the case where the foregoing ratio is >5, more than 70% of the standard deviation estimates fall within 40% of the true value.

  3. Diffuse photon density wave measurements and Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kuzmin, Vladimir L.; Neidrauer, Michael T.; Diaz, David; Zubkov, Leonid A.

    2015-10-01

    Diffuse photon density wave (DPDW) methodology is widely used in a number of biomedical applications. Here, we present results of Monte Carlo simulations that employ an effective numerical procedure based upon a description of radiative transfer in terms of the Bethe-Salpeter equation. A multifrequency noncontact DPDW system was used to measure aqueous solutions of intralipid at a wide range of source-detector separation distances, at which the diffusion approximation of the radiative transfer equation is generally considered to be invalid. We find that the signal-to-noise ratio is larger for the considered algorithm than for the conventional Monte Carlo approach. Experimental data are compared to the Monte Carlo simulations using several values of scattering anisotropy and to the diffusion approximation. Both the Monte Carlo simulations and the diffusion approximation were in very good agreement with the experimental data for a wide range of source-detector separations. In addition, measurements with different wavelengths were performed to estimate the size and scattering anisotropy of scatterers.

  4. Calculating coherent pair production with Monte Carlo methods

    SciTech Connect

    Bottcher, C.; Strayer, M.R.

    1989-01-01

    We discuss calculations of the coherent electromagnetic pair production in ultra-relativistic hadron collisions. This type of production, in lowest order, is obtained from three diagrams which contain two virtual photons. We discuss simple Monte Carlo methods for evaluating these classes of diagrams without recourse to involved algebraic reduction schemes. 19 refs., 11 figs.

  5. A Monte Carlo simulation of a supersaturated sodium chloride solution

    NASA Astrophysics Data System (ADS)

    Schwendinger, Michael G.; Rode, Bernd M.

    1989-03-01

    A simulation of a supersaturated sodium chloride solution with the Monte Carlo statistical thermodynamic method is reported. The water-water interactions are described by the Matsuoka-Clementi-Yoshimine (MCY) potential, while the ion-water potentials have been derived from ab initio calculations. Structural features of the solution have been evaluated, special interest being focused on possible precursors of nucleation.

  6. Monte Carlo capabilities of the SCALE code system

    DOE PAGES

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  7. Monte Carlo capabilities of the SCALE code system

    SciTech Connect

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; Bekar, Kursat B.; Wiarda, Dorothea; Celik, Cihangir; Perfetti, Christopher M.; Ibrahim, Ahmad M.; Hart, S. W. D.; Dunn, Michael E.; Marshall, William J.

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  8. CMS Monte Carlo production operations in a distributed computing environment

    SciTech Connect

    Mohapatra, A.; Lazaridis, C.; Hernandez, J.M.; Caballero, J.; Hof, C.; Kalinin, S.; Flossdorf, A.; Abbrescia, M.; De Filippis, N.; Donvito, G.; Maggi, G.; /Bari U. /INFN, Bari /INFN, Pisa /Vrije U., Brussels /Brussels U. /Imperial Coll., London /CERN /Princeton U. /Fermilab

    2008-01-01

    Monte Carlo production for the CMS experiment is carried out in a distributed computing environment; the goal of producing 30M simulated events per month in the first half of 2007 has been reached. A brief overview of the production operations and statistics is presented.

  9. Nonequilibrium Candidate Monte Carlo Simulations with Configurational Freezing Schemes.

    PubMed

    Giovannelli, Edoardo; Gellini, Cristina; Pietraperzia, Giangaetano; Cardini, Gianni; Chelli, Riccardo

    2014-10-14

    Nonequilibrium Candidate Monte Carlo simulation [Nilmeier et al., Proc. Natl. Acad. Sci. U.S.A. 2011, 108, E1009-E1018] is a tool devised to design Monte Carlo moves with high acceptance probabilities that connect uncorrelated configurations. Such moves are generated through nonequilibrium driven dynamics, producing candidate configurations accepted with a Monte Carlo-like criterion that preserves the equilibrium distribution. The probability of accepting a candidate configuration as the next sample in the Markov chain basically depends on the work performed on the system during the nonequilibrium trajectory, and increases as that work decreases. It is thus strategically relevant to find ways of producing nonequilibrium moves with low work, namely moves where dissipation is as low as possible. This is the goal of our methodology, in which we combine Nonequilibrium Candidate Monte Carlo with the Configurational Freezing schemes developed by Nicolini et al. (J. Chem. Theory Comput. 2011, 7, 582-593). The idea is to limit the configurational sampling to particles in a well-established region of the simulation sample, namely the region where dissipation occurs, while leaving the other particles fixed. This allows the system to relax faster around the region perturbed by the finite-time switching move and hence reduces the dissipated work, eventually enhancing the probability of accepting the generated move. Our combined approach significantly enhances configurational sampling, as shown by the case of a bistable dimer immersed in a dense fluid.
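    The work-based acceptance rule described above can be sketched as follows; assuming a symmetric driving protocol, the criterion reduces to a Metropolis-like factor in the accumulated protocol work W (the full NCMC criterion also handles asymmetric protocols, which this sketch omits):

```python
import math
import random

def ncmc_accept(beta, work, rng=random):
    """Metropolis-like acceptance for a nonequilibrium candidate move:
    accept with probability min(1, exp(-beta * W)), where W is the work
    accumulated along the driven trajectory (symmetric protocol assumed)."""
    return work <= 0.0 or rng.random() < math.exp(-beta * work)

random.seed(6)
# Low-dissipation moves are accepted far more often than high-work ones.
n_low = sum(ncmc_accept(1.0, 0.1) for _ in range(10000))
n_high = sum(ncmc_accept(1.0, 3.0) for _ in range(10000))
```

    Configurational Freezing aims precisely at pushing W down for a given switching move, so that the exponential factor above stays close to one.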

  10. Play It Again: Teaching Statistics with Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Sigal, Matthew J.; Chalmers, R. Philip

    2016-01-01

    Monte Carlo simulations (MCSs) provide important information about statistical phenomena that would be impossible to assess otherwise. This article introduces MCS methods and their applications to research and statistical pedagogy using a novel software package for the R Project for Statistical Computing constructed to lessen the often steep…

  11. Observations on variational and projector Monte Carlo methods.

    PubMed

    Umrigar, C J

    2015-10-28

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.

  12. Automated Monte Carlo Simulation of Proton Therapy Treatment Plans.

    PubMed

    Verburg, Joost Mathijs; Grassberger, Clemens; Dowdell, Stephen; Schuemann, Jan; Seco, Joao; Paganetti, Harald

    2016-12-01

    Simulations of clinical proton radiotherapy treatment plans using general purpose Monte Carlo codes have been proven to be a valuable tool for basic research and clinical studies. They have been used to benchmark dose calculation methods, to study radiobiological effects, and to develop new technologies such as in vivo range verification methods. Advancements in the availability of computational power have made it feasible to perform such simulations on large sets of patient data, resulting in a need for automated and consistent simulations. A framework called MCAUTO was developed for this purpose. Both passive scattering and pencil beam scanning delivery are supported. The code handles the data exchange between the treatment planning system and the Monte Carlo system, which requires not only transfer of plan and imaging information but also translation of institutional procedures, such as output factor definitions. Simulations are performed on a high-performance computing infrastructure. The simulation methods were designed to use the full capabilities of Monte Carlo physics models, while also ensuring consistency in the approximations that are common to both pencil beam and Monte Carlo dose calculations. Although some methods need to be tailored to institutional planning systems and procedures, the described procedures show a general road map that can be easily translated to other systems.

  13. Monte Carlo method for magnetic impurities in metals

    NASA Technical Reports Server (NTRS)

    Hirsch, J. E.; Fye, R. M.

    1986-01-01

    The paper discusses a Monte Carlo algorithm to study properties of dilute magnetic alloys; the method can treat a small number of magnetic impurities interacting with the conduction electrons in a metal. Results for the susceptibility of a single Anderson impurity in the symmetric case show the expected universal behavior at low temperatures. Some results for two Anderson impurities are also discussed.

  14. An Overview of the Monte Carlo Methods, Codes, & Applications Group

    SciTech Connect

    Trahan, Travis John

    2016-08-30

    This report sketches the work of the Group to deliver first-principle Monte Carlo methods, production quality codes, and radiation transport-based computational and experimental assessments using the codes MCNP and MCATK for such applications as criticality safety, non-proliferation, nuclear energy, nuclear threat reduction and response, radiation detection and measurement, radiation health protection, and stockpile stewardship.

  15. Parallel Monte Carlo simulation of multilattice thin film growth

    NASA Astrophysics Data System (ADS)

    Shu, J. W.; Lu, Qin; Wong, Wai-on; Huang, Han-chen

    2001-07-01

    This paper describes a new parallel algorithm for the multi-lattice Monte Carlo atomistic simulator for thin film deposition (ADEPT), implemented on a parallel computer using the PVM (Parallel Virtual Machine) message passing library. The parallel algorithm is based on domain decomposition with overlapping and asynchronous communication. Multiple lattices are represented by a single reference lattice through one-to-one mappings, with resulting computational demands being comparable to those in the single-lattice Monte Carlo model. Asynchronous communication and domain overlapping techniques are used to reduce the waiting time and communication time among parallel processors. Results show that the algorithm is highly efficient with a large number of processors. The algorithm was implemented on a parallel machine with 50 processors, and it is suitable for parallel Monte Carlo simulation of thin film growth with either a distributed memory parallel computer or a shared memory machine with message passing libraries. In this paper, the significant communication time in parallel MC simulation of thin film growth is effectively reduced by adopting domain decomposition with overlapping between sub-domains and asynchronous communication among processors. The communication overhead does not increase appreciably, and the speedup rises as the number of processors increases. A near-linear increase in computing speed was achieved as the number of processors increased, and there is no theoretical limit on the number of processors to be used. The techniques developed in this work are also suitable for the implementation of the Monte Carlo code on other parallel systems.

  16. Monte Carlo study of the atmospheric spread function

    NASA Technical Reports Server (NTRS)

    Pearce, W. A.

    1986-01-01

    Monte Carlo radiative transfer simulations are used to study the atmospheric spread function appropriate to satellite-based sensing of the earth's surface. The parameters which are explored include the nadir angle of view, the size distribution of the atmospheric aerosol, and the aerosol vertical profile.

  17. Diffuse photon density wave measurements and Monte Carlo simulations.

    PubMed

    Kuzmin, Vladimir L; Neidrauer, Michael T; Diaz, David; Zubkov, Leonid A

    2015-10-01

    Diffuse photon density wave (DPDW) methodology is widely used in a number of biomedical applications. Here, we present results of Monte Carlo simulations that employ an effective numerical procedure based upon a description of radiative transfer in terms of the Bethe–Salpeter equation. A multifrequency noncontact DPDW system was used to measure aqueous solutions of intralipid at a wide range of source–detector separation distances, at which the diffusion approximation of the radiative transfer equation is generally considered to be invalid. We find that the signal-to-noise ratio is larger for the considered algorithm in comparison with the conventional Monte Carlo approach. Experimental data are compared to the Monte Carlo simulations using several values of scattering anisotropy and to the diffusion approximation. Both the Monte Carlo simulations and diffusion approximation were in very good agreement with the experimental data for a wide range of source–detector separations. In addition, measurements with different wavelengths were performed to estimate the size and scattering anisotropy of scatterers.

  18. Monte Carlo Approach for Reliability Estimations in Generalizability Studies.

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.

    A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…

  19. Testing Dependent Correlations with Nonoverlapping Variables: A Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Silver, N. Clayton; Hittner, James B.; May, Kim

    2004-01-01

    The authors conducted a Monte Carlo simulation of 4 test statistics for comparing dependent correlations with no variables in common. Empirical Type I error rates and power estimates were determined for K. Pearson and L. N. G. Filon's (1898) z, O. J. Dunn and V. A. Clark's (1969) z, J. H. Steiger's (1980) original modification of Dunn and Clark's…

  20. Exploring Mass Perception with Markov Chain Monte Carlo

    ERIC Educational Resources Information Center

    Cohen, Andrew L.; Ross, Michael G.

    2009-01-01

    Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…

  1. Monte Carlo Estimation of the Electric Field in Stellarators

    NASA Astrophysics Data System (ADS)

    Bauer, F.; Betancourt, O.; Garabedian, P.; Ng, K. C.

    1986-10-01

    The BETA computer codes have been developed to study ideal magnetohydrodynamic equilibrium and stability of stellarators and to calculate neoclassical transport for electrons as well as ions by the Monte Carlo method. In this paper a numerical procedure is presented to select resonant terms in the electric potential so that the distribution functions and confinement times of the ions and electrons become indistinguishable.

  2. Impact of random numbers on parallel Monte Carlo application

    SciTech Connect

    Pandey, Ras B.

    2002-10-22

    A number of graduate students are involved at various levels of research in this project. We investigate the basic issues in materials using Monte Carlo simulations with specific interest in heterogeneous materials. Attempts have been made to seek collaborations with the DOE laboratories. Specific details are given.

  3. A Conversation with Native Flutist R. Carlos Nakai.

    ERIC Educational Resources Information Center

    Simonelli, Richard

    1992-01-01

    R. Carlos Nakai discusses his personal development as a musician, his interest in keeping the Native flute tradition alive in a modern way, his ethic of service, the purpose of higher education for Indian students, the relation of education to life, and the role of Indian people in the sciences. (SV)

  4. SABRINA: an interactive solid geometry modeling program for Monte Carlo

    SciTech Connect

    West, J.T.

    1985-01-01

    SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body-geometry or surface-geometry models and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort of constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis.

  5. A Monte Carlo photocurrent/photoemission computer program

    NASA Technical Reports Server (NTRS)

    Chadsey, W. L.; Ragona, C.

    1972-01-01

    A Monte Carlo computer program was developed for the computation of photocurrents and photoemission in gamma (X-ray)-irradiated materials. The program was used for computation of radiation-induced surface currents on space vehicles and the computation of radiation-induced space charge environments within space vehicles.

  6. Dark Photon Monte Carlo at SeaQuest

    NASA Astrophysics Data System (ADS)

    Hicks, Caleb; SeaQuest/E906 Collaboration

    2016-09-01

    Fermi National Laboratory's E906/SeaQuest is an experiment primarily designed to study the ratio of anti-down to anti-up quarks in the nucleon quark sea as a function of Bjorken x. SeaQuest's measurement is obtained by measuring the muon pairs produced by the Drell-Yan process. The experiment can also search for muon pair vertices past the target and beam dump, which would be a signature of Dark Photon decay. It is therefore necessary to run Monte Carlo simulations to determine how a changed Z vertex affects the detection and distribution of muon pairs using SeaQuest's detectors. SeaQuest has an existing Monte Carlo program that has been used for simulations of the Drell-Yan process as well as J/psi decay and other processes. The Monte Carlo program was modified to use a fixed Z vertex when generating muon pairs. Events were then generated with varying Z vertices and the resulting simulations were then analyzed. This work focuses on the results of the Monte Carlo simulations and the effects on Dark Photon detection. This research was supported by US DOE MENP Grant DE-FG02-03ER41243.

  7. Parallel canonical Monte Carlo simulations through sequential updating of particles

    NASA Astrophysics Data System (ADS)

    O'Keeffe, C. J.; Orkoulas, G.

    2009-04-01

    In canonical Monte Carlo simulations, sequential updating of particles is equivalent to random updating due to particle indistinguishability. In contrast, in grand canonical Monte Carlo simulations, sequential implementation of the particle transfer steps in a dense grid of distinct points in space improves both the serial and the parallel efficiency of the simulation. The main advantage of sequential updating in parallel canonical Monte Carlo simulations is the reduction in interprocessor communication, which is usually a slow process. In this work, we propose a parallelization method for canonical Monte Carlo simulations via domain decomposition techniques and sequential updating of particles. Each domain is further divided into a middle and two outer sections. Information exchange is required after the completion of the updating of the outer regions. During the updating of the middle section, communication does not occur unless a particle moves out of this section. Results on two- and three-dimensional Lennard-Jones fluids indicate a nearly perfect improvement in parallel efficiency for large systems.
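
    The bookkeeping behind the middle/outer decomposition can be illustrated with a toy classifier; the 1D domain and the quarter/half split fractions are assumptions for illustration only, not values from the paper:

    ```python
    def section_of(x, left, right):
        """Classify a coordinate within one processor's domain [left, right):
        two outer quarters, which require interprocessor exchange after their
        update, and a middle half that can be updated without communication."""
        width = right - left
        if x < left + 0.25 * width:
            return "outer_left"
        if x >= right - 0.25 * width:
            return "outer_right"
        return "middle"

    # While updating the middle section, a move triggers communication only
    # if the particle ends up outside the middle:
    moves = [(0.50, 0.55), (0.60, 0.80), (0.40, 0.20)]  # (old, new) coordinates
    needs_exchange = [section_of(new, 0.0, 1.0) != "middle" for _, new in moves]
    ```

    The reduction in interprocessor communication comes precisely from the middle-section updates that never set this flag.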

  9. A Variational Monte Carlo Approach to Atomic Structure

    ERIC Educational Resources Information Center

    Davis, Stephen L.

    2007-01-01

    The practicality and usefulness of variational Monte Carlo calculations of atomic structure are demonstrated. The method is found to succeed in quantitatively illustrating electron shielding, effective nuclear charge, the l-dependence of the orbital energies, and singlet-triplet energy splitting and ionization energy trends in atomic structure theory.

  10. Determining MTF of digital detector system with Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Jeong, Eun Seon; Lee, Hyung Won; Nam, Sang Hee

    2005-04-01

    We have designed a detector based on a-Se (amorphous selenium) and simulated it with the Monte Carlo method. We will apply cascaded linear system theory to determine the MTF of the whole detector system. For direct comparison with experiment, we simulated a 139 μm pixel pitch and used a simulated X-ray tube spectrum.

  11. A Monte Carlo Approach for Adaptive Testing with Content Constraints

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander

    2008-01-01

    This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…

  12. Monte Carlo Capabilities of the SCALE Code System

    NASA Astrophysics Data System (ADS)

    Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.

    2014-06-01

    SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  13. Harnessing graphical structure in Markov chain Monte Carlo learning

    SciTech Connect

    Stolorz, P.E.; Chew, P.C.

    1996-12-31

    The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many data mining problems. Generalized hidden Markov models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean-field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focuses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.

  14. Quantitative molecular thermochemistry based on path integrals.

    PubMed

    Glaesemann, Kurt R; Fried, Laurence E

    2005-07-15

    The calculation of thermochemical data requires accurate molecular energies and heat capacities. Traditional methods rely upon the standard harmonic normal-mode analysis to calculate the vibrational and rotational contributions. We utilize path-integral Monte Carlo to go beyond the harmonic analysis and to calculate the vibrational and rotational contributions to ab initio energies. This is an application and an extension of a method previously developed in our group [J. Chem. Phys. 118, 1596 (2003)].

  15. Topics in structural dynamics: Nonlinear unsteady transonic flows and Monte Carlo methods in acoustics

    NASA Technical Reports Server (NTRS)

    Haviland, J. K.

    1974-01-01

    The results are reported of two unrelated studies. The first was an investigation of the formulation of the equations for non-uniform unsteady flows, by perturbation of an irrotational flow to obtain the linear Green's equation. The resulting integral equation was found to contain a kernel which could be expressed as the solution of the adjoint flow equation, a linear equation for small perturbations, but with non-constant coefficients determined by the steady flow conditions. It is believed that the non-uniform flow effects may prove important in transonic flutter, and that in such cases, the use of doublet type solutions of the wave equation would then prove to be erroneous. The second task covered an initial investigation into the use of the Monte Carlo method for solution of acoustical field problems. Computed results are given for a rectangular room problem, and for a problem involving a circular duct with a source located at the closed end.

  16. Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems

    SciTech Connect

    Ma, Xiaoyao; Hall, Randall W.; Löffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana

    2016-01-07

    The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.

  18. Theory of melting at high pressures: Amending density functional theory with quantum Monte Carlo

    DOE PAGES

    Shulenburger, L.; Desjarlais, M. P.; Mattsson, T. R.

    2014-10-01

    We present an improved first-principles description of melting under pressure based on thermodynamic integration comparing density functional theory (DFT) and quantum Monte Carlo (QMC) treatments of the system. The method is applied to address the longstanding discrepancy between DFT calculations and diamond anvil cell (DAC) experiments on the melting curve of xenon, a noble gas solid where van der Waals binding is challenging for traditional DFT methods. The calculations show excellent agreement with data below 20 GPa and that the high-pressure melt curve is well described by a Lindemann behavior up to at least 80 GPa, a finding in stark contrast to DAC data.

  19. Monte Carlo simulation of portal dosimetry on a rectilinear voxel geometry: a variable gantry angle solution.

    PubMed

    Chin, P W; Spezi, E; Lewis, D G

    2003-08-21

    A software solution has been developed to carry out Monte Carlo simulations of portal dosimetry using the BEAMnrc/DOSXYZnrc code at oblique gantry angles. The solution is based on an integrated phantom, whereby the effect of incident beam obliquity was included using geometric transformations. Geometric transformations are accurate within +/- 1 mm and +/- 1 degree with respect to exact values calculated using trigonometry. An application in portal image prediction of an inhomogeneous phantom demonstrated good agreement with measured data, where the root-mean-square of the difference was under 2% within the field. Thus, we achieved a dose model framework capable of handling arbitrary gantry angles, voxel-by-voxel phantom description and realistic particle transport throughout the geometry.
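
    Folding the gantry angle into the phantom frame amounts to a rotation about the isocenter; a minimal 2D sketch follows (the axis convention, function name, and isocenter location are illustrative assumptions, not the BEAMnrc/DOSXYZnrc implementation):

    ```python
    import math

    def to_beam_frame(point, gantry_deg, iso=(0.0, 0.0)):
        """Rotate a phantom point by the negative gantry angle about the
        isocenter, so an oblique beam becomes axis-aligned (2D cross-section)."""
        theta = math.radians(-gantry_deg)
        x, y = point[0] - iso[0], point[1] - iso[1]
        return (x * math.cos(theta) - y * math.sin(theta) + iso[0],
                x * math.sin(theta) + y * math.cos(theta) + iso[1])

    # A 90-degree gantry angle maps the point (10, 0) onto the beam axis:
    p = to_beam_frame((10.0, 0.0), 90.0)
    ```

    Because rotations are exact up to floating-point round-off, the quoted +/- 1 mm and +/- 1 degree accuracy is dominated by the voxel geometry, not by the transform itself.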

  20. Spheroid Formation of Hepatocarcinoma Cells in Microwells: Experiments and Monte Carlo Simulations

    PubMed Central

    Tabaei, Seyed R.; Park, Jae Hyeok; Na, Kyuhwan; Chung, Seok; Zhdanov, Vladimir P.

    2016-01-01

    The formation of spherical aggregates during the growth of cell population has long been observed under various conditions. We observed the formation of such aggregates during proliferation of Huh-7.5 cells, a human hepatocarcinoma cell line, in a microfabricated low-adhesion microwell system (SpheroFilm; formed of mass-producible silicone elastomer) on the length scales up to 500 μm. The cell proliferation was also tracked with immunofluorescence staining of F-actin and cell proliferation marker Ki-67. Meanwhile, our complementary 3D Monte Carlo simulations, taking cell diffusion and division, cell-cell and cell-scaffold adhesion, and gravity into account, illustrate the role of these factors in the formation of spheroids. Taken together, our experimental and simulation results provide an integrative view of the process of spheroid formation for Huh-7.5 cells. PMID:27571565

  1. Political project of adolescent care in São Carlos, Brazil.

    PubMed

    Eduardo, Lara de Paula; Egry, Emiko Yoshikawa

    2007-01-01

    The Brazilian Child and Adolescent Statute was established in 1990. Since then, many institutions have been created to serve adolescents. This study aimed to understand how these institutions have been organized in São Carlos, Northeast of São Paulo, Brazil. This is a descriptive study, whose data were collected through interviews with the directors of 20 institutions. They reported differences in terms of objectives, target age group, religious orientation, etc. While most institutions have focused on leisure activities and professional education, some of them serve only adolescents who have committed some kind of illegal act. Although there are many different ways to assist adolescents, it seems that their actions are not integrated towards the implementation of the Child and Adolescent Statute.

  2. Monte Carlo methods for localization of cones given multielectrode retinal ganglion cell recordings.

    PubMed

    Sadeghi, K; Gauthier, J L; Field, G D; Greschner, M; Agne, M; Chichilnisky, E J; Paninski, L

    2013-01-01

    It has recently become possible to identify cone photoreceptors in primate retina from multi-electrode recordings of ganglion cell spiking driven by visual stimuli of sufficiently high spatial resolution. In this paper we present a statistical approach to the problem of identifying the number, locations, and color types of the cones observed in this type of experiment. We develop an adaptive Markov Chain Monte Carlo (MCMC) method that explores the space of cone configurations, using a Linear-Nonlinear-Poisson (LNP) encoding model of ganglion cell spiking output, while analytically integrating out the functional weights between cones and ganglion cells. This method provides information about our posterior certainty about the inferred cone properties, and additionally leads to improvements in both the speed and quality of the inferred cone maps, compared to earlier "greedy" computational approaches.

  3. Monte Carlo simulation of non-invasive glucose measurement based on FMCW LIDAR

    NASA Astrophysics Data System (ADS)

    Xiong, Bing; Wei, Wenxiong; Liu, Nan; He, Jian-Jun

    2010-11-01

    Continuous non-invasive glucose monitoring is a powerful tool for the treatment and management of diabetes. A glucose measurement method, with the potential advantage of miniaturizability with no moving parts, based on the frequency modulated continuous wave (FMCW) LIDAR technology is proposed and investigated. The system mainly consists of an integrated near-infrared tunable semiconductor laser and a detector, using heterodyne technology to convert the signal from time-domain to frequency-domain. To investigate the feasibility of the method, Monte Carlo simulations have been performed on tissue phantoms with optical parameters similar to those of human interstitial fluid. The simulation showed that the sensitivity of the FMCW LIDAR system to glucose concentration can reach 0.2 mM. Our analysis suggests that the FMCW LIDAR technique has good potential for noninvasive blood glucose monitoring.

  4. A method for reducing the largest relative errors in Monte Carlo iterated-fission-source calculations

    SciTech Connect

    Hunter, J. L.; Sutton, T. M.

    2013-07-01

    In Monte Carlo iterated-fission-source calculations, relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
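
    The principle of biasing the site density without biasing the solution can be shown with a two-region toy model; the regions, probabilities, and scores below are invented for illustration and this is not the actual uniform fission site algorithm:

    ```python
    import random

    random.seed(4)

    def weighted_tally_mean(score, bias_p=0.8, n=100000):
        """Sample two equally likely regions with a biased density bias_p,
        compensating each sample with weight (true p) / (biased p) so that
        the tally mean remains unbiased while one region is oversampled."""
        total = 0.0
        for _ in range(n):
            if random.random() < bias_p:          # oversampled region 0
                region, weight = 0, 0.5 / bias_p
            else:                                  # undersampled region 1
                region, weight = 1, 0.5 / (1.0 - bias_p)
            total += weight * score[region]
        return total / n

    m = weighted_tally_mean([2.0, 10.0])  # true mean is (2 + 10) / 2 = 6
    ```

    Placing more samples in a chosen region shrinks its statistical error, while the compensating weights keep the expectation value unchanged.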

  5. Kinetic Monte Carlo modeling of chemical reactions coupled with heat transfer

    NASA Astrophysics Data System (ADS)

    Castonguay, Thomas C.; Wang, Feng

    2008-03-01

    In this paper, we describe two types of effective events for describing heat transfer in a kinetic Monte Carlo (KMC) simulation that may involve stochastic chemical reactions. Simulations employing these events are referred to as KMC-TBT and KMC-PHE. In KMC-TBT, heat transfer is modeled as the stochastic transfer of "thermal bits" between adjacent grid points. In KMC-PHE, heat transfer is modeled by integrating the Poisson heat equation for a short time. Either approach is capable of capturing the time-dependent system behavior exactly. Both KMC-PHE and KMC-TBT are validated by simulating pure heat transfer in a rod and a square and modeling a heated desorption problem where exact numerical results are available. KMC-PHE is much faster than KMC-TBT and is used to study the endothermic desorption of a lattice gas. Interesting findings from this study are reported.
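
    The "thermal bit" picture can be made concrete on a 1D rod; the hop probability, grid size, and insulated (clamped) boundaries below are illustrative assumptions in the spirit of KMC-TBT, not the authors' implementation:

    ```python
    import random

    random.seed(2)

    def tbt_sweep(bits):
        """One sweep of stochastic thermal-bit transfer: each grid point may
        hand one bit to a random neighbor; the ends are clamped (insulated).
        Bit transfers conserve the total heat content exactly."""
        n = len(bits)
        for i in range(n):
            if bits[i] > 0 and random.random() < 0.5:
                j = max(0, min(n - 1, i + random.choice((-1, 1))))
                bits[i] -= 1
                bits[j] += 1
        return bits

    rod = [0, 0, 100, 0, 0]      # all heat initially at the center point
    for _ in range(200):
        tbt_sweep(rod)
    ```

    Over many sweeps the random bit exchanges reproduce diffusive spreading of the initial hot spot while conserving the total energy by construction.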

  6. GPU accelerated Monte Carlo simulation of Brownian motors dynamics with CUDA

    NASA Astrophysics Data System (ADS)

    Spiechowicz, J.; Kostur, M.; Machura, L.

    2015-06-01

    This work presents an updated and extended guide on methods of properly accelerating the Monte Carlo integration of stochastic differential equations with the commonly available NVIDIA Graphics Processing Units using the CUDA programming environment. We outline the general aspects of scientific computing on graphics cards and demonstrate them with two models of the well known phenomenon of noise-induced transport of Brownian motors in periodic structures. As a source of fluctuations in the considered systems we selected the three most commonly occurring noises: the Gaussian white noise, the white Poissonian noise and the dichotomous process also known as a random telegraph signal. A detailed discussion of various aspects of the applied numerical schemes is also presented. The measured speedup can be of the astonishing order of about 3000 when compared to a typical CPU. This number significantly expands the range of problems solvable by use of stochastic simulations, allowing even interactive research in some cases.
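
    A CPU-side sketch of the kind of stochastic integration being accelerated: an Euler-Maruyama step for an overdamped particle in a tilted washboard potential driven by Gaussian white noise. The potential, bias, and noise strength are illustrative assumptions, and the actual CUDA kernels are not reproduced here:

    ```python
    import math
    import random

    random.seed(3)

    def euler_maruyama(steps, dt, force, noise):
        """Integrate dx = (-sin(2 pi x) + force) dt + sqrt(2 noise dt) N(0,1)
        with the Euler-Maruyama scheme (overdamped Brownian motor sketch)."""
        x = 0.0
        sigma = math.sqrt(2.0 * noise * dt)
        for _ in range(steps):
            drift = -math.sin(2.0 * math.pi * x) + force
            x += drift * dt + sigma * random.gauss(0.0, 1.0)
        return x

    # In the deterministic running regime (the bias exceeds the maximal
    # barrier slope, zero noise) the particle drifts steadily forward:
    x_run = euler_maruyama(steps=2000, dt=1e-3, force=2.0, noise=0.0)
    ```

    On a GPU, thousands of such independent trajectories run in parallel, one per thread, which is where the reported speedup of order 3000 comes from.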

  7. Applications of the Monte Carlo Code Geant to Particle Beam Therapy

    NASA Astrophysics Data System (ADS)

    Szymanowski, H.; Fuchs, T.; Nill, S.; Wilkens, J. J.; Pflugfelder, D.; Oelfke, U.; Glinec, Y.; Faure, J.; Malka, V.

    2006-04-01

    We report on the use of the Monte Carlo simulation code GEANT for two different applications in the field of particle beam therapy. The first application relates to the planning of intensity-modulated proton therapy (IMPT) treatments. An important issue is thereby the accurate prediction of the dose while irradiating complex inhomogeneous patient geometries. We developed an improved method to account for tissue inhomogeneities in pencil beam algorithms. We show that GEANT3 can be successfully used to validate the new model before its integration in our treatment planning system. Another project concerns the investigation of the potential of high-energy particles produced by laser-plasma interactions for radiotherapy. GEANT4 simulations of the dosimetric properties of an experimental laser-accelerated electron beam were performed. They show that this technique may be very attractive for the development of new therapy beam modalities such as very-high energy (170 MeV) electrons.

  8. Quantum dynamics at finite temperature: Time-dependent quantum Monte Carlo study

    SciTech Connect

    Christov, Ivan P.

    2016-08-15

    In this work we investigate the ground state and the dissipative quantum dynamics of interacting charged particles in an external potential at finite temperature. The recently devised time-dependent quantum Monte Carlo (TDQMC) method allows a self-consistent treatment of the system of particles together with bath oscillators first for imaginary-time propagation of Schrödinger type of equations where both the system and the bath converge to their finite temperature ground state, and next for real time calculation where the dissipative dynamics is demonstrated. In that context the application of TDQMC appears as promising alternative to the path-integral related techniques where the real time propagation can be a challenge.

  9. Fast Monte Carlo for radiation therapy: the PEREGRINE Project

    SciTech Connect

    Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.

    1997-11-11

    The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient-specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently used algorithms reveals significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.

  10. Global Monte Carlo Simulation with High Order Polynomial Expansions

    SciTech Connect

    William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin

    2007-12-13

    The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as “local” piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi’s method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source

  11. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    EPA Science Inventory

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...
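The BMC idea of refining prior uncertainty estimates by weighting model runs against observations can be sketched on a toy model (the linear model, Gaussian likelihood, and function name below are assumptions for illustration, not the Lagrangian photochemical model of the study):

```python
import math, random

def bayesian_mc(prior_samples, model, observed, sigma):
    """Bayesian Monte Carlo (schematic): run the model for parameter
    samples drawn from the prior, weight each run by the Gaussian
    likelihood of the observation, and form posterior statistics as
    weighted averages over the runs."""
    outputs = [model(theta) for theta in prior_samples]
    weights = [math.exp(-0.5 * ((y - observed) / sigma) ** 2)
               for y in outputs]
    z = sum(weights)
    return sum(w * y for w, y in zip(weights, outputs)) / z

# toy case: identity model, N(0, 1) prior, one observation y = 1.0
rng = random.Random(3)
prior = [rng.gauss(0.0, 1.0) for _ in range(50000)]
posterior_mean = bayesian_mc(prior, lambda t: t, observed=1.0, sigma=1.0)
```

For this conjugate toy case the posterior is N(0.5, 0.5), so the weighted average should land near 0.5, illustrating how the observation pulls the prior estimate.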

  12. Incorporation of Monte-Carlo Computer Techniques into Science and Mathematics Education.

    ERIC Educational Resources Information Center

    Danesh, Iraj

    1987-01-01

    Described is a Monte-Carlo method for modeling physical systems with a computer. Also discussed are ways to incorporate Monte-Carlo simulation techniques for introductory science and mathematics teaching and also for enriching computer and simulation courses. (RH)

  13. Verification of SMART Neutronics Design Methodology by the MCNAP Monte Carlo Code

    SciTech Connect

    Jong Sung Chung; Kyung Jin Shim; Chang Hyo Kim; Chungchan Lee; Sung Quun Zee

    2000-11-12

    SMART is a small advanced integral pressurized water reactor (PWR) of 330 MW(thermal) designed for both electricity generation and seawater desalinization. The CASMO-3/MASTER nuclear analysis system, a design basis of Korean PWR plants, has been employed for the SMART core nuclear design and analysis because the fuel assembly (FA) characteristics and reactor operating conditions in temperature and pressure are similar to those of PWR plants. However, the SMART FAs are highly poisoned, with more than 20 Al{sub 2}O{sub 3}-B{sub 4}C plus additional Gd{sub 2}O{sub 3}/UO{sub 2} BPRs per FA. The reactor is operated with control rods inserted. Therefore, the flux and power distribution may become more distorted than those of commercial PWR plants. In addition, SMART should produce power from room temperature to the hot-power operating condition because it employs nuclear heating from room temperature. This demands reliable predictions of core criticality, shutdown margin, control rod worth, power distributions, and reactivity coefficients at both room temperature and hot operating condition, yet no such data are available to verify the CASMO-3/MASTER (hereafter MASTER) code system. In the absence of experimental verification data for the SMART neutronics design, the Monte Carlo depletion analysis program MCNAP is adopted as a near-term alternative for qualifying MASTER neutronics design calculations. The MCNAP is a personal computer-based continuous-energy Monte Carlo neutronics analysis program written in the C++ language. We established its qualification by presenting its prediction accuracy on measurements of the Venus critical facilities, core neutronics analysis of a PWR plant in operation, and depletion characteristics of integral burnable absorber FAs of the current PWR. Here, we present a comparison of MASTER and MCNAP neutronics design calculations for SMART and establish the qualification of the MASTER system.

  14. Monte Carlo uncertainty assessment of ultrasonic beam parameters from immersion transducers used to non-destructive testing.

    PubMed

    Alvarenga, A V; Silva, C E R; Costa-Félix, R P B

    2016-07-01

    The uncertainty of ultrasonic beam parameters from non-destructive testing immersion probes was evaluated using the Guide to the expression of uncertainty in measurement (GUM) uncertainty framework and Monte Carlo Method simulation. The calculated parameters, such as focal distance, focal length, focal widths, and beam divergence, were determined according to EN 12668-2. The typical system configuration used during the mapping acquisition comprises a personal computer connected to an oscilloscope, a signal generator, axes movement controllers, and a water bath. The positioning system allows moving the transducer (or hydrophone) in the water bath. To integrate all system components, a program was developed to control all the axes, acquire waterborne signals, and calculate the essential parameters to assess and calibrate US transducers. All parameters were calculated directly from the raster scans of axial and transversal beam profiles, except beam divergence. Hence, the positioning system resolution and the step size are the principal sources of uncertainty. Monte Carlo Method simulations were performed by another program that generates pseudo-random samples for the distributions of the involved quantities. In all cases, statistical differences were found between the Monte Carlo and GUM methods.
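The Monte Carlo route (GUM Supplement 1) can be sketched generically: draw the input quantities from their assigned distributions, push the samples through the measurement model, and read the output uncertainty off the resulting distribution. The product model below is a toy stand-in, not an EN 12668-2 beam parameter:

```python
import math, random

def mc_uncertainty(f, means, stdevs, n=200000, seed=1):
    """GUM Supplement 1 style Monte Carlo evaluation: sample the input
    quantities from (here Gaussian) distributions, propagate the samples
    through the measurement model f, and report the mean and standard
    uncertainty of the output distribution."""
    rng = random.Random(seed)
    ys = []
    for _ in range(n):
        ys.append(f(*[rng.gauss(m, s) for m, s in zip(means, stdevs)]))
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return mean, math.sqrt(var)

# toy measurement model (hypothetical): y = a * b
mean, u_mc = mc_uncertainty(lambda a, b: a * b, [10.0, 5.0], [0.1, 0.05])
# first-order GUM result for the same model, for comparison
u_gum = 50.0 * math.hypot(0.1 / 10.0, 0.05 / 5.0)
```

For this nearly linear model the two routes agree closely; differences of the kind the paper reports arise when the model is strongly nonlinear or the output distribution is markedly non-Gaussian.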

  15. Parallel Monte Carlo Particle Transport and the Quality of Random Number Generators: How Good is Good Enough?

    SciTech Connect

    Procassini, R J; Beck, B R

    2004-12-07

    It might be assumed that use of a ''high-quality'' random number generator (RNG), producing a sequence of ''pseudo random'' numbers with a ''long'' repetition period, is crucial for producing unbiased results in Monte Carlo particle transport simulations. While several theoretical and empirical tests have been devised to check the quality (randomness and period) of an RNG, for many applications it is not clear what level of RNG quality is required to produce unbiased results. This paper explores the issue of RNG quality in the context of parallel, Monte Carlo transport simulations in order to determine how ''good'' is ''good enough''. This study employs the MERCURY Monte Carlo code, which incorporates the CNPRNG library for the generation of pseudo-random numbers via linear congruential generator (LCG) algorithms. The paper outlines the usage of random numbers during parallel MERCURY simulations, and then describes the source and criticality transport simulations which comprise the empirical basis of this study. A series of calculations for each test problem in which the quality of the RNG (period of the LCG) is varied provides the empirical basis for determining the minimum repetition period which may be employed without producing a bias in the mean integrated results.
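The LCG family used in the study is short enough to sketch directly, and for small moduli the repetition period, the quantity varied in the study, can be probed by brute force (the parameter choices below are illustrative, not those of the CNPRNG library):

```python
def lcg_stream(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator x_{n+1} = (a*x_n + c) mod m,
    yielding uniforms in [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def lcg_period(a, c, m, seed=1):
    """Brute-force repetition period of a (small) LCG."""
    step = lambda v: (a * v + c) % m
    x0 = step(seed)
    x, n = step(x0), 1
    while x != x0:
        x, n = step(x), n + 1
    return n

# Hull-Dobell conditions hold for (5, 3, 16): full period m
full = lcg_period(5, 3, 16)
# dropping the increment gives a much shorter cycle through the same m
short = lcg_period(5, 0, 16)
```

A generator with too short a period revisits the same points, which is exactly how a biased mean can enter a long transport tally; the study's empirical question is how short is too short.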

  16. Monte Carlo particle-in-cell methods for the simulation of the Vlasov-Maxwell gyrokinetic equations

    NASA Astrophysics Data System (ADS)

    Bottino, A.; Sonnendrücker, E.

    2015-10-01

    The particle-in-cell (PIC) algorithm is the most popular method for the discretisation of the general 6D Vlasov-Maxwell problem, and it is widely used also for the simulation of the 5D gyrokinetic equations. The method consists of coupling a particle-based algorithm for the Vlasov equation with a grid-based method for the computation of the self-consistent electromagnetic fields. In this review we derive a Monte Carlo PIC finite-element model starting from a gyrokinetic discrete Lagrangian. The variations of the Lagrangian are used to obtain the time-continuous equations of motion for the particles and the finite-element approximation of the field equations. The Noether theorem for the semi-discretised system implies a certain number of conservation properties for the final set of equations. Moreover, the PIC method can be interpreted as a probabilistic, Monte Carlo-like method, consisting of calculating integrals of the continuous distribution function using a finite set of discrete markers. The nonlinear interactions, along with numerical errors, introduce random effects after some time. Therefore, the same tools for error analysis and error reduction used in Monte Carlo numerical methods can be applied to PIC simulations.

  17. Thermal-to-fusion neutron convertor and Monte Carlo coupled simulation of deuteron/triton transport and secondary products generation

    NASA Astrophysics Data System (ADS)

    Wang, Guan-bo; Liu, Han-gang; Wang, Kan; Yang, Xin; Feng, Qi-jie

    2012-09-01

    A thermal-to-fusion neutron convertor has been studied at the China Academy of Engineering Physics (CAEP). Current Monte Carlo codes, such as MCNP and GEANT, are inadequate when applied to such multi-step reaction problems. A Monte Carlo tool, RSMC (Reaction Sequence Monte Carlo), has been developed to simulate this coupled problem, from neutron absorption through charged-particle ionization and secondary neutron generation. A "forced particle production" variance reduction technique has been implemented to improve the calculation speed distinctly by making deuteron/triton-induced secondary products play a major role. Nuclear data are handled from ENDF or TENDL, and stopping powers from SRIM, which gives a better description of low-energy deuteron/triton interactions. As a validation, an accelerator-driven mono-energetic 14 MeV fusion neutron source is employed, which has been deeply studied and includes deuteron transport and secondary neutron generation. Various parameters, including the fusion neutron angle distribution, the average neutron energy at different emission directions, and differential and integral energy distributions, are calculated with our tool and with a traditional deterministic method as reference. As a result, we present the calculation results for the convertor obtained with RSMC, including the conversion ratio of 1 mm 6LiD with a typical thermal neutron (Maxwell spectrum) incidence, and the fusion neutron spectrum, which will be used for our experiment.

  18. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the

  19. A simple new way to help speed up Monte Carlo convergence rates: Energy-scaled displacement Monte Carlo

    NASA Astrophysics Data System (ADS)

    Goldman, Saul

    1983-10-01

    A method we call energy-scaled displacement Monte Carlo (ESDMC), whose purpose is to improve sampling efficiency and thereby speed up convergence rates in Monte Carlo calculations, is presented. The method involves scaling the maximum displacement a particle may make on a trial move to the particle's configurational energy. The scaling is such that, on average, the most stable particles make the smallest moves and the most energetic particles the largest moves. The method is compared to Metropolis Monte Carlo (MMC) and Force Bias Monte Carlo (FBMC) by applying all three methods to a dense Lennard-Jones fluid at two temperatures, and to hot ST2 water. The functions monitored as the Markov chains developed were, for the Lennard-Jones case: melting, radial distribution functions, internal energies, and heat capacities. For hot ST2 water, we monitored energies and heat capacities. The results suggest that ESDMC samples configuration space more efficiently than either MMC or FBMC in these systems for the biasing parameters used here. The benefit from using ESDMC seemed greatest for the Lennard-Jones systems.
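The displacement-scaling idea can be sketched as follows. This is a schematic reading of ESDMC on independent particles in a harmonic well, not Goldman's Lennard-Jones or ST2 setup; a Metropolis-Hastings factor for the state-dependent step size is added here so the sketch samples the Boltzmann distribution exactly, which may differ from the paper's own prescription:

```python
import math, random

def step_size(e, d_small, d_large, e_min, e_max):
    """Maximum trial displacement, interpolated between d_small (for the
    most stable particles) and d_large (for the most energetic ones)."""
    t = min(max((e - e_min) / (e_max - e_min), 0.0), 1.0)
    return d_small + t * (d_large - d_small)

def esdmc_sweep(x, beta, d_small, d_large, e_min, e_max, energy, rng):
    """One sweep of energy-scaled displacement Monte Carlo (schematic).
    The Hastings factor d_old/d_new, plus a reverse-move support check,
    restores detailed balance for the energy-dependent proposal."""
    for i in range(len(x)):
        e_old = energy(x[i])
        d_old = step_size(e_old, d_small, d_large, e_min, e_max)
        trial = x[i] + rng.uniform(-d_old, d_old)
        e_new = energy(trial)
        d_new = step_size(e_new, d_small, d_large, e_min, e_max)
        if abs(trial - x[i]) > d_new:
            continue                    # reverse move impossible: reject
        ratio = math.exp(-beta * (e_new - e_old)) * d_old / d_new
        if rng.random() < min(1.0, ratio):
            x[i] = trial
    return x

# demo: independent particles in a well U(x) = x**2, beta = 1
rng = random.Random(5)
pos = [0.0] * 20
for _ in range(500):                    # thermalization sweeps
    esdmc_sweep(pos, 1.0, 0.2, 1.0, 0.0, 2.0, lambda q: q * q, rng)
```

Low-energy particles near the well bottom take small, cheap steps while energetic ones range widely, which is the efficiency mechanism the abstract describes.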

  20. Study the spin configuration and the saturation magnetization of manganese-zinc ferrite nanoparticles by the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Rodionov, V. A.; Zhuravlev, V. A.

    2017-01-01

    In this work, simulations of the magnetic properties of nano-sized manganese ferrite particles with zinc substitution were performed. The degree of substitution ranged from 0% to 80%. The particle parameters, including the exchange integrals, were taken from experimental data obtained for MnxZn1-xFe2O4. The particle sizes and the thickness of the defective surface layer were chosen to reflect the real size distribution of manganese ferrite nanoparticles produced by mechanochemical synthesis. Simulations were performed using the Monte Carlo method with the Metropolis algorithm.

  1. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.
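The classical Monte Carlo baseline that the quadratic quantum speedup is measured against is the familiar 1/sqrt(N) statistical error, which a few lines make concrete:

```python
import math, random

def mc_integral(f, a, b, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [a, b] with n
    uniform samples; its statistical error shrinks as 1/sqrt(n), the
    scaling the quantum algorithms improve on quadratically."""
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

def rms_error(n, trials=100, seed=7):
    """Root-mean-square error of the estimator over repeated runs
    (the true value of the integral of sin over [0, pi] is 2)."""
    rng = random.Random(seed)
    sq = [(mc_integral(math.sin, 0.0, math.pi, n, rng) - 2.0) ** 2
          for _ in range(trials)]
    return math.sqrt(sum(sq) / trials)

r_small = rms_error(100)     # error ~ c / sqrt(100)
r_large = rms_error(10000)   # error ~ c / sqrt(10000): about 10x smaller
```

A 100-fold increase in samples buys only a 10-fold error reduction classically; quantum amplitude-estimation-style algorithms achieve error ~ 1/N in the number of oracle queries, hence the quadratic speedup.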

  2. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    SciTech Connect

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.

    2016-02-15

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.

  3. MONITOR- MONTE CARLO INVESTIGATION OF TRAJECTORY OPERATIONS AND REQUIREMENTS

    NASA Technical Reports Server (NTRS)

    Glass, A. B.

    1994-01-01

    The Monte Carlo Investigation of Trajectory Operations and Requirements (MONITOR) program was developed to perform spacecraft mission maneuver simulations for geosynchronous, single maneuver, and comet encounter type trajectories. MONITOR is a multifaceted program which enables the modeling of various orbital sequences and missions, the generation of Monte Carlo simulation statistics, and the parametric scanning of user requested variables over specified intervals. The MONITOR program has been used primarily to study geosynchronous missions and has the capability to model Space Shuttle deployed satellite trajectories. The ability to perform a Monte Carlo error analysis of user specified orbital parameters using predicted maneuver execution errors can make MONITOR a significant part of any mission planning and analysis system. The MONITOR program can be executed in four operational modes. In the first mode, analytic state covariance matrix propagation is performed using state transition matrices for the coasting and powered burn phases of the trajectory. A two-body central force field is assumed throughout the analysis. Histograms of the final orbital elements and other state dependent variables may be evaluated by a Monte Carlo analysis. In the second mode, geosynchronous missions can be simulated from parking orbit injection through station acquisition. A two-body central force field is assumed throughout the simulation. Nominal mission studies can be conducted; however, the main use of this mode lies in evaluating the behavior of pertinent orbital trajectory parameters by making use of a Monte Carlo analysis. In the third mode, MONITOR performs parametric scans of user-requested variables for a nominal mission. Various orbital sequences may be specified; however, primary use is devoted to geosynchronous missions. A maximum of five variables may be scanned at a time. The fourth mode simulates a mission from orbit injection through comet encounter with optional

  4. Kernel density estimator methods for Monte Carlo radiation transport

    NASA Astrophysics Data System (ADS)

    Banerjee, Kaushik

    In this dissertation, the Kernel Density Estimator (KDE), a nonparametric probability density estimator, is studied and used to represent global Monte Carlo (MC) tallies. KDE is also employed to remove the singularities from two important Monte Carlo tallies, namely point detector and surface crossing flux tallies. Finally, KDE is also applied to accelerate the Monte Carlo fission source iteration for criticality problems. In the conventional MC calculation histograms are used to represent global tallies which divide the phase space into multiple bins. Partitioning the phase space into bins can add significant overhead to the MC simulation and the histogram provides only a first order approximation to the underlying distribution. The KDE method is attractive because it can estimate MC tallies in any location within the required domain without any particular bin structure. Post-processing of the KDE tallies is sufficient to extract detailed, higher order tally information for an arbitrary grid. The quantitative and numerical convergence properties of KDE tallies are also investigated and they are shown to be superior to conventional histograms as well as the functional expansion tally developed by Griesheimer. Monte Carlo point detector and surface crossing flux tallies are two widely used tallies but they suffer from an unbounded variance. As a result, the central limit theorem can not be used for these tallies to estimate confidence intervals. By construction, KDE tallies can be directly used to estimate flux at a point but the variance of this point estimate does not converge as 1/N, which is not unexpected for a point quantity. However, an improved approach is to modify both point detector and surface crossing flux tallies directly by using KDE within a variance reduction approach by taking advantage of the fact that KDE estimates the underlying probability density function. This methodology is demonstrated by several numerical examples and demonstrates that
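The core KDE tally idea, estimating a density at arbitrary points from unbinned samples, can be sketched in a few lines (a one-dimensional Gaussian kernel on toy data, not the dissertation's transport tallies):

```python
import math, random

def kde_estimate(samples, x, h):
    """Gaussian kernel density estimate at point x with bandwidth h.
    Unlike a histogram tally, the estimate is available at any x without
    committing to a bin structure beforehand, and the same samples can be
    post-processed on an arbitrarily fine grid."""
    norm = 1.0 / (len(samples) * h * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

rng = random.Random(11)
draws = [rng.gauss(0.0, 1.0) for _ in range(20000)]
peak = kde_estimate(draws, 0.0, 0.2)   # true N(0,1) density at 0 is ~0.3989
```

The bandwidth h plays the role the bin width plays for histograms, but the resulting estimate is smooth and converges at a faster rate than the first-order histogram.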

  5. Monte Carlo simulation of particle acceleration at astrophysical shocks

    NASA Technical Reports Server (NTRS)

    Campbell, Roy K.

    1989-01-01

    A Monte Carlo code was developed for the simulation of particle acceleration at astrophysical shocks. The code is implemented in Turbo Pascal on a PC. It is modularized and structured in such a way that modification and maintenance are relatively painless. Monte Carlo simulations of particle acceleration at shocks follow the trajectories of individual particles as they scatter repeatedly across the shock front, gaining energy with each crossing. The particles are assumed to scatter from magnetohydrodynamic (MHD) turbulence on both sides of the shock. A scattering law is used which is related to the assumed form of the turbulence and to the particle and shock parameters. High-energy cosmic ray spectra derived from Monte Carlo simulations exhibit power-law behavior, just as the spectra derived from analytic calculations based on a diffusion equation do. This high-energy behavior is not sensitive to the scattering law used. In contrast with Monte Carlo calculations, diffusive calculations rely on the initial injection of supra-thermal particles into the shock environment. Monte Carlo simulations are the only known way to describe the extraction of particles directly from the thermal pool. This was the triumph of the Monte Carlo approach. The question of acceleration efficiency is an important one in the shock acceleration game. Whether shock waves are efficient enough to account for the observed flux of high-energy galactic cosmic rays was examined. The efficiency of the acceleration process depends on the thermal particle pick-up and hence on the low-energy scattering in detail. One of the goals is the self-consistent derivation of the accelerated particle spectra and the MHD turbulence spectra. Presumably the upstream turbulence, which scatters the particles so they can be accelerated, is excited by the streaming accelerated particles, and the needed downstream turbulence is convected from the upstream region. The present code is to be modified to include a better

  6. Properties of the two-dimensional heterogeneous Lennard-Jones dimers: An integral equation study

    NASA Astrophysics Data System (ADS)

    Urbic, Tomaz

    2016-11-01

    Structural and thermodynamic properties of a planar heterogeneous soft dumbbell fluid are examined using Monte Carlo simulations and integral equation theory. Lennard-Jones particles of different sizes are the building blocks of the dimers. The site-site integral equation theory in two dimensions is used to calculate the site-site radial distribution functions and the thermodynamic properties. Obtained results are compared to Monte Carlo simulation data. The critical parameters for selected types of dimers were also estimated and the influence of the Lennard-Jones parameters was studied. We have also tested the correctness of the site-site integral equation theory using different closures.

  7. Properties of the two-dimensional heterogeneous Lennard-Jones dimers: An integral equation study.

    PubMed

    Urbic, Tomaz

    2016-11-21

    Structural and thermodynamic properties of a planar heterogeneous soft dumbbell fluid are examined using Monte Carlo simulations and integral equation theory. Lennard-Jones particles of different sizes are the building blocks of the dimers. The site-site integral equation theory in two dimensions is used to calculate the site-site radial distribution functions and the thermodynamic properties. Obtained results are compared to Monte Carlo simulation data. The critical parameters for selected types of dimers were also estimated and the influence of the Lennard-Jones parameters was studied. We have also tested the correctness of the site-site integral equation theory using different closures.

  8. New Binary Integration Strategies and Corresponding R90 Calculations

    DTIC Science & Technology

    1993-09-23

    radar some simple logic criterion is used. This thesis evaluates the performance of a new binary integration technique. This technique requires M hits out of N looks with x<M hits being consecutive. Closed-form expressions for the cumulative probability of detection are derived and Monte Carlo methods ...

  9. Quantum tunneling splittings from path-integral molecular dynamics

    NASA Astrophysics Data System (ADS)

    Mátyus, Edit; Wales, David J.; Althorpe, Stuart C.

    2016-03-01

    We illustrate how path-integral molecular dynamics can be used to calculate ground-state tunnelling splittings in molecules or clusters. The method obtains the splittings from ratios of density matrix elements between the degenerate wells connected by the tunnelling. We propose a simple thermodynamic integration scheme for evaluating these elements. Numerical tests on fully dimensional malonaldehyde yield tunnelling splittings in good overall agreement with the results of diffusion Monte Carlo calculations.
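The thermodynamic integration scheme can be illustrated on a toy problem where the answer is known in closed form: the free-energy difference between two harmonic wells (a stand-in chosen for checkability, not the malonaldehyde tunnelling calculation):

```python
import math, random

def thermo_integration(beta=1.0, n_lambda=5, n_samples=20000, seed=2):
    """Thermodynamic integration for the free-energy difference between
    U0 = x^2/2 and U1 = x^2, using dF/dlambda = <U1 - U0>_lambda with the
    linear path U_lambda = (1 - lambda) U0 + lambda U1 = (1 + lambda) x^2 / 2.
    Samples at each lambda are drawn directly from the Gaussian
    equilibrium distribution; the lambda integral uses the trapezoidal rule."""
    rng = random.Random(seed)
    lams = [i / (n_lambda - 1) for i in range(n_lambda)]
    means = []
    for lam in lams:
        k = 1.0 + lam                       # spring constant of U_lambda
        sigma = 1.0 / math.sqrt(beta * k)   # equilibrium width
        acc = 0.0
        for _ in range(n_samples):
            x = rng.gauss(0.0, sigma)
            acc += 0.5 * x * x              # U1 - U0 = x^2 / 2
        means.append(acc / n_samples)
    h = lams[1] - lams[0]
    return h * (0.5 * means[0] + sum(means[1:-1]) + 0.5 * means[-1])

dF = thermo_integration()   # exact answer: (1/2) ln 2 ~ 0.3466 at beta = 1
```

In the paper the same idea connects density matrix elements of the two degenerate wells; here direct Gaussian sampling stands in for the path-integral molecular dynamics at each coupling value.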

  10. Design of composite laminates by a Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Fang, Chin; Springer, George S.

    1993-01-01

    A Monte Carlo procedure was developed for optimizing symmetric fiber reinforced composite laminates such that the weight is minimum and the Tsai-Wu strength failure criterion is satisfied in each ply. The laminate may consist of several materials including an idealized core, and may be subjected to several sets of combined in-plane and bending loads. The procedure yields the number of plies, the fiber orientation, and the material of each ply and the material and thickness of the core. A user friendly computer code was written for performing the numerical calculations. Laminates optimized by the code were compared to laminates resulting from existing optimization methods. These comparisons showed that the present Monte Carlo procedure is a useful and efficient tool for the design of composite laminates.
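The flavor of such a Monte Carlo design search can be sketched with a toy strength criterion (the function names and the pass/fail rule below are invented for illustration; the paper uses the Tsai-Wu criterion and real laminate mechanics):

```python
import random

def mc_laminate_search(strength_ok, angles, max_plies, n_trials, rng):
    """Monte Carlo design search (schematic): propose random stacking
    sequences and keep the one with the fewest plies that passes the
    strength check -- a stand-in for the per-ply Tsai-Wu evaluation."""
    best = None
    for _ in range(n_trials):
        n = rng.randint(1, max_plies)
        layup = [rng.choice(angles) for _ in range(n)]
        if strength_ok(layup) and (best is None or len(layup) < len(best)):
            best = layup
    return best

# toy criterion (hypothetical): at least two 0-degree plies to carry the
# axial load plus at least one 45-degree ply for shear
ok = lambda layup: layup.count(0) >= 2 and 45 in layup
rng = random.Random(9)
best = mc_laminate_search(ok, [0, 45, -45, 90], 8, 4000, rng)
```

The minimum-weight logic is the same as in the paper: random proposals over ply count, angle, and (there) material, with the lightest feasible design retained.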

  11. Relaxation dynamics in small clusters: A modified Monte Carlo approach

    SciTech Connect

    Pal, Barnana

    2008-02-01

    Relaxation dynamics in two-dimensional atomic clusters consisting of mono-atomic particles interacting through the Lennard-Jones (L-J) potential has been investigated using Monte Carlo simulation. A modification of the conventional Metropolis algorithm is proposed to introduce realistic thermal motion of the particles moving in the interacting L-J potential field. The proposed algorithm leads to a quick equilibration from the nonequilibrium cluster configuration in a certain temperature regime, where the relaxation time (τ), measured in terms of Monte Carlo steps (MCS) per particle, varies inversely with the square root of the system temperature (√T) and with the pressure (P): τ ∝ (P√T)^(-1). From this, a realistic correlation between MCS and time has been predicted.
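    The conventional Metropolis baseline that the paper modifies can be sketched for a small 2D Lennard-Jones cluster in reduced units. This is the standard algorithm, not the authors' modified one:

    ```python
    import math
    import random

    def lj_pair(r2):
        """Lennard-Jones pair energy in reduced units (epsilon = sigma = 1)."""
        inv6 = 1.0 / (r2 * r2 * r2)
        return 4.0 * (inv6 * inv6 - inv6)

    def cluster_energy(pos):
        e = 0.0
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
                e += lj_pair(dx * dx + dy * dy)
        return e

    def metropolis_relax(pos, T=0.1, steps=5000, delta=0.1, seed=0):
        """Standard Metropolis: displace one random particle per step and
        accept with probability min(1, exp(-dE/T)); returns final energy."""
        rng = random.Random(seed)
        e = cluster_energy(pos)
        for _ in range(steps):
            i = rng.randrange(len(pos))
            old = pos[i]
            pos[i] = (old[0] + rng.uniform(-delta, delta),
                      old[1] + rng.uniform(-delta, delta))
            e_new = cluster_energy(pos)
            if e_new < e or rng.random() < math.exp(-(e_new - e) / T):
                e = e_new                     # accept
            else:
                pos[i] = old                  # reject, restore
        return e
    ```

    Starting from a stretched trimer, the walk relaxes toward the trimer minimum near -3 (three bonds of depth 1 each); the paper's modification replaces the blind uniform displacement with more realistic thermal moves.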

  12. Monte Carlo Methods for Bridging the Timescale Gap

    NASA Astrophysics Data System (ADS)

    Wilding, Nigel; Landau, David P.

    We identify the origin, and elucidate the character, of the extended time-scales that plague computer simulation studies of first- and second-order phase transitions. A brief survey is provided of a number of new and existing techniques that attempt to circumvent these problems. Attention is then focused on two novel methods with which we have particular experience: “Wang-Landau sampling” and Phase Switch Monte Carlo. Detailed case studies are made of the application of the Wang-Landau approach to calculate the density of states of the 2D Ising model and the Edwards-Anderson spin glass. The principles and operation of Phase Switch Monte Carlo are described and its utility in tackling ‘difficult’ first-order phase transitions is illustrated via a case study of hard-sphere freezing. We conclude with a brief overview of promising new methods for the improvement of deterministic spin-dynamics simulations.
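    Wang-Landau sampling is easiest to validate on a model whose density of states is known exactly. The sketch below is a toy stand-in (not the 2D Ising or spin-glass study): a flat-histogram random walk over N independent spins, where the "energy" is the number of up spins and the exact density of states is the binomial coefficient C(N, E).

    ```python
    import math
    import random

    def wang_landau(n=10, ln_f_final=1e-6, flatness=0.8, seed=2):
        """Flat-histogram Wang-Landau walk over E = number of up spins.
        Returns ln g(E) up to an additive constant; exactly, g(E) = C(n, E)."""
        rng = random.Random(seed)
        spins = [1] * n
        e = n                                        # all spins up
        ln_g = [0.0] * (n + 1)
        hist = [0] * (n + 1)
        ln_f = 1.0
        while ln_f > ln_f_final:
            for _ in range(10000):
                i = rng.randrange(n)
                e_new = e + (1 - 2 * spins[i])       # a flip changes E by +/-1
                d = ln_g[e] - ln_g[e_new]
                if d >= 0.0 or rng.random() < math.exp(d):
                    spins[i] = 1 - spins[i]
                    e = e_new
                ln_g[e] += ln_f                      # update the current level
                hist[e] += 1
            if min(hist) > flatness * sum(hist) / len(hist):
                hist = [0] * (n + 1)                 # histogram flat: refine f
                ln_f *= 0.5
        return ln_g
    ```

    With n = 10, the estimated ln g(5) - ln g(0) should approach ln C(10, 5) = ln 252 ≈ 5.53.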

  13. Monte Carlo Ground State Energy for Trapped Boson Systems

    NASA Astrophysics Data System (ADS)

    Rudd, Ethan; Mehta, N. P.

    2012-06-01

    Diffusion Monte Carlo (DMC) and Green's Function Monte Carlo (GFMC) algorithms were implemented to obtain numerical approximations for the ground-state energies of systems of bosons in a harmonic trap potential. Gaussian pairwise particle interactions of the form V0 e^(-|ri-rj|^2/r0^2) were implemented in the DMC code. These results were verified for small values of V0 via a first-order perturbation-theory approximation, for which the N-particle matrix element evaluates to N^2 V0 (1 + 1/r0^2)^(3/2). By obtaining the scattering length from the 2-body potential in the perturbative regime (V0 ≪ 1), ground-state energy results were compared to modern renormalized models by P.R. Johnson et al., New J. Phys. 11, 093022 (2009).
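    A bare-bones DMC loop shows the branching and reference-energy machinery described here, on a one-particle toy rather than the interacting-boson code: for a single particle in a 1D harmonic trap with hbar = m = omega = 1, the growth estimate should converge to E0 = 0.5.

    ```python
    import math
    import random

    def dmc_harmonic(n_target=500, n_steps=2000, dt=0.01, seed=3):
        """Pure diffusion Monte Carlo for V(x) = x^2 / 2.  Walkers diffuse,
        then branch or die with weight exp(-(V - E_ref) dt); E_ref is nudged
        to hold the population near n_target and averages to E0."""
        rng = random.Random(seed)
        walkers = [rng.gauss(0.0, 1.0) for _ in range(n_target)]
        e_ref = 0.5
        history = []
        for _ in range(n_steps):
            new = []
            for x in walkers:
                x += rng.gauss(0.0, math.sqrt(dt))            # free diffusion
                w = math.exp(-(0.5 * x * x - e_ref) * dt)     # branching weight
                for _ in range(int(w + rng.random())):        # stochastic rounding
                    new.append(x)
            walkers = new or [0.0]                            # guard extinction
            e_ref += 0.1 * math.log(n_target / len(walkers))  # population control
            history.append(e_ref)
        tail = history[n_steps // 2:]
        return sum(tail) / len(tail)
    ```

    The feedback constant 0.1 and population size are ad hoc choices; the second half of the E_ref history is averaged to wash out population-control oscillations.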

  14. Application of Monte Carlo simulations to improve basketball shooting strategy

    NASA Astrophysics Data System (ADS)

    Min, Byeong June

    2016-10-01

    The underlying physics of basketball shooting seems to be a straightforward example of Newtonian mechanics that can easily be traced by using numerical methods. However, a human basketball player does not make use of all the possible basketball trajectories. Instead, a basketball player will build up a database of successful shots and select the trajectory that has the greatest tolerance to the small variations of the real world. We simulate the basketball player's shooting training as a Monte Carlo sequence to build optimal shooting strategies, such as the launch speed and angle of the basketball, and whether to take a direct shot or a bank shot, as a function of the player's court position and height. The phase-space volume Ω that belongs to the successful launch velocities generated by Monte Carlo simulations is then used as the criterion to optimize a shooting strategy that incorporates not only mechanical, but also human, factors.
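    The phase-space-volume criterion can be sketched with a drag-free 2D shot model. All constants below (release height 2.0 m, rim height 3.05 m, rim radius 0.2286 m, ball radius 0.12 m, the sampled speed/angle box) are my own assumptions, not the paper's setup:

    ```python
    import math
    import random

    G = 9.81            # m/s^2
    RIM_R = 0.2286      # m, rim radius (45.7 cm diameter)
    BALL_R = 0.12       # m, approximate ball radius

    def shot_scores(speed, angle_deg, dist, release_h=2.0, rim_h=3.05):
        """Drag-free 2D shot: does the ball center cross the rim plane,
        while descending, inside the rim opening?"""
        th = math.radians(angle_deg)
        vx, vy = speed * math.cos(th), speed * math.sin(th)
        disc = vy * vy - 2.0 * G * (rim_h - release_h)
        if disc < 0.0:
            return False                     # never reaches rim height
        t = (vy + math.sqrt(disc)) / G       # later root = descending crossing
        return abs(vx * t - dist) < RIM_R - BALL_R

    def success_volume(dist, n=20000, seed=4):
        """Monte Carlo estimate of the phase-space volume Omega of launch
        (speed, angle) pairs that score from horizontal distance dist."""
        rng = random.Random(seed)
        hits = sum(shot_scores(rng.uniform(5.0, 12.0), rng.uniform(20.0, 80.0), dist)
                   for _ in range(n))
        return hits / n * (12.0 - 5.0) * (80.0 - 20.0)
    ```

    Comparing Omega for direct and bank-shot models at each court position is then what turns the simulation into a shooting strategy.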

  15. A tetrahedron-based inhomogeneous Monte Carlo optical simulator.

    PubMed

    Shen, H; Wang, G

    2010-02-21

    Optical imaging has been widely applied in preclinical and clinical applications. Fifteen years ago, an efficient Monte Carlo program 'MCML' was developed for use with multi-layered turbid media and has gained popularity in the field of biophotonics. Currently, there is an increasingly pressing need for simulation tools more powerful than MCML in order to study light propagation phenomena in complex inhomogeneous objects, such as the mouse. Here we report a tetrahedron-based inhomogeneous Monte Carlo optical simulator (TIM-OS) to address this issue. By modeling an object as a tetrahedron-based inhomogeneous finite-element mesh, TIM-OS can determine the photon-triangle interaction recursively and rapidly. In numerical simulation, we have demonstrated the correctness and efficiency of TIM-OS.
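    The photon-triangle test at the heart of such a simulator is typically a ray-triangle intersection. A standard Möller-Trumbore implementation (a generic sketch, not TIM-OS code) looks like this:

    ```python
    def _sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def _dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    def _cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
        """Moeller-Trumbore: distance t along the ray orig + t*d to the
        triangle (v0, v1, v2), or None if the ray misses it."""
        e1, e2 = _sub(v1, v0), _sub(v2, v0)
        p = _cross(d, e2)
        det = _dot(e1, p)
        if abs(det) < eps:
            return None                      # ray parallel to triangle plane
        inv = 1.0 / det
        tvec = _sub(orig, v0)
        u = _dot(tvec, p) * inv              # first barycentric coordinate
        if u < 0.0 or u > 1.0:
            return None
        q = _cross(tvec, e1)
        v = _dot(d, q) * inv                 # second barycentric coordinate
        if v < 0.0 or u + v > 1.0:
            return None
        t = _dot(e2, q) * inv
        return t if t > eps else None
    ```

    Given a tetrahedral mesh, each photon step tests the four faces of the current tetrahedron and moves to the neighbor sharing the nearest hit face.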

  16. Radiotherapy Monte Carlo simulation using cloud computing technology.

    PubMed

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-12-01

    Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware as a proof of principle.

  17. Monte Carlo Simulations of Arterial Imaging with Optical Coherence Tomography

    SciTech Connect

    Amendt, P.; Estabrook, K.; Everett, M.; London, R.A.; Maitland, D.; Zimmerman, G.; Colston, B.; da Silva, L.; Sathyam, U.

    2000-02-01

    The laser-tissue interaction code LATIS [London et al., Appl. Optics 36, 9068 (1998)] is used to analyze photon scattering histories representative of an optical coherence tomography (OCT) experiment performed at Lawrence Livermore National Laboratory. Monte Carlo photonics with Henyey-Greenstein anisotropic scattering is implemented and used to simulate signal discrimination of intravascular structure. An analytic model is developed and used to obtain a scaling-law relation for optimization of the OCT signal and to validate the Monte Carlo photonics. The appropriateness of the Henyey-Greenstein phase function is studied by direct comparison with more detailed Mie scattering theory using an ensemble of spherical dielectric scatterers. Modest differences are found between the two prescriptions for describing photon angular scattering in tissue. In particular, the Mie scattering phase functions provide less overall reflectance signal but more signal contrast compared to the Henyey-Greenstein formulation.
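    Sampling a scattering angle from the Henyey-Greenstein phase function uses a standard inverse-CDF formula; a minimal sketch (generic, not LATIS code) follows, along with a consistency check that the mean scattering cosine equals the anisotropy factor g:

    ```python
    import math
    import random

    def sample_hg_cos(g, rng):
        """Draw cos(theta) from the Henyey-Greenstein phase function with
        anisotropy factor g, via the standard inverse-CDF formula."""
        xi = rng.random()
        if abs(g) < 1e-6:
            return 2.0 * xi - 1.0                      # isotropic limit
        s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
        return (1.0 + g * g - s * s) / (2.0 * g)

    # Consistency check: the mean scattering cosine of HG equals g.
    rng = random.Random(5)
    g = 0.9                                            # typical for soft tissue
    mean_mu = sum(sample_hg_cos(g, rng) for _ in range(200000)) / 200000.0
    ```

    At the endpoints xi = 0 and xi = 1 the formula returns exactly -1 and +1, so samples stay in range; comparisons against Mie theory replace this closed form with a tabulated phase function.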

  18. Large-cell Monte Carlo renormalization of irreversible growth processes

    NASA Technical Reports Server (NTRS)

    Nakanishi, H.; Family, F.

    1985-01-01

    Monte Carlo sampling is applied to a recently formulated direct-cell renormalization method for irreversible, disorderly growth processes. Large-cell Monte Carlo renormalization is carried out for various nonequilibrium problems based on the formulation dealing with relative probabilities. Specifically, the method is demonstrated by application to the 'true' self-avoiding walk and the Eden model of growing animals for d = 2, 3, and 4 and to the invasion percolation problem for d = 2 and 3. The results are asymptotically in agreement with expectations; however, unexpected complications arise, suggesting the possibility of crossovers, and in any case, demonstrating the danger of using small cells alone, because of the very slow convergence as the cell size b is extrapolated to infinity. The difficulty of applying the present method to the diffusion-limited-aggregation model is commented on.
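    Of the growth models listed, the 'true' self-avoiding walk is the simplest to sketch: the walker steps to a neighboring site with probability proportional to exp(-g n), where n counts prior visits to that site. This is a generic illustration of the model, not the renormalization calculation:

    ```python
    import math
    import random

    def true_saw_2d(n_steps=1000, g=1.0, seed=6):
        """'True' self-avoiding walk on the square lattice: each step goes to
        a nearest neighbor with probability proportional to exp(-g * n),
        where n is the number of previous visits to that neighbor."""
        rng = random.Random(seed)
        x, y = 0, 0
        visits = {(0, 0): 1}
        for _ in range(n_steps):
            nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            wts = [math.exp(-g * visits.get(p, 0)) for p in nbrs]
            r = rng.random() * sum(wts)
            for p, w in zip(nbrs, wts):
                r -= w
                if r <= 0.0:
                    x, y = p
                    break
            visits[(x, y)] = visits.get((x, y), 0) + 1
        return (x, y), visits
    ```

    Unlike the kinetically constrained self-avoiding walk, the true SAW never traps itself: every neighbor keeps nonzero weight, so the walk merely prefers fresh territory.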

  19. Minimising biases in full configuration interaction quantum Monte Carlo.

    PubMed

    Vigor, W A; Spencer, J S; Bearpark, M J; Thom, A J W

    2015-03-14

    We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is a Markov chain in its present form. We construct the Markov matrix of FCIQMC for a two determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups to, in general, minimise the bias. We show that a reweighting scheme commonly used in diffusion Monte Carlo [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)] to remove the bias caused by population control is effective and recommend its use as a post-processing step.
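    The two-determinant analysis rests on computing the stationary distribution of a small Markov matrix. For a generic 2-state row-stochastic matrix (illustrative numbers, not the FCIQMC matrix) this can be done by power iteration:

    ```python
    def stationary(P, iters=200):
        """Stationary distribution pi = pi P of a row-stochastic matrix,
        by repeatedly applying P to a uniform starting distribution."""
        n = len(P)
        pi = [1.0 / n] * n
        for _ in range(iters):
            pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        return pi

    # Hypothetical 2-state chain standing in for two-determinant dynamics:
    # balance gives pi0 * 0.1 = pi1 * 0.3, so pi = (0.75, 0.25) exactly.
    P = [[0.9, 0.1],
         [0.3, 0.7]]
    pi = stationary(P)
    ```

    Convergence is geometric in the subdominant eigenvalue (here 0.6), so 200 iterations reach machine precision.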

  20. Estimation of beryllium ground state energy by Monte Carlo simulation

    SciTech Connect

    Kabir, K. M. Ariful; Halder, Amal

    2015-05-15

    The quantum Monte Carlo method represents a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schrödinger equation for atoms, molecules, solids, and a variety of model systems. Using the variational Monte Carlo method we have calculated the ground-state energy of the beryllium atom. Our calculations are based on a modified four-parameter trial wave function, which leads to a good result compared with the few-parameter trial wave functions presented before. Based on random numbers we can generate a large sample of electron locations to estimate the ground-state energy of beryllium. Our calculation gives a good estimate for the ground-state energy of the beryllium atom compared with the corresponding exact data.
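    The variational Monte Carlo loop is easiest to see on hydrogen rather than beryllium. The sketch below (a one-electron stand-in for the paper's four-parameter trial function) Metropolis-samples |psi|^2 for psi = exp(-alpha r) in atomic units and averages the local energy E_L = -alpha^2/2 + (alpha - 1)/r; at alpha = 1 this recovers the exact -0.5 Ha.

    ```python
    import math
    import random

    def vmc_hydrogen(alpha, n_steps=40000, step=0.5, seed=7):
        """Variational Monte Carlo for hydrogen (atomic units) with trial
        wave function psi = exp(-alpha r).  Metropolis walks sample |psi|^2
        and the local energy E_L = -alpha^2/2 + (alpha - 1)/r is averaged."""
        rng = random.Random(seed)
        pos = [0.5, 0.5, 0.5]
        r = math.sqrt(sum(c * c for c in pos))
        e_sum, count = 0.0, 0
        for i in range(n_steps):
            trial = [c + rng.uniform(-step, step) for c in pos]
            r_new = math.sqrt(sum(c * c for c in trial))
            # |psi|^2 ratio for the move = exp(-2 alpha (r_new - r))
            if rng.random() < math.exp(-2.0 * alpha * (r_new - r)):
                pos, r = trial, r_new
            if i >= n_steps // 10:                     # discard burn-in
                e_sum += -0.5 * alpha * alpha + (alpha - 1.0) / r
                count += 1
        return e_sum / count
    ```

    At alpha = 1 the local energy is constant, so the estimate is exact with zero variance; at alpha = 0.8 the variational energy alpha^2/2 - alpha = -0.48 Ha is recovered within statistical noise. Minimising either the energy or its variance over the trial-function parameters is the variational step.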