Sample records for Monte Carlo sampling methods

  1. Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.

    PubMed

    Beentjes, Casper H L; Baker, Ruth E

    2018-05-25

    Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from typical slow O(N^(-1/2)) convergence rates as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely τ-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
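The slow O(N^(-1/2)) rate and the low-discrepancy alternative can be illustrated on a toy quadrature problem. Below is a minimal pure-Python sketch (a base-2 van der Corput sequence versus pseudo-random points); it is an assumed example, not the paper's τ-leaping code:

```python
import random

def van_der_corput(n, base=2):
    """n-th point of the van der Corput low-discrepancy sequence (base 2)."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def estimate(points):
    """Sample-mean estimate of the integral of x^2 over [0, 1] (exact: 1/3)."""
    return sum(x * x for x in points) / len(points)

N = 4096
random.seed(0)
mc_est = estimate([random.random() for _ in range(N)])          # pseudo-random stream
qmc_est = estimate([van_der_corput(i) for i in range(1, N + 1)]) # low-discrepancy stream
```

For this smooth integrand the low-discrepancy stream typically yields an error one to two orders of magnitude below the pseudo-random estimate at the same N, mirroring the variance reductions described in the abstract.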

  2. Random Numbers and Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
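The Metropolis algorithm summarized above can be sketched in a few lines. This is a generic random-walk Metropolis sampler for a standard normal target, an assumed illustration rather than code from the book:

```python
import math
import random

def metropolis(log_p, x0, n_steps, step=2.0, seed=1):
    """Random-walk Metropolis: propose a uniform jump and accept it with
    probability min(1, p(x') / p(x)); only density ratios are needed."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x_prop = x + rng.uniform(-step, step)
        delta = log_p(x_prop) - log_p(x)
        if delta >= 0 or rng.random() < math.exp(delta):
            x = x_prop
        samples.append(x)
    return samples

# Target: standard normal, known only up to a normalizing constant.
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 50000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Because only the ratio p(x')/p(x) enters the acceptance test, the same sampler applies to Boltzmann weights exp(-E/kT) in the many-particle setting the chapter discusses.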

  3. Sample Size and Power Estimates for a Confirmatory Factor Analytic Model in Exercise and Sport: A Monte Carlo Approach

    ERIC Educational Resources Information Center

    Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying

    2011-01-01

    Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…

  4. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    PubMed

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed.

  5. The unbiasedness of a generalized mirage boundary correction method for Monte Carlo integration estimators of volume

    Treesearch

    Thomas B. Lynch; Jeffrey H. Gove

    2014-01-01

    The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...

  6. A comparative study of Conroy and Monte Carlo methods applied to multiple quadratures and multiple scattering

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Fluellen, A.

    1978-01-01

    An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
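Conroy's systematically distributed points belong to the same family as number-theoretic lattice rules. As a hedged illustration of systematic versus random sample points (a Fibonacci lattice on the unit square, an analogue of such point sets, not Conroy's actual construction):

```python
import random

def fibonacci_lattice(n_points=610, gen=377):
    """Rank-1 lattice: x_k = k/N, y_k = frac(k*gen/N), using the
    Fibonacci pair (610, 377) so the points fill the square uniformly."""
    return [(k / n_points, (k * gen / n_points) % 1.0) for k in range(n_points)]

def integrate(points, f):
    """Equal-weight quadrature over the given sample points."""
    return sum(f(x, y) for x, y in points) / len(points)

f = lambda x, y: x * y  # exact integral over the unit square: 1/4
lattice_est = integrate(fibonacci_lattice(), f)      # systematic points
random.seed(2)
mc_est = integrate([(random.random(), random.random()) for _ in range(610)], f)
```

The systematic lattice forms the closed, symmetrical filling pattern the abstract describes, and for smooth integrands it typically beats random placement at equal sample count.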

  7. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    The meaningful investigation of many problems in statistics can be solved through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…

  8. A Monte-Carlo method which is not based on Markov chain algorithm, used to study electrostatic screening of ion potential

    NASA Astrophysics Data System (ADS)

    Šantić, Branko; Gracin, Davor

    2017-12-01

    A new simple Monte Carlo method is introduced for the study of electrostatic screening by surrounding ions. The proposed method is not based on the generally used Markov chain method for sample generation. Each sample is pristine and there is no correlation with other samples. As the main novelty, the pairs of ions are gradually added to a sample provided that the energy of each ion is within the boundaries determined by the temperature and the size of ions. The proposed method provides reliable results, as demonstrated by the screening of ion in plasma and in water.

  9. Monte Carlo sampling in diffusive dynamical systems

    NASA Astrophysics Data System (ADS)

    Tapias, Diego; Sanders, David P.; Altmann, Eduardo G.

    2018-05-01

    We introduce a Monte Carlo algorithm to efficiently compute transport properties of chaotic dynamical systems. Our method exploits the importance sampling technique that favors trajectories in the tail of the distribution of displacements, where deviations from a diffusive process are most prominent. We search for initial conditions using a proposal that correlates states in the Markov chain constructed via a Metropolis-Hastings algorithm. We show that our method outperforms the direct sampling method and also Metropolis-Hastings methods with alternative proposals. We test our general method through numerical simulations in 1D (box-map) and 2D (Lorentz gas) systems.
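The idea of importance sampling favoring the tail of a distribution can be shown on a standard toy problem: estimating P(Z > 3) for a standard normal by drawing from a proposal shifted into the tail. This is a generic sketch of the technique, not the authors' box-map or Lorentz-gas code:

```python
import math
import random

def tail_prob_is(threshold=3.0, n=100000, seed=3):
    """Estimate P(Z > threshold) for Z ~ N(0,1) by importance sampling with
    a N(threshold, 1) proposal that concentrates samples in the tail."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(threshold, 1.0)       # draw from the shifted proposal
        if z > threshold:
            # Weight = target density / proposal density (constants cancel).
            total += math.exp(-0.5 * z * z + 0.5 * (z - threshold) ** 2)
    return total / n

est = tail_prob_is()   # exact value: about 1.35e-3
```

Direct sampling would see the event in only about 0.13% of draws; the shifted proposal makes almost every draw informative, which is the same mechanism the abstract exploits for rare large displacements.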

  10. A path integral methodology for obtaining thermodynamic properties of nonadiabatic systems using Gaussian mixture distributions

    NASA Astrophysics Data System (ADS)

    Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel

    2018-05-01

    We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.

  11. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    NASA Astrophysics Data System (ADS)

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl

    2018-06-01

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
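The classical double-loop Monte Carlo estimator referred to above can be sketched for a linear-Gaussian toy model, where the exact expected information gain 0.5·ln(1 + σ_θ²/σ_ε²) is available for comparison. This is a generic DLMC sketch; the paper's Laplace-based importance sampling is not reproduced here:

```python
import math
import random

def dlmc_eig(n_outer=1000, n_inner=300, sig_theta=1.0, sig_eps=1.0, seed=4):
    """Double-loop Monte Carlo estimate of the expected information gain
    for y = theta + eps, theta ~ N(0, sig_theta^2), eps ~ N(0, sig_eps^2)."""
    rng = random.Random(seed)

    def loglik(y, theta):
        return (-0.5 * ((y - theta) / sig_eps) ** 2
                - math.log(sig_eps * math.sqrt(2 * math.pi)))

    total = 0.0
    for _ in range(n_outer):
        theta = rng.gauss(0.0, sig_theta)
        y = theta + rng.gauss(0.0, sig_eps)
        # Inner loop: Monte Carlo estimate of the evidence p(y).
        evidence = sum(math.exp(loglik(y, rng.gauss(0.0, sig_theta)))
                       for _ in range(n_inner)) / n_inner
        total += loglik(y, theta) - math.log(evidence)
    return total / n_outer

eig = dlmc_eig()
exact = 0.5 * math.log(2.0)   # about 0.347 for sig_theta = sig_eps = 1
```

The inner evidence estimate is exactly where the underflow and cost problems named in the abstract arise when the likelihood is sharply peaked, which motivates replacing plain prior sampling with a Laplace-based importance sampler.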

  12. Multiple-time-stepping generalized hybrid Monte Carlo methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Escribano, Bruno; Akhmatskaya, Elena (IKERBASQUE, Basque Foundation for Science, Bilbao)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC additionally uses a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows for larger step sizes in the simulation of complex systems.

  13. Efficient global biopolymer sampling with end-transfer configurational bias Monte Carlo

    NASA Astrophysics Data System (ADS)

    Arya, Gaurav; Schlick, Tamar

    2007-01-01

    We develop an "end-transfer configurational bias Monte Carlo" method for efficient thermodynamic sampling of complex biopolymers and assess its performance on a mesoscale model of chromatin (oligonucleosome) at different salt conditions compared to other Monte Carlo moves. Our method extends traditional configurational bias by deleting a repeating motif (monomer) from one end of the biopolymer and regrowing it at the opposite end using the standard Rosenbluth scheme. The method's sampling efficiency compared to local moves, pivot rotations, and standard configurational bias is assessed by parameters relating to translational, rotational, and internal degrees of freedom of the oligonucleosome. Our results show that the end-transfer method is superior in sampling every degree of freedom of the oligonucleosomes over the other methods at high salt concentrations (weak electrostatics), but worse than pivot rotations at sampling internal and rotational degrees of freedom at low-to-moderate salt concentrations (strong electrostatics). Under all conditions investigated, however, the end-transfer method is several orders of magnitude more efficient than the standard configurational bias approach. This is because the characteristic sampling time of the innermost oligonucleosome motif scales quadratically with the length of the oligonucleosome for the end-transfer method, while it scales exponentially for the traditional configurational-bias method. Thus, the method we propose can significantly improve performance for global biomolecular applications, especially in condensed systems with weak nonbonded interactions, and may be combined with local enhancements to improve local sampling.

  14. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on for example, complex geostatistical models and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerical complex evaluation of the forward problem, with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution), than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.

  15. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using the sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
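The surrogate-plus-quasi-Monte Carlo idea can be sketched in one dimension: replace an "expensive" forward model by a polynomial interpolant built from a few runs, then evaluate the surrogate posterior on low-discrepancy samples. The forward model, node count, and noise level below are illustrative assumptions, not the paper's groundwater setup:

```python
import math

def cheb_nodes(n, a=-1.0, b=1.0):
    """Chebyshev interpolation nodes on [a, b] (a 1D stand-in for a sparse grid)."""
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos(math.pi * (2 * k + 1) / (2 * n))
            for k in range(n)]

def lagrange(nodes, values):
    """Return the Lagrange interpolating polynomial through (nodes, values)."""
    def p(x):
        total = 0.0
        for i, xi in enumerate(nodes):
            w = 1.0
            for j, xj in enumerate(nodes):
                if i != j:
                    w *= (x - xj) / (xi - xj)
            total += values[i] * w
        return total
    return p

def van_der_corput(n, base=2):
    """Base-2 low-discrepancy sequence used as the quasi-Monte Carlo stream."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

forward = math.exp                 # stand-in for an expensive forward model
nodes = cheb_nodes(9)
surrogate = lagrange(nodes, [forward(x) for x in nodes])   # only 9 forward runs

# Quasi-Monte Carlo samples from the uniform prior on [-1, 1].
thetas = [2.0 * van_der_corput(i) - 1.0 for i in range(1, 4097)]
data, sigma = forward(0.3), 0.05   # synthetic observation at true theta = 0.3
weights = [math.exp(-0.5 * ((data - surrogate(t)) / sigma) ** 2) for t in thetas]
post_mean = sum(w * t for w, t in zip(weights, thetas)) / sum(weights)
```

All 4096 posterior evaluations hit only the cheap surrogate, which is the source of the speedup the abstract claims; accuracy then hinges on the interpolation error of the surrogate.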

  16. Renyi entanglement entropy of interacting fermions calculated using the continuous-time quantum Monte Carlo method.

    PubMed

    Wang, Lei; Troyer, Matthias

    2014-09-12

    We present a new algorithm for calculating the Renyi entanglement entropy of interacting fermions using the continuous-time quantum Monte Carlo method. The algorithm only samples the interaction correction of the entanglement entropy, which by design ensures the efficient calculation of weakly interacting systems. Combined with Monte Carlo reweighting, the algorithm also performs well for systems with strong interactions. We demonstrate the potential of this method by studying the quantum entanglement signatures of the charge-density-wave transition of interacting fermions on a square lattice.

  17. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    PubMed

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models.
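Wilks's order-statistic method mentioned above has a well-known closed form for the sample size: the smallest n with 1 − γ^n ≥ β, which for a one-sided 95/95 bound gives n = 59, with the sample maximum serving as the upper bound. A minimal sketch with an assumed lognormal toy model (not the article's fault tree):

```python
import math
import random

def wilks_sample_size(gamma=0.95, beta=0.95):
    """Smallest n such that the sample maximum exceeds the gamma-quantile
    with confidence at least beta, i.e. 1 - gamma**n >= beta."""
    n = 1
    while 1.0 - gamma ** n < beta:
        n += 1
    return n

n = wilks_sample_size()          # classic one-sided 95/95 case: n = 59
rng = random.Random(5)
# Toy "top event": product of two lognormally distributed basic-event probabilities.
samples = sorted(math.exp(rng.gauss(-7.0, 0.5)) * math.exp(rng.gauss(-6.0, 0.5))
                 for _ in range(n))
wilks_upper_bound = samples[-1]  # sample maximum as the 95/95 upper bound
```

Only 59 model evaluations are needed regardless of the model's complexity, which is why the article finds the method computationally inexpensive compared with full Monte Carlo sampling.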

  18. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando

    2015-07-27

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.

  20. Monte Carlo approaches to sampling forested tracts with lines or points

    Treesearch

    Harry T. Valentine; Jeffrey H. Gove; Timothy G. Gregoire

    2001-01-01

    Several line- and point-based sampling methods can be employed to estimate the aggregate dimensions of trees standing on a forested tract or pieces of coarse woody debris lying on the forest floor. Line methods include line intersect sampling, horizontal line sampling, and transect relascope sampling; point methods include variable- and fixed-radius plot sampling, and...

  1. Finite element model updating using the shadow hybrid Monte Carlo technique

    NASA Astrophysics Data System (ADS)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers an important MCMC approach to higher-dimensional complex problems. HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC, designed to improve sampling for large system sizes and time steps by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
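The HMC move described in the abstract, MD leapfrog steps used as a global Metropolis proposal, can be sketched for a standard normal target. This generic one-dimensional illustration is an assumed example, not the authors' FEM-updating code:

```python
import math
import random

def hmc(log_prob, grad_log_prob, x0, n_samples=4000, eps=0.2, n_leap=10, seed=6):
    """Hybrid/Hamiltonian Monte Carlo: leapfrog-integrate Hamiltonian dynamics
    with H = -log p(x) + p^2/2, then Metropolis-accept on the energy error."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                      # resample momentum
        x_new, p_new = x, p
        p_new += 0.5 * eps * grad_log_prob(x_new)    # leapfrog half step
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new += eps * grad_log_prob(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_log_prob(x_new)    # final half step
        # Accept with probability min(1, exp(-dH)); dH is the integration error.
        dh = ((-log_prob(x_new) + 0.5 * p_new * p_new)
              - (-log_prob(x) + 0.5 * p * p))
        if dh <= 0 or rng.random() < math.exp(-dh):
            x = x_new
        out.append(x)
    return out

# Standard normal target: log p(x) = -x^2/2, gradient -x.
samples = hmc(lambda x: -0.5 * x * x, lambda x: -x, 0.0)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The acceptance test on dh is exactly where the time-step sensitivity named in the abstract enters; SHMC replaces H in that test with a shadow Hamiltonian that the leapfrog integrator conserves more accurately.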

  2. Least squares polynomial chaos expansion: A review of sampling strategies

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Doostan, Alireza

    2018-04-01

    As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for the least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.

  3. Equilibrium Molecular Thermodynamics from Kirkwood Sampling

    PubMed Central

    2015-01-01

    We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys. 2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, where Kirkwood sampling is used for generating trial Monte Carlo moves. Using this method, equilibrium distributions corresponding to different temperatures and potential energy functions can be generated from a given set of low-order correlations. Since Kirkwood samples are generated independently, this method is ideally suited for massively parallel distributed computing. The second approach is a variant of reservoir replica exchange, where Kirkwood sampling is used to construct a reservoir of conformations, which exchanges conformations with the replicas performing equilibrium sampling corresponding to different thermodynamic states. Coupling with the Kirkwood reservoir enhances sampling by facilitating global jumps in the conformational space. The efficiency of both methods depends on the overlap of the Kirkwood distribution with the target equilibrium distribution. We present proof-of-concept results for a model nine-atom linear molecule and alanine dipeptide. PMID:25915525

  4. Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; Simutė, S.

    2017-12-01

    We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling that requires only a small number of test models. Using spectral-element simulations of 3D, visco-elastic, anisotropic wave propagation, we precompute a database of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain database can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude fewer samples than derivative-free Monte Carlo methods. (Exact numbers depend on observational errors and the quality of the prior.) We apply our method to the Japanese Islands region, where we previously constrained 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.

  5. LMC: Logarithmantic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Mantz, Adam B.

    2017-06-01

    LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).

  6. An NCME Instructional Module on Estimating Item Response Theory Models Using Markov Chain Monte Carlo Methods

    ERIC Educational Resources Information Center

    Kim, Jee-Seon; Bolt, Daniel M.

    2007-01-01

    The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…

  7. Quantum speedup of Monte Carlo methods.

    PubMed

    Montanaro, Ashley

    2015-09-08

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.

  8. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  9. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.

  10. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE PAGES

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    2016-11-28

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.

  11. Data Analysis Recipes: Using Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Hogg, David W.; Foreman-Mackey, Daniel

    2018-05-01

    Markov Chain Monte Carlo (MCMC) methods for sampling probability density functions (combined with abundant computational resources) have transformed the sciences, especially in performing probabilistic inferences, or fitting models to data. In this primarily pedagogical contribution, we give a brief overview of the most basic MCMC method and some practical advice for the use of MCMC in real inference problems. We give advice on method choice, tuning for performance, methods for initialization, tests of convergence, troubleshooting, and use of the chain output to produce or report parameter estimates with associated uncertainties. We argue that autocorrelation time is the most important test for convergence, as it directly connects to the uncertainty on the sampling estimate of any quantity of interest. We emphasize that sampling is a method for doing integrals; this guides our thinking about how MCMC output is best used.
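    Both of the points stressed above — sampling as a way of doing integrals, and the integrated autocorrelation time as the key convergence diagnostic — can be illustrated with a minimal random-walk Metropolis chain on a standard normal target. The crude autocorrelation estimator below is a simplified stand-in for the more careful estimators the article recommends.

    ```python
    import numpy as np

    def metropolis(log_p, x0, step, n, rng):
        """Random-walk Metropolis chain with Gaussian proposals."""
        x, out = x0, np.empty(n)
        lp = log_p(x)
        for i in range(n):
            y = x + step * rng.normal()
            lpy = log_p(y)
            if np.log(rng.random()) < lpy - lp:
                x, lp = y, lpy
            out[i] = x
        return out

    def autocorr_time(x, max_lag=200):
        """Crude integrated autocorrelation time: 1 + 2 * sum of the ACF,
        truncated where the ACF first goes negative."""
        x = x - x.mean()
        var = x @ x / len(x)
        acf = [1.0] + [(x[:-k] @ x[k:]) / len(x) / var
                       for k in range(1, max_lag)]
        acf = np.array(acf)
        cut = np.argmax(acf < 0) if (acf < 0).any() else len(acf)
        return 1.0 + 2.0 * acf[1:cut].sum()

    rng = np.random.default_rng(2)
    chain = metropolis(lambda x: -0.5 * x**2, 0.0, 1.0, 50000, rng)
    est = (chain**2).mean()      # MCMC estimate of the integral E[x^2] = 1
    tau = autocorr_time(chain)   # effective sample size is ~ n / tau
    ```

    The uncertainty on `est` scales like sqrt(tau / n), which is why the autocorrelation time, not the raw chain length, governs convergence.
    
    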

  12. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear engineering review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate-level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output.
The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.

  13. Monte Carlo simulation of energy-dispersive x-ray fluorescence and applications

    NASA Astrophysics Data System (ADS)

    Li, Fusheng

    Four key components with regard to Monte Carlo Library Least Squares (MCLLS) have been developed by the author. These include: a comprehensive and accurate Monte Carlo simulation code - CEARXRF5 with Differential Operators (DO) and coincidence sampling, Detector Response Function (DRF), an integrated Monte Carlo - Library Least-Squares (MCLLS) Graphical User Interface (GUI) visualization System (MCLLSPro) and a new reproducible and flexible benchmark experiment setup. All these developments or upgrades enable the MCLLS approach to be a useful and powerful tool for a tremendous variety of elemental analysis applications. CEARXRF, a comprehensive and accurate Monte Carlo code for simulating the total and individual library spectral responses of all elements, has been recently upgraded to version 5 by the author. The new version has several key improvements: input file format fully compatible with MCNP5, a new efficient general geometry tracking code, versatile source definitions, various variance reduction techniques (e.g. weight window mesh and splitting, stratified sampling, etc.), a new cross section data storage and accessing method which improves the simulation speed by a factor of four, new cross section data, upgraded differential operators (DO) calculation capability, and an updated coincidence sampling scheme which includes K-L and L-L coincidence X-rays, while keeping all the capabilities of the previous version. The new Differential Operators method is powerful for measurement sensitivity study and system optimization. For our Monte Carlo EDXRF elemental analysis system, it becomes an important technique for quantifying the matrix effect in near real time when combined with the MCLLS approach. An integrated visualization GUI system has been developed by the author to perform elemental analysis using the iterated Library Least-Squares method for various samples when an initial guess is provided. 
This software was built on the Borland C++ Builder platform and provides a user-friendly interface for all qualitative and quantitative tasks: users can run the forward Monte Carlo simulation (if necessary) or use previously calculated Monte Carlo library spectra to estimate the sample elemental composition within a minute, making it a powerful tool for EDXRF analysts. A reproducible experiment setup has been built and experiments have been performed to benchmark the system. Two types of Standard Reference Materials (SRM), stainless steel samples from the National Institute of Standards and Technology (NIST) and aluminum alloy samples from Alcoa Inc., with certified elemental compositions, were tested with this reproducible prototype system using a 109Cd radioisotope source (20 mCi) and a liquid-nitrogen-cooled Si(Li) detector. The results show excellent agreement between the calculated sample compositions and their reference values, and the approach is very fast.

  14. SIMCA T 1.0: A SAS Computer Program for Simulating Computer Adaptive Testing

    ERIC Educational Resources Information Center

    Raiche, Gilles; Blais, Jean-Guy

    2006-01-01

    Monte Carlo methodologies are frequently applied to study the sampling distribution of the estimated proficiency level in adaptive testing. These methods eliminate real situational constraints. However, these Monte Carlo methodologies are not currently supported by the available software programs, and when these programs are available, their…

  15. Replica-exchange Wang Landau sampling: pushing the limits of Monte Carlo simulations in materials sciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perera, Meewanage Dilina N; Li, Ying Wai; Eisenbach, Markus

    We describe the study of thermodynamics of materials using replica-exchange Wang Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first principles calculations. We demonstrate that our framework leads to a significant speedup without compromising the accuracy and precision and facilitates the study of much larger systems than is possible with its serial counterpart.

  16. Driven-dissipative quantum Monte Carlo method for open quantum systems

    NASA Astrophysics Data System (ADS)

    Nagy, Alexandra; Savona, Vincenzo

    2018-05-01

    We develop a real-time full configuration-interaction quantum Monte Carlo approach to model driven-dissipative open quantum systems with Markovian system-bath coupling. The method enables stochastic sampling of the Liouville-von Neumann time evolution of the density matrix thanks to a massively parallel algorithm, thus providing estimates of observables on the nonequilibrium steady state. We present the underlying theory and introduce an initiator technique and importance sampling to reduce the statistical error. Finally, we demonstrate the efficiency of our approach by applying it to the driven-dissipative two-dimensional XYZ spin-1/2 model on a lattice.

  17. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wu, Keyi; Li, Jinglai

    2016-09-01

    In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithm, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitude of speedup over the standard Monte Carlo methods.
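    The variance-reduction idea behind such importance sampling schemes can be shown without the adaptive or multicanonical machinery: sample from a biasing density concentrated in the rare region and reweight by the likelihood ratio. The shifted-Gaussian tail-probability example below is a hypothetical stand-in for illustration, not the MMC algorithm itself.

    ```python
    import math
    import numpy as np

    # Estimate p = P(Y > 4) for Y ~ N(0,1). Crude MC wastes nearly every
    # sample; drawing from a Gaussian shifted into the tail and reweighting
    # concentrates effort where the rare event lives.
    rng = np.random.default_rng(3)
    n = 20000
    shift = 4.0

    y = rng.normal(shift, 1.0, n)                  # biasing density q(y)
    log_w = -0.5 * y**2 + 0.5 * (y - shift)**2     # log of p(y)/q(y)
    est = np.mean((y > 4.0) * np.exp(log_w))

    exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # P(Y > 4) ≈ 3.17e-5
    ```

    With 20000 crude MC samples one would expect fewer than one hit in this tail; the reweighted estimator reaches percent-level relative error with the same budget.
    
    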

  18. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.

    PubMed

    Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai

    2017-11-01

    For big data analysis, high computational cost for Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.

  19. Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M

    2012-08-01

    For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS, Consistent Adjoint Driven Importance Sampling. This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.

  20. Evaluation of gamma-ray attenuation properties of bismuth borate glass systems using Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Tarim, Urkiye Akar; Ozmutlu, Emin N.; Yalcin, Sezai; Gundogdu, Ozcan; Bradley, D. A.; Gurler, Orhan

    2017-11-01

    A Monte Carlo method was developed to investigate radiation shielding properties of bismuth borate glass. The mass attenuation coefficients and half-value layer parameters were determined for different fractional amounts of Bi2O3 in the glass samples for the 356, 662, 1173 and 1332 keV photon energies. A comparison of the theoretical and experimental attenuation coefficients is presented.
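    The half-value layer reported in such shielding studies follows directly from the linear attenuation coefficient via the exponential attenuation law I(x) = I0 exp(-mu x). A one-line sketch, with hypothetical (not measured) values standing in for the mass attenuation coefficient and glass density:

    ```python
    import math

    def half_value_layer(mass_atten_cm2_per_g, density_g_per_cm3):
        """HVL (cm) from the mass attenuation coefficient:
        mu = (mu/rho) * rho, and HVL = ln(2) / mu."""
        mu = mass_atten_cm2_per_g * density_g_per_cm3  # linear attenuation, 1/cm
        return math.log(2.0) / mu

    # Illustrative placeholder values for a dense glass at 662 keV.
    hvl = half_value_layer(0.09, 5.0)
    print(round(hvl, 3), "cm")   # → 1.54 cm
    ```

    Larger Bi2O3 fractions raise both the density and the mass attenuation coefficient, so the half-value layer shrinks on both counts.
    
    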

  1. Implementation and testing of the on-the-fly thermal scattering Monte Carlo sampling method for graphite and light water in MCNP6

    DOE PAGES

    Pavlou, Andrew T.; Ji, Wei; Brown, Forrest B.

    2016-01-23

    Here, a proper treatment of thermal neutron scattering requires accounting for chemical binding through a scattering law S(α,β,T). Monte Carlo codes sample the secondary neutron energy and angle after a thermal scattering event from probability tables generated from S(α,β,T) tables at discrete temperatures, requiring a large amount of data for multiscale and multiphysics problems with detailed temperature gradients. We have previously developed a method to handle this temperature dependence on-the-fly during the Monte Carlo random walk using polynomial expansions in 1/T to directly sample the secondary energy and angle. In this paper, the on-the-fly method is implemented in MCNP6 and tested in both graphite-moderated and light water-moderated systems. The on-the-fly method is compared with the thermal ACE libraries that come standard with MCNP6, yielding good agreement with integral reactor quantities like k-eigenvalue and differential quantities like single-scatter secondary energy and angle distributions. The simulation runtimes are comparable between the two methods (on the order of 5–15% difference for the problems tested) and the on-the-fly fit coefficients only require 5–15 MB of total data storage.

  2. Application of Enhanced Sampling Monte Carlo Methods for High-Resolution Protein-Protein Docking in Rosetta

    PubMed Central

    Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin

    2015-01-01

    The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation, complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins, including possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419

  3. Monte Carlo simulation of air sampling methods for the measurement of radon decay products.

    PubMed

    Sima, Octavian; Luca, Aurelian; Sahagia, Maria

    2017-08-01

    A stochastic model of the processes involved in the measurement of the activity of the 222Rn decay products was developed. The distributions of the relevant factors, including air sampling and radionuclide collection, are propagated using Monte Carlo simulation to the final distribution of the measurement results. The uncertainties of the 222Rn decay product concentrations in the air are realistically evaluated. Copyright © 2017 Elsevier Ltd. All rights reserved.
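    The propagation step described above can be sketched generically: draw each uncertain input from its distribution, push the draws through the measurement model, and read summary statistics off the output distribution. The model and all parameter values below are hypothetical placeholders, not the paper's.

    ```python
    import numpy as np

    # Hypothetical measurement model: activity concentration =
    #   net counts / (detection efficiency * sampled air volume).
    rng = np.random.default_rng(4)
    n = 100000

    counts     = rng.normal(1.0e4, 1.0e2, n)   # net counts, ~1% uncertainty
    efficiency = rng.normal(0.25, 0.01, n)     # detection efficiency, ~4%
    volume_m3  = rng.normal(0.5, 0.02, n)      # sampled air volume, ~4%

    conc = counts / (efficiency * volume_m3)   # concentration per m^3

    mean, std = conc.mean(), conc.std()
    lo, hi = np.percentile(conc, [2.5, 97.5])  # 95% coverage interval
    ```

    Unlike first-order error propagation, the Monte Carlo output also captures the skewness introduced by the division, which matters when relative uncertainties are large.
    
    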

  4. The Direct Lighting Computation in Global Illumination Methods

    NASA Astrophysics Data System (ADS)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem into an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment which for the first time makes ray tracing feasible for highly complex environments.

  5. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.

  6. Monte Carlo Sampling in Fractal Landscapes

    NASA Astrophysics Data System (ADS)

    Leitão, Jorge C.; Lopes, J. M. Viana Parente; Altmann, Eduardo G.

    2013-05-01

    We design a random walk to explore fractal landscapes such as those describing chaotic transients in dynamical systems. We show that the random walk moves efficiently only when its step length depends on the height of the landscape via the largest Lyapunov exponent of the chaotic system. We propose a generalization of the Wang-Landau algorithm which constructs not only the density of states (transient time distribution) but also the correct step length. As a result, we obtain a flat-histogram Monte Carlo method which samples fractal landscapes in polynomial time, a dramatic improvement over the exponential scaling of traditional uniform-sampling methods. Our results are not limited by the dimensionality of the landscape and are confirmed numerically in chaotic systems with up to 30 dimensions.
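    The flat-histogram idea underlying such Wang-Landau generalizations can be sketched on a toy landscape where the density of states is known exactly: N-bit strings with "energy" equal to the number of set bits, for which g(E) = C(N, E). The minimal loop below uses a fixed modification-factor schedule instead of a histogram-flatness test, so it is illustrative only, not the authors' algorithm.

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(5)
    N = 10
    log_g = np.zeros(N + 1)            # running estimate of ln g(E)
    state = np.zeros(N, dtype=int)     # current bit string
    E = 0                              # current energy = number of set bits
    log_f = 1.0                        # modification factor, halved each stage

    for stage in range(18):
        for _ in range(20000):
            i = rng.integers(N)                      # propose flipping one bit
            E_new = E + (1 - 2 * state[i])
            # accept with prob min(1, g(E)/g(E_new)) -> flat histogram over E
            if np.log(rng.random()) < log_g[E] - log_g[E_new]:
                state[i] ^= 1
                E = E_new
            log_g[E] += log_f                        # penalize the visited level
        log_f *= 0.5

    log_g -= log_g[0]                  # normalize: g(0) = C(N, 0) = 1
    exact = [math.log(math.comb(N, k)) for k in range(N + 1)]
    ```

    Because the walk is biased by 1/g(E), rare energy levels are visited as often as common ones, which is the same mechanism that lets the fractal-landscape sampler reach exponentially rare regions in polynomial time.
    
    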

  7. Wang-Landau method for calculating Rényi entropies in finite-temperature quantum Monte Carlo simulations.

    PubMed

    Inglis, Stephen; Melko, Roger G

    2013-01-01

    We implement a Wang-Landau sampling technique in quantum Monte Carlo (QMC) simulations for the purpose of calculating the Rényi entanglement entropies and associated mutual information. The algorithm converges to an estimate of an analog of the density of states for stochastic series expansion QMC, allowing a direct calculation of Rényi entropies without explicit thermodynamic integration. We benchmark results for the mutual information on two-dimensional (2D) isotropic and anisotropic Heisenberg models, a 2D transverse field Ising model, and a three-dimensional Heisenberg model, confirming a critical scaling of the mutual information in cases with a finite-temperature transition. We discuss the benefits and limitations of broad sampling techniques compared to standard importance sampling methods.

  8. Estimating rare events in biochemical systems using conditional sampling.

    PubMed

    Sundar, V S

    2017-01-28

    The paper focuses on the development of variance reduction strategies to estimate rare event probabilities in biochemical systems. Obtaining such probabilities using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
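    The core of subset simulation — writing a rare event probability as a product of more frequent conditional probabilities, each estimated with a Markov chain — can be sketched for a Gaussian tail probability. The thresholds below are fixed by hand and the conditional chains use a plain one-dimensional Metropolis step, so this simplifies away the adaptive level selection and the full modified Metropolis-Hastings scheme of the paper.

    ```python
    import math
    import numpy as np

    # Estimate p = P(Z > 3) for Z ~ N(0,1) as
    #   P(Z > 1.5) * P(Z > 3 | Z > 1.5).
    rng = np.random.default_rng(6)
    n = 5000
    levels = [1.5, 3.0]                # intermediate and final thresholds

    # Level 0: crude Monte Carlo for the frequent event Z > 1.5.
    z = rng.normal(size=n)
    p = (z > levels[0]).mean()
    seeds = z[z > levels[0]]           # samples already in the conditional region

    # Level 1: Metropolis chains seeded in {Z > 1.5}, targeting N(0,1)
    # restricted to that region, to estimate P(Z > 3 | Z > 1.5).
    samples = []
    for _ in range(n // len(seeds) + 1):
        x = seeds.copy()
        for _ in range(10):
            y = x + rng.normal(scale=1.0, size=x.size)
            accept = (np.log(rng.random(x.size)) <
                      -0.5 * y**2 + 0.5 * x**2) & (y > levels[0])
            x = np.where(accept, y, x)
            samples.append(x.copy())
    samples = np.concatenate(samples)
    p *= (samples > levels[1]).mean()

    exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))   # ≈ 1.35e-3
    ```

    Each factor in the product is of order 10^-1 to 10^-2, so every level is estimated from abundant samples even though the target event itself is rare.
    
    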

  9. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Walsh, Jonathan A.

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.

  10. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE PAGES

    Romano, Paul K.; Walsh, Jonathan A.

    2018-02-03

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.
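    For context, the approximate constant-cross-section free-gas scheme that such improved algorithms are compared against draws the target speed from a two-component mixture and applies a single rejection on the relative speed. The sketch below is that textbook baseline (not the paper's algorithm), with the thermal parameter beta = sqrt(M/2kT) folded into the neutron-speed units; treat the details as a hedged reconstruction.

    ```python
    import math
    import numpy as np

    def sample_target_speed_and_mu(v_n, beta, rng):
        """Free-gas target sampling for a constant cross section: draw target
        speed V and direction cosine mu from a density proportional to
        v_r * M(V), via a mixture proposal ∝ (v_n + V) M(V) and a single
        rejection with probability v_r / (v_n + V)."""
        # Probability of the V^3 exp(-beta^2 V^2) mixture component.
        p1 = 1.0 / (1.0 + math.sqrt(math.pi) * beta * v_n / 2.0)
        while True:
            if rng.random() < p1:
                V = math.sqrt(rng.gamma(2.0)) / beta   # ∝ V^3 exp(-beta^2 V^2)
            else:
                V = math.sqrt(rng.gamma(1.5)) / beta   # ∝ V^2 exp(-beta^2 V^2)
            mu = 2.0 * rng.random() - 1.0
            v_r = math.sqrt(v_n * v_n + V * V - 2.0 * v_n * V * mu)
            if rng.random() < v_r / (v_n + V):         # single rejection step
                return V, mu

    rng = np.random.default_rng(7)
    draws = [sample_target_speed_and_mu(1.0, 1.0, rng) for _ in range(2000)]
    V = np.array([d[0] for d in draws])
    mu = np.array([d[1] for d in draws])
    ```

    Accounting for an energy-dependent cross section adds a second rejection on the cross-section ratio, which is exactly the inefficiency the improved relative-velocity sampling removes.
    
    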

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.

    In this study, methods are addressed to reduce the computational time to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared, including the reciprocity method, importance sampling, weight windows and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10^5 when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.

  12. Constant-pressure nested sampling with atomistic dynamics

    NASA Astrophysics Data System (ADS)

    Baldock, Robert J. N.; Bernstein, Noam; Salerno, K. Michael; Pártay, Lívia B.; Csányi, Gábor

    2017-10-01

    The nested sampling algorithm has been shown to be a general method for calculating the pressure-temperature-composition phase diagrams of materials. While the previous implementation used single-particle Monte Carlo moves, these are inefficient for condensed systems with general interactions, where single-particle moves cannot be evaluated faster than the energy of the whole system. Here we enhance the method by using all-particle moves: either Galilean Monte Carlo or the total enthalpy Hamiltonian Monte Carlo algorithm, introduced in this paper. We show that these algorithms enable the determination of phase transition temperatures with equivalent accuracy to the previous method at 1/N of the cost for an N-particle system with general interactions, or at equal cost when single-particle moves can be done in 1/N of the cost of a full N-particle energy evaluation. We demonstrate this speed-up for the freezing and condensation transitions of the Lennard-Jones system and show the utility of the algorithms by calculating the order-disorder phase transition of a binary Lennard-Jones model alloy, the eutectic of copper-gold, the density anomaly of water, and the condensation and solidification of bead-spring polymers. The nested sampling method with all three algorithms is implemented in the pymatnest software.

  13. Monte Carlo methods for multidimensional integration for European option pricing

    NASA Astrophysics Data System (ADS)

    Todorov, V.; Dimov, I. T.

    2016-10-01

    In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; then the average of independent samples of this random variable is used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk neutral valuation formula. Then, with an appropriate change of the constants, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. Then we compare a specific type of lattice rule with one of the best low-discrepancy sequences, the Sobol sequence, for numerical integration. Quasi-Monte Carlo methods are compared with Adaptive and Crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is of interest to know which of them performs best when evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
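The crude-versus-quasi Monte Carlo comparison above can be sketched in one dimension: the sketch below prices a European call by mapping the unit interval through the inverse normal CDF, once with pseudo-random points and once with a base-2 van der Corput sequence standing in for the Sobol sequence used in the paper. All parameter values are illustrative, not taken from the paper.

```python
import math
import random
from statistics import NormalDist

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def payoff(u):
    """Discounted call payoff as a function of a point u in (0, 1)."""
    z = NormalDist().inv_cdf(u)  # map uniform -> standard normal
    sT = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    return math.exp(-r * T) * max(sT - K, 0.0)

def van_der_corput(i):
    """Base-2 radical inverse: a simple 1-D low-discrepancy sequence."""
    x, denom = 0.0, 1.0
    while i:
        denom *= 2.0
        x += (i % 2) / denom
        i //= 2
    return x

N = 4096
rng = random.Random(0)
mc_est = sum(payoff(rng.random()) for _ in range(N)) / N
qmc_est = sum(payoff(van_der_corput(i)) for i in range(1, N + 1)) / N

# Analytic Black-Scholes price for reference
d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
bs = S0 * NormalDist().cdf(d1) - K * math.exp(-r * T) * NormalDist().cdf(d2)
```

With a fixed budget of points, the low-discrepancy stream typically lands much closer to the analytic price than the pseudo-random stream, which is the effect the paper quantifies in higher dimensions.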

  14. Many-body optimization using an ab initio Monte Carlo method.

    PubMed

    Haubein, Ned C; McMillan, Scott A; Broadbelt, Linda J

    2003-01-01

    Advances in computing power have made it possible to study solvated molecules using ab initio quantum chemistry. Inclusion of discrete solvent molecules is required to determine geometric information about solute/solvent clusters. Monte Carlo methods are well suited to finding minima in many-body systems, and ab initio methods are applicable to the widest range of systems. A first principles Monte Carlo (FPMC) method was developed to find minima in many-body systems, and emphasis was placed on implementing moves that increase the likelihood of finding minimum energy structures. Partial optimization and molecular interchange moves aid in finding minima and overcome the incomplete sampling that is unavoidable when using ab initio methods. FPMC was validated by studying the boron trifluoride-water system, and then the method was used to examine the methyl carbenium ion in water to demonstrate its application to solvation problems.

  15. Importance Sampling Variance Reduction in GRESS ATMOSIM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wakeford, Daniel Tyler

    This document is intended to introduce the importance sampling method of variance reduction to a Geant4 user for application to neutral particle Monte Carlo transport through the atmosphere, as implemented in GRESS ATMOSIM.
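As a generic illustration of the importance sampling idea referenced above (unrelated to the Geant4/GRESS specifics), the sketch below estimates the rare-event probability P(Z > 4) for a standard normal by drawing from a proposal shifted into the tail and reweighting each draw by the density ratio:

```python
import math
import random
from statistics import NormalDist

rng = random.Random(1)
N = 100_000
a = 4.0  # rare-event threshold: P(Z > 4) is about 3.2e-5

# Importance sampling: draw from the shifted proposal N(a, 1) and reweight by
# the density ratio phi(x) / phi(x - a) = exp(a**2 / 2 - a * x).
total = 0.0
for _ in range(N):
    x = rng.gauss(a, 1.0)
    if x > a:
        total += math.exp(a * a / 2.0 - a * x)
is_est = total / N

exact = 1.0 - NormalDist().cdf(a)  # reference value, ~3.167e-5
```

Naive sampling from N(0, 1) would see only a handful of threshold crossings in 100,000 draws; the shifted proposal makes roughly half the draws contribute, which is the variance reduction the importance sampling method buys.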

  16. Estimation of pressure-particle velocity impedance measurement uncertainty using the Monte Carlo method.

    PubMed

    Brandão, Eric; Flesch, Rodolfo C C; Lenzi, Arcanjo; Flesch, Carlos A

    2011-07-01

    The pressure-particle velocity (PU) impedance measurement technique is an experimental method used to measure the surface impedance and the absorption coefficient of acoustic samples in situ or under free-field conditions. In this paper, the measurement uncertainty of the absorption coefficient determined using the PU technique is explored by applying the Monte Carlo method. It is shown that, because of the uncertainty, it is particularly difficult to measure samples with low absorption, and that difficulties associated with the localization of the acoustic centers of the sound source and the PU sensor affect the quality of the measurement roughly to the same extent as errors in the transfer function between pressure and particle velocity do. © 2011 Acoustical Society of America
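Monte Carlo evaluation of measurement uncertainty, as used above, can be sketched on a deliberately simple stand-in model y = a/b rather than the PU impedance chain; the input values below are hypothetical, and the Monte Carlo standard uncertainty is checked against first-order (linear) propagation:

```python
import math
import random

rng = random.Random(42)
N = 200_000

# Hypothetical measurement model: y = a / b, with independent Gaussian inputs.
a_mean, a_std = 1.00, 0.010
b_mean, b_std = 2.00, 0.020

samples = [rng.gauss(a_mean, a_std) / rng.gauss(b_mean, b_std) for _ in range(N)]
mean = sum(samples) / N
u_mc = math.sqrt(sum((s - mean) ** 2 for s in samples) / (N - 1))

# First-order (GUM-style) propagation for comparison:
# u(y)^2 ~ (u_a / b)^2 + (a * u_b / b^2)^2
u_lin = math.sqrt((a_std / b_mean) ** 2 + (a_mean * b_std / b_mean**2) ** 2)
```

For a nearly linear model the two agree closely; the Monte Carlo route keeps working when the measurement model is strongly nonlinear, which is where the paper applies it.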

  17. Improvement of Simulation Method in Validation of Software of the Coordinate Measuring Systems

    NASA Astrophysics Data System (ADS)

    Nieciąg, Halina

    2015-10-01

    Software is used to accomplish various tasks at each stage of the functioning of modern measuring systems. Before metrological confirmation of measuring equipment, the system has to be validated. This paper discusses a method for conducting validation studies of a fragment of software that calculates the values of measurands. Due to the number and nature of the variables affecting coordinate measurement results, and the complex, multi-dimensional character of measurands, the study used the Monte Carlo method of numerical simulation. The article presents an attempted improvement of the results obtained by classic Monte Carlo tools: the LHS (Latin Hypercube Sampling) algorithm was implemented as an alternative to the simple sampling scheme of the classic algorithm.
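A minimal implementation of the Latin Hypercube Sampling scheme mentioned above, assuming the usual construction (one point per stratum in each dimension, with independently shuffled columns):

```python
import random

def latin_hypercube(n, d, rng):
    """Draw n points in [0, 1)^d with Latin hypercube structure: each of the
    n equal-width strata in every dimension contains exactly one point."""
    columns = []
    for _ in range(d):
        column = [(i + rng.random()) / n for i in range(n)]  # one point per stratum
        rng.shuffle(column)  # decorrelate strata across dimensions
        columns.append(column)
    return list(zip(*columns))  # n points, each of dimension d

rng = random.Random(7)
pts = latin_hypercube(10, 2, rng)
```

Unlike simple random sampling, every marginal is guaranteed to be evenly covered, which is the property that reduces the number of simulation runs needed.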

  18. A flexible importance sampling method for integrating subgrid processes

    DOE PAGES

    Raut, E. K.; Larson, V. E.

    2016-01-29

    Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.

  19. Propagating probability distributions of stand variables using sequential Monte Carlo methods

    Treesearch

    Jeffrey H. Gove

    2009-01-01

    A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...

  20. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  1. A Hybrid Monte Carlo importance sampling of rare events in Turbulence and in Turbulent Models

    NASA Astrophysics Data System (ADS)

    Margazoglou, Georgios; Biferale, Luca; Grauer, Rainer; Jansen, Karl; Mesterhazy, David; Rosenow, Tillmann; Tripiccione, Raffaele

    2017-11-01

    Extreme and rare events are a challenging topic in the field of turbulence. Investigating such instances with traditional numerical tools turns out to be a notoriously difficult task, as they fail to systematically sample the fluctuations around these events. We propose instead that an importance sampling Monte Carlo method can selectively highlight extreme events in remote areas of the phase space and induce their occurrence. We present a new computational approach, based on the path integral formulation of stochastic dynamics, and employ an accelerated Hybrid Monte Carlo (HMC) algorithm for this purpose. Through the paradigm of the stochastic one-dimensional Burgers' equation, subjected to random noise that is white in time and power-law correlated in Fourier space, we prove our concept and benchmark our results against standard CFD methods. Furthermore, we present our first results of constrained sampling around saddle-point instanton configurations (optimal fluctuations). The research leading to these results has received funding from the EU Horizon 2020 research and innovation programme under Grant Agreement No. 642069, and from the EU Seventh Framework Programme (FP7/2007-2013) under ERC Grant Agreement No. 339032.
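The Hybrid (Hamiltonian) Monte Carlo machinery named above can be sketched on a toy 1-D Gaussian target; the paper's actual setting, a path integral for the stochastic Burgers' equation, is far richer, but the leapfrog-plus-Metropolis core is the same:

```python
import math
import random

def hmc_sample(n_samples, rng, step=0.2, n_leapfrog=10):
    """Hybrid/Hamiltonian Monte Carlo for the 1-D standard normal target
    with potential U(q) = q**2 / 2 (an illustrative toy problem)."""
    grad_u = lambda q: q
    samples, q = [], 0.0
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)  # resample momentum
        q_new, p_new = q, p
        # Leapfrog integration of the Hamiltonian dynamics
        p_new -= 0.5 * step * grad_u(q_new)
        for _ in range(n_leapfrog - 1):
            q_new += step * p_new
            p_new -= step * grad_u(q_new)
        q_new += step * p_new
        p_new -= 0.5 * step * grad_u(q_new)
        # Metropolis accept/reject on the change in total energy
        dh = (q_new**2 + p_new**2 - q**2 - p**2) / 2.0
        if rng.random() < math.exp(min(0.0, -dh)):
            q = q_new
        samples.append(q)
    return samples

rng = random.Random(3)
draws = hmc_sample(5000, rng)
```

Because the leapfrog trajectory moves far through state space before the accept/reject step, consecutive samples decorrelate quickly, which is what makes HMC attractive for the high-dimensional path-space sampling described in the abstract.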

  2. FW-CADIS Method for Global and Semi-Global Variance Reduction of Monte Carlo Radiation Transport Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2014-01-01

    This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.

  3. Monte Carlo source simulation technique for solution of interference reactions in INAA experiments: a preliminary report

    NASA Astrophysics Data System (ADS)

    Allaf, M. Athari; Shahriari, M.; Sohrabpour, M.

    2004-04-01

    A new method using Monte Carlo source simulation of interference reactions in neutron activation analysis experiments has been developed. The neutron spectrum at the sample location has been simulated using the Monte Carlo code MCNP, and the contributions of different elements to produce a specified gamma line have been determined. The resulting response matrix, together with measured peak areas, has been used to determine the sample masses of the elements of interest. A number of benchmark experiments have been performed and the calculated results verified against known values. The good agreement obtained between the calculated and known values suggests that this technique may be useful for the elimination of interference reactions in neutron activation analysis.

  4. A New Sample Size Formula for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.

    The focus of this research was to determine the efficacy of a new method of selecting sample sizes for multiple linear regression. A Monte Carlo simulation was used to study both empirical predictive power rates and empirical statistical power rates of the new method and seven other methods: those of C. N. Park and A. L. Dudycha (1974); J. Cohen…

  5. Reducing statistical uncertainties in simulated organ doses of phantoms immersed in water

    DOE PAGES

    Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.; ...

    2016-08-13

    In this study, methods are addressed to reduce the computational time to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared, including the reciprocity method, importance sampling, weight windows and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10⁵ when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.

  6. Unbiased, scalable sampling of protein loop conformations from probabilistic priors.

    PubMed

    Zhang, Yajia; Hauser, Kris

    2013-01-01

    Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion-binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion.

  7. Unbiased, scalable sampling of protein loop conformations from probabilistic priors

    PubMed Central

    2013-01-01

    Background Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Results Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Conclusion Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion-binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion. PMID:24565175

  8. Some connections between importance sampling and enhanced sampling methods in molecular dynamics.

    PubMed

    Lie, H C; Quer, J

    2017-11-21

    In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.

  9. Some connections between importance sampling and enhanced sampling methods in molecular dynamics

    NASA Astrophysics Data System (ADS)

    Lie, H. C.; Quer, J.

    2017-11-01

    In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.

  10. A Monte Carlo technique for signal level detection in implanted intracranial pressure monitoring.

    PubMed

    Avent, R K; Charlton, J D; Nagle, H T; Johnson, R N

    1987-01-01

    Statistical monitoring techniques like CUSUM, Trigg's tracking signal and EMP filtering have a major advantage over more recent techniques, such as Kalman filtering, because of their inherent simplicity. In many biomedical applications, such as electronic implantable devices, these simpler techniques have greater utility because of the reduced requirements on power, logic complexity and sampling speed. The determination of signal means using some of the earlier techniques is reviewed in this paper, and a new Monte Carlo based method with greater capability to sparsely sample a waveform and obtain an accurate mean value is presented. This technique may find widespread use as a trend detection method when reduced power consumption is a requirement.
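The sparse-sampling idea can be sketched generically: estimate the mean level of a waveform from a small random subset of its samples, so far fewer analog-to-digital conversions are needed. The signal below is an invented stand-in, not intracranial pressure data:

```python
import math
import random

rng = random.Random(5)

# Hypothetical "waveform": a baseline level plus two pulsatile components.
n = 10_000
signal = [10.0 + 2.0 * math.sin(2 * math.pi * i / 80)
          + 0.5 * math.sin(2 * math.pi * i / 7) for i in range(n)]

full_mean = sum(signal) / n

# Monte Carlo estimate of the mean from a sparse random subset of samples
k = 200
subset = rng.sample(range(n), k)
sparse_mean = sum(signal[i] for i in subset) / k
```

With only 2% of the samples, the random-subset mean tracks the full mean to within a few tenths of a unit, illustrating why random sparse sampling suits power-constrained implanted monitors.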

  11. An Evaluation of a Markov Chain Monte Carlo Method for the Two-Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    The accuracy of the Markov Chain Monte Carlo (MCMC) procedure Gibbs sampling was considered for estimation of item parameters of the two-parameter logistic model. Data for the Law School Admission Test (LSAT) Section 6 were analyzed to illustrate the MCMC procedure. In addition, simulated data sets were analyzed using the MCMC, marginal Bayesian…

  12. Adapting phase-switch Monte Carlo method for flexible organic molecules

    NASA Astrophysics Data System (ADS)

    Bridgwater, Sally; Quigley, David

    2014-03-01

    The role of cholesterol in lipid bilayers has been widely studied via molecular simulation; however, there has been relatively little work on crystalline cholesterol in biological environments. Recent work has linked the crystallisation of cholesterol in the body with heart attacks and strokes. Any attempt to model this process will require new models and advanced sampling methods to capture and quantify the subtle polymorphism of solid cholesterol, in which two crystalline phases are separated by a phase transition close to body temperature. To this end, we have adapted phase-switch Monte Carlo for use with flexible molecules, to calculate the free energy difference between crystal polymorphs to a high degree of accuracy. The method samples an order parameter, which divides a displacement space for the N molecules into regions energetically favourable for each polymorph; this space is traversed using biased Monte Carlo. Results for a simple model of butane will be presented, demonstrating that conformational flexibility can be correctly incorporated within a phase-switching scheme. Extension to a coarse grained model of cholesterol and the resulting free energies will be discussed.

  13. Reactive Monte Carlo sampling with an ab initio potential

    NASA Astrophysics Data System (ADS)

    Leiding, Jeff; Coe, Joshua D.

    2016-05-01

    We present the first application of reactive Monte Carlo in a first-principles context. The algorithm samples in a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulation of NH3 to those of ab initio molecular dynamics (AIMD). We find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the "rare-event" character of chemical reactions.

  14. Optimizing the implementation of the target motion sampling temperature treatment technique - How fast can it get?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuomas, V.; Jaakko, L.

    This article discusses the optimization of the target motion sampling (TMS) temperature treatment method, previously implemented in the Monte Carlo reactor physics code Serpent 2. The TMS method was introduced in [1] and first practical results were presented at the PHYSOR 2012 conference [2]. The method is a stochastic method for taking the effect of thermal motion into account on-the-fly in a Monte Carlo neutron transport calculation. It is based on sampling the target velocities at collision sites and then utilizing the 0 K cross sections at the target-at-rest frame for reaction sampling. The fact that the total cross section becomes a distributed quantity is handled using rejection sampling techniques. The original implementation of TMS requires 2.0 times more CPU time in a PWR pin-cell case than a conventional Monte Carlo calculation relying on pre-broadened effective cross sections. In a HTGR case examined in this paper the overhead factor is as high as 3.6. By first changing from a multi-group to a continuous-energy implementation and then fine-tuning a parameter affecting the conservativity of the majorant cross section, it is possible to decrease the overhead factors to 1.4 and 2.3, respectively. Preliminary calculations are also made using a new and yet incomplete optimization method in which the temperature of the basis cross section is increased above 0 K. It seems that with the new approach it may be possible to decrease the factors even as low as 1.06 and 1.33, respectively, but its functionality has not yet been proven. Therefore, these performance measures should be considered preliminary. (authors)

  15. The Joker: A Custom Monte Carlo Sampler for Binary-star and Exoplanet Radial Velocity Data

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter

    2017-03-01

    Given sparse or low-quality radial velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and Markov chain Monte Carlo (MCMC) posterior sampling over the orbital parameters. Here we create a custom Monte Carlo sampler for sparse or noisy radial velocity measurements of two-body systems that can produce posterior samples for orbital parameters even when the likelihood function is poorly behaved. The six standard orbital parameters for a binary system can be split into four nonlinear parameters (period, eccentricity, argument of pericenter, phase) and two linear parameters (velocity amplitude, barycenter velocity). We capitalize on this by building a sampling method in which we densely sample the prior probability density function (pdf) in the nonlinear parameters and perform rejection sampling using a likelihood function marginalized over the linear parameters. With sparse or uninformative data, the sampling obtained by this rejection sampling is generally multimodal and dense. With informative data, the sampling becomes effectively unimodal but too sparse: in these cases we follow the rejection sampling with standard MCMC. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still informative and can be used in hierarchical (population) modeling. We give some examples that show how the posterior pdf depends sensitively on the number and time coverage of the observations and their uncertainties.
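The dense-prior-sampling-plus-rejection step used by The Joker can be miniaturized on a conjugate toy problem where the posterior is known exactly; this is only a schematic analogue of that step, not the paper's orbit model or marginalized likelihood:

```python
import math
import random

rng = random.Random(11)

# Toy inference: prior theta ~ N(0, 1), one datum x = 1 observed with unit
# Gaussian noise. The exact posterior is N(0.5, sqrt(0.5)).
x = 1.0
M = 200_000

accepted = []
for _ in range(M):
    theta = rng.gauss(0.0, 1.0)                # dense draw from the prior
    like = math.exp(-0.5 * (x - theta) ** 2)   # likelihood, maximum 1 at theta = x
    if rng.random() < like:                    # rejection step against the maximum
        accepted.append(theta)

post_mean = sum(accepted) / len(accepted)
```

The accepted draws are exact posterior samples; when the data are informative the acceptance rate collapses and the surviving samples become sparse, which is the regime in which the paper falls back to standard MCMC.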

  16. Multilevel sequential Monte Carlo samplers

    DOE PAGES

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias, with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
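The MLMC telescoping identity can be sketched on a standard toy problem, Euler-discretized geometric Brownian motion with coupled fine/coarse levels, rather than the paper's PDE-constrained Bayesian setting; all parameters are illustrative:

```python
import math
import random

S0, r, sigma, T = 1.0, 0.05, 0.2, 1.0
rng = random.Random(13)

def euler_level_diff(level):
    """One coupled sample of P_l - P_(l-1) for the payoff S_T under the
    Euler scheme. The fine path uses 2**level steps; the coarse path
    reuses the same Brownian increments summed in pairs."""
    nf = 2 ** level
    hf = T / nf
    dw = [rng.gauss(0.0, math.sqrt(hf)) for _ in range(nf)]
    s_fine = S0
    for w in dw:
        s_fine *= 1.0 + r * hf + sigma * w
    if level == 0:
        return s_fine                          # base level: P_0 itself
    hc = 2.0 * hf
    s_coarse = S0
    for j in range(nf // 2):
        s_coarse *= 1.0 + r * hc + sigma * (dw[2 * j] + dw[2 * j + 1])
    return s_fine - s_coarse

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)]
L, N0 = 4, 40_000
mlmc_est = 0.0
for level in range(L + 1):
    n_l = max(N0 // 2 ** level, 100)           # geometrically decaying sample counts
    mlmc_est += sum(euler_level_diff(level) for _ in range(n_l)) / n_l

true_value = S0 * math.exp(r * T)              # E[S_T] for geometric Brownian motion
```

Because the coupled corrections shrink with the level, most samples are spent on the cheap coarse level, which is how MLMC reduces cost for a given error relative to sampling the finest level alone.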

  17. Online sequential Monte Carlo smoother for partially observed diffusion processes

    NASA Astrophysics Data System (ADS)

    Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain

    2018-12-01

    This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. This method relies on a new sequential Monte Carlo method which allows such approximations to be computed online, i.e., as the observations are received, and with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity by an unbiased estimator obtained for instance using general Poisson estimators. This estimator is proved to be consistent and its performance is illustrated using data from two models.

  18. Latin hypercube sampling and geostatistical modeling of spatial uncertainty in a spatially explicit forest landscape model simulation

    Treesearch

    Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu

    2005-01-01

    Geostatistical stochastic simulation is always combined with Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models as a result of their complexity, it is always infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...

  19. Large-cell Monte Carlo renormalization of irreversible growth processes

    NASA Technical Reports Server (NTRS)

    Nakanishi, H.; Family, F.

    1985-01-01

    Monte Carlo sampling is applied to a recently formulated direct-cell renormalization method for irreversible, disorderly growth processes. Large-cell Monte Carlo renormalization is carried out for various nonequilibrium problems based on the formulation dealing with relative probabilities. Specifically, the method is demonstrated by application to the 'true' self-avoiding walk and the Eden model of growing animals for d = 2, 3, and 4 and to the invasion percolation problem for d = 2 and 3. The results are asymptotically in agreement with expectations; however, unexpected complications arise, suggesting the possibility of crossovers, and in any case demonstrating the danger of using small cells alone, because of the very slow convergence as the cell size b is extrapolated to infinity. The difficulty of applying the present method to the diffusion-limited-aggregation model is commented on.

  20. Multilevel Mixture Kalman Filter

    NASA Astrophysics Data System (ADS)

    Guo, Dong; Wang, Xiaodong; Chen, Rong

    2004-12-01

    The mixture Kalman filter is a general sequential Monte Carlo technique for conditionally linear dynamic systems. It generates samples of some indicator variables recursively, based on sequential importance sampling (SIS), and integrates out the linear and Gaussian state variables conditioned on these indicators. Due to the marginalization process, the complexity of the mixture Kalman filter is quite high if the dimension of the indicator sampling space is high. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to make use of the multilevel or hierarchical structure of the space from which the indicator variables take values. That is, we draw samples in a multilevel fashion, beginning with the highest-level sampling space and then drawing samples from the associated subspace of the newly drawn samples in the next lower-level sampling space, until the desired sampling space is reached. Such a multilevel sampling scheme can be used in conjunction with delayed estimation methods, such as the delayed-sample method, resulting in the delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically coherent and noncoherent 16-QAM over flat-fading channels, are provided to demonstrate the performance of the proposed multilevel mixture Kalman filter.

  1. Depletion Calculations Based on Perturbations. Application to the Study of a Rep-Like Assembly at Beginning of Cycle with TRIPOLI-4®.

    NASA Astrophysics Data System (ADS)

    Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh

    2014-06-01

    For several years, Monte Carlo burnup/depletion codes have coupled Monte Carlo codes, which simulate the neutron transport, with deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine 3-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this paper we present a methodology to avoid these repetitive and time-expensive Monte Carlo simulations by replacing them with perturbation calculations: the successive burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. We first present this method and give details on the perturbative technique used, namely correlated sampling. We then discuss the implementation of this method in the TRIPOLI-4® code, as well as the precise calculation scheme needed to obtain a significant speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a REP-like assembly, studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude.

  2. MC3: Multi-core Markov-chain Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan

    2016-10-01

    MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix the value of parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.

  3. Slice sampling technique in Bayesian extreme of gold price modelling

    NASA Astrophysics Data System (ADS)

    Rostami, Mohammad; Adam, Mohd Bakri; Ibrahim, Noor Akma; Yahya, Mohamed Hisham

    2013-09-01

    In this paper, a simulation study of Bayesian extreme values using Markov chain Monte Carlo via the slice sampling algorithm is implemented. We compare the accuracy of slice sampling with that of other methods for a Gumbel model. The study reveals that the slice sampling algorithm offers more accurate estimates, with lower RMSE, than the other methods. Finally, we employ this procedure to estimate the parameters of extreme Malaysian gold prices from 2000 to 2011.
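For readers unfamiliar with the method, a univariate slice sampler with the standard stepping-out and shrinkage procedure fits in a few lines. The sketch below targets a standard Gumbel density purely for illustration; it is not the paper's actual model or implementation:

```python
import numpy as np

def slice_sample(logpdf, x0, n_samples, w=1.0, rng=None):
    """Univariate slice sampler with stepping-out and shrinkage."""
    if rng is None:
        rng = np.random.default_rng()
    samples, x = [], x0
    for _ in range(n_samples):
        # Auxiliary "height" defining the horizontal slice under the density.
        log_y = logpdf(x) + np.log(rng.random())
        # Step out until both ends are outside the slice.
        left = x - w * rng.random()
        right = left + w
        while logpdf(left) > log_y:
            left -= w
        while logpdf(right) > log_y:
            right += w
        # Shrink the bracket until a proposal lands inside the slice.
        while True:
            x_new = rng.uniform(left, right)
            if logpdf(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        samples.append(x)
    return np.array(samples)

# Standard Gumbel: logpdf(x) = -x - exp(-x); its mean is Euler's gamma ≈ 0.577.
rng = np.random.default_rng(0)
s = slice_sample(lambda x: -x - np.exp(-x), 0.0, 5000, rng=rng)
```

Because every accepted point lies inside the slice by construction, the sampler needs no tuning of an acceptance rate, which is part of its appeal for extreme-value posteriors.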

  4. An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks

    NASA Technical Reports Server (NTRS)

    Kim, Stacy

    2011-01-01

    Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk, such as the optically thick disk interior, are under-sampled, while others, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet, are of particular interest. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations, at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracy and calculation speed.

  5. POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS

    PubMed Central

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2013-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well-known technique of generating a large number of samples in a Monte Carlo study and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculation for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended, and tabled values of required sample sizes are shown for some models. PMID:23935262
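The recipe generalizes beyond Mplus: simulate many data sets under the assumed model, fit each one, and report the fraction of significant estimates as the power. A hypothetical single-mediator sketch (variable names and effect sizes are ours; the paper's models are richer), using a normal-theory Sobel z-test on the indirect effect a·b:

```python
import numpy as np

def ols_slope(x, y):
    """Slope and its standard error for a simple regression of y on x."""
    xc = x - x.mean()
    beta = (xc @ y) / (xc @ xc)
    resid = y - y.mean() - beta * xc
    se = np.sqrt((resid @ resid) / (len(x) - 2) / (xc @ xc))
    return beta, se

def mediation_power(n, a, b, n_sims=1000, seed=0):
    """Monte Carlo power at alpha = .05: fraction of simulated data sets
    whose Sobel z for the indirect effect a*b exceeds 1.96 in magnitude."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = rng.standard_normal(n)
        m = a * x + rng.standard_normal(n)   # mediator model
        y = b * m + rng.standard_normal(n)   # outcome model (no direct effect)
        a_hat, se_a = ols_slope(x, m)
        b_hat, se_b = ols_slope(m, y)
        z = a_hat * b_hat / np.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2)
        hits += abs(z) > 1.96
    return hits / n_sims
```

Running `mediation_power` over a grid of sample sizes n gives exactly the kind of tabled required-n values the abstract mentions.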

  6. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    NASA Technical Reports Server (NTRS)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
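One common way to turn cheap sensitivity derivatives into variance reduction is to use the first-order Taylor expansion of the response as a control variate with analytically known mean. The sketch below illustrates that idea on a toy integrand; it is our own construction under that assumption, not the paper's exact scheme:

```python
import numpy as np

def mc_with_derivative_control(f, grad_f, mu, sigma, n, rng):
    """Return a plain MC estimate of E[f(X)], X ~ N(mu, sigma^2 I), next to
    an estimate that subtracts the linear Taylor surrogate, whose exact
    mean under the sampling distribution is f(mu)."""
    x = mu + sigma * rng.standard_normal((n, len(mu)))
    fx = np.array([f(xi) for xi in x])
    lin = f(mu) + (x - mu) @ grad_f(mu)   # surrogate with known mean f(mu)
    return fx.mean(), (fx - lin).mean() + f(mu)

# Toy integrand: true mean is E[sin(X0)] + E[X1^2] = 0 + sigma^2 = 0.01.
f = lambda v: np.sin(v[0]) + v[1] ** 2
grad = lambda v: np.array([np.cos(v[0]), 2.0 * v[1]])
rng = np.random.default_rng(0)
plain, controlled = mc_with_derivative_control(f, grad, np.zeros(2), 0.1, 200, rng)
```

The controlled estimator only has to average the nonlinear residual f(x) − f(mu) − ∇f(mu)·(x − mu), which is why its variance drops sharply when the response is nearly linear over the sampled region.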

  7. A Comparison of Single Sample and Bootstrap Methods to Assess Mediation in Cluster Randomized Trials

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Stapleton, Laura M.; Kang, Joo Youn

    2006-01-01

    A Monte Carlo study examined the statistical performance of single sample and bootstrap methods that can be used to test and form confidence interval estimates of indirect effects in two cluster randomized experimental designs. The designs were similar in that they featured random assignment of clusters to one of two treatment conditions and…

  8. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach mitigates this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solving a preconditioned ℓ¹-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm, along with numerical results showing that our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples also demonstrate that the proposed algorithm gives comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  10. Data collection strategies for time-resolved X-ray free-electron laser diffraction, and 2-color methods

    PubMed Central

    Li, Chufeng; Schmidt, Kevin; Spence, John C.

    2015-01-01

    We compare three schemes for time-resolved X-ray diffraction from protein nanocrystals using an X-ray free-electron laser. We find expressions for the errors in structure factor measurement using the Monte Carlo pump-probe method of data analysis with a liquid jet, the fixed sample pump-probe (goniometer) method (both diffract-and-destroy, and below the safe damage dose), and a proposed two-color method. Here, an optical pump pulse arrives between X-ray pulses of slightly different energies which hit the same nanocrystal, using a weak first X-ray pulse which does not damage the sample. (Radiation damage is outrun in the other cases.) This two-color method, in which separated Bragg spots are impressed on the same detector readout, eliminates stochastic fluctuations in crystal size, shape, and orientation and is found to require two orders of magnitude fewer diffraction patterns than the currently used Monte Carlo liquid jet method, for 1% accuracy. Expressions are given for errors in structure factor measurement for the four approaches, and detailed simulations provided for cathepsin B and IC3 crystals. While the error is independent of the number of shots for the dose-limited goniometer method, it falls off inversely as the square root of the number of shots for the two-color and Monte Carlo methods, with a much smaller pre-factor for the two-color mode, when the first shot is below the damage threshold. PMID:26798813

  11. Stochastic, real-space, imaginary-time evaluation of third-order Feynman-Goldstone diagrams

    NASA Astrophysics Data System (ADS)

    Willow, Soohaeng Yoo; Hirata, So

    2014-01-01

    A new, alternative set of interpretation rules of Feynman-Goldstone diagrams for many-body perturbation theory is proposed, which translates diagrams into algebraic expressions suitable for direct Monte Carlo integrations. A vertex of a diagram is associated with a Coulomb interaction (rather than a two-electron integral) and an edge with the trace of a Green's function in real space and imaginary time. With these, 12 diagrams of third-order many-body perturbation (MP3) theory are converted into 20-dimensional integrals, which are then evaluated by a Monte Carlo method. It uses redundant walkers for convergence acceleration and a weight function for importance sampling in conjunction with the Metropolis algorithm. The resulting Monte Carlo MP3 method has low-rank polynomial size dependence of the operation cost, a negligible memory cost, and a naturally parallel computational kernel, while reproducing the correct correlation energies of small molecules within a few mEh after 10^6 Monte Carlo steps.
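The abstract's key ingredients, Metropolis sampling of a weight function and evaluation of the integrand-to-weight ratio, can be illustrated on a one-dimensional quadrature toy problem (not the 20-dimensional MP3 integrals themselves; the weight choice below is ours):

```python
import numpy as np

def metropolis(logw, x0, n, step, rng):
    """Random-walk Metropolis chain targeting the unnormalized density exp(logw)."""
    x, lw = x0, logw(x0)
    out = np.empty(n)
    for i in range(n):
        prop = x + step * rng.standard_normal()
        lp = logw(prop)
        if np.log(rng.random()) < lp - lw:   # accept with prob min(1, w'/w)
            x, lw = prop, lp
        out[i] = x
    return out

# Importance-sampled quadrature: I = ∫ f(x) dx = E_w[f(X)/w(X)] for a
# normalized weight density w. Here f(x) = exp(-x^2), so I = sqrt(pi),
# and w is a standard normal sampled with Metropolis for illustration.
rng = np.random.default_rng(0)
xs = metropolis(lambda x: -0.5 * x * x, 0.0, 50000, 1.0, rng)
w = lambda x: np.exp(-0.5 * x * x) / np.sqrt(2 * np.pi)
estimate = np.mean(np.exp(-xs ** 2) / w(xs))   # ≈ sqrt(pi)
```

Choosing a weight that concentrates samples where the integrand is large is the importance-sampling step; the Metropolis chain removes the need to normalize or invert that weight.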

  12. Monte-Carlo-based phase retardation estimator for polarization sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Duan, Lian; Makita, Shuichi; Yamanari, Masahiro; Lim, Yiheng; Yasuno, Yoshiaki

    2011-08-01

    A Monte-Carlo-based phase retardation estimator is developed to correct the systematic error in phase retardation measurement by polarization sensitive optical coherence tomography (PS-OCT). Recent research has revealed that the phase retardation measured by PS-OCT has a distribution that is neither symmetric nor centered at the true value. Hence, a standard mean estimator gives erroneous estimates of phase retardation and degrades the performance of PS-OCT for quantitative assessment. In this paper, the noise properties of phase retardation are investigated in detail by Monte-Carlo simulation and experiments. A distribution transform function is designed to eliminate the systematic error by using the result of the Monte-Carlo simulation. This distribution transformation is followed by a mean estimator, a process that provides a significantly better estimation of phase retardation than a standard mean estimator alone. The method is validated both by numerical simulations and experiments, and its application to in vitro and in vivo biological samples is also demonstrated.

  13. Estimation of distributional parameters for censored trace level water quality data: 2. Verification and applications

    USGS Publications Warehouse

    Helsel, Dennis R.; Gilliom, Robert J.

    1986-01-01

    Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.

  14. The Joker: A custom Monte Carlo sampler for binary-star and exoplanet radial velocity data

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter

    2017-01-01

    Given sparse or low-quality radial-velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and MCMC posterior sampling over the orbital parameters. The Joker is a custom-built Monte Carlo sampler that can produce a posterior sampling for orbital parameters given sparse or noisy radial-velocity measurements, even when the likelihood function is poorly behaved. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still highly informative and can be used in hierarchical (population) modeling.

  15. Reactive Monte Carlo sampling with an ab initio potential

    DOE PAGES

    Leiding, Jeff; Coe, Joshua D.

    2016-05-04

    We present the first application of reactive Monte Carlo (RxMC) in a first-principles context. The algorithm samples a modified NVT ensemble in which the volume, temperature, and total number of atoms of each type are held fixed, but the molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulations of NH3 to those of ab initio molecular dynamics (AIMD), and find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the "rare-event" character of chemical reactions.

  16. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…

  17. A New Monte Carlo Method for Estimating Marginal Likelihoods.

    PubMed

    Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O

    2018-06-01

    Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
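As a baseline for the estimators the paper generalizes, the harmonic mean estimator needs only the per-draw log likelihoods from a single posterior sample. A numerically stable implementation, checked on a toy conjugate-normal model of our own construction (not one of the paper's examples):

```python
import numpy as np

def log_marginal_harmonic_mean(loglik):
    """Harmonic mean estimator: 1/m(y) ≈ (1/S) Σ 1/p(y|θ_s) over posterior
    draws θ_s. Computed in log space for numerical stability. Known to be
    unstable in general, which motivates the paper's weighted generalization."""
    ll = np.asarray(loglik, dtype=float)
    a = (-ll).max()
    return -(a + np.log(np.mean(np.exp(-ll - a))))

# Toy conjugate model: y_i ~ N(θ, 1), prior θ ~ N(0, 1), so the exact
# posterior is N(Σy/(n+1), 1/(n+1)) and we can sample it directly.
rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, size=5)
theta = rng.normal(y.sum() / 6, np.sqrt(1 / 6), size=20000)
loglik = np.array([-0.5 * np.sum((y - t) ** 2) - 2.5 * np.log(2 * np.pi)
                   for t in theta])
est = log_marginal_harmonic_mean(loglik)
```

The max-shift inside the log keeps the reciprocal likelihoods from overflowing, which matters because the estimator is dominated by the smallest likelihood values in the sample.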

  18. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics, and correlated multi-probe datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov chain Monte Carlo approaches when the likelihood is intractable or unknown. The ABC method is called "likelihood free" as it avoids explicit evaluation of the likelihood by using a forward model simulation of the data, which can include systematics. We introduce astroABC, an open-source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high-dimensional, possibly correlated parameter spaces. With this in mind, astroABC allows for massive parallelization using MPI, a framework that handles the spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimation using scikit-learn's KDTree; modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files backed up at every iteration; user-defined distance metrics and simulation methods; a module for specifying heterogeneous parameter priors, including non-standard prior PDFs; a module for specifying a constant, linear, log, or exponential tolerance level; and well-documented examples and sample scripts.
This code is hosted online at https://github.com/EliseJ/astroABC.
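astroABC's SMC machinery builds on the basic ABC rejection step, which is easy to state: draw parameters from the prior, simulate data, and keep the draws whose simulation lands within a tolerance of the observation. A minimal sketch on a hypothetical Gaussian-mean toy problem (not astroABC's API):

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_draws, rng):
    """Minimal ABC rejection sampler. astroABC refines this basic step with
    an SMC scheme and adaptive tolerances; this shows only the core idea."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer a Gaussian mean from the sample-mean summary statistic.
rng = np.random.default_rng(0)
post = abc_rejection(
    observed=1.0,
    simulate=lambda th, r: r.normal(th, 1.0, size=50).mean(),
    prior_sample=lambda r: r.uniform(-5, 5),
    distance=lambda a, b: abs(a - b),
    eps=0.1, n_draws=20000, rng=rng)
```

The accepted draws approximate the posterior without a single likelihood evaluation; the SMC variant exists precisely because this rejection step wastes most simulations when the tolerance is tight.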

  19. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use three methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method treats tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
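Method 1, the classical Monte Carlo p-value, is simple to sketch: simulate the test statistic under the null and report the (corrected) proportion of simulations at least as extreme as the observed value. A toy z-statistic example (not the package's actual density-based statistic):

```python
import numpy as np

def mc_pvalue(t_obs, simulate_stat, n_sims, rng):
    """Classical Monte Carlo p-value: proportion of null simulations whose
    statistic is at least as extreme as the observed one, with the standard
    +1 correction so the estimate is never exactly zero."""
    t_null = np.array([simulate_stat(rng) for _ in range(n_sims)])
    return (1 + np.sum(t_null >= t_obs)) / (n_sims + 1)

# Toy example: two-sided z statistic |mean| * sqrt(n) under a N(0, 1) null.
rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=50)            # data with a genuine shift
t_obs = abs(x.mean()) * np.sqrt(len(x))
p = mc_pvalue(t_obs,
              lambda r: abs(r.standard_normal(50).mean()) * np.sqrt(50),
              4999, rng)
```

The +1 correction keeps the test valid at any simulation budget; the hybrid method in the abstract exists because pushing this p-value to high precision requires many null simulations.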

  20. A histogram-free multicanonical Monte Carlo algorithm for the construction of analytical density of states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisenbach, Markus; Li, Ying Wai

    We report a new multicanonical Monte Carlo (MC) algorithm to obtain the density of states (DOS) for physical systems with continuous state variables in statistical mechanics. Our algorithm is able to obtain an analytical form for the DOS expressed in a chosen basis set, instead of a numerical array of finite resolution as in previous variants of this class of MC methods, such as multicanonical (MUCA) sampling and Wang-Landau (WL) sampling. This is enabled by storing the visited states directly in a data set and avoiding the explicit collection of a histogram. This practice also has the advantage of avoiding undesirable artificial errors caused by the discretization and binning of continuous state variables. Our results show that this scheme is capable of obtaining converged results with a much reduced number of Monte Carlo steps, leading to a significant speedup over existing algorithms.

  1. Probability techniques for reliability analysis of composite materials

    NASA Technical Reports Server (NTRS)

    Wetherhold, Robert C.; Ucci, Anthony M.

    1994-01-01

    Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures, since strengths and stresses may be random variables. This report examines and compares methods used to evaluate the reliability of composite laminae. The two types of methods evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods evaluated are: first-order, second-moment FPI methods; second-order, second-moment FPI methods; simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.

  2. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, A; Zbijewski, W; Bolch, W

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally, while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g., at a small number of detector pixels in a scatter simulation) followed by interpolation. 
Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. 
To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in highperformance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and “sparse sampling” will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. 
Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.

  3. Stochastic Investigation of Natural Frequency for Functionally Graded Plates

    NASA Astrophysics Data System (ADS)

    Karsh, P. K.; Mukhopadhyay, T.; Dey, S.

    2018-03-01

This paper presents the stochastic natural frequency analysis of functionally graded (FGM) plates using an artificial neural network (ANN) approach. Latin hypercube sampling is utilised to train the ANN model. The proposed algorithm for stochastic natural frequency analysis of FGM plates is validated and verified against the original finite element method and Monte Carlo simulation (MCS). The combined stochastic variation of input parameters such as elastic modulus, shear modulus, Poisson's ratio, and mass density is considered. A power law is applied to distribute the material properties across the thickness. The present ANN model reduces the required sample size and is found to be computationally efficient compared with conventional Monte Carlo simulation.
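The Latin hypercube design used to build such training sets can be sketched in a few lines: each input range is cut into N equal-probability strata, exactly one draw is taken per stratum, and the strata are paired randomly across dimensions, so even a small sample covers the whole input space. This is an illustrative sketch, not the authors' code, and the two parameter ranges shown are invented:

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube design: each variable's range is split into
    n_samples equal strata, one uniform draw per stratum, and the
    stratum order is shuffled independently per dimension."""
    rng = random.Random(seed)
    design = []
    for lo, hi in bounds:
        # one uniform draw inside each of the n_samples strata of [0, 1)
        column = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(column)  # decouple stratum order across dimensions
        design.append([lo + u * (hi - lo) for u in column])
    # transpose: return a list of sample points
    return list(zip(*design))

# e.g. 10 training points over invented (elastic modulus, mass density) ranges
points = latin_hypercube(10, [(60e9, 80e9), (2500.0, 2900.0)])
```

Each row of `points` is one training input for the surrogate model; every one-dimensional projection of the design hits all ten strata exactly once.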

  4. Monte Carlo algorithms for Brownian phylogenetic models.

    PubMed

    Horvilleur, Benjamin; Lartillot, Nicolas

    2014-11-01

    Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
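The fine-grained discretization described above amounts to sampling a Brownian trajectory conditioned on the values already fixed at the two ends of a branch, i.e. a Brownian bridge. A minimal univariate sketch of that conditional sampling (illustrative only; the actual sampler handles multivariate paths inside an MCMC scheme):

```python
import random

def brownian_bridge(x_start, x_end, branch_length, n_points, sigma, seed=0):
    """Sample a discretized Brownian path conditioned on both endpoint
    values (a Brownian bridge), e.g. a log substitution rate at the two
    ends of a branch of length branch_length, on an n_points grid."""
    rng = random.Random(seed)
    dt = branch_length / n_points
    path = [x_start]
    for k in range(1, n_points):
        t_right = branch_length - k * dt   # time left after the new point
        prev = path[-1]
        # conditional law of the bridge at the next grid point, given the
        # previous point and the fixed endpoint x_end
        mean = prev + (x_end - prev) * dt / (t_right + dt)
        var = sigma ** 2 * dt * t_right / (t_right + dt)
        path.append(rng.gauss(mean, var ** 0.5))
    path.append(x_end)
    return path
```

With `sigma = 0` the recursion collapses to linear interpolation between the endpoints, which is a quick sanity check of the conditional mean.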

  5. A novel hybrid scattering order-dependent variance reduction method for Monte Carlo simulations of radiative transfer in cloudy atmosphere

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo

    2017-03-01

We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate of both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase functions. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in a scattering-order-dependent integral equation, but also generalizes the variance reduction formalism to a wide range of simulation scenarios. In previous studies, variance reduction was achieved either by forward truncation of the scattering phase function or by target directional importance sampling. Our method combines both. A novel feature of our method is that all the tuning parameters used for the phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm that remodels the integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy, order by order.

  6. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

Analysis of flood trends is vital since flooding threatens human life and property, with financial, environmental, and security consequences. The annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that the MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference approach that estimates parameters using the posterior distribution given by Bayes’ theorem. The Metropolis-Hastings algorithm is used to cope with the high-dimensional state space faced by the Monte Carlo method. This approach also accounts for more of the uncertainty in parameter estimation, and thus provides better predictions of maximum river flow in Sabah.
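The Metropolis-Hastings sampler at the core of such a study is generic: propose a perturbed state, and accept it with probability min(1, exp(Δ log-posterior)). The sketch below targets a standard normal log-density rather than the GEV posterior used in the paper, purely so that the example is self-contained and checkable:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_iter, step, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2) and
    accept with probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop  # accept; otherwise keep the old state
        chain.append(x)
    return chain

# toy target: standard normal log-density (up to an additive constant)
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000, 1.0)
```

In a GEV application, `log_post` would instead be the GEV log-likelihood of the annual maxima plus the log-prior over (location, scale, shape), and the walk would run over that three-dimensional parameter vector.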

  7. Evaluation and characterization of thyroid-disrupting activities in soil samples along the Second Songhua River, China.

    PubMed

    Kong, Dongdong; Wang, Yafei; Wang, Jinsheng; Teng, Yanguo; Li, Na; Li, Jian

    2016-11-01

In this study, a recombinant thyroid receptor (TR) gene yeast assay combined with Monte Carlo simulation was used to evaluate and characterize soil samples collected from Jilin (China) along the Second Songhua River for their agonistic/antagonistic effects on TR. No TR agonistic activity was found in the soils, but many soil samples exhibited TR antagonistic activities, and the bioassay-derived amiodarone hydrochloride equivalents, calculated based on Monte Carlo simulation, ranged from not detected (N.D.) to 35.5 μg/g. Hydrophilic substance fractions were determined to be the contributors to the TR antagonistic activity in these soil samples. Our results indicate that the novel calculation method is effective for the quantification and characterization of TR antagonists in soil samples, and these data could provide useful information for future management and remediation efforts for contaminated soils. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Binding Modes of Ligands Using Enhanced Sampling (BLUES): Rapid Decorrelation of Ligand Binding Modes via Nonequilibrium Candidate Monte Carlo.

    PubMed

    Gill, Samuel C; Lim, Nathan M; Grinaway, Patrick B; Rustenburg, Ariën S; Fass, Josh; Ross, Gregory A; Chodera, John D; Mobley, David L

    2018-05-31

    Accurately predicting protein-ligand binding affinities and binding modes is a major goal in computational chemistry, but even the prediction of ligand binding modes in proteins poses major challenges. Here, we focus on solving the binding mode prediction problem for rigid fragments. That is, we focus on computing the dominant placement, conformation, and orientations of a relatively rigid, fragment-like ligand in a receptor, and the populations of the multiple binding modes which may be relevant. This problem is important in its own right, but is even more timely given the recent success of alchemical free energy calculations. Alchemical calculations are increasingly used to predict binding free energies of ligands to receptors. However, the accuracy of these calculations is dependent on proper sampling of the relevant ligand binding modes. Unfortunately, ligand binding modes may often be uncertain, hard to predict, and/or slow to interconvert on simulation time scales, so proper sampling with current techniques can require prohibitively long simulations. We need new methods which dramatically improve sampling of ligand binding modes. Here, we develop and apply a nonequilibrium candidate Monte Carlo (NCMC) method to improve sampling of ligand binding modes. In this technique, the ligand is rotated and subsequently allowed to relax in its new position through alchemical perturbation before accepting or rejecting the rotation and relaxation as a nonequilibrium Monte Carlo move. When applied to a T4 lysozyme model binding system, this NCMC method shows over 2 orders of magnitude improvement in binding mode sampling efficiency compared to a brute force molecular dynamics simulation. This is a first step toward applying this methodology to pharmaceutically relevant binding of fragments and, eventually, drug-like molecules. 
We are making this approach available via our new Binding modes of ligands using enhanced sampling (BLUES) package which is freely available on GitHub.

  9. Improved non-destructive method for 90Sr activity determination in aqueous solutions using Monte Carlo simulation.

    PubMed

    Samardžić, Selena; Milošević, Miodrag; Todorović, Nataša; Lakatoš, Robert

    2018-04-04

The development of new methods and the improvement of existing methods for determining the specific activity of 90Sr and other distinct beta emitters have been of considerable interest. The reason for this interest is the notably small number of methods able to meet all of the required criteria, such as reliability of the results, measurement uncertainty and time, and minimal production of radioactive waste, as well as applicability to various samples with respect to their nature, geometry and composition. In this paper, two methods for rapid 90Sr activity determination based on Monte Carlo simulations are used: one for a Si semiconductor detector for beta spectrometric measurements, and the other for a Geiger-Müller (GM) ionization probe. To improve the reliability of the measurement results, samples with high- and low-activity strontium solutions were prepared in the form of dry residues. The results of the proposed methodology were verified against a standard method using a liquid scintillation counter, and notably good agreement was achieved. Copyright © 2018 Elsevier Ltd. All rights reserved.
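As a toy illustration of the kind of geometry factor such detector simulations estimate (not the authors' Si or GM model), one can Monte Carlo the geometric efficiency of a coaxial circular detector window viewed by an isotropic point source and compare it against the analytic solid-angle fraction:

```python
import math
import random

def geometric_efficiency(radius, distance, n_events, seed=0):
    """Fraction of isotropically emitted particles from an on-axis point
    source that reach a coaxial circular detector window of the given
    radius placed at the given distance."""
    rng = random.Random(seed)
    # cosine of the half-angle subtended by the detector edge
    cos_max = distance / math.hypot(distance, radius)
    hits = 0
    for _ in range(n_events):
        cos_theta = rng.uniform(-1.0, 1.0)  # isotropic direction in 4*pi
        if cos_theta > cos_max:
            hits += 1
    return hits / n_events

# the analytic solid-angle fraction is (1 - cos_max) / 2
eff = geometric_efficiency(1.0, 2.0, 200000)
```

A full simulation would add attenuation in the dry residue, backscatter, and the detector response, but the hit-counting structure is the same.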

  10. CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei

    2014-12-01

We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimension, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge and spin gaps.

  11. Dynamic response analysis of structure under time-variant interval process model

    NASA Astrophysics Data System (ADS)

    Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao

    2016-10-01

Due to aggressive environmental factors, variation of dynamic loads, degradation of material properties and wear of machine surfaces, parameters related to a structure are distinctly time-variant. The typical model for time-variant uncertainties is the random process model, which must be constructed from a large number of samples. In this work, we propose a time-variant interval process model which can effectively deal with time-variant uncertainties when only limited information is available. Two methods are then presented for the dynamic response analysis of structures under the time-variant interval process model. The first is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second is a Monte Carlo method based on Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials, which can be calculated efficiently, and the variational range of the dynamic response is then estimated from the samples yielded by the Monte Carlo method. To resolve the dependency phenomenon of interval arithmetic, affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE are verified by two numerical examples: a spring-mass-damper system and a shell structure.
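The core of the MCM-CPE idea, replacing an expensive response by a cheap Chebyshev surrogate and then Monte Carlo sampling the surrogate over the interval parameter, can be sketched on a scalar toy response. The function `sin(k)/k` and the interval [1, 3] below are invented stand-ins, not the paper's structural model:

```python
import math
import random

def chebyshev_fit(f, a, b, degree):
    """Interpolate f on [a, b] at Chebyshev nodes; return the expansion
    coefficients of p(x) = sum_j c_j * T_j(t), t the mapped coordinate."""
    n = degree + 1
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    vals = [f(0.5 * (a + b) + 0.5 * (b - a) * t) for t in nodes]
    coeffs = []
    for j in range(n):
        c = 2.0 / n * sum(vals[k] * math.cos(math.pi * j * (k + 0.5) / n)
                          for k in range(n))
        coeffs.append(c)
    coeffs[0] *= 0.5
    return coeffs

def chebyshev_eval(coeffs, a, b, x):
    t = (2.0 * x - a - b) / (b - a)          # map [a, b] to [-1, 1]
    return sum(c * math.cos(j * math.acos(t)) for j, c in enumerate(coeffs))

# surrogate for an "expensive" response over an interval parameter, then
# Monte Carlo sampling of the cheap surrogate to bound the response
f = lambda k: math.sin(k) / k                # invented stand-in response
coeffs = chebyshev_fit(f, 1.0, 3.0, 8)
rng = random.Random(0)
samples = [chebyshev_eval(coeffs, 1.0, 3.0, rng.uniform(1.0, 3.0))
           for _ in range(5000)]
lo, hi = min(samples), max(samples)          # estimated response range
```

In the paper's setting the surrogate is built over the interval process at each time instant, and affine arithmetic controls the dependency between interval operations; this sketch keeps only the surrogate-plus-sampling skeleton.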

  12. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood, or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. The thermodynamic integration method is thus mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
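The path-sampling identity behind thermodynamic integration is ln Z = ∫₀¹ E_β[ln L] dβ, where the expectation is taken under the power posterior p(θ|β) ∝ p(θ) L(θ)^β. A hedged sketch on a conjugate normal toy problem with a known answer (the β ladder, chain lengths and step size are arbitrary illustrative choices):

```python
import math
import random

def log_prior(x):            # N(0, 1) prior
    return -0.5 * x * x - 0.5 * math.log(2 * math.pi)

def log_like(x, y=1.0):      # a single observation y ~ N(x, 1)
    return -0.5 * (y - x) ** 2 - 0.5 * math.log(2 * math.pi)

def power_posterior_mean(beta, n_iter=4000, step=1.5, seed=0):
    """E_beta[log L] under prior * likelihood^beta, estimated with a
    random-walk Metropolis chain at that power coefficient."""
    rng = random.Random(seed)
    x = 0.0
    lp = log_prior(x) + beta * log_like(x)
    acc = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_prior(prop) + beta * log_like(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        acc.append(log_like(x))
    acc = acc[500:]                      # discard burn-in
    return sum(acc) / len(acc)

# trapezoidal rule over a ladder of power coefficients from prior to posterior
betas = [i / 10 for i in range(11)]
means = [power_posterior_mean(b, seed=i) for i, b in enumerate(betas)]
log_z = sum(0.5 * (means[i] + means[i + 1]) * (betas[i + 1] - betas[i])
            for i in range(10))

# exact marginal likelihood for this conjugate model: y ~ N(0, 2)
exact = -0.5 * math.log(2 * math.pi * 2) - 1.0 / 4
```

The integrand E_β[ln L] rises monotonically from the prior average at β = 0 to the posterior average at β = 1, which is why samples "generated gradually from the prior to posterior parameter space" are exactly what the estimator needs.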

  13. Improvement and performance evaluation of the perturbation source method for an exact Monte Carlo perturbation calculation in fixed source problems

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hiroki; Yamamoto, Toshihiro

    2017-09-01

This paper presents improvement and performance evaluation of the "perturbation source method", which is one of the Monte Carlo perturbation techniques. The formerly proposed perturbation source method was first-order accurate, although it is known that the method can easily be extended to an exact perturbation method. A transport equation for calculating the exact flux difference caused by a perturbation is solved. A perturbation particle representing a flux difference is explicitly transported in the perturbed system, instead of in the unperturbed system. The source term of the transport equation is defined by the unperturbed flux and the cross section (or optical parameter) changes. The unperturbed flux is provided by an "on-the-fly" technique during the course of the ordinary fixed source calculation for the unperturbed system. A set of perturbation particles is started at the collision points in the perturbed region and tracked until death. For a perturbation in a smaller portion of the whole domain, the efficiency of the perturbation source method can be improved by using a virtual scattering coefficient or cross section in the perturbed region, forcing collisions. Performance is evaluated by comparing the proposed method to other Monte Carlo perturbation methods. Numerical tests performed for particle transport in a two-dimensional geometry reveal that the perturbation source method is less effective than the correlated sampling method for a perturbation in a larger portion of the whole domain. However, for a perturbation in a smaller portion, the perturbation source method outperforms the correlated sampling method. The efficiency depends strongly on the adjustment of the new virtual scattering coefficient or cross section.

  14. Estimating the Expected Value of Sample Information Using the Probabilistic Sensitivity Analysis Sample

    PubMed Central

    Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny

    2015-01-01

    Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269
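The regression-based estimator can be sketched with a deliberately crude nonparametric regression (binned means) on an invented two-decision toy model. The published method uses a more sophisticated regression, but the structure is the same: simulate a summary statistic of the proposed study for each PSA draw, regress net benefit on that statistic, and compare the expected maximum of the fitted values against the current best decision:

```python
import random

def evsi_from_psa(theta, nb, summarize, n_bins=20, seed=0):
    """Per-patient EVSI from a probabilistic sensitivity analysis sample.
    theta: parameter draws; nb: dict mapping decision -> net-benefit list;
    summarize: simulates the proposed study's summary statistic given theta.
    The nonparametric regression here is deliberately crude: binned means
    (n_bins must divide len(theta) evenly in this sketch)."""
    rng = random.Random(seed)
    n = len(theta)
    stats = [summarize(t, rng) for t in theta]
    order = sorted(range(n), key=lambda i: stats[i])
    size = n // n_bins
    fitted = {d: [0.0] * n for d in nb}
    for d, values in nb.items():
        for b in range(n_bins):
            idx = order[b * size:(b + 1) * size]
            m = sum(values[i] for i in idx) / len(idx)
            for i in idx:
                fitted[d][i] = m        # regression of net benefit on statistic
    current_best = max(sum(v) / n for v in nb.values())
    return sum(max(f[i] for f in fitted.values()) for i in range(n)) / n - current_best

# invented toy model: adopt (net benefit 1000*theta) vs reject (0);
# proposed study: the mean of 25 noisy observations of theta
rng = random.Random(1)
theta = [rng.gauss(0.1, 0.2) for _ in range(4000)]
nb = {"adopt": [1000.0 * t for t in theta], "reject": [0.0] * 4000}
study = lambda t, r: sum(r.gauss(t, 0.5) for _ in range(25)) / 25
evsi = evsi_from_psa(theta, nb, study)
```

Note that no posterior sampling and no model re-runs are needed: only the PSA draws and simulated data summaries, which is the point of the method. By construction the estimate lies between 0 and the expected value of perfect information.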

  15. Monte Carlo-based interval transformation analysis for multi-criteria decision analysis of groundwater management strategies under uncertain naphthalene concentrations and health risks

    NASA Astrophysics Data System (ADS)

    Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong

    2016-08-01

A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) reducing computational time through the introduction of a Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered for each duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management horizon.

  16. Subtle Monte Carlo Updates in Dense Molecular Systems.

    PubMed

    Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper

    2012-02-14

    Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain, and subsequently solves a preconditioned $\ell^1$-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm, and numerical results showing that our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  18. Deriving photometric redshifts using fuzzy archetypes and self-organizing maps - I. Methodology

    NASA Astrophysics Data System (ADS)

    Speagle, Joshua S.; Eisenstein, Daniel J.

    2017-07-01

    We propose a method to substantially increase the flexibility and power of template fitting-based photometric redshifts by transforming a large number of galaxy spectral templates into a corresponding collection of 'fuzzy archetypes' using a suitable set of perturbative priors designed to account for empirical variation in dust attenuation and emission-line strengths. To bypass widely separated degeneracies in parameter space (e.g. the redshift-reddening degeneracy), we train self-organizing maps (SOMs) on large 'model catalogues' generated from Monte Carlo sampling of our fuzzy archetypes to cluster the predicted observables in a topologically smooth fashion. Subsequent sampling over the SOM then allows full reconstruction of the relevant probability distribution functions (PDFs). This combined approach enables the multimodal exploration of known variation among galaxy spectral energy distributions with minimal modelling assumptions. We demonstrate the power of this approach to recover full redshift PDFs using discrete Markov chain Monte Carlo sampling methods combined with SOMs constructed from Large Synoptic Survey Telescope ugrizY and Euclid YJH mock photometry.

  19. Unified framework to evaluate panmixia and migration direction among multiple sampling locations.

    PubMed

    Beerli, Peter; Palczewski, Michal

    2010-05-01

    For many biological investigations, groups of individuals are genetically sampled from several geographic locations. These sampling locations often do not reflect the genetic population structure. We describe a framework using marginal likelihoods to compare and order structured population models, such as testing whether the sampling locations belong to the same randomly mating population or comparing unidirectional and multidirectional gene flow models. In the context of inferences employing Markov chain Monte Carlo methods, the accuracy of the marginal likelihoods depends heavily on the approximation method used to calculate the marginal likelihood. Two methods, modified thermodynamic integration and a stabilized harmonic mean estimator, are compared. With finite Markov chain Monte Carlo run lengths, the harmonic mean estimator may not be consistent. Thermodynamic integration, in contrast, delivers considerably better estimates of the marginal likelihood. The choice of prior distributions does not influence the order and choice of the better models when the marginal likelihood is estimated using thermodynamic integration, whereas with the harmonic mean estimator the influence of the prior is pronounced and the order of the models changes. The approximation of marginal likelihood using thermodynamic integration in MIGRATE allows the evaluation of complex population genetic models, not only of whether sampling locations belong to a single panmictic population, but also of competing complex structured population models.

  20. Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas

    2017-04-01

Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. In practice, however, these methods can be associated with huge computational costs that limit their application. This is not least due to the computational requirements of solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a computationally complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error, which is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival travel time inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.

  1. Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module

    NASA Astrophysics Data System (ADS)

    Martinez, Gregory D.; McKay, James; Farmer, Ben; Scott, Pat; Roebber, Elinore; Putze, Antje; Conrad, Jan

    2017-11-01

    We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics.
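Differential evolution itself is compact; a minimal rand/1/bin variant (a generic sketch of the algorithm family, not Diver's implementation) mutates each population member with a scaled difference of two others, crosses over coordinate-wise, and keeps the trial point if it is no worse:

```python
import random

def differential_evolution(f, bounds, pop_size=30, f_weight=0.7, cr=0.9,
                           n_gen=200, seed=0):
    """Minimise f over a box with rand/1/bin differential evolution."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(n_gen):
        for i in range(pop_size):
            # three distinct members, none equal to the target i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)      # guarantee one mutated coordinate
            trial = []
            for j in range(dim):
                if j == j_rand or rng.random() < cr:
                    v = pop[a][j] + f_weight * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            c_trial = f(trial)
            if c_trial <= cost[i]:           # greedy selection
                pop[i], cost[i] = trial, c_trial
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# 5-dimensional sphere test function
x, fx = differential_evolution(lambda v: sum(t * t for t in v),
                               [(-5.0, 5.0)] * 5)
```

The adjustable settings studied in the paper (population size, the mutation weight F and crossover rate CR here) are exactly the knobs whose tuning the ScannerBit benchmarks explore.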

  2. A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses

    PubMed Central

    Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria

    2013-01-01

    Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. 
Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of large numbers of synapses, and it is therefore an excellent tool for multi-scale simulations. PMID:23894367

  3. Signal-Detection Analyses of Conditional Discrimination and Delayed Matching-to-Sample Performance

    ERIC Educational Resources Information Center

    Alsop, Brent

    2004-01-01

    Quantitative analyses of stimulus control and reinforcer control in conditional discriminations and delayed matching-to-sample procedures often encounter a problem; it is not clear how to analyze data when subjects have not made errors. The present article examines two common methods for overcoming this problem. Monte Carlo simulations of…

  4. The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models

    ERIC Educational Resources Information Center

    Schoeneberger, Jason A.

    2016-01-01

    The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, and number of predictors, in addition to sample size. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…

  5. Monte Carlo method for computing density of states and quench probability of potential energy and enthalpy landscapes.

    PubMed

    Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth

    2007-05-21

    The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
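
    The self-consistent multiplicative-update scheme described here is closely related to Wang-Landau sampling. Below is a minimal Wang-Landau-style sketch for a toy discrete system (an illustration of the general technique, not the authors' exact algorithm):

```python
import math
import random

def wang_landau_dos(energy_fn, states, n_sweeps=2000, seed=0):
    """Estimate the log density of states with a self-consistent multiplicative
    update: walk in state space, penalising energies already visited until the
    energy histogram is flat, then shrink the update factor and repeat."""
    rng = random.Random(seed)
    energies = sorted({energy_fn(s) for s in states})
    log_g = {E: 0.0 for E in energies}         # running log density of states
    hist = {E: 0 for E in energies}
    state = rng.choice(states)
    log_f = 1.0                                # log of the multiplicative factor
    while log_f > 1e-4:
        for _ in range(n_sweeps):
            prop = rng.choice(states)
            E_old, E_new = energy_fn(state), energy_fn(prop)
            # accept with min(1, g(E_old)/g(E_new)): drives a flat energy histogram
            if log_g[E_old] >= log_g[E_new] or rng.random() < math.exp(log_g[E_old] - log_g[E_new]):
                state = prop
            E = energy_fn(state)
            log_g[E] += log_f
            hist[E] += 1
        # histogram flat enough: refine the update factor and reset the histogram
        if min(hist.values()) > 0.8 * sum(hist.values()) / len(hist):
            log_f /= 2.0
            hist = {E: 0 for E in energies}
    return log_g
```

    For two independent coins (states 0-3, energy = number of heads), the recovered degeneracies approach the exact values g(0)=1, g(1)=2, g(2)=1 up to a common additive constant in log space.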

  6. Study of optical and electronic properties of nickel from reflection electron energy loss spectra

    NASA Astrophysics Data System (ADS)

    Xu, H.; Yang, L. H.; Da, B.; Tóth, J.; Tőkési, K.; Ding, Z. J.

    2017-09-01

    We use the classical Monte Carlo transport model of electrons moving near the surface and inside solids to reproduce the measured reflection electron energy-loss spectroscopy (REELS) spectra. With the combination of the classical transport model and the Markov chain Monte Carlo (MCMC) sampling of oscillator parameters the so-called reverse Monte Carlo (RMC) method was developed, and used to obtain optical constants of Ni in this work. A systematic study of the electronic and optical properties of Ni has been performed in an energy loss range of 0-200 eV from the measured REELS spectra at primary energies of 1000 eV, 2000 eV and 3000 eV. The reliability of our method was tested by comparing our results with the previous data. Moreover, the accuracy of our optical data has been confirmed by applying oscillator strength-sum rule and perfect-screening-sum rule.

  7. Monte Carlo Method for Determining Earthquake Recurrence Parameters from Short Paleoseismic Catalogs: Example Calculations for California

    USGS Publications Warehouse

    Parsons, Tom

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
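
    As a rough illustration of the repeated-draw idea (hypothetical data and a lognormal recurrence model; not the paper's actual implementation), one can draw many candidate parameter pairs, score each against a short interval series, and return a ranked list:

```python
import math
import random

def rank_recurrence_params(intervals, n_draws=20000, seed=1):
    """Draw candidate (mean recurrence, coefficient of variation) pairs for a
    lognormal recurrence model, score each by the likelihood of the observed
    intervals, and return the draws ranked best-first."""
    rng = random.Random(seed)
    scored = []
    for _ in range(n_draws):
        mean = rng.uniform(0.5 * min(intervals), 2.0 * max(intervals))
        cv = rng.uniform(0.2, 1.0)
        sigma2 = math.log(1.0 + cv * cv)       # lognormal parameters from (mean, cv)
        mu = math.log(mean) - 0.5 * sigma2
        sigma = math.sqrt(sigma2)
        loglik = sum(
            -math.log(t * sigma * math.sqrt(2.0 * math.pi))
            - (math.log(t) - mu) ** 2 / (2.0 * sigma2)
            for t in intervals
        )
        scored.append((loglik, mean, cv))
    scored.sort(reverse=True)
    return scored

# five hypothetical paleoseismic recurrence intervals, in years
ranked = rank_recurrence_params([110, 150, 95, 210, 140])
best_loglik, best_mean, best_cv = ranked[0]
```

    The full ranked list, rather than only the best draw, is what supports uncertainty assessment: the spread of high-likelihood (mean, cv) pairs directly exposes the functional dependence between the two parameters.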

  9. A Comparison of Normal and Elliptical Estimation Methods in Structural Equation Models.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.; Cheevatanarak, Suchittra

    Monte Carlo simulation compared chi-square statistics, parameter estimates, and root mean square error of approximation values using normal and elliptical estimation methods. Three research conditions were imposed on the simulated data: sample size, population contamination percent, and kurtosis. A Bentler-Weeks structural model established the…

  10. Sample size guidelines for fitting a lognormal probability distribution to censored most probable number data with a Markov chain Monte Carlo method.

    PubMed

    Williams, Michael S; Cao, Yong; Ebel, Eric D

    2013-07-15

    Levels of pathogenic organisms in food and water have steadily declined in many parts of the world. A consequence of this reduction is that the proportion of samples that test positive for the most contaminated product-pathogen pairings has fallen to less than 0.1. While this is unequivocally beneficial to public health, datasets with very few enumerated samples present an analytical challenge because a large proportion of the observations are censored values. One application of particular interest to risk assessors is the fitting of a statistical distribution function to datasets collected at some point in the farm-to-table continuum. The fitted distribution forms an important component of an exposure assessment. A number of studies have compared different fitting methods and proposed lower limits on the proportion of samples where the organisms of interest are identified and enumerated, with the recommended lower limit of enumerated samples being 0.2. This recommendation may not be applicable to food safety risk assessments for a number of reasons, which include the development of new Bayesian fitting methods, the use of highly sensitive screening tests, and the generally larger sample sizes found in surveys of food commodities. This study evaluates the performance of a Markov chain Monte Carlo fitting method when used in conjunction with a screening test and enumeration of positive samples by the Most Probable Number technique. The results suggest that levels of contamination for common product-pathogen pairs, such as Salmonella on poultry carcasses, can be reliably estimated with the proposed fitting method and sample sizes in excess of 500 observations. The results do, however, demonstrate that simple guidelines for this application, such as the proportion of positive samples, cannot be provided. Published by Elsevier B.V.
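
    A minimal sketch of the censored-data fitting idea, assuming a simple Metropolis sampler with flat priors and made-up data (the study's actual model, priors, and MPN likelihood are more involved): each non-detect contributes the probability of falling below the detection limit rather than a point density.

```python
import math
import random

def norm_logcdf(z):
    # log of the standard normal CDF via erfc; clamped to avoid log(0) in the tail
    return math.log(max(0.5 * math.erfc(-z / math.sqrt(2.0)), 1e-300))

def fit_censored_lognormal(detects, n_censored, limit, n_iter=20000, seed=7):
    """Metropolis sampler for (mu, sigma) of a lognormal concentration
    distribution when n_censored samples fell below the detection limit."""
    rng = random.Random(seed)

    def logpost(mu, sigma):
        if sigma <= 0.0:
            return -math.inf
        ll = sum(-math.log(x * sigma) - (math.log(x) - mu) ** 2 / (2 * sigma ** 2)
                 for x in detects)
        # each censored sample contributes log P(X < limit)
        ll += n_censored * norm_logcdf((math.log(limit) - mu) / sigma)
        return ll                                  # flat priors on (mu, sigma)

    mu, sigma = math.log(limit), 1.0
    lp = logpost(mu, sigma)
    samples = []
    for _ in range(n_iter):
        mu_p, sigma_p = mu + rng.gauss(0, 0.2), sigma + rng.gauss(0, 0.1)
        lp_p = logpost(mu_p, sigma_p)
        if lp_p >= lp or rng.random() < math.exp(lp_p - lp):
            mu, sigma, lp = mu_p, sigma_p, lp_p
        samples.append((mu, sigma))
    return samples[n_iter // 2:]                   # drop burn-in
```

    With, say, ten enumerated values above a detection limit of 1 and ten non-detects, the posterior for (mu, sigma) remains well identified even though half the observations are censored.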

  11. Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data

    NASA Astrophysics Data System (ADS)

    Glüsenkamp, Thorsten

    2018-06-01

    Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function FD, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average Rn with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.

  12. A Comparison of Maximum-Likelihood and Asymptotically Distribution-Free Methods of Treating Incomplete Nonnormal Data

    ERIC Educational Resources Information Center

    Gold, Michael S.; Bentler, Peter M.; Kim, Kevin H.

    2003-01-01

    This article describes a Monte Carlo study of 2 methods for treating incomplete nonnormal data. Skewed, kurtotic data sets conforming to a single structured model, but varying in sample size, percentage of data missing, and missing-data mechanism, were produced. An asymptotically distribution-free available-case (ADFAC) method and structured-model…

  13. Improving the sampling efficiency of Monte Carlo molecular simulations: an evolutionary approach

    NASA Astrophysics Data System (ADS)

    Leblanc, Benoit; Braunschweig, Bertrand; Toulhoat, Hervé; Lutton, Evelyne

    We present a new approach to improving the convergence of Monte Carlo (MC) simulations of molecular systems with complex energy landscapes: the problem is redefined in terms of the dynamic allocation of MC move frequencies depending on their past efficiency, measured with respect to a relevant sampling criterion. We introduce various empirical criteria with the aim of accounting for proper convergence in phase-space sampling. The dynamic allocation is performed over parallel simulations by means of a new evolutionary algorithm involving 'immortal' individuals. The method is benchmarked against conventional procedures on a model of melt linear polyethylene. We record significant improvements in sampling efficiency, and thus in computational load, while the optimal sets of move frequencies may allow interesting physical insights into the particular systems simulated. This last aspect should provide a new tool for designing more efficient new MC moves.
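
    The dynamic-allocation idea can be caricatured in one dimension: track each move type's accepted displacement and periodically reallocate selection probabilities in proportion to it. This is an illustrative sketch under a crude efficiency criterion, not the paper's evolutionary algorithm:

```python
import math
import random

def adaptive_metropolis(logp, x0, step_sizes, n_iter=30000, seed=3):
    """Metropolis sampler whose move-type selection probabilities are
    periodically reallocated in proportion to each move type's mean
    accepted displacement (a simple 'past efficiency' criterion)."""
    rng = random.Random(seed)
    k_types = len(step_sizes)
    weights = [1.0] * k_types
    gain = [0.0] * k_types                 # accumulated accepted displacement
    tries = [1] * k_types
    x, lp = x0, logp(x0)
    samples = []
    for it in range(1, n_iter + 1):
        k = rng.choices(range(k_types), weights=weights)[0]
        xp = x + rng.gauss(0.0, step_sizes[k])
        lpp = logp(xp)
        tries[k] += 1
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            gain[k] += abs(xp - x)         # credit the move type that succeeded
            x, lp = xp, lpp
        samples.append(x)
        if it % 1000 == 0:                 # reallocate move frequencies
            weights = [max(g / t, 1e-3) for g, t in zip(gain, tries)]
    return samples, weights
```

    On a standard normal target with step sizes 0.01, 1.0 and 50.0, the allocation quickly concentrates on the well-scaled move: tiny steps are accepted but barely travel, and huge steps are almost always rejected.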

  14. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.

    2008-06-01

    An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near equilibrium flows (DREAM-I) or output instantaneous particle data obtained by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.

  15. Combined Monte Carlo/torsion-angle molecular dynamics for ensemble modeling of proteins, nucleic acids and carbohydrates.

    PubMed

    Zhang, Weihong; Howell, Steven C; Wright, David W; Heindel, Andrew; Qiu, Xiangyun; Chen, Jianhan; Curtis, Joseph E

    2017-05-01

    We describe a general method to use Monte Carlo simulation followed by torsion-angle molecular dynamics simulations to create ensembles of structures to model a wide variety of soft-matter biological systems. Our particular emphasis is focused on modeling low-resolution small-angle scattering and reflectivity structural data. We provide examples of this method applied to HIV-1 Gag protein and derived fragment proteins, TraI protein, linear B-DNA, a nucleosome core particle, and a glycosylated monoclonal antibody. This procedure will enable a large community of researchers to model low-resolution experimental data with greater accuracy by using robust physics-based simulation and sampling methods, which are a significant improvement over traditional methods used to interpret such data. Published by Elsevier Inc.

  16. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001

  17. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms

    PubMed Central

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442
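
    The notion of mixing time used here can be illustrated on a small chain: iterate the transition matrix from every start state until all rows are within total-variation distance eps of the stationary distribution. A naive dense-matrix sketch of that computation (marathon itself is a C++ library with far more machinery):

```python
def total_mixing_time(P, eps=0.25, max_t=1000):
    """Smallest t such that every row of P^t is within total-variation
    distance eps of the stationary distribution of the chain P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(500):                   # stationary distribution by power iteration
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    # one distribution per start state, advanced one step at a time
    dists = [[1.0 if j == s else 0.0 for j in range(n)] for s in range(n)]
    for t in range(1, max_t + 1):
        dists = [[sum(d[i] * P[i][j] for i in range(n)) for j in range(n)]
                 for d in dists]
        tv = max(0.5 * sum(abs(d[j] - pi[j]) for j in range(n)) for d in dists)
        if tv <= eps:
            return t
    return None

# lazy random walk on a 4-cycle: hold with probability 1/2, else step to a neighbour
P = [[0.50, 0.25, 0.00, 0.25],
     [0.25, 0.50, 0.25, 0.00],
     [0.00, 0.25, 0.50, 0.25],
     [0.25, 0.00, 0.25, 0.50]]
```

    Exact enumeration like this is only feasible for tiny state graphs, which is precisely why bounding methods such as canonical paths and the spectral bound matter for chains of practical size.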

  18. Maximum entropy PDF projection: A review

    NASA Astrophysics Data System (ADS)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
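
    The core projection relation can be written compactly. The following is a sketch consistent with the general PDF-projection literature; the symbol g for the reference distribution is an assumption of this summary rather than notation taken from the review:

```latex
% Lift a known feature density p_z up to the data space:
% g(x) is a reference distribution on x (the MaxEnt choice in the review),
% and g_z is the density that g induces on the feature z = T(x).
p(x) \;=\; g(x)\,\frac{p_z\bigl(T(x)\bigr)}{g_z\bigl(T(x)\bigr)},
\qquad z = T(x).
```

    Sampling then proceeds in two stages: draw z from p_z, then draw x from g restricted to the level set {x : T(x) = z}, which is what makes the construction usable inside Monte Carlo methods.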

  19. A response to Yu et al. "A forward-backward fragment assembling algorithm for the identification of genomic amplification and deletion breakpoints using high-density single nucleotide polymorphism (SNP) array", BMC Bioinformatics 2007, 8: 145.

    PubMed

    Rueda, Oscar M; Diaz-Uriarte, Ramon

    2007-10-16

    Yu et al. (BMC Bioinformatics 2007, 8: 145+) have recently compared the performance of several methods for the detection of genomic amplification and deletion breakpoints using data from high-density single nucleotide polymorphism arrays. One of the methods compared is our non-homogenous Hidden Markov Model approach. Our approach uses Markov Chain Monte Carlo for inference, but Yu et al. ran the sampler for a severely insufficient number of iterations for a Markov Chain Monte Carlo-based method. Moreover, they did not use the appropriate reference level for the non-altered state. We rerun the analysis in Yu et al. using appropriate settings for both the Markov Chain Monte Carlo iterations and the reference level. Additionally, to show how easy it is to obtain answers to additional specific questions, we have added a new analysis targeted specifically to the detection of breakpoints. The reanalysis shows that the performance of our method is comparable to that of the other methods analyzed. In addition, we can provide probabilities of a given spot being a breakpoint, something unique among the methods examined. Markov Chain Monte Carlo methods require using a sufficient number of iterations before they can be assumed to yield samples from the distribution of interest. Running our method with too small a number of iterations cannot be representative of its performance. Moreover, our analysis shows how our original approach can be easily adapted to answer specific additional questions (e.g., identify edges).

  20. Estimating the Uncertainty In Diameter Growth Model Predictions and Its Effects On The Uncertainty of Annual Inventory Estimates

    Treesearch

    Ronald E. McRoberts; Veronica C. Lessard

    2001-01-01

    Uncertainty in diameter growth predictions is attributed to three general sources: measurement error or sampling variability in predictor variables, parameter covariances, and residual or unexplained variation around model expectations. Using measurement error and sampling variability distributions obtained from the literature and Monte Carlo simulation methods, the...

  1. Effects of Missing Data Methods in SEM under Conditions of Incomplete and Nonnormal Data

    ERIC Educational Resources Information Center

    Li, Jian; Lomax, Richard G.

    2017-01-01

    Using Monte Carlo simulations, this research examined the performance of four missing data methods in SEM under different multivariate distributional conditions. The effects of four independent variables (sample size, missing proportion, distribution shape, and factor loading magnitude) were investigated on six outcome variables: convergence rate,…

  2. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    PubMed

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be generally applied to sampling rare events efficiently while avoiding being trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
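
    Generic importance sampling, on which the method builds, reweights draws from a proposal concentrated where the integrand matters. A standard rare-event toy example, unrelated to the SC-IVR machinery itself: estimating the Gaussian tail probability P(X > 4), for which naive Monte Carlo almost never generates a contributing sample.

```python
import math
import random

def rare_tail_prob(n=100000, shift=4.0, seed=2):
    """Importance-sampling estimate of P(X > 4) for X ~ N(0,1): draw from a
    proposal N(shift, 1) centred in the rare region and reweight each draw
    by the density ratio phi(y) / phi(y - shift)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(shift, 1.0)
        if y > 4.0:
            total += math.exp(-0.5 * y * y + 0.5 * (y - shift) ** 2)
    return total / n

p_hat = rare_tail_prob()
# exact value: 1 - Phi(4) = 3.167e-5; naive MC would need ~10^7 draws per hit
```

    Roughly half of the proposal draws now land in the rare region, so the estimator's relative error is below one percent at this sample size.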

  3. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures.

    PubMed

    Souris, Kevin; Lee, John Aldo; Sterpin, Edmond

    2016-04-01

    Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the gate/geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with gate/geant4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.

  4. WE-AB-204-11: Development of a Nuclear Medicine Dosimetry Module for the GPU-Based Monte Carlo Code ARCHER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T; Lin, H; Xu, X

    Purpose: To develop a nuclear medicine dosimetry module for the GPU-based Monte Carlo code ARCHER. Methods: We have developed a nuclear medicine dosimetry module for the fast Monte Carlo code ARCHER. The coupled electron-photon Monte Carlo transport kernel included in ARCHER is built upon the Dose Planning Method code (DPM). The developed module manages the radioactive decay simulation by consecutively tracking several types of radiation on a per disintegration basis using the statistical sampling method. Optimization techniques such as persistent threads and prefetching are studied and implemented. The developed module is verified against the VIDA code, which is based on the Geant4 toolkit and has previously been verified against OLINDA/EXM. A voxelized geometry is used in the preliminary test: a sphere made of ICRP soft tissue is surrounded by a box filled with water. Uniform activity distribution of I-131 is assumed in the sphere. Results: The self-absorption dose factors (mGy/MBqs) of the sphere with varying diameters are calculated by ARCHER and VIDA respectively. ARCHER’s result is in agreement with VIDA’s, which was obtained from a previous publication. VIDA takes hours of CPU time to finish the computation, while it takes ARCHER 4.31 seconds for the 12.4-cm uniform activity sphere case. For a fairer CPU-GPU comparison, more effort will be made to eliminate the algorithmic differences. Conclusion: The coupled electron-photon Monte Carlo code ARCHER has been extended to radioactive decay simulation for nuclear medicine dosimetry. The developed code exhibits good performance in our preliminary test. The GPU-based Monte Carlo code is developed with grant support from the National Institute of Biomedical Imaging and Bioengineering through an R01 grant (R01EB015478).

  5. New method of computing the contributions of graphs without lepton loops to the electron anomalous magnetic moment in QED

    NASA Astrophysics Data System (ADS)

    Volkov, Sergey

    2017-11-01

    This paper presents a new method of numerical computation of the mass-independent QED contributions to the electron anomalous magnetic moment which arise from Feynman graphs without closed electron loops. The method is based on a forestlike subtraction formula that removes all ultraviolet and infrared divergences in each Feynman graph before integration in Feynman-parametric space. The integration is performed by an importance-sampling Monte Carlo algorithm with a probability density function that is constructed for each Feynman graph individually. The method is fully automated at any order of the perturbation series. The results of applying the method to 2-loop, 3-loop, 4-loop Feynman graphs, and to some individual 5-loop graphs are presented, as well as a comparison with other methods with respect to Monte Carlo convergence speed.

  6. Annealed Importance Sampling Reversible Jump MCMC algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms have been proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.
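
    The annealed importance sampling ingredient can be sketched in its basic, non-reversible-jump form: anneal from a tractable distribution to the target through a sequence of intermediate distributions, accumulating an importance weight along the way. A minimal sketch with hypothetical Gaussian toy densities:

```python
import math
import random

def ais_log_weight(rng, n_temps=50, n_mcmc=5):
    """One annealed-importance-sampling run from N(0,1) (easy to sample)
    toward the unnormalised target exp(-4 x^2), returning the log weight."""
    log_f0 = lambda x: -0.5 * x * x          # initial distribution, unnormalised
    log_f1 = lambda x: -4.0 * x * x          # target, unnormalised
    log_fb = lambda x, b: (1.0 - b) * log_f0(x) + b * log_f1(x)
    x = rng.gauss(0.0, 1.0)
    logw = 0.0
    for j in range(1, n_temps + 1):
        b_prev, b = (j - 1) / n_temps, j / n_temps
        logw += log_fb(x, b) - log_fb(x, b_prev)
        for _ in range(n_mcmc):              # Metropolis moves at temperature b
            xp = x + rng.gauss(0.0, 0.5)
            dl = log_fb(xp, b) - log_fb(x, b)
            if dl >= 0 or rng.random() < math.exp(dl):
                x = xp
    return logw

rng = random.Random(11)
logws = [ais_log_weight(rng) for _ in range(500)]
m = max(logws)  # log-sum-exp average of the weights
z_ratio = math.exp(m) * sum(math.exp(w - m) for w in logws) / len(logws)
# exact ratio of normalising constants: sqrt(pi/4)/sqrt(2*pi) = 1/(2*sqrt(2)) ~ 0.354
```

    The averaged weight is an unbiased estimate of the ratio of normalising constants, which is exactly the kind of quantity needed to compare model probabilities in a model selection set-up.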

  7. Restricted random search method based on taboo search in the multiple minima problem

    NASA Astrophysics Data System (ADS)

    Hong, Seung Do; Jhon, Mu Shik

    1997-03-01

    The restricted random search method is proposed as a simple Monte Carlo sampling method for quickly locating minima in the multiple minima problem. This method is based on taboo search, recently applied to continuous test functions. The concept of a taboo region, instead of a taboo list, is used, and therefore sampling of a region near an old configuration is restricted. The method is applied to two-dimensional test functions and argon clusters, and is found to be a practical and efficient way to locate near-global configurations of the test functions and the argon clusters.
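
    A possible reading of the taboo-region idea in one dimension (illustrative only; the paper treats two-dimensional test functions and argon clusters): candidates falling within a taboo radius of recently visited configurations are rejected, which spreads the evaluated samples over the search space.

```python
import random

def taboo_random_search(f, bounds, n_iter=5000, taboo_radius=0.5, seed=4):
    """Uniform random search that skips candidates falling inside a taboo
    radius of recently visited configurations, forcing wider exploration."""
    rng = random.Random(seed)
    lo, hi = bounds
    visited = []
    best_x, best_f = None, float("inf")
    for _ in range(n_iter):
        x = rng.uniform(lo, hi)
        if any(abs(x - v) < taboo_radius for v in visited[-20:]):
            continue                       # candidate lies in a taboo region
        visited.append(x)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

    On a simple quadratic with minimum at x = 3, the search homes in on the minimum while the taboo restriction keeps it from repeatedly re-evaluating the same neighbourhood.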

  8. Comparison of structural and least-squares lines for estimating geologic relations

    USGS Publications Warehouse

    Williams, G.P.; Troutman, B.M.

    1990-01-01

    Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. Number of data points, slope and intercept of the true relation, and variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as error in X becomes small relative to error in Y. Regarding the second goal, predicting the dependent variable, OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
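
    The OLS-versus-structural-line contrast is easy to reproduce in a small Monte Carlo experiment: with error in X, OLS attenuates the slope, while a Deming (structural) fit with the correct error-variance ratio recovers it. This is a sketch of the general phenomenon, not the paper's exact design:

```python
import math
import random

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def deming_slope(xs, ys, delta=1.0):
    """Structural-line (Deming) slope, assuming the ratio of Y- to X-error
    variances is delta."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    term = syy - delta * sxx
    return (term + math.sqrt(term * term + 4.0 * delta * sxy * sxy)) / (2.0 * sxy)

# Monte Carlo comparison: noise in both X and Y, true slope 2
rng = random.Random(5)
ols_mean = deming_mean = 0.0
n_rep = 300
for _ in range(n_rep):
    xt = [rng.uniform(0.0, 10.0) for _ in range(100)]        # error-free X
    xs = [x + rng.gauss(0.0, 1.0) for x in xt]               # observed X, with error
    ys = [2.0 * x + rng.gauss(0.0, 1.0) for x in xt]         # observed Y, with error
    ols_mean += ols_slope(xs, ys) / n_rep
    deming_mean += deming_slope(xs, ys) / n_rep
```

    With unit error variance in both coordinates, OLS is biased toward zero by the classical attenuation factor var(X)/(var(X) + 1), while the Deming estimate averages close to the true slope, matching the paper's conclusion about estimating "true" relations.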

  9. Importance Sampling of Word Patterns in DNA and Protein Sequences

    PubMed Central

    Chan, Hock Peng; Chen, Louis H.Y.

    2010-01-01

    Monte Carlo methods can provide accurate p-value estimates of word counting test statistics and are easy to implement. They are especially attractive when an asymptotic theory is absent or when either the search sequence or the word pattern is too short for the application of asymptotic formulae. Naive direct Monte Carlo is undesirable for the estimation of small probabilities because the associated rare events of interest are seldom generated. We propose instead efficient importance sampling algorithms that use controlled insertion of the desired word patterns on randomly generated sequences. The implementation is illustrated on word patterns of biological interest: palindromes and inverted repeats, patterns arising from position-specific weight matrices (PSWMs), and co-occurrences of pairs of motifs. PMID:21128856
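
    The core idea, estimating a small probability by sampling from a distribution tilted toward the rare event and reweighting by the likelihood ratio, can be illustrated on a textbook example (a Gaussian tail probability, not the word-pattern statistics of the paper):

```python
import math
import random

def gaussian_tail_importance(c=4.0, n=100000, seed=3):
    """Estimate the rare-event probability P(Z > c), Z ~ N(0,1), by
    importance sampling: draw from N(c,1), which lands in the rare region
    about half the time, and reweight by the likelihood ratio
    phi(z)/phi_c(z) = exp(c^2/2 - c*z)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(c, 1.0)
        if z > c:
            total += math.exp(0.5*c*c - c*z)
    return total/n

est = gaussian_tail_importance()
exact = 0.5*math.erfc(4.0/math.sqrt(2.0))   # 1 - Phi(4), a few times 1e-5
rel_err = abs(est - exact)/exact
```

    Naive Monte Carlo would need on the order of millions of samples to see even a handful of these rare events; the tilted proposal generates them constantly and corrects with small weights.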

  10. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Monte Carlo simulation of optical clearing of paper in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Kirillin, M. Yu; Priezzhev, A. V.; Hast, J.; Myllylä, Risto

    2006-02-01

    Signals of an optical coherence tomograph probing paper samples are calculated by the Monte Carlo method before and after the application of different immersion liquids: ethanol, glycerol, benzyl alcohol, and 1-pentanol. It is shown, within the framework of the model used, that all these liquids reduce the contrast of inhomogeneity images in the upper layers of the samples while considerably improving the visibility of lower layers, allowing localisation of the rear boundary of the probed medium. This is important for precise contactless measurement of paper sheet thickness, for example during the manufacturing process. The results of the calculations are in good agreement with experimental data.

  11. MontePython 3: Parameter inference code for cosmology

    NASA Astrophysics Data System (ADS)

    Brinckmann, Thejs; Lesgourgues, Julien; Audren, Benjamin; Benabed, Karim; Prunet, Simon

    2018-05-01

    MontePython 3 provides numerous ways to explore parameter space using Markov chain Monte Carlo (MCMC) sampling, including Metropolis-Hastings, Nested Sampling, Cosmo Hammer, and a Fisher sampling method. This improved version of the Monte Python (ascl:1307.002) parameter inference code for cosmology offers new ingredients that improve the performance of Metropolis-Hastings sampling, speeding up convergence and offering significant time savings in difficult runs. Additional likelihoods and plotting options are available, as are post-processing algorithms such as importance sampling and the addition of derived parameters.
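
    For reference, the Metropolis-Hastings sampling that such codes build on reduces, in its simplest random-walk form, to a few lines. This generic 1-D sketch is unrelated to the MontePython code base itself:

```python
import math
import random

def metropolis_hastings(logpdf, x0=0.0, n_steps=50000, step=1.0, seed=0):
    """Minimal random-walk Metropolis-Hastings sampler for a 1-D target
    given by its log-density; the symmetric Gaussian proposal cancels
    out of the acceptance ratio."""
    rng = random.Random(seed)
    x, lp = x0, logpdf(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = logpdf(xp)
        if math.log(rng.random()) < lpp - lp:    # Metropolis acceptance test
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Target: standard normal, whose log-density is -x^2/2 up to a constant.
chain = metropolis_hastings(lambda x: -0.5*x*x)
burned = chain[5000:]                     # discard burn-in
mean = sum(burned)/len(burned)
var = sum((x - mean)**2 for x in burned)/len(burned)
```

    Only the log-density is needed, which is why MCMC is so widely used for posteriors known only up to a normalising constant.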

  12. A simple methodology for characterization of germanium coaxial detectors by using Monte Carlo simulation and evolutionary algorithms.

    PubMed

    Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P

    2015-11-01

    Determining the activity concentration of a specific radionuclide in a sample by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) at the energy of interest. The difficulties of experimental calibration make alternative methods of FEPE determination advisable, such as simulating the transport of photons in the crystal by the Monte Carlo method, which in turn requires accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), so for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to provide an alternative to this costly characterization process by establishing a method for optimizing the detector characterization parameters through a computational procedure that can be reproduced at a standard research lab. The method determines the detector geometric parameters by Monte Carlo simulation run in parallel with an optimization process based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters that has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. [Accuracy Check of Monte Carlo Simulation in Particle Therapy Using Gel Dosimeters].

    PubMed

    Furuta, Takuya

    2017-01-01

    Gel dosimeters are a three-dimensional imaging tool for the dose distribution induced by radiation, and they can be used to check the accuracy of Monte Carlo simulations in particle therapy. One application is reviewed in this article. An inhomogeneous biological sample with a gel dosimeter placed behind it was irradiated by a carbon beam. The dose distribution recorded in the gel dosimeter reflected the inhomogeneity of the biological sample. A Monte Carlo simulation was conducted by reconstructing the biological sample from its CT image, and the accuracy of the simulated particle transport was checked by comparing the dose distributions in the gel dosimeter between simulation and experiment.

  14. (U) Introduction to Monte Carlo Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hungerford, Aimee L.

    2017-03-20

    Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.

  15. Path integral Monte Carlo ground state approach: formalism, implementation, and applications

    NASA Astrophysics Data System (ADS)

    Yan, Yangqian; Blume, D.

    2017-11-01

    Monte Carlo techniques have played an important role in understanding strongly correlated systems across many areas of physics, covering a wide range of energy and length scales. Among the many Monte Carlo methods applicable to quantum mechanical systems, the path integral Monte Carlo approach with its variants has been employed widely. Since semi-classical or classical approaches will not be discussed in this review, path integral based approaches can for our purposes be divided into two categories: approaches applicable to quantum mechanical systems at zero temperature and approaches applicable to quantum mechanical systems at finite temperature. While these two approaches are related to each other, the underlying formulation and aspects of the algorithm differ. This paper reviews the path integral Monte Carlo ground state (PIGS) approach, which solves the time-independent Schrödinger equation. Specifically, the PIGS approach allows for the determination of expectation values with respect to eigen states of the few- or many-body Schrödinger equation provided the system Hamiltonian is known. The theoretical framework behind the PIGS algorithm, implementation details, and sample applications for fermionic systems are presented.

  16. A New Approach to Monte Carlo Simulations in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Landau, David P.

    2002-08-01

    Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near 2nd-order transitions and to metastability near 1st-order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E 64, 056101 (2001).

  17. The Wang-Landau Sampling Algorithm

    NASA Astrophysics Data System (ADS)

    Landau, David P.

    2003-03-01

    Over the past several decades Monte Carlo simulations[1] have evolved into a powerful tool for the study of wide-ranging problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, usually in the canonical ensemble, and enormous improvements have been made in performance through the implementation of novel algorithms. Nonetheless, difficulties arise near phase transitions, either due to critical slowing down near 2nd order transitions or to metastability near 1st order transitions, thus limiting the applicability of the method. We shall describe a new and different Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is estimated, all thermodynamic properties can be calculated at all temperatures. This approach can be extended to multi-dimensional parameter spaces and has already found use in classical models of interacting particles including systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc., as well as for quantum models. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E64, 056101-1 (2001).
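
    The Wang-Landau random walk in energy space can be demonstrated on a toy system whose density of states is known exactly. Here the "energy" of an N-bit state is its number of 1-bits, so the estimated g(E) should converge to the binomial coefficients C(N, E); the sweep length, flatness criterion, and final modification factor are illustrative choices, not values from the papers:

```python
import math
import random

def wang_landau_bits(N=10, lnf_final=1e-6, flatness=0.8, seed=5):
    """Wang-Landau random walk over N-bit strings with 'energy' E = number
    of 1-bits.  The running estimate ln g(E) is penalised at every visit,
    driving the walk toward a flat histogram in E; g(E) should converge
    to the binomial coefficients C(N, E)."""
    rng = random.Random(seed)
    state, E = [0]*N, 0
    lng = [0.0]*(N + 1)          # ln of the estimated density of states
    hist = [0]*(N + 1)
    lnf = 1.0                    # ln of the modification factor
    while lnf > lnf_final:
        for _ in range(10000):
            i = rng.randrange(N)
            Enew = E + (1 - 2*state[i])          # flipping bit i moves E by +-1
            if math.log(rng.random()) < lng[E] - lng[Enew]:
                state[i] ^= 1
                E = Enew
            lng[E] += lnf
            hist[E] += 1
        if min(hist) > flatness*sum(hist)/len(hist):   # histogram flat enough?
            lnf *= 0.5
            hist = [0]*(N + 1)
    shift = max(lng)
    total = sum(math.exp(v - shift) for v in lng)
    return [math.exp(v - shift)*(2**N)/total for v in lng]  # normalise to 2^N states

g = wang_landau_bits()
exact = [math.comb(10, E) for E in range(11)]
max_rel_err = max(abs(a - b)/b for a, b in zip(g, exact))
```

    Once g(E) is in hand, thermodynamic averages at any temperature follow by reweighting with exp(-E/kT), which is exactly the appeal of the method described above.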

  18. Use of a Monte Carlo technique to complete a fragmented set of H2S emission rates from a wastewater treatment plant.

    PubMed

    Schauberger, Günther; Piringer, Martin; Baumann-Stanzer, Kathrin; Knauder, Werner; Petz, Erwin

    2013-12-15

    The impact of ambient concentrations in the vicinity of a plant can only be assessed if the emission rate is known. In this study, based on measurements of ambient H2S concentrations and meteorological parameters, the a priori unknown emission rates of a tannery wastewater treatment plant are calculated by an inverse dispersion technique. The calculations use the Gaussian Austrian regulatory dispersion model. Following this method, emission data can be obtained, though only when the measurement station lies leeward of the plant for the prevailing wind direction. Using inverse transform sampling, a Monte Carlo technique, the dataset can also be completed for those wind directions for which no ambient concentration measurements are available. For model validation, the measured ambient concentrations are compared with the ambient concentrations calculated from the synthetic emission data of the Monte Carlo model. The cumulative frequency distribution of this new dataset agrees well with the empirical data. This inverse transform sampling method is thus a useful supplement for calculating emission rates with the inverse dispersion technique. Copyright © 2013 Elsevier B.V. All rights reserved.
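
    Inverse transform sampling, the Monte Carlo technique used above to generate synthetic emission data, maps uniform random numbers through an inverse cumulative distribution function. A minimal sketch, with an exponential distribution standing in for the paper's empirical emission-rate distribution:

```python
import math
import random

def sample_exponential(lam, n, seed=11):
    """Inverse transform sampling: if U ~ Uniform(0,1), then
    x = -ln(1 - U)/lam has CDF F(x) = 1 - exp(-lam*x)."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random())/lam for _ in range(n)]

xs = sample_exponential(2.0, 100000)
sample_mean = sum(xs)/len(xs)     # should approach 1/lam = 0.5
```

    The same recipe works with a piecewise-linear inverse of an empirical CDF, which is how a fragmented measured dataset can be completed with synthetic draws.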

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan Benton

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
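
    The "simple example: estimating π" mentioned in the outline is the classic hit-or-miss calculation, which also makes the Law of Large Numbers visible as the sample count grows:

```python
import random

def estimate_pi(n, seed=42):
    """Hit-or-miss Monte Carlo: the fraction of uniform points in the unit
    square falling inside the quarter circle of radius 1 estimates pi/4."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random()**2 + rng.random()**2 <= 1.0)
    return 4.0*inside/n

pi_hat = estimate_pi(1_000_000)
```

    Per the Central Limit Theorem also covered in the lecture, the error of this estimator shrinks like 1/sqrt(n).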

  20. Statistical differences between relative quantitative molecular fingerprints from microbial communities.

    PubMed

    Portillo, M C; Gonzalez, J M

    2008-08-01

    Molecular fingerprints of microbial communities are a common method for the analysis and comparison of environmental samples. The significance of differences between microbial community fingerprints was analyzed considering the presence of different phylotypes and their relative abundance. A method is proposed that simulates coverage of the analyzed communities as a function of sample size, applying a Cramér-von Mises statistic; comparisons were performed by a Monte Carlo testing procedure. As an example, this procedure was used to compare several sediment samples from freshwater ponds using a relative quantitative PCR-DGGE profiling technique. The method was able to discriminate among different samples based on their molecular fingerprints, and confirmed the lack of differences between aliquots from a single sample.
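
    A Monte Carlo testing procedure of this kind can be sketched as a permutation test. The difference of means below stands in for the paper's Cramér-von Mises statistic, and the data are invented:

```python
import random

def monte_carlo_perm_test(a, b, n_perm=20000, seed=17):
    """Monte Carlo significance test: compare the observed difference of
    sample means against its permutation distribution.  The same scheme
    works for other statistics (e.g. Cramér-von Mises)."""
    rng = random.Random(seed)
    observed = abs(sum(a)/len(a) - sum(b)/len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa)/len(pa) - sum(pb)/len(pb)) >= observed:
            hits += 1
    return (hits + 1)/(n_perm + 1)        # Monte Carlo p-value

p_same = monte_carlo_perm_test([1.0, 1.1, 0.9, 1.2, 0.8],
                               [1.0, 1.05, 0.95, 1.15, 0.85])
p_diff = monte_carlo_perm_test([1.0, 1.1, 0.9, 1.2, 0.8],
                               [3.0, 3.1, 2.9, 3.2, 2.8])
```

    Two samples drawn from the same distribution give a large p-value, while clearly separated samples give a small one.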

  1. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo; Sterpin, Edmond

    2016-04-15

    Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suited to MC methods. The class-II condensed history algorithm of MCsquare provides a fast yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used for in vivo range verification as well.

  2. Identification of moisture content in tobacco plant leaves using outlier sample eliminating algorithms and hyperspectral data.

    PubMed

    Sun, Jun; Zhou, Xin; Wu, Xiaohong; Zhang, Xiaodong; Li, Qinglin

    2016-02-26

    Fast identification of the moisture content in tobacco plant leaves plays a key role in the tobacco cultivation industry and benefits the management of tobacco plants on the farm. In order to identify the moisture content of tobacco plant leaves in a fast and nondestructive way, a method involving Mahalanobis distance coupled with Monte Carlo cross validation (MD-MCCV) was proposed in this study to eliminate outlier samples. Hyperspectral data of 200 tobacco plant leaf samples at 20 moisture gradients were obtained using a FieldSpec 3 spectrometer. Savitzky-Golay smoothing (SG), roughness penalty smoothing (RPS), kernel smoothing (KS), and median smoothing (MS) were used to preprocess the raw spectra. In addition, Mahalanobis distance (MD), Monte Carlo cross validation (MCCV), and Mahalanobis distance coupled with Monte Carlo cross validation (MD-MCCV) were applied to select the outlier samples of the raw spectrum and the four smoothed spectra. The successive projections algorithm (SPA) was used to extract the most influential wavelengths, and multiple linear regression (MLR) was applied to build prediction models based on the preprocessed spectral features at the characteristic wavelengths. The four best prediction models were MD-MCCV-SG (Rp² = 0.8401 and RMSEP = 0.1355), MD-MCCV-RPS (Rp² = 0.8030 and RMSEP = 0.1274), MD-MCCV-KS (Rp² = 0.8117 and RMSEP = 0.1433), and MD-MCCV-MS (Rp² = 0.9132 and RMSEP = 0.1162). The MD-MCCV algorithm performed best among the MD algorithm, the MCCV algorithm, and the method without sample pretreatment in eliminating outlier samples from the 20 moisture gradients of tobacco plant leaves, and MD-MCCV can be used to eliminate outlier samples in spectral preprocessing. Copyright © 2016 Elsevier Inc. All rights reserved.
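
    Mahalanobis-distance outlier screening, the MD ingredient of MD-MCCV, can be sketched for 2-D data. Real spectra are high-dimensional, and the threshold and data below are arbitrary illustrative choices:

```python
import math

def mahalanobis_outliers(points, threshold=3.0):
    """Flag 2-D points whose Mahalanobis distance from the sample mean
    exceeds a threshold; the 2x2 covariance matrix is inverted in
    closed form."""
    n = len(points)
    mx = sum(p[0] for p in points)/n
    my = sum(p[1] for p in points)/n
    sxx = sum((p[0] - mx)**2 for p in points)/n
    syy = sum((p[1] - my)**2 for p in points)/n
    sxy = sum((p[0] - mx)*(p[1] - my) for p in points)/n
    det = sxx*syy - sxy*sxy
    ixx, iyy, ixy = syy/det, sxx/det, -sxy/det    # inverse covariance entries
    flags = []
    for x, y in points:
        dx, dy = x - mx, y - my
        d2 = ixx*dx*dx + iyy*dy*dy + 2.0*ixy*dx*dy
        flags.append(math.sqrt(d2) > threshold)
    return flags

# A tight cluster along a line plus one far-away point.
pts = [(0.1*i, 0.1*i + 0.05*(-1)**i) for i in range(20)] + [(10.0, -10.0)]
flags = mahalanobis_outliers(pts)
```

    Unlike plain Euclidean distance, the Mahalanobis distance accounts for the correlation structure of the data, which matters for strongly correlated spectral channels.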

  3. Implementation of the direct S ( α , β ) method in the KENO Monte Carlo code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Shane W. D.; Maldonado, G. Ivan

    The Monte Carlo code KENO contains thermal scattering data for a wide variety of thermal moderators. These data are processed from Evaluated Nuclear Data Files (ENDF) by AMPX and stored as double differential probability distribution functions. The method examined in this study uses S(α,β) probability distribution functions derived directly from the ENDF data files instead of being converted to double differential cross sections. This allows the size of the cross section data on disk to be reduced by a substantial amount. KENO has also been updated to allow interpolation in temperature on these data, so that problems can be run at any temperature. Results are shown for several simplified problems for a variety of moderators. In addition, benchmark models based on the KRITZ reactor in Sweden were run, and the results are compared with those from previous versions of KENO without the direct S(α,β) method. Results from the direct S(α,β) method compare favorably with the original results obtained using the double differential cross sections. Finally, sampling the data increases the run-time of the Monte Carlo calculation, but memory usage is decreased substantially.

  4. Variance reduction for Fokker–Planck based particle Monte Carlo schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick

    Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes are derived and reviewed, and it is shown that these methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is solved simultaneously with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette, and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
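
    The idea of pairing the main process with a correlated auxiliary process of known solution is, in its simplest static form, a control variate. This toy sketch, estimating E[exp(U)] with U itself as the auxiliary quantity, is not the Fokker–Planck scheme of the paper:

```python
import math
import random

def estimate_with_control_variate(n=100000, seed=9):
    """Estimate E[exp(U)], U ~ Uniform(0,1), twice: plainly, and with the
    correlated auxiliary quantity U (whose mean 1/2 is known exactly)
    subtracted as a control variate."""
    rng = random.Random(seed)
    us = [rng.random() for _ in range(n)]
    ys = [math.exp(u) for u in us]
    mu_u = sum(us)/n
    plain = sum(ys)/n
    # Estimate the optimal coefficient c = Cov(Y, U)/Var(U) from the sample.
    cov = sum((u - mu_u)*(y - plain) for u, y in zip(us, ys))/n
    var_u = sum((u - mu_u)**2 for u in us)/n
    c = cov/var_u
    controlled = sum(y - c*(u - 0.5) for u, y in zip(us, ys))/n
    return plain, controlled

plain, controlled = estimate_with_control_variate()
exact = math.e - 1.0
err_plain = abs(plain - exact)
err_controlled = abs(controlled - exact)
```

    Because U and exp(U) are highly correlated, subtracting the known-mean auxiliary removes most of the fluctuation, the same mechanism that makes the paper's correlated-process scheme effective.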

  5. Implementation of the direct S ( α , β ) method in the KENO Monte Carlo code

    DOE PAGES

    Hart, Shane W. D.; Maldonado, G. Ivan

    2016-11-25

    The Monte Carlo code KENO contains thermal scattering data for a wide variety of thermal moderators. These data are processed from Evaluated Nuclear Data Files (ENDF) by AMPX and stored as double differential probability distribution functions. The method examined in this study uses S(α,β) probability distribution functions derived directly from the ENDF data files instead of being converted to double differential cross sections. This allows the size of the cross section data on disk to be reduced by a substantial amount. KENO has also been updated to allow interpolation in temperature on these data, so that problems can be run at any temperature. Results are shown for several simplified problems for a variety of moderators. In addition, benchmark models based on the KRITZ reactor in Sweden were run, and the results are compared with those from previous versions of KENO without the direct S(α,β) method. Results from the direct S(α,β) method compare favorably with the original results obtained using the double differential cross sections. Finally, sampling the data increases the run-time of the Monte Carlo calculation, but memory usage is decreased substantially.

  6. A smart Monte Carlo procedure for production costing and uncertainty analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, C.; Stremel, J.

    1996-11-01

    Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake, often $100,000 a day or more, and one party to the sale must always take on the risk. In the case of fixed price ($/MWh) contracts, the seller accepts the risk; in the case of cost-plus contracts, the buyer must accept the risk. So, modeling uncertainty and understanding the risk accurately can improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit-outage selection methods and measure their results. Specifically, a brute force Monte Carlo procedure, a Monte Carlo procedure with Latin Hypercube sampling, and a Smart Monte Carlo procedure with cost stratification and directed sampling are examined.
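
    Latin hypercube sampling, the second of the three outage-selection strategies compared above, stratifies each input dimension so that every stratum is sampled exactly once. A minimal generator for points in the unit hypercube (not the paper's production-costing implementation):

```python
import random

def latin_hypercube(n, d, seed=13):
    """Latin hypercube sample of n points in [0,1)^d: each axis is split
    into n equal strata and every stratum is hit exactly once."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)                       # which stratum each point gets
        cols.append([(k + rng.random())/n for k in perm])
    return list(zip(*cols))                     # n points, each a d-tuple

pts = latin_hypercube(100, 2)
# One-dimensional stratification: each of the 100 x-strata holds exactly one point.
counts_x = [sum(1 for p in pts if int(p[0]*100) == k) for k in range(100)]
```

    Compared with brute-force sampling, the guaranteed per-dimension coverage typically reduces the variance of estimated expectations for the same number of model runs.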

  7. Fast Monte Carlo simulation of a dispersive sample on the SEQUOIA spectrometer at the SNS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granroth, Garrett E; Chen, Meili; Kohl, James Arthur

    2007-01-01

    Simulation of an inelastic scattering experiment, with a sample and a large pixelated detector, usually requires days of computation because of finite processor speeds. We report simulations of an SNS (Spallation Neutron Source) instrument, SEQUOIA, that reduce the time to less than 2 hours by using parallelization and the resources of the TeraGrid. SEQUOIA is a fine-resolution (∆E/Ei ~ 1%) chopper spectrometer under construction at the SNS. It utilizes incident energies from Ei = 20 meV to 2 eV and will have ~144,000 detector pixels covering 1.6 sr of solid angle. The full spectrometer, including a 1-D dispersive sample, has been simulated using the Monte Carlo package McStas. This paper summarizes the method of parallelization for and results from these simulations. In addition, limitations of and proposed improvements to current analysis software are discussed.

  8. Optimized nested Markov chain Monte Carlo sampling: theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D

    2009-01-01

    Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system, we maximize the average acceptance probability of composite moves, significantly lengthening the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.

  9. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    NASA Astrophysics Data System (ADS)

    Burns, Kimberly Ann

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples. In these applications, high-resolution gamma-ray spectrometers are used to preserve as much information as possible about the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used modeling tool for this type of problem, but computational times for many problems can be prohibitive. This work explores the use of coupled Monte Carlo-deterministic methods for the simulation of neutron-induced photons for high-resolution gamma-ray spectroscopy applications. RAdiation Detection Scenario Analysis Toolbox (RADSAT), a code which couples deterministic and Monte Carlo transport to perform radiation detection scenario analysis in three dimensions [1], was used as the building block for the methods derived in this work. RADSAT was capable of performing coupled deterministic-Monte Carlo simulations for gamma-only and neutron-only problems. The purpose of this work was to develop the methodology necessary to perform coupled neutron-photon calculations and add this capability to RADSAT. Performing coupled neutron-photon calculations requires four main steps: the deterministic neutron transport calculation, the neutron-induced photon spectrum calculation, the deterministic photon transport calculation, and the Monte Carlo detector response calculation. The necessary requirements for each of these steps were determined. A major challenge in utilizing multigroup deterministic transport methods for neutron-photon problems was maintaining the discrete neutron-induced photon signatures throughout the simulation. 
Existing coupled neutron-photon cross-section libraries and the methods used to produce neutron-induced photons were unsuitable for high-resolution gamma-ray spectroscopy applications. Central to this work was the development of a method for generating multigroup neutron-photon cross sections in a way that separates the discrete and continuum photon emissions so that the neutron-induced photon signatures are preserved. The RADSAT-NG cross-section library was developed as a specialized multigroup neutron-photon cross-section set for the simulation of high-resolution gamma-ray spectroscopy applications. The methodology and cross sections were tested by code-to-code comparison with MCNP5 [2] and NJOY [3]. A simple benchmark geometry was used for all cases compared with MCNP: a cubical sample with a 252Cf neutron source on one side and an HPGe gamma-ray spectrometer on the opposing side. Different materials were examined in the cubical sample: polyethylene (C2H4), P, N, O, and Fe. The cross sections for each of the materials were compared to cross sections collapsed using NJOY. Comparisons of the volume-averaged neutron flux within the sample, the volume-averaged photon flux within the detector, and the high-purity gamma-ray spectrometer response (only for polyethylene) were completed using RADSAT and MCNP. The code-to-code comparisons show promising results for the coupled Monte Carlo-deterministic method. The RADSAT-NG cross-section production method showed good agreement with NJOY for all materials considered, although discrepancies existed in the resonance region and in the lowest and highest energy bins; the overall shape and magnitude from the two methods agreed. For the volume-averaged photon flux within the detector, the five most intense lines typically agree to within approximately 5% of the MCNP-calculated flux for all materials considered.
The agreement in the code-to-code comparison cases demonstrates a proof of concept of the method for use in RADSAT for coupled neutron-photon problems in high-resolution gamma-ray spectroscopy applications. One of the primary motivators for using the coupled method over a pure Monte Carlo method is the potential for significantly lower computational times. For the code-to-code comparison cases, the run times for RADSAT were approximately 25-500 times shorter than for MCNP, as shown in Table 1. This assumed a 40 mCi 252Cf neutron source and 600 seconds of "real-world" measurement time. The only variance reduction technique implemented in the MCNP calculation was forward biasing of the source toward the sample target; improved MCNP runtimes could be achieved with the addition of more advanced variance reduction techniques.

  10. Status of the Monte Carlo library least-squares (MCLLS) approach for non-linear radiation analyzer problems

    NASA Astrophysics Data System (ADS)

    Gardner, Robin P.; Xu, Libai

    2009-10-01

    The Center for Engineering Applications of Radioisotopes (CEAR) has been working for over a decade on the Monte Carlo library least-squares (MCLLS) approach for treating non-linear radiation analyzer problems including: (1) prompt gamma-ray neutron activation analysis (PGNAA) for bulk analysis, (2) energy-dispersive X-ray fluorescence (EDXRF) analyzers, and (3) carbon/oxygen tool analysis in oil well logging. This approach essentially consists of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required background libraries. These libraries are then used in the linear library least-squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. Iterations of this are used until the LLS values agree with the composition used to generate the libraries. The current status of the methods (and topics) necessary to implement the MCLLS approach is reported. This includes: (1) the Monte Carlo codes such as CEARXRF, CEARCPG, and CEARCO for forward generation of the necessary elemental library spectra for the LLS calculation for X-ray fluorescence, neutron capture prompt gamma-ray analyzers, and carbon/oxygen tools; (2) the correction of spectral pulse pile-up (PPU) distortion by Monte Carlo simulation with the code CEARIPPU; (3) generation of detector response functions (DRF) for detectors with linear and non-linear responses for Monte Carlo simulation of pulse-height spectra; and (4) the use of the differential operator (DO) technique to make the necessary iterations for non-linear responses practical. In addition to commonly analyzed single spectra, coincidence spectra or even two-dimensional (2-D) coincidence spectra can also be used in the MCLLS approach and may provide more accurate results.
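
    The linear library least-squares (LLS) step at the heart of MCLLS solves a standard normal-equations problem: find the amounts of each library spectrum that best reproduce the measured spectrum. A two-library sketch with made-up six-channel spectra (noise-free, so the amounts are recovered exactly):

```python
def lls_two_libraries(lib1, lib2, spectrum):
    """Linear library least squares for two libraries: solve the 2x2
    normal equations (A^T A) x = A^T b in closed form, where the columns
    of A are the library spectra and b is the measured spectrum."""
    a11 = sum(u*u for u in lib1)
    a12 = sum(u*v for u, v in zip(lib1, lib2))
    a22 = sum(v*v for v in lib2)
    b1 = sum(u*s for u, s in zip(lib1, spectrum))
    b2 = sum(v*s for v, s in zip(lib2, spectrum))
    det = a11*a22 - a12*a12
    return (a22*b1 - a12*b2)/det, (a11*b2 - a12*b1)/det

# Made-up six-channel 'elemental' library spectra and a noise-free mixture.
lib_a = [0.0, 1.0, 3.0, 1.0, 0.0, 0.0]
lib_b = [0.0, 0.0, 0.5, 2.0, 1.0, 0.0]
measured = [2.0*u + 0.5*v for u, v in zip(lib_a, lib_b)]
x_a, x_b = lls_two_libraries(lib_a, lib_b, measured)
```

    In MCLLS the library spectra themselves come from Monte Carlo simulation, and the fit is iterated until the fitted composition matches the composition used to generate the libraries.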

  11. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.

  12. A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components

    NASA Technical Reports Server (NTRS)

    Abernethy, K.

    1986-01-01

    The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo computer simulation study to evaluate the usefulness of the Weibull methods, using samples with a very small number of failures and extensive censoring, are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs that are used in the SSME Weibull analysis are described. The previously documented methods were supplemented by adding computer calculations of approximate confidence intervals (using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, as are the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and the numbers of failures in the samples.
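The random censoring model described above, with censoring times drawn from a uniform distribution, can be sketched as follows. The Weibull shape, scale, and sample size are illustrative choices, not SSME values:

```python
import numpy as np

rng = np.random.default_rng(1)
shape, scale, n = 2.0, 100.0, 500             # illustrative Weibull parameters

failure = scale * rng.weibull(shape, size=n)  # true (unobserved) failure times
censor = rng.uniform(0, 2 * scale, size=n)    # random censoring model: uniform censoring times

observed = np.minimum(failure, censor)        # what the analyst actually records
event = failure <= censor                     # True where an actual failure was observed
frac_censored = 1.0 - event.mean()
```

The pairs `(observed, event)` are exactly the input a censored Weibull maximum-likelihood fit would consume; sweeping `shape` and `n` reproduces the kind of simulation grid the study tabulates.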

  13. Deterministic alternatives to the full configuration interaction quantum Monte Carlo method for strongly correlated systems

    NASA Astrophysics Data System (ADS)

    Tubman, Norm; Whaley, Birgitta

    The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground state and excited state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jennings, E.; Madigan, M.

    Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the likelihood is intractable or unknown. The ABC method is called "likelihood free" as it avoids explicit evaluation of the likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of jobs across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimation using scikit-learn's KDTree; modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel; output and restart files backed up every iteration; user-defined metric and simulation methods; a module for specifying heterogeneous parameter priors, including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; and well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC
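The "likelihood-free" idea behind ABC can be illustrated with a minimal rejection sampler; astroABC itself uses a more sophisticated SMC scheme with adaptive tolerances. The Gaussian toy model, prior range, and fixed tolerance below are assumptions chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(3.0, 1.0, size=200)          # "observed" data with true mean 3
obs_stat = data.mean()                         # summary statistic of the observations

def simulate(theta, rng):
    """Forward model: simulate a dataset given theta and return its summary statistic."""
    return rng.normal(theta, 1.0, size=200).mean()

# ABC rejection: draw from the prior, keep draws whose simulated summary
# falls within a tolerance of the observed summary -- no likelihood evaluated.
accepted = []
for _ in range(5000):
    theta = rng.uniform(0.0, 6.0)              # flat prior over a plausible range
    if abs(simulate(theta, rng) - obs_stat) < 0.1:
        accepted.append(theta)

posterior_mean = float(np.mean(accepted))
```

An SMC sampler improves on this by propagating a population of particles through a decreasing sequence of tolerances instead of rejecting most prior draws outright.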

  15. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given, along with a listing of the computer program written to implement them. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is presented, together with the results of matrices from the mapping effort of the San Juan National Forest. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is given, and a method is proposed for determining the reliability of change detection between two maps of the same area produced at different times.

  16. Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods

    NASA Astrophysics Data System (ADS)

    Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.

    2011-12-01

    Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while still describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA with geometric sampling (referred to as 'Method 1'), (2) PCA with MCMC sampling ('Method 2'), and (3) PCA with MCMC sampling and inclusion of random effects ('Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with added noise and inverted them under two different situations: (1) the noisy data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient.
In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. In the biased case, however, only Method 3 correctly estimates all the unknown parameters, while Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for PCA analysis is inconsistent with the true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
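A minimal sketch of the Method 2 idea, PCA-based reduction followed by MCMC sampling in the reduced coordinate space, might look like the following. The five-parameter target vector, temperature, and step size are invented for illustration and are not the paper's convolution model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Build a correlated 5-parameter covariance and take its PCA (eigen) basis.
A = rng.normal(size=(5, 5))
cov = A @ A.T
vals, vecs = np.linalg.eigh(cov)
basis = vecs[:, -2:]                     # keep only the 2 leading principal components

target = np.array([1.0, -0.5, 0.3, 0.0, 0.8])   # illustrative "true" model

def misfit(coeff):
    model = basis @ coeff                # map reduced coordinates back to full space
    return np.sum((model - target) ** 2)

# Random-walk Metropolis in the 2-D reduced space instead of the full 5-D space.
coeff = np.zeros(2)
chain = []
for _ in range(4000):
    prop = coeff + 0.2 * rng.normal(size=2)
    if rng.uniform() < np.exp(-(misfit(prop) - misfit(coeff)) / 0.1):
        coeff = prop
    chain.append(coeff.copy())
chain = np.array(chain)
```

The dimensionality reduction is what makes the MCMC tractable; the price, as the biased case in the abstract shows, is that a wrong covariance (hence a wrong basis) biases the reconstruction.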

  17. Invited review article: physics and Monte Carlo techniques as relevant to cryogenic, phonon, and ionization readout of Cryogenic Dark Matter Search radiation detectors.

    PubMed

    Leman, Steven W

    2012-09-01

    This review discusses detector physics and Monte Carlo techniques for cryogenic radiation detectors that utilize combined phonon and ionization readout. A general review of cryogenic phonon and charge transport is provided along with specific details of the Cryogenic Dark Matter Search detector instrumentation. In particular, this review covers quasidiffusive phonon transport, which includes phonon focusing, anharmonic decay, and isotope scattering. The interaction of phonons at the detector surface is discussed along with the downconversion of phonons in superconducting films. The charge transport physics include a mass tensor which results from the crystal band structure and is modeled with a Herring-Vogt transformation. Charge scattering processes involve the creation of Neganov-Luke phonons. Transition-edge-sensor (TES) simulations include a full electric circuit description and all thermal processes including Joule heating, cooling to the substrate, and thermal diffusion within the TES, the latter of which is necessary to model normal-superconducting phase separation. Relevant numerical constants are provided for these physical processes in germanium, silicon, aluminum, and tungsten. Random number sampling methods including inverse cumulative distribution function (CDF) and rejection techniques are reviewed. To improve the efficiency of charge transport modeling, an additional second order inverse CDF method is developed here along with an efficient barycentric coordinate sampling method of electric fields. Results are provided in a manner that is convenient for use in Monte Carlo and references are provided for validation of these models.
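The inverse-CDF sampling technique mentioned above can be illustrated with the exponential distribution, whose CDF inverts in closed form (the rate parameter here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

# Inverse-CDF sampling of an exponential distribution with rate lam:
# CDF F(x) = 1 - exp(-lam * x)  =>  F^{-1}(u) = -ln(1 - u) / lam.
lam = 2.0
u = rng.uniform(size=100_000)
samples = -np.log(1.0 - u) / lam

mean_est = samples.mean()   # should approach the exact mean 1/lam = 0.5
```

Rejection sampling is the usual fallback when the CDF cannot be inverted analytically; the "second order" refinement developed in the review trades exactness of the inversion for evaluation speed.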

  18. Methods for Monte Carlo simulations of biomacromolecules

    PubMed Central

    Vitalis, Andreas; Pappu, Rohit V.

    2010-01-01

    The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies. PMID:20428473

  19. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters were selected by implementing F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) was then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the amount of calculation and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization methods, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and a GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.

  20. Multi-chain Markov chain Monte Carlo methods for computationally expensive models

    NASA Astrophysics Data System (ADS)

    Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.

    2017-12-01

    Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better and conceivably accelerate convergence to the final distribution. We present results from tests conducted with the multi-chain method to show when the acceleration occurs; for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also seems able to accelerate the convergence of a few chains that might start from suboptimal starting points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
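A minimal multi-chain Metropolis experiment with a Gelman-Rubin convergence check conveys the idea; the 1-D Gaussian target, chain count, and step counts are illustrative, not the Community Land Model setup:

```python
import numpy as np

rng = np.random.default_rng(5)

def log_post(x):
    """Toy 1-D log-posterior: standard normal."""
    return -0.5 * x * x

# Run several independent Metropolis chains from over-dispersed starting points.
n_chains, n_steps = 4, 2000
chains = np.empty((n_chains, n_steps))
for c in range(n_chains):
    x = rng.uniform(-5, 5)                   # deliberately suboptimal start
    for t in range(n_steps):
        prop = x + rng.normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(x):
            x = prop
        chains[c, t] = x

# Gelman-Rubin potential scale reduction on the second half of each chain:
# values near 1 indicate the chains have mixed into the same distribution.
half = chains[:, n_steps // 2:]
m, n = half.shape
W = half.var(axis=1, ddof=1).mean()          # within-chain variance
B = n * half.mean(axis=1).var(ddof=1)        # between-chain variance
R_hat = np.sqrt(((n - 1) / n * W + B / n) / W)
```

The between-chain term is what lets an ensemble diagnose (and, with communication, correct) a chain stuck near a poor starting point.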

  1. Improved diffusion Monte Carlo propagators for bosonic systems using Itô calculus

    NASA Astrophysics Data System (ADS)

    Håkansson, P.; Mella, M.; Bressanini, Dario; Morosi, Gabriele; Patrone, Marta

    2006-11-01

    The construction of importance sampled diffusion Monte Carlo (DMC) schemes accurate to second order in the time step is discussed. A central aspect in obtaining efficient second order schemes is the numerical solution of the stochastic differential equation (SDE) associated with the Fokker-Planck equation responsible for the importance sampling procedure. In this work, stochastic predictor-corrector schemes solving the SDE and consistent with Itô calculus are used in DMC simulations of helium clusters. These schemes are numerically compared with alternative algorithms obtained by splitting the Fokker-Planck operator, an approach that we analyze using the analytical tools provided by Itô calculus. The numerical results show that predictor-corrector methods are indeed accurate to second order in the time step and that they present a smaller time step bias and a better efficiency than second order split-operator derived schemes when computing ensemble averages for bosonic systems. The possible extension of the predictor-corrector methods to higher orders is also discussed.
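A stochastic predictor-corrector step of the kind discussed, an Euler predictor followed by a trapezoidal drift corrector reusing the same Wiener increment, can be sketched on an Ornstein-Uhlenbeck SDE. The drift, noise level, and step counts are illustrative, not the DMC drift of the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

def drift(x):
    return -x                      # Ornstein-Uhlenbeck drift (illustrative)

sigma, dt, n_steps, n_paths = 0.5, 0.01, 1000, 2000
x = np.full(n_paths, 1.0)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.normal(size=n_paths)
    # Predictor: plain Euler-Maruyama step.
    x_pred = x + drift(x) * dt + sigma * dW
    # Corrector: trapezoidal average of drift at old and predicted points,
    # reusing the SAME Wiener increment (consistent with Itô calculus).
    x = x + 0.5 * (drift(x) + drift(x_pred)) * dt + sigma * dW

mean_est = x.mean()                # exact mean exp(-T) with T = 10, i.e. ~0
```

The corrector removes the leading time-step bias of the drift integration, which is the mechanism behind the smaller bias reported for the predictor-corrector DMC schemes.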

  2. Path integral Monte Carlo and the electron gas

    NASA Astrophysics Data System (ADS)

    Brown, Ethan W.

    Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite-temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an exponentially decreasing efficiency with decreased temperature and increased system size. In this thesis, we work towards improving this efficiency, both through approximate and exact methods, as specifically applied to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite-temperature before we delve into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems, the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas at a wide swath of densities and temperatures in order to map out the warm-dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input into density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation.
As a first step, we utilize the variational principle inherent in the path integral Monte Carlo method to optimize the nodal surface. By using an ansatz resembling a free-particle density matrix, we make a unique connection between a nodal effective mass and the traditional effective mass of many-body quantum theory. We then propose and test several alternative nodal ansatzes and apply them to single atomic systems. Finally, we propose a method to tackle the sign problem head on, by leveraging the relatively simple structure of permutation space. Using this method, we find we can perform exact simulations of the electron gas and 3He that were previously impossible.

  3. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Ray, Jaideep; Ebeida, Mohamed Salah

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden and enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently hones in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.

  4. SU-G-TeP3-14: Three-Dimensional Cluster Model in Inhomogeneous Dose Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, J; Penagaricano, J; Narayanasamy, G

    2016-06-15

    Purpose: We aim to investigate 3D cluster formation in inhomogeneous dose distributions, to search for new models predicting radiation tissue damage and, further, to motivate a new optimization paradigm for radiotherapy planning. Methods: The aggregation of voxels in the organ at risk (OAR) receiving a higher dose than a preset threshold was chosen as the cluster, whose connectivity dictates the cluster structure. Upon selection of the dose threshold, the fractional density, defined as the fraction of voxels in the organ eligible to be part of the cluster, was determined according to the dose volume histogram (DVH). A Monte Carlo method was implemented to establish a case pertinent to the corresponding DVH. Ones and zeros were randomly assigned to each OAR voxel with the sampling probability equal to the fractional density. Ten thousand samples were randomly generated to ensure a sufficient number of cluster sets. A recursive cluster-searching algorithm was developed to analyze the clusters with various connectivity choices such as 1-, 2-, and 3-connectivity. The mean size of the largest cluster (MSLC) from the Monte Carlo samples was taken to be a function of the fractional density. Various OARs from clinical plans were included in the study. Results: The intensive Monte Carlo study demonstrates the anticipated inverse relationship between the MSLC and the cluster connectivity, and the cluster size does not change linearly with fractional density regardless of the connectivity type. A transition of the MSLC from an initially slow increase to exponential growth was observed from low to high density. The cluster sizes were found to vary within a large range and are relatively independent of the OARs. Conclusion: The Monte Carlo study revealed that the cluster size could serve as a suitable index of tissue damage (percolation cluster) and that the clinical outcome of the same DVH might be potentially different.
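The largest-cluster Monte Carlo procedure described above can be sketched as follows. This sketch uses an iterative breadth-first variant of the recursive cluster search, with 1-connectivity (face adjacency); the 10×10×10 grid, fractional density, and trial count are chosen only for illustration:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(7)

def largest_cluster(occ):
    """Size of the largest face-connected (1-connectivity) cluster in a 3-D 0/1 grid."""
    seen = np.zeros_like(occ, dtype=bool)
    best = 0
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(occ)):
        if seen[start]:
            continue
        size, q = 0, deque([start])
        seen[start] = True
        while q:                                  # breadth-first flood fill
            i, j, k = q.popleft()
            size += 1
            for di, dj, dk in nbrs:
                p = (i + di, j + dj, k + dk)
                if all(0 <= c < s for c, s in zip(p, occ.shape)) and occ[p] and not seen[p]:
                    seen[p] = True
                    q.append(p)
        best = max(best, size)
    return best

# Monte Carlo estimate of the mean size of the largest cluster (MSLC)
# at a given fractional density: voxels are occupied independently.
density, trials = 0.4, 20
sizes = [largest_cluster(rng.uniform(size=(10, 10, 10)) < density) for _ in range(trials)]
mslc = float(np.mean(sizes))
```

Sweeping `density` from low to high values reproduces the slow-then-exponential growth of the MSLC noted in the results, the signature of a percolation transition.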

  5. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose coherence-optimal sampling: a Markov Chain Monte Carlo sampling which directly uses the basis functions under consideration to achieve statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  6. Generating moment matching scenarios using optimization techniques

    DOE PAGES

    Mehrotra, Sanjay; Papp, Dávid

    2013-05-16

    An optimization based method is proposed to generate moment matching scenarios for numerical integration and its use in stochastic programming. The main advantage of the method is its flexibility: it can generate scenarios matching any prescribed set of moments of the underlying distribution, rather than matching all moments up to a certain order, and the distribution can be defined over an arbitrary set. This allows for a reduction in the number of scenarios and allows the scenarios to be better tailored to the problem at hand. The method is based on a semi-infinite linear programming formulation of the problem that is shown to be solvable with polynomial iteration complexity. A practical column generation method is implemented. The column generation subproblems are polynomial optimization problems; however, they need not be solved to optimality. It is found that the columns in the column generation approach can be efficiently generated by random sampling. The number of scenarios generated matches a lower bound of Tchakaloff's. The rate of convergence of the approximation error is established for continuous integrands, and an improved bound is given for smooth integrands. Extensive numerical experiments are presented in which variants of the proposed method are compared to Monte Carlo and quasi-Monte Carlo methods on both numerical integration problems and stochastic optimization problems. The benefits of being able to match any prescribed set of moments, rather than all moments up to a certain order, are also demonstrated using optimization problems with 100-dimensional random vectors. Here, empirical results show that the proposed approach outperforms Monte Carlo and quasi-Monte Carlo based approaches on the tested problems.

  7. Design and implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples

    PubMed Central

    Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2016-01-01

    A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice. PMID:26962543

  8. Design and implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples.

    PubMed

    Lakshmanan, Manu N; Greenberg, Joel A; Samei, Ehsan; Kapadia, Anuj J

    2016-01-01

    A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice.

  9. Gaussianization for fast and accurate inference from cosmological data

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2016-06-01

    We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e., the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for a flat Λ cold dark matter model. Comparing to values computed with the Savage-Dickey density ratio and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
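The maximum-likelihood search for a Gaussianizing Box-Cox parameter can be sketched as a one-dimensional grid search. The lognormal toy sample and the grid are assumptions for illustration; the paper's method handles multivariate generalizations of the transformation:

```python
import numpy as np

rng = np.random.default_rng(8)
sample = rng.lognormal(mean=0.0, sigma=0.5, size=5000)   # skewed "posterior" sample

def boxcox(x, lam):
    """Box-Cox transform; lam -> 0 reduces to the logarithm."""
    if abs(lam) < 1e-8:
        return np.log(x)
    return (x**lam - 1.0) / lam

def gauss_loglike(x, lam):
    """Profile log-likelihood of lam under a Gaussian model for the transformed data,
    including the Jacobian term (lam - 1) * sum(log x)."""
    y = boxcox(x, lam)
    return -0.5 * len(x) * np.log(y.var()) + (lam - 1.0) * np.log(x).sum()

# Grid search for the maximum-likelihood Gaussianizing transformation.
grid = np.linspace(-1.0, 1.5, 101)
best_lam = grid[np.argmax([gauss_loglike(sample, lam) for lam in grid])]
```

For truly lognormal data the optimum sits near lam = 0 (the log transform); in the paper the fitted transformation then lets the posterior be reported as a Gaussian plus a mapping.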

  10. Kalman filter parameter estimation for a nonlinear diffusion model of epithelial cell migration using stochastic collocation and the Karhunen-Loeve expansion.

    PubMed

    Barber, Jared; Tanase, Roxana; Yotov, Ivan

    2016-06-01

    Several Kalman filter algorithms are presented for data assimilation and parameter estimation for a nonlinear diffusion model of epithelial cell migration. These include the ensemble Kalman filter with Monte Carlo sampling and a stochastic collocation (SC) Kalman filter with structured sampling. Further, two types of noise are considered: uncorrelated noise, resulting in one stochastic dimension for each element of the spatial grid, and correlated noise, parameterized by the Karhunen-Loève (KL) expansion, resulting in one stochastic dimension for each KL term. The efficiency and accuracy of the four methods are investigated for two cases with synthetic data, with and without noise, as well as for data from a laboratory experiment. While all algorithms perform reasonably well in matching the target solution and estimating the diffusion coefficient and the growth rate, the algorithms that employ SC and the KL expansion are computationally more efficient, as they require fewer ensemble members for comparable accuracy. In the case of the SC methods, this is due to improved approximation in stochastic space compared to Monte Carlo sampling. In the case of the KL methods, the parameterization of the noise results in a stochastic space of smaller dimension. The most efficient method is the one combining SC and the KL expansion. Copyright © 2016 Elsevier Inc. All rights reserved.
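
    A minimal sketch of the perturbed-observation ensemble Kalman filter analysis step that underlies Monte Carlo-sampled EnKF variants (a generic linear-observation toy problem, not the paper's cell-migration model):

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_var, rng):
    """One perturbed-observation EnKF analysis step.
    ensemble: (n_members, n_state); obs: (n_obs,); obs_operator H: (n_obs, n_state)."""
    n = ensemble.shape[0]
    Hx = ensemble @ obs_operator.T                     # predicted observations
    X = ensemble - ensemble.mean(axis=0)               # state anomalies
    Y = Hx - Hx.mean(axis=0)                           # observation anomalies
    P_xy = X.T @ Y / (n - 1)                           # state-obs covariance
    P_yy = Y.T @ Y / (n - 1) + obs_var * np.eye(len(obs))
    K = P_xy @ np.linalg.inv(P_yy)                     # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=(n, len(obs)))
    return ensemble + (perturbed - Hx) @ K.T

rng = np.random.default_rng(1)
truth = np.array([2.0, -1.0])                          # hypothetical parameters
H = np.eye(2)                                          # direct observation
prior = rng.normal(0.0, 2.0, size=(500, 2))            # wide, uninformed prior ensemble
obs = truth + rng.normal(0.0, 0.1, size=2)
posterior = enkf_update(prior, obs, H, obs_var=0.01, rng=rng)
```

    After the update, the posterior ensemble mean moves from the diffuse prior toward the (noisy) observation of the true parameters.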

  11. Optical coefficient measurements using bulk living tissue by an optical fiber puncture with FOV change

    NASA Astrophysics Data System (ADS)

    Nakazawa, Haruna; Doi, Marika; Ogawa, Emiyu; Arai, Tsunenori

    2018-02-01

    To avoid the instability of optical coefficient measurements that use sliced tissue preparations, we propose combining light-intensity measurements made through an optical fiber punctured into bulk tissue with a varying field of view (FOV), together with ray-tracing calculations using a Monte Carlo method. The optical coefficients of myocardium, such as the absorption coefficient μa, the scattering coefficient μs, and the anisotropy parameter g, govern optical propagation in the myocardium. Because optical coefficients obtained from thinly sliced tissue can be unstable, being affected by dehydration and the effusion of intracellular fluid at the sample surface, the coefficients reported in the literature vary beyond the individual optical differences between living samples. The proposed method, combining an experiment on bulk tissue with a ray-tracing calculation, was carried out as follows: a 200 μm diameter high-NA silica fiber installed in a 21G needle was punctured to the bottom of a myocardial bulk tissue over 3 cm in thickness, and light intensity was measured while varying the fiber-tip depth and the FOV. We found that the measured attenuation coefficients decreased as the FOV increased. The ray-tracing calculation reproduced the same FOV dependence observed in the experiment. We believe our fiber-puncture measurement on bulk tissue with varying FOV, combined with an inverse Monte Carlo method, may be useful for obtaining optical coefficients while avoiding the instabilities of sample preparation.

  12. Tissue spray ionization mass spectrometry for rapid recognition of human lung squamous cell carcinoma

    NASA Astrophysics Data System (ADS)

    Wei, Yiping; Chen, Liru; Zhou, Wei; Chingin, Konstantin; Ouyang, Yongzhong; Zhu, Tenggao; Wen, Hua; Ding, Jianhua; Xu, Jianjun; Chen, Huanwen

    2015-05-01

    Tissue spray ionization mass spectrometry (TSI-MS) performed directly on small tissue samples has been shown to provide highly specific molecular information. In this study, we apply this method to the analysis of 38 pairs of human lung squamous cell carcinoma tissue (cancer) and adjacent normal lung tissue (normal). The main components of pulmonary surfactants, dipalmitoyl phosphatidylcholine (DPPC, m/z 757.47), palmitoyl-oleoyl phosphatidylcholine (POPC, m/z 782.52), dioleoyl phosphatidylcholine (DOPC, m/z 808.49), and stearoyl-arachidonoyl phosphatidylcholine (SAPC, m/z 832.43), were identified using high-resolution tandem mass spectrometry. Monte Carlo sampling combined with partial least squares-linear discriminant analysis (PLS-LDA) was used to distinguish the full-mass-range mass spectra of cancer samples from those of normal tissues. With 5 principal components and 30-40 Monte Carlo samplings, the accuracy of cancer identification in matched tissue samples reached 94.42%. Classification of a tissue sample required less than 1 min, which is much faster than the analysis of frozen sections. The rapid, in situ diagnosis with minimal sample consumption provided by TSI-MS is advantageous for surgeons, allowing them to make more informed decisions during surgery.

  13. Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation

    NASA Astrophysics Data System (ADS)

    Räisänen, Petri; Barker, Howard W.

    2004-07-01

    The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is simply to restrict all of McICA's samples to cloudy regions. This avoids wasting the precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeatedly sampling and averaging those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of about 3.

  14. High-Throughput Computation and the Applicability of Monte Carlo Integration in Fatigue Load Estimation of Floating Offshore Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Peter A.; Stewart, Gordon; Lackner, Matthew

    Long-term fatigue loads for floating offshore wind turbines are hard to estimate because they require evaluating the integral of a highly nonlinear function over a wide variety of wind and wave conditions. Current design standards involve scanning over a uniform rectangular grid of metocean inputs (e.g., wind speed and direction, wave height and period), which becomes intractable in high dimensions as the number of required evaluations grows exponentially with dimension. Monte Carlo integration offers a potentially efficient alternative because its theoretical convergence is proportional to the inverse of the square root of the number of samples, independent of dimension. In this paper, we first report on the integration of the aeroelastic code FAST into NREL's systems engineering tool, WISDEM, and the development of a high-throughput pipeline capable of sampling from arbitrary distributions, running FAST at large scale, and postprocessing the results into estimates of fatigue loads. Second, we use this tool to run a variety of studies comparing grid-based and Monte Carlo-based approaches to calculating long-term fatigue loads. We observe that for more than a few dimensions, the Monte Carlo approach can represent a large improvement in computational efficiency, but that as nonlinearity increases, the effectiveness of Monte Carlo is correspondingly reduced. The present work sets the stage for future research on using advanced statistical methods for the analysis of wind turbine fatigue as well as extreme loads.
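
    The dimension-independent O(N^{-1/2}) convergence that motivates the Monte Carlo approach can be checked numerically. In this sketch (a toy separable integrand, not a FAST fatigue-load evaluation), increasing the sample count 16-fold cuts the root-mean-square error by roughly a factor of 4:

```python
import numpy as np

def mc_rmse(f, dim, n_samples, n_repeats, true_value, seed=0):
    """Root-mean-square error of Monte Carlo integration over the unit hypercube."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_repeats):
        x = rng.random((n_samples, dim))
        errs.append(f(x).mean() - true_value)   # MC estimate minus exact integral
    return float(np.sqrt(np.mean(np.square(errs))))

# Integrand: product of coordinates; exact integral over [0,1]^d is (1/2)^d.
dim = 6
f = lambda x: np.prod(x, axis=1)
e_small = mc_rmse(f, dim, 1_000, 200, 0.5**dim)
e_large = mc_rmse(f, dim, 16_000, 200, 0.5**dim)   # 16x samples -> ~4x smaller RMSE
```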

  15. Dynamic Event Tree advancements and control logic improvements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego

    The RAVEN code has been under development at the Idaho National Laboratory since 2012. Its main goal is to create a multi-purpose platform for deploying all the capabilities needed for probabilistic risk assessment, uncertainty quantification, data mining analysis and optimization studies. RAVEN is currently equipped with three different sampling categories: forward samplers (Monte Carlo, Latin Hypercube, Stratified, Grid, Factorial, etc.), adaptive samplers (Limit Surface search, Adaptive Polynomial Chaos, etc.) and Dynamic Event Tree (DET) samplers (Deterministic and Adaptive Dynamic Event Trees). The main subject of this document is to report the activities that have been carried out in order to: start the migration of the RAVEN/RELAP-7 control logic system into MOOSE, and develop advanced dynamic sampling capabilities based on the Dynamic Event Tree approach. In order to provide all MOOSE-based applications with a control logic capability, an initial migration activity was begun this fiscal year, moving the control logic system designed for RELAP-7 by the RAVEN team into the MOOSE framework; this document briefly explains what has been done. The second and most important subject of this report is the development of a Dynamic Event Tree sampler named the "Hybrid Dynamic Event Tree" (HDET) and its adaptive variant, the "Adaptive Hybrid Dynamic Event Tree" (AHDET). As other authors have already reported, it is possible to discern two principal types of uncertainty: aleatory and epistemic. The classical Dynamic Event Tree treats the first class (aleatory); the dependence of the probabilistic risk assessment and analysis on the epistemic uncertainties is treated by an initial Monte Carlo sampling (MCDET). From each Monte Carlo sample, a DET analysis is run (in total, N trees).
The Monte Carlo step employs a pre-sampling of the input space characterized by epistemic uncertainties; the consequent Dynamic Event Tree then explores the aleatory space. In the RAVEN code, a more general approach has been developed that does not limit the exploration of the epistemic space to a Monte Carlo method, but instead allows all the forward sampling strategies RAVEN currently employs. The user can combine Latin Hypercube, Grid, Stratified and Monte Carlo sampling in order to explore the epistemic space, without any limitation. From this pre-sampling, the Dynamic Event Tree sampler starts its exploration of the aleatory space. As reported by the authors, the Dynamic Event Tree is a good fit for developing a goal-oriented sampling strategy, and the DET is used to drive a Limit Surface search. The methodology developed by the authors last year performs a Limit Surface search in the aleatory space only. This report documents how that approach has been extended to consider the epistemic space, interacting with the Hybrid Dynamic Event Tree methodology.

  16. Statistical power calculations for mixed pharmacokinetic study designs using a population approach.

    PubMed

    Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel

    2014-09-01

    Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding straightforward pharmacokinetic study design, which also considers the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect, and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect, in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and of dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
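
    The simulation-based power idea can be sketched for a deliberately simplified setting: simulate studies with a binary covariate effect, fit a model with and without the covariate, and count how often a likelihood ratio test rejects at the 5% level. Here a plain linear model stands in for the NONMEM pharmacokinetic model; this is not the Monte Carlo Mapped Power method itself:

```python
import numpy as np

CHI2_95_DF1 = 3.841  # 95th percentile of chi-squared with 1 degree of freedom

def lrt_power(n_subjects, effect, sigma, n_sim=500, seed=0):
    """Monte Carlo power: fraction of simulated studies in which a likelihood
    ratio test detects a binary covariate effect at the 5% level."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        cov = rng.integers(0, 2, size=n_subjects)            # binary covariate
        y = 1.0 + effect * cov + rng.normal(0, sigma, n_subjects)
        X1 = np.column_stack([np.ones(n_subjects), cov])     # full model
        rss1 = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)
        rss0 = np.sum((y - y.mean()) ** 2)                   # null model, no covariate
        lrt = n_subjects * np.log(rss0 / rss1)               # LR statistic, 1 d.o.f.
        hits += lrt > CHI2_95_DF1
    return hits / n_sim

power_40 = lrt_power(n_subjects=40, effect=1.0, sigma=1.0)
```

    Sweeping `n_subjects` upward until the estimated power crosses 80% mirrors the sample-size selection step described in the abstract.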

  17. Monte Carlo Transport for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method combines the efficiency of a diffusion method in short mean-free-path regions with the accuracy of a transport method in long mean-free-path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratories - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  18. Neutrino oscillation parameter sampling with MonteCUBES

    NASA Astrophysics Data System (ADS)

    Blennow, Mattias; Fernandez-Martinez, Enrique

    2010-01-01

    We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software, so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: the first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space; the second is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling.
    Program summary
    Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator)
    Catalogue identifier: AEFJ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public Licence
    No. of lines in distributed program, including test data, etc.: 69 634
    No. of bytes in distributed program, including test data, etc.: 3 980 776
    Distribution format: tar.gz
    Programming language: C
    Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed
    Operating system: 32 bit and 64 bit Linux
    RAM: Typically a few MBs
    Classification: 11.1
    External routines: GLoBES [1,2] and routines/libraries used by GLoBES
    Subprograms used: Cat Id: ADZI_v1_0, Title: GLoBES, Reference: CPC 177 (2007) 439
    Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments.
In general, such new physics implies high-dimensional parameter spaces that are difficult to explore using classical methods, such as the multi-dimensional projections and minimizations used in GLoBES [1,2].
    Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space, with a complexity that does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software ensures that the experimental definitions already in use by the community can also be used with MonteCUBES, while lowering the learning threshold for users who already know GLoBES.
    Additional comments: A Matlab GUI for interpretation of results is included in the distribution.
    Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours.
    References:
    [1] P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333.
    [2] P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187.
    [3] S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].
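
    The core Markov Chain Monte Carlo step that such a sampler implements can be sketched generically. Here a random-walk Metropolis sampler explores a correlated two-dimensional Gaussian standing in for an oscillation-parameter posterior (this is illustrative Python, not MonteCUBES code):

```python
import numpy as np

COV_INV = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))

def log_posterior(theta):
    """Toy target: a correlated 2-D Gaussian standing in for a real posterior."""
    return -0.5 * theta @ COV_INV @ theta

def metropolis(log_p, start, n_steps, step_size, seed=0):
    """Random-walk Metropolis sampler with an isotropic Gaussian proposal."""
    rng = np.random.default_rng(seed)
    current = np.asarray(start, dtype=float)
    lp = log_p(current)
    chain = np.empty((n_steps, current.size))
    for i in range(n_steps):
        proposal = current + rng.normal(0.0, step_size, size=current.shape)
        lp_prop = log_p(proposal)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            current, lp = proposal, lp_prop
        chain[i] = current
    return chain

chain = metropolis(log_posterior, start=[3.0, -3.0], n_steps=20000, step_size=0.7)
```

    The cost of each step is one posterior evaluation, which is why the method's complexity does not grow exponentially with the number of parameters.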

  19. Reconstruction of Human Monte Carlo Geometry from Segmented Images

    NASA Astrophysics Data System (ADS)

    Zhao, Kai; Cheng, Mengyun; Fan, Yanchang; Wang, Wen; Long, Pengcheng; Wu, Yican

    2014-06-01

    Human computational phantoms have been used extensively for scientific analysis and simulation. This article presents a method for reconstructing human geometry from a series of segmented images of the Chinese Visible Human dataset. The resulting phantom geometry describes the detailed structure of each organ and can be converted into input files for Monte Carlo codes for dose calculation. A whole-body computational phantom of a Chinese adult female, named Rad-HUMAN and containing about 28.8 billion voxels, has been established by the FDS Team. For convenient processing, different organs in the images were segmented with different RGB colors, and each voxel was assigned its position in the dataset. Refinement proceeded in two steps. First, the positions were sampled. Second, although the large numbers of voxels inside an organ are three-dimensionally adjacent, no thorough merging method existed to reduce the number of cells needed to describe the organ; in this study, the voxels on the organ surface were taken into account during merging, which produces fewer cells per organ, and an index-based sorting algorithm was introduced to increase the merging speed. Finally, Rad-HUMAN, which includes a total of 46 organs and tissues, was described by cuboids in the Monte Carlo geometry for simulation. The Monte Carlo geometry was constructed directly from the segmented images, with the voxels merged exhaustively. Each organ geometry model was constructed without ambiguity or self-intersection, and its geometry represents the accurate appearance and precise interior structure of the organs. The constructed geometry, which largely retains the original shape of the organs, can easily be written to the input files of different Monte Carlo codes such as MCNP. Its generality and high performance were experimentally verified.
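
    The voxel-merging idea can be illustrated with a much-simplified sketch: collapsing runs of identical material IDs along one axis of a 2-D slice into cuboid-like cells. This is a hypothetical toy, not the authors' surface-aware, index-sorted algorithm:

```python
def merge_rows(grid):
    """Greedy run-length merge: collapse runs of identical material IDs along
    each row of a 2-D voxel slice into (row, col_start, col_end, material) cells."""
    cells = []
    for r, row in enumerate(grid):
        start = 0
        for c in range(1, len(row) + 1):
            # Close the current run at a material change or at the row's end.
            if c == len(row) or row[c] != row[start]:
                cells.append((r, start, c - 1, row[start]))
                start = c
    return cells

# Toy 2x5 slice with two materials (0 = background, 1 = organ).
slice_ = [
    [0, 0, 1, 1, 1],
    [0, 1, 1, 1, 0],
]
cells = merge_rows(slice_)   # 5 cells describe what took 10 voxels
```

    Extending the runs across rows and slices (and treating surface voxels specially, as the paper does) reduces the cell count further.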

  20. Attenuation correction factors for cylindrical, disc and box geometry

    NASA Astrophysics Data System (ADS)

    Agarwal, Chhavi; Poi, Sanhita; Mhatre, Amol; Goswami, A.; Gathibandhe, M.

    2009-08-01

    In the present study, attenuation correction factors have been experimentally determined for samples with cylindrical, disc and box geometries, and compared with attenuation correction factors calculated by the Hybrid Monte Carlo (HMC) method [C. Agarwal, S. Poi, A. Goswami, M. Gathibandhe, R.A. Agrawal, Nucl. Instr. and Meth. A 597 (2008) 198] and with the near-field and far-field formulations available in the literature. It has been observed that the near-field formulae, although said to be applicable at close sample-detector geometries, do not work at very close sample-detector configurations. The advantage of the HMC method is that it is found to be valid for all sample-detector geometries.

  1. Method of identifying clusters representing statistical dependencies in multivariate data

    NASA Technical Reports Server (NTRS)

    Borucki, W. J.; Card, D. H.; Lyle, G. C.

    1975-01-01

    The approach is first to cluster the data and then to compute spatial boundaries for the resulting clusters. The next step is to compute, from a set of Monte Carlo samples obtained by scrambling the data, estimates of the probabilities of obtaining at least as many points within the boundaries as were actually observed in the original data.
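
    A minimal sketch of this scrambling test, where a fixed rectangular boundary stands in for the computed cluster boundaries and the data are illustrative:

```python
import numpy as np

def in_box(x, y, box):
    """Boolean mask of points inside an axis-aligned rectangle."""
    x0, x1, y0, y1 = box
    return (x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)

def scramble_test(x, y, box, n_mc=2000, seed=0):
    """Monte Carlo significance of a cluster: scramble the pairing of x and y
    (destroying any dependence) and count how often at least as many points
    fall inside the cluster boundary as in the original data."""
    rng = np.random.default_rng(seed)
    observed = int(in_box(x, y, box).sum())
    hits = 0
    for _ in range(n_mc):
        hits += int(in_box(x, rng.permutation(y), box).sum()) >= observed
    return observed, (hits + 1) / (n_mc + 1)   # permutation-style p-value

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 200)
y = x + rng.normal(0, 0.3, 200)               # strongly dependent pair
obs, p = scramble_test(x, y, box=(0.5, 3.0, 0.5, 3.0))
```

    A small p-value indicates that the observed concentration of points inside the boundary is unlikely under independence.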

  2. Estimating a Noncompensatory IRT Model Using Metropolis within Gibbs Sampling

    ERIC Educational Resources Information Center

    Babcock, Ben

    2011-01-01

    Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm…

  3. An Overview of Importance Splitting for Rare Event Simulation

    ERIC Educational Resources Information Center

    Morio, Jerome; Pastel, Rudy; Le Gland, Francois

    2010-01-01

    Monte Carlo simulations are a classical tool to analyse physical systems. When unlikely events are to be simulated, the importance sampling technique is often used instead of Monte Carlo. Importance sampling has some drawbacks when the problem dimensionality is high or when the optimal importance sampling density is complex to obtain. In this…
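
    The basic importance sampling idea for rare events can be sketched for P(X > 4) with X ~ N(0,1): plain Monte Carlo almost never samples the event, while a proposal shifted to the rare region, reweighted by the likelihood ratio, estimates it accurately (a textbook illustration, not the splitting methods surveyed here):

```python
import numpy as np

def rare_event_estimates(threshold=4.0, n=100_000, seed=0):
    """Estimate p = P(X > threshold), X ~ N(0,1), by plain Monte Carlo and by
    importance sampling with the proposal shifted to N(threshold, 1)."""
    rng = np.random.default_rng(seed)
    # Plain Monte Carlo: almost no samples ever exceed the threshold.
    x = rng.normal(0.0, 1.0, n)
    plain = float(np.mean(x > threshold))
    # Importance sampling: weight by the likelihood ratio N(0,1)/N(threshold,1).
    z = rng.normal(threshold, 1.0, n)
    weights = np.exp(-0.5 * z**2 + 0.5 * (z - threshold)**2)
    importance = float(np.mean((z > threshold) * weights))
    return plain, importance

plain, importance = rare_event_estimates()   # true value is about 3.17e-5
```

    Importance splitting addresses exactly the cases flagged in the abstract, where a good importance density like this shifted proposal is hard to construct.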

  4. Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions

    DOE PAGES

    Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.

    2017-01-09

    We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl., to appear). This technique is designed to approximate expectations with respect to probability laws associated with a discretization, for instance in the context of inverse problems, where one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than that of independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state spaces. The assumptions are based upon multiplicative drift conditions, as in Kontoyiannis and Meyn (Electron. J. Probab. 10 (2005): 61-123). The assumptions are verified for an example.

  6. Comparison of Methods for Estimating Low Flow Characteristics of Streams

    USGS Publications Warehouse

    Tasker, Gary D.

    1987-01-01

    Four methods for estimating the 7-day, 10-year and 7-day, 20-year low flows of streams are compared by the bootstrap method. The bootstrap is a Monte Carlo technique in which random samples are drawn from an unspecified sampling distribution defined from the observed data. The nonparametric nature of the bootstrap makes it suitable for comparing methods based on a flow series whose true distribution is unknown. Results show that the two methods based on a hypothesized distribution (log-Pearson Type III and Weibull) had lower mean square errors than the Box-Cox transformation method or the log-Boughton method, which is based on a fit of plotting positions.
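
    A minimal sketch of the bootstrap resampling step, where a synthetic flow series and a simple empirical-quantile estimator stand in for the fitted-distribution methods compared in the paper:

```python
import numpy as np

def bootstrap_se(sample, estimator, n_boot=2000, seed=0):
    """Bootstrap standard error: resample the observed series with replacement
    and recompute the estimator on each resample."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    stats = [estimator(sample[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return float(np.std(stats, ddof=1))

# Hypothetical annual 7-day-minimum flow series (not from the paper).
rng = np.random.default_rng(2)
flows = rng.lognormal(mean=3.0, sigma=0.5, size=40)

# Crude 7Q10 analogue: the 10th percentile of the annual minima.
q10 = lambda s: float(np.quantile(s, 0.10))
se = bootstrap_se(flows, q10)
```

    Comparing such bootstrap error estimates across competing estimators is the mechanism the paper uses to rank the four methods.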

  7. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE PAGES

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    2017-10-26

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field, with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen-Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method, as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.

  9. A method of using cluster analysis to study statistical dependence in multivariate data

    NASA Technical Reports Server (NTRS)

    Borucki, W. J.; Card, D. H.; Lyle, G. C.

    1975-01-01

    A technique is presented that uses both cluster analysis and a Monte Carlo significance test of clusters to discover associations between variables in multidimensional data. The method is applied to an example of a noisy function in three-dimensional space, to a sample from a mixture of three bivariate normal distributions, and to the well-known Fisher's Iris data.

  10. Generating Nonnormal Multivariate Data Using Copulas: Applications to SEM.

    PubMed

    Mair, Patrick; Satorra, Albert; Bentler, Peter M

    2012-07-01

    This article develops a procedure based on copulas to simulate multivariate nonnormal data that satisfy a prespecified variance-covariance matrix. The covariance matrix used can comply with a specific moment structure form (e.g., a factor analysis or a general structural equation model). Thus, the method is particularly useful for Monte Carlo evaluation of structural equation models within the context of nonnormal data. The new procedure for nonnormal data simulation is theoretically described and also implemented in the widely used R environment. The quality of the method is assessed by Monte Carlo simulations. A 1-sample test on the observed covariance matrix based on the copula methodology is proposed. This new test for evaluating the quality of a simulation is defined through a particular structural model specification and is robust against normality violations.
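
    The Gaussian-copula construction behind such procedures can be sketched in a few lines: draw correlated normals, push them through the normal CDF to obtain correlated uniforms, then apply the inverse CDFs of the desired nonnormal marginals. The marginals and correlation below are illustrative, and this sketch does not reproduce the paper's moment-structure machinery:

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(n, corr, marginals, seed=0):
    """Draw nonnormal multivariate data with Gaussian-copula dependence."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)                        # corr = L @ L.T
    z = rng.standard_normal((n, corr.shape[0])) @ L.T   # correlated normals
    u = stats.norm.cdf(z)                               # correlated uniforms
    # Apply each target marginal's inverse CDF (quantile function).
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

corr = np.array([[1.0, 0.6], [0.6, 1.0]])
data = gaussian_copula_sample(5000, corr, [stats.expon(), stats.gamma(2.0)])
```

    The margins here are exponential and gamma (clearly nonnormal), while the dependence is inherited from the underlying normal correlation of 0.6.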

  11. Bayesian Treatment of Uncertainty in Environmental Modeling: Optimization, Sampling and Data Assimilation Using the DREAM Software Package

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2012-12-01

    In the past decade much progress has been made in the treatment of uncertainty in earth systems modeling. Whereas initial approaches focused mostly on the quantification of parameter and predictive uncertainty, recent methods attempt to disentangle the effects of parameter, forcing (input) data, model structural, and calibration data errors. In this talk I will highlight some of our recent work on the theory, concepts and applications of Bayesian parameter and/or state estimation. In particular, new methods for sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) simulation will be presented, with emphasis on massively parallel distributed computing and the quantification of model structural errors. The theoretical and numerical developments will be illustrated using model-data synthesis problems in hydrology, hydrogeology and geophysics.

  12. Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    NASA Technical Reports Server (NTRS)

    Gaebler, John A.; Tolson, Robert H.

    2010-01-01

    In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
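
    The variance-based (first-order Sobol) sensitivity index mentioned above can be estimated with a pick-freeze Monte Carlo scheme. This sketch uses a toy additive model rather than an entry, descent, and landing simulation:

```python
import numpy as np

def first_order_sobol(f, dim, n=50_000, seed=0):
    """First-order variance-based sensitivity indices via the pick-freeze
    (Sobol) Monte Carlo estimator on independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, dim)), rng.random((n, dim))
    yA = f(A)
    var = yA.var()
    S = []
    for i in range(dim):
        AB = B.copy()
        AB[:, i] = A[:, i]          # freeze input i, resample all the others
        S.append(np.cov(yA, f(AB))[0, 1] / var)
    return np.array(S)

# Additive toy model Y = 4*X1 + 2*X2 + X3; exact indices are 16:4:1 over 21.
f = lambda x: 4 * x[:, 0] + 2 * x[:, 1] + x[:, 2]
S = first_order_sobol(f, 3)
```

    The estimated indices identify which input uncertainty contributes most to the output variance, which is exactly the question the second method in the abstract addresses.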

  13. Application of genetic algorithms to focal mechanism determination

    NASA Astrophysics Data System (ADS)

    Kobayashi, Reiji; Nakanishi, Ichiro

    1994-04-01

    Genetic algorithms are a new class of methods for global optimization. They resemble Monte Carlo techniques, but search for solutions more efficiently than uniform Monte Carlo sampling. In the field of geophysics, genetic algorithms have recently been used to solve some non-linear inverse problems (e.g., earthquake location, waveform inversion, migration velocity estimation). We present an application of genetic algorithms to focal mechanism determination from first-motion polarities of P-waves and apply our method to two recent large events, the Kushiro-oki earthquake of January 15, 1993 and the SW Hokkaido (Japan Sea) earthquake of July 12, 1993. An initial solution and curvature information of the objective function, which gradient methods require, are not needed in our approach. Moreover, globally optimal solutions can be obtained efficiently. Calculation of polarities based on double-couple models is the most time-consuming part of the source mechanism determination. The amount of calculation required by the method designed in this study is much less than that of previous grid search methods.
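    The selection/crossover/mutation loop common to such methods can be sketched generically. This is not the authors' algorithm: the two-parameter quadratic misfit, the population size, and the mutation rate are all illustrative choices standing in for the polarity-misfit objective:

```python
import random

def genetic_search(fitness, n_params, pop_size=60, n_gen=80, seed=1):
    """Minimal real-coded genetic algorithm: truncation selection, one-point
    crossover, Gaussian mutation, with the parent pool carried over (elitism)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(n_gen):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_params) if n_params > 1 else 0
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.2:             # Gaussian mutation
                i = rng.randrange(n_params)
                child[i] += rng.gauss(0.0, 0.1)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Hypothetical smooth misfit with its optimum at (0.3, -0.5).
best = genetic_search(lambda x: -((x[0] - 0.3) ** 2 + (x[1] + 0.5) ** 2),
                      n_params=2)
```

    No gradient or starting model is needed, which is the property the abstract emphasizes.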

  14. Probabilistic Design of a Mars Sample Return Earth Entry Vehicle Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Dec, John A.; Mitcheltree, Robert A.

    2002-01-01

    The driving requirement for design of a Mars Sample Return mission is to assure containment of the returned samples. Designing to, and demonstrating compliance with, such a requirement requires physics-based tools that establish the relationship between the engineer's sizing margins and probabilities of failure. The traditional method of determining margins on ablative thermal protection systems, while conservative, provides little insight into the actual probability of an over-temperature during flight. The objective of this paper is to describe a new methodology for establishing margins when sizing the thermal protection system (TPS). Results of this Monte Carlo approach are compared with traditional methods.

  15. Determination of efficiency of an aged HPGe detector for gaseous sources by self absorption correction and point source methods

    NASA Astrophysics Data System (ADS)

    Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.

    2017-07-01

    Methods for the determination of the efficiency of an aged high purity germanium (HPGe) detector for gaseous sources are presented in this paper. X-ray radiography of the detector has been performed to obtain detector dimensions for computational purposes. The dead layer thickness of the HPGe detector has been ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries has been undertaken for obtaining energy-dependent efficiency. Monte Carlo simulations have been performed for computing efficiencies for point, liquid and gaseous sources. Self-absorption correction factors have been obtained using mathematical equations for volume sources and MCNP simulations. Self-absorption correction and point source methods have been used to estimate the efficiency for gaseous sources. The efficiencies determined from the present work have been used to estimate the activity of a cover gas sample of a fast reactor.

  16. Discrete Diffusion Monte Carlo for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory

    2014-10-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratories - Albuquerque.

  17. Adaptive Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Fasnacht, Marc

    We develop adaptive Monte Carlo methods for the calculation of the free energy as a function of a parameter of interest. The methods presented are particularly well-suited for systems with complex energy landscapes, where standard sampling techniques have difficulties. The Adaptive Histogram Method uses a biasing potential derived from histograms recorded during the simulation to achieve uniform sampling in the parameter of interest. The Adaptive Integration Method directly calculates an estimate of the free energy from the average derivative of the Hamiltonian with respect to the parameter of interest and uses it as a biasing potential. We compare both methods to a state-of-the-art method, and demonstrate that they compare favorably for the calculation of potentials of mean force of dense Lennard-Jones fluids. We use the Adaptive Integration Method to calculate accurate potentials of mean force for different types of simple particles in a Lennard-Jones fluid. Our approach allows us to separate the contributions of the solvent to the potential of mean force from the effect of the direct interaction between the particles. With the solvent contributions determined, we can find the potential of mean force directly for any other direct interaction without additional simulations. We also test the accuracy of the Adaptive Integration Method on a thermodynamic cycle, which allows us to perform a consistency check between potentials of mean force and chemical potentials calculated using the Adaptive Integration Method. The results demonstrate a high degree of consistency of the method.
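    The histogram-bias idea can be sketched in one dimension: each visit to a bin raises a bias there, so the walker is gradually pushed out of wells toward uniform coverage of the coordinate. The double-well energy, temperature, bin count, and bias increment below are all illustrative, not the thesis's systems:

```python
import math
import random

def adaptive_histogram_sample(energy, x_min, x_max, n_bins=40,
                              n_steps=200_000, beta=3.0, bump=0.01, seed=2):
    """Metropolis sampling with a self-adjusting histogram bias: every visit
    raises the bias in the current bin, flattening sampling over the coordinate
    (a flat-histogram sketch of the Adaptive Histogram idea)."""
    rng = random.Random(seed)
    width = (x_max - x_min) / n_bins
    bias = [0.0] * n_bins
    visits = [0] * n_bins

    def bin_of(y):
        return min(n_bins - 1, int((y - x_min) / width))

    x = 0.5 * (x_min + x_max)
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, 0.2)
        if x_min <= y <= x_max:
            d = (energy(y) + bias[bin_of(y)]) - (energy(x) + bias[bin_of(x)])
            if d <= 0 or rng.random() < math.exp(-beta * d):
                x = y
        b = bin_of(x)
        bias[b] += bump
        visits[b] += 1
    return visits

# Hypothetical double-well energy; unbiased sampling at beta = 3 would rarely
# cross the barrier at x = 0.
visits = adaptive_histogram_sample(lambda x: 4.0 * (x * x - 1.0) ** 2, -1.5, 1.5)
```

    After the bias has built up to roughly the barrier height, both wells are visited freely, which is the uniform-sampling behavior the abstract describes.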

  18. Approach for validating actinide and fission product compositions for burnup credit criticality safety analyses

    DOE PAGES

    Radulescu, Georgeta; Gauld, Ian C.; Ilas, Germina; ...

    2014-11-01

    This paper describes a depletion code validation approach for criticality safety analysis using burnup credit for actinide and fission product nuclides in spent nuclear fuel (SNF) compositions. The technical basis for determining the uncertainties in the calculated nuclide concentrations is comparison of calculations to available measurements obtained from destructive radiochemical assay of SNF samples. Probability distributions developed for the uncertainties in the calculated nuclide concentrations were applied to the SNF compositions of a criticality safety analysis model by the use of a Monte Carlo uncertainty sampling method to determine bias and bias uncertainty in the effective neutron multiplication factor. Application of the Monte Carlo uncertainty sampling approach is demonstrated for representative criticality safety analysis models of pressurized water reactor spent fuel pool storage racks and transportation packages using burnup-dependent nuclide concentrations calculated with SCALE 6.1 and the ENDF/B-VII nuclear data. Furthermore, the validation approach and results support a recent revision of the U.S. Nuclear Regulatory Commission Interim Staff Guidance 8.
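    The propagation step can be sketched with a deliberately simplified linear model: each nuclide concentration is perturbed by a relative error drawn from its distribution and mapped to k-eff through a first-order sensitivity coefficient. The sensitivities and uncertainties below are hypothetical, not values from the paper:

```python
import random
import statistics

def keff_bias_distribution(nominal_keff, sensitivities, uncertainties,
                           n_samples=20_000, seed=3):
    """Monte Carlo uncertainty sampling: perturb each nuclide concentration and
    propagate to k-eff through first-order sensitivity coefficients (a linear
    stand-in for the full criticality model)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        dk = sum(s * rng.gauss(0.0, u)
                 for s, u in zip(sensitivities, uncertainties))
        samples.append(nominal_keff + dk)
    return statistics.mean(samples), statistics.stdev(samples)

# Hypothetical inputs: three nuclides with k-eff sensitivities and 1-sigma
# relative concentration uncertainties.
mean_k, sigma_k = keff_bias_distribution(0.94, [0.30, -0.10, 0.05],
                                         [0.02, 0.05, 0.10])
```

    The spread of the sampled k-eff values plays the role of the bias uncertainty discussed in the abstract; the real analysis replaces the linear map with transport calculations.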

  19. COMPARISON OF MONTE CARLO METHODS FOR NONLINEAR RADIATION TRANSPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    W. R. MARTIN; F. B. BROWN

    2001-03-01

    Five Monte Carlo methods for solving the nonlinear thermal radiation transport equations are compared. The methods include the well-known Implicit Monte Carlo method (IMC) developed by Fleck and Cummings, an alternative to IMC developed by Carter and Forest, an ''exact'' method recently developed by Ahrens and Larsen, and two methods recently proposed by Martin and Brown. The five Monte Carlo methods are developed and applied to the radiation transport equation in a medium assuming local thermodynamic equilibrium. Conservation of energy is derived and used to define appropriate material energy update equations for each of the methods. Details of the Monte Carlo implementation are presented, both for the random walk simulation and the material energy update. Simulation results for all five methods are obtained for two infinite medium test problems and a 1-D test problem, all of which have analytical solutions. Conclusions regarding the relative merits of the various schemes are presented.

  20. The mean and variance of phylogenetic diversity under rarefaction

    PubMed Central

    Matsen, Frederick A.

    2013-01-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required. PMID:23833701

  1. The mean and variance of phylogenetic diversity under rarefaction.

    PubMed

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.
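    The exact-versus-Monte-Carlo comparison the abstract describes can be illustrated with species richness, for which the exact hypergeometric rarefaction formula is classical (the paper's PD formulae sum an analogous expression over tree branches). The abundance vector below is hypothetical, not the Toohey Forest data:

```python
import random
from math import comb

def expected_richness_exact(counts, m):
    """Exact mean species richness in a rarefied subsample of size m
    (classic hypergeometric rarefaction formula)."""
    n = sum(counts)
    return sum(1 - comb(n - c, m) / comb(n, m) for c in counts)

def expected_richness_mc(counts, m, n_draws=5_000, seed=4):
    """Monte Carlo rarefaction: repeatedly subsample m individuals without
    replacement and average the number of species observed."""
    rng = random.Random(seed)
    pool = [sp for sp, c in enumerate(counts) for _ in range(c)]
    total = 0
    for _ in range(n_draws):
        total += len(set(rng.sample(pool, m)))
    return total / n_draws

# Hypothetical abundance vector: five species with uneven counts.
counts = [50, 20, 10, 5, 1]
exact = expected_richness_exact(counts, m=25)
approx = expected_richness_mc(counts, m=25)
```

    As the abstract notes, the two agree closely, but the subsampling estimate carries Monte Carlo noise that the analytical formula avoids.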

  2. Calculation of absorbed fractions to human skeletal tissues due to alpha particles using the Monte Carlo and 3-D chord-based transport techniques.

    PubMed

    Hunt, J G; Watchman, C J; Bolch, W E

    2007-01-01

    Absorbed fraction (AF) calculations to the human skeletal tissues due to alpha particles are of interest to the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through the skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D microCT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was 'Visual Monte Carlo--VMC'. VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and the marrow cavities via alternate and uniform sampling of their cumulative density functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques.
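    The chord-based transport technique rests on drawing random chord lengths by inverting a cumulative distribution. A minimal inverse-CDF sketch follows; the tabulated chord-length distribution is hypothetical, not the paper's microCT-derived data:

```python
import bisect
import random

def sample_chords(lengths, cdf, n, seed=9):
    """Chord-length sampling by inversion of a tabulated cumulative
    distribution: draw u ~ U(0,1) and map it through the piecewise-linear
    inverse CDF."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = rng.random()
        i = bisect.bisect_left(cdf, u)
        if i == 0:
            out.append(lengths[0] * u / cdf[0])   # interpolate from length 0
        else:
            frac = (u - cdf[i - 1]) / (cdf[i] - cdf[i - 1])
            out.append(lengths[i - 1] + frac * (lengths[i] - lengths[i - 1]))
    return out

# Hypothetical marrow-cavity chord-length table (micrometres).
lengths = [50, 100, 200, 400, 800]
cdf = [0.10, 0.35, 0.70, 0.92, 1.00]
chords = sample_chords(lengths, cdf, n=20_000)
```

    Alternating draws from the trabecular and marrow-cavity distributions, as in the abstract, then builds up the particle tracks across the microstructure.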

  3. Specific and Non-Specific Protein Association in Solution: Computation of Solvent Effects and Prediction of First-Encounter Modes for Efficient Configurational Bias Monte Carlo Simulations

    PubMed Central

    Cardone, Antonio; Pant, Harish; Hassan, Sergio A.

    2013-01-01

    Weak and ultra-weak protein-protein association play a role in molecular recognition, and can drive spontaneous self-assembly and aggregation. Such interactions are difficult to detect experimentally and pose a challenge to both the force field and the sampling technique. A method is proposed to identify low-population protein-protein binding modes in aqueous solution. The method is designed to identify preferential first-encounter complexes from which the final complex(es) at equilibrium evolves. A continuum model is used to represent the effects of the solvent, which accounts for short- and long-range effects of water exclusion and for liquid-structure forces at protein/liquid interfaces. These effects control the behavior of proteins in close proximity and are optimized based on binding enthalpy data and simulations. An algorithm is described to construct a biasing function for self-adaptive configurational-bias Monte Carlo of a set of interacting proteins. The function allows mixing large and local changes in the spatial distribution of proteins, thereby enhancing sampling of relevant microstates. The method is applied to three binary systems. Generalization to multiprotein complexes is discussed. PMID:24044772

  4. Potential, velocity, and density fields from sparse and noisy redshift-distance samples - Method

    NASA Technical Reports Server (NTRS)

    Dekel, Avishai; Bertschinger, Edmund; Faber, Sandra M.

    1990-01-01

    A method for recovering the three-dimensional potential, velocity, and density fields from large-scale redshift-distance samples is described. Galaxies are taken as tracers of the velocity field, not of the mass. The density field and the initial conditions are calculated using an iterative procedure that applies the no-vorticity assumption at an initial time and uses the Zel'dovich approximation to relate initial and final positions of particles on a grid. The method is tested using a cosmological N-body simulation 'observed' at the positions of real galaxies in a redshift-distance sample, taking into account their distance measurement errors. Malmquist bias and other systematic and statistical errors are extensively explored using both analytical techniques and Monte Carlo simulations.

  5. Evaluation and characterization of anti-estrogenic and anti-androgenic activities in soil samples along the Second Songhua River, China.

    PubMed

    Li, Jian; Wang, Yafei; Kong, Dongdong; Wang, Jinsheng; Teng, Yanguo; Li, Na

    2015-11-01

    In the present study, recombinant estrogen receptor (ER) and androgen receptor (AR) gene yeast assays, combined with a novel approach based on Monte Carlo simulation, were used to evaluate and characterize soil samples collected in Jilin along the Second Songhua River and to assess their antagonist/agonist properties for ER and AR. The results showed that estrogenic activity occurred only in the soil samples collected in the agricultural area, but most soil samples showed anti-estrogenic activities, and the bioassay-derived 4-hydroxytamoxifen equivalents ranged from N.D. to 23.51 μg/g. Hydrophilic substance fractions were determined to be potential contributors to the anti-estrogenic activity in these soil samples. Moreover, none of the soil samples exhibited AR agonistic potency, whereas 54% of the soil samples exhibited AR antagonistic potency. The flutamide equivalents varied between N.D. and 178.05 μg/g. Based on Monte Carlo simulation-related mass balance analysis, the AR antagonistic activities were significantly correlated with the medium-polar and polar fractions. All of these results indicate that this novel calculation method can be adopted effectively to quantify and characterize the ER/AR agonists and antagonists in soil samples, and these data could provide useful information for future management and remediation efforts.

  6. Advanced Computational Methods for Monte Carlo Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  7. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  8. Modeling and simulation of the deposition/relaxation processes of polycrystalline diatomic structures of metallic nitride films

    NASA Astrophysics Data System (ADS)

    García, M. F.; Restrepo-Parra, E.; Riaño-Rojas, J. C.

    2015-05-01

    This work develops a model that mimics the growth of diatomic, polycrystalline thin films by artificially splitting the growth into deposition and relaxation processes in two stages: (1) a grain-based stochastic method (with grain orientations chosen randomly) is used, and the deposition is simulated by means of a non-standard version of the kinetic Monte Carlo method known as constant time stepping. The adsorption of adatoms is accepted or rejected depending on the neighborhood conditions; the desorption process is not included in the simulation. (2) The Metropolis Monte Carlo method is used to simulate the diffusion. The model was developed by accounting for parameters that determine the morphology of the film, such as the growth temperature, the interacting atomic species, the binding energy and the material crystal structure. The modeled samples exhibited an FCC structure with grains oriented in the ⟨111⟩, ⟨200⟩ and ⟨220⟩ plane families. The grain size and film roughness were analyzed: by construction, the grain size decreased, and the roughness increased, as the growth temperature increased. Although in the growth of real materials deposition and relaxation occur simultaneously, this method may nevertheless be valid for building realistic polycrystalline samples.

  9. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used that performs multiple MCMC runs with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
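    The heating-coefficient construction can be sketched on a toy conjugate problem where the exact marginal likelihood is known. The Gaussian model, prior, temperature ladder, and chain lengths below are all our illustrative choices, not the paper's groundwater models:

```python
import math
import random

def log_lik(theta, y):
    # Log-likelihood of a single Gaussian datum y ~ N(theta, 1).
    return -0.5 * math.log(2.0 * math.pi) - 0.5 * (y - theta) ** 2

def mean_loglik_at_beta(y, beta, n_steps=20_000, seed=5):
    """Metropolis sampling of the tempered posterior prior(theta)*L(theta)^beta
    for a N(0,1) prior, returning the chain average of the log-likelihood."""
    rng = random.Random(seed)

    def log_target(t):
        return -0.5 * t * t + beta * log_lik(t, y)

    theta, trace = 0.0, []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, 1.0)
        d = log_target(prop) - log_target(theta)
        if d >= 0 or rng.random() < math.exp(d):
            theta = prop
        trace.append(log_lik(theta, y))
    burn = n_steps // 5
    return sum(trace[burn:]) / len(trace[burn:])

def log_marginal_thermodynamic(y, n_temps=11):
    """Thermodynamic (path-sampling) estimate: ln Z is the integral of
    E_beta[ln L] over beta in [0, 1], here by the trapezoid rule."""
    betas = [i / (n_temps - 1) for i in range(n_temps)]
    means = [mean_loglik_at_beta(y, b, seed=5 + i) for i, b in enumerate(betas)]
    return sum(0.5 * (means[i] + means[i + 1]) * (betas[i + 1] - betas[i])
               for i in range(n_temps - 1))

# Conjugate check: for y ~ N(theta, 1), theta ~ N(0, 1), the exact evidence
# is N(y; 0, 2), so the thermodynamic estimate can be verified analytically.
y = 1.2
estimate = log_marginal_thermodynamic(y)
exact = -0.5 * math.log(2.0 * math.pi * 2.0) - y * y / 4.0
```

    The beta = 0 chain samples the prior and the beta = 1 chain is the conventional posterior run, matching the ladder described in the abstract.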

  10. Estimating annual bole biomass production using uncertainty analysis

    Treesearch

    Travis J. Woolley; Mark E. Harmon; Kari B. O' Connell

    2007-01-01

    Two common sampling methodologies coupled with a simple statistical model were evaluated to determine the accuracy and precision of annual bole biomass production (BBP) and inter-annual variability estimates using this type of approach. We performed an uncertainty analysis using Monte Carlo methods in conjunction with radial growth core data from trees in three Douglas...

  11. Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsoulakis, Markos

    2014-08-09

    Our two key accomplishments in the first three years were the development of (1) a mathematically rigorous and at the same time computationally flexible framework for the parallelization of kinetic Monte Carlo methods, and its implementation on GPUs, and (2) spatial multilevel coarse-graining methods for Monte Carlo sampling and molecular simulation. A common underlying theme in both these lines of our work is the development of numerical methods which are at the same time computationally efficient and reliable, the latter in the sense that they provide controlled-error approximations for coarse observables of the simulated molecular systems. Finally, our key accomplishment in the last year of the grant is that we started developing (3) pathwise information-theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, in particular of nonequilibrium extended (high-dimensional) systems. We discuss these three research directions in some detail below, along with the related publications.

  12. Auxiliary-field-based trial wave functions in quantum Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Chia -Chen; Rubenstein, Brenda M.; Morales, Miguel A.

    2016-12-19

    Quantum Monte Carlo (QMC) algorithms have long relied on Jastrow factors to incorporate dynamic correlation into trial wave functions. While Jastrow-type wave functions have been widely employed in real-space algorithms, they have seen limited use in second-quantized QMC methods, particularly in projection methods that involve a stochastic evolution of the wave function in imaginary time. Here we propose a scheme for generating Jastrow-type correlated trial wave functions for auxiliary-field QMC methods. The method is based on decoupling the two-body Jastrow into one-body projectors coupled to auxiliary fields, which then operate on a single determinant to produce a multideterminant trial wave function. We demonstrate that intelligent sampling of the most significant determinants in this expansion can produce compact trial wave functions that reduce errors in the calculated energies. Lastly, our technique may be readily generalized to accommodate a wide range of two-body Jastrow factors and applied to a variety of model and chemical systems.

  13. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.
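    The variance-reduction mechanism can be illustrated with a static analogue of the Girsanov change of measure: sample from a density shifted toward the failure region and correct with the likelihood ratio, the mean shift playing the role of the control force. The standard-normal limit state below is a hypothetical toy, not the paper's oscillators:

```python
import math
import random

def importance_sampling_pf(limit, shift, n=50_000, seed=6):
    """Estimate P(X > limit) for X ~ N(0,1) by sampling from the shifted
    density N(shift, 1) and reweighting with the likelihood ratio."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > limit:
            # Likelihood ratio phi(x) / phi(x - shift).
            total += math.exp(-0.5 * x * x + 0.5 * (x - shift) ** 2)
    return total / n

# Centering the sampler at the "design point" (the FORM idea the abstract
# refers to) puts roughly half the samples in the rare failure region.
p_est = importance_sampling_pf(limit=4.0, shift=4.0)
```

    Direct Monte Carlo would need on the order of 10^7 samples to see this event reliably; the shifted sampler resolves it with a small fraction of that, which is the variance bound the highlights refer to.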

  14. Effective optimization using sample persistence: A case study on quantum annealers and various Monte Carlo optimization methods

    NASA Astrophysics Data System (ADS)

    Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.

    2017-10-01

    We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer's scaling was substantially improved for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
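    The fix-and-resolve loop can be sketched on a tiny Ising instance, with greedy descent standing in for the annealers and Monte Carlo samplers of the abstract. The instance, population sizes, and single-round fixing are all illustrative simplifications:

```python
import random

def energy(h, J, s):
    # Ising energy with fields h and upper-triangular couplings J[i][j], i < j.
    n = len(s)
    return (sum(h[i] * s[i] for i in range(n))
            + sum(J[i][j] * s[i] * s[j]
                  for i in range(n) for j in range(i + 1, n)))

def descend(h, J, rng, fixed=None):
    """Greedy single-spin-flip descent from a random start; `fixed` pins spins."""
    n = len(h)
    s = [fixed[i] if fixed and i in fixed else rng.choice([-1, 1])
         for i in range(n)]
    improved = True
    while improved:
        improved = False
        for i in range(n):
            if fixed and i in fixed:
                continue
            local = h[i] + sum(J[min(i, j)][max(i, j)] * s[j]
                               for j in range(n) if j != i)
            if s[i] * local > 0:       # flipping spin i strictly lowers energy
                s[i] = -s[i]
                improved = True
    return s

def persistence_solve(h, J, n_starts=30, keep=10, seed=7):
    """Sample-persistence heuristic: run a multistart sampler, fix the
    variables that agree across the best samples, then re-solve the smaller
    residual problem."""
    rng = random.Random(seed)
    samples = sorted((descend(h, J, rng) for _ in range(n_starts)),
                     key=lambda s: energy(h, J, s))[:keep]
    fixed = {i: samples[0][i] for i in range(len(h))
             if all(s[i] == samples[0][i] for s in samples)}
    return descend(h, J, rng, fixed=fixed)

# Hypothetical toy instance: a ferromagnetic chain whose small field favours
# the all-up state.
n = 6
h = [-0.1] * n
J = [[-1.0 if j == i + 1 else 0.0 for j in range(n)] for i in range(n)]
best = persistence_solve(h, J)
```

    Each start is independent, so the multistart phase parallelizes trivially, which is the property the abstract emphasizes.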

  15. Discrete Fractional Component Monte Carlo Simulation Study of Dilute Nonionic Surfactants at the Air-Water Interface.

    PubMed

    Yoo, Brian; Marin-Rimoldi, Eliseo; Mullen, Ryan Gotchy; Jusufi, Arben; Maginn, Edward J

    2017-09-26

    We present a newly developed Monte Carlo scheme to predict bulk surfactant concentrations and surface tensions at the air-water interface for various surfactant interfacial coverages. Since the concentration regimes of these systems of interest are typically very dilute (≪10⁻⁵ mol. frac.), Monte Carlo simulations with the use of insertion/deletion moves can provide the ability to overcome finite system size limitations that often prohibit the use of modern molecular simulation techniques. In performing these simulations, we use the discrete fractional component Monte Carlo (DFCMC) method in the Gibbs ensemble framework, which allows us to separate the bulk and air-water interface into two separate boxes and efficiently swap tetraethylene glycol surfactants (C10E4) between boxes. Combining this move with preferential translations, volume biased insertions, and Wang-Landau biasing vastly enhances sampling and helps overcome the classical "insertion problem", often encountered in non-lattice Monte Carlo simulations. We demonstrate that this methodology is both consistent with the original molecular thermodynamic theory (MTT) of Blankschtein and co-workers, as well as their recently modified theory (MD/MTT), which incorporates the results of surfactant infinite dilution transfer free energies and surface tension calculations obtained from molecular dynamics simulations.

  16. Free energy and phase equilibria for the restricted primitive model of ionic fluids from Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Orkoulas, Gerassimos; Panagiotopoulos, Athanassios Z.

    1994-07-01

    In this work, we investigate the liquid-vapor phase transition of the restricted primitive model of ionic fluids. We show that at the low temperatures where the phase transition occurs, the system cannot be studied by conventional molecular simulation methods because convergence to equilibrium is slow. To accelerate convergence, we propose cluster Monte Carlo moves capable of moving more than one particle at a time. We then address the issue of charged particle transfers in grand canonical and Gibbs ensemble Monte Carlo simulations, for which we propose a biased particle insertion/destruction scheme capable of sampling short interparticle distances. We compute the chemical potential for the restricted primitive model as a function of temperature and density from grand canonical Monte Carlo simulations and the phase envelope from Gibbs Monte Carlo simulations. Our calculated phase coexistence curve is in agreement with recent results of Caillol obtained on the four-dimensional hypersphere and our own earlier Gibbs ensemble simulations with single-ion transfers, with the exception of the critical temperature, which is lower in the current calculations. Our best estimates for the critical parameters are T*_c = 0.053 and ρ*_c = 0.025. We conclude with possible future applications of the biased techniques developed here for phase equilibrium calculations for ionic fluids.

  17. Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis

    NASA Technical Reports Server (NTRS)

    Hanson, J. M.; Beard, B. B.

    2010-01-01

    This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
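As a rough illustration of the "how many runs are necessary" question, the standard zero-failure binomial argument can be sketched as follows. This is a generic rule of thumb, not necessarily the TP's exact derivation, and the function name is ours:

```python
import math

def runs_required(reliability, confidence):
    """Minimum number of Monte Carlo runs such that observing ZERO failures
    verifies the stated `reliability` at the given `confidence`
    (confidence = 1 - consumer risk). Solves reliability**n <= 1 - confidence."""
    alpha = 1.0 - confidence
    return math.ceil(math.log(alpha) / math.log(reliability))

# e.g. a 99.7% success requirement verified with 10% consumer risk
n = runs_required(0.997, 0.90)
```

If any failures are allowed, or if probabilistic output data with consumer risk are needed, the count grows and the full binomial tail must be used, as the TP's appendices discuss.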

  18. Risk assessment predictions of open dumping area after closure using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Pauzi, Nur Irfah Mohd; Radhi, Mohd Shahril Mat; Omar, Husaini

    2017-10-01

    Currently, there are many abandoned open dumping areas that were left without any proper mitigation measures. These open dumping areas could pose serious hazards to humans and pollute the environment. The objective of this paper is to determine the risk assessment of an open dumping area after it has been closed, using the Monte Carlo simulation method. The risk assessment exercise is conducted at the Kuala Lumpur dumping area. The rapid urbanisation of Kuala Lumpur, coupled with an increase in population, has led to an increase in waste generation and hence to more dumping/landfill areas in Kuala Lumpur. The first stage of this study involves the assessment of the dumping area and sample collection, followed by measurement of the settlement of the dumping area using an oedometer. The risk of settlement is then predicted using the Monte Carlo simulation method, which calculates the risk and the long-term settlement. The model simulation results show that the risk level of the Kuala Lumpur open dumping area ranges between Level III and Level IV, i.e. between medium risk and high risk, with settlements (ΔH) between 3 and 7 meters. Since the risk is between medium and high, mitigation measures are required, such as replacing the top waste soil with new sandy gravel soil. This will increase the strength of the soil and reduce the settlement.

  19. Computer program uses Monte Carlo techniques for statistical system performance analysis

    NASA Technical Reports Server (NTRS)

    Wohl, D. P.

    1967-01-01

    Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.

  20. The Monte Carlo Method. Popular Lectures in Mathematics.

    ERIC Educational Resources Information Center

    Sobol', I. M.

    The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…

  1. LCG MCDB—a knowledgebase of Monte-Carlo simulated events

    NASA Astrophysics Data System (ADS)

    Belov, S.; Dudko, L.; Galkin, E.; Gusev, A.; Pokorski, W.; Sherstnev, A.

    2008-02-01

    In this paper we report on the LCG Monte-Carlo Data Base (MCDB) and the software which has been developed to operate MCDB. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC Collaborations by experts. In many cases, the modern Monte-Carlo simulation of physical processes requires expert knowledge of Monte-Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase mainly dedicated to accumulating simulated events of this type. The main motivation behind LCG MCDB is to make the sophisticated MC event samples available to various physics groups. All the data in MCDB are accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project. Program summary: Program title: LCG Monte-Carlo Data Base Catalogue identifier: ADZX_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence No. of lines in distributed program, including test data, etc.: 30 129 No. 
of bytes in distributed program, including test data, etc.: 216 943 Distribution format: tar.gz Programming language: Perl Computer: CPU: Intel Pentium 4, RAM: 1 Gb, HDD: 100 Gb Operating system: Scientific Linux CERN 3/4 RAM: 1 073 741 824 bytes (1 Gb) Classification: 9 External routines: perl >= 5.8.5; Perl modules DBD-mysql >= 2.9004, File::Basename, GD::SecurityImage, GD::SecurityImage::AC, Linux::Statistics, XML::LibXML > 1.6, XML::SAX, XML::NamespaceSupport; Apache HTTP Server >= 2.0.59; mod auth external >= 2.2.9; edg-utils-system RPM package; gd >= 2.0.28; rpm package CASTOR-client >= 2.1.2-4; arc-server (optional) Nature of problem: Often, different groups of experimentalists prepare similar samples of particle collision events or turn to the same group of authors of Monte-Carlo (MC) generators to prepare the events. For example, the same MC samples of Standard Model (SM) processes can be employed for investigations either in SM analyses (as a signal) or in searches for new phenomena in Beyond Standard Model analyses (as a background). If the samples are made available publicly and equipped with comprehensive documentation, this can speed up cross checks of the samples themselves and of the physical models applied. Some event samples require a lot of computing resources to prepare, so central storage of the samples prevents the waste of researcher time and computing resources that would otherwise be spent preparing the same events many times. Solution method: Creation of a special knowledgebase (MCDB) designed to keep event samples for the LHC experimental and phenomenological community. The knowledgebase is realized as a separate web server (http://mcdb.cern.ch). All event samples are kept on tape at CERN. Documentation describing the events is the main content of MCDB. Users can browse the knowledgebase, read and comment on articles (documentation), and download event samples. 
Authors can upload new event samples, create new articles, and edit their own articles. Restrictions: The software is tailored to the problems described in the article; there are no additional restrictions. Unusual features: The software provides a framework to store and document large files with a flexible authentication and authorization system. Different external storages with large capacity can be used to keep the files. The Web Content Management System provides all of the necessary interfaces for the authors of the files, end-users and administrators. Running time: Real time operations. References: [1] The main LCG MCDB server, http://mcdb.cern.ch/. [2] P. Bartalini, L. Dudko, A. Kryukov, I.V. Selyuzhenkov, A. Sherstnev, A. Vologdin, LCG Monte-Carlo data base, hep-ph/0404241. [3] J.P. Baud, B. Couturier, C. Curran, J.D. Durand, E. Knezo, S. Occhetti, O. Barring, CASTOR: status and evolution, cs.oh/0305047.

  2. Replica exchange and expanded ensemble simulations as Gibbs sampling: simple improvements for enhanced mixing.

    PubMed

    Chodera, John D; Shirts, Michael R

    2011-11-21

    The widespread popularity of replica exchange and expanded ensemble algorithms for simulating complex molecular systems in chemistry and biophysics has generated much interest in discovering new ways to enhance the phase space mixing of these protocols in order to improve sampling of uncorrelated configurations. Here, we demonstrate how both of these classes of algorithms can be considered as special cases of Gibbs sampling within a Markov chain Monte Carlo framework. Gibbs sampling is a well-studied scheme in the field of statistical inference in which different random variables are alternately updated from conditional distributions. While the update of the conformational degrees of freedom by Metropolis Monte Carlo or molecular dynamics unavoidably generates correlated samples, we show how judicious updating of the thermodynamic state indices--corresponding to thermodynamic parameters such as temperature or alchemical coupling variables--can substantially increase mixing while still sampling from the desired distributions. We show how state update methods in common use can lead to suboptimal mixing, and present some simple, inexpensive alternatives that can increase mixing of the overall Markov chain, reducing simulation times necessary to obtain estimates of the desired precision. These improved schemes are demonstrated for several common applications, including an alchemical expanded ensemble simulation, parallel tempering, and multidimensional replica exchange umbrella sampling.
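The state-index update described above can be sketched on a toy expanded-ensemble problem. Assuming a harmonic potential and a hypothetical three-temperature ladder (both illustrative, not the paper's test systems), the thermodynamic state index is Gibbs-sampled from its full conditional instead of being proposed only between neighbours:

```python
import numpy as np

rng = np.random.default_rng(0)
betas = np.array([1.0, 0.5, 0.25])   # hypothetical inverse-temperature ladder
weights = np.zeros(3)                # expanded-ensemble log-weights g_k (flat here)

def energy(x):
    return 0.5 * x * x               # toy harmonic potential, not a molecular system

x, k = 0.0, 0
samples = []
for _ in range(20000):
    # (1) Metropolis update of the configuration at the current state k
    xp = x + rng.normal(0.0, 1.0)
    if np.log(rng.random()) < -betas[k] * (energy(xp) - energy(x)):
        x = xp
    # (2) Gibbs update of the state index: draw k directly from its full
    #     conditional p(k | x) ~ exp(-beta_k * U(x) + g_k)
    logp = -betas * energy(x) + weights
    p = np.exp(logp - logp.max())
    p /= p.sum()
    k = rng.choice(len(betas), p=p)
    samples.append((k, x))
```

Because step (2) samples the exact conditional, the state index can jump across the whole ladder in one move, which is the kind of enhanced mixing the paper analyses.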

  3. A 3-D Coupled CFD-DSMC Solution Method With Application to the Mars Sample Return Orbiter

    NASA Technical Reports Server (NTRS)

    Glass, Christopher E.; Gnoffo, Peter A.

    2000-01-01

    A method to obtain coupled Computational Fluid Dynamics-Direct Simulation Monte Carlo (CFD-DSMC), 3-D flow field solutions for highly blunt bodies at low incidence is presented and applied to one concept of the Mars Sample Return Orbiter vehicle as a demonstration of the technique. CFD is used to solve the high-density blunt forebody flow defining an inflow boundary condition for a DSMC solution of the afterbody wake flow. By combining the two techniques in flow regions where most applicable, the entire mixed flow field is modeled in an appropriate manner.

  4. Multilevel Monte Carlo simulation of Coulomb collisions

    DOE PAGES

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; ...

    2014-05-29

    We present a highly efficient multilevel Monte Carlo numerical method, new to plasma physics, for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^-2) or O(ε^-2 (ln ε)^2), depending on whether the underlying discretization is Milstein or Euler–Maruyama, respectively. This is to be contrasted with a cost of O(ε^-3) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^-5. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate-limiting step, and its limitations.
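A minimal two-level version of the telescoping estimator behind multilevel Monte Carlo can be sketched on a toy scalar SDE with Euler-Maruyama timestepping (not the Landau-Fokker-Planck problem itself; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, X0, T = 0.05, 0.2, 1.0, 1.0    # toy SDE dX = a*X dt + b*X dW

def euler_path(n_steps, dW):
    """Euler-Maruyama solution of the toy SDE for given Brownian increments."""
    dt = T / n_steps
    X = X0
    for dWi in dW:
        X += a * X * dt + b * X * dWi
    return X

def correction_level(n_fine, n_samples):
    """E[P_fine - P_coarse], with the coarse path driven by the SAME Brownian
    increments (pairwise sums), so the difference has small variance."""
    total = 0.0
    for _ in range(n_samples):
        dWf = rng.normal(0.0, np.sqrt(T / n_fine), n_fine)
        dWc = dWf.reshape(-1, 2).sum(axis=1)
        total += euler_path(n_fine, dWf) - euler_path(n_fine // 2, dWc)
    return total / n_samples

# telescoping estimator: many cheap coarse paths + few coupled corrections
coarse = np.mean([euler_path(4, rng.normal(0.0, np.sqrt(T / 4), 4))
                  for _ in range(20000)])
estimate = coarse + correction_level(8, 2000)
# analytic mean for comparison: E[X_T] = X0 * exp(a * T)
```

The saving comes from putting most samples on the cheap coarse level while the coupled correction terms, whose variance shrinks with the timestep, need far fewer samples.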

  5. Better Than Counting: Density Profiles from Force Sampling

    NASA Astrophysics Data System (ADS)

    de las Heras, Daniel; Schmidt, Matthias

    2018-05-01

    Calculating one-body density profiles in equilibrium via particle-based simulation methods involves counting of events of particle occurrences at (histogram-resolved) space points. Here, we investigate an alternative method based on a histogram of the local force density. Via an exact sum rule, the density profile is obtained with a simple spatial integration. The method circumvents the inherent ideal gas fluctuations. We have tested the method in Monte Carlo, Brownian dynamics, and molecular dynamics simulations. The results carry a statistical uncertainty smaller than that of the standard counting method, reducing therefore the computation time.

  6. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Crevillén-García, D.; Power, H.

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.

  7. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media.

    PubMed

    Crevillén-García, D; Power, H

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.

  8. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    PubMed Central

    Power, H.

    2017-01-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen–Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error. PMID:28878974
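The core idea shared by the quasi-Monte Carlo variants above, replacing the pseudo-random input stream with a low-discrepancy one, can be sketched with SciPy's qmc module on a hypothetical smooth integrand (not the porous-media model itself):

```python
import numpy as np
from scipy.stats import qmc

d = 4
f = lambda x: np.prod(x, axis=1)    # smooth test integrand on [0,1]^4
exact = 0.5 ** d                    # its exact integral

# plain Monte Carlo with pseudo-random points
rng = np.random.default_rng(0)
mc_est = f(rng.random((4096, d))).mean()

# randomized quasi-Monte Carlo with scrambled Sobol' points
sobol = qmc.Sobol(d=d, scramble=True, seed=0)
qmc_est = f(sobol.random_base2(m=12)).mean()   # 2**12 = 4096 points
```

Both estimators use the same number of function evaluations; for smooth integrands the Sobol' points typically give a much smaller error than the N^{-1/2} Monte Carlo rate.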

  9. Determination of the neutron activation profile of core drill samples by gamma-ray spectrometry.

    PubMed

    Gurau, D; Boden, S; Sima, O; Stanga, D

    2018-04-01

    This paper provides guidance for determining the neutron activation profile of core drill samples taken from the biological shield of nuclear reactors using gamma spectrometry measurements. Thus, it provides guidance for selecting a model of the right form to fit data and using least squares methods for model fitting. The activity profiles of two core samples taken from the biological shield of a nuclear reactor were determined. The effective activation depth and the total activity of core samples along with their uncertainties were computed by Monte Carlo simulation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    NASA Astrophysics Data System (ADS)

    De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles

    2017-09-01

    As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) by Bayesian statistical inference, comparing theory to experiment. The formal rule in this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can thus be seen as an estimation of the posterior probability density of a set of parameters x, given prior information on these parameters and a likelihood which gives the probability density of observing a data set given x. To solve this problem, two major paths can be taken: add approximations and hypotheses to obtain an equation that can be solved numerically (minimization of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid the approximations existing in traditional adjustment procedures based on chi-square minimization and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal through the resonance to the continuum region, for all nuclear reaction models at these energies. Algorithms based on Monte-Carlo sampling and Markov chains are presented. 
The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions, and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluations, as well as multigroup cross section data assimilation, are presented.
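A minimal sketch of the Monte-Carlo path, a random-walk Metropolis sampler of pdf(posterior) ∝ pdf(prior) × likelihood, is shown below on a toy Gaussian model where the conjugate answer is known. All parameters are illustrative, not nuclear-data values:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(1.0, 0.5, size=25)   # synthetic "measurements" (illustrative)
sigma, mu0, tau0 = 0.5, 0.0, 1.0       # known noise sd; Gaussian prior N(mu0, tau0^2)

def log_post(mu):
    """log pdf(posterior) = log pdf(prior) + log likelihood (up to a constant)."""
    return (-0.5 * ((mu - mu0) / tau0) ** 2
            - 0.5 * np.sum(((data - mu) / sigma) ** 2))

# random-walk Metropolis: a Markov-chain sampler of the posterior
mu, chain = 0.0, []
for _ in range(20000):
    prop = mu + rng.normal(0.0, 0.2)
    if np.log(rng.random()) < log_post(prop) - log_post(mu):
        mu = prop
    chain.append(mu)
post = np.array(chain[2000:])          # discard burn-in

# conjugate analytic posterior, available only because the toy model is Gaussian
prec = 1.0 / tau0 ** 2 + len(data) / sigma ** 2
mu_exact = (mu0 / tau0 ** 2 + data.sum() / sigma ** 2) / prec
```

In realistic evaluations no closed form exists, which is exactly why the sampled chain, rather than a GLS approximation, is used to characterise the posterior.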

  11. Recent advances and future prospects for Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B

    2010-01-01

    The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.

  12. Nonlinear Spatial Inversion Without Monte Carlo Sampling

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Nawaz, A.

    2017-12-01

    High-dimensional, nonlinear inverse or inference problems usually have non-unique solutions. The distribution of solutions is described by probability distributions, and these are usually found using Monte Carlo (MC) sampling methods. These take pseudo-random samples of models in parameter space, calculate the probability of each sample given available data and other information, and thus map out high or low probability values of model parameters. However, such methods converge to the solution only as the number of samples tends to infinity; in practice, MC is found to be slow to converge, convergence is not guaranteed to be achieved in finite time, and detection of convergence requires the use of subjective criteria. We propose a method for Bayesian inversion of categorical variables such as geological facies or rock types in spatial problems, which requires no sampling at all. The method uses a 2-D Hidden Markov Model over a grid of cells, where observations represent localized data constraining the model in each cell. The data in our example application are seismic properties such as P- and S-wave impedances or rock density; our model parameters are the hidden states and represent the geological rock types in each cell. The observations at each location are assumed to depend on the facies at that location only, an assumption referred to as 'localized likelihoods'. However, the facies at a location cannot be determined solely by the observation at that location, as it also depends on prior information concerning its correlation with the spatial distribution of facies elsewhere. Such prior information is included in the inversion in the form of a training image which represents a conceptual depiction of the distribution of local geologies that might be expected, but other forms of prior information can be used in the method as desired. 
The method provides direct (pseudo-analytic) estimates of posterior marginal probability distributions over each variable, so these do not need to be estimated from samples as is required in MC methods. On a 2-D test example the method is shown to outperform previous methods significantly, and at a fraction of the computational cost. In many foreseeable applications there are therefore no serious impediments to extending the method to 3-D spatial models.
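The flavour of sampling-free marginal estimation can be illustrated with the classic forward-backward recursion on a 1-D analogue of the facies problem. The paper's model is 2-D and considerably more involved; the two-facies observation model and transition prior below are assumptions for illustration:

```python
import numpy as np

# 1-D analogue: two facies along a line of cells, a sticky Markov prior,
# and localized Gaussian likelihoods on one observed property
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])                     # assumed facies transition prior
pi = np.array([0.5, 0.5])                      # prior for the first cell
means, sd = np.array([0.0, 2.0]), 0.7          # assumed per-facies observation model

def posterior_marginals(obs):
    """Exact posterior marginals p(facies_t | all obs) via forward-backward."""
    L = np.exp(-0.5 * ((obs[:, None] - means) / sd) ** 2)   # likelihood per cell
    n = len(obs)
    alpha = np.zeros((n, 2))
    beta = np.ones((n, 2))
    alpha[0] = pi * L[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, n):                       # forward pass
        alpha[t] = (alpha[t - 1] @ A) * L[t]
        alpha[t] /= alpha[t].sum()
    for t in range(n - 2, -1, -1):              # backward pass
        beta[t] = A @ (L[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

marg = posterior_marginals(np.array([0.1, -0.2, 1.9, 2.2, 0.0]))
```

The marginals come out of two linear passes over the chain, with no Monte Carlo samples and hence no convergence diagnostics.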

  13. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    ERIC Educational Resources Information Center

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…

  14. A sampling algorithm for segregation analysis

    PubMed Central

    Tier, Bruce; Henshall, John

    2001-01-01

    Methods for detecting Quantitative Trait Loci (QTL) without markers have generally used iterative peeling algorithms for determining genotype probabilities. These algorithms have considerable shortcomings in complex pedigrees. A Markov chain Monte Carlo (MCMC) method which samples the pedigree of the whole population jointly is described. Simultaneous sampling of the pedigree was achieved by sampling descent graphs using the Metropolis-Hastings algorithm. A descent graph describes the inheritance state of each allele and provides pedigrees guaranteed to be consistent with Mendelian sampling. Sampling descent graphs overcomes most, if not all, of the limitations incurred by iterative peeling algorithms. The algorithm was able to find the QTL in most of the simulated populations. However, when the QTL was not modeled or not found, its effect was ascribed to the polygenic component. No QTL were detected when they were not simulated. PMID:11742631

  15. A Monte Carlo Study of Levene's Test of Homogeneity of Variance: Empirical Frequencies of Type I Error in Normal Distributions.

    ERIC Educational Resources Information Center

    Neel, John H.; Stallings, William M.

    An influential statistics text recommends a Levene test for homogeneity of variance. A recent note suggests that Levene's test is upwardly biased for small samples. Another report shows inflated alpha estimates and low power. Neither study utilized more than two sample sizes. This Monte Carlo study involved sampling from a normal population for…
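Such a Monte Carlo study of empirical Type I error can be sketched in a few lines with scipy.stats.levene. The sample size, replication count, and nominal alpha here are illustrative, not those of the original study:

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(3)
reps, n, alpha = 2000, 10, 0.05
rejections = 0
for _ in range(reps):
    # two samples drawn from the SAME normal population,
    # so any rejection of equal variances is a Type I error
    g1, g2 = rng.normal(0.0, 1.0, n), rng.normal(0.0, 1.0, n)
    if levene(g1, g2, center='mean').pvalue < alpha:   # Levene's original statistic
        rejections += 1
rate = rejections / reps   # empirical Type I error rate, nominally 0.05
```

Repeating the loop over several sample sizes and distribution shapes is exactly the kind of design the abstract says the earlier studies lacked.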

  16. Bold Diagrammatic Monte Carlo Method Applied to Fermionized Frustrated Spins

    NASA Astrophysics Data System (ADS)

    Kulagin, S. A.; Prokof'ev, N.; Starykh, O. A.; Svistunov, B.; Varney, C. N.

    2013-02-01

    We demonstrate, by considering the triangular lattice spin-1/2 Heisenberg model, that Monte Carlo sampling of skeleton Feynman diagrams within the fermionization framework offers a universal first-principles tool for strongly correlated lattice quantum systems. We observe the fermionic sign blessing—cancellation of higher order diagrams leading to a finite convergence radius of the series. We calculate the magnetic susceptibility of the triangular-lattice quantum antiferromagnet in the correlated paramagnet regime and reveal a surprisingly accurate microscopic correspondence with its classical counterpart at all accessible temperatures. The extrapolation of the observed relation to zero temperature suggests the absence of the magnetic order in the ground state. We critically examine the implications of this unusual scenario.

  17. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  18. Bayesian statistics and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived within which the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable number of derivatives from being computed, and errors of linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
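The error-propagation application can be sketched as follows, comparing the Monte Carlo covariance of a nonlinearly transformed vector with first-order linearized propagation. The polar-to-Cartesian transform and the uncertainties are illustrative assumptions, not the book's example:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000

# measured quantities with assumed (illustrative) uncertainties
r = rng.normal(10.0, 0.1, N)        # radius
theta = rng.normal(0.5, 0.01, N)    # angle in radians

# nonlinear transform of the random vector (polar -> Cartesian)
y = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1)

mc_mean = y.mean(axis=0)            # Monte Carlo expectation
mc_cov = np.cov(y, rowvar=False)    # Monte Carlo covariance matrix

# first-order (linearized) propagation, which needs the Jacobian explicitly
J = np.array([[np.cos(0.5), -10.0 * np.sin(0.5)],
              [np.sin(0.5),  10.0 * np.cos(0.5)]])
Sx = np.diag([0.1 ** 2, 0.01 ** 2])
lin_cov = J @ Sx @ J.T
```

The Monte Carlo route needs no derivatives at all; for a mildly nonlinear transform like this one the two covariance estimates agree closely, while for stronger nonlinearity only the Monte Carlo estimate remains valid.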

  19. An Investigation of the Raudenbush (1988) Test for Studying Variance Heterogeneity.

    ERIC Educational Resources Information Center

    Harwell, Michael

    1997-01-01

    The meta-analytic method proposed by S. W. Raudenbush (1988) for studying variance heterogeneity was studied. Results of a Monte Carlo study indicate that the Type I error rate of the test is sensitive to even modestly platykurtic score distributions and to the ratio of study sample size to the number of studies. (SLD)

  20. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  1. Verification of unfold error estimates in the UFO code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fehl, D.L.; Biggs, F.

    Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums.

  2. Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo

    By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix approach (QTM). Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) root distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE, in which, besides the local integrals of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.

  3. [Variable selection methods combined with local linear embedding theory used for optimization of near infrared spectral quantitative models].

    PubMed

    Hao, Yong; Sun, Xu-Dong; Yang, Qiang

    2012-12-01

    A variable selection strategy combined with local linear embedding (LLE) was introduced for the analysis of complex samples by near infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE combined with SPA, were used to eliminate redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used for modeling complex samples. The results show that MCUVE can both extract effective informative variables and improve the precision of models. Compared with PLSR models, LLE-PLSR models achieve more accurate analysis results. MCUVE combined with LLE-PLSR is an effective modeling method for NIRS quantitative analysis.
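
    The core of MCUVE can be sketched without a full PLSR implementation: refit a regression model on many random subsamples and keep the variables whose coefficients are stable (large |mean|/std ratio) across refits. The toy spectra, the ordinary least squares stand-in for PLSR, and all sizes below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "spectra": 40 wavelength variables, of which only the first 5 carry
# information about the property y (all values assumed for this sketch).
n, p = 200, 40
X = rng.normal(size=(n, p))
y = X[:, :5] @ np.array([2.0, -1.5, 1.0, 0.8, -0.5]) + rng.normal(0.0, 0.5, n)

# MCUVE-style stability screening: refit on many random subsamples and score
# each variable by |mean| / std of its coefficient across the refits.
n_runs, n_sub = 500, 140
coefs = np.empty((n_runs, p))
for r in range(n_runs):
    idx = rng.choice(n, n_sub, replace=False)
    coefs[r] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
stability = np.abs(coefs.mean(axis=0)) / coefs.std(axis=0)
selected = np.argsort(stability)[-5:]   # keep the most stable variables
```

    Variables whose coefficients merely fluctuate around zero across the Monte Carlo refits are deemed uninformative and eliminated before the final model is built.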

  4. Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions

    USGS Publications Warehouse

    Liu, C.; Charpentier, R.R.; Su, J.

    2011-01-01

    Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada, and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
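
    Neither of the two compared methods is reproduced here, but the underlying task, estimating a Pareto shape parameter from the larger discovered fields, can be sketched with a standard maximum-likelihood (Hill-type) estimator. The threshold, true shape, and sample sizes below are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def pareto_shape_mle(sizes, x_min):
    # Hill-type maximum-likelihood estimate of the Pareto shape parameter
    # from the field sizes at or above the threshold x_min.
    x = np.asarray([s for s in sizes if s >= x_min], dtype=float)
    return len(x) / np.log(x / x_min).sum()

# Synthetic "field sizes" drawn from a classical Pareto(shape=1.5, x_min=1).
true_shape, x_min = 1.5, 1.0
sizes = x_min * (1.0 + rng.pareto(true_shape, size=5000))

est_all = pareto_shape_mle(sizes, x_min)         # stable: thousands of pools
est_small = pareto_shape_mle(sizes[:50], x_min)  # unstable: few discoveries
```

    Consistent with the study, an estimate based on only 50 "discoveries" varies far more from sample to sample than one based on thousands of pools.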

  5. Sampling for Patient Exit Interviews: Assessment of Methods Using Mathematical Derivation and Computer Simulations.

    PubMed

    Geldsetzer, Pascal; Fink, Günther; Vaikath, Maria; Bärnighausen, Till

    2018-02-01

    (1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method. Literature review, mathematical derivation, and Monte Carlo simulations. Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings. Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
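
    The overrepresentation of long consultations can be reproduced with a toy inspection-paradox simulation. Modelling the interviewer's free moment as a uniform random time over the clinic day is an assumption of this sketch, as are all the numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

# One clinician sees patients back-to-back; durations are exponential (mean 10 min).
n_patients = 200_000
durations = rng.exponential(10.0, n_patients)
ends = np.cumsum(durations)

# "Select the next patient exiting": after each interview the interviewer is
# free at some instant unrelated to the consultation schedule; model that
# instant as uniform over the day and take the first consultation ending after it.
n_interviews = 20_000
free_times = rng.uniform(0.0, ends[-1], n_interviews)
picked = np.searchsorted(ends, free_times)   # index of next patient to exit
sampled_mean = durations[picked].mean()
population_mean = durations.mean()
```

    Long consultations are more likely to span the moment the interviewer becomes free; for exponential consultation times the sampled mean is about twice the population mean, which is exactly the bias removed by selecting the next entering patient instead.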

  6. Weak lensing magnification in the Dark Energy Survey Science Verification data

    NASA Astrophysics Data System (ADS)

    Garcia-Fernandez, M.; Sanchez, E.; Sevilla-Noarbe, I.; Suchyta, E.; Huff, E. M.; Gaztanaga, E.; Aleksić, J.; Ponce, R.; Castander, F. J.; Hoyle, B.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Benoit-Lévy, A.; Bernstein, G. M.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Eifler, T. F.; Evrard, A. E.; Fernandez, E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Giannantonio, T.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; James, D. J.; Jarvis, M.; Kirk, D.; Krause, E.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Lima, M.; MacCrann, N.; Maia, M. A. G.; March, M.; Marshall, J. L.; Melchior, P.; Miquel, R.; Mohr, J. J.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Rykoff, E. S.; Scarpine, V.; Schubnell, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Tarle, G.; Thomas, D.; Walker, A. R.; Wester, W.; DES Collaboration

    2018-05-01

    In this paper, the effect of weak lensing magnification on galaxy number counts is studied by cross-correlating the positions of two galaxy samples, separated by redshift, using the Dark Energy Survey Science Verification data set. This analysis is carried out for galaxies that are selected only by their photometric redshift. An extensive analysis of the systematic effects, using new methods based on simulations, is performed, including a Monte Carlo sampling of the selection function of the survey.

  7. Restoring a smooth function from its noisy integrals

    NASA Astrophysics Data System (ADS)

    Goulko, Olga; Prokof'ev, Nikolay; Svistunov, Boris

    2018-05-01

    Numerical (and experimental) data analysis often requires the restoration of a smooth function from a set of sampled integrals over finite bins. We present the bin hierarchy method that efficiently computes the maximally smooth function from the sampled integrals using essentially all the information contained in the data. We perform extensive tests with different classes of functions and levels of data quality, including Monte Carlo data suffering from a severe sign problem and physical data for the Green's function of the Fröhlich polaron.

  8. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, Andreu; Badano, Aldo

    Purpose: Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  9. Improved inference in Bayesian segmentation using Monte Carlo sampling: application to hippocampal subfield volumetry.

    PubMed

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen

    2013-10-01

    Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer's disease classification task. As an additional benefit, the technique also allows one to compute informative "error bars" on the volume estimates of individual structures. Copyright © 2013 Elsevier B.V. All rights reserved.
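
    The marginalization strategy can be sketched with a minimal Metropolis sampler: instead of fixing a model parameter at a point estimate, draw samples from its posterior and average over them. The one-parameter Gaussian toy model, prior width, proposal step, and burn-in below are illustrative assumptions, not the segmentation model of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model: data y ~ N(theta, 1) with a broad prior theta ~ N(0, 10^2).
y = rng.normal(2.0, 1.0, size=50)

def log_post(theta):
    return -0.5 * np.sum((y - theta) ** 2) - 0.5 * theta ** 2 / 100.0

# Metropolis sampler: marginalize over theta rather than fixing it.
theta, samples = 0.0, []
lp = log_post(theta)
for _ in range(20_000):
    prop = theta + 0.5 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)
post_mean = np.mean(samples[5_000:])   # discard burn-in
```

    Downstream quantities (here simply the posterior mean) are then computed as averages over the retained samples, and the sample spread provides the kind of "error bars" the abstract describes.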

  10. Improved Inference in Bayesian Segmentation Using Monte Carlo Sampling: Application to Hippocampal Subfield Volumetry

    PubMed Central

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Leemput, Koen Van

    2013-01-01

    Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer’s disease classification task. As an additional benefit, the technique also allows one to compute informative “error bars” on the volume estimates of individual structures. PMID:23773521

  11. Stochastic sampling of quadrature grids for the evaluation of vibrational expectation values

    NASA Astrophysics Data System (ADS)

    López Ríos, Pablo; Monserrat, Bartomeu; Needs, Richard J.

    2018-02-01

    The thermal lines method for the evaluation of vibrational expectation values of electronic observables [B. Monserrat, Phys. Rev. B 93, 014302 (2016), 10.1103/PhysRevB.93.014302] was recently proposed as a physically motivated approximation offering balance between the accuracy of direct Monte Carlo integration and the low computational cost of using local quadratic approximations. In this paper we reformulate thermal lines as a stochastic implementation of quadrature-grid integration, analyze the analytical form of its bias, and extend the method to multiple-point quadrature grids applicable to any factorizable harmonic or anharmonic nuclear wave function. The bias incurred by thermal lines is found to depend on the local form of the expectation value, and we demonstrate that the use of finer quadrature grids along selected modes can eliminate this bias, while still offering a ~30% lower computational cost than direct Monte Carlo integration in our tests.

  12. Fission matrix-based Monte Carlo criticality analysis of fuel storage pools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farlotti, M.; Ecole Polytechnique, Palaiseau, F 91128; Larsen, E. W.

    2013-07-01

    Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
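
    The deterministic step of the method can be sketched once the fission matrix has been tallied: power iteration on the matrix yields k-effective (the dominant eigenvalue) and the critical fission source shape (its eigenvector). The 8x8 tridiagonal matrix below is a made-up stand-in for Monte Carlo tallied values.

```python
import numpy as np

# Toy fission matrix for 8 assemblies in a row: F[i, j] is the expected number
# of next-generation fission neutrons born in assembly i per neutron born in j.
# These entries are assumed for the sketch; in the method they are tallied
# during the Monte Carlo transport run.
n = 8
F = 0.4 * np.eye(n) + 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1))

# Power iteration: converges to the dominant eigenpair (k-eff, source shape).
s = np.ones(n)
k_eff = 0.0
for _ in range(500):
    v = F @ s
    k_eff = v.sum() / s.sum()
    s = v / v.sum()
```

    Because the eigenpair is computed deterministically from the tallied matrix, the slow Monte Carlo source convergence between weakly coupled assemblies no longer contaminates the eigenvalue estimate.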

  13. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    PubMed

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of a crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes of more than 500 to provide sufficiently narrow confidence intervals to identify the location of the crossover point. (c) 2015 APA, all rights reserved.
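
    One of the six compared methods, the percentile bootstrap, is straightforward to sketch: in the interaction model y = b0 + b1 x + b2 g + b3 xg, the two simple regression lines cross at x = -b2/b3, and resampling cases gives an interval for that point. The simulated data and coefficients below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data: y = 1 + 0.5 x + g(-2 + 1.0 x) + noise, so the two simple
# regression lines (g = 0 and g = 1) cross at x = 2 (all values assumed).
n = 600
x = rng.uniform(0.0, 4.0, n)
g = rng.integers(0, 2, n).astype(float)
y = 1.0 + 0.5 * x - 2.0 * g + 1.0 * x * g + rng.normal(0.0, 1.0, n)

def crossover(xv, gv, yv):
    # Fit y = b0 + b1 x + b2 g + b3 xg; the lines cross at x = -b2 / b3.
    X = np.column_stack([np.ones_like(xv), xv, gv, xv * gv])
    b = np.linalg.lstsq(X, yv, rcond=None)[0]
    return -b[2] / b[3]

# Percentile bootstrap confidence interval for the crossover point.
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)              # resample cases with replacement
    boots.append(crossover(x[idx], g[idx], y[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
```

    If the interval [lo, hi] lies inside the observed range of x, the interaction is disordinal in that range; an interval falling entirely outside it is consistent with an ordinal interaction.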

  14. Analyzing indirect secondary electron contrast of unstained bacteriophage T4 based on SEM images and Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogura, Toshihiko, E-mail: t-ogura@aist.go.jp

    2009-03-06

    The indirect secondary electron contrast (ISEC) condition of scanning electron microscopy (SEM) produces high contrast detection with minimal damage of unstained biological samples mounted under a thin carbon film. The high contrast image is created by a secondary electron signal produced under the carbon film by a low acceleration voltage. Here, we show that the ISEC condition is clearly able to detect unstained bacteriophage T4 under a thin carbon film (10-15 nm) by using high-resolution field emission (FE) SEM. The results show that FE-SEM provides higher resolution than thermionic emission SEM. Furthermore, we investigated the scattered electron area within the carbon film under ISEC conditions using Monte Carlo simulation. The simulations indicated that the image resolution difference is related to the scattering width in the carbon film and the electron beam spot size. Using ISEC conditions on unstained virus samples would produce low electronic damage, because the electron beam does not directly irradiate the sample. In addition to routine analysis, this method can be utilized for structural analysis of various biological samples like viruses, bacteria, and protein complexes.

  15. Monte Carlo simulations of yttrium reaction rates in Quinta uranium target

    NASA Astrophysics Data System (ADS)

    Suchopár, M.; Wagner, V.; Svoboda, O.; Vrzalová, J.; Chudoba, P.; Tichý, P.; Kugler, A.; Adam, J.; Závorka, L.; Baldin, A.; Furman, W.; Kadykov, M.; Khushvaktov, J.; Solnyshkin, A.; Tsoupko-Sitnikov, V.; Tyutyunnikov, S.; Bielewicz, M.; Kilim, S.; Strugalska-Gola, E.; Szuta, M.

    2017-03-01

    The international collaboration Energy and Transmutation of Radioactive Waste (E&T RAW) performed intensive studies of several simple accelerator-driven system (ADS) setups consisting of lead, uranium and graphite which were irradiated by relativistic proton and deuteron beams in past years at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia. The most recent setup, called Quinta, consisting of a natural uranium target-blanket and lead shielding, was irradiated by deuteron beams in the energy range between 1 and 8 GeV in three accelerator runs at the JINR Nuclotron in 2011 and 2012, with yttrium samples among others inserted inside the setup to measure the neutron flux in various places. Suitable activation detectors serve as one of the possible tools for monitoring proton and deuteron beams and for measuring the neutron field distribution in ADS studies. Yttrium is one such suitable material for monitoring high energy neutrons, as various threshold reactions can be observed in yttrium samples. The yields of isotopes produced in the samples were determined using the activation method. Monte Carlo simulations of the reaction rates leading to production of different isotopes were performed in the MCNPX transport code and compared with the experimental results obtained from the yttrium samples.

  16. Isotopic composition analysis and age dating of uranium samples by high resolution gamma ray spectrometry

    NASA Astrophysics Data System (ADS)

    Apostol, A. I.; Pantelica, A.; Sima, O.; Fugaru, V.

    2016-09-01

    Non-destructive methods were applied to determine the isotopic composition and the time elapsed since last chemical purification of nine uranium samples. The applied methods are based on measuring gamma and X radiations of uranium samples by a high resolution low energy gamma spectrometric system with a planar high purity germanium detector and a low background gamma spectrometric system with a coaxial high purity germanium detector. The "Multigroup γ-ray Analysis Method for Uranium" (MGAU) code was used for the precise determination of the samples' isotopic composition. The age of the samples was determined from the isotopic ratio 214Bi/234U. This ratio was calculated from the analyzed spectra of each uranium sample, using relative detection efficiency. Special attention is paid to the coincidence summing corrections that have to be taken into account when performing this type of analysis. In addition, an alternative approach for the age determination using full energy peak efficiencies obtained by Monte Carlo simulations with the GESPECOR code is described.

  17. Change-in-ratio estimators for populations with more than two subclasses

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1991-01-01

    Change-in-ratio methods have been developed to estimate the size of populations with two or three population subclasses. Most of these methods require the often unreasonable assumption of equal sampling probabilities for individuals in all subclasses. This paper presents new models based on the weaker assumption that ratios of sampling probabilities are constant over time for populations with three or more subclasses. Estimation under these models requires that a value be assumed for one of these ratios when there are two samples. Explicit expressions are given for the maximum likelihood estimators under models for two samples with three or more subclasses and for three samples with two subclasses. A numerical method using readily available statistical software is described for obtaining the estimators and their standard errors under all of the models. Likelihood ratio tests that can be used in model selection are discussed. Emphasis is on the two-sample, three-subclass models for which Monte-Carlo simulation results and an illustrative example are presented.

  18. The Multiple-Minima Problem in Protein Folding

    NASA Astrophysics Data System (ADS)

    Scheraga, Harold A.

    1991-10-01

    The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) build-up procedure, (b) optimization of electrostatics, (c) Monte Carlo-plus-energy minimization, (d) electrostatically-driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern-recognition, and (i) diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins and globular proteins. Present efforts are being devoted to scaling up these procedures from small polypeptides to proteins, to try to compute the three-dimensional structure of a protein from its amino acid sequence.
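
    Procedure (c), Monte Carlo-plus-energy minimization (now commonly called basin hopping), can be sketched on a one-dimensional toy surface: perturb the current minimum, re-minimize, and accept or reject the new minimum with a Metropolis rule. The energy function, step sizes, and temperature below are illustrative assumptions, not a protein force field.

```python
import math
import random

random.seed(0)

# Rugged 1-D energy surface with many local minima; global minimum at x = 0.
def energy(x):
    return x * x + 4.0 * math.sin(6.0 * x) ** 2

def local_minimize(x, step=2e-3, iters=500):
    # Crude gradient descent with a numerical derivative (a stand-in for a
    # real minimizer); it reaches only the nearest local minimum.
    for _ in range(iters):
        grad = (energy(x + 1e-6) - energy(x - 1e-6)) / 2e-6
        x -= step * grad
    return x

# Monte Carlo-plus-minimization: perturb, re-minimize, accept with Metropolis.
temperature = 0.3
x = local_minimize(3.0)
best_x, best_e = x, energy(x)
for _ in range(300):
    trial = local_minimize(x + random.uniform(-1.5, 1.5))
    delta = energy(trial) - energy(x)
    if random.random() < math.exp(min(0.0, -delta / temperature)):
        x = trial
    if energy(x) < best_e:
        best_x, best_e = x, energy(x)
```

    Plain minimization from the same starting point would stall in the nearest local minimum; the Monte Carlo hops between basins are what allow the search to escape toward the global minimum.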

  19. Study on the measuring distance for blood glucose infrared spectral measuring by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Li, Xiang

    2016-10-01

    Blood glucose monitoring is of great importance for controlling the progression of diabetes and preventing its complications. At present, clinical blood glucose concentration measurement is invasive and could be replaced by noninvasive spectroscopic analytical techniques. Among the various parameters of the optical fiber probe used in spectral measurement, the measurement distance is the key one. The Monte Carlo technique is a flexible method for simulating light propagation in tissue. The simulation is based on the random walks that photons make as they travel through tissue, which are chosen by statistically sampling the probability distributions for step size and angular deflection per scattering event. The traditional method for determining the optimal distance between the transmitting fiber and the detector is to use a Monte Carlo simulation to find the point where most photons exit. But there is a problem: the epidermal layer contains no arteries, veins, or capillary vessels, so photons propagating and interacting with tissue in the epidermal layer carry no glucose information. A new criterion is therefore proposed to determine the optimal distance, named the effective path length in this paper. The path length of each photon travelling in the dermis is recorded during the Monte Carlo simulation; this is the effective path length defined above. The sum of the effective path lengths of all photons at each point is calculated, and the detector should be placed at the point with the greatest total effective path length. This determines the optimal measuring distance between the transmitting fiber and the detector.
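
    The step-size and angular-deflection sampling described here can be sketched directly. The interaction coefficient and anisotropy factor below are assumed, generic tissue-like values; recording the per-photon path length in the dermis would be layered on top of this random walk.

```python
import math
import random

random.seed(0)

MU_T = 10.0   # total interaction coefficient in 1/mm (assumed value)
G = 0.9       # Henyey-Greenstein anisotropy factor, typical for soft tissue

def sample_step():
    # Inverse-CDF sampling of the exponential free-path distribution.
    return -math.log(random.random() or 1e-12) / MU_T

def sample_cos_theta():
    # Henyey-Greenstein sampling of the angular deflection per scattering event.
    xi = random.random()
    tmp = (1.0 - G * G) / (1.0 - G + 2.0 * G * xi)
    return (1.0 + G * G - tmp * tmp) / (2.0 * G)

steps = [sample_step() for _ in range(100_000)]
mean_free_path = sum(steps) / len(steps)   # approaches 1 / MU_T = 0.1 mm
```

    Each photon alternates these two draws, moving a sampled step and then deflecting by a sampled angle; summing the step lengths that fall inside the dermis gives the effective path length used as the placement criterion.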

  20. Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept

    NASA Technical Reports Server (NTRS)

    Thipphavong, David

    2010-01-01

    Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.

  1. Molecular detection of Toxoplasma gondii in water samples from Scotland and a comparison between the 529bp real-time PCR and ITS1 nested PCR.

    PubMed

    Wells, Beth; Shaw, Hannah; Innocent, Giles; Guido, Stefano; Hotchkiss, Emily; Parigi, Maria; Opsteegh, Marieke; Green, James; Gillespie, Simon; Innes, Elisabeth A; Katzer, Frank

    2015-12-15

    Waterborne transmission of Toxoplasma gondii is a potential public health risk and there are currently no agreed optimised methods for the recovery, processing and detection of T. gondii oocysts in water samples. In this study modified methods of T. gondii oocyst recovery and DNA extraction were applied to 1427 samples collected from 147 public water supplies throughout Scotland. T. gondii DNA was detected, using real time PCR (qPCR) targeting the 529bp repeat element, in 8.79% of interpretable samples (124 out of 1411 samples). The samples which were positive for T. gondii DNA originated from a third of the sampled water sources. The samples which were positive by qPCR and some of the negative samples were reanalysed using ITS1 nested PCR (nPCR) and results compared. The 529bp qPCR was the more sensitive technique and a full analysis of assay performance, by Bayesian analysis using a Markov Chain Monte Carlo method, was completed which demonstrated the efficacy of this method for the detection of T. gondii in water samples. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Hyper-Parallel Tempering Monte Carlo Method and Its Applications

    NASA Astrophysics Data System (ADS)

    Yan, Qiliang; de Pablo, Juan

    2000-03-01

    A new generalized hyper-parallel tempering Monte Carlo molecular simulation method is presented for study of complex fluids. The method is particularly useful for simulation of many-molecule complex systems, where rough energy landscapes and inherently long characteristic relaxation times can pose formidable obstacles to effective sampling of relevant regions of configuration space. The method combines several key elements from expanded ensemble formalisms, parallel-tempering, open ensemble simulations, configurational bias techniques, and histogram reweighting analysis of results. It is found to accelerate significantly the diffusion of a complex system through phase-space. In this presentation, we demonstrate the effectiveness of the new method by implementing it in grand canonical ensembles for a Lennard-Jones fluid, for the restricted primitive model of electrolyte solutions (RPM), and for polymer solutions and blends. Our results indicate that the new algorithm is capable of overcoming the large free energy barriers associated with phase transitions, thereby greatly facilitating the simulation of coexistence properties. It is also shown that the method can be orders of magnitude more efficient than previously available techniques. More importantly, the method is relatively simple and can be incorporated into existing simulation codes with minor efforts.
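
    The parallel-tempering ingredient of the method can be sketched in its simplest form: several replicas at different inverse temperatures run ordinary Metropolis moves, and neighbouring replicas occasionally swap configurations so cold chains inherit the barrier crossings of hot ones. The double-well energy, temperature ladder, and move sizes below are illustrative assumptions, far simpler than the many-molecule systems of the abstract.

```python
import math
import random

random.seed(0)

# Replica exchange on a double-well energy E(x) = (x^2 - 1)^2: the barrier at
# x = 0 traps a single cold chain, but swaps with hot replicas free it.
def energy(x):
    return (x * x - 1.0) ** 2

betas = [0.2, 0.5, 1.0, 2.0, 5.0]     # inverse temperatures, hot -> cold
xs = [0.0] * len(betas)               # one configuration per replica

def metropolis_move(i):
    prop = xs[i] + random.uniform(-0.5, 0.5)
    d = energy(prop) - energy(xs[i])
    if random.random() < math.exp(min(0.0, -betas[i] * d)):
        xs[i] = prop

def swap_attempt(i):
    # Standard tempering criterion for exchanging neighbouring replicas.
    delta = (betas[i + 1] - betas[i]) * (energy(xs[i + 1]) - energy(xs[i]))
    if random.random() < math.exp(min(0.0, delta)):
        xs[i], xs[i + 1] = xs[i + 1], xs[i]

cold_samples = []
for _ in range(20_000):
    for i in range(len(betas)):
        metropolis_move(i)
    swap_attempt(random.randrange(len(betas) - 1))
    cold_samples.append(xs[-1])

# The coldest replica should visit both wells (near x = -1 and x = +1).
visited_both = min(cold_samples) < -0.5 and max(cold_samples) > 0.5
```

    The hyper-parallel variant layers expanded and open ensembles, configurational bias, and histogram reweighting on top of this swap mechanism, but the acceptance criterion for exchanges is the same.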

  3. Physical Principle for Generation of Randomness

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2009-01-01

    A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)

  4. HepSim: A repository with predictions for high-energy physics experiments

    DOE PAGES

    Chekanov, S. V.

    2015-02-03

    A file repository for calculations of cross sections and kinematic distributions using Monte Carlo generators for high-energy collisions is discussed. The repository is used to facilitate effective preservation and archiving of data from theoretical calculations and for comparisons with experimental data. The HepSim data library is publicly accessible and includes a number of Monte Carlo event samples with Standard Model predictions for current and future experiments. The HepSim project includes a software package to automate the process of downloading and viewing online Monte Carlo event samples. Data streaming over a network for end-user analysis is discussed.

  5. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2011-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10²–10⁴), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.

  6. Continuous-time quantum Monte Carlo impurity solvers

    NASA Astrophysics Data System (ADS)

    Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias

    2011-04-01

    Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states. Program summary. Program title: dmft Catalogue identifier: AEIL_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: ALPS LIBRARY LICENSE version 1.1 No. of lines in distributed program, including test data, etc.: 899 806 No. of bytes in distributed program, including test data, etc.: 32 153 916 Distribution format: tar.gz Programming language: C++ Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher), and Intel C++ Compiler (icc version 7.0 and higher) MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0) IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers Compaq Tru64 UNIX with Compaq C++ Compiler (cxx) SGI IRIX with MIPSpro C++ Compiler (CC) HP-UX with HP C++ Compiler (aCC) Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher) RAM: 10 MB-1 GB Classification: 7.3 External routines: ALPS [1], BLAS/LAPACK, HDF5 Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. 
They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions. Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2]. Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper. Running time: 60 s-8 h per iteration.

  7. Stan : A Probabilistic Programming Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.

  8. Assessment of myocardial metabolic rate of glucose by means of Bayesian ICA and Markov Chain Monte Carlo methods in small animal PET imaging

    NASA Astrophysics Data System (ADS)

    Berradja, Khadidja; Boughanmi, Nabil

    2016-09-01

    In dynamic cardiac PET FDG studies the assessment of the myocardial metabolic rate of glucose (MMRG) requires knowledge of the blood input function (IF). The IF can be obtained by manual or automatic blood sampling and cross calibrated with PET. These procedures are cumbersome, invasive and generate uncertainties. The IF is contaminated by spillover of radioactivity from the adjacent myocardium, which can cause significant errors in the estimated MMRG. In this study, we show that the IF can be extracted from the images in a rat heart study with 18F-fluorodeoxyglucose (18F-FDG) by means of Independent Component Analysis (ICA) based on Bayesian theory and a Markov Chain Monte Carlo (MCMC) sampling method (BICA). Images of the heart from rats were acquired with the Sherbrooke small animal PET scanner. A region of interest (ROI) was drawn around the rat image and decomposed into blood and tissue using BICA. The statistical analysis showed that there is a significant difference (p < 0.05) between the MMRG obtained with the IF extracted by BICA and that obtained with the IF extracted from measured images corrupted with spillover.

  9. Stan : A Probabilistic Programming Language

    DOE PAGES

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.; ...

    2017-01-01

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.

  10. Hybrid Monte Carlo/deterministic methods for radiation shielding problems

    NASA Astrophysics Data System (ADS)

    Becker, Troy L.

    For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters: the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations: weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. 
Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods can be used to achieve user-specified Monte Carlo distributions. Overall, the Transform approach performed more efficiently than the weight window methods, but it performed much more efficiently for source-detector problems than for global problems.
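    The weight-window idea that recurs in the last two records can be sketched as a simple splitting/Russian-roulette rule (an illustrative Python sketch of the standard textbook formulation, not the thesis code): particles heavier than the window ceiling are split into lighter copies, particles below the floor play Russian roulette, and the expected total weight is preserved in both cases.

    ```python
    import random

    def apply_weight_window(weight, w_low, w_high, rng):
        """Return the list of particle weights after applying a weight window.

        Splitting and Russian roulette both preserve the expected total
        weight, so tallies remain unbiased while the particle population
        is concentrated where the window (set, e.g., from an adjoint
        solution) says it matters.
        """
        if weight > w_high:
            # Splitting: replace one heavy particle with n lighter copies.
            n = int(weight / w_high) + 1
            return [weight / n] * n
        if weight < w_low:
            # Russian roulette: survive with probability weight / w_surv,
            # survivors being restored to the window floor.
            w_surv = w_low
            if rng.random() < weight / w_surv:
                return [w_surv]
            return []  # particle is killed
        return [weight]  # inside the window: unchanged
    ```

    For example, a particle of weight 5.0 in a window [0.5, 2.0] is split into three copies of weight 5/3, while a particle of weight 0.1 survives roulette with probability 0.2 and, if it survives, carries weight 0.5.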

  11. Markov Chain Monte Carlo Joint Analysis of Chandra X-Ray Imaging Spectroscopy and Sunyaev-Zel'dovich Effect Data

    NASA Technical Reports Server (NTRS)

    Bonamente, Massimillano; Joy, Marshall K.; Carlstrom, John E.; Reese, Erik D.; LaRoque, Samuel J.

    2004-01-01

    X-ray and Sunyaev-Zel'dovich effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from Chandra, which provides both spatial and spectral information, and Sunyaev-Zel'dovich effect data were obtained from the BIMA and Owens Valley Radio Observatory (OVRO) arrays. We introduce a Markov Chain Monte Carlo procedure for the joint analysis of X-ray and Sunyaev-Zel'dovich effect data. The advantages of this method are its high computational efficiency and its ability to measure simultaneously the probability distributions of all parameters of interest, such as the spatial and spectral properties of the cluster gas, as well as derived quantities such as the distance to the cluster. We demonstrate this technique by applying it to the Chandra X-ray data and the OVRO radio data for the galaxy cluster A611. Comparisons with traditional likelihood ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distances to a large sample of galaxy clusters.

  12. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    NASA Astrophysics Data System (ADS)

    Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.

    2016-02-01

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, and good agreement with the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Furthermore, two numerical methods used for sampling energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and comparison to experimental measurements of a hypersonic carbon-dioxide flow around a flat-faced cylinder.

  13. First Monte Carlo Global Analysis of Nucleon Transversity with Lattice QCD Constraints

    DOE PAGES

    Lin, Huey-Wen; Melnitchouk, Wally; Prokudin, Alexei; ...

    2018-04-11

    We report on the first global QCD analysis of the quark transversity distributions in the nucleon from semi-inclusive deep-inelastic scattering (SIDIS), using a new Monte Carlo method based on nested sampling and constraints on the isovector tensor charge $g_T$ from lattice QCD. A simultaneous fit to the available SIDIS Collins asymmetry data is compatible with $g_T$ values extracted from a comprehensive reanalysis of existing lattice simulations, in contrast to previous analyses, which found significantly smaller $g_T$ values. The contributions to the nucleon tensor charge from $u$ and $d$ quarks are found to be $\delta u = 0.3(2)$ and $\delta d = -0.7(2)$ at a scale $Q^2 = 2$ GeV$^2$.

  14. Continuous-time quantum Monte Carlo calculation of multiorbital vertex asymptotics

    NASA Astrophysics Data System (ADS)

    Kaufmann, Josef; Gunacker, Patrik; Held, Karsten

    2017-07-01

    We derive the equations for calculating the high-frequency asymptotics of the local two-particle vertex function for a multiorbital impurity model. These relate the asymptotics for a general local interaction to equal-time two-particle Green's functions, which we sample using continuous-time quantum Monte Carlo simulations with a worm algorithm. As specific examples we study the single-orbital Hubbard model and the three t2 g orbitals of SrVO3 within dynamical mean-field theory (DMFT). We demonstrate how the knowledge of the high-frequency asymptotics reduces the statistical uncertainties of the vertex and further eliminates finite-box-size effects. The proposed method benefits the calculation of nonlocal susceptibilities in DMFT and diagrammatic extensions of DMFT.

  15. Optimization and benchmarking of a perturbative Metropolis Monte Carlo quantum mechanics/molecular mechanics program

    NASA Astrophysics Data System (ADS)

    Feldt, Jonas; Miranda, Sebastião; Pratas, Frederico; Roma, Nuno; Tomás, Pedro; Mata, Ricardo A.

    2017-12-01

    In this work, we present an optimized perturbative quantum mechanics/molecular mechanics (QM/MM) method for use in Metropolis Monte Carlo simulations. The model adopted is particularly tailored for the simulation of molecular systems in solution but can be readily extended to other applications, such as catalysis in enzymatic environments. The electrostatic coupling between the QM and MM systems is simplified by applying perturbation theory to estimate the energy changes caused by a movement in the MM system. This approximation, together with the effective use of GPU acceleration, leads to a negligible added computational cost for the sampling of the environment. Benchmark calculations are carried out to evaluate the impact of the approximations applied and the overall computational performance.

  16. First Monte Carlo Global Analysis of Nucleon Transversity with Lattice QCD Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Huey-Wen; Melnitchouk, Wally; Prokudin, Alexei

    We report on the first global QCD analysis of the quark transversity distributions in the nucleon from semi-inclusive deep-inelastic scattering (SIDIS), using a new Monte Carlo method based on nested sampling and constraints on the isovector tensor charge $g_T$ from lattice QCD. A simultaneous fit to the available SIDIS Collins asymmetry data is compatible with $g_T$ values extracted from a comprehensive reanalysis of existing lattice simulations, in contrast to previous analyses, which found significantly smaller $g_T$ values. The contributions to the nucleon tensor charge from $u$ and $d$ quarks are found to be $\delta u = 0.3(2)$ and $\delta d = -0.7(2)$ at a scale $Q^2 = 2$ GeV$^2$.

  17. First Monte Carlo Global Analysis of Nucleon Transversity with Lattice QCD Constraints.

    PubMed

    Lin, H-W; Melnitchouk, W; Prokudin, A; Sato, N; Shows, H

    2018-04-13

    We report on the first global QCD analysis of the quark transversity distributions in the nucleon from semi-inclusive deep-inelastic scattering (SIDIS), using a new Monte Carlo method based on nested sampling and constraints on the isovector tensor charge g_{T} from lattice QCD. A simultaneous fit to the available SIDIS Collins asymmetry data is compatible with g_{T} values extracted from a comprehensive reanalysis of existing lattice simulations, in contrast to previous analyses, which found significantly smaller g_{T} values. The contributions to the nucleon tensor charge from u and d quarks are found to be δu=0.3(2) and δd=-0.7(2) at a scale Q^{2}=2  GeV^{2}.

  18. First Monte Carlo Global Analysis of Nucleon Transversity with Lattice QCD Constraints

    NASA Astrophysics Data System (ADS)

    Lin, H.-W.; Melnitchouk, W.; Prokudin, A.; Sato, N.; Shows, H.; Jefferson Lab Angular Momentum JAM Collaboration

    2018-04-01

    We report on the first global QCD analysis of the quark transversity distributions in the nucleon from semi-inclusive deep-inelastic scattering (SIDIS), using a new Monte Carlo method based on nested sampling and constraints on the isovector tensor charge gT from lattice QCD. A simultaneous fit to the available SIDIS Collins asymmetry data is compatible with gT values extracted from a comprehensive reanalysis of existing lattice simulations, in contrast to previous analyses, which found significantly smaller gT values. The contributions to the nucleon tensor charge from u and d quarks are found to be δ u =0.3 (2 ) and δ d =-0.7 (2 ) at a scale Q2=2 GeV2.

  19. Optimization and benchmarking of a perturbative Metropolis Monte Carlo quantum mechanics/molecular mechanics program.

    PubMed

    Feldt, Jonas; Miranda, Sebastião; Pratas, Frederico; Roma, Nuno; Tomás, Pedro; Mata, Ricardo A

    2017-12-28

    In this work, we present an optimized perturbative quantum mechanics/molecular mechanics (QM/MM) method for use in Metropolis Monte Carlo simulations. The model adopted is particularly tailored for the simulation of molecular systems in solution but can be readily extended to other applications, such as catalysis in enzymatic environments. The electrostatic coupling between the QM and MM systems is simplified by applying perturbation theory to estimate the energy changes caused by a movement in the MM system. This approximation, together with the effective use of GPU acceleration, leads to a negligible added computational cost for the sampling of the environment. Benchmark calculations are carried out to evaluate the impact of the approximations applied and the overall computational performance.

  20. Quantum Gibbs ensemble Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fantoni, Riccardo, E-mail: rfantoni@ts.infn.it; Moroni, Saverio, E-mail: moroni@democritos.it

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.

  1. An evaluation of sampling and full enumeration strategies for Fisher Jenks classification in big data settings

    USGS Publications Warehouse

    Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.

    2017-01-01

    Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. Spatial autocorrelation, the number of desired classes, and the form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between the improved speed of the sampling approaches and the loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.

  2. A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen Vernon; Moyer, Robert D.

    2005-05-01

    Proposed supplement I to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals.
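    The two-stage idea described above can be sketched as follows (a hypothetical Python illustration for a single Gaussian input, not the paper's implementation): the outer stage resamples plausible values of the input's mean and standard deviation to reflect that they were estimated from only n observations, and the inner stage propagates draws through the measurement equation.

    ```python
    import math
    import random

    def two_stage_mc(xbar, s, n, g, n_outer=200, n_inner=500, seed=0):
        """Two-stage Monte Carlo sketch for one Gaussian input estimated
        from n observations with sample mean xbar and sample sd s.

        Outer stage: draw plausible population parameters (mu, sigma)
        consistent with the finite-sample estimates.
        Inner stage: propagate input draws through the measurement
        equation g, accumulating the output distribution.
        """
        rng = random.Random(seed)
        outputs = []
        for _ in range(n_outer):
            # Plausible population sd: rescale s by a chi-square with n-1 dof.
            chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n - 1))
            sigma = s * math.sqrt((n - 1) / chi2)
            # Plausible population mean given that sd.
            mu = rng.gauss(xbar, sigma / math.sqrt(n))
            for _ in range(n_inner):
                outputs.append(g(rng.gauss(mu, sigma)))
        return outputs
    ```

    Treating the estimates as exact corresponds to skipping the outer stage; the two-stage output distribution is wider, which is precisely the extra uncertainty from the finite sample size.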

  3. Monte Carlo simulation of X-ray imaging and spectroscopy experiments using quadric geometry and variance reduction techniques

    NASA Astrophysics Data System (ADS)

    Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca

    2014-03-01

    The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. 
Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 83617 No. of bytes in distributed program, including test data, etc.: 1038160 Distribution format: tar.gz Programming language: C++. Computer: Tested on several PCs and on Mac. Operating system: Linux, Mac OS X, Windows (native and cygwin). RAM: It is dependent on the input data but usually between 1 and 10 MB. Classification: 2.5, 21.1. External routines: XrayLib (https://github.com/tschoonj/xraylib/wiki) Nature of problem: Simulation of a wide range of X-ray imaging and spectroscopy experiments using different types of sources and detectors. Solution method: XRMC is a versatile program that is useful for the simulation of a wide range of X-ray imaging and spectroscopy experiments. It enables the simulation of monochromatic and polychromatic X-ray sources, with unpolarised or partially/completely polarised radiation. Single-element detectors as well as two-dimensional pixel detectors can be used in the simulations, with several acquisition options. In the current version of the program, the sample is modelled by combining convex three-dimensional objects demarcated by quadric surfaces, such as planes, ellipsoids and cylinders. The Monte Carlo approach makes XRMC able to accurately simulate X-ray photon transport and interactions with matter up to any order of interaction. The differential cross-sections and all other quantities related to the interaction processes (photoelectric absorption, fluorescence emission, elastic and inelastic scattering) are computed using the xraylib software library, which is currently the most complete and up-to-date software library for X-ray parameters. The use of variance reduction techniques makes XRMC able to reduce the simulation time by several orders of magnitude compared to other general-purpose Monte Carlo simulation programs. Running time: It is dependent on the complexity of the simulation. 
For the examples distributed with the code, it ranges from less than 1 s to a few minutes.

  4. Radiative transfer modelling inside thermal protection system using hybrid homogenization method for a backward Monte Carlo method coupled with Mie theory

    NASA Astrophysics Data System (ADS)

    Le Foll, S.; André, F.; Delmas, A.; Bouilly, J. M.; Aspa, Y.

    2012-06-01

    A backward Monte Carlo method for modelling the spectral directional emittance of fibrous media has been developed. It uses Mie theory to calculate the radiative properties of single fibres, modelled as infinite cylinders, and the complex refractive index is computed by a Drude-Lorenz model for the dielectric function. The absorption and scattering coefficients are homogenised over several fibres, but the scattering phase function of a single fibre is used to determine the scattering direction of energy inside the medium. Sensitivity analysis based on several Monte Carlo results has been performed to estimate coefficients for a Multi-Linear Model (MLM) specifically developed for inverse analysis of experimental data. This model agrees well with the Monte Carlo method and is highly computationally efficient. In contrast, the surface emissivity model, which assumes an opaque medium, shows poor agreement with the reference Monte Carlo calculations.

  5. Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy

    NASA Astrophysics Data System (ADS)

    Sharma, Sanjib

    2017-08-01

    Markov Chain Monte Carlo based Bayesian data analysis has now become the method of choice for analyzing and interpreting data in almost all disciplines of science. In astronomy, over the last decade, we have also seen a steady increase in the number of papers that employ Monte Carlo based Bayesian analysis. New, efficient Monte Carlo based methods are continuously being developed and explored. In this review, we first explain the basics of Bayesian theory and discuss how to set up data analysis problems within this framework. Next, we provide an overview of various Monte Carlo based methods for performing Bayesian data analysis. Finally, we discuss advanced ideas that enable us to tackle complex problems and thus hold great promise for the future. We also distribute downloadable computer software (available at https://github.com/sanjibs/bmcmc/ ) that implements some of the algorithms and examples discussed here.
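    The simplest of the Monte Carlo methods such a review covers is random-walk Metropolis, sketched here in plain Python (an illustrative toy, not the bmcmc package): propose a Gaussian perturbation of the current state and accept it with probability min(1, posterior ratio).

    ```python
    import math
    import random

    def metropolis(log_post, x0, step, n_steps, seed=0):
        """Random-walk Metropolis sampler for a 1-D unnormalised
        log posterior density log_post."""
        rng = random.Random(seed)
        x, lp = x0, log_post(x0)
        chain = []
        for _ in range(n_steps):
            xp = x + rng.gauss(0.0, step)       # symmetric proposal
            lpp = log_post(xp)
            # Accept with probability min(1, post(xp) / post(x)).
            if rng.random() < math.exp(min(0.0, lpp - lp)):
                x, lp = xp, lpp
            chain.append(x)
        return chain

    # Toy problem: mean of a Gaussian with known sd = 1 and a flat prior,
    # so the posterior is N(mean(data), 1/len(data)).
    data = [1.2, 0.8, 1.1, 0.9, 1.0]
    log_post = lambda mu: -0.5 * sum((d - mu) ** 2 for d in data)
    chain = metropolis(log_post, x0=0.0, step=0.8, n_steps=20_000)
    ```

    After discarding a burn-in, the chain's sample mean and standard deviation approximate the posterior mean (1.0 here) and posterior sd (1/sqrt(5)); samplers such as NUTS differ mainly in how the proposal is constructed.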

  6. Computational tools for exact conditional logistic regression.

    PubMed

    Corcoran, C; Mehta, C; Patel, N; Senchaudhuri, P

    Logistic regression analyses are often challenged by the inability of unconditional likelihood-based approximations to yield consistent, valid estimates and p-values for model parameters. This can be due to sparseness or separability in the data. Conditional logistic regression, though useful in such situations, can also be computationally infeasible when the sample size or number of explanatory covariates is large. We review recent developments that allow efficient approximate conditional inference, including Monte Carlo sampling and saddlepoint approximations. We demonstrate through real examples that these methods enable the analysis of significantly larger and more complex data sets. We find in this investigation that for these moderately large data sets Monte Carlo seems the better alternative, as it provides unbiased estimates of the exact results and can be executed in less CPU time than the single saddlepoint approximation. Moreover, the double saddlepoint approximation, while computationally the easiest to obtain, offers little practical advantage: it produces unreliable results and cannot be computed when a maximum likelihood solution does not exist. Copyright 2001 John Wiley & Sons, Ltd.

  7. Monte Carlo simulation for kinetic chemotaxis model: An application to the traveling population wave

    NASA Astrophysics Data System (ADS)

    Yasuda, Shugo

    2017-02-01

    A Monte Carlo simulation of chemotactic bacteria is developed on the basis of the kinetic model and is applied to a one-dimensional traveling population wave in a microchannel. In this simulation, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method that calculates the macroscopic transport of the chemical cues in the environment. The simulation method successfully reproduces the traveling population wave of bacteria observed experimentally and reveals the microscopic dynamics of individual bacteria coupled with the macroscopic transport of the chemical cues and the bacterial population density. The results obtained by the Monte Carlo method are also compared with the asymptotic solution derived from the kinetic chemotaxis equation in the continuum limit, where the Knudsen number, defined as the ratio of the mean free path of a bacterium to the characteristic length of the system, vanishes. The validity of the Monte Carlo method in the asymptotic regime of small Knudsen numbers is numerically verified.
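    The run-and-tumble motion underlying such kinetic Monte Carlo simulations alternates straight runs at constant speed with random reorientations. A minimal 2D sketch, with all parameters (speed, tumble rate, time step) chosen for illustration rather than taken from the paper:

```python
import math
import random

def run_and_tumble(n_steps, speed=1.0, tumble_rate=0.5, dt=0.1, seed=1):
    """2D run-and-tumble walk: run straight, and tumble to a fresh random
    heading with probability tumble_rate * dt in each time step."""
    rng = random.Random(seed)
    x = y = 0.0
    theta = rng.uniform(0.0, 2.0 * math.pi)
    path = [(x, y)]
    for _ in range(n_steps):
        if rng.random() < tumble_rate * dt:          # tumble event
            theta = rng.uniform(0.0, 2.0 * math.pi)  # pick a new random heading
        x += speed * math.cos(theta) * dt            # run segment
        y += speed * math.sin(theta) * dt
        path.append((x, y))
    return path

path = run_and_tumble(1000)
```

A chemotactic version would bias the tumble probability using the local chemical gradient, which is where the coupling to the macroscopic transport solver enters.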

  8. Monte Carlo Calculations of Polarized Microwave Radiation Emerging from Cloud Structures

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Roberti, Laura

    1998-01-01

    The last decade has seen tremendous growth in cloud dynamical and microphysical models that are able to simulate storms and storm systems with very high spatial resolution, typically of the order of a few kilometers. The fairly realistic distributions of cloud and hydrometeor properties that these models generate have in turn led to a renewed interest in the three-dimensional microwave radiative transfer modeling needed to understand the effect of cloud and rainfall inhomogeneities upon microwave observations. Monte Carlo methods, and particularly backwards Monte Carlo methods, have shown themselves to be very desirable due to the quick convergence of the solutions. Unfortunately, backwards Monte Carlo methods are not well suited to treat polarized radiation. This study reviews the existing Monte Carlo methods and presents a new polarized Monte Carlo radiative transfer code. The code is based on a forward scheme but uses aliasing techniques to keep the computational requirements equivalent to those of the backwards solution. Radiative transfer computations have been performed using a microphysical-dynamical cloud model and the results are presented together with the algorithm description.

  9. An efficient reliability algorithm for locating design point using the combination of importance sampling concepts and response surface method

    NASA Astrophysics Data System (ADS)

    Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin

    2017-06-01

    Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm combining importance sampling, a class of MCS, with RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts and a proposed two-step rule for updating the design point. This part finishes after a small number of samples have been generated. RSM then takes over using Bucher's experimental design, with the last design point and a proposed effective length serving as the center point and radius of Bucher's approach, respectively. Illustrative numerical examples demonstrate the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules.
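    The two sampling ingredients the algorithm builds on, crude MCS and importance sampling centered near the design point, can be sketched as follows. The limit-state function g and the sampling shift below are hypothetical illustrations, not the authors' examples:

```python
import math
import random

def failure_prob_mc(g, sampler, n, seed=2):
    """Crude Monte Carlo: P_f is the fraction of samples with g(x) <= 0."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if g(sampler(rng)) <= 0.0) / n

def failure_prob_is(g, n, shift=3.0, seed=3):
    """Importance sampling: draw from N(shift, 1) near the design point and
    reweight each failing sample by the density ratio phi(x) / phi(x - shift)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if g(x) <= 0.0:
            total += math.exp(-0.5 * x * x + 0.5 * (x - shift) ** 2)
    return total / n

# Hypothetical limit state: failure when a standard normal load exceeds 3.
g = lambda x: 3.0 - x
p_crude = failure_prob_mc(g, lambda rng: rng.gauss(0.0, 1.0), n=200000)
p_is = failure_prob_is(g, n=20000)
# Both estimate P(X > 3) ~ 1.35e-3; importance sampling needs far fewer samples.
```

The variance reduction from shifting the sampling density toward the failure region is what makes importance sampling attractive when each g evaluation is expensive.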

  10. Development of volume equations using data obtained by upper stem dendrometry with Monte Carlo integration: preliminary results for eastern redcedar

    Treesearch

    Thomas B. Lynch; Rodney E. Will; Rider Reynolds

    2013-01-01

    Preliminary results are given for development of an eastern redcedar (Juniperus virginiana) cubic-volume equation based on measurements of redcedar sample tree stem volume using dendrometry with Monte Carlo integration. Monte Carlo integration techniques can be used to provide unbiased estimates of stem cubic-foot volume based on upper stem diameter...
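    Monte Carlo integration yields an unbiased volume estimate because the cross-sectional area evaluated at uniformly random heights, averaged and multiplied by total height, has the true integral as its expectation. A sketch with a hypothetical conical taper (not redcedar data):

```python
import math
import random

def mc_volume(area, height, n, seed=4):
    """Unbiased Monte Carlo estimate of the integral of area(h) over [0, height]:
    average the area at uniformly random heights, then scale by total height."""
    rng = random.Random(seed)
    total = sum(area(rng.uniform(0.0, height)) for _ in range(n))
    return height * total / n

# Hypothetical cone-shaped stem: radius tapers linearly from 0.5 at the base to 0.
area = lambda h: math.pi * (0.5 * (1.0 - h / 10.0)) ** 2
vol = mc_volume(area, height=10.0, n=100000)
# Exact cone volume is pi * r^2 * h / 3 = pi * 0.25 * 10 / 3, about 2.618.
```

In dendrometry only a few upper-stem diameters can be measured per tree, so the unbiasedness of the estimator matters more than its per-tree variance, which averages out over the sample.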

  11. High-efficiency wavefunction updates for large scale Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed

    Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
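    The rank-1 Sherman-Morrison scheme referred to above replaces an O(N^3) re-inversion with an O(N^2) correction when a single-particle move changes one row of the Slater matrix. A NumPy sketch of the standard identity (the textbook rank-1 case, not the authors' rank-k code):

```python
import numpy as np

def sherman_morrison_row_update(Ainv, new_row, old_row, k):
    """Update A^{-1} in O(n^2) after replacing row k of A (a one-particle move),
    instead of recomputing the inverse from scratch in O(n^3)."""
    v = new_row - old_row                  # rank-1 change: A' = A + e_k v^T
    Ainv_ek = Ainv[:, k]                   # A^{-1} e_k
    denom = 1.0 + v @ Ainv_ek              # also equals det(A') / det(A)
    correction = np.outer(Ainv_ek, v @ Ainv) / denom
    return Ainv - correction

rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.standard_normal((n, n))
Ainv = np.linalg.inv(A)
new_row = rng.standard_normal(n)
Ainv_new = sherman_morrison_row_update(Ainv, new_row, A[k].copy(), k)
A_new = A.copy()
A_new[k] = new_row
# Ainv_new matches np.linalg.inv(A_new) to numerical precision.
```

The denominator is itself the determinant ratio needed for the Metropolis acceptance test, which is why QMC codes get the move probability almost for free from the same update.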

  12. Analytical Applications of Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    Guell, Oscar A.; Holcombe, James A.

    1990-01-01

    Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)

  13. Numerical Demons in Monte Carlo Estimation of Bayesian Model Evidence with Application to Soil Respiration Models

    NASA Astrophysics Data System (ADS)

    Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.

    2016-12-01

    Bayesian multimodel inference is increasingly being used in hydrology. Estimating Bayesian model evidence (BME) is of central importance in many Bayesian multimodel analyses such as Bayesian model averaging and model selection. BME is the overall probability of the model in reproducing the data, accounting for the trade-off between goodness-of-fit and model complexity. Yet estimating BME is challenging, especially for high dimensional problems with complex sampling spaces. Estimating BME using Monte Carlo numerical methods is preferred, as these methods yield higher accuracy than semi-analytical solutions (e.g. Laplace approximations, BIC, KIC, etc.). However, numerical methods are prone to numerical demons arising from underflow and round-off errors. Although a few studies have alluded to this issue, to our knowledge this is the first study that illustrates these numerical demons. We show that finite-precision arithmetic can impose a threshold on likelihood values and the Metropolis acceptance ratio, which results in trimming parameter regions (when the likelihood function is less than the smallest floating point number that a computer can represent) and corrupting the empirical measures of the random states of the MCMC sampler (when using the log-likelihood function). We consider two of the most powerful numerical estimators of BME: the path sampling method of thermodynamic integration (TI) and the importance sampling method of steppingstone sampling (SS). We also consider the two most widely used numerical estimators, the prior sampling arithmetic mean (AM) and the posterior sampling harmonic mean (HM). We investigate the vulnerability of these four estimators to the numerical demons. Interestingly, the most biased estimator, namely the HM, turned out to be the least vulnerable. While it is generally assumed that AM is a bias-free estimator that will approximate the true BME given sufficient computational effort, we show that arithmetic underflow can hamper AM, resulting in severe underestimation of BME. TI turned out to be the most vulnerable, resulting in BME overestimation. Finally, we show how SS can be largely invariant to rounding errors, yielding the most accurate and computationally efficient results. These results are useful for MC simulations to estimate Bayesian model evidence.
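    The underflow described above, likelihood values smaller than the smallest representable double, is conventionally sidestepped by averaging entirely in log space with the log-sum-exp trick. The following sketch is a generic illustration of that remedy, not the authors' code:

```python
import math

def log_mean_exp(log_values):
    """Compute log(mean(exp(v))) without underflow by factoring out the max."""
    m = max(log_values)
    return m + math.log(sum(math.exp(v - m) for v in log_values) / len(log_values))

# Log-likelihoods far below the underflow threshold of double precision
# (exp(-745) is roughly the smallest positive double; exp(-1000) underflows to 0).
logl = [-1000.0, -1001.0, -1002.0]
# A naive mean of exp(logl) would be exactly 0.0; the stable version is not.
log_bme = log_mean_exp(logl)
```

Because each exponent is shifted by the maximum before exponentiation, at least one term in the sum equals 1, so the result is always finite and representable.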

  14. Exploring cluster Monte Carlo updates with Boltzmann machines

    NASA Astrophysics Data System (ADS)

    Wang, Lei

    2017-11-01

    Boltzmann machines are physics-informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give rise to different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.

  15. Synergies from using higher order symplectic decompositions both for ordinary differential equations and quantum Monte Carlo methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matuttis, Hans-Georg; Wang, Xiaoxing

    Decomposition methods of the Suzuki-Trotter type of various orders have been derived in different fields. Applying them both to classical ordinary differential equations (ODEs) and to quantum systems allows one to judge their effectiveness and gives new insights for many-body quantum mechanics, where reference data are scarce. Further, based on data for a 6 × 6 system, we conclude that sampling with sign (the minus-sign problem) is probably detrimental to the accuracy of fermionic simulations with determinant algorithms.

  16. Simulation-based Extraction of Key Material Parameters from Atomic Force Microscopy

    NASA Astrophysics Data System (ADS)

    Alsafi, Huseen; Peninngton, Gray

    Models for the atomic force microscopy (AFM) tip-sample interaction contain numerous material parameters that are often poorly known. This is especially true when dealing with novel material systems or when imaging samples that are exposed to complicated interactions with the local environment. In this work we use Monte Carlo methods to extract sample material parameters from the experimental AFM analysis of a test sample. The parameterized theoretical model that we use is based on the Virtual Environment for Dynamic AFM (VEDA) [1]. The extracted material parameters are then compared with the accepted values for our test sample. Using this procedure, we suggest a method that can be used to successfully determine unknown material properties in novel and complicated material systems. We acknowledge Fisher Endowment Grant support from the Jess and Mildred Fisher College of Science and Mathematics, Towson University.

  17. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while large differences appear in the immediate vicinity of the singularities.
With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward infinity, while the Hill sphere method results in a severely underestimated probability. We provide a discussion of the reasons for these differences, and we finally present the results of the MOID method in the form of probability maps for the Earth and Mars on their current orbits. These maps show a relatively flat probability distribution, except for the occurrence of two ridges found at small inclinations and for coinciding projectile/target perihelion distances. Conclusions: Our results verify the standard formulae in the general case, away from the singularities. In fact, severe shortcomings are limited to the immediate vicinity of those extreme orbits. On the other hand, the new Monte Carlo methods can be used without excessive consumption of computer time, and the MOID method avoids the problems associated with the other methods. Appendices are available in electronic form at http://www.aanda.org

  18. Study of multi-dimensional radiative energy transfer in molecular gases

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, S. N.

    1993-01-01

    The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.

  19. Comparison of the Results of MISSE 6 Atomic Oxygen Erosion Yields of Layered Kapton H Films with Monte Carlo Computational Predictions

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Groh, Kim De; Kneubel, Christian A.

    2014-01-01

    A space experiment flown as part of the Materials International Space Station Experiment 6B (MISSE 6B) was designed to compare the atomic oxygen erosion yield (Ey) of layers of Kapton H polyimide with no spacers between layers with that of layers of Kapton H with spacers between layers. The results were compared to a solid Kapton H (DuPont, Wilmington, DE) sample. Monte Carlo computational modeling was performed to optimize atomic oxygen interaction parameter values to match the results of both the MISSE 6B multilayer experiment and the undercut erosion profile from a crack defect in an aluminized Kapton H sample flown on the Long Duration Exposure Facility (LDEF). The Monte Carlo modeling produced credible agreement with space results of increased Ey for all samples with spacers as well as predicting the space-observed enhancement in erosion near the edges of samples due to scattering from the beveled edges of the sample holders.

  20. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.

  1. Monte Carlo sampling of Wigner functions and surface hopping quantum dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kube, Susanna; Lasser, Caroline; Weber, Marcus

    2009-04-01

    The article addresses the achievable accuracy for a Monte Carlo sampling of Wigner functions in combination with a surface hopping algorithm for non-adiabatic quantum dynamics. The approximation of Wigner functions is realized by an adaptation of the Metropolis algorithm for real-valued functions with disconnected support. The integration, which is necessary for computing values of the Wigner function, uses importance sampling with a Gaussian weight function. The numerical experiments agree with theoretical considerations and show an error of 2-3%.
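    Importance sampling with a Gaussian weight function, as used here for the Wigner-function integrals, can be illustrated generically: draw samples from the Gaussian and divide each integrand evaluation by its density. The integrand below is a hypothetical stand-in, not a Wigner function from the paper:

```python
import math
import random

def gauss_importance(f, n, sigma=1.0, seed=5):
    """Estimate the integral of f over the real line by sampling x ~ N(0, sigma^2)
    and weighting each evaluation by the reciprocal of the Gaussian density."""
    rng = random.Random(seed)
    norm = sigma * math.sqrt(2.0 * math.pi)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, sigma)
        weight = norm * math.exp(0.5 * (x / sigma) ** 2)  # 1 / N(0, sigma) density
        total += f(x) * weight
    return total / n

# Integral of exp(-x^2) over the real line is sqrt(pi), about 1.7725.
est = gauss_importance(lambda x: math.exp(-x * x), n=100000)
```

The estimator's variance is small when the Gaussian weight roughly matches the shape of the integrand, which is the rationale for choosing it here.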

  2. Hybrid Monte Carlo-Diffusion Method For Light Propagation in Tissue With a Low-Scattering Region

    NASA Astrophysics Data System (ADS)

    Hayashi, Toshiyuki; Kashio, Yoshihiko; Okada, Eiji

    2003-06-01

    The heterogeneity of the tissues in a head, especially the low-scattering cerebrospinal fluid (CSF) layer surrounding the brain, has previously been shown to strongly affect light propagation in the brain. The radiosity-diffusion method, in which the light propagation in the CSF layer is assumed to obey the radiosity theory, has been employed to predict the light propagation in head models. Although the CSF layer is assumed to be a nonscattering region in the radiosity-diffusion method, fine arachnoid trabeculae cause faint scattering in the CSF layer in real heads. A novel approach, the hybrid Monte Carlo-diffusion method, is proposed to calculate head models that include a low-scattering region in which the light propagation obeys neither the diffusion approximation nor the radiosity theory. The light propagation in the high-scattering region is calculated by means of the diffusion approximation solved by the finite-element method, and that in the low-scattering region is predicted by the Monte Carlo method. The intensity and mean time of flight of the detected light for the head model with a low-scattering CSF layer calculated by the hybrid method agreed well with those by the Monte Carlo method, whereas the results calculated by means of the diffusion approximation included considerable error caused by the effect of the CSF layer. In the hybrid method, the time-consuming Monte Carlo calculation is employed only for the thin CSF layer, and hence, the computation time of the hybrid method is dramatically shorter than that of the Monte Carlo method.

  3. Hybrid Monte Carlo-diffusion method for light propagation in tissue with a low-scattering region.

    PubMed

    Hayashi, Toshiyuki; Kashio, Yoshihiko; Okada, Eiji

    2003-06-01

    The heterogeneity of the tissues in a head, especially the low-scattering cerebrospinal fluid (CSF) layer surrounding the brain, has previously been shown to strongly affect light propagation in the brain. The radiosity-diffusion method, in which the light propagation in the CSF layer is assumed to obey the radiosity theory, has been employed to predict the light propagation in head models. Although the CSF layer is assumed to be a nonscattering region in the radiosity-diffusion method, fine arachnoid trabeculae cause faint scattering in the CSF layer in real heads. A novel approach, the hybrid Monte Carlo-diffusion method, is proposed to calculate head models that include a low-scattering region in which the light propagation obeys neither the diffusion approximation nor the radiosity theory. The light propagation in the high-scattering region is calculated by means of the diffusion approximation solved by the finite-element method, and that in the low-scattering region is predicted by the Monte Carlo method. The intensity and mean time of flight of the detected light for the head model with a low-scattering CSF layer calculated by the hybrid method agreed well with those by the Monte Carlo method, whereas the results calculated by means of the diffusion approximation included considerable error caused by the effect of the CSF layer. In the hybrid method, the time-consuming Monte Carlo calculation is employed only for the thin CSF layer, and hence, the computation time of the hybrid method is dramatically shorter than that of the Monte Carlo method.

  4. A modified weighted function method for parameter estimation of Pearson type three distribution

    NASA Astrophysics Data System (ADS)

    Liang, Zhongmin; Hu, Yiming; Li, Binquan; Yu, Zhongbo

    2014-04-01

    In this paper, an unconventional method called the Modified Weighted Function (MWF) is presented for moment estimation of a probability distribution function. The aim of MWF is to estimate the coefficient of variation (CV) and coefficient of skewness (CS) by reducing the original higher-moment computations to first-order moment calculations. The estimators for CV and CS of the Pearson type three distribution function (PE3) were derived by weighting the moments of the distribution with two weight functions, which were constructed by combining two negative exponential-type functions. The selection of these weight functions was based on two considerations: (1) to relate the weight functions to sample size in order to reflect the relationship between the quantity of sample information and the role of the weight function and (2) to allocate more weight to data close to medium-tail positions in a sample series ranked in ascending order. A Monte Carlo experiment was conducted to simulate a large number of samples upon which the statistical properties of MWF were investigated. For the PE3 parent distribution, results of MWF were compared to those of the original Weighted Function (WF) and Linear Moments (L-M). The results indicate that MWF was superior to WF and slightly better than L-M in terms of statistical unbiasedness and effectiveness. In addition, the robustness of MWF, WF, and L-M was compared in a Monte Carlo experiment in which samples were drawn from the Log-Pearson type three distribution (LPE3), the three-parameter Log-Normal distribution (LN3), and the Generalized Extreme Value distribution (GEV), respectively, but all treated as samples from the PE3 distribution. The results show that, in terms of statistical unbiasedness, no single method possesses an overwhelming advantage among MWF, WF, and L-M, while in terms of statistical effectiveness, MWF is superior to WF and L-M.

  5. The Multiplicity of Massive Stars: A High Angular Resolution Survey With The HST Fine Guidance Sensor

    DTIC Science & Technology

    2015-01-01

    al. (2014), and of the Large Magellanic Cloud (LMC) Tarantula Nebula region by Sana et al. (2013b), demonstrate that the binary frequency may be ≳70% ... Monte-Carlo method to fit spectroscopic results for a large sample of O-type stars in the Tarantula Nebula region of the LMC, and they find a best fit

  6. Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.

    PubMed

    Bishara, Anthony J; Li, Jiexiang; Nash, Thomas

    2018-02-01

    When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the (Vale & Maurelli, 1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
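    The default Fisher z' interval that these methods adjust is the standard bivariate-normal construction, which can be sketched as follows (this is the baseline being corrected, not the paper's new approximate distribution method):

```python
import math

def pearson_ci(r, n, z_crit=1.96):
    """Default 95% CI for a Pearson correlation via the Fisher z' transform,
    assuming bivariate normality (the case the paper's methods relax)."""
    z = math.atanh(r)                    # Fisher z' = 0.5 * ln((1 + r) / (1 - r))
    se = 1.0 / math.sqrt(n - 3)          # asymptotic standard error of z'
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the correlation scale

lo, hi = pearson_ci(r=0.5, n=50)
# For r = 0.5, n = 50 this gives roughly (0.26, 0.68).
```

The paper's adjustments modify the standard error term using the sample skewness and kurtosis, which is why the method can be applied from summary statistics alone.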

  7. Determination of effective atomic number of biomedical samples using Gamma ray back-scattering

    NASA Astrophysics Data System (ADS)

    Singh, Inderjeet; Singh, Bhajan; Sandhu, B. S.; Sabharwal, Arvind D.

    2018-05-01

    The effective atomic number of biomedical samples has been determined using a non-destructive multiple back-scattering technique. A radiation characterization method is also used to compare tissue-equivalent materials with human tissue. The response function of a 3″ × 3″ NaI(Tl) scintillation detector is applied to the recorded pulse-height distribution to boost the counts under the photo-peak and reduce the uncertainty in the experimental result. Monte Carlo calculations for multiple back-scattered events support the reported experimental work.

  8. Predicting low-temperature free energy landscapes with flat-histogram Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Mahynski, Nathan A.; Blanco, Marco A.; Errington, Jeffrey R.; Shen, Vincent K.

    2017-02-01

    We present a method for predicting the free energy landscape of fluids at low temperatures from flat-histogram grand canonical Monte Carlo simulations performed at higher ones. We illustrate our approach for both pure and multicomponent systems using two different sampling methods as a demonstration. This allows us to predict the thermodynamic behavior of systems which undergo both first order and continuous phase transitions upon cooling using simulations performed only at higher temperatures. After surveying a variety of different systems, we identify a range of temperature differences over which the extrapolation of high temperature simulations tends to quantitatively predict the thermodynamic properties of fluids at lower ones. Beyond this range, extrapolation still provides a reasonably well-informed estimate of the free energy landscape; this prediction then requires less computational effort to refine with an additional simulation at the desired temperature than reconstruction of the surface without any initial estimate. In either case, this method significantly increases the computational efficiency of these flat-histogram methods when investigating thermodynamic properties of fluids over a wide range of temperatures. For example, we demonstrate how a binary fluid phase diagram may be quantitatively predicted for many temperatures using only information obtained from a single supercritical state.

  9. Monte Carlo replica-exchange based ensemble docking of protein conformations.

    PubMed

    Zhang, Zhe; Ehmann, Uwe; Zacharias, Martin

    2017-05-01

    A replica-exchange Monte Carlo (REMC) ensemble docking approach has been developed that allows efficient exploration of protein-protein docking geometries. In addition to Monte Carlo steps in the translation and orientation of binding partners, possible conformational changes upon binding are included based on Monte Carlo selection of protein conformations stored as ordered, pregenerated conformational ensembles. The conformational ensembles of each binding partner protein were generated by three different approaches starting from the unbound partner protein structure, with a range spanning a root mean square deviation of 1-2.5 Å with respect to the unbound structure. Because MC sampling is performed to select appropriate partner conformations on the fly, the approach is not limited by the number of conformations in the ensemble, in contrast to ensemble cross-docking of each conformer pair. Although only a fraction of the generated conformers was in closer agreement with the bound structure, the REMC ensemble docking approach achieved improved docking results compared to REMC docking with only the unbound partner structures or using docking energy minimization methods. The approach has significant potential for further improvement in combination with more realistic structural ensembles and better docking scoring functions. Proteins 2017; 85:924-937. © 2017 Wiley Periodicals, Inc.

  10. Improving the efficiency of configurational-bias Monte Carlo: A density-guided method for generating bending angle trials for linear and branched molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin, E-mail: binchen@lsu.edu

    2014-08-21

    A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of the distribution is required. This numerical table can be generated a priori from the distribution function. The method has been tested on united-atom models of alkanes, including propane, 2-methylpropane, and 2,2-dimethylpropane, which are good representatives of both linear and branched molecules. These test cases show that reasonable approximations can be made, especially for the highly branched molecules, to drastically reduce the dimensionality and correspondingly the amount of tabulated data that needs to be stored. Despite these approximations, the dependencies between the various geometrical variables are still well accounted for, as evident from the nearly perfect acceptance rates achieved. For all cases, the bending angles were shown to be sampled correctly by this method, with an acceptance rate of at least 96% for 2,2-dimethylpropane and more than 99% for propane. Since only one trial is required for each bending angle (instead of the thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. Profiling results of our Monte Carlo simulation code show that trial generation, formerly the most time-consuming step, is no longer the dominant component of the simulation time.
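    The idea of replacing uniform trial generation with a pre-tabulated distribution can be sketched for a single harmonic bending angle as follows; the force constant, bin count, and helper names (`build_angle_table`, `sample_angle`) are assumptions for illustration, not the paper's actual tables.

```python
import bisect
import math
import random

def build_angle_table(k_bend, theta0, beta, n_bins=1000):
    """Tabulate the cumulative distribution of a harmonic bending angle,
    p(theta) ~ sin(theta) * exp(-0.5 * beta * k_bend * (theta - theta0)**2),
    so trials can be drawn by inverting the table instead of sampling
    uniformly and rejecting."""
    thetas = [(i + 0.5) * math.pi / n_bins for i in range(n_bins)]
    weights = [math.sin(t) * math.exp(-0.5 * beta * k_bend * (t - theta0) ** 2)
               for t in thetas]
    cdf, acc = [], 0.0
    for w in weights:
        acc += w
        cdf.append(acc)
    return thetas, [c / acc for c in cdf]  # normalised cumulative table

def sample_angle(thetas, cdf, rng):
    """Draw one bending angle by inverse-CDF lookup in the table."""
    return thetas[bisect.bisect_left(cdf, rng.random())]
```

Because each draw already follows the target density, essentially every trial is accepted; the table is built once a priori, so the per-step cost is a single binary search.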

  11. Simultaneous determination of fluoroquinolones in environmental water by liquid chromatography-tandem mass spectrometry with direct injection: A green approach.

    PubMed

    Denadai, Marina; Cass, Quezia Bezerra

    2015-10-30

    This work describes an on-line multi-residue method for the simultaneous quantification of ciprofloxacin, enrofloxacin, gemifloxacin, moxifloxacin, norfloxacin and ofloxacin in superficial and wastewater samples. An octyl restricted-access media bovine serum albumin column (RAM-BSA C8) was used for sample clean-up, enrichment and analysis, with quantitation carried out by tandem mass spectrometry. For water sample volumes of only 500 μL, the method provided good selectivity, extraction efficiency, accuracy, and precision, with quantification limits on the order of 20-150 ng L(-1). Of the six fluoroquinolones, only ciprofloxacin (195 ng L(-1)) and norfloxacin (270 ng L(-1)) were quantified in an influent sample of the wastewater treatment plant (WWTP) of São Carlos (SP, Brazil). None were found in the superficial water samples analyzed. The capability of injecting native samples in an automated mode provides high productivity and represents a greener approach to environmental sample analysis. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Semiclassical Monte Carlo: A first principles approach to non-adiabatic molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Alexander J.; Center for Nonlinear Studies; Gorshkov, Vyacheslav N.

    2014-11-14

    Modeling the dynamics of photophysical and (photo)chemical reactions in extended molecular systems is a new frontier for quantum chemistry. Many dynamical phenomena, such as intersystem crossing, non-radiative relaxation, and charge and energy transfer, require a non-adiabatic description which incorporates transitions between electronic states. Additionally, these dynamics are often highly sensitive to quantum coherences and interference effects. Several methods exist to simulate non-adiabatic dynamics; however, they are typically either too expensive to be applied to large molecular systems (tens to hundreds of atoms), or they are based on ad hoc schemes which may include severe approximations due to inconsistencies between classical and quantum mechanics. We present, in detail, an algorithm based on Monte Carlo sampling of the semiclassical time-dependent wavefunction that involves running simple surface hopping dynamics, followed by a post-processing step which adds little cost. The method requires only a few quantities from quantum chemistry calculations, can be systematically improved, and provides excellent agreement with exact quantum mechanical results. Here we show excellent agreement with exact solutions for scattering results of standard test problems. Additionally, we find that convergence of the wavefunction is controlled by complex-valued phase factors, the size of the non-adiabatic coupling region, and the choice of sampling function. These results help in determining the range of applicability of the method, and provide a starting point for further improvement.

  13. Monte Carlo simulations of backscattering process in dislocation-containing SrTiO3 single crystal

    NASA Astrophysics Data System (ADS)

    Jozwik, P.; Sathish, N.; Nowicki, L.; Jagielski, J.; Turos, A.; Kovarik, L.; Arey, B.

    2014-05-01

    Studies of defect formation in crystals are of obvious importance in electronics, nuclear engineering and other disciplines where materials are exposed to different forms of irradiation. Rutherford Backscattering/Channeling (RBS/C) and Monte Carlo (MC) simulations are the most convenient tools for this purpose, as they allow one to determine several features of lattice defects: their type, concentration and damage accumulation kinetics. On the other hand, various irradiation conditions can be efficiently modeled by the ion irradiation method without leaving the sample radioactive. The combination of ion irradiation with channeling experiments and MC simulations thus appears to be a most versatile method for studies of radiation damage in materials. The paper presents the results of such a study performed on SrTiO3 (STO) single crystals irradiated with 320 keV Ar ions. The samples were also analyzed using HRTEM as a complementary method, which enables the measurement of geometrical parameters of crystal lattice deformation in the vicinity of dislocations. Once the parameters and their variations within a distance of several lattice constants from the dislocation core are known, they may be used in MC simulations for the quantitative determination of dislocation depth distribution profiles. The final outcome of the deconvolution procedure is the cross-section values calculated for the two types of defects observed (RDA and dislocations).

  14. Accurate condensed history Monte Carlo simulation of electron transport. II. Application to ion chamber response simulations.

    PubMed

    Kawrakow, I

    2000-03-01

    In this report the condensed history Monte Carlo simulation of electron transport and its application to the calculation of ion chamber response is discussed. It is shown that the strong step-size dependencies and lack of convergence to the correct answer previously observed are the combined effect of the following artifacts caused by the EGS4/PRESTA implementation of the condensed history technique: dose underprediction due to PRESTA's pathlength correction and lateral correlation algorithm; dose overprediction due to the boundary crossing algorithm; dose overprediction due to the breakdown of the fictitious cross section method for sampling distances between discrete interactions; and the inaccurate evaluation of energy-dependent quantities. These artifacts are now understood quantitatively and analytical expressions for their effect are given.

  15. A novel quality by design approach for developing an HPLC method to analyze herbal extracts: A case study of sugar content analysis.

    PubMed

    Shao, Jingyuan; Cao, Wen; Qu, Haibin; Pan, Jianyang; Gong, Xingchu

    2018-01-01

    The aim of this study was to present a novel analytical quality by design (AQbD) approach for developing an HPLC method to analyze herbal extracts. In this approach, critical method attributes (CMAs) and critical method parameters (CMPs) of the analytical method were determined using the same data collected from screening experiments. The HPLC-ELSD method for separation and quantification of sugars in Codonopsis Radix extract (CRE) samples and Astragali Radix extract (ARE) samples was developed as an example with this novel AQbD approach. Potential CMAs and potential CMPs were identified from the analytical target profile. After the screening experiments, the retention time of the D-glucose peak of CRE samples, the signal-to-noise ratio of the D-glucose peak of CRE samples, and the retention time of the sucrose peak in ARE samples were considered CMAs. The initial and final composition of the mobile phase, flow rate, and column temperature were found to be CMPs using a standard partial regression coefficient method. The probability-based design space was calculated using a Monte Carlo simulation method and verified by experiments. The optimized method was validated to be accurate and precise, and was then applied in the analysis of CRE and ARE samples. The present AQbD approach is efficient and suitable for analysis objects with complex compositions.

  16. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Georgios, E-mail: garab@math.uoc.gr; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003; Katsoulakis, Markos A., E-mail: markos@math.umass.edu

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by minimizing the corresponding variance functional. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz-Kalos-Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in supplementary MATLAB source code.
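    The variance-reduction principle behind such couplings, replacing independent finite-difference samples with strongly correlated ones, can be illustrated on a scalar toy model using common random numbers; the exponential model and the function `fd_sensitivity` are illustrative stand-ins for the coupled KMC trajectories.

```python
import random

def fd_sensitivity(theta, h, n, coupled, seed=0):
    """Finite-difference estimate of d/dtheta E[X] for X ~ Exponential(rate=theta),
    whose true value is -1/theta**2.  With coupled=True the perturbed and
    unperturbed runs consume the same random-number stream (common random
    numbers), mimicking the variance reduction of coupled trajectories;
    with coupled=False they use independent streams."""
    rng_a = random.Random(seed)
    rng_b = random.Random(seed if coupled else seed + 1)
    diffs = [(rng_a.expovariate(theta + h) - rng_b.expovariate(theta)) / h
             for _ in range(n)]
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean, var
```

With independent streams the estimator variance blows up as 1/h², whereas the coupled version stays O(1); this is the same effect, in miniature, that makes the goal-oriented couplings orders of magnitude faster than independent sampling.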

  17. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
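    Ingredient (3), shifting most of the sampling burden onto a cheap surrogate and correcting with a few expensive samples, can be sketched with a two-level estimator on a toy integrand; the surrogate `g` plays the role of the reduced basis approximation and `f` that of the high-fidelity discretization, purely for illustration.

```python
import math
import random

def two_level_estimate(n_coarse, n_fine, seed=0):
    """Two-level Monte Carlo estimate of I = E[f(U)], U ~ Uniform(0,1),
    with f(x) = exp(x).  A cheap surrogate g(x) = 1 + x + x^2/2 absorbs
    most of the variance via the telescoping identity
    I = E[g] + E[f - g]: E[g] uses many cheap samples, while the small
    correction E[f - g] needs only a few expensive ones."""
    rng = random.Random(seed)
    g = lambda x: 1.0 + x + 0.5 * x * x
    f = math.exp
    coarse = sum(g(rng.random()) for _ in range(n_coarse)) / n_coarse
    xs = [rng.random() for _ in range(n_fine)]
    corr = sum(f(x) - g(x) for x in xs) / n_fine
    return coarse + corr
```

Because f - g has far smaller variance than f itself, the correction converges with very few "fine" samples; the multilevel method of the paper applies the same telescoping idea across a hierarchy of reduced basis fidelities.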

  18. Monte Carlo method for calculating the radiation skyshine produced by electron accelerators

    NASA Astrophysics Data System (ADS)

    Kong, Chaocheng; Li, Quanfeng; Chen, Huaibi; Du, Taibin; Cheng, Cheng; Tang, Chuanxiang; Zhu, Li; Zhang, Hui; Pei, Zhigang; Ming, Shenjin

    2005-06-01

    Using the MCNP4C Monte Carlo code, the X-ray skyshine produced by 9 MeV, 15 MeV and 21 MeV electron linear accelerators was calculated with a new two-step method combined with the split-and-roulette variance reduction technique. Results of the Monte Carlo simulation, the empirical formulas used for skyshine calculation and the dose measurements were analyzed and compared. In conclusion, the skyshine dose measurements agreed reasonably with the results computed by the Monte Carlo method, but deviated from the computational results given by the empirical formulas. The effect on skyshine dose caused by different structures of the accelerator head is also discussed in this paper.

  19. Response Matrix Monte Carlo for electron transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballinger, C.T.; Nielsen, D.E. Jr.; Rathkopf, J.A.

    1990-11-01

    A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need for a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions, but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. The combined effect of many collisions is modeled, as in condensed history, except that it is precalculated via an analog Monte Carlo simulation. This avoids the scattering kernel assumptions associated with condensed history methods. Results show good agreement between the RMMC method and analog Monte Carlo. 11 refs., 7 figs., 1 tab.

  20. Building proteins from C alpha coordinates using the dihedral probability grid Monte Carlo method.

    PubMed Central

    Mathiowetz, A. M.; Goddard, W. A.

    1995-01-01

    Dihedral probability grid Monte Carlo (DPG-MC) is a general-purpose method of conformational sampling that can be applied to many problems in peptide and protein modeling. Here we present the DPG-MC method and apply it to predicting complete protein structures from C alpha coordinates. This is useful in such endeavors as homology modeling, protein structure prediction from lattice simulations, or fitting protein structures to X-ray crystallographic data. It also serves as an example of how DPG-MC can be applied to systems with geometric constraints. The conformational propensities for individual residues are used to guide conformational searches as the protein is built from the amino-terminus to the carboxyl-terminus. Results for a number of proteins show that both the backbone and side chains can be accurately modeled using DPG-MC. Backbone atoms are generally predicted with RMS errors of about 0.5 Å (compared to X-ray crystal structure coordinates) and all atoms are predicted to an RMS error of 1.7 Å or better. PMID:7549885

  1. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pfeiffer, M., E-mail: mpfeiffer@irs.uni-stuttgart.de; Nizenkov, P., E-mail: nizenkov@irs.uni-stuttgart.de; Mirza, A., E-mail: mirza@irs.uni-stuttgart.de

    2016-02-15

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Subsequently, two numerical methods used for sampling energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and comparison to experimental measurements of a hypersonic carbon-dioxide flow around a flat-faced cylinder.

  2. Improved Monte Carlo Renormalization Group Method

    DOE R&D Accomplishments Database

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  3. Quantifying and Mitigating the Effect of Preferential Sampling on Phylodynamic Inference

    PubMed Central

    Karcher, Michael D.; Palacios, Julia A.; Bedford, Trevor; Suchard, Marc A.; Minin, Vladimir N.

    2016-01-01

    Phylodynamics seeks to estimate effective population size fluctuations from molecular sequences of individuals sampled from a population of interest. One way to accomplish this task formulates an observed sequence data likelihood exploiting a coalescent model for the sampled individuals’ genealogy and then integrating over all possible genealogies via Monte Carlo or, less efficiently, by conditioning on one genealogy estimated from the sequence data. However, when analyzing sequences sampled serially through time, current methods implicitly assume either that sampling times are fixed deterministically by the data collection protocol or that their distribution does not depend on the size of the population. Through simulation, we first show that, when sampling times do probabilistically depend on effective population size, estimation methods may be systematically biased. To correct for this deficiency, we propose a new model that explicitly accounts for preferential sampling by modeling the sampling times as an inhomogeneous Poisson process dependent on effective population size. We demonstrate that in the presence of preferential sampling our new model not only reduces bias, but also improves estimation precision. Finally, we compare the performance of the currently used phylodynamic methods with our proposed model through clinically-relevant, seasonal human influenza examples. PMID:26938243
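    The proposed sampling-time model, an inhomogeneous Poisson process whose intensity tracks effective population size, can be simulated by Lewis-Shedler thinning; the trajectory `n_eff` and the function name `preferential_sampling_times` are illustrative, not the authors' code.

```python
import math
import random

def preferential_sampling_times(n_eff, t_max, c=1.0, seed=0):
    """Draw sequence-sampling times on [0, t_max] from an inhomogeneous
    Poisson process with intensity lambda(t) = c * n_eff(t), via
    Lewis-Shedler thinning of a homogeneous process at rate lam_max."""
    rng = random.Random(seed)
    # upper bound on the intensity, estimated on a fine grid
    lam_max = c * max(n_eff(t_max * i / 1000.0) for i in range(1001))
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)              # candidate event time
        if t > t_max:
            return times
        if rng.random() < c * n_eff(t) / lam_max:  # keep with prob lambda(t)/lam_max
            times.append(t)
```

Under this model, more sequences are sampled when the population is large; fitting it jointly with the coalescent is what lets the authors' method correct the bias that arises when sampling times are treated as fixed.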

  4. MO-FG-BRA-01: 4D Monte Carlo Simulations for Verification of Dose Delivered to a Moving Anatomy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gholampourkashi, S; Cygler, J E.; The Ottawa Hospital Cancer Centre, Ottawa, ON

    Purpose: To validate 4D Monte Carlo (MC) simulations of dose delivery by an Elekta Agility linear accelerator to a moving phantom. Methods: Monte Carlo simulations were performed using the 4DdefDOSXYZnrc/EGSnrc user code, which samples a new geometry for each incident particle and calculates the dose in a continuously moving anatomy. A Quasar respiratory motion phantom with a lung insert containing a 3 cm diameter tumor was used for dose measurements on an Elekta Agility linac with the phantom in stationary and moving states. Dose to the center of the tumor was measured using calibrated EBT3 film and the RADPOS 4D dosimetry system. A VMAT plan covering the tumor was created on the static CT scan of the phantom using Monaco V.5.10.02. A validated BEAMnrc model of our Elekta Agility linac was used for Monte Carlo simulations on stationary and moving anatomies. To compare the planned and delivered doses, linac log files recorded during measurements were used for the simulations. For 4D simulations, deformation vectors that modeled the rigid translation of the lung insert were generated as input to the 4DdefDOSXYZnrc code, as well as the phantom motion trace recorded with RADPOS during the measurements. Results: Monte Carlo simulations and film measurements were found to agree within 2 mm/2% for 97.7% of points in the film in the static phantom and 95.5% in the moving phantom. Dose values based on film and RADPOS measurements are within 2% of each other and within 2σ of experimental uncertainties with respect to simulations. Conclusion: Our 4D Monte Carlo simulation using the defDOSXYZnrc code accurately calculates dose delivered to a moving anatomy. Future work will focus on further investigation of VMAT delivery on a moving phantom to improve the agreement between simulation and measurements, as well as establishing the accuracy of our method in a deforming anatomy.
    This work was supported by the Ontario Consortium of Adaptive Interventions in Radiation Oncology (OCAIRO), funded by the Ontario Research Fund Research Excellence program.

  5. An Energy-Dispersive X-Ray Fluorescence Spectrometry and Monte Carlo simulation study of Iron-Age Nuragic small bronzes ("Navicelle") from Sardinia, Italy

    NASA Astrophysics Data System (ADS)

    Schiavon, Nick; de Palmas, Anna; Bulla, Claudio; Piga, Giampaolo; Brunetti, Antonio

    2016-09-01

    A spectrometric protocol combining Energy Dispersive X-Ray Fluorescence Spectrometry with Monte Carlo simulations of experimental spectra using the XRMC code package has been applied for the first time to characterize the elemental composition of a series of famous Iron Age small-scale archaeological bronze replicas of ships (known as the "Navicelle") from the Nuragic civilization in Sardinia, Italy. The proposed protocol is a useful, nondestructive and fast analytical tool for Cultural Heritage samples. In the Monte Carlo simulations, each sample was modeled as a multilayered object composed of two or three layers depending on the sample: when all are present, the three layers are the original bronze substrate, the surface corrosion patina and an outermost protective layer (Paraloid) applied during past restorations. The Monte Carlo simulations were able to account for the presence of the patina/corrosion layer as well as the Paraloid protective layer. They also accounted for the roughness effect commonly found at the surface of corroded metal archaeological artifacts. In this respect, the Monte Carlo simulation approach adopted here was, to the best of our knowledge, unique, and enabled us to determine the bronze alloy composition together with the thickness of the surface layers without the need to first remove the surface patinas, a process potentially threatening the preservation of precious archaeological/artistic artifacts for future generations.

  6. A microstructural lattice model for strain oriented problems: A combined Monte Carlo finite element technique

    NASA Technical Reports Server (NTRS)

    Gayda, J.; Srolovitz, D. J.

    1987-01-01

    A specialized microstructural lattice model, termed MCFET for combined Monte Carlo Finite Element Technique, was developed which simulates microstructural evolution in material systems where modulated phases occur and the directionality of the modulation is influenced by internal and external stresses. In this approach, the microstructure is discretized onto a fine lattice. Each element in the lattice is labelled in accordance with its microstructural identity. Diffusion of material at elevated temperatures is simulated by allowing exchanges of neighboring elements if the exchange lowers the total energy of the system. A Monte Carlo approach is used to select the exchange site, while the change in energy associated with stress fields is computed using a finite element technique. The MCFET analysis was validated by comparing this approach with a closed-form, analytical method for stress-assisted shape changes of a single particle in an infinite matrix. Sample MCFET analyses for multiparticle problems were also run, and in general the resulting microstructural changes associated with the application of an external stress are similar to those observed in Ni-Al-Cr alloys at elevated temperature.
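    The Monte Carlo exchange step of MCFET, swapping neighboring lattice elements and accepting moves that lower the energy (with Metropolis acceptance for uphill moves), can be sketched on a one-dimensional binary lattice where a simple bond-counting energy stands in for the finite-element stress energy; all names and parameters here are illustrative.

```python
import math
import random

def lattice_exchange_mc(n, n_steps, beta=2.0, seed=0):
    """Toy version of the Monte Carlo part of MCFET: a periodic 1-D binary
    lattice in which neighbouring elements are swapped with Metropolis
    acceptance on a nearest-neighbour mixing energy (each unlike-neighbour
    bond costs +1), so the system coarsens toward like-with-like domains.
    The finite-element stress energy of the real method is replaced by this
    simple bond count."""
    rng = random.Random(seed)
    lat = [i % 2 for i in range(n)]  # alternating start: maximally mixed
    bond = lambda a, b: 1 if lat[a] != lat[b] else 0
    local_e = lambda i: bond(i, (i - 1) % n) + bond(i, (i + 1) % n)
    for _ in range(n_steps):
        i = rng.randrange(n)
        j = (i + 1) % n
        if lat[i] == lat[j]:
            continue                         # swapping identical elements does nothing
        e_old = local_e(i) + local_e(j)
        lat[i], lat[j] = lat[j], lat[i]      # trial exchange
        de = local_e(i) + local_e(j) - e_old
        if de > 0 and rng.random() >= math.exp(-beta * de):
            lat[i], lat[j] = lat[j], lat[i]  # reject: swap back
    return lat
```

Exchanges conserve the amount of each phase, so this is a conserved-order-parameter (Kawasaki-type) dynamics, which is what makes it a plausible cartoon of diffusion-driven microstructural evolution.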

  7. Adjoint acceleration of Monte Carlo simulations using TORT/MCNP coupling approach: a case study on the shielding improvement for the cyclotron room of the Buddhist Tzu Chi General Hospital.

    PubMed

    Sheu, R J; Sheu, R D; Jiang, S H; Kao, C H

    2005-01-01

    Full-scale Monte Carlo simulations of the cyclotron room of the Buddhist Tzu Chi General Hospital were carried out to improve the original, inadequate maze design. Variance reduction techniques are indispensable in this study to facilitate the simulations for testing a variety of shielding modification configurations. The TORT/MCNP manual coupling approach, based on the Consistent Adjoint Driven Importance Sampling (CADIS) methodology, has been used throughout this study. CADIS utilises source and transport biasing in a consistent manner. With this method, the computational efficiency was increased significantly, by more than two orders of magnitude, and the statistical convergence was also improved compared to the unbiased Monte Carlo run. This paper describes the shielding problem encountered, the procedure for coupling the TORT and MCNP codes to accelerate the calculations, and the calculation results for the original and improved shielding designs. In order to verify the calculation results and seek additional accelerations, sensitivity studies on the space-dependent and energy-dependent parameters were also conducted.

  8. A quasi-Monte-Carlo comparison of parametric and semiparametric regression methods for heavy-tailed and non-normal data: an application to healthcare costs.

    PubMed

    Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel

    2016-10-01

    We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.

  9. Accurate Exchange-Correlation Energies for the Warm Dense Electron Gas.

    PubMed

    Malone, Fionn D; Blunt, N S; Brown, Ethan W; Lee, D K K; Spencer, J S; Foulkes, W M C; Shepherd, James J

    2016-09-09

    The density matrix quantum Monte Carlo (DMQMC) method is used to sample exact-on-average N-body density matrices for uniform electron gas systems of up to 10^{124} matrix elements via a stochastic solution of the Bloch equation. The results of these calculations resolve a current debate over the accuracy of the data used to parametrize finite-temperature density functionals. Exchange-correlation energies calculated using the real-space restricted path-integral formalism and the k-space configuration path-integral formalism disagree by up to ∼10% at certain reduced temperatures T/T_{F}≤0.5 and densities r_{s}≤1. Our calculations confirm the accuracy of the configuration path-integral Monte Carlo results available at high density and bridge the gap to lower densities, providing trustworthy data in the regime typical of planetary interiors and solids subject to laser irradiation. We demonstrate that the DMQMC method can calculate free energies directly and present exact free energies for T/T_{F}≥1 and r_{s}≤2.

  10. Variational Approach to Enhanced Sampling and Free Energy Calculations

    NASA Astrophysics Data System (ADS)

    Valsson, Omar; Parrinello, Michele

    2014-08-01

    The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables; however, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented, which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.

  11. A new method for evaluating radon and thoron alpha-activities per unit volume inside and outside various natural material samples by calculating SSNTD detection efficiencies for the emitted alpha-particles and measuring the resulting track densities.

    PubMed

    Misdaq, M A; Aitnouh, F; Khajmi, H; Ezzahery, H; Berrazzouk, S

    2001-08-01

    A Monte Carlo computer code for determining detection efficiencies of the CR-39 and LR-115 II solid-state nuclear track detectors (SSNTD) for alpha-particles emitted by the uranium and thorium series inside different natural material samples was developed. The influence of the alpha-particle initial energy on the SSNTD detection efficiencies was investigated. Radon (222Rn) and thoron (220Rn) alpha-activities per unit volume were evaluated inside and outside the natural material samples by exploiting data obtained for the detection efficiencies of the SSNTD utilized for the emitted alpha-particles, and measuring the resulting track densities. Results obtained were compared to those obtained by other methods. Radon emanation coefficients have been determined for some of the considered material samples.

  12. NUEN-618 Class Project: Actually Implicit Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, R. M.; Brunner, T. A.

    2017-12-14

    This research describes a new method for the solution of the thermal radiative transfer (TRT) equations that is implicit in time which will be called Actually Implicit Monte Carlo (AIMC). This section aims to introduce the TRT equations, as well as the current workhorse method which is known as Implicit Monte Carlo (IMC). As the name of the method proposed here indicates, IMC is a misnomer in that it is only semi-implicit, which will be shown in this section as well.

  13. Nondestructive Method For Measuring The Scattering Coefficient Of Bulk Material

    NASA Astrophysics Data System (ADS)

    Groenhuis, R. A. J.; ten Bosch, J. J.

    1981-05-01

During demineralization and remineralization of dental enamel its structure changes, resulting in a change of the absorption and scattering coefficients of the enamel. By measuring these coefficients during demineralization and remineralization, these processes can be monitored in a non-destructive way. For this purpose an experimental arrangement was made: a fibre illuminates a spot on the sample with monochromatic light with a wavelength between 400 nm and 700 nm; a photomultiplier measures the luminance of the light back-scattered by the sample as a function of the distance from the measuring spot to the spot of illumination. In a Monte Carlo model this luminance is simulated using the same geometry, given the scattering and absorption coefficients in a sample. Then the scattering and absorption coefficients in the sample are determined by selecting the theoretical curve fitting the experimental one. Scattering coefficients below 10 mm-1 and absorption coefficients obtained with this method on calibration samples correspond well with those obtained with another method. Scattering coefficients above 10 mm-1 (paper samples) were measured too low, perhaps because of the anisotropic structure of paper sheets. The method is very suitable to measure the scattering and absorption coefficients of bulk materials.

  14. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    PubMed Central

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
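The scalar-parameter ESS that the rule of thumb refers to is conventionally estimated from the chain's autocorrelation. A minimal sketch of that standard computation (not the tree-topology ESS this paper develops; the helper name and the initial-positive-sequence truncation are illustrative choices):

```python
import numpy as np

def effective_sample_size(x, max_lag=None):
    """Standard scalar ESS: N / (1 + 2 * sum of autocorrelations),
    truncating the sum at the first non-positive autocorrelation."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if max_lag is None:
        max_lag = n // 2
    x = x - x.mean()
    var = np.dot(x, x) / n
    denom = 1.0
    for k in range(1, max_lag):
        rho = np.dot(x[:-k], x[k:]) / (n * var)
        if rho <= 0.0:  # truncate: later lags are mostly noise
            break
        denom += 2.0 * rho
    return n / denom

rng = np.random.default_rng(0)
iid = rng.normal(size=5000)      # independent draws: ESS close to N
chain = np.empty(5000)           # AR(1) chain: heavy autocorrelation
chain[0] = 0.0
for t in range(1, 5000):
    chain[t] = 0.95 * chain[t - 1] + rng.normal()
```

On the autocorrelated chain the ESS comes out far below the raw sample count, which is exactly why the 200-sample rule of thumb is stated in ESS rather than in MCMC iterations.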

  15. Numerical integration of detector response functions via Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.

Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.

  16. NOTE: Monte Carlo evaluation of kerma in an HDR brachytherapy bunker

    NASA Astrophysics Data System (ADS)

    Pérez-Calatayud, J.; Granero, D.; Ballester, F.; Casal, E.; Crispin, V.; Puchades, V.; León, A.; Verdú, G.

    2004-12-01

In recent years, the use of high dose rate (HDR) after-loader machines has greatly increased due to the shift from traditional Cs-137/Ir-192 low dose rate (LDR) to HDR brachytherapy. The method used to calculate the required concrete and, where appropriate, lead shielding in the door is based on analytical methods provided by documents published by the ICRP, the IAEA and the NCRP. The purpose of this study is to perform a more realistic kerma evaluation at the entrance maze door of an HDR bunker using the Monte Carlo code GEANT4. The Monte Carlo results were validated experimentally. The spectrum at the maze entrance door, obtained with Monte Carlo, has an average energy of about 110 keV, maintaining a similar value along the length of the maze. The comparison of results from the aforementioned analytical methods with the Monte Carlo ones shows that results obtained using the albedo coefficient from the ICRP document more closely match those given by the Monte Carlo method, although the maximum value given by MC calculations is 30% greater.

  17. Numerical integration of detector response functions via Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.

    2017-09-01

    Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ∼ 1000 × faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.

  18. Numerical integration of detector response functions via Monte Carlo simulations

    DOE PAGES

    Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.; ...

    2017-06-13

Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.

  19. Auxiliary Parameter MCMC for Exponential Random Graph Models

    NASA Astrophysics Data System (ADS)

    Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro

    2016-11-01

Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the development of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
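For readers unfamiliar with the machinery being extended here, the core MCMC accept/reject step can be sketched as a generic random-walk Metropolis sampler (a toy illustration on a one-dimensional target, not the auxiliary-parameter algorithm of the paper; all names are made up):

```python
import math
import random

def metropolis(log_target, x0, n_steps, step=1.0, seed=1):
    """Generic random-walk Metropolis sampler: propose a Gaussian move,
    accept with probability min(1, target_ratio)."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_target(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject step
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy target: standard normal, log-density known only up to a constant.
draws = metropolis(lambda x: -0.5 * x * x, 0.0, 20000, step=2.0)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

The ERGM setting replaces the Gaussian proposal with moves over network states, which is where the computational cost arises and where the paper's restriction of allowed states pays off.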

  20. Reliability of a Measure of Institutional Discrimination against Minorities

    DTIC Science & Technology

    1979-12-01

samples are presented. The first is based upon classical statistical theory and the second derives from a series of computer-generated Monte Carlo... Institutional racism and sexism. Englewood Cliffs, N. J.: Prentice-Hall, Inc., 1978. Hays, W. L. and Winkler, R. L. Statistics: probability, inference... statistical measure of the e of institutional discrimination are discussed. Two methods of dealing with the problem of reliability of the measure in small

  1. Adaptive polynomial chaos techniques for uncertainty quantification of a gas cooled fast reactor transient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perko, Z.; Gilli, L.; Lathouwers, D.

    2013-07-01

Uncertainty quantification plays an increasingly important role in the nuclear community, especially with the rise of Best Estimate Plus Uncertainty methodologies. Sensitivity analysis, surrogate models, Monte Carlo sampling and several other techniques can be used to propagate input uncertainties. In recent years however polynomial chaos expansion has become a popular alternative providing high accuracy at affordable computational cost. This paper presents such polynomial chaos (PC) methods using adaptive sparse grids and adaptive basis set construction, together with an application to a Gas Cooled Fast Reactor transient. Comparison is made between a new sparse grid algorithm and the traditionally used technique proposed by Gerstner. An adaptive basis construction method is also introduced and is proved to be advantageous both from an accuracy and a computational point of view. As a demonstration the uncertainty quantification of a 50% loss of flow transient in the GFR2400 Gas Cooled Fast Reactor design was performed using the CATHARE code system. The results are compared to direct Monte Carlo sampling and show the superior convergence and high accuracy of the polynomial chaos expansion. Since PC techniques are easy to implement, they can offer an attractive alternative to traditional techniques for the uncertainty quantification of large scale problems. (authors)
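The direct Monte Carlo sampling baseline that the polynomial chaos expansion is compared against can be sketched as plain uncertainty propagation (a toy, hypothetical two-parameter model; nothing here reflects the CATHARE/GFR2400 setup):

```python
import math
import random
import statistics

def mc_propagate(model, samplers, n=100_000, seed=0):
    """Brute-force Monte Carlo propagation of input uncertainties:
    draw each uncertain input, evaluate the model, summarize the output.
    This is the baseline whose cost polynomial chaos aims to undercut."""
    rng = random.Random(seed)
    outs = [model(*(draw(rng) for draw in samplers)) for _ in range(n)]
    return statistics.fmean(outs), statistics.stdev(outs)

# Hypothetical toy model with two uncertain inputs (illustrative only).
model = lambda a, k: a * math.exp(-k)
samplers = [lambda r: r.gauss(1.0, 0.1),    # amplitude, ±10%
            lambda r: r.gauss(0.5, 0.05)]   # decay rate, ±10%
mean, std = mc_propagate(model, samplers)
```

The output statistics converge only as N^(-1/2) in the number of model evaluations, which is what makes surrogate approaches like PC attractive when each evaluation is an expensive reactor transient simulation.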

  2. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can, in fact, be hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The proposed method can be used efficiently in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of Representative Elementary Volume size for arbitrary physics.
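The multilevel idea, reduced to its simplest two-level form, is: take many cheap coarse evaluations, then add a correction term estimated from a few coupled fine evaluations. A hedged sketch on a toy integrand (the function names and the coarse surrogate are illustrative, not the pore-scale solvers of the paper):

```python
import random

def mlmc_two_level(coarse, fine, n_coarse, n_fine, seed=0):
    """Two-level multilevel Monte Carlo estimator:
    E[fine] ~ mean(coarse) + mean(fine - coarse on shared inputs).
    Many cheap coarse samples absorb most of the variance; a few
    expensive fine samples correct the coarse model's bias."""
    rng = random.Random(seed)
    # Level 0: cheap coarse estimator on many independent samples.
    level0 = sum(coarse(rng.random()) for _ in range(n_coarse)) / n_coarse
    # Level 1: correction term, coupled through shared random inputs
    # so that fine - coarse has small variance.
    corr = 0.0
    for _ in range(n_fine):
        u = rng.random()
        corr += fine(u) - coarse(u)
    return level0 + corr / n_fine

# Toy problem: fine = exact integrand on [0,1], coarse = biased surrogate.
fine = lambda u: u ** 2                 # E[fine] = 1/3
coarse = lambda u: u ** 2 - 0.01 * u    # cheap but slightly biased
est = mlmc_two_level(coarse, fine, n_coarse=200_000, n_fine=2_000)
```

Because the correction term has small variance, only a handful of fine-level (high-resolution) samples are needed, which is the source of the drastic cost reduction claimed for the pore-scale setting.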

  3. Ground state of excitonic molecules by the Green's-function Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, M.A.; Vashishta, P.; Kalia, R.K.

    1983-12-26

    The ground-state energy of excitonic molecules is evaluated as a function of the ratio of electron and hole masses, sigma, with use of the Green's-function Monte Carlo method. For all sigma, the Green's-function Monte Carlo energies are significantly lower than the variational estimates and in favorable agreement with experiments. In excitonic rydbergs, the binding energy of the positronium molecule (sigma = 1) is predicted to be -0.06 and for sigma<<1, the Green's-function Monte Carlo energies agree with the ''exact'' limiting behavior, E = -2.346+0.764sigma.

  4. Qualification of heavy water based irradiation device in the JSI TRIGA reactor for irradiations of FT-TIMS samples for nuclear safeguards

    NASA Astrophysics Data System (ADS)

    Radulović, Vladimir; Kolšek, Aljaž; Fauré, Anne-Laure; Pottin, Anne-Claire; Pointurier, Fabien; Snoj, Luka

    2018-03-01

    The Fission Track Thermal Ionization Mass Spectrometry (FT-TIMS) method is considered as the reference method for particle analysis in the field of nuclear Safeguards for measurements of isotopic compositions (fissile material enrichment levels) in micrometer-sized uranium particles collected in nuclear facilities. An integral phase in the method is the irradiation of samples in a very well thermalized neutron spectrum. A bilateral collaboration project was carried out between the Jožef Stefan Institute (JSI, Slovenia) and the Commissariat à l'Énergie Atomique et aux Énergies Alternatives (CEA, France) to determine whether the JSI TRIGA reactor could be used for irradiations of samples for the FT-TIMS method. This paper describes Monte Carlo simulations, experimental activation measurements and test irradiations performed in the JSI TRIGA reactor, firstly to determine the feasibility, and secondly to design and qualify a purpose-built heavy water based irradiation device for FT-TIMS samples. The final device design has been shown experimentally to meet all the required performance specifications.

  5. Computing physical properties with quantum Monte Carlo methods with statistical fluctuations independent of system size.

    PubMed

    Assaraf, Roland

    2014-12-01

    We show that the recently proposed correlated sampling without reweighting procedure extends the locality (asymptotic independence of the system size) of a physical property to the statistical fluctuations of its estimator. This makes the approach potentially vastly more efficient for computing space-localized properties in large systems compared with standard correlated methods. A proof is given for a large collection of noninteracting fragments. Calculations on hydrogen chains suggest that this behavior holds not only for systems displaying short-range correlations, but also for systems with long-range correlations.

  6. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  7. Accurate quantification of fluorescent targets within turbid media based on a decoupled fluorescence Monte Carlo model.

    PubMed

    Deng, Yong; Luo, Zhaoyang; Jiang, Xu; Xie, Wenhao; Luo, Qingming

    2015-07-01

    We propose a method based on a decoupled fluorescence Monte Carlo model for constructing fluorescence Jacobians to enable accurate quantification of fluorescence targets within turbid media. The effectiveness of the proposed method is validated using two cylindrical phantoms enclosing fluorescent targets within homogeneous and heterogeneous background media. The results demonstrate that our method can recover relative concentrations of the fluorescent targets with higher accuracy than the perturbation fluorescence Monte Carlo method. This suggests that our method is suitable for quantitative fluorescence diffuse optical tomography, especially for in vivo imaging of fluorophore targets for diagnosis of different diseases and abnormalities.

  8. The effect of pathological processes on absorption and scattering spectra of samples of bile and pancreatic juice

    NASA Astrophysics Data System (ADS)

    Giraev, K. M.; Ashurbekov, N. A.; Magomedov, M. A.; Murtazaeva, A. A.; Medzhidov, R. T.

    2015-07-01

Spectra of optical transmission coefficients and optical reflectance for bile and pancreatic juice samples were measured experimentally for different forms of pathologies of the pancreas within the range of 250-2500 nm. The absorption and scattering spectra, as well as the spectrum of the anisotropy factor of scattering, were determined based on the results obtained using the reverse Monte Carlo method. The surface morphology for the corresponding samples of the biological media was studied employing electron microscopy. The dynamics of the optical properties of the biological media was determined depending on the stage of the pathology. It has been demonstrated that the results of the study presented are in good agreement with pathophysiological data and could supplement and broaden the results of conventional methods for diagnostics of the pancreas.

  9. Estimation of Gutenberg-Richter b-value using instrumental earthquake catalog from the southern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Lee, H.; Sheen, D.; Kim, S.

    2013-12-01

The b-value in the Gutenberg-Richter relation is an important parameter widely used not only in the interpretation of regional tectonic structure but also in seismic hazard analysis. In this study, we tested four methods for estimating a stable b-value from a small number of events using the Monte Carlo method. One is the Least-Squares method (LSM), which minimizes the observation error. The others are based on the Maximum Likelihood method (MLM), which maximizes the likelihood function: Utsu's (1965) method for continuous magnitudes and an infinite maximum magnitude, Page's (1968) for continuous magnitudes and a finite maximum magnitude, and Weichert's (1980) for interval magnitudes and a finite maximum magnitude. A synthetic parent population of a million events from magnitude 2.0 to 7.0 with an interval of 0.1 was generated for the Monte Carlo simulation. Samples, the number of which was increased from 25 to 1000, were extracted from the parent population randomly. The resampling procedure was applied 1000 times with different random seed numbers. The mean and the standard deviation of the b-value were estimated for each sample group that has the same number of samples. As expected, the more samples were used, the more stable the obtained b-value. However, for a small number of events, the LSM generally gave a low b-value with a large standard deviation while the other MLMs gave more accurate and stable values. It was found that Utsu's (1965) method gives the most accurate and stable b-value even for a small number of events. It was also found that the selection of the minimum magnitude could be critical for estimating the correct b-value with Utsu's (1965) and Page's (1968) methods if magnitudes were binned into an interval. Therefore, we applied Utsu's (1965) method to estimate the b-value using two instrumental earthquake catalogs, which contain events that occurred around the southern part of the Korean Peninsula from 1978 to 2011.
By a careful choice of the minimum magnitude, the b-values of the earthquake catalogs of the Korea Meteorological Administration and Kim (2012) are estimated to be 0.72 and 0.74, respectively.
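For continuous magnitudes, the Utsu (1965) estimator favored by this study has a well-known closed form, b = log10(e) / (mean(M) − Mmin). A sketch of that formula together with the kind of Monte Carlo resampling check the abstract describes (the synthetic catalog parameters and helper names are illustrative):

```python
import math
import random

LOG10_E = math.log10(math.e)

def b_value_mle(mags, m_min):
    """Aki-Utsu maximum-likelihood b-value for continuous magnitudes:
    b = log10(e) / (mean(M) - Mmin). For magnitudes binned with width
    dM, the usual correction replaces m_min by m_min - dM/2."""
    m = [x for x in mags if x >= m_min]
    return LOG10_E / (sum(m) / len(m) - m_min)

def synthetic_catalog(b_true, m_min, n, rng):
    """Gutenberg-Richter magnitudes above m_min are exponentially
    distributed with rate beta = b * ln(10)."""
    beta = b_true / LOG10_E
    return [m_min + rng.expovariate(beta) for _ in range(n)]

# Monte Carlo check: 1000 synthetic catalogs of 200 events with b = 1.0,
# resampled to see the estimator's mean and spread at this sample size.
rng = random.Random(42)
estimates = [b_value_mle(synthetic_catalog(1.0, 2.0, 200, rng), 2.0)
             for _ in range(1000)]
mean_b = sum(estimates) / len(estimates)
```

Repeating the experiment at different catalog sizes reproduces the qualitative finding of the study: the maximum-likelihood estimate stays close to the true b even when only a few hundred events are available.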

  10. Multi-fidelity methods for uncertainty quantification in transport problems

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.

    2016-12-01

We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, a re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods, and discuss the advantages of each approach.

  11. Calculating Potential Energy Curves with Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Powell, Andrew D.; Dawes, Richard

    2014-06-01

    Quantum Monte Carlo (QMC) is a computational technique that can be applied to the electronic Schrödinger equation for molecules. QMC methods such as Variational Monte Carlo (VMC) and Diffusion Monte Carlo (DMC) have demonstrated the capability of capturing large fractions of the correlation energy, thus suggesting their possible use for high-accuracy quantum chemistry calculations. QMC methods scale particularly well with respect to parallelization making them an attractive consideration in anticipation of next-generation computing architectures which will involve massive parallelization with millions of cores. Due to the statistical nature of the approach, in contrast to standard quantum chemistry methods, uncertainties (error-bars) are associated with each calculated energy. This study focuses on the cost, feasibility and practical application of calculating potential energy curves for small molecules with QMC methods. Trial wave functions were constructed with the multi-configurational self-consistent field (MCSCF) method from GAMESS-US.[1] The CASINO Monte Carlo quantum chemistry package [2] was used for all of the DMC calculations. An overview of our progress in this direction will be given. References: M. W. Schmidt et al. J. Comput. Chem. 14, 1347 (1993). R. J. Needs et al. J. Phys.: Condensed Matter 22, 023201 (2010).

  12. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods

    PubMed Central

    Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.

    2011-01-01

We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35 to 500 fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276

  13. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2010-01-01

This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10{sup 2-4}), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.

  14. Analytic nuclear forces and molecular properties from full configuration interaction quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Robert E.; Overy, Catherine; Opalka, Daniel

Unbiased stochastic sampling of the one- and two-body reduced density matrices is achieved in full configuration interaction quantum Monte Carlo with the introduction of a second, “replica” ensemble of walkers, whose population evolves in imaginary time independently from the first and which entails only modest additional computational overheads. The matrices obtained from this approach are shown to be representative of full configuration-interaction quality and hence provide a realistic opportunity to achieve high-quality results for a range of properties whose operators do not necessarily commute with the Hamiltonian. A density-matrix formulated quasi-variational energy estimator having been already proposed and investigated, the present work extends the scope of the theory to take in studies of analytic nuclear forces, molecular dipole moments, and polarisabilities, with extensive comparison to exact results where possible. These new results confirm the suitability of the sampling technique and, where sufficiently large basis sets are available, achieve close agreement with experimental values, expanding the scope of the method to new areas of investigation.

  15. Quantitative characterization of the atomic-scale structure of oxyhydroxides in rusts formed on steel surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saito, M.; Suzuki, S.; Kimura, M.

    Quantitative X-ray structural analysis coupled with anomalous X-ray scattering has been used for characterizing the atomic-scale structure of rust formed on steel surfaces. Samples were prepared from rust layers formed on the surfaces of two commercial steels. X-ray scattered intensity profiles of the two samples showed that the rusts consisted mainly of two types of ferric oxyhydroxide, {alpha}-FeOOH and {gamma}-FeOOH. The amounts of these rust components and the realistic atomic arrangements in the components were estimated by fitting both the ordinary and the environmental interference functions with a model structure calculated using the reverse Monte Carlo simulation technique. The two rust components were found to be the network structure formed by FeO{sub 6} octahedral units, the network structure itself deviating from the ideal case. The present results also suggest that the structural analysis method using anomalous X-ray scattering and the reverse Monte Carlo technique is very successful in determining the atomic-scale structure of rusts formed on the steel surfaces.

  16. On Fitting a Multivariate Two-Part Latent Growth Model

    PubMed Central

    Xu, Shu; Blozis, Shelley A.; Vandewater, Elizabeth A.

    2017-01-01

    A 2-part latent growth model can be used to analyze semicontinuous data to simultaneously study change in the probability that an individual engages in a behavior, and if engaged, change in the behavior. This article uses a Monte Carlo (MC) integration algorithm to study the interrelationships between the growth factors of 2 variables measured longitudinally where each variable can follow a 2-part latent growth model. A SAS macro implementing Mplus is developed to estimate the model to take into account the sampling uncertainty of this simulation-based computational approach. A sample of time-use data is used to show how maximum likelihood estimates can be obtained using a rectangular numerical integration method and an MC integration method. PMID:29333054

  17. Protein-ligand docking using FFT based sampling: D3R case study.

    PubMed

    Padhorny, Dzmitry; Hall, David R; Mirzaei, Hanieh; Mamonov, Artem B; Moghadasi, Mohammad; Alekseenko, Andrey; Beglov, Dmitri; Kozakov, Dima

    2018-01-01

    Fast Fourier transform (FFT) based approaches have been successful in application to modeling of relatively rigid protein-protein complexes. Recently, we have been able to adapt the FFT methodology to treatment of flexible protein-peptide interactions. Here, we report our latest attempt to expand the capabilities of the FFT approach to treatment of flexible protein-ligand interactions in application to the D3R PL-2016-1 challenge. Based on the D3R assessment, our FFT approach in conjunction with Monte Carlo minimization off-grid refinement was among the top performing methods in the challenge. The potential advantage of our method is its ability to globally sample the protein-ligand interaction landscape, which will be explored in further applications.

  18. Surface entropy of liquids via a direct Monte Carlo approach - Application to liquid Si

    NASA Technical Reports Server (NTRS)

    Wang, Z. Q.; Stroud, D.

    1990-01-01

    Two methods are presented for a direct Monte Carlo evaluation of the surface entropy S(s) of a liquid interacting by specified, volume-independent potentials. The first method is based on an application of the approach of Ferrenberg and Swendsen (1988, 1989) to Monte Carlo simulations at two different temperatures; it gives much more reliable results for S(s) in liquid Si than previous calculations based on numerical differentiation. The second method expresses the surface entropy directly as a canonical average at fixed temperature.

  19. A comparison of moment-based methods of estimation for the log Pearson type 3 distribution

    NASA Astrophysics Data System (ADS)

    Koutrouvelis, I. A.; Canavos, G. C.

    2000-06-01

    The log Pearson type 3 distribution is a very important model in statistical hydrology, especially for modeling annual flood series. In this paper we compare the various methods based on moments for estimating quantiles of this distribution. Besides the methods of direct and mixed moments which were found most successful in previous studies and the well-known indirect method of moments, we develop generalized direct moments and generalized mixed moments methods and a new method of adaptive mixed moments. The last method chooses the orders of two moments for the original observations by utilizing information contained in the sample itself. The results of Monte Carlo experiments demonstrated the superiority of this method in estimating flood events of high return periods when a large sample is available and in estimating flood events of low return periods regardless of the sample size. In addition, a comparison of simulation and asymptotic results shows that the adaptive method may be used for the construction of meaningful confidence intervals for design events based on the asymptotic theory even with small samples. The simulation results also point to the specific members of the class of generalized moments estimates which maintain small values for bias and/or mean square error.

  20. Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Verification

    NASA Technical Reports Server (NTRS)

    Hanson, John M.; Beard, Bernard B.

    2010-01-01

    This paper is focused on applying Monte Carlo simulation to probabilistic launch vehicle design and requirements verification. The approaches developed in this paper can be applied to other complex design efforts as well. Typically the verification must show that requirement "x" is met for at least "y" % of cases, with, say, 10% consumer risk or 90% confidence. Two particular aspects of making these runs for requirements verification will be explored in this paper. First, there are several types of uncertainties that should be handled in different ways, depending on when they become known (or not). The paper describes how to handle different types of uncertainties and how to develop vehicle models that can be used to examine their characteristics. This includes items that are not known exactly during the design phase but that will be known for each assembled vehicle (can be used to determine the payload capability and overall behavior of that vehicle), other items that become known before or on flight day (can be used for flight day trajectory design and go/no go decision), and items that remain unknown on flight day. Second, this paper explains a method (order statistics) for determining whether certain probabilistic requirements are met or not and enables the user to determine how many Monte Carlo samples are required. Order statistics is not new, but may not be known in general to the GN&C community. The methods also apply to determining the design values of parameters of interest in driving the vehicle design. The paper briefly discusses when it is desirable to fit a distribution to the experimental Monte Carlo results rather than using order statistics.
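
    The order-statistics argument sketched above can be made concrete. As a minimal illustration (the function name and the zero-failure acceptance criterion are my own choices, not from the paper), the smallest number of Monte Carlo runs needed to demonstrate that a requirement holds for at least a fraction p_req of cases, at a given confidence, follows from a one-sided binomial tail bound:

```python
import math

def min_samples(p_req, confidence, max_failures=0):
    """Smallest n such that observing at most `max_failures` failing runs
    out of n demonstrates P(success) >= p_req at the given confidence
    (one-sided binomial / order-statistics bound)."""
    n = max_failures + 1
    while True:
        # probability of seeing so few failures if the true success
        # probability were only p_req (the consumer-risk tail)
        tail = sum(math.comb(n, k) * (1 - p_req) ** k * p_req ** (n - k)
                   for k in range(max_failures + 1))
        if tail <= 1 - confidence:
            return n
        n += 1

# e.g. requirement met in >= 99% of cases, 90% confidence, zero failures allowed:
print(min_samples(0.99, 0.90))  # -> 230
```

Allowing some observed failures (max_failures > 0) raises the required run count but makes the acceptance test less brittle.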

  1. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE PAGES

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    2016-01-01

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
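
    The telescoping-sum structure of MLMC is independent of the particular equation being solved. A minimal sketch on a toy problem of my own choosing (Euler paths of geometric Brownian motion, not the elliptic nonlocal equation of the paper): coarse and fine levels are coupled through shared Brownian increments so the correction terms have small variance.

```python
import math
import random

def euler_gbm(n_steps, z):
    """Euler discretisation of dS = mu*S dt + sigma*S dW on [0, 1],
    driven by the supplied standard normals z (one per step)."""
    mu, sigma, dt = 0.05, 0.2, 1.0 / n_steps
    s = 1.0
    for dw in z:
        s += mu * s * dt + sigma * s * math.sqrt(dt) * dw
    return s

def mlmc_estimate(max_level, samples_per_level, rng=random):
    """MLMC estimator E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    Level l uses 2**l time steps; the coarse path at level l reuses the
    fine path's Brownian increments, pairwise summed."""
    total = 0.0
    for level in range(max_level + 1):
        n_fine = 2 ** level
        acc = 0.0
        for _ in range(samples_per_level[level]):
            z = [rng.gauss(0, 1) for _ in range(n_fine)]
            fine = euler_gbm(n_fine, z)
            if level == 0:
                acc += fine
            else:
                zc = [(z[2 * i] + z[2 * i + 1]) / math.sqrt(2)
                      for i in range(n_fine // 2)]
                acc += fine - euler_gbm(n_fine // 2, zc)
        total += acc / samples_per_level[level]
    return total
```

Because the level corrections shrink, most samples can be spent on the cheap coarse level, which is the source of the work savings the abstract describes.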

  3. Entanglement and the fermion sign problem in auxiliary field quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Broecker, Peter; Trebst, Simon

    2016-08-01

    Quantum Monte Carlo simulations of fermions are hampered by the notorious sign problem whose most striking manifestation is an exponential growth of sampling errors with the number of particles. With the sign problem known to be an NP-hard problem and any generic solution thus highly elusive, the Monte Carlo sampling of interacting many-fermion systems is commonly thought to be restricted to a small class of model systems for which a sign-free basis has been identified. Here we demonstrate that entanglement measures, in particular the so-called Rényi entropies, can intrinsically exhibit a certain robustness against the sign problem in auxiliary-field quantum Monte Carlo approaches and possibly allow for the identification of global ground-state properties via their scaling behavior even in the presence of a strong sign problem. We corroborate these findings via numerical simulations of fermionic quantum phase transitions of spinless fermions on the honeycomb lattice at and below half filling.

  4. Simulation-Based Model Checking for Nondeterministic Systems and Rare Events

    DTIC Science & Technology

    2016-03-24

    year, we have investigated AO* search and Monte Carlo Tree Search algorithms to complement and enhance CMU’s SMCMDP. 1 Final Report, March 14... tree, so we can use it to find the probability of reachability for a property in PRISM’s Probabilistic LTL. By finding the maximum probability of... savings, particularly when handling very large models. 2.3 Monte Carlo Tree Search: The Monte Carlo sampling process in SMCMDP can take a long time to

  5. Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits

    NASA Astrophysics Data System (ADS)

    Hoogland, Jiri; Kleiss, Ronald

    1997-04-01

    In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
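
    The effect described here is easy to observe numerically. A minimal sketch (stdlib only; the two-dimensional Halton sequence and the test integrand x*y are my own illustrative choices): compare plain Monte Carlo and quasi-Monte Carlo estimates of the same integral.

```python
import random

def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in `base`;
    one coordinate of a Halton low-discrepancy point."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def integrate(points):
    """Sample-mean estimate of the integral of x*y over the unit square
    (exact value 1/4)."""
    return sum(x * y for x, y in points) / len(points)

n = 4096
mc_pts = [(random.random(), random.random()) for _ in range(n)]
qmc_pts = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]

print("MC error: ", abs(integrate(mc_pts) - 0.25))
print("QMC error:", abs(integrate(qmc_pts) - 0.25))
```

The quasi-random error typically falls far below the ~N^(-1/2) Monte Carlo error, which is the gap whose error distribution the paper analyses.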

  6. Probabilistic learning of nonlinear dynamical systems using sequential Monte Carlo

    NASA Astrophysics Data System (ADS)

    Schön, Thomas B.; Svensson, Andreas; Murray, Lawrence; Lindsten, Fredrik

    2018-05-01

    Probabilistic modeling provides the capability to represent and manipulate uncertainty in data, models, predictions and decisions. We are concerned with the problem of learning probabilistic models of dynamical systems from measured data. Specifically, we consider learning of probabilistic nonlinear state-space models. There is no closed-form solution available for this problem, implying that we are forced to use approximations. In this tutorial we will provide a self-contained introduction to one of the state-of-the-art methods-the particle Metropolis-Hastings algorithm-which has proven to offer a practical approximation. This is a Monte Carlo based method, where the particle filter is used to guide a Markov chain Monte Carlo method through the parameter space. One of the key merits of the particle Metropolis-Hastings algorithm is that it is guaranteed to converge to the "true solution" under mild assumptions, despite being based on a particle filter with only a finite number of particles. We will also provide a motivating numerical example illustrating the method using a modeling language tailored for sequential Monte Carlo methods. The intention of modeling languages of this kind is to open up the power of sophisticated Monte Carlo methods-including particle Metropolis-Hastings-to a large group of users without requiring them to know all the underlying mathematical details.
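
    The core ingredient of particle Metropolis-Hastings is an unbiased particle-filter estimate of the likelihood, which is then plugged into an otherwise ordinary Metropolis-Hastings acceptance ratio. A minimal bootstrap-filter sketch for a toy linear-Gaussian state-space model (the model and all parameter values are illustrative assumptions, not from the tutorial):

```python
import math
import random

def pf_loglik(y_obs, theta, n_particles=200, rng=random):
    """Bootstrap particle filter estimate of log p(y_1:T | theta) for
    x_t = theta * x_{t-1} + v_t,  y_t = x_t + w_t,  v_t, w_t ~ N(0, 1).
    In particle MH this noisy estimate replaces the exact likelihood."""
    particles = [rng.gauss(0, 1) for _ in range(n_particles)]
    loglik = 0.0
    for y in y_obs:
        # propagate particles through the state dynamics
        particles = [theta * x + rng.gauss(0, 1) for x in particles]
        # weight by the observation density N(y; x, 1)
        weights = [math.exp(-0.5 * (y - x) ** 2) / math.sqrt(2 * math.pi)
                   for x in particles]
        loglik += math.log(sum(weights) / n_particles)
        # multinomial resampling keeps the cloud in likely regions
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return loglik

# synthetic observations from the model with theta = 0.7
gen = random.Random(0)
x, ys = 0.0, []
for _ in range(50):
    x = 0.7 * x + gen.gauss(0, 1)
    ys.append(x + gen.gauss(0, 1))

ll = pf_loglik(ys, 0.7, rng=random.Random(1))
```

An outer Metropolis-Hastings loop would propose a new theta, recompute pf_loglik, and accept with the usual ratio; the convergence guarantee the abstract highlights holds despite the finite-particle noise in this estimate.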

  7. Quantitative basis for component factors of gas flow proportional counting efficiencies

    NASA Astrophysics Data System (ADS)

    Nichols, Michael C.

    This dissertation investigates the counting efficiency calibration of a gas flow proportional counter with beta-particle emitters in order to (1) determine by measurements and simulation the values of the component factors of beta-particle counting efficiency for a proportional counter, (2) compare the simulation results and measured counting efficiencies, and (3) determine the uncertainty of the simulation and measurements. Monte Carlo simulation results by the MCNP5 code were compared with measured counting efficiencies as a function of sample thickness for 14C, 89Sr, 90Sr, and 90Y. The Monte Carlo model simulated strontium carbonate with areal thicknesses from 0.1 to 35 mg cm-2. The samples were precipitated as strontium carbonate with areal thicknesses from 3 to 33 mg cm-2, mounted on membrane filters, and counted on a low background gas flow proportional counter. The estimated fractional standard deviation was 2-4% (except 6% for 14C) for efficiency measurements of the radionuclides. The Monte Carlo simulations have uncertainties estimated to be 5 to 6 percent for carbon-14 and 2.4 percent for strontium-89, strontium-90, and yttrium-90. The curves of simulated counting efficiency vs. sample areal thickness agreed within 3% of the curves of best fit drawn through the 25-49 measured points for each of the four radionuclides. Contributions from this research include development of uncertainty budgets for the analytical processes; evaluation of alternative methods for determining chemical yield critical to the measurement process; correcting a bias found in the MCNP normalization of beta spectra histograms; clarifying the interpretation of the commonly used ICRU beta-particle spectra for use by MCNP; and evaluation of instrument parameters as applied to the simulation model to obtain estimates of the counting efficiency from simulated pulse height tallies.

  8. Contact interaction in a unitary ultracold Fermi gas

    DOE PAGES

    Pessoa, Renato; Gandolfi, Stefano; Vitiello, S. A.; ...

    2015-12-16

    An ultracold Fermi atomic gas at unitarity presents universal properties that in the dilute limit can be well described by a contact interaction. By employing a guiding function with correct boundary conditions and making simple modifications to the sampling procedure we are able to calculate the properties of a true contact interaction with the diffusion Monte Carlo method. The results are obtained with small variances. Our calculations for the Bertsch and contact parameters are in excellent agreement with published experiments. The possibility of using a more faithful description of ultracold atomic gases can help uncover additional features of ultracold atomic gases. In addition, this work paves the way to perform quantum Monte Carlo calculations for other systems interacting with contact interactions, where the description using potentials with finite effective range might not be accurate.

  9. The MCNP Simulation of a PGNAA System at TRR-1/M1

    NASA Astrophysics Data System (ADS)

    Sangaroon, S.; Ratanatongchai, W.; Picha, R.; Khaweerat, S.; Channuie, J.

    2017-06-01

    The prompt-gamma neutron activation analysis (PGNAA) system has been installed at Thai Research Reactor-1/Modified 1 (TRR-1/M1) since 1999. The purpose of the system is elemental and isotopic analysis. The system mainly consists of a series of moderators and collimators, neutron and gamma-ray shielding, and an HPGe detector. In this work, the system is characterized both experimentally and by Monte Carlo simulation using the Monte Carlo N-Particle transport code. The flux ratios (Φthermal/Φepithermal and Φthermal/Φfast) and the thermal neutron flux have been obtained. Prompt gamma rays from a Portland cement sample have been simulated. The simulation provides a significant contribution toward upgrading the PGNAA station for use in various applications.

  10. Solving the master equation without kinetic Monte Carlo: Tensor train approximations for a CO oxidation model

    NASA Astrophysics Data System (ADS)

    Gelß, Patrick; Matera, Sebastian; Schütte, Christof

    2016-06-01

    In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train Format for this purpose. The performance of the approach is demonstrated on a first principles based, reduced model for the CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.

  11. Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.

    PubMed

    Yuan, J; Moses, G A; McKenty, P W

    2005-10-01

    A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight line approximation is used to follow propagation of "Monte Carlo particles" which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal, severely distorted mesh cells, particle relocation on the moving mesh, and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show that the Monte Carlo transport method predicts earlier ignition than the diffusion method and generates a higher hot spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.

  12. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions. II. A simplified implementation.

    PubMed

    Tao, Guohua; Miller, William H

    2012-09-28

    An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points as well as their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor-which is computationally expensive, especially for large systems-is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H(2) system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.

  13. Study of the Transition Flow Regime using Monte Carlo Methods

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    1999-01-01

    This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.

  14. Simulation of Nuclear Reactor Kinetics by the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gomin, E. A.; Davidenko, V. D.; Zinchenko, A. S.; Kharchenko, I. K.

    2017-12-01

    The KIR computer code intended for calculations of nuclear reactor kinetics using the Monte Carlo method is described. The algorithm implemented in the code is described in detail. Some results of test calculations are given.

  15. Improving Computational Efficiency of Prediction in Model-based Prognostics Using the Unscented Transform

    DTIC Science & Technology

    2010-10-01

    bodies becomes greater as surface asperities wear down (Hutchings, 1992). We characterize friction damage by a change in the friction coefficient...points are such a set, and satisfy an additional constraint in which the skew (third moment) is minimized, which reduces the average error for a...On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10, 197–208. Hutchings, I. M. (1992). Tribology: friction

  16. Deterministic theory of Monte Carlo variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ueki, T.; Larsen, E.W.

    1996-12-31

    The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.

  17. Recommender engine for continuous-time quantum Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Huang, Li; Yang, Yi-feng; Wang, Lei

    2017-03-01

    Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.

  18. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    PubMed

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and qPCR reaction would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
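
    The total-probability construction can be sketched in a few lines. In this toy version (the beta and normal parameters are illustrative assumptions, not values fitted by the authors), uncertain assay sensitivity and qPCR precision error are sampled and inverted to give a distribution over the true marker concentration:

```python
import math
import random
import statistics

def true_conc_draws(measured_log10_conc, n_draws=10000, rng=random):
    """Monte Carlo draws of the true log10 marker concentration from a
    simplified total-probability relation measured = true * P(true positive),
    with uncertain sensitivity and additive measurement error."""
    draws = []
    for _ in range(n_draws):
        p_tp = rng.betavariate(90, 10)   # assumed sensitivity ~ Beta(90, 10)
        noise = rng.gauss(0, 0.1)        # assumed qPCR precision error (log10)
        draws.append(measured_log10_conc + noise - math.log10(p_tp))
    return draws

draws = true_conc_draws(5.0)
print(statistics.mean(draws), statistics.stdev(draws))
```

With these assumed parameters the spread of the output distribution is dominated by the precision error, consistent with the abstract's observation that model performance degrades as that error grows.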

  19. SU-E-T-769: T-Test Based Prior Error Estimate and Stopping Criterion for Monte Carlo Dose Calculation in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Schuemann, J

    2015-06-15

    Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy. Prior error estimate and stopping criterion are not well established for MC. This work aims to fill this gap. Methods: Due to the statistical nature of MC, our approach is based on the one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate for developing a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of dose deposition to the voxel by a sufficiently large number of source particles. Then, according to the central limit theorem, the sample as the mean value of i.i.d. variables is normally distributed with the expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the same as the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold; in addition, users have the freedom to specify the confidence probability and region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
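
    The stopping rule described here can be sketched with a running mean and variance. A minimal version (using the large-sample normal approximation to the t quantile, and a toy exponential "dose per particle" in place of a real transport kernel; all names are my own):

```python
import math
import random
from statistics import NormalDist

def run_until_converged(sample_dose, tol, confidence=0.95, batch=1000,
                        max_n=10**6):
    """Draw per-particle dose deposits in batches and stop once the
    confidence-interval half-width for the mean dose falls below `tol`."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    n, s1, s2 = 0, 0.0, 0.0      # count, running sum, running sum of squares
    while n < max_n:
        for _ in range(batch):
            d = sample_dose()
            s1 += d
            s2 += d * d
        n += batch
        mean = s1 / n
        var = (s2 - n * mean * mean) / (n - 1)
        half_width = z * math.sqrt(var / n)
        if half_width < tol:
            break
    return mean, half_width, n

rng = random.Random(0)
mean, hw, n = run_until_converged(lambda: rng.expovariate(1.0), tol=0.01)
print(mean, hw, n)
```

For small sample counts the z quantile should be replaced by the Student-t quantile the abstract uses; with batches of 1000 the difference is negligible.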

  20. Verification and Validation of Monte Carlo N-Particle 6 for Computing Gamma Protection Factors

    DTIC Science & Technology

    2015-03-26

    methods for evaluating RPFs, which it used for the subsequent 30 years. These approaches included computational modeling, radioisotopes, and a high... 1.2.1. Past Methods of Experimental Evaluation ... 1.2.2. Modeling Efforts ... Other Considerations ... 2.4. Monte Carlo Methods

  1. The Development and Comparison of Molecular Dynamics Simulation and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Chen, Jundong

    2018-03-01

    Molecular dynamics is an integrated technique that combines physics, mathematics and chemistry. The molecular dynamics method is a computer simulation technique and a powerful tool for studying condensed matter systems. It yields not only the trajectories of the atoms but also the microscopic details of atomic motion. By studying the numerical integration algorithms used in molecular dynamics simulation, we can analyze the microstructure and the motion of particles, and more conveniently study the relationship between microscopic interactions and macroscopic material properties. Monte Carlo simulation, similar to molecular dynamics, is a tool for studying systems at the molecular and particle level. In this paper, the theoretical background of computer numerical simulation is introduced, and the specific methods of numerical integration are summarized, including the Verlet method, the leap-frog method and the velocity Verlet method. At the same time, the method and principle of Monte Carlo simulation are introduced. Finally, the similarities and differences of Monte Carlo simulation and molecular dynamics simulation are discussed.
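
    Of the three integrators named here, velocity Verlet is the most common in production MD codes because it yields positions and velocities at the same time points. A minimal sketch on a 1-D harmonic oscillator (all names and parameters are illustrative):

```python
def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Velocity Verlet: x_{n+1} = x_n + v_n dt + a_n dt^2 / 2,
    v_{n+1} = v_n + (a_n + a_{n+1}) dt / 2. Symplectic and time-reversible."""
    a = force(x) / mass
    traj = [(x, v)]
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = force(x) / mass
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
        traj.append((x, v))
    return traj

# harmonic oscillator, force = -k*x: total energy should be nearly conserved
k, m = 1.0, 1.0
traj = velocity_verlet(1.0, 0.0, lambda x: -k * x, m, dt=0.01, n_steps=10000)
energy = [0.5 * m * v * v + 0.5 * k * x * x for x, v in traj]
print(max(energy) - min(energy))  # stays small: no secular energy drift
```

The leap-frog scheme is algebraically equivalent but staggers velocities by half a step; plain (position) Verlet avoids velocities entirely at the cost of needing two previous positions.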

  2. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu; Celler, Anna

    2014-09-15

    Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with {sup 99m}Tc-hydrazinonicotinamide-Tyr{sup 3}-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate {sup 99m}Tc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for {sup 131}I, {sup 177}Lu, and {sup 90}Y assuming the same biological half-lives as the {sup 99m}Tc labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation and voxel level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for {sup 99m}Tc, {sup 131}I, {sup 177}Lu, and {sup 90}Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%.
    Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume (D90) agreeing within ±3%, on average. Conclusions: Several aspects of OLINDA/EXM dose calculation were compared with patient-specific dose estimates obtained using Monte Carlo. Differences in patient anatomy led to large differences in cross-organ doses. However, total organ doses were still in good agreement since most of the deposited dose is due to self-irradiation. Comparison of voxelized doses calculated by Monte Carlo and the voxel S value technique showed that the 3D dose distributions produced by the respective methods are nearly identical.

  3. Monte Carlo based investigation of berry phase for depth resolved characterization of biomedical scattering samples

    NASA Astrophysics Data System (ADS)

    Baba, J. S.; Koju, V.; John, D.

    2015-03-01

    The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computational modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute-force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10⁷) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman et al. to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
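The polarization and Berry-phase bookkeeping described above is beyond a short sketch, but the underlying photon random walk can be illustrated in a stripped-down, unpolarized form; the scattering and absorption coefficients and the isotropic phase function below are arbitrary demonstration assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_photon(mu_s=10.0, mu_a=0.1, n_max=10_000):
    """Random walk of one photon in a semi-infinite turbid medium.

    Isotropic scattering (g = 0); returns the maximum depth reached
    before absorption or escape through the surface z = 0.
    """
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t
    z, max_depth = 0.0, 0.0
    uz = 1.0                                  # launched straight down (+z)
    for _ in range(n_max):
        step = -np.log(rng.random()) / mu_t   # exponential free path
        z += uz * step
        if z <= 0.0:                          # escaped back through the surface
            break
        max_depth = max(max_depth, z)
        if rng.random() > albedo:             # absorbed
            break
        uz = 2.0 * rng.random() - 1.0         # isotropic new direction cosine
    return max_depth

depths = np.array([simulate_photon() for _ in range(2000)])
```

The full polarimetric method would additionally carry a Stokes vector and a local reference frame per photon, rotating both at every scattering event.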

  4. Monte Carlo based investigation of Berry phase for depth resolved characterization of biomedical scattering samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baba, Justin S; John, Dwayne O; Koju, Vijay

    The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computational based modeling and simulation. Ready access to super computing resources provide a means for attaining brute force solutions to stochastic light-matter interactions entailing scattering by facilitating timely propagation of sufficient (>10 million) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization sensitive Monte Carlo method of Ramella-Roman et al. to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed-up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth resolved characterization of highly scattering samples, e.g., biological tissues.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haugen, Carl C.; Forget, Benoit; Smith, Kord S.

    Most high performance computing systems being deployed currently and envisioned for the future are based on making use of heavy parallelism across many computational nodes and many concurrent cores. These types of heavily parallel systems often have relatively little memory per core but large amounts of computing capability. This places a significant constraint on how data storage is handled in many Monte Carlo codes. This is made even more significant in fully coupled multiphysics simulations, which require simulations of many physical phenomena to be carried out concurrently on individual processing nodes, which further reduces the amount of memory available for storage of Monte Carlo data. As such, there has been a move towards on-the-fly nuclear data generation to reduce memory requirements associated with interpolation between pre-generated large nuclear data tables for a selection of system temperatures. Methods have been previously developed and implemented in MIT’s OpenMC Monte Carlo code for both the resolved resonance regime and the unresolved resonance regime, but are currently absent for the thermal energy regime. While there are many components involved in generating a thermal neutron scattering cross section on-the-fly, this work will focus on a proposed method for determining the energy and direction of a neutron after a thermal incoherent inelastic scattering event. This work proposes a rejection sampling based method using the thermal scattering kernel to determine the correct outgoing energy and angle. The goal of this project is to be able to treat the full S(α,β) kernel for graphite, to assist in high fidelity simulations of the TREAT reactor at Idaho National Laboratory. The method is, however, sufficiently general to be applicable in other thermal scattering materials, and can be initially validated with the continuous analytic free gas model.
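A minimal sketch of the rejection-sampling idea proposed above, with a made-up Maxwellian-like outgoing-energy density standing in for the actual S(α,β) kernel (the target density, envelope bound, and energy cutoff are illustrative assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(7)

def target(E):
    """Unnormalized Maxwellian-like energy density: sqrt(E) * exp(-E)."""
    return np.sqrt(E) * np.exp(-E)

def sample_energy(e_max=10.0, m_bound=0.43):
    """Rejection sampling: propose uniformly on [0, e_max], accept a
    proposal E with probability target(E) / m_bound, where m_bound
    dominates the target everywhere on the interval."""
    while True:
        e = rng.uniform(0.0, e_max)
        if rng.random() * m_bound <= target(e):
            return e

samples = np.array([sample_energy() for _ in range(20_000)])
```

The appeal for on-the-fly data generation is that only pointwise kernel evaluations are needed, with no pre-tabulated cumulative distribution to store per temperature.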

  6. Massively parallelized Monte Carlo software to calculate the light propagation in arbitrarily shaped 3D turbid media

    NASA Astrophysics Data System (ADS)

    Zoller, Christian; Hohmann, Ansgar; Ertl, Thomas; Kienle, Alwin

    2017-07-01

    The Monte Carlo method is often referred to as the gold standard for calculating the light propagation in turbid media [1]. Especially for complex shaped geometries, where no analytical solutions are available, the Monte Carlo method becomes very important [1, 2]. In this work a Monte Carlo software is presented to simulate the light propagation in complex shaped geometries. To reduce the simulation time, the code is based on OpenCL, so that graphics cards as well as other computing devices can be used. Within the software an illumination concept is presented to easily realize all kinds of light sources, such as spatial frequency domain (SFD) illumination, optical fibers or Gaussian beam profiles. Moreover, different objects, which are not connected to each other, can be considered simultaneously, without any additional preprocessing. This Monte Carlo software can be used for many applications. In this work the transmission spectrum of a tooth and the color reconstruction of a virtual object are shown, using results from the Monte Carlo software.
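At the core of such photon-transport codes is sampling of free path lengths from the exponential (Beer-Lambert) distribution; a minimal CPU-side sketch (the interaction coefficient and slab depth are arbitrary demonstration values, not from this software):

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo step sampling used in photon transport codes:
# free path lengths s follow p(s) = mu_t * exp(-mu_t * s),
# i.e. s = -ln(xi) / mu_t for a uniform random number xi in (0, 1).
mu_t = 10.0                      # total interaction coefficient [1/mm]
xi = rng.random(100_000)
paths = -np.log(xi) / mu_t

# Beer-Lambert check: the fraction of photons whose first interaction
# lies beyond depth d should approach exp(-mu_t * d).
d = 0.2
transmitted = np.mean(paths > d)
```

On a GPU each photon is handled by one thread executing exactly this kind of sampling loop, which is why the method parallelizes so well.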

  7. A new approach to importance sampling for the simulation of false alarms. [in radar systems

    NASA Technical Reports Server (NTRS)

    Lu, D.; Yao, K.

    1987-01-01

    In this paper a modified importance sampling technique for improving the convergence of importance sampling is given. By using this approach to estimate low false alarm rates in radar simulations, the number of Monte Carlo runs can be reduced significantly. For one-dimensional exponential, Weibull, and Rayleigh distributions, a uniformly minimum variance unbiased estimator is obtained. For the Gaussian distribution the estimator in this approach is uniformly better than that of the previously known importance sampling approach. For a cell averaging system, by combining this technique and group sampling, the reduction in Monte Carlo runs for a reference cell of 20 and a false alarm rate of 1E-6 is on the order of 170 as compared to the previously known importance sampling approach.
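The paper's modified estimator is not reproduced here, but the basic gain of importance sampling for rare false alarms can be sketched with a mean-shifted Gaussian proposal (the threshold and sample count are illustrative assumptions):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
n = 100_000
threshold = 5.0   # false-alarm threshold, so P(X > 5) ~ 2.9e-7

# Naive Monte Carlo: essentially no samples ever exceed the threshold.
naive = np.mean(rng.standard_normal(n) > threshold)

# Importance sampling: draw from N(threshold, 1) and reweight each
# sample by the likelihood ratio phi(y) / phi(y - threshold),
# which simplifies to exp(-threshold*y + threshold^2/2).
y = rng.standard_normal(n) + threshold
weights = np.exp(-threshold * y + 0.5 * threshold**2)
is_est = np.mean((y > threshold) * weights)

true_p = 0.5 * erfc(threshold / sqrt(2.0))   # exact Gaussian tail probability
```

With the same budget, the naive estimator typically returns zero while the shifted estimator is within a few percent of the true tail probability, which is exactly the variance reduction exploited in radar false-alarm studies.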

  8. Comparisons of Wilks’ and Monte Carlo Methods in Response to the 10CFR50.46(c) Proposed Rulemaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hongbin; Szilard, Ronaldo; Zou, Ling

    The Nuclear Regulatory Commission (NRC) is proposing a new rulemaking on emergency core cooling system/loss-of-coolant accident (LOCA) performance analysis. In the proposed rulemaking, designated as 10CFR50.46(c), the US NRC put forward an equivalent cladding oxidation criterion as a function of cladding pre-transient hydrogen content. The proposed rulemaking imposes more restrictive and burnup-dependent cladding embrittlement criteria; consequently nearly all the fuel rods in a reactor core need to be analyzed under LOCA conditions to demonstrate compliance with the safety limits. New analysis methods are required to provide a thorough characterization of the reactor core in order to identify the locations of the limiting rods as well as to quantify the safety margins under LOCA conditions. With the new analysis method presented in this work, the limiting transient case and the limiting rods can be easily identified to quantify the safety margins in response to the proposed new rulemaking. In this work, the best-estimate plus uncertainty (BEPU) analysis capability for large break LOCA with the new cladding embrittlement criteria using the RELAP5-3D code is established and demonstrated with a reduced set of uncertainty parameters. Both the direct Monte Carlo method and Wilks’ nonparametric statistical method can be used to perform uncertainty quantification. Wilks’ method has become the de-facto industry standard for uncertainty quantification in BEPU LOCA analyses. Despite its widespread adoption by the industry, its use of small sample sizes to infer a statement of compliance with the existing 10CFR50.46 rule has been a major cause of unrealized operational margin in today’s BEPU methods. Moreover, the debate on the proper interpretation of Wilks’ theorem in the context of safety analyses is not fully resolved yet, even more than two decades after its introduction in the frame of safety analyses in the nuclear industry.
    This represents both a regulatory and application risk in rolling out new methods. With the 10CFR50.46(c) proposed rulemaking, the deficiencies of the Wilks’ approach are further exacerbated. The direct Monte Carlo approach offers a robust alternative to perform uncertainty quantification within the context of BEPU analyses. In this work, the Monte Carlo method is compared with the Wilks’ method in response to the NRC 10CFR50.46(c) proposed rulemaking.
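For context, the first-order, one-sided Wilks' formula underlying the industry-standard "95/95" sample size can be computed directly (this is the textbook formula, not code from the report):

```python
from math import ceil, log

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n such that the sample maximum bounds the `coverage`
    quantile of the output with probability `confidence` (first-order,
    one-sided Wilks criterion).

    Derivation: P(at least `coverage` of the distribution lies below
    the sample max) = 1 - coverage**n, so require
    1 - coverage**n >= confidence, i.e. n >= log(1-confidence)/log(coverage).
    """
    return ceil(log(1.0 - confidence) / log(coverage))

n_wilks = wilks_sample_size()   # the classic 95/95 run count
```

The small run counts this formula yields (59 code runs for 95/95) are precisely the source of the statistical conservatism, and hence the unrealized margin, that direct Monte Carlo with many more runs avoids.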

  9. Monte Carlo and Molecular Dynamics in the Multicanonical Ensemble: Connections between Wang-Landau Sampling and Metadynamics

    NASA Astrophysics Data System (ADS)

    Vogel, Thomas; Perez, Danny; Junghans, Christoph

    2014-03-01

    We show direct formal relationships between the Wang-Landau iteration [PRL 86, 2050 (2001)], metadynamics [PNAS 99, 12562 (2002)] and statistical temperature molecular dynamics [PRL 97, 050601 (2006)], the major Monte Carlo and molecular dynamics workhorses for sampling from a generalized, multicanonical ensemble. We aim at helping to consolidate the developments in the different areas by indicating how methodological advancements can be transferred in a straightforward way, avoiding the parallel, largely independent development tracks observed in the past.

  10. Validation of a Monte Carlo Simulation of Binary Time Series.

    DTIC Science & Technology

    1981-09-18

    the probability distribution corresponding to the population from which the n sample vectors are generated. Simple unbiased estimators were chosen for... [OCR-garbled report-cover text omitted; Bethesda, Maryland; September 18, 1981] ...is generated from the sample of such vectors produced by several independent replications of the Monte Carlo simulation. Then the validity of the

  11. Methods for calculating the absolute entropy and free energy of biological systems based on ideas from polymer physics.

    PubMed

    Meirovitch, Hagai

    2010-01-01

    The commonly used simulation techniques, Metropolis Monte Carlo (MC) and molecular dynamics (MD) are of a dynamical type which enables one to sample system configurations i correctly with the Boltzmann probability, P(i)(B), while the value of P(i)(B) is not provided directly; therefore, it is difficult to obtain the absolute entropy, S approximately -ln P(i)(B), and the Helmholtz free energy, F. With a different simulation approach developed in polymer physics, a chain is grown step-by-step with transition probabilities (TPs), and thus their product is the value of the construction probability; therefore, the entropy is known. Because all exact simulation methods are equivalent, i.e. they lead to the same averages and fluctuations of physical properties, one can treat an MC or MD sample as if its members have rather been generated step-by-step. Thus, each configuration i of the sample can be reconstructed (from nothing) by calculating the TPs with which it could have been constructed. This idea applies also to bulk systems such as fluids or magnets. This approach has led earlier to the "local states" (LS) and the "hypothetical scanning" (HS) methods, which are approximate in nature. A recent development is the hypothetical scanning Monte Carlo (HSMC) (or molecular dynamics, HSMD) method which is based on stochastic TPs where all interactions are taken into account. In this respect, HSMC(D) can be viewed as exact and the only approximation involved is due to insufficient MC(MD) sampling for calculating the TPs. The validity of HSMC has been established by applying it first to liquid argon, TIP3P water, self-avoiding walks (SAW), and polyglycine models, where the results for F were found to agree with those obtained by other methods. Subsequently, HSMD was applied to mobile loops of the enzymes porcine pancreatic alpha-amylase and acetylcholinesterase in explicit water, where the difference in F between the bound and free states of the loop was calculated. 
    Currently, HSMD is being extended for calculating the absolute and relative free energies of ligand-enzyme binding. We describe the whole approach and discuss future directions.

  12. RESPONDENT-DRIVEN SAMPLING AS MARKOV CHAIN MONTE CARLO

    PubMed Central

    GOEL, SHARAD; SALGANIK, MATTHEW J.

    2013-01-01

    Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present respondent-driven sampling as Markov chain Monte Carlo (MCMC) importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating respondent-driven sampling studies. PMID:19572381

  13. A comparison between Gauss-Newton and Markov chain Monte Carlo based methods for inverting spectral induced polarization data for Cole-Cole parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.

    2008-05-15

    We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
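A minimal random-walk Metropolis sketch of the MCMC idea, with a toy one-dimensional Gaussian standing in for the actual Cole-Cole posterior (the target, step size, and chain length are illustrative assumptions, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(theta):
    """Toy log-posterior: Gaussian with mean 2.0 and std 0.5
    (a stand-in for a real SIP misfit surface; illustrative only)."""
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def metropolis(n_steps=50_000, step=1.0, theta0=0.0):
    """Random-walk Metropolis sampler over one parameter."""
    chain = np.empty(n_steps)
    theta, lp = theta0, log_post(theta0)
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:       # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis()[5_000:]    # discard burn-in
```

The chain's histogram approximates the full marginal posterior, which is exactly the "extensive global information" (means plus credible intervals) that a single Gauss-Newton point estimate cannot provide.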

  14. Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN

    NASA Astrophysics Data System (ADS)

    Talbot, Paul W.

    As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. 
We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
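The contrast with random sampling can be illustrated in one dimension: stochastic collocation evaluates the model only at Gauss-Hermite nodes of the input distribution instead of at Monte Carlo samples (the test functions below are illustrative; this is not the RAVEN implementation):

```python
import numpy as np

def collocation_mean(model, n_nodes=8):
    """E[model(X)] for X ~ N(0, 1) via Gauss-Hermite quadrature.

    hermgauss supplies nodes/weights for integrals against exp(-x^2);
    the change of variables x -> sqrt(2)*x and division by sqrt(pi)
    adapts them to the standard normal density.
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    return np.sum(weights * model(np.sqrt(2.0) * nodes)) / np.sqrt(np.pi)

mean_x2 = collocation_mean(lambda x: x**2)   # exact answer: 1.0
mean_exp = collocation_mean(np.exp)          # exact answer: exp(0.5)
```

With eight model evaluations the quadrature is exact for polynomials up to degree 15, whereas Monte Carlo would need thousands of samples for comparable accuracy; the curse of dimensionality in tensorizing such rules is what motivates the truncated and adaptive polynomial spaces studied in this work.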

  15. Hydrologic Model Selection using Markov chain Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Marshall, L.; Sharma, A.; Nott, D.

    2002-12-01

    Estimation of parameter uncertainty (and in turn model uncertainty) allows assessment of the risk in likely applications of hydrological models. Bayesian statistical inference provides an ideal means of assessing parameter uncertainty whereby prior knowledge about the parameter is combined with information from the available data to produce a probability distribution (the posterior distribution) that describes uncertainty about the parameter and serves as a basis for selecting appropriate values for use in modelling applications. Widespread use of Bayesian techniques in hydrology has been hindered by difficulties in summarizing and exploring the posterior distribution. These difficulties have been largely overcome by recent advances in Markov chain Monte Carlo (MCMC) methods that involve random sampling of the posterior distribution. This study presents an adaptive MCMC sampling algorithm which has characteristics that are well suited to model parameters with a high degree of correlation and interdependence, as is often evident in hydrological models. The MCMC sampling technique is used to compare six alternative configurations of a commonly used conceptual rainfall-runoff model, the Australian Water Balance Model (AWBM), using 11 years of daily rainfall runoff data from the Bass river catchment in Australia. The alternative configurations considered fall into two classes - those that consider model errors to be independent of prior values, and those that model the errors as an autoregressive process. Each such class consists of three formulations that represent increasing levels of complexity (and parameterisation) of the original model structure. The results from this study point both to the importance of using Bayesian approaches in evaluating model performance, as well as the simplicity of the MCMC sampling framework that has the ability to bring such approaches within the reach of the applied hydrological community.

  16. An adaptive Bayesian inference algorithm to estimate the parameters of a hazardous atmospheric release

    NASA Astrophysics Data System (ADS)

    Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques

    2015-12-01

    In the eventuality of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action for first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte-Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to those coming from the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides a better sampling efficiency by reusing all the generated samples.

  17. Applying Monte-Carlo simulations to optimize an inelastic neutron scattering system for soil carbon analysis

    USDA-ARS?s Scientific Manuscript database

    Computer Monte-Carlo (MC) simulations (Geant4) of neutron propagation and acquisition of the gamma response from soil samples were applied to evaluate INS system performance characteristics [sensitivity, minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...

  18. Exploring Mass Perception with Markov Chain Monte Carlo

    ERIC Educational Resources Information Center

    Cohen, Andrew L.; Ross, Michael G.

    2009-01-01

    Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…

  19. The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sellier, J.M., E-mail: jeanmichel.sellier@parallel.bas.bg; Dimov, I.

    2014-09-15

    The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation which represents an incredible mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We, then, proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.

  20. Baseball Monte Carlo Style.

    ERIC Educational Resources Information Center

    Houser, Larry L.

    1981-01-01

    Monte Carlo methods are used to simulate activities in baseball such as a team's "hot streak" and a hitter's "batting slump." Student participation in such simulations is viewed as a useful method of giving pupils a better understanding of the probability concepts involved. (MP)
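A batting-slump simulation of the kind described can be sketched as follows (the batting average and season length are illustrative assumptions):

```python
import random

random.seed(2024)

def longest_hitless_streak(batting_avg=0.300, at_bats=500):
    """Simulate a season of independent at-bats; return the longest
    run of consecutive at-bats without a hit (a "batting slump")."""
    longest = current = 0
    for _ in range(at_bats):
        if random.random() < batting_avg:   # a hit ends the streak
            current = 0
        else:                               # an out extends it
            current += 1
            longest = max(longest, current)
    return longest

slumps = [longest_hitless_streak() for _ in range(1000)]
```

The pedagogical point of such exercises is that even a .300 hitter routinely endures double-digit hitless streaks by pure chance, a result students find surprising until they run the simulation.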

  1. An equation-free probabilistic steady-state approximation: dynamic application to the stochastic simulation of biochemical reaction networks.

    PubMed

    Salis, Howard; Kaznessis, Yiannis N

    2005-12-01

    Stochastic chemical kinetics more accurately describes the dynamics of "small" chemical systems, such as biological cells. Many real systems contain dynamical stiffness, which causes the exact stochastic simulation algorithm or other kinetic Monte Carlo methods to spend the majority of their time executing frequently occurring reaction events. Previous methods have successfully applied a type of probabilistic steady-state approximation by deriving an evolution equation, such as the chemical master equation, for the relaxed fast dynamics and using the solution of that equation to determine the slow dynamics. However, because the solution of the chemical master equation is limited to small, carefully selected, or linear reaction networks, an alternate equation-free method would be highly useful. We present a probabilistic steady-state approximation that separates the time scales of an arbitrary reaction network, detects the convergence of a marginal distribution to a quasi-steady-state, directly samples the underlying distribution, and uses those samples to accurately predict the state of the system, including the effects of the slow dynamics, at future times. The numerical method produces an accurate solution of both the fast and slow reaction dynamics while, for stiff systems, reducing the computational time by orders of magnitude. The developed theory makes no approximations on the shape or form of the underlying steady-state distribution and only assumes that it is ergodic. We demonstrate the accuracy and efficiency of the method using multiple interesting examples, including a highly nonlinear protein-protein interaction network. The developed theory may be applied to any type of kinetic Monte Carlo simulation to more efficiently simulate dynamically stiff systems, including existing exact, approximate, or hybrid stochastic simulation techniques.
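For reference, the exact stochastic simulation algorithm (Gillespie SSA) that such approximations accelerate can be sketched for a toy reversible isomerization A ⇌ B (the species counts and rate constants are arbitrary demonstration values):

```python
import numpy as np

rng = np.random.default_rng(5)

def gillespie_isomerization(a0=100, b0=0, k1=1.0, k2=0.5, t_end=20.0):
    """Exact stochastic simulation (Gillespie SSA) of A <-> B.

    At each step: compute propensities, draw an exponential waiting
    time, then pick which reaction fires in proportion to its rate.
    """
    a, b, t = a0, b0, 0.0
    while t < t_end:
        rates = np.array([k1 * a, k2 * b])   # reaction propensities
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)    # time to the next event
        if t >= t_end:
            break
        if rng.random() * total < rates[0]:  # A -> B fires
            a, b = a - 1, b + 1
        else:                                # B -> A fires
            a, b = a + 1, b - 1
    return a, b

a_end, b_end = gillespie_isomerization()
```

For stiff networks this event-by-event loop spends nearly all its time on the fast reversible pair, which is precisely the cost the probabilistic steady-state approximation removes by sampling the fast subsystem's quasi-steady distribution directly.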

  2. Accelerate quasi Monte Carlo method for solving systems of linear algebraic equations through shared memory

    NASA Astrophysics Data System (ADS)

    Lai, Siyan; Xu, Ying; Shao, Bo; Guo, Menghan; Lin, Xiaola

    2017-04-01

    In this paper we study the Monte Carlo method for solving systems of linear algebraic equations (SLAE) based on shared memory. Former research demonstrated that GPUs can effectively speed up the computations for this problem. Our purpose is to optimize the Monte Carlo simulation specifically for the GPU memory architecture. Random numbers are organized to be stored in shared memory, which accelerates the parallel algorithm. Bank conflicts are avoided by our Collaborative Thread Arrays (CTA) scheme. The results of experiments show that the shared-memory-based strategy can speed up the computations by more than 3X.
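    For reference, the sequential algorithm that such GPU implementations parallelise is the classic Ulam-von Neumann random-walk estimator for x = Hx + b. A minimal CPU-only sketch (the 2x2 test system is an illustrative assumption; no shared-memory code is shown):

```python
import math
import random

def mc_solve_component(H, b, i, n_walks=20000, seed=1):
    """Ulam-von Neumann Monte Carlo estimate of component i of x = H x + b.

    Walks jump from row k to column j with probability |H[k][j]|; the
    leftover probability 1 - sum_j |H[k][j]| terminates the walk, so the
    row sums of |H| must be below 1 for this sketch to converge.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        k, w, score = i, 1.0, 0.0
        while True:
            score += w * b[k]                      # collision estimator
            p = [abs(h) for h in H[k]]
            s = sum(p)
            if s == 0.0 or rng.random() >= s:      # absorbed
                break
            u, acc = rng.random() * s, 0.0
            for j, pj in enumerate(p):
                acc += pj
                if pj > 0.0 and u <= acc:
                    break
            w *= math.copysign(1.0, H[k][j])       # weight factor H/p = sign(H)
            k = j
        total += score
    return total / n_walks

H = [[0.0, 0.3],
     [0.2, 0.0]]
b = [1.0, 1.0]
x0 = mc_solve_component(H, b, 0)   # exact value: 1.3 / 0.94 ~= 1.3830
```

    Each walk is independent, which is what makes the method embarrassingly parallel on a GPU; the shared-memory optimisation in the paper concerns where the per-thread random numbers live, not the estimator itself.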

  3. Harnessing graphical structure in Markov chain Monte Carlo learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stolorz, P.E.; Chew, P.C.

    1996-12-31

    The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many data mining problems. Generalized Hidden Markov Models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focuses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably-crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.

  4. Confidence intervals for correlations when data are not normal.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2017-02-01

    With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval-for example, leading to a 95 % confidence interval that had actual coverage as low as 68 %. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
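    Two of the interval families compared in the study are easy to sketch side by side: the classic Fisher z' interval and a plain percentile bootstrap (the simulated bivariate data below are an illustrative assumption, and this is not the paper's full set of eleven methods):

```python
import math
import random

def fisher_z_ci(r, n, crit=1.96):
    """Classic Fisher z' 95% confidence interval for a Pearson correlation."""
    z = 0.5 * math.log((1 + r) / (1 - r))     # Fisher transform
    half = crit / math.sqrt(n - 3)
    tanh = lambda v: (math.exp(2 * v) - 1) / (math.exp(2 * v) + 1)
    return tanh(z - half), tanh(z + half)     # back-transform the endpoints

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def percentile_bootstrap_ci(x, y, n_boot=2000, seed=0):
    """95% percentile bootstrap interval, resampling (x, y) pairs."""
    rng = random.Random(seed)
    n = len(x)
    rs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        rs.append(pearson_r([x[i] for i in idx], [y[i] for i in idx]))
    rs.sort()
    return rs[int(0.025 * n_boot)], rs[int(0.975 * n_boot) - 1]

rng = random.Random(42)
x = [rng.gauss(0.0, 1.0) for _ in range(60)]
y = [0.6 * xi + 0.8 * rng.gauss(0.0, 1.0) for xi in x]   # population r = 0.6
r_hat = pearson_r(x, y)
fz = fisher_z_ci(r_hat, len(x))
bs = percentile_bootstrap_ci(x, y)
```

    With skewed or heavy-tailed data the Fisher z' endpoints can badly under-cover, which is the failure mode the simulation study quantifies; the bootstrap and rank-based intervals are the alternatives it evaluates.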

  5. Non-parametric methods for cost-effectiveness analysis: the central limit theorem and the bootstrap compared.

    PubMed

    Nixon, Richard M; Wonderling, David; Grieve, Richard D

    2010-03-01

    Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
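    The two competing standard-error estimates are easy to state in code; a minimal sketch with skewed (lognormal) cost data as an illustrative stand-in for trial data:

```python
import math
import random

def clt_se(a, b):
    """CLT-based standard error of a difference in means (net benefit)."""
    def var(s):
        m = sum(s) / len(s)
        return sum((v - m) ** 2 for v in s) / (len(s) - 1)
    return math.sqrt(var(a) / len(a) + var(b) / len(b))

def bootstrap_se(a, b, n_boot=1000, seed=0):
    """Non-parametric bootstrap standard error of the same difference."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        ra = [rng.choice(a) for _ in a]        # resample each arm
        rb = [rng.choice(b) for _ in b]
        diffs.append(sum(ra) / len(ra) - sum(rb) / len(rb))
    m = sum(diffs) / n_boot
    return math.sqrt(sum((d - m) ** 2 for d in diffs) / (n_boot - 1))

rng = random.Random(1)
costs_treat = [math.exp(rng.gauss(7.0, 1.0)) for _ in range(100)]   # skewed
costs_ctrl = [math.exp(rng.gauss(6.8, 1.0)) for _ in range(100)]
se_clt = clt_se(costs_treat, costs_ctrl)
se_boot = bootstrap_se(costs_treat, costs_ctrl)
```

    For moderate-to-large samples the two estimates agree closely even under heavy skew, which is the paper's empirical finding; the CLT version is a single formula while the bootstrap costs a thousand resamples.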

  6. Investigation of the SCS-CN initial abstraction ratio using a Monte Carlo simulation for the derived flood frequency curves

    NASA Astrophysics Data System (ADS)

    Caporali, E.; Chiarello, V.; Galeati, G.

    2014-12-01

    Peak discharge estimates for a given return period are of primary importance in engineering practice for risk assessment and hydraulic structure design. Different statistical methods are chosen here for the assessment of the flood frequency curve: an indirect technique based on extreme rainfall event analysis, and the Peak Over Threshold (POT) model and the Annual Maxima approach as direct techniques using river discharge data. In the framework of the indirect method, a Monte Carlo simulation approach is adopted to determine a derived frequency distribution of peak runoff, using a probabilistic formulation of the SCS-CN method as the stochastic rainfall-runoff model. The Monte Carlo simulation generates a sample of runoff events from different stochastic combinations of rainfall depth, storm duration, and initial loss inputs. The distribution of the rainfall storm events is assumed to follow the Generalized Pareto (GP) law, whose parameters are estimated from the GEV parameters of the annual maxima data. The evaluation of the initial abstraction ratio is investigated, since it is one of the most questionable assumptions in the SCS-CN model and plays a key role in river basins characterized by high-permeability soils, mainly governed by the infiltration excess mechanism. In order to take into account the uncertainty of the model parameters, a modified approach that is able to revise and re-evaluate the original value of the initial abstraction ratio is implemented. In the POT model the choice of the threshold is an essential issue, based mainly on a compromise between bias and variance. The Generalized Extreme Value (GEV) distribution fitted to the annual maxima discharges is therefore compared with the Pareto-distributed peaks to check the suitability of the frequency-of-occurrence representation. The methodology is applied to a large dam in the Serchio river basin, located in the Tuscany Region. The application has shown that the Monte Carlo simulation technique can be a useful tool to provide more robust estimates alongside those obtained by the direct statistical methods.
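    The core of a derived-distribution approach of this kind is compact; the sketch below pushes random storm depths and an uncertain initial abstraction ratio through the SCS-CN relation (the exponential storm-depth model, the CN value, and the lambda range are illustrative assumptions, not the paper's GP/GEV fit):

```python
import random

def scs_cn_runoff(p_mm, cn, lam=0.2):
    """SCS-CN event runoff depth (mm); lam is the initial abstraction ratio."""
    s = 25400.0 / cn - 254.0            # potential maximum retention (mm)
    ia = lam * s                        # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

def derived_flood_quantile(cn, return_period, n_events=50000, seed=0):
    """Monte Carlo derived runoff quantile: sample storm depth and lambda,
    propagate through SCS-CN, read off the empirical quantile."""
    rng = random.Random(seed)
    runoffs = []
    for _ in range(n_events):
        p = rng.expovariate(1.0 / 40.0)        # storm depth (assumed, mm)
        lam = rng.uniform(0.05, 0.2)           # uncertain abstraction ratio
        runoffs.append(scs_cn_runoff(p, cn, lam))
    runoffs.sort()
    q = 1.0 - 1.0 / return_period
    return runoffs[int(q * n_events)]
```

    Sampling lambda rather than fixing it at the textbook 0.2 is what exposes the sensitivity of the derived flood frequency curve to the initial abstraction ratio.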

  7. Correlated histogram representation of Monte Carlo derived medical accelerator photon-output phase space

    DOEpatents

    Schach Von Wittenau, Alexis E.

    2003-01-01

    A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.

  8. Use of Bayesian Inference in Crystallographic Structure Refinement via Full Diffraction Profile Analysis

    PubMed Central

    Fancher, Chris M.; Han, Zhen; Levin, Igor; Page, Katharine; Reich, Brian J.; Smith, Ralph C.; Wilson, Alyson G.; Jones, Jacob L.

    2016-01-01

    A Bayesian inference method for refining crystallographic structures is presented. The distribution of model parameters is stochastically sampled using Markov chain Monte Carlo. Posterior probability distributions are constructed for all model parameters to properly quantify uncertainty by appropriately modeling the heteroskedasticity and correlation of the error structure. The proposed method is demonstrated by analyzing a National Institute of Standards and Technology silicon standard reference material. The results obtained by Bayesian inference are compared with those determined by Rietveld refinement. Posterior probability distributions of model parameters provide both estimates and uncertainties. The new method better estimates the true uncertainties in the model as compared to the Rietveld method. PMID:27550221
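    The Markov chain Monte Carlo step at the heart of such a refinement can be illustrated with a one-parameter toy problem; the data values, the flat prior, the fixed noise level, and the random-walk kernel below are all illustrative assumptions rather than the paper's heteroskedastic, correlated error model:

```python
import math
import random

def metropolis(log_post, x0, n_steps=20000, step=0.02, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept or reject
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# toy "refinement": infer a lattice-parameter-like mean from noisy data
data = [5.43, 5.41, 5.44, 5.42, 5.43, 5.45]
sigma = 0.02                                        # assumed known noise
def log_post(mu):                                   # flat prior, Gaussian likelihood
    return -sum((d - mu) ** 2 for d in data) / (2.0 * sigma ** 2)

chain = metropolis(log_post, x0=5.0)
burned = chain[5000:]                               # discard burn-in
mu_mean = sum(burned) / len(burned)
mu_sd = (sum((v - mu_mean) ** 2 for v in burned) / len(burned)) ** 0.5
```

    The posterior standard deviation delivered alongside the point estimate is the kind of uncertainty output that distinguishes this approach from a single Rietveld point fit.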

  9. Estimating air chemical emissions from research activities using stack measurement data.

    PubMed

    Ballinger, Marcel Y; Duchsherer, Cheryl J; Woodruff, Rodger K; Larson, Timothy V

    2013-03-01

    Current methods of estimating air emissions from research and development (R&D) activities use a wide range of release fractions or emission factors with bases ranging from empirical to semi-empirical. Although considered conservative, the uncertainties and confidence levels of the existing methods have not been reported. Chemical emissions were estimated from sampling data taken from four research facilities over 10 years. The approach was to use a Monte Carlo technique to create distributions of annual emission estimates for target compounds detected in source test samples. Distributions were created for each year and building sampled for compounds with sufficient detection frequency to qualify for the analysis. The results using the Monte Carlo technique without applying a filter to remove negative emission values showed almost all distributions spanning zero, and 40% of the distributions having a negative mean. This indicates that emissions are so low as to be indistinguishable from building background. Application of a filter to allow only positive values in the distribution provided a more realistic value for emissions and increased the distribution mean by an average of 16%. Release fractions were calculated by dividing the emission estimates by a building chemical inventory quantity. Two variations were used for this quantity: chemical usage, and chemical usage plus one-half standing inventory. Filters were applied so that only release fraction values from zero to one were included in the resulting distributions. Release fractions had a wide range among chemicals and among data sets for different buildings and/or years for a given chemical. Regressions of release fractions to molecular weight and vapor pressure showed weak correlations. Similarly, regressions of mean emissions to chemical usage, chemical inventory, molecular weight, and vapor pressure also gave weak correlations. 
These results highlight the difficulties in estimating emissions from R&D facilities using chemical inventory data. Air emissions from research operations are difficult to estimate because of the changing nature of research processes and the small quantity and wide variety of chemicals used. Analysis of stack measurements taken over multiple facilities and a 10-year period using a Monte Carlo technique provided a method to quantify the low emissions and to estimate release fractions based on chemical inventories. The variation in release fractions did not correlate well with factors investigated, confirming the complexities in estimating R&D emissions.
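    The positive-value filter described above amounts to resampling paired stack and background measurements and discarding non-positive differences; a toy sketch with made-up numbers (the concentration values, flow, and operating hours are illustrative assumptions, not the facilities' data):

```python
import random

def emission_distribution(concs, blanks, flow, hours,
                          n_mc=20000, positive_only=True, seed=0):
    """Monte Carlo distribution of an annual emission estimate.

    Net concentration = resampled stack value minus resampled background;
    the positive_only flag mirrors the paper's filter that removes
    negative (background-dominated) draws.
    """
    rng = random.Random(seed)
    draws = []
    while len(draws) < n_mc:
        net = rng.choice(concs) - rng.choice(blanks)
        e = net * flow * hours            # annual emitted mass
        if positive_only and e <= 0:
            continue                      # discard background-dominated draw
        draws.append(e)
    return draws

concs = [1.2, 0.8, 1.5, 0.9, 1.1]        # stack samples (illustrative)
blanks = [1.0, 0.9, 1.1]                 # building background
filtered = emission_distribution(concs, blanks, flow=1000.0, hours=8760.0)
unfiltered = emission_distribution(concs, blanks, 1000.0, 8760.0,
                                   positive_only=False)
```

    Filtering necessarily shifts the distribution mean upward; the paper reports an average increase of 16% for its data, with the exact size of the shift depending on how often the background swamps the stack signal.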

  10. Estimating numbers of females with cubs-of-the-year in the Yellowstone grizzly bear population

    USGS Publications Warehouse

    Keating, K.A.; Schwartz, C.C.; Haroldson, M.A.; Moody, D.

    2001-01-01

    For grizzly bears (Ursus arctos horribilis) in the Greater Yellowstone Ecosystem (GYE), minimum population size and allowable numbers of human-caused mortalities have been calculated as a function of the number of unique females with cubs-of-the-year (FCUB) seen during a 3-year period. This approach underestimates the total number of FCUB, thereby biasing estimates of population size and sustainable mortality. Also, it does not permit calculation of valid confidence bounds. Many statistical methods can resolve or mitigate these problems, but there is no universal best method. Instead, relative performances of different methods can vary with population size, sample size, and degree of heterogeneity among sighting probabilities for individual animals. We compared 7 nonparametric estimators, using Monte Carlo techniques to assess performances over the range of sampling conditions deemed plausible for the Yellowstone population. Our goal was to estimate the number of FCUB present in the population each year. Our evaluation differed from previous comparisons of such estimators by including sample coverage methods and by treating individual sightings, rather than sample periods, as the sample unit. Consequently, our conclusions also differ from earlier studies. Recommendations regarding estimators and necessary sample sizes are presented, together with estimates of annual numbers of FCUB in the Yellowstone population with bootstrap confidence bounds.
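    One standard nonparametric estimator of this kind, with individual sightings as the sample unit, is the Chao1-type lower bound paired with bootstrap confidence bounds; the sketch below is illustrative and is not claimed to be one of the seven estimators the study compared:

```python
import random

def chao1(sightings):
    """Chao1-type nonparametric estimate of the number of distinct
    individuals, treating each sighting record as the sample unit."""
    counts = {}
    for ind in sightings:
        counts[ind] = counts.get(ind, 0) + 1
    s_obs = len(counts)                                  # distinct seen
    f1 = sum(1 for c in counts.values() if c == 1)       # seen exactly once
    f2 = sum(1 for c in counts.values() if c == 2)       # seen exactly twice
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0                   # bias-corrected form

def bootstrap_bounds(sightings, n_boot=2000, seed=0):
    """Percentile bootstrap confidence bounds, resampling sightings."""
    rng = random.Random(seed)
    ests = sorted(chao1([rng.choice(sightings) for _ in sightings])
                  for _ in range(n_boot))
    return ests[int(0.025 * n_boot)], ests[int(0.975 * n_boot) - 1]

# illustrative sighting records of females with cubs-of-the-year
sightings = ["F1", "F1", "F2", "F2", "F3", "F3", "F4", "F4", "F5", "F6"]
n_hat = chao1(sightings)          # 6 seen + 2^2 / (2*4) = 6.5
lo, hi = bootstrap_bounds(sightings)
```

    Because never-sighted females contribute nothing to the raw count, estimators of this family correct upward using the frequencies of rarely-sighted individuals, which is exactly the bias the abstract describes in the raw FCUB tally.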

  11. Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing

    NASA Astrophysics Data System (ADS)

    Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.

    2018-04-01

    We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance sampling technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free-energy, and the discrete valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free-energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.
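    The flavour of such Wang-Landau-type adaptive biasing can be conveyed with a heavily simplified one-dimensional sketch: a double-well target, a partition into bins, and an on-the-fly penalty on visited bins, with a frac parameter standing in for the partial-biasing fraction (everything here, including the shrinking update schedule, is an illustrative assumption rather than the authors' algorithm):

```python
import math
import random

def adaptive_biased_sampler(log_target, bins, n_steps=200000, frac=1.0, seed=0):
    """Wang-Landau-style adaptive biasing on a binned 1-D space (sketch).

    A penalty g[i] is learned on the fly so already-visited bins are
    disfavoured; sampling targets log_target(x) - frac * g[bin(x)], i.e.
    only a fraction of the learned free energy biases the dynamics.
    """
    rng = random.Random(seed)
    g = [0.0] * (len(bins) - 1)
    counts = [0] * (len(bins) - 1)
    def bin_of(x):
        for i in range(len(bins) - 1):
            if bins[i] <= x < bins[i + 1]:
                return i
        return None
    x = 0.5 * (bins[0] + bins[-1])
    delta = 1.0
    for step in range(n_steps):
        bi = bin_of(x)
        prop = x + rng.gauss(0.0, 0.3)
        bp = bin_of(prop)
        if bp is not None:                 # reject proposals off the grid
            la = (log_target(prop) - frac * g[bp]) \
                 - (log_target(x) - frac * g[bi])
            if math.log(rng.random()) < la:
                x, bi = prop, bp
        g[bi] += delta                     # penalize the currently occupied bin
        counts[bi] += 1
        delta = max(1e-3, 1.0 / (1.0 + step * 0.001))   # shrinking update
    return g, counts

# bimodal double-well target with modes near x = -1 and x = +1
log_target = lambda x: -(x * x - 1.0) ** 2 / 0.05
bins = [-2.0 + 0.5 * i for i in range(9)]   # [-2, 2] split into 8 bins
g, counts = adaptive_biased_sampler(log_target, bins)
```

    An unbiased random-walk sampler would stay trapped in one well at this temperature; the accumulated penalties flatten the biased landscape so both modes are visited, and with frac < 1 only part of the learned free energy is used, which is the paper's device for enlarging the effective sample size of the importance-sampling estimator.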

  12. Atomistic Monte Carlo Simulation of Lipid Membranes

    PubMed Central

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules of which analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314

  13. Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit

    NASA Astrophysics Data System (ADS)

    Vittaldev, Vivek; Russell, Ryan P.

    2017-09-01

    Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
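    The kernel of the benchmark MC calculation is simple enough to sketch on the CPU; the isotropic Gaussian relative-position uncertainty and the numbers below are illustrative assumptions (a real tool propagates full covariances along the encounter window):

```python
import random

def collision_probability(mean_sep, cov_scale, radius, n_samples=200000, seed=0):
    """MC estimate of collision probability at closest approach (sketch).

    Relative position uncertainty is modelled as an isotropic Gaussian in
    3-D around the nominal miss vector; a 'hit' is a sample whose
    separation is below the combined collision radius.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        dx = mean_sep[0] + rng.gauss(0.0, cov_scale)
        dy = mean_sep[1] + rng.gauss(0.0, cov_scale)
        dz = mean_sep[2] + rng.gauss(0.0, cov_scale)
        if dx * dx + dy * dy + dz * dz < radius * radius:
            hits += 1
    return hits / n_samples

# 100 m nominal miss, 200 m position sigma, 50 m combined radius (assumed)
p = collision_probability((100.0, 0.0, 0.0), 200.0, 50.0)
```

    The GPU version runs essentially this sampling loop across thousands of threads; because each sample is independent, the achievable speedup is limited mainly by memory layout and random-number generation, which is the optimization problem the paper addresses.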

  14. Molecular dynamics and Monte Carlo simulations for protein-ligand binding and inhibitor design.

    PubMed

    Cole, Daniel J; Tirado-Rives, Julian; Jorgensen, William L

    2015-05-01

    Non-nucleoside inhibitors of HIV reverse transcriptase are an important component of treatment against HIV infection. Novel inhibitors are sought that increase potency against variants that contain the Tyr181Cys mutation. Molecular dynamics based free energy perturbation simulations have been run to study factors that contribute to protein-ligand binding, and the results are compared with those from previous Monte Carlo based simulations and activity data. Predictions of protein-ligand binding modes are very consistent for the two simulation methods; the accord is attributed to the use of an enhanced sampling protocol. The Tyr181Cys binding pocket supports large, hydrophobic substituents, which is in good agreement with experiment. Although some discrepancies exist between the results of the two simulation methods and experiment, free energy perturbation simulations can be used to rapidly test small molecules for gains in binding affinity. Free energy perturbation methods show promise in providing fast, reliable and accurate data that can be used to complement experiment in lead optimization projects. This article is part of a Special Issue entitled "Recent developments of molecular dynamics". Copyright © 2014 Elsevier B.V. All rights reserved.
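    In its simplest form, the free energy perturbation machinery referred to above reduces to the Zwanzig identity dF = -kT ln < exp(-(U1 - U0)/kT) >_0. A toy sketch on a shifted harmonic well where the exact answer is dF = 0 (the potentials and the exact Boltzmann sampling are illustrative assumptions, far removed from a protein-ligand system):

```python
import math
import random

def fep_delta_f(u0, u1, sample_x0, n=100000, kT=1.0, seed=0):
    """Zwanzig free energy perturbation estimator.

    sample_x0 draws configurations from the Boltzmann distribution of U0;
    the free energy difference is -kT * log of the average Boltzmann
    factor of the energy perturbation U1 - U0.
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = sample_x0(rng)
        acc += math.exp(-(u1(x) - u0(x)) / kT)
    return -kT * math.log(acc / n)

# shifted harmonic well: U0 = x^2/2, U1 = (x - 0.5)^2/2, exact dF = 0
u0 = lambda x: 0.5 * x * x
u1 = lambda x: 0.5 * (x - 0.5) ** 2
# exact Boltzmann samples for U0 at kT = 1 are standard normal draws
dF = fep_delta_f(u0, u1, lambda rng: rng.gauss(0.0, 1.0))
```

    In practice the samples come from molecular dynamics or Monte Carlo trajectories rather than exact draws, and a large perturbation is broken into many small windows; the enhanced sampling protocol credited in the abstract is what keeps each window's average well behaved.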

  15. Integration of Monte-Carlo ray tracing with a stochastic optimisation method: application to the design of solar receiver geometry.

    PubMed

    Asselineau, Charles-Alexis; Zapata, Jose; Pye, John

    2015-06-01

    A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.

  16. Bayesian prediction of future ice sheet volume using local approximation Markov chain Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Davis, A. D.; Heimbach, P.; Marzouk, Y.

    2017-12-01

    We develop a Bayesian inverse modeling framework for predicting future ice sheet volume with associated formal uncertainty estimates. Marine ice sheets are drained by fast-flowing ice streams, which we simulate using a flowline model. Flowline models depend on geometric parameters (e.g., basal topography), parameterized physical processes (e.g., calving laws and basal sliding), and climate parameters (e.g., surface mass balance), most of which are unknown or uncertain. Given observations of ice surface velocity and thickness, we define a Bayesian posterior distribution over static parameters, such as basal topography. We also define a parameterized distribution over variable parameters, such as future surface mass balance, which we assume are not informed by the data. Hyperparameters are used to represent climate change scenarios, and sampling their distributions mimics internal variation. For example, a warming climate corresponds to increasing mean surface mass balance but an individual sample may have periods of increasing or decreasing surface mass balance. We characterize the predictive distribution of ice volume by evaluating the flowline model given samples from the posterior distribution and the distribution over variable parameters. Finally, we determine the effect of climate change on future ice sheet volume by investigating how changing the hyperparameters affects the predictive distribution. We use state-of-the-art Bayesian computation to address computational feasibility. Characterizing the posterior distribution (using Markov chain Monte Carlo), sampling the full range of variable parameters and evaluating the predictive model is prohibitively expensive. Furthermore, the required resolution of the inferred basal topography may be very high, which is often challenging for sampling methods. 
Instead, we leverage regularity in the predictive distribution to build a computationally cheaper surrogate over the low dimensional quantity of interest (future ice sheet volume). Continual surrogate refinement guarantees asymptotic sampling from the predictive distribution. Directly characterizing the predictive distribution in this way allows us to assess the ice sheet's sensitivity to climate variability and change.

  17. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    NASA Astrophysics Data System (ADS)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increased intensity, within the same divergence limits, ± 2 ° . This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  18. Self-learning Monte Carlo method

    DOE PAGES

    Liu, Junwei; Qi, Yang; Meng, Zi Yang; ...

    2017-01-04

    Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. Lastly, we demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10–20 times speedup.

  19. On the treatment of ill-conditioned cases in the Monte Carlo library least-squares approach for inverse radiation analyzers

    NASA Astrophysics Data System (ADS)

    Meric, Ilker; Johansen, Geir A.; Holstad, Marie B.; Mattingly, John; Gardner, Robin P.

    2012-05-01

    Prompt gamma-ray neutron activation analysis (PGNAA) has been and still is one of the major methods of choice for the elemental analysis of various bulk samples. This is mostly due to the fact that PGNAA offers a rapid, non-destructive and on-line means of sample interrogation. The quantitative analysis of the prompt gamma-ray data could, on the other hand, be performed either through the single peak analysis or the so-called Monte Carlo library least-squares (MCLLS) approach, of which the latter has been shown to be more sensitive and more accurate than the former. The MCLLS approach is based on the assumption that the total prompt gamma-ray spectrum of any sample is a linear combination of the contributions from the individual constituents or libraries. This assumption leads to, through the minimization of the chi-square value, a set of linear equations which has to be solved to obtain the library multipliers, a process that involves the inversion of the covariance matrix. The least-squares solution may be extremely uncertain due to the ill-conditioning of the covariance matrix. The covariance matrix will become ill-conditioned whenever, in the subsequent calculations, two or more libraries are highly correlated. The ill-conditioning will also be unavoidable whenever the sample contains trace amounts of certain elements or elements with significantly low thermal neutron capture cross-sections. In this work, a new iterative approach, which can handle the ill-conditioning of the covariance matrix, is proposed and applied to a hydrocarbon multiphase flow problem in which the parameters of interest are the separate amounts of the oil, gas, water and salt phases. The results of the proposed method are also compared with the results obtained through the implementation of a well-known regularization method, the truncated singular value decomposition. Final calculations indicate that the proposed approach would be able to treat ill-conditioned cases appropriately.
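    The instability and the truncated-SVD remedy mentioned above can be demonstrated with two nearly collinear "library" vectors; this pure-Python sketch handles the two-library case via the 2x2 Gram matrix (the synthetic spectra and noise level are illustrative assumptions, not PGNAA data):

```python
import math
import random

def tsvd_solve_2col(a1, a2, b, k):
    """Truncated-SVD least squares for a two-column design matrix [a1 a2].

    Keeping only the k largest singular values suppresses the noise
    blow-up caused by an ill-conditioned (nearly collinear) library set.
    """
    dot = lambda u, v: sum(p * q for p, q in zip(u, v))
    g11, g12, g22 = dot(a1, a1), dot(a1, a2), dot(a2, a2)
    tr, det = g11 + g22, g11 * g22 - g12 * g12
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    eigs = [tr / 2.0 + disc, tr / 2.0 - disc]         # eigenvalues of A^T A
    atb = (dot(a1, b), dot(a2, b))
    x = [0.0, 0.0]
    for lam in eigs[:k]:
        v = (lam - g22, g12)                          # eigenvector for lam
        norm = math.hypot(v[0], v[1])
        if norm == 0.0:                               # diagonal Gram matrix
            v, norm = (0.0, 1.0), 1.0
        v = (v[0] / norm, v[1] / norm)
        coef = (v[0] * atb[0] + v[1] * atb[1]) / lam  # v^T A^T b / sigma^2
        x[0] += coef * v[0]
        x[1] += coef * v[1]
    return x

rng = random.Random(0)
lib1 = [rng.random() for _ in range(50)]               # library spectrum 1
lib2 = [v + 1e-6 * rng.gauss(0.0, 1.0) for v in lib1]  # almost identical
b = [1.0 * u + 2.0 * v + 1e-4 * rng.gauss(0.0, 1.0)
     for u, v in zip(lib1, lib2)]                      # true multipliers 1 and 2
x = tsvd_solve_2col(lib1, lib2, b, k=1)                # bounded: ~[1.5, 1.5]
```

    With k=1 the nearly collinear pair is treated as a single effective library and the fitted multipliers stay bounded; keeping both singular values (k=2) lets the small measurement noise be amplified by the tiny second singular value, which is precisely the ill-conditioning the proposed iterative MCLLS treatment has to guard against.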

  20. Kinetic Monte Carlo simulations of the effect of the exchange control layer thickness in CoPtCrB/CoPtCrSiO granular media

    NASA Astrophysics Data System (ADS)

    Almudallal, Ahmad M.; Mercer, J. I.; Whitehead, J. P.; Plumer, M. L.; van Ek, J.

    2018-05-01

    A hybrid Landau Lifshitz Gilbert/kinetic Monte Carlo algorithm is used to simulate experimental magnetic hysteresis loops for dual layer exchange coupled composite media. The calculation of the rate coefficients and difficulties arising from low energy barriers, a fundamental problem of the kinetic Monte Carlo method, are discussed and the methodology used to treat them in the present work is described. The results from simulations are compared with experimental vibrating sample magnetometer measurements on dual layer CoPtCrB/CoPtCrSiO media and a quantitative relationship between the thickness of the exchange control layer separating the layers and the effective exchange constant between the layers is obtained. Estimates of the energy barriers separating magnetically reversed states of the individual grains in zero applied field as well as the saturation field at sweep rates relevant to the bit write speeds in magnetic recording are also presented. The significance of this comparison between simulations and experiment and the estimates of the material parameters obtained from it are discussed in relation to optimizing the performance of magnetic storage media.

  1. Bayesian modelling of uncertainties of Monte Carlo radiative-transfer simulations

    NASA Astrophysics Data System (ADS)

    Beaujean, Frederik; Eggers, Hans C.; Kerzendorf, Wolfgang E.

    2018-04-01

    One of the big challenges in astrophysics is the comparison of complex simulations to observations. As many codes do not directly generate observables (e.g. hydrodynamic simulations), the last step in the modelling process is often a radiative-transfer treatment. For this step, the community relies increasingly on Monte Carlo radiative transfer due to the ease of implementation and scalability with computing power. We show how to estimate the statistical uncertainty given the output of just a single radiative-transfer simulation in which the number of photon packets follows a Poisson distribution and the weight (e.g. energy or luminosity) of a single packet may follow an arbitrary distribution. Our Bayesian approach produces a posterior distribution that is valid for any number of packets in a bin, even zero packets, and is easy to implement in practice. Our analytic results for large number of packets show that we generalise existing methods that are valid only in limiting cases. The statistical problem considered here appears in identical form in a wide range of Monte Carlo simulations including particle physics and importance sampling. It is particularly powerful in extracting information when the available data are sparse or quantities are small.

  2. Bayesian modelling of uncertainties of Monte Carlo radiative-transfer simulations

    NASA Astrophysics Data System (ADS)

    Beaujean, Frederik; Eggers, Hans C.; Kerzendorf, Wolfgang E.

    2018-07-01

    One of the big challenges in astrophysics is the comparison of complex simulations to observations. As many codes do not directly generate observables (e.g. hydrodynamic simulations), the last step in the modelling process is often a radiative-transfer treatment. For this step, the community relies increasingly on Monte Carlo radiative transfer due to the ease of implementation and scalability with computing power. We consider simulations in which the number of photon packets is Poisson distributed, while the weight assigned to a single photon packet follows any distribution of choice. We show how to estimate the statistical uncertainty of the sum of weights in each bin from the output of a single radiative-transfer simulation. Our Bayesian approach produces a posterior distribution that is valid for any number of packets in a bin, even zero packets, and is easy to implement in practice. Our analytic results for large number of packets show that we generalize existing methods that are valid only in limiting cases. The statistical problem considered here appears in identical form in a wide range of Monte Carlo simulations including particle physics and importance sampling. It is particularly powerful in extracting information when the available data are sparse or quantities are small.
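    In the frequentist large-count limit that these papers generalize, the per-bin estimate and its error follow from compound-Poisson statistics: if the packet count is Poisson with mean lam and packets carry i.i.d. weights w, then Var[sum of w] = lam * E[w^2], which a single run estimates by the sum of squared weights. A minimal sketch of that limiting case (the Bayesian treatment above goes beyond this, remaining valid even for zero-packet bins):

```python
import math

def bin_luminosity(weights):
    """Single-run estimate of a bin's luminosity and statistical error.

    For a compound Poisson sum S of N weights with N ~ Poisson(lam),
    E[S] = lam * E[w] and Var[S] = lam * E[w^2]; both are estimated from
    the one observed packet list via sum(w) and sum(w^2).
    """
    total = sum(weights)
    sigma = math.sqrt(sum(w * w for w in weights))
    return total, sigma

total, sigma = bin_luminosity([1.0] * 100)   # equal-weight packets
```

    For 100 equal-weight packets this reproduces the familiar sqrt(N) counting error; the Bayesian posterior matters when a bin receives only a handful of packets, or none at all, where this point estimate of the error breaks down.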

  3. On-the-Fly Generation of Differential Resonance Scattering Probability Distribution Functions for Monte Carlo Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, Eva E.; Martin, William R.

    Current Monte Carlo codes use one of three models: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S(α,β) model, depending on the neutron energy and the specific Monte Carlo code. This thesis addresses the consequences of using the free gas scattering model, which assumes that the neutron interacts with atoms in thermal motion in a monatomic gas in thermal equilibrium at material temperature, T. Most importantly, the free gas model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei, where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that the exact resonance scattering model is temperature-dependent, and neglecting the resonances in the lower epithermal range can under-predict resonance absorption due to the upscattering phenomenon, leading to an over-prediction of keff by several hundred pcm. Existing methods to address this issue involve changing the neutron weights or implementing an extra rejection scheme in the free gas sampling scheme, and these all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame to continue the random walk of the neutron. The goal of this paper was to develop a sampling methodology that (1) accounted for the energy-dependent scattering cross sections in the collision analysis and (2) was performed in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials (2nd and 4th order) to approximate the scattering cross section in Blackshaw’s equations for the moments of the differential scattering PDFs. 
    These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on-the-fly during the random walk of the neutron. Results for criticality studies on fuel pin and fuel assembly calculations using the methods developed in this dissertation showed very close agreement with results using the reference Doppler-broadened rejection correction (DBRC) scheme.

  4. Monte Carlo isotopic inventory analysis for complex nuclear systems

    NASA Astrophysics Data System (ADS)

    Phruksarojanakun, Phiphat

    Monte Carlo Inventory Simulation Engine (MCise) is a newly developed method for calculating the isotopic inventory of materials. It offers the promise of modeling materials with complex processes and irradiation histories, which pose challenges for current, deterministic tools, and has strong analogies to Monte Carlo (MC) neutral particle transport. The analog method, including considerations for simple, complex and loop flows, is fully developed. In addition, six variance reduction tools provide unique capabilities of MCise to improve the statistical precision of MC simulations. Forced Reaction forces an atom to undergo a desired number of reactions in a given irradiation environment. Biased Reaction Branching primarily focuses on improving statistical results of the isotopes that are produced from rare reaction pathways. Biased Source Sampling aims at increasing the frequency of sampling rare initial isotopes as the starting particles. Reaction Path Splitting increases the population by splitting the atom at each reaction point, creating one new atom for each decay or transmutation product. Delta Tracking is recommended for high-frequency pulsing to reduce the computing time. Lastly, Weight Window is introduced as a strategy to decrease large deviations of weight due to the use of variance reduction techniques. A figure of merit is necessary to compare the efficiency of different variance reduction techniques. A number of possibilities for the figure of merit are explored, two of which are robust and subsequently used. One is based on the relative error of a known target isotope (1/R_T^2) and the other on the overall detection limit corrected by the relative error (1/(D_k R_T^2)). An automated Adaptive Variance-reduction Adjustment (AVA) tool is developed to iteratively define parameters for some variance reduction techniques in a problem with a target isotope. Sample problems demonstrate that AVA improves both precision and accuracy of a target result in an efficient manner. 
    Potential applications of MCise include molten salt fueled reactors and liquid breeders in fusion blankets. As an example, the inventory analysis of a liquid actinide fuel in the In-Zinerator, a sub-critical power reactor driven by a fusion source, is examined. The results confirm that MCise is a reliable tool for inventory analysis of complex nuclear systems.
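
    A close relative of the figures of merit described in this record is the conventional Monte Carlo FOM = 1/(R²T), with R the relative error of the tally and T the computing time; higher is better, and the FOM is roughly constant as more histories are run. A minimal sketch (the tally distributions and runtimes below are invented for illustration):

```python
import numpy as np

def figure_of_merit(estimates, runtime):
    """Conventional Monte Carlo figure of merit FOM = 1/(R^2 * T), where R is
    the relative error of the mean and T the computing time; higher is better."""
    mean = np.mean(estimates)
    sem = np.std(estimates, ddof=1) / np.sqrt(len(estimates))  # std. error of mean
    rel_err = sem / abs(mean)
    return 1.0 / (rel_err**2 * runtime)

rng = np.random.default_rng(1)
analog = rng.exponential(1.0, size=1000)        # high-variance analog tallies
biased = 1.0 + 0.1 * rng.standard_normal(1000)  # variance-reduced tallies
print(figure_of_merit(analog, runtime=1.0))
print(figure_of_merit(biased, runtime=1.5))     # slower per history, higher FOM
```

    The comparison captures the usual trade-off: a variance reduction technique that costs extra time per history still wins if it cuts R² by more than the time penalty.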

  5. Rare Event Simulation in Radiation Transport

    NASA Astrophysics Data System (ADS)

    Kollman, Craig

    This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is, with overwhelming probability, equal to zero. These problems often have high dimensional state spaces and irregular geometries, so analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. 
In the final chapter, an attempt to generalize this algorithm to a continuous state space is made. This involves partitioning the space into a finite number of cells. There is a tradeoff between additional computation per iteration and variance reduction per iteration that arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.
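
    The likelihood-ratio reweighting described above can be illustrated on a textbook rare event, P(X > 4) for a standard normal, which stands in here for a shield-penetration probability. The shifted proposal N(4, 1) is one simple (not optimal) choice:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
threshold = 4.0
n = 100_000

# Naive Monte Carlo: a ~3e-5 event, so almost no hits in 1e5 samples.
naive = (rng.standard_normal(n) > threshold).mean()

# Importance sampling: draw from the shifted proposal N(threshold, 1) and
# multiply by the likelihood ratio N(0,1)/N(threshold,1) to stay unbiased.
y = threshold + rng.standard_normal(n)
log_lr = -0.5 * y**2 + 0.5 * (y - threshold) ** 2
is_est = float(np.mean((y > threshold) * np.exp(log_lr)))

exact = 0.5 * math.erfc(threshold / math.sqrt(2.0))  # P(X > 4) for N(0,1)
print(f"naive={naive:.2e}  importance={is_est:.2e}  exact={exact:.2e}")
```

    With the same number of samples, the naive estimate sees only a handful of hits while the importance-sampling estimate is accurate to a fraction of a percent; the dissertation's adaptive algorithm learns a good proposal rather than assuming one.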

  6. On-the-Fly Generation of Differential Resonance Scattering Probability Distribution Functions for Monte Carlo Codes

    DOE PAGES

    Davidson, Eva E.; Martin, William R.

    2017-05-26

    Current Monte Carlo codes use one of three models: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S(α,β) model, depending on the neutron energy and the specific Monte Carlo code. This thesis addresses the consequences of using the free gas scattering model, which assumes that the neutron interacts with atoms in thermal motion in a monatomic gas in thermal equilibrium at material temperature, T. Most importantly, the free gas model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei, where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that the exact resonance scattering model is temperature-dependent, and neglecting the resonances in the lower epithermal range can under-predict resonance absorption due to the upscattering phenomenon, leading to an over-prediction of keff by several hundred pcm. Existing methods to address this issue involve changing the neutron weights or implementing an extra rejection scheme in the free gas sampling scheme, and these all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame to continue the random walk of the neutron. The goal of this paper was to develop a sampling methodology that (1) accounted for the energy-dependent scattering cross sections in the collision analysis and (2) was performed in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials (2nd and 4th order) to approximate the scattering cross section in Blackshaw’s equations for the moments of the differential scattering PDFs. 
    These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on-the-fly during the random walk of the neutron. Results for criticality studies on fuel pin and fuel assembly calculations using the methods developed in this dissertation showed very close agreement with results using the reference Doppler-broadened rejection correction (DBRC) scheme.

  7. Time-dependent integral equations of neutron transport for calculating the kinetics of nuclear reactors by the Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidenko, V. D., E-mail: Davidenko-VD@nrcki.ru; Zinchenko, A. S., E-mail: zin-sn@mail.ru; Harchenko, I. K.

    2016-12-15

    Integral equations for the shape functions in the adiabatic, quasi-static, and improved quasi-static approximations are presented. The approach to solving these equations by the Monte Carlo method is described.

  8. Analyzing the photonic band gaps in two-dimensional plasma photonic crystals with fractal Sierpinski gasket structure based on the Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hai-Feng, E-mail: hanlor@163.com; Key Laboratory of Radar Imaging and Microwave Photonics; Liu, Shao-Bin

    2016-08-15

    In this paper, the properties of photonic band gaps (PBGs) in two types of two-dimensional plasma-dielectric photonic crystals (2D PPCs) under a transverse-magnetic (TM) wave are theoretically investigated by a modified plane wave expansion (PWE) method into which the Monte Carlo method is introduced. The proposed PWE method can be used to calculate the band structures of 2D PPCs which possess arbitrarily shaped fillers and any lattice. The efficiency and convergence of the present method are discussed by a numerical example. The configuration of the 2D PPCs is a square lattice with fractal Sierpinski gasket structure whose constituents are homogeneous and isotropic. The type-1 PPCs are filled with dielectric cylinders in a plasma background, while the complementary structure, called type-2 PPCs, has plasma cylinders as the fillers in a dielectric background. The calculated results reveal that sufficient accuracy and good convergence can be obtained if the number of random sampling points of the Monte Carlo method is large enough. The band structures of the two types of PPCs with different fractal orders of the Sierpinski gasket structure are also computed for comparison. It is demonstrated that PBGs in the higher frequency region are more easily produced in the type-1 PPCs than in the type-2 PPCs. The Sierpinski gasket structure introduced in the 2D PPCs leads to a larger cutoff frequency and enhances and induces more PBGs in the high frequency region. The effects of the configurational parameters of the two types of PPCs on the PBGs are also investigated in detail. The results show that the PBGs of the PPCs can be easily manipulated by tuning those parameters. The present type-1 PPCs are more suitable for designing tunable compact devices.

  9. Adaptive measurement selection for progressive damage estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Wenfan; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Chattopadhyay, Aditi; Peralta, Pedro

    2011-04-01

    Noise and interference in sensor measurements degrade the quality of data and have a negative impact on the performance of structural damage diagnosis systems. In this paper, a novel adaptive measurement screening approach is presented to automatically select the most informative measurements and use them intelligently for structural damage estimation. The method is implemented efficiently in a sequential Monte Carlo (SMC) setting using particle filtering. The noise suppression and improved damage estimation capability of the proposed method is demonstrated by an application to the problem of estimating progressive fatigue damage in an aluminum compact-tension (CT) sample using noisy PZT sensor measurements.

  10. Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.

    Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
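
    Summary statistics of the kind quoted in the Results (mean percent difference, standard deviation, range) can be reproduced for any paired measurement/simulation data set as below. The dose values here are invented for illustration and are not the study's measurements:

```python
import numpy as np

# Hypothetical per-patient TLD measurements and simulated doses (mGy);
# invented for illustration, not the study's data.
tld = np.array([18.2, 21.5, 16.8, 20.1, 19.4, 22.3, 17.5, 18.9, 20.8, 19.0])
mc  = np.array([17.0, 20.9, 15.1, 19.8, 18.0, 21.5, 17.9, 17.6, 19.7, 18.4])

# Percent difference of simulation relative to measurement, per patient.
pct_diff = 100.0 * (mc - tld) / tld
print(f"mean = {pct_diff.mean():.1f}%, sd = {pct_diff.std(ddof=1):.1f}%, "
      f"range = [{pct_diff.min():.1f}%, {pct_diff.max():.1f}%]")
```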

  11. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids

    NASA Astrophysics Data System (ADS)

    Kim, Jeongnim; Baczewski, Andrew D.; Beaudet, Todd D.; Benali, Anouar; Chandler Bennett, M.; Berrill, Mark A.; Blunt, Nick S.; Josué Landinez Borda, Edgar; Casula, Michele; Ceperley, David M.; Chiesa, Simone; Clark, Bryan K.; Clay, Raymond C., III; Delaney, Kris T.; Dewing, Mark; Esler, Kenneth P.; Hao, Hongxia; Heinonen, Olle; Kent, Paul R. C.; Krogel, Jaron T.; Kylänpää, Ilkka; Li, Ying Wai; Lopez, M. Graham; Luo, Ye; Malone, Fionn D.; Martin, Richard M.; Mathuriya, Amrita; McMinis, Jeremy; Melton, Cody A.; Mitas, Lubos; Morales, Miguel A.; Neuscamman, Eric; Parker, William D.; Pineda Flores, Sergio D.; Romero, Nichols A.; Rubenstein, Brenda M.; Shea, Jacqueline A. R.; Shin, Hyeondeok; Shulenburger, Luke; Tillack, Andreas F.; Townsend, Joshua P.; Tubman, Norm M.; Van Der Goetz, Brett; Vincent, Jordan E.; ChangMo Yang, D.; Yang, Yubo; Zhang, Shuai; Zhao, Luning

    2018-05-01

    QMCPACK is an open source quantum Monte Carlo package for ab initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater–Jastrow type trial wavefunctions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary-field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit and graphical processing unit systems. We detail the program’s capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://qmcpack.org.

  12. QMCPACK: an open source ab initio quantum Monte Carlo package for the electronic structure of atoms, molecules and solids.

    PubMed

    Kim, Jeongnim; Baczewski, Andrew T; Beaudet, Todd D; Benali, Anouar; Bennett, M Chandler; Berrill, Mark A; Blunt, Nick S; Borda, Edgar Josué Landinez; Casula, Michele; Ceperley, David M; Chiesa, Simone; Clark, Bryan K; Clay, Raymond C; Delaney, Kris T; Dewing, Mark; Esler, Kenneth P; Hao, Hongxia; Heinonen, Olle; Kent, Paul R C; Krogel, Jaron T; Kylänpää, Ilkka; Li, Ying Wai; Lopez, M Graham; Luo, Ye; Malone, Fionn D; Martin, Richard M; Mathuriya, Amrita; McMinis, Jeremy; Melton, Cody A; Mitas, Lubos; Morales, Miguel A; Neuscamman, Eric; Parker, William D; Pineda Flores, Sergio D; Romero, Nichols A; Rubenstein, Brenda M; Shea, Jacqueline A R; Shin, Hyeondeok; Shulenburger, Luke; Tillack, Andreas F; Townsend, Joshua P; Tubman, Norm M; Van Der Goetz, Brett; Vincent, Jordan E; Yang, D ChangMo; Yang, Yubo; Zhang, Shuai; Zhao, Luning

    2018-05-16

    QMCPACK is an open source quantum Monte Carlo package for ab initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater-Jastrow type trial wavefunctions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary-field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit and graphical processing unit systems. We detail the program's capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://qmcpack.org.

  13. Poster — Thur Eve — 14: Improving Tissue Segmentation for Monte Carlo Dose Calculation using DECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Salvio, A.; Bedwani, S.; Carrier, J-F.

    2014-08-15

    Purpose: To improve Monte Carlo dose calculation accuracy through a new tissue segmentation technique with dual energy CT (DECT). Methods: Electron density (ED) and effective atomic number (EAN) can be extracted directly from DECT data with a stoichiometric calibration method. Images are acquired with Monte Carlo CT projections using the user code egs-cbct and reconstructed using an FDK backprojection algorithm. Calibration is performed using projections of a numerical RMI phantom. A weighted parameter algorithm then uses both EAN and ED to assign materials to voxels from DECT simulated images. This new method is compared to a standard tissue characterization from single energy CT (SECT) data using a segmented calibrated Hounsfield unit (HU) to ED curve. Both methods are compared to the reference numerical head phantom. Monte Carlo simulations on uniform phantoms of different tissues using dosxyz-nrc show discrepancies in depth-dose distributions. Results: Both SECT and DECT segmentation methods show similar performance assigning soft tissues. Performance is however improved with DECT in regions with higher density, such as bones, where it assigns materials correctly 8% more often than segmentation with SECT, considering the same set of tissues and simulated clinical CT images, i.e. including noise and reconstruction artifacts. Furthermore, Monte Carlo results indicate that kV photon beam depth-dose distributions can double between two tissues of density higher than muscle. Conclusions: A direct acquisition of ED and the added information of EAN with DECT data improves tissue segmentation and increases the accuracy of Monte Carlo dose calculation in kV photon beams.

  14. Numerically Exact Long Time Magnetization Dynamics Near the Nonequilibrium Kondo Regime

    NASA Astrophysics Data System (ADS)

    Cohen, Guy; Gull, Emanuel; Reichman, David; Millis, Andrew; Rabani, Eran

    2013-03-01

    The dynamical and steady-state spin response of the nonequilibrium Anderson impurity model to magnetic fields, bias voltages, and temperature is investigated by a numerically exact method which allows access to unprecedentedly long times. The method is based on real-time, continuous-time bold-line Monte Carlo techniques (quantum Monte Carlo sampling of diagrammatic corrections to a partial resummation) in order to compute the kernel of a memory function, which is then used to determine the reduced density matrix. The method owes its effectiveness to the fact that the memory kernel is dominated by relatively short-time properties even when the system's dynamics are long-ranged. We make predictions regarding the non-monotonic temperature dependence of the system at high bias voltage and the oscillatory quench dynamics at high magnetic fields. We also discuss extensions of the method to the computation of transport properties and correlation functions, and its suitability as an impurity solver free from the need for analytical continuation in the context of dynamical mean field theory. This work is supported by the US Department of Energy under grant DE-SC0006613, by NSF-DMR-1006282 and by the US-Israel Binational Science Foundation. GC is grateful to the Yad Hanadiv-Rothschild Foundation for the award of a Rothschild Fellowship.

  15. Free energy landscape from path-sampling: application to the structural transition in LJ38

    NASA Astrophysics Data System (ADS)

    Adjanor, G.; Athènes, M.; Calvo, F.

    2006-09-01

    We introduce a path-sampling scheme that allows equilibrium state-ensemble averages to be computed by means of a biased distribution of non-equilibrium paths. This non-equilibrium method is applied to the case of the 38-atom Lennard-Jones atomic cluster, which has a double-funnel energy landscape. We calculate the free energy profile along the Q4 bond orientational order parameter. At high or moderate temperature the results obtained using the non-equilibrium approach are consistent with those obtained using conventional equilibrium methods, including parallel tempering and Wang-Landau Monte Carlo simulations. At lower temperatures, the non-equilibrium approach becomes more efficient in exploring the relevant inherent structures. In particular, the free energy agrees with the predictions of the harmonic superposition approximation.

  16. On an adaptive preconditioned Crank-Nicolson MCMC algorithm for infinite dimensional Bayesian inference

    NASA Astrophysics Data System (ADS)

    Hu, Zixi; Yao, Zhewei; Li, Jinglai

    2017-03-01

    Many scientific and engineering problems require Bayesian inference for unknowns of infinite dimension. In such problems, many standard Markov chain Monte Carlo (MCMC) algorithms become arbitrarily slow under mesh refinement, a behaviour referred to as dimension dependence. To address this, a family of dimension-independent MCMC algorithms, known as preconditioned Crank-Nicolson (pCN) methods, was proposed to sample the infinite dimensional parameters. In this work we develop an adaptive version of the pCN algorithm, in which the covariance operator of the proposal distribution is adjusted based on the sampling history to improve the simulation efficiency. We show that the proposed algorithm satisfies an important ergodicity condition under some mild assumptions. Finally, we provide numerical examples to demonstrate the performance of the proposed method.
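
    A minimal (non-adaptive) pCN sampler can be sketched as follows, assuming a standard Gaussian prior so that the proposal x' = sqrt(1 − β²)x + βξ leaves the prior invariant and the acceptance ratio involves only the likelihood. The Gaussian toy target is an arbitrary choice; the paper's adaptive covariance update is omitted:

```python
import numpy as np

def pcn_sample(log_likelihood, x0, beta=0.2, n_steps=20000, rng=None):
    """Preconditioned Crank-Nicolson MCMC with a standard Gaussian prior.
    The proposal x' = sqrt(1-beta^2)*x + beta*xi is prior-preserving, so the
    Metropolis-Hastings acceptance ratio involves only the likelihood."""
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)
    ll = log_likelihood(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        prop = np.sqrt(1.0 - beta**2) * x + beta * rng.standard_normal(x.size)
        ll_prop = log_likelihood(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            x, ll = prop, ll_prop
        chain[i] = x
    return chain

# Toy target: unit-variance Gaussian likelihood centred at 1; combined with
# the N(0, I) prior, the posterior is N(0.5, 0.5*I).
chain = pcn_sample(lambda x: -0.5 * np.sum((x - 1.0) ** 2),
                   np.zeros(3), rng=np.random.default_rng(3))
print(chain[2000:].mean(axis=0))  # analytic posterior mean is 0.5 per coordinate
```

    The point of pCN is that, unlike a plain random walk, the acceptance rate of this proposal does not collapse as the discretisation of the unknown is refined.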

  17. Resonance line transfer calculations by doubling thin layers. I - Comparison with other techniques. II - The use of the R-parallel redistribution function. [planetary atmospheres]

    NASA Technical Reports Server (NTRS)

    Yelle, Roger V.; Wallace, Lloyd

    1989-01-01

    A versatile and efficient technique for the solution of the resonance line scattering problem with frequency redistribution in planetary atmospheres is introduced. Similar to the doubling approach commonly used in monochromatic scattering problems, the technique has been extended to include the frequency dependence of the radiation field. Methods for solving problems with external or internal sources and coupled spectral lines are presented, along with comparison of some sample calculations with results from Monte Carlo and Feautrier techniques. The doubling technique has also been applied to the solution of resonance line scattering problems where the R-parallel redistribution function is appropriate, both neglecting and including polarization as developed by Yelle and Wallace (1989). With the constraint that the atmosphere is illuminated from the zenith, the only difficulty of consequence is that of performing precise frequency integrations over the line profiles. With that problem solved, it is no longer necessary to use the Monte Carlo method to solve this class of problem.

  18. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling

    NASA Astrophysics Data System (ADS)

    Núñez, M.; Robie, T.; Vlachos, D. G.

    2017-10-01

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
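
    The rejection-free event loop underlying lattice KMC is the same as Gillespie's direct method: sample the waiting time from the total rate, then pick an event proportionally to its rate. A minimal sketch on a toy well-mixed two-reaction network (not the lattice surface chemistry of the record; rates and species are invented):

```python
import numpy as np

def gillespie(rates_fn, stoich, state, t_end, rng):
    """Rejection-free KMC (Gillespie direct method): draw the waiting time
    from the total rate, then choose a channel with probability rate/total."""
    t, traj = 0.0, [(0.0, state.copy())]
    while t < t_end:
        rates = rates_fn(state)
        total = rates.sum()
        if total == 0.0:
            break  # no events possible
        t += rng.exponential(1.0 / total)
        channel = rng.choice(len(rates), p=rates / total)
        state = state + stoich[channel]
        traj.append((t, state))
    return traj

# Toy network: A -> B at rate k1*A, B -> A at rate k2*B.
stoich = np.array([[-1, 1], [1, -1]])
k1, k2 = 2.0, 1.0
traj = gillespie(lambda s: np.array([k1 * s[0], k2 * s[1]]), stoich,
                 np.array([100, 0]), t_end=10.0, rng=np.random.default_rng(4))
print(traj[-1][1])  # balance k1*A = k2*B gives A:B near 1:2 at long times
```

    Stiffness in the sense of the record appears when one channel's rate dwarfs the others, so nearly all simulated events are fast quasi-equilibrated hops; rescaling such rates is what makes the simulation (and likelihood-ratio sensitivities) tractable.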

  19. Development of accelerated Raman and fluorescent Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Dumont, Alexander P.; Patil, Chetan

    2018-02-01

    Monte Carlo (MC) modeling of photon propagation in turbid media is an essential tool for understanding optical interactions between light and tissue. Insight gathered from the outputs of MC models assists in mapping between detected optical signals and bulk tissue optical properties, and as such has proven useful for inverse calculations of tissue composition and for optimizing the design of optical probes. MC models of Raman scattering have previously been implemented without consideration of background autofluorescence, despite its presence in raw measurements. Modeling both Raman and fluorescence profiles at high spectral resolution requires a significant increase in computation, but is more appropriate for investigating issues such as detection limits. We present a new Raman-fluorescence MC model developed atop an existing GPU-parallelized MC framework that can run more than 300x faster than CPU methods. This robust acceleration allows for the efficient production of both Raman and fluorescence outputs from the MC model. In addition, the model can handle arbitrary sample morphologies and excitation and collection geometries to more appropriately mimic experimental settings. We will present the model framework and initial results.

  20. SU-F-T-148: Are the Approximations in Analytic Semi-Empirical Dose Calculation Algorithms for Intensity Modulated Proton Therapy for Complex Heterogeneities of Head and Neck Clinically Significant?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yepes, P; UT MD Anderson Cancer Center, Houston, TX; Titt, U

    2016-06-15

    Purpose: Evaluate the differences in dose distributions between the proton analytic semi-empirical dose calculation algorithm used in the clinic and Monte Carlo calculations for a sample of 50 head-and-neck (H&N) patients, and estimate the potential clinical significance of the differences. Methods: A cohort of 50 H&N patients, treated at the University of Texas Cancer Center with Intensity Modulated Proton Therapy (IMPT), were selected for evaluation of the clinical significance of approximations in computed dose distributions. The H&N site was selected because of the highly inhomogeneous nature of the anatomy. The Fast Dose Calculator (FDC), a fast track-repeating accelerated Monte Carlo algorithm for proton therapy, was utilized for the calculation of dose distributions delivered during treatment plans. Because of its short processing time, FDC allows for the processing of large cohorts of patients. FDC has been validated against GEANT4, a full Monte Carlo system, and against measurements in water and in inhomogeneous phantoms. A gamma-index analysis, DVHs, EUDs, and TCPs and NTCPs computed using published models were utilized to evaluate the differences between the Treatment Planning System (TPS) and FDC. Results: The Monte Carlo results systematically predict a lower dose delivered in the target. The observed differences can be as large as 8 Gy and should have a clinical impact. Gamma analysis also showed significant differences between the two approaches, especially for the target volumes. Conclusion: Monte Carlo calculations with fast algorithms are practical and should be considered for the clinic, at least as a treatment plan verification tool.
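
    The gamma-index comparison mentioned in the Results can be sketched in 1D (clinical gamma analysis is 3D with interpolation; the Gaussian dose profiles and 3%/3 mm criteria below are illustrative choices, not the study's data):

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, x, dose_tol=0.03, dist_tol=3.0):
    """1D global gamma index: for each reference point, the minimum over all
    evaluated points of sqrt((dDose/dose_norm)^2 + (dx/dist_tol)^2).
    dose_tol is a fraction of the reference maximum; dist_tol is in mm."""
    dose_norm = dose_tol * dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / dose_norm
        dx = (x - xi) / dist_tol
        gam[i] = np.sqrt(dd**2 + dx**2).min()
    return gam

x = np.linspace(0.0, 100.0, 201)                 # positions in mm
ref = np.exp(-(((x - 50.0) / 20.0) ** 2))        # toy reference dose profile
ev = 0.98 * np.exp(-(((x - 51.0) / 20.0) ** 2))  # slightly scaled and shifted
gam = gamma_index(ref, ev, x)
print(f"3%/3mm pass rate: {100.0 * (gam <= 1.0).mean():.1f}%")
```

    A point passes when gamma ≤ 1, i.e. some evaluated point agrees within the combined dose and distance tolerance; reporting the pass rate summarizes agreement over the whole distribution.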

  1. Ascertainment correction for Markov chain Monte Carlo segregation and linkage analysis of a quantitative trait.

    PubMed

    Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E

    2007-09-01

    Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [1977, Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to the allele frequencies, bias was also found in the estimation of the highest genotypic mean when the ascertainment threshold was higher than or close to the true value of that parameter. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci. Copyright (c) 2007 Wiley-Liss, Inc.

  2. Solving the master equation without kinetic Monte Carlo: Tensor train approximations for a CO oxidation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelß, Patrick, E-mail: p.gelss@fu-berlin.de; Matera, Sebastian, E-mail: matera@math.fu-berlin.de; Schütte, Christof, E-mail: schuette@mi.fu-berlin.de

    2016-06-01

    In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train Format for this purpose. The performance of the approach is demonstrated on a first-principles based, reduced model for the CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.

  3. Of bugs and birds: Markov Chain Monte Carlo for hierarchical modeling in wildlife research

    USGS Publications Warehouse

    Link, W.A.; Cam, E.; Nichols, J.D.; Cooch, E.G.

    2002-01-01

    Markov chain Monte Carlo (MCMC) is a statistical innovation that allows researchers to fit far more complex models to data than is feasible using conventional methods. Despite its widespread use in a variety of scientific fields, MCMC appears to be underutilized in wildlife applications. This may be due to a misconception that MCMC requires the adoption of a subjective Bayesian analysis, or perhaps simply to its lack of familiarity among wildlife researchers. We introduce the basic ideas of MCMC and software BUGS (Bayesian inference using Gibbs sampling), stressing that a simple and satisfactory intuition for MCMC does not require extraordinary mathematical sophistication. We illustrate the use of MCMC with an analysis of the association between latent factors governing individual heterogeneity in breeding and survival rates of kittiwakes (Rissa tridactyla). We conclude with a discussion of the importance of individual heterogeneity for understanding population dynamics and designing management plans.
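The "simple and satisfactory intuition" for MCMC mentioned above can be made concrete in a few lines. Below is a minimal random-walk Metropolis sketch (illustrative only, not the BUGS software; the standard-normal target, proposal width, and step count are arbitrary choices):

```python
import math
import random

def metropolis(log_p, x0, steps, width=1.0, seed=1):
    """Random-walk Metropolis sampler for an unnormalised log-density log_p."""
    rng = random.Random(seed)
    x, lp = x0, log_p(x0)
    samples = []
    for _ in range(steps):
        y = x + rng.uniform(-width, width)    # symmetric proposal
        lq = log_p(y)
        if math.log(rng.random()) < lq - lp:  # accept with prob min(1, p(y)/p(x))
            x, lp = y, lq
        samples.append(x)
    return samples

# Target: standard normal; only the unnormalised log-density is needed.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, steps=20000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

The sample mean and variance should approach 0 and 1; in practice one discards an initial burn-in and tunes `width` by monitoring the acceptance rate.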

  4. Calibration with MCNP of NaI detector for the determination of natural radioactivity levels in the field.

    PubMed

    Cinelli, Giorgia; Tositti, Laura; Mostacci, Domiziano; Baré, Jonathan

    2016-05-01

    In view of assessing natural radioactivity with on-site quantitative gamma spectrometry, efficiency calibration of NaI(Tl) detectors is investigated. A calibration based on Monte Carlo simulation of the detector response is proposed, to render reliable quantitative analysis practicable in field campaigns. The method is developed with reference to contact geometry, in which measurements are taken by placing the NaI(Tl) probe directly against the solid source to be analyzed. The Monte Carlo code used for the simulations was MCNP. Experimental verification of the quality of the calibration is obtained by comparison with appropriate standards, as reported. On-site measurements yield a quick quantitative assessment of the natural radioactivity levels present ((40)K, (238)U and (232)Th). On-site gamma spectrometry can prove particularly useful insofar as it provides information on materials from which samples cannot be taken. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Molecular Dynamics, Monte Carlo Simulations, and Langevin Dynamics: A Computational Review

    PubMed Central

    Paquet, Eric; Viktor, Herna L.

    2015-01-01

    Macromolecular structures, such as neuraminidases, hemagglutinins, and monoclonal antibodies, are not rigid entities. Rather, they are characterised by their flexibility, which is the result of the interaction and collective motion of their constituent atoms. This conformational diversity has a significant impact on their physicochemical and biological properties. Among these are their structural stability, the transport of ions through the M2 channel, drug resistance, macromolecular docking, binding energy, and rational epitope design. To assess these properties and to calculate the associated thermodynamic observables, the conformational space must be efficiently sampled and the dynamics of the constituent atoms must be simulated. This paper presents algorithms and techniques that address the abovementioned issues. To this end, a computational review of molecular dynamics, Monte Carlo simulations, Langevin dynamics, and free energy calculation is presented. The exposition is made from first principles to promote a better understanding of the potentialities, limitations, applications, and interrelations of these computational methods. PMID:25785262
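Of the simulation families reviewed above, Langevin dynamics is the simplest to sketch. The following toy overdamped Langevin (Euler-Maruyama) integrator in a one-dimensional harmonic well is an illustration only, not production molecular-dynamics code; the potential, friction, and time step are arbitrary choices:

```python
import math
import random

def langevin(grad_u, x0, steps, dt=1e-2, kT=1.0, gamma=1.0, seed=2):
    """Overdamped Langevin (Euler-Maruyama) step:
    x += -grad_u(x)*dt/gamma + sqrt(2*kT*dt/gamma) * N(0,1)."""
    rng = random.Random(seed)
    x = x0
    traj = []
    sigma = math.sqrt(2.0 * kT * dt / gamma)
    for _ in range(steps):
        x += -grad_u(x) * dt / gamma + sigma * rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj

# Harmonic potential U(x) = x^2/2; the equilibrium variance of x should be kT.
traj = langevin(lambda x: x, x0=0.0, steps=100000)
var = sum(x * x for x in traj) / len(traj)
```

The noise term is what thermostats the system: the long-run distribution of `x` is Boltzmann, exp(-U(x)/kT), up to a small O(dt) discretisation bias.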

  6. Importance sampling large deviations in nonequilibrium steady states. I.

    PubMed

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  7. Importance sampling large deviations in nonequilibrium steady states. I

    NASA Astrophysics Data System (ADS)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
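The trajectory-space importance sampling discussed above generalises the familiar state-space version: sample from a proposal under which the rare event is common, then correct with likelihood-ratio weights. A minimal sketch of that underlying idea, applied to a Gaussian tail probability rather than the paper's trajectory ensembles:

```python
import math
import random

def tail_prob_is(threshold, n, seed=3):
    """Estimate P(X > threshold) for X ~ N(0,1) by sampling from the shifted
    proposal N(threshold, 1) and reweighting with the likelihood ratio p/q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)  # proposal centred on the rare region
        if x > threshold:
            # log p(x) - log q(x) = -x^2/2 + (x - threshold)^2/2
            total += math.exp(-0.5 * x * x + 0.5 * (x - threshold) ** 2)
    return total / n

# P(X > 4) is about 3.17e-5; naive sampling would need ~10^7 draws to resolve it.
est = tail_prob_is(4.0, 100000)
```

The guiding functions mentioned in the abstract play the role of this shifted proposal in trajectory space: they concentrate sampling where the rare events live so the weights do not degenerate.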

  8. Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Gilbreth, C. N.; Alhassid, Y.

    2015-03-01

    Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.

  9. Diabat Interpolation for Polymorph Free-Energy Differences.

    PubMed

    Kamat, Kartik; Peters, Baron

    2017-02-02

    Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method (J. Comput. Phys. 1976, 22, 245) can be combined with energy gaps from lattice-switch Monte Carlo techniques (Phys. Rev. E 2000, 61, 906) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.

  10. Automatic variance reduction for Monte Carlo simulations via the local importance function transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, S.A.

    1996-02-01

    The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero-variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and the selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low-density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.

  11. Backward and forward Monte Carlo method for vector radiative transfer in a two-dimensional graded index medium

    NASA Astrophysics Data System (ADS)

    Qian, Lin-Feng; Shi, Guo-Dong; Huang, Yong; Xing, Yu-Ming

    2017-10-01

    In vector radiative transfer, backward ray tracing is seldom used. We present a backward and forward Monte Carlo method to simulate vector radiative transfer in a two-dimensional graded index medium, which is new and different from the conventional Monte Carlo method. The backward and forward Monte Carlo method involves dividing the ray tracing into two processes: backward tracing and forward tracing. In multidimensional graded index media, the trajectory of a ray is usually a three-dimensional curve. During the transport of a polarization ellipse, the curved ray trajectory will induce geometrical effects and cause the Stokes parameters to change continuously. The solution processes for a non-scattering medium and an anisotropic scattering medium are analysed. We also analyse some parameters that influence the Stokes vector in two-dimensional graded index media. The research shows that the Q component of the Stokes vector cannot be ignored. However, the U and V components of the Stokes vector are very small.

  12. Applying Quantum Monte Carlo to the Electronic Structure Problem

    NASA Astrophysics Data System (ADS)

    Powell, Andrew D.; Dawes, Richard

    2016-06-01

    Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC) were tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error-bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).
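The VMC branch of the methods listed above is simple enough to sketch for a one-electron system. Below is a toy variational Monte Carlo estimate of the hydrogen-atom ground-state energy in hartree atomic units, with trial wavefunction psi = exp(-alpha*r); this is an illustration of the technique only, not the CASINO or NECI codes, and all parameters are arbitrary choices:

```python
import math
import random

def vmc_hydrogen(alpha, steps, step_size=0.5, seed=4):
    """Variational Monte Carlo for hydrogen with trial psi = exp(-alpha*r).
    Metropolis samples |psi|^2; local energy E_L = -alpha^2/2 + (alpha - 1)/r."""
    rng = random.Random(seed)
    pos = [0.5, 0.5, 0.5]
    r = math.sqrt(sum(c * c for c in pos))
    e_sum = 0.0
    for _ in range(steps):
        new = [c + rng.uniform(-step_size, step_size) for c in pos]
        r_new = math.sqrt(sum(c * c for c in new))
        # |psi(new)/psi(old)|^2 = exp(-2*alpha*(r_new - r))
        if rng.random() < math.exp(-2.0 * alpha * (r_new - r)):
            pos, r = new, r_new
        e_sum += -0.5 * alpha * alpha + (alpha - 1.0) / r
    return e_sum / steps

# The variational energy is alpha^2/2 - alpha; at alpha = 1 it is exactly -0.5.
energy = vmc_hydrogen(alpha=0.9, steps=50000)
```

The statistical error bars mentioned in the abstract arise exactly here: the returned energy is a sample mean over correlated Metropolis steps, so its uncertainty must be estimated with care (e.g. by blocking).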

  13. Dynamic Monte Carlo description of thermal desorption processes

    NASA Astrophysics Data System (ADS)

    Weinketz, Sieghard

    1994-07-01

    The applicability of the dynamic Monte Carlo method of Fichthorn and Weinberg, in which the time evolution of a system is described in terms of the absolute number of different microscopic possible events and their associated transition rates, is discussed for the case of thermal desorption simulations. It is shown that the definition of the time increment at each successful event leads naturally to the macroscopic differential equation of desorption in the case of simple first- and second-order processes, in which the only possible events are desorption and diffusion. This equivalence is numerically demonstrated for a second-order case. Subsequently, the equivalence of this method with the Monte Carlo method of Sales and Zgrablich for more complex desorption processes, allowing for lateral interactions between adsorbates, is shown, even though the dynamic Monte Carlo method does not share their limitation of requiring rapid surface diffusion. It can thus describe a more complex "kinetics" of surface reactive processes, and can therefore be applied to a wider class of phenomena, such as surface catalysis.
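The time increment referred to above is the heart of the dynamic (kinetic) Monte Carlo method: after each successful event, time advances by an exponentially distributed waiting time dt = -ln(u)/R, where R is the total rate of all possible events. A minimal sketch for first-order desorption (illustrative parameters, not the paper's model), whose run-averaged coverage should follow the macroscopic rate equation:

```python
import math
import random

def first_order_desorption(n0, k, t_end, seed=0):
    """Kinetic Monte Carlo for first-order desorption: the total rate is
    R = k*n, each event removes one adsorbate, and time advances by the
    exponential waiting time dt = -ln(u)/R."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n > 0:
        t += -math.log(rng.random()) / (k * n)  # stochastic time increment
        if t > t_end:
            break
        n -= 1
    return n

# Averaged over runs, the coverage follows the rate equation n(t) = n0*exp(-k*t).
runs = [first_order_desorption(1000, 1.0, 1.0, seed=s) for s in range(50)]
mean_n = sum(runs) / len(runs)
```

With k = 1 and t_end = 1, the mean remaining coverage should be close to 1000/e, which is the equivalence with the macroscopic differential equation that the paper demonstrates.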

  14. Transforming high-dimensional potential energy surfaces into sum-of-products form using Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Schröder, Markus; Meyer, Hans-Dieter

    2017-08-01

    We propose a Monte Carlo method, "Monte Carlo Potfit," for transforming high-dimensional potential energy surfaces evaluated on discrete grid points into a sum-of-products form, more precisely into a Tucker form. To this end we use a variational ansatz in which we replace numerically exact integrals with Monte Carlo integrals. This largely reduces the numerical cost by avoiding the evaluation of the potential on all grid points and allows a treatment of surfaces up to 15-18 degrees of freedom. We furthermore show that the error made with this ansatz can be controlled and vanishes in certain limits. We present calculations on the potential of HFCO to demonstrate the features of the algorithm. To demonstrate the power of the method, we transformed a 15D potential of the protonated water dimer (Zundel cation) in a sum-of-products form and calculated the ground and lowest 26 vibrationally excited states of the Zundel cation with the multi-configuration time-dependent Hartree method.
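The replacement of numerically exact integrals by Monte Carlo integrals, as used above, rests on the fact that a sample average over random points converges to the integral at a rate independent of dimension, whereas a tensor-product grid grows exponentially. A generic sketch of plain Monte Carlo integration over the unit hypercube (an illustration of the principle only, not the Monte Carlo Potfit code):

```python
import math
import random

def mc_integrate(f, dim, n, seed=6):
    """Monte Carlo estimate of the integral of f over the unit hypercube
    [0,1]^dim; a grid with m points per axis would need m^dim evaluations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f([rng.random() for _ in range(dim)])
    return total / n

# Separable test integrand prod(x_i) over [0,1]^10; exact value (1/2)^10.
est = mc_integrate(math.prod, 10, 200000)
```

The statistical error decays as 1/sqrt(n) regardless of `dim`, which is why the variational ansatz above can reach 15-18 degrees of freedom without evaluating the potential on every grid point.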

  15. A Monte Carlo Evaluation of Estimated Parameters of Five Shrinkage Estimate Formuli.

    ERIC Educational Resources Information Center

    Newman, Isadore; And Others

    1979-01-01

    A Monte Carlo simulation was employed to determine the accuracy with which the shrinkage in R squared can be estimated by five different shrinkage formulas. The study dealt with the use of shrinkage formulas for various sample sizes, different R squared values, and different degrees of multicollinearity. (Author/JKS)
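The abstract does not name the five formulas compared, but the Wherry/Ezekiel correction is a classic example of such a shrinkage estimate of the population R squared from a sample value; a minimal sketch:

```python
def wherry_adjusted_r2(r2, n, k):
    """Wherry/Ezekiel shrinkage formula: estimate the population R^2
    from a sample R^2 computed on n observations with k predictors."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# A sample R^2 of 0.50 with 30 cases and 5 predictors shrinks to about 0.396.
adj = wherry_adjusted_r2(0.50, n=30, k=5)
```

The shrinkage grows as the ratio of predictors to cases grows, which is why the study above varies sample size, R squared, and multicollinearity.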

  16. A Monte Carlo Evaluation of Estimated Parameters of Five Shrinkage Estimate Formuli.

    ERIC Educational Resources Information Center

    Newman, Isadore; And Others

    A Monte Carlo study was conducted to estimate the efficiency of and the relationship between five equations and the use of cross validation as methods for estimating shrinkage in multiple correlations. Two of the methods were intended to estimate shrinkage to population values and the other methods were intended to estimate shrinkage from sample…

  17. Representation of complex probabilities and complex Gibbs sampling

    NASA Astrophysics Data System (ADS)

    Salcedo, Lorenzo Luis

    2018-03-01

    Complex weights, which lie beyond a straightforward importance sampling treatment as required in Monte Carlo calculations, appear in physics; this is the well-known sign problem. The complex Langevin approach amounts to effectively constructing a positive distribution on the complexified manifold reproducing the expectation values of the observables through their analytical extension. Here we discuss the direct construction of such positive distributions, paying attention to their localization on the complexified manifold. Explicit localized representations are obtained for complex probabilities defined on Abelian and non-Abelian groups. The viability and performance of a complex version of the heat bath method, based on such representations, is analyzed.

  18. Wang-Landau Reaction Ensemble Method: Simulation of Weak Polyelectrolytes and General Acid-Base Reactions.

    PubMed

    Landsgesell, Jonas; Holm, Christian; Smiatek, Jens

    2017-02-14

    We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides a sufficient statistical accuracy such that meaningful estimates for the density of states and the partition sum can be obtained. With regard to these estimates, several thermodynamic observables like the heat capacity or reaction free energies can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
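The Wang-Landau half of the combined method above can be illustrated on a toy system with a known density of states. The sketch below estimates g(E) for N non-interacting spins, where E is the number of up spins, so the exact g(E) is a binomial coefficient; it is not the reaction-ensemble implementation, and the flatness criterion and modification-factor schedule are illustrative choices:

```python
import math
import random

def wang_landau(n_spins=10, log_f_final=1e-4, flat=0.8, seed=8):
    """Wang-Landau estimate of the density of states g(E) for n_spins
    non-interacting spins, with E = number of up spins (exact g = C(n, E))."""
    rng = random.Random(seed)
    spins = [0] * n_spins
    e = 0
    log_g = [0.0] * (n_spins + 1)
    hist = [0] * (n_spins + 1)
    log_f = 1.0
    while log_f > log_f_final:
        i = rng.randrange(n_spins)
        e_new = e + (1 - 2 * spins[i])  # flipping spin i changes E by +/- 1
        if math.log(rng.random()) < log_g[e] - log_g[e_new]:
            spins[i] = 1 - spins[i]
            e = e_new
        log_g[e] += log_f
        hist[e] += 1
        if min(hist) > flat * sum(hist) / len(hist):  # histogram "flat enough"
            hist = [0] * (n_spins + 1)
            log_f /= 2.0  # tighten the modification factor
    return [lg - log_g[0] for lg in log_g]  # normalise so g(0) = 1

log_g = wang_landau()  # log_g[5] should approximate ln C(10,5) = ln 252
```

Once the density of states is known, the partition sum and observables like the heat capacity follow at any temperature, which is the payoff the abstract describes.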

  19. Computing thermal Wigner densities with the phase integration method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beutier, J.; Borgis, D.; Vuilleumier, R.

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  20. Computing thermal Wigner densities with the phase integration method.

    PubMed

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  1. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial and correct classification probabilities when the classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  2. Respondent-driven sampling as Markov chain Monte Carlo.

    PubMed

    Goel, Sharad; Salganik, Matthew J

    2009-07-30

    Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present RDS as Markov chain Monte Carlo importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which the sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating RDS studies.
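The MCMC view of RDS described above rests on a standard fact: a random walk on an undirected network has a stationary distribution proportional to node degree, which is what RDS estimators reweight against. A minimal sketch on a hypothetical toy graph (not the paper's data):

```python
import random

def random_walk_visits(adj, start, steps, seed=9):
    """Simple random walk on an undirected graph; its stationary distribution
    is proportional to node degree, the basis for RDS importance reweighting."""
    rng = random.Random(seed)
    node = start
    visits = {v: 0 for v in adj}
    for _ in range(steps):
        node = rng.choice(adj[node])
        visits[node] += 1
    return visits

# Toy network: node 0 has degree 3, nodes 1 and 2 degree 2, node 3 degree 1.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
visits = random_walk_visits(adj, start=0, steps=100000)
# Visit fractions approach degree/(2*edges): 3/8, 2/8, 2/8, 1/8.
```

The bottleneck effect the paper analyses appears when such a graph has sparsely connected communities: the walk then mixes slowly and the variance of the estimates inflates, regardless of how healthy and infected individuals are segregated.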

  3. Off-diagonal expansion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  4. Light propagation along the pericardium meridian at human wrist as evidenced by the optical experiment and Monte Carlo method.

    PubMed

    Jiang, Yi-fan; Chen, Chang-shui; Liu, Xiao-mei; Liu, Rong-ting; Liu, Song-hao

    2015-04-01

    To explore the characteristics of light propagation along the Pericardium Meridian and its surrounding areas at the human wrist by using an optical experiment and the Monte Carlo method. An experiment was carried out to obtain the distribution of diffuse light on the Pericardium Meridian line and its surrounding areas at the wrist, and a simplified model based on the anatomical structure was then proposed to simulate the light transport within the same area by the Monte Carlo method. The experimental results agreed closely with the Monte Carlo simulation: light propagation along the Pericardium Meridian had an advantage over its surrounding areas at the wrist. This advantage was related to the components and structure of the tissue, as well as to the anatomical structure of the area through which the Pericardium Meridian line runs.

  5. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which is a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.

  6. Geometry and Dynamics for Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Barp, Alessandro; Briol, François-Xavier; Kennedy, Anthony D.; Girolami, Mark

    2018-03-01

    Markov Chain Monte Carlo methods have revolutionised mathematical computation and enabled statistical inference within many previously intractable models. In this context, Hamiltonian dynamics have been proposed as an efficient way of building chains which can explore probability densities efficiently. The method emerges from physics and geometry and these links have been extensively studied by a series of authors through the last thirty years. However, there is currently a gap between the intuitions and knowledge of users of the methodology and our deep understanding of these theoretical foundations. The aim of this review is to provide a comprehensive introduction to the geometric tools used in Hamiltonian Monte Carlo at a level accessible to statisticians, machine learners and other users of the methodology with only a basic understanding of Monte Carlo methods. This will be complemented with some discussion of the most recent advances in the field which we believe will become increasingly relevant to applied scientists.
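The Hamiltonian dynamics referred to above are simulated in practice with a symplectic leapfrog integrator followed by a Metropolis accept/reject on the total energy. A minimal one-dimensional Hamiltonian Monte Carlo sketch (standard-normal target; step size and trajectory length are illustrative choices, not recommendations):

```python
import math
import random

def hmc_step(log_p, grad_log_p, x, n_leap=20, eps=0.1, rng=random):
    """One HMC step: draw a momentum, run the leapfrog integrator, then
    accept/reject with a Metropolis test on H = -log p(x) + p^2/2."""
    p = rng.gauss(0.0, 1.0)
    x_new, p_new = x, p
    p_new += 0.5 * eps * grad_log_p(x_new)      # initial momentum half-step
    for i in range(n_leap):
        x_new += eps * p_new                    # full position step
        if i < n_leap - 1:
            p_new += eps * grad_log_p(x_new)    # full momentum step
    p_new += 0.5 * eps * grad_log_p(x_new)      # final momentum half-step
    h_old = -log_p(x) + 0.5 * p * p
    h_new = -log_p(x_new) + 0.5 * p_new * p_new
    if math.log(rng.random()) < h_old - h_new:  # Metropolis correction
        return x_new
    return x

# Sample a standard normal: log p(x) = -x^2/2, grad log p(x) = -x.
rng = random.Random(10)
x, draws = 0.0, []
for _ in range(5000):
    x = hmc_step(lambda z: -0.5 * z * z, lambda z: -z, x, rng=rng)
    draws.append(x)
mean = sum(draws) / len(draws)
var = sum(d * d for d in draws) / len(draws)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, long trajectories are accepted with high probability, giving the distant, weakly correlated proposals that make HMC efficient; the geometric structure behind this (symplecticity, volume preservation) is exactly what the review above develops.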

  7. Off-diagonal expansion quantum Monte Carlo.

    PubMed

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.

    The primary goal of this project is to evaluate x-ray spectra generated within a scanning electron microscope (SEM) to determine elemental composition of small samples. This will be accomplished by performing Monte Carlo simulations of the electron and photon interactions in the sample and in the x-ray detector. The elemental inventories will be determined by an inverse process that progressively reduces the difference between the measured and simulated x-ray spectra by iteratively adjusting composition and geometric variables in the computational model. The intended benefit of this work will be to develop a method to perform quantitative analysis on substandard samples (heterogeneous phases, rough surfaces, small sizes, etc.) without involving standard elemental samples or empirical matrix corrections (i.e., true standardless quantitative analysis).

  9. Geodesic Monte Carlo on Embedded Manifolds

    PubMed Central

    Byrne, Simon; Girolami, Mark

    2013-01-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024

  10. Metis: A Pure Metropolis Markov Chain Monte Carlo Bayesian Inference Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bates, Cameron Russell; Mckigney, Edward Allen

    The use of Bayesian inference in data analysis has become the standard for large scientific experiments [1, 2]. The Monte Carlo Codes Group (XCP-3) at Los Alamos has developed a simple set of algorithms, currently implemented in C++ and Python, to easily perform flat-prior Markov Chain Monte Carlo Bayesian inference with pure Metropolis sampling. These implementations are designed to be user friendly and extensible for customization based on specific application requirements. This document describes the algorithmic choices made and presents two use cases.
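
    The flavor of flat-prior pure Metropolis sampling can be sketched in a few lines; this is an illustrative stand-in, not the Metis API (the function name and toy Gaussian likelihood below are invented here).

```python
import numpy as np

def metropolis(log_likelihood, x0, n_samples, step=0.5, seed=0):
    """Pure Metropolis: symmetric Gaussian proposal, flat prior."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    logp = log_likelihood(x)
    chain = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        logp_prop = log_likelihood(prop)
        # Accept with probability min(1, L(prop)/L(x)); the flat prior cancels.
        if np.log(rng.random()) < logp_prop - logp:
            x, logp = prop, logp_prop
        chain[i] = x
    return chain

# Toy use: infer the mean of Gaussian data.
data = np.random.default_rng(1).normal(3.0, 1.0, 200)
loglike = lambda theta: -0.5 * np.sum((data - theta[0]) ** 2)
chain = metropolis(loglike, x0=[0.0], n_samples=5000)
posterior_mean = chain[1000:].mean()   # discard burn-in
```

    With a symmetric proposal and a flat prior, the acceptance ratio reduces to a pure likelihood ratio, which is what makes the scheme so simple to implement and extend.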

  11. Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources

    NASA Astrophysics Data System (ADS)

    Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi

    2017-01-01

    Probabilistic Power Flow (PPF) is a very useful tool for power system steady-state analysis. However, correlation among different random power injections (such as wind power) makes PPF difficult to calculate. Monte Carlo simulation (MCS) and analytical methods are the two commonly used approaches to solving PPF. MCS has high accuracy but is very time consuming. Analytical methods such as the cumulants method (CM) have high computing efficiency, but calculating the cumulants is not convenient when the wind power output does not obey any typical distribution, especially when correlated wind sources are considered. In this paper, an improved Monte Carlo simulation method (IMCS) is proposed. A joint empirical distribution is applied to model the different wind power outputs. This method combines the advantages of both MCS and analytical methods: it not only has high computing efficiency, but also provides solutions with sufficient accuracy, making it well suited for on-line analysis.
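
    One way to realize a joint empirical distribution for correlated wind sites, sketched here under assumptions the abstract does not spell out, is to combine a Gaussian copula (carrying the correlation) with each site's empirical inverse CDF (carrying the marginal). The historical records below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Historical wind-power records for two sites (synthetic stand-ins, MW).
hist = rng.weibull(2.0, size=(1000, 2)) * np.array([50.0, 80.0])

def correlated_samples(hist, corr, n):
    """Draw n joint samples that keep each site's empirical marginal."""
    d = hist.shape[1]
    cov = np.full((d, d), corr)
    np.fill_diagonal(cov, 1.0)
    z = rng.multivariate_normal(np.zeros(d), cov, size=n)
    # Rank-transform each Gaussian column to uniforms (preserves the copula)...
    ranks = z.argsort(axis=0).argsort(axis=0)
    u = (ranks + 0.5) / n
    # ...then map through each site's empirical inverse CDF.
    return np.column_stack([np.quantile(hist[:, j], u[:, j]) for j in range(d)])

wind = correlated_samples(hist, corr=0.8, n=5000)
```

    Each generated column reproduces its site's empirical distribution regardless of shape, while the Gaussian copula induces the target dependence, so no parametric distribution for the wind output is ever assumed.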

  12. Verification of unfold error estimates in the unfold operator code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fehl, D.L.; Biggs, F.

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
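
    The verification strategy generalizes to any linear unfold: compare the error-matrix (linear propagation) uncertainty against the spread of unfolds of noisy data replicas. A hedged toy version follows, with plain weighted least squares standing in for UFO's actual algorithm and all response functions invented.

```python
import numpy as np

rng = np.random.default_rng(0)

R = rng.random((12, 4))                    # response functions: 12 channels x 4 bins
s_true = np.array([5.0, 3.0, 2.0, 1.0])    # "true" spectrum
d0 = R @ s_true                            # noise-free measurements
sigma = 0.05 * d0                          # 5% (1-sigma) data imprecision

# Built-in style estimate: covariance from the error matrix (R^T W R)^-1.
W = np.diag(1.0 / sigma**2)
cov = np.linalg.inv(R.T @ W @ R)
builtin_err = np.sqrt(np.diag(cov))

# Monte Carlo estimate: unfold 100 noisy replicas and take the spread.
unfolds = []
for _ in range(100):
    d = d0 + sigma * rng.standard_normal(d0.size)
    # Weighted least squares: scale rows by 1/sigma, then solve.
    s, *_ = np.linalg.lstsq(R * (1.0 / sigma)[:, None], d / sigma, rcond=None)
    unfolds.append(s)
mc_err = np.std(unfolds, axis=0)
```

    For a linear unfold the two estimates should agree within the sampling noise of the 100 replicas, which mirrors the agreement (and residual statistical resolution) reported above.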

  13. Probabilistic Magnetotelluric Inversion with Adaptive Regularisation Using the No-U-Turns Sampler

    NASA Astrophysics Data System (ADS)

    Conway, Dennis; Simpson, Janelle; Didana, Yohannes; Rugari, Joseph; Heinson, Graham

    2018-04-01

    We present the first inversion of magnetotelluric (MT) data using a Hamiltonian Monte Carlo algorithm. The inversion of MT data is an underdetermined problem which leads to an ensemble of feasible models for a given dataset. A standard approach in MT inversion is to perform a deterministic search for the single solution which is maximally smooth for a given data-fit threshold. An alternative approach is to use Markov Chain Monte Carlo (MCMC) methods, which have been used in MT inversion to explore the entire solution space and produce a suite of likely models. This approach has the advantage of assigning confidence to resistivity models, leading to better geological interpretations. Recent advances in MCMC techniques include the No-U-Turns Sampler (NUTS), an efficient and rapidly converging method which is based on Hamiltonian Monte Carlo. We have implemented a 1D MT inversion which uses the NUTS algorithm. Our model includes a fixed number of layers of variable thickness and resistivity, as well as probabilistic smoothing constraints which allow sharp and smooth transitions. We present the results of a synthetic study and show the accuracy of the technique, as well as the fast convergence, independence of starting models, and sampling efficiency. Finally, we test our technique on MT data collected from a site in Boulia, Queensland, Australia to show its utility in geological interpretation and ability to provide probabilistic estimates of features such as depth to basement.
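
    NUTS is an adaptive variant of Hamiltonian Monte Carlo that chooses the trajectory length automatically via its no-U-turn criterion. As background, a minimal fixed-length HMC sampler for a one-dimensional toy target (loosely, a single layer's log-resistivity; the Gaussian target and all tuning values are invented for illustration) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def logp(x):                       # toy posterior: N(1.0, 0.5^2)
    return -0.5 * ((x - 1.0) / 0.5) ** 2

def grad_logp(x):
    return -(x - 1.0) / 0.25

def hmc(x0, n_samples, eps=0.1, n_leap=20):
    """Fixed-length HMC; NUTS would adapt n_leap per iteration."""
    x, chain = x0, []
    for _ in range(n_samples):
        p = rng.standard_normal()                  # resample momentum
        xn, pn = x, p + 0.5 * eps * grad_logp(x)   # leapfrog: half momentum step
        for i in range(n_leap):
            xn = xn + eps * pn                     # full position step
            if i != n_leap - 1:
                pn = pn + eps * grad_logp(xn)      # full momentum step
        pn = pn + 0.5 * eps * grad_logp(xn)        # final half momentum step
        # Metropolis accept/reject on the Hamiltonian error.
        log_accept = (logp(xn) - 0.5 * pn**2) - (logp(x) - 0.5 * p**2)
        if np.log(rng.random()) < log_accept:
            x = xn
        chain.append(x)
    return np.array(chain)

chain = hmc(0.0, 3000)
```

    The gradient-guided trajectories are what give HMC (and hence NUTS) its fast convergence and low sensitivity to the starting model, properties the study highlights for MT inversion.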

  14. An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Lu, Dan; Ricciuto, Daniel; Evans, Katherine

    2018-03-01

    Improving the understanding of subsurface systems and thus reducing prediction uncertainty requires collection of data. As the collection of subsurface data is costly, it is important that the data collection scheme is cost-effective. Design of a cost-effective data collection scheme, i.e., data-worth analysis, requires quantifying model parameter, prediction, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface hydrological model simulations using standard Monte Carlo (MC) sampling or surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce computational costs using multifidelity approximations. Since the Bayesian data-worth analysis involves a great deal of expectation estimation, the cost saving of MLMC in the assessment can be outstanding. While the proposed MLMC-based data-worth analysis is broadly applicable, we use it for a highly heterogeneous two-phase subsurface flow simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data and are consistent with the standard MC estimation, but MLMC greatly reduces the computational costs.
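
    The core MLMC mechanics can be sketched on a toy problem (geometric Brownian motion rather than the paper's subsurface flow model; all parameters are invented): couple coarse and fine discretizations through shared randomness, so the level corrections have small variance and need few samples.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, mu, sig, T = 1.0, 0.05, 0.2, 1.0

def level_estimator(l, n):
    """Mean of P_l - P_{l-1} over n coupled Euler paths (P_{-1} := 0)."""
    nf = 2 ** l                          # fine steps at this level
    dt = T / nf
    dW = np.sqrt(dt) * rng.standard_normal((n, nf))
    Sf = np.full(n, S0)
    for i in range(nf):                  # fine Euler-Maruyama path
        Sf = Sf + mu * Sf * dt + sig * Sf * dW[:, i]
    if l == 0:
        return Sf.mean()
    Sc = np.full(n, S0)
    dWc = dW[:, 0::2] + dW[:, 1::2]      # coarse increments built from fine ones
    for i in range(nf // 2):             # coarse path with step 2*dt
        Sc = Sc + mu * Sc * 2 * dt + sig * Sc * dWc[:, i]
    return (Sf - Sc).mean()              # small-variance correction term

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
# Fewer samples are spent on the expensive fine levels.
samples = [20000, 10000, 5000, 2500, 1250]
mlmc_estimate = sum(level_estimator(l, n) for l, n in enumerate(samples))
```

    For this SDE the exact answer is E[S_T] = S0·exp(mu·T) ≈ 1.0513, so the telescoped estimate can be checked directly; in the paper the same decomposition is applied to expensive flow-model expectations.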

  15. Ex Post Facto Monte Carlo Variance Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booth, Thomas E.

    The variance in Monte Carlo particle transport calculations is often dominated by a few particles whose importance increases manyfold on a single transport step. This paper describes a novel variance reduction method that uses a large importance change as a trigger to resample the offending transport step. That is, the method is employed only after (ex post facto) a random walk attempts a transport step that would otherwise introduce a large variance in the calculation. Improvements in two Monte Carlo transport calculations are demonstrated empirically using an ex post facto method. First, the method is shown to reduce the variance in a penetration problem with a cross-section window. Second, the method empirically appears to modify a point detector estimator from an infinite variance estimator to a finite variance estimator.
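
    A schematic toy version of the trigger-and-resample idea (not MCNP-style transport; the jump probability, the factor of 10, and the five steps are all made up): when a step multiplies a history's importance tenfold, redo that step as ten branches of one-tenth weight each, so the high-variance step is averaged over. The tally stays unbiased; only its variance changes.

```python
import numpy as np

rng = np.random.default_rng(0)
P_JUMP, FACTOR, STEPS = 0.1, 10.0, 5

def history(steps, split):
    """Tally of one history with `steps` steps remaining (unit weight)."""
    if steps == 0:
        return 1.0
    if rng.random() < P_JUMP:                 # large importance change attempted
        if split:
            # Ex post facto: redo the offending step as FACTOR branches,
            # each carrying 1/FACTOR of the post-jump weight (net weight 1).
            return sum(history(steps - 1, split) for _ in range(int(FACTOR)))
        return FACTOR * history(steps - 1, split)
    return history(steps - 1, split)

n = 20000
analog = np.array([history(STEPS, split=False) for _ in range(n)])
expost = np.array([history(STEPS, split=True) for _ in range(n)])
# Both estimate E = (0.9 + 0.1 * 10)^5 = 1.9^5 ~= 24.76;
# the resampled version has much smaller variance.
```

    The splitting is triggered only after the large importance change is attempted, which is the defining "ex post facto" feature of the method described above.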

  16. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
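
    After discretization, the workflow reduces to one linear-algebra step: with kernel matrix K, data g, and regularization parameter lambda, the Tikhonov estimate is f = (K^T K + lambda I)^{-1} K^T g. A hedged sketch with a toy Gaussian-blur kernel (the paper's kernel instead relates v sin i to v; all numbers here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(0.0, 1.0, n)

# Toy Fredholm kernel: Gaussian smoothing, rows normalized.
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.05) ** 2)
K /= K.sum(axis=1, keepdims=True)

f_true = np.exp(-0.5 * ((x - 0.4) / 0.1) ** 2)   # "true" distribution
g = K @ f_true + 0.01 * rng.standard_normal(n)    # blurred + noisy data

# Tikhonov-regularized inversion: (K^T K + lam I) f = K^T g.
lam = 1e-3
f_rec = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)
```

    The regularization term damps the noise-amplifying small singular values of K that make the unregularized inversion of a Fredholm equation of the first kind ill-posed; choosing lambda is the step the paper addresses with its proposed procedure.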

  17. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
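
    The monotone-fit-plus-Monte-Carlo recipe can be sketched compactly, with pool-adjacent-violators standing in for the paper's penalized monotone splines and entirely synthetic calibration data: fit a monotone standard curve, invert it to predict concentration from intensity, then perturb the observed intensity to obtain an asymmetric prediction interval.

```python
import numpy as np

rng = np.random.default_rng(0)

def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y."""
    merged = []
    for v in y:
        merged.append([float(v), 1])          # [block mean, block size]
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, c2 = merged.pop()
            m1, c1 = merged[-1]
            merged[-1] = [(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2]
    return np.concatenate([[m] * c for m, c in merged])

conc = np.linspace(0.0, 10.0, 41)                     # calibration concentrations
truth = 1.0 / (1.0 + np.exp(-(conc - 5.0)))           # sigmoidal response
intensity = truth + 0.02 * rng.standard_normal(conc.size)

curve = pava(intensity)                               # monotone standard curve
curve = curve + 1e-9 * np.arange(curve.size)          # break ties for interp

def predict(y_obs):
    """Invert the monotone standard curve: intensity -> concentration."""
    return np.interp(y_obs, curve, conc)

# Monte Carlo prediction interval: perturb the observed intensity.
y_obs, noise_sd = 0.5, 0.02
draws = predict(y_obs + noise_sd * rng.standard_normal(2000))
lo, mid, hi = np.percentile(draws, [2.5, 50.0, 97.5])
```

    Because the interval comes from percentiles of the simulated predictions rather than a symmetric linearized error, it can be asymmetric near the flat ends of the curve, which matches the behavior reported above.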

  18. Radial-based tail methods for Monte Carlo simulations of cylindrical interfaces

    NASA Astrophysics Data System (ADS)

    Goujon, Florent; Bêche, Bruno; Malfreyt, Patrice; Ghoufi, Aziz

    2018-03-01

    In this work, we implement for the first time the radial-based tail methods for Monte Carlo simulations of cylindrical interfaces. The efficiency of this method is then evaluated through the calculation of the surface tension and coexisting properties. We show that the inclusion of tail corrections during the course of the Monte Carlo simulation impacts both the coexisting and the interfacial properties. We establish that the long-range corrections to the surface tension are of the same order of magnitude as those obtained for a planar interface. We show that the slab-based tail method does not alter the localization of the Gibbs equimolar dividing surface. Additionally, the surface tension exhibits a non-monotonic behavior as a function of the radius of the equimolar dividing surface.

  19. An MCMC method for the evaluation of the Fisher information matrix for non-linear mixed effect models.

    PubMed

    Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France

    2016-10-01

    Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although efficient in general, FO cannot be applied to complex non-linear models and is difficult to apply in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov Chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation w.r.t. the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to Adaptive Gaussian Quadrature, Laplace approximation, and FO. Our method is available in R-package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
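
    The Monte Carlo half of the recipe, evaluating an expected Fisher information as the average of squared scores over simulated observations, can be checked on a model with a known answer (a plain Gaussian here, not an NLMEM, so no MCMC over random effects is needed in this toy case):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.5

# Simulate observations, compute the score d/dmu log N(y; mu, sigma^2),
# and average its square: E[score^2] is the expected Fisher information.
y = rng.normal(mu, sigma, 100000)
score = (y - mu) / sigma**2
fim_mc = np.mean(score**2)     # exact value: 1 / sigma^2 = 4
```

    In the paper's NLMEM setting the score itself is intractable, which is where the MCMC integration over the random effects comes in before this outer Monte Carlo expectation is taken.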

  20. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.

    PubMed

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-08-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which adaptively identifies the most significant expansion coefficients. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
