Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights, which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed.
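The bootstrap procedure described above can be illustrated with a short sketch. This is a minimal, hypothetical reconstruction, not the authors' code: the per-history scores, run times, and the definition of the gain as a ratio of (variance of the mean) × (computing time) products are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def efficiency_gain(scores_conv, scores_corr, t_conv, t_corr):
    # Efficiency = 1 / (variance of the mean * computing time);
    # gain = efficiency(correlated) / efficiency(conventional).
    var_conv = scores_conv.var(ddof=1) / scores_conv.size
    var_corr = scores_corr.var(ddof=1) / scores_corr.size
    return (var_conv * t_conv) / (var_corr * t_corr)

def shortest_interval(samples, level=0.95):
    # Shortest interval containing `level` of the bootstrap replicates.
    s = np.sort(samples)
    k = int(np.ceil(level * s.size))
    widths = s[k - 1:] - s[:s.size - k + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

# Stand-in data: a heavy-tailed correlated-sampling score mimics rare
# photons with large statistical weights.
scores_conv = rng.normal(1.0, 0.5, 10_000)
scores_corr = 0.05 * rng.standard_t(df=3, size=10_000)

boot = np.array([
    efficiency_gain(rng.choice(scores_conv, scores_conv.size),
                    rng.choice(scores_corr, scores_corr.size),
                    t_conv=100.0, t_corr=10.0)
    for _ in range(2_000)
])
print("point estimate:", efficiency_gain(scores_conv, scores_corr, 100.0, 10.0))
print("shortest 95% CI:", shortest_interval(boot))
```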
Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.
ERIC Educational Resources Information Center
Reddon, John R.; And Others
1985-01-01
Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)
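A minimal sketch of this kind of type-one-error experiment, assuming Bartlett's well-known determinant-based sphericity statistic, -(n - 1 - (2p + 5)/6) ln|R|, as the test; the sample size, dimension, and replication count are illustrative, not those of the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p, alpha, n_rep = 25, 4, 0.05, 20_000

df = p * (p - 1) / 2                     # degrees of freedom of the test
crit = stats.chi2.ppf(1 - alpha, df)     # nominal 5% critical value

rejections = 0
for _ in range(n_rep):
    x = rng.standard_normal((n, p))      # spherical (null) population
    r = np.corrcoef(x, rowvar=False)     # sample correlation matrix
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(r))
    rejections += stat > crit

# Under the null, the empirical rate should sit near the nominal alpha.
print("empirical type one error:", rejections / n_rep)
```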
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
ERIC Educational Resources Information Center
Vasu, Ellen Storey
1978-01-01
The effects of the violation of the assumption of normality in the conditional distributions of the dependent variable, coupled with the condition of multicollinearity upon the outcome of testing the hypothesis that the regression coefficient equals zero, are investigated via a Monte Carlo study. (Author/JKS)
A Monte Carlo Program for Simulating Selection Decisions from Personnel Tests
ERIC Educational Resources Information Center
Petersen, Calvin R.; Thain, John W.
1976-01-01
Relative to test and criterion parameters and cutting scores, the correlation coefficient, sample size, and number of samples to be drawn (all inputs), this program calculates decision classification rates across samples and for combined samples. Several other related indices are also computed. (Author)
NASA Astrophysics Data System (ADS)
Šantić, Branko; Gracin, Davor
2017-12-01
A new simple Monte Carlo method is introduced for the study of electrostatic screening by surrounding ions. The proposed method is not based on the generally used Markov chain method for sample generation. Each sample is pristine and has no correlation with other samples. As the main novelty, pairs of ions are gradually added to a sample provided that the energy of each ion is within the boundaries determined by the temperature and the size of the ions. The proposed method provides reliable results, as demonstrated by the screening of an ion in plasma and in water.
Monte Carlo sampling in diffusive dynamical systems
NASA Astrophysics Data System (ADS)
Tapias, Diego; Sanders, David P.; Altmann, Eduardo G.
2018-05-01
We introduce a Monte Carlo algorithm to efficiently compute transport properties of chaotic dynamical systems. Our method exploits the importance sampling technique that favors trajectories in the tail of the distribution of displacements, where deviations from a diffusive process are most prominent. We search for initial conditions using a proposal that correlates states in the Markov chain constructed via a Metropolis-Hastings algorithm. We show that our method outperforms the direct sampling method and also Metropolis-Hastings methods with alternative proposals. We test our general method through numerical simulations in 1D (box-map) and 2D (Lorentz gas) systems.
Equilibrium Molecular Thermodynamics from Kirkwood Sampling
2015-01-01
We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys. 2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, where Kirkwood sampling is used for generating trial Monte Carlo moves. Using this method, equilibrium distributions corresponding to different temperatures and potential energy functions can be generated from a given set of low-order correlations. Since Kirkwood samples are generated independently, this method is ideally suited for massively parallel distributed computing. The second approach is a variant of reservoir replica exchange, where Kirkwood sampling is used to construct a reservoir of conformations, which exchanges conformations with the replicas performing equilibrium sampling corresponding to different thermodynamic states. Coupling with the Kirkwood reservoir enhances sampling by facilitating global jumps in the conformational space. The efficiency of both methods depends on the overlap of the Kirkwood distribution with the target equilibrium distribution. We present proof-of-concept results for a model nine-atom linear molecule and alanine dipeptide.
How accurate is the Pearson r-from-Z approximation? A Monte Carlo simulation study.
Hittner, James B; May, Kim
2012-01-01
The Pearson r-from-Z approximation estimates the sample correlation (as an effect size measure) from the ratio of two quantities: the standard normal deviate equivalent (Z-score) corresponding to a one-tailed p-value divided by the square root of the total (pooled) sample size. The formula has utility in meta-analytic work when reports of research contain minimal statistical information. Although simple to implement, the accuracy of the Pearson r-from-Z approximation has not been empirically evaluated. To address this omission, we performed a series of Monte Carlo simulations. Results indicated that in some cases the formula did accurately estimate the sample correlation. However, when sample size was very small (N = 10) and effect sizes were small to small-moderate (ds of 0.1 and 0.3), the Pearson r-from-Z approximation was very inaccurate. Detailed figures that provide guidance as to when the Pearson r-from-Z formula will likely yield valid inferences are presented.
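The approximation itself is simple enough to state in a few lines. A sketch under the definition given in the abstract (the helper name is ours):

```python
import math
from scipy import stats

def r_from_z(p_one_tailed, n_total):
    # Z-score equivalent of the one-tailed p-value, divided by the
    # square root of the total (pooled) sample size.
    z = stats.norm.isf(p_one_tailed)
    return z / math.sqrt(n_total)

# A report giving only "p = .02, one-tailed, N = 50":
print(round(r_from_z(0.02, 50), 3))   # ~0.29
```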
MC3: Multi-core Markov-chain Monte Carlo code
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan
2016-10-01
MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix the value of parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.
Tao, Guohua; Miller, William H
2011-07-14
An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be applied generally to sample rare events efficiently while avoiding trapping in a local region of phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
Bold Diagrammatic Monte Carlo for Fermionic and Fermionized Systems
NASA Astrophysics Data System (ADS)
Svistunov, Boris
2013-03-01
In three different fermionic cases--the repulsive Hubbard model, resonant fermions, and fermionized spins-1/2 (on a triangular lattice)--we observe the phenomenon of sign blessing: the Feynman diagrammatic series features a finite convergence radius despite the factorial growth of the number of diagrams with diagram order. The bold diagrammatic Monte Carlo technique allows us to sample millions of skeleton Feynman diagrams. With the universal fermionization trick we can fermionize essentially any (bosonic, spin, mixed, etc.) lattice system. The combination of fermionization and bold diagrammatic Monte Carlo yields a universal first-principles approach to strongly correlated lattice systems, provided sign blessing is a generic fermionic phenomenon. Supported by NSF and DARPA.
NASA Astrophysics Data System (ADS)
Jennings, E.; Madigan, M.
2017-04-01
Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC.
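astroABC's own API is not reproduced here; the following is a generic rejection-ABC sketch that conveys the likelihood-free idea the abstract describes: draw parameters from the prior, push them through a forward-model simulation, and keep draws whose simulated data fall within a tolerance of the observations. The toy simulator, distance, and all names are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def forward_model(theta, n=200):
    # Toy simulator standing in for the physics: mean level `theta`
    # plus correlated (random-walk) noise, i.e. no tractable likelihood.
    return theta + rng.standard_normal(n) + 0.05 * np.cumsum(rng.standard_normal(n))

def distance(sim, obs):
    # Summary-statistic distance between simulated and observed data.
    return abs(sim.mean() - obs.mean())

obs = forward_model(1.5)            # pretend these are the observations

tolerance = 0.1
prior_draws = rng.uniform(-5.0, 5.0, 20_000)
accepted = [t for t in prior_draws if distance(forward_model(t), obs) < tolerance]
print(len(accepted), "accepted, posterior mean ~", np.mean(accepted))
```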
Bold Diagrammatic Monte Carlo Method Applied to Fermionized Frustrated Spins
NASA Astrophysics Data System (ADS)
Kulagin, S. A.; Prokof'ev, N.; Starykh, O. A.; Svistunov, B.; Varney, C. N.
2013-02-01
We demonstrate, by considering the triangular lattice spin-1/2 Heisenberg model, that Monte Carlo sampling of skeleton Feynman diagrams within the fermionization framework offers a universal first-principles tool for strongly correlated lattice quantum systems. We observe the fermionic sign blessing—cancellation of higher order diagrams leading to a finite convergence radius of the series. We calculate the magnetic susceptibility of the triangular-lattice quantum antiferromagnet in the correlated paramagnet regime and reveal a surprisingly accurate microscopic correspondence with its classical counterpart at all accessible temperatures. The extrapolation of the observed relation to zero temperature suggests the absence of the magnetic order in the ground state. We critically examine the implications of this unusual scenario.
Pan, Feng; Tao, Guohua
2013-03-07
Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.
A Hybrid Monte Carlo importance sampling of rare events in Turbulence and in Turbulent Models
NASA Astrophysics Data System (ADS)
Margazoglou, Georgios; Biferale, Luca; Grauer, Rainer; Jansen, Karl; Mesterhazy, David; Rosenow, Tillmann; Tripiccione, Raffaele
2017-11-01
Extreme and rare events are a challenging topic in the field of turbulence. Trying to investigate those instances with traditional numerical tools turns out to be a notoriously difficult task, as such tools fail to systematically sample the fluctuations around them. We propose instead that an importance sampling Monte Carlo method can selectively highlight extreme events in remote areas of the phase space and induce their occurrence. We present a new computational approach, based on the path integral formulation of stochastic dynamics, and employ an accelerated Hybrid Monte Carlo (HMC) algorithm for this purpose. Through the paradigm of the stochastic one-dimensional Burgers' equation, subjected to random noise that is white in time and power-law correlated in Fourier space, we will prove our concept and benchmark our results against standard CFD methods. Furthermore, we will present our first results of constrained sampling around saddle-point instanton configurations (optimal fluctuations). The research leading to these results has received funding from the EU Horizon 2020 research and innovation programme under Grant Agreement No. 642069, and from the EU Seventh Framework Programme (FP7/2007-2013) under ERC Grant Agreement No. 339032.
Implication of correlations among some common stability statistics - a Monte Carlo simulations.
Piepho, H P
1995-03-01
Stability analysis of multilocation trials is often based on a mixed two-way model. Two stability measures in frequent use are the environmental variance (S_i^2) and the ecovalence (W_i). Under the two-way model the rank orders of the expected values of these two statistics are identical for a given set of genotypes. By contrast, empirical rank correlations among these measures are consistently low. This suggests that the two-way mixed model may not be appropriate for describing real data. To check this hypothesis, a Monte Carlo simulation was conducted. It revealed that the low empirical rank correlation among S_i^2 and W_i is most likely due to sampling errors. It is concluded that the observed low rank correlation does not invalidate the two-way model. The paper also discusses tests for homogeneity of S_i^2 as well as implications of the two-way model for the classification of stability statistics.
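Both statistics have standard textbook definitions, which a short sketch can make concrete: S_i^2 is the variance of genotype i across environments, and W_i sums the squared genotype-by-environment interaction terms. The random data below are placeholders, not the simulated trials of the paper.

```python
import numpy as np
from scipy.stats import spearmanr

def stability_statistics(y):
    # y: genotype-by-environment matrix of trait means.
    s2 = y.var(axis=1, ddof=1)                       # environmental variance S_i^2
    interaction = (y - y.mean(axis=1, keepdims=True)
                     - y.mean(axis=0, keepdims=True)
                     + y.mean())
    w = (interaction ** 2).sum(axis=1)               # ecovalence W_i
    return s2, w

rng = np.random.default_rng(7)
y = rng.normal(5.0, 1.0, size=(10, 8))   # 10 genotypes x 8 environments
s2, w = stability_statistics(y)
print("empirical rank correlation of S_i^2 and W_i:", spearmanr(s2, w)[0])
```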
Assaraf, Roland
2014-12-01
We show that the recently proposed correlated sampling without reweighting procedure extends the locality (asymptotic independence of the system size) of a physical property to the statistical fluctuations of its estimator. This makes the approach potentially vastly more efficient for computing space-localized properties in large systems compared with standard correlated methods. A proof is given for a large collection of noninteracting fragments. Calculations on hydrogen chains suggest that this behavior holds not only for systems displaying short-range correlations, but also for systems with long-range correlations.
CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei
2014-12-01
We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimensions, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baba, Justin S; Koju, Vijay; John, Dwayne O
2016-01-01
The modulation of the state of polarization of photons due to scatter generates associated geometric phase that is being investigated as a means for decreasing the degree of uncertainty in back-projecting the paths traversed by photons detected in backscattered geometry. In our previous work, we established that polarimetrically detected Berry phase correlates with the mean photon penetration depth of the backscattered photons collected for image formation. In this work, we report on the impact of state-of-linear-polarization (SOLP) filtering on both the magnitude and population distributions of image forming detected photons as a function of the absorption coefficient of the scattering sample. The results, based on Berry phase tracking implemented Polarized Monte Carlo Code, indicate that sample absorption plays a significant role in the mean depth attained by the image forming backscattered detected photons.
Path integral Monte Carlo ground state approach: formalism, implementation, and applications
NASA Astrophysics Data System (ADS)
Yan, Yangqian; Blume, D.
2017-11-01
Monte Carlo techniques have played an important role in understanding strongly correlated systems across many areas of physics, covering a wide range of energy and length scales. Among the many Monte Carlo methods applicable to quantum mechanical systems, the path integral Monte Carlo approach with its variants has been employed widely. Since semi-classical or classical approaches will not be discussed in this review, path integral based approaches can for our purposes be divided into two categories: approaches applicable to quantum mechanical systems at zero temperature and approaches applicable to quantum mechanical systems at finite temperature. While these two approaches are related to each other, the underlying formulation and aspects of the algorithm differ. This paper reviews the path integral Monte Carlo ground state (PIGS) approach, which solves the time-independent Schrödinger equation. Specifically, the PIGS approach allows for the determination of expectation values with respect to eigen states of the few- or many-body Schrödinger equation provided the system Hamiltonian is known. The theoretical framework behind the PIGS algorithm, implementation details, and sample applications for fermionic systems are presented.
Derian, R; Tokár, K; Somogyi, B; Gali, Á; Štich, I
2017-12-12
We present a time-dependent density functional theory (TDDFT) study of the optical gaps of light-emitting nanomaterials, namely, pristine and heavily B- and P-codoped silicon crystalline nanoparticles. Twenty DFT exchange-correlation functionals sampled from the best currently available inventory such as hybrids and range-separated hybrids are benchmarked against ultra-accurate quantum Monte Carlo results on small model Si nanocrystals. Overall, the range-separated hybrids are found to perform best. The quality of the DFT gaps is correlated with the deviation from Koopmans' theorem as a possible quality guide. In addition to providing a generic test of the ability of TDDFT to describe optical properties of silicon crystalline nanoparticles, the results also open up a route to benchmark-quality DFT studies of nanoparticle sizes approaching those studied experimentally.
Stochastic, real-space, imaginary-time evaluation of third-order Feynman-Goldstone diagrams
NASA Astrophysics Data System (ADS)
Willow, Soohaeng Yoo; Hirata, So
2014-01-01
A new, alternative set of interpretation rules of Feynman-Goldstone diagrams for many-body perturbation theory is proposed, which translates diagrams into algebraic expressions suitable for direct Monte Carlo integrations. A vertex of a diagram is associated with a Coulomb interaction (rather than a two-electron integral) and an edge with the trace of a Green's function in real space and imaginary time. With these, 12 diagrams of third-order many-body perturbation (MP3) theory are converted into 20-dimensional integrals, which are then evaluated by a Monte Carlo method. It uses redundant walkers for convergence acceleration and a weight function for importance sampling in conjunction with the Metropolis algorithm. The resulting Monte Carlo MP3 method has low-rank polynomial size dependence of the operation cost, a negligible memory cost, and a naturally parallel computational kernel, while reproducing the correct correlation energies of small molecules within a few mEh after 10^6 Monte Carlo steps.
NASA Astrophysics Data System (ADS)
Tubman, Norm; Whaley, Birgitta
The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently, and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground state and excited state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.
Impact of multicollinearity on small sample hydrologic regression models
NASA Astrophysics Data System (ADS)
Kroll, Charles N.; Song, Peter
2013-06-01
Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
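VIF screening, one of the four techniques compared, is straightforward to sketch: VIF_j = 1/(1 - R_j^2), where R_j^2 comes from regressing explanatory variable j on the others, and values above roughly 10 conventionally flag problematic collinearity. The data below are synthetic placeholders.

```python
import numpy as np

def variance_inflation_factors(X):
    # VIF_j = 1 / (1 - R_j^2), with R_j^2 from an OLS regression
    # (with intercept) of column j on the remaining columns.
    n, p = X.shape
    vifs = np.empty(p)
    for j in range(p):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1.0 - resid.var() / X[:, j].var()
        vifs[j] = 1.0 / (1.0 - r2)
    return vifs

rng = np.random.default_rng(3)
x1 = rng.standard_normal(50)
x2 = x1 + 0.1 * rng.standard_normal(50)   # nearly collinear with x1
x3 = rng.standard_normal(50)
print(variance_inflation_factors(np.column_stack([x1, x2, x3])))
```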
A Sequential Monte Carlo Approach for Streamflow Forecasting
NASA Astrophysics Data System (ADS)
Hsu, K.; Sorooshian, S.
2008-12-01
As alternatives to traditional physically-based models, Artificial Neural Network (ANN) models offer advantages in flexibility: they do not require a precise quantitative mechanism of the process and can train themselves directly from the data. In this study, an ANN model was used to generate one-day-ahead streamflow forecasts from the precipitation input over a catchment. Meanwhile, the ANN model parameters were trained using a Sequential Monte Carlo (SMC) approach, namely the Regularized Particle Filter (RPF). The SMC approaches are known for their capabilities in tracking the states and parameters of a nonlinear dynamic process based on Bayes' rule and the proposed effective sampling and resampling strategies. In this study, five years of daily rainfall and streamflow measurement were used for model training. Variable sample sizes of RPF, from 200 to 2000, were tested. The results show that, after 1000 RPF samples, the simulation statistics, in terms of correlation coefficient, root mean square error, and bias, were stabilized. It is also shown that the forecasted daily flows fit the observations very well, with correlation coefficients higher than 0.95. The results of RPF simulations were also compared with those from the popular back-propagation ANN training approach. The pros and cons of using the SMC approach and the traditional back-propagation approach will be discussed.
MontePython 3: Parameter inference code for cosmology
NASA Astrophysics Data System (ADS)
Brinckmann, Thejs; Lesgourgues, Julien; Audren, Benjamin; Benabed, Karim; Prunet, Simon
2018-05-01
MontePython 3 provides numerous ways to explore parameter space using Monte Carlo Markov Chain (MCMC) sampling, including Metropolis-Hastings, Nested Sampling, Cosmo Hammer, and a Fisher sampling method. This improved version of the Monte Python (ascl:1307.002) parameter inference code for cosmology offers new ingredients that improve the performance of Metropolis-Hastings sampling, speeding up convergence and offering significant time savings in difficult runs. Additional likelihoods and plotting options are available, as are post-processing algorithms such as importance sampling and adding derived parameters.
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, the variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational-based schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is simultaneously solved together with the main one. By correlating the two processes, the statistical errors can be dramatically reduced, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
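The core idea of the novel scheme, as stated in the abstract, is to evolve an auxiliary process with a known solution alongside the main one using the same random numbers, so that their difference has small variance. A generic sketch of that construction follows; the SDEs and coefficients are invented illustrations, not the authors' Fokker-Planck scheme.

```python
import numpy as np

rng = np.random.default_rng(11)
n_paths, n_steps, dt, sigma = 20_000, 200, 0.01, 0.5

x = np.ones(n_paths)   # main process: dX = (-X + 0.1 sin X) dt + sigma dW
y = np.ones(n_paths)   # auxiliary process with known mean: dY = -Y dt + sigma dW
for _ in range(n_steps):
    dw = np.sqrt(dt) * rng.standard_normal(n_paths)   # shared noise correlates them
    x += (-x + 0.1 * np.sin(x)) * dt + sigma * dw
    y += -y * dt + sigma * dw

ey_exact = np.exp(-n_steps * dt)          # E[Y_T] of the linear process, Y_0 = 1
plain = x.mean()
controlled = (x - y).mean() + ey_exact    # the difference has much smaller variance
print("plain:     ", plain, "+/-", x.std() / np.sqrt(n_paths))
print("correlated:", controlled, "+/-", (x - y).std() / np.sqrt(n_paths))
```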
Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Barker, W. Howard
2004-07-01
The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeated sampling, and averaging, of those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of ≈3.
Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality.
Bishara, Anthony J; Hittner, James B
2015-10-01
It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box-Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as to various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples (n = 10), where bootstrapping was more effective. Overall, results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated.
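The rankit transformation mentioned above is easy to state: replace each observation by the normal score Φ^{-1}((rank - 0.5)/n). A minimal sketch with heavy-tailed synthetic data, the case where the abstract reports the largest Pearson inflation:

```python
import numpy as np
from scipy import stats

def rankit(x):
    # Normal scores: phi^{-1}((rank - 0.5) / n).
    return stats.norm.ppf((stats.rankdata(x) - 0.5) / len(x))

rng = np.random.default_rng(5)
x = rng.standard_t(df=2, size=200)            # heavy-tailed marginals
y = 0.3 * x + rng.standard_t(df=2, size=200)
print("Pearson:", np.corrcoef(x, y)[0, 1])
print("rankit :", np.corrcoef(rankit(x), rankit(y))[0, 1])
```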
Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, Surendra N.
1994-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are considered in an exact manner. The spectral correlation between transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional relationship, which provides only non-correlated results for nongray methods; this difference is discussed. Validation of the Monte Carlo formulations is conducted by comparing results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. The nongray Monte Carlo solutions are also obtained for the same problem, and they essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they significantly differ from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray and multi-dimensional system is possible by using the approximate correlated formulations. Radiative interactions are investigated in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The governing equations are based on the fully elliptic Navier-Stokes equations. Chemical reaction mechanisms were described by a finite rate chemistry model. The correlated Monte Carlo method developed earlier was employed to simulate multi-dimensional radiative heat transfer. Results obtained demonstrate that radiative effects on the flowfield are minimal but radiative effects on the wall heat transfer are significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and nozzle size on the radiative and conductive wall fluxes.
Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.
2016-01-01
A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice.
Weak lensing magnification in the Dark Energy Survey Science Verification data
NASA Astrophysics Data System (ADS)
Garcia-Fernandez, M.; Sanchez, E.; Sevilla-Noarbe, I.; Suchyta, E.; Huff, E. M.; Gaztanaga, E.; Aleksić, J.; Ponce, R.; Castander, F. J.; Hoyle, B.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Benoit-Lévy, A.; Bernstein, G. M.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Eifler, T. F.; Evrard, A. E.; Fernandez, E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Giannantonio, T.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; James, D. J.; Jarvis, M.; Kirk, D.; Krause, E.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Lima, M.; MacCrann, N.; Maia, M. A. G.; March, M.; Marshall, J. L.; Melchior, P.; Miquel, R.; Mohr, J. J.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Rykoff, E. S.; Scarpine, V.; Schubnell, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Tarle, G.; Thomas, D.; Walker, A. R.; Wester, W.; DES Collaboration
2018-05-01
In this paper, the effect of weak lensing magnification on galaxy number counts is studied by cross-correlating the positions of two galaxy samples, separated by redshift, using the Dark Energy Survey Science Verification data set. This analysis is carried out for galaxies that are selected only by their photometric redshift. An extensive analysis of the systematic effects, using new methods based on simulations, is performed, including a Monte Carlo sampling of the selection function of the survey.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Rearden, Bradley T
The validation of neutron transport methods used in nuclear criticality safety analyses is required by consensus American National Standards Institute/American Nuclear Society (ANSI/ANS) standards. In the last decade, there has been an increased interest in correlations among critical experiments used in validation that have shared physical attributes and which impact the independence of each measurement. The statistical methods included in many of the frequently cited guidance documents on performing validation calculations incorporate the assumption that all individual measurements are independent, so little guidance is available to practitioners on the topic. Typical guidance includes recommendations to select experiments from multiple facilities and experiment series in an attempt to minimize the impact of correlations or common-cause errors in experiments. Recent efforts have been made both to determine the magnitude of such correlations between experiments and to develop and apply methods for adjusting the bias and bias uncertainty to account for the correlations. This paper describes recent work performed at Oak Ridge National Laboratory using the Sampler sequence from the SCALE code system to develop experimental correlations using a Monte Carlo sampling technique. Sampler will be available for the first time with the release of SCALE 6.2, and a brief introduction to the methods used to calculate experiment correlations within this new sequence is presented in this paper. Techniques to utilize these correlations in the establishment of upper subcritical limits are the subject of a companion paper and will not be discussed here. Example experimental uncertainties and correlation coefficients are presented for a variety of low-enriched uranium water-moderated lattice experiments selected for use in a benchmark exercise by the Working Party on Nuclear Criticality Safety Subgroup on Uncertainty Analysis in Criticality Safety Analyses. The results include studies on the effect of fuel rod pitch on the correlations, and some observations are also made regarding difficulties in determining experimental correlations using the Monte Carlo sampling technique.
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
2017-10-26
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
Li, Jian; Wang, Yafei; Kong, Dongdong; Wang, Jinsheng; Teng, Yanguo; Li, Na
2015-11-01
In the present study, recombinant estrogen receptor (ER) and androgen receptor (AR) gene yeast assays, combined with a novel approach based on Monte Carlo simulation, were used for the evaluation and characterization of soil samples collected from Jilin along the Second Songhua River, to assess their antagonist/agonist properties for ER and AR. The results showed that estrogenic activity only occurred in the soil samples collected in the agriculture area, but most soil samples showed anti-estrogenic activities, and the bioassay-derived 4-hydroxytamoxifen equivalents ranged from N.D. to 23.51 μg/g. Hydrophilic substance fractions were determined as potential contributors associated with anti-estrogenic activity in these soil samples. Moreover, none of the soil samples exhibited AR agonistic potency, whereas 54% of the soil samples exhibited AR antagonistic potency. The flutamide equivalents varied between N.D. and 178.05 μg/g. Based on Monte Carlo simulation-related mass balance analysis, the AR antagonistic activities were significantly correlated with the medium polar and polar fractions. All of these results support that this novel calculation method can be adopted effectively to quantify and characterize the ER/AR agonists and antagonists of the soil samples, and these data could help provide useful information for future management and remediation efforts.
Tao, Guohua; Miller, William H
2012-09-28
An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both the time-evolved phase points and their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor, which is computationally expensive, especially for large systems, is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H(2) system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.
Correlated uncertainties in Monte Carlo reaction rate calculations
NASA Astrophysics Data System (ADS)
Longland, Richard
2017-07-01
Context. Monte Carlo methods have enabled nuclear reaction rates from uncertain inputs to be presented in a statistically meaningful manner. However, these uncertainties are currently computed assuming no correlations between the physical quantities that enter those calculations. This is not always an appropriate assumption. Astrophysically important reactions are often dominated by resonances, whose properties are normalized to a well-known reference resonance. This insight provides a basis from which to develop a flexible framework for including correlations in Monte Carlo reaction rate calculations. Aims: The aim of this work is to develop and test a method for including correlations in Monte Carlo reaction rate calculations when the input has been normalized to a common reference. Methods: A mathematical framework is developed for including correlations between input parameters in Monte Carlo reaction rate calculations. The magnitude of those correlations is calculated from the uncertainties typically reported in experimental papers, where full correlation information is not available. The method is applied to four illustrative examples: a fictional 3-resonance reaction, 27Al(p, γ)28Si, 23Na(p, α)20Ne, and 23Na(α, p)26Mg. Results: Reaction rates at low temperatures that are dominated by a few isolated resonances are found to be minimally impacted by correlation effects. However, reaction rates determined from many overlapping resonances can be significantly affected. Uncertainties in the 23Na(α, p)26Mg reaction, for example, increase by up to a factor of 5. This highlights the need to take correlation effects into account in reaction rate calculations, and provides insight into which cases are expected to be most affected by them. The impact of correlation effects on nucleosynthesis is also investigated.
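The normalization-to-a-reference mechanism can be sketched generically: drawing a shared lognormal factor for the reference resonance once per Monte Carlo iteration correlates every strength derived from it. The resonance values and uncertainties below are invented placeholders, and the "rate" is a toy proportional to the summed strengths, not the paper's evaluated inputs or rate formalism.

```python
import numpy as np

rng = np.random.default_rng(17)
n = 100_000

# Hypothetical strengths measured *relative* to one reference resonance,
# with fractional (lognormal) uncertainties.
ratios = np.array([1.0, 0.4, 2.5])
d_ratios = np.array([0.05, 0.10, 0.08])
ref, d_ref = 1.0e-3, 0.12          # reference strength and its uncertainty

f_ref = rng.lognormal(0.0, d_ref, n)             # shared factor -> correlation
f_ind = rng.lognormal(0.0, d_ratios, (n, 3))     # each ratio's own scatter
strengths = ratios * f_ind * (ref * f_ref)[:, None]

# Toy "rate" proportional to the summed strengths: correlated inputs give
# a wider spread than naively independent sampling of the reference would.
rate = strengths.sum(axis=1)
indep = (ratios * f_ind * ref * rng.lognormal(0.0, d_ref, (n, 3))).sum(axis=1)
print("correlated fractional spread: ", rate.std() / rate.mean())
print("independent fractional spread:", indep.std() / indep.mean())
```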
Influence of scanning parameters on the estimation accuracy of control points of B-spline surfaces
NASA Astrophysics Data System (ADS)
Aichinger, Julia; Schwieger, Volker
2018-04-01
This contribution deals with the influence of scanning parameters such as scanning distance, incidence angle, surface quality, and sampling width on the average estimated standard deviations of the positions of control points of B-spline surfaces, which are used to model surfaces from terrestrial laser scanning data. The influence of the scanning parameters is analyzed by Monte Carlo based variance analysis. Samples were generated for non-correlated and correlated data using the Latin hypercube and replicated Latin hypercube sampling algorithms, respectively. Finally, the investigations show that the most influential scanning parameter is the distance from the laser scanner to the object. The angle of incidence shows a significant effect for distances of 50 m and longer, while the surface quality contributes only negligible effects. The sampling width has no influence. Optimal scanning parameters are obtained at the smallest possible object distance, at an angle of incidence close to 0°, and with the highest surface quality. The consideration of correlations improves the estimation accuracy and underlines the importance of complete stochastic models for TLS measurements.
Chodera, John D; Shirts, Michael R
2011-11-21
The widespread popularity of replica exchange and expanded ensemble algorithms for simulating complex molecular systems in chemistry and biophysics has generated much interest in discovering new ways to enhance the phase space mixing of these protocols in order to improve sampling of uncorrelated configurations. Here, we demonstrate how both of these classes of algorithms can be considered as special cases of Gibbs sampling within a Markov chain Monte Carlo framework. Gibbs sampling is a well-studied scheme in the field of statistical inference in which different random variables are alternately updated from conditional distributions. While the update of the conformational degrees of freedom by Metropolis Monte Carlo or molecular dynamics unavoidably generates correlated samples, we show how judicious updating of the thermodynamic state indices--corresponding to thermodynamic parameters such as temperature or alchemical coupling variables--can substantially increase mixing while still sampling from the desired distributions. We show how state update methods in common use can lead to suboptimal mixing, and present some simple, inexpensive alternatives that can increase mixing of the overall Markov chain, reducing simulation times necessary to obtain estimates of the desired precision. These improved schemes are demonstrated for several common applications, including an alchemical expanded ensemble simulation, parallel tempering, and multidimensional replica exchange umbrella sampling.
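A minimal example of the Gibbs-sampling view described above is simulated tempering in one dimension: alternate a Metropolis update of the configuration with a draw of the thermodynamic state index from its full conditional, p(k|x) ∝ exp(-β_k U(x) + g_k). The potential, temperature ladder, and flat log-weights below are illustrative choices, not the paper's test systems.

```python
import numpy as np

rng = np.random.default_rng(2)

def u(x):                                     # double-well potential
    return 4.0 * (x * x - 1.0) ** 2

betas = np.array([0.2, 0.5, 1.0, 2.0, 5.0])   # thermodynamic state ladder
g = np.zeros(betas.size)                      # log-weights (flat for simplicity)

x, k, samples = 0.0, 0, []
for _ in range(50_000):
    # (1) Metropolis update of the configuration at the current state k.
    x_new = x + rng.normal(0.0, 0.5)
    if np.log(rng.random()) < -betas[k] * (u(x_new) - u(x)):
        x = x_new
    # (2) Gibbs update of the state index from its full conditional,
    #     p(k | x) proportional to exp(-beta_k U(x) + g_k).
    logp = -betas * u(x) + g
    w = np.exp(logp - logp.max())
    k = rng.choice(betas.size, p=w / w.sum())
    if k == betas.size - 1:
        samples.append(x)                     # configurations at the coldest state

print(len(samples), "samples collected at beta =", betas[-1])
```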
NASA Astrophysics Data System (ADS)
Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.
2014-11-01
We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
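The paper's central caution, that unrelated red-noise light curves commonly produce high cross-correlations, can be reproduced with a short simulation. This sketch uses Timmer & König style power-law light curves on an even grid; the published method additionally handles uneven sampling, interpolation, and windowing, and all parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 512

def power_law_lightcurve(beta):
    # Gaussian-random Fourier amplitudes with P(f) ~ f^(-beta),
    # then an inverse FFT (Timmer & Koenig 1995 style), standardized.
    freqs = np.fft.rfftfreq(n, d=1.0)[1:]
    spec = freqs ** (-beta / 2.0) * (rng.standard_normal(freqs.size)
                                     + 1j * rng.standard_normal(freqs.size))
    lc = np.fft.irfft(np.concatenate(([0.0], spec)), n)
    return (lc - lc.mean()) / lc.std()

def peak_crosscorr(a, b):
    return np.abs(np.correlate(a, b, mode="full") / n).max()

# Peak cross-correlation between *independent* steep-PSD light curves:
peaks = [peak_crosscorr(power_law_lightcurve(2.0), power_law_lightcurve(2.0))
         for _ in range(2_000)]
print("95th percentile of the chance peak CCF:", np.quantile(peaks, 0.95))
```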
Accurate Exchange-Correlation Energies for the Warm Dense Electron Gas.
Malone, Fionn D; Blunt, N S; Brown, Ethan W; Lee, D K K; Spencer, J S; Foulkes, W M C; Shepherd, James J
2016-09-09
The density matrix quantum Monte Carlo (DMQMC) method is used to sample exact-on-average N-body density matrices for uniform electron gas systems of up to 10^{124} matrix elements via a stochastic solution of the Bloch equation. The results of these calculations resolve a current debate over the accuracy of the data used to parametrize finite-temperature density functionals. Exchange-correlation energies calculated using the real-space restricted path-integral formalism and the k-space configuration path-integral formalism disagree by up to ∼10% at certain reduced temperatures T/T_{F}≤0.5 and densities r_{s}≤1. Our calculations confirm the accuracy of the configuration path-integral Monte Carlo results available at high density and bridge the gap to lower densities, providing trustworthy data in the regime typical of planetary interiors and solids subject to laser irradiation. We demonstrate that the DMQMC method can calculate free energies directly and present exact free energies for T/T_{F}≥1 and r_{s}≤2.
Auxiliary-field-based trial wave functions in quantum Monte Carlo calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Chia -Chen; Rubenstein, Brenda M.; Morales, Miguel A.
2016-12-19
Quantum Monte Carlo (QMC) algorithms have long relied on Jastrow factors to incorporate dynamic correlation into trial wave functions. While Jastrow-type wave functions have been widely employed in real-space algorithms, they have seen limited use in second-quantized QMC methods, particularly in projection methods that involve a stochastic evolution of the wave function in imaginary time. Here we propose a scheme for generating Jastrow-type correlated trial wave functions for auxiliary-field QMC methods. The method is based on decoupling the two-body Jastrow into one-body projectors coupled to auxiliary fields, which then operate on a single determinant to produce a multideterminant trial wave function. We demonstrate that intelligent sampling of the most significant determinants in this expansion can produce compact trial wave functions that reduce errors in the calculated energies. Lastly, our technique may be readily generalized to accommodate a wide range of two-body Jastrow factors and applied to a variety of model and chemical systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lüchow, Arne, E-mail: luechow@rwth-aachen.de; Jülich Aachen Research Alliance; Sturm, Alexander
2015-02-28
Jastrow correlation factors play an important role in quantum Monte Carlo calculations. Together with an orbital based antisymmetric function, they allow the construction of highly accurate correlation wave functions. In this paper, a generic expansion of the Jastrow correlation function in terms of polynomials that satisfy both the electron exchange symmetry constraint and the cusp conditions is presented. In particular, an expansion of the three-body electron-electron-nucleus contribution in terms of cuspless homogeneous symmetric polynomials is proposed. The polynomials can be expressed in terms of fairly arbitrary scaling functions, allowing a generic implementation of the Jastrow factor. It is demonstrated with a few examples that the new Jastrow factor achieves 85%–90% of the total correlation energy in a variational quantum Monte Carlo calculation and more than 90% of the diffusion Monte Carlo correlation energy.
Subtle Monte Carlo Updates in Dense Molecular Systems.
Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper
2012-02-14
Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high-density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest that our method is a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.
NASA Astrophysics Data System (ADS)
Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh
2014-06-01
For several years, Monte Carlo burnup/depletion codes have been available that couple a Monte Carlo code, which simulates the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine three-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this paper we present a methodology to avoid these repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: the different burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. We first present this method and provide details on the perturbative technique used, namely correlated sampling. We then discuss the implementation of this method in the TRIPOLI-4® code, as well as the precise calculation scheme able to bring an important speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a REP-like assembly, studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion calculations by nearly an order of magnitude.
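The correlated sampling idea the paper relies on can be demonstrated on a deliberately simple transmission problem (a hypothetical one-dimensional slab, not the TRIPOLI-4 implementation): the same histories are scored for both the nominal and the perturbed cross section, with the perturbed score carried by a statistical-weight ratio, so the noise largely cancels in the difference:

```python
import numpy as np

rng = np.random.default_rng(2)

# Transmission through a slab of thickness L with total cross section sigma:
# T(sigma) = exp(-sigma * L).  Estimate T(sigma') - T(sigma) by Monte Carlo.
L, sigma, sigma_p = 1.0, 1.0, 1.05          # a 5% perturbation
n = 100_000

# Unperturbed histories: free path x ~ Exp(sigma); transmitted if x > L.
x = rng.exponential(1.0 / sigma, size=n)
t0 = (x > L).astype(float)

# Correlated sampling: reuse the SAME histories, reweighted by the ratio of
# perturbed to unperturbed path densities, w(x) = (sigma'/sigma) e^{-(sigma'-sigma) x}.
w = (sigma_p / sigma) * np.exp(-(sigma_p - sigma) * x)
t1 = t0 * w

diff_corr = (t1 - t0).mean()
err_corr = (t1 - t0).std(ddof=1) / np.sqrt(n)

# Independent runs for comparison: the difference inherits both variances.
y = rng.exponential(1.0 / sigma_p, size=n)
diff_indep = (y > L).mean() - t0.mean()
err_indep = np.sqrt(t0.var(ddof=1) / n + (y > L).var(ddof=1) / n)

print(f"correlated:  {diff_corr:+.5f} +/- {err_corr:.5f}")
print(f"independent: {diff_indep:+.5f} +/- {err_indep:.5f}")
```

The weight ratio stays close to one for small perturbations, which is why a single unperturbed run can resolve a tiny flux difference that two independent runs would bury in statistical noise.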
Harmonic-phase path-integral approximation of thermal quantum correlation functions
NASA Astrophysics Data System (ADS)
Robertson, Christopher; Habershon, Scott
2018-03-01
We present an approximation to the thermal symmetric form of the quantum time-correlation function in the standard position path-integral representation. By transforming to a sum-and-difference position representation and then Taylor-expanding the potential energy surface of the system to second order, the resulting expression provides a harmonic weighting function that approximately recovers the contribution of the phase to the time-correlation function. This method is readily implemented in a Monte Carlo sampling scheme and provides exact results for harmonic potentials (for both linear and non-linear operators) and near-quantitative results for anharmonic systems for low temperatures and times that are likely to be relevant to condensed phase experiments. This article focuses on one-dimensional examples to provide insights into convergence and sampling properties, and we also discuss how this approximation method may be extended to many-dimensional systems.
Computing thermal Wigner densities with the phase integration method.
Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S
2014-08-28
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Georgios, E-mail: garab@math.uoc.gr; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003; Katsoulakis, Markos A., E-mail: markos@math.umass.edu
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC, such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
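A stripped-down illustration of why coupling helps sensitivity estimation (a toy counting model, not lattice KMC, and simple common random numbers rather than the paper's goal-oriented coupling): the finite-difference estimator built from coupled samples has a far smaller standard error than one built from independent samples:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(theta, u):
    """Toy stochastic model driven by uniforms u: count exponential
    waiting times (rate theta) that fall below t = 1."""
    return np.sum(-np.log(u) / theta < 1.0, axis=1)

n, m, theta, h = 50_000, 20, 2.0, 0.01

# Coupled (common random numbers): the SAME u for both parameter values.
u = rng.random((n, m))
d_crn = (simulate(theta + h, u) - simulate(theta, u)) / h

# Independent samples: a much noisier finite-difference estimator.
d_ind = (simulate(theta + h, rng.random((n, m)))
         - simulate(theta, rng.random((n, m)))) / h

print("CRN estimate:", d_crn.mean(), " std err:", d_crn.std(ddof=1) / np.sqrt(n))
print("IND estimate:", d_ind.mean(), " std err:", d_ind.std(ddof=1) / np.sqrt(n))
```

Both estimators target the same derivative (analytically m·e^(-theta) ≈ 2.7 here), but the coupled differences are small and strongly correlated, while the independent differences carry the full variance of both runs; the paper's contribution is to optimize the coupling itself for a chosen observable.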
NASA Astrophysics Data System (ADS)
Sakamoto, Hiroki; Yamamoto, Toshihiro
2017-09-01
This paper presents an improvement and performance evaluation of the "perturbation source method", which is one of the Monte Carlo perturbation techniques. The formerly proposed perturbation source method was first-order accurate, although it is known that the method can easily be extended to an exact perturbation method. A transport equation for calculating the exact flux difference caused by a perturbation is solved. A perturbation particle representing a flux difference is explicitly transported in the perturbed system, instead of in the unperturbed system. The source term of the transport equation is defined by the unperturbed flux and the cross section (or optical parameter) changes. The unperturbed flux is provided by an "on-the-fly" technique during the course of the ordinary fixed source calculation for the unperturbed system. A set of perturbation particles is started at the collision points in the perturbed region and tracked until death. For a perturbation in a smaller portion of the whole domain, the efficiency of the perturbation source method can be improved by using a virtual scattering coefficient or cross section in the perturbed region, forcing collisions. Performance is evaluated by comparing the proposed method to other Monte Carlo perturbation methods. Numerical tests performed for particle transport in a two-dimensional geometry reveal that the perturbation source method is less effective than the correlated sampling method for a perturbation in a larger portion of the whole domain. However, for a perturbation in a smaller portion, the perturbation source method outperforms the correlated sampling method. The efficiency depends strongly on the adjustment of the new virtual scattering coefficient or cross section.
Monte Carlo study of four dimensional binary hard hypersphere mixtures
NASA Astrophysics Data System (ADS)
Bishop, Marvin; Whitlock, Paula A.
2012-01-01
A multithreaded Monte Carlo code was used to study the properties of binary mixtures of hard hyperspheres in four dimensions. The ratios of the diameters of the hyperspheres examined were 0.4, 0.5, 0.6, and 0.8. Many total densities of the binary mixtures were investigated. The pair correlation functions and the equations of state were determined and compared with other simulation results and theoretical predictions. At lower diameter ratios the pair correlation functions of the mixture agree with the pair correlation function of a one component fluid at an appropriately scaled density. The theoretical results for the equation of state compare well to the Monte Carlo calculations for all but the highest densities studied.
Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.
Beentjes, Casper H L; Baker, Ruth E
2018-05-25
Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from the typical slow O(N^{-1/2}) convergence rate as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely τ-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
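A minimal comparison of plain Monte Carlo with randomized quasi-Monte Carlo on a smooth quadrature problem (the integrand and dimension are arbitrary choices; this is generic quadrature, not the paper's tau-leaping application) can be written with SciPy's scrambled Sobol generator:

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(4)
f = lambda x: np.exp(np.sum(x, axis=1))      # smooth integrand on [0,1]^d
d, n, reps = 4, 2**10, 50
exact = (np.e - 1.0) ** d                    # closed-form value of the integral

# Plain Monte Carlo, repeated to measure estimator spread.
mc = [f(rng.random((n, d))).mean() for _ in range(reps)]

# Randomized (scrambled) Sobol sequences: each scrambling gives an
# independent, unbiased low-discrepancy estimate.
qmc_est = []
for i in range(reps):
    sob = qmc.Sobol(d=d, scramble=True, seed=i)
    qmc_est.append(f(sob.random(n)).mean())

print("MC  RMSE:", np.sqrt(np.mean((np.array(mc) - exact) ** 2)))
print("QMC RMSE:", np.sqrt(np.mean((np.array(qmc_est) - exact) ** 2)))
```

For smooth integrands the QMC root-mean-square error decays much faster than the N^{-1/2} Monte Carlo rate; the paper's point is that the discontinuous, discrete-state nature of chemical kinetics makes the observed gain less clear-cut there.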
Kawrakow, I
2000-03-01
In this report, the condensed history Monte Carlo simulation of electron transport and its application to the calculation of ion chamber response are discussed. It is shown that the strong step-size dependencies and lack of convergence to the correct answer previously observed are the combined effect of the following artifacts caused by the EGS4/PRESTA implementation of the condensed history technique: dose underprediction due to PRESTA's pathlength correction and lateral correlation algorithm; dose overprediction due to the boundary crossing algorithm; and dose overprediction due to the breakdown of the fictitious cross section method for sampling distances between discrete interactions and the inaccurate evaluation of energy-dependent quantities. These artifacts are now understood quantitatively, and analytical expressions for their effects are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghoos, K., E-mail: kristel.ghoos@kuleuven.be; Dekeyser, W.; Samaey, G.
2016-10-01
The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise and Robbins Monro. Also, practical procedures to estimate the errors in complex codes are proposed. Moreover, first results with more complex models show that an order of magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.
Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model
NASA Astrophysics Data System (ADS)
Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.
2018-04-01
While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16^3 to 1024^3. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackman, T.M.
1987-01-01
A theoretical investigation of the interaction potential between the helium atom and the antihydrogen atom was performed for the purpose of determining the feasibility of antihydrogen atom containment. The interaction potential showed an energy barrier to collapse of this system. A variational estimate of the height of this energy barrier and estimates of lifetime with respect to electron-positron annihilation were determined by the Variational Monte Carlo method. This calculation allowed for an improvement over an SCF result through the inclusion of explicit correlation factors in the trial wave function. An estimate of the correlation energy of this system was determined by the Green's Function Monte Carlo (GFMC) method.
Quantum Monte Carlo study of spin correlations in the one-dimensional Hubbard model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandvik, A.W.; Scalapino, D.J.; Singh, C.
1993-07-15
The one-dimensional Hubbard model is studied at and close to half-filling using a generalization of Handscomb's quantum Monte Carlo method. Results for spin-correlation functions and susceptibilities are presented for systems of up to 128 sites. The spin-correlation function at low temperature is well described by a recently introduced formula relating the correlation function of a finite periodic system to the corresponding T=0 correlation function of the infinite system. For the T→0 divergence of the q=2k_F …
Projection correlation between two random vectors.
Zhu, Liping; Xu, Kai; Li, Runze; Zhong, Wei
2017-12-01
We propose the use of projection correlation to characterize dependence between two random vectors. Projection correlation has several appealing properties. It equals zero if and only if the two random vectors are independent, it is not sensitive to the dimensions of the two random vectors, it is invariant with respect to the group of orthogonal transformations, and its estimation is free of tuning parameters and does not require moment conditions on the random vectors. We show that the sample estimate of the projection correlation is n-consistent if the two random vectors are independent and root-n-consistent otherwise. Monte Carlo simulation studies indicate that the projection correlation has higher power than the distance correlation and the ranks of distances in tests of independence, especially when the dimensions are relatively large or the moment conditions required by the distance correlation are violated.
Monte Carlo simulations of product distributions and contained metal estimates
Gettings, Mark E.
2013-01-01
Estimation of the product distribution of two factors was simulated by conventional Monte Carlo techniques using factor distributions that were independent (uncorrelated). Several simulations using uniform factor distributions show that the product distribution has a central peak approximately centered at the product of the medians of the factor distributions. Factor distributions that are peaked, such as the Gaussian (normal), produce an even more peaked product distribution. Piecewise analytic solutions can be obtained for independent factor distributions and yield insight into the properties of the product distribution. As an example, porphyry copper grades and tonnages are now available in at least one public database, and their distributions were analyzed. Although both grade and tonnage can be approximated with lognormal distributions, they are not exactly fit by them. The grade shows some nonlinear correlation with tonnage for the published database. Sampling by deposit from available databases of grade, tonnage, and geological details of each deposit specifies both grade and tonnage for that deposit. Any correlation between grade and tonnage is then preserved, and the observed distribution of grades and tonnages can be used with no assumption of distribution form.
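The central observation, that the product of two independent factors has a peaked distribution centered near the product of the medians, is easy to reproduce (illustrative uniform factor ranges, not the porphyry copper data):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000

# Two independent factor distributions, e.g. stand-ins for grade and tonnage.
a = rng.uniform(0.5, 1.5, n)
b = rng.uniform(2.0, 6.0, n)
prod = a * b

hist, edges = np.histogram(prod, bins=100)
mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print("product of medians:", np.median(a) * np.median(b))
print("peak of product distribution near:", mode)
```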
[Accuracy Check of Monte Carlo Simulation in Particle Therapy Using Gel Dosimeters].
Furuta, Takuya
2017-01-01
Gel dosimeters are a three-dimensional imaging tool for dose distributions induced by radiation. They can be used to check the accuracy of Monte Carlo simulations in particle therapy; one such application is reviewed in this article. An inhomogeneous biological sample with a gel dosimeter placed behind it was irradiated by a carbon beam. The recorded dose distribution in the gel dosimeter reflected the inhomogeneity of the biological sample. A Monte Carlo simulation was conducted by reconstructing the biological sample from its CT image. The accuracy of the particle transport in the Monte Carlo simulation was checked by comparing the dose distribution in the gel dosimeter between simulation and experiment.
Estimating air chemical emissions from research activities using stack measurement data.
Ballinger, Marcel Y; Duchsherer, Cheryl J; Woodruff, Rodger K; Larson, Timothy V
2013-03-01
Current methods of estimating air emissions from research and development (R&D) activities use a wide range of release fractions or emission factors with bases ranging from empirical to semi-empirical. Although considered conservative, the uncertainties and confidence levels of the existing methods have not been reported. Chemical emissions were estimated from sampling data taken from four research facilities over 10 years. The approach was to use a Monte Carlo technique to create distributions of annual emission estimates for target compounds detected in source test samples. Distributions were created for each year and building sampled for compounds with sufficient detection frequency to qualify for the analysis. The results using the Monte Carlo technique without applying a filter to remove negative emission values showed almost all distributions spanning zero, and 40% of the distributions having a negative mean. This indicates that emissions are so low as to be indistinguishable from building background. Application of a filter to allow only positive values in the distribution provided a more realistic value for emissions and increased the distribution mean by an average of 16%. Release fractions were calculated by dividing the emission estimates by a building chemical inventory quantity. Two variations were used for this quantity: chemical usage, and chemical usage plus one-half standing inventory. Filters were applied so that only release fraction values from zero to one were included in the resulting distributions. Release fractions had a wide range among chemicals and among data sets for different buildings and/or years for a given chemical. Regressions of release fractions to molecular weight and vapor pressure showed weak correlations. Similarly, regressions of mean emissions to chemical usage, chemical inventory, molecular weight, and vapor pressure also gave weak correlations. These results highlight the difficulties in estimating emissions from R&D facilities using chemical inventory data. Air emissions from research operations are difficult to estimate because of the changing nature of research processes and the small quantity and wide variety of chemicals used. Analysis of stack measurements taken over multiple facilities and a 10-year period using a Monte Carlo technique provided a method to quantify the low emissions and to estimate release fractions based on chemical inventories. The variation in release fractions did not correlate well with factors investigated, confirming the complexities in estimating R&D emissions.
Huang, Yili; Feng, Hao; Lu, Hang; Zeng, Yanhua
2017-07-01
It is believed that sphingomonads are ubiquitously distributed in the environment. However, detailed information about their community structure and its correlation with environmental parameters remains unclear. In this study, novel sphingomonad-specific primers based on the 16S rRNA gene were designed to investigate the distribution of sphingomonads in 10 different niches. Both in silico and in-practice tests on pure cultures and environmental samples showed that Sph384f/Sph701r was an efficient primer set. Illumina MiSeq sequencing revealed that the community structures of sphingomonads were significantly different among the 10 samples, although 12 sphingomonad genera were present in all samples. Based on RDA analysis and a Monte Carlo permutation test, sphingomonad community structure was significantly correlated with limnetic and marine habitat types. Among these niches, the genus Sphingomicrobium showed a strong positive correlation with marine habitats, whereas the genera Sphingobium, Novosphingobium, Sphingopyxis, and Sphingorhabdus showed strong positive correlations with limnetic habitats. Our study provides direct evidence that sphingomonads are ubiquitously distributed in the environment, and reveals for the first time that their community structure can be correlated with habitat.
92 Years of the Ising Model: A High Resolution Monte Carlo Study
NASA Astrophysics Data System (ADS)
Xu, Jiahao; Ferrenberg, Alan M.; Landau, David P.
2018-04-01
Using extensive Monte Carlo simulations that employ the Wolff cluster flipping and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model with lattice sizes ranging from 16^3 to 1024^3. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, we obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that improves upon previous Monte Carlo estimates.
Latin Hypercube Sampling (LHS) UNIX Library/Standalone
DOE Office of Scientific and Technical Information (OSTI.GOV)
2004-05-13
The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multivariate samples. The LHS samples can be generated either through a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability, and a sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, then the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
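A small illustration of the LHS idea using SciPy's generic Latin hypercube sampler (not the LHS UNIX Library itself, and without the restricted pairing step used there to induce correlations): each input's range is stratified into equal-probability bins, which typically tightens the output statistics relative to plain Monte Carlo:

```python
import numpy as np
from scipy.stats import norm, qmc

def to_inputs(u):
    """Map uniform [0,1)^2 points to the two input marginals via inverse CDF."""
    return norm.ppf(u, loc=[0.0, 10.0], scale=[1.0, 2.0])

g = lambda s: np.sin(s[:, 0]) + 0.1 * s[:, 1]   # cheap stand-in simulation code

n, reps = 1000, 100
rng = np.random.default_rng(6)

# Repeat each design many times to compare the spread of the mean estimate.
lhs_means = [g(to_inputs(qmc.LatinHypercube(d=2, seed=k).random(n))).mean()
             for k in range(reps)]
mc_means = [g(to_inputs(rng.random((n, 2)))).mean() for _ in range(reps)]

print("LHS spread:", np.std(lhs_means), " plain MC spread:", np.std(mc_means))
```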
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; ...
2014-05-29
We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^{-2}) or O(ε^{-2}(ln ε)^{2}), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^{-3}) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^{-5}. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate-limiting step, and its limitations.
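The telescoping-sum structure of multilevel Monte Carlo can be sketched for a scalar SDE (geometric Brownian motion with Euler–Maruyama, equal sample counts on every level for simplicity, whereas a production MLMC allocates fewer samples to finer levels; none of this is the plasma collision code):

```python
import numpy as np

rng = np.random.default_rng(7)

# Multilevel MC for E[X_T] of dX = mu X dt + sig X dW (Euler–Maruyama);
# level l uses 2^l timesteps, and fine/coarse paths share Brownian increments.
mu, sig, T, X0 = 0.05, 0.2, 1.0, 1.0

def level_estimator(l, n):
    nf = 2**l
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n, nf))
    Xf = np.full(n, X0)
    for k in range(nf):                      # fine path
        Xf = Xf * (1 + mu * dt + sig * dW[:, k])
    if l == 0:
        return Xf                            # level 0: plain coarse estimator
    Xc = np.full(n, X0)
    dWc = dW[:, 0::2] + dW[:, 1::2]          # coupled coarse increments
    for k in range(nf // 2):                 # coarse path, steps of 2*dt
        Xc = Xc * (1 + mu * 2 * dt + sig * dWc[:, k])
    return Xf - Xc                           # correction term E[P_l - P_{l-1}]

L_max, N = 5, 100_000
est = sum(level_estimator(l, N).mean() for l in range(L_max + 1))
print("MLMC estimate:", est, " exact:", X0 * np.exp(mu * T))
```

Because the coupled fine and coarse paths see the same Brownian increments, the correction terms shrink rapidly with level, which is what lets MLMC reach accuracy ε at roughly O(ε^{-2}) cost instead of O(ε^{-3}).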
Neutrino oscillation parameter sampling with MonteCUBES
NASA Astrophysics Data System (ADS)
Blennow, Mattias; Fernandez-Martinez, Enrique
2010-01-01
We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: The first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling. Program summary: Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator). Catalogue identifier: AEFJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public Licence. No. of lines in distributed program, including test data, etc.: 69 634. No. of bytes in distributed program, including test data, etc.: 3 980 776. Distribution format: tar.gz. Programming language: C. Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed. Operating system: 32 bit and 64 bit Linux. RAM: Typically a few MBs. Classification: 11.1. External routines: GLoBES [1,2] and routines/libraries used by GLoBES. Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439. Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, these new physics imply high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, such as those used in GLoBES [1,2]. Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space and has a complexity which does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software makes sure that the experimental definitions already in use by the community can also be used with MonteCUBES, while also lowering the learning threshold for users who already know GLoBES. Additional comments: A Matlab GUI for interpretation of results is included in the distribution. Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours. References: [1] P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333. [2] P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187. [3] S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].
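The Markov chain Monte Carlo engine at the heart of such a package boils down to a Metropolis random walk over the parameter space; a generic two-parameter sketch (a stand-in Gaussian log-likelihood, not MonteCUBES or GLoBES) looks like this:

```python
import numpy as np

rng = np.random.default_rng(8)

def log_likelihood(theta):
    """Stand-in for an experiment's chi^2 surface: a correlated 2-D Gaussian."""
    d = theta - np.array([0.5, -0.3])
    cov_inv = np.linalg.inv(np.array([[0.04, 0.018], [0.018, 0.01]]))
    return -0.5 * d @ cov_inv @ d

n_steps, step = 50_000, 0.05
chain = np.empty((n_steps, 2))
theta = np.zeros(2)
logp = log_likelihood(theta)
accepted = 0
for i in range(n_steps):
    prop = theta + step * rng.normal(size=2)
    logp_prop = log_likelihood(prop)
    if np.log(rng.random()) < logp_prop - logp:   # Metropolis acceptance rule
        theta, logp = prop, logp_prop
        accepted += 1
    chain[i] = theta

print("acceptance rate:", accepted / n_steps)
print("posterior mean:", chain[n_steps // 2:].mean(axis=0))  # discard burn-in
```

Unlike grid projections or minimizations, the cost of such sampling does not grow exponentially with the number of parameters, which is the motivation the record gives for the MCMC approach.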
Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.
Bishara, Anthony J; Li, Jiexiang; Nash, Thomas
2018-02-01
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the Vale and Maurelli (1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
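For reference, the default interval the paper takes as its baseline is the Fisher z' construction, which is exact only under bivariate normality (a standard formula; the numbers in the example are arbitrary):

```python
import numpy as np
from scipy.stats import norm

def fisher_ci(r, n, conf=0.95):
    """Default Fisher z' confidence interval for a Pearson correlation.
    Assumes bivariate normality; can be inaccurate when that is violated."""
    z = np.arctanh(r)                  # Fisher's z' transform
    se = 1.0 / np.sqrt(n - 3)          # asymptotic standard error under normality
    zc = norm.ppf(0.5 + conf / 2)
    return np.tanh(z - zc * se), np.tanh(z + zc * se)

print(fisher_ci(r=0.45, n=60))         # roughly (0.22, 0.63)
```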
Determination of Rolling-Element Fatigue Life From Computer Generated Bearing Tests
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2003-01-01
Two types of rolling-element bearings, representing radial loaded and thrust loaded bearings, were used for this study. Three hundred forty (340) virtual bearing sets totaling 31,400 bearings were randomly assembled and tested by Monte Carlo (random) number generation. The Monte Carlo results were compared with endurance data from 51 bearing sets comprising 5,321 bearings. A simple algebraic relation was established for the upper and lower L10 life limits as a function of the number of bearings failed for any bearing geometry. There is a fifty percent (50 percent) probability that the resultant bearing life will be less than that calculated. The maximum and minimum variation between the bearing resultant life and the calculated life correlate with the 90-percent confidence limits for a Weibull slope of 1.5. The calculated lives for bearings using a load-life exponent p of 4 for ball bearings and 5 for roller bearings correlated with the Monte Carlo generated bearing lives and the bearing data. STLE life factors for bearing steel and processing provide a reasonable accounting for differences between the bearing life data and calculated life. Variations in Weibull slope from the Monte Carlo testing and the bearing data correlated. There was excellent agreement between the percent of individual components failed from the Monte Carlo simulation and that predicted.
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
A Maximum Entropy Test for Evaluating Higher-Order Correlations in Spike Counts
Onken, Arno; Dragoi, Valentin; Obermayer, Klaus
2012-01-01
Evaluating the importance of higher-order correlations of neural spike counts has been notoriously hard. A large number of samples are typically required in order to estimate higher-order correlations and the resulting information theoretic quantities. In typical electrophysiology data sets with many experimental conditions, however, the number of samples in each condition is rather small. Here we describe a method that makes it possible to quantify evidence for higher-order correlations in exactly these cases. We construct a family of reference distributions: maximum entropy distributions, which are constrained only by marginals and by linear correlations as quantified by the Pearson correlation coefficient. We devise a Monte Carlo goodness-of-fit test, which tests, for a given divergence measure of interest, whether the experimental data lead to the rejection of the null hypothesis that they were generated by one of the reference distributions. Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. Subsequently, we apply our method to spike count data which were recorded with multielectrode arrays from the primary visual cortex of anesthetized cat during an adaptation experiment. Using mutual information as a divergence measure, we find that there are spike count bin sizes at which the maximum entropy hypothesis can be rejected for a substantial number of neuronal pairs. These results demonstrate that higher-order correlations can matter when estimating information theoretic quantities in V1. They also show that our test is able to detect their presence in typical in vivo data sets, where the number of samples is too small to estimate higher-order correlations directly. PMID:22685392
High-order Path Integral Monte Carlo methods for solving strongly correlated fermion problems
NASA Astrophysics Data System (ADS)
Chin, Siu A.
2015-03-01
In solving for the ground state of a strongly correlated many-fermion system, the conventional second-order Path Integral Monte Carlo method is plagued by the sign problem. This is due to the large number of anti-symmetric free fermion propagators that are needed to extract the square of the ground state wave function at large imaginary time. In this work, I show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than 5 free-fermion propagators, in conjunction with the use of the Hamiltonian energy estimator, can yield accurate ground state energies for quantum dots with up to 20 polarized electrons. The correlations are directly built in and no explicit wave functions are needed. This work is supported by the Qatar National Research Fund NPRP GRANT #5-674-1-114.
Waller, Niels G
2016-01-01
For a fixed set of standardized regression coefficients and a fixed coefficient of determination (R-squared), an infinite number of predictor correlation matrices will satisfy the implied quadratic form. I call such matrices fungible correlation matrices. In this article, I describe an algorithm for generating positive definite (PD), positive semidefinite (PSD), or indefinite (ID) fungible correlation matrices that have a random or fixed smallest eigenvalue. The underlying equations of this algorithm are reviewed from both algebraic and geometric perspectives. Two simulation studies illustrate that fungible correlation matrices can be profitably used in Monte Carlo research. The first study uses PD fungible correlation matrices to compare penalized regression algorithms. The second study uses ID fungible correlation matrices to compare matrix-smoothing algorithms. R code for generating fungible correlation matrices is presented in the supplemental materials.
ERIC Educational Resources Information Center
Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying
2011-01-01
Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
Methods for Monte Carlo simulations of biomacromolecules
Vitalis, Andreas; Pappu, Rohit V.
2010-01-01
The state of the art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies. PMID:20428473
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected by implementing the F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational effort and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
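The surrogate-plus-sampling structure of the forward step can be sketched as follows (a hypothetical cheap model and input distributions; the paper's actual RSM is an incomplete fourth-order polynomial fitted to finite-element results): fit a polynomial response surface on a small design, then run the Monte Carlo through the fitted surface instead of the expensive model:

```python
import numpy as np

rng = np.random.default_rng(9)

def expensive_model(x1, x2):
    """Stand-in for a costly finite-element solve."""
    return 3.0 + 2.0 * x1 - x2 + 0.5 * x1 * x2 + 0.2 * x1**2

# Fit a quadratic response surface from a small design of experiments.
X1, X2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x1, x2 = X1.ravel(), X2.ravel()
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, expensive_model(x1, x2), rcond=None)

# Monte Carlo through the cheap surrogate instead of the model itself.
s1 = rng.normal(0.0, 0.3, 200_000)
s2 = rng.normal(0.0, 0.3, 200_000)
S = np.column_stack([np.ones_like(s1), s1, s2, s1 * s2, s1**2, s2**2])
y = S @ coef
print("surrogate MC mean:", y.mean(), " std:", y.std())
```

Once the surface is fitted, each Monte Carlo evaluation is a dot product, so the rapid random sampling the abstract mentions becomes affordable inside an optimization loop.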
NASA Astrophysics Data System (ADS)
Prettyman, T. H.; Gardner, R. P.; Verghese, K.
1993-08-01
A new specific purpose Monte Carlo code called McENL for modeling the time response of epithermal neutron lifetime tools is described. The weight windows technique, employing splitting and Russian roulette, is used with an automated importance function based on the solution of an adjoint diffusion model to improve the code efficiency. Complete composition and density correlated sampling is also included in the code, and can be used to study the effect on tool response of small variations in the formation, borehole, or logging tool composition and density. An illustration of the latter application is given for the density of a thermal neutron filter. McENL was benchmarked against test-pit data for the Mobil pulsed neutron porosity tool and was found to be very accurate. Results of the experimental validation and details of code performance are presented.
Light propagation in tissues with controlled optical properties
NASA Astrophysics Data System (ADS)
Tuchin, Valery V.; Maksimova, Irina L.; Zimnyakov, Dmitry A.; Kon, Irina L.; Mavlyutov, Albert H.; Mishin, Alexey A.
1997-10-01
Theoretical and computer modeling approaches, such as Mie theory, radiative transfer theory, diffusion wave correlation spectroscopy, and Monte Carlo simulation, were used to analyze tissue optics during a process of optical clearing due to refractive index matching. Continuous-wave transmittance and forward scattering measurements, as well as intensity correlation experiments, were used to monitor tissue structural and optical properties. Tissue samples of the human sclera served as the test object. Osmotically active solutions, such as Trazograph, glucose, and polyethylene glycol, were used as the clearing agents. A characteristic time response of human scleral optical clearing in the range of 3 to 10 min was determined. The diffusion coefficients describing the permeability of the scleral samples to Trazograph were experimentally estimated; the average value was D_T ≈ (0.9 ± 0.5) × 10^{-5} cm²/s. The results are general and can be used to describe many other fibrous tissues.
Preserving correlations between trajectories for efficient path sampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gingrich, Todd R.; Geissler, Phillip L.; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720
2015-06-21
Importance sampling of trajectories has proved a uniquely successful strategy for exploring rare dynamical behaviors of complex systems in an unbiased way. Carrying out this sampling, however, requires an ability to propose changes to dynamical pathways that are substantial, yet sufficiently modest to obtain reasonable acceptance rates. Satisfying this requirement becomes very challenging in the case of long trajectories, due to the characteristic divergences of chaotic dynamics. Here, we examine schemes for addressing this problem, which engineer correlation between a trial trajectory and its reference path, for instance using artificial forces. Our analysis is facilitated by a modern perspective on Markov chain Monte Carlo sampling, inspired by non-equilibrium statistical mechanics, which clarifies the types of sampling strategies that can scale to long trajectories. Viewed in this light, the most promising such strategy guides a trial trajectory by manipulating the sequence of random numbers that advance its stochastic time evolution, as done in a handful of existing methods. In cases where this "noise guidance" synchronizes trajectories effectively, as in the Glauber dynamics of a two-dimensional Ising model, we show that efficient path sampling can be achieved for even very long trajectories.
Gutzwiller Monte Carlo approach for a critical dissipative spin model
NASA Astrophysics Data System (ADS)
Casteels, Wim; Wilson, Ryan M.; Wouters, Michiel
2018-06-01
We use the Gutzwiller Monte Carlo approach to simulate the dissipative XYZ model in the vicinity of a dissipative phase transition. This approach captures classical spatial correlations together with the full on-site quantum behavior while neglecting nonlocal quantum effects. By considering finite two-dimensional lattices of various sizes, we identify a ferromagnetic and two paramagnetic phases, in agreement with earlier studies. The greatly reduced numerical complexity of the Gutzwiller Monte Carlo approach facilitates efficient simulation of relatively large lattice sizes. The inclusion of spatial correlations makes it possible to capture parts of the phase diagram that are completely missed by the widely applied Gutzwiller decoupling of the density matrix.
Study of multi-dimensional radiative energy transfer in molecular gases
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, S. N.
1993-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.
An Overview of Importance Splitting for Rare Event Simulation
ERIC Educational Resources Information Center
Morio, Jerome; Pastel, Rudy; Le Gland, Francois
2010-01-01
Monte Carlo simulations are a classical tool to analyse physical systems. When unlikely events are to be simulated, the importance sampling technique is often used instead of Monte Carlo. Importance sampling has some drawbacks when the problem dimensionality is high or when the optimal importance sampling density is complex to obtain. In this…
NASA Astrophysics Data System (ADS)
Baba, J. S.; Koju, V.; John, D.
2015-03-01
The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computational modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10^{7}) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic scatter, but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing the anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization sensitive Monte Carlo method of Ramella-Roman et al. to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated with the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources
NASA Astrophysics Data System (ADS)
Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi
2017-01-01
Probabilistic Power Flow (PPF) is a very useful tool for power system steady-state analysis. However, the correlation among different random injected powers (like wind power) brings great difficulties to PPF calculation. Monte Carlo simulation (MCS) and analytical methods are the two commonly used approaches to solving the PPF problem. MCS has high accuracy but is very time consuming. Analytical methods like the cumulants method (CM) have high computing efficiency, but calculating the cumulants is not convenient when the wind power output does not obey any typical distribution, especially when correlated wind sources are considered. In this paper, an Improved Monte Carlo simulation method (IMCS) is proposed. The joint empirical distribution is applied to model the output of different wind sources. This method combines the advantages of both MCS and analytical methods. It not only has high computing efficiency, but also provides solutions with enough accuracy, which makes it very suitable for on-line analysis.
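The key device, a joint empirical distribution, amounts to resampling whole rows of the historical record so that the dependence between sources is preserved automatically (synthetic stand-in data below, not actual wind measurements):

```python
import numpy as np

rng = np.random.default_rng(10)

# Historical joint observations of two wind farms (same timestamps):
# resampling whole ROWS preserves their empirical correlation structure.
hist = rng.multivariate_normal([5.0, 6.0], [[1.0, 0.7], [0.7, 1.2]], size=2000)
hist = np.clip(hist, 0.0, None)             # stand-in for measured wind power

idx = rng.integers(0, len(hist), size=10_000)
sample = hist[idx]                           # draws from the joint empirical dist.

print("historical corr:", np.corrcoef(hist.T)[0, 1])
print("sampled corr:   ", np.corrcoef(sample.T)[0, 1])
```

No parametric distribution or rank-correlation transform is needed; whatever dependence the historical record contains is carried into the power flow samples directly.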
Optical variability properties of the largest AGN sample observed with Kepler/K2
NASA Astrophysics Data System (ADS)
Aranzana, E.; Koerding, E.; Uttley, P.; Scaringi, S.; Steven, B.
2017-10-01
We present the first short time-scale (hours to days) optical variability study of a large sample of Active Galactic Nuclei (AGN) observed with the Kepler/K2 mission. The sample contains 275 AGN observed over four campaigns with ~30-minute cadence, selected from the Million Quasar Catalogue with R magnitude < 19. We performed time series analysis to determine their variability properties by means of the power spectral densities (PSDs) and applied Monte Carlo techniques to find the best model parameters that fit the observed power spectra. A power-law model is sufficient to describe all the PSDs of the AGN in our sample. The average power-law slope is 2.5±0.5, steeper than the PSDs observed in X-rays, and the rest-frame amplitude variability in the frequency range of 6×10^{-6}-10^{-4} Hz varies from 1-10% with an average of 2.6%. We explore correlations between the variability amplitude and key parameters of the AGN, finding a significant correlation of rest-frame short-term variability amplitude with redshift, but no such correlation with luminosity. We attribute these effects to the known 'bluer when brighter' variability of quasars combined with the fixed bandpass of Kepler. This study enables us to distinguish between Seyferts and Blazars and to confirm AGN candidates.
Thomas B. Lynch; Jeffrey H. Gove
2014-01-01
The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...
NASA Astrophysics Data System (ADS)
Verbeke, Jérôme M.; Petit, Odile; Chebboubi, Abdelhazize; Litaize, Olivier
2018-01-01
Fission modeling in general-purpose Monte Carlo transport codes often relies on average nuclear data provided by international evaluation libraries. As such, only average fission multiplicities are available and correlations between fission neutrons and photons are missing. Whereas uncorrelated fission physics is usually sufficient for standard reactor core and radiation shielding calculations, correlated fission secondaries are required for specialized nuclear instrumentation and detector modeling. For coincidence counting detector optimization for instance, precise simulation of fission neutrons and photons that remain correlated in time from birth to detection is essential. New developments were recently integrated into the Monte Carlo transport code TRIPOLI-4 to model fission physics more precisely, the purpose being to access event-by-event fission events from two different fission models: FREYA and FIFRELIN. TRIPOLI-4 simulations can now be performed, either by connecting via an API to the LLNL fission library including FREYA, or by reading external fission event data files produced by FIFRELIN beforehand. These new capabilities enable us to easily compare results from Monte Carlo transport calculations using the two fission models in a nuclear instrumentation application. In the first part of this paper, broad underlying principles of the two fission models are recalled. We then present experimental measurements of neutron angular correlations for 252Cf(sf) and 240Pu(sf). The correlations were measured for several neutron kinetic energy thresholds. In the latter part of the paper, simulation results are compared to experimental data. Spontaneous fissions in 252Cf and 240Pu are modeled by FREYA or FIFRELIN. Emitted neutrons and photons are subsequently transported to an array of scintillators by TRIPOLI-4 in analog mode to preserve their correlations. Angular correlations between fission neutrons obtained independently from these TRIPOLI-4 simulations, using either FREYA or FIFRELIN, are compared to experimental results. For 240Pu(sf), the measured correlations were used to tune the model parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Pengfei; Wang, Qiang, E-mail: q.wang@colostate.edu
2014-01-28
Using fast lattice Monte Carlo (FLMC) simulations [Q. Wang, Soft Matter 5, 4564 (2009)] and the corresponding lattice self-consistent field (LSCF) calculations, we studied a model system of grafted homopolymers, in both the brush and mushroom regimes, in an explicit solvent compressed by an impenetrable surface. Direct comparisons between FLMC and LSCF results, both of which are based on the same Hamiltonian (thus without any parameter-fitting between them), unambiguously and quantitatively reveal the fluctuations/correlations neglected by the latter. We studied both the structure (including the canonical-ensemble averages of the height and the mean-square end-to-end distances of grafted polymers) and thermodynamics (including the ensemble-averaged reduced energy density and the related internal energy per chain, the differences in the Helmholtz free energy and entropy per chain from the uncompressed state, and the pressure due to compression) of the system. In particular, we generalized the method for calculating pressure in lattice Monte Carlo simulations proposed by Dickman [J. Chem. Phys. 87, 2246 (1987)], and combined it with the Wang-Landau-Optimized Ensemble sampling [S. Trebst, D. A. Huse, and M. Troyer, Phys. Rev. E 70, 046701 (2004)] to efficiently and accurately calculate the free energy difference and the pressure due to compression. While we mainly examined the effects of the degree of compression, the distance between the nearest-neighbor grafting points, the reduced number of chains grafted at each grafting point, and the system fluctuations/correlations in an athermal solvent, the θ-solvent is also considered in some cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Tianxing; Lin, Hai-Qing; Gubernatis, James E.
2015-09-01
By using the constrained-phase quantum Monte Carlo method, we performed a systematic study of the pairing correlations in the ground state of the doped Kane-Mele-Hubbard model on a honeycomb lattice. We find that pairing correlations with d+id symmetry dominate close to half filling, but pairing correlations with p+ip symmetry dominate as hole doping moves the system below three-quarters filling. We correlate these behaviors of the pairing correlations with the topology of the Fermi surfaces of the non-interacting problem. We also find that the effective pairing correlation is enhanced greatly as the interaction increases, and these superconducting correlations are robust against varying the spin-orbit coupling strength. Finally, our numerical results suggest a possible way to realize spin triplet superconductivity in doped honeycomb-like materials or ultracold atoms in optical traps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alcaraz, Olga; Trullàs, Joaquim, E-mail: quim.trullas@upc.edu; Tahara, Shuta
2016-09-07
The results of the structural properties of molten copper chloride are reported from high-energy X-ray diffraction measurements, reverse Monte Carlo modeling method, and molecular dynamics simulations using a polarizable ion model. The simulated X-ray structure factor reproduces all trends observed experimentally, in particular the shoulder at around 1 Å⁻¹ related to intermediate range ordering, as well as the partial copper-copper correlations from the reverse Monte Carlo modeling, which cannot be reproduced by using a simple rigid ion model. It is shown that the shoulder comes from intermediate range copper-copper correlations caused by the polarized chlorides.
Zhang, Yan; Jia, WenBao; Gardner, Robin; Shan, Qing; Hei, Daqian
2018-02-01
In the present work, a prompt gamma neutron activation analysis (PGNAA) setup, consisting of a 300 mCi 241Am-Be (americium-beryllium) neutron source and a 4 × 4-in. bismuth germanium oxide (BGO) detector, was developed for heavy metal detection in aqueous solutions. A series of standard samples of analytical purity were prepared by dissolving heavy metals in deionized water. Quantitative spectrum analysis of the standard samples was performed with the Monte Carlo Library Least-Squares (MCLLS) approach. The response functions of the 4 × 4-in. BGO detector were generated using the CEARDRF code, and the element libraries were simulated with the CEARCPG code, developed by Dr. Gardner. The simulation results were in very good agreement with the experimental results: the correlation coefficients were very close to 1 when the fitted spectra were compared with the experimental spectra. With the MCLLS approach, the relative deviation of the measurements was less than 2.27% for Ni, Mn, and Cu, but up to 69.33% for Pb. Copyright © 2017 Elsevier Ltd. All rights reserved.
Pei, Yanbo; Tian, Guo-Liang; Tang, Man-Lai
2014-11-10
Stratified data analysis is an important research topic in many biomedical studies and clinical trials. In this article, we develop five test statistics for testing the homogeneity of proportion ratios for stratified correlated bilateral binary data under an equal-correlation model assumption. Bootstrap procedures based on these test statistics are also considered. To evaluate the performance of these statistics and procedures, we conduct Monte Carlo simulations to study their empirical sizes and powers under various scenarios. Our results suggest that the procedure based on the score statistic generally performs well and is highly recommended. When the sample size is large, procedures based on the commonly used weighted least-squares estimate and on the logarithmic transformation with the Mantel-Haenszel estimate are recommended, as they do not require computing maximum likelihood estimates via iterative algorithms. We also derive approximate sample size formulas based on the recommended test procedures. Finally, we apply the proposed methods to analyze a multi-center randomized clinical trial for scleroderma patients. Copyright © 2014 John Wiley & Sons, Ltd.
Random Numbers and Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by preferentially sampling the important configurations. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
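As a minimal illustration of the importance-sampling idea summarized above (a sketch, not code from the text), the following Python snippet estimates the integral of exp(x) over [0, 1] both by uniform sampling and by preferentially sampling where the integrand is large, using the density p(x) = 2(1 + x)/3 with inverse-CDF sampling:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

f = lambda x: np.exp(x)            # integrand on [0, 1]; exact integral is e - 1

# Plain Monte Carlo: uniform samples on [0, 1].
x_uniform = rng.random(n)
plain = f(x_uniform).mean()

# Importance sampling: density p(x) = 2(1 + x)/3 roughly follows the integrand.
# Inverse-CDF sampling: F(x) = (2x + x^2)/3 = u  =>  x = sqrt(1 + 3u) - 1.
u = rng.random(n)
x_imp = np.sqrt(1.0 + 3.0 * u) - 1.0
weights = f(x_imp) / (2.0 * (1.0 + x_imp) / 3.0)
importance = weights.mean()

print(f"plain MC:      {plain:.5f} (std err {f(x_uniform).std(ddof=1)/np.sqrt(n):.5f})")
print(f"importance MC: {importance:.5f} (std err {weights.std(ddof=1)/np.sqrt(n):.5f})")
print(f"exact:         {np.e - 1.0:.5f}")
```

The importance-sampled estimator has a visibly smaller standard error because the weighted integrand f(x)/p(x) is flatter than f(x) itself.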
Computer program uses Monte Carlo techniques for statistical system performance analysis
NASA Technical Reports Server (NTRS)
Wohl, D. P.
1967-01-01
Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.
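The idea, propagating the full statistics of each component's disturbances and misalignments through a system model by simulated random sampling, might look like the following sketch. All component models, tolerances, and names here are hypothetical stand-ins, not the NASA program:

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials = 50_000

# Hypothetical component tolerances: each part's error is drawn from the
# full statistics of its disturbance/misalignment (here, simple Gaussians).
gain_error  = rng.normal(0.0, 0.02, n_trials)   # amplifier gain, 2% (1 sigma)
offset_mrad = rng.normal(0.0, 1.5,  n_trials)   # alignment offset in mrad
noise_floor = rng.normal(1.0, 0.1,  n_trials)   # relative sensor noise

# Stand-in system model: overall performance as a function of part errors.
performance = (1.0 + gain_error) * np.cos(offset_mrad * 1e-3) / noise_floor

print(f"mean performance: {performance.mean():.4f}")
print(f"std deviation:    {performance.std(ddof=1):.4f}")
print(f"1st percentile:   {np.percentile(performance, 1):.4f}")
```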
Kistner, Emily O; Muller, Keith E
2004-09-01
Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
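To make the flavor of such simulations concrete, here is a small sketch (not the authors' code) that draws Gaussian data with a compound-symmetric covariance and builds the Monte Carlo sampling distribution of Cronbach's alpha for ten observations:

```python
import numpy as np

def cronbach_alpha(data):
    """Cronbach's alpha from an (observations x items) data matrix."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
n_obs, k_items, rho = 10, 5, 0.5

# Compound-symmetric covariance: unit variances, equal correlations rho.
cov = np.full((k_items, k_items), rho) + (1 - rho) * np.eye(k_items)

# Monte Carlo sampling distribution of alpha with only 10 observations.
alphas = np.array([
    cronbach_alpha(rng.multivariate_normal(np.zeros(k_items), cov, n_obs))
    for _ in range(20_000)
])
print(f"population alpha: {k_items * rho / (1 + (k_items - 1) * rho):.3f}")
print(f"MC mean estimate: {alphas.mean():.3f}")
print(f"95% MC interval:  [{np.quantile(alphas, 0.025):.3f}, {np.quantile(alphas, 0.975):.3f}]")
```

The wide interval at n = 10 echoes the paper's observation that small-sample confidence statements about reliability require care.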
ERIC Educational Resources Information Center
Neel, John H.; Stallings, William M.
An influential statistics text recommends a Levene test for homogeneity of variance. A recent note suggests that Levene's test is upwardly biased for small samples. Another report shows inflated Alpha estimates and low power. Neither study utilized more than two sample sizes. This Monte Carlo study involved sampling from a normal population for…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, E.; Madigan, M.
Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of jobs across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimation using scikit-learn's KDTree; modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel; output and restart files that are backed up every iteration; user-defined metric and simulation methods; a module for specifying heterogeneous parameter priors, including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; and well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC
Finite element model updating using the shadow hybrid Monte Carlo technique
NASA Astrophysics Data System (ADS)
Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.
2015-02-01
Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers a very important MCMC approach to dealing with higher-dimensional complex problems. HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm, a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
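For readers unfamiliar with the baseline method, here is a minimal sketch of plain HMC (not the shadow-Hamiltonian variant), with a toy 2-D Gaussian standing in for the FEM-updating posterior; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Target: log-density and gradient of a 2-D Gaussian posterior (stand-in
# for the FEM posterior, which would come from a finite element model).
def log_post(q):  return -0.5 * np.dot(q, q)
def grad_log(q):  return -q

def hmc_step(q, step=0.2, n_leap=20):
    """One Hybrid Monte Carlo step: leapfrog trajectory + accept/reject."""
    p = rng.standard_normal(q.shape)             # resample momenta
    h0 = -log_post(q) + 0.5 * np.dot(p, p)       # initial Hamiltonian
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * step * grad_log(q_new)        # half kick
    for _ in range(n_leap - 1):
        q_new += step * p_new                    # drift
        p_new += step * grad_log(q_new)          # full kick
    q_new += step * p_new                        # final drift
    p_new += 0.5 * step * grad_log(q_new)        # final half kick
    h1 = -log_post(q_new) + 0.5 * np.dot(p_new, p_new)
    return (q_new, True) if np.log(rng.random()) < h0 - h1 else (q, False)

q, accepted, samples = np.zeros(2), 0, []
for _ in range(5000):
    q, ok = hmc_step(q)
    accepted += ok
    samples.append(q)
print(f"acceptance rate: {accepted / 5000:.2f}")
print("sample covariance:\n", np.cov(np.array(samples).T))
```

SHMC modifies the accept/reject step to use a shadow Hamiltonian that the leapfrog integrator conserves more accurately, which is what improves acceptance for large systems and time steps.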
NASA Astrophysics Data System (ADS)
Shafer, J. M.; Varljen, M. D.
1990-08-01
A fundamental requirement for geostatistical analyses of spatially correlated environmental data is the estimation of the sample semivariogram to characterize spatial correlation. Selecting an underlying theoretical semivariogram based on the sample semivariogram is an extremely important and difficult task that is subject to a great deal of uncertainty. Current standard practice does not involve consideration of the confidence associated with semivariogram estimates, largely because classical statistical theory does not provide the capability to construct confidence limits from single realizations of correlated data, and multiple realizations of environmental fields are not found in nature. The jackknife method is a nonparametric statistical technique for parameter estimation that may be used to estimate the semivariogram. When used in connection with standard confidence procedures, it allows for the calculation of closely approximate confidence limits on the semivariogram from single realizations of spatially correlated data. The accuracy and validity of this technique was verified using a Monte Carlo simulation approach which enabled confidence limits about the semivariogram estimate to be calculated from many synthetically generated realizations of a random field with a known correlation structure. The synthetically derived confidence limits were then compared to jackknife estimates from single realizations with favorable results. Finally, the methodology for applying the jackknife method to a real-world problem and an example of the utility of semivariogram confidence limits were demonstrated by constructing confidence limits on seasonal sample variograms of nitrate-nitrogen concentrations in shallow groundwater in an approximately 12-mi² (~30 km²) region in northern Illinois. In this application, the confidence limits on sample semivariograms from different time periods were used to evaluate the significance of temporal change in spatial correlation. This capability is quite important as it can indicate when a spatially optimized monitoring network would need to be reevaluated and thus lead to more robust monitoring strategies.
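A delete-one jackknife for a semivariogram estimate at a single lag can be sketched as below. This is a schematic Python version with synthetic data and a deliberately simple classical estimator; the paper's own application uses real nitrate data and standard confidence procedures:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic spatially correlated field: n points, exponential covariance.
n = 60
coords = rng.random((n, 2)) * 10.0
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
z = rng.multivariate_normal(np.zeros(n), np.exp(-d / 3.0))

def semivariogram(z, d, lag, tol=0.5, mask=None):
    """Classical estimator: mean of 0.5*(z_i - z_j)^2 over pairs near the lag."""
    keep = np.ones(len(z), bool) if mask is None else mask
    i, j = np.where((np.abs(d - lag) < tol) & keep[:, None] & keep[None, :])
    sel = i < j                                   # count each pair once
    return 0.5 * np.mean((z[i[sel]] - z[j[sel]]) ** 2)

lag = 2.0
full = semivariogram(z, d, lag)

# Delete-one jackknife: recompute the estimate with each point left out.
loo = np.empty(n)
for k in range(n):
    mask = np.ones(n, bool)
    mask[k] = False
    loo[k] = semivariogram(z, d, lag, mask=mask)
jk_var = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)
half = 1.96 * np.sqrt(jk_var)
print(f"gamma({lag}) = {full:.3f},  approx 95% CI: [{full-half:.3f}, {full+half:.3f}]")
```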
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear engineering review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I, II, and III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate-level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures and hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
Multivariate stochastic simulation with subjective multivariate normal distributions
P. J. Ince; J. Buongiorno
1991-01-01
In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...
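A standard way to implement such correlated multivariate normal sampling is via the Cholesky factor of the covariance matrix, as in the sketch below; the means, standard deviations, and correlations are hypothetical placeholders for subjectively assessed values:

```python
import numpy as np

rng = np.random.default_rng(11)

# Subjectively assessed means, standard deviations, and correlations
# (hypothetical numbers for, say, three correlated price variables).
mean = np.array([100.0, 80.0, 60.0])
sd   = np.array([10.0, 8.0, 5.0])
corr = np.array([[1.0, 0.7, 0.3],
                 [0.7, 1.0, 0.5],
                 [0.3, 0.5, 1.0]])

cov = corr * np.outer(sd, sd)        # covariance from correlations and sds
L = np.linalg.cholesky(cov)          # cov = L @ L.T

# Correlated draws: x = mean + L @ z with z ~ N(0, I).
z = rng.standard_normal((10_000, 3))
x = mean + z @ L.T

print("sample means:", x.mean(axis=0).round(2))
print("sample correlations:\n", np.corrcoef(x, rowvar=False).round(2))
```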
An uncertainty analysis of the flood-stage upstream from a bridge.
Sowiński, M
2006-01-01
The paper begins with the formulation of the problem in the form of a general performance function. Next, the Latin hypercube sampling (LHS) technique, a modified version of the Monte Carlo method, is briefly described. The uncertainty analysis of the flood stage upstream from a bridge starts with a description of the hydraulic model. The model concept is based on the HEC-RAS model developed for subcritical flow under a bridge without piers, in which the energy equation is applied. The next section characterizes the basic variables, including a specification of their statistics (means and variances). The problem of correlated variables is then discussed, and assumptions concerning correlation among the basic variables are formulated. The analysis of results is based on LHS ranking lists obtained from the computer package UNCSAM. Results for two examples are given: one for independent and the other for correlated variables.
Barber, Jared; Tanase, Roxana; Yotov, Ivan
2016-06-01
Several Kalman filter algorithms are presented for data assimilation and parameter estimation for a nonlinear diffusion model of epithelial cell migration. These include the ensemble Kalman filter with Monte Carlo sampling and a stochastic collocation (SC) Kalman filter with structured sampling. Further, two types of noise are considered: uncorrelated noise, resulting in one stochastic dimension for each element of the spatial grid, and correlated noise, parameterized by the Karhunen-Loeve (KL) expansion, resulting in one stochastic dimension for each KL term. The efficiency and accuracy of the four methods are investigated for two cases with synthetic data, with and without noise, as well as data from a laboratory experiment. While all algorithms perform reasonably well in matching the target solution and estimating the diffusion coefficient and the growth rate, the algorithms that employ SC and the KL expansion are computationally more efficient, as they require fewer ensemble members for comparable accuracy. In the case of SC methods, this is due to improved approximation in stochastic space compared to Monte Carlo sampling. In the case of KL methods, the parameterization of the noise results in a stochastic space of smaller dimension. The most efficient method is the one combining SC and the KL expansion. Copyright © 2016 Elsevier Inc. All rights reserved.
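A minimal stochastic ensemble Kalman filter analysis step, with a stand-in linear observation of a three-component state (not the cell-migration model itself), can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(2)

n_ens, n_state = 100, 3
H = np.array([[1.0, 0.0, 0.0]])          # observe only the first component
R = np.array([[0.25]])                   # observation error variance

# Forecast ensemble (stand-in for model runs), shape (state, members).
X = rng.multivariate_normal([1.0, 0.5, -0.3],
                            0.5 * np.eye(n_state), n_ens).T
d_obs = np.array([1.4])                  # the measurement

# Ensemble covariance and Kalman gain.
A = X - X.mean(axis=1, keepdims=True)
P = A @ A.T / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Stochastic EnKF update: perturb the observation for each member.
D = d_obs[:, None] + rng.normal(0.0, np.sqrt(R[0, 0]), (1, n_ens))
X_a = X + K @ (D - H @ X)

print("forecast mean:", X.mean(axis=1).round(3))
print("analysis mean:", X_a.mean(axis=1).round(3))
```

In the SC and KL variants described in the paper, the Monte Carlo ensemble above is replaced by structured collocation points and the noise by a truncated KL expansion, shrinking the stochastic dimension.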
NASA Astrophysics Data System (ADS)
Ahn, Hyunjun; Jung, Younghun; Om, Ju-Seong; Heo, Jun-Haeng
2014-05-01
Selecting an appropriate probability distribution is very important in statistical hydrology, and a goodness-of-fit test is a statistical method for selecting an appropriate probability model for a given data set. The probability plot correlation coefficient (PPCC) test, one such goodness-of-fit test, was originally developed for the normal distribution and has since been widely applied to other probability models. The PPCC test is regarded as one of the best goodness-of-fit tests because it shows higher rejection power than many alternatives. In this study, we focus on PPCC tests for the GEV distribution, which is widely used around the world. For the GEV model, several plotting position formulas have been suggested; however, PPCC statistics are derived only for those plotting position formulas (Goel and De; In-na and Nguyen; Kim et al.) that include the skewness coefficient (or shape parameter). Regression equations are then derived as a function of the shape parameter and sample size for a given significance level. In addition, the rejection powers of these formulas are compared using Monte Carlo simulation. Keywords: goodness-of-fit test, probability plot correlation coefficient test, plotting position, Monte Carlo simulation. ACKNOWLEDGEMENTS: This research was supported by a grant, 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57], from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
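The mechanics of a PPCC test, shown here for the normal case with the Weibull plotting position p_i = i/(n+1) rather than the GEV formulas studied in the paper, can be sketched as follows (assumes SciPy is available):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

def ppcc_normal(sample):
    """PPCC: correlation between ordered data and normal quantiles at
    Weibull plotting positions p_i = i / (n + 1)."""
    n = len(sample)
    p = np.arange(1, n + 1) / (n + 1)
    return np.corrcoef(np.sort(sample), stats.norm.ppf(p))[0, 1]

n = 30
# Monte Carlo critical value: 5th percentile of the PPCC under H0 (normal).
null_stats = np.array([ppcc_normal(rng.standard_normal(n))
                       for _ in range(20_000)])
crit = np.quantile(null_stats, 0.05)

data = rng.exponential(size=n)           # clearly non-normal test sample
r = ppcc_normal(data)
print(f"PPCC = {r:.4f}, 5% critical value = {crit:.4f}, reject normality: {r < crit}")
```

Small PPCC values indicate poor agreement between the data and the hypothesized distribution, so the test rejects in the lower tail.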
NASA Astrophysics Data System (ADS)
Galler, Anna; Gunacker, Patrik; Tomczak, Jan; Thunström, Patrik; Held, Karsten
Recently, approaches such as the dynamical vertex approximation (DΓA) or the dual-fermion method have been developed. These diagrammatic approaches go beyond dynamical mean field theory (DMFT) by including nonlocal electronic correlations on all length scales as well as the local DMFT correlations. Here we present our efforts to extend the DΓA methodology to ab initio materials calculations (ab initio DΓA). Our approach is a unifying framework which includes both GW and DMFT-type diagrams, but also important nonlocal correlations beyond, e.g. nonlocal spin fluctuations. In our multi-band implementation we use a worm sampling technique within continuous-time quantum Monte Carlo in the hybridization expansion to obtain the DMFT vertex, from which we construct the reducible vertex function using the two particle-hole ladders. As a first application we show results for transition metal oxides. Support by the ERC project AbinitioDGA (306447) is acknowledged.
NASA Astrophysics Data System (ADS)
Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel
2018-05-01
We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
Continuous-time quantum Monte Carlo impurity solvers
NASA Astrophysics Data System (ADS)
Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias
2011-04-01
Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states.
Program summary: Program title: dmft. Catalogue identifier: AEIL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: ALPS LIBRARY LICENSE version 1.1. No. of lines in distributed program, including test data, etc.: 899 806. No. of bytes in distributed program, including test data, etc.: 32 153 916. Distribution format: tar.gz. Programming language: C++. Operating system: the ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher) and Intel C++ Compiler (icc version 7.0 and higher); MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0); IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers; Compaq Tru64 UNIX with Compaq C++ Compiler (cxx); SGI IRIX with MIPSpro C++ Compiler (CC); HP-UX with HP C++ Compiler (aCC); Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher). RAM: 10 MB-1 GB. Classification: 7.3. External routines: ALPS [1], BLAS/LAPACK, HDF5. Nature of problem (see [2]): quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions. Solution method: quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2]. Additional comments: use of dmft requires citation of this paper; use of any ALPS program requires citation of the ALPS [1] paper. Running time: 60 s-8 h per iteration.
Luminosity correlations in quasars
NASA Technical Reports Server (NTRS)
Chanan, G. A.
1983-01-01
Simulations with and without flux thresholds are conducted in a Monte Carlo investigation of quasar luminosity correlations, for various model distributions of quasars in X-ray and optical luminosity. For the case where the X-ray photons are primary, an anticorrelation between the X-ray-to-optical luminosity ratio and optical luminosity arises as a natural consequence which resembles observations. The low optical luminosities of X-ray selected quasars can be understood as a consequence of the same effect, and similar conclusions may hold if the X-ray and optical luminosities are determined independently by a third parameter, although they do not hold if the optical photons are primary. The importance of such considerations is demonstrated through a reanalysis of the published X-ray-to-optical flux ratios for the 3CR sample.
Chemical accuracy from quantum Monte Carlo for the benzene dimer.
Azadi, Sam; Cohen, R E
2015-09-14
We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel-displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we can compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate from coupled-cluster theory with perturbative triples at the complete-basis-set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.
NASA Astrophysics Data System (ADS)
Fan, Y. R.; Huang, G. H.; Baetz, B. W.; Li, Y. P.; Huang, K.
2017-06-01
In this study, a copula-based particle filter (CopPF) approach was developed for sequential hydrological data assimilation by considering parameter correlation structures. In CopPF, multivariate copulas are proposed to reflect parameter interdependence before the resampling procedure with new particles then being sampled from the obtained copulas. Such a process can overcome both particle degeneration and sample impoverishment. The applicability of CopPF is illustrated with three case studies using a two-parameter simplified model and two conceptual hydrologic models. The results for the simplified model indicate that model parameters are highly correlated in the data assimilation process, suggesting a demand for full description of their dependence structure. Synthetic experiments on hydrologic data assimilation indicate that CopPF can rejuvenate particle evolution in large spaces and thus achieve good performances with low sample size scenarios. The applicability of CopPF is further illustrated through two real-case studies. It is shown that, compared with traditional particle filter (PF) and particle Markov chain Monte Carlo (PMCMC) approaches, the proposed method can provide more accurate results for both deterministic and probabilistic prediction with a sample size of 100. Furthermore, the sample size would not significantly influence the performance of CopPF. Also, the copula resampling approach dominates parameter evolution in CopPF, with more than 50% of particles sampled by copulas in most sample size scenarios.
HepSim: A repository with predictions for high-energy physics experiments
Chekanov, S. V.
2015-02-03
A file repository for calculations of cross sections and kinematic distributions using Monte Carlo generators for high-energy collisions is discussed. The repository is used to facilitate effective preservation and archiving of data from theoretical calculations and for comparisons with experimental data. The HepSim data library is publicly accessible and includes a number of Monte Carlo event samples with Standard Model predictions for current and future experiments. The HepSim project includes a software package to automate the process of downloading and viewing online Monte Carlo event samples. Data streaming over a network for end-user analysis is discussed.
Spreading of correlations in the Falicov-Kimball model
NASA Astrophysics Data System (ADS)
Herrmann, Andreas J.; Antipov, Andrey E.; Werner, Philipp
2018-04-01
We study dynamical properties of the one- and two-dimensional Falicov-Kimball model using lattice Monte Carlo simulations. In particular, we calculate the spreading of charge correlations in the equilibrium model and after an interaction quench. The results show a reduction of the light-cone velocity with interaction strength at low temperature, while the phase velocity increases. At higher temperature, the initial spreading is determined by the Fermi velocity of the noninteracting system and the maximum range of the correlations decreases with increasing interaction strength. Charge order correlations in the disorder potential enhance the range of the correlations. We also use the numerically exact lattice Monte Carlo results to benchmark the accuracy of equilibrium and nonequilibrium dynamical cluster approximation calculations. It is shown that the bias introduced by the mapping to a periodized cluster is substantial, and that from a numerical point of view, it is more efficient to simulate the lattice model directly.
NASA Astrophysics Data System (ADS)
Christodoulou, L.; Eminian, C.; Loveday, J.; Norberg, P.; Baldry, I. K.; Hurley, P. D.; Driver, S. P.; Bamford, S. P.; Hopkins, A. M.; Liske, J.; Peacock, J. A.; Bland-Hawthorn, J.; Brough, S.; Cameron, E.; Conselice, C. J.; Croom, S. M.; Frenk, C. S.; Gunawardhana, M.; Jones, D. H.; Kelvin, L. S.; Kuijken, K.; Nichol, R. C.; Parkinson, H.; Pimbblet, K. A.; Popescu, C. C.; Prescott, M.; Robotham, A. S. G.; Sharp, R. G.; Sutherland, W. J.; Taylor, E. N.; Thomas, D.; Tuffs, R. J.; van Kampen, E.; Wijesinghe, D.
2012-09-01
We measure the two-point angular correlation function of a sample of 4,289,223 galaxies with r < 19.4 mag from the Sloan Digital Sky Survey (SDSS) as a function of photometric redshift, absolute magnitude and colour down to Mr - 5 log h = -14 mag. Photometric redshifts are estimated from ugriz model magnitudes and two Petrosian radii using the artificial neural network package ANNz, taking advantage of the Galaxy And Mass Assembly (GAMA) spectroscopic sample as our training set. These photometric redshifts are then used to determine absolute magnitudes and colours. For all our samples, we estimate the underlying redshift and absolute magnitude distributions using Monte Carlo resampling. These redshift distributions are used in Limber's equation to obtain spatial correlation function parameters from power-law fits to the angular correlation function. We confirm an increase in clustering strength for sub-L* red galaxies compared with ~L* red galaxies at small scales in all redshift bins, whereas for the blue population the correlation length is almost independent of luminosity for ~L* galaxies and fainter. A linear relation between relative bias and log luminosity is found to hold down to luminosities L ~ 0.03L*. We find that the redshift dependence of the bias of the L* population can be described by the passive evolution model of Tegmark & Peebles. A visual inspection of a random sample from our r < 19.4 sample of SDSS galaxies reveals that about 10 per cent are spurious, with a higher contamination rate towards very faint absolute magnitudes due to over-deblended nearby galaxies. We correct for this contamination in our clustering analysis.
A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowder, Stephen Vernon; Moyer, Robert D.
2005-05-01
Proposed supplement I to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals.
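One way to realize such a two-stage procedure (a sketch under simple normal-model assumptions, not the paper's exact algorithm) is to draw plausible input-distribution parameters from their finite-sample posteriors in an outer loop and propagate them through the measurement equation in an inner loop; the equation y = a*b below is a toy stand-in:

```python
import numpy as np

rng = np.random.default_rng(4)

# Finite calibration samples for two inputs of a toy measurement y = a * b.
a_sample = rng.normal(2.0, 0.1, size=8)
b_sample = rng.normal(5.0, 0.3, size=12)

def draw_mu_sigma(sample):
    """Stage 1: one plausible (mu, sigma) pair given a finite sample, using
    the standard normal-model results (scaled chi-square for the variance,
    then a Gaussian for the mean at the drawn sigma)."""
    n, xbar, s2 = len(sample), sample.mean(), sample.var(ddof=1)
    sigma2 = (n - 1) * s2 / rng.chisquare(n - 1)
    return rng.normal(xbar, np.sqrt(sigma2 / n)), np.sqrt(sigma2)

n_outer, n_inner = 2000, 500
y = np.empty((n_outer, n_inner))
for i in range(n_outer):
    mu_a, sd_a = draw_mu_sigma(a_sample)
    mu_b, sd_b = draw_mu_sigma(b_sample)
    # Stage 2: propagate the input distributions through the model.
    y[i] = rng.normal(mu_a, sd_a, n_inner) * rng.normal(mu_b, sd_b, n_inner)

lo, hi = np.quantile(y, [0.025, 0.975])
print(f"95% uncertainty interval for y: [{lo:.3f}, {hi:.3f}]")
```

Treating the estimated input distributions as exact would collapse the outer loop and produce an interval that is too narrow, which is precisely the coverage problem the paper addresses.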
NASA Astrophysics Data System (ADS)
Hansen, T. M.; Cordua, K. S.
2017-12-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on for example, complex geostatistical models and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerical complex evaluation of the forward problem, with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution), than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
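The essential trick, replacing the expensive forward model with a fast approximation whose modeling error is quantified probabilistically and folded into the likelihood, can be sketched as follows. Both "models" here are toy stand-ins (neither a trained network nor a full-waveform code), and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)

def forward_accurate(m):
    """Stand-in for the expensive forward model (e.g. waveform + picking)."""
    return np.sin(m) + 0.1 * m**2

def forward_fast(m):
    """Stand-in for the trained surrogate: cheap but biased approximation."""
    return m - m**3 / 6 + 0.1 * m**2

# Quantify the modeling error over prior samples (done once, offline).
m_prior = rng.normal(0.0, 1.0, 5000)
dT = forward_accurate(m_prior) - forward_fast(m_prior)
bias, var_T = dT.mean(), dT.var()

# Metropolis sampling with the fast forward model; the modeling error is
# folded into the likelihood as an extra bias and variance term.
d_obs, sigma_d = 0.45, 0.05
def log_like(m):
    resid = d_obs - (forward_fast(m) + bias)
    return -0.5 * resid**2 / (sigma_d**2 + var_T)

m, chain = 0.0, []
for _ in range(20_000):
    prop = m + 0.3 * rng.standard_normal()
    # Standard N(0, 1) prior enters through the -0.5*m^2 terms.
    if np.log(rng.random()) < (log_like(prop) - 0.5 * prop**2) - (log_like(m) - 0.5 * m**2):
        m = prop
    chain.append(m)
print(f"posterior mean: {np.mean(chain[2000:]):.3f}")
```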
Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.
2015-11-19
Our study investigates the Multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity, accounting for correlations. Ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model is capable of incorporating the general correlation structure and accounts for the overdispersion in the data, which leads to a superior data fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits its use in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using pedestrian-vehicle crash data collected in New York City from 2002 to 2006, and highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate the model fit. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity accounting for correlations.
Thomas B. Lynch; Rodney E. Will; Rider Reynolds
2013-01-01
Preliminary results are given for development of an eastern redcedar (Juniperus virginiana) cubic-volume equation based on measurements of redcedar sample tree stem volume using dendrometry with Monte Carlo integration. Monte Carlo integration techniques can be used to provide unbiased estimates of stem cubic-foot volume based on upper stem diameter...
High-efficiency wavefunction updates for large scale Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed
Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
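The baseline rank-1 scheme that the delayed rank-k update generalizes can be sketched in a few lines, with a toy 8×8 matrix standing in for the Slater matrix:

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 8, 3                     # small Slater-matrix stand-in; row k changes

A = rng.standard_normal((n, n))
A_inv = np.linalg.inv(A)

# A single-particle move changes one row of A: A' = A + e_k u^T.
u = rng.standard_normal(n)

# Sherman-Morrison determinant ratio (the quantity entering the
# Metropolis acceptance probability), computed in O(n):
ratio = 1.0 + u @ A_inv[:, k]

# If the move is accepted, update the inverse in O(n^2) instead of O(n^3):
A_inv_new = A_inv - np.outer(A_inv[:, k], u @ A_inv) / ratio

# Check against direct recomputation.
A_new = A.copy()
A_new[k, :] += u
print(np.isclose(ratio, np.linalg.det(A_new) / np.linalg.det(A)))   # True
print(np.allclose(A_inv_new, np.linalg.inv(A_new)))                 # True
```

The rank-k variant of the paper defers these inverse updates across k accepted moves and applies them en bloc as matrix-matrix operations, which is where the improved arithmetic intensity comes from.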
Analytical Applications of Monte Carlo Techniques.
ERIC Educational Resources Information Center
Guell, Oscar A.; Holcombe, James A.
1990-01-01
Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
Exploring cluster Monte Carlo updates with Boltzmann machines
NASA Astrophysics Data System (ADS)
Wang, Lei
2017-11-01
Boltzmann machines are physics-informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applied back to physics, Boltzmann machines are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of Boltzmann machines can even give rise to different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
A Monte Carlo Evaluation of Estimated Parameters of Five Shrinkage Estimate Formuli.
ERIC Educational Resources Information Center
Newman, Isadore; And Others
A Monte Carlo study was conducted to estimate the efficiency of and the relationship between five equations and the use of cross validation as methods for estimating shrinkage in multiple correlations. Two of the methods were intended to estimate shrinkage to population values and the other methods were intended to estimate shrinkage from sample…
Stochastic evaluation of second-order many-body perturbation energies.
Willow, Soohaeng Yoo; Kim, Kwang S; Hirata, So
2012-11-28
With the aid of the Laplace transform, the canonical expression of the second-order many-body perturbation correction to an electronic energy is converted into the sum of two 13-dimensional integrals, the 12-dimensional parts of which are evaluated by Monte Carlo integration. Weight functions are identified that are analytically normalizable, are finite and non-negative everywhere, and share the same singularities as the integrands. They thus generate appropriate distributions of four-electron walkers via the Metropolis algorithm, yielding correlation energies of small molecules within a few mE_h of the correct values after 10^8 Monte Carlo steps. This algorithm does away with the integral transformation as the hotspot of the usual algorithms, has a far superior size dependence of cost, does not suffer from the sign problem of some quantum Monte Carlo methods, and is potentially easily parallelizable and extensible to other, more complex electron-correlation theories.
Structural instability in polyacene: A projector quantum Monte Carlo study
NASA Astrophysics Data System (ADS)
Srinivasan, Bhargavi; Ramasesha, S.
1998-04-01
We have studied polyacene within the Hubbard model to explore the effect of electron correlations on the Peierls' instability in a system marginally away from one dimension. We employ the projector quantum Monte Carlo method to obtain ground-state estimates of the energy and various correlation functions. We find strong similarities between polyacene and polyacetylene which can be rationalized from the real-space valence-bond arguments of Mazumdar and Dixit. Electron correlations tend to enhance the Peierls' instability in polyacene. This enhancement appears to attain a maximum at U/t~3.0, and the maximum shifts to larger values when the alternation parameter is increased. The system shows no tendency to destroy the imposed bond-alternation pattern, as evidenced by the bond-bond correlations. The cis distortion is seen to be favored over the trans distortion. The spin-spin correlations show that undistorted polyacene is susceptible to a spin-density-wave distortion for large interaction strength. The charge-charge correlations indicate the absence of a charge-density-wave distortion for the parameters studied.
NASA Astrophysics Data System (ADS)
Adam, J.; Adamová, D.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmad, S.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Alam, S. N.; Albuquerque, D. S. D.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; An, M.; Andrei, C.; Andrews, H. A.; Andronic, A.; Anguelov, V.; Anson, C.; Antičić, T.; Antinori, F.; Antonioli, P.; Anwar, R.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Baldisseri, A.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barioglio, L.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Batista Camejo, A.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Beltran, L. G. E.; Belyaev, V.; Bencedi, G.; Beole, S.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biro, G.; Biswas, R.; Biswas, S.; Blair, J. T.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Boldizsár, L.; Bombara, M.; Bonora, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Botta, E.; Bourjau, C.; Braun-Munzinger, P.; Bregant, M.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buhler, P.; Buitron, S. A. I.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Cabala, J.; Caffarri, D.; Caines, H.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Capon, A. A.; Carena, F.; Carena, W.; Carnesecchi, F.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Ceballos Sanchez, C.; Cerello, P.; Cerkala, J.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chauvin, A.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Cho, S.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Conesa Balbastre, G.; del Valle, Z. Conesa; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crkovská, J.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danisch, M. C.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; De, S.; De Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; De Falco, A.; De Gruttola, D.; De Marco, N.; De Pasquale, S.; De Souza, R. D.; Degenhardt, H. F.; Deisting, A.; Deloff, A.; Deplano, C.; Dhankher, P.; Di Bari, D.; Di Mauro, A.; Di Nezza, P.; Di Ruzza, B.; Corchero, M. A. Diaz; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Drozhzhova, T.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Duggal, A. K.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Endress, E.; Engel, H.; Epple, E.; Erazmus, B.; Erhardt, F.; Espagnon, B.; Esumi, S.; Eulisse, G.; Eum, J.; Evans, D.; Evdokimov, S.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernández Téllez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Francisco, A.; Frankenfeld, U.; Fronze, G. G.; Fuchs, U.; Furget, C.; Furs, A.; Girard, M. Fusco; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gajdosova, K.; Gallio, M.; Galvan, C. D.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Garg, K.; Garg, P.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Ducati, M. B. Gay; Germain, M.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Goméz Coral, D. M.; Gomez Ramirez, A.; Gonzalez, A. S.; Gonzalez, V.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Graczykowski, L. K.; Graham, K. L.; Greiner, L.; Grelli, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grion, N.; Gronefeld, J. M.; Grosa, F.; Grosse-Oetringhaus, J. F.; Grosso, R.; Gruber, L.; Grull, F. R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Gupta, R.; Guzman, I. B.; Haake, R.; Hadjidakis, C.; Hamagaki, H.; Hamar, G.; Hamon, J. C.; Harris, J. W.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Hellbär, E.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Herrmann, F.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hippolyte, B.; Hladky, J.; Horak, D.; Hosokawa, R.; Hristov, P.; Hughes, C.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Inaba, M.; Ippolitov, M.; Irfan, M.; Isakov, V.; Islam, M. S.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacak, B.; Jacazio, N.; Jacobs, P. M.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jahnke, C.; Jakubowska, M. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jercic, M.; Bustamante, R. T. Jimenez; Jones, P. G.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kang, J. H.; Kaplin, V.; Kar, S.; Uysal, A. Karasu; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Mohisin Khan, M.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Khatun, A.; Khuntia, A.; Kielbowicz, M. M.; Kileng, B.; Kim, D. W.; Kim, D. J.; Kim, D.; Kim, H.; Kim, J. S.; Kim, J.; Kim, M.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Meethaleveedu, G. Koyithatta; Králik, I.; Kravčáková, A.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kuhn, C.; Kuijer, P. G.; Kumar, A.; Kumar, J.; Kumar, L.; Kumar, S.; Kundu, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kushpil, S.; Kweon, M. J.; Kwon, Y.; La Pointe, S. 
L.; La Rocca, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lapidus, K.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lavicka, R.; Lazaridis, L.; Lea, R.; Leardini, L.; Lee, S.; Lehas, F.; Lehner, S.; Lehrbach, J.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; León Monzón, I.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Litichevskyi, V.; Ljunggren, H. M.; Llope, W. J.; Lodato, D. F.; Loenne, P. I.; Loginov, V.; Loizides, C.; Loncar, P.; Lopez, X.; Torres, E. López; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Lupi, M.; Lutz, T. H.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Cervantes, I. Maldonado; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Mao, Y.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martin, N. A.; Martinengo, P.; Martínez, M. I.; Martínez García, G.; Pedreira, M. Martinez; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Mastroserio, A.; Mathis, A. M.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzilli, M.; Mazzoni, M. A.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Mhlanga, S.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Mischke, A.; Mishra, A. N.; Mishra, T.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montes, E.; De Godoy, D. A. Moreira; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Münning, K.; Munzer, R. H.; Murakami, H.; Murray, S.; Musa, L.; Musinsky, J.; Myers, C. J.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Natal da Luz, H.; Nattrass, C.; Navarro, S. R.; Nayak, K.; Nayak, R.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Negrao De Oliveira, R. A.; Nellen, L.; Nesbo, S. V.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Ohlson, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira Da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Oravec, M.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pacik, V.; Pagano, D.; Pagano, P.; Paić, G.; Pal, S. K.; Palni, P.; Pan, J.; Pandey, A. K.; Panebianco, S.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, J.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Patra, R. N.; Paul, B.; Pei, H.; Peitzmann, T.; Peng, X.; Pereira, L. G.; Pereira Da Costa, H.; Peresunko, D.; Perez Lezama, E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Pezzi, R. P.; Piano, S.; Pikna, M.; Pillot, P.; Pimentel, L. O. D. L.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Poppenborg, H.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Pozdniakov, V.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Rana, D. B.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. 
T.; Rathee, D.; Ratza, V.; Ravasenga, I.; Read, K. F.; Redlich, K.; Rehman, A.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rodríguez Cahuantzi, M.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rubio Montero, A. J.; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Saarinen, S.; Sadhu, S.; Sadovsky, S.; Šafařík, K.; Sahlmuller, B.; Sahoo, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sandoval, A.; Sarkar, D.; Sarkar, N.; Sarma, P.; Sas, M. H. P.; Scapparone, E.; Scarlassara, F.; Scharenberg, R. P.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schmidt, M. O.; Schmidt, M.; Schukraft, J.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Šefčík, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sett, P.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shangaraev, A.; Sharma, A.; Sharma, A.; Sharma, M.; Sharma, M.; Sharma, N.; Sheikh, A. I.; Shigaki, K.; Shou, Q.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singhal, V.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Song, J.; Song, M.; Soramel, F.; Sorensen, S.; Sozzi, F.; Spiriti, E.; Sputowska, I.; Srivastava, B. K.; Stachel, J.; Stan, I.; Stankus, P.; Stenlund, E.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Šumbera, M.; Sumowidagdo, S.; Suzuki, K.; Swain, S.; Szabo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Muñoz, G. Tejeda; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thakur, D.; Thomas, D.; Tieulent, R.; Tikhonov, A.; Timmins, A. R.; Toia, A.; Tripathy, S.; Trogolo, S.; Trombetta, G.; Trubnikov, V.; Trzaska, W. H.; Trzeciak, B. A.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Umaka, E. N.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vázquez Doce, O.; Vechernin, V.; Veen, A. M.; Velure, A.; Vercellin, E.; Limón, S. Vergara; Vernet, R.; Vértesi, R.; Vickovic, L.; Vigolo, S.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Virgili, T.; Vislavicius, V.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Voscek, D.; Vranic, D.; Vrláková, J.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Watanabe, D.; Watanabe, Y.; Weber, M.; Weber, S. G.; Weiser, D. F.; Wessels, J. P.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Willems, G. A.; Williams, M. C. S.; Windelband, B.; Witt, W. E.; Yalcin, S.; Yang, P.; Yano, S.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yoon, J. H.; Yurchenko, V.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. 
C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhang, C.; Zhang, Z.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zhu, X.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zimmermann, S.; Zinovjev, G.; Zmeskal, J.
2017-08-01
Two-particle angular correlations were measured in pp collisions at √s = 7 TeV for pions, kaons, protons, and lambdas, for all particle/anti-particle combinations in the pair. Data for mesons exhibit an expected peak dominated by effects associated with mini-jets and are well reproduced by general purpose Monte Carlo generators. However, for baryon-baryon and anti-baryon-anti-baryon pairs, where both particles have the same baryon number, a near-side anti-correlation structure is observed instead of a peak. This effect is interpreted in the context of baryon production mechanisms in the fragmentation process. It currently presents a challenge to Monte Carlo models and its origin remains an open question.
Frequency-resolved Monte Carlo.
López Carreño, Juan Camilo; Del Valle, Elena; Laussy, Fabrice P
2018-05-03
We adapt the Quantum Monte Carlo method to the cascaded formalism of quantum optics, allowing us to simulate the emission of photons of known energy. Statistical processing of the photon clicks thus collected agrees with the theory of frequency-resolved photon correlations, extending the range of applications based on correlations of photons of prescribed energy, in particular those of a photon-counting character. We apply the technique to autocorrelations of photon streams from a two-level system under coherent and incoherent pumping, including the Mollow triplet regime where we demonstrate the direct manifestation of leapfrog processes in producing an increased rate of two-photon emission events.
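The record above builds on the Monte Carlo wavefunction (quantum-jump) technique. As a rough illustration of that underlying machinery, here is a minimal quantum-jump simulation of a resonantly driven two-level emitter; it omits the cascaded filter modes that give the authors' method its frequency resolution, and all rates, step sizes, and the coincidence window are illustrative assumptions.

```python
import numpy as np

# Minimal quantum-jump unraveling of a resonantly driven two-level emitter.
# Photon "clicks" are the recorded jump times; frequency resolution (as in
# the paper) would additionally require cascading into sensor/filter modes.
rng = np.random.default_rng(0)
gamma, omega = 1.0, 3.0          # decay rate and Rabi frequency (illustrative)
dt, t_max = 0.005, 1000.0

# Effective non-Hermitian Hamiltonian H_eff = (omega/2) sigma_x - i(gamma/2)|e><e|
H = 0.5 * omega * np.array([[0, 1], [1, 0]], dtype=complex)
H_eff = H - 0.5j * gamma * np.diag([0.0, 1.0])
U = np.eye(2) - 1j * H_eff * dt  # first-order no-jump propagator per step

psi = np.array([1.0, 0.0], dtype=complex)  # start in the ground state
clicks, t = [], 0.0
while t < t_max:
    if rng.random() < gamma * dt * abs(psi[1]) ** 2:
        clicks.append(t)                       # photon emitted: record, reset
        psi = np.array([1.0, 0.0], dtype=complex)
    else:                                      # no-click evolution, renormalize
        psi = U @ psi
        psi /= np.linalg.norm(psi)
    t += dt

clicks = np.array(clicks)
rate = len(clicks) / t_max
tau = 0.1                                      # short coincidence window
pairs = sum(np.searchsorted(clicks, c + tau) - i - 1
            for i, c in enumerate(clicks))
g2_0 = pairs / (len(clicks) * rate * tau)      # crude g2(0) estimate
print(f"emission rate {rate:.3f}, g2(0) ~ {g2_0:.2f} (antibunched if << 1)")
```

For a single two-level emitter the clicks should come out strongly antibunched, which is the kind of photon-counting statistic the frequency-resolved extension generalizes.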
Monte Carlo study of disorder in HMTA
NASA Astrophysics Data System (ADS)
Goossens, D. J.; Welberry, T. R.
2001-12-01
We investigate disordered solids by automated fitting of a Monte Carlo simulation of a crystal to observed single-crystal diffuse X-ray scattering. This method has been extended to the study of crystals of relatively large organic molecules by using a z-matrix to describe the molecules. This allows exploration of motions within molecules. We refer to the correlated thermal motion observed in benzil, and to the occupational and thermal disorder in the 1:1 adduct of hexamethylenetetramine and azelaic acid, HMTA. The technique is capable of giving insight into modes of vibration within molecules and correlated motions between molecules.
Maiti, Saumen; Erram, V C; Gupta, Gautam; Tiwari, Ram Krishna; Kulkarni, U D; Sangpal, R R
2013-04-01
Deplorable quality of groundwater arising from saltwater intrusion, natural leaching, and anthropogenic activities is one of the major concerns for society. Assessment of groundwater quality is, therefore, a primary objective of scientific research. Here, we propose an artificial neural network-based method set in a Bayesian neural network (BNN) framework and employ it to assess groundwater quality. The approach is based on analyzing 36 water samples and inverting up to 85 Schlumberger vertical electrical sounding data sets. We constructed an a priori model by suitably parameterizing geochemical and geophysical data collected from the western part of India. The posterior model (post-inversion) was estimated using the BNN learning procedure and a global hybrid Monte Carlo/Markov chain Monte Carlo optimization scheme. By suitable parameterization of geochemical and geophysical parameters, we simulated 1,500 training samples, of which 50% were used for training and the remaining 50% for validation and testing. We show that the trained model is able to classify validation and test samples with 85% and 80% accuracy, respectively. Based on cross-correlation analysis and the Gibbs diagram of geochemical attributes, the groundwater of the study area was classified into the following three categories: "Very good", "Good", and "Unsuitable". The BNN model-based results suggest that groundwater quality falls mostly in the range of "Good" to "Very good", except for some places near the Arabian Sea. The new modeling results, supported by uncertainty and statistical analyses, provide useful constraints that could be utilized in monitoring and assessment of groundwater quality.
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Groh, Kim De; Kneubel, Christian A.
2014-01-01
A space experiment flown as part of the Materials International Space Station Experiment 6B (MISSE 6B) was designed to compare the atomic oxygen erosion yield (Ey) of layers of Kapton H polyimide with no spacers between layers with that of layers of Kapton H with spacers between layers. The results were compared to a solid Kapton H (DuPont, Wilmington, DE) sample. Monte Carlo computational modeling was performed to optimize atomic oxygen interaction parameter values to match the results of both the MISSE 6B multilayer experiment and the undercut erosion profile from a crack defect in an aluminized Kapton H sample flown on the Long Duration Exposure Facility (LDEF). The Monte Carlo modeling produced credible agreement with space results of increased Ey for all samples with spacers as well as predicting the space-observed enhancement in erosion near the edges of samples due to scattering from the beveled edges of the sample holders.
Monte Carlo simulation of energy-dispersive x-ray fluorescence and applications
NASA Astrophysics Data System (ADS)
Li, Fusheng
Four key components with regard to Monte Carlo Library Least-Squares (MCLLS) have been developed by the author. These include: a comprehensive and accurate Monte Carlo simulation code, CEARXRF5, with Differential Operators (DO) and coincidence sampling; a Detector Response Function (DRF); an integrated Monte Carlo - Library Least-Squares (MCLLS) Graphical User Interface (GUI) visualization system (MCLLSPro); and a new reproducible and flexible benchmark experiment setup. All these developments and upgrades enable the MCLLS approach to be a useful and powerful tool for a tremendous variety of elemental analysis applications. CEARXRF, a comprehensive and accurate Monte Carlo code for simulating the total and individual library spectral responses of all elements, has recently been upgraded to version 5 by the author. The new version has several key improvements: an input file format fully compatible with MCNP5, a new efficient general geometry tracking code, versatile source definitions, various variance reduction techniques (e.g., weight window mesh and splitting, stratified sampling, etc.), a new cross section data storage and access method which improves the simulation speed by a factor of four, new cross section data, upgraded differential operator (DO) calculation capability, and an updated coincidence sampling scheme which includes K-L and L-L coincidence X-rays, while keeping all the capabilities of the previous version. The new Differential Operators method is powerful for measurement sensitivity studies and system optimization. For our Monte Carlo EDXRF elemental analysis system, it becomes an important technique for quantifying the matrix effect in near real time when combined with the MCLLS approach. An integrated visualization GUI system has been developed by the author to perform elemental analysis using the iterated Library Least-Squares method for various samples when an initial guess is provided. This software was built on the Borland C++ Builder platform and has a user-friendly interface to accomplish all qualitative and quantitative tasks easily. That is to say, the software enables users to run the forward Monte Carlo simulation (if necessary) or use previously calculated Monte Carlo library spectra to obtain the sample elemental composition estimate within a minute. The GUI software can accomplish all related tasks in a visualization environment and can be a powerful tool for EDXRF analysts. A reproducible experiment setup has been built and experiments have been performed to benchmark the system. Two types of Standard Reference Materials (SRM), stainless steel samples from the National Institute of Standards and Technology (NIST) and aluminum alloy samples from Alcoa Inc., with certified elemental compositions, were tested with this reproducible prototype system using a 109Cd radioisotope source (20 mCi) and a liquid-nitrogen-cooled Si(Li) detector. The results show excellent agreement between the calculated sample compositions and their reference values, and the approach is very fast.
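The library least-squares step at the heart of MCLLS can be illustrated with a non-negative least-squares fit of a measured spectrum to per-element library spectra. The sketch below uses synthetic Gaussian peaks standing in for Monte Carlo-simulated libraries; it is not the CEARXRF5 code, and all peak positions and weights are invented.

```python
import numpy as np
from scipy.optimize import nnls

# Library least-squares sketch: a measured spectrum is modeled as a
# non-negative combination of per-element library spectra.
channels = np.arange(1024)

def peak(center, width, height=1.0):
    return height * np.exp(-0.5 * ((channels - center) / width) ** 2)

library = np.column_stack([
    peak(200, 8),    # "element A" library spectrum (synthetic stand-in)
    peak(430, 10),   # "element B"
    peak(700, 12),   # "element C"
])

true_weights = np.array([5.0, 2.0, 0.7])
rng = np.random.default_rng(1)
measured = rng.poisson(library @ true_weights * 100) / 100.0  # counting noise

weights, residual = nnls(library, measured)   # non-negative least squares
print("fitted weights:", np.round(weights, 3), " residual:", round(residual, 3))
```

In the real system the columns of the library matrix come from forward Monte Carlo simulation, which is what lets the fit account for matrix effects.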
Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.
Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver
2018-02-15
Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns across different samples can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are: sampling variations, the presence of outlying sample units, and the fact that in most cases the number of units is much smaller than the number of genes. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method is capable of remarkable performance. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression datasets. It is also shown how the regularization allows spurious correlations to be automatically detected and filtered. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm on the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R software is available at https://github.com/angy89/RobustSparseCorrelation. aserra@unisa.it or robtag@unisa.it. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
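A minimal sketch of correlation-matrix regularization by thresholding follows, using a universal threshold of order sqrt(log p / n). The published estimator is additionally robustified and adaptive, so the tuning constant delta below is an assumption for illustration only.

```python
import numpy as np

def thresholded_correlation(X, delta=1.5):
    """Sketch of a thresholded correlation estimate: entries of the sample
    correlation matrix smaller in magnitude than a threshold of order
    sqrt(log p / n) are zeroed. The paper's estimator is also robustified;
    delta here is an assumed tuning constant."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    lam = delta * np.sqrt(np.log(p) / n)
    R_thr = np.where(np.abs(R) >= lam, R, 0.0)
    np.fill_diagonal(R_thr, 1.0)
    return R_thr

# Toy check: two correlated variables hidden among many independent ones.
rng = np.random.default_rng(2)
n, p = 100, 50
X = rng.standard_normal((n, p))
X[:, 1] = 0.8 * X[:, 0] + 0.6 * rng.standard_normal(n)

R = thresholded_correlation(X)
print("kept off-diagonal entries:", np.count_nonzero(R) - p)
```

With these sizes the threshold is roughly 0.3, so the spurious sample correlations (typical size 1/sqrt(n) = 0.1) are filtered while the planted 0.8 correlation survives.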
Rodrigues-Filho, J L; Abe, D S; Gatti-Junior, P; Medeiros, G R; Degani, R M; Blanco, F P; Faria, C R L; Campanelli, L; Soares, F S; Sidagis-Galli, C V; Teixeira-Silva, V; Tundisi, J E M; Matsmura-Tundisi, T; Tundisi, J G
2015-08-01
The Xingu River, one of the most important of the Amazon Basin, is characterized by clear and transparent waters that drain a 509,685 km² watershed with distinct hydrological and ecological conditions and anthropogenic pressures along its course. As in other basins of the Amazon system, studies in the Xingu are scarce. Furthermore, the imminent construction of the Belo Monte dam for hydropower production, which will alter the environmental conditions in the lower middle portion of the basin, underscores the importance of studies that generate relevant information to subsidize a more balanced and equitable development of the Amazon region. Thus, the aim of this study was to analyze the water quality of the Xingu River and its tributaries, focusing on spatial patterns through the use of multivariate statistical techniques and identifying which water quality parameters were most important for the environmental changes in the watershed. Data sampling was carried out during two complete hydrological cycles at twenty-five sampling stations. Data on twenty-seven variables were analyzed by Spearman's correlation coefficients, cluster analysis (CA), and principal component analysis (PCA). The results showed high auto-correlations between some variables (> 0.7); these variables were removed from the multivariate analyses because they provided redundant information about the environment. The CA resulted in the formation of six clusters, which were clearly observed in the PCA and were characterized by different water quality. The statistical results allowed the identification of high spatial variation in water quality, related to specific features of the environment, different uses, influences of anthropogenic activities, and geochemical characteristics of the drained basins. It was also demonstrated that most of the sampling stations in the Xingu River basin showed good water quality, due to the absence of local impacts and the high self-depuration capacity of the river itself.
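The multivariate workflow described above (redundancy filtering by rank correlation, cluster analysis, PCA) can be sketched as follows. The data, the greedy filtering rule, and the Ward linkage are stand-ins chosen for illustration, not the study's measurements or exact procedure.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

# Sketch: drop redundant variables with |Spearman rho| > 0.7, cluster the
# sampling stations, and inspect them in principal-component space.
rng = np.random.default_rng(3)
data = rng.standard_normal((25, 27))            # 25 stations x 27 variables

rho, _ = spearmanr(data)                        # 27 x 27 rank correlations
keep = []
for j in range(data.shape[1]):                  # greedy redundancy filter
    if all(abs(rho[j, k]) <= 0.7 for k in keep):
        keep.append(j)
X = data[:, keep]

Z = linkage(X, method="ward")                   # cluster analysis (CA)
labels = fcluster(Z, t=6, criterion="maxclust") # six clusters, as in the study

scores = PCA(n_components=2).fit_transform(X)   # PCA for visual inspection
for c in np.unique(labels):
    print(f"cluster {c}: stations {np.where(labels == c)[0].tolist()}")
```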
Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E
2018-03-14
We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.
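The modified potential in this approach is evaluated with standard imaginary-time path-integral sampling. The following minimal path-integral Monte Carlo for a 1D harmonic oscillator shows that kind of sampling in isolation; it is not the authors' open-chain method, and all parameters are illustrative.

```python
import numpy as np

# Minimal imaginary-time path-integral Monte Carlo for a 1D harmonic
# oscillator (m = hbar = omega = 1): Metropolis moves on a ring polymer of
# P beads at inverse temperature beta; estimates <x^2> (exact value ~0.5
# at beta = 10, up to O(tau^2) discretization error).
rng = np.random.default_rng(4)
P, beta, steps, width = 32, 10.0, 200_000, 0.5
tau = beta / P
x = np.zeros(P)

def action_local(x, j):
    """Terms of the discretized action that involve bead j."""
    left, right = x[(j - 1) % P], x[(j + 1) % P]
    kinetic = ((x[j] - left) ** 2 + (right - x[j]) ** 2) / (2 * tau)
    return kinetic + tau * 0.5 * x[j] ** 2

acc, samples = 0, []
for step in range(steps):
    j = rng.integers(P)
    old, s_old = x[j], action_local(x, j)
    x[j] = old + width * (rng.random() - 0.5)
    if rng.random() < np.exp(s_old - action_local(x, j)):
        acc += 1
    else:
        x[j] = old
    if step > steps // 10 and step % 50 == 0:
        samples.append(np.mean(x ** 2))

print(f"<x^2> = {np.mean(samples):.3f}, acceptance {acc / steps:.2f}")
```

Any of the samplers named in the abstract (Monte Carlo, path-integral molecular dynamics, or enhanced variants) could replace the single-bead Metropolis moves used here.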
The mean and variance of phylogenetic diversity under rarefaction.
Nipperess, David A; Matsen, Frederick A
2013-06-01
Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time, but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solution for the mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparison of samples of different depth is required.
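The classical exact rarefaction mean for species richness, E[S_n] = sum_i [1 - C(N - N_i, n) / C(N, n)], can be checked directly against Monte Carlo subsampling; the paper's PD formulae apply the same inclusion probabilities to branches of a phylogeny. A sketch with a toy abundance vector:

```python
import numpy as np
from scipy.special import gammaln

def log_comb(n, k):
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def mean_richness_rarefied(counts, n):
    """Exact expected species richness in a random subsample of size n."""
    counts = np.asarray(counts)
    N = counts.sum()
    with np.errstate(invalid="ignore"):
        log_p_absent = log_comb(N - counts, n) - log_comb(N, n)
    # a species cannot be entirely absent unless N - N_i >= n
    p_absent = np.where(N - counts >= n, np.exp(log_p_absent), 0.0)
    return float(np.sum(1.0 - p_absent))

counts = np.array([50, 20, 10, 5, 2, 1, 1, 1])   # toy abundance vector
n = 25
exact = mean_richness_rarefied(counts, n)

# Monte Carlo check by repeated random subsampling without replacement
rng = np.random.default_rng(5)
pool = np.repeat(np.arange(len(counts)), counts)
mc = np.mean([len(np.unique(rng.choice(pool, n, replace=False)))
              for _ in range(5000)])
print(f"exact {exact:.3f} vs Monte Carlo {mc:.3f}")
```

As the abstract notes, the exact formula is both faster and free of the sampling noise that makes the Monte Carlo variance estimate slow to converge.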
Monte Carlo sampling of Wigner functions and surface hopping quantum dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kube, Susanna; Lasser, Caroline; Weber, Marcus
2009-04-01
The article addresses the achievable accuracy of a Monte Carlo sampling of Wigner functions in combination with a surface hopping algorithm for non-adiabatic quantum dynamics. The approximation of Wigner functions is realized by an adaptation of the Metropolis algorithm for real-valued functions with disconnected support. The integration, which is necessary for computing values of the Wigner function, uses importance sampling with a Gaussian weight function. The numerical experiments agree with theoretical considerations and show an error of 2-3%.
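Metropolis sampling of a real-valued function with sign changes is typically handled by walking on the absolute value and carrying the sign as a weight, so that averages become ratio estimates. A generic sketch for the n = 1 harmonic-oscillator Wigner function, which is negative near the origin (this illustrates the sign handling only, not the authors' Gaussian importance-sampling scheme):

```python
import numpy as np

# Sample |W| by Metropolis and carry sign(W) as a weight:
#   <A> = <A * sign>_|W| / <sign>_|W|.
# W is the Wigner function of the n = 1 harmonic-oscillator state
# (units hbar = m = omega = 1); exact <x^2> for this state is 1.5.
def W(x, p):
    r2 = x * x + p * p
    return (2.0 * r2 - 1.0) * np.exp(-r2) / np.pi

rng = np.random.default_rng(6)
z = np.array([1.0, 0.0])
sum_sign, sum_x2s, n_steps = 0.0, 0.0, 200_000
for _ in range(n_steps):
    prop = z + 0.8 * rng.standard_normal(2)
    if rng.random() < abs(W(*prop)) / abs(W(*z)):
        z = prop
    s = np.sign(W(*z))
    sum_sign += s
    sum_x2s += s * z[0] ** 2

print(f"<x^2> = {sum_x2s / sum_sign:.3f} (exact 1.5 for the n=1 state)")
```

The ratio structure is what makes sign-changing quasi-distributions more expensive to sample than true densities: the denominator shrinks as the negative regions grow.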
Quantum Monte Carlo Endstation for Petascale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lubos Mitas
2011-01-26
The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as a part of the Endstation petaflop initiative, for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining, and enhancing the impact of these advanced computational approaches. In particular, we have developed a quantum Monte Carlo code (QWalk, www.qwalk.org) which was significantly expanded and optimized using funds from this support and at present has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the period of the last decade. The code was optimized and tested extensively on a number of parallel architectures, including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules, such as evaluation of wave functions and orbitals, calculation of pfaffians, and introduction of backflow coordinates, together with the overall organization of the code and the distribution of random walkers over multicore architectures. We have addressed several bottlenecks, such as load balancing, and verified efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal, and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the period of the grant duration; it has resulted in 13 published papers and 15 invited talks and lectures nationally and internationally. My former graduate student and postdoc Dr. Michal Bajdich, who was supported by this grant, is currently a postdoc with ORNL in the group of Dr. F. Reboredo and Dr. P. Kent and is using the developed tools in a number of DOE projects. The QWalk package has become a truly important research tool used by the electronic structure community and has attracted several new developers in other research groups. Our tools use several types of correlated wavefunction approaches (variational, diffusion, and reptation methods) and large-scale optimization methods for wavefunctions, and enable the calculation of energy differences such as cohesion energies and electronic gaps, as well as densities and other properties; using multiple runs one can obtain equations of state for given structures and beyond. Our codes use efficient numerical and Monte Carlo strategies (high-accuracy numerical orbitals, multi-reference wave functions, highly accurate correlation factors, pairing orbitals, force-biased and correlated sampling Monte Carlo), are robustly parallelized, and run on tens of thousands of cores very efficiently. Our demonstration applications were focused on challenging research problems in several fields of materials science, such as transition metal solids. We note that our study of FeO solid was the first QMC calculation of transition metal oxides at high pressures.
Generalized correlation integral vectors: A distance concept for chaotic dynamical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haario, Heikki, E-mail: heikki.haario@lut.fi; Kalachev, Leonid, E-mail: KalachevL@mso.umt.edu; Hakkarainen, Janne
2015-06-15
Several concepts of fractal dimension have been developed to characterise properties of attractors of chaotic dynamical systems. Numerical approximations of them must be calculated from finite samples of simulated trajectories. In principle, the quantities should not depend on the choice of the trajectory, as long as it provides properly distributed samples of the underlying attractor. In practice, however, the trajectories are sensitive to varying initial values, small changes of the model parameters, the choice of a solver, numeric tolerances, etc. The purpose of this paper is to present a statistically sound approach to quantify this variability. We modify the concept of the correlation integral to produce a vector that summarises the variability at all selected scales. The distribution of this stochastic vector can be estimated, and it provides a statistical distance concept between trajectories. Here, we demonstrate the use of the distance for the purpose of estimating model parameters of a chaotic dynamic model. The methodology is illustrated using computational examples for the Lorenz 63 and Lorenz 95 systems, together with a framework for Markov chain Monte Carlo sampling to produce posterior distributions of model parameters.
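The basic ingredient is a correlation-integral vector: C(r), the fraction of state pairs closer than r, evaluated at several radii along a trajectory. A sketch for Lorenz 63 follows; repeating it over trajectories with perturbed initial values or parameters yields the distribution behind the distance concept. The explicit Euler step, step size, and radii are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist

# Correlation-integral vector for a Lorenz 63 trajectory:
# C(r) = fraction of state pairs with distance < r, at selected radii.
def lorenz63_trajectory(n, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x = np.array([1.0, 1.0, 1.0])
    out = np.empty((n, 3))
    for i in range(n):                      # simple explicit Euler integration
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        out[i] = x
    return out

traj = lorenz63_trajectory(40_000)[4000::20]   # discard transient, thin
dists = pdist(traj)                            # all pairwise distances
radii = np.logspace(-0.5, 1.5, 10)             # selected scales
C = np.array([(dists < r).mean() for r in radii])
print(np.round(C, 4))
```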
Aguirre-Urreta, Miguel I; Ellis, Michael E; Sun, Wenying
2012-03-01
This research investigates the performance of a proportion-based approach to meta-analytic moderator estimation through a series of Monte Carlo simulations. This approach is most useful when the moderating potential of a categorical variable has not been recognized in primary research and thus heterogeneous groups have been pooled together as a single sample. Alternative scenarios representing different distributions of group proportions are examined along with varying numbers of studies, subjects per study, and correlation combinations. Our results suggest that the approach is largely unbiased in its estimation of the magnitude of between-group differences and performs well with regard to statistical power and type I error. In particular, the average percentage bias of the estimated correlation for the reference group is positive and largely negligible, in the 0.5% to 1.8% range; the average percentage bias of the difference between correlations is also minimal, in the -0.1% to 1.2% range. Further analysis suggests both biases decrease as the magnitude of the underlying difference increases, as the number of subjects in each simulated primary study increases, and as the number of simulated studies in each meta-analysis increases. The bias was most evident when the number of subjects and the number of studies were smallest (80 and 36, respectively). A sensitivity analysis that examines performance in scenarios down to 12 studies and 40 primary subjects is also included. This research is the first to thoroughly examine the adequacy of the proportion-based approach. Copyright © 2012 John Wiley & Sons, Ltd.
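The proportion-based idea can be illustrated by simulating primary studies that each pool two latent groups and then meta-regressing study-level Fisher-z effect sizes on the group proportion; the linearity in z-space is only approximate. All values below are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np
from scipy.stats import pearsonr

# Each simulated study pools group A (true r_a) and group B (true r_b) in a
# study-specific proportion; regressing Fisher-z effect sizes on the
# proportion recovers both correlations (intercept -> B, intercept+slope -> A).
rng = np.random.default_rng(7)
r_a, r_b, k, n = 0.5, 0.2, 36, 80        # true correlations, studies, n/study

def draw(r, m):
    """m bivariate normal observations with correlation r."""
    x = rng.standard_normal(m)
    y = r * x + np.sqrt(1 - r * r) * rng.standard_normal(m)
    return np.column_stack([x, y])

props, zs = [], []
for _ in range(k):
    p = rng.uniform(0.1, 0.9)            # proportion of group A in this study
    n_a = int(round(p * n))
    sample = np.vstack([draw(r_a, n_a), draw(r_b, n - n_a)])
    r_obs, _ = pearsonr(sample[:, 0], sample[:, 1])
    props.append(n_a / n)
    zs.append(np.arctanh(r_obs))

slope, intercept = np.polyfit(props, zs, 1)
print(f"group B (p=0): r = {np.tanh(intercept):.3f} (true {r_b})")
print(f"group A (p=1): r = {np.tanh(intercept + slope):.3f} (true {r_a})")
```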
Liquid Water from First Principles: Validation of Different Sampling Approaches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mundy, C J; Kuo, W; Siepmann, J
2004-05-20
A series of first principles molecular dynamics and Monte Carlo simulations were carried out for liquid water to assess the validity and reproducibility of different sampling approaches. These simulations include Car-Parrinello molecular dynamics simulations using the program CPMD with different values of the fictitious electron mass in the microcanonical and canonical ensembles, Born-Oppenheimer molecular dynamics using the programs CPMD and CP2K in the microcanonical ensemble, and Metropolis Monte Carlo using CP2K in the canonical ensemble. With the exception of one simulation for 128 water molecules, all other simulations were carried out for systems consisting of 64 molecules. It is found that the structural and thermodynamic properties of these simulations are in excellent agreement with each other as long as adiabatic sampling is maintained in the Car-Parrinello molecular dynamics simulations, either by choosing a sufficiently small fictitious mass in the microcanonical ensemble or by Nosé-Hoover thermostats in the canonical ensemble. Using the Becke-Lee-Yang-Parr exchange and correlation energy functionals and norm-conserving Troullier-Martins or Goedecker-Teter-Hutter pseudopotentials, simulations at a fixed density of 1.0 g/cm³ and a temperature close to 315 K yield a height of the first peak in the oxygen-oxygen radial distribution function of about 3.0, a classical constant-volume heat capacity of about 70 J K⁻¹ mol⁻¹, and a self-diffusion constant of about 0.1 Å²/ps.
NASA Astrophysics Data System (ADS)
Yung, Lai Chin; Fei, Cheong Choke; Mandeep, Jit Singh; Amin, Nowshad; Lai, Khin Wee
2015-11-01
The leadframe fabrication process normally involves plating an additional thin metal layer on the bulk copper substrate surface for wire bonding purposes. Silver, tin, and copper flakes are commonly adopted as plating materials. It is critical to assess the density of the plated metal layer, and in particular to look for porosity or voids underneath the layer, which may reduce reliability during high-temperature stress. A fast, reliable inspection technique is needed to assess the porosity or void weakness. To this end, the characteristics of x-rays generated from bulk samples were examined using an energy-dispersive x-ray (EDX) detector to determine the porosity percentage. Monte Carlo modeling was integrated with Castaing's formula to verify the integrity of the experimental data. Samples with different porosity percentages were considered to test the correlation between the intensity of the collected x-ray signal and the material density. To further verify the integrity of the model, conventional cross-sectional samples were also taken to measure the porosity percentage using ImageJ software. A breakthrough in bulk substrate assessment was achieved by applying EDX for the first time to nonelemental analysis. The experimental data showed that EDX is not only useful for elemental analysis, but also applicable to thin-film metal layer thickness measurement and bulk material density determination. A detailed experiment was conducted using EDX to assess the plated metal layer and bulk material porosity.
Confidence intervals for correlations when data are not normal.
Bishara, Anthony J; Hittner, James B
2017-02-01
With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, the rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval, for example leading to a 95 % confidence interval that had actual coverage as low as 68 %. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
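One of the robust alternatives, the RIN transformation followed by the usual Fisher z' interval, is short enough to sketch (Blom offsets are assumed for the rank mapping; the paper's own R code may differ in details):

```python
import numpy as np
from scipy.stats import rankdata, norm

def rin(x):
    """Rank-based inverse normal transform (Blom offsets assumed)."""
    r = rankdata(x)
    return norm.ppf((r - 0.375) / (len(x) + 0.25))

def rin_correlation_ci(x, y, alpha=0.05):
    """Fisher z' interval computed on RIN-transformed data."""
    xr, yr = rin(x), rin(y)
    n = len(x)
    r = np.corrcoef(xr, yr)[0, 1]
    z = np.arctanh(r)
    half = norm.ppf(1 - alpha / 2) / np.sqrt(n - 3)
    return r, np.tanh(z - half), np.tanh(z + half)

# Heavily skewed toy data, the case where the plain Fisher z' interval fails.
rng = np.random.default_rng(8)
x = rng.exponential(size=200)
y = x + rng.exponential(size=200)
r, lo, hi = rin_correlation_ci(x, y)
print(f"RIN r = {r:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Note that, like the Spearman approaches, the interval then describes association on the transformed scale rather than the raw Pearson correlation.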
Towards ab initio Calculations with the Dynamical Vertex Approximation
NASA Astrophysics Data System (ADS)
Galler, Anna; Kaufmann, Josef; Gunacker, Patrik; Pickem, Matthias; Thunström, Patrik; Tomczak, Jan M.; Held, Karsten
2018-04-01
While key effects of the many-body problem — such as Kondo and Mott physics — can be understood in terms of on-site correlations, non-local fluctuations of charge, spin, and pairing amplitudes are at the heart of the most fascinating and unresolved phenomena in condensed matter physics. Here, we review recent progress in diagrammatic extensions to dynamical mean-field theory for ab initio materials calculations. We first recapitulate the quantum field theoretical background behind the two-particle vertex. Next we discuss latest algorithmic advances in quantum Monte Carlo simulations for calculating such two-particle quantities using worm sampling and vertex asymptotics, before giving an introduction to the ab initio dynamical vertex approximation (AbinitioDΓA). Finally, we highlight the potential of AbinitioDΓA by detailing results for the prototypical correlated metal SrVO3.
A Monte Carlo risk assessment model for acrylamide formation in French fries.
Cummins, Enda; Butler, Francis; Gormley, Ronan; Brunton, Nigel
2009-10-01
The objective of this study is to estimate the likely human exposure to the Group 2A carcinogen acrylamide from French fries by Irish consumers, by developing a quantitative risk assessment model using Monte Carlo simulation techniques. Various stages in the French-fry-making process were modeled from initial potato harvest, storage, and processing procedures. The model was developed in Microsoft Excel with the @Risk add-on package. The model was run for 10,000 iterations using Latin hypercube sampling. The simulated mean acrylamide level in French fries was calculated to be 317 microg/kg. It was found that females are exposed to smaller levels of acrylamide than males (mean exposure of 0.20 microg/kg bw/day and 0.27 microg/kg bw/day, respectively). Although the carcinogenic potency of acrylamide is not well known, the simulated probability of exceeding the average chronic human dietary intake of 1 microg/kg bw/day (as suggested by WHO) was 0.054 and 0.029 for males and females, respectively. A sensitivity analysis highlighted the importance of the selection of appropriate cultivars with known low reducing sugar levels for French fry production. Strict control of cooking conditions (correlation coefficients of 0.42 and 0.35 for frying time and temperature, respectively) and blanching procedures (correlation coefficient -0.25) were also found to be important in ensuring minimal acrylamide formation.
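The structure of such a model, stochastic inputs drawn by Latin hypercube sampling and pushed through an exposure equation, can be sketched as follows. The distributions and parameter values are invented stand-ins, not the paper's fitted inputs, and the process-chain detail (cultivar, blanching, frying) is collapsed into a single concentration distribution.

```python
import numpy as np
from scipy.stats import qmc, norm, lognorm

# Latin hypercube exposure sketch: uniform LHS draws are mapped through
# assumed input distributions, then combined into an exposure estimate.
n = 10_000
lhs = qmc.LatinHypercube(d=3, seed=9).random(n)   # uniforms in [0, 1)^3

acrylamide = lognorm.ppf(lhs[:, 0], s=0.8, scale=300)   # ug/kg in fries (assumed)
portion = lognorm.ppf(lhs[:, 1], s=0.5, scale=0.15)     # kg/day consumed (assumed)
bodyweight = norm.ppf(lhs[:, 2], loc=75, scale=12)      # kg (assumed)

exposure = acrylamide * portion / bodyweight            # ug/kg bw/day
print(f"mean exposure {exposure.mean():.3f} ug/kg bw/day, "
      f"P(exposure > 1) = {(exposure > 1).mean():.4f}")
```

Latin hypercube sampling stratifies each input dimension, which is why @Risk-style models reach stable tail probabilities with fewer iterations than plain random sampling.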
Numerical simulations of strongly correlated electron and spin systems
NASA Astrophysics Data System (ADS)
Changlani, Hitesh Jaiprakash
Developing analytical and numerical tools for strongly correlated systems is a central challenge for the condensed matter physics community. In the absence of exact solutions and controlled analytical approximations, numerical techniques have often contributed to our understanding of these systems. Exact Diagonalization (ED) requires the storage of at least two vectors the size of the Hilbert space under consideration (which grows exponentially with system size) which makes it affordable only for small systems. The Density Matrix Renormalization Group (DMRG) uses an intelligent Hilbert space truncation procedure to significantly reduce this cost, but in its present formulation is limited to quasi-1D systems. Quantum Monte Carlo (QMC) maps the Schrodinger equation to the diffusion equation (in imaginary time) and only samples the eigenvector over time, thereby avoiding the memory limitation. However, the stochasticity involved in the method gives rise to the "sign problem" characteristic of fermion and frustrated spin systems. The first part of this thesis is an effort to make progress in the development of a numerical technique which overcomes the above mentioned problems. We consider novel variational wavefunctions, christened "Correlator Product States" (CPS), that have a general functional form which hopes to capture essential correlations in the ground states of spin and fermion systems in any dimension. We also consider a recent proposal to modify projector (Green's Function) Quantum Monte Carlo to ameliorate the sign problem for realistic and model Hamiltonians (such as the Hubbard model). This exploration led to our own set of improvements, primarily a semistochastic formulation of projector Quantum Monte Carlo. Despite their limitations, existing numerical techniques can yield physical insights into a wide variety of problems. The second part of this thesis considers one such numerical technique - DMRG - and adapts it to study the Heisenberg antiferromagnet on a generic tree graph. Our attention turns to a systematic numerical and semi-analytical study of the effect of local even/odd sublattice imbalance on the low energy spectrum of antiferromagnets on regular Cayley trees. Finally, motivated by previous experiments and theories of randomly diluted antiferromagnets (where an even/odd sublattice imbalance naturally occurs), we present our study of the Heisenberg antiferromagnet on the Cayley tree at the percolation threshold. Our work shows how to detect "emergent" low energy degrees of freedom and compute the effective interactions between them by using data from DMRG calculations.
Data analytics using canonical correlation analysis and Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles
2017-07-01
A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively few combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte Carlo based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.
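A minimal sketch of the Monte Carlo idea: randomly sample monotone power transforms of the input variables and keep the set that maximizes the first canonical correlation, going beyond the linear combinations of standard CCA. The transform family, search budget, and synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(10)
n = 300
X = rng.uniform(0.1, 2.0, (n, 3))                     # "processing" variables
Y = np.column_stack([X[:, 0] ** 2 + 0.1 * rng.standard_normal(n),
                     np.sqrt(X[:, 1]) + 0.1 * rng.standard_normal(n)])

def first_canonical_corr(X, Y):
    u, v = CCA(n_components=1).fit_transform(X, Y)
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

rho_linear = first_canonical_corr(X, Y)               # plain (linear) CCA
best_rho, best_exp = rho_linear, np.ones(X.shape[1])
for _ in range(200):                                  # Monte Carlo over exponents
    e = rng.uniform(0.25, 3.0, X.shape[1])
    rho = first_canonical_corr(X ** e, Y)
    if rho > best_rho:
        best_rho, best_exp = rho, e

print(f"linear rho {rho_linear:.3f} -> transformed rho {best_rho:.3f}, "
      f"exponents {np.round(best_exp, 2)}")
```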
ERIC Educational Resources Information Center
Wang, Wen-Chung
2004-01-01
The Pearson correlation is used to depict effect sizes in the context of item response theory. A multidimensional Rasch model is used to directly estimate the correlation between latent traits. Monte Carlo simulations were conducted to investigate whether the population correlation could be accurately estimated and whether the bootstrap method…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokár, K.; Derian, R.; Mitas, L.
Using explicitly correlated fixed-node quantum Monte Carlo and density functional theory (DFT) methods, we study electronic properties, ground-state multiplets, ionization potentials, electron affinities, and low-energy fragmentation channels of charged half-sandwich and multidecker vanadium-benzene systems with up to 3 vanadium atoms, including both anions and cations. It is shown that, particularly in anions, electronic correlations play a crucial role; these effects are not systematically captured with any commonly used DFT functionals such as gradient-corrected, hybrid, and range-separated hybrid functionals. On the other hand, tightly bound cations can be described qualitatively by DFT. A comparison of DFT and quantum Monte Carlo provides an in-depth understanding of the electronic structure and properties of these correlated systems. The calculations also serve as a benchmark study of 3d molecular anions that require a balanced many-body description of correlations at both short- and long-range distances.
A variational Monte Carlo study of different spin configurations of electron-hole bilayer
NASA Astrophysics Data System (ADS)
Sharma, Rajesh O.; Saini, L. K.; Bahuguna, Bhagwati Prasad
2018-05-01
We report quantum Monte Carlo results for a mass-asymmetric electron-hole bilayer (EHBL) system with different spin configurations. In particular, we apply a variational Monte Carlo method to estimate the ground-state energy, condensate fraction, and pair-correlation functions at fixed density rs = 5 and interlayer distance d = 1 a.u. We find that the spin configuration of the EHBL system consisting of only up-electrons in one layer and down-holes in the other, i.e., a ferromagnetic arrangement within layers and an anti-ferromagnetic one across the layers, is more stable than the other spin configurations considered in this study.
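For readers unfamiliar with variational Monte Carlo, here is the method reduced to its simplest setting, a 1D harmonic oscillator with a Gaussian trial wavefunction, rather than the bilayer system studied above. The Metropolis walk samples |psi|^2 and the variational energy is the average of the local energy; the exact minimum is at alpha = 1/2 with E = 1/2.

```python
import numpy as np

# Variational Monte Carlo for a 1D harmonic oscillator (hbar = m = omega = 1)
# with trial wavefunction psi_a(x) = exp(-a x^2). The local energy is
#   E_L(x) = a + (1/2 - 2 a^2) x^2,
# and Metropolis sampling targets |psi_a|^2 = exp(-2 a x^2).
rng = np.random.default_rng(11)

def vmc_energy(a, steps=100_000, width=1.0):
    x, energies = 0.0, []
    for _ in range(steps):
        x_new = x + width * (rng.random() - 0.5)
        if rng.random() < np.exp(-2 * a * (x_new ** 2 - x ** 2)):
            x = x_new
        energies.append(a + (0.5 - 2 * a * a) * x * x)
    return np.mean(energies[steps // 10:])       # drop burn-in

for a in (0.3, 0.5, 0.7):
    print(f"alpha = {a}: E = {vmc_energy(a):.4f}")
```

The printed energies trace out the variational curve E(a) = a/2 + 1/(8a); real applications differ only in the trial wavefunction and the dimensionality of the walk.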
Importance sampling large deviations in nonequilibrium steady states. I.
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T
2018-03-28
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
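The failure mode described, statistics dominated by exponentially rare trajectories, already shows up for a free Brownian walker, whose scaled cumulant generating function psi(lambda) = lambda^2 / 2 (unit diffusion) is known exactly. A sketch exploiting the fact that the final displacement is Gaussian:

```python
import numpy as np

# Why naive sampling fails for large deviations: the direct average of
# exp(lam * x_t) for a Brownian walker is dominated by exponentially rare
# excursions, so at fixed sample size the estimate degrades as t grows.
rng = np.random.default_rng(12)
lam, n_traj = 1.0, 10_000

for t in (1.0, 5.0, 10.0, 20.0):
    x_t = np.sqrt(t) * rng.standard_normal(n_traj)   # x_t ~ N(0, t)
    est = np.log(np.mean(np.exp(lam * x_t))) / t
    print(f"t = {t:5.1f}: naive psi = {est:.3f}   (exact {lam ** 2 / 2:.3f})")
```

At large t the estimate falls well below the exact value because the sample almost never contains the displacements of order lambda*t that dominate the average; guided methods such as those discussed in the paper are designed to steer sampling toward exactly those trajectories.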
Adam, J.; Adamová, D.; Aggarwal, M. M.; ...
2017-08-24
We measured two-particle angular correlations in pp collisions at √s = 7 TeV for pions, kaons, protons, and lambdas, for all particle/anti-particle combinations in the pair. Data for mesons exhibit an expected peak dominated by effects associated with mini-jets and are well reproduced by general purpose Monte Carlo generators. However, for baryon-baryon and anti-baryon-anti-baryon pairs, where both particles have the same baryon number, a near-side anti-correlation structure is observed instead of a peak. This effect is interpreted in the context of baryon production mechanisms in the fragmentation process. It currently presents a challenge to Monte Carlo models and its origin remains an open question.
Complete event simulations of nuclear fission
NASA Astrophysics Data System (ADS)
Vogt, Ramona
2015-10-01
For many years, the state of the art for treating fission in radiation transport codes has involved sampling from average distributions. In these average fission models energy is not explicitly conserved and everything is uncorrelated because all particles are emitted independently. However, in a true fission event, the energies, momenta and multiplicities of the emitted particles are correlated. Such correlations are interesting for many modern applications. Event-by-event generation of complete fission events makes it possible to retain the kinematic information for all particles emitted: the fission products as well as prompt neutrons and photons. It is therefore possible to extract any desired correlation observables. Complete event simulations can be included in general Monte Carlo transport codes. We describe the general functionality of currently available fission event generators and compare results for several important observables. This work was performed under the auspices of the US DOE by LLNL, Contract DE-AC52-07NA27344. We acknowledge support of the Office of Defense Nuclear Nonproliferation Research and Development in DOE/NNSA.
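A toy event-by-event generator illustrates the contrast with average fission models: each event carries its own multiplicity and neutron energies, so correlations within an event are retained and any observable can be built afterwards. The Watt-spectrum parameters and multiplicity distribution below are illustrative 235U-like values, not those of any production code, and the spectrum is sampled by inverse transform on a grid.

```python
import numpy as np

rng = np.random.default_rng(13)
a, b = 0.988, 2.249                     # Watt parameters in MeV (illustrative)

# Tabulated inverse-CDF sampling of the Watt spectrum f(E) ~ exp(-E/a) sinh(sqrt(bE))
E_grid = np.linspace(1e-4, 20.0, 4000)
pdf = np.exp(-E_grid / a) * np.sinh(np.sqrt(b * E_grid))
cdf = np.cumsum(pdf)
cdf /= cdf[-1]

def watt_sample(size):
    return np.interp(rng.random(size), cdf, E_grid)

# Illustrative neutron multiplicity distribution (sums to 1, mean ~2.5)
multiplicity_pdf = np.array([0.03, 0.16, 0.33, 0.30, 0.13, 0.04, 0.01])

def fission_event():
    nu = rng.choice(len(multiplicity_pdf), p=multiplicity_pdf)
    return watt_sample(nu)              # all neutron energies of one event

events = [fission_event() for _ in range(10_000)]
nus = np.array([len(e) for e in events])
print(f"mean multiplicity {nus.mean():.2f}, "
      f"mean neutron energy {np.concatenate(events).mean():.2f} MeV")
```

Production generators additionally correlate multiplicity, energies, and photons with the sampled fragment pair so that energy and momentum balance event by event, which is precisely what the average-distribution treatment cannot do.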
NASA Astrophysics Data System (ADS)
Jeanmairet, Guillaume; Sharma, Sandeep; Alavi, Ali
2017-01-01
In this article we report a stochastic evaluation of the recently proposed multireference linearized coupled cluster theory [S. Sharma and A. Alavi, J. Chem. Phys. 143, 102815 (2015)]. In this method, both the zeroth-order and first-order wavefunctions are sampled stochastically by propagating simultaneously two populations of signed walkers. The sampling of the zeroth-order wavefunction follows a set of stochastic processes identical to the one used in the full configuration interaction quantum Monte Carlo (FCIQMC) method. To sample the first-order wavefunction, the usual FCIQMC algorithm is augmented with a source term that spawns walkers in the sampled first-order wavefunction from the zeroth-order wavefunction. The second-order energy is also computed stochastically but requires no additional overhead outside of the added cost of sampling the first-order wavefunction. This fully stochastic method opens up the possibility of simultaneously treating large active spaces to account for static correlation and recovering the dynamical correlation using perturbation theory. The method is used to study a few benchmark systems including the carbon dimer and aromatic molecules. We have computed the singlet-triplet gaps of benzene and m-xylylene. For m-xylylene, which has proved difficult for standard complete active space self-consistent field theory with perturbative correction, we find the singlet-triplet gap to be in good agreement with the experimental values.
Monte Carlo analysis of neutron diffuse scattering data
NASA Astrophysics Data System (ADS)
Goossens, D. J.; Heerdegen, A. P.; Welberry, T. R.; Gutmann, M. J.
2006-11-01
This paper presents a discussion of a technique developed for the analysis of neutron diffuse scattering data. The technique involves processing the data into reciprocal space sections and modelling the diffuse scattering in these sections. A Monte Carlo modelling approach is used in which the crystal energy is a function of interatomic distances between molecules and torsional rotations within molecules. The parameters of the model are the spring constants governing the interactions, as they determine the correlations which evolve when the model crystal structure is relaxed at finite temperature. When the model crystal has reached equilibrium its diffraction pattern is calculated and a χ² goodness-of-fit test between observed and calculated data slices is performed. This allows a least-squares refinement of the fit parameters and so automated refinement can proceed. The first application of this methodology to neutron, rather than X-ray, data is outlined. The sample studied was deuterated benzil, d-benzil, C₁₄D₁₀O₂, for which data were collected using time-of-flight Laue diffraction on SXD at ISIS.
NASA Astrophysics Data System (ADS)
Schiavon, Nick; de Palmas, Anna; Bulla, Claudio; Piga, Giampaolo; Brunetti, Antonio
2016-09-01
A spectrometric protocol combining Energy Dispersive X-Ray Fluorescence Spectrometry with Monte Carlo simulations of experimental spectra using the XRMC code package has been applied for the first time to characterize the elemental composition of a series of famous Iron Age small scale archaeological bronze replicas of ships (known as the "Navicelle") from the Nuragic civilization in Sardinia, Italy. The proposed protocol is a useful, nondestructive and fast analytical tool for Cultural Heritage samples. In the Monte Carlo simulations, each sample was modeled as a multilayered object composed of two or three layers depending on the sample: where all are present, the three layers are the original bronze substrate, the surface corrosion patina and an outermost protective layer (Paraloid) applied during past restorations. The Monte Carlo simulations were able to account for the presence of the patina/corrosion layer as well as the presence of the Paraloid protective layer. They also accounted for the roughness effect commonly found at the surface of corroded metal archaeological artifacts. In this respect, the Monte Carlo simulation approach adopted here was, to the best of our knowledge, unique and enabled us to determine the bronze alloy composition together with the thickness of the surface layers without the need to first remove the surface patinas, a process that potentially threatens the preservation of precious archaeological/artistic artifacts for future generations.
NASA Astrophysics Data System (ADS)
Griffin, Patrick; Rochman, Dimitri; Koning, Arjan
2017-09-01
A rigorous treatment of the uncertainty in the underlying nuclear data on silicon displacement damage metrics is presented. The uncertainty in the cross sections and recoil atom spectra are propagated into the energy-dependent uncertainty contribution in the silicon displacement kerma and damage energy using a Total Monte Carlo treatment. An energy-dependent covariance matrix is used to characterize the resulting uncertainty. A strong correlation between different reaction channels is observed in the high energy neutron contributions to the displacement damage metrics which supports the necessity of using a Monte Carlo based method to address the nonlinear nature of the uncertainty propagation.
Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...
2018-06-14
Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.
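The Feynman histogram named among the benchmark parameters above can be illustrated with a short sketch. The following Python snippet is a minimal illustration with synthetic Poisson arrival times rather than benchmark data; the gate width, source rate, and function name are hypothetical choices.

```python
import numpy as np

def feynman_y(event_times, gate_width, t_max):
    """Estimate the Feynman-Y excess variance from time-tagged counts.

    Y = var(c)/mean(c) - 1 is zero for a Poisson (uncorrelated) source
    and positive in the presence of correlated fission chains.
    """
    edges = np.arange(0.0, t_max, gate_width)
    counts, _ = np.histogram(event_times, bins=edges)
    m = counts.mean()
    return counts.var() / m - 1.0 if m > 0 else 0.0

# Illustration: uncorrelated (Poisson) arrivals give Y ~ 0.
rng = np.random.default_rng(0)
times = np.cumsum(rng.exponential(1e-4, size=100_000))  # ~10 kHz source
print(feynman_y(times, gate_width=1e-3, t_max=times[-1]))
```

Time-tagged data from a correlated fission source, fed through the same function over a range of gate widths, would trace out the Feynman histograms the benchmark compares.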
Steen Magnussen
2009-01-01
Areas burned annually in 29 Canadian forest fire regions show a patchy and irregular correlation structure that significantly influences the distribution of annual totals for Canada and for groups of regions. A binary Monte Carlo Markov Chain (MCMC) is constructed for the purpose of joint simulation of regional areas burned in forest fires. For each year the MCMC...
NASA Astrophysics Data System (ADS)
Schmitz, R.; Yordanov, S.; Butt, H. J.; Koynov, K.; Dünweg, B.
2011-12-01
Total internal reflection fluorescence cross-correlation spectroscopy (TIR-FCCS) has recently [S. Yordanov et al., Opt. Express 17, 21149 (2009)] been established as an experimental method to probe hydrodynamic flows near surfaces, on length scales of tens of nanometers. Its main advantage is that fluorescence occurs only for tracer particles close to the surface, thus resulting in high sensitivity. However, the measured correlation functions provide only rather indirect information about the flow parameters of interest, such as the shear rate and the slip length. In the present paper, we show how to combine detailed and fairly realistic theoretical modeling of the phenomena by Brownian dynamics simulations with accurate measurements of the correlation functions, in order to establish a quantitative method to retrieve the flow properties from the experiments. First, Brownian dynamics is used to sample highly accurate correlation functions for a fixed set of model parameters. Second, these parameters are varied systematically by means of an importance-sampling Monte Carlo procedure in order to fit the experiments. This provides the optimum parameter values together with their statistical error bars. The approach is well suited for massively parallel computers, which allows us to do the data analysis within moderate computing times. The method is applied to flow near a hydrophilic surface, where the slip length is observed to be smaller than 10 nm and, within the limitations of the experiments and the model, indistinguishable from zero.
NASA Astrophysics Data System (ADS)
Bhattacharyya, Swarnapratim; Haiduc, Maria; Neagu, Alina Tania; Firu, Elena
2014-07-01
We have presented a systematic study of two-particle rapidity correlations in terms of the dynamical fluctuation observable σ_c² in forward-backward pseudo-rapidity windows, by analyzing the experimental data of ¹⁶O-AgBr interactions at 4.5 A GeV/c, ²²Ne-AgBr interactions at 4.1 A GeV/c, and ²⁸Si-AgBr and ³²S-AgBr interactions at 4.5 A GeV/c. The experimental results have been compared with the results obtained from the analysis of an event sample simulated by generating random numbers (MC-RAND), and also with the analysis of events generated by the UrQMD and AMPT models. Our study confirms the presence of strong short-range correlations among the produced particles in the forward and the backward pseudo-rapidity regions. The analysis of the simple Monte Carlo-simulated (MC-RAND) events signifies that the observed correlations are not due to mere statistics; such correlations can be attributed to the presence of dynamical fluctuations during the production of charged pions. Comparisons of the experimental results with the results obtained from analyzing the UrQMD data sample indicate that the UrQMD model cannot reproduce the experimental findings. The AMPT model also cannot explain the experimental results satisfactorily. Comparisons of our experimental results with the results obtained from the analysis of higher energy emulsion data and with the results of the RHIC data have also been presented.
The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sellier, J.M., E-mail: jeanmichel.sellier@parallel.bas.bg; Dimov, I.
2014-09-15
The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation which represents an incredible mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We, then, proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.
NASA Technical Reports Server (NTRS)
Deepak, A.; Fluellen, A.
1978-01-01
An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
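As a rough illustration of the contrast drawn above between random and systematic sample points, the sketch below compares plain Monte Carlo with a rank-1 Fibonacci lattice on a separable two-variable integrand. The lattice is only a stand-in for the Conroy point sets, which are not reproduced here; the integrand and point counts are arbitrary choices.

```python
import numpy as np
from math import erf, pi

def f(x, y):
    return np.exp(-(x**2 + y**2))  # separable test integrand on [0,1]^2

N = 987  # Fib(16); the lattice below uses Fib(15) = 610 as generator
rng = np.random.default_rng(1)

# Plain Monte Carlo: points placed at random in the unit square.
xr, yr = rng.random(N), rng.random(N)
mc = f(xr, yr).mean()

# Systematic points: a rank-1 Fibonacci lattice, a simple stand-in for
# the closed, symmetric point patterns of the Conroy method.
k = np.arange(N)
xl, yl = k / N, (k * 610.0 / N) % 1.0
lattice = f(xl, yl).mean()

exact = (pi / 4) * erf(1.0) ** 2  # product of two 1D Gaussian integrals
print(mc - exact, lattice - exact)  # the lattice error is typically smaller
```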
ERIC Educational Resources Information Center
Carter, David S.
1979-01-01
There are a variety of formulas for reducing the positive bias which occurs in estimating R squared in multiple regression or correlation equations. Five different formulas are evaluated in a Monte Carlo study, and recommendations are made. (JKS)
Patra, Chandra N
2014-11-14
A systematic investigation of spherical electric double layers with electrolytes having size as well as charge asymmetry is carried out using density functional theory and Monte Carlo simulations. The system is considered within the primitive model, where the macroion is a structureless hard spherical colloid, the small ions are charged hard spheres of different sizes, and the solvent is represented as a dielectric continuum. The present theory approximates the hard sphere part of the one-particle correlation function using a weighted density approach, whereas a perturbation expansion around the uniform fluid is applied to evaluate the ionic contribution. The theory is in quantitative agreement with Monte Carlo simulation for the density and the mean electrostatic potential profiles over a wide range of electrolyte concentrations, surface charge densities, valences of small ions, and macroion sizes. The theory provides distinctive evidence of charge and size correlations within the electrode-electrolyte interface in spherical geometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azadi, Sam, E-mail: s.azadi@ucl.ac.uk; Cohen, R. E.
We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of −2.3(4) and −2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory through perturbative triplets/complete basis set limit is −2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.
Monte Carlo chord length sampling for d-dimensional Markov binary mixtures
NASA Astrophysics Data System (ADS)
Larmier, Coline; Lam, Adam; Brantley, Patrick; Malvagi, Fausto; Palmer, Todd; Zoia, Andrea
2018-01-01
The Chord Length Sampling (CLS) algorithm is a powerful Monte Carlo method that models the effects of stochastic media on particle transport by generating on-the-fly the material interfaces seen by the random walkers during their trajectories. This annealed disorder approach, which formally consists of solving the approximate Levermore-Pomraning equations for linear particle transport, enables a considerable speed-up with respect to transport in quenched disorder, where ensemble-averaging of the Boltzmann equation with respect to all possible realizations is needed. However, CLS intrinsically neglects the correlations induced by the spatial disorder, so that the accuracy of the solutions obtained by using this algorithm must be carefully verified with respect to reference solutions based on quenched disorder realizations. When the disorder is described by Markov mixing statistics, such comparisons have been attempted so far only for one-dimensional geometries, of the rod or slab type. In this work we extend these results to Markov media in two-dimensional (extruded) and three-dimensional geometries, by revisiting the classical set of benchmark configurations originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]. In particular, we examine the discrepancies between CLS and reference solutions for scalar particle flux and transmission/reflection coefficients as a function of the material properties of the benchmark specifications and of the system dimensionality.
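A minimal sketch of the on-the-fly interface generation that defines CLS may be helpful. The following Python fragment transports particles through a 1D binary Markov mixture of two purely absorbing materials, sampling exponential chord lengths as it goes; the cross sections, mean chords, and the equiprobable choice of entry material are illustrative simplifications, not the benchmark specifications.

```python
import numpy as np

rng = np.random.default_rng(2)

def transmit_cls(sigma_t, mean_chords, slab_length, n_histories=10_000):
    """Crude 1D Chord Length Sampling sketch for a binary Markov mixture.

    Material segments are generated on the fly with exponentially
    distributed chord lengths; a particle is transmitted if it crosses
    the slab without colliding (pure absorbers, forward flight only).
    """
    transmitted = 0
    for _ in range(n_histories):
        x = 0.0
        mat = int(rng.integers(2))  # entry material; volume-fraction weighting skipped for brevity
        alive = True
        while alive and x < slab_length:
            chord = rng.exponential(mean_chords[mat])
            flight = rng.exponential(1.0 / sigma_t[mat])
            if flight < min(chord, slab_length - x):
                alive = False   # collision (absorbed) inside the slab
            else:
                x += chord      # cross the sampled interface
                mat = 1 - mat   # binary mixture: switch material
        if alive:
            transmitted += 1
    return transmitted / n_histories

print(transmit_cls(sigma_t=[10.0, 0.1], mean_chords=[0.1, 1.0], slab_length=5.0))
```

A quenched-disorder reference solution would instead fix a full realization of the interfaces before transport and ensemble-average over realizations, which is exactly the comparison the benchmark study performs.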
Hydrologic Model Selection using Markov chain Monte Carlo methods
NASA Astrophysics Data System (ADS)
Marshall, L.; Sharma, A.; Nott, D.
2002-12-01
Estimation of parameter uncertainty (and in turn model uncertainty) allows assessment of the risk in likely applications of hydrological models. Bayesian statistical inference provides an ideal means of assessing parameter uncertainty whereby prior knowledge about the parameter is combined with information from the available data to produce a probability distribution (the posterior distribution) that describes uncertainty about the parameter and serves as a basis for selecting appropriate values for use in modelling applications. Widespread use of Bayesian techniques in hydrology has been hindered by difficulties in summarizing and exploring the posterior distribution. These difficulties have been largely overcome by recent advances in Markov chain Monte Carlo (MCMC) methods that involve random sampling of the posterior distribution. This study presents an adaptive MCMC sampling algorithm which has characteristics that are well suited to model parameters with a high degree of correlation and interdependence, as is often evident in hydrological models. The MCMC sampling technique is used to compare six alternative configurations of a commonly used conceptual rainfall-runoff model, the Australian Water Balance Model (AWBM), using 11 years of daily rainfall-runoff data from the Bass river catchment in Australia. The alternative configurations considered fall into two classes - those that consider model errors to be independent of prior values, and those that model the errors as an autoregressive process. Each such class consists of three formulations that represent increasing levels of complexity (and parameterisation) of the original model structure. The results from this study point both to the importance of using Bayesian approaches in evaluating model performance and to the simplicity of the MCMC sampling framework, which has the ability to bring such approaches within the reach of the applied hydrological community.
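To make the adaptive-sampling idea concrete, here is a minimal Haario-style adaptive Metropolis sketch in Python, applied to a toy Gaussian posterior standing in for a rainfall-runoff model's likelihood; the model, data, and tuning constants are all illustrative assumptions, not the AWBM configurations from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(theta, y):
    """Toy log-posterior: Gaussian likelihood with unknown mean and
    log-standard-deviation, flat prior (illustrative stand-in)."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return -len(y) * log_sigma - 0.5 * np.sum((y - mu) ** 2) / sigma**2

def adaptive_metropolis(y, n_iter=20_000, adapt_every=500):
    theta = np.zeros(2)
    cov = np.eye(2) * 0.1
    chain, lp = [], log_post(theta, y)
    for i in range(n_iter):
        prop = rng.multivariate_normal(theta, cov)
        lp_prop = log_post(prop, y)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept
            theta, lp = prop, lp_prop
        chain.append(theta)
        # Adaptation: rescale the proposal to the sampled covariance,
        # which captures correlation between parameters.
        if i % adapt_every == 0 and i > 1000:
            cov = np.cov(np.array(chain).T) * 2.38**2 / 2 + 1e-8 * np.eye(2)
    return np.array(chain)

y = rng.normal(1.5, 0.7, size=200)
print(adaptive_metropolis(y)[10_000:].mean(axis=0))  # posterior means
```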
Local Linear Regression for Data with AR Errors.
Li, Runze; Li, Yan
2009-07-01
In many statistical applications, data are collected over time, and they are likely correlated. In this paper, we investigate how to incorporate the correlation information into the local linear regression. Under the assumption that the error process is an autoregressive process, a new estimation procedure is proposed for the nonparametric regression by using the local linear regression method and the profile least squares technique. We further propose the SCAD-penalized profile least squares method to determine the order of the autoregressive process. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed procedure, and to compare its performance with the existing one. From our empirical studies, the newly proposed procedures can dramatically improve the accuracy of naive local linear regression with a working-independence error structure. We illustrate the proposed methodology by an analysis of a real data set.
Anomalous Protein-Protein Interactions in Multivalent Salt Solution.
Pasquier, Coralie; Vazdar, Mario; Forsman, Jan; Jungwirth, Pavel; Lund, Mikael
2017-04-13
The stability of aqueous protein solutions is strongly affected by multivalent ions, which induce ion-ion correlations beyond the scope of classical mean-field theory. Using all-atom molecular dynamics (MD) and coarse-grained Monte Carlo (MC) simulations, we investigate the interaction between a pair of protein molecules in 3:1 electrolyte solution. In agreement with available experimental findings of "reentrant protein condensation", we observe an anomalous trend in the protein-protein potential of mean force with increasing electrolyte concentration, in the order: (i) double-layer repulsion, (ii) ion-ion correlation attraction, (iii) overcharge repulsion, and, in excess of 1:1 salt, (iv) non-Coulombic attraction. To efficiently sample configurational space we explore hybrid continuum solvent models, applicable to many-protein systems, where weakly coupled ions are treated implicitly while strongly coupled ones are treated explicitly. Good agreement is found with the primitive model of electrolytes, as well as with atomic models of protein and solvent.
Leão, William L; Abanto-Valle, Carlos A; Chen, Ming-Hui
2017-01-01
A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor's 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and the GHST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model.
NASA Astrophysics Data System (ADS)
Allaf, M. Athari; Shahriari, M.; Sohrabpour, M.
2004-04-01
A new method using Monte Carlo source simulation of interference reactions in neutron activation analysis experiments has been developed. The neutron spectrum at the sample location has been simulated using the Monte Carlo code MCNP, and the contributions of different elements to the production of a specified gamma line have been determined. The resulting response matrix, together with the measured peak areas, has been used to determine the sample masses of the elements of interest. A number of benchmark experiments have been performed and the calculated results verified against known values. The good agreement obtained between the calculated and known values suggests that this technique may be useful for the elimination of interference reactions in neutron activation analysis.
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
The cosmic X-ray background-IRAS galaxy correlation and the local X-ray volume emissivity
NASA Technical Reports Server (NTRS)
Miyaji, Takamitsu; Lahav, Ofer; Jahoda, Keith; Boldt, Elihu
1994-01-01
We have cross-correlated the galaxies from the IRAS 2 Jy redshift survey sample and the 0.7 Jy projected sample with the all-sky cosmic X-ray background (CXB) map obtained from the High Energy Astronomy Observatory (HEAO) 1 A-2 experiment. We have detected a significant correlation signal between the surface density of IRAS galaxies and the X-ray background intensity, with W_xg = ⟨δI δN⟩/(⟨I⟩⟨N⟩) of several times 10^-3. While this correlation signal has a significant implication for the contribution of the local universe to the hard (E > 2 keV) X-ray background, its interpretation is model-dependent. We have developed a formulation to model the cross-correlation between CXB surface brightness and galaxy counts. This includes the effects of source clustering and the X-ray-far-infrared luminosity correlation. Using an X-ray flux-limited sample of active galactic nuclei (AGNs), which has IRAS 60 micrometer measurements, we have estimated the contribution of the AGN component to the observed CXB-IRAS galaxy count correlations in order to see whether there is an excess component, i.e., a contribution from low X-ray luminosity sources. We have applied both the analytical approach and Monte Carlo simulations for these estimates. Our estimate of the local X-ray volume emissivity in the 2-10 keV band is ρ_x ≈ (4.3 ± 1.2) × 10^38 h_50 erg s^-1 Mpc^-3, consistent with the value expected from the luminosity function of AGNs alone. This sets a limit on the local volume emissivity from lower luminosity sources (e.g., star-forming galaxies, low-ionization nuclear emission-line regions (LINERs)) of ρ_x ≲ 2 × 10^38 h_50 erg s^-1 Mpc^-3.
Longo, Mariaconcetta; Marchioni, Chiara; Insero, Teresa; Donnarumma, Raffaella; D'Adamo, Alessandro; Lucatelli, Pierleone; Fanelli, Fabrizio; Salvatori, Filippo Maria; Cannavale, Alessandro; Di Castro, Elisabetta
2016-03-01
This study evaluates X-ray exposure in patients undergoing abdominal extra-vascular interventional procedures by means of Digital Imaging and COmmunications in Medicine (DICOM) image headers and Monte Carlo simulation. The main aim was to assess the effective and equivalent doses, under the hypothesis of their correlation with the dose area product (DAP) measured during each examination. This allows dosimetric information to be collected for each patient and the associated risks to be evaluated without resorting to in vivo dosimetry. The dose calculation was performed for 79 procedures with the Monte Carlo simulator PCXMC (a PC-based Monte Carlo program for calculating patient doses in medical X-ray examinations), using the real geometrical and dosimetric irradiation conditions automatically extracted from the DICOM headers. The DAP measurements were also validated by using thermoluminescent dosemeters on an anthropomorphic phantom. The expected linear correlation between effective doses and DAP was confirmed, with an R² of 0.974. Moreover, in order to easily calculate patient doses, conversion coefficients that relate equivalent doses to measurable quantities, such as DAP, were obtained. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Simulation-Based Model Checking for Nondeterministic Systems and Rare Events
2016-03-24
... year, we have investigated AO* search and Monte Carlo Tree Search algorithms to complement and enhance CMU's SMCMDP. ... the tree, so we can use it to find the probability of reachability for a property in PRISM's Probabilistic LTL. By finding the maximum probability of ... savings, particularly when handling very large models. ... The Monte Carlo sampling process in SMCMDP can take a long time to ...
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
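The notion of an ensemble of quasi-random point sets can be made concrete with randomly shifted low-discrepancy points. The sketch below is an illustrative assumption rather than the authors' construction: it compares the spread of integration errors for plain Monte Carlo and a randomly shifted van der Corput sequence on a simple one-dimensional integrand.

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the van der Corput low-discrepancy sequence."""
    seq = np.zeros(n)
    for i in range(n):
        q, denom, k = 0.0, 1.0, i + 1
        while k:
            denom *= base
            k, r = divmod(k, base)
            q += r / denom
        seq[i] = q
    return seq

f = lambda x: x**2          # exact integral over [0,1] is 1/3
n, reps = 256, 2000
rng = np.random.default_rng(4)

# Error distributions over many replications; random shifts modulo 1
# (Cranley-Patterson rotations) define the quasi-random ensemble.
mc_err = np.array([f(rng.random(n)).mean() - 1/3 for _ in range(reps)])
base_pts = van_der_corput(n)
qmc_err = np.array([f((base_pts + rng.random()) % 1.0).mean() - 1/3
                    for _ in range(reps)])
print(mc_err.std(), qmc_err.std())  # the QMC spread is markedly smaller
```

Histogramming the two error samples gives an empirical view of the error distributions the paper derives analytically.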
Jasra, Ajay; Law, Kody J. H.; Zhou, Yan
2016-01-01
Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work needed to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
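A compact sketch of the MLMC telescoping estimator mentioned above may help. The example below uses an Euler-discretized geometric Brownian motion payoff as a stand-in for the nonlocal-equation solver, with sample allocations chosen ad hoc rather than by the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(5)

def level_estimator(level, n_samples, T=1.0):
    """Sample mean of P_l - P_{l-1}, where P_l uses an Euler scheme with
    2**level steps and shares its Brownian increments with the coarser
    2**(level-1)-step path (the MLMC coupling)."""
    steps = 2**level
    dt = T / steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, steps))
    def euler(dw, h):
        x = np.ones(n_samples)
        for k in range(dw.shape[1]):
            x = x + 0.05 * x * h + 0.2 * x * dw[:, k]
        return np.maximum(x - 1.0, 0.0)   # call-style payoff
    fine = euler(dW, dt)
    if level == 0:
        return fine.mean()
    coarse = euler(dW[:, 0::2] + dW[:, 1::2], 2 * dt)  # coupled coarse path
    return (fine - coarse).mean()

# Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]; the fine,
# expensive levels need far fewer samples because the differences are small.
estimate = sum(level_estimator(l, n) for l, n in
               zip(range(5), [100_000, 40_000, 16_000, 6_000, 2_500]))
print(estimate)
```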
Entanglement and the fermion sign problem in auxiliary field quantum Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Broecker, Peter; Trebst, Simon
2016-08-01
Quantum Monte Carlo simulations of fermions are hampered by the notorious sign problem whose most striking manifestation is an exponential growth of sampling errors with the number of particles. With the sign problem known to be an NP-hard problem and any generic solution thus highly elusive, the Monte Carlo sampling of interacting many-fermion systems is commonly thought to be restricted to a small class of model systems for which a sign-free basis has been identified. Here we demonstrate that entanglement measures, in particular the so-called Rényi entropies, can intrinsically exhibit a certain robustness against the sign problem in auxiliary-field quantum Monte Carlo approaches and possibly allow for the identification of global ground-state properties via their scaling behavior even in the presence of a strong sign problem. We corroborate these findings via numerical simulations of fermionic quantum phase transitions of spinless fermions on the honeycomb lattice at and below half filling.
Monte Carlo simulation of air sampling methods for the measurement of radon decay products.
Sima, Octavian; Luca, Aurelian; Sahagia, Maria
2017-08-01
A stochastic model of the processes involved in the measurement of the activity of the ²²²Rn decay products was developed. The distributions of the relevant factors, including air sampling and radionuclide collection, are propagated using Monte Carlo simulation to the final distribution of the measurement results. The uncertainties of the ²²²Rn decay product concentrations in the air are realistically evaluated. Copyright © 2017 Elsevier Ltd. All rights reserved.
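The propagation of input distributions to the final measurement distribution can be sketched in a few lines. In the Python fragment below, the flow rate, collection efficiency, and count distributions are placeholder assumptions, not the paper's values; the structure, sampling every input and pushing the draws through the measurement equation, is the point.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Hypothetical input distributions for an air-sampling measurement.
flow = rng.normal(1.00, 0.03, n)     # sampling flow rate, L/min
eff = rng.normal(0.90, 0.05, n)      # filter collection efficiency
counts = rng.poisson(2500, n)        # net detector counts
t, det_eff = 10.0, 0.25              # sampling time (min), detection efficiency

# Activity concentration ~ counts / (efficiencies * sampled volume).
conc = counts / (det_eff * eff * flow * t)
lo, hi = np.percentile(conc, [2.5, 97.5])
print(conc.mean(), conc.std(), (lo, hi))  # realistic uncertainty interval
```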
Nonglobal correlations in collider physics
Moult, Ian; Larkoski, Andrew J.
2016-01-13
Despite their importance for precision QCD calculations, correlations between in- and out-of-jet regions of phase space have never directly been observed. These so-called non-global effects are present generically whenever a collider physics measurement is not explicitly dependent on radiation throughout the entire phase space. In this paper, we introduce a novel procedure based on mutual information, which allows us to isolate these non-global correlations between measurements made in different regions of phase space. We study this procedure both analytically and in Monte Carlo simulations in the context of observables measured on hadronic final states produced in e+e- collisions, though it is more widely applicable. The procedure exploits the sensitivity of soft radiation at large angles to non-global correlations, and we calculate these correlations through next-to-leading logarithmic accuracy. The bulk of these non-global correlations are found to be described in Monte Carlo simulation. They are enhanced by the inclusion of non-perturbative effects, which we show can be incorporated in our calculation through the use of a model shape function. As a result, this procedure illuminates the source of non-global correlations and has connections more broadly to fundamental quantities in quantum field theory.
Monte Carlo algorithms for Brownian phylogenetic models.
Horvilleur, Benjamin; Lartillot, Nicolas
2014-11-01
Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
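The fine-grained discretization of a Brownian path between fixed node values is essentially Brownian-bridge sampling. The sketch below is a generic construction rather than the authors' PhyloBayes implementation: it draws a log-rate trajectory along one branch conditioned on its endpoint values.

```python
import numpy as np

rng = np.random.default_rng(7)

def brownian_bridge(x0, x1, t, sigma):
    """Sample a Brownian path on a grid t[0..m] conditioned on the two
    endpoint values x0, x1: the kind of fine-grained branch
    discretization used in path-sampling MCMC updates."""
    m = len(t) - 1
    path = np.empty(m + 1)
    path[0], path[-1] = x0, x1
    for k in range(1, m):
        # Conditional Gaussian given the previous point and the endpoint.
        a, b = t[k] - t[k - 1], t[-1] - t[k]
        mean = (path[k - 1] * b + x1 * a) / (a + b)
        var = sigma**2 * a * b / (a + b)
        path[k] = rng.normal(mean, np.sqrt(var))
    return path

# Log substitution-rate trajectory along one branch of length 1.0,
# with 100 discretization points between fixed node values.
grid = np.linspace(0.0, 1.0, 101)
rates = np.exp(brownian_bridge(np.log(1.0), np.log(2.0), grid, sigma=0.5))
print(rates.mean())  # branchwise average rate implied by the sampled path
```

Resampling segments of such paths, rather than only node values, is what lets the sampler control discretization error explicitly.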
ERIC Educational Resources Information Center
Furlow, Carolyn F.; Beretvas, S. Natasha
2005-01-01
Three methods of synthesizing correlations for meta-analytic structural equation modeling (SEM) under different degrees and mechanisms of missingness were compared for the estimation of correlation and SEM parameters and goodness-of-fit indices by using Monte Carlo simulation techniques. A revised generalized least squares (GLS) method for…
Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality
ERIC Educational Resources Information Center
Bishara, Anthony J.; Hittner, James B.
2015-01-01
It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…
ERIC Educational Resources Information Center
Hafdahl, Adam R.; Williams, Michelle A.
2009-01-01
In 2 Monte Carlo studies of fixed- and random-effects meta-analysis for correlations, A. P. Field (2001) ostensibly evaluated Hedges-Olkin-Vevea Fisher-z and Schmidt-Hunter Pearson-r estimators and tests in 120 conditions. Some authors have cited those results as evidence not to meta-analyze Fisher-z correlations, especially with…
Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions
Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...
2015-11-01
Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas for all correlated moments are given up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
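The "remarkably simple Monte Carlo realization" can be suggested with a toy branching process. In the sketch below, the fission probability, the multiplicity law (Poisson, for brevity), and the event cap are illustrative assumptions, not the paper's model; it records fissions and leaked neutrons per chain and their correlated moments.

```python
import numpy as np

rng = np.random.default_rng(8)

def fission_chain(p_fission=0.3, nu_bar=2.5, max_events=10_000):
    """One Monte Carlo realization of a fission chain: each neutron
    either leaks/captures or induces a fission emitting a Poisson
    (illustrative choice) number of new neutrons."""
    neutrons, fissions, leaked = 1, 0, 0
    while neutrons and fissions + leaked < max_events:
        neutrons -= 1
        if rng.random() < p_fission:
            fissions += 1
            neutrons += rng.poisson(nu_bar)  # subcritical: p * nu_bar < 1
        else:
            leaked += 1
    return fissions, leaked

samples = np.array([fission_chain() for _ in range(20_000)])
# Correlated first and second moments of fissions and leaked neutrons,
# the kind of quantities the analytic moment formulas describe.
print(samples.mean(axis=0))
print(np.cov(samples.T))
```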
A molecular scale perspective: Monte Carlo simulation for rupturing of ultra thin polymer film melts
NASA Astrophysics Data System (ADS)
Singh, Satya Pal
2017-04-01
Monte Carlo simulation has been performed to study the rupturing process of a thin polymer film under strong confinement. The changes in mean square displacement, pair correlation function, density distribution, average bond length and microscopic viscosity are sampled by varying molecular interaction parameters such as the strengths and equilibrium positions of the bonding and non-bonding potentials and the sizes of the beads. The variation in the mean square angular displacement χ_θ = ⟨Δθ²⟩ − ⟨Δθ⟩² fits very well to a function of the form y(t) = A + B e^(−t/τ). This may help in studying the viscous properties of the films and their dependence on different parameters. The ultra thin film annealed at high temperature gets ruptured and holes are created in the film, mimicking spinodal dewetting. The pair correlation function and density profile reveal rich information about the equilibrium structure of the film. The strength and equilibrium bond length of the finite extensible nonlinear elastic (FENE) potential and the non-bonding Morse potential have a clear impact on the microscopic rupturing of the film. The beads show Rouse or reptation motion, forming rim-like structures near the holes created inside the film. Higher order interactions such as dipole-quadrupole may gain prominence under strong confinement. The enhanced excluded volume interaction under strong confinement may overlap with the molecular dispersion forces and can work to reorganize the molecules at the bottom of the scale, imprinting its signature in the complex patterns that evolve.
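Fitting the quoted relaxation form is routine; the following sketch fits y(t) = A + B·exp(−t/τ) to synthetic data with SciPy's curve_fit. The data, noise level, and starting values are placeholders, not simulation output from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def relax(t, A, B, tau):
    # Relaxation form quoted in the abstract: y(t) = A + B * exp(-t/tau).
    return A + B * np.exp(-t / tau)

t = np.linspace(0.0, 50.0, 200)
rng = np.random.default_rng(9)
y = relax(t, 1.0, -0.8, 12.0) + rng.normal(0.0, 0.01, t.size)  # synthetic data

popt, pcov = curve_fit(relax, t, y, p0=(1.0, -1.0, 10.0))
A, B, tau = popt
print(tau, np.sqrt(np.diag(pcov)))  # tau tracks the film's relaxation time
```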
Max-Moerbeck, W.; Hovatta, T.; Richards, J. L.; ...
2014-09-22
In order to determine the location of the gamma-ray emission site in blazars, we investigate the time-domain relationship between their radio and gamma-ray emission. Light curves for the brightest detected blazars from the first 3 years of the mission of the Fermi Gamma-ray Space Telescope are cross-correlated with 4 years of 15 GHz observations from the OVRO 40-m monitoring program. The large sample and long light-curve duration enable us to carry out a statistically robust analysis of the significance of the cross-correlations, which is investigated using Monte Carlo simulations including the uneven sampling and noise properties of the light curves. Modeling the light curves as red noise processes with power-law power spectral densities, we find that only one of 41 sources with high quality data in both bands shows correlations with significance larger than 3σ (AO 0235+164), with only two more larger than even 2.25σ (PKS 1502+106 and B2 2308+34). Additionally, we find correlated variability in Mrk 421 when including a strong flare that occurred in July-September 2012. These results demonstrate very clearly the difficulty of measuring statistically robust multiwavelength correlations and the care needed when comparing light curves even when many years of data are used; this should serve as a caution. In all four sources the radio variations lag the gamma-ray variations, suggesting that the gamma-ray emission originates upstream of the radio emission. Continuous simultaneous monitoring over a longer time period is required to obtain high significance levels in cross-correlations between gamma-ray and radio variability in most blazars.
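The significance machinery described, Monte Carlo light curves with power-law power spectral densities, can be sketched compactly. The fragment below generates Timmer & Koenig style red noise and builds a null distribution for the peak cross-correlation of two independent series; the length, slope, and replication count are arbitrary, and the uneven sampling and noise modeling of the actual analysis are omitted.

```python
import numpy as np

rng = np.random.default_rng(10)

def red_noise(n, beta=2.0):
    """Timmer & Koenig style series: random Fourier amplitudes with a
    power-law power spectral density P(f) ~ f**(-beta)."""
    f = np.fft.rfftfreq(n, d=1.0)[1:]
    amp = f ** (-beta / 2) * (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))
    spectrum = np.concatenate(([0.0], amp))
    x = np.fft.irfft(spectrum, n)
    return (x - x.mean()) / x.std()

def max_ccf(a, b):
    # Peak of the cross-correlation over all lags.
    return np.max(np.correlate(a, b, mode="full")) / len(a)

# Null distribution of the peak cross-correlation for two *independent*
# red-noise light curves; a real radio/gamma-ray pair must beat this.
null = np.array([max_ccf(red_noise(512), red_noise(512)) for _ in range(1000)])
print(np.percentile(null, [95, 99.7]))  # approximate 2-sigma and 3-sigma thresholds
```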
A new approach to importance sampling for the simulation of false alarms. [in radar systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1987-01-01
In this paper a modified importance sampling technique for improving the convergence of importance sampling is given. By using this approach to estimate low false alarm rates in radar simulations, the number of Monte Carlo runs can be reduced significantly. For one-dimensional exponential, Weibull, and Rayleigh distributions, a uniformly minimum variance unbiased estimator is obtained. For the Gaussian distribution the estimator in this approach is uniformly better than that of the previously known importance sampling approach. For a cell averaging system, by combining this technique with group sampling, the reduction in Monte Carlo runs for a reference cell of 20 and a false alarm rate of 1E-6 is on the order of 170 as compared to the previously known importance sampling approach.
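The exponential case is easy to reproduce. In the sketch below, the proposal is an exponential shifted to start at the threshold, so the likelihood ratio f(y)/g(y) = e^(−t) is constant and the tail-probability estimator has zero variance, echoing the uniformly minimum variance result quoted above; the threshold and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# Estimate P(X > t) for an exponential(1) decision variable, with t
# chosen so the true false alarm rate is 1e-6.
t = -np.log(1e-6)          # true tail probability is exactly 1e-6
n = 10_000

# Plain Monte Carlo would need ~1e8 samples to see any exceedances.
plain = (rng.exponential(1.0, n) > t).mean()

# Importance sampling: draw from an exponential shifted to the threshold
# and reweight each sample by the likelihood ratio f(y)/g(y).
y = t + rng.exponential(1.0, n)          # proposal g(y) = exp(-(y - t))
weights = np.exp(-y) / np.exp(-(y - t))  # equals exp(-t) for every sample
is_est = weights.mean()                  # every sample is an exceedance

print(plain, is_est, is_est / 1e-6)      # IS recovers 1e-6 with tiny variance
```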
Reactive Monte Carlo sampling with an ab initio potential
NASA Astrophysics Data System (ADS)
Leiding, Jeff; Coe, Joshua D.
2016-05-01
We present the first application of reactive Monte Carlo in a first-principles context. The algorithm samples in a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulation of NH3 to those of ab initio molecular dynamics (AIMD). We find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the "rare-event" character of chemical reactions.
NASA Astrophysics Data System (ADS)
Vogel, Thomas; Perez, Danny; Junghans, Christoph
2014-03-01
We show direct formal relationships between the Wang-Landau iteration [PRL 86, 2050 (2001)], metadynamics [PNAS 99, 12562 (2002)] and statistical temperature molecular dynamics [PRL 97, 050601 (2006)], the major Monte Carlo and molecular dynamics workhorses for sampling from a generalized, multicanonical ensemble. We aim at helping to consolidate the developments in the different areas by indicating how methodological advancements can be transferred in a straightforward way, avoiding the parallel, largely independent development tracks observed in the past.
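For readers unfamiliar with the first of the three methods, here is a bare-bones Wang-Landau iteration for the density of states of a small 2D Ising model; the lattice size, sweep counts, and the omission of the usual histogram-flatness check are simplifications for brevity, not features of the methods compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(12)
L = 4  # 4x4 Ising model with periodic boundaries, small enough for a quick run

def energy(s):
    return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

s = rng.choice([-1, 1], size=(L, L))
E = energy(s)
log_g, f = {}, 1.0          # running estimate of ln g(E); f is the ln-g increment
while f > 1e-4:
    for _ in range(10_000):
        i, j = rng.integers(L, size=2)
        dE = 2 * s[i, j] * (s[(i+1) % L, j] + s[(i-1) % L, j]
                            + s[i, (j+1) % L] + s[i, (j-1) % L])
        # Accept with min(1, g(E)/g(E')) so visits flatten over energy.
        if np.log(rng.random()) < log_g.get(E, 0.0) - log_g.get(E + dE, 0.0):
            s[i, j] *= -1
            E += dE
        log_g[E] = log_g.get(E, 0.0) + f  # update ln g at the current energy
    f /= 2.0  # halve the ln-g increment each stage (no flatness check here)
print(sorted(log_g.items())[:3])  # relative log density of states
```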
Validation of a Monte Carlo Simulation of Binary Time Series.
1981-09-18
... the probability distribution corresponding to the population from which the n sample vectors are generated. Simple unbiased estimators were chosen for ... is generated from the sample of such vectors produced by several independent replications of the Monte Carlo simulation. Then the validity of the ...
Schach Von Wittenau, Alexis E.
2003-01-01
A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
Fancher, Chris M.; Han, Zhen; Levin, Igor; Page, Katharine; Reich, Brian J.; Smith, Ralph C.; Wilson, Alyson G.; Jones, Jacob L.
2016-01-01
A Bayesian inference method for refining crystallographic structures is presented. The distribution of model parameters is stochastically sampled using Markov chain Monte Carlo. Posterior probability distributions are constructed for all model parameters to properly quantify uncertainty by appropriately modeling the heteroskedasticity and correlation of the error structure. The proposed method is demonstrated by analyzing a National Institute of Standards and Technology silicon standard reference material. The results obtained by Bayesian inference are compared with those determined by Rietveld refinement. Posterior probability distributions of model parameters provide both estimates and uncertainties. The new method better estimates the true uncertainties in the model as compared to the Rietveld method.
RESPONDENT-DRIVEN SAMPLING AS MARKOV CHAIN MONTE CARLO
GOEL, SHARAD; SALGANIK, MATTHEW J.
2013-01-01
Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present respondent-driven sampling as Markov chain Monte Carlo (MCMC) importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating respondent-driven sampling studies.
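The random-walk picture of RDS is easy to simulate. The sketch below builds a two-community network with a bottleneck, runs a single-recruit random walk, and applies inverse-degree importance weights (a Volz-Heckathorn style estimator); the network parameters and infection pattern are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(13)

# Two loosely connected communities (a "bottleneck"), with infection
# concentrated in one of them; an invented network, not data from the paper.
n = 200
block = np.repeat([0, 1], n // 2)
p = np.where(block[:, None] == block[None, :], 0.10, 0.005)
A = rng.random((n, n)) < p
A = np.triu(A, 1)
A = A | A.T                                  # symmetric, no self-loops
infected = (block == 0) & (rng.random(n) < 0.6)
deg = A.sum(1)

def rds_estimate(steps=5_000):
    """Random walk on the network (the MCMC picture of RDS) with
    inverse-degree importance weights."""
    v = int(rng.choice(np.flatnonzero(deg > 0)))  # seed with at least one tie
    w_sum = y_sum = 0.0
    for _ in range(steps):
        v = int(rng.choice(np.flatnonzero(A[v])))  # recruit a random neighbor
        w_sum += 1.0 / deg[v]
        y_sum += infected[v] / deg[v]
    return y_sum / w_sum

est = np.array([rds_estimate() for _ in range(20)])
print(infected.mean(), est.mean(), est.std())  # the bottleneck inflates the spread
```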
NASA Astrophysics Data System (ADS)
Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl
2018-06-01
In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
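The classical double-loop estimator that the paper improves upon can be written down directly for a toy linear-Gaussian experiment. In the sketch below the design variable xi, the prior, and the noise level are hypothetical; note how a small inner sample biases the evidence estimate, which is the underflow-prone step the Laplace-based importance sampling targets.

```python
import numpy as np

rng = np.random.default_rng(14)

def dlmc_eig(xi, n_outer=500, n_inner=500, sigma=0.1):
    """Double-loop Monte Carlo expected information gain for the toy
    experiment y = theta * xi + noise, with a standard normal prior."""
    theta = rng.normal(0.0, 1.0, n_outer)             # outer loop: prior draws
    y = theta * xi + rng.normal(0.0, sigma, n_outer)  # simulated observations
    eig = 0.0
    for yi, th in zip(y, theta):
        like = np.exp(-0.5 * ((yi - th * xi) / sigma) ** 2)
        # Inner loop: evidence estimate p(y); normalizing constants cancel
        # in the ratio. Small n_inner makes this underflow-prone and biased.
        th_in = rng.normal(0.0, 1.0, n_inner)
        evidence = np.mean(np.exp(-0.5 * ((yi - th_in * xi) / sigma) ** 2))
        eig += np.log(like / evidence)
    return eig / n_outer

print(dlmc_eig(xi=1.0))  # a larger xi yields a more informative experiment
```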
Tracing footprints of environmental events in tree ring chemistry using neutron activation analysis
NASA Astrophysics Data System (ADS)
Sahin, Dagistan
The aim of this study is to identify environmental effects on tree-ring chemistry. It is known that industrial pollution, volcanic eruptions, dust storms, acid rain and similar events can cause substantial changes in soil chemistry. Establishing whether a particular group of trees is sensitive to these changes in the soil environment and registers them in the elemental chemistry of contemporary growth rings is the over-riding goal of any Dendrochemistry research. In this study, elemental concentrations were measured in tree-ring samples of eleven absolutely dated modern forest trees grown in the Mediterranean region, Turkey, collected and dated by the Malcolm and Carolyn Wiener Laboratory for Aegean and Near Eastern Dendrochronology at Cornell University. Correlations between measured elemental concentrations in the tree-ring samples were analyzed using statistical tests to answer two questions. Does the current concentration of a particular element depend on any other element within the tree? And, are there any elements showing correlated abnormal concentration changes across the majority of the trees? Based on the detailed analysis results, the low mobility of sodium and bromine, positive correlations between calcium, zinc and manganese, and positive correlations between the trace elements lanthanum, samarium, antimony, and gold within tree-rings were recognized. Moreover, zinc, lanthanum, samarium and bromine showed strong, positive correlations among the trees and were identified as possible environmental signature elements. The new Dendrochemistry information found in this study would also be useful in explaining tree physiology and elemental chemistry in Pinus nigra species grown in Turkey. Elemental concentrations in tree-ring samples were measured using Neutron Activation Analysis (NAA) at the Pennsylvania State University Radiation Science and Engineering Center (RSEC). Through this study, advanced methodological, computational and experimental NAA procedures were developed to ensure an acceptable accuracy and certainty in the elemental concentration measurements of tree-ring samples. Two independent NAA analysis methods were used: the well known k-zero method and a novel method developed in this study, called the Multi-isotope Iterative Westcott (MIW) method. The MIW method uses reaction rate probabilities for a group of isotopes, which can be calculated by a neutronic simulation or measured by experimentation, and determines representative values for the neutron flux and the neutron flux characterization parameters based on the Westcott convention. Elemental concentration calculations for standard reference material and tree-ring samples were then performed using the MIW and k-zero analysis methods of NAA, and the results were cross-verified. In the computational part of this study, a detailed burnup-coupled neutronic simulation was developed to analyze real-time neutronic changes in a TRIGA Mark III reactor core, in this study the Penn State Breazeale Reactor (PSBR) core. To the best of the author's knowledge, this is the first burnup-coupled neutronic simulation with realistic time steps and a full fuel temperature profile for a TRIGA reactor using coupling of the Monte Carlo Utility for Reactor Evolutions (MURE) code and the Monte Carlo N-Particle (MCNP) code. High fidelity and flexibility in the simulation were sought to replicate the real core operation through the day.
This approach resulted in an enhanced accuracy in neutronic representation of the PSBR core with respect to previous neutronic simulation models for the PSBR core. An important contribution was made in the NAA experimentation practices employed in Dendrochemistry studies at the RSEC. Automated laboratory control and analysis software for NAA measurements in the RSEC Radionuclide Applications Laboratory was developed. Detailed laboratory procedures were written in this study comprising preparation, handling and measurements of tree-ring samples in the Radionuclide Applications Laboratory.
Quantum Monte Carlo: Faster, More Reliable, And More Accurate
NASA Astrophysics Data System (ADS)
Anderson, Amos Gerald
2010-06-01
The Schrödinger equation has been available for about 83 years, but we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical but practical: we are held back by a lack of sufficient computing power. Consequently, effort goes into finding acceptable approximations that make solutions tractable. In the meantime, computer technology has been advancing rapidly and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes, and thereby lift some approximations, incredible new opportunities await. Over the last decade we have seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by optimizing for quantity of processing units rather than the quality of each one, graphics cards have become good enough to be useful to some scientists. In this thesis, we explore the first known application of a graphics card to computational chemistry by rewriting our Quantum Monte Carlo software in the requisite "data parallel" formalism. We find that, notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than processing power. It also requires the scientist to carefully design the trial wavefunction used to guide the simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two-particle correlation functions, designed with both flexibility and simplicity in mind, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation by manipulating configuration weights, thus facilitating efficient and robust calculations. Our combination of Generalized Valence Bond wavefunctions, improved correlation functions, and stabilized weighting techniques for calculations run on graphics cards represents a new way of using Quantum Monte Carlo to study arbitrarily sized molecules.
USDA-ARS's Scientific Manuscript database
Computer Monte Carlo (MC) simulations (Geant4) of neutron propagation and of the acquisition of the gamma response from soil samples were applied to evaluate INS system performance characteristics [sensitivity, minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...
Exploring Mass Perception with Markov Chain Monte Carlo
ERIC Educational Resources Information Center
Cohen, Andrew L.; Ross, Michael G.
2009-01-01
Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…
SIMCAT 1.0: A SAS Computer Program for Simulating Computer Adaptive Testing
ERIC Educational Resources Information Center
Raiche, Gilles; Blais, Jean-Guy
2006-01-01
Monte Carlo methodologies are frequently applied to study the sampling distribution of the estimated proficiency level in adaptive testing. These methods eliminate real situational constraints. However, these Monte Carlo methodologies are not currently supported by the available software programs, and when these programs are available, their…
A Christoffel function weighted least squares algorithm for collocation approximations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayan, Akil; Jakeman, John D.; Zhou, Tao
Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
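For readers who want to see the mechanics, here is a minimal numpy sketch of the sampling-and-weighting scheme the abstract describes, specialized to the interval [-1, 1] with a Legendre basis; the arcsine draw plays the role of the equilibrium measure, and the test function, degree, and sample count are illustrative choices, not the authors' setup.

```python
import numpy as np
from numpy.polynomial import legendre

def christoffel_weighted_lsq(f, n, m, seed=0):
    """Sketch: degree-(n-1) Legendre least-squares fit of f on [-1, 1]."""
    rng = np.random.default_rng(seed)
    # Sample from the arcsine (pluripotential equilibrium) measure of [-1, 1]
    # rather than the uniform measure that defines the Legendre family.
    x = np.cos(np.pi * rng.random(m))
    # Legendre basis, orthonormal w.r.t. the uniform probability measure dx/2.
    V = legendre.legvander(x, n - 1) * np.sqrt(2 * np.arange(n) + 1)
    # Christoffel-function weights: w(x) = n / sum_k p_k(x)^2.
    w = n / np.sum(V**2, axis=1)
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)
    return coef  # coefficients in the orthonormal Legendre basis

# Illustrative use: approximate a smooth function with m = 4n samples.
coef = christoffel_weighted_lsq(lambda t: np.exp(t) * np.sin(3 * t), n=20, m=80)
```

The weight n / Σ_k p_k(x)² downweights samples where the basis is large, which is what equilibrates the conditioning of the weighted least-squares system.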
Sequential Monte Carlo for inference of latent ARMA time-series with innovations correlated in time
NASA Astrophysics Data System (ADS)
Urteaga, Iñigo; Bugallo, Mónica F.; Djurić, Petar M.
2017-12-01
We consider the problem of sequential inference of latent time-series with innovations correlated in time and observed via nonlinear functions. We accommodate time-varying phenomena with diverse properties by means of a flexible mathematical representation of the data. We characterize such time-series statistically by a Bayesian analysis of their densities. The density that describes the transition of the state from time t to the next instant t+1 is used to implement novel sequential Monte Carlo (SMC) methods. We present a set of SMC methods for inference of latent ARMA time-series with innovations correlated in time, under different assumptions about knowledge of the parameters. The methods operate in a unified and consistent manner for data with diverse memory properties. We show the validity of the proposed approach by comprehensive simulations of the challenging stochastic volatility model.
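As an illustration of the SMC machinery the abstract builds on, the following is a minimal bootstrap particle filter for the standard stochastic volatility model it mentions. Note that this sketch uses independent innovations, which is exactly the simplification the paper removes, and all parameter values are illustrative.

```python
import numpy as np

def sv_bootstrap_filter(y, n=2000, phi=0.98, sigma=0.2, seed=0):
    """Filtered mean of log-volatility x_t in y_t = exp(x_t/2) * eps_t,
    with latent AR(1) state x_t = phi * x_{t-1} + sigma * v_t."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), n)  # stationary init
    means = np.empty(len(y))
    for t, yt in enumerate(y):
        x = phi * x + sigma * rng.normal(size=n)         # propagate particles
        logw = -0.5 * (x + yt**2 * np.exp(-x))           # log N(y_t; 0, e^{x_t})
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = w @ x                                 # weighted filtered mean
        x = x[rng.choice(n, size=n, p=w)]                # multinomial resampling
    return means
```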
Structural correlation of the chalcogenide Ge40Se60 glass
NASA Astrophysics Data System (ADS)
Moharram, A. H.
2017-01-01
Binary Ge40Se60 glass was prepared using the melt-quench technique. The total structure factors, S(K), were obtained using X-ray diffraction in the wave vector interval 0.28 ≤ K ≤ 6.5 Å⁻¹. The appearance of the first sharp diffraction peak (FSDP) in the structure factor indicates the presence of intermediate-range order. Radial distribution functions, RDF(r), were obtained using either the conventional (Fourier) transformation or Monte Carlo simulation of the experimental X-ray data. The short-range order parameters deduced from the Monte Carlo total correlation functions, T(r), are better than those obtained from the conventional (Fourier) T(r) data. Gaussian analyses of the total correlation function show that Ge2(Se1/2)6 molecular units are the basic structural units of the investigated Ge40Se60 glass.
Path integral Monte Carlo and the electron gas
NASA Astrophysics Data System (ADS)
Brown, Ethan W.
Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low-density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an efficiency that decreases exponentially with decreasing temperature and increasing system size. In this thesis, we work towards improving this efficiency, through both approximate and exact methods, as applied specifically to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite temperature before delving into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems, the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas across a large swath of densities and temperatures in order to map out the warm dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input to density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation. As a first step, we utilize the variational principle inherent in the path integral Monte Carlo method to optimize the nodal surface. By using an ansatz resembling a free-particle density matrix, we make a unique connection between a nodal effective mass and the traditional effective mass of many-body quantum theory. We then propose and test several alternative nodal ansatzes and apply them to single atomic systems. Finally, we propose a method to tackle the sign problem head on, by leveraging the relatively simple structure of permutation space. Using this method, we find we can perform exact simulations of the electron gas and 3He that were previously impossible.
Combining Heterogeneous Correlation Matrices: Simulation Analysis of Fixed-Effects Methods
ERIC Educational Resources Information Center
Hafdahl, Adam R.
2008-01-01
Monte Carlo studies of several fixed-effects methods for combining and comparing correlation matrices have shown that two refinements improve estimation and inference substantially. With rare exception, however, these simulations have involved homogeneous data analyzed using conditional meta-analytic procedures. The present study builds on…
Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.
Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence
2012-12-01
A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data
NASA Astrophysics Data System (ADS)
Glüsenkamp, Thorsten
2018-06-01
Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic and can replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function FD, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average Rn with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
Bold-line Monte Carlo and the nonequilibrium physics of strongly correlated many-body systems
NASA Astrophysics Data System (ADS)
Cohen, Guy
2015-03-01
This talk summarizes real-time bold-line diagrammatic Monte Carlo approaches to quantum impurity models, which make significant headway against the sign problem by summing corrections to self-consistent diagrammatic expansions rather than a bare diagrammatic series. When the bold-line method is combined with reduced-dynamics techniques, both local single-time properties and two-time correlators such as Green functions can be computed at very long timescales, enabling studies of the nonequilibrium steady-state behavior of quantum impurity models and creating new solvers for nonequilibrium dynamical mean field theory. This work is supported by NSF DMR 1006282, NSF CHE-1213247, DOE ER 46932, TG-DMR120085 and TG-DMR130036, and the Yad Hanadiv-Rothschild Foundation.
NASA Astrophysics Data System (ADS)
Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.
2001-06-01
We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets and compared them to one another in terms of their efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
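As background for the first of the two techniques, here is a generic simulated annealing minimizer; the misfit function, proposal scale, and geometric cooling schedule are illustrative stand-ins, not the geodetic parameterization used in the study (random cost is a distinct algorithm and is not sketched here).

```python
import numpy as np

def simulated_annealing(misfit, x0, step=0.1, t0=1.0, cooling=0.999,
                        n_iter=50_000, seed=0):
    """Generic SA minimizer: random perturbations, Metropolis acceptance,
    geometric cooling. Occasional uphill moves let it escape local minima."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx, temp = misfit(x), t0
    best_x, best_f = x.copy(), fx
    for _ in range(n_iter):
        cand = x + step * rng.normal(size=x.size)     # random perturbation
        fc = misfit(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / temp):
            x, fx = cand, fc                          # accept (possibly uphill)
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        temp *= cooling                               # cool the temperature
    return best_x, best_f
```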
Inferential Procedures for Correlation Coefficients Corrected for Attenuation.
ERIC Educational Resources Information Center
Hakstian, A. Ralph; And Others
1988-01-01
A model and computation procedure based on classical test score theory are presented for determining a correlation coefficient corrected for attenuation due to unreliability. Delta and Monte Carlo method applications are discussed. A power analysis revealed no serious loss in efficiency resulting from correction for attenuation. (TJH)
Robustness of Type I Error and Power in Set Correlation Analysis of Contingency Tables.
ERIC Educational Resources Information Center
Cohen, Jacob; Nee, John C. M.
1990-01-01
The analysis of contingency tables via set correlation allows the assessment of subhypotheses involving contrast functions of the categories of the nominal scales. The robustness of such methods with regard to Type I error and statistical power was studied via a Monte Carlo experiment. (TJH)
Neutron polarization analysis study of the frustrated magnetic ground state of β-Mn1-xAlx
NASA Astrophysics Data System (ADS)
Stewart, J. R.; Andersen, K. H.; Cywinski, R.
2008-07-01
We have performed a neutron polarization analysis study of the short-range nuclear and magnetic correlations present in the dilute alloy β-Mn1-xAlx with 0.03 ≤ x ≤ 0.16, in order to study the evolution of the magnetic ground state of this system as it achieves static spin-glass order at concentrations x > 0.09. To this end we have developed a reverse Monte Carlo algorithm which has enabled us to extract Warren-Cowley nuclear short-range order parameters and magnetic spin correlations. Using conventional neutron powder diffraction, we show that the nonmagnetic Al substituents preferentially occupy the magnetic site II Wyckoff positions in the β-Mn structure, resulting in a reduction of the magnetic topological frustration of the Mn atoms. These Al impurities are found to display strong anticlustering behavior. The magnetic spin correlations are predominantly antiferromagnetic, persisting over a short range which is similar for all the samples studied, above and below the spin-liquid-spin-glass boundary, while the observed static (disordered) moment is shown to increase with increasing Al concentration.
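Since reverse Monte Carlo recurs in this and the next few entries, a toy version may help: the sketch below refines a periodic box of atoms against a target pair distribution g(r) with a Metropolis rule on the chi-square misfit. It is a generic illustration only; production RMC codes fit measured structure factors S(Q) under chemical constraints, and every name and parameter here is an assumption.

```python
import numpy as np

def rmc_fit(target_gr, r_edges, n_atoms=108, box=10.0, n_steps=5_000,
            max_disp=0.2, sigma=0.05, seed=0):
    """Toy reverse Monte Carlo: random single-atom moves, accepted with a
    Metropolis rule on chi^2 between the model and target g(r). Recomputes
    the full O(N^2) histogram each step for clarity only; r_edges should
    stay below box/2 for the minimum-image convention to be valid."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n_atoms, 3)) * box

    def gr(p):
        d = p[:, None, :] - p[None, :, :]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.sqrt((d**2).sum(-1))[np.triu_indices(n_atoms, 1)]
        hist, _ = np.histogram(r, bins=r_edges)
        shell = 4.0 / 3.0 * np.pi * (r_edges[1:]**3 - r_edges[:-1]**3)
        ideal = shell * n_atoms * (n_atoms - 1) / 2.0 / box**3
        return hist / ideal                   # model pair distribution

    chi2 = np.sum((gr(pos) - target_gr)**2) / sigma**2
    for _ in range(n_steps):
        i = rng.integers(n_atoms)
        old = pos[i].copy()
        pos[i] = (pos[i] + max_disp * rng.normal(size=3)) % box
        new = np.sum((gr(pos) - target_gr)**2) / sigma**2
        if new < chi2 or rng.random() < np.exp(-(new - chi2) / 2.0):
            chi2 = new                        # accept the displaced atom
        else:
            pos[i] = old                      # reject: restore old position
    return pos, chi2
```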
Pothoczki, Szilvia; Temleitner, László; Pusztai, László
2014-02-07
Synchrotron X-ray diffraction measurements have been conducted on liquid phosphorus trichloride, tribromide, and triiodide. Molecular Dynamics simulations of these molecular liquids were performed with a dual purpose: (1) to establish whether existing intermolecular potential functions can provide a picture that is consistent with diffraction data and (2) to generate reliable starting configurations for subsequent Reverse Monte Carlo modelling. Structural models (i.e., sets of coordinates of thousands of atoms) that were fully consistent with the experimental diffraction information, within errors, were prepared by means of the Reverse Monte Carlo method. Comparison with reference systems, generated by hard-sphere-like Monte Carlo simulations, was also carried out to demonstrate the extent to which simple space-filling effects determine the structure of the liquids (thus also estimating the information content of the measured data). Total scattering structure factors, partial radial distribution functions, and orientational correlations as a function of the distance between molecular centres have been calculated from the models. In general, more or less antiparallel arrangements of the primary molecular axes are found to be the most favourable orientation of two neighbouring molecules. In liquid PBr3, electrostatic interactions seem to play a more important role in determining intermolecular correlations than in the other two liquids; molecular arrangements in both PCl3 and PI3 are largely driven by steric effects.
The structure of liquid water by polarized neutron diffraction and reverse Monte Carlo modelling.
Temleitner, László; Pusztai, László; Schweika, Werner
2007-08-22
The coherent static structure factor of water has been investigated by polarized neutron diffraction. Polarization analysis allows us to separate the huge incoherent scattering background from hydrogen and to obtain high quality data of the coherent scattering from four different mixtures of liquid H2O and D2O. The information obtained by the variation of the scattering contrast confines the configurational space of water and is used by the reverse Monte Carlo technique to model the total structure factors. Structural characteristics have been calculated directly from the resulting sets of particle coordinates. Consistency with existing partial pair correlation functions, derived without the application of polarized neutrons, was checked by incorporating them into our reverse Monte Carlo calculations. We also performed Monte Carlo simulations of a hard sphere system, which provides an accurate estimate of the information content of the measured data. It is shown that the present combination of polarized neutron scattering and reverse Monte Carlo structural modelling is a promising approach towards a detailed understanding of the microscopic structure of water.
Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleijnen, J.P.C.; Helton, J.C.
1999-04-01
The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
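The five screens enumerated above map naturally onto standard scipy.stats tests. The sketch below applies them to one input-output pair from a hypothetical sensitivity study; the binning choices are illustrative, and Levene's test stands in for the abstract's variance and interquartile-range comparisons.

```python
import numpy as np
from scipy import stats

def scatterplot_screens(x, y, n_bins=5):
    """Apply the five screens to one (input, output) pair of numpy arrays:
    (1) linear, (2) monotonic, (3) location, (4) spread, (5) association."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    groups = [y[idx == k] for k in range(n_bins)]        # y grouped by x-bin
    yedges = np.quantile(y, np.linspace(0.0, 1.0, n_bins + 1))
    table, _, _ = np.histogram2d(x, y, bins=[edges, yedges])
    return {
        "linear": stats.pearsonr(x, y),           # (1) correlation coefficient
        "monotonic": stats.spearmanr(x, y),       # (2) rank correlation
        "location": stats.kruskal(*groups),       # (3) Kruskal-Wallis on x-bins
        "spread": stats.levene(*groups),          # (4) stand-in variability test
        "association": stats.chi2_contingency(table + 0.5),  # (5) chi-square
    }
```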
A Monte Carlo Evaluation of Estimated Parameters of Five Shrinkage Estimate Formuli.
ERIC Educational Resources Information Center
Newman, Isadore; And Others
1979-01-01
A Monte Carlo simulation was employed to determine the accuracy with which the shrinkage in R squared can be estimated by five different shrinkage formulas. The study dealt with the use of shrinkage formulas for various sample sizes, different R squared values, and different degrees of multicollinearity. (Author/JKS)
Unifying neural-network quantum states and correlator product states via tensor networks
NASA Astrophysics Data System (ADS)
Clark, Stephen R.
2018-04-01
Correlator product states (CPS) are a powerful and very broad class of states for quantum lattice systems whose (unnormalised) amplitudes in a fixed basis can be sampled exactly and efficiently. They work by gluing together states of overlapping clusters of sites on the lattice, called correlators. Recently Carleo and Troyer (2017 Science 355 602) introduced a new type of sampleable ansatz called neural-network quantum states (NQS), inspired by the restricted Boltzmann machines used in machine learning. By employing the formalism of tensor networks we show that NQS are a special form of CPS with novel properties. Diagrammatically, a number of simple observations become transparent: namely, that NQS are CPS built from extensively sized GHZ-form correlators, making them uniquely unbiased geometrically. The appearance of GHZ correlators also relates NQS to canonical polyadic decompositions of tensors. Another immediate implication of the NQS equivalence to CPS is that we are able to formulate exact NQS representations for a wide range of paradigmatic states, including superpositions of weighted-graph states, the Laughlin state, toric code states, and the resonating valence bond state. These examples reveal the potential of using higher dimensional hidden units and a second hidden layer in NQS. The major outlook of this study is the elevation of NQS to correlator operators, allowing them to enhance conventional well-established variational Monte Carlo approaches for strongly correlated fermions.
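To make the NQS-CPS correspondence concrete, here is a minimal evaluation of an RBM-style NQS amplitude with the hidden units traced out analytically. Real NQS typically use complex parameters optimized variationally; all shapes and values below are illustrative.

```python
import numpy as np

def rbm_amplitude(s, a, b, W):
    """Unnormalised NQS amplitude for spins s in {-1, +1}^n, visible bias a,
    hidden bias b, weights W; hidden units summed out analytically:
    psi(s) = exp(a . s) * prod_j 2 cosh(b_j + W[:, j] . s)."""
    theta = b + s @ W
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

# Each factor 2*cosh(b_j + W[:, j] . s) is exactly an extensively sized
# GHZ-form correlator in the CPS language: it touches every visible site
# that hidden unit j is connected to.
rng = np.random.default_rng(0)
n, m = 8, 4                                   # visible spins, hidden units
a, b = 0.1 * rng.normal(size=n), 0.1 * rng.normal(size=m)
W = 0.1 * rng.normal(size=(n, m))
s = rng.choice([-1.0, 1.0], size=n)
amp = rbm_amplitude(s, a, b, W)
```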
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando
2015-07-27
A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
A New Monte Carlo Method for Estimating Marginal Likelihoods.
Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O
2018-06-01
Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
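For contrast with the proposed estimator, the classical harmonic mean estimator that it generalizes can be written in a few lines; the log-sum-exp form below is just for numerical stability, and the function name is ours.

```python
import numpy as np

def harmonic_mean_logml(loglik):
    """Harmonic mean estimate of the log marginal likelihood from the
    log-likelihoods of posterior samples: 1/m(y) ~= (1/S) sum_i 1/L(theta_i).
    Notoriously high-variance; shown here only as the baseline."""
    x = -np.asarray(loglik)                    # log(1 / L_i)
    m = np.max(x)
    return -(m + np.log(np.mean(np.exp(x - m))))
```

The paper's partition weighted kernel estimator replaces the implicit uniform weighting here with a partition-dependent weighted kernel (likelihood times prior), which is what improves the baseline's unstable behavior.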
Roth, Philip L; Le, Huy; Oh, In-Sue; Van Iddekinge, Chad H; Bobko, Philip
2018-06-01
Meta-analysis has become a well-accepted method for synthesizing empirical research about a given phenomenon. Many meta-analyses focus on synthesizing correlations across primary studies, but some primary studies do not report correlations. Peterson and Brown (2005) suggested that researchers could use standardized regression weights (i.e., beta coefficients) to impute missing correlations. Indeed, their beta estimation procedures (BEPs) have been used in meta-analyses in a wide variety of fields. In this study, the authors evaluated the accuracy of BEPs in meta-analysis. We first examined how use of BEPs might affect results from a published meta-analysis. We then developed a series of Monte Carlo simulations that systematically compared the use of existing correlations (that were not missing) to data sets that incorporated BEPs (that impute missing correlations from corresponding beta coefficients). These simulations estimated ρ̄ (mean population correlation) and SDρ (true standard deviation) across a variety of meta-analytic conditions. Results from both the existing meta-analysis and the Monte Carlo simulations revealed that BEPs were associated with potentially large biases when estimating ρ̄ and even larger biases when estimating SDρ. Using only existing correlations often substantially outperformed use of BEPs and virtually never performed worse than BEPs. Overall, the authors urge a return to the standard practice of using only existing correlations in meta-analysis. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Respondent-driven sampling as Markov chain Monte Carlo.
Goel, Sharad; Salganik, Matthew J
2009-07-30
Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present RDS as Markov chain Monte Carlo importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which the sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating RDS studies.
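A minimal sketch of the paper's framing, under simplifying assumptions (sampling with replacement and a single referral per respondent): an idealized RDS chain is a random walk on the social network, which samples nodes proportionally to degree, so prevalence is estimated by importance-weighting respondents by inverse degree. Function names and the adjacency format are our own.

```python
import numpy as np

def random_walk_sample(adj, start, n, seed=0):
    """Idealised single-referral RDS chain = simple random walk on the
    network; adj maps each node to a list of its neighbours."""
    rng = np.random.default_rng(seed)
    node, chain = start, []
    for _ in range(n):
        chain.append(node)
        node = adj[node][rng.integers(len(adj[node]))]  # recruit a neighbour
    return chain

def rds_estimate(traits, degrees):
    """Inverse-degree importance weighting of respondents (0/1 traits),
    correcting for the degree-proportional sampling of the walk."""
    w = 1.0 / np.asarray(degrees, float)
    return np.sum(w * np.asarray(traits, float)) / np.sum(w)
```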
Dynamic Event Tree advancements and control logic improvements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego
The RAVEN code has been under development at the Idaho National Laboratory since 2012. Its main goal is to create a multi-purpose platform for deploying all the capabilities needed for probabilistic risk assessment, uncertainty quantification, data mining analysis, and optimization studies. RAVEN is currently equipped with three different sampling categories: Forward samplers (Monte Carlo, Latin Hypercube, Stratified, Grid, Factorial, etc.), Adaptive samplers (Limit Surface search, Adaptive Polynomial Chaos, etc.), and Dynamic Event Tree (DET) samplers (Deterministic and Adaptive Dynamic Event Trees). The main subject of this document is to report the activities carried out in order to (1) start the migration of the RAVEN/RELAP-7 control logic system into MOOSE and (2) develop advanced dynamic sampling capabilities based on the Dynamic Event Tree approach. In order to provide a control logic capability to all MOOSE-based applications, an initial migration activity was begun this fiscal year, moving the control logic system designed for RELAP-7 by the RAVEN team into the MOOSE framework; this document briefly explains what has been done. The second and most important subject of this report is the development of a Dynamic Event Tree sampler named the "Hybrid Dynamic Event Tree" (HDET) and its adaptive variant, the "Adaptive Hybrid Dynamic Event Tree" (AHDET). As other authors have already reported, among the different types of uncertainties it is possible to discern two principal types: aleatory and epistemic. The classical Dynamic Event Tree treats the first class (aleatory) of uncertainties; the dependence of the probabilistic risk assessment and analysis on the epistemic uncertainties is treated by an initial Monte Carlo sampling (MCDET). From each Monte Carlo sample, a DET analysis is run (in total, N trees). The Monte Carlo stage pre-samples the input space characterized by epistemic uncertainties, and the consequent Dynamic Event Tree explores the aleatory space. In the RAVEN code, a more general approach has been developed that does not limit the exploration of the epistemic space to a Monte Carlo method but admits all the forward sampling strategies RAVEN currently employs: the user can combine Latin Hypercube, Grid, Stratified, and Monte Carlo sampling to explore the epistemic space without any limitation. From this pre-sampling, the Dynamic Event Tree sampler starts its exploration of the aleatory space. As reported by the authors, the Dynamic Event Tree is a good fit for developing a goal-oriented sampling strategy, and the DET is used to drive a Limit Surface search. The methodology the authors developed last year performs a Limit Surface search in the aleatory space only; this report documents how that approach has been extended to let the epistemic space interact with the Hybrid Dynamic Event Tree methodology.
Wang, Lei; Troyer, Matthias
2014-09-12
We present a new algorithm for calculating the Renyi entanglement entropy of interacting fermions using the continuous-time quantum Monte Carlo method. The algorithm only samples the interaction correction of the entanglement entropy, which by design ensures the efficient calculation of weakly interacting systems. Combined with Monte Carlo reweighting, the algorithm also performs well for systems with strong interactions. We demonstrate the potential of this method by studying the quantum entanglement signatures of the charge-density-wave transition of interacting fermions on a square lattice.
Metis: A Pure Metropolis Markov Chain Monte Carlo Bayesian Inference Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bates, Cameron Russell; Mckigney, Edward Allen
The use of Bayesian inference in data analysis has become the standard for large scientific experiments [1, 2]. The Monte Carlo Codes Group (XCP-3) at Los Alamos has developed a simple set of algorithms, currently implemented in C++ and Python, to easily perform flat-prior Markov chain Monte Carlo Bayesian inference with pure Metropolis sampling. These implementations are designed to be user friendly and extensible for customization based on specific application requirements. This document describes the algorithmic choices made and presents two use cases.
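The algorithm Metis implements is short enough to state in full. The following is a generic random-walk Metropolis sampler, not Metis's actual C++/Python API; names and defaults are ours, and a flat prior simply folds into logpost as a constant on its support.

```python
import numpy as np

def metropolis(logpost, x0, n_samples, step=0.5, seed=0):
    """Pure (random-walk) Metropolis with symmetric Gaussian proposals."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    lp = logpost(x)
    chain = np.empty((n_samples, x.size))
    for i in range(n_samples):
        cand = x + step * rng.normal(size=x.size)   # symmetric proposal
        lp_cand = logpost(cand)
        if np.log(rng.random()) < lp_cand - lp:     # acceptance ratio only
            x, lp = cand, lp_cand
        chain[i] = x                                # repeat state on rejection
    return chain
```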
The Joker: A Custom Monte Carlo Sampler for Binary-star and Exoplanet Radial Velocity Data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-03-01
Given sparse or low-quality radial velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and Markov chain Monte Carlo (MCMC) posterior sampling over the orbital parameters. Here we create a custom Monte Carlo sampler for sparse or noisy radial velocity measurements of two-body systems that can produce posterior samples for orbital parameters even when the likelihood function is poorly behaved. The six standard orbital parameters for a binary system can be split into four nonlinear parameters (period, eccentricity, argument of pericenter, phase) and two linear parameters (velocity amplitude, barycenter velocity). We capitalize on this by building a sampling method in which we densely sample the prior probability density function (pdf) in the nonlinear parameters and perform rejection sampling using a likelihood function marginalized over the linear parameters. With sparse or uninformative data, the sampling obtained by this rejection sampling is generally multimodal and dense. With informative data, the sampling becomes effectively unimodal but too sparse: in these cases we follow the rejection sampling with standard MCMC. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still informative and can be used in hierarchical (population) modeling. We give some examples that show how the posterior pdf depends sensitively on the number and time coverage of the observations and their uncertainties.
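A stripped-down version of the scheme, for a circular-orbit toy model with nonlinear parameters (P, phi) and linear parameters (amplitude, barycenter velocity): densely sample the prior in the nonlinear parameters, score each sample with a likelihood optimized over the linear parameters (a profile-likelihood stand-in for the paper's analytic marginalization), and rejection-sample. All priors and names are illustrative.

```python
import numpy as np

def joker_style_rejection(t, rv, sigma, n_prior=20_000, seed=0):
    """Rejection-sample (P, phi) for rv(t) = K*sin(2*pi*t/P + phi) + v0;
    the linear parameters (K, v0) are profiled out by least squares."""
    t, rv, sigma = map(np.asarray, (t, rv, sigma))
    rng = np.random.default_rng(seed)
    P = np.exp(rng.uniform(np.log(1.0), np.log(1000.0), n_prior))  # period prior
    phi = rng.uniform(0.0, 2 * np.pi, n_prior)                     # phase prior
    logL = np.empty(n_prior)
    for i in range(n_prior):
        w = 2 * np.pi * t / P[i] + phi[i]
        M = np.column_stack([np.sin(w), np.ones_like(t)])  # linear design
        beta = np.linalg.lstsq(M / sigma[:, None], rv / sigma, rcond=None)[0]
        logL[i] = -0.5 * np.sum(((rv - M @ beta) / sigma) ** 2)
    keep = np.log(rng.random(n_prior)) < logL - logL.max() # rejection step
    return P[keep], phi[keep]
```

With sparse data many prior samples survive the rejection step (a multimodal posterior); with informative data few survive, which is the regime where the paper switches to standard MCMC.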
Multiple-time-stepping generalized hybrid Monte Carlo methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escribano, Bruno, E-mail: bescribano@bcamath.org; Akhmatskaya, Elena; IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods; the use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
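For orientation, the plain hybrid (Hamiltonian) Monte Carlo baseline these methods extend fits in a few lines: single-time-step leapfrog integration followed by a Metropolis test on the energy error. The MTS, momentum-update, and shadow-Hamiltonian machinery of the abstract is not included, and all names are ours.

```python
import numpy as np

def hmc(logp, grad_logp, x0, n_samples, eps=0.1, n_leap=20, seed=0):
    """Plain HMC with leapfrog dynamics and unit-mass Gaussian momenta."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    chain = np.empty((n_samples, x.size))
    for i in range(n_samples):
        p = rng.normal(size=x.size)                # resample momenta
        xn, pn = x.copy(), p.copy()
        pn += 0.5 * eps * grad_logp(xn)            # initial half kick
        for _ in range(n_leap - 1):
            xn += eps * pn                         # drift
            pn += eps * grad_logp(xn)              # full kick
        xn += eps * pn
        pn += 0.5 * eps * grad_logp(xn)            # final half kick
        # Metropolis test on the total energy error of the trajectory.
        dH = (logp(xn) - 0.5 * pn @ pn) - (logp(x) - 0.5 * p @ p)
        if np.log(rng.random()) < dH:
            x = xn
        chain[i] = x
    return chain
```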
LMC: Logarithmantic Monte Carlo
NASA Astrophysics Data System (ADS)
Mantz, Adam B.
2017-06-01
LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
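Of the samplers LMC implements, univariate slice sampling is the least widely known. A minimal stepping-out/shrinkage version, following Neal's scheme rather than LMC's actual code, looks like this; logf must return -inf outside the support so the stepping-out loops terminate.

```python
import numpy as np

def slice_sample(logf, x0, n, w=1.0, seed=0):
    """Univariate slice sampler with stepping-out and shrinkage."""
    rng = np.random.default_rng(seed)
    xs, x = [], x0
    for _ in range(n):
        logy = logf(x) + np.log(rng.random())  # vertical level under logf(x)
        L = x - w * rng.random()               # random initial bracket
        R = L + w
        while logf(L) > logy: L -= w           # step out to the left
        while logf(R) > logy: R += w           # step out to the right
        while True:                            # shrink until acceptance
            x1 = L + (R - L) * rng.random()
            if logf(x1) > logy:
                x = x1
                break
            if x1 < x: L = x1
            else: R = x1
        xs.append(x)
    return np.array(xs)
```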
Efficient global biopolymer sampling with end-transfer configurational bias Monte Carlo
NASA Astrophysics Data System (ADS)
Arya, Gaurav; Schlick, Tamar
2007-01-01
We develop an "end-transfer configurational bias Monte Carlo" method for efficient thermodynamic sampling of complex biopolymers and assess its performance on a mesoscale model of chromatin (the oligonucleosome) at different salt conditions, compared to other Monte Carlo moves. Our method extends traditional configurational bias by deleting a repeating motif (monomer) from one end of the biopolymer and regrowing it at the opposite end using the standard Rosenbluth scheme. The method's sampling efficiency compared to local moves, pivot rotations, and standard configurational bias is assessed by parameters relating to the translational, rotational, and internal degrees of freedom of the oligonucleosome. Our results show that the end-transfer method is superior in sampling every degree of freedom of the oligonucleosome over the other methods at high salt concentrations (weak electrostatics), but worse than pivot rotations at internal and rotational sampling at low-to-moderate salt concentrations (strong electrostatics). Under all conditions investigated, however, the end-transfer method is several orders of magnitude more efficient than the standard configurational bias approach. This is because the characteristic sampling time of the innermost oligonucleosome motif scales quadratically with the length of the oligonucleosome for the end-transfer method, while it scales exponentially for the traditional configurational bias method. Thus, the method we propose can significantly improve performance for global biomolecular applications, especially in condensed systems with weak nonbonded interactions, and may be combined with local enhancements to improve local sampling.
Quantum Monte Carlo calculations of van der Waals interactions between aromatic benzene rings
NASA Astrophysics Data System (ADS)
Azadi, Sam; Kühne, T. D.
2018-05-01
The magnitude of finite-size effects and Coulomb interactions in quantum Monte Carlo simulations of van der Waals interactions between weakly bonded benzene molecules is investigated. To that end, two trial wave functions of the Slater-Jastrow and Backflow-Slater-Jastrow types are employed to calculate the energy-volume equation of state. We assess the impact of the backflow coordinate transformation on the nonlocal correlation energy. We find that the effect of finite-size errors in quantum Monte Carlo calculations on energy differences is particularly large and may even be more important than the choice of trial wave function. In addition to the cohesive energy, the singlet excitonic energy gap and the energy gap renormalization of crystalline benzene at different densities are computed.
Lithium in rocks from the Lincoln, Helena, and Townsend areas, Montana
Brenner-Tourtelot, Elizabeth F.; Meier, Allen L.; Curtis, Craig A.
1978-01-01
In anticipation of increased demand for lithium for energy-related uses, the U.S. Geological Survey has been appraising the lithium resources of the United States and investigating occurrences of lithium. Analyses of samples of chiefly lacustrine rocks of Oligocene age collected by M. R. Mudge near Lincoln, Mont. showed as much as 1,500 ppm lithium. Since then we have sampled the area in greater detail, and have sampled rocks of similar ages in the Helena and Townsend valleys. The lithium-rich beds crop out in a band about 1.3 km long by 0.3 km wide near the head of Beaver Creek, about 14 km northwest of Lincoln, Mont. These beds consist of laminated marlstone, oil shale, carbonaceous shale, limestone, conglomerate, and tuff. Some parts of this sequence average almost 0.1 percent lithium. The lithium-bearing rocks are too low in grade and volume to be economic. Samples of sedimentary rocks of Oligocene age from the Helena and Townsend valleys in the vicinity of Helena, Mont. were generally low in lithium (3-40 ppm). However, samples of rhyolites from the western side of the Helena valley and from the Lava Mountain area were slightly above average in lithium content (6-200 ppm).
Chuvochina, Maria S; Marie, Dominique; Chevaillier, Servanne; Petit, Jean-Robert; Normand, Philippe; Alekhina, Irina A; Bulat, Sergey A
2011-01-01
Microorganisms uplifted during dust storms survive long-range transport in the atmosphere and could colonize high-altitude snow. Bacterial communities in alpine snow on a Mont Blanc glacier, associated with four depositions of Saharan dust during the period 2006-2009, were studied using 16S rRNA gene sequencing and flow cytometry. Also, sand from the Tunisian Sahara, Saharan dust collected in Grenoble and Mont Blanc snow containing no Saharan dust (one sample of each) were analyzed. The bacterial community composition varied significantly in snow containing four dust depositions over a 3-year period. Out of 61 phylotypes recovered from dusty snow, only three phylotypes were detected in more than one sample. Overall, 15 phylotypes were recognized as potential snow colonizers. For snow samples, these phylotypes belonged to Actinobacteria, Proteobacteria and Cyanobacteria, while for Saharan sand/dust samples they belonged to Actinobacteria, Bacteroidetes, Deinococcus-Thermus and Proteobacteria. Thus, regardless of the time-scale, Saharan dust events can bring different microbiota with no common species set to alpine glaciers. This seems to be defined more by event peculiarities and aeolian transport conditions than by the bacterial load from the original dust source.
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.
2011-12-01
Markov chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. In practice, however, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of the parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation and integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. The desired probability density function of each prediction is then approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions while avoiding the disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
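A one-parameter cartoon of the two-step recipe, with a Chebyshev-node polynomial interpolant standing in for the sparse grid (which is only needed in higher dimensions) and a Sobol' sequence providing the quasi-Monte Carlo sample; every name and value here is an illustrative assumption.

```python
import numpy as np
from numpy.polynomial import Polynomial
from scipy.stats import qmc

def surrogate_qmc_posterior(g, y_obs, sigma, lo, hi, n_nodes=9, m=2**12):
    """Step 1: cheap polynomial surrogate of the forward model g on [lo, hi].
    Step 2: weight a Sobol' quasi-Monte Carlo sample by the surrogate
    posterior (flat prior, Gaussian likelihood)."""
    nodes = np.cos(np.pi * np.arange(n_nodes) / (n_nodes - 1))  # Chebyshev pts
    nodes = lo + 0.5 * (nodes + 1.0) * (hi - lo)
    surrogate = Polynomial.fit(nodes, [g(t) for t in nodes], deg=n_nodes - 1)
    theta = qmc.scale(qmc.Sobol(d=1, seed=0).random(m), lo, hi).ravel()
    logpost = -0.5 * ((y_obs - surrogate(theta)) / sigma) ** 2
    w = np.exp(logpost - logpost.max())
    return theta, w / w.sum()    # weighted QMC sample of the posterior
```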
NASA Astrophysics Data System (ADS)
Dupuy, Nicolas; Casula, Michele
2018-04-01
By means of the Jastrow correlated antisymmetrized geminal power (JAGP) wave function and quantum Monte Carlo (QMC) methods, we study the ground state properties of the oligoacene series, up to the nonacene. The JAGP is the accurate variational realization of the resonating-valence-bond (RVB) ansatz proposed by Pauling and Wheland to describe aromatic compounds. We show that the long-ranged RVB correlations built in the acenes' ground state are detrimental for the occurrence of open-shell diradical or polyradical instabilities, previously found by lower-level theories. We substantiate our outcome by a direct comparison with another wave function, tailored to be an open-shell singlet (OSS) for long-enough acenes. By comparing on the same footing the RVB and OSS wave functions, both optimized at a variational QMC level and further projected by the lattice regularized diffusion Monte Carlo method, we prove that the RVB wave function has always a lower variational energy and better nodes than the OSS, for all molecular species considered in this work. The entangled multi-reference RVB state acts against the electron edge localization implied by the OSS wave function and weakens the diradical tendency for higher oligoacenes. These properties are reflected by several descriptors, including wave function parameters, bond length alternation, aromatic indices, and spin-spin correlation functions. In this context, we propose a new aromatic index estimator suitable for geminal wave functions. For the largest acenes taken into account, the long-range decay of the charge-charge correlation functions is compatible with a quasi-metallic behavior.
ERIC Educational Resources Information Center
Morgan, Grant B.; Zhu, Min; Johnson, Robert L.; Hodge, Kari J.
2014-01-01
Common estimators of interrater reliability include Pearson product-moment correlation coefficients, Spearman rank-order correlations, and the generalizability coefficient. The purpose of this study was to examine the accuracy of estimators of interrater reliability when varying the true reliability, number of scale categories, and number of…
Benali, Anouar; Shulenburger, Luke; Krogel, Jaron T.; ...
2016-06-07
The Magnéli phase Ti4O7 is an important transition metal oxide with a wide range of applications because of the interplay between its charge, spin, and lattice degrees of freedom. At low temperatures, it has non-trivial magnetic states very close in energy, driven by electronic exchange and correlation interactions. We have examined three low-lying states, one ferromagnetic and two antiferromagnetic, and calculated their energies as well as Ti spin moment distributions using highly accurate quantum Monte Carlo methods. We compare our results to those obtained from density functional theory-based methods that include approximate corrections for exchange and correlation. Our results confirm the nature of the states and their ordering in energy, as compared with density functional theory methods; however, the energy differences and spin distributions differ. A detailed analysis suggests that non-local exchange-correlation functionals, in addition to other approximations such as LDA+U to account for correlations, are needed to simultaneously obtain better estimates for spin moments, distributions, energy differences, and energy gaps.
Propagating probability distributions of stand variables using sequential Monte Carlo methods
Jeffrey H. Gove
2009-01-01
A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
ERIC Educational Resources Information Center
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Tavakol, Najmeh; Kheiri, Soleiman; Sedehi, Morteza
2016-01-01
The interval between blood donations plays a major role in whether a first-time donor becomes a regular one. The aim of this study was to determine the factors affecting the interval between blood donations. In a longitudinal study in 2008, 864 first-time donors at the Shahrekord Blood Transfusion Center, in the capital city of Chaharmahal and Bakhtiari Province, Iran, were selected by systematic sampling and followed up for five years. Among these, a subset of 424 donors who had at least two successful blood donations was chosen for this study, and the time intervals between their donations were measured as the response variable. Sex, body weight, age, marital status, education, residence, and job were recorded as independent variables. Data analysis was based on a log-normal hazard model with gamma-correlated frailty, in which the frailty is the sum of two independent components, each assumed to follow a gamma distribution. The analysis used a Bayesian approach with a Markov chain Monte Carlo algorithm in OpenBUGS; convergence was checked via the Gelman-Rubin criterion using the BOA package in R. Age, job, and education had significant effects on the chance of donating blood (P < 0.05). The chances of blood donation were higher for older donors, clerical workers, laborers, the self-employed, students, and more educated donors, and in turn the time intervals between their blood donations were shorter. Given the significant effects of some variables in the log-normal correlated frailty model, it is necessary to plan educational and cultural programs to encourage people with longer inter-donation intervals to donate more frequently.
Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media
NASA Astrophysics Data System (ADS)
Fichtner, A.; Simutė, S.
2017-12-01
We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location, and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling, which requires only a small number of test models. Using spectral-element simulations of 3D visco-elastic, anisotropic wave propagation, we precompute a database of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain database can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude fewer samples than derivative-free Monte Carlo methods (exact numbers depend on observational errors and the quality of the prior). We apply our method to the Japanese Islands region, where we previously constrained the 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.
LCG MCDB—a knowledgebase of Monte-Carlo simulated events
NASA Astrophysics Data System (ADS)
Belov, S.; Dudko, L.; Galkin, E.; Gusev, A.; Pokorski, W.; Sherstnev, A.
2008-02-01
In this paper we report on the LCG Monte-Carlo Data Base (MCDB) and the software which has been developed to operate it. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC collaborations by experts. In many cases, modern Monte-Carlo simulation of physical processes requires expert knowledge of Monte-Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase mainly dedicated to accumulating simulated events of this type. The main motivation behind LCG MCDB is to make sophisticated MC event samples available to various physics groups. All the data in MCDB are accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project.
Program summary
Program title: LCG Monte-Carlo Data Base
Catalogue identifier: ADZX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public Licence
No. of lines in distributed program, including test data, etc.: 30 129
No. of bytes in distributed program, including test data, etc.: 216 943
Distribution format: tar.gz
Programming language: Perl
Computer: CPU: Intel Pentium 4, RAM: 1 Gb, HDD: 100 Gb
Operating system: Scientific Linux CERN 3/4
RAM: 1 073 741 824 bytes (1 Gb)
Classification: 9
External routines: perl >= 5.8.5; Perl modules DBD-mysql >= 2.9004, File::Basename, GD::SecurityImage, GD::SecurityImage::AC, Linux::Statistics, XML::LibXML > 1.6, XML::SAX, XML::NamespaceSupport; Apache HTTP Server >= 2.0.59; mod_auth_external >= 2.2.9; edg-utils-system RPM package; gd >= 2.0.28; rpm package CASTOR-client >= 2.1.2-4; arc-server (optional)
Nature of problem: Different groups of experimentalists often prepare similar samples of particle collision events, or turn to the same group of authors of Monte-Carlo (MC) generators to prepare the events. For example, the same MC samples of Standard Model (SM) processes can be employed either in SM analyses (as signal) or in searches for new phenomena in Beyond Standard Model analyses (as background). If the samples are made publicly available and equipped with comprehensive documentation, cross checks of both the samples themselves and the physical models applied can be sped up. Some event samples require substantial computing resources to prepare, so central storage of the samples prevents the waste of researcher time and computing resources that would otherwise be spent preparing the same events many times.
Solution method: Creation of a special knowledgebase (MCDB) designed to keep event samples for the LHC experimental and phenomenological community. The knowledgebase is realized as a separate web server (http://mcdb.cern.ch). All event samples are kept on tapes at CERN. Documentation describing the events is the main content of MCDB. Users can browse the knowledgebase, read and comment on articles (documentation), and download event samples. Authors can upload new event samples, create new articles, and edit their own articles.
Restrictions: The software is adapted to solve the problems described in the article; there are no additional restrictions.
Unusual features: The software provides a framework to store and document large files with a flexible authentication and authorization system. Different external storage systems with large capacity can be used to keep the files. The web content management system provides all of the necessary interfaces for the authors of the files, end users and administrators.
Running time: Real-time operation.
References: [1] The main LCG MCDB server, http://mcdb.cern.ch/. [2] P. Bartalini, L. Dudko, A. Kryukov, I.V. Selyuzhenkov, A. Sherstnev, A. Vologdin, LCG Monte-Carlo data base, hep-ph/0404241. [3] J.P. Baud, B. Couturier, C. Curran, J.D. Durand, E. Knezo, S. Occhetti, O. Barring, CASTOR: status and evolution, cs.oh/0305047.
NASA Astrophysics Data System (ADS)
Gao, Yi
The development and utilization of wind energy to satisfy electrical demand has received considerable attention in recent years due to its tremendous environmental, social and economic benefits, together with public support and government incentives. Electric power generation from wind energy behaves quite differently from that of conventional sources, and the fundamentally different operating characteristics of wind energy facilities therefore affect power system reliability in a different manner than those of conventional systems. The reliability impact of such a highly variable energy source is an important aspect that must be assessed when the wind power penetration is significant. The focus of the research described in this thesis is the utilization of state sampling Monte Carlo simulation in wind-integrated bulk electric system reliability analysis and the application of these concepts in system planning and decision making. Load forecast uncertainty is an important factor in long-range planning and system development. This thesis describes two approximate approaches developed to reduce the number of steps in a load duration curve that includes load forecast uncertainty, and to provide reasonably accurate generating and bulk system reliability index predictions; the developed approaches are illustrated by application to two composite test systems. A method of generating correlated random numbers with uniform distributions and a specified correlation coefficient in the state sampling method is proposed and used to conduct adequacy assessment of generating systems and of bulk electric systems containing correlated wind farms. The studies described show that it is possible to use the state sampling Monte Carlo simulation technique to quantitatively assess the reliability implications of adding wind power to a composite generation and transmission system, including the effects of multiple correlated wind sites. This is an important development, as it permits correlated wind farms to be incorporated in large practical system studies without requiring excessive increases in computer solution time. The procedures described for creating monthly and seasonal wind farm models should prove useful in situations where time-period models are required to incorporate scheduled maintenance of generation and transmission facilities. There is growing interest in combining deterministic considerations with probabilistic assessment in order to evaluate quantitative system risk and conduct bulk power system planning. A relatively new approach that incorporates deterministic and probabilistic considerations in a single risk assessment framework has been designated the joint deterministic-probabilistic approach. The research described in this thesis illustrates that the joint deterministic-probabilistic approach can be effectively used to integrate wind power in bulk electric system planning, and shows that it provides more stringent results for a system with wind power than the traditional deterministic N-1 method, because it is driven by the deterministic N-1 criterion with an added probabilistic perspective that recognizes the power output characteristics of a wind turbine generator.
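The abstract does not spell out the construction of correlated uniforms; one standard way to obtain uniform marginals with a specified Pearson correlation is a Gaussian copula (NORTA), sketched below under that assumption. The adjustment `rho_g = 2 sin(pi rho / 6)` inverts the known relation between the normal correlation and the correlation of the mapped uniforms:

```python
import numpy as np
from scipy.stats import norm

def correlated_uniforms(rho, n, rng=np.random.default_rng(2)):
    """Pairs of U(0,1) variates with Pearson correlation rho, via a
    Gaussian copula: correlate standard normals, then map each margin
    through the normal CDF.  The underlying normal correlation is chosen
    so the mapped uniforms hit the target (rho_u = (6/pi) asin(rho_g/2))."""
    rho_g = 2.0 * np.sin(np.pi * rho / 6.0)       # invert the relation above
    cov = [[1.0, rho_g], [rho_g, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return norm.cdf(z)

u = correlated_uniforms(0.8, 100_000)
print(np.corrcoef(u[:, 0], u[:, 1])[0, 1])        # ~0.8
```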
Note: A pure-sampling quantum Monte Carlo algorithm with independent Metropolis.
Vrbik, Jan; Ospadov, Egor; Rothstein, Stuart M
2016-07-14
Recently, Ospadov and Rothstein published a pure-sampling quantum Monte Carlo algorithm (PSQMC) that features an auxiliary Path Z that connects the midpoints of the current and proposed Paths X and Y, respectively. When sufficiently long, Path Z provides statistical independence of Paths X and Y. Under those conditions, the Metropolis decision used in PSQMC is done without any approximation, i.e., not requiring microscopic reversibility and without having to introduce any G(x → x'; τ) factors into its decision function. This is a unique feature that contrasts with all competing reptation algorithms in the literature. An example illustrates that dependence of Paths X and Y has adverse consequences for pure sampling.
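The key point, that independent proposals remove transition-kernel factors from the acceptance decision, can be seen in a generic independence-Metropolis step, where only the target p and the fixed proposal density q appear (a schematic illustration, not the PSQMC path machinery):

```python
import numpy as np

def independence_metropolis(p, q_sample, q_pdf, n_steps, x0,
                            rng=np.random.default_rng(3)):
    """Independence Metropolis: proposals come from a fixed density q,
    independent of the current state, so the acceptance ratio involves
    only p and q at the two states."""
    x, out = x0, []
    for _ in range(n_steps):
        y = q_sample(rng)
        if rng.uniform() < p(y) * q_pdf(x) / (p(x) * q_pdf(y)):
            x = y
        out.append(x)
    return np.array(out)

# Example: standard-normal target, wide-normal proposal.
p = lambda x: np.exp(-0.5 * x * x)
q = lambda x: np.exp(-0.5 * (x / 3.0) ** 2)
chain = independence_metropolis(p, lambda r: r.normal(scale=3.0), q, 10_000, 0.0)
print(chain.mean(), chain.std())                  # ~0, ~1
```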
Recent applications of THERMUS
NASA Astrophysics Data System (ADS)
Wheaton, S.; Hauer, M.
2011-12-01
Some of the most recent applications of the statistical-thermal model package, THERMUS, are reviewed. These applications focus on fluctuation and correlation observables in an ideal particle and anti-particle gas in limited momentum space segments, as well as in a hadron resonance gas. In the case of the latter, a Monte Carlo event generator, utilising THERMUS functionality and assuming thermal production of hadrons, is discussed. The system under consideration is sampled grand canonically in the Boltzmann approximation. A re-weighting scheme is then introduced to account for conservation of charges (baryon number, strangeness, electric charge) and energy and momentum, effectively allowing for extrapolation of grand canonical results to the microcanonical limit. The approach utilised in this and other applications suggests improvements to existing THERMUS calculations.
Non-proportional odds multivariate logistic regression of ordinal family data.
Zaloumis, Sophie G; Scurrah, Katrina J; Harrap, Stephen B; Ellis, Justine A; Gurrin, Lyle C
2015-03-01
Methods to examine whether genetic and/or environmental sources can account for the residual variation in ordinal family data usually assume proportional odds. However, standard software to fit the non-proportional odds model to ordinal family data is limited because the correlation structure of family data is more complex than for other types of clustered data. To perform these analyses we propose the non-proportional odds multivariate logistic regression model and take a simulation-based approach to model fitting using Markov chain Monte Carlo methods, such as partially collapsed Gibbs sampling and the Metropolis algorithm. We applied the proposed methodology to male pattern baldness data from the Victorian Family Heart Study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Culpepper, Steven Andrew
2016-06-01
Standardized tests are frequently used for selection decisions, and the validation of test scores remains an important area of research. This paper builds upon prior literature about the effect of nonlinearity and heteroscedasticity on the accuracy of standard formulas for correcting correlations in restricted samples. Existing formulas for direct range restriction require three assumptions: (1) the criterion variable is missing at random; (2) a linear relationship between independent and dependent variables; and (3) constant error variance or homoscedasticity. The results in this paper demonstrate that the standard approach for correcting restricted correlations is severely biased in cases of extreme monotone quadratic nonlinearity and heteroscedasticity. This paper offers at least three significant contributions to the existing literature. First, a method from the econometrics literature is adapted to provide more accurate estimates of unrestricted correlations. Second, derivations establish bounds on the degree of bias attributed to quadratic functions under the assumption of a monotonic relationship between test scores and criterion measurements. New results are presented on the bias associated with using the standard range restriction correction formula, and the results show that the standard correction formula yields estimates of unrestricted correlations that deviate by as much as 0.2 for high to moderate selectivity. Third, Monte Carlo simulation results demonstrate that the new procedure for correcting restricted correlations provides more accurate estimates in the presence of quadratic and heteroscedastic test score and criterion relationships.
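For reference, the standard direct range restriction correction discussed here (often called Thorndike's Case II) can be written in a few lines; the assumptions flagged in the docstring are exactly those the paper shows to be consequential:

```python
import numpy as np

def correct_direct_range_restriction(r, sd_unrestricted, sd_restricted):
    """Standard correction for direct range restriction (Thorndike Case II).

    r is the correlation observed in the selected sample; the formula
    assumes linearity and homoscedasticity, the two assumptions whose
    violation the paper shows can bias the estimate by as much as 0.2.
    """
    u = sd_unrestricted / sd_restricted
    return r * u / np.sqrt(1.0 + r * r * (u * u - 1.0))

print(correct_direct_range_restriction(0.30, 1.0, 0.6))   # ~0.46
```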
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisenbach, Markus; Li, Ying Wai
We report a new multicanonical Monte Carlo (MC) algorithm to obtain the density of states (DOS) for physical systems with continuous state variables in statistical mechanics. Our algorithm is able to obtain an analytical form for the DOS expressed in a chosen basis set, instead of a numerical array of finite resolution as in previous variants of this class of MC methods such as the multicanonical (MUCA) sampling and Wang-Landau (WL) sampling. This is enabled by storing the visited states directly in a data set and avoiding the explicit collection of a histogram. This practice also has the advantage of avoiding undesirable artificial errors caused by the discretization and binning of continuous state variables. Our results show that this scheme is capable of obtaining converged results with a much reduced number of Monte Carlo steps, leading to a significant speedup over existing algorithms.
The Joker: A custom Monte Carlo sampler for binary-star and exoplanet radial velocity data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-01-01
Given sparse or low-quality radial-velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and MCMC posterior sampling over the orbital parameters. The Joker is a custom-built Monte Carlo sampler that can produce a posterior sampling for orbital parameters given sparse or noisy radial-velocity measurements, even when the likelihood function is poorly behaved. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still highly informative and can be used in hierarchical (population) modeling.
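The Joker's sampler is specialized to the Keplerian problem, but the robustness argument carries over to the generic prior-rejection scheme sketched below (hypothetical `sample_prior` and `log_like` callables; using the sample maximum as the likelihood bound is an approximation):

```python
import numpy as np

def prior_rejection_sample(sample_prior, log_like, n_draws,
                           rng=np.random.default_rng(4)):
    """Posterior sampling by rejection from the prior: keep each prior draw
    with probability L(theta)/L_max.  No local moves are involved, so
    multimodal likelihoods pose no special difficulty."""
    thetas = sample_prior(n_draws, rng)
    log_l = np.array([log_like(t) for t in thetas])
    keep = np.log(rng.uniform(size=n_draws)) < log_l - log_l.max()
    return thetas[keep]
```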
Reactive Monte Carlo sampling with an ab initio potential
Leiding, Jeff; Coe, Joshua D.
2016-05-04
Here, we present the first application of reactive Monte Carlo in a first-principles context. The algorithm samples in a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We also discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulation of NH3 to those of ab initio molecular dynamics (AIMD). Furthermore, we find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the "rare-event" character of chemical reactions.
Kappa statistic for clustered matched-pair data.
Yang, Zhao; Zhou, Ming
2014-07-10
Kappa statistic is widely used to assess the agreement between two procedures in the independent matched-pair data. For matched-pair data collected in clusters, on the basis of the delta method and sampling techniques, we propose a nonparametric variance estimator for the kappa statistic without within-cluster correlation structure or distributional assumptions. The results of an extensive Monte Carlo simulation study demonstrate that the proposed kappa statistic provides consistent estimation and the proposed variance estimator behaves reasonably well for at least a moderately large number of clusters (e.g., K ≥50). Compared with the variance estimator ignoring dependence within a cluster, the proposed variance estimator performs better in maintaining the nominal coverage probability when the intra-cluster correlation is fair (ρ ≥0.3), with more pronounced improvement when ρ is further increased. To illustrate the practical application of the proposed estimator, we analyze two real data examples of clustered matched-pair data. Copyright © 2014 John Wiley & Sons, Ltd.
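For orientation, the kappa statistic itself for a 2x2 matched-pair table is computed as below; the paper's contribution, a cluster-robust variance estimator for this quantity, is not reproduced here:

```python
import numpy as np

def kappa_2x2(table):
    """Cohen's kappa for a 2x2 matched-pair table of counts, where
    table[i][j] counts pairs rated i by procedure A and j by procedure B."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    po = np.trace(t) / n                         # observed agreement
    pe = (t.sum(axis=0) @ t.sum(axis=1)) / n**2  # agreement expected by chance
    return (po - pe) / (1.0 - pe)

print(kappa_2x2([[40, 5], [10, 45]]))            # 0.7
```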
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison of the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
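A minimal least-squares PCE fit with plain Monte Carlo sampling, the baseline design the review compares against, might look as follows (one standard-normal input and a probabilists' Hermite basis are assumptions of this sketch):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def fit_pce(model, order=5, n_samples=200, rng=np.random.default_rng(5)):
    """Least-squares PCE fit of a scalar model of one standard-normal
    input, using plain Monte Carlo sampling as the design."""
    xi = rng.standard_normal(n_samples)
    A = hermevander(xi, order)                 # probabilists' Hermite basis
    coeffs, *_ = np.linalg.lstsq(A, model(xi), rcond=None)
    return coeffs

# A smooth nonlinear model: low-order coefficients dominate.
print(fit_pce(lambda x: np.exp(0.3 * x)))
```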
Clark, Michael D; Morris, Kenneth R; Tomassone, Maria Silvina
2017-09-12
We present a novel simulation-based investigation of the nucleation of nanodroplets from solution and from vapor. Nucleation is difficult to measure or model accurately, and predicting when nucleation should occur remains an open problem. Of specific interest is the "metastable limit", the observed concentration at which nucleation occurs spontaneously, which cannot currently be estimated a priori. To investigate the nucleation process, we employ gauge-cell Monte Carlo simulations to target spontaneous nucleation and measure thermodynamic properties of the system at nucleation. Our results reveal a widespread correlation over 5 orders of magnitude of solubilities, in which the metastable limit depends exclusively on solubility and the number density of generated nuclei. This three-way correlation is independent of other parameters, including intermolecular interactions, temperature, molecular structure, system composition, and the structure of the formed nuclei. Our results have great potential to further the prediction of nucleation events using easily measurable solute properties alone and to open new doors for further investigation.
Calculating Potential Energy Curves with Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Powell, Andrew D.; Dawes, Richard
2014-06-01
Quantum Monte Carlo (QMC) is a computational technique that can be applied to the electronic Schrödinger equation for molecules. QMC methods such as Variational Monte Carlo (VMC) and Diffusion Monte Carlo (DMC) have demonstrated the capability of capturing large fractions of the correlation energy, thus suggesting their possible use for high-accuracy quantum chemistry calculations. QMC methods scale particularly well with respect to parallelization, making them an attractive consideration in anticipation of next-generation computing architectures which will involve massive parallelization with millions of cores. Due to the statistical nature of the approach, in contrast to standard quantum chemistry methods, uncertainties (error bars) are associated with each calculated energy. This study focuses on the cost, feasibility and practical application of calculating potential energy curves for small molecules with QMC methods. Trial wave functions were constructed with the multi-configurational self-consistent field (MCSCF) method from GAMESS-US [1]. The CASINO Monte Carlo quantum chemistry package [2] was used for all of the DMC calculations. An overview of our progress in this direction will be given. References: [1] M. W. Schmidt et al., J. Comput. Chem. 14, 1347 (1993). [2] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010).
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
An Evaluation of a Markov Chain Monte Carlo Method for the Two-Parameter Logistic Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
The accuracy of the Markov Chain Monte Carlo (MCMC) procedure Gibbs sampling was considered for estimation of item parameters of the two-parameter logistic model. Data for the Law School Admission Test (LSAT) Section 6 were analyzed to illustrate the MCMC procedure. In addition, simulated data sets were analyzed using the MCMC, marginal Bayesian…
Use of single scatter electron Monte Carlo transport for medical radiation sciences
Svatos, Michelle M.
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
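Sampling an interaction outcome from a tabulated library is typically done by inverse-transform sampling over the tabulated probabilities; a generic sketch of that step (not CREEP's internals; the table values are illustrative):

```python
import numpy as np

def sample_from_table(values, weights, n, rng=np.random.default_rng(6)):
    """Inverse-CDF sampling of discrete outcomes (e.g., scattering angles)
    whose relative likelihoods come from a tabulated library."""
    cdf = np.cumsum(weights) / np.sum(weights)
    return values[np.searchsorted(cdf, rng.uniform(size=n))]

angles = np.array([5.0, 15.0, 45.0, 90.0])     # toy table
print(sample_from_table(angles, [0.50, 0.30, 0.15, 0.05], 5))
```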
ERIC Educational Resources Information Center
Kim, Jee-Seon; Bolt, Daniel M.
2007-01-01
The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…
The uncertainty of nitrous oxide emissions from grazed grasslands: A New Zealand case study
NASA Astrophysics Data System (ADS)
Kelliher, Francis M.; Henderson, Harold V.; Cox, Neil R.
2017-01-01
Agricultural soils emit nitrous oxide (N2O), a greenhouse gas and the primary source of nitrogen oxides which deplete stratospheric ozone. Agriculture has been estimated to be the largest anthropogenic N2O source. In New Zealand (NZ), pastoral agriculture uses half the land area. To estimate the annual N2O emissions from NZ's agricultural soils, the nitrogen (N) inputs have been determined and multiplied by an emission factor (EF), the mass fraction of N inputs emitted as N2O-N. To estimate the associated uncertainty, we developed an analytical method. For comparison, another estimate was determined by Monte Carlo numerical simulation. For both methods, expert judgement was used to estimate the N input uncertainty. The EF uncertainty was estimated by meta-analysis of the results from 185 NZ field trials. For the analytical method, assuming a normal distribution and independence of the terms used to calculate the emissions (correlation = 0), the estimated 95% confidence limit was ±57%. When there was a normal distribution and an estimated correlation of 0.4 between N input and EF, the latter inferred from experimental data involving six NZ soils, the analytical method estimated a 95% confidence limit of ±61%. The EF data from 185 NZ field trials had a logarithmic normal distribution. For the Monte Carlo method, assuming a logarithmic normal distribution for EF, a normal distribution for the other terms and independence of all terms, the estimated 95% confidence limits were -32% and +88% or ±60% on average. When there were the same distribution assumptions and a correlation of 0.4 between N input and EF, the Monte Carlo method estimated 95% confidence limits were -34% and +94% or ±64% on average. For the analytical and Monte Carlo methods, EF uncertainty accounted for 95% and 83% of the emissions uncertainty when the correlation between N input and EF was 0 and 0.4, respectively. As the first uncertainty analysis of an agricultural soils N2O emissions inventory using "country-specific" field trials to estimate EF uncertainty, this can be a potentially informative case study for the international scientific community.
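The Monte Carlo variant described, a normal N input, a lognormal EF, and a correlation of 0.4 between them, can be sketched with correlated normals and illustrative (not the paper's) parameter values:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
corr = 0.4                                      # correlation of N input and EF

# Correlated standard normals via the Cholesky factor of the 2x2 correlation.
L = np.linalg.cholesky(np.array([[1.0, corr], [corr, 1.0]]))
z = rng.standard_normal((n, 2)) @ L.T

N_input = 100.0 + 15.0 * z[:, 0]                # normal N input (placeholder units)
EF = np.exp(np.log(0.01) + 0.5 * z[:, 1])       # lognormal EF, median 1%

emissions = N_input * EF
lo, hi = np.percentile(emissions, [2.5, 97.5])
m = emissions.mean()
print(f"95% limits: {100 * (lo / m - 1):+.0f}% / {100 * (hi / m - 1):+.0f}%")
```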
Quantum Monte Carlo for atoms and molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, R.N.
1989-11-01
The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1-4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H2, LiH, Li2, and H2O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li2, and H2O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90-100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perera, Meewanage Dilina N; Li, Ying Wai; Eisenbach, Markus
We describe the study of thermodynamics of materials using replica-exchange Wang-Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang-Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first-principles calculations. We demonstrate that our framework leads to a significant speedup without compromising the accuracy and precision and facilitates the study of much larger systems than is possible with its serial counterpart.
Driven-dissipative quantum Monte Carlo method for open quantum systems
NASA Astrophysics Data System (ADS)
Nagy, Alexandra; Savona, Vincenzo
2018-05-01
We develop a real-time full configuration-interaction quantum Monte Carlo approach to model driven-dissipative open quantum systems with Markovian system-bath coupling. The method enables stochastic sampling of the Liouville-von Neumann time evolution of the density matrix thanks to a massively parallel algorithm, thus providing estimates of observables on the nonequilibrium steady state. We present the underlying theory and introduce an initiator technique and importance sampling to reduce the statistical error. Finally, we demonstrate the efficiency of our approach by applying it to the driven-dissipative two-dimensional XYZ spin-1/2 model on a lattice.
Novel Quantum Phases at Interfaces
2014-12-12
Selected publications: M. Kargarian and G. A. Fiete, "Multiorbital effects on thermoelectric properties of strongly correlated materials," Physical Review B 89, 085122; also ArXiv e-prints (08/2013). J. Maciejko, V. Chua, L. Wang, and G. A. Fiete, "Finite-size and interaction effects on topological phase transitions via numerically exact quantum Monte Carlo."
ERIC Educational Resources Information Center
Fouladi, Rachel T.
2000-01-01
Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…
NASA Astrophysics Data System (ADS)
Meric, Ilker; Johansen, Geir A.; Holstad, Marie B.; Mattingly, John; Gardner, Robin P.
2012-05-01
Prompt gamma-ray neutron activation analysis (PGNAA) has been and still is one of the major methods of choice for the elemental analysis of various bulk samples. This is mostly due to the fact that PGNAA offers a rapid, non-destructive and on-line means of sample interrogation. The quantitative analysis of the prompt gamma-ray data could, on the other hand, be performed either through the single peak analysis or the so-called Monte Carlo library least-squares (MCLLS) approach, of which the latter has been shown to be more sensitive and more accurate than the former. The MCLLS approach is based on the assumption that the total prompt gamma-ray spectrum of any sample is a linear combination of the contributions from the individual constituents or libraries. This assumption leads to, through the minimization of the chi-square value, a set of linear equations which has to be solved to obtain the library multipliers, a process that involves the inversion of the covariance matrix. The least-squares solution may be extremely uncertain due to the ill-conditioning of the covariance matrix. The covariance matrix will become ill-conditioned whenever, in the subsequent calculations, two or more libraries are highly correlated. The ill-conditioning will also be unavoidable whenever the sample contains trace amounts of certain elements or elements with significantly low thermal neutron capture cross-sections. In this work, a new iterative approach, which can handle the ill-conditioning of the covariance matrix, is proposed and applied to a hydrocarbon multiphase flow problem in which the parameters of interest are the separate amounts of the oil, gas, water and salt phases. The results of the proposed method are also compared with the results obtained through the implementation of a well-known regularization method, the truncated singular value decomposition. Final calculations indicate that the proposed approach would be able to treat ill-conditioned cases appropriately.
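The truncated singular value decomposition used for comparison discards the small singular values that an ill-conditioned covariance matrix would otherwise amplify; a minimal sketch:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Least-squares solution keeping only the k largest singular values,
    suppressing the directions that ill-conditioning would amplify."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Ill-conditioned example: a Vandermonde matrix with near-collinear columns.
A = np.vander(np.linspace(0, 1, 50), 8)
b = A @ np.ones(8) + 1e-4 * np.random.default_rng(8).standard_normal(50)
print(tsvd_solve(A, b, k=5))
```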
NASA Astrophysics Data System (ADS)
Pond, Mark J.; Errington, Jeffrey R.; Truskett, Thomas M.
2011-09-01
Partial pair-correlation functions of colloidal suspensions with continuous polydispersity can be challenging to characterize from optical microscopy or computer simulation data due to inadequate sampling. As a result, it is common to adopt an effective one-component description of the structure that ignores the differences between particle types. Unfortunately, whether this kind of simplified description preserves or averages out information important for understanding the behavior of the fluid depends on the degree of polydispersity and can be difficult to assess, especially when the corresponding multicomponent description of the pair correlations is unavailable for comparison. Here, we present a computer simulation study that examines the implications of adopting an effective one-component structural description of a polydisperse fluid. The square-well model that we investigate mimics key aspects of the experimental behavior of suspended colloids with short-range, polymer-mediated attractions. To characterize the partial pair-correlation functions and thermodynamic excess entropy of this system, we introduce a Monte Carlo sampling strategy appropriate for fluids with a large number of pseudo-components. The data from our simulations at high particle concentrations, as well as exact theoretical results for dilute systems, show how qualitatively different trends between structural order and particle attractions emerge from the multicomponent and effective one-component treatments, even with systems characterized by moderate polydispersity. We examine consequences of these differences for excess-entropy based scalings of shear viscosity, and we discuss how use of the multicomponent treatment reveals similarities between the corresponding dynamic scaling behaviors of attractive colloids and liquid water that the effective one-component analysis does not capture.
NASA Technical Reports Server (NTRS)
Liu, J.; Tiwari, Surendra N.
1994-01-01
The two-dimensional spatially elliptic Navier-Stokes equations have been used to investigate the radiative interactions in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The radiative heat transfer term in the energy equation is simulated using the Monte Carlo method (MCM). The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The spectral correlation has been considered in the Monte Carlo formulations. Results obtained demonstrate that the effect of radiation on the flow field is minimal but its effect on the wall heat transfer is significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and the nozzle size on the radiative and conductive wall fluxes.
Kong, Dongdong; Wang, Yafei; Wang, Jinsheng; Teng, Yanguo; Li, Na; Li, Jian
2016-11-01
In this study, a recombinant thyroid receptor (TR) gene yeast assay combined with Monte Carlo simulation was used to evaluate and characterize soil samples collected in Jilin (China) along the Second Songhua River for their agonist/antagonist effects on TR. No TR agonistic activity was found in the soils, but many soil samples exhibited TR antagonistic activities, and the bioassay-derived amiodarone hydrochloride equivalents, calculated based on Monte Carlo simulation, ranged from not detected (N.D.) to 35.5 μg/g. Hydrophilic substance fractions were determined to be the contributors to TR antagonistic activity in these soil samples. Our results indicate that the novel calculation method is effective for the quantification and characterization of TR antagonists in soil samples, and these data could provide useful information for future management and remediation efforts for contaminated soils. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gardner, Robin P.; Xu, Libai
2009-10-01
The Center for Engineering Applications of Radioisotopes (CEAR) has been working for over a decade on the Monte Carlo library least-squares (MCLLS) approach for treating non-linear radiation analyzer problems including: (1) prompt gamma-ray neutron activation analysis (PGNAA) for bulk analysis, (2) energy-dispersive X-ray fluorescence (EDXRF) analyzers, and (3) carbon/oxygen tool analysis in oil well logging. This approach essentially consists of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required background libraries. These libraries are then used in the linear library least-squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. Iterations of this are used until the LLS values agree with the composition used to generate the libraries. The current status of the methods (and topics) necessary to implement the MCLLS approach is reported. This includes: (1) the Monte Carlo codes such as CEARXRF, CEARCPG, and CEARCO for forward generation of the necessary elemental library spectra for the LLS calculation for X-ray fluorescence, neutron capture prompt gamma-ray analyzers, and carbon/oxygen tools; (2) the correction of spectral pulse pile-up (PPU) distortion by Monte Carlo simulation with the code CEARIPPU; (3) generation of detector response functions (DRF) for detectors with linear and non-linear responses for Monte Carlo simulation of pulse-height spectra; and (4) the use of the differential operator (DO) technique to make the necessary iterations for non-linear responses practical. In addition to commonly analyzed single spectra, coincidence spectra or even two-dimensional (2-D) coincidence spectra can also be used in the MCLLS approach and may provide more accurate results.
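The linear step at the heart of the MCLLS approach, solving for the library multipliers by weighted least squares, can be sketched as follows (synthetic libraries; the iteration, pile-up correction, and detector response modeling described above are omitted):

```python
import numpy as np

def library_multipliers(libraries, spectrum, variances):
    """Weighted linear library least-squares: find multipliers a such that
    spectrum ~ libraries @ a, weighting channels by inverse variance."""
    w = 1.0 / np.sqrt(variances)
    a, *_ = np.linalg.lstsq(libraries * w[:, None], spectrum * w, rcond=None)
    return a

# Synthetic check: recover the multipliers of four fake element libraries.
rng = np.random.default_rng(9)
libs = rng.uniform(size=(512, 4))
spec = libs @ np.array([1.0, 0.2, 0.0, 0.5]) + rng.normal(0.0, 0.05, 512)
print(library_multipliers(libs, spec, np.full(512, 0.05**2)))
```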
Liebherr, James K.
2012-01-01
Seven species of Mecyclothorax Sharp precinctive to Mont Mauru, Tahiti, Society Islands are newly described: Mecyclothorax tutei sp. n., Mecyclothorax tihotii sp. n., Mecyclothorax putaputa sp. n., Mecyclothorax toretore sp. n., Mecyclothorax anaana sp. n., Mecyclothorax pirihao sp. n., and Mecyclothorax poro sp. n. These seven constitute the first representative Mecyclothorax species recorded from Mauru, and their geographic restriction to this isolated massif defines it as a distinct area of endemism along the highly dissected eastern versant of the Tahiti Nui volcano. Each of the new species has a closest relative on another massif of Tahiti Nui, supporting speciation associated with vicariance caused by extensive erosional valley formation, especially the development of Papenoo Valley. Comparison of the known elevational distributions of the new discoveries on Mont Mauru to the elevational diversity profile of the comparatively well-sampled Mont Marau, northwest Tahiti Nui, suggests that numerous Mecyclothorax species remain to be discovered in higher-elevation habitats of Mont Mauru. PMID:23166465
Monte Carlo Sampling in Fractal Landscapes
NASA Astrophysics Data System (ADS)
Leitão, Jorge C.; Lopes, J. M. Viana Parente; Altmann, Eduardo G.
2013-05-01
We design a random walk to explore fractal landscapes such as those describing chaotic transients in dynamical systems. We show that the random walk moves efficiently only when its step length depends on the height of the landscape via the largest Lyapunov exponent of the chaotic system. We propose a generalization of the Wang-Landau algorithm which constructs not only the density of states (transient time distribution) but also the correct step length. As a result, we obtain a flat-histogram Monte Carlo method which samples fractal landscapes in polynomial time, a dramatic improvement over the exponential scaling of traditional uniform-sampling methods. Our results are not limited by the dimensionality of the landscape and are confirmed numerically in chaotic systems with up to 30 dimensions.
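For reference, the Wang-Landau scheme that the authors generalize builds the density of states by penalizing revisited energy bins until the visit histogram is flat; a generic discrete-energy sketch (not the fractal-landscape variant, whose step-length construction is the paper's contribution):

```python
import numpy as np

def wang_landau(energy, propose, x0, n_bins, log_f_final=1e-6, flat=0.8,
                rng=np.random.default_rng(10)):
    """Estimate log g(E) over integer energy bins by penalizing revisited
    bins until the visit histogram is flat, then refining the penalty."""
    log_g = np.zeros(n_bins)
    x, e = x0, energy(x0)
    log_f = 1.0
    while log_f > log_f_final:
        hist = np.zeros(n_bins)
        while hist.min() == 0 or hist.min() < flat * hist.mean():
            y = propose(x, rng)
            ey = energy(y)
            # Accept with probability g(E_old)/g(E_new): flattens the histogram.
            if np.log(rng.uniform()) < log_g[e] - log_g[ey]:
                x, e = y, ey
            log_g[e] += log_f
            hist[e] += 1
        log_f /= 2.0                            # refine the modification factor
    return log_g - log_g.min()

# Toy check: 8 states with E = state index; the true density of states is flat.
print(wang_landau(lambda s: s, lambda s, r: (s + r.choice([-1, 1])) % 8, 0, 8))
```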
Inglis, Stephen; Melko, Roger G
2013-01-01
We implement a Wang-Landau sampling technique in quantum Monte Carlo (QMC) simulations for the purpose of calculating the Rényi entanglement entropies and associated mutual information. The algorithm converges an estimate for an analog to the density of states for stochastic series expansion QMC, allowing a direct calculation of Rényi entropies without explicit thermodynamic integration. We benchmark results for the mutual information on two-dimensional (2D) isotropic and anisotropic Heisenberg models, a 2D transverse field Ising model, and a three-dimensional Heisenberg model, confirming a critical scaling of the mutual information in cases with a finite-temperature transition. We discuss the benefits and limitations of broad sampling techniques compared to standard importance sampling methods.
Monte Carlo generators for studies of the 3D structure of the nucleon
Avakian, Harut; D'Alesio, U.; Murgia, F.
2015-01-23
In this study, the extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires the development of a self-consistent analysis framework, accounting for evolution effects and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte-Carlo generators, accounting for TMD evolution effects, spin-orbit and quark-gluon correlations, will be crucial for future studies of quark-gluon dynamics in general and the 3D structure of the nucleon in particular.
Absolute Helmholtz free energy of highly anharmonic crystals: theory vs Monte Carlo.
Yakub, Lydia; Yakub, Eugene
2012-04-14
We discuss the problem of the quantitative theoretical prediction of the absolute free energy for classical highly anharmonic solids. Helmholtz free energy of the Lennard-Jones (LJ) crystal is calculated accurately while accounting for both the anharmonicity of atomic vibrations and the pair and triple correlations in displacements of the atoms from their lattice sites. The comparison with most precise computer simulation data on sublimation and melting lines revealed that theoretical predictions are in excellent agreement with Monte Carlo simulation data in the whole range of temperatures and densities studied.
NASA Astrophysics Data System (ADS)
Toropov, Andrey A.; Toropova, Alla P.
2018-06-01
A predictive model of logP for Pt(II) and Pt(IV) complexes, built with the Monte Carlo method using the CORAL software, has been validated with six different splits into training and validation sets. For all six splits, the predictive potential of the models was improved using the so-called index of ideality of correlation. The suggested models make it possible to identify molecular features that cause logP to increase or decrease.
Monte Carlo approaches to sampling forested tracts with lines or points
Harry T. Valentine; Jeffrey H. Gove; Timothy G. Gregoire
2001-01-01
Several line- and point-based sampling methods can be employed to estimate the aggregate dimensions of trees standing on a forested tract or pieces of coarse woody debris lying on the forest floor. Line methods include line intersect sampling, horizontal line sampling, and transect relascope sampling; point methods include variable- and fixed-radius plot sampling, and...
Heterogeneity in ultrathin films simulated by Monte Carlo method
NASA Astrophysics Data System (ADS)
Sun, Jiebing; Hannon, James B.; Kellogg, Gary L.; Pohl, Karsten
2007-03-01
The 3D composition profile of ultra-thin Pd films on Cu(001) has been experimentally determined using low energy electron microscopy (LEEM) [1]. Quantitative measurements of the alloy concentration profile near steps show that the Pd distribution in the third layer is heterogeneous due to step overgrowth during Pd deposition. Interestingly, the Pd distribution in the second layer is also heterogeneous, and appears to be correlated with the distribution in the first layer. We describe Monte Carlo simulations that show that this correlation is due to Cu-Pd attraction, and that the second-layer Pd is, in fact, laterally equilibrated. By comparing measured and simulated concentration profiles, we can estimate this attraction within a simple bond counting model. [1] J. B. Hannon, J. Sun, K. Pohl, G. L. Kellogg, Phys. Rev. Lett. 96, 246103 (2006).
Percolation of binary disk systems: Modeling and theory
Meeks, Kelsey; Tencer, John; Pantoya, Michelle L.
2017-01-12
The dispersion and connectivity of particles with a high degree of polydispersity is relevant to problems involving composite material properties and reaction decomposition prediction and has been the subject of much study in the literature. This paper utilizes Monte Carlo models to predict percolation thresholds for two-dimensional systems containing disks of two different radii. Monte Carlo simulations and spanning probability are used to extend prior models into regions of higher polydispersity than those previously considered. A correlation to predict the percolation threshold for binary disk systems is proposed based on the extended dataset presented in this work and compared to previously published correlations. Finally, a set of boundary conditions necessary for a good fit is presented, and a condition for maximizing the percolation threshold for binary disk systems is suggested.
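A minimal sketch of the spanning-probability estimate for a binary disk mixture, with a union-find check of left-right connectivity (the counts and radii are illustrative assumptions, not the paper's parameter sweep):

```python
import numpy as np

def spans(centers, radii):
    """True if overlapping disks connect the left and right edges of the unit square."""
    n = len(centers)
    parent = list(range(n + 2))                 # extra nodes: left wall, right wall
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path halving
            i = parent[i]
        return i
    left, right = n, n + 1
    for i in range(n):
        if centers[i, 0] - radii[i] < 0.0:
            parent[find(i)] = find(left)
        if centers[i, 0] + radii[i] > 1.0:
            parent[find(i)] = find(right)
        for j in range(i + 1, n):
            if ((centers[i] - centers[j]) ** 2).sum() <= (radii[i] + radii[j]) ** 2:
                parent[find(i)] = find(j)
    return find(left) == find(right)

rng = np.random.default_rng(12)
trials, n_small, n_big = 200, 150, 30           # illustrative binary mixture
radii = np.r_[np.full(n_small, 0.03), np.full(n_big, 0.08)]
hits = sum(spans(rng.uniform(size=(n_small + n_big, 2)), radii)
           for _ in range(trials))
print("spanning probability ~", hits / trials)
```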
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods, namely the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
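Both sampling-based comparators are easy to sketch for a toy fault tree with lognormally distributed basic-event probabilities (the tree and its parameters below are illustrative assumptions, not the article's model):

```python
import numpy as np

rng = np.random.default_rng(13)

def sample_top(n):
    """Toy fault tree, top = (A and B) or C, with lognormal basic events."""
    p = np.exp(rng.normal(np.log([1e-3, 2e-3, 1e-4]), 0.7, size=(n, 3)))
    a, b, c = p.T
    return a * b + c - a * b * c

print("MC 95th percentile :", np.percentile(sample_top(100_000), 95))
# Wilks: the maximum of 59 runs is a 95%-confidence upper bound on the
# 95th percentile (0.95**59 < 0.05), at a tiny fraction of the cost.
print("Wilks upper bound  :", sample_top(59).max())
```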
Importance Sampling Variance Reduction in GRESS ATMOSIM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wakeford, Daniel Tyler
This document is intended to introduce the importance sampling method of variance reduction to a Geant4 user for application to neutral particle Monte Carlo transport through the atmosphere, as implemented in GRESS ATMOSIM.
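The variance reduction idea, sample from a biased density and compensate with weights, in its simplest form (a generic rare-event toy problem, not the Geant4/ATMOSIM implementation):

```python
import numpy as np

rng = np.random.default_rng(14)
n = 10_000

# Rare-event toy problem: P(X > 4) for X ~ N(0,1), true value ~3.17e-5.
naive = (rng.standard_normal(n) > 4.0).mean()        # almost always 0

# Importance sampling: draw from N(4,1), reweight by the density ratio.
y = rng.normal(4.0, 1.0, n)
w = np.exp(-0.5 * y**2 + 0.5 * (y - 4.0) ** 2)
is_est = ((y > 4.0) * w).mean()
print(naive, is_est)
```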
Appraisal of jump distributions in ensemble-based sampling algorithms
NASA Astrophysics Data System (ADS)
Dejanic, Sanda; Scheidegger, Andreas; Rieckermann, Jörg; Albert, Carlo
2017-04-01
Sampling Bayesian posteriors of model parameters is often required for making model-based probabilistic predictions. For complex environmental models, standard Markov chain Monte Carlo (MCMC) methods are often infeasible because they require too many sequential model runs. Therefore, we focused on ensemble methods that use many Markov chains in parallel, since they can be run on modern cluster architectures. Little is known about how to choose the best performing sampler for a given application, and a poor choice can lead to an inappropriate representation of posterior knowledge. We assessed two different jump moves, the stretch and the differential evolution move, underlying, respectively, the software packages EMCEE and DREAM, which are popular in different scientific communities. For the assessment, we used analytical posteriors with features as they often occur in real posteriors, namely high dimensionality, strong non-linear correlations or multimodality. For posteriors with non-linear features, standard convergence diagnostics based on sample means can be insufficient; we therefore resorted to an entropy-based convergence measure. We assessed the samplers by means of their convergence speed, robustness and effective sample sizes. For posteriors with strongly non-linear features, we found that the stretch move outperforms the differential evolution move with respect to all three aspects.
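The stretch move assessed here has a compact form (following Goodman and Weare's construction as used in EMCEE; this sketch updates walkers one at a time rather than in the parallel halves a production implementation would use):

```python
import numpy as np

def stretch_move_sweep(walkers, log_p, a=2.0, rng=np.random.default_rng(15)):
    """One sweep of the affine-invariant stretch move: each walker x moves
    toward a random partner x_c via y = x_c + z (x - x_c), with z drawn
    from g(z) ~ 1/sqrt(z) on [1/a, a]."""
    n, d = walkers.shape
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])   # partner walker
        z = (1.0 + (a - 1.0) * rng.uniform()) ** 2 / a    # inverse-CDF draw of z
        y = walkers[j] + z * (walkers[i] - walkers[j])
        log_accept = (d - 1) * np.log(z) + log_p(y) - log_p(walkers[i])
        if np.log(rng.uniform()) < log_accept:
            walkers[i] = y
    return walkers
```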
Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu
2005-01-01
Geostatistical stochastic simulation is always combined with Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models as a result of their complexity, it is always infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
William Salas; Steve Hagen
2013-01-01
This presentation will provide an overview of an approach for quantifying uncertainty in spatial estimates of carbon emission from land use change. We generate uncertainty bounds around our final emissions estimate using a randomized, Monte Carlo (MC)-style sampling technique. This approach allows us to combine uncertainty from different sources without making...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
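The burden-shifting idea can be seen in a two-level version: estimate the cheap surrogate's mean with many samples and correct it with a few expensive evaluations of the difference (schematic; `expensive` and `cheap` stand in for the high-fidelity and reduced basis models):

```python
import numpy as np

def two_level_estimate(expensive, cheap, sampler, n_cheap=10_000, n_exp=100,
                       rng=np.random.default_rng(16)):
    """Two-level estimator: E[f] = E[g] + E[f - g].  The surrogate g is
    sampled heavily; the correction f - g has small variance when the two
    models are correlated, so few expensive evaluations are needed."""
    x_cheap = sampler(n_cheap, rng)
    x_exp = sampler(n_exp, rng)
    return cheap(x_cheap).mean() + (expensive(x_exp) - cheap(x_exp)).mean()

# Toy usage: the surrogate captures most of the variation of the full model.
f = lambda x: np.sin(x) + 0.01 * x**2
g = lambda x: np.sin(x)
print(two_level_estimate(f, g, lambda n, r: r.normal(size=n)))   # ~0.01
```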
Sakota, Daisuke; Kosaka, Ryo; Nishida, Masahiro; Maruyama, Osamu
2015-01-01
Turbidity variation is one of the major limitations in Raman spectroscopy for quantifying blood components, such as glucose, non-invasively. To overcome this limitation, we have developed a Raman scattering simulation using a photon-cell interactive Monte Carlo (pciMC) model that tracks photon migration in both the extra- and intracellular spaces without relying on the macroscopic scattering phase function and anisotropy factor. The interaction of photons at the plasma-cell boundary of randomly oriented three-dimensionally biconcave red blood cells (RBCs) is modeled using geometric optics. The validity of the developed pciMCRaman was investigated by comparing simulation and experimental results of Raman spectroscopy of the glucose level in a bovine blood sample. The scattering of the excitation laser at a wavelength of 785 nm was simulated considering the changes in the refractive index of the extracellular solution. Based on the excitation laser photon distribution within the blood, Raman photons derived from the hemoglobin and glucose molecules at a Raman shift of 1140 cm^-1 (862 nm) were generated, and the photons reaching the detection area were counted. The simulation and experimental results showed good correlation. It is speculated that pciMCRaman can provide information about the ability and limitations of the measurement of blood glucose levels.
Numerically Exact Long Time Magnetization Dynamics Near the Nonequilibrium Kondo Regime
NASA Astrophysics Data System (ADS)
Cohen, Guy; Gull, Emanuel; Reichman, David; Millis, Andrew; Rabani, Eran
2013-03-01
The dynamical and steady-state spin response of the nonequilibrium Anderson impurity model to magnetic fields, bias voltages, and temperature is investigated by a numerically exact method which allows access to unprecedentedly long times. The method is based on using real, continuous time bold Monte Carlo techniques--quantum Monte Carlo sampling of diagrammatic corrections to a partial re-summation--in order to compute the kernel of a memory function, which is then used to determine the reduced density matrix. The method owes its effectiveness to the fact that the memory kernel is dominated by relatively short-time properties even when the system's dynamics are long-ranged. We make predictions regarding the non-monotonic temperature dependence of the system at high bias voltage and the oscillatory quench dynamics at high magnetic fields. We also discuss extensions of the method to the computation of transport properties and correlation functions, and its suitability as an impurity solver free from the need for analytical continuation in the context of dynamical mean field theory. This work is supported by the US Department of Energy under grant DE-SC0006613, by NSF-DMR-1006282 and by the US-Israel Binational Science Foundation. GC is grateful to the Yad Hanadiv-Rothschild Foundation for the award of a Rothschild Fellowship.
Inference of Functionally-Relevant N-acetyltransferase Residues Based on Statistical Correlations.
Neuwald, Andrew F; Altschul, Stephen F
2016-12-01
Over evolutionary time, members of a superfamily of homologous proteins sharing a common structural core diverge into subgroups filling various functional niches. At the sequence level, such divergence appears as correlations that arise from residue patterns distinct to each subgroup. Such a superfamily may be viewed as a population of sequences corresponding to a complex, high-dimensional probability distribution. Here we model this distribution as hierarchical interrelated hidden Markov models (hiHMMs), which describe these sequence correlations implicitly. By characterizing such correlations one may hope to obtain information regarding functionally-relevant properties that have thus far evaded detection. To do so, we infer a hiHMM distribution from sequence data using Bayes' theorem and Markov chain Monte Carlo (MCMC) sampling, which is widely recognized as the most effective approach for characterizing a complex, high dimensional distribution. Other routines then map correlated residue patterns to available structures with a view to hypothesis generation. When applied to N-acetyltransferases, this reveals sequence and structural features indicative of functionally important, yet generally unknown biochemical properties. Even for sets of proteins for which nothing is known beyond unannotated sequences and structures, this can lead to helpful insights. We describe, for example, a putative coenzyme-A-induced-fit substrate binding mechanism mediated by arginine residue switching between salt bridge and π-π stacking interactions. A suite of programs implementing this approach is available (psed.igs.umaryland.edu).
Akbulut, Songul; Grieken, Renevan; Kılıc, Mehmet A; Cevik, Ugur; Rotondo, Giuliana G
2013-03-01
Soils are complex mixtures of organic and inorganic materials and of metal compounds from anthropogenic sources. In order to identify pollution sources, their magnitude and their development, several X-ray analytical methods were applied in this study. The concentrations of 16 elements were determined in all the soil samples using energy-dispersive X-ray fluorescence spectrometry. Soils of unknown origin were examined by scanning electron microscopy equipped with a Si(Li) X-ray detector, using a Monte Carlo simulation approach. The mineralogical analyses were carried out using X-ray diffraction spectrometry. Owing to the correlations between heavy metals and oxide compounds, the samples were also analyzed by electron probe microanalyzer (EPMA) to obtain information about their oxide contents. In addition, soil pH and salinity levels were determined because of their influence on heavy-metal and soil-surface chemistry. Moreover, the geoaccumulation index (I_geo) enables the assessment of contamination by comparing current and pre-industrial concentrations.
Structure of Nano-sized CeO2 Materials: Combined Scattering and Spectroscopic Investigations
Marchbank, Huw R.; Clark, Adam H.; Hyde, Timothy I.; ...
2016-08-29
Here, the nature of nano-sized ceria (CeO2) systems was investigated using neutron and X-ray diffraction and X-ray absorption spectroscopy. Whilst both diffraction and total pair distribution functions (PDFs) revealed that in all the samples the occupancies of both Ce4+ and O2- are very close to the ideal stoichiometry, analysis using the reverse Monte Carlo technique revealed significant disorder around oxygen atoms in the nano-sized ceria samples in comparison to the highly crystalline NIST standard. In addition, the analysis revealed that the main differences observed in the pair correlations from the various X-ray and neutron diffraction techniques were attributable to the particle size of the CeO2 prepared by the three reported methods. Furthermore, detailed analysis of the Ce L3- and K-edge EXAFS data supports this finding; in particular, the decrease in higher-shell coordination numbers with respect to the NIST standard is attributed to differences in particle size.
Link, William A; Barker, Richard J
2005-03-01
We present a hierarchical extension of the Cormack-Jolly-Seber (CJS) model for open population capture-recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis-Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.
Multilevel Mixture Kalman Filter
NASA Astrophysics Data System (ADS)
Guo, Dong; Wang, Xiaodong; Chen, Rong
2004-12-01
The mixture Kalman filter is a general sequential Monte Carlo technique for conditional linear dynamic systems. It generates samples of some indicator variables recursively based on sequential importance sampling (SIS) and integrates out the linear and Gaussian state variables conditioned on these indicators. Due to the marginalization process, the complexity of the mixture Kalman filter is quite high if the dimension of the indicator sampling space is high. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to make use of the multilevel or hierarchical structure of the space from which the indicator variables take values. That is, we draw samples in a multilevel fashion, beginning with the highest-level sampling space and then drawing samples from the associated subspace of the newly drawn samples in a lower-level sampling space, until reaching the desired sampling space. Such a multilevel sampling scheme can be used in conjunction with delayed estimation methods, such as the delayed-sample method, resulting in the delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically coherent and noncoherent 16-QAM over flat-fading channels, are provided to demonstrate the performance of the proposed multilevel mixture Kalman filter.
NASA Astrophysics Data System (ADS)
Kovalenko, I. D.; Doressoundiram, A.; Lellouch, E.; Vilenius, E.; Müller, T.; Stansberry, J.
2017-11-01
Context. Gravitationally bound multiple systems provide an opportunity to estimate the mean bulk density of the objects, whereas this characteristic is not available for single objects. Being a primitive population of the outer solar system, binary and multiple trans-Neptunian objects (TNOs) provide unique information about bulk density and internal structure, improving our understanding of their formation and evolution. Aims: The goal of this work is to analyse the parameters of multiple trans-Neptunian systems observed with the Herschel and Spitzer space telescopes. In particular, statistical analysis is performed for the radiometric size and geometric albedo obtained from photometric observations, and for the estimated bulk density. Methods: We use Monte Carlo simulation to estimate the real size distribution of TNOs. For this purpose, we expand the dataset of diameters by adopting the Minor Planet Center database list with the available values of absolute magnitude therein, and the albedo distribution derived from Herschel radiometric measurements. We use the 2-sample Anderson-Darling non-parametric statistical method for testing whether the two samples of diameters, for binary and single TNOs, come from the same distribution. Additionally, we use Spearman's coefficient as a measure of rank correlations between parameters. Uncertainties of the estimated parameters, together with the lack of data, are taken into account. Conclusions about correlations between parameters are based on statistical hypothesis testing. Results: We have found that the difference in the size distributions of multiple and single TNOs is biased by small objects. The test on correlations between parameters shows that the effective diameter of binary TNOs strongly correlates with heliocentric orbital inclination and with the magnitude difference between the components of a binary system. The correlation between diameter and magnitude difference implies that small and large binaries are formed by different mechanisms. Furthermore, the statistical test indicates, although not significantly given the sample size, that a moderately strong correlation exists between diameter and bulk density. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
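A minimal sketch (not the authors' pipeline; all diameter and inclination values below are invented placeholders) of the two statistical tools named above, the 2-sample Anderson-Darling test and Spearman's rank correlation, as exposed by scipy:

```python
import numpy as np
from scipy.stats import anderson_ksamp, spearmanr

rng = np.random.default_rng(0)
d_binary = rng.lognormal(mean=5.0, sigma=0.5, size=40)   # hypothetical binary-TNO diameters (km)
d_single = rng.lognormal(mean=4.8, sigma=0.6, size=120)  # hypothetical single-TNO diameters (km)

# H0: the two diameter samples are drawn from the same distribution
ad = anderson_ksamp([d_binary, d_single])
print("AD statistic:", ad.statistic, " approx. p-value:", ad.significance_level)

# Rank correlation between diameter and a second parameter (fake inclinations here)
incl = rng.uniform(0, 35, size=40)
rho, p = spearmanr(d_binary, incl)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```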
Constraints on Biogenic Emplacement of Crystalline Calcium Carbonate and Dolomite
NASA Astrophysics Data System (ADS)
Colas, B.; Clark, S. M.; Jacob, D. E.
2015-12-01
Amorphous calcium carbonate (ACC) is a biogenic precursor of the calcium carbonates forming the shells and skeletons of marine organisms, which are key components of the whole marine environment. Understanding carbonate formation is an essential prerequisite for quantifying the effect climate change and pollution have on marine populations. Water is a critical component of the structure of ACC and the key component controlling the stability of the amorphous state. Addition of small amounts of magnesium (1-5% of the calcium content) is known to promote the stability of ACC, presumably through stabilization of the hydrogen bonding network. Understanding the hydrogen bonding network in ACC is therefore fundamental to understanding its stability. Our approach is to use Monte Carlo simulations constrained by X-ray and neutron scattering data to determine hydrogen bonding networks in ACC as a function of magnesium doping. We have successfully developed a synthesis protocol to make ACC and have collected X-ray data, suitable for determining Ca, Mg and O correlations, as well as neutron data, which give information on hydrogen/deuterium (since the interaction of X-rays with hydrogen is too weak to constrain hydrogen atom positions with X-rays alone). The X-ray and neutron data are used to constrain reverse Monte Carlo modelling of the ACC structure using the Empirical Potential Structure Refinement program, in order to yield a complete structural model for ACC including water molecule positions. We will present details of our sample synthesis and characterization methods, X-ray and neutron scattering data, and reverse Monte Carlo simulation results, together with a discussion of the role of hydrogen bonding in ACC stability.
The Correlated Jacobi and the Correlated Cauchy-Lorentz Ensembles
NASA Astrophysics Data System (ADS)
Wirtz, Tim; Waltner, Daniel; Kieburg, Mario; Kumar, Santosh
2016-01-01
We calculate the k-point generating function of the correlated Jacobi ensemble using supersymmetric methods. We use the result for complex matrices for k=1 to derive a closed-form expression for the eigenvalue density. For real matrices we obtain the density in terms of a twofold integral that we evaluate numerically. For both expressions we find agreement when comparing with Monte Carlo simulations. Relations between these quantities for the Jacobi and the Cauchy-Lorentz ensemble are derived.
Local normalization: Uncovering correlations in non-stationary financial time series
NASA Astrophysics Data System (ADS)
Schäfer, Rudi; Guhr, Thomas
2010-09-01
The measurement of correlations between financial time series is of vital importance for risk management. In this paper we address an estimation error that stems from the non-stationarity of the time series. We put forward a method to rid the time series of local trends and variable volatility, while preserving cross-correlations. We test this method in a Monte Carlo simulation, and apply it to empirical data for the S&P 500 stocks.
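The local normalization idea lends itself to a compact sketch: subtract a local mean and divide by a local standard deviation computed over a rolling window, so slow trends and variable volatility are removed while cross-correlations survive. The window length and the toy series below are illustrative choices, not the parameters of the study:

```python
import numpy as np

def local_normalize(x, n=13):
    """(x_t - local mean) / local std over the n most recent observations;
    the first n-1 points are dropped."""
    x = np.asarray(x, dtype=float)
    out = np.empty(len(x) - n + 1)
    for i in range(n - 1, len(x)):
        w = x[i - n + 1 : i + 1]
        out[i - n + 1] = (x[i] - w.mean()) / w.std()
    return out

# Toy check: a common random factor plus a shared deterministic trend
rng = np.random.default_rng(1)
common = rng.standard_normal(2000)
a = 0.002 * np.arange(2000) + common + 0.5 * rng.standard_normal(2000)
b = 0.002 * np.arange(2000) + common + 0.5 * rng.standard_normal(2000)
print("raw correlation:   ", np.corrcoef(a, b)[0, 1])   # inflated by the trend
print("locally normalized:", np.corrcoef(local_normalize(a), local_normalize(b))[0, 1])
```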
Langer, Álvaro I; Ulloa, Valentina G; Aguilar-Parra, José M; Araya-Véliz, Claudio; Brito, Gonzalo
2016-03-31
Recent studies have associated positive emotions with several variables such as learning, coping strategies or assertive behaviour. The concept of gratitude has been specifically defined as a tendency to recognise and respond to people or situations with grateful emotion. Unfortunately, no validated measures of gratitude for different populations are available in Latin America. The aim of this study was to analyse the psychometric properties of the Gratitude Questionnaire (GQ-6) in two Chilean samples. Two studies were conducted: the first with 668 high school adolescents (390 women and 278 men, with ages ranging between 12 and 20 and a mean age of 15.54 ± 1.22) and the second with 331 adults (231 women and 100 men, with an average age of 37.59 ± 12.6). An analysis of the psychometric properties of the GQ-6 scale was performed to determine the validity and reliability of the instrument in Chilean adolescents and adults. Bivariate correlations, multiple regression analyses, exploratory factor analysis (EFA) and Monte Carlo simulations were carried out. Finally, a confirmatory factor analysis (CFA) was performed. A single-factor solution was found in both studies: a 5-item version for adolescents and a 6-item version for adults. This factorial solution was invariant across genders. Reliability of the GQ was adequate in both samples (using Cronbach's alpha coefficient). In addition, convergent and discriminant validity were assessed. Furthermore, a negative correlation between the GQ-5 and depression in adolescents and a positive correlation between the GQ-6 and happiness in adults were found. The GQ is a suitable measure for evaluating a person's disposition toward gratitude in Chilean adolescents and adults. This instrument may contribute to the advancement of the study of positive emotions in Latin America.
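As a hedged aside, the Cronbach's alpha coefficient mentioned above has a one-line formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with an invented response matrix, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(2)
trait = rng.normal(size=(300, 1))                          # latent "gratitude" level
responses = trait + rng.normal(scale=0.8, size=(300, 6))   # six noisy items
print(f"alpha = {cronbach_alpha(responses):.2f}")          # ~0.9 for this toy model
```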
NASA Astrophysics Data System (ADS)
Li, Tianjun; Maxin, James A.; Nanopoulos, Dimitri V.; Walker, Joel W.
2012-02-01
We suggest that non-trivial correlations between the dark matter particle mass and collider-based probes of missing transverse energy H_T^miss may facilitate a two-tiered approach to the initial discovery of supersymmetry and the subsequent reconstruction of the lightest supersymmetric particle (LSP) mass at the LHC. These correlations are demonstrated via extensive Monte Carlo simulation of seventeen benchmark models, each sampled at five distinct LHC center-of-mass beam energies, spanning the parameter space of No-Scale F-SU(5). This construction is defined in turn by the union of the flipped SU(5) Grand Unified Theory, two pairs of hypothetical TeV-scale vector-like supersymmetric multiplets with origins in F-theory, and the dynamically established boundary conditions of No-Scale Supergravity. In addition, we consider a control sample comprised of a standard minimal Supergravity benchmark point. Led by a striking similarity between the H_T^miss distribution and the familiar power spectrum of a black-body radiator at various temperatures, we implement a broad empirical fit of our simulation against a Poisson distribution ansatz. We advance the resulting fit as a theoretical blueprint for deducing the mass of the LSP, utilizing only the missing transverse energy in a statistical sampling of ≥ 9 jet events. Cumulative uncertainties central to the method subsist at a satisfactory 12-15% level. The fact that the supersymmetric particle spectrum of No-Scale F-SU(5) has survived the withering onslaught of early LHC data that is steadily decimating the parameter spaces of the Constrained Minimal Supersymmetric Standard Model and minimal Supergravity is a prime motivation for augmenting more conventional LSP search methodologies with the presently proposed alternative.
Lynn, J. E.; Tews, I.; Carlson, J.; ...
2017-11-30
Local chiral effective field theory interactions have recently been developed and used in the context of quantum Monte Carlo few- and many-body methods for nuclear physics. In this paper, we go over detailed features of local chiral nucleon-nucleon interactions and examine their effect on properties of the deuteron, paying special attention to the perturbativeness of the expansion. We then turn to three-nucleon interactions, focusing on operator ambiguities and their interplay with regulator effects. We then discuss the nuclear Green's function Monte Carlo method, going over both wave-function correlations and approximations for the two- and three-body propagators. Finally, we present a range of results on light nuclei: binding energies and distribution functions are contrasted and compared, starting from several different microscopic interactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malone, Fionn D., E-mail: f.malone13@imperial.ac.uk; Lee, D. K. K.; Foulkes, W. M. C.
The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum-weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.
Stochastic Investigation of Natural Frequency for Functionally Graded Plates
NASA Astrophysics Data System (ADS)
Karsh, P. K.; Mukhopadhyay, T.; Dey, S.
2018-03-01
This paper presents the stochastic natural frequency analysis of functionally graded material (FGM) plates using an artificial neural network (ANN) approach. Latin hypercube sampling is utilised to train the ANN model. The proposed algorithm for stochastic natural frequency analysis of FGM plates is validated and verified against the original finite element method and Monte Carlo simulation (MCS). The combined stochastic variation of input parameters such as elastic modulus, shear modulus, Poisson ratio and mass density is considered. A power law is applied to distribute the material properties across the thickness. The present ANN model reduces the sample size and is found to be computationally efficient compared to conventional Monte Carlo simulation.
Short time-scale optical variability properties of the largest AGN sample observed with Kepler/K2
NASA Astrophysics Data System (ADS)
Aranzana, E.; Körding, E.; Uttley, P.; Scaringi, S.; Bloemen, S.
2018-05-01
We present the first short time-scale (˜hours to days) optical variability study of a large sample of active galactic nuclei (AGNs) observed with the Kepler/K2 mission. The sample contains 252 AGN observed over four campaigns with ˜30 min cadence, selected from the Million Quasar Catalogue with R magnitude <19. We performed time series analysis to determine their variability properties by means of the power spectral densities (PSDs) and applied Monte Carlo techniques to find the best model parameters that fit the observed power spectra. A power-law model is sufficient to describe all the PSDs of our sample. A variety of power-law slopes were found, indicating that there is not a universal slope for all AGNs. We find that the rest-frame amplitude variability in the frequency range of 6 × 10⁻⁶-10⁻⁴ Hz varies from 1 to 10 per cent with an average of 1.7 per cent. We explore correlations between the variability amplitude and key parameters of the AGN, finding a significant correlation of rest-frame short-term variability amplitude with redshift. We attribute this effect to the known `bluer when brighter' variability of quasars combined with the fixed bandpass of Kepler data. This study also enables us to distinguish between Seyferts and blazars and confirm AGN candidates. For our study, we have compared results obtained from light curves extracted using different aperture sizes and with and without detrending. We find that limited detrending of the optimal photometric precision light curve is the best approach, although some systematic effects remain present.
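A stripped-down version of the PSD slope estimation described above (the authors' Monte Carlo fitting accounts for distortions such as red-noise leak that this sketch ignores) can be written as a log-log regression on the periodogram; the light curve here is synthetic:

```python
import numpy as np
from scipy.signal import periodogram

dt = 30 * 60.0                               # ~30 min cadence, in seconds
rng = np.random.default_rng(3)
flux = np.cumsum(rng.standard_normal(4000))  # random walk -> PSD slope near -2

freq, psd = periodogram(flux, fs=1.0 / dt)
mask = freq > 0                              # drop the zero-frequency bin
slope, intercept = np.polyfit(np.log10(freq[mask]), np.log10(psd[mask]), 1)
print(f"fitted power-law slope: {slope:.2f}")
```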
Overy, Catherine; Booth, George H; Blunt, N S; Shepherd, James J; Cleland, Deidre; Alavi, Ali
2014-12-28
Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems.
Using the Markov chain Monte Carlo method to study the physical properties of GeV-TeV BL Lac objects
NASA Astrophysics Data System (ADS)
Qin, Longhua; Wang, Jiancheng; Yang, Chuyuan; Yuan, Zunli; Mao, Jirong; Kang, Shiju
2018-01-01
We fit the spectral energy distributions (SEDs) of 46 GeV-TeV BL Lac objects in the frame of the leptonic one-zone synchrotron self-Compton (SSC) model and investigate the physical properties of these objects. We use the Markov chain Monte Carlo (MCMC) method to obtain the basic parameters, such as the magnetic field (B), the break energy of the relativistic electron distribution (γ'_b), and the electron energy spectral index. Based on the modeling results, we support the following scenarios for GeV-TeV BL Lac objects. (1) Some sources have large Doppler factors, implying that other radiation mechanisms should be considered. (2) Compared with flat spectrum radio quasars (FSRQs), GeV-TeV BL Lac objects have weaker magnetic fields and larger Doppler factors, which cause ineffective cooling and shift the SEDs to higher bands. Their jet powers are around 4.0 × 10^45 erg s⁻¹, compared with a radiation power of 5.0 × 10^42 erg s⁻¹, indicating that only a small fraction of the jet power is transformed into emission power. (3) For some BL Lacs with large Doppler factors, the jet could comprise two substructures, e.g., a fast core and a slow sheath. For most GeV-TeV BL Lacs, Kelvin-Helmholtz instabilities are suppressed by their higher magnetic fields, leading to micro-variability or intra-day variability in the optical bands. (4) Combined with a sample of FSRQs, an anti-correlation between the peak luminosity, L_pk, and the peak frequency, ν_pk, is obtained, favoring the blazar sequence scenario. In addition, an anti-correlation between the jet power, P_jet, and the break Lorentz factor, γ_b, also supports the blazar sequence.
Computational Studies of Strongly Correlated Quantum Matter
NASA Astrophysics Data System (ADS)
Shi, Hao
The study of strongly correlated quantum many-body systems is an outstanding challenge. Highly accurate results are needed for the understanding of practical and fundamental problems in condensed-matter physics, high-energy physics, materials science, quantum chemistry and so on. Familiar mean-field or perturbative methods tend to be ineffective. Numerical simulations provide a promising approach for studying such systems. The fundamental difficulty of numerical simulation is that the dimension of the Hilbert space needed to describe interacting systems increases exponentially with the system size. Quantum Monte Carlo (QMC) methods are among the best approaches to tackling the problem of the enormous Hilbert space. They have been highly successful for boson systems and unfrustrated spin models. For systems with fermions, however, the exchange symmetry in general causes the infamous sign problem, making the statistical noise in the computed results grow exponentially with the system size. This hinders our understanding of interesting physics such as high-temperature superconductivity and the metal-insulator phase transition. In this thesis, we present a variety of new developments in auxiliary-field quantum Monte Carlo (AFQMC) methods, including the incorporation of symmetry in both the trial wave function and the projector, the development of the constraint-release method, the use of the force bias to drastically improve efficiency in the Metropolis framework, the identification and solution of the infinite variance problem, and the sampling of Hartree-Fock-Bogoliubov wave functions. With these developments, some of the most challenging many-electron problems are now under control. We obtain an exact numerical solution of the two-dimensional strongly interacting Fermi atomic gas, determine the ground-state properties of the 2D Fermi gas with Rashba spin-orbit coupling, provide benchmark results for the ground state of the two-dimensional Hubbard model, and establish that the Hubbard model has a stripe order in the underdoped region.
NASA Astrophysics Data System (ADS)
Tarim, Urkiye Akar; Ozmutlu, Emin N.; Yalcin, Sezai; Gundogdu, Ozcan; Bradley, D. A.; Gurler, Orhan
2017-11-01
A Monte Carlo method was developed to investigate radiation shielding properties of bismuth borate glass. The mass attenuation coefficients and half-value layer parameters were determined for different fractional amounts of Bi2O3 in the glass samples for the 356, 662, 1173 and 1332 keV photon energies. A comparison of the theoretical and experimental attenuation coefficients is presented.
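For reference, the two reported quantities are linked by simple relations: the linear attenuation coefficient is mu = (mu/rho) * rho and the half-value layer is HVL = ln(2)/mu. A short sketch with placeholder values (not the measured coefficients of the study):

```python
import math

mu_over_rho = 0.09   # cm^2/g at some photon energy (placeholder value)
rho = 5.2            # g/cm^3, hypothetical bismuth borate glass density

mu = mu_over_rho * rho             # linear attenuation coefficient, 1/cm
hvl = math.log(2) / mu             # half-value layer, cm
transmitted = math.exp(-mu * 2.0)  # Beer-Lambert transmission through 2 cm
print(f"mu = {mu:.3f} 1/cm, HVL = {hvl:.2f} cm, T(2 cm) = {transmitted:.3f}")
```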
A smart Monte Carlo procedure for production costing and uncertainty analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, C.; Stremel, J.
1996-11-01
Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake--often $100,000 a day or more--and one party of the sale must always take on the risk. In the case of fixed price ($/MWh) contracts, the seller accepts the risk. In the case of cost plus contracts, the buyer must accept the risk. So, modeling uncertainty and understanding the risk accurately can improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit outage selection methods and measure their results. Specifically, a brute force Monte Carlo procedure, a Monte Carlo procedure with Latin Hypercube sampling, and a Smart Monte Carlo procedure with cost stratification and directed sampling are examined.
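A minimal sketch of the contrast between the first two unit-outage selection methods, brute-force Monte Carlo versus Latin hypercube draws (the cost stratification of the smart procedure is not reproduced here, and the forced outage rates are invented):

```python
import numpy as np
from scipy.stats import qmc

forced_outage_rate = np.array([0.05, 0.08, 0.03, 0.10])  # four hypothetical units
n = 1000

rng = np.random.default_rng(4)
u_brute = rng.random((n, 4))                        # plain pseudo-random draws
u_lhs = qmc.LatinHypercube(d=4, seed=4).random(n)   # stratified (LHS) draws

for name, u in [("brute force", u_brute), ("Latin hypercube", u_lhs)]:
    outages = u < forced_outage_rate                # True = unit on outage
    print(name, "-> mean units out per draw:", outages.sum(axis=1).mean())
```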
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, J. Jr.
1977-12-01
Four H2O-moderated, slightly-enriched-uranium critical experiments were analyzed by Monte Carlo methods with ENDF/B-IV data. These were simple metal-rod lattices comprising Cross Section Evaluation Working Group (CSEWG) thermal reactor benchmarks TRX-1 through TRX-4. Generally good agreement with experiment was obtained for calculated integral parameters: the epithermal/thermal ratio of U238 capture (ρ28) and of U235 fission (δ25), the ratio of U238 capture to U235 fission (CR*), and the ratio of U238 fission to U235 fission (δ28). Full-core Monte Carlo calculations for two lattices showed good agreement with cell Monte Carlo plus multigroup P_l leakage corrections. Newly measured parameters for the low-energy resonances of U238 significantly improved ρ28. In comparison with other CSEWG analyses, the strong correlation between K_eff and ρ28 suggests that U238 resonance capture is the major problem encountered in analyzing these lattices.
Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin
2015-01-01
The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
Improving the sampling efficiency of Monte Carlo molecular simulations: an evolutionary approach
NASA Astrophysics Data System (ADS)
Leblanc, Benoit; Braunschweig, Bertrand; Toulhoat, Hervé; Lutton, Evelyne
We present a new approach to improving the convergence of Monte Carlo (MC) simulations of molecular systems belonging to complex energetic landscapes: the problem is redefined in terms of the dynamic allocation of MC move frequencies depending on their past efficiency, measured with respect to a relevant sampling criterion. We introduce various empirical criteria with the aim of accounting for proper convergence in phase-space sampling. The dynamic allocation is performed over parallel simulations by means of a new evolutionary algorithm involving 'immortal' individuals. The method is benchmarked with respect to conventional procedures on a model for melt linear polyethylene. We record significant improvements in sampling efficiency, and thus in computational load, while the optimal sets of move frequencies allow interesting physical insights into the particular systems simulated. This last aspect should provide a new tool for designing more efficient new MC moves.
Monte Carlo analysis of tagged neutron beams for cargo container inspection.
Pesente, S; Lunardon, M; Nebbia, G; Viesti, G; Sudac, D; Valkovic, V
2007-12-01
Fast neutrons produced via D+T reactions and tagged by the associated particle technique have been recently proposed to inspect cargo containers. The general characteristics of this technique are studied with Monte Carlo simulations by determining the properties of the tagged neutron beams as a function of the relevant design parameters (energy and size of the deuteron beam, geometry of the charged particle detector). Results from simulations, validated by experiments, show that the broadening of the correlation between the alpha-particle and the neutron, induced by kinematical as well as geometrical (beam and detector size) effects, is important and limits the dimension of the minimum voxel to be inspected. Moreover, the effect of the container filling is explored. The material filling produces a sizeable loss of correlation between alpha-particles and neutrons due to scattering and absorption. Conditions in inspecting cargo containers are discussed.
Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...
2018-01-30
This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction-diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9 × 10⁹ unknowns.
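The MLMC idea itself fits in a few lines: the expectation on the finest level is written as a telescoping sum, E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with most samples spent on the cheap coarse levels. A toy sketch with a stand-in model function rather than the paper's PDE solve:

```python
import numpy as np

rng = np.random.default_rng(5)

def model(omega, level):
    """Toy level-l model: the bias term decays as the level is refined."""
    return np.sin(omega) + 2.0 ** (-level) * np.cos(3 * omega)

def mlmc_estimate(n_per_level):
    est = 0.0
    for level, n in enumerate(n_per_level):
        omega = rng.standard_normal(n)          # same inputs couple the levels
        fine = model(omega, level)
        coarse = model(omega, level - 1) if level > 0 else 0.0
        est += np.mean(fine - coarse)           # one term of the telescoping sum
    return est

print(mlmc_estimate([4000, 1000, 250]))         # fewer samples on finer levels
```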
Monte Carlo tests of the ELIPGRID-PC algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, J.R.
1995-04-01
The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
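A simplified Monte Carlo experiment in the same spirit (illustrative geometry, not the ELIPGRID test cases) estimates the probability that a square sampling grid detects a randomly placed, randomly oriented elliptical hot spot:

```python
import numpy as np

rng = np.random.default_rng(6)
grid = 10.0              # grid spacing (same unit as the ellipse axes)
a, b = 6.0, 2.0          # hot-spot semi-axes; a < grid, so only the four
n_trials = 20_000        # corners of the enclosing cell can be hit
corners = np.array([(x, y) for x in (0.0, grid) for y in (0.0, grid)])
hits = 0

for _ in range(n_trials):
    cx, cy = rng.uniform(0, grid, size=2)   # hot-spot centre within one cell
    theta = rng.uniform(0, np.pi)           # random orientation
    dx, dy = corners[:, 0] - cx, corners[:, 1] - cy
    u = dx * np.cos(theta) + dy * np.sin(theta)    # rotate into ellipse frame
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    if np.any((u / a) ** 2 + (v / b) ** 2 <= 1.0): # any grid node inside?
        hits += 1

print(f"estimated detection probability: {hits / n_trials:.3f}")
```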
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on the development of variance reduction strategies to estimate rare-event probabilities in biochemical systems. Obtaining such probabilities using brute-force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare-event estimation in biochemical systems. The main idea is that the rare-event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
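A minimal subset simulation sketch on a toy Gaussian problem (not a biochemical network) shows the mechanics: the rare-event probability is accumulated as a product of conditional level probabilities, with a component-wise modified Metropolis step repopulating each level:

```python
import numpy as np

rng = np.random.default_rng(7)

def g(x):
    """Toy limit-state function: standard normal under the prior."""
    return x.mean(axis=-1) * np.sqrt(x.shape[-1])

def subset_simulation(dim=10, n=2000, p0=0.1, c=4.0, max_levels=10):
    x = rng.standard_normal((n, dim))
    prob = 1.0
    for _ in range(max_levels):
        y = g(x)
        thresh = np.quantile(y, 1 - p0)
        if thresh >= c:                       # final level reached
            return prob * np.mean(y > c)
        prob *= p0                            # one conditional-level factor
        seeds = x[y > thresh]
        chains = []
        per_seed = int(np.ceil(n / len(seeds)))
        for s in seeds:                       # modified Metropolis from each seed
            cur = s.copy()
            for _ in range(per_seed):
                cand = cur.copy()
                j = rng.integers(dim)
                prop = cur[j] + rng.standard_normal()
                if rng.random() < np.exp(0.5 * (cur[j] ** 2 - prop ** 2)):
                    cand[j] = prop            # accepted in the N(0,1) marginal
                if g(cand) > thresh:          # reject moves leaving the level set
                    cur = cand
                chains.append(cur.copy())
        x = np.array(chains[:n])
    return prob * np.mean(g(x) > c)

print(f"P(g > 4) ~ {subset_simulation():.1e}   (exact: 3.2e-05)")
```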
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altsybeev, Igor
2016-01-22
In the present work, a Monte Carlo toy model with repulsing quark-gluon strings in hadron-hadron collisions is described. String repulsion creates transverse boosts for the string decay products, modifying observables. As an example, long-range correlations between the mean transverse momenta of particles in two observation windows are studied in an MC toy simulation of heavy-ion collisions.
Communication: Water on hexagonal boron nitride from diffusion Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Hamdani, Yasmine S.; Ma, Ming; Michaelides, Angelos, E-mail: angelos.michaelides@ucl.ac.uk
2015-05-14
Despite a recent flurry of experimental and simulation studies, an accurate estimate of the interaction strength of water molecules with hexagonal boron nitride is lacking. Here, we report quantum Monte Carlo results for the adsorption of a water monomer on a periodic hexagonal boron nitride sheet, which yield a water monomer interaction energy of −84 ± 5 meV. We use the results to evaluate the performance of several widely used density functional theory (DFT) exchange correlation functionals and find that they all deviate substantially. Differences in interaction energies between different adsorption sites are however better reproduced by DFT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graf, Peter A.; Stewart, Gordon; Lackner, Matthew
Long-term fatigue loads for floating offshore wind turbines are hard to estimate because they require the evaluation of the integral of a highly nonlinear function over a wide variety of wind and wave conditions. Current design standards involve scanning over a uniform rectangular grid of metocean inputs (e.g., wind speed and direction and wave height and period), which becomes intractable in high dimensions as the number of required evaluations grows exponentially with dimension. Monte Carlo integration offers a potentially efficient alternative because it has theoretical convergence proportional to the inverse of the square root of the number of samples, which is independent of dimension. In this paper, we first report on the integration of the aeroelastic code FAST into NREL's systems engineering tool, WISDEM, and the development of a high-throughput pipeline capable of sampling from arbitrary distributions, running FAST on a large scale, and postprocessing the results into estimates of fatigue loads. Second, we use this tool to run a variety of studies aimed at comparing grid-based and Monte Carlo-based approaches for calculating long-term fatigue loads. We observe that for more than a few dimensions, the Monte Carlo approach can represent a large improvement in computational efficiency, but that as nonlinearity increases, the effectiveness of Monte Carlo is correspondingly reduced. The present work sets the stage for future research focusing on using advanced statistical methods for analysis of wind turbine fatigue as well as extreme loads.
Tennant, Marc; Kruger, Estie
2013-02-01
This study developed a Monte Carlo simulation approach to examining the prevalence and incidence of dental decay, using Australian children as a test environment. Monte Carlo simulation has been used for half a century in particle physics (and elsewhere); put simply, population-level outcome probabilities are used to seed the random generation of individual-level data. A total of five runs of the simulation model for all 275,000 12-year-olds in Australia were completed based on 2005-2006 data. Measured on average decayed/missing/filled teeth (DMFT) and the DMFT of the highest 10% of the sample (SiC10), the runs did not differ from each other by more than 2%, and the outcome was within 5% of the reported sampled population data. The simulations rested on the population probabilities that are known to be strongly linked to dental decay, namely socio-economic status and Indigenous heritage. Testing the simulated population found a DMFT of 2.3 for all cases where DMFT ≠ 0 (n = 128,609) and a DMFT of 1.9 for Indigenous cases only (n = 13,749). In the simulated population the SiC25 was 3.3 (n = 68,750). Monte Carlo simulation was created in particle physics as a computational mathematical approach to unknown individual-level effects by resting a simulation on known population-level probabilities. In this study, a Monte Carlo simulation approach to childhood dental decay was built, tested and validated. © 2013 FDI World Dental Federation.
Monte Carlo methods for multidimensional integration for European option pricing
NASA Astrophysics Data System (ADS)
Todorov, V.; Dimov, I. T.
2016-10-01
In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European-style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; the average of independent samples of this random variable is then used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk-neutral valuation formula. Then, with an appropriate change of variables, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule with one of the best low-discrepancy sequences, that of Sobol, for numerical integration. Quasi-Monte Carlo methods are compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is a question of interest which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
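A compact sketch of the comparison for a one-dimensional example, a European call under risk-neutral Black-Scholes dynamics, contrasting crude Monte Carlo with scrambled Sobol quasi-Monte Carlo (parameter values are arbitrary examples):

```python
import numpy as np
from scipy.stats import norm, qmc

s0, k, r, sigma, t = 100.0, 105.0, 0.03, 0.2, 1.0
n = 2 ** 14

def payoff(z):
    """Discounted call payoff under risk-neutral GBM, driven by N(0,1) draws."""
    st = s0 * np.exp((r - 0.5 * sigma ** 2) * t + sigma * np.sqrt(t) * z)
    return np.exp(-r * t) * np.maximum(st - k, 0.0)

# Crude Monte Carlo: pseudo-random standard normals
z_mc = np.random.default_rng(8).standard_normal(n)

# Quasi-Monte Carlo: Sobol points mapped through the inverse normal CDF
u = qmc.Sobol(d=1, scramble=True, seed=8).random(n).ravel()
z_qmc = norm.ppf(u)

# Closed-form Black-Scholes value for reference
d1 = (np.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * np.sqrt(t))
bs = s0 * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d1 - sigma * np.sqrt(t))
print(f"Black-Scholes {bs:.4f}  MC {payoff(z_mc).mean():.4f}  QMC {payoff(z_qmc).mean():.4f}")
```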
TASEP of interacting particles of arbitrary size
NASA Astrophysics Data System (ADS)
Narasimhan, S. L.; Baumgaertner, A.
2017-10-01
A mean-field description of the stationary-state behaviour of interacting k-mers performing totally asymmetric exclusion processes (TASEP) on an open lattice segment is presented, employing the discrete Takahashi formalism. It is shown how the maximal current and the phase diagram, including triple points, depend on the strength of repulsive and attractive interactions. We compare the mean-field results with Monte Carlo simulations of three types of interacting k-mers: monomers, dimers and trimers. (a) We find that the Takahashi estimates of the maximal current agree quantitatively with those of the Monte Carlo simulation in the absence of interaction as well as in both the attractive and the strongly repulsive regimes. However, theory and Monte Carlo results disagree in the range of weak repulsion, where the Takahashi estimates of the maximal current show a monotonic behaviour, whereas the Monte Carlo data show a peaking behaviour. It is argued that the peaking of the maximal current is due to correlated motion of the particles. In the limit of very strong repulsion the theory predicts a universal behaviour: the maximal currents of k-mers correspond to those of non-interacting (k+1)-mers. (b) Monte Carlo estimates of the triple points for monomers, dimers and trimers show an interesting general behaviour: (i) the phase boundaries α* and β* for the entry and exit current, respectively, as functions of interaction strength show maxima for α*, whereas β* exhibits minima at the same strength; (ii) in the attractive regime, however, the trend is reversed (β* > α*). The Takahashi estimates of the triple point for monomers show a similar trend to the Monte Carlo data except for the peaking of α*; for dimers and trimers, however, the Takahashi estimates show the opposite trend to the Monte Carlo data.
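For orientation, the non-interacting monomer case (k = 1) of the model above reduces to the standard open-boundary TASEP, which is easy to simulate directly; with entry and exit rates above 1/2 the measured current should approach the maximal-current value of 1/4:

```python
import numpy as np

rng = np.random.default_rng(9)
L, alpha, beta = 100, 0.7, 0.7
lattice = np.zeros(L, dtype=int)
sweeps, measure_from, exits = 20_000, 5_000, 0

for sweep in range(sweeps):
    for _ in range(L + 1):                   # one sweep = L+1 bond updates
        i = rng.integers(-1, L)              # -1 denotes the entry bond
        if i == -1:
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1               # particle injected at the left
        elif i == L - 1:
            if lattice[-1] == 1 and rng.random() < beta:
                lattice[-1] = 0              # particle leaves at the right
                if sweep >= measure_from:
                    exits += 1
        elif lattice[i] == 1 and lattice[i + 1] == 0:
            lattice[i], lattice[i + 1] = 0, 1  # hop to the right

j = exits / (sweeps - measure_from)          # exits per unit time (per sweep)
print(f"measured current {j:.3f}; maximal-current prediction 0.25")
```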
RNA folding kinetics using Monte Carlo and Gillespie algorithms.
Clote, Peter; Bayegan, Amir H
2018-04-01
RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny (∼20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to ⟨D⟩ times that of the Gillespie algorithm, where ⟨D⟩ denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to the MFPT computed by the Gillespie algorithm multiplied by ⟨D⟩; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics, as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence; see http://bioinformatics.bc.edu/clote/RNAexpNumNbors .
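A minimal Gillespie sketch on a toy birth-death network (not RNA folding) highlights the feature at issue above: each Gillespie step advances time by an exponential waiting time governed by the total exit rate of the current state, unlike a fixed-cost Monte Carlo step:

```python
import numpy as np

rng = np.random.default_rng(10)

def gillespie_birth_death(x0=0, birth=1.0, death=0.1, t_end=100.0):
    t, x, traj = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        rates = np.array([birth, death * x])   # exit rates of the current state
        total = rates.sum()
        t += rng.exponential(1.0 / total)      # waiting time ~ Exp(total rate)
        x += 1 if rng.random() < rates[0] / total else -1
        traj.append((t, x))
    return traj

traj = gillespie_birth_death()
print("final (time, state):", traj[-1], " stationary mean = birth/death = 10")
```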
ON THE CLUSTERING OF SUBMILLIMETER GALAXIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Christina C.; Giavalisco, Mauro; Yun, Min S.
2011-06-01
We measure the angular two-point correlation function of submillimeter galaxies (SMGs) from 1.1 mm imaging of the COSMOS field with the AzTEC camera and ASTE 10 m telescope. These data yield one of the largest contiguous samples of SMGs to date, covering an area of 0.72 deg² down to a 1.26 mJy beam⁻¹ (1σ) limit, including 189 (328) sources with S/N ≥ 3.5 (3). We can only set upper limits to the correlation length r₀, modeling the correlation function as a power law with pre-assigned slope. Assuming existing redshift distributions, we derive 68.3% confidence-level upper limits of r₀ ≲ 6-8 h⁻¹ Mpc at 3.7 mJy and r₀ ≲ 11-12 h⁻¹ Mpc at 4.2 mJy. Although consistent with most previous estimates, these upper limits imply that the real r₀ is likely smaller. This casts doubts on the robustness of claims that SMGs are characterized by significantly stronger spatial clustering (and thus larger mass) than differently selected galaxies at high redshift. Using Monte Carlo simulations we show that even strongly clustered distributions of galaxies can appear unclustered when sampled with the limited sensitivity and coarse angular resolution common to current submillimeter surveys. The simulations, however, also show that unclustered distributions can appear strongly clustered under these circumstances. From the simulations, we predict that at our survey depth, a mapped area of 2 deg² is needed to reconstruct the correlation function, assuming the smaller beam sizes of future surveys (e.g., the Large Millimeter Telescope's 6 arcsec beam size). At present, robust measures of the clustering strength of bright SMGs appear to be below the reach of most observations.
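For concreteness, the angular two-point correlation function is commonly estimated with the Landy-Szalay estimator, w(theta) = (DD - 2DR + RR) / RR, from normalized data-data, data-random and random-random pair counts. A flat-sky toy sketch with synthetic positions (not the survey pipeline used above):

```python
import numpy as np
from scipy.spatial.distance import pdist, cdist

rng = np.random.default_rng(11)
data = rng.uniform(0, 0.85, size=(300, 2))      # toy "source" positions (deg)
rand = rng.uniform(0, 0.85, size=(3000, 2))     # random comparison catalogue

bins = np.linspace(0.01, 0.2, 9)                # angular separation bins (deg)
dd, _ = np.histogram(pdist(data), bins)
rr, _ = np.histogram(pdist(rand), bins)
dr, _ = np.histogram(cdist(data, rand).ravel(), bins)

nd, nr = len(data), len(rand)
dd = dd / (nd * (nd - 1) / 2)                   # normalised pair counts
rr = rr / (nr * (nr - 1) / 2)
dr = dr / (nd * nr)
w = (dd - 2 * dr + rr) / rr
print(np.round(w, 3))                           # ~0 for an unclustered field
```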
NASA Astrophysics Data System (ADS)
Furuta, T.; Maeyama, T.; Ishikawa, K. L.; Fukunishi, N.; Fukasaku, K.; Takagi, S.; Noda, S.; Himeno, R.; Hayashi, S.
2015-08-01
In this research, we used a 135 MeV/nucleon carbon-ion beam to irradiate a biological sample composed of fresh chicken meat and bones, which was placed in front of a PAGAT gel dosimeter, and compared the measured and simulated transverse-relaxation-rate (R2) distributions in the gel dosimeter. We experimentally measured the three-dimensional R2 distribution, which records the dose induced by particles penetrating the sample, by using magnetic resonance imaging. The obtained R2 distribution reflected the heterogeneity of the biological sample. We also conducted Monte Carlo simulations using the PHITS code by reconstructing the elemental composition of the biological sample from its computed tomography images while taking into account the dependence of the gel response on the linear energy transfer. The simulation reproduced the experimental distal edge structure of the R2 distribution with an accuracy under about 2 mm, which is approximately the same as the voxel size currently used in treatment planning.
NASA Astrophysics Data System (ADS)
Frank, Stefan; Rikvold, Per Arne
2006-06-01
The influence of lateral adsorbate diffusion on the dynamics of the first-order phase transition in a two-dimensional Ising lattice gas with attractive nearest-neighbor interactions is investigated by means of kinetic Monte Carlo simulations. For example, electrochemical underpotential deposition proceeds by this mechanism. One major difference from adsorption in vacuum surface science is that under control of the electrode potential and in the absence of mass-transport limitations, local adsorption equilibrium is approximately established. We analyze our results using the theory of Kolmogorov, Johnson and Mehl, and Avrami (KJMA), which we extend to an exponentially decaying nucleation rate. Such a decay may occur due to a suppression of nucleation around existing clusters in the presence of lateral adsorbate diffusion. Correlation functions prove the existence of such exclusion zones. By comparison with microscopic results for the nucleation rate I and the interface velocity of the growing clusters v, we can show that the KJMA theory yields the correct order of magnitude for Iv2. This is true even though the spatial correlations mediated by diffusion are neglected. The decaying nucleation rate causes a gradual crossover from continuous to instantaneous nucleation, which is complete when the decay of the nucleation rate is very fast on the time scale of the phase transformation. Hence, instantaneous nucleation can be homogeneous, producing negative minima in the two-point correlation functions. We also present in this paper an n-fold way Monte Carlo algorithm for a square lattice gas with adsorption/desorption and lateral diffusion.
Nonlinear Spatial Inversion Without Monte Carlo Sampling
NASA Astrophysics Data System (ADS)
Curtis, A.; Nawaz, A.
2017-12-01
High-dimensional, nonlinear inverse or inference problems usually have non-unique solutions. The distribution of solutions is described by probability distributions, and these are usually found using Monte Carlo (MC) sampling methods. These take pseudo-random samples of models in parameter space, calculate the probability of each sample given available data and other information, and thus map out high or low probability values of model parameters. However, such methods converge to the solution only as the number of samples tends to infinity; in practice, MC is found to be slow to converge, convergence is not guaranteed to be achieved in finite time, and detection of convergence requires the use of subjective criteria. We propose a method for Bayesian inversion of categorical variables such as geological facies or rock types in spatial problems, which requires no sampling at all. The method uses a 2-D Hidden Markov Model over a grid of cells, where observations represent localized data constraining the model in each cell. The data in our example application are seismic properties such as P- and S-wave impedances or rock density; our model parameters are the hidden states and represent the geological rock types in each cell. The observations at each location are assumed to depend on the facies at that location only - an assumption referred to as 'localized likelihoods'. However, the facies at a location cannot be determined solely by the observation at that location, as it also depends on prior information concerning its correlation with the spatial distribution of facies elsewhere. Such prior information is included in the inversion in the form of a training image which represents a conceptual depiction of the distribution of local geologies that might be expected, but other forms of prior information can be used in the method as desired. The method provides direct (pseudo-analytic) estimates of posterior marginal probability distributions over each variable, so these do not need to be estimated from samples as is required in MC methods. On a 2-D test example the method is shown to outperform previous methods significantly, and at a fraction of the computational cost. In many foreseeable applications there are therefore no serious impediments to extending the method to 3-D spatial models.
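The "no sampling at all" claim rests on exact recursions. As a hedged illustration, the sketch below runs the standard forward-backward recursion on a 1-D chain of categorical facies, which yields exact posterior marginals pseudo-analytically; the paper's 2-D model with a training-image prior is more involved, and the transition matrix and likelihoods here are invented placeholders.

import numpy as np

# Forward-backward recursion on a 1-D chain of categorical "facies" states.
# Transition matrix A encodes prior spatial continuity; lik[i, k] is the
# "localized likelihood" p(observation in cell i | facies k), here random.

rng = np.random.default_rng(0)
n_cells, n_facies = 50, 3
A = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])          # facies tend to persist
pi0 = np.full(n_facies, 1.0 / n_facies)
lik = rng.uniform(0.1, 1.0, size=(n_cells, n_facies))

# Forward pass (filtering), normalized per cell for numerical stability.
alpha = np.zeros((n_cells, n_facies))
alpha[0] = pi0 * lik[0]
alpha[0] /= alpha[0].sum()
for i in range(1, n_cells):
    alpha[i] = (alpha[i - 1] @ A) * lik[i]
    alpha[i] /= alpha[i].sum()

# Backward pass.
beta = np.ones((n_cells, n_facies))
for i in range(n_cells - 2, -1, -1):
    beta[i] = A @ (lik[i + 1] * beta[i + 1])
    beta[i] /= beta[i].sum()

# Exact posterior marginals, no sampling involved.
posterior = alpha * beta
posterior /= posterior.sum(axis=1, keepdims=True)
print("posterior marginals, first 5 cells:\n", posterior[:5].round(3))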
An Efficient MCMC Algorithm to Sample Binary Matrices with Fixed Marginals
ERIC Educational Resources Information Center
Verhelst, Norman D.
2008-01-01
Uniform sampling of binary matrices with fixed margins is known as a difficult problem. Two classes of algorithms to sample from a distribution not too different from the uniform are studied in the literature: importance sampling and Markov chain Monte Carlo (MCMC). Existing MCMC algorithms converge slowly, require a long burn-in period and yield…
Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.
Sheppard, C W.
1969-03-01
A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
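A sketch in the spirit of the program outlined above, assuming a 1-D unit-step walk with drift; the particle count, step count and drift are arbitrary illustrative values, and the printed moments are compared against the elementary theoretical predictions.

import numpy as np

# Model-sampling simulation of a 1-D random walk with drift. A reflecting
# boundary at x = 0 could be added with x = np.abs(x), and absorption by
# masking out particles that cross the boundary.

rng = np.random.default_rng(1)
n_particles, n_steps, drift = 10_000, 500, 0.02
x = np.zeros(n_particles)
for _ in range(n_steps):
    x += drift + rng.choice([-1.0, 1.0], size=n_particles)

# Theory for a free walk: mean = drift * n_steps, variance = n_steps.
print(f"mean = {x.mean():6.2f}   (theory {drift * n_steps:.2f})")
print(f"var  = {x.var():6.1f}   (theory {n_steps})")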
Path-integral Monte Carlo method for Rényi entanglement entropies.
Herdman, C M; Inglis, Stephen; Roy, P-N; Melko, R G; Del Maestro, A
2014-07-01
We introduce a quantum Monte Carlo algorithm to measure the Rényi entanglement entropies in systems of interacting bosons in the continuum. This approach is based on a path-integral ground state method that can be applied to interacting itinerant bosons in any spatial dimension with direct relevance to experimental systems of quantum fluids. We demonstrate how it may be used to compute spatial mode entanglement, particle partitioned entanglement, and the entanglement of particles, providing insights into quantum correlations generated by fluctuations, indistinguishability, and interactions. We present proof-of-principle calculations and benchmark against an exactly soluble model of interacting bosons in one spatial dimension. As this algorithm retains the fundamental polynomial scaling of quantum Monte Carlo when applied to sign-problem-free models, future applications should allow for the study of entanglement entropy in large-scale many-body systems of interacting bosons.
Tringe, J. W.; Ileri, N.; Levie, H. W.; ...
2015-08-01
We use Molecular Dynamics and Monte Carlo simulations to examine molecular transport phenomena in nanochannels, explaining four orders of magnitude difference in wheat germ agglutinin (WGA) protein diffusion rates observed by fluorescence correlation spectroscopy (FCS) and by direct imaging of fluorescently-labeled proteins. We first use the ESPResSo Molecular Dynamics code to estimate the surface transport distance for neutral and charged proteins. We then employ a Monte Carlo model to calculate the paths of protein molecules on surfaces and in the bulk liquid transport medium. Our results show that the transport characteristics depend strongly on the degree of molecular surface coverage. Atomic force microscope characterization of surfaces exposed to WGA proteins for 1000 s shows large protein aggregates consistent with the predicted coverage. These calculations and experiments provide useful insight into the details of molecular motion in confined geometries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller, Florian, E-mail: florian.mueller@sam.math.ethz.ch; Jenny, Patrick, E-mail: jenny@ifd.mavt.ethz.ch; Meyer, Daniel W., E-mail: meyerda@ethz.ch
2013-10-01
Monte Carlo (MC) is a well known method for quantifying uncertainty arising for example in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two-phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two-dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
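The telescoping sum that underlies MLMC is easiest to see on a toy problem. In the sketch below the streamline-based flow solver is replaced by Euler-Maruyama integration of geometric Brownian motion, with coarse and fine levels coupled through shared Brownian increments; all parameter values are toy choices, and a production MLMC would additionally adapt the number of samples per level.

import numpy as np

# Toy multilevel Monte Carlo estimate of E[X_T] for geometric Brownian motion
# dX = mu*X dt + sigma*X dW, using Euler-Maruyama with 2^l steps on level l.

rng = np.random.default_rng(2)
mu, sigma, T, x0 = 0.05, 0.2, 1.0, 1.0

def euler_pair(level, n_paths):
    """Coupled fine/coarse Euler estimates sharing the same Brownian path."""
    nf = 2 ** level
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, nf))
    xf = np.full(n_paths, x0)
    for k in range(nf):
        xf = xf + mu * xf * dt + sigma * xf * dW[:, k]
    if level == 0:
        return xf, np.zeros(n_paths)
    xc = np.full(n_paths, x0)
    dWc = dW[:, 0::2] + dW[:, 1::2]          # coarse increments over 2*dt
    for k in range(nf // 2):
        xc = xc + mu * xc * (2 * dt) + sigma * xc * dWc[:, k]
    return xf, xc

levels, n_paths = 6, 20_000
estimate = 0.0
for l in range(levels + 1):
    fine, coarse = euler_pair(l, n_paths)
    estimate += np.mean(fine - coarse)       # telescoping sum over levels
print(f"MLMC estimate {estimate:.4f}   vs exact {x0 * np.exp(mu * T):.4f}")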
Haghighi Mood, Kaveh; Lüchow, Arne
2017-08-17
Diffusion quantum Monte Carlo calculations with partial and full optimization of the guide function are carried out for the dissociation of the FeS molecule. For the first time, quantum Monte Carlo orbital optimization for transition metal compounds is performed. It is demonstrated that energy optimization of the orbitals of a complete active space wave function in the presence of a Jastrow correlation function is required to obtain agreement with the experimental dissociation energy. Furthermore, it is shown that orbital optimization leads to a ⁵Δ ground state, in agreement with experiments but in disagreement with other high-level ab initio wave function calculations, which all predict a ⁵Σ⁺ ground state. The role of the Jastrow factor in DMC calculations with pseudopotentials is investigated. The results suggest that a large Jastrow factor may improve the DMC accuracy substantially at small additional cost.
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate the minimum-weight shield configuration meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
A Monte Carlo technique for signal level detection in implanted intracranial pressure monitoring.
Avent, R K; Charlton, J D; Nagle, H T; Johnson, R N
1987-01-01
Statistical monitoring techniques like CUSUM, Trigg's tracking signal and EMP filtering have a major advantage over more recent techniques, such as Kalman filtering, because of their inherent simplicity. In many biomedical applications, such as electronic implantable devices, these simpler techniques have greater utility because of the reduced requirements on power, logic complexity and sampling speed. The determination of signal means using some of the earlier techniques is reviewed in this paper, and a new Monte Carlo based method with greater capability to sparsely sample a waveform and obtain an accurate mean value is presented. This technique may find widespread use as a trend detection method when reduced power consumption is a requirement.
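The core idea, estimating a signal mean from sparse random samples, can be sketched directly; the synthetic "pressure" waveform and all sample sizes below are illustrative stand-ins, not the paper's actual Monte Carlo scheme.

import numpy as np

# Sparse Monte Carlo estimation of a signal level (mean), the motivation being
# reduced sampling and power draw in an implanted monitor. The ICP-like
# waveform below is synthetic.

rng = np.random.default_rng(3)
t = np.linspace(0.0, 60.0, 60_000)                  # 60 s at 1 kHz
waveform = 12.0 + 3.0 * np.sin(2 * np.pi * 1.2 * t) \
           + 0.8 * rng.normal(size=t.size)          # periodic signal + noise

full_mean = waveform.mean()
for n in (50, 200, 1000):                           # sparse random sampling
    idx = rng.integers(0, t.size, size=n)
    est = waveform[idx].mean()
    sem = waveform[idx].std(ddof=1) / np.sqrt(n)    # standard error
    print(f"n = {n:5d}  mean = {est:6.3f} +/- {sem:.3f}  (full: {full_mean:.3f})")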
An improved target velocity sampling algorithm for free gas elastic scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Walsh, Jonathan A.
2018-02-03
We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.
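To situate the improvement, here is a hedged sketch of the classical one-rejection free-gas kernel, which assumes a constant scattering cross section; that constant-cross-section approximation is precisely what the paper's direct relative-velocity sampling improves upon, and the extension is not reproduced here. The mixture weights and gamma-variate draws follow the standard derivation; units and the values of vn and beta are arbitrary.

import numpy as np

# Classical free-gas target-velocity sampling with a single rejection step.
# beta = sqrt(m_t / (2 k T)); vn is the neutron speed in the lab frame.

rng = np.random.default_rng(4)

def sample_target(vn, beta, rng):
    """Sample target speed vt and angle cosine mu; returns (vt, mu, vrel)."""
    w1 = vn / (vn + 2.0 / (np.sqrt(np.pi) * beta))   # mixture weight
    while True:
        if rng.random() < w1:
            vt = np.sqrt(rng.gamma(1.5)) / beta      # ~ vt^2 exp(-b^2 vt^2)
        else:
            vt = np.sqrt(rng.gamma(2.0)) / beta      # ~ vt^3 exp(-b^2 vt^2)
        mu = rng.uniform(-1.0, 1.0)
        vrel = np.sqrt(vn * vn + vt * vt - 2.0 * vn * vt * mu)
        if rng.random() < vrel / (vn + vt):          # the single rejection step
            return vt, mu, vrel

vn, beta = 1.0, 2.0
vrels = np.array([sample_target(vn, beta, rng)[2] for _ in range(5000)])
print(f"mean relative speed = {vrels.mean():.3f}")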
Data Analysis Recipes: Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Hogg, David W.; Foreman-Mackey, Daniel
2018-05-01
Markov Chain Monte Carlo (MCMC) methods for sampling probability density functions (combined with abundant computational resources) have transformed the sciences, especially in performing probabilistic inferences, or fitting models to data. In this primarily pedagogical contribution, we give a brief overview of the most basic MCMC method and some practical advice for the use of MCMC in real inference problems. We give advice on method choice, tuning for performance, methods for initialization, tests of convergence, troubleshooting, and use of the chain output to produce or report parameter estimates with associated uncertainties. We argue that autocorrelation time is the most important test for convergence, as it directly connects to the uncertainty on the sampling estimate of any quantity of interest. We emphasize that sampling is a method for doing integrals; this guides our thinking about how MCMC output is best used.
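Since this is a pedagogical piece, a minimal sketch may be useful: a random-walk Metropolis chain on a standard-normal target, followed by a crude integrated-autocorrelation-time estimate, the diagnostic the authors single out. The step size, chain length and truncation rule are illustrative choices.

import numpy as np

# Minimal Metropolis MCMC plus an integrated-autocorrelation-time estimate.

rng = np.random.default_rng(5)
log_target = lambda x: -0.5 * x * x           # standard normal, up to a constant

n_steps, step = 100_000, 1.0
chain = np.empty(n_steps)
x, lp = 0.0, log_target(0.0)
for i in range(n_steps):
    xp = x + step * rng.normal()
    lpp = log_target(xp)
    if np.log(rng.random()) < lpp - lp:       # Metropolis accept/reject
        x, lp = xp, lpp
    chain[i] = x

# Autocorrelation function, summed up to its first negative value.
d = chain - chain.mean()
var = d @ d / d.size
rho = np.array([(d[:d.size - k] @ d[k:]) / ((d.size - k) * var)
                for k in range(500)])
cut = np.argmax(rho < 0.0) if np.any(rho < 0.0) else rho.size
tau = 1.0 + 2.0 * rho[1:cut].sum()
print(f"tau ~ {tau:.1f}; effective sample size ~ {n_steps / tau:.0f}")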
Hocine, Nora; Meignan, Michel; Masset, Hélène
2018-04-01
To better understand the risks of cumulative medical X-ray investigations and the possible causal role of contrast agent on the coronary artery wall, the correlation between iodinated contrast media and the increase of energy deposited in the coronary artery lumen as a function of iodine concentration and photon energy is investigated. The calculations of energy deposition have been performed using Monte Carlo (MC) simulation codes, namely PENetration and Energy LOss of Positrons and Electrons (PENELOPE) and Monte Carlo N-Particle eXtended (MCNPX). Exposures of a cylindrical phantom, an artery and a metal stent (AISI 316L) to several X-ray photon beams were simulated. For the energies used in cardiac imaging, the energy deposited in the coronary artery lumen increases with the quantity of iodine. Monte Carlo calculations indicate a strong dependence of the energy enhancement factor (EEF) on photon energy and iodine concentration. The maximum value of the EEF is 25, observed at 83 keV and 400 mg iodine/mL. No significant impact of the stent is observed on the absorbed dose in the artery for incident X-ray beams with mean energies of 44, 48, 52 and 55 keV. A strong correlation was shown between the increase in the concentration of iodine and the energy deposited in the coronary artery lumen for the energies used in cardiac imaging and over the energy range between 44 and 55 keV. The data provided by this study could be useful for creating new medical imaging protocols to obtain better diagnostic information with a lower level of radiation exposure.
Granato, Enzo
2008-07-11
Phase coherence and vortex order in a Josephson-junction array at irrational frustration are studied by extensive Monte Carlo simulations using the parallel-tempering method. A scaling analysis of the correlation length of phase variables in the fully equilibrated system shows that the critical temperature vanishes with a power-law divergent correlation length and critical exponent ν_ph, in agreement with recent results from resistivity scaling analysis. A similar scaling analysis for vortex variables reveals a different critical exponent ν_v, suggesting that there are two distinct correlation lengths associated with a decoupled zero-temperature phase transition.
Gluonic hot spots and spatial correlations inside the proton
NASA Astrophysics Data System (ADS)
Albacete, Javier L.; Petersen, Hannah; Soto-Ontoso, Alba
2017-11-01
In this work, largely based on [J. L. Albacete, A. Soto-Ontoso, Hot spots and the hollowness of proton-proton interactions at high energies, arXiv:1605.09176; J. L. Albacete, H. Petersen, A. Soto-Ontoso, Correlated wounded hot spots in proton-proton interactions, arXiv:1612.06274], we present a novel initial state geometry for proton-proton interactions. We rely on gluonic hot spots as effective degrees of freedom whose transverse positions inside the proton are correlated. We explore the impact of these non-trivial spatial correlations on the eccentricity and triangularity of the system following a Monte Carlo Glauber approach.
ERIC Educational Resources Information Center
Hsu, Hsien-Yuan; Lin, Jr-Hung; Kwok, Oi-Man; Acosta, Sandra; Willson, Victor
2017-01-01
Several researchers have recommended that level-specific fit indices should be applied to detect the lack of model fit at any level in multilevel structural equation models. Although we concur with their view, we note that these studies did not sufficiently consider the impact of intraclass correlation (ICC) on the performance of level-specific…
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ₁-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
NASA Astrophysics Data System (ADS)
Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric
2017-12-01
This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.
Constant-pressure nested sampling with atomistic dynamics
NASA Astrophysics Data System (ADS)
Baldock, Robert J. N.; Bernstein, Noam; Salerno, K. Michael; Pártay, Lívia B.; Csányi, Gábor
2017-10-01
The nested sampling algorithm has been shown to be a general method for calculating the pressure-temperature-composition phase diagrams of materials. While the previous implementation used single-particle Monte Carlo moves, these are inefficient for condensed systems with general interactions where single-particle moves cannot be evaluated faster than the energy of the whole system. Here we enhance the method by using all-particle moves: either Galilean Monte Carlo or the total enthalpy Hamiltonian Monte Carlo algorithm, introduced in this paper. We show that these algorithms enable the determination of phase transition temperatures with equivalent accuracy to the previous method at 1/N of the cost for an N-particle system with general interactions, or at equal cost when single-particle moves can be done in 1/N of the cost of a full N-particle energy evaluation. We demonstrate this speed-up for the freezing and condensation transitions of the Lennard-Jones system and show the utility of the algorithms by calculating the order-disorder phase transition of a binary Lennard-Jones model alloy, the eutectic of copper-gold, the density anomaly of water, and the condensation and solidification of bead-spring polymers. The nested sampling method with all three algorithms is implemented in the pymatnest software.
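The live-point contraction that drives nested sampling can be sketched on a toy landscape. The sketch below replaces the paper's all-particle Galilean and total-enthalpy Hamiltonian moves with a naive constrained random walk, and targets a 2-D harmonic energy whose partition function is known exactly; every parameter is a toy value.

import numpy as np

# Bare-bones nested sampling on a 2-D harmonic "energy landscape".

rng = np.random.default_rng(6)
n_live, n_iter, L = 200, 2000, 5.0               # box [-L, L]^2
energy = lambda x: 0.5 * np.sum(x * x, axis=-1)

live = rng.uniform(-L, L, size=(n_live, 2))
E_live = energy(live)
E_seq = []

for _ in range(n_iter):
    worst = int(np.argmax(E_live))               # highest-energy live point
    ceiling = E_live[worst]
    E_seq.append(ceiling)
    # Replace it by walking a random survivor under the energy ceiling.
    idx = rng.integers(n_live - 1)
    if idx >= worst:
        idx += 1
    x = live[idx].copy()
    for _ in range(20):
        xp = np.clip(x + 0.5 * rng.normal(size=2), -L, L)
        if energy(xp) < ceiling:
            x = xp
    live[worst], E_live[worst] = x, energy(x)

# Each iteration compresses the enclosed volume by ~ n/(n+1); the partition
# function is a weighted sum (the small live-point remainder is neglected).
E_seq = np.array(E_seq)
t = n_live / (n_live + 1.0)
w = (2 * L) ** 2 * t ** np.arange(E_seq.size) * (1.0 - t)
for beta in (0.5, 1.0, 2.0):
    Z = np.sum(w * np.exp(-beta * E_seq))
    print(f"beta = {beta}: Z ~ {Z:7.3f}   exact {2 * np.pi / beta:7.3f}")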
Pavlou, Andrew T.; Ji, Wei; Brown, Forrest B.
2016-01-23
Here, a proper treatment of thermal neutron scattering requires accounting for chemical binding through a scattering law S(α,β,T). Monte Carlo codes sample the secondary neutron energy and angle after a thermal scattering event from probability tables generated from S(α,β,T) tables at discrete temperatures, requiring a large amount of data for multiscale and multiphysics problems with detailed temperature gradients. We have previously developed a method to handle this temperature dependence on-the-fly during the Monte Carlo random walk using polynomial expansions in 1/T to directly sample the secondary energy and angle. In this paper, the on-the-fly method is implemented into MCNP6 and tested in both graphite-moderated and light water-moderated systems. The on-the-fly method is compared with the thermal ACE libraries that come standard with MCNP6, yielding good agreement with integral reactor quantities like k-eigenvalue and differential quantities like single-scatter secondary energy and angle distributions. The simulation runtimes are comparable between the two methods (on the order of 5–15% difference for the problems tested) and the on-the-fly fit coefficients only require 5–15 MB of total data storage.
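The "fit once, evaluate on the fly" idea can be illustrated with a toy fit. The tabulated moment below is synthetic (not real S(α,β,T) data), and its quadratic-in-1/T form is chosen so the fit is exact; in the real method, polynomial coefficients of this kind drive direct sampling of secondary energy and angle during the random walk.

import numpy as np

# Fit a low-order polynomial in 1/T to data tabulated at discrete temperatures,
# then evaluate it at arbitrary intermediate temperatures on the fly.

T_tab = np.array([300.0, 400.0, 600.0, 800.0, 1000.0, 1200.0])   # kelvin
moment_tab = 1.0 + 250.0 / T_tab + 9000.0 / T_tab ** 2           # synthetic

coeffs = np.polyfit(1.0 / T_tab, moment_tab, deg=2)              # expansion in 1/T

def moment_on_the_fly(T):
    return np.polyval(coeffs, 1.0 / T)

for T in (350.0, 512.0, 977.0):                                  # arbitrary temperatures
    print(f"T = {T:6.1f} K   moment = {moment_on_the_fly(T):.4f}")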
Sample Size Determination for Regression Models Using Monte Carlo Methods in R
ERIC Educational Resources Information Center
Beaujean, A. Alexander
2014-01-01
A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
Stochastic Inversion of 2D Magnetotelluric Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong
2010-07-01
The algorithm is developed to invert 2D magnetotelluric (MT) data based on sharp boundary parametrization using a Bayesian framework. Within the algorithm, we treat the locations of the interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of 2D conductivity structure. Those unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov Chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, Supercomputer, Multi-platform, Workstation; Software requirements: C and Fortran; Operating systems: Linux/Unix or Windows
Zhang, Chu; Liu, Fei; Kong, Wenwen; He, Yong
2015-01-01
Visible and near-infrared hyperspectral imaging covering spectral range of 380–1030 nm as a rapid and non-destructive method was applied to estimate the soluble protein content of oilseed rape leaves. Average spectrum (500–900 nm) of the region of interest (ROI) of each sample was extracted, and four samples out of 128 samples were defined as outliers by Monte Carlo-partial least squares (MCPLS). Partial least squares (PLS) model using full spectra obtained dependable performance with the correlation coefficient (r_p) of 0.9441, root mean square error of prediction (RMSEP) of 0.1658 mg/g and residual prediction deviation (RPD) of 2.98. The weighted regression coefficient (Bw), successive projections algorithm (SPA) and genetic algorithm-partial least squares (GAPLS) selected 18, 15, and 16 sensitive wavelengths, respectively. SPA-PLS model obtained the best performance with r_p of 0.9554, RMSEP of 0.1538 mg/g and RPD of 3.25. Distribution of protein content within the rape leaves were visualized and mapped on the basis of the SPA-PLS model. The overall results indicated that hyperspectral imaging could be used to determine and visualize the soluble protein content of rape leaves. PMID:26184198
Gavett, Brandon E
2015-03-01
The base rates of abnormal test scores in cognitively normal samples have been a focus of recent research. The goal of the current study is to illustrate how Bayes' theorem uses these base rates--along with the same base rates in cognitively impaired samples and prevalence rates of cognitive impairment--to yield probability values that are more useful for making judgments about the absence or presence of cognitive impairment. Correlation matrices, means, and standard deviations were obtained from the Wechsler Memory Scale--4th Edition (WMS-IV) Technical and Interpretive Manual and used in Monte Carlo simulations to estimate the base rates of abnormal test scores in the standardization and special groups (mixed clinical) samples. Bayes' theorem was applied to these estimates to identify probabilities of normal cognition based on the number of abnormal test scores observed. Abnormal scores were common in the standardization sample (65.4% scoring below a scaled score of 7 on at least one subtest) and more common in the mixed clinical sample (85.6% scoring below a scaled score of 7 on at least one subtest). Probabilities varied according to the number of abnormal test scores, base rates of normal cognition, and cutoff scores. The results suggest that interpretation of base rates obtained from cognitively healthy samples must also account for data from cognitively impaired samples. Bayes' theorem can help neuropsychologists answer questions about the probability that an individual examinee is cognitively healthy based on the number of abnormal test scores observed.
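Bayes' theorem as applied here reduces to a one-line calculation. The sketch below plugs in the base rates quoted in the abstract (65.4% of the normal standardization sample and 85.6% of the mixed clinical sample had at least one subtest scaled score below 7) under an assumed 20% prevalence of impairment, which is an illustrative value, not one from the study.

# Worked Bayes'-theorem example with the quoted base rates.
p_abn_given_normal = 0.654
p_abn_given_impaired = 0.856
prevalence = 0.20                                  # assumed prior P(impaired)

# P(normal | >= 1 abnormal score) via Bayes' theorem.
num = (1 - prevalence) * p_abn_given_normal
den = num + prevalence * p_abn_given_impaired
print(f"P(normal | >=1 abnormal score) = {num / den:.3f}")

# The complementary judgment: no abnormal scores observed.
num0 = (1 - prevalence) * (1 - p_abn_given_normal)
den0 = num0 + prevalence * (1 - p_abn_given_impaired)
print(f"P(normal | 0 abnormal scores)  = {num0 / den0:.3f}")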
Xu, Dong; Liu, Sitong; Chen, Qian; Ni, Jinren
2017-12-01
The microbial community diversity in anaerobic, anoxic and oxic biological zones of a conventional Carrousel oxidation ditch system for domestic wastewater treatment was systematically investigated. The monitored results of the activated sludge sampled from six full-scale WWTPs indicated that Proteobacteria, Chloroflexi, Bacteroidetes, Actinobacteria, Verrucomicrobia, Acidobacteria and Nitrospirae were dominant phyla, and Nitrospira was the most abundant and ubiquitous genus across the three biological zones. The anaerobic, anoxic and oxic zones shared approximately similar percentages across the 50 most abundant genera, and three genera (i.e. uncultured bacterium PeM15, Methanosaeta and Bellilinea) showed statistically significant differential abundance in the anoxic zone. Illumina high-throughput sequences related to ammonium-oxidizing organisms and denitrifiers among the 50 most abundant genera in all samples were Nitrospira, uncultured Nitrosomonadaceae, Dechloromonas, Thauera, Denitratisoma, Rhodocyclaceae (norank) and Comamonadaceae (norank). Moreover, environmental variables such as water temperature, water volume, influent ammonium nitrogen, influent chemical oxygen demand (COD) and effluent COD exhibited significant correlation to the microbial community according to the Monte Carlo permutation test analysis (p < 0.05). The abundance of Nitrospira, uncultured Nitrosomonadaceae and Denitratisoma presented strong positive correlations with the influent/effluent concentration of COD and ammonium nitrogen, while Dechloromonas, Thauera, Rhodocyclaceae (norank) and Comamonadaceae (norank) showed positive correlations with water volume and temperature. The established relationship between microbial community and environmental variables in different biologically functional zones of the six representative WWTPs at different geographical locations made the present work of potential use for evaluation of practical wastewater treatment processes.
Cohen-Wolkowiez, Michael; Sampson, Mario; Bloom, Barry T; Arrieta, Antonio; Wynn, James L; Martz, Karen; Harper, Barrie; Kearns, Gregory L; Capparelli, Edmund V; Siegel, David; Benjamin, Daniel K; Smith, P Brian
2013-09-01
Limited pharmacokinetic (PK) data of metronidazole in premature infants have led to various dosing recommendations. Surrogate efficacy targets for metronidazole are ill-defined and therefore aimed to exceed minimum inhibitory concentration of organisms responsible for intra-abdominal infections. We evaluated the PK of metronidazole using plasma and dried blood spot samples from infants ≤32 weeks gestational age in an open-label, PK, multicenter (N = 3) study using population PK modeling (NONMEM). Monte Carlo simulations (N = 1000 virtual subjects) were used to evaluate the surrogate efficacy target. Metabolic ratios of parent and metabolite were calculated. Twenty-four premature infants (111 plasma and 51 dried blood spot samples) were enrolled: median (range) gestational age at birth 25 (23-31) weeks, postnatal age 27 (1-82) days, postmenstrual age 31 (24-39) weeks and weight 740 (431-1466) g. Population clearance (L/h/kg) was 0.038 × (postmenstrual age/30)^2.45, with a volume of distribution (L/kg) of 0.93. PK parameter estimates and precision were similar between plasma and dried blood spot samples. Metabolic ratios correlated with clearance. Simulations suggested the majority of infants in the neonatal intensive care unit (>80%) would meet the surrogate efficacy target using postmenstrual age-based dosing.
Cohen-Wolkowiez, Michael; Sampson, Mario; Bloom, Barry T.; Arrieta, Antonio; Wynn, James L.; Martz, Karen; Harper, Barrie; Kearns, Gregory L.; Capparelli, Edmund V.; Siegel, David; Benjamin, Daniel K.; Smith, P. Brian
2013-01-01
Background Limited pharmacokinetic (PK) data of metronidazole in premature infants have led to various dosing recommendations. Surrogate efficacy targets for metronidazole are ill-defined and therefore aimed to exceed minimum inhibitory concentration of organisms responsible for intra-abdominal infections. Methods We evaluated the PK of metronidazole using plasma and dried blood spot (DBS) samples from infants ≤32 weeks gestational age in an open-label, PK, multicenter (N=3) study using population PK modeling (NONMEM). Monte Carlo simulations (N=1000 virtual subjects) were used to evaluate the surrogate efficacy target. Metabolic ratios of parent and metabolite were calculated. Results Twenty-four premature infants (111 plasma and 51 DBS samples) were enrolled: median (range) gestational age at birth 25 (23–31) weeks, postnatal age 27 (1–82) days, postmenstrual age (PMA) 31 (24–39) weeks, and weight 740 (431–1466) g. Population clearance (CL, L/h/kg) was 0.038 × (PMA/30)^2.45, with a volume of distribution (L/kg) of 0.93. PK parameter estimates and precision were similar between plasma and DBS samples. Metabolic ratios correlated with CL. Conclusion Simulations suggested the majority of infants in the neonatal intensive care unit (>80%) would meet the surrogate efficacy target using PMA-based dosing. PMID:23587979
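The reported population model lends itself to a small Monte Carlo sketch. The clearance and volume equations below are taken from the abstract; the 30% log-normal between-subject variability, the 7.5 mg/kg every-12-hours regimen and the 8 mg/L target are invented placeholders, so the output illustrates the workflow rather than the study's dosing conclusions.

import numpy as np

# Monte Carlo simulation of steady-state troughs from the population PK model
# CL (L/h/kg) = 0.038 * (PMA/30)^2.45, V = 0.93 L/kg (abstract values).

rng = np.random.default_rng(7)
n = 1000                                            # virtual subjects
pma = rng.uniform(24.0, 39.0, size=n)               # postmenstrual age, weeks
cl = 0.038 * (pma / 30.0) ** 2.45 * rng.lognormal(0.0, 0.3, size=n)
v = 0.93 * rng.lognormal(0.0, 0.3, size=n)          # L/kg

dose, interval = 7.5, 12.0                          # mg/kg every 12 h (assumed)
k = cl / v                                          # elimination rate, 1/h
# Steady-state trough for repeated bolus dosing.
c_trough = (dose / v) * np.exp(-k * interval) / (1.0 - np.exp(-k * interval))

target = 8.0                                        # assumed MIC-based target, mg/L
print(f"median trough = {np.median(c_trough):.1f} mg/L; "
      f"fraction above target = {np.mean(c_trough > target):.2f}")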
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.
In this study, methods are addressed to reduce the computational time to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared including the reciprocity method, importance sampling, weight windows and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10⁵ when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.
Online sequential Monte Carlo smoother for partially observed diffusion processes
NASA Astrophysics Data System (ADS)
Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain
2018-12-01
This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. This method relies on a new sequential Monte Carlo method which allows such approximations to be computed online, i.e., as the observations are received, with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity with an unbiased estimator obtained for instance using general Poisson estimators. This estimator is proved to be consistent and its performance is illustrated using data from two models.
A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification
NASA Astrophysics Data System (ADS)
Wu, Keyi; Li, Jinglai
2016-09-01
In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithm, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitude of speedup over standard Monte Carlo methods.
Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module
NASA Astrophysics Data System (ADS)
Martinez, Gregory D.; McKay, James; Farmer, Ben; Scott, Pat; Roebber, Elinore; Putze, Antje; Conrad, Jan
2017-11-01
We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics.
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, the high computational cost of Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
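A minimal Hamiltonian Monte Carlo sketch may help fix ideas. Here the target is a cheap 2-D correlated Gaussian differentiated exactly; in the paper the expensive gradient would instead come from the random-basis surrogate, which is not reproduced here. Step size and trajectory length are illustrative.

import numpy as np

# Minimal HMC with leapfrog integration and a Metropolis correction.

rng = np.random.default_rng(8)
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
prec = np.linalg.inv(cov)
logp = lambda q: -0.5 * q @ prec @ q
grad = lambda q: -prec @ q

def hmc_step(q, eps=0.15, n_leap=20):
    p = rng.normal(size=2)                          # resample momentum
    q_new, p_new = q.copy(), p.copy()
    for _ in range(n_leap):                         # leapfrog integration
        p_new = p_new + 0.5 * eps * grad(q_new)
        q_new = q_new + eps * p_new
        p_new = p_new + 0.5 * eps * grad(q_new)
    # Metropolis correction for the discretization error.
    dH = (logp(q_new) - 0.5 * p_new @ p_new) - (logp(q) - 0.5 * p @ p)
    return q_new if np.log(rng.random()) < dH else q

q, draws = np.zeros(2), []
for _ in range(5000):
    q = hmc_step(q)
    draws.append(q)
draws = np.array(draws)
print("sample covariance:\n", np.cov(draws.T).round(2))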
Adapting phase-switch Monte Carlo method for flexible organic molecules
NASA Astrophysics Data System (ADS)
Bridgwater, Sally; Quigley, David
2014-03-01
The role of cholesterol in lipid bilayers has been widely studied via molecular simulation; however, there has been relatively little work on crystalline cholesterol in biological environments. Recent work has linked the crystallisation of cholesterol in the body with heart attacks and strokes. Any attempt to model this process will require new models and advanced sampling methods to capture and quantify the subtle polymorphism of solid cholesterol, in which two crystalline phases are separated by a phase transition close to body temperature. To this end, we have adapted phase-switch Monte Carlo for use with flexible molecules, to calculate the free-energy difference between crystal polymorphs to a high degree of accuracy. The method samples an order parameter that divides a displacement space for the N molecules into regions energetically favourable for each polymorph; this space is traversed using biased Monte Carlo. Results for a simple model of butane will be presented, demonstrating that conformational flexibility can be correctly incorporated within a phase-switching scheme. Extension to a coarse-grained model of cholesterol and the resulting free energies will be discussed.
NASA Astrophysics Data System (ADS)
Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William
2017-09-01
Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration of DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable for calibrating complex terrestrial ecosystem models, where the number of uncertain parameters is usually large and the existence of local optima is always a concern. In addition, this effort justifies the assumptions of the error model used in Bayesian calibration according to the residual analysis. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the resulting likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
Stage scoring of liver fibrosis using Mueller matrix microscope
NASA Astrophysics Data System (ADS)
Zhou, Jialing; He, Honghui; Wang, Ye; Ma, Hui
2016-10-01
Liver fibrosis is a common pathological process of varied chronic liver diseases including alcoholic hepatitis, virus hepatitis, and so on. Accurate evaluation of liver fibrosis is necessary for effective therapy, and a five-stage grading system was developed. Currently, experienced pathologists use stained liver biopsies to assess the degree of liver fibrosis, but it is difficult to obtain highly reproducible results because of large discrepancies among observers. Polarization imaging techniques have the potential to score liver fibrosis since they are capable of probing the structural and optical properties of samples. Considering that the Mueller matrix measurement can provide comprehensive microstructural information of the tissues, in this paper, we apply the Mueller matrix microscope to human liver fibrosis slices in different fibrosis stages. We extract the valid regions and adopt the Mueller matrix polar decomposition (MMPD) and Mueller matrix transformation (MMT) parameters for quantitative analysis. We also use the Monte Carlo simulation to analyze the relationship between the microscopic Mueller matrix parameters and the characteristic structural changes during the fibrosis process. The experimental and Monte Carlo simulated results show good consistency. We get a positive correlation between the parameters and the stage of liver fibrosis. The results presented in this paper indicate that the Mueller matrix microscope can provide additional information for the detection and fibrosis scoring of liver tissues and has great potential in liver fibrosis diagnosis.
Overdensities of SMGs around WISE-selected, ultraluminous, high-redshift AGNs
NASA Astrophysics Data System (ADS)
Jones, Suzy F.; Blain, Andrew W.; Assef, Roberto J.; Eisenhardt, Peter; Lonsdale, Carol; Condon, James; Farrah, Duncan; Tsai, Chao-Wei; Bridge, Carrie; Wu, Jingwen; Wright, Edward L.; Jarrett, Tom
2017-08-01
We investigate extremely luminous dusty galaxies in the environments around Wide-field Infrared Survey Explorer (WISE)-selected hot dust-obscured galaxies (Hot DOGs) and WISE/radio-selected active galactic nuclei (AGNs) at average redshifts of z = 2.7 and 1.7, respectively. Previous observations have detected overdensities of companion submillimetre-selected sources around 10 Hot DOGs and 30 WISE/radio AGNs, with overdensities of ~2-3 and ~5-6, respectively. We find the space densities in both samples to be overdense compared to normal star-forming galaxies and submillimetre galaxies (SMGs) in the Submillimetre Common-User Bolometer Array 2 (SCUBA-2) Cosmology Legacy Survey (S2CLS). Both samples of companion sources have mid-infrared (mid-IR) colours and mid-IR to submm ratios consistent with SMGs. The brighter population around WISE/radio AGNs could be responsible for the higher overdensity reported. We also find that the star formation rate densities are higher than the field, but consistent with clusters of dusty galaxies. WISE-selected AGNs appear to be good signposts for protoclusters at high redshift on arcmin scales. The results reported here provide an upper limit to the strength of angular clustering using the two-point correlation function. Monte Carlo simulations show no angular correlation, which could indicate protoclusters on scales larger than the SCUBA-2 1.5-arcmin scale maps.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badal, A; Zbijewski, W; Bolch, W
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10⁷ x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging.
This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all necessary data on material composition, source, geometry, scoring and other parameters provided. The results of these simulations when performed with the four most common publicly available Monte Carlo packages are also provided in tabular form. The Task Group 195 Report will be useful for researchers needing to validate their Monte Carlo work, and for trainees needing to learn Monte Carlo simulation methods. In this symposium we will review the recent advancements in high-performance computing hardware enabling the reduction in computational resources needed for Monte Carlo simulations in medical imaging. We will review variance reduction techniques commonly applied in Monte Carlo simulations of medical imaging systems and present implementation strategies for efficient combination of these techniques with GPU acceleration. Trade-offs involved in Monte Carlo acceleration by means of denoising and "sparse sampling" will be discussed. A method for rapid scatter correction in cone-beam CT (<5 min/scan) will be presented as an illustration of the simulation speeds achievable with optimized Monte Carlo simulations. We will also discuss the development, availability, and capability of the various combinations of computational phantoms for Monte Carlo simulation of medical imaging systems. Finally, we will review some examples of experimental validation of Monte Carlo simulations and will present the AAPM Task Group 195 Report. Learning Objectives: Describe the advances in hardware available for performing Monte Carlo simulations in high performance computing environments. Explain variance reduction, denoising and sparse sampling techniques available for reduction of computational time needed for Monte Carlo simulations of medical imaging. List and compare the computational anthropomorphic phantoms currently available for more accurate assessment of medical imaging parameters in Monte Carlo simulations. Describe experimental methods used for validation of Monte Carlo simulations in medical imaging. Describe the AAPM Task Group 195 Report and its use for validation and teaching of Monte Carlo simulations in medical imaging.
Estimation of reproduction number and non stationary spectral analysis of dengue epidemic.
Enduri, Murali Krishna; Jolad, Shivakumar
2017-06-01
In this work we study the post-monsoon Dengue outbreaks by analyzing the transient and long-term dynamics of Dengue incidences and their environmental correlates in Ahmedabad city in western India from 2005 to 2012. We calculate the reproduction number R_p using the growth rate of post-monsoon Dengue outbreaks and biological parameters like host and vector incubation periods and vector mortality rate, and its uncertainties are estimated through Monte Carlo simulations by sampling parameters from their respective probability distributions. The reduction in female Aedes mosquito density required for effective prevention of Dengue outbreaks is also calculated. The non-stationary pattern of Dengue incidences and its climatic correlates like rainfall and temperature is analyzed through wavelet-based methods. We find that the mean time lag between the peaks of monsoon and Dengue is 9 weeks. Monsoon and Dengue cases are phase locked from 2008 to 2012 in the 16 to 32 week band. The duration of the post-monsoon outbreak has been increasing every year, especially post 2008, even though the intensity and duration of the monsoon have been decreasing. Temperature and Dengue incidences show correlations in the same band, but the phase lock is not stationary. Copyright © 2017 Elsevier Inc. All rights reserved.
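The parameter-sampling step can be sketched generically. The distributions and the simplified linearized estimator below are illustrative assumptions, not the formula or the fitted values used in the study; only the propagation pattern (draw parameters, push them through the estimator, report percentiles) mirrors the described approach.

import numpy as np

# Monte Carlo uncertainty propagation for a growth-rate-based reproduction
# number. All distributions and the estimator form are placeholders.

rng = np.random.default_rng(9)
n = 100_000
growth = rng.normal(0.15, 0.02, size=n)        # epidemic growth rate, 1/week
tau_h = rng.normal(0.8, 0.1, size=n)           # host incubation, weeks
tau_v = rng.normal(1.4, 0.2, size=n)           # vector incubation, weeks
mu_v = rng.normal(0.25, 0.05, size=n)          # vector mortality, 1/week

# Simplified linearized estimator (assumed form, for illustration only).
Rp = (1.0 + growth * tau_h) * (1.0 + growth * (tau_v + 1.0 / mu_v))

lo, med, hi = np.percentile(Rp, [2.5, 50.0, 97.5])
print(f"R_p = {med:.2f}  (95% interval {lo:.2f} - {hi:.2f})")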
Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying
2017-11-01
Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well. © 2017 The British Psychological Society.
Evidence for higher twist effects in fast π- production by antineutrinos in neon
NASA Astrophysics Data System (ADS)
Fitch, P. J.; Kasper, P.; Cooper-Sarkar, A. M.; Aderholz, M.; Armenise, N.; Azemoon, T.; Bertrand, D.; Berggren, M.; Bullock, F. W.; Calicchio, M.; Clayton, E. F.; Coghen, T.; Erriquez, O.; Gerbier, G.; Guy, J.; Hulth, P. O.; Iaselli, G.; Jones, G. T.; Lagraa, M.; Marage, P.; Middleton, R. P.; Miller, D.; Mobayyen, M. M.; Neveu, M.; O'Neale, S. W.; Parker, M. A.; Sansum, R. A.; Simopoulou, E.; Varvell, K.; Vallée, C.; Vayaki, A.; Venus, W.; Wachsmuth, H.; Wittek, W.; Wells, J.; Zevgolatakos, E.
1986-03-01
Evidence for a significant higher twist contribution to high-z π⁻ production in antineutrino scattering is presented. In events with W > 3 GeV and Q² > 1 GeV² in our data, it accounts for (51 ± 8)% of all π⁻ with z above 0.5. It is consistent with the z–Q² correlations of Berger's higher twist prediction. The data are inconclusive concerning the predicted y–z correlation and p_T dependence. The z–Q² correlation is not adequately described by the Lund Monte-Carlo.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldsiefen, Tim; Cangi, Attila; Eich, F. G.
2017-12-18
Here, we derive an intrinsically temperature-dependent approximation to the correlation grand potential for many-electron systems in thermodynamical equilibrium in the context of finite-temperature reduced-density-matrix-functional theory (FT-RDMFT). We demonstrate its accuracy by calculating the magnetic phase diagram of the homogeneous electron gas. We compare it to known limits from highly accurate quantum Monte Carlo calculations as well as to phase diagrams obtained within existing exchange-correlation approximations from density functional theory and zero-temperature RDMFT.
Phase transition in nonuniform Josephson arrays: Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Lozovik, Yu. E.; Pomirchy, L. M.
1994-01-01
A disordered 2D system with Josephson interactions is considered. The disordered XY-model describes granular films, Josephson arrays etc. Two types of disorder are analyzed: (1) a randomly diluted system, where the Josephson coupling constants J_ij are equal to J with probability p or zero (bond percolation problem); (2) coupling constants J_ij that are positive and distributed randomly and uniformly in some interval, either including the vicinity of zero or apart from it. These systems are simulated by the Monte Carlo method. The behaviour of the potential energy, specific heat, phase correlation function and helicity modulus is analyzed. The phase diagram of the diluted system in the T_c-p plane is obtained.
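A minimal Metropolis sketch of disorder type (1), the bond-diluted XY model, is given below; lattice size, temperature and sweep count are illustrative, and the finite-lattice order parameter printed at the end is only a rough observable, not the helicity-modulus analysis of the paper.

import numpy as np

# Metropolis simulation of a bond-diluted 2-D XY model with periodic
# boundaries: couplings equal J with probability p, zero otherwise.

rng = np.random.default_rng(10)
L, p, J, T = 16, 0.7, 1.0, 0.9
theta = rng.uniform(0.0, 2.0 * np.pi, size=(L, L))
# Quenched random bonds to the right and down neighbours.
Jr = J * (rng.random((L, L)) < p)
Jd = J * (rng.random((L, L)) < p)

def site_energy(th, i, j):
    """Energy of all four bonds touching site (i, j)."""
    return (-Jr[i, j] * np.cos(th[i, j] - th[i, (j + 1) % L])
            - Jr[i, j - 1] * np.cos(th[i, j] - th[i, j - 1])
            - Jd[i, j] * np.cos(th[i, j] - th[(i + 1) % L, j])
            - Jd[i - 1, j] * np.cos(th[i, j] - th[i - 1, j]))

for sweep in range(500):                       # Metropolis sweeps
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        old = theta[i, j]
        e_old = site_energy(theta, i, j)
        theta[i, j] = old + rng.uniform(-1.0, 1.0)
        if rng.random() > np.exp(-(site_energy(theta, i, j) - e_old) / T):
            theta[i, j] = old                  # reject the move

m = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"finite-lattice order parameter |m| = {m:.3f}")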
Monte Carlo simulation of a noisy quantum channel with memory.
Akhalwaya, Ismail; Moodley, Mervlyn; Petruccione, Francesco
2015-10-01
The classical capacity of quantum channels is well understood for channels with uncorrelated noise. For the case of correlated noise, however, there are still open questions. We calculate the classical capacity of a forgetful channel constructed by Markov switching between two depolarizing channels. Techniques have previously been applied to approximate the output entropy of this channel and thus its capacity. In this paper, we use a Metropolis-Hastings Monte Carlo approach to numerically calculate the entropy. The algorithm is implemented in parallel and its performance is studied and optimized. The effects of memory on the capacity are explored and previous results are confirmed to higher precision.
Lecture Notes on Criticality Safety Validation Using MCNP & Whisper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
Training classes for nuclear criticality safety, MCNP documentation. The need for, and problems surrounding, validation of computer codes and data are considered first. Then some background for MCNP & Whisper is given--best practices for Monte Carlo criticality calculations, neutron spectra, S(α,β) thermal neutron scattering data, nuclear data sensitivities, covariance data, and correlation coefficients. Whisper is computational software designed to assist the nuclear criticality safety analyst with validation studies using the Monte Carlo radiation transport package MCNP. Whisper's methodology (benchmark selection – Ck's, weights; extreme value theory – bias, bias uncertainty; MOS for nuclear data uncertainty – GLLS) and usage are discussed.
Neutron-neutron angular correlations in spontaneous fission of 252Cf and 240Pu
NASA Astrophysics Data System (ADS)
Verbeke, J. M.; Nakae, L. F.; Vogt, R.
2018-04-01
Background: Angular anisotropy has been observed between prompt neutrons emitted during the fission process. Such an anisotropy arises because the emitted neutrons are boosted along the direction of the parent fragment. Purpose: To measure the neutron-neutron angular correlations from the spontaneous fission of 252Cf and 240Pu oxide samples using a liquid scintillator array capable of pulse-shape discrimination, and to compare these correlations to simulations combining the Monte Carlo radiation transport code MCNPX with the fission event generator FREYA. Method: Two different analysis methods were used to study the neutron-neutron correlations with varying energy thresholds. The first is based on setting a light output threshold, while the second imposes a time-of-flight cutoff. The second method has the advantage of being truly detector independent. Results: The neutron-neutron correlation modeled by FREYA depends strongly on the sharing of the excitation energy between the two fragments. The measured asymmetry enabled us to adjust the FREYA parameter x in 240Pu, which controls the energy partition between the fragments and is so far inaccessible in other measurements. The 240Pu data in this analysis were the first available to quantify the energy partition for this isotope. The agreement between data and simulation is overall very good for 252Cf(sf) and 240Pu(sf). Conclusions: The asymmetry in the measured neutron-neutron angular distributions can be predicted by FREYA. The shape of the correlation function depends on how the excitation energy is partitioned between the two fission fragments. Experimental data suggest that the lighter fragment is disproportionately excited.
Some Small Sample Results for Maximum Likelihood Estimation in Multidimensional Scaling.
ERIC Educational Resources Information Center
Ramsay, J. O.
1980-01-01
Some aspects of the small sample behavior of maximum likelihood estimates in multidimensional scaling are investigated with Monte Carlo techniques. In particular, the chi square test for dimensionality is examined and a correction for bias is proposed and evaluated. (Author/JKS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, John D.; Narayan, Akil; Zhou, Tao
2017-06-22
We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain, and subsequently solves a preconditioned $\ell^1$-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
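The sampling-and-preconditioning step can be illustrated in one dimension. Below is a hedged Python sketch: Chebyshev (equilibrium-measure) sampling on [-1, 1], an orthonormal Legendre design matrix, Christoffel-function weights, and basis pursuit solved as a linear program with scipy. All sizes and the test function are illustrative; this is not the authors' implementation.

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, M = 40, 101                                  # samples, basis size (degree M-1)

def f(x):                                       # target with a sparse Legendre expansion
    return legendre.legval(x, [0, 0, 1.0, 0, 0, 0.5])

x = np.cos(np.pi * rng.random(N))               # equilibrium (arcsine) measure on [-1, 1]
# Orthonormal Legendre basis w.r.t. the uniform density on [-1, 1]: sqrt(2k+1) P_k
A = np.column_stack([np.sqrt(2*k + 1) * legendre.legval(x, [0]*k + [1])
                     for k in range(M)])
w = np.sqrt(M / np.sum(A**2, axis=1))           # Christoffel-function preconditioner
Aw, yw = w[:, None] * A, w * f(x)

# Basis pursuit min ||c||_1 s.t. Aw c = yw, as an LP with c = u - v, u, v >= 0
res = linprog(np.ones(2 * M), A_eq=np.hstack([Aw, -Aw]), b_eq=yw,
              bounds=[(0, None)] * (2 * M))
coef = res.x[:M] - res.x[M:]
print("recovered support:", np.nonzero(np.abs(coef) > 1e-6)[0])   # expect [2 5]
```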
Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo
McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai; ...
2017-11-07
Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple-rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order-of-magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.
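For reference, the rank-1 Sherman-Morrison step that the delayed scheme batches into matrix-matrix operations looks as follows. This is a generic numpy sketch of a single-row replacement (one single-particle move) together with the determinant ratio used in the acceptance test; matrix sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
A = rng.standard_normal((n, n))           # stand-in for a Slater matrix
Ainv = np.linalg.inv(A)

def row_update(A, Ainv, k, new_row):
    """Determinant ratio and Sherman-Morrison inverse update when row k of A
    is replaced by new_row (one single-particle move), in O(n^2) work."""
    v = new_row - A[k]                    # A' = A + e_k v^T
    R = 1.0 + v @ Ainv[:, k]              # acceptance ratio det(A')/det(A)
    Ainv_new = Ainv - np.outer(Ainv[:, k], v @ Ainv) / R
    return R, Ainv_new

new_row = rng.standard_normal(n)
R, Ainv2 = row_update(A, Ainv, 3, new_row)
A2 = A.copy(); A2[3] = new_row
assert np.isclose(R, np.linalg.det(A2) / np.linalg.det(A))
assert np.allclose(Ainv2, np.linalg.inv(A2))
print("determinant ratio:", R)
```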
Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model
NASA Astrophysics Data System (ADS)
Mejer Hansen, Thomas
2017-04-01
Probabilistically formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. In practice, however, these methods can be associated with huge computational costs that limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
Unbiased, scalable sampling of protein loop conformations from probabilistic priors.
Zhang, Yajia; Hauser, Kris
2013-01-01
Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion.
Chalcones in bioactive Argentine propolis collected in arid environments.
Solórzano, Eliana; Vera, Nancy; Cuello, Soledad; Ordoñez, Roxana; Zampini, Catiana; Maldonado, Luis; Bedascarrasbure, Enrique; Isla, María I
2012-07-01
The aim of this study was to assess the chemical and biological profile of propolis samples collected in arid environments of north-western Argentina. The samples came from two phytogeographical regions (Prepuna and Monte de Catamarca Province). Propolis ethanolic extracts (PEE) and chloroform (CHL), hexane (HEX) and aqueous (AQ) sub-extracts of samples from three regions (CAT-I, CAT-II and CAT-III) were obtained. All PEE exhibited antioxidant activity in the DPPH radical scavenging assay (SC50 values between 28 and 43 microg DW/mL). The CHL extracts were the most active (SC50 values between 10 and 37 microg DW/mL). The antioxidant activity in the beta-carotene bleaching assay was strongest for PEE and CHL (IC50 values between 2 and 9 microg DW/mL, respectively). A similar pattern was observed for antibacterial activity. The highest inhibitory effect on the growth of human Gram-positive bacteria was observed for CHL-III and CHL-I (Monte region), with minimal inhibitory concentration values (MIC100) of 50 to 100 microg DW/mL. Nine compounds were identified by HPLC-PAD. Two of them (2',4'-dihydroxychalcone and 2',4'-dihydroxy-3'-methoxychalcone) were found only in propolis samples from the Monte phytogeographical region. We conclude that the Argentine arid region is well suited for placing hives to obtain propolis of excellent quality, because the dominant life forms in that environment are shrubby species that produce resinous exudates with a high content of chalcones, flavones and flavonols.
MCViNE - An object oriented Monte Carlo neutron ray tracing simulation package
Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; ...
2015-11-28
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example, we used object-oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages, which facilitates porting instrument models from those codes. Furthermore, it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.
Excited states from quantum Monte Carlo in the basis of Slater determinants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humeniuk, Alexander; Mitrić, Roland, E-mail: roland.mitric@uni-wuerzburg.de
2014-11-21
Building on the full configuration interaction quantum Monte Carlo (FCIQMC) algorithm introduced recently by Booth et al. [J. Chem. Phys. 131, 054106 (2009)] to compute the ground state of correlated many-electron systems, an extension to the computation of excited states (exFCIQMC) is presented. The Hilbert space is divided into a large part consisting of pure Slater determinants and a much smaller orthogonal part (the size of which is controlled by a cut-off threshold), from which the lowest eigenstates can be removed efficiently. In this way, the quantum Monte Carlo algorithm is restricted to the orthogonal complement of the lower excited states and projects out the next highest excited state. Starting from the ground state, higher excited states can be found one after the other. The Schrödinger equation in imaginary time is solved by the same population dynamics as in the ground state algorithm with modified probabilities and matrix elements, for which working formulae are provided. As a proof of principle, the method is applied to lithium hydride in the 3-21G basis set and to the helium dimer in the aug-cc-pVDZ basis set. It is shown to give the correct electronic structure for all bond lengths. Much more testing will be required before the applicability of this method to electron correlation problems of interesting size can be assessed.
NASA Astrophysics Data System (ADS)
Deyglun, Clément; Carasco, Cédric; Pérot, Bertrand
2014-06-01
The detection of Special Nuclear Materials (SNM) by neutron interrogation is extensively studied by Monte Carlo simulation at the Nuclear Measurement Laboratory of CEA Cadarache (French Alternative Energies and Atomic Energy Commission). The active inspection system is based on the Associated Particle Technique (APT). Fissions induced by tagged neutrons (i.e. correlated to an alpha particle in the DT neutron generator) in SNM produce high-multiplicity coincidences, which are detected with fast plastic scintillators. At least three particles are detected in a short time window following the alpha detection, whereas non-nuclear materials mainly produce single events, or pairs due to (n,2n) and (n,n'γ) reactions. To study the performance of an industrial cargo container inspection system, Monte Carlo simulations are performed with the MCNP-PoliMi transport code, which records for each neutron history the relevant information: reaction types, position and time of interactions, energy deposits, secondary particles, etc. The output files are post-processed with a specific tool developed with the ROOT data analysis software. Particles not correlated with an alpha particle (random background), counting statistics, and time-energy resolutions of the data acquisition system are taken into account in the numerical model. Various matrix compositions, suspicious items, and SNM shieldings and positions inside the container are simulated to assess the performance and limitations of an industrial system.
Rimondi, V.; Gray, J.E.; Costagliola, P.; Vaselli, O.; Lattanzi, P.
2012-01-01
The distribution and translocation of mercury (Hg) was studied in the Paglia River ecosystem, located downstream from the inactive Abbadia San Salvatore mine (ASSM). The ASSM is part of the Monte Amiata Hg district, Southern Tuscany, Italy, which was one of the world's largest Hg districts. Concentrations of Hg and methyl-Hg were determined in mine-waste calcine (retorted ore), sediment, water, soil, and freshwater fish collected from the ASSM and the downstream Paglia River. Concentrations of Hg in calcine samples ranged from 25 to 1500 μg/g, all of which exceeded the industrial soil contamination level for Hg of 5 μg/g used in Italy. Stream and lake sediment samples collected downstream from the ASSM ranged in Hg concentration from 0.26 to 15 μg/g, of which more than 50% exceeded the probable effect concentration for Hg of 1.06 μg/g, the concentration above which harmful effects are likely to be observed in sediment-dwelling organisms. Stream and lake sediment methyl-Hg concentrations showed a significant correlation with TOC, indicating considerable methylation and potential bioavailability of Hg. Stream water contained Hg as high as 1400 ng/L, but only one water sample exceeded the 1000 ng/L drinking water Hg standard used in Italy. Concentrations of Hg in freshwater fish muscle samples were elevated, ranging from 0.16 to 1.2 μg/g (wet weight) and averaging 0.84 μg/g; 96% of these exceeded the 0.3 μg/g (methyl-Hg, wet weight) USEPA fish muscle standard recommended to protect human health. Analysis of fish muscle for methyl-Hg confirmed that >90% of the Hg in these fish is methyl-Hg. Such highly elevated Hg concentrations in fish indicate active methylation, significant bioavailability, and uptake of Hg by fish in the Paglia River ecosystem. Methyl-Hg is highly toxic, and the high Hg concentrations in these fish represent a potential pathway of Hg to the human food chain.
NASA Astrophysics Data System (ADS)
Pardo-Igúzquiza, Eulogio; Rodríguez-Tovar, Francisco J.
2012-12-01
Many spectral analysis techniques have been designed assuming sequences taken with a constant sampling interval. However, there are empirical time series in the geosciences (sediment cores, fossil abundance data, isotope analysis, …) that do not follow regular sampling because of missing data, gapped data, random sampling or incomplete sequences, among other reasons. In general, interpolating an uneven series in order to obtain a succession with a constant sampling interval alters the spectral content of the series. In such cases it is preferable to follow an approach that works with the uneven data directly, avoiding the need for an explicit interpolation step. The Lomb-Scargle periodogram is a popular choice in such circumstances, as there are programs available in the public domain for its computation. One new computer program for spectral analysis improves the standard Lomb-Scargle periodogram approach in two ways: (1) it explicitly adjusts the statistical significance for any bias introduced by variance-reduction smoothing, and (2) it uses a permutation test to evaluate confidence levels, which is better suited than parametric methods when neighbouring frequencies are highly correlated. Another novel program for cross-spectral analysis offers the advantage of estimating the Lomb-Scargle cross-periodogram of two uneven time series defined on the same interval, and it evaluates the confidence levels of the estimated cross-spectra by a non-parametric, computer-intensive permutation test. Thus, the cross-spectrum, the squared coherence spectrum, the phase spectrum, and the Monte Carlo statistical significance of the cross-spectrum and the squared-coherence spectrum can be obtained. Both programs are written in ANSI Fortran 77, in view of its simplicity and compatibility. The program code is in the public domain and is provided on the website of the journal (http://www.iamg.org/index.php/publisher/articleview/frmArticleID/112/). Different examples (with simulated and real data) are described in this paper to corroborate the methodology and the implementation of these two new programs.
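The permutation-test idea for assessing periodogram significance can be sketched with scipy's Lomb-Scargle implementation. The programs described in the paper are Fortran 77; the Python sketch below only mirrors the statistical idea, with illustrative data: shuffling the values over the fixed sampling times destroys any periodicity while preserving the sampling pattern.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 100, 120))            # uneven sampling times
y = np.sin(2*np.pi * t / 12.5) + rng.standard_normal(t.size)
y -= y.mean()

omega = np.linspace(0.01, 2.0, 500)              # angular frequencies to scan
pgram = lombscargle(t, y, omega)

# Permutation test: build a null distribution for the maximum peak height
null_peaks = np.array([lombscargle(t, rng.permutation(y), omega).max()
                       for _ in range(999)])
threshold = np.quantile(null_peaks, 0.95)        # 95% confidence level
print("peak omega:", omega[pgram.argmax()],
      "significant:", pgram.max() > threshold)
```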
NASA Astrophysics Data System (ADS)
Duan, Lian; Makita, Shuichi; Yamanari, Masahiro; Lim, Yiheng; Yasuno, Yoshiaki
2011-08-01
A Monte-Carlo-based phase retardation estimator is developed to correct the systematic error in phase retardation measurement by polarization sensitive optical coherence tomography (PS-OCT). Recent research has revealed that the phase retardation measured by PS-OCT has a distribution that is neither symmetric nor centered at the true value. Hence, a standard mean estimator gives us erroneous estimations of phase retardation, and it degrades the performance of PS-OCT for quantitative assessment. In this paper, the noise property in phase retardation is investigated in detail by Monte-Carlo simulation and experiments. A distribution transform function is designed to eliminate the systematic error by using the result of the Monte-Carlo simulation. This distribution transformation is followed by a mean estimator. This process provides a significantly better estimation of phase retardation than a standard mean estimator. This method is validated both by numerical simulations and experiments. The application of this method to in vitro and in vivo biological samples is also demonstrated.
NASA Astrophysics Data System (ADS)
Wei, Yiping; Chen, Liru; Zhou, Wei; Chingin, Konstantin; Ouyang, Yongzhong; Zhu, Tenggao; Wen, Hua; Ding, Jianhua; Xu, Jianjun; Chen, Huanwen
2015-05-01
Tissue spray ionization mass spectrometry (TSI-MS) directly on small tissue samples has been shown to provide highly specific molecular information. In this study, we apply this method to the analysis of 38 pairs of human lung squamous cell carcinoma tissue (cancer) and adjacent normal lung tissue (normal). The main components of pulmonary surfactants, dipalmitoyl phosphatidylcholine (DPPC, m/z 757.47), phosphatidylcholine (POPC, m/z 782.52), oleoyl phosphatidylcholine (DOPC, m/z 808.49), and arachidonic acid stearoyl phosphatidylcholine (SAPC, m/z 832.43), were identified using high-resolution tandem mass spectrometry. Monte Carlo sampling partial least squares linear discriminant analysis (PLS-LDA) was used to distinguish full-mass-range mass spectra of cancer samples from the mass spectra of normal tissues. With 5 principal components and 30 - 40 Monte Carlo samplings, the accuracy of cancer identification in matched tissue samples reached 94.42%. Classification of a tissue sample required less than 1 min, which is much faster than the analysis of frozen sections. The rapid, in situ diagnosis with minimal sample consumption provided by TSI-MS is advantageous for surgeons. TSI-MS allows them to make more informed decisions during surgery.
One-sided truncated sequential t-test: application to natural resource sampling
Gary W. Fowler; William G. O'Regan
1974-01-01
A new procedure for constructing one-sided truncated sequential t-tests and its application to natural resource sampling are described. Monte Carlo procedures were used to develop a series of one-sided truncated sequential t-tests and the associated approximations to the operating characteristic and average sample number functions. Different truncation points and...
Use of randomized sampling for analysis of metabolic networks.
Schellenberger, Jan; Palsson, Bernhard Ø
2009-02-27
Genome-scale metabolic network reconstructions in microorganisms have been formulated and studied for about 8 years. The constraint-based approach has shown great promise in analyzing the systemic properties of these network reconstructions. Notably, constraint-based models have been used successfully to predict the phenotypic effects of knock-outs and for metabolic engineering. The inherent uncertainty in both parameters and variables of large-scale models is significant and is well suited to study by Monte Carlo sampling of the solution space. These techniques have been applied extensively to the reaction rate (flux) space of networks, with more recent work focusing on dynamic/kinetic properties. Monte Carlo sampling as an analysis tool has many advantages, including the ability to work with missing data, the ability to apply post-processing techniques, and the ability to quantify uncertainty and to optimize experiments to reduce uncertainty. We present an overview of this emerging area of research in systems biology.
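A common concrete instance of such solution-space sampling is the hit-and-run algorithm for drawing uniform samples from a convex polytope. The hedged sketch below samples a toy "flux space" {x : Ax <= b}; the constraints are illustrative and the code is not tied to any specific network reconstruction.

```python
import numpy as np

rng = np.random.default_rng(4)

def hit_and_run(A, b, x0, n_samples, thin=10):
    """Uniform samples from {x : A x <= b}; x0 must be strictly interior."""
    x, out = x0.astype(float), []
    for i in range(n_samples * thin):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)                   # random direction
        ad, slack = A @ d, b - A @ x             # feasible segment along x + t d
        t_hi = np.min(slack[ad > 0] / ad[ad > 0])
        t_lo = np.max(slack[ad < 0] / ad[ad < 0])
        x = x + rng.uniform(t_lo, t_hi) * d      # jump to a uniform point on it
        if i % thin == 0:
            out.append(x.copy())
    return np.array(out)

# Toy "flux space": 0 <= v_i <= 1 for three reactions plus one coupling constraint
A = np.vstack([np.eye(3), -np.eye(3), [[1.0, 1.0, 0.0]]])
b = np.concatenate([np.ones(3), np.zeros(3), [1.2]])
samples = hit_and_run(A, b, x0=np.full(3, 0.3), n_samples=2000)
print("mean flux vector:", samples.mean(axis=0))
```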
Fast Monte Carlo simulation of a dispersive sample on the SEQUOIA spectrometer at the SNS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granroth, Garrett E; Chen, Meili; Kohl, James Arthur
2007-01-01
Simulation of an inelastic scattering experiment, with a sample and a large pixelated detector, usually requires days of time because of finite processor speeds. We report simulations of an SNS (Spallation Neutron Source) instrument, SEQUOIA, that reduce the time to less than 2 hours by using parallelization and the resources of the TeraGrid. SEQUOIA is a fine-resolution (∆E/Ei ~ 1%) chopper spectrometer under construction at the SNS. It utilizes incident energies from Ei = 20 meV to 2 eV and will have ~144,000 detector pixels covering 1.6 sr of solid angle. The full spectrometer, including a 1-D dispersive sample, has been simulated using the Monte Carlo package McStas. This paper summarizes the method of parallelization for and results from these simulations. In addition, limitations of and proposed improvements to current analysis software are discussed.
Molecular dynamics and dynamic Monte-Carlo simulation of irradiation damage with focused ion beams
NASA Astrophysics Data System (ADS)
Ohya, Kaoru
2017-03-01
The focused ion beam (FIB) has become an important tool for micro- and nanostructuring of samples, with applications such as milling, deposition and imaging. However, FIB processing also damages the surface on the nanometer scale through implanted projectile ions and recoiled material atoms. It is therefore important to investigate each kind of damage quantitatively. We present a dynamic Monte-Carlo (MC) simulation code to simulate the morphological and compositional changes of a multilayered sample under ion irradiation, and a molecular dynamics (MD) simulation code to simulate dose-dependent changes in the backscattering-ion (BSI)/secondary-electron (SE) yields of a crystalline sample. Recent progress with these codes is also presented, applied to simulating the surface morphology and Mo/Si layer intermixing in an EUV lithography mask irradiated with FIBs, and the crystalline-orientation effect on BSI and SE yields relating to the channeling contrast in scanning ion microscopes.
An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks
NASA Technical Reports Server (NTRS)
Kim, Stacy
2011-01-01
Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk are under-sampled, such as the optically thick disk interior, or are of particular interest, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations, at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracies and calculation speeds.
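The weighting idea itself fits in a few lines: bias the emission direction, then multiply the packet weight by the ratio of the true to the biased density so tallies stay unbiased on average. The linear angular density in this Python sketch is purely illustrative and is not the scheme developed in the work above.

```python
import numpy as np

rng = np.random.default_rng(5)

def emit(bias):
    """Sample mu = cos(theta) for one packet and return its statistical weight.

    bias = 0 is isotropic; bias > 0 favours mu = +1 via the (illustrative)
    linear density p(mu) = (1 + bias*mu)/2 on [-1, 1]."""
    u = rng.random()
    if bias == 0.0:
        mu = 2*u - 1
    else:                                        # inverse CDF of the linear density
        mu = (np.sqrt((1 - bias)**2 + 4*bias*u) - 1) / bias
    weight = 1.0 / (1 + bias*mu)                 # p_iso(mu) / p_biased(mu)
    return mu, weight

mus, ws = np.array([emit(0.8) for _ in range(200_000)]).T
# Weighted tallies reproduce isotropic expectations, e.g. <mu> = 0
print("biased raw mean:", mus.mean(), " weighted mean:", (ws * mus).mean())
```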
Optimized nested Markov chain Monte Carlo sampling: theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D
2009-01-01
Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
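A minimal sketch of the nested scheme, assuming toy one-dimensional "full" and "reference" potentials (and a canonical rather than isothermal-isobaric ensemble, for brevity): an inner Metropolis chain equilibrates on the cheap reference potential, and the composite move is accepted with the modified criterion in which the reference-potential bias cancels.

```python
import numpy as np

rng = np.random.default_rng(6)
beta = 2.0

def U_full(x):                                   # "expensive" potential (quartic well)
    return x**4 - 2*x**2

def U_ref(x):                                    # cheap reference (harmonic guess)
    return 0.5*x**2 - 1.0

def inner_chain(x, n_steps=20, step=0.5):
    """Ordinary Metropolis walk on the reference potential."""
    for _ in range(n_steps):
        y = x + step * rng.standard_normal()
        if rng.random() < np.exp(-beta * (U_ref(y) - U_ref(x))):
            x = y
    return x

x, samples = 0.0, []
for _ in range(5000):
    y = inner_chain(x)                           # composite move: many cheap steps
    dU = (U_full(y) - U_full(x)) - (U_ref(y) - U_ref(x))
    if rng.random() < np.exp(-beta * dU):        # modified Metropolis criterion
        x = y                                    # one full evaluation per composite move
    samples.append(x)
print("<x^2> under the full potential:", np.mean(np.square(samples)))
```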
POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2013-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been provided in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well-known technique of generating a large number of samples in a Monte Carlo study, and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculation for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended and tabulated values of required sample sizes are shown for some models. PMID:23935262
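The general recipe - simulate many data sets from the hypothesized model and count how often the effect of interest is significant - is straightforward to sketch. Below is a hedged Python analog for the single-mediator case using a joint-significance test; the paper's examples use Mplus, and the effect sizes and the unadjusted b-path regression here are illustrative simplifications.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def mediation_power(a, b, n, n_rep=2000, alpha=0.05):
    """Monte Carlo power for the indirect effect a*b in X -> M -> Y,
    using the joint-significance test of the a and b paths."""
    hits = 0
    for _ in range(n_rep):
        x = rng.standard_normal(n)
        m = a*x + rng.standard_normal(n)
        y = b*m + rng.standard_normal(n)
        p_a = stats.linregress(x, m).pvalue
        p_b = stats.linregress(m, y).pvalue      # b path (unadjusted for X, for brevity)
        hits += (p_a < alpha) and (p_b < alpha)
    return hits / n_rep

for n in (50, 100, 200):                         # read off the n where power reaches ~.80
    print(n, mediation_power(a=0.3, b=0.3, n=n))
```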
The Direct Lighting Computation in Global Illumination Methods
NASA Astrophysics Data System (ADS)
Wang, Changyaw Allen
1994-01-01
Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; Monte Carlo sampling methods; and light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.
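As an illustration of a direct lighting estimator, the sketch below Monte Carlo-integrates the contribution of a square area light to a single surface point. Visibility is omitted and all geometry and radiometric values are illustrative; this is a generic sketch, not the thesis' sampling method.

```python
import numpy as np

rng = np.random.default_rng(8)

# Direct lighting at a surface point from a square area light:
# L ~ (A / N) * sum Le * brdf * cos(theta) * cos(theta') / r^2
p      = np.array([0.0, 0.0, 0.0]);  n_p = np.array([0.0, 0.0, 1.0])   # shaded point
light0 = np.array([-0.5, -0.5, 2.0])                                   # light corner
n_l    = np.array([0.0, 0.0, -1.0]);  A, Le, brdf = 1.0, 5.0, 1/np.pi

N = 10000
u = rng.random((N, 2))
q = light0 + np.column_stack([u, np.zeros(N)])   # uniform points on the light
d = q - p
r2 = np.einsum('ij,ij->i', d, d)                 # squared distances
w = d / np.sqrt(r2)[:, None]                     # unit directions to the light
cos_p = np.clip(w @ n_p, 0, None)                # cosine at the receiver
cos_l = np.clip(-(w @ n_l), 0, None)             # cosine at the light
L = A / N * np.sum(Le * brdf * cos_p * cos_l / r2)
print("estimated direct radiance:", L)
```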
Importance Sampling of Word Patterns in DNA and Protein Sequences
Chan, Hock Peng; Chen, Louis H.Y.
2010-01-01
Monte Carlo methods can provide accurate p-value estimates of word counting test statistics and are easy to implement. They are especially attractive when an asymptotic theory is absent or when either the search sequence or the word pattern is too short for the application of asymptotic formulae. Naive direct Monte Carlo is undesirable for the estimation of small probabilities because the associated rare events of interest are seldom generated. We propose instead efficient importance sampling algorithms that use controlled insertion of the desired word patterns on randomly generated sequences. The implementation is illustrated on word patterns of biological interest: palindromes and inverted repeats, patterns arising from position-specific weight matrices (PSWMs), and co-occurrences of pairs of motifs. PMID:21128856
Brandão, Eric; Flesch, Rodolfo C C; Lenzi, Arcanjo; Flesch, Carlos A
2011-07-01
The pressure-particle velocity (PU) impedance measurement technique is an experimental method used to measure the surface impedance and the absorption coefficient of acoustic samples in situ or under free-field conditions. In this paper, the measurement uncertainty of the absorption coefficient determined using the PU technique is explored by applying the Monte Carlo method. It is shown that, because of the uncertainty, it is particularly difficult to measure samples with low absorption, and that difficulties associated with the localization of the acoustic centers of the sound source and the PU sensor affect the quality of the measurement roughly to the same extent as errors in the transfer function between pressure and particle velocity do. © 2011 Acoustical Society of America
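The Monte Carlo uncertainty evaluation follows the usual GUM-supplement pattern: perturb the measured inputs, push each draw through the nonlinear model, and summarize the resulting output distribution. A hedged Python sketch for a normal-incidence absorption coefficient, with an illustrative impedance value and a simplified amplitude/phase noise model:

```python
import numpy as np

rng = np.random.default_rng(9)

rho_c = 413.0                      # characteristic impedance of air [Pa s/m]
Z_meas = (1.2 + 0.9j) * rho_c      # measured surface impedance (illustrative value)
sigma  = 0.05                      # assumed 5% amplitude and phase uncertainty

# Perturb the measurement, propagate through the nonlinear model, read off alpha
N = 100000
Z = Z_meas * (1 + sigma * rng.standard_normal(N)) \
           * np.exp(1j * sigma * rng.standard_normal(N))
R = (Z - rho_c) / (Z + rho_c)      # normal-incidence reflection coefficient
alpha = 1 - np.abs(R)**2           # absorption coefficient

print("alpha = %.3f +/- %.3f (95%% interval %.3f..%.3f)" % (
    alpha.mean(), alpha.std(),
    np.quantile(alpha, 0.025), np.quantile(alpha, 0.975)))
```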
Collective translational and rotational Monte Carlo cluster move for general pairwise interaction
NASA Astrophysics Data System (ADS)
Růžička, Štěpán; Allen, Michael P.
2014-09-01
Virtual move Monte Carlo is a cluster algorithm which was originally developed for strongly attractive colloidal, molecular, or atomistic systems in order to both approximate the collective dynamics and avoid sampling of unphysical kinetic traps. In this paper, we present the algorithm in the form, which selects the moving cluster through a wider class of virtual states and which is applicable to general pairwise interactions, including hard-core repulsion. The newly proposed way of selecting the cluster increases the acceptance probability by up to several orders of magnitude, especially for rotational moves. The results have their applications in simulations of systems interacting via anisotropic potentials both to enhance the sampling of the phase space and to approximate the dynamics.
NASA Astrophysics Data System (ADS)
Kirillin, M. Yu; Priezzhev, A. V.; Hast, J.; Myllylä, Risto
2006-02-01
Signals of an optical coherence tomograph from paper samples are calculated by the Monte Carlo method before and after the action of different immersion liquids such as ethanol, glycerol, benzyl alcohol, and 1-pentanol. It is shown within the framework of the model used that all these liquids reduce the contrast of the inhomogeneity image in the upper layers of the samples while considerably improving the visibility of lower layers, allowing the localisation of the rear boundary of the probed medium, which is important for precise contactless measurement of paper sheet thickness, for example, during the manufacturing process. The results of the calculations are in good agreement with experimental data.
Improvement of Simulation Method in Validation of Software of the Coordinate Measuring Systems
NASA Astrophysics Data System (ADS)
Nieciąg, Halina
2015-10-01
Software is used to accomplish various tasks at each stage of the functioning of modern measuring systems. Before metrological confirmation of measuring equipment, the system has to be validated. This paper discusses a method for conducting validation studies of a fragment of software used to calculate the values of measurands. Due to the number and nature of the variables affecting coordinate measurement results and the complex, multi-dimensional character of measurands, the study used the Monte Carlo method of numerical simulation. The article presents an attempt at a possible improvement of the results obtained by classic Monte Carlo tools. The LHS (Latin Hypercube Sampling) algorithm was implemented as an alternative to the simple sampling scheme of the classic algorithm.
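Latin hypercube sampling itself is only a few lines: each axis is cut into n equal strata and a random permutation pairs the strata across axes. The hedged sketch below compares its estimator variance with simple random sampling on an arbitrary smooth integrand; the integrand and sizes are illustrative, not the paper's validation case.

```python
import numpy as np

rng = np.random.default_rng(10)

def latin_hypercube(n, d):
    """n points in [0,1)^d: every axis is split into n strata, each hit once."""
    return np.column_stack([(rng.permutation(n) + rng.random(n)) / n
                            for _ in range(d)])

f = lambda x: np.exp(x.sum(axis=1))              # arbitrary smooth integrand
n, d, reps = 64, 3, 500
lhs = [f(latin_hypercube(n, d)).mean() for _ in range(reps)]
srs = [f(rng.random((n, d))).mean() for _ in range(reps)]
print("estimator std, LHS: %.4f  simple sampling: %.4f"
      % (np.std(lhs), np.std(srs)))
```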
Calculation of self-shielding factor for neutron activation experiments using GEANT4 and MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero-Barrientos, Jaime, E-mail: jaromero@ing.uchile.cl; Universidad de Chile, DFI, Facultad de Ciencias Físicas y Matemáticas, Avenida Blanco Encalada 2008, Santiago; Molina, F.
2016-07-07
The neutron self-shielding factor G as a function of the neutron energy was obtained for 14 pure metallic samples in 1000 isolethargic energy bins from 1×10^-5 eV to 2×10^7 eV using Monte Carlo simulations in GEANT4 and MCNP6. The comparison of these two Monte Carlo codes shows small differences in the final self-shielding factor, mostly due to the different cross section databases that each program uses.
Multilevel ensemble Kalman filtering
Hoel, Hakon; Law, Kody J. H.; Tempone, Raul
2016-06-14
This study embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. Finally, the resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
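The telescoping estimator at the heart of multilevel Monte Carlo can be sketched for a scalar SDE: estimate the coarsest level directly, then add correction terms from coupled fine/coarse path pairs driven by the same Brownian increments. Drift, volatility and the sample allocation below are illustrative, and this sketch omits the EnKF layer entirely.

```python
import numpy as np

rng = np.random.default_rng(11)
T, x0 = 1.0, 1.0

def euler_gbm(n_paths, n_steps):
    """Euler-Maruyama paths of dX = 0.05 X dt + 0.2 X dW (illustrative SDE)."""
    dt = T / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        x = x + 0.05*x*dt + 0.2*x*np.sqrt(dt)*rng.standard_normal(n_paths)
    return x

def euler_pair(n_paths, n_fine):
    """Coupled fine/coarse paths sharing the same Brownian increments."""
    dt = T / n_fine
    xf = np.full(n_paths, x0); xc = np.full(n_paths, x0)
    for _ in range(n_fine // 2):
        dW1 = np.sqrt(dt)*rng.standard_normal(n_paths)
        dW2 = np.sqrt(dt)*rng.standard_normal(n_paths)
        xf = xf + 0.05*xf*dt + 0.2*xf*dW1        # two fine steps...
        xf = xf + 0.05*xf*dt + 0.2*xf*dW2
        xc = xc + 0.05*xc*(2*dt) + 0.2*xc*(dW1 + dW2)   # ...one coarse step
    return xf, xc

# Telescoping MLMC estimator of E[X_T]: coarsest level plus level corrections
levels, N = [2, 4, 8, 16, 32], [40000, 20000, 10000, 5000, 2500]
est = euler_gbm(N[0], levels[0]).mean()
for l in range(1, len(levels)):
    xf, xc = euler_pair(N[l], levels[l])
    est += (xf - xc).mean()
print("MLMC estimate of E[X_T]:", est, " exact:", x0*np.exp(0.05*T))
```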
A generic multi-hazard and multi-risk framework and its application illustrated in a virtual city
NASA Astrophysics Data System (ADS)
Mignan, Arnaud; Euchner, Fabian; Wiemer, Stefan
2013-04-01
We present a generic framework to implement hazard correlations in multi-risk assessment strategies. We consider hazard interactions (process I), time-dependent vulnerability (process II) and time-dependent exposure (process III). Our approach is based on the Monte Carlo method to simulate a complex system, which is defined from assets exposed to a hazardous region. We generate 1-year time series, sampling from a stochastic set of events. Each time series corresponds to one risk scenario, and the analysis of multiple time series allows for the probabilistic assessment of losses and for the recognition of more or less probable risk paths. Each sampled event is associated with a time of occurrence, a damage footprint and a loss footprint. The occurrence of an event depends on its rate, which is conditional on the occurrence of past events (process I, concept of correlation matrix). Damage depends on the hazard intensity and on the vulnerability of the asset, which is conditional on previous damage on that asset (process II). Losses are the product of damage and exposure value, this value being the original exposure minus previous losses (process III, no reconstruction considered). The Monte Carlo method allows for a straightforward implementation of uncertainties and for implementation of numerous interactions, which is otherwise challenging in an analytical multi-risk approach. We apply our framework to a synthetic data set, defined by a virtual city within a virtual region. This approach gives the opportunity to perform multi-risk analyses in a controlled environment while not requiring real data, which may be difficult to access or simply unavailable to the public. Based on the heuristic approach, we define a 100 by 100 km region where earthquakes, volcanic eruptions, fluvial floods, hurricanes and coastal floods can occur. All hazards are harmonized to a common format. We define a 20 by 20 km city, composed of 50,000 identical buildings with a fixed economic value. Vulnerability curves are defined in terms of mean damage ratio as a function of hazard intensity. All data are based on simple equations found in the literature and on other simplifications. We show the impact of earthquake-earthquake interaction and hurricane-storm surge coupling, as well as of time-dependent vulnerability and exposure, on aggregated loss curves. One main result is the emergence of low-probability, high-consequence (extreme) events when correlations are implemented. While the concept of virtual city can suggest the theoretical benefits of multi-risk assessment for decision support, identifying their real-world practicality will require the study of real test sites.
NASA Astrophysics Data System (ADS)
Loh, Y. L.; Yao, D. X.; Carlson, E. W.
2008-04-01
A new class of two-dimensional magnetic materials Cu9X2(cpa)6·xH2O (cpa = 2-carboxypentonic acid; X = F, Cl, Br) was recently fabricated, in which Cu sites form a triangular kagome lattice (TKL). As the simplest model of geometric frustration in such a system, we study the thermodynamics of Ising spins on the TKL using an exact analytic method as well as Monte Carlo simulations. We present the free energy, internal energy, specific heat, entropy, sublattice magnetizations, and susceptibility. We describe the rich phase diagram of the model as a function of coupling constants, temperature, and applied magnetic field. For frustrated interactions in the absence of an applied field, the ground state is a spin liquid phase with residual entropy per spin s0/kB = (1/9) ln 72 ≈ 0.4752. In a weak applied field, the system maps to the dimer model on a honeycomb lattice, with residual entropy 0.0359 per spin and quasi-long-range order with power-law spin-spin correlations that should be detectable by neutron scattering. The power-law correlations become exponential at finite temperatures, but the correlation length may still be long.
A flexible importance sampling method for integrating subgrid processes
Raut, E. K.; Larson, V. E.
2016-01-29
Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
Statistical methods for astronomical data with upper limits. II - Correlation and regression
NASA Technical Reports Server (NTRS)
Isobe, T.; Feigelson, E. D.; Nelson, P. I.
1986-01-01
Statistical methods for calculating correlations and regressions in bivariate censored data, where the dependent variable can have upper or lower limits, are presented. Cox's regression and the generalization of Kendall's rank correlation coefficient provide significance levels for correlations, while the EM algorithm, under the assumption of normally distributed errors, and its nonparametric analog using the Kaplan-Meier estimator, give estimates for the slope of a regression line. Monte Carlo simulations demonstrate that survival analysis is reliable in determining correlations between luminosities at different bands. Survival analysis is applied to CO emission in infrared galaxies, X-ray emission in radio galaxies, H-alpha emission in cooling cluster cores, and radio emission in Seyfert galaxies.
Petscher, Yaacov; Schatschneider, Christopher
2015-01-01
Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the simple difference and covariance-adjusted scores were more or less powerful to detect treatment effects when relaxing certain assumptions made by Huck and McLean (1975). Four factors were manipulated in the design including sample size, normality of the pretest and posttest distributions, the correlation between pretest and posttest, and posttest variance. A 5 × 5 × 4 × 3 mostly crossed design was run with 1,000 replications per condition, resulting in 226,000 unique samples. The gain score was nearly as powerful as the covariance-adjusted score when pretest and posttest variances were equal, and as powerful in fan-spread growth conditions; thus, under certain circumstances the gain score could be used in two-wave randomized experimental designs. PMID:26379310
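The simulation design is easy to reproduce in outline. The hedged Python sketch below compares the power of the gain-score t-test with a covariance-adjusted (ANCOVA-style) test for one illustrative combination of sample size, pretest-posttest correlation and effect size; it is not the authors' simulation code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)

def one_trial(n=50, rho=0.6, delta=0.4):
    """Two-wave randomized design: p-values for the gain-score test and the
    covariance-adjusted (ANCOVA) test on the same simulated data."""
    g = np.repeat([0, 1], n)                              # control / treatment
    pre = rng.standard_normal(2*n)
    post = rho*pre + np.sqrt(1 - rho**2)*rng.standard_normal(2*n) + delta*g
    # Gain score: two-sample t-test on post - pre
    gain = post - pre
    p_gain = stats.ttest_ind(gain[g == 1], gain[g == 0]).pvalue
    # ANCOVA: regress post on [1, g, pre]; t-test on the group coefficient
    X = np.column_stack([np.ones(2*n), g, pre])
    beta, res, *_ = np.linalg.lstsq(X, post, rcond=None)
    df = 2*n - 3
    se = np.sqrt(res[0] / df * np.linalg.inv(X.T @ X)[1, 1])
    p_ancova = 2 * stats.t.sf(abs(beta[1] / se), df)
    return p_gain, p_ancova

trials = np.array([one_trial() for _ in range(2000)])
print("power, gain score:", (trials[:, 0] < .05).mean(),
      " power, ANCOVA:", (trials[:, 1] < .05).mean())
```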
Li, Ying; Li, Jinhui; Deng, Chao
2014-01-01
Raw leachate samples were collected from various municipal solid waste (MSW) landfills in a densely populated city in North China to measure the levels and compositional patterns of polybrominated diphenyl ethers (PBDEs) in leachate. The total concentration of PBDEs ranged from 4.0 to 351.2 ng/L, with an average of 73.0 ng/L. BDE-209 dominated the congeners in most of the samples, followed by BDE-47 and -99. Higher PBDEs concentrations were found in leachate from younger landfill facilities in the urban area. Pearson correlation analysis implied a potential dependence of the PBDEs level on landfill age, suspended solids and dissolved organic carbon, while the results of principal component analysis (PCA) suggested potential origins and transportation of PBDEs in leachate. The Monte Carlo method was adopted to estimate the annual leakage of PBDEs into the underground environment nationwide, based on two main scenarios: simple landfills with inadequate liner systems and composite-lined landfills with defective geomembranes. Copyright © 2013 Elsevier Ltd. All rights reserved.
Aerodynamics of Stardust Sample Return Capsule
NASA Technical Reports Server (NTRS)
Mitcheltree, R. A.; Wilmoth, R. G.; Cheatwood, F. M.; Brauckmann, G. J.; Greene, F. A.
1997-01-01
Successful return of interstellar dust and cometary material by the Stardust Sample Return Capsule requires an accurate description of the Earth entry vehicle's aerodynamics. This description must span the hypersonic-rarefied, hypersonic-continuum, supersonic, transonic, and subsonic flow regimes. Data from numerous sources are compiled to accomplish this objective. These include Direct Simulation Monte Carlo analyses, thermochemical nonequilibrium computational fluid dynamics, transonic computational fluid dynamics, existing wind tunnel data, and new wind tunnel data. Four observations are highlighted: 1) a static instability is revealed in the free-molecular and early transitional-flow regime due to the aft location of the vehicle's center of gravity; 2) the aerodynamics across the hypersonic regime are compared with the Newtonian flow approximation, and a correlation between the accuracy of the Newtonian flow assumption and the sonic line position is noted; 3) the primary effect of shape change due to ablation is shown to be a reduction in drag; and 4) a subsonic dynamic instability is revealed which will necessitate either a change in the vehicle's center-of-gravity location or the use of a stabilizing drogue parachute.
A Computationally-Efficient Inverse Approach to Probabilistic Strain-Based Damage Diagnosis
NASA Technical Reports Server (NTRS)
Warner, James E.; Hochhalter, Jacob D.; Leser, William P.; Leser, Patrick E.; Newman, John A
2016-01-01
This work presents a computationally-efficient inverse approach to probabilistic damage diagnosis. Given strain data at a limited number of measurement locations, Bayesian inference and Markov Chain Monte Carlo (MCMC) sampling are used to estimate probability distributions of the unknown location, size, and orientation of damage. Substantial computational speedup is obtained by replacing a three-dimensional finite element (FE) model with an efficient surrogate model. The approach is experimentally validated on cracked test specimens where full field strains are determined using digital image correlation (DIC). Access to full field DIC data allows for testing of different hypothetical sensor arrangements, facilitating the study of strain-based diagnosis effectiveness as the distance between damage and measurement locations increases. The ability of the framework to effectively perform both probabilistic damage localization and characterization in cracked plates is demonstrated and the impact of measurement location on uncertainty in the predictions is shown. Furthermore, the analysis time to produce these predictions is orders of magnitude less than a baseline Bayesian approach with the FE method by utilizing surrogate modeling and effective numerical sampling approaches.
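The core loop of such an approach, a cheap surrogate standing in for the finite element model inside a Metropolis sampler, can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the surrogate, sensor layout, noise level, and parameter names are all invented, and only the damage location is inferred.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate: maps damage location (x, y) to strains at 3 sensors.
# A real application would train this on finite element solutions.
sensors = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.9]])

def surrogate_strain(theta):
    """Cheap stand-in for an FE-trained surrogate: strain decays with
    distance between the damage site theta = (x, y) and each sensor."""
    d = np.linalg.norm(sensors - theta, axis=1)
    return 1.0 / (0.05 + d**2)

# Synthetic measurements from a "true" damage location plus noise.
theta_true = np.array([0.6, 0.4])
sigma = 0.5
y_obs = surrogate_strain(theta_true) + rng.normal(0, sigma, size=3)

def log_post(theta):
    # uniform prior on the unit plate; Gaussian measurement likelihood
    if np.any(theta < 0) or np.any(theta > 1):
        return -np.inf
    r = y_obs - surrogate_strain(theta)
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis: each step costs one surrogate call, not one FE solve.
theta = np.array([0.5, 0.5])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])          # discard burn-in
print("posterior mean damage location:", samples.mean(axis=0))
```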
Miniregoliths. I - Dusty lunar rocks and lunar soil layers
NASA Technical Reports Server (NTRS)
Comstock, G. M.
1978-01-01
A detailed Monte-Carlo model for rock surface evolution shows that erosion processes alone cannot account for the shapes of the solar flare particle track profiles generally observed at depths of about 100 microns and less in rocks. The observed profiles are easily explained by a steady accumulation of fine dust at a rate of 0.3 to 3 mm per m.y., depending on the micrometeoroid impact rate which controls the dust cover and results in maximum dust thicknesses on the order of 100 microns to 1 mm. The commonly used lunar soil track parameters are derived in terms of parameters characterizing the exposure of soil grains in the few-millimeter-thick surface mixing and maturation zone which is one form of miniregolith. Correlation plots permit determining the degree of mixing in soil samples and the amount of processing (maturation) in surface miniregoliths. It is shown that the sampling process often artificially mixes together finer distinct layers, and that ancient miniregolith layers on the order of a millimeter thick are probably common in the lunar soil.
Application of Monte Carlo algorithms to the Bayesian analysis of the Cosmic Microwave Background
NASA Technical Reports Server (NTRS)
Jewell, J.; Levin, S.; Anderson, C. H.
2004-01-01
Power spectrum estimation and evaluation of associated errors in the presence of incomplete sky coverage; nonhomogeneous, correlated instrumental noise; and foreground emission are problems of central importance for the extraction of cosmological information from the cosmic microwave background (CMB).
Proton Linear Energy Transfer measurement using Emulsion Cloud Chamber
NASA Astrophysics Data System (ADS)
Shin, Jae-ik; Park, Seyjoon; Kim, Haksoo; Kim, Meyoung; Jeong, Chiyoung; Cho, Sungkoo; Lim, Young Kyung; Shin, Dongho; Lee, Se Byeong; Morishima, Kunihiro; Naganawa, Naotaka; Sato, Osamu; Kwak, Jungwon; Kim, Sung Hyun; Cho, Jung Sook; Ahn, Jung Keun; Kim, Ji Hyun; Yoon, Chun Sil; Incerti, Sebastien
2015-04-01
This study proposes to determine the correlation between the Volume Pulse Height (VPH) measured by nuclear emulsion and the Linear Energy Transfer (LET) calculated by Monte Carlo simulation based on Geant4. The nuclear emulsion was irradiated at the National Cancer Center (NCC) with a therapeutic proton beam and was installed 5.2 m from the beam nozzle structure, with water-equivalent material (PMMA) blocks of various thicknesses used to position the films at specific depths along the Bragg curve. After the beam exposure and development of the emulsion films, the films were scanned by the S-UTS system developed at Nagoya University. The proton tracks in the scanned films were reconstructed using the 'NETSCAN' method. Through this procedure, the VPH can be derived for each reconstructed proton track at each position along the Bragg curve. The VPH value indicates the magnitude of energy loss along the proton track. By comparison with the simulation results obtained using Geant4, we found the correlation between the LET calculated by Monte Carlo simulation and the VPH measured by the nuclear emulsion.
Zirconia and its allotropes; A Quantum Monte Carlo study
NASA Astrophysics Data System (ADS)
Jokisaari, Andrea; Benali, Anouar; Shin, Hyeondeok; Luo, Ye; Lopez Bezanilla, Alejandro; Ratcliff, Laura; Littlewood, Peter; Heinonen, Olle
With its high strength and stability at elevated temperatures, zirconia (zirconium dioxide) is one of the most corrosion-resistant and refractory materials used in metallurgy, and is used in structural ceramics, catalytic converters, oxygen sensors, the nuclear industry, and in chemically passivating surfaces. The wide range of applications of ZrO2 has motivated a large number of electronic structure studies of its known allotropes (monoclinic, tetragonal and cubic). Density Functional Theory has been successful at reproducing some of the fundamental properties of some of the allotropes, but these results remain dependent on the specific combination of exchange-correlation functional and type of pseudopotentials, making any type of structural prediction or defect analysis uncertain. Quantum Monte Carlo (QMC) is a many-body method that treats electronic correlation explicitly, allowing materials properties to be reproduced and predicted with a limited number of controlled approximations. In this study, we use QMC to revisit the energetic stability of zirconia's allotropes and compare our results with those obtained from density functional theory.
Non-Parametric Collision Probability for Low-Velocity Encounters
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2007-01-01
An implicit, but not necessarily obvious, assumption in all of the current techniques for assessing satellite collision probability is that the relative position uncertainty is perfectly correlated in time. If there is any mis-modeling of the dynamics in the propagation of the relative position error covariance matrix, time-wise decorrelation of the uncertainty will increase the probability of collision over a given time interval. The paper gives some examples that illustrate this point. This paper argues that, for the present, Monte Carlo analysis is the best available tool for handling low-velocity encounters, and suggests some techniques for addressing the issues just described. One proposal is for the use of a non-parametric technique that is widely used in actuarial and medical studies. The other suggestion is that accurate process noise models be used in the Monte Carlo trials to which the non-parametric estimate is applied. A further contribution of this paper is a description of how the time-wise decorrelation of uncertainty increases the probability of collision.
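A minimal sketch of the kind of Monte Carlo analysis advocated here: relative states are drawn from an initial covariance, propagated with a small process-noise term so the uncertainty decorrelates over time, and the collision probability is the fraction of trajectories that ever pass inside the combined hard-body radius. All numbers are illustrative and do not come from any real conjunction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative low-velocity encounter: relative state in km and km/s.
R_hb = 0.02                      # combined hard-body radius, km
dt, n_steps = 2.0, 300           # 10-minute encounter window, 2 s steps
n_trials = 20000
x0 = np.array([0.15, -0.10, 0.05, -4e-4, 3e-4, 0.0])
P0 = np.diag([0.05, 0.05, 0.05, 1e-4, 1e-4, 1e-4]) ** 2
q = 1e-6                         # process-noise velocity diffusion, km/s/sqrt(s)

X = rng.multivariate_normal(x0, P0, size=n_trials)
hit = np.zeros(n_trials, dtype=bool)
for _ in range(n_steps):
    X[:, :3] += X[:, 3:] * dt
    # process noise gradually decorrelates each trajectory from its start
    X[:, 3:] += rng.normal(0.0, q * np.sqrt(dt), size=(n_trials, 3))
    hit |= np.linalg.norm(X[:, :3], axis=1) < R_hb

p = hit.mean()
print(f"Pc ~ {p:.4f} +/- {np.sqrt(p * (1 - p) / n_trials):.4f} (binomial std. err.)")
```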
THE DiskMass SURVEY. III. STELLAR KINEMATICS VIA CROSS-CORRELATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westfall, Kyle B.; Bershady, Matthew A.; Verheijen, Marc A. W., E-mail: westfall@astro.rug.nl, E-mail: mab@astro.wisc.edu, E-mail: verheyen@astro.rug.nl
2011-03-15
We describe a new cross-correlation (CC) approach used by our survey to derive stellar kinematics from galaxy-continuum spectroscopy. This approach adopts the formal error analysis derived by Statler, but properly handles spectral masks. Thus, we address the primary concerns regarding application of the CC method to censored data, while maintaining its primary advantage by consolidating kinematic and template-mismatch information toward different regions of the CC function. We identify a systematic error in the nominal CC method of approximately 10% in velocity dispersion incurred by a mistreatment of detector-censored data, which is eliminated by our new method. We derive our approach from first principles, and we use Monte Carlo simulations to demonstrate its efficacy. An identical set of Monte Carlo simulations performed using the well-established penalized-pixel-fitting code of Cappellari and Emsellem compares favorably with the results from our newly implemented software. Finally, we provide a practical demonstration of this software by extracting stellar kinematics from SparsePak spectra of UGC 6918.
NASA Astrophysics Data System (ADS)
Ruggeri, Michele; Luo, Hongjun; Alavi, Ali
Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is able to give remarkably accurate results in the study of atoms and molecules. The study of the uniform electron gas (UEG), on the other hand, has proven to be much harder, particularly in the low density regime. The source of this difficulty comes from the strong interparticle correlations that arise at low density, and essentially forbid the study of the electron gas in proximity of Wigner crystallization. We extend a previous study on the three-dimensional electron gas, computing the energy of a fully polarized gas of N = 27 electrons at high and medium density (r_s = 0.5 to 5.0). We show that even when dealing with a polarized UEG the computational cost of studying systems with r_s > 5.0 is prohibitive; in order to deal with correlations and to extend the density range that can be studied, we introduce a basis of localized states and an effective transcorrelated Hamiltonian.
Reducing statistical uncertainties in simulated organ doses of phantoms immersed in water
Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.; ...
2016-08-13
In this study, methods are addressed to reduce the computational time to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared, including the reciprocity method, importance sampling, weight windows and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10^5 when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.
NASA Astrophysics Data System (ADS)
Derwent, Richard G.; Parrish, David D.; Galbally, Ian E.; Stevenson, David S.; Doherty, Ruth M.; Naik, Vaishali; Young, Paul J.
2018-05-01
Recognising that global tropospheric ozone models have many uncertain input parameters, an attempt has been made to employ Monte Carlo sampling to quantify the uncertainties in model output that arise from global tropospheric ozone precursor emissions and from ozone production and destruction in a global Lagrangian chemistry-transport model. Ninety-eight quasi-randomly sampled Monte Carlo model runs were completed and the uncertainties were quantified in tropospheric burdens and lifetimes of ozone, carbon monoxide and methane, together with the surface distribution and seasonal cycle in ozone. The results have shown a satisfactory degree of convergence and provide a first estimate of the likely uncertainties in tropospheric ozone model outputs. There are likely to be diminishing returns in carrying out many more Monte Carlo runs in order to refine further these outputs. Uncertainties due to model formulation were separately addressed using the results from 14 Atmospheric Chemistry Coupled Climate Model Intercomparison Project (ACCMIP) chemistry-climate models. The 95% confidence ranges surrounding the ACCMIP model burdens and lifetimes for ozone, carbon monoxide and methane were somewhat smaller than for the Monte Carlo estimates. This reflected the situation where the ACCMIP models used harmonised emissions data and differed only in their meteorological data and model formulations, whereas a conscious effort was made to describe the uncertainties in the ozone precursor emissions and in the kinetic and photochemical data in the Monte Carlo runs. Attention was focussed on the model predictions of the ozone seasonal cycles at three marine boundary layer stations: Mace Head, Ireland; Trinidad Head, California; and Cape Grim, Tasmania. Despite comprehensively addressing the uncertainties due to global emissions and ozone sources and sinks, none of the Monte Carlo runs were able to generate seasonal cycles that matched the observations at all three MBL stations. Although the observed seasonal cycles were found to fall within the confidence limits of the ACCMIP members, this was because the model seasonal cycles spanned extremely wide ranges and there was no single ACCMIP member that performed best for each station. Further work is required to examine the parameterisation of convective mixing in the models to see if this erodes the isolation of the marine boundary layer from the free troposphere and thus hides the models' real ability to reproduce ozone seasonal cycles over marine stations.
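The quasi-random sampling strategy can be illustrated with a short sketch using a scrambled Sobol sequence; the parameter names and ranges below are invented placeholders, not the study's actual emission or kinetic inputs.

```python
import numpy as np
from scipy.stats import qmc

# 98 quasi-random (scrambled Sobol) points covering a 3-parameter uncertainty
# space. SciPy warns that 98 is not a power of two; harmless for illustration.
names = ["nox_emissions_scale", "isoprene_scale", "o3_dry_dep_scale"]
lo, hi = [0.7, 0.5, 0.8], [1.3, 1.5, 1.2]

sampler = qmc.Sobol(d=len(names), scramble=True, seed=0)
runs = qmc.scale(sampler.random(98), lo, hi)   # map [0,1)^3 onto the ranges

for i, r in enumerate(runs[:3]):
    print(f"run {i}:", dict(zip(names, np.round(r, 3))))
```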
Time Series Analysis Based on Running Mann Whitney Z Statistics
USDA-ARS?s Scientific Manuscript database
A sensitive and objective time series analysis method based on the calculation of Mann Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
Puzzle of magnetic moments of Ni clusters revisited using quantum Monte Carlo method.
Lee, Hung-Wen; Chang, Chun-Ming; Hsing, Cheng-Rong
2017-02-28
The puzzle of the magnetic moments of small nickel clusters arises from the discrepancy between values predicted using density functional theory (DFT) and experimental measurements. Traditional DFT approaches underestimate the magnetic moments of nickel clusters. Two fundamental problems are associated with this puzzle, namely, calculating the exchange-correlation interaction accurately and determining the global minimum structures of the clusters. Theoretically, the two problems can be solved using quantum Monte Carlo (QMC) calculations and the ab initio random structure searching (AIRSS) method correspondingly. Therefore, we combined the fixed-moment AIRSS and QMC methods to investigate the magnetic properties of Ni_n (n = 5-9) clusters. The spin moments of the diffusion Monte Carlo (DMC) ground states are higher than those of the Perdew-Burke-Ernzerhof ground states and, in the case of Ni_8 and Ni_9, two new ground-state structures have been discovered using the DMC calculations. The predicted results are closer to the experimental findings, unlike the results predicted in previous standard DFT studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFarge, R.A.
1990-05-01
MCPRAM (Monte Carlo PReprocessor for AMEER), a computer program that uses Monte Carlo techniques to create an input file for the AMEER trajectory code, has been developed for the Sandia National Laboratories VAX and Cray computers. Users can select the number of trajectories to compute, which AMEER variables to investigate, and the type of probability distribution for each variable. Any legal AMEER input variable can be investigated anywhere in the input run stream with either a normal, uniform, or Rayleigh distribution. Users also have the option to use covariance matrices for the investigation of certain correlated variables such as booster pre-reentry errors and wind, axial force, and atmospheric models. In conjunction with MCPRAM, AMEER was modified to include the variables introduced by the covariance matrices and to include provisions for six types of fuze models. The new fuze models and the new AMEER variables are described in this report.
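The core of such a preprocessor, drawing each selected input variable from its assigned distribution once per trajectory, is easy to sketch; the variable names and parameters below are hypothetical, and MCPRAM itself writes AMEER input files rather than printing values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each selected input variable gets a distribution type and parameters.
# Names and values here are illustrative only, not real AMEER inputs.
spec = {
    "launch_azimuth": ("normal",   {"loc": 90.0, "scale": 0.5}),
    "axial_force":    ("uniform",  {"low": 0.95, "high": 1.05}),
    "wind_speed":     ("rayleigh", {"scale": 4.0}),
}

def draw(kind, params):
    if kind == "normal":
        return rng.normal(**params)
    if kind == "uniform":
        return rng.uniform(**params)
    if kind == "rayleigh":
        return rng.rayleigh(**params)
    raise ValueError(kind)

n_trajectories = 5
for i in range(n_trajectories):
    run = {name: draw(kind, p) for name, (kind, p) in spec.items()}
    print(f"run {i}: {run}")   # a real preprocessor would emit an input file
```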
Auxiliary-field quantum Monte Carlo simulations of neutron matter in chiral effective field theory.
Wlazłowski, G; Holt, J W; Moroz, S; Bulgac, A; Roche, K J
2014-10-31
We present variational Monte Carlo calculations of the neutron matter equation of state using chiral nuclear forces. The ground-state wave function of neutron matter, containing nonperturbative many-body correlations, is obtained from auxiliary-field quantum Monte Carlo simulations of up to about 340 neutrons interacting on a 10^3 discretized lattice. The evolution Hamiltonian is chosen to be attractive and spin independent in order to avoid the fermion sign problem and is constructed to best reproduce broad features of the chiral nuclear force. This is facilitated by choosing a lattice spacing of 1.5 fm, corresponding to a momentum-space cutoff of Λ=414 MeV/c, a resolution scale at which strongly repulsive features of nuclear two-body forces are suppressed. Differences between the evolution potential and the full chiral nuclear interaction (Entem and Machleidt, Λ=414 MeV [L. Coraggio et al., Phys. Rev. C 87, 014322 (2013)]).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertholon, François; Harant, Olivier; Bourlon, Bertrand
This article introduces a joint Bayesian estimation of gas samples eluting from a gas chromatography (GC) column coupled with a NEMS sensor, based on the Giddings-Eyring microscopic molecular stochastic model. The posterior distribution is sampled using Markov chain Monte Carlo with Gibbs sampling. Parameters are estimated using the posterior mean. This estimation scheme is finally applied to simulated and real datasets using this molecular stochastic forward model.
Use of Empirical Estimates of Shrinkage in Multiple Regression: A Caution.
ERIC Educational Resources Information Center
Kromrey, Jeffrey D.; Hines, Constance V.
1995-01-01
The accuracy of four empirical techniques to estimate shrinkage in multiple regression was studied through Monte Carlo simulation. None of the techniques provided unbiased estimates of the population squared multiple correlation coefficient, but the normalized jackknife and bootstrap techniques demonstrated marginally acceptable performance with…
Electrostatic correlations in inhomogeneous charged fluids beyond loop expansion
NASA Astrophysics Data System (ADS)
Buyukdagli, Sahin; Achim, C. V.; Ala-Nissila, T.
2012-09-01
Electrostatic correlation effects in inhomogeneous symmetric electrolytes are investigated within a previously developed electrostatic self-consistent theory [R. R. Netz and H. Orland, Eur. Phys. J. E 11, 301 (2003)], 10.1140/epje/i2002-10159-0. To this aim, we introduce two computational approaches that allow one to solve the self-consistent equations beyond the loop expansion. The first method is based on a perturbative Green's function technique, and the second one is an extension of a previously introduced semiclassical approximation for single dielectric interfaces to the case of slit nanopores. Both approaches can handle the case of dielectrically discontinuous boundaries where the one-loop theory is known to fail. By comparing the theoretical results obtained from these schemes with the results of the Monte Carlo simulations that we ran for ions at neutral single dielectric interfaces, we first show that the weak-coupling Debye-Huckel theory remains quantitatively accurate up to the bulk ion density ρ_b ≃ 0.01 M, whereas the self-consistent theory exhibits a good quantitative accuracy up to ρ_b ≃ 0.2 M, thus improving the accuracy of the Debye-Huckel theory by one order of magnitude in ionic strength. Furthermore, we compare the predictions of the self-consistent theory with previous Monte Carlo simulation data for charged dielectric interfaces and show that the proposed approaches can also accurately handle the correlation effects induced by the surface charge in a parameter regime where the mean-field result significantly deviates from the Monte Carlo data. Then, we derive from the perturbative self-consistent scheme the one-loop theory of asymmetrically partitioned salt systems around a dielectrically homogeneous charged surface. It is shown that correlation effects originate in these systems from a competition between the salt screening loss at the interface driving the ions to the bulk region, and the interfacial counterion screening excess attracting them towards the surface. This competition can be quantified in terms of the characteristic surface charge σ_s^* = √(2ρ_b/(π ℓ_B)), where ℓ_B = 7 Å is the Bjerrum length. In the case of weak surface charges σ_s ≪ σ_s^*, where counterions form a diffuse layer, the interfacial salt screening loss is the dominant effect. As a result, correlation effects decrease the mean-field density of both coions and counterions. With an increase of the surface charge towards σ_s^*, the surface-attractive counterion screening excess starts to dominate, and correlation effects amplify in this regime the mean-field density of both types of ions. However, in the regime σ_s > σ_s^*, the same counterion screening excess also results in a significant decrease of the electrostatic mean-field potential. This reduces in turn the mean-field counterion density far from the charged surface. We also show that for σ_s ≫ σ_s^*, electrostatic correlations result in a charge inversion effect. However, the electrostatic coupling regime where this phenomenon takes place should be verified with Monte Carlo simulations since this parameter regime is located beyond the validity range of the one-loop theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuomas, V.; Jaakko, L.
This article discusses the optimization of the target motion sampling (TMS) temperature treatment method, previously implemented in the Monte Carlo reactor physics code Serpent 2. The TMS method was introduced in [1] and first practical results were presented at the PHYSOR 2012 conference [2]. The method is a stochastic method for taking the effect of thermal motion into account on-the-fly in a Monte Carlo neutron transport calculation. It is based on sampling the target velocities at collision sites and then utilizing the 0 K cross sections in the target-at-rest frame for reaction sampling. The fact that the total cross section becomes a distributed quantity is handled using rejection sampling techniques. The original implementation of TMS requires 2.0 times more CPU time in a PWR pin-cell case than a conventional Monte Carlo calculation relying on pre-broadened effective cross sections. In an HTGR case examined in this paper the overhead factor is as high as 3.6. By first changing from a multi-group to a continuous-energy implementation and then fine-tuning a parameter affecting the conservativity of the majorant cross section, it is possible to decrease the overhead factors to 1.4 and 2.3, respectively. Preliminary calculations are also made using a new and yet incomplete optimization method in which the temperature of the basis cross section is increased above 0 K. It seems that with the new approach it may be possible to decrease the factors even as low as 1.06 and 1.33, respectively, but its functionality has not yet been proven. Therefore, these performance measures should be considered preliminary. (authors)
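The rejection step at the heart of TMS can be illustrated with a toy sketch: sample a thermal target velocity, evaluate the 0 K cross section at the resulting relative energy, and accept the tentative collision with probability sigma(E_rel)/sigma_maj. The cross-section shape, temperature, and majorant inflation factor below are invented; a more conservative majorant lowers the acceptance rate, which is exactly the overhead that fine-tuning the majorant reduces.

```python
import numpy as np

rng = np.random.default_rng(7)

def sigma_0k(e_rel):
    """Hypothetical 0 K microscopic cross section vs relative energy (eV)."""
    return 1.0 + 50.0 / np.sqrt(e_rel + 1e-9)      # 1/v-like shape

kT = 0.0253            # target temperature, eV (room temperature)
A = 238.0              # target-to-neutron mass ratio (illustrative)
e_n = 1.0              # neutron energy, eV
v_n = np.sqrt(e_n)     # speed in units where E = v^2

# The majorant must bound the effective cross section over all thermal target
# velocities; here we simply inflate the 0 K value at a conservatively low
# relative energy.
sigma_maj = sigma_0k(0.2 * e_n) * 1.3

accepted, trials = 0, 20000
for _ in range(trials):
    # sample one target velocity component from the Maxwellian (1-D sketch)
    v_t = rng.normal(0.0, np.sqrt(kT / A))
    e_rel = (v_n - v_t) ** 2
    # rejection step: accept the tentative collision with prob sigma/sigma_maj
    if rng.random() < sigma_0k(e_rel) / sigma_maj:
        accepted += 1
print("acceptance efficiency:", accepted / trials)
```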
Annealed Importance Sampling Reversible Jump MCMC algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios; Andrieu, Christophe
2013-03-20
It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms were proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise of routinely tackling transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical, efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see, the algorithm can be understood as an "exact approximation" of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.
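The annealed importance sampling ingredient can be sketched in isolation: AIS estimates the ratio of normalising constants between two unnormalised densities, which is the quantity an aisRJ acceptance ratio needs. The one-dimensional densities and tempering schedule below are toy choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two unnormalised log-densities; AIS estimates log(Z1/Z0).
def log_f0(x):
    return -0.5 * x ** 2                      # N(0, 1), Z0 = sqrt(2*pi)
def log_f1(x):
    return -0.5 * (x - 3.0) ** 2 / 0.25       # N(3, 0.25), Z1 = sqrt(2*pi)*0.5

n_temps, n_chains = 50, 2000
betas = np.linspace(0.0, 1.0, n_temps)

def log_f(x, b):
    return (1 - b) * log_f0(x) + b * log_f1(x)   # geometric bridge

x = rng.normal(0.0, 1.0, n_chains)     # exact draws from f0
log_w = np.zeros(n_chains)
for b_prev, b in zip(betas[:-1], betas[1:]):
    log_w += log_f(x, b) - log_f(x, b_prev)      # importance weight increment
    for _ in range(2):                           # a few Metropolis moves at b
        prop = x + rng.normal(0, 0.5, n_chains)
        acc = np.log(rng.random(n_chains)) < log_f(prop, b) - log_f(x, b)
        x = np.where(acc, prop, x)

est = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
print("AIS log-ratio estimate:", est, " exact:", np.log(0.5))
```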
NASA Astrophysics Data System (ADS)
Mugnier, J. L.; Godon, C.; Buoncristiani, J. F.; Paquette, J. L.; Trouvé, E.
2012-04-01
The efficiency of erosional processes is classically inferred from the detrital composition at the outlet of a watershed, which reflects the rocks eroded within it. We adapt fluvial detrital thermochronology (DeCelles et al., 2004) and lithology (Attal and Lavé, 2006) methods to the subglacial streams of the north face of Mont Blanc. The lithology of this area comprises a ~303 Ma old granite intruded into an older polymetamorphic complex (orthogneisses). In this study, we use macroscopic criteria (~10,000 clasts) and U/Pb dating of zircons (~500 dated sand grains) to determine the provenance of the sediment transported by the glacier and by the subglacial streams. Samples come from sediments collected around the glacier (above, below or laterally), from different bedrock sources according to the surface flow lines and glacier characteristics (above or below the ELA; temperate or cold), and from different subglacial streams. A comparison between the proportions of granite and orthogneiss in these samples indicates that: 1) the supraglacial load follows the flow lines of the glacier deduced from SAR image correlation, and the displacement pattern excludes mixing of the different supraglacial sources; 2) transport by the glacier does not mix the clasts issued from subglacial erosion with the clasts issued from supraglacial deposition, except in the lower tongue where supraglacial streams and moulins move the supraglacial load from top to bottom; 3) the erosion rate beneath the glacier is very small: null beneath the cold ice and very weak beneath the greater part of the temperate glacier; erosion increases significantly beneath the tongue, where supraglacial load incorporated at the base favors abrasion; 4) the glacial erosion rate beneath the tongue remains at least five times smaller than the erosion rate from non-glacial areas. Based on these results, we conclude that the glaciers of the Mont Blanc north face protect the top of Europe from erosion. DeCelles et al., 2004, Earth and Planetary Science Letters, v. 227, p. 313-330. Attal and Lavé, 2006, Geol. Soc. Am. Spec. Publ. (S.D. Willett, N. Hovius, M.T. Brandon and D. Fisher, eds.), 398, p. 143-171.
Gill, Samuel C; Lim, Nathan M; Grinaway, Patrick B; Rustenburg, Ariën S; Fass, Josh; Ross, Gregory A; Chodera, John D; Mobley, David L
2018-05-31
Accurately predicting protein-ligand binding affinities and binding modes is a major goal in computational chemistry, but even the prediction of ligand binding modes in proteins poses major challenges. Here, we focus on solving the binding mode prediction problem for rigid fragments. That is, we focus on computing the dominant placement, conformation, and orientations of a relatively rigid, fragment-like ligand in a receptor, and the populations of the multiple binding modes which may be relevant. This problem is important in its own right, but is even more timely given the recent success of alchemical free energy calculations. Alchemical calculations are increasingly used to predict binding free energies of ligands to receptors. However, the accuracy of these calculations is dependent on proper sampling of the relevant ligand binding modes. Unfortunately, ligand binding modes may often be uncertain, hard to predict, and/or slow to interconvert on simulation time scales, so proper sampling with current techniques can require prohibitively long simulations. We need new methods which dramatically improve sampling of ligand binding modes. Here, we develop and apply a nonequilibrium candidate Monte Carlo (NCMC) method to improve sampling of ligand binding modes. In this technique, the ligand is rotated and subsequently allowed to relax in its new position through alchemical perturbation before accepting or rejecting the rotation and relaxation as a nonequilibrium Monte Carlo move. When applied to a T4 lysozyme model binding system, this NCMC method shows over 2 orders of magnitude improvement in binding mode sampling efficiency compared to a brute force molecular dynamics simulation. This is a first step toward applying this methodology to pharmaceutically relevant binding of fragments and, eventually, drug-like molecules. We are making this approach available via our new Binding modes of ligands using enhanced sampling (BLUES) package which is freely available on GitHub.
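The structure of such a move, an instantaneous proposal followed by nonequilibrium relaxation with work accumulation and a single accept/reject on the total protocol work, can be sketched on a toy two-well system. This is schematic only: a production NCMC move, as in BLUES, uses dynamics that preserve the canonical distribution at fixed coupling, velocity resampling, and exact work bookkeeping, none of which the random kicks below provide.

```python
import numpy as np

rng = np.random.default_rng(3)
beta = 1.0

def energy(x, lam=1.0):
    """Toy 2-D 'receptor' with two binding-mode wells; lam softens the
    interaction, standing in for alchemical scaling during NCMC relaxation."""
    wells = np.array([[1.0, 0.4], [-1.0, -0.4]])
    boltz = np.exp(-4.0 * np.sum((x - wells) ** 2, axis=1)).sum()
    return -lam * np.log(boltz + 1e-300)

x = np.array([1.0, 0.4])
n_acc, visits = 0, []
for step in range(4000):
    # 1) instantaneous rotation proposal about the origin
    th = rng.uniform(0, 2 * np.pi)
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    y = R @ x
    work = energy(y) - energy(x)          # work of the instantaneous move
    # 2) nonequilibrium relaxation: soften and restore the coupling while the
    #    system relaxes; only the lambda switches contribute to the work
    lams = np.concatenate([np.linspace(1, 0.3, 8), np.linspace(0.3, 1, 8)])
    lam_old = 1.0
    for lam in lams:
        work += energy(y, lam) - energy(y, lam_old)
        lam_old = lam
        y = y + rng.normal(0, 0.08, 2)    # schematic relaxation step
    # 3) accept/reject the whole rotate-and-relax move on the protocol work
    if np.log(rng.random()) < -beta * work:
        x, n_acc = y, n_acc + 1
    visits.append(np.sign(x[0]))          # which binding mode we occupy
print("acceptance:", n_acc / 4000, " mode balance:", np.mean(visits))
```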
Sun, Jun; Zhou, Xin; Wu, Xiaohong; Zhang, Xiaodong; Li, Qinglin
2016-02-26
Fast identification of moisture content in tobacco plant leaves plays a key role in the tobacco cultivation industry and benefits the management of tobacco plants on the farm. In order to identify the moisture content of tobacco plant leaves in a fast and nondestructive way, a method involving Mahalanobis distance coupled with Monte Carlo cross validation (MD-MCCV) was proposed to eliminate outlier samples in this study. The hyperspectral data of 200 tobacco plant leaf samples of 20 moisture gradients were obtained using a FieldSpec® 3 spectrometer. Savitzky-Golay smoothing (SG), roughness penalty smoothing (RPS), kernel smoothing (KS) and median smoothing (MS) were used to preprocess the raw spectra. In addition, Mahalanobis distance (MD), Monte Carlo cross validation (MCCV) and Mahalanobis distance coupled to Monte Carlo cross validation (MD-MCCV) were applied to select the outlier samples of the raw spectrum and the four smoothing-preprocessed spectra. The successive projections algorithm (SPA) was used to extract the most influential wavelengths. Multiple Linear Regression (MLR) was applied to build the prediction models based on preprocessed spectral features at characteristic wavelengths. The results showed that the four best prediction models were MD-MCCV-SG (Rp² = 0.8401 and RMSEP = 0.1355), MD-MCCV-RPS (Rp² = 0.8030 and RMSEP = 0.1274), MD-MCCV-KS (Rp² = 0.8117 and RMSEP = 0.1433), and MD-MCCV-MS (Rp² = 0.9132 and RMSEP = 0.1162). The MD-MCCV algorithm performed best among the MD algorithm, the MCCV algorithm and the method without sample pretreatment in eliminating outlier samples from the 20 moisture gradients of tobacco plant leaves, and MD-MCCV can be used to eliminate outlier samples in spectral preprocessing. Copyright © 2016 Elsevier Inc. All rights reserved.
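The outlier-screening logic can be sketched on synthetic data: Mahalanobis distances flag samples that are extreme in spectral space, while repeated random train/validation splits (Monte Carlo cross-validation) flag samples that are persistently hard to predict. The thresholds, dimensions, and synthetic spectra below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "spectra": 60 samples x 20 wavelengths, linear in moisture plus
# noise, with the first 3 samples deliberately corrupted as outliers.
n, p = 60, 20
moisture = rng.uniform(0.1, 0.9, n)
X = np.outer(moisture, rng.normal(1, 0.2, p)) + rng.normal(0, 0.02, (n, p))
X[:3] += rng.normal(0, 0.5, (3, p))

# Mahalanobis distance in a reduced space (top principal components via SVD)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:5].T
cov_inv = np.linalg.inv(np.cov(T, rowvar=False))
d = T - T.mean(axis=0)
md = np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))

# Monte Carlo cross-validation: random 80/20 splits, tracking each sample's
# out-of-sample squared error whenever it lands in the validation set
err_sum, err_cnt = np.zeros(n), np.zeros(n)
for _ in range(500):
    idx = rng.permutation(n)
    tr, va = idx[:48], idx[48:]
    coef, *_ = np.linalg.lstsq(X[tr], moisture[tr], rcond=None)
    err_sum[va] += (X[va] @ coef - moisture[va]) ** 2
    err_cnt[va] += 1
mean_err = err_sum / err_cnt

# Flag samples extreme in either criterion (2-sigma thresholds, illustrative)
flag = (md > md.mean() + 2 * md.std()) | (mean_err > mean_err.mean() + 2 * mean_err.std())
print("flagged sample indices:", np.where(flag)[0])
```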
Simple and Accurate Method for Central Spin Problems
NASA Astrophysics Data System (ADS)
Lindoy, Lachlan P.; Manolopoulos, David E.
2018-06-01
We describe a simple quantum mechanical method that can be used to obtain accurate numerical results over long timescales for the spin correlation tensor of an electron spin that is hyperfine coupled to a large number of nuclear spins. This method does not suffer from the statistical errors that accompany a Monte Carlo sampling of the exact eigenstates of the central spin Hamiltonian obtained from the algebraic Bethe ansatz, or from the growth of the truncation error with time in the time-dependent density matrix renormalization group (TDMRG) approach. As a result, it can be applied to larger central spin problems than the algebraic Bethe ansatz, and for longer times than the TDMRG algorithm. It is therefore an ideal method to use to solve central spin problems, and we expect that it will also prove useful for a variety of related problems that arise in a number of different research fields.
Magnetic properties of the honeycomb oxide Na2Co2TeO6
Lefrançois, E.; Songvilay, M.; Robert, J.; ...
2016-12-14
We have studied the magnetic properties of Na2Co2TeO6, which features a honeycomb lattice of magnetic Co2+ ions, through macroscopic characterization and neutron diffraction on a powder sample. We show that this material orders in a zigzag antiferromagnetic structure. This magnetic arrangement, which allows a linear magnetoelectric coupling, displays very peculiar spatial magnetic correlations, larger in the honeycomb planes than between the planes, which do not evolve with temperature. We have investigated this behavior by classical Monte Carlo calculations using the J1-J2-J3 model on a honeycomb lattice with a small interplane interaction. Our model reproduces the experimental neutron structure factor, although its absence of temperature evolution must be due to additional ingredients, such as chemical disorder or quantum fluctuations enhanced by the proximity to a phase boundary.
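A classical Metropolis simulation of the kind used here can be sketched for Heisenberg spins on a honeycomb lattice with three coupling shells; the couplings, sign convention, temperature, and lattice size below are illustrative, and the interplane interaction is omitted.

```python
import numpy as np

rng = np.random.default_rng(11)

# Build an L x L honeycomb lattice (two-site basis) with periodic boundaries.
L = 6
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
basis = [np.array([0.0, 0.0]), (a1 + a2) / 3]
pos = np.array([i * a1 + j * a2 + b for i in range(L) for j in range(L) for b in basis])
N = len(pos)

# Find the first three neighbor shells by minimum-image distances on the torus.
span = L * np.array([a1, a2])
def torus_d(r):
    return min(np.linalg.norm(r + di * span[0] + dj * span[1])
               for di in (-1, 0, 1) for dj in (-1, 0, 1))
D = np.array([[torus_d(pos[i] - pos[j]) for j in range(N)] for i in range(N)])
shells = np.sort(np.unique(np.round(D[0], 6)))[1:4]     # d1 < d2 < d3
nbrs = [np.isclose(D, s) for s in shells]

J = np.array([1.0, 0.3, -0.3])     # illustrative J1, J2, J3 (sign convention toy)
spins = rng.normal(size=(N, 3))
spins /= np.linalg.norm(spins, axis=1, keepdims=True)

def local_field(i):
    # h_i = sum over shells of J_shell * (sum of neighbor spins)
    return sum(Jn * spins[mask[i]].sum(axis=0) for Jn, mask in zip(J, nbrs))

T = 0.5
for sweep in range(300):
    for _ in range(N):             # Metropolis: propose a fresh random direction
        i = rng.integers(N)
        s_new = rng.normal(size=3)
        s_new /= np.linalg.norm(s_new)
        dE = (s_new - spins[i]) @ local_field(i)   # E = sum_pairs J S_i . S_j
        if dE < 0 or rng.random() < np.exp(-dE / T):
            spins[i] = s_new
print("mean energy per site:",
      sum(0.5 * spins[i] @ local_field(i) for i in range(N)) / N)
```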
Chia-Hsun Chuang; Pellejero-Ibanez, Marco; Rodriguez-Torres, Sergio; ...
2016-06-26
We analyze the broad-range shape of the monopole and quadrupole correlation functions of the BOSS Data Release 12 (DR12) CMASS and LOWZ galaxy samples to obtain constraints on the Hubble expansion rate H(z), the angular-diameter distance D_A(z), the normalised growth rate f(z)σ8(z), and the physical matter density Ω_m h². In addition, we adopt wide, flat priors on all model parameters in order to ensure the results are those of a 'single-probe' galaxy clustering analysis. We also marginalize over three nuisance terms that account for potential observational systematics affecting the measured monopole. However, such a Markov chain Monte Carlo analysis is computationally expensive for advanced theoretical models, thus we develop a new methodology to speed up our analysis.
Martins, Andréa Maria Eleutério de Barros Lima; da Costa, Fernanda Marques; Ferreira, Raquel Conceição; dos Santos Neto, Pedro Eleutério; de Magalhaes, Tatiana Almeida; de Sá, Maria Aparecida Barbosa; Pordeus, Isabela Almeida
2015-01-01
Cross-sectional study conducted among workers of the Family Health Strategy in Montes Claros, Brazil. The aims were to investigate self-reported vaccination against hepatitis B, verification of immunization, and the factors associated with anti-HBs titres. We collected blood samples from those who reported having received one or more doses of the vaccine. We evaluated the association of anti-HBs titres with sociodemographic, occupational and behavioral conditions. Associations were tested with Mann-Whitney and Kruskal-Wallis tests and Spearman correlation, followed by linear regression, using SPSS® 17.0. Among the 761 respondents, 504 (66.1%) were vaccinated, 52.5% received three doses, and 30.4% had verified immunization. Of the 397 evaluated for anti-HBs, 16.4% were immune. Longer duration of work was associated with higher levels of anti-HBs, while smoking levels were inversely associated with anti-HBs. These workers need vaccination campaigns.
Identifying the Bottom Line after a Stock Market Crash
NASA Astrophysics Data System (ADS)
Roehner, B. M.
In this empirical paper we show that in the months following a crash there is a distinct connection between the fall of stock prices and the increase in the range of interest rates for a sample of bonds. This variable, which is often referred to as the interest rate spread variable, can be considered as a statistical measure for the disparity in lenders' opinions about the future; in other words, it provides an operational definition of the uncertainty faced by economic agents. The observation that there is a strong negative correlation between stock prices and the spread variable relies on the examination of eight major crashes in the United States between 1857 and 1987. That relationship, which has remained valid for one and a half centuries in spite of important changes in the organization of financial markets, can be of interest in the perspective of Monte Carlo simulations of stock markets.
On coupling fluid plasma and kinetic neutral physics models
Joseph, I.; Rensink, M. E.; Stotler, D. P.; ...
2017-03-01
The coupled fluid plasma and kinetic neutral physics equations are analyzed through theory and simulation of benchmark cases. It is shown that coupling methods that do not treat the coupling rates implicitly are restricted to short time steps for stability. Fast charge exchange, ionization and recombination coupling rates exist, even after constraining the solution by requiring that the neutrals are at equilibrium. For explicit coupling, the present implementation of Monte Carlo correlated sampling techniques does not allow for complete convergence in slab geometry. For the benchmark case, residuals decay with particle number and increase with grid size, indicating that they scale in a manner that is similar to the theoretical prediction for nonlinear bias error. Progress is reported on implementation of a fully implicit Jacobian-free Newton–Krylov coupling scheme. The present block Jacobi preconditioning method is still sensitive to time step, and methods that better precondition the coupled system are under investigation.
The Overdense Environments of WISE-selected, ultra-luminous, high-redshift AGN in the submillimetre
NASA Astrophysics Data System (ADS)
Jones, Suzy F.
2017-11-01
The environments around WISE-selected hot, dust-obscured galaxies (Hot DOGs) and WISE/radio-selected active galactic nuclei (AGNs) at average redshifts of z = 2.7 and z = 1.7, respectively, were found to have overdensities of companion submillimetre-selected sources of ~2-3 and ~5-6, respectively, compared with blank-field submm surveys. The space densities in both samples were found to be high compared with those of normal star-forming galaxies and submillimetre galaxies (SMGs). All of the companion sources have mid-IR colours and mid-IR to submm ratios consistent with SMGs. Monte Carlo simulations show no angular correlation, which could indicate protoclusters on scales larger than the 1.5 arcmin scale of the SCUBA-2 maps. WISE-selected AGNs appear to be good indicators of overdense areas of active galaxies at high redshift.
Two-particle correlation function and dihadron correlation approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vechernin, V. V., E-mail: v.vechernin@spbu.ru; Ivanov, K. O.; Neverov, D. I.
It is shown that, in the case of asymmetric nuclear interactions, the application of the traditional dihadron correlation approach to determining a two-particle correlation function C may lead to a form distorted in relation to the canonical pair correlation function C₂. This result was obtained both by means of exact analytic calculations of correlation functions within a simple string model for proton–nucleus and deuteron–nucleus collisions and by means of Monte Carlo simulations based on employing the HIJING event generator. It is also shown that the method based on studying multiplicity correlations in two narrow observation windows separated in rapidity makes it possible to determine correctly the canonical pair correlation function C₂ for all cases, including the case where the rapidity distribution of product particles is not uniform.
A fortran program for Monte Carlo simulation of oil-field discovery sequences
Bohling, Geoffrey C.; Davis, J.C.
1993-01-01
We have developed a program for performing Monte Carlo simulation of oil-field discovery histories. A synthetic parent population of fields is generated as a finite sample from a distribution of specified form. The discovery sequence then is simulated by sampling without replacement from this parent population in accordance with a probabilistic discovery process model. The program computes a chi-squared deviation between synthetic and actual discovery sequences as a function of the parameters of the discovery process model, the number of fields in the parent population, and the distributional parameters of the parent population. The program employs the three-parameter log gamma model for the distribution of field sizes and a two-parameter discovery process model, allowing the simulation of a wide range of scenarios. © 1993.
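A common form of discovery process model assumes that fields are found without replacement with probability proportional to a power of field size, so larger fields tend to be discovered earlier. A sketch under that assumption follows, with a lognormal standing in for the program's three-parameter log gamma size distribution.

```python
import numpy as np

rng = np.random.default_rng(9)

# Parent population of field sizes; a lognormal stands in for the
# three-parameter log gamma used by the program.
n_fields = 300
sizes = rng.lognormal(mean=3.0, sigma=1.2, size=n_fields)

def simulate_discovery(sizes, beta, rng):
    """Sample fields without replacement with prob proportional to size**beta;
    beta > 0 means bigger fields tend to be found earlier."""
    remaining = list(range(len(sizes)))
    order = []
    while remaining:
        w = sizes[remaining] ** beta
        pick = rng.choice(len(remaining), p=w / w.sum())
        order.append(remaining.pop(pick))
    return np.array(order)

order = simulate_discovery(sizes, beta=1.0, rng=rng)
print("mean size, first 30 discoveries:", sizes[order[:30]].mean())
print("mean size, last  30 discoveries:", sizes[order[-30:]].mean())
```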
Interferometry correlations in central p+Pb collisions
NASA Astrophysics Data System (ADS)
Bożek, Piotr; Bysiak, Sebastian
2018-01-01
We present results on interferometry correlations for pions emitted in central p+Pb collisions at √s_NN = 5.02 TeV in a 3+1-dimensional viscous hydrodynamic model with initial conditions from the Glauber Monte Carlo model. The correlation function is calculated as a function of the pion pair rapidity. The extracted interferometry radii show a weak rapidity dependence, reflecting the lack of boost invariance of the pion distribution. A cross term between the out and long directions is found to be nonzero. The results obtained in the hydrodynamic model are in fair agreement with recent data of the ATLAS Collaboration.
Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S
2015-09-01
Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.
Comparison of correlated correlations.
Cohen, A
1989-12-01
We consider a problem where k highly correlated variables are available, each being a candidate for predicting a dependent variable. Only one of the k variables can be chosen as a predictor, and the question is whether there are significant differences in the quality of the predictors. We review several tests derived previously and propose a method based on the bootstrap. The motivating medical problem was to predict 24-hour proteinuria by the protein-creatinine ratio measured at either 08:00, 12:00 or 16:00. The tests which we discuss are illustrated by this example and compared using a small Monte Carlo study.
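A bootstrap version of the comparison can be sketched directly: resampling subjects preserves the dependence between the candidate correlations, so the difference of the two correlations can be interval-estimated without a normality assumption. The synthetic predictors below merely mimic the structure of the proteinuria example.

```python
import numpy as np

rng = np.random.default_rng(13)

# Synthetic stand-in: y is the 24-hour measurement, x1 and x2 are two highly
# correlated candidate predictors (values invented for illustration).
n = 120
y = rng.normal(size=n)
x1 = 0.8 * y + 0.6 * rng.normal(size=n)
x2 = 0.7 * y + 0.3 * x1 + 0.5 * rng.normal(size=n)

def r(a, b):
    return np.corrcoef(a, b)[0, 1]

d_obs = r(x1, y) - r(x2, y)

# Bootstrap the *difference* by resampling subjects, which preserves the
# dependence between the two candidate correlations.
boot = np.empty(5000)
for b in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[b] = r(x1[idx], y[idx]) - r(x2[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"r1 - r2 = {d_obs:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```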
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...
2016-08-24
Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
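The telescoping identity at the heart of MLMC can be sketched on a toy problem in which a level-l 'solver' carries an O(h_l) bias and coupled samples at adjacent levels share the same random input. This illustrates the identity only, not the PDE discretisation or the SMC machinery.

```python
import numpy as np

rng = np.random.default_rng(17)

# Toy "solver": evaluating f at level l carries an h_l = 2**-l bias term.
def f_level(x, level):
    h = 2.0 ** -level
    return np.sin(x) + h * np.cos(3 * x)

# E[f_L] = E[f_0] + sum_l E[f_l - f_{l-1}]; correction terms shrink with l,
# so fewer samples are needed at the expensive fine levels.
L_max = 5
n_per_level = [4000 // (2 ** l) + 100 for l in range(L_max + 1)]

est = 0.0
for l, n_l in enumerate(n_per_level):
    x = rng.normal(size=n_l)                   # the same x drives both levels
    if l == 0:
        y = f_level(x, 0)
    else:
        y = f_level(x, l) - f_level(x, l - 1)  # telescoping correction term
    est += y.mean()
print("MLMC estimate of E[f(X)]:", est)
print("single-level check      :", f_level(rng.normal(size=200000), L_max).mean())
```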
Yoo, Brian; Marin-Rimoldi, Eliseo; Mullen, Ryan Gotchy; Jusufi, Arben; Maginn, Edward J
2017-09-26
We present a newly developed Monte Carlo scheme to predict bulk surfactant concentrations and surface tensions at the air-water interface for various surfactant interfacial coverages. Since the concentration regimes of these systems of interest are typically very dilute (≪10⁻⁵ mole fraction), Monte Carlo simulations with the use of insertion/deletion moves can provide the ability to overcome finite system size limitations that often prohibit the use of modern molecular simulation techniques. In performing these simulations, we use the discrete fractional component Monte Carlo (DFCMC) method in the Gibbs ensemble framework, which allows us to separate the bulk and air-water interface into two separate boxes and efficiently swap tetraethylene glycol surfactants C10E4 between boxes. Combining this move with preferential translations, volume-biased insertions, and Wang-Landau biasing vastly enhances sampling and helps overcome the classical "insertion problem" often encountered in non-lattice Monte Carlo simulations. We demonstrate that this methodology is both consistent with the original molecular thermodynamic theory (MTT) of Blankschtein and co-workers, as well as their recently modified theory (MD/MTT), which incorporates the results of surfactant infinite dilution transfer free energies and surface tension calculations obtained from molecular dynamics simulations.
NASA Astrophysics Data System (ADS)
Orkoulas, Gerassimos; Panagiotopoulos, Athanassios Z.
1994-07-01
In this work, we investigate the liquid-vapor phase transition of the restricted primitive model of ionic fluids. We show that at the low temperatures where the phase transition occurs, the system cannot be studied by conventional molecular simulation methods because convergence to equilibrium is slow. To accelerate convergence, we propose cluster Monte Carlo moves capable of moving more than one particle at a time. We then address the issue of charged particle transfers in grand canonical and Gibbs ensemble Monte Carlo simulations, for which we propose a biased particle insertion/destruction scheme capable of sampling short interparticle distances. We compute the chemical potential for the restricted primitive model as a function of temperature and density from grand canonical Monte Carlo simulations and the phase envelope from Gibbs Monte Carlo simulations. Our calculated phase coexistence curve is in agreement with recent results of Caillol obtained on the four-dimensional hypersphere and our own earlier Gibbs ensemble simulations with single-ion transfers, with the exception of the critical temperature, which is lower in the current calculations. Our best estimates for the critical parameters are T*_c = 0.053, ρ*_c = 0.025. We conclude with possible future applications of the biased techniques developed here for phase equilibrium calculations for ionic fluids.
Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis
NASA Technical Reports Server (NTRS)
Hanson, J. M.; Beard, B. B.
2010-01-01
This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
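One of the derivations mentioned, the number of runs needed to verify a requirement, has a compact form in the zero-failure case: with n runs and no failures, P(success) >= p0 is verified at confidence 1 - alpha once p0**n <= alpha. The sketch below generalises this to a small allowed number of failures; the requirement and confidence values are illustrative, not the TP's.

```python
import math

def runs_required(p0, alpha, failures_allowed=0):
    """Smallest n such that observing <= failures_allowed failures would be
    this unlikely (prob <= alpha) if the true success rate were only p0."""
    n = failures_allowed + 1
    while True:
        tail = sum(math.comb(n, k) * (1 - p0) ** k * p0 ** (n - k)
                   for k in range(failures_allowed + 1))
        if tail <= alpha:
            return n
        n += 1

# e.g. verify a 0.997 success requirement at 90% confidence
print(runs_required(0.997, 0.10))                       # zero failures allowed
print(runs_required(0.997, 0.10, failures_allowed=1))   # one failure allowed
```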
Solid-propellant rocket motor ballistic performance variation analyses
NASA Technical Reports Server (NTRS)
Sforzini, R. H.; Foster, W. A., Jr.
1975-01-01
Results are presented of research aimed at improving the assessment of off-nominal internal ballistic performance, including tailoff and thrust imbalance, of two large solid-rocket motors (SRMs) firing in parallel. Previous analyses using the Monte Carlo technique were refined to permit evaluation of the effects of radial and circumferential propellant temperature gradients. Sample evaluations of the effect of the temperature gradients are presented. A separate theoretical investigation of the effect of strain rate on the burning rate of propellant indicates that the thermoelastic coupling may cause substantial variations in burning rate during highly transient operating conditions. The Monte Carlo approach was also modified to permit the effects on performance of variation in the characteristics between lots of propellants and other materials to be evaluated. This permits the variabilities for the total SRM population to be determined. A sample case shows, however, that the effect of these between-lot variations on thrust imbalances within pairs of SRMs is minor in comparison to the effect of the within-lot variations. The revised Monte Carlo and design analysis computer programs, along with instructions including format requirements for preparation of input data and illustrative examples, are presented.
Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M
2012-08-01
For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS (Consistent Adjoint Driven Importance Sampling). This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution, and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.
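The CADIS construction itself is compact: given an adjoint (importance) function phi-dagger, the biased source is proportional to q times phi-dagger, and the consistent weight-window centers are R/phi-dagger, so source particles are born at the center of their windows. The one-dimensional shapes below are invented placeholders for a deterministic adjoint solution.

```python
import numpy as np

# 1-D illustration: physical source q(x) and a hypothetical adjoint flux
# phi_dag(x) that peaks near the tally region at x = 10.
x = np.linspace(0.0, 10.0, 200)
dx = x[1] - x[0]
q = np.exp(-x)                        # invented source shape
phi_dag = np.exp(-0.3 * (10.0 - x))   # invented adjoint (importance) shape

R = np.sum(q * phi_dag) * dx          # estimate of the detector response
q_biased = q * phi_dag / R            # CADIS biased source density
w = R / phi_dag                       # consistent weight-window centers:
                                      # a particle born from q_biased carries
                                      # weight q/q_biased = R/phi_dag, so it
                                      # starts exactly on its window center
print(f"R = {R:.4f}")
print(f"window center near source: {w[0]:.3f}, near tally: {w[-1]:.3f}")
```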
Koral, Kenneth F.; Avram, Anca M.; Kaminski, Mark S.; Dewaraja, Yuni K.
2012-01-01
Background: For individualized treatment planning in radioimmunotherapy (RIT), correlations must be established between tracer-predicted and therapy-delivered absorbed doses. The focus of this work was to investigate this correlation for tumors. Methods: The study analyzed 57 tumors in 19 follicular lymphoma patients treated with I-131 tositumomab and imaged with SPECT/CT multiple times after tracer and therapy administrations. Instead of the typical least-squares fit to a single tumor's measured time-activity data, estimation was accomplished via a biexponential mixed model in which the curves from multiple subjects were jointly estimated. The tumor-absorbed dose estimates were determined by patient-specific Monte Carlo calculation. Results: The mixed model gave realistic tumor time-activity fits that showed the expected uptake and clearance phases even with noisy data or missing time points. Correlation between tracer and therapy tumor-residence times (r=0.98; p<0.0001) and correlation between tracer-predicted and therapy-delivered mean tumor-absorbed doses (r=0.86; p<0.0001) were very high. The predicted and delivered absorbed doses were within ±25% (or within ±75 cGy) for 80% of tumors. Conclusions: The mixed-model approach is feasible for fitting tumor time-activity data in RIT treatment planning when individual least-squares fitting is not possible due to inadequate sampling points. The good correlation between predicted and delivered tumor doses demonstrates the potential of using a pretherapy tracer study for tumor dosimetry-based treatment planning in RIT. PMID:22947086
Reboredo, Fernando A; Kim, Jeongnim
2014-02-21
A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
Parameter Accuracy in Meta-Analyses of Factor Structures
ERIC Educational Resources Information Center
Gnambs, Timo; Staufenbiel, Thomas
2016-01-01
Two new methods for the meta-analysis of factor loadings are introduced and evaluated by Monte Carlo simulations. The direct method pools each factor loading individually, whereas the indirect method synthesizes correlation matrices reproduced from factor loadings. The results of the two simulations demonstrated that the accuracy of…
A Kepler study of starspot lifetimes with respect to light-curve amplitude and spectral type
NASA Astrophysics Data System (ADS)
Giles, Helen A. C.; Collier Cameron, Andrew; Haywood, Raphaëlle D.
2017-12-01
Wide-field high-precision photometric surveys such as Kepler have produced reams of data suitable for investigating stellar magnetic activity of cooler stars. Starspot activity produces quasi-sinusoidal light curves whose phase and amplitude vary as active regions grow and decay over time. Here we investigate, first, whether there is a correlation between the size of starspots - assumed to be related to the amplitude of the sinusoid - and their decay time-scale and, second, whether any such correlation depends on the stellar effective temperature. To determine this, we computed the auto-correlation functions of the light curves of samples of stars from Kepler and fitted them with apodised periodic functions. The light-curve amplitudes, representing spot size, were measured from the root-mean-squared scatter of the normalized light curves. We used Markov chain Monte Carlo to measure the periods and decay time-scales of the light curves. The results show a correlation between the decay time of starspots and their inferred size. The decay time also depends strongly on the temperature of the star. Cooler stars have spots that last much longer, in particular for stars with longer rotational periods. This is consistent with current theories of diffusive mechanisms causing starspot decay. We also find that the Sun is not unusually quiet for its spectral type - stars with solar-type rotation periods and temperatures tend to have (comparatively) smaller starspots than stars with mid-G or later spectral types.
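As an illustrative aside (not the authors' pipeline), the core step - fitting the light-curve auto-correlation function with an apodised periodic model - can be sketched as follows; the synthetic light curve, cadence, and model form are simplified assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def acf(x):
    """Autocorrelation function of a 1-D series, normalized to ACF(0) = 1."""
    x = x - x.mean()
    c = np.correlate(x, x, mode="full")[x.size - 1:]
    return c / c[0]

def apodised_cos(tau, amp, tau_decay, period):
    """Decaying-cosine model: oscillation apodised by an exponential envelope."""
    return amp * np.exp(-tau / tau_decay) * np.cos(2 * np.pi * tau / period)

# Hypothetical spot-modulated light curve: 17-day rotation, spot signal
# decaying on a ~120-day time-scale, plus white photometric noise.
rng = np.random.default_rng(3)
t = np.arange(0.0, 400.0, 0.5)                   # days
flux = (1.0 + 0.01 * np.exp(-t / 120.0) * np.sin(2 * np.pi * t / 17.0)
        + 0.002 * rng.normal(size=t.size))

rho = acf(flux)
popt, _ = curve_fit(apodised_cos, t[:400], rho[:400], p0=(1.0, 100.0, 15.0))
print(popt)   # fitted amplitude, decay time-scale (days), period (days)
```

The decay parameter of the envelope plays the role of the starspot lifetime; the paper samples such parameters with MCMC rather than a point fit.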
An algorithm for U-Pb isotope dilution data reduction and uncertainty propagation
NASA Astrophysics Data System (ADS)
McLean, N. M.; Bowring, J. F.; Bowring, S. A.
2011-06-01
High-precision U-Pb geochronology by isotope dilution-thermal ionization mass spectrometry is integral to a variety of Earth science disciplines, but its ultimate resolving power is quantified by the uncertainties of calculated U-Pb dates. As analytical techniques have advanced, formerly small sources of uncertainty are increasingly important, and thus previous simplifications for data reduction and uncertainty propagation are no longer valid. Although notable previous efforts have treated propagation of correlated uncertainties for the U-Pb system, the equations, uncertainties, and correlations have been limited in number and subject to simplification during propagation through intermediary calculations. We derive and present a transparent U-Pb data reduction algorithm that transforms raw isotopic data and measured or assumed laboratory parameters into the isotopic ratios and dates geochronologists interpret, without making assumptions about the relative size of sample components. To propagate uncertainties and their correlations, we describe, in detail, a linear algebraic algorithm that incorporates all input uncertainties and correlations without limiting or simplifying covariance terms to propagate them through intermediate calculations. Finally, a weighted mean algorithm is presented that utilizes matrix elements from the uncertainty propagation algorithm to propagate random and systematic uncertainties for data comparison with other U-Pb labs and other geochronometers. The linear uncertainty propagation algorithms are verified with Monte Carlo simulations of several typical analyses. We propose that our algorithms be considered by the community for implementation to improve the collaborative science envisioned by the EARTHTIME initiative.
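As an illustrative aside (a minimal sketch, not the paper's algorithm), linear covariance propagation through a date equation and its Monte Carlo verification look like this; the input values and uncertainties are invented, and only two inputs are carried where the real reduction handles many correlated ones:

```python
import numpy as np

LAMBDA_238 = 1.55125e-10   # 238U decay constant (1/yr)

def date(x):
    """Pb/U date from the daughter/parent ratio r: t = ln(1 + r) / lambda."""
    r, lam = x
    return np.log1p(r) / lam

x0 = np.array([0.3, LAMBDA_238])
# Hypothetical 1-sigma uncertainties: 0.2% on the ratio, 0.1% on the decay
# constant, assumed uncorrelated here for simplicity.
Sigma = np.diag([(0.002 * x0[0]) ** 2, (0.001 * x0[1]) ** 2])

# Numerical Jacobian, then linear propagation: var(t) = J Sigma J^T
eps = x0 * 1e-6
J = np.array([(date(x0 + np.eye(2)[i] * eps[i]) - date(x0)) / eps[i]
              for i in range(2)])
sd_linear = np.sqrt(J @ Sigma @ J)

# Monte Carlo verification, mirroring the paper's validation step
draws = np.random.default_rng(0).multivariate_normal(x0, Sigma, 200_000)
sd_mc = (np.log1p(draws[:, 0]) / draws[:, 1]).std()
print(sd_linear, sd_mc)   # closely matching standard deviations
```

The agreement between the two standard deviations is the sense in which linear propagation is "verified with Monte Carlo simulations" above.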
Long-term optical flux and colour variability in quasars
NASA Astrophysics Data System (ADS)
Sukanya, N.; Stalin, C. S.; Jeyakumar, S.; Praveen, D.; Dhani, Arnab; Damle, R.
2016-02-01
We have used optical V and R band observations from the Massive Compact Halo Object (MACHO) project on a sample of 59 quasars behind the Magellanic Clouds to study their long-term optical flux and colour variations. These quasars, lying in the redshift range 0.2 < z < 2.8 and having apparent V band magnitudes between 16.6 and 20.1 mag, have observations ranging from 49 to 1353 epochs spanning over 7.5 yr, with sampling intervals between 2 and 10 days. All the quasars show variability during the observing period. The normalised excess variance (Fvar) in the V and R bands is in the range 0.2% < Fvar(V) < 1.6% and 0.1% < Fvar(R) < 1.5%, respectively. In a large fraction of the sources, Fvar is larger in the V band than in the R band. From the z-transformed discrete cross-correlation function analysis, we find that there is no lag between the V and R band variations. Adopting the Markov chain Monte Carlo (MCMC) approach, and properly taking into account the correlation between the errors in colours and magnitudes, we find that the majority of sources show a bluer-when-brighter trend, while a minor fraction of quasars show the opposite behaviour. This is similar to the results obtained from two other independent algorithms, namely the weighted linear least squares fit (FITEXY) and the bivariate correlated errors and intrinsic scatter regression (BCES). However, the ordinary least squares (OLS) fit, normally used in colour variability studies of quasars, indicates that all the quasars studied here show a bluer-when-brighter trend. It is therefore very clear that the OLS algorithm cannot be used for the study of colour variability in quasars.
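As an illustrative aside (not the authors' code), the normalised excess variance quoted above is a standard statistic: the variance in excess of measurement noise, normalised by the mean flux (cf. Vaughan et al. 2003). The light curve below is synthetic:

```python
import numpy as np

def f_var(flux, err):
    """Normalised excess variance: intrinsic scatter beyond measurement
    noise, divided by the mean flux."""
    excess = flux.var(ddof=1) - np.mean(err ** 2)
    return np.sqrt(max(excess, 0.0)) / flux.mean()

# Hypothetical quasar light curve: a slow random-walk trend plus
# photometric noise of known magnitude.
rng = np.random.default_rng(1)
true = 100.0 * (1.0 + 0.0004 * np.cumsum(rng.normal(size=500)))
err = np.full(true.size, 0.5)
obs = true + rng.normal(scale=err)

print(f_var(obs, err))   # a small fraction of the mean, as in the 0.1-1.6% range
```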
Theory of Financial Risk and Derivative Pricing
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe; Potters, Marc
2009-01-01
Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.
Theory of Financial Risk and Derivative Pricing - 2nd Edition
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe; Potters, Marc
2003-12-01
Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.
A comparison of Monte-Carlo simulations using RESTRAX and McSTAS with experiment on IN14
NASA Astrophysics Data System (ADS)
Wildes, A. R.; S̆aroun, J.; Farhi, E.; Anderson, I.; Høghøj, P.; Brochier, A.
2000-03-01
Monte-Carlo simulations of a focusing supermirror guide after the monochromator on the IN14 cold neutron three-axis spectrometer at the I.L.L. were carried out using the instrument simulation programs RESTRAX and McSTAS. The simulations were compared to experiment to check their accuracy. The flux ratios over both 100 and 1600 mm² areas at the sample position agree well, and there is very close agreement between simulation and experiment for the energy spread of the incident beam.
NASA Astrophysics Data System (ADS)
Gukelberger, Jan; Kozik, Evgeny; Hafermann, Hartmut
2017-07-01
The dual fermion approach provides a formally exact prescription for calculating properties of a correlated electron system in terms of a diagrammatic expansion around dynamical mean-field theory (DMFT). Most practical implementations, however, neglect higher-order interaction vertices beyond two-particle scattering in the dual effective action and further truncate the diagrammatic expansion in the two-particle scattering vertex to a leading-order or ladder-type approximation. In this work, we compute the dual fermion expansion for the two-dimensional Hubbard model including all diagram topologies with two-particle interactions to high orders by means of a stochastic diagrammatic Monte Carlo algorithm. We benchmark the obtained self-energy against numerically exact diagrammatic determinant Monte Carlo simulations to systematically assess convergence of the dual fermion series and the validity of these approximations. We observe that, from high temperatures down to the vicinity of the DMFT Néel transition, the dual fermion series converges very quickly to the exact solution in the whole range of Hubbard interactions considered (4 ≤ U/t ≤ 12), implying that contributions from higher-order vertices are small. As the temperature is lowered further, we observe slower series convergence, convergence to incorrect solutions, and ultimately divergence. This happens in a regime where magnetic correlations become significant. We find, however, that the self-consistent particle-hole ladder approximation yields reasonable and often even highly accurate results in this regime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sampson, Andrew; Le Yi; Williamson, Jeffrey F.
2012-02-15
Purpose: To demonstrate the potential of correlated sampling Monte Carlo (CMC) simulation to improve the calculation efficiency for permanent seed brachytherapy (PSB) implants without loss of accuracy. Methods: CMC was implemented within an in-house MC code family (PTRAN) and used to compute 3D dose distributions for two patient cases: a clinical PSB postimplant prostate CT imaging study and a simulated post-lumpectomy breast PSB implant planned on a screening dedicated breast cone-beam CT patient exam. CMC tallies the dose difference, ΔD, between highly correlated histories in homogeneous and heterogeneous geometries. The heterogeneous geometry histories were derived from photon collisions sampled in a geometrically identical but purely homogeneous medium geometry, by altering their particle weights to correct for bias. The prostate case consisted of 78 Model-6711 ¹²⁵I seeds. The breast case consisted of 87 Model-200 ¹⁰³Pd seeds embedded around a simulated lumpectomy cavity. Systematic and random errors in CMC were unfolded using low-uncertainty uncorrelated MC (UMC) as the benchmark. CMC efficiency gains, relative to UMC, were computed for all voxels, and the mean was classified in regions that received minimum doses greater than 20%, 50%, and 90% of D₉₀, as well as for various anatomical regions. Results: Systematic errors in CMC relative to UMC were less than 0.6% for 99% of the voxels and 0.04% for 100% of the voxels for the prostate and breast cases, respectively. For a 1 × 1 × 1 mm³ dose grid, efficiency gains were realized in all structures, with 38.1- and 59.8-fold average gains within the prostate and breast clinical target volumes (CTVs), respectively. Greater than 99% of the voxels within the prostate and breast CTVs experienced an efficiency gain. Additionally, it was shown that efficiency losses were confined to low-dose regions, while the largest gains were located where little difference exists between the homogeneous and heterogeneous doses. On an AMD 1090T processor, computing times of 38 and 21 s were required to achieve an average statistical uncertainty of 2% within the prostate (1 × 1 × 1 mm³) and breast (0.67 × 0.67 × 0.8 mm³) CTVs, respectively. Conclusions: CMC supports an additional average 38-60-fold improvement in efficiency relative to conventional uncorrelated MC techniques, although some voxels experience no gain or even efficiency losses. However, for the two investigated case studies, the maximum variance within clinically significant structures was always reduced (on average by a factor of 6) in the therapeutic dose range. CMC takes only seconds to produce an accurate, high-resolution, low-uncertainty dose distribution for the low-energy PSB implants investigated in this study.
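As an illustrative aside (not the PTRAN implementation, which reuses homogeneous-geometry collision sites with weight corrections), the variance-reduction principle behind a difference tally like ΔD can be shown with a toy common-random-numbers experiment; all physical values below are invented:

```python
import numpy as np

# Toy 1-D transport: a photon travels an exponential free path and a smooth
# "dose" response is scored at its interaction depth. The heterogeneous
# geometry is modeled as a perturbed attenuation coefficient. Correlated
# sampling reuses the SAME uniform deviates for both geometries and tallies
# the per-history difference Delta-D.
rng = np.random.default_rng(7)
mu_hom, mu_het = 0.10, 0.12          # 1/mm, invented values
n = 100_000
u = rng.uniform(size=n)

def dose(mu, u):
    depth = -np.log(u) / mu          # inverse-CDF free-path sampling
    return np.exp(-0.05 * depth)     # arbitrary smooth response

delta_corr = dose(mu_het, u) - dose(mu_hom, u)               # shared histories
delta_uncorr = (dose(mu_het, rng.uniform(size=n))
                - dose(mu_hom, rng.uniform(size=n)))         # independent runs

for name, d in (("correlated", delta_corr), ("uncorrelated", delta_uncorr)):
    print(name, d.mean(), d.std(ddof=1) / np.sqrt(n))
# Both estimate the same mean difference, but the correlated tally's
# standard error is dramatically smaller: the source of the efficiency gain.
```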
ERIC Educational Resources Information Center
In'nami, Yo; Koizumi, Rie
2013-01-01
The importance of sample size, although widely discussed in the literature on structural equation modeling (SEM), has not been widely recognized among applied SEM researchers. To narrow this gap, we focus on second language testing and learning studies and examine the following: (a) Is the sample size sufficient in terms of precision and power of…
Drusano, G. L.; Preston, S. L.; Gotfried, M. H.; Danziger, L. H.; Rodvold, K. A.
2002-01-01
Levofloxacin was administered orally to volunteers, randomized to doses of 500 or 750 mg, until steady state was reached. Plasma and epithelial lining fluid (ELF) samples were obtained at 4, 12, and 24 h after the final dose. All data were comodeled in a population pharmacokinetic analysis employing BigNPEM. Penetration was evaluated from the population mean parameter vector values and from the results of a 1,000-subject Monte Carlo simulation. Evaluation from the population mean values demonstrated a penetration ratio (ELF/plasma) of 1.16. The Monte Carlo simulation provided a measure of dispersion, demonstrating a mean ratio of 3.18, with a median of 1.43 and a 95% confidence interval of 0.14 to 19.1. Population analysis with Monte Carlo simulation provides the best and least-biased estimate of penetration. It also demonstrates clearly that we can expect differences in penetration between patients. This analysis did not deal with inflammation, as it was performed in volunteers. The influence of lung pathology on penetration needs to be examined. PMID:11796385
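As an illustrative aside (not the BigNPEM analysis), the logic of the 1,000-subject simulation can be sketched by drawing subjects from a joint population distribution and computing the per-subject ELF/plasma ratio; all means, spreads, and the correlation below are invented:

```python
import numpy as np

# Hypothetical joint log-normal model for (plasma AUC, ELF AUC); parameter
# values are placeholders, not the published population estimates.
rng = np.random.default_rng(0)
mean_log = np.log([48.0, 56.0])                     # mg*h/L
sd_log = np.array([0.30, 0.80])
corr = 0.5
cov = np.outer(sd_log, sd_log) * np.array([[1.0, corr], [corr, 1.0]])

log_auc = rng.multivariate_normal(mean_log, cov, size=1000)  # 1,000 subjects
ratio = np.exp(log_auc[:, 1] - log_auc[:, 0])                # ELF/plasma

print(f"mean {ratio.mean():.2f}, median {np.median(ratio):.2f}, "
      f"95% interval {np.percentile(ratio, [2.5, 97.5]).round(2)}")
# The ratio distribution is right-skewed, so the mean exceeds the median --
# the same qualitative pattern as the reported 3.18 (mean) vs 1.43 (median).
```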
Badal, Andreu; Badano, Aldo
2009-11-01
Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
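As an illustrative aside (not the authors' PENELOPE/CUDA code), the one-thread-per-history pattern that makes GPUs attractive for Monte Carlo transport can be sketched in Python via Numba's CUDA backend; this requires a CUDA-capable GPU and samples only the free-path step of a photon history:

```python
import math
import numpy as np
from numba import cuda
from numba.cuda.random import (create_xoroshiro128p_states,
                               xoroshiro128p_uniform_float32)

@cuda.jit
def free_paths(rng_states, mu, out):
    """Each GPU thread samples one photon free path (exponential law)."""
    i = cuda.grid(1)
    if i < out.shape[0]:
        u = xoroshiro128p_uniform_float32(rng_states, i)
        out[i] = -math.log(1.0 - u) / mu

n = 1_000_000
threads = 256
blocks = (n + threads - 1) // threads
states = create_xoroshiro128p_states(threads * blocks, seed=42)
out = cuda.device_array(n, dtype=np.float32)
free_paths[blocks, threads](states, np.float32(0.02), out)   # mu = 0.02/mm
print(out.copy_to_host().mean())                             # ~ 1/mu = 50 mm
```

A full transport code adds interaction sampling, voxel traversal, and scoring per thread, but the embarrassingly parallel structure is the same.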
Hunt, J G; Watchman, C J; Bolch, W E
2007-01-01
Absorbed fraction (AF) calculations to the human skeletal tissues due to alpha particles are of interest for the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D microCT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was Visual Monte Carlo (VMC). VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and marrow cavities via alternate and uniform sampling of their cumulative distribution functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques.
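As an illustrative aside (a minimal sketch, not the study's code), chord-based transport rests on inverse-CDF sampling of a measured chord-length distribution; the histogram below is invented, whereas the real distributions come from 3-D microCT images of spongiosa:

```python
import numpy as np

# Hypothetical chord-length histogram across marrow cavities (micrometres)
edges = np.array([0.0, 100.0, 200.0, 400.0, 800.0, 1600.0])  # bin edges (um)
prob = np.array([0.30, 0.35, 0.20, 0.10, 0.05])              # bin masses

cdf = np.concatenate([[0.0], np.cumsum(prob)])
rng = np.random.default_rng(5)
u = rng.uniform(size=100_000)
chords = np.interp(u, cdf, edges)   # inverse-CDF sampling, linear within bins

print(chords.mean())   # mean sampled chord length (um)
# An alpha particle's energy deposition along a chord is then limited by its
# residual range (tens of micrometres), which is why AFs to marrow depend so
# strongly on the cavity-size distribution.
```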
Dielectric response of periodic systems from quantum Monte Carlo calculations.
Umari, P; Williamson, A J; Galli, Giulia; Marzari, Nicola
2005-11-11
We present a novel approach that allows us to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization of the electric-enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation for the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wave function, allowing for accurate diffusion quantum Monte Carlo calculations where the polarization's fixed point is estimated from the average over an iterative sequence, sampled via forward walking. This approach has been validated for the case of an isolated hydrogen atom and then applied to a periodic system, to calculate the dielectric susceptibility of molecular-hydrogen chains. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations.
Sign problem and Monte Carlo calculations beyond Lefschetz thimbles
Alexandru, Andrei; Basar, Gokce; Bedaque, Paulo F.; ...
2016-05-10
We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action ("Lefschetz thimble"). We describe a family of such manifolds that interpolate between the tangent space at one critical point (where the sign problem is milder compared to the real plane but in some cases still severe) and the union of relevant thimbles (where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling). We exemplify this approach using a simple 0+1 dimensional fermion model previously used in sign-problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.
Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Cheong, R. Y.; Gabda, D.
2017-09-01
Analysis of flood trends is vital because flooding threatens lives, property, and the environment. Annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. Maximum likelihood estimation (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC performs inference on the parameters via the posterior distribution given by Bayes' theorem, and the Metropolis-Hastings algorithm is used to handle the high-dimensional state space that defeats direct Monte Carlo sampling. This approach also accounts more fully for parameter uncertainty, yielding better predictions of maximum river flow in Sabah.
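As an illustrative aside (not the study's code), a random-walk Metropolis-Hastings sampler for GEV parameters fits in a few lines; the data are synthetic, the priors and proposal scales are invented, and note that SciPy's shape convention is c = -ξ relative to the usual GEV parameterization:

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic "annual maximum flow" data
rng = np.random.default_rng(0)
data = genextreme.rvs(c=-0.1, loc=50.0, scale=10.0, size=30, random_state=rng)

def log_post(theta):
    mu, log_sigma, xi = theta
    ll = genextreme.logpdf(data, c=-xi, loc=mu, scale=np.exp(log_sigma)).sum()
    prior = -0.5 * (xi / 0.5) ** 2          # weak normal prior on the shape
    return ll + prior if np.isfinite(ll) else -np.inf

# Random-walk Metropolis-Hastings over (mu, log sigma, xi)
theta = np.array([data.mean(), np.log(data.std()), 0.0])
lp = log_post(theta)
samples = []
for it in range(20_000):
    prop = theta + rng.normal(scale=[1.0, 0.1, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject step
        theta, lp = prop, lp_prop
    if it >= 5_000:                             # discard burn-in
        samples.append(theta.copy())

print(np.array(samples).mean(axis=0))   # posterior means of (mu, log sigma, xi)
```

Return levels (e.g., the 100-year flow) then follow by pushing each posterior draw through the GEV quantile function, which is how parameter uncertainty propagates into the flood prediction.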
Large-cell Monte Carlo renormalization of irreversible growth processes
NASA Technical Reports Server (NTRS)
Nakanishi, H.; Family, F.
1985-01-01
Monte Carlo sampling is applied to a recently formulated direct-cell renormalization method for irreversible, disorderly growth processes. Large-cell Monte Carlo renormalization is carried out for various nonequilibrium problems based on the formulation dealing with relative probabilities. Specifically, the method is demonstrated by application to the 'true' self-avoiding walk and the Eden model of growing animals for d = 2, 3, and 4 and to the invasion percolation problem for d = 2 and 3. The results are asymptotically in agreement with expectations; however, unexpected complications arise, suggesting the possibility of crossovers, and in any case demonstrating the danger of using small cells alone, because of the very slow convergence as the cell size b is extrapolated to infinity. The difficulty of applying the present method to the diffusion-limited-aggregation model is commented on.
Strain induced adatom correlations
NASA Astrophysics Data System (ADS)
Kappus, Wolfgang
2012-12-01
A Born-Green-Yvon type model for adatom density correlations is combined with a model for adatom interactions mediated by the strain in elastically anisotropic substrates. The resulting nonlinear integral equation is solved numerically for coverages from zero to a limit given by stability constraints. W, Nb, Ta and Au surfaces are taken as examples to show the effects of different elastic anisotropy regions. Results of the calculation are shown in appropriate plots and discussed. A mapping to superstructures is attempted. Corresponding adatom configurations from Monte Carlo simulations are shown.
Data Analysis Techniques for Physical Scientists
NASA Astrophysics Data System (ADS)
Pruneau, Claude A.
2017-10-01
Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.
Signal-Detection Analyses of Conditional Discrimination and Delayed Matching-to-Sample Performance
ERIC Educational Resources Information Center
Alsop, Brent
2004-01-01
Quantitative analyses of stimulus control and reinforcer control in conditional discriminations and delayed matching-to-sample procedures often encounter a problem; it is not clear how to analyze data when subjects have not made errors. The present article examines two common methods for overcoming this problem. Monte Carlo simulations of…
Bootstrap Estimation of Sample Statistic Bias in Structural Equation Modeling.
ERIC Educational Resources Information Center
Thompson, Bruce; Fan, Xitao
This study empirically investigated bootstrap bias estimation in the area of structural equation modeling (SEM). Three correctly specified SEM models were used under four different sample size conditions. Monte Carlo experiments were carried out to generate the criteria against which bootstrap bias estimation should be judged. For SEM fit indices,…
ERIC Educational Resources Information Center
Fan, Xitao
This paper empirically and systematically assessed the performance of bootstrap resampling procedure as it was applied to a regression model. Parameter estimates from Monte Carlo experiments (repeated sampling from population) and bootstrap experiments (repeated resampling from one original bootstrap sample) were generated and compared. Sample…
The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models
ERIC Educational Resources Information Center
Schoeneberger, Jason A.
2016-01-01
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Decomposition of Some Well-Known Variance Reduction Techniques. Revision.
1985-05-01
"...use a family of transformations to convert given samples into samples conditioned on a given characteristic (p. 4)." Dubi and Horowitz (1979), Granovsky ... "Antithetic Variates Revisited," Commun. ACM 26, 11, 964-971. Granovsky, B.L. (1981), "Optimal Formulae of the Conditional Monte Carlo," SIAM J. Alg
Statistical power as a function of Cronbach alpha of instrument questionnaire items.
Heo, Moonseong; Kim, Namhee; Faith, Myles S
2015-10-14
In countless clinical trials, measurements of outcomes rely on instrument questionnaire items, which, however, often suffer from measurement error problems that in turn affect the statistical power of study designs. The Cronbach alpha or coefficient alpha, here denoted by C(α), can be used as a measure of internal consistency of parallel instrument items that are developed to measure a target unidimensional outcome construct. The scale score for the target construct is often represented by the sum of the item scores. However, power functions based on C(α) have been lacking for various study designs. We formulate a statistical model for parallel items to derive power functions as a function of C(α) under several study designs. To this end, we assume a fixed true-score variance, as opposed to the usual fixed total variance. That assumption is critical and practically relevant for showing that smaller measurement errors are associated with higher inter-item correlations, and thus that greater C(α) is associated with greater statistical power. We compare the derived theoretical statistical power with empirical power obtained through Monte Carlo simulations for the following comparisons: one-sample comparison of pre- and post-treatment mean differences, two-sample comparison of pre-post mean differences between groups, and two-sample comparison of mean differences between groups. It is shown that C(α) is the same as a test-retest correlation of the scale scores of parallel items, which enables testing the significance of C(α). Closed-form power functions and sample size determination formulas are derived in terms of C(α) for all of the aforementioned comparisons. Power functions are shown to be increasing functions of C(α), regardless of the comparison of interest. The derived power functions are well validated by simulation studies showing that the magnitudes of the theoretical power are virtually identical to those of the empirical power. Regardless of research design or setting, in order to increase statistical power, the development and use of instruments with greater C(α), or equivalently with greater inter-item correlations, is crucial for trials that intend to use questionnaire items for measuring research outcomes. Further development of the power functions for binary or ordinal item scores and under more general item correlation structures reflecting real-world situations would be a valuable future study.
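As an illustrative aside (a minimal sketch, not the paper's derivation), the link between C(α) and power for the two-sample comparison can be checked by simulation under the parallel-item model with fixed true-score variance; the item count, sample sizes, and effect size below are invented:

```python
import numpy as np
from scipy import stats

def empirical_power(rho, k=5, n=50, delta=0.5, n_sim=2000, seed=1):
    """Two-sample t-test power on sum scores of k parallel items.
    True-score variance is fixed at 1; the inter-item correlation rho is
    varied through the error variance, mirroring the paper's framing."""
    rng = np.random.default_rng(seed)
    sig_e = np.sqrt(1.0 / rho - 1.0)          # rho = 1 / (1 + sigma_e^2)
    rejections = 0
    for _ in range(n_sim):
        def scores(shift):
            T = rng.normal(shift, 1.0, size=(n, 1))     # true scores
            E = rng.normal(0.0, sig_e, size=(n, k))     # item errors
            return (T + E).sum(axis=1)                  # scale score
        _, p = stats.ttest_ind(scores(delta), scores(0.0))
        rejections += p < 0.05
    c_alpha = k * rho / (1 + (k - 1) * rho)   # Cronbach alpha, parallel items
    return c_alpha, rejections / n_sim

for rho in (0.2, 0.4, 0.6):
    print("alpha=%.2f  power=%.3f" % empirical_power(rho))
```

Running the sketch shows power rising monotonically with C(α), the paper's central claim.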
Changes of arthropod diversity across an altitudinal ecoregional zonation in Northwestern Argentina
González-Reyes, Andrea X.; Rodriguez-Artigas, Sandra M.
2017-01-01
This study examined arthropod community patterns over an altitudinal ecoregional zonation extending through three ecoregions (Yungas, Monte de Sierras y Bolsones, and Puna) and two ecotones (Yungas-Monte and Prepuna) of Northwestern Argentina (an altitudinal range of 2,500 m), and evaluated the abiotic and biotic factors and the geographical distance that could influence them. Pitfall-trap and suction samples were taken seasonally at 15 sampling sites (1,500–4,000 m a.s.l.) during one year. In addition to climatic variables, several soil and vegetation variables were measured in the field. Species richness values between ecoregions and ecotones and by sampling site were compared statistically and by interpolation–extrapolation analysis based on individuals at the same sample-coverage level. Effects of predictor variables and the similarity of arthropods were shown using non-metric multidimensional scaling, and the resulting groups were evaluated using a multi-response permutation procedure. Polynomial regression was used to evaluate the relationship of altitude with total species richness and with the richness of hyperdiverse/abundant higher taxa, and of the latter taxa with each predictor variable. The species richness pattern displayed a decrease in diversity with increasing elevation in the wet lower part of the zonation (Yungas) down to the Monte, and a unimodal pattern of diversity in the dry upper part (Monte, Puna). Each ecoregion and ecotonal zone showed a particular species richness and assemblage of arthropods, but the ecotones displayed a high percentage of species shared with the adjacent ecoregions. The arthropod elevational pattern and the changes in the assemblages were explained by the environmental gradient (especially the climate) in addition to a geographic gradient (distance decay of similarity), demonstrating that species turnover is important for explaining the beta diversity along the elevational gradient. This suggests that patterns of diversity and distribution of arthropods are regulated by the dissimilarity of ecoregional environments, which establish a wide range of geographic and environmental barriers, coupled with a limitation on species dispersal. Therefore, the arthropods of higher taxa respond differently to the altitudinal ecoregional zonation. PMID:29230361