Adaptive importance sampling Monte Carlo simulation of rare transition events.
de Koning, Maurice; Cai, Wei; Sadigh, Babak; Oppelstrup, Tomas; Kalos, Malvin H; Bulatov, Vasily V
2005-02-15
We develop a general theoretical framework for the recently proposed importance sampling method for enhancing the efficiency of rare-event simulations [W. Cai, M. H. Kalos, M. de Koning, and V. V. Bulatov, Phys. Rev. E 66, 046703 (2002)], and discuss practical aspects of its application. We define the success/fail ensemble of all possible successful and failed transition paths of any duration and demonstrate that in this formulation the rare-event problem can be interpreted as a "hit-or-miss" Monte Carlo quadrature calculation of a path integral. The fact that the integrand contributes significantly only for a very tiny fraction of all possible paths then naturally leads to a "standard" importance sampling approach to Monte Carlo (MC) quadrature and the existence of an optimal importance function. In addition to showing that the approach is general and expected to be applicable beyond the realm of Markovian path simulations, for which the method was originally proposed, the formulation reveals a conceptual analogy with the variational MC (VMC) method. The search for the optimal importance function in the former is analogous to finding the ground-state wave function in the latter. In two model problems we discuss practical aspects of finding a suitable approximation for the optimal importance function. For this purpose we follow the strategy that is typically adopted in VMC calculations: the selection of a trial functional form for the optimal importance function, followed by the optimization of its adjustable parameters. The latter is accomplished by means of an adaptive optimization procedure based on a combination of steepest-descent and genetic algorithms.
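The core idea above — recasting a rare-event probability as a quadrature to which only a tiny region contributes, then sampling from a density concentrated on that region — can be sketched in a few lines. This toy example is not the authors' path-sampling method; it estimates P(X > 4) for a standard normal with an exponentially tilted proposal, a case where naive hit-or-miss sampling almost never scores:

```python
import math, random

def rare_event_prob(threshold=4.0, n=50_000, seed=1):
    """Estimate P(X > threshold) for X ~ N(0,1) two ways:
    naive hit-or-miss Monte Carlo, and importance sampling with
    the proposal shifted to N(threshold, 1)."""
    rng = random.Random(seed)

    # Naive hit-or-miss: almost every sample misses the rare region.
    naive = sum(rng.gauss(0.0, 1.0) > threshold for _ in range(n)) / n

    # Importance sampling: draw from N(threshold, 1) and reweight by
    # the likelihood ratio phi(x)/phi(x - t) = exp(-t*x + t^2/2).
    rng = random.Random(seed)
    t = threshold
    total = 0.0
    for _ in range(n):
        x = rng.gauss(t, 1.0)
        if x > t:
            total += math.exp(-t * x + t * t / 2.0)
    return naive, total / n

naive, is_est = rare_event_prob()
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # P(N(0,1) > 4), about 3.2e-5
```

Shifting the proposal mean to the threshold is the classic exponential-tilting trick; the likelihood-ratio weight keeps the estimator unbiased while concentrating samples where the integrand is nonzero.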
NASA Astrophysics Data System (ADS)
Liang, Faming; Cheon, Sooyoung
2009-12-01
The problem of simulating from distributions with intractable normalizing constants has received much attention in the recent literature. In this paper, we propose a new MCMC algorithm, the so-called Monte Carlo dynamically weighted importance sampler, for tackling this problem. The new algorithm is illustrated with spatial autologistic models. The novelty of our algorithm is that it allows for the use of Monte Carlo estimates in MCMC simulations, while still leaving the target distribution invariant under the criterion of dynamically weighted importance sampling. Unlike the auxiliary variable MCMC algorithms, the new algorithm removes the need for perfect sampling, and thus can be applied to a wide range of problems for which perfect sampling is not available or is very expensive. The new algorithm can also be used for simulating from the incomplete posterior distribution for the missing-data problem.
Improved importance sampling for Monte Carlo simulation of time-domain optical coherence tomography
Lima, Ivan T.; Kalra, Anshul; Sherif, Sherif S.
2011-01-01
We developed an importance sampling based method that significantly speeds up the calculation of the diffusive reflectance due to ballistic and to quasi-ballistic components of photons scattered in turbid media: Class I diffusive reflectance. These components of scattered photons make up the signal in optical coherence tomography (OCT) imaging. We show that the use of this method reduces the computation time of this diffusive reflectance in time-domain OCT by up to three orders of magnitude when compared with standard Monte Carlo simulation. Our method does not produce a systematic bias in the statistical result that is typically observed in existing methods to speed up Monte Carlo simulations of light transport in tissue. This fast Monte Carlo calculation of the Class I diffusive reflectance can be used as a tool to further study the physical process governing OCT signals, e.g., obtain the statistics of the depth-scan, including the effects of multiple scattering of light, in OCT. This is an important prerequisite to future research to increase penetration depth and to improve image extraction in OCT. PMID:21559120
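The essence of importance sampling in photon transport is to bias each random scattering decision toward directions that contribute to the detected signal, and to carry a weight that undoes the bias. The following is a toy one-dimensional angular-biasing sketch, not the authors' OCT code; the exponential biasing density and the parameter k are illustrative choices:

```python
import math, random

def quasi_ballistic_fraction(k=6.0, n=20_000, seed=7):
    """Toy angular biasing: the true scattering law is isotropic
    (cos(theta) uniform on [-1, 1]), and we estimate the tiny
    probability of near-forward scattering, cos(theta) > 0.995.
    We sample from the biased density q(mu) = k*exp(k*mu)/(e^k - e^-k),
    which favors forward directions, and correct with weights p/q."""
    rng = random.Random(seed)
    norm = math.exp(k) - math.exp(-k)
    total = 0.0
    for _ in range(n):
        u = rng.random()
        mu = math.log(math.exp(-k) + u * norm) / k  # inverse-CDF sample of q
        if mu > 0.995:
            q = k * math.exp(k * mu) / norm
            total += 0.5 / q                        # weight p(mu)/q(mu), p = 1/2
    return total / n

est = quasi_ballistic_fraction()
exact = (1.0 - 0.995) / 2.0   # P(uniform cosine > 0.995) = 0.0025
```

Because the weight exactly compensates the biased sampling density, the estimate remains unbiased — the same property the abstract emphasizes for the full OCT simulation.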
Monte Carlo importance sampling for the MCNP{trademark} general source
Lichtenstein, H.
1996-01-09
Research was performed to develop an importance sampling procedure for a radiation source. The procedure was developed for the MCNP radiation transport code, but the approach itself is general and can be adapted to other Monte Carlo codes. The procedure, as adapted to MCNP, relies entirely on existing MCNP capabilities. It has been tested for very complex descriptions of a general source, in the context of the design of spent-reactor-fuel storage casks. Dramatic improvements in calculation efficiency have been observed in some test cases. In addition, the procedure has been found to accelerate convergence to acceptable levels, as well as to quickly identify user-specified variance-reduction choices in the transport that cause unstable convergence.
NASA Astrophysics Data System (ADS)
Lima, Ivan T., Jr.; Kalra, Anshul; Hernández-Figueroa, Hugo E.; Sherif, Sherif S.
2012-03-01
Computer simulations of light transport in multi-layered turbid media are an effective way to theoretically investigate light transport in tissue, which can be applied to the analysis, design and optimization of optical coherence tomography (OCT) systems. We present a computationally efficient method to calculate the diffuse reflectance due to ballistic and quasi-ballistic components of photons scattered in turbid media, which represents the signal in optical coherence tomography systems. Our importance sampling based Monte Carlo method enables the calculation of the OCT signal with less than one hundredth of the computational time required by the conventional Monte Carlo method. It also does not produce a systematic bias in the statistical result that is typically observed in existing methods to speed up Monte Carlo simulations of light transport in tissue. This method can be used to assess and optimize the performance of existing OCT systems, and it can also be used to design novel OCT systems.
Periyasamy, Vijitha; Pramanik, Manojit
2016-04-10
Monte Carlo simulation for light propagation in biological tissue is widely used to study light-tissue interaction. Simulation for optical coherence tomography (OCT) studies requires handling of embedded objects of various shapes. In this work, time-domain OCT simulations for multilayered tissue with embedded objects (such as sphere, cylinder, ellipsoid, and cuboid) were performed. Improved importance sampling (IS) was implemented in the proposed OCT simulation for faster speed. At first, IS was validated against the standard and angular-biased Monte Carlo methods for OCT. Both class I and class II photons were in agreement in all three methods. However, the IS method gave more than tenfold improvement in simulation time. Next, B-scan images were obtained for four types of embedded objects. All four shapes are clearly visible in the B-scan OCT images. With the improved IS, B-scan OCT images of embedded objects can be obtained in reasonable simulation time on a standard desktop computer. The user-friendly, C-based Monte Carlo simulation for tissue layers with embedded objects for OCT (MCEO-OCT) will be very useful for time-domain OCT simulations in many biological applications.
NASA Astrophysics Data System (ADS)
Rafiee, Mohammad; Barrau, Axel; Bayen, Alexandre M.
2013-06-01
This article investigates the performance of Monte Carlo-based estimation methods for estimation of flow state in large-scale open channel networks. After constructing a state space model of the flow based on the Saint-Venant equations, we implement the optimal sampling importance resampling filter to perform state estimation in a case in which measurements are available at every time step. Considering a case in which measurements become available intermittently, a random-map implementation of the implicit particle filter is applied to estimate the state trajectory in the interval between the measurements. Finally, some heuristics are proposed, which are shown to improve the estimation results and lower the computational cost. In the first heuristic, considering the case in which measurements are available at every time step, we apply the implicit particle filter over time intervals of a desired size while incorporating all the available measurements over the corresponding time interval. As a second heuristic method, we introduce a maximum a posteriori (MAP) method, which does not require sampling. It will be seen, through implementation, that the MAP method provides more accurate results in the case of our application while having a smaller computational cost. All estimation methods are tested on a network of 19 tidally forced subchannels and 1 reservoir, Clifton Court Forebay, in the Sacramento-San Joaquin Delta in California, and numerical results are presented.
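The sampling importance resampling filter referred to above reduces, in its simplest bootstrap form, to a propagate-weight-resample loop. A toy one-dimensional sketch follows (a random-walk state observed in Gaussian noise, not the Saint-Venant flow model; all noise levels are illustrative):

```python
import math, random

def sir_filter(T=30, n_particles=500, seed=3):
    """Minimal sampling-importance-resampling (bootstrap) particle
    filter for a 1D random-walk state observed in Gaussian noise."""
    rng = random.Random(seed)
    q_std, r_std = 0.5, 1.0                       # process / observation noise

    # Simulate a ground-truth trajectory and noisy observations.
    truth, obs, x = [], [], 0.0
    for _ in range(T):
        x += rng.gauss(0.0, q_std)
        truth.append(x)
        obs.append(x + rng.gauss(0.0, r_std))

    particles = [0.0] * n_particles
    estimates = []
    for y in obs:
        # Propagate through the dynamics (the importance proposal).
        particles = [p + rng.gauss(0.0, q_std) for p in particles]
        # Weight each particle by the observation likelihood.
        weights = [math.exp(-0.5 * ((y - p) / r_std) ** 2) for p in particles]
        s = sum(weights)
        estimates.append(sum(w * p for w, p in zip(weights, particles)) / s)
        # Resample proportionally to the weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return sum(abs(e - t) for e, t in zip(estimates, truth)) / T

mean_abs_error = sir_filter()
```

Resampling at every step is the simplest choice; the "optimal" variant in the abstract instead draws from a proposal that conditions on the incoming measurement, reducing weight degeneracy.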
DMATIS: Dark Matter ATtenuation Importance Sampling
NASA Astrophysics Data System (ADS)
Mahdawi, Mohammad Shafi; Farrar, Glennys R.
2017-05-01
DMATIS (Dark Matter ATtenuation Importance Sampling) calculates the trajectories of DM particles that propagate in the Earth's crust and the lead shield to reach the DAMIC detector using an importance sampling Monte-Carlo simulation. A detailed Monte-Carlo simulation avoids the deficiencies of the SGED/KS method that uses a mean energy loss description to calculate the lower bound on the DM-proton cross section. The code implementing the importance sampling technique makes the brute-force Monte-Carlo simulation of moderately strongly interacting DM with nucleons computationally feasible. DMATIS is written in Python 3 and MATHEMATICA.
CosmoPMC: Cosmology sampling with Population Monte Carlo
NASA Astrophysics Data System (ADS)
Kilbinger, Martin; Benabed, Karim; Cappé, Olivier; Coupon, Jean; Cardoso, Jean-François; Fort, Gersende; McCracken, Henry Joy; Prunet, Simon; Robert, Christian P.; Wraith, Darren
2012-12-01
CosmoPMC is a Monte Carlo sampling code to explore the likelihood of various cosmological probes. The sampling engine, implemented with the package pmclib, uses Population Monte Carlo (PMC), a novel technique to sample from the posterior: an adaptive importance sampling method which iteratively improves the proposal to approximate the posterior. The code has been introduced, tested and applied to various cosmology data sets.
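The PMC iteration — sample from the current proposal, importance-weight against the target, refit the proposal from the weighted sample — can be illustrated with a single one-dimensional Gaussian proposal. The target and the moment-matching update here are deliberate simplifications of what pmclib does with mixture proposals:

```python
import random, math

def pmc(iters=5, n=5000, seed=11):
    """Minimal Population Monte Carlo sketch: a Gaussian proposal is
    iteratively re-fitted to importance-weighted samples so that it
    approaches the (unnormalized) target exp(-(x-3)^2 / (2*0.25))."""
    rng = random.Random(seed)
    m, s = 0.0, 2.0                                  # initial proposal N(m, s^2)
    for _ in range(iters):
        xs = [rng.gauss(m, s) for _ in range(n)]
        ws = []
        for x in xs:
            log_t = -(x - 3.0) ** 2 / (2 * 0.25)     # unnormalized target
            log_q = -(x - m) ** 2 / (2 * s * s) - math.log(s)
            ws.append(math.exp(log_t - log_q))       # importance weight
        tot = sum(ws)
        m = sum(w * x for w, x in zip(ws, xs)) / tot  # weighted moment update
        var = sum(w * (x - m) ** 2 for w, x in zip(ws, xs)) / tot
        s = math.sqrt(var)
    return m, s

m, s = pmc()   # proposal should approach the target's mean 3 and std 0.5
```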
Importance sampling : promises and limitations.
West, Nicholas J.; Swiler, Laura Painton
2010-04-01
Importance sampling is an unbiased sampling method used to sample random variables from different densities than originally defined. These importance sampling densities are constructed to pick 'important' values of input random variables to improve the estimation of a statistical response of interest, such as a mean or probability of failure. Conceptually, importance sampling is very attractive: for example one wants to generate more samples in a failure region when estimating failure probabilities. In practice, however, importance sampling can be challenging to implement efficiently, especially in a general framework that will allow solutions for many classes of problems. We are interested in the promises and limitations of importance sampling as applied to computationally expensive finite element simulations which are treated as 'black-box' codes. In this paper, we present a customized importance sampler that is meant to be used after an initial set of Latin Hypercube samples has been taken, to help refine a failure probability estimate. The importance sampling densities are constructed based on kernel density estimators. We examine importance sampling with respect to two main questions: is importance sampling efficient and accurate for situations where we can only afford small numbers of samples? And does importance sampling require the use of surrogate methods to generate a sufficient number of samples so that the importance sampling process does increase the accuracy of the failure probability estimate? We present various case studies to address these questions.
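A minimal sketch of the two-stage scheme described above, with a cheap analytic limit state standing in for the black-box finite element code, a crude first sampling phase in place of Latin Hypercube sampling, and a Gaussian kernel density built on the failure points found in that phase (the bandwidth and sample counts are arbitrary choices):

```python
import math, random

def failure_prob(seed=5):
    """Two-stage sketch: crude sampling locates failure points of a
    limit state (failure when x > 2.5, x ~ N(0,1)), then a kernel
    density proposal centered on those points drives an importance
    sampling refinement of the failure probability."""
    rng = random.Random(seed)
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

    # Stage 1: crude sampling to find failure points.
    fails = [x for x in (rng.gauss(0, 1) for _ in range(3000)) if x > 2.5]

    # Stage 2: KDE mixture on the failure points as the IS density.
    h = 1.0                                          # kernel bandwidth
    mix = lambda x: sum(phi((x - c) / h) for c in fails) / (len(fails) * h)

    n, total = 20_000, 0.0
    for _ in range(n):
        x = rng.gauss(rng.choice(fails), h)          # draw from the KDE mixture
        if x > 2.5:                                  # indicator of failure
            total += phi(x) / mix(x)                 # importance weight
    return total / n

p_hat = failure_prob()
exact = 0.5 * math.erfc(2.5 / math.sqrt(2))          # P(N(0,1) > 2.5)
```

The proposal now places most samples near the failure region, so far fewer evaluations of the (in practice expensive) limit state are wasted than under crude sampling.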
Respondent-driven sampling as Markov chain Monte Carlo
Goel, Sharad; Salganik, Matthew J.
2013-01-01
Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present respondent-driven sampling as Markov chain Monte Carlo (MCMC) importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating respondent-driven sampling studies. PMID:19572381
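The MCMC importance sampling view can be made concrete: treat recruitment as a random walk on the social network, whose stationary distribution is proportional to degree, and reweight each visited member by 1/degree to recover a population quantity. The 12-node network and infection pattern below are hypothetical, not data from the paper:

```python
import random

def rds_estimate(steps=50_000, seed=2):
    """RDS as a random walk on a social network: the walk's stationary
    distribution is proportional to degree, so reweighting each visited
    node by 1/degree (importance sampling) recovers the prevalence."""
    # Toy network: a 12-node ring plus one chord; nodes 0-3 'infected'.
    graph = {i: [(i - 1) % 12, (i + 1) % 12] for i in range(12)}
    graph[0].append(6); graph[6].append(0)
    infected = {0, 1, 2, 3}                      # true prevalence 4/12 = 1/3

    rng = random.Random(seed)
    node, num, den = 5, 0.0, 0.0
    for _ in range(steps):
        node = rng.choice(graph[node])           # one recruitment step
        w = 1.0 / len(graph[node])               # inverse-degree weight
        num += w * (node in infected)
        den += w
    return num / den

prevalence = rds_estimate()
```

In a bottlenecked network the walk would cross between communities rarely, inflating the variance of exactly this estimator — the effect the paper analyzes.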
A pure-sampling quantum Monte Carlo algorithm
Ospadov, Egor; Rothstein, Stuart M.
2015-01-14
The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof of principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems.
Annealed Importance Sampling Reversible Jump MCMC algorithms
Karagiannis, Georgios; Andrieu, Christophe
2013-03-20
It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms were proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise of routinely tackling transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see, the algorithm can be understood as an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.
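The annealed importance sampling ingredient of aisRJ can be shown in isolation (fixed dimension, no reversible jump): anneal from a tractable density to the target through geometric bridges, accumulating an importance weight whose mean estimates the ratio of normalizing constants. The densities, schedule, and move sizes below are illustrative choices:

```python
import math, random

def ais_ratio(n_runs=1500, n_beta=50, seed=4):
    """Annealed importance sampling sketch: anneal from f0 = exp(-x^2/2)
    to f1 = exp(-(x-3)^2/(2*0.25)) through geometric bridges
    f_beta = f0^(1-beta) * f1^beta, accumulating log-weights.
    The mean weight estimates Z1/Z0, which is exactly 0.5 here."""
    rng = random.Random(seed)
    log_f = lambda x, b: (1 - b) * (-0.5 * x * x) + b * (-(x - 3.0) ** 2 / 0.5)
    betas = [i / n_beta for i in range(n_beta + 1)]

    total = 0.0
    for _ in range(n_runs):
        x, log_w = rng.gauss(0.0, 1.0), 0.0          # exact draw from p0
        for b_prev, b in zip(betas, betas[1:]):
            log_w += log_f(x, b) - log_f(x, b_prev)  # importance increment
            for _ in range(2):                       # Metropolis moves at f_b
                y = x + rng.gauss(0.0, 0.5)
                if math.log(rng.random()) < log_f(y, b) - log_f(x, b):
                    x = y
        total += math.exp(log_w)
    return total / n_runs

z_ratio = ais_ratio()   # Z1/Z0 = (0.5 * sqrt(2*pi)) / sqrt(2*pi) = 0.5
```

The estimator is unbiased for any number of intermediate distributions; finer annealing schedules only reduce its variance — the same "exact approximation" property the abstract exploits for model probabilities.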
Calculation of Monte Carlo importance functions for use in nuclear-well logging calculations
Soran, P.D.; McKeon, D.C.; Booth, T.E. (Schlumberger Well Services, Houston, TX; Los Alamos National Lab., NM)
1989-07-01
Importance sampling is essential to the timely solution of Monte Carlo nuclear-logging computer simulations. Achieving minimum variance (maximum precision) of a response in minimum computation time is one criterion for the choice of an importance function. Various methods for calculating importance functions will be presented, new methods investigated, and comparisons with porosity and density tools will be shown.
Honest Importance Sampling with Multiple Markov Chains.
Tan, Aixin; Doss, Hani; Hobert, James P
2015-01-01
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk , are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in
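The basic construction described above — run a Harris ergodic chain targeting π1, then form a self-normalized importance sampling estimator for an expectation under a different density π — looks like this for a single chain (the regenerative standard-error machinery of the paper is omitted, and the two Gaussian densities are illustrative):

```python
import math, random

def reweighted_mean(n=40_000, seed=8):
    """A Metropolis chain targets pi1 = N(0,1); its samples are
    importance-reweighted by pi/pi1 to estimate the mean under a
    second density pi = N(0.5, 1)."""
    rng = random.Random(seed)
    x, num, den = 0.0, 0.0, 0.0
    for i in range(n):
        y = x + rng.gauss(0.0, 1.0)                  # random-walk proposal
        if math.log(rng.random()) < 0.5 * (x * x - y * y):
            x = y                                    # Metropolis accept for N(0,1)
        if i >= 2000:                                # discard burn-in
            w = math.exp(0.5 * x - 0.125)            # pi(x)/pi1(x)
            num += w * x
            den += w
    return num / den

mean_under_pi = reweighted_mean()   # E_pi[X] = 0.5
```

Because the weights only need the ratio of the two densities, unknown normalizing constants cancel — which is what makes this useful for comparing posteriors under different priors.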
Cool walking: a new Markov chain Monte Carlo sampling method.
Brown, Scott; Head-Gordon, Teresa
2003-01-15
Effective relaxation processes for difficult systems like proteins or spin glasses require special simulation techniques that permit barrier crossing to ensure ergodic sampling. Numerous adaptations of the venerable Metropolis Monte Carlo (MMC) algorithm have been proposed to improve its sampling efficiency, including various hybrid Monte Carlo (HMC) schemes, and methods designed specifically for overcoming quasi-ergodicity problems such as Jump Walking (J-Walking), Smart Walking (S-Walking), Smart Darting, and Parallel Tempering. We present an alternative to these approaches that we call Cool Walking, or C-Walking. In C-Walking two Markov chains are propagated in tandem, one at a high (ergodic) temperature and the other at a low temperature. Nonlocal trial moves for the low temperature walker are generated by first sampling from the high-temperature distribution, then performing a statistical quenching process on the sampled configuration to generate a C-Walking jump move. C-Walking needs only one high-temperature walker, satisfies detailed balance, and offers the important practical advantage that the high and low-temperature walkers can be run in tandem with minimal degradation of sampling due to the presence of correlations. To make the C-Walking approach more suitable to real problems we decrease the required number of cooling steps by attempting to jump at intermediate temperatures during cooling. We further reduce the number of cooling steps by utilizing "windows" of states when jumping, which improves acceptance ratios and lowers the average number of cooling steps. We present C-Walking results with comparisons to J-Walking, S-Walking, Smart Darting, and Parallel Tempering on a one-dimensional rugged potential energy surface in which the exact normalized probability distribution is known. C-Walking shows superior sampling as judged by two ergodic measures.
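C-Walking's two-temperature tandem is closely related to Parallel Tempering, one of the comparison methods named above. A minimal two-walker tempering sketch on a one-dimensional double well shows why the high-temperature walker matters (the potential, temperatures, and step sizes are arbitrary choices, not taken from the paper):

```python
import math, random

def parallel_tempering(steps=50_000, seed=6):
    """Minimal two-temperature parallel tempering on the double well
    U(x) = (x^2 - 1)^2: a cold and a hot Metropolis walker run in
    tandem, periodically attempting configuration swaps so that the
    cold walker can cross the barrier at x = 0."""
    rng = random.Random(seed)
    U = lambda x: (x * x - 1.0) ** 2
    beta_cold, beta_hot = 5.0, 1.5

    def mh_step(x, beta):
        y = x + rng.gauss(0.0, 0.4)
        return y if math.log(rng.random()) < beta * (U(x) - U(y)) else x

    x_cold = x_hot = 1.0
    in_right_well = 0
    for i in range(steps):
        x_cold = mh_step(x_cold, beta_cold)
        x_hot = mh_step(x_hot, beta_hot)
        if i % 10 == 0:
            # Swap acceptance: min(1, exp((b_cold - b_hot)(U_cold - U_hot)))
            d = (beta_cold - beta_hot) * (U(x_cold) - U(x_hot))
            if math.log(rng.random()) < d:
                x_cold, x_hot = x_hot, x_cold
        in_right_well += x_cold > 0.0
    return in_right_well / steps

frac_right = parallel_tempering()   # near 0.5 by symmetry of the two wells
```

Without swaps, the cold walker would remain trapped in its starting well for long stretches; the swap move plays the role that the quenched jump move plays in C-Walking.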
Neutrino oscillation parameter sampling with MonteCUBES
NASA Astrophysics Data System (ADS)
Blennow, Mattias; Fernandez-Martinez, Enrique
2010-01-01
We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: The first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling.
Program summary
Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator)
Catalogue identifier: AEFJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public Licence
No. of lines in distributed program, including test data, etc.: 69 634
No. of bytes in distributed program, including test data, etc.: 3 980 776
Distribution format: tar.gz
Programming language: C
Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed
Operating system: 32 bit and 64 bit Linux
RAM: Typically a few MBs
Classification: 11.1
External routines: GLoBES [1,2] and routines/libraries used by GLoBES
Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439
Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, these new physics imply high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, such as those
Annealed Importance Sampling for Neural Mass Models
Penny, Will; Sengupta, Biswa
2016-01-01
Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected together into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC) which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters and we find evidence of non-Gaussianity in their posterior distribution. PMID:26942606
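The Langevin Monte Carlo proposals used inside AIS above drift along the gradient of the log-density and are corrected by a Metropolis step (MALA). A one-dimensional sketch with a standard normal target, not a neural mass model, and an arbitrary step size:

```python
import math, random

def mala(n=20_000, eps=0.2, seed=9):
    """Metropolis-adjusted Langevin sketch: proposals drift along the
    gradient of log p (here p = N(0,1), so grad log p(x) = -x), and
    a Metropolis correction keeps the target distribution exact."""
    rng = random.Random(seed)
    log_p = lambda x: -0.5 * x * x
    grad = lambda x: -x

    def log_q(to, frm):                              # N(frm + eps*grad(frm), 2*eps)
        mu = frm + eps * grad(frm)
        return -(to - mu) ** 2 / (4 * eps)

    x, xs = 0.0, []
    for _ in range(n):
        y = x + eps * grad(x) + math.sqrt(2 * eps) * rng.gauss(0.0, 1.0)
        a = log_p(y) - log_p(x) + log_q(x, y) - log_q(y, x)
        if math.log(rng.random()) < a:
            x = y
        xs.append(x)
    m = sum(xs) / n
    v = sum((t - m) ** 2 for t in xs) / n
    return m, v

sample_mean, sample_var = mala()   # target moments: mean 0, variance 1
```

The gradient drift makes proposals track the local shape of the density, which is what lets AIS explore the neural-mass posterior more efficiently than blind random-walk moves.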
Adaptive Importance Sampling for Control and Inference
NASA Astrophysics Data System (ADS)
Kappen, H. J.; Ruiz, H. C.
2016-03-01
Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers, and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows one to learn feedback controllers using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.
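PICE builds on the cross-entropy method; stripped of the control setting, the cross-entropy iteration is simply "sample, keep the elite, refit the sampler". A scalar sketch with an illustrative quadratic cost (this is the generic CE method, not the PICE algorithm itself):

```python
import random, statistics

def cross_entropy_min(iters=20, n=200, elite=20, seed=10):
    """Cross-entropy method sketch: a Gaussian sampling distribution is
    repeatedly re-fitted to the lowest-cost ('elite') samples, which
    concentrates it on the minimizer of the cost (x - 2)^2."""
    rng = random.Random(seed)
    cost = lambda x: (x - 2.0) ** 2
    m, s = 0.0, 3.0                                  # initial sampler N(m, s^2)
    for _ in range(iters):
        xs = sorted((rng.gauss(m, s) for _ in range(n)), key=cost)[:elite]
        m = statistics.fmean(xs)                     # refit to the elite set
        s = statistics.stdev(xs) + 1e-3              # keep a little exploration
    return m

minimizer = cross_entropy_min()
```

In PICE the Gaussian is replaced by a parametrised feedback controller and the elite refit by a gradient step, but the underlying importance-sampling logic is the same.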
Monte Carlo Sampling of Negative-temperature Plasma States
John A. Krommes; Sharadini Rath
2002-07-19
A Monte Carlo procedure is used to generate N-particle configurations compatible with two-temperature canonical equilibria in two dimensions, with particular attention to nonlinear plasma gyrokinetics. An unusual feature of the problem is the importance of a nontrivial probability density function R0(Φ), the probability of realizing a set Φ of Fourier amplitudes associated with an ensemble of uniformly distributed, independent particles. This quantity arises because the equilibrium distribution is specified in terms of Φ, whereas the sampling procedure naturally produces particle states γ; Φ and γ are related via a gyrokinetic Poisson equation, highly nonlinear in its dependence on γ. Expansion and asymptotic methods are used to calculate R0(Φ) analytically; excellent agreement is found between the large-N asymptotic result and a direct numerical calculation. The algorithm is tested by successfully generating a variety of states of both positive and negative temperature, including ones in which either the longest- or shortest-wavelength modes are excited to relatively very large amplitudes.
Extending the alias Monte Carlo sampling method to general distributions
Edwards, A.L.; Rathkopf, J.A.; Smidt, R.K.
1991-01-07
The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs.
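The discrete alias method that the paper extends can be sketched in a few lines. This is Vose's standard O(n)-setup, O(1)-per-sample formulation, not the authors' vectorized continuous extension:

```python
import random

def build_alias(probs):
    """Vose's alias method: O(n) table setup for a discrete distribution."""
    n = len(probs)
    scaled = [p * n for p in probs]
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    prob, alias = [0.0] * n, [0] * n
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l          # bin s keeps scaled[s], donates rest to l
        scaled[l] = (scaled[l] + scaled[s]) - 1.0
        (small if scaled[l] < 1.0 else large).append(l)
    for leftover in small + large:                # numerical leftovers get probability 1
        prob[leftover] = 1.0
    return prob, alias

def sample(prob, alias, rng=random):
    """O(1) sampling: pick a bin uniformly, then accept it or take its alias."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

Each draw costs one uniform integer, one uniform float, and one comparison, matching the "speed of equally probable bins" while reproducing the target distribution exactly.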
Instantaneous GNSS attitude determination: A Monte Carlo sampling approach
NASA Astrophysics Data System (ADS)
Sun, Xiucong; Han, Chao; Chen, Pei
2017-04-01
A novel instantaneous GNSS ambiguity resolution approach which makes use of only single-frequency carrier phase measurements for ultra-short baseline attitude determination is proposed. The Monte Carlo sampling method is employed to obtain the probability density function of ambiguities from a quaternion-based GNSS-attitude model and the LAMBDA method strengthened with a screening mechanism is then utilized to fix the integer values. Experimental results show that 100% success rate could be achieved for ultra-short baselines.
ERIC Educational Resources Information Center
Kim, Su-Young
2012-01-01
Just as growth mixture models are useful with single-phase longitudinal data, multiphase growth mixture models can be used with multiple-phase longitudinal data. One of the practically important issues in single- and multiphase growth mixture models is the sample size requirements for accurate estimation. In a Monte Carlo simulation study, the…
Reactive Monte Carlo sampling with an ab initio potential
Leiding, Jeff; Coe, Joshua D.
2016-05-04
Here, we present the first application of reactive Monte Carlo (RxMC) in a first-principles context. The algorithm samples in a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We also discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulation of NH3 to those of ab initio molecular dynamics (AIMD), and we find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the "rare-event" character of chemical reactions.
CSnrc: Correlated sampling Monte Carlo calculations using EGSnrc
Buckley, Lesley A.; Kawrakow, I.; Rogers, D.W.O.
2004-12-01
CSnrc, a new user-code for the EGSnrc Monte Carlo system, is described. This user-code improves the efficiency when calculating ratios of doses from similar geometries. It uses a correlated sampling variance reduction technique. CSnrc is developed from an existing EGSnrc user-code CAVRZnrc and improves upon the correlated sampling algorithm used in an earlier version of the code written for the EGS4 Monte Carlo system. Improvements over the EGS4 version of the algorithm avoid repetition of sections of particle tracks. The new code includes a rectangular phantom geometry not available in other EGSnrc cylindrical codes. Comparison to CAVRZnrc shows gains in efficiency of up to a factor of 64 for a variety of test geometries when computing the ratio of doses to the cavity for two geometries. CSnrc is well suited to in-phantom calculations and is used to calculate the central electrode correction factor P_cel in high-energy photon and electron beams. Current dosimetry protocols base the value of P_cel on earlier Monte Carlo calculations. The current CSnrc calculations achieve 0.02% statistical uncertainties on P_cel, much lower than those previously published. The current values of P_cel compare well with the values used in dosimetry protocols for photon beams. For electron beams, CSnrc calculations are reported at the reference depth used in recent protocols and show up to a 0.2% correction for a graphite electrode, a correction currently ignored by dosimetry protocols. The calculations show that for a 1 mm diameter aluminum central electrode, the correction factor differs somewhat from the values used in both the IAEA TRS-398 code of practice and the AAPM's TG-51 protocol.
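The core idea of correlated sampling, evaluating two similar integrands on the same random points so that fluctuations largely cancel in their ratio, can be illustrated with a toy pair of integrands. The functions and sample sizes here are invented; CSnrc's actual track-reuse machinery is far more involved.

```python
import math, random, statistics

def ratio_trial(n, correlated, rng):
    """Estimate the ratio of two similar integrals over [0, 1] by Monte Carlo."""
    xs = [rng.random() for _ in range(n)]
    ys = xs if correlated else [rng.random() for _ in range(n)]
    num = sum(math.exp(-1.05 * y) for y in ys)   # "perturbed geometry" response
    den = sum(math.exp(-x) for x in xs)          # "reference geometry" response
    return num / den

rng = random.Random(42)
corr = [ratio_trial(2000, True, rng) for _ in range(200)]
unco = [ratio_trial(2000, False, rng) for _ in range(200)]
# Reusing the same points makes the fluctuations of numerator and
# denominator cancel, so the ratio estimate has far smaller spread.
```

Over the 200 repeated trials, the correlated ratio estimator shows an order-of-magnitude smaller standard deviation than the uncorrelated one, which is exactly the efficiency gain exploited when computing dose ratios between similar geometries.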
Hellman-Feynman operator sampling in diffusion Monte Carlo calculations.
Gaudoin, R; Pitarke, J M
2007-09-21
Diffusion Monte Carlo (DMC) calculations typically yield highly accurate results in solid-state and quantum-chemical calculations. However, operators that do not commute with the Hamiltonian are at best sampled correctly up to second order in the error of the underlying trial wave function once simple corrections have been applied. This error is of the same order as that for the energy in variational calculations. Operators that suffer from these problems include potential energies and the density. This Letter presents a new method, based on the Hellman-Feynman theorem, for the correct DMC sampling of all operators diagonal in real space. Our method is easy to implement in any standard DMC code.
Optimized nested Markov chain Monte Carlo sampling: theory
Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D
2009-01-01
Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, significantly lengthening the random walk between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
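A minimal sketch of the nested-chain idea, assuming a 1D toy problem with a cheap "reference" potential and an expensive "full" potential (the real method works in the isothermal-isobaric ensemble with ab initio energies; potentials, step sizes, and chain lengths below are invented):

```python
import math, random

def nested_mcmc(u_full, u_ref, x0, beta, n_outer=2000, n_inner=10, step=0.5, rng=None):
    """Nested Metropolis: a cheap reference potential drives an inner chain;
    the composite move is accepted with a modified criterion that corrects
    for the difference between the full and reference energy changes."""
    rng = rng or random.Random(0)
    x, uf, ur = x0, u_full(x0), u_ref(x0)
    samples = []
    for _ in range(n_outer):
        y, ury = x, ur
        for _ in range(n_inner):              # inner chain on the reference potential
            z = y + rng.uniform(-step, step)
            urz = u_ref(z)
            if math.log(rng.random()) < -beta * (urz - ury):
                y, ury = z, urz
        ufy = u_full(y)                       # full energy only at the endpoint
        # Modified Metropolis criterion for the composite move.
        if math.log(rng.random()) < -beta * ((ufy - uf) - (ury - ur)):
            x, uf, ur = y, ufy, ury
        samples.append(x)
    return samples

# Full target: standard normal (u = x^2/2); reference: slightly softer well.
chain = nested_mcmc(lambda x: 0.5 * x * x, lambda x: 0.45 * x * x, 0.0, beta=1.0)
```

Because the inner chain satisfies detailed balance with respect to the reference distribution, the outer correction leaves the full distribution invariant while calling the expensive energy only once per composite move.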
Monte Carlo Studies of Sampling Strategies for Estimating Tributary Loads
NASA Astrophysics Data System (ADS)
Richards, R. Peter; Holloway, Jim
1987-10-01
Monte Carlo techniques were used to evaluate the accuracy and precision of tributary load estimates, as these are affected by sampling frequency and pattern, calculation method, watershed size, and parameter behavior during storm runoff events. Simulated years consisting of 1460 observations were chosen at random with replacement from data sets of more than 4000 samples. Patterned subsampling of these simulated years produced data appropriate to each sampling frequency and pattern, from which load estimates were calculated. Thus results for all sampling strategies were based on the same series of simulated years. Sampling frequencies ranged from 12 to roughly 600 samples per year. Unstratified and flow-stratified sampling were examined, and loads were calculated with and without the use of the Beale Ratio Estimator. All loads were evaluated by comparison with loads calculated from all 1460 samples in the simulated year. Studies consisting of 1000 iterations were repeated twice for each of five parameters in each of three watersheds. The results show that bias and precision of loading estimates are affected not only by the frequency and pattern of sampling and the calculation approach used, but also by the watershed size and the behavior of the chemical species being monitored. Furthermore, considerable interaction exists between these factors. In every case, loads based on flow-stratified sampling and calculated using the Beale ratio estimator provided the best results among the strategies examined. Differences in bias and precision among watersheds and among transported materials are related to the variability of instantaneous fluxes in the systems being monitored. These differences are qualitatively predictable from knowledge of the time behavior of the material and hydrological systems involved. Attempts to derive quantitative relationships to predict the sampling effort required to achieve a specified level of precision have not been successful.
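The Beale ratio estimator used in these studies has a compact closed form. Below is a sketch for annual load estimation from sampled instantaneous fluxes and flows, with a synthetic check in which flux is exactly proportional to flow (so the bias correction reduces to 1); the variable names and the 365-day convention are illustrative choices.

```python
import statistics

def beale_load(sample_flux, sample_flow, mean_flow_all, n_days=365):
    """Beale ratio estimator of total load from sampled instantaneous
    fluxes l_i (concentration * flow) and flows q_i, given the mean flow
    over the whole period (known from continuous gauging)."""
    n = len(sample_flux)
    lbar = statistics.fmean(sample_flux)
    qbar = statistics.fmean(sample_flow)
    # Sample covariance of flux and flow, and sample variance of flow.
    s_lq = sum((l - lbar) * (q - qbar)
               for l, q in zip(sample_flux, sample_flow)) / (n - 1)
    s_qq = statistics.variance(sample_flow)
    # Beale bias correction for the ratio lbar/qbar.
    correction = (1 + s_lq / (n * lbar * qbar)) / (1 + s_qq / (n * qbar * qbar))
    return n_days * mean_flow_all * (lbar / qbar) * correction

# Synthetic check: flux = 2 * flow exactly, so the load is recovered exactly.
load = beale_load([2, 4, 6, 8], [1, 2, 3, 4], mean_flow_all=2.5)
```

When flux and flow are strongly correlated, as during storm events, the correction term substantially reduces the bias of the plain ratio estimator, which is why the flow-stratified Beale approach performed best in the study.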
A flexible importance sampling method for integrating subgrid processes
Raut, E. K.; Larson, V. E.
2016-01-29
Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). The resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle
Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M
2012-08-01
For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS (Consistent Adjoint Driven Importance Sampling). This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.
MARKOV CHAIN MONTE CARLO POSTERIOR SAMPLING WITH THE HAMILTONIAN METHOD
K. HANSON
2001-02-01
The Markov Chain Monte Carlo (MCMC) technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy phi, where phi is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of phi and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of phi, is proposed to measure the convergence of the MCMC sequence.
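The alternation described above, draw a fresh momentum, follow an approximately constant-H leapfrog trajectory, then apply a Metropolis correction, can be sketched for a 1D standard-normal target. The step size and trajectory length below are arbitrary illustrative choices, not values from the paper.

```python
import math, random

def hmc_sample(grad_phi, phi, x0, n_samples=5000, eps=0.1, n_leap=20, rng=None):
    """Hamiltonian Monte Carlo: H = p^2/2 + phi(x), phi = -log target.
    Leapfrog trajectories of nearly constant H give distant proposals."""
    rng = rng or random.Random(0)
    x, out = x0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                      # fresh momentum
        xn, pn = x, p - 0.5 * eps * grad_phi(x)      # initial momentum half step
        for i in range(n_leap):
            xn = xn + eps * pn                       # full position step
            if i < n_leap - 1:
                pn = pn - eps * grad_phi(xn)         # full momentum step
        pn = pn - 0.5 * eps * grad_phi(xn)           # final momentum half step
        dH = 0.5 * (pn * pn - p * p) + phi(xn) - phi(x)
        if math.log(rng.random()) < -dH:             # Metropolis correction
            x = xn
        out.append(x)
    return out

# Standard normal target: phi(x) = x^2 / 2, grad phi = x.
samples = hmc_sample(lambda x: x, lambda x: 0.5 * x * x, x0=0.0)
```

Because the leapfrog integrator nearly conserves H, the acceptance rate stays high even though each proposal travels far across the parameter space, which is the source of HMC's advantage over random-walk Metropolis for correlated, high-dimensional targets.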
Adaptive importance sampling for network growth models
Holmes, Susan P.
2016-01-01
Network Growth Models such as Preferential Attachment and Duplication/Divergence are popular generative models with which to study complex networks in biology, sociology, and computer science. However, analyzing them within the framework of model selection and statistical inference is often complicated and computationally difficult, particularly when comparing models that are not directly related or nested. In practice, ad hoc methods are often used with uncertain results. If possible, the use of standard likelihood-based statistical model selection techniques is desirable. With this in mind, we develop an Adaptive Importance Sampling algorithm for estimating likelihoods of Network Growth Models. We introduce the use of the classic Plackett-Luce model of rankings as a family of importance distributions. Updates to importance distributions are performed iteratively via the Cross-Entropy Method with an additional correction for degeneracy/over-fitting inspired by the Minimum Description Length principle. This correction can be applied to other estimation problems using the Cross-Entropy method for integration/approximate counting, and it provides an interpretation of Adaptive Importance Sampling as iterative model selection. Empirical results for the Preferential Attachment model are given, along with a comparison to an alternative established technique, Annealed Importance Sampling. PMID:27182098
The importance of microhabitat for biodiversity sampling.
Mehrabi, Zia; Slade, Eleanor M; Solis, Angel; Mann, Darren J
2014-01-01
Responses to microhabitat are often neglected when ecologists sample animal indicator groups. Microhabitats may be particularly influential in non-passive biodiversity sampling methods, such as baited traps or light traps, and for certain taxonomic groups which respond to fine scale environmental variation, such as insects. Here we test the effects of microhabitat on measures of species diversity, guild structure and biomass of dung beetles, a widely used ecological indicator taxon. We demonstrate that choice of trap placement influences dung beetle functional guild structure and species diversity. We found that locally measured environmental variables were unable to fully explain trap-based differences in species diversity metrics or microhabitat specialism of functional guilds. To compare the effects of habitat degradation on biodiversity across multiple sites, sampling protocols must be standardized and scale-relevant. Our work highlights the importance of considering microhabitat scale responses of indicator taxa and designing robust sampling protocols which account for variation in microhabitats during trap placement. We suggest that this can be achieved either through standardization of microhabitat or through better efforts to record relevant environmental variables that can be incorporated into analyses to account for microhabitat effects. This is especially important when rapidly assessing the consequences of human activity on biodiversity loss and associated ecosystem function and services.
Armas-Pérez, Julio C; Londono-Hurtado, Alejandro; Guzmán, Orlando; Hernández-Ortiz, Juan P; de Pablo, Juan J
2015-07-28
A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
Semantic Importance Sampling for Statistical Model Checking
2014-10-18
We implement SIS in a tool called osmosis and use it to verify a number of stochastic systems with rare events. Our results indicate that SIS reduces... [fragmentary abstract] ...background definitions and concepts. Section 4 presents SIS, and Section 5 presents our tool osmosis. In Section 6, we present our experiments and results. [Fig. 5 of the source shows the architecture of osmosis: syntactic extraction, dReal, refinement, and Monte Carlo components.]
Importance-sampling computation of statistical properties of coupled oscillators
NASA Astrophysics Data System (ADS)
Gupta, Shamik; Leitão, Jorge C.; Altmann, Eduardo G.
2017-07-01
We introduce and implement an importance-sampling Monte Carlo algorithm to study systems of globally coupled oscillators. Our computational method efficiently obtains estimates of the tails of the distribution of various measures of dynamical trajectories corresponding to states occurring with (exponentially) small probabilities. We demonstrate the general validity of our results by applying the method to two contrasting cases: the driven-dissipative Kuramoto model, a paradigm in the study of spontaneous synchronization; and the conservative Hamiltonian mean-field model, a prototypical system of long-range interactions. We present results for the distribution of the finite-time Lyapunov exponent and a time-averaged order parameter. Among other features, our results show most notably that the distributions exhibit a vanishing standard deviation but a skewness that is increasing in magnitude with the number of oscillators, implying that nontrivial asymmetries and states yielding rare or atypical values of the observables persist even for a large number of oscillators.
Sampling uncertainty evaluation for data acquisition board based on Monte Carlo method
NASA Astrophysics Data System (ADS)
Ge, Leyi; Wang, Zhongyu
2008-10-01
Evaluating the sampling uncertainty of a data acquisition board is a difficult problem in the field of signal sampling. This paper first analyzes the sources of data acquisition board sampling uncertainty, then introduces a simulation theory for evaluating this uncertainty based on the Monte Carlo method, and puts forward a model relating the sampling uncertainty to the number of samples and the number of simulation runs. For different sample sizes and signal ranges, the authors establish a random sampling uncertainty evaluation program for a PCI-6024E data acquisition board to execute the simulation. The results of the proposed Monte Carlo simulation method are in good agreement with the GUM ones, demonstrating the validity of the Monte Carlo method.
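The general recipe, propagate assumed input uncertainties through the measurement model by Monte Carlo and compare the resulting spread with the GUM law of propagation, can be sketched for a hypothetical acquisition model. The gain uncertainty and LSB size below are invented for illustration, not specifications of the PCI-6024E.

```python
import math, random, statistics

random.seed(7)

LSB = 1e-3       # hypothetical quantization step (1 mV)
U_GAIN = 0.001   # hypothetical relative standard uncertainty of the gain
V_TRUE = 2.000   # nominal input voltage

def measure(rng):
    """One simulated reading: Gaussian gain error plus uniform quantization noise."""
    gain = rng.gauss(1.0, U_GAIN)
    q = rng.uniform(-0.5, 0.5) * LSB
    return gain * (V_TRUE + q)

# Monte Carlo evaluation: the spread of many simulated readings.
vals = [measure(random) for _ in range(100_000)]
u_mc = statistics.stdev(vals)

# GUM law of propagation for comparison: u_q = LSB / sqrt(12) for uniform noise.
u_gum = math.sqrt((V_TRUE * U_GAIN) ** 2 + (LSB / math.sqrt(12)) ** 2)
```

For this simple, nearly linear model the Monte Carlo and GUM results agree closely, mirroring the agreement reported in the abstract; the Monte Carlo route remains valid even when the model is nonlinear or the inputs are non-Gaussian.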
A new approach to importance sampling for the simulation of false alarms. [in radar systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1987-01-01
In this paper a modified importance sampling technique for improving the convergence of Importance Sampling is given. By using this approach to estimate low false alarm rates in radar simulations, the number of Monte Carlo runs can be reduced significantly. For one-dimensional exponential, Weibull, and Rayleigh distributions, a uniformly minimum variance unbiased estimator is obtained. For the Gaussian distribution the estimator in this approach is uniformly better than that of the previously known Importance Sampling approach. For a cell averaging system, by combining this technique with group sampling, the reduction in Monte Carlo runs for a reference cell of 20 and a false alarm rate of 1E-6 is on the order of 170 as compared to the previously known Importance Sampling approach.
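A mean-shifted importance sampler for a Gaussian false-alarm probability, in the spirit of (but not identical to) the paper's modified technique: sampling from N(t, 1) instead of N(0, 1) makes threshold crossings common, and reweighting by the likelihood ratio keeps the estimate unbiased, so tail probabilities near 1E-6 become estimable with modest sample counts.

```python
import math, random

def fa_prob_is(threshold, n, rng):
    """Importance sampling estimate of P(X > t) for X ~ N(0, 1),
    drawing from the shifted density N(t, 1) and reweighting by
    w(y) = phi(y) / phi(y - t) = exp(-t*y + t^2/2)."""
    total = 0.0
    for _ in range(n):
        y = rng.gauss(threshold, 1.0)
        if y > threshold:
            total += math.exp(-threshold * y + 0.5 * threshold * threshold)
    return total / n

rng = random.Random(3)
p_hat = fa_prob_is(4.75, 20000, rng)   # true tail probability is about 1.0e-6
```

A plain Monte Carlo estimate of a 1E-6 probability would need on the order of 1E8 runs for a few percent relative error; the shifted sampler reaches comparable accuracy with 2E4 runs, illustrating the kind of run-count reduction the abstract reports.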
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
[The importance of the sample design effect].
Guillén, Montserrat; Ayuso, Mercedes
2004-01-01
Sample selection through a complex design influences the subsequent statistical analysis. The different means of sample selection may result in bias and greater variance of estimators; simple random sampling is the reference design. Diverse examples are provided, illustrating how the various sampling strategies can result in bias and increased variance. The inclusion of different weighting techniques reduces bias. Evaluation of the design effect enables measurement of the degree of variance distortion due to the sampling design used, and therefore provides a direct evaluation of the alteration in the confidence intervals estimated when the sampling design deviates from simple random sampling. We recommend measuring the design effect when analyzing data obtained by sampling, and including weighting techniques in statistical analyses.
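The design effect can be demonstrated by simulation: compare the variance of the sample mean under simple random sampling with that under cluster sampling of the same total size. The population below is synthetic, with deliberately strong within-cluster similarity.

```python
import random, statistics

random.seed(11)

# Synthetic population: 50 clusters of 40 units; units within a cluster
# share a common center, so the intracluster correlation is high.
def make_cluster(rng):
    center = rng.gauss(0.0, 1.0)
    return [rng.gauss(center, 0.5) for _ in range(40)]

clusters = [make_cluster(random) for _ in range(50)]
units = [x for c in clusters for x in c]

def srs_mean(rng):
    """Mean of a simple random sample of 200 units (the reference design)."""
    return statistics.fmean(rng.sample(units, 200))

def cluster_mean(rng):
    """Mean of 5 whole clusters: also 200 units, but a complex design."""
    return statistics.fmean(x for c in rng.sample(clusters, 5) for x in c)

rng = random.Random(0)
v_srs = statistics.variance([srs_mean(rng) for _ in range(2000)])
v_clu = statistics.variance([cluster_mean(rng) for _ in range(2000)])
deff = v_clu / v_srs   # design effect: variance inflation relative to SRS
```

With this population the design effect is large (well above 1), meaning confidence intervals computed as if the data came from simple random sampling would be badly overoptimistic, which is precisely the distortion the abstract warns about.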
Akhmatskaya, Elena; Fernández-Pendás, Mario; Radivojević, Tijana; Sanz-Serna, J M
2017-08-02
The modified Hamiltonian Monte Carlo (MHMC) methods, i.e., importance sampling methods that use modified Hamiltonians within a Hybrid Monte Carlo (HMC) framework, often outperform standard techniques such as molecular dynamics (MD) and HMC in sampling efficiency. The performance of MHMC may be enhanced further through the rational choice of the simulation parameters and by replacing the standard Verlet integrator with more sophisticated splitting algorithms. Unfortunately, it is not easy to identify the appropriate values of the parameters that appear in those algorithms. We propose a technique, that we call MAIA (Modified Adaptive Integration Approach), which, for a given simulation system and a given time step, automatically selects the optimal integrator within a useful family of two-stage splitting formulas. Extended MAIA (or e-MAIA) is an enhanced version of MAIA, which additionally supplies a value of the method-specific parameter that, for the problem under consideration, keeps the momentum acceptance rate at a user-desired level. The MAIA and e-MAIA algorithms have been implemented, with no computational overhead during simulations, in MultiHMC-GROMACS, a modified version of the popular software package GROMACS. Tests performed on well-known molecular models demonstrate the superiority of the suggested approaches over a range of integrators (both standard and recently developed), as well as their capacity to improve the sampling efficiency of GSHMC, a notable method for molecular simulation in the MHMC family. GSHMC combined with e-MAIA shows remarkably good performance when compared to MD and HMC coupled with the appropriate adaptive integrators.
MORSE Monte Carlo radiation transport code system. [Sample problems
Emmett, M.B.
1984-07-02
For a number of years the MORSE user community has requested additional help in setting up problems using various options. The sample problems distributed with MORSE did not fully demonstrate the capability of the code. At Oak Ridge National Laboratory the code originators had a complete set of sample problems, but funds for documenting and distributing them were never available. Recently the number of requests for listings of input data and results for running some particular option the user was trying to implement has increased to the point where it is not feasible to handle them on an individual basis. Consequently it was decided to package a set of sample problems which illustrates more adequately how to run MORSE. This write-up may be added to Part III of the MORSE report. These sample problems include a combined neutron-gamma case, a neutron only case, a gamma only case, an adjoint case, a fission case, a time-dependent fission case, the collision density case, an XCHEKR run and a PICTUR run.
Importance of sampling frequency when collecting diatoms
NASA Astrophysics Data System (ADS)
Wu, Naicheng; Faber, Claas; Sun, Xiuming; Qu, Yueming; Wang, Chao; Ivetic, Snjezana; Riis, Tenna; Ulrich, Uta; Fohrer, Nicola
2016-11-01
There has been increasing interest in diatom-based bio-assessment but we still lack a comprehensive understanding of how to capture diatoms’ temporal dynamics with an appropriate sampling frequency (ASF). To cover this research gap, we collected and analyzed daily riverine diatom samples over a 1-year period (25 April 2013–30 April 2014) at the outlet of a German lowland river. The samples were classified into five clusters (1–5) by a Kohonen Self-Organizing Map (SOM) method based on similarity between species compositions over time. ASFs were determined to be 25 days at Cluster 2 (June-July 2013) and 13 days at Cluster 5 (February-April 2014), whereas no specific ASFs were found at Cluster 1 (April-May 2013), 3 (August-November 2013) (>30 days) and Cluster 4 (December 2013 - January 2014) (<1 day). ASFs showed dramatic seasonality and were negatively related to hydrological wetness conditions, suggesting that sampling interval should be reduced with increasing catchment wetness. A key implication of our findings for freshwater management is that long-term bio-monitoring protocols should be designed to track algal temporal dynamics at an appropriate sampling frequency.
Monte Carlo path sampling approach to modeling aeolian sediment transport
NASA Astrophysics Data System (ADS)
Hardin, E. J.; Mitasova, H.; Mitas, L.
2011-12-01
but evolve the system according to rules that are abstractions of the governing physics. This work presents the Green function solution to the continuity equations that govern sediment transport. The Green function solution is implemented using a path sampling approach whereby sand mass is represented as an ensemble of particles that evolve stochastically according to the Green function. In this approach, particle density is a particle representation that is equivalent to the field representation of elevation. Because aeolian transport is nonlinear, particles must be propagated according to their updated field representation with each iteration. This is achieved using a particle-in-cell technique. The path sampling approach offers a number of advantages. The integral form of the Green function solution makes it robust to discontinuities in complex terrains. Furthermore, this approach is spatially distributed, which can help elucidate the role of complex landscapes in aeolian transport. Finally, path sampling is highly parallelizable, making it ideal for execution on modern clusters and graphics processing units.
Monte Carlo simulation of air sampling methods for the measurement of radon decay products.
Sima, Octavian; Luca, Aurelian; Sahagia, Maria
2017-02-21
A stochastic model of the processes involved in the measurement of the activity of the (222)Rn decay products was developed. The distributions of the relevant factors, including air sampling and radionuclide collection, are propagated by Monte Carlo simulation to the final distribution of the measurement results. The uncertainties of the (222)Rn decay product concentrations in air are thereby realistically evaluated.
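The propagation scheme described above can be sketched generically: draw each input factor from its distribution, push the draws through the measurement model, and read off the distribution of the result. The measurement model and all distributions below are illustrative stand-ins, not the paper's actual model:

```python
import random
import statistics

def measured_activity(sample_volume, efficiency, counts):
    """Toy measurement model: activity concentration from observed counts."""
    return counts / (efficiency * sample_volume)

def propagate(n_trials=20000, seed=1):
    """Propagate the input distributions to the distribution of results."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        volume = rng.gauss(1.0, 0.02)             # sampled air volume, 2% spread
        eff = rng.gauss(0.25, 0.01)               # collection+detection efficiency
        counts = rng.gauss(500.0, 500.0 ** 0.5)   # Poisson counts, normal approx.
        results.append(measured_activity(volume, eff, counts))
    return statistics.mean(results), statistics.stdev(results)

mean, sd = propagate()
```

The spread of the simulated results gives the combined uncertainty directly, including any nonlinearity of the measurement model that a first-order error-propagation formula would miss.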
Automatic variance reduction for Monte Carlo simulations via the local importance function transform
Turner, S.A.
1996-02-01
The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.
Liang, Faming; Jin, Ick-Hoon
2013-08-01
Simulating from distributions with intractable normalizing constants has been a long-standing problem in machine learning. In this letter, we propose a new algorithm, the Monte Carlo Metropolis-Hastings (MCMH) algorithm, for tackling this problem. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm. It replaces the unknown normalizing constant ratio by a Monte Carlo estimate in simulations, while still converging, as shown in the letter, to the desired target distribution under mild conditions. The MCMH algorithm is illustrated with spatial autologistic models and exponential random graph models. Unlike other auxiliary variable Markov chain Monte Carlo (MCMC) algorithms, such as the Møller and exchange algorithms, the MCMH algorithm avoids the requirement for perfect sampling, and thus can be applied to many statistical models for which perfect sampling is not available or very expensive. The MCMH algorithm can also be applied to Bayesian inference for random effect models and missing data problems that involve simulations from a distribution with intractable integrals.
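The core idea, replacing the unknown ratio of normalizing constants in the Metropolis-Hastings acceptance probability with a Monte Carlo estimate, can be sketched on a toy model. Here the data come from a density proportional to exp(-theta*x) on [0,1]; its normalizing constant Z(theta) is actually tractable, but the sketch pretends it is not and estimates it by averaging exp(-theta*U) over uniform draws. The model, prior range, and all tuning values are illustrative, not from the letter:

```python
import math
import random

rng = random.Random(0)

def sample_data(theta, n):
    """Draw n points from f(x|theta) proportional to exp(-theta*x) on [0,1]."""
    c = 1.0 - math.exp(-theta)
    return [-math.log(1.0 - rng.random() * c) / theta for _ in range(n)]

def z_hat(theta, m=1000):
    """Monte Carlo estimate of Z(theta) = integral of exp(-theta*x) over [0,1]."""
    return sum(math.exp(-theta * rng.random()) for _ in range(m)) / m

def mcmh(data, iters=4000, step=0.5):
    """Metropolis-Hastings on theta with the normalizing-constant ratio
    replaced by fresh Monte Carlo estimates (flat prior on (0.01, 10))."""
    s, n = sum(data), len(data)
    theta, chain = 1.0, []
    for _ in range(iters):
        prop = theta + rng.gauss(0.0, step)
        if 0.01 < prop < 10.0:
            # unnormalized likelihood ratio times estimated Z ratio
            log_r = -(prop - theta) * s + n * (math.log(z_hat(theta)) - math.log(z_hat(prop)))
            if math.log(rng.random()) < log_r:
                theta = prop
        chain.append(theta)
    return chain

data = sample_data(2.0, 20)   # synthetic data with true theta = 2
chain = mcmh(data)
post_mean = sum(chain[1000:]) / len(chain[1000:])
```

With a finite number of auxiliary draws the acceptance ratio is noisy, so this plug-in sketch is only approximate; the letter's contribution is the analysis of when such a scheme still converges to the desired target.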
Naglič, Peter; Pernuš, Franjo; Likar, Boštjan; Bürmen, Miran
2017-01-01
Analytical expressions for sampling the scattering angle from a phase function in Monte Carlo simulations of light propagation are available only for a limited number of phase functions. Consequently, numerical sampling methods based on tabulated values are often required instead. By using Monte Carlo simulated reflectance, we compare two existing and propose an improved numerical sampling method and show that both the number of the tabulated values and the numerical sampling method significantly influence the accuracy of the simulated reflectance. The provided results and guidelines should serve as a good starting point for conducting computationally efficient Monte Carlo simulations with numerical phase function sampling. PMID:28663872
Azbouche, Ahmed; Belgaid, Mohamed; Mazrou, Hakim
2015-08-01
A fully detailed Monte Carlo geometrical model of a High Purity Germanium detector with a (152)Eu source, packed in Marinelli beaker, was developed for routine analysis of large volume environmental samples. Then, the model parameters, in particular, the dead layer thickness were adjusted thanks to a specific irradiation configuration together with a fine-tuning procedure. Thereafter, the calculated efficiencies were compared to the measured ones for standard samples containing (152)Eu source filled in both grass and resin matrices packed in Marinelli beaker. From this comparison, a good agreement between experiment and Monte Carlo calculation results was obtained highlighting thereby the consistency of the geometrical computational model proposed in this work. Finally, the computational model was applied successfully to determine the (137)Cs distribution in soil matrix. From this application, instructive results were achieved highlighting, in particular, the erosion and accumulation zone of the studied site.
Fast sampling in the slow manifold: The momentum-enhanced hybrid Monte Carlo method
NASA Astrophysics Data System (ADS)
Andricioaei, Ioan
2005-03-01
We will present a novel dynamic algorithm, the MEHMC method, which enhances sampling while yielding correct Boltzmann-weighted statistical distributions. The gist of the MEHMC method is to use momentum averaging to identify the slow manifold and to bias the Maxwell distribution of momenta usually employed in Hybrid Monte Carlo along this manifold. Several tests and applications exemplify the method.
NASA Astrophysics Data System (ADS)
Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.
2017-01-01
Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper-mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach for the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (<˜20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe trade-offs, an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provides a meaningful framework for interrogating Earth structure from receiver function data.
ERIC Educational Resources Information Center
Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying
2011-01-01
Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
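The use of Monte Carlo methods to estimate power at a given sample size, as described above, amounts to simulating many studies under an assumed effect and counting how often the test rejects. A minimal sketch for a two-sample comparison follows; the effect size, sample size, and critical value are illustrative assumptions:

```python
import random
import statistics

def welch_t(a, b):
    """Welch t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

def power(n_per_group, effect, n_sims=2000, crit=1.96, seed=7):
    """Fraction of simulated studies whose |t| exceeds the critical value."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect, 1.0) for _ in range(n_per_group)]
        if abs(welch_t(a, b)) > crit:
            hits += 1
    return hits / n_sims

# power to detect a medium effect (d = 0.5) with 64 per group, alpha = 0.05
p = power(64, 0.5)
```

Varying `n_per_group` until the estimated power reaches a target (e.g. 0.80) turns the same simulation into a sample-size calculation.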
NASA Astrophysics Data System (ADS)
Pande, S.; Shafiei, M.
2016-12-01
Markov chain Monte Carlo (MCMC) methods have been applied in many hydrologic studies to explore posterior parameter distributions within a Bayesian framework. Accurate estimation of posterior parameter distributions is key to reliably estimate marginal likelihood functions and hence to reliably estimate measures of Bayesian complexity. This paper introduces an alternative to well-known random walk based MCMC samplers. An Adaptive Kernel Density Independence Sampling based Monte Carlo Sampling (A-KISMCS) is proposed. A-KISMCS uses an independence sampler with Metropolis-Hastings (M-H) updates, which ensures that candidate observations are drawn independently of the current state of a chain. This ensures efficient exploration of the target distribution. The bandwidth of the kernel density estimator is also adapted online in order to increase its accuracy and ensure fast convergence to a target distribution. The performance of A-KISMCS is tested on several case studies, including synthetic and real world case studies of hydrological modelling, and compared with Differential Evolution Adaptive Metropolis (DREAM-zs), which is fundamentally based on random walk sampling with differential evolution. Results show that while DREAM-zs converges to slightly sharper posterior densities, A-KISMCS is slightly more efficient in tracking the mode of the posteriors.
Modeling N2O Emissions From Temperate Agroecosystems: A Literature Review Using Monte Carlo Sampling
NASA Astrophysics Data System (ADS)
Tonitto, C.
2006-12-01
In this work, we model annual N2O flux based on field experiments in temperate agroecosystems reported in the literature. Understanding potential N2O flux as a consequence of ecosystem management is important for mitigating global change. While loss of excess N as N2 has no environmental consequences, loss as N2O contributes to the greenhouse effect; over a 100 year time horizon N2O has 310 times the global warming potential (GWP) of CO2. Nitrogen trace gas flux remains difficult to accurately quantify under field conditions due to temporal and spatial limitations of sampling. Trace gas measurement techniques often rely on small chambers sampled at regular intervals. This measurement scheme can undersample stochastic events, such as high precipitation, which correspond to periods of high N trace gas flux. We apply Monte Carlo sampling of field measurements to project N2O losses under different crops and soil textures. Three statistical models are compared: 1) annual N2O flux as a function of process rates derived from temporally aggregated field observations, 2) annual N2O flux incorporating the probability of precipitation events, and 3) annual N2O flux as a function of crop growth. Using the temporally aggregated model, predicted annual N2O flux was highest for corn and wheat, which receive higher fertilizer inputs relative to barley and ryegrass. Within a cropping system, clayey soil textures resulted in the highest N2O flux. The incorporation of precipitation events in the model has the greatest effect on clayey soils. Relative to the aggregated model the inclusion of precipitation events changed predicted mean annual N2O flux from 31 to 49 kg N ha-1 for corn grown on clay loam and shifted the 75% confidence interval (CI) from 20-42 to 38-61 kg N ha-1. In contrast, comparisons between the aggregated and precipitation event models resulted in indistinguishable predictions of mean annual N2O loss for corn grown on silty loam and loam soils. Similarly, application
NASA Astrophysics Data System (ADS)
Holmes, Jesse Curtis
established that depends on uncertainties in the physics models and methodology employed to produce the DOS. Through Monte Carlo sampling of perturbations from the reference phonon spectrum, an S(alpha, beta) covariance matrix may be generated. In this work, density functional theory and lattice dynamics in the harmonic approximation are used to calculate the phonon DOS for hexagonal crystalline graphite. This form of graphite is used as an example material for the purpose of demonstrating procedures for analyzing, calculating and processing thermal neutron inelastic scattering uncertainty information. Several sources of uncertainty in thermal neutron inelastic scattering calculations are examined, including sources which cannot be directly characterized through a description of the phonon DOS uncertainty, and their impacts are evaluated. Covariances for hexagonal crystalline graphite S(alpha, beta) data are quantified by coupling the standard methodology of LEAPR with a Monte Carlo sampling process. The mechanics of efficiently representing and processing this covariance information is also examined. Finally, with appropriate sensitivity information, it is shown that an S(alpha, beta) covariance matrix can be propagated to generate covariance data for integrated cross sections, secondary energy distributions, and coupled energy-angle distributions. This approach enables a complete description of thermal neutron inelastic scattering cross section uncertainties which may be employed to improve the simulation of nuclear systems.
On the maximal use of Monte Carlo samples: re-weighting events at NLO accuracy.
Mattelaer, Olivier
2016-01-01
Accurate Monte Carlo simulations for high-energy events at CERN's Large Hadron Collider are very expensive, both from the computing and storage points of view. We describe a method that allows one to consistently re-use parton-level samples accurate up to NLO in QCD under different theoretical hypotheses. We implement it in MadGraph5_aMC@NLO and validate it by applying it to several cases of practical interest for the search of new physics at the LHC.
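The underlying idea of event re-weighting, reusing samples generated under one hypothesis to estimate quantities under another by attaching per-event weight ratios, can be sketched in a few lines. The densities below are toy stand-ins, not matrix elements, and the self-normalized estimator shown is one common choice:

```python
import math
import random

def reweight(samples, w_old, w_new):
    """Reuse events generated under density proportional to w_old to estimate
    a mean under density proportional to w_new, via per-event weight ratios."""
    ratios = [w_new(x) / w_old(x) for x in samples]
    norm = sum(ratios)
    return sum(r * x for r, x in zip(ratios, samples)) / norm

rng = random.Random(2)
# toy "events": a falling spectrum exp(-x) under the original hypothesis
samples = [rng.expovariate(1.0) for _ in range(40000)]
old = lambda x: math.exp(-x)
new = lambda x: math.exp(-2.0 * x)   # alternative hypothesis: steeper spectrum
m = reweight(samples, old, new)      # mean of x under the new hypothesis
```

Re-weighting is statistically efficient only while the two hypotheses overlap well; if `w_new` puts mass where the original sample is sparse, the weights develop a heavy tail and the effective sample size collapses.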
NASA Astrophysics Data System (ADS)
Levien, Ethan; Bressloff, Paul C.
2017-10-01
Many biochemical systems appearing in applications have a multiscale structure so that they converge to piecewise deterministic Markov processes in a thermodynamic limit. The statistics of the piecewise deterministic process can be obtained much more efficiently than those of the exact process. We explore the possibility of coupling sample paths of the exact model to the piecewise deterministic process in order to reduce the variance of their difference. We then apply this coupling to reduce the computational complexity of a Monte Carlo estimator. Motivated by the rigorous results in [1], we show how this method can be applied to realistic biological models with nontrivial scalings.
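The coupling idea above, simulating the exact process and a cheap surrogate with shared randomness so that their difference has small variance, can be illustrated with a simpler stand-in: a fine-step and a coarse-step discretisation of a diffusion driven by the same Brownian increments (the paper couples an exact jump process to its piecewise deterministic limit; the principle is the same). All model parameters below are illustrative:

```python
import random

def coupled_estimate(seed=3):
    """Variance-reduced estimate of E[X(1)^2] for dX = -X dt + 0.5 dW, X(0)=1:
    many cheap coarse-only paths plus a small coupled correction term."""
    rng = random.Random(seed)
    h_f, h_c = 1 / 64, 1 / 8
    # Cheap surrogate: many coarse-step paths
    coarse = []
    for _ in range(20000):
        x = 1.0
        for _ in range(8):
            x += -x * h_c + 0.5 * rng.gauss(0.0, h_c ** 0.5)
        coarse.append(x * x)
    # Correction: few coupled paths; the coarse path reuses the fine path's noise
    diffs = []
    for _ in range(1000):
        xf = xc = 1.0
        acc = 0.0
        for step in range(64):
            dw = rng.gauss(0.0, h_f ** 0.5)
            xf += -xf * h_f + 0.5 * dw     # fine ("exact") path
            acc += dw
            if (step + 1) % 8 == 0:        # coarse path sees the summed noise
                xc += -xc * h_c + 0.5 * acc
                acc = 0.0
        diffs.append(xf * xf - xc * xc)
    # telescoping: E[fine] = E[coarse] + E[fine - coarse]
    return sum(coarse) / len(coarse) + sum(diffs) / len(diffs)

est = coupled_estimate()
```

Because the coupled difference has small variance, only a few expensive fine-path simulations are needed to correct the bias of the cheap surrogate, which is exactly the complexity reduction the abstract describes.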
A new approach to Monte Carlo simulations in statistical physics: Wang-Landau sampling
NASA Astrophysics Data System (ADS)
Landau, D. P.; Tsai, Shan-Ho; Exler, M.
2004-10-01
We describe a Monte Carlo algorithm for doing simulations in classical statistical physics in a different way. Instead of sampling the probability distribution at a fixed temperature, a random walk is performed in energy space to extract an estimate for the density of states. The probability can be computed at any temperature by weighting the density of states by the appropriate Boltzmann factor. Thermodynamic properties can be determined from suitable derivatives of the partition function and, unlike "standard" methods, the free energy and entropy can also be computed directly. To demonstrate the simplicity and power of the algorithm, we apply it to models exhibiting first-order or second-order phase transitions.
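The random walk in energy space described above can be sketched in a few lines of Python. The system below, N non-interacting spins with "energy" equal to the number of up spins, is chosen purely so that the exact density of states g(E) = C(N, E) is known; the flatness criterion and all parameter values are illustrative choices, not from the paper:

```python
import math
import random

def wang_landau(n_spins=12, f_final=1e-6, seed=5):
    """Wang-Landau estimate of ln g(E) for N non-interacting spins,
    where E = number of up spins (exact answer: g(E) = C(N, E))."""
    rng = random.Random(seed)
    spins = [0] * n_spins
    e = 0                                  # current energy (number of up spins)
    ln_g = [0.0] * (n_spins + 1)           # running estimate of ln g(E)
    hist = [0] * (n_spins + 1)
    ln_f = 1.0                             # modification factor, reduced on flatness
    while ln_f > f_final:
        for _ in range(10000):
            i = rng.randrange(n_spins)
            e_new = e + (1 if spins[i] == 0 else -1)
            # accept with probability min(1, g(E)/g(E_new))
            if math.log(rng.random()) < ln_g[e] - ln_g[e_new]:
                spins[i] ^= 1
                e = e_new
            ln_g[e] += ln_f                # update estimate at the visited energy
            hist[e] += 1
        if min(hist) > 0.8 * sum(hist) / len(hist):   # histogram "flat" enough
            hist = [0] * (n_spins + 1)
            ln_f /= 2.0
    return ln_g

ln_g = wang_landau()
```

Once ln g(E) is known, weighting by the Boltzmann factor gives the partition function, and hence free energy and entropy, at any temperature from the single run, which is the key advantage the abstract highlights.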
Perera, Meewanage Dilina N; Li, Ying Wai; Eisenbach, Markus; Vogel, Thomas; Landau, David P
2015-01-01
We describe the study of thermodynamics of materials using replica-exchange Wang-Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang-Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first principles calculations. We demonstrate that our framework leads to a significant speedup without compromising the accuracy and precision and facilitates the study of much larger systems than is possible with its serial counterpart.
Markov chain Monte Carlo sampling of gene genealogies conditional on unphased SNP genotype data.
Burkett, Kelly M; McNeney, Brad; Graham, Jinko
2013-10-01
The gene genealogy is a tree describing the ancestral relationships among genes sampled from unrelated individuals. Knowledge of the tree is useful for inference of population-genetic parameters and has potential application in gene-mapping. Markov chain Monte Carlo approaches that sample genealogies conditional on observed genetic data typically assume that haplotype data are observed even though commonly-used genotyping technologies provide only unphased genotype data. We have extended our haplotype-based genealogy sampler, sampletrees, to handle unphased genotype data. We use the sampled haplotype configurations as a diagnostic for adequate sampling of the tree space based on the reasoning that if haplotype sampling is restricted, sampling from the tree space will also be restricted. We compare the distributions of sampled haplotypes across multiple runs of sampletrees, and to those estimated by the phase inference program, PHASE. Performance was excellent for the majority of individuals as shown by the consistency of results across multiple runs. However, for some individuals in some datasets, sampletrees had problems sampling haplotype configurations; longer run lengths would be required for these datasets. For many datasets though, we expect that sampletrees will be useful for sampling from the posterior distribution of gene genealogies given unphased genotype data.
Reactive Monte Carlo sampling with an ab initio potential
Leiding, Jeff; Coe, Joshua D.
2016-05-04
Here, we present the first application of reactive Monte Carlo in a first-principles context. The algorithm samples in a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We also discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulation of NH_{3} to those of ab initio molecular dynamics (AIMD). We find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the “rare-event” character of chemical reactions.
Optimal sampling efficiency in Monte Carlo sampling with an approximate potential
Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D
2009-01-01
Building on the work of Iftimie et al., Boltzmann sampling of an approximate potential (the 'reference' system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is evaluated at a higher level of approximation (the 'full' system) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. For reference system chains of sufficient length, consecutive full energies are statistically decorrelated and thus far fewer are required to build ensemble averages with a given variance. Without modifying the original algorithm, however, the maximum reference chain length is too short to decorrelate full configurations without dramatically lowering the acceptance probability of the composite move. This difficulty stems from the fact that the reference and full potentials sample different statistical distributions. By manipulating the thermodynamic variables characterizing the reference system (pressure and temperature, in this case), we maximize the average acceptance probability of composite moves, lengthening significantly the random walk between consecutive full energy evaluations. In this manner, the number of full energy evaluations needed to precisely characterize equilibrium properties is dramatically reduced. The method is applied to a model fluid, but implications for sampling high-dimensional systems with ab initio or density functional theory (DFT) potentials are discussed.
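The composite-move construction described above can be sketched on a one-dimensional toy problem: run a sub-chain of ordinary Metropolis steps under a cheap reference potential, then accept or reject the whole excursion using only the endpoint energies of the full potential. The potentials and all tuning values below are illustrative stand-ins, not the model fluid of the paper:

```python
import math
import random

def u_full(x):
    """'Full' potential (expensive in real applications): anharmonic well."""
    return 0.5 * x * x + 0.1 * x ** 4

def u_ref(x):
    """Cheap reference potential: harmonic approximation to u_full."""
    return 0.5 * x * x

def nested_chain(n_outer=4000, n_inner=20, step=0.8, seed=9):
    """Boltzmann sampling of u_full (kT = 1) via reference-system sub-chains.
    Each composite move runs n_inner Metropolis steps under u_ref and is then
    accepted on the difference of the two potentials at the endpoints only."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_outer):
        y = x
        for _ in range(n_inner):           # sub-chain under the cheap potential
            prop = y + rng.uniform(-step, step)
            if math.log(rng.random()) < u_ref(y) - u_ref(prop):
                y = prop
        # composite Metropolis correction: full energy needed only at endpoints
        d = (u_full(y) - u_ref(y)) - (u_full(x) - u_ref(x))
        if math.log(rng.random()) < -d:
            x = y
        samples.append(x)
    return samples

s = nested_chain()
```

Because the sub-chain satisfies detailed balance with respect to exp(-u_ref), the endpoint correction leaves exp(-u_full) invariant; the closer the reference tracks the full potential, the longer the sub-chains can be while keeping the composite acceptance high, which is the trade-off the abstract optimizes by tuning the reference system's thermodynamic variables.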
Multiscale Monte Carlo Sampling of Protein Sidechains: Application to Binding Pocket Flexibility
Nilmeier, Jerome; Jacobson, Matt
2008-01-01
We present a Monte Carlo sidechain sampling procedure and apply it to assessing the flexibility of protein binding pockets. We implemented a multiple “time step” Monte Carlo algorithm to optimize sidechain sampling with a surface generalized Born implicit solvent model. In this approach, certain forces (those due to long-range electrostatics and the implicit solvent model) are updated infrequently, in “outer steps”, while short-range forces (covalent, local nonbonded interactions) are updated at every “inner step”. Two multistep protocols were studied. The first protocol rigorously obeys detailed balance, and the second protocol introduces an approximation to the solvation term that increases the acceptance ratio. The first protocol gives a 10-fold improvement over a protocol that does not use multiple time steps, while the second protocol generates comparable ensembles and gives a 15-fold improvement. A range of 50–200 inner steps per outer step was found to give optimal performance for both protocols. The resultant method is a practical means to assess sidechain flexibility in ligand binding pockets, as we illustrate with proof-of-principle calculations on six proteins: DB3 antibody, thermolysin, estrogen receptor, PPAR-γ, PI3 kinase, and CDK2. The resulting sidechain ensembles of the apo binding sites correlate well with known induced fit conformational changes and provide insights into binding pocket flexibility. PMID:19119325
Gil, Victor A; Lecina, Daniel; Grebner, Christoph; Guallar, Victor
2016-10-15
Normal mode methods are becoming a popular alternative to sample the conformational landscape of proteins. In this study, we describe the implementation of an internal coordinate normal mode analysis method and its application in exploring protein flexibility by using the Monte Carlo method PELE. This new method alternates two different stages, a perturbation of the backbone through the application of torsional normal modes, and a resampling of the side chains. We have evaluated the new approach using two test systems, ubiquitin and c-Src kinase, and the differences from the original ANM method are assessed by comparing both results to reference molecular dynamics simulations. The results suggest that the sampled phase space in the internal coordinate approach is closer to the molecular dynamics phase space than the one coming from a Cartesian coordinate anisotropic network model. In addition, the new method shows a great speedup (∼5-7×), making it a good candidate for future normal mode implementations in Monte Carlo methods.
Fast Monte Carlo simulation of a dispersive sample on the SEQUOIA spectrometer at the SNS
Granroth, Garrett E; Chen, Meili; Kohl, James Arthur; Hagen, Mark E; Cobb, John W
2007-01-01
Simulation of an inelastic scattering experiment, with a sample and a large pixelated detector, usually requires days of time because of finite processor speeds. We report simulations on an SNS (Spallation Neutron Source) instrument, SEQUOIA, that reduce the time to less than 2 hours by using parallelization and the resources of the TeraGrid. SEQUOIA is a fine resolution (∆E/Ei ~ 1%) chopper spectrometer under construction at the SNS. It utilizes incident energies from Ei = 20 meV to 2 eV and will have ~ 144,000 detector pixels covering 1.6 Sr of solid angle. The full spectrometer, including a 1-D dispersive sample, has been simulated using the Monte Carlo package McStas. This paper summarizes the parallelization method and the results of these simulations. In addition, limitations of and proposed improvements to current analysis software will be discussed.
Note: A pure-sampling quantum Monte Carlo algorithm with independent Metropolis.
Vrbik, Jan; Ospadov, Egor; Rothstein, Stuart M
2016-07-14
Recently, Ospadov and Rothstein published a pure-sampling quantum Monte Carlo algorithm (PSQMC) that features an auxiliary Path Z that connects the midpoints of the current and proposed Paths X and Y, respectively. When sufficiently long, Path Z provides statistical independence of Paths X and Y. Under those conditions, the Metropolis decision used in PSQMC is done without any approximation, i.e., not requiring microscopic reversibility and without having to introduce any G(x → x'; τ) factors into its decision function. This is a unique feature that contrasts with all competing reptation algorithms in the literature. An example illustrates that dependence of Paths X and Y has adverse consequences for pure sampling.
Schumaker, Mark F; Kramer, David M
2011-09-01
We have programmed a Monte Carlo simulation of the Q-cycle model of electron transport in the cytochrome b6f complex, an enzyme in the photosynthetic pathway that converts sunlight into biologically useful forms of chemical energy. Results were compared with published experiments of Kramer and Crofts (Biochim. Biophys. Acta 1183:72-84, 1993). Rates for the simulation were optimized by constructing large numbers of parameter sets using Latin hypercube sampling and selecting those that gave the minimum mean square deviation from experiment. Multiple copies of the simulation program were run in parallel on a Beowulf cluster. We found that Latin hypercube sampling works well as a method for approximately optimizing very noisy objective functions of 15 or 22 variables. Further, the simplified Q-cycle model can reproduce experimental results in the presence or absence of a quinone reductase (Qi) site inhibitor without invoking ad hoc side-reactions.
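The optimization strategy described in this abstract — generating many candidate parameter sets by Latin hypercube sampling and keeping the one with the smallest deviation from experiment — can be sketched as follows. This is a minimal illustration with a hypothetical three-parameter toy objective, not the authors' 15- or 22-variable rate model:

```python
import random

def latin_hypercube(n_samples, bounds, rng):
    """Latin hypercube sample: each dimension is cut into n_samples
    equal strata, and every stratum is used exactly once per dimension."""
    columns = []
    for lo, hi in bounds:
        # one random point inside each stratum of this dimension
        pts = [lo + (hi - lo) * (i + rng.random()) / n_samples
               for i in range(n_samples)]
        rng.shuffle(pts)  # decorrelate stratum order across dimensions
        columns.append(pts)
    # transpose: n_samples parameter vectors
    return [list(row) for row in zip(*columns)]

def best_parameter_set(candidates, objective):
    """Keep the candidate minimizing the (possibly noisy) objective."""
    return min(candidates, key=objective)

rng = random.Random(42)
bounds = [(0.0, 1.0)] * 3  # three hypothetical rate parameters
candidates = latin_hypercube(50, bounds, rng)
best = best_parameter_set(candidates,
                          lambda p: sum((x - 0.5) ** 2 for x in p))
```

Because each dimension is stratified, even a modest number of candidates covers every parameter's range evenly, which is what makes the approach workable for noisy objectives.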
Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs
Infanger, G.
1993-11-01
The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages, and problems easily get out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of expected future costs as well as of the gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, using importance path sampling for the upper bound estimation. Initial numerical results are promising.
Sequential Importance Sampling for Rare Event Estimation with Computer Experiments
Williams, Brian J.; Picard, Richard R.
2012-06-25
Importance sampling often drastically reduces the variance of percentile and quantile estimators of rare events. We propose a sequential strategy for iterative refinement of importance distributions for sampling uncertain inputs to a computer model, in order to estimate quantiles of model output or the probability that the model output exceeds a fixed or random threshold. A framework is introduced for updating a model surrogate to maximize its predictive capability for rare event estimation with sequential importance sampling. Examples of the proposed methodology involving materials strength and nuclear reactor applications are presented. The conclusions are: (1) importance sampling improves uncertainty quantification of percentile and quantile estimates relative to a brute-force approach; (2) the benefits of importance sampling increase as percentiles become more extreme; (3) iterative refinement improves importance distributions in relatively few iterations; (4) surrogates are necessary for slow-running codes; (5) sequential design improves surrogate quality in the region of parameter space indicated by the importance distributions; and (6) importance distributions and variance reduction factors stabilize quickly, while quantile estimates may converge slowly.
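As a concrete illustration of why importance sampling helps for extreme percentiles, the sketch below estimates a Gaussian tail probability by shifting the sampling distribution onto the rare region and reweighting each draw by the likelihood ratio. The shifted-normal proposal and the analytic target are assumptions chosen for illustration, standing in for the computer-model setting of the abstract:

```python
import math
import random

def tail_prob_is(threshold, n_draws, rng):
    """Estimate p = P(X > threshold) for X ~ N(0, 1) by importance
    sampling from N(threshold, 1): draws land in the rare region and
    each is reweighted by the likelihood ratio
    phi(y) / phi(y - threshold) = exp(-threshold*y + threshold**2/2)."""
    acc = 0.0
    for _ in range(n_draws):
        y = rng.gauss(threshold, 1.0)  # proposal centered on the rare region
        if y > threshold:
            acc += math.exp(-threshold * y + threshold * threshold / 2.0)
    return acc / n_draws

rng = random.Random(1)
estimate = tail_prob_is(4.0, 20000, rng)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # analytic tail, ~3.2e-5
```

A plain Monte Carlo estimate with the same 20,000 draws would typically see zero exceedances of 4.0; the shifted proposal gives a low-variance estimate from the same budget.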
9 CFR 327.11 - Receipts to importers for import product samples.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Receipts to importers for import product samples. 327.11 Section 327.11 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE... AND VOLUNTARY INSPECTION AND CERTIFICATION IMPORTED PRODUCTS § 327.11 Receipts to importers for...
On the importance of incorporating sampling weights in ...
Occupancy models are used extensively to assess wildlife-habitat associations and to predict species distributions across large geographic regions. Occupancy models were developed as a tool to properly account for imperfect detection of a species. Current guidelines on survey design requirements for occupancy models focus on the number of sample units and the pattern of revisits to a sample unit within a season. We focus instead on the sampling design, i.e., how the sample units are selected in geographic space (e.g., stratified, simple random, unequal probability, etc.). In a probability design, each sample unit has a sample weight which quantifies the number of sample units it represents in the finite (oftentimes areal) sampling frame. We demonstrate the importance of including sampling weights in occupancy model estimation when the design is not a simple random sample or equal probability design. We assume a finite areal sampling frame as proposed for a national bat monitoring program. We compare several unequal and equal probability designs and varying sampling intensity within a simulation study. We found the traditional single season occupancy model produced biased estimates of occupancy and lower confidence interval coverage rates compared to occupancy models that accounted for the sampling design. We also discuss how our findings inform the analyses proposed for the nascent North American Bat Monitoring Program and other collaborative synthesis efforts that propose h
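The design-weighting idea can be sketched with a Horvitz-Thompson style estimator. The inclusion probabilities below are hypothetical, and imperfect detection is ignored for brevity; the point is only how unequal-probability designs bias a naive mean:

```python
def weighted_occupancy(detections, inclusion_probs):
    """Design-weighted occupancy estimate over a finite frame: each
    sampled unit stands in for 1/p_i units of the frame (Horvitz-
    Thompson weighting), so unequal-probability designs are honored."""
    num = sum(z / p for z, p in zip(detections, inclusion_probs))
    den = sum(1.0 / p for p in inclusion_probs)
    return num / den

# Hypothetical survey: occupied units were oversampled (p = 0.8),
# unoccupied units undersampled (p = 0.2).
detections = [1, 1, 0, 0]
probs = [0.8, 0.8, 0.2, 0.2]
naive = sum(detections) / len(detections)         # ignores the design
weighted = weighted_occupancy(detections, probs)  # honors the design
```

Here the two detected units each represent only 1.25 frame units while each non-detection represents 5, so the weighted estimate (0.2) is far below the naive mean (0.5) — the direction of bias the simulation study in the abstract demonstrates.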
Improved algorithms and coupled neutron-photon transport for auto-importance sampling method
NASA Astrophysics Data System (ADS)
Wang, Xin; Li, Jun-Li; Wu, Zhen; Qiu, Rui; Li, Chun-Yan; Liang, Man-Chun; Zhang, Hui; Gang, Zhi; Xu, Hong
2017-01-01
The Auto-Importance Sampling (AIS) method is a Monte Carlo variance reduction technique proposed for deep penetration problems, which can significantly improve computational efficiency without pre-calculations of the importance distribution. However, the AIS method has only been validated on several simple examples and cannot be used for coupled neutron-photon transport. This paper presents improved algorithms for the AIS method, including particle transport, fictitious particle creation and adjustment, fictitious surface geometry, random number allocation, and calculation of the estimated relative error. These improvements allow the AIS method to be applied to complicated deep penetration problems with complex geometry and multiple materials. A completely coupled Neutron-Photon Auto-Importance Sampling (CNP-AIS) method is proposed to solve deep penetration problems of coupled neutron-photon transport using the improved algorithms. The NUREG/CR-6115 PWR benchmark was calculated using CNP-AIS, geometry splitting with Russian roulette, and analog Monte Carlo, respectively. The calculation results of CNP-AIS are in good agreement with those of geometry splitting with Russian roulette and with the benchmark solutions. The computational efficiency of CNP-AIS for both neutrons and photons is much better than that of geometry splitting with Russian roulette in most cases, and is increased by several orders of magnitude compared with that of analog Monte Carlo. Supported by the National Science and Technology Major Project of China (2013ZX06002001-007, 2011ZX06004-007) and the National Natural Science Foundation of China (11275110, 11375103).
Hoti, Fabian J; Sillanpää, Mikko J; Holmström, Lasse
2002-04-01
We provide an overview of the use of kernel smoothing to summarize the quantitative trait locus posterior distribution from a Markov chain Monte Carlo sample. More traditional distributional summary statistics based on the histogram depend both on the bin width and on the sideways shift of the bin grid used. These factors influence both the overall mapping accuracy and the estimated location of the mode of the distribution. Replacing the histogram with kernel smoothing helps to alleviate these problems. Using simulated data, we performed numerical comparisons between the two approaches. The results clearly illustrate the superiority of the kernel method. The kernel approach is particularly efficient when one needs to point out the best putative quantitative trait locus position on the marker map. In such situations, the smoothness of the posterior estimate is especially important because rough posterior estimates easily produce biased mode estimates. Different kernel implementations are available from the Rolf Nevanlinna Institute's web page (http://www.rni.helsinki.fi/~fjh).
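The histogram-versus-kernel contrast can be sketched with a pure-Python Gaussian kernel density estimate. The draws, grid, and bandwidth below are hypothetical stand-ins for a QTL posterior sample on a marker map:

```python
import math

def gaussian_kde(sample, grid, bandwidth):
    """Gaussian kernel density estimate evaluated at each grid point;
    unlike a histogram, the result has no bin edges to shift."""
    norm = 1.0 / (len(sample) * bandwidth * math.sqrt(2.0 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((g - x) / bandwidth) ** 2)
                       for x in sample)
            for g in grid]

def posterior_mode(sample, grid, bandwidth):
    """Best putative position = grid point of maximum smoothed density."""
    density = gaussian_kde(sample, grid, bandwidth)
    return max(zip(grid, density), key=lambda t: t[1])[0]

# Hypothetical MCMC draws of a QTL location (in cM), mostly clustered
# near 37.2 with one stray draw.
draws = [36.8, 37.0, 37.1, 37.2, 37.2, 37.3, 37.5, 45.0]
grid = [30.0 + 0.1 * i for i in range(200)]  # 30-50 cM scan
mode = posterior_mode(draws, grid, bandwidth=0.5)
```

The smoothed mode lands inside the main cluster regardless of any grid offset, which is exactly the robustness over the histogram that the abstract describes.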
Source of statistical noises in the Monte Carlo sampling techniques for coherently scattered photons
Muhammad, Wazir; Lee, Sang Hoon
2013-01-01
Detailed comparisons of the predictions of the Relativistic Form Factors (RFFs) and Modified Form Factors (MFFs) and their advantages and shortcomings in calculating elastic scattering cross sections can be found in the literature. However, the issues related to their implementation in the Monte Carlo (MC) sampling for coherently scattered photons are still under discussion. Secondly, the linear interpolation technique (LIT) is a popular method to draw the integrated values of squared RFFs/MFFs (i.e. A(Z, v(i)²)) over squared momentum transfer (v(i)² = v(1)², …, v(59)²). In the current study, the role/issues of RFFs/MFFs and LIT in the MC sampling for the coherent scattering were analyzed. The results showed that the relative probability density curves sampled on the basis of MFFs are unable to reveal any extra scientific information, as both the RFFs and MFFs produced the same MC sampled curves. Furthermore, no relationship was established between the multiple small peaks and irregular step shapes (i.e. statistical noise) in the PDFs and either RFFs or MFFs. In fact, the noise in the PDFs appeared due to the use of LIT. The density of the noise depends upon the interval length between two consecutive points in the input data table of A(Z, v(i)²) and has no scientific background. The probability density function curves became smoother as the interval lengths were decreased. In conclusion, these statistical noises can be efficiently removed by introducing more data points in the data tables. PMID:22984278
Improved importance sampling technique for efficient simulation of digital communication systems
NASA Technical Reports Server (NTRS)
Lu, Dingqing; Yao, Kung
1988-01-01
A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evolutions of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evolutions are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no signal random memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and IIS over CIS for simulations of digital communications systems.
Accelerated Nonrigid Intensity-Based Image Registration Using Importance Sampling
Bhagalia, Roshni; Fessler, Jeffrey A.; Kim, Boklye
2015-01-01
Nonrigid image registration methods using intensity-based similarity metrics are becoming increasingly common tools to estimate many types of deformations. Nonrigid warps can be very flexible with a large number of parameters and gradient optimization schemes are widely used to estimate them. However for large datasets, the computation of the gradient of the similarity metric with respect to these many parameters becomes very time consuming. Using a small random subset of image voxels to approximate the gradient can reduce computation time. This work focuses on the use of importance sampling to improve accuracy and reduce the variance of this gradient approximation. The proposed importance sampling framework is based on an edge-dependent adaptive sampling distribution designed for use with intensity-based registration algorithms. We compare the performance of registration based on stochastic approximations with and without importance sampling to that using deterministic gradient descent. Empirical results, on simulated MR brain data and real CT inhale-exhale lung data from 8 subjects, show that a combination of stochastic approximation methods and importance sampling improves the rate of convergence of the registration process while preserving accuracy. PMID:19211343
Accelerated nonrigid intensity-based image registration using importance sampling.
Bhagalia, Roshni; Fessler, Jeffrey A; Kim, Boklye
2009-08-01
Nonrigid image registration methods using intensity-based similarity metrics are becoming increasingly common tools to estimate many types of deformations. Nonrigid warps can be very flexible with a large number of parameters and gradient optimization schemes are widely used to estimate them. However, for large datasets, the computation of the gradient of the similarity metric with respect to these many parameters becomes very time consuming. Using a small random subset of image voxels to approximate the gradient can reduce computation time. This work focuses on the use of importance sampling to reduce the variance of this gradient approximation. The proposed importance sampling framework is based on an edge-dependent adaptive sampling distribution designed for use with intensity-based registration algorithms. We compare the performance of registration based on stochastic approximations with and without importance sampling to that using deterministic gradient descent. Empirical results, on simulated magnetic resonance brain data and real computed tomography inhale-exhale lung data from eight subjects, show that a combination of stochastic approximation methods and importance sampling accelerates the registration process while preserving accuracy.
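The voxel-subsampling idea in the two entries above can be sketched as follows: draw indices with probability proportional to a precomputed edge strength and reweight each drawn term by 1/(n·p_i), which keeps the estimate of the full sum unbiased. The scalar per-voxel gradient terms here are hypothetical stand-ins for the similarity metric's true gradient contributions:

```python
import random

def importance_sampled_sum(terms, edge_strength, n_draws, rng):
    """Unbiased estimate of sum(terms) from a small random subset:
    index i is drawn with probability p_i proportional to
    edge_strength[i], and its term is reweighted by 1/(n_draws * p_i)."""
    total = float(sum(edge_strength))
    probs = [e / total for e in edge_strength]
    # cumulative table for inverse-CDF sampling
    cdf, acc = [], 0.0
    for p in probs:
        acc += p
        cdf.append(acc)
    cdf[-1] = 1.0  # guard against floating-point round-off
    est = 0.0
    for _ in range(n_draws):
        u = rng.random()
        i = next(k for k, c in enumerate(cdf) if u <= c)
        est += terms[i] / (n_draws * probs[i])
    return est

rng = random.Random(3)
terms = [0.1, -0.4, 2.0, 1.5, 0.05]  # hypothetical per-voxel contributions
edges = [0.2, 0.5, 3.0, 2.5, 0.1]    # strong edges get sampled more often
estimate = importance_sampled_sum(terms, edges, 5000, rng)
```

When the edge strengths track the magnitude of the true contributions, the reweighted estimator has much lower variance than uniform voxel subsampling at the same budget.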
Baba, Justin S; John, Dwayne O; Koju, Vijay
2015-01-01
The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computation-based modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute-force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10 million) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic scatter but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman et al. to include the computationally intensive tracking of photon trajectory, in addition to polarization state, at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated with the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
Monte Carlo non-local means: random sampling for large-scale image filtering.
Chan, Stanley H; Zickler, Todd; Lu, Yue M
2014-08-01
We propose a randomized version of the nonlocal means (NLM) algorithm for large-scale image filtering. The new algorithm, called Monte Carlo nonlocal means (MCNLM), speeds up the classical NLM by computing only a small subset of image patch distances, which are randomly selected according to a designed sampling pattern. We make two contributions. First, we analyze the performance of the MCNLM algorithm and show that, for large images or large external image databases, the random outcomes of MCNLM are tightly concentrated around the deterministic full NLM result. In particular, our error probability bounds show that, at any given sampling ratio, the probability of MCNLM having a large deviation from the original NLM solution decays exponentially as the size of the image or database grows. Second, we derive explicit formulas for optimal sampling patterns that minimize the error probability bound by exploiting partial knowledge of the pairwise similarity weights. Numerical experiments show that MCNLM is competitive with other state-of-the-art fast NLM algorithms for single-image denoising. When applied to denoising images using an external database containing ten billion patches, MCNLM returns a randomized solution that is within 0.2 dB of the full NLM solution while reducing the runtime by three orders of magnitude.
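The core randomized computation can be sketched in a scalar toy version. The patch database is hypothetical, and the paper's optimal sampling patterns are replaced here by plain uniform sampling of the subset:

```python
import math
import random

def mcnlm_pixel(ref_patch, database, n_samples, h, rng):
    """Monte Carlo NLM estimate of one denoised pixel: instead of
    weighting every database patch, average center pixels over a random
    subset, with Gaussian weights on squared patch distance."""
    subset = rng.sample(range(len(database)), n_samples)
    mid = len(ref_patch) // 2
    num = den = 0.0
    for i in subset:
        patch = database[i]
        d2 = sum((a - b) ** 2 for a, b in zip(ref_patch, patch))
        w = math.exp(-d2 / (h * h))  # similarity weight
        num += w * patch[mid]
        den += w
    return num / den if den > 0.0 else ref_patch[mid]

rng = random.Random(0)
# Degenerate toy database: every patch equals the reference patch,
# so any subset must reproduce the center pixel exactly.
db = [[1.0, 2.0, 3.0, 2.0, 1.0] for _ in range(100)]
out = mcnlm_pixel([1.0, 2.0, 3.0, 2.0, 1.0], db, 10, 1.0, rng)
```

The concentration result in the abstract says that for realistic databases the subset average stays close to the full-database average with high probability, which is what makes the three-orders-of-magnitude speedup usable.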
NASA Astrophysics Data System (ADS)
Baba, J. S.; Koju, V.; John, D.
2015-03-01
The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computation-based modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute-force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10⁷) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic scatter but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman et al. to include the computationally intensive tracking of photon trajectory, in addition to polarization state, at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated with the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
Sample size estimation for pilot animal experiments by using a Markov Chain Monte Carlo approach.
Allgoewer, Andreas; Mayer, Benjamin
2017-05-01
The statistical determination of sample size is mandatory when planning animal experiments, but it is usually difficult to implement appropriately. The main reason is that prior information is hardly ever available, so the assumptions made cannot be verified reliably. This is especially true for pilot experiments. Statistical simulation might help in these situations. We used a Markov Chain Monte Carlo (MCMC) approach to verify the pragmatic assumptions made on different distribution parameters used for power and sample size calculations in animal experiments. Binomial and normal distributions, which are the most frequent distributions in practice, were simulated for categorical and continuous endpoints, respectively. The simulations showed that the common practice of using five or six animals per group for continuous endpoints is reasonable. Even in the case of small effect sizes, the statistical power would be sufficiently large (≥ 80%). For categorical outcomes, group sizes should never be under eight animals, otherwise a sufficient statistical power cannot be guaranteed. This applies even in the case of large effects. The MCMC approach proved to be a useful method for calculating sample size in animal studies that lack prior data. Of course, the simulation results particularly depend on the assumptions made with regard to the distributional properties and effects to be detected, but the same also holds in situations where prior data are available. MCMC is therefore a promising approach toward the more informed planning of pilot research experiments involving the use of animals.
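A simplified stand-in for this kind of simulation-based power check is a plain Monte Carlo power estimate for a categorical endpoint. The two-proportion z-test and the effect sizes below are assumptions chosen for illustration, not the authors' MCMC machinery:

```python
import math
import random

def simulated_power(n_per_group, p_control, p_treated, n_sims, rng):
    """Monte Carlo power estimate for a two-group binary endpoint,
    using a pooled two-proportion z-test at the two-sided 5% level:
    simulate many experiments and count how often the test rejects."""
    z_crit = 1.959963984540054
    rejections = 0
    for _ in range(n_sims):
        x1 = sum(rng.random() < p_control for _ in range(n_per_group))
        x2 = sum(rng.random() < p_treated for _ in range(n_per_group))
        pooled = (x1 + x2) / (2.0 * n_per_group)
        se = math.sqrt(2.0 * pooled * (1.0 - pooled) / n_per_group)
        if se > 0.0 and abs(x1 - x2) / n_per_group / se > z_crit:
            rejections += 1
    return rejections / n_sims

rng = random.Random(11)
# Eight animals per group with a very large effect (10% vs. 90%):
power_large_effect = simulated_power(8, 0.1, 0.9, 2000, rng)
```

Repeating the call with `p_control == p_treated` recovers the test's false-positive rate instead of its power, which is a useful sanity check on the simulation itself.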
Adaptive importance sampling of random walks on continuous state spaces
Baggerly, K.; Cox, D.; Picard, R.
1998-11-01
The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material.
Large Deviations and Importance Sampling for Systems of Slow-Fast Motion
Spiliopoulos, Konstantinos
2013-02-15
In this paper we develop the large deviations principle and a rigorous mathematical framework for asymptotically efficient importance sampling schemes for general, fully dependent systems of stochastic differential equations of slow and fast motion with small noise in the slow component. We assume periodicity with respect to the fast component. Depending on the interaction of the fast scale with the smallness of the noise, we get different behavior. We examine how one range of interaction differs from the other one both for the large deviations and for the importance sampling. We use the large deviations results to identify asymptotically optimal importance sampling schemes in each case. Standard Monte Carlo schemes perform poorly in the small noise limit. In the presence of multiscale aspects one faces additional difficulties and straightforward adaptation of importance sampling schemes for standard small noise diffusions will not produce efficient schemes. It turns out that one has to consider the so called cell problem from the homogenization theory for Hamilton-Jacobi-Bellman equations in order to guarantee asymptotic optimality. We use stochastic control arguments.
Muhammad, Wazir; Lee, Sang Hoon
2013-01-01
Detailed comparisons of the predictions of the Relativistic Form Factors (RFFs) and Modified Form Factors (MFFs) and their advantages and shortcomings in calculating elastic scattering cross sections can be found in the literature. However, the issues related to their implementation in the Monte Carlo (MC) sampling for coherently scattered photons are still under discussion. Secondly, the linear interpolation technique (LIT) is a popular method to draw the integrated values of squared RFFs/MFFs (i.e. A(Z, v(i)²)) over squared momentum transfer (v(i)² = v(1)², …, v(59)²). In the current study, the role/issues of RFFs/MFFs and LIT in the MC sampling for the coherent scattering were analyzed. The results showed that the relative probability density curves sampled on the basis of MFFs are unable to reveal any extra scientific information, as both the RFFs and MFFs produced the same MC sampled curves. Furthermore, no relationship was established between the multiple small peaks and irregular step shapes (i.e. statistical noise) in the PDFs and either RFFs or MFFs. In fact, the noise in the PDFs appeared due to the use of LIT. The density of the noise depends upon the interval length between two consecutive points in the input data table of A(Z, v(i)²) and has no scientific background. The probability density function curves became smoother as the interval lengths were decreased. In conclusion, these statistical noises can be efficiently removed by introducing more data points in the A(Z, v(i)²) data tables.
Sampling Enrichment toward Target Structures Using Hybrid Molecular Dynamics-Monte Carlo Simulations
Yang, Kecheng; Różycki, Bartosz; Cui, Fengchao; Shi, Ce; Chen, Wenduo; Li, Yunqi
2016-01-01
Sampling enrichment toward a target state, an analogue of the improvement of sampling efficiency (SE), is critical both in the refinement of protein structures and in the generation of near-native structure ensembles for the exploration of structure-function relationships. We developed a hybrid molecular dynamics (MD)-Monte Carlo (MC) approach to enrich the sampling toward the target structures. In this approach, higher SE is achieved by perturbing conventional MD simulations with an MC structure-acceptance judgment, which is based on the degree of coincidence of small-angle x-ray scattering (SAXS) intensity profiles between the simulation structures and the target structure. We found that the hybrid simulations could significantly improve SE by making the top-ranked models much closer to the target structures in both secondary and tertiary structure. Specifically, for the 20 mono-residue peptides, when the initial structures had a root-mean-squared deviation (RMSD) from the target structure smaller than 7 Å, the hybrid MD-MC simulations afforded, on average, models 0.83 Å and 1.73 Å closer in RMSD to the target than the parallel MD simulations at 310 K and 370 K, respectively. Meanwhile, the average SE values increased by 13.2% and 15.7%. The enrichment of sampling becomes more significant when the target states gradually become detectable in the MD-MC simulations in comparison with the parallel MD simulations, providing a >200% improvement in SE. We also tested the hybrid MD-MC approach on real protein systems; the results showed that the SE was improved for 3 out of 5 real proteins. Overall, this work presents an efficient way of utilizing solution SAXS to improve protein structure prediction and refinement, as well as the generation of near-native structures for function annotation. PMID:27227775
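The structure-acceptance judgment can be sketched as a Metropolis-style rule on the discrepancy between simulated and target profiles. Here χ² is a generic discrepancy score and β an assumed tuning parameter — a schematic of the idea, not the paper's exact criterion:

```python
import math
import random

def accept_structure(chi2_new, chi2_old, beta, rng):
    """Metropolis-style judgment: always keep a structure whose profile
    matches the target better; keep a worse one only with probability
    exp(-beta * increase), so sampling drifts toward the target state
    without getting stuck at the first local improvement."""
    if chi2_new <= chi2_old:
        return True
    return rng.random() < math.exp(-beta * (chi2_new - chi2_old))

rng = random.Random(5)
improved = accept_structure(1.0, 2.0, beta=4.0, rng=rng)
much_worse = accept_structure(50.0, 1.0, beta=4.0, rng=rng)
```

Allowing occasional worsening moves is what lets the hybrid scheme cross barriers that a pure greedy acceptance rule would never escape.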
Computing ensembles of transitions from stable states: Dynamic importance sampling.
Perilla, Juan R; Beckstein, Oliver; Denning, Elizabeth J; Woolf, Thomas B
2011-01-30
There is an increasing dataset of solved biomolecular structures in more than one conformation and increasing evidence that large-scale conformational change is critical for biomolecular function. In this article, we present our implementation of a dynamic importance sampling (DIMS) algorithm that is directed toward improving our understanding of important intermediate states between experimentally defined starting and ending points. This complements traditional molecular dynamics methods where most of the sampling time is spent in the stable free energy wells defined by these initial and final points. As such, the algorithm creates a candidate set of transitions that provide insights for the much slower and probably most important, functionally relevant degrees of freedom. The method is implemented in the program CHARMM and is tested on six systems of growing size and complexity. These systems, the folding of Protein A and of Protein G, the conformational changes in the calcium sensor S100A6, the glucose-galactose-binding protein, maltodextrin, and lactoferrin, are also compared against other approaches that have been suggested in the literature. The results suggest good sampling on a diverse set of intermediates for all six systems with an ability to control the bias and thus to sample distributions of trajectories for the analysis of intermediate states.
Exact Tests for the Rasch Model via Sequential Importance Sampling
ERIC Educational Resources Information Center
Chen, Yuguo; Small, Dylan
2005-01-01
Rasch proposed an exact conditional inference approach to testing his model but never implemented it because it involves the calculation of a complicated probability. This paper furthers Rasch's approach by (1) providing an efficient Monte Carlo methodology for accurately approximating the required probability and (2) illustrating the usefulness…
Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
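The core proposal move of such differential-evolution samplers can be sketched as follows. The jump factor γ = 2.38/√(2d) is the standard DE-MC choice and the small noise term keeps the chain ergodic; this is a schematic of the move only, not the full DREAM algorithm with its randomized subspaces and outlier handling:

```python
import math
import random

def de_jump(chains, i, rng, eps_scale=1e-4):
    """Propose a move for chain i by adding a scaled difference of two
    other randomly chosen chains plus a tiny Gaussian perturbation.
    The population of chains itself supplies the scale and orientation
    of the proposal, so no hand-tuned covariance is needed."""
    d = len(chains[i])
    gamma = 2.38 / math.sqrt(2.0 * d)  # standard DE-MC jump factor
    a, b = rng.sample([j for j in range(len(chains)) if j != i], 2)
    return [x + gamma * (xa - xb) + rng.gauss(0.0, eps_scale)
            for x, xa, xb in zip(chains[i], chains[a], chains[b])]

rng = random.Random(9)
chains = [[0.0, 0.0], [1.0, 2.0], [3.0, -1.0], [-2.0, 0.5]]
proposal = de_jump(chains, 0, rng)
```

The proposal would then be accepted or rejected with the usual Metropolis ratio, so the stationary distribution of each chain remains the target posterior.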
NASA Astrophysics Data System (ADS)
Bote, D.; Llovet, X.; Salvat, F.
2008-05-01
We describe systematic Monte Carlo (MC) calculations of characteristic K and L x-ray emission from thick samples bombarded by kiloelectronvolt electrons. The simulations were performed using the general-purpose MC code PENELOPE, which was modified by introducing a new database of electron-impact ionization cross sections calculated from the distorted-wave Born approximation (DWBA). The calculated yields, defined as the number of photons emerging from the target per unit solid angle and per incident electron, are compared with experimental measurements available from the literature, which pertain to single-element materials with atomic numbers ranging from Z = 6 up to Z = 82 and electron beam energies from a few kiloelectronvolts up to 40 keV. To reveal the dependence of the characteristic x-ray yields on the adopted ionization cross sections, simulations were also performed using cross sections based on the plane-wave Born approximation (PWBA). Our calculations confirm that, in the considered energy range, the DWBA is considerably more accurate than the PWBA.
Petaccia, M; Segui, S; Castellano, G
2016-11-01
Fluorescence enhancement in samples irradiated in a scanning electron microscope or an electron microprobe should be appropriately assessed in order not to distort quantitative analyses. Several models have been proposed to take this effect into account, and current quantification routines are based on them; many of these models were developed under the assumption that the bremsstrahlung fluorescence correction is negligible compared with the characteristic enhancement, yet no conclusive arguments have been provided to support this assumption. As detectors are unable to discriminate primary from secondary characteristic X-rays, Monte Carlo simulation of radiation transport becomes a decisive tool in the study of this fluorescence enhancement. In this work, bremsstrahlung fluorescence enhancement in electron probe microanalysis has been studied by using the interaction forcing routine offered by PENELOPE 2008 as a variance reduction alternative. The developed software allowed us to show that bremsstrahlung and characteristic fluorescence corrections are in fact comparable in the studied cases. As an extra result, the interaction forcing approach proves to be a highly efficient method, not only for computing the continuum enhancement but also for assessing the characteristic fluorescence correction.
Performance evaluation of an importance sampling technique in a Jackson network
NASA Astrophysics Data System (ADS)
Mahdipour, Ebrahim; Masoud Rahmani, Amir; Setayeshi, Saeed
2014-03-01
Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network settings. Estimating probabilities associated with rare events has been a topic of great importance in queueing theory, and in applied probability at large. In this article, we analyse the performance of an importance sampling estimator for a rare event probability in a Jackson network. This article applies strict deadlines to a two-node Jackson network with feedback whose arrival and service rates are modulated by an exogenous finite state Markov process. We have estimated the probability of network blocking for various sets of parameters, and also the probability of missing the deadline of customers for different loads and deadlines. We have finally shown that the probability of total population overflow may be affected by various deadline values, service rates and arrival rates.
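The basic change-of-measure idea behind such estimators can be sketched in one dimension (a toy stand-in, not the Jackson-network estimator itself: exponential tilting for the tail of a single Exp(1) sojourn time, with the threshold chosen by us):

```python
import numpy as np

rng = np.random.default_rng(1)

lam, threshold = 1.0, 25.0        # P(X > 25) for X ~ Exp(1): about 1.4e-11
n = 100_000

# Direct Monte Carlo essentially never observes the event.
direct = (rng.exponential(1.0 / lam, n) > threshold).mean()

# Importance sampling: exponentially tilt the density so the event is common.
lam_is = 1.0 / threshold          # tilted rate puts the proposal mean at the threshold
x = rng.exponential(1.0 / lam_is, n)
weights = (lam / lam_is) * np.exp(-(lam - lam_is) * x)   # likelihood ratio f/g
p_is = np.mean((x > threshold) * weights)

exact = np.exp(-lam * threshold)  # closed form, for checking
rel_err = abs(p_is - exact) / exact
```

Under the tilted measure roughly a third of the samples exceed the threshold, and the likelihood-ratio weights restore unbiasedness; the failure mode discussed in the abstract arises when a single fixed tilt like this is applied to a whole network.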
Ganzenmüller, Georg; Pawłowski, Grzegorz
2008-09-01
We present a general and unifying framework for deriving Monte Carlo acceptance rules which facilitate flat histogram sampling. The framework yields uniform sampling rules for thermodynamic states given by the mechanically extensive variables appearing in the Hamiltonian. Likewise, Monte Carlo schemes which uniformly sample the thermodynamic fields that are conjugate to the mechanical variables can be derived within this framework. We apply these different, yet equivalent sampling schemes to the extended Hubbard model in the atomic limit with explicit electron spin. Results for the full density-of-states, the charge-order parameter distribution, and phase diagrams for different ratios of the on-site Hubbard repulsion and the intersite interaction are presented. A tricritical point at half-filling of the lattice is located using finite-size scaling techniques.
Kanick, S C; Robinson, D J; Sterenborg, H J C M; Amelink, A
2009-11-21
Single fiber reflectance spectroscopy is a method to noninvasively quantitate tissue absorption and scattering properties. This study utilizes a Monte Carlo (MC) model to investigate the effect that optical properties have on the propagation of photons that are collected during the single fiber reflectance measurement. MC model estimates of the single fiber photon path length (L(SF)) show excellent agreement with experimental measurements and predictions of a mathematical model over a wide range of optical properties and fiber diameters. Simulation results show that L(SF) is unaffected by changes in anisotropy (g ∈ {0.8, 0.9, 0.95}), but is sensitive to changes in phase function (Henyey-Greenstein versus modified Henyey-Greenstein). A 20% decrease in L(SF) was observed for the modified Henyey-Greenstein compared with the Henyey-Greenstein phase function; an effect that is independent of optical properties and fiber diameter and is approximated with a simple linear offset. The MC model also returns depth-resolved absorption profiles that are used to estimate the mean sampling depth (Z(SF)) of the single fiber reflectance measurement. Simulated data are used to define a novel mathematical expression for Z(SF) that is expressed in terms of optical properties, fiber diameter and L(SF). The model of sampling depth indicates that the single fiber reflectance measurement is dominated by shallow scattering events, even for large fibers; a result that suggests that the utility of single fiber reflectance measurements of tissue in vivo will be in the quantification of the optical properties of superficial tissues.
Estimation variance bounds of importance sampling simulations in digital communication systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping; Binder, Kurt
2012-01-01
Semiflexible macromolecules in dilute solution under very good solvent conditions are modeled by self-avoiding walks on the simple cubic lattice (d = 3 dimensions) and square lattice (d = 2 dimensions), varying chain stiffness by an energy penalty ε_b for chain bending. In the absence of excluded volume interactions, the persistence length ℓ_p of the polymers would then simply be ℓ_p = ℓ_b (2d - 2)^{-1} q_b^{-1} with q_b = exp(-ε_b/k_BT), the bond length ℓ_b being the lattice spacing, and k_BT the thermal energy. Using Monte Carlo simulations applying the pruned-enriched Rosenbluth method (PERM), both q_b and the chain length N are varied over a wide range (0.005 ⩽ q_b ⩽ 1, N ⩽ 50 000), and also a stretching force f is applied to one chain end (fixing the other end at the origin). In the absence of this force, in d = 2 a single crossover from rod-like behavior (for contour lengths less than ℓ_p) to swollen coils occurs, invalidating the Kratky-Porod model, while in d = 3 a double crossover occurs, from rods to Gaussian coils (as implied by the Kratky-Porod model) and then to coils that are swollen due to the excluded volume interaction. If the stretching force is applied, excluded volume interactions matter for the force versus extension relation irrespective of chain stiffness in d = 2, while theories based on the Kratky-Porod model are found to work in d = 3 for stiff chains in an intermediate regime of chain extensions. While for q_b ≪ 1 in this model a persistence length can be estimated from the initial decay of bond-orientational correlations, it is argued that this is not possible for more complex wormlike chains (e.g., bottle-brush polymers). Consequences for the proper interpretation of experiments are briefly discussed.
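PERM builds on plain Rosenbluth sampling of self-avoiding walks; a minimal non-enriched Rosenbluth sketch on the square lattice (fully flexible chains, short length, all parameters chosen by us) shows the weighting idea:

```python
import numpy as np

rng = np.random.default_rng(3)
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def rosenbluth_walk(n_steps):
    """Grow one self-avoiding walk; return (Rosenbluth weight, end-to-end R^2)."""
    pos, visited, weight = (0, 0), {(0, 0)}, 1.0
    for _ in range(n_steps):
        free = [(pos[0] + dx, pos[1] + dy) for dx, dy in STEPS
                if (pos[0] + dx, pos[1] + dy) not in visited]
        if not free:                     # walk is trapped: it carries zero weight
            return 0.0, 0.0
        weight *= len(free)              # weight accumulates the local "atmosphere"
        pos = free[int(rng.integers(len(free)))]
        visited.add(pos)
    return weight, pos[0] ** 2 + pos[1] ** 2

n_walks, n_steps = 5000, 20
data = [rosenbluth_walk(n_steps) for _ in range(n_walks)]
weights = np.array([w for w, _ in data])
r2 = np.array([r for _, r in data])
r2_mean = np.sum(weights * r2) / np.sum(weights)   # weighted <R^2>_N
```

The weighted average corrects the bias of growing walks step by step; PERM adds pruning of low-weight and enrichment of high-weight configurations so that chains of N up to tens of thousands (as in the abstract) remain tractable.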
Mittal, Anuradha; Lyle, Nicholas; Harmon, Tyler S; Pappu, Rohit V
2014-08-12
There is growing interest in the topic of intrinsically disordered proteins (IDPs). Atomistic Metropolis Monte Carlo (MMC) simulations based on novel implicit solvation models have yielded useful insights regarding sequence-ensemble relationships for IDPs modeled as autonomous units. However, a majority of naturally occurring IDPs are tethered to ordered domains. Tethering introduces additional energy scales and this creates the challenge of broken ergodicity for standard MMC sampling or molecular dynamics that cannot be readily alleviated by using generalized tempering methods. We have designed, deployed, and tested our adaptation of the Nested Markov Chain Monte Carlo sampling algorithm. We refer to our adaptation as Hamiltonian Switch Metropolis Monte Carlo (HS-MMC) sampling. In this method, transitions out of energetic traps are enabled by the introduction of an auxiliary Markov chain that draws conformations for the disordered region from a Boltzmann distribution that is governed by an alternative potential function that only includes short-range steric repulsions and conformational restraints on the ordered domain. We show using multiple, independent runs that the HS-MMC method yields conformational distributions that have similar and reproducible statistical properties, which is in direct contrast to standard MMC for equivalent amounts of sampling. The method is efficient and can be deployed for simulations of a range of biologically relevant disordered regions that are tethered to ordered domains.
Monte Carlo entropic sampling applied to Ising-like model for 2D and 3D systems
NASA Astrophysics Data System (ADS)
Jureschi, C. M.; Linares, J.; Dahoo, P. R.; Alayli, Y.
2016-08-01
In this paper we present Monte Carlo entropic sampling (MCES) applied to an Ising-like model for 2D and 3D systems in order to show the influence of the interaction of the edge molecules of the system with their local environment. We show that, as for the 1D and 2D spin crossover (SCO) systems, the origin of multi-step transitions in 3D SCO is the interaction of the edge molecules with their local environment together with short- and long-range interactions. Another important result worth noting is the coexistence of step transitions with and without hysteresis. By increasing the value of the edge interaction, L, the transition is shifted to lower temperatures: this means that the role of the edge interaction is equivalent to an applied negative pressure, because the edge interaction favours the HS state while the applied pressure favours the LS state. We also analyse, in this contribution, the role of the short- and long-range interactions, J and G respectively, with respect to the environment interaction, L.
An importance sampling algorithm for estimating extremes of perpetuity sequences
NASA Astrophysics Data System (ADS)
Collamore, Jeffrey F.
2012-09-01
In a wide class of problems in insurance and financial mathematics, it is of interest to study the extremal events of a perpetuity sequence. This paper addresses the problem of numerically evaluating these rare-event probabilities. Specifically, an importance sampling algorithm is described which is efficient in the sense that it exhibits bounded relative error, and which is optimal in an appropriate asymptotic sense. The main idea of the algorithm is to use a "dual" change of measure, which is applied to an associated Markov chain over a randomly stopped time interval. The algorithm also makes use of the so-called forward sequences generated by the given stochastic recursion, together with elements of Markov chain theory.
Shaw, Milton Sam; Coe, Joshua D; Sewell, Thomas D
2009-01-01
An optimized version of the Nested Markov Chain Monte Carlo sampling method is applied to the calculation of the Hugoniot for liquid nitrogen. The 'full' system of interest is calculated using density functional theory (DFT) with a 6-31 G* basis set for the configurational energies. The 'reference' system is given by a model potential fit to the anisotropic pair interaction of two nitrogen molecules from DFT calculations. The EOS is sampled in the isobaric-isothermal (NPT) ensemble with a trial move constructed from many Monte Carlo steps in the reference system. The trial move is then accepted with a probability chosen to give the full system distribution. The P's and T's of the reference and full systems are chosen separately to optimize the computational time required to produce the full system EOS. The method is numerically very efficient and predicts a Hugoniot in excellent agreement with experimental data.
Li, Longhai; Feng, Cindy X; Qiu, Shi
2017-06-30
An important statistical task in disease mapping problems is to identify divergent regions with unusually high or low risk of disease. Leave-one-out cross-validatory (LOOCV) model assessment is the gold standard for estimating predictive p-values that can flag such divergent regions. However, actual LOOCV is time-consuming because one needs to rerun a Markov chain Monte Carlo analysis for each posterior distribution in which an observation is held out as a test case. This paper introduces a new method, called integrated importance sampling (iIS), for estimating LOOCV predictive p-values with only Markov chain samples drawn from the posterior based on the full data set. The key step in iIS is that we integrate away the latent variables associated with the test observation with respect to their conditional distribution without reference to the actual observation. By following the general theory for importance sampling, the formula used by iIS can be proved to be equivalent to the LOOCV predictive p-value. We compare iIS with three other existing methods in the literature on two disease mapping datasets. Our empirical results show that the predictive p-values estimated with iIS are almost identical to those estimated with actual LOOCV and outperform those given by the three existing methods, namely, posterior predictive checking, ordinary importance sampling, and the ghosting method by Marshall and Spiegelhalter (2003). Copyright © 2017 John Wiley & Sons, Ltd.
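The ordinary importance sampling baseline that iIS improves on can be sketched in a simple conjugate setting (all modeling choices here are ours: normal data with known sd = 1 and a flat prior on the mean, so the answer is available in closed form for checking):

```python
import math
import numpy as np

rng = np.random.default_rng(4)

# Toy data and the full-data posterior for the mean, which is N(ybar, 1/n).
y = rng.normal(loc=2.0, scale=1.0, size=30)
n = len(y)
S = 20_000
theta = rng.normal(loc=y.mean(), scale=1.0 / math.sqrt(n), size=S)

def norm_pdf(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def loo_pvalue_is(i):
    """Weights 1/p(y_i|theta) retarget full-data posterior draws at the LOO posterior."""
    w = 1.0 / norm_pdf(y[i], theta)
    # P(y_rep <= y_i | theta) per draw, then a self-normalized IS average.
    pvals = np.array([0.5 * (1.0 + math.erf((y[i] - t) / math.sqrt(2.0))) for t in theta])
    return float(np.sum(w * pvals) / np.sum(w))

p0 = loo_pvalue_is(0)   # LOOCV predictive p-value for the first observation
```

In hierarchical disease-mapping models the weights 1/p(y_i|θ) become heavy-tailed, which is exactly the instability that integrating away the test observation's latent variables (the iIS step) is designed to remove.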
NASA Astrophysics Data System (ADS)
Vogel, Thomas; Perez, Danny; Junghans, Christoph
2014-03-01
We show direct formal relationships between the Wang-Landau iteration [PRL 86, 2050 (2001)], metadynamics [PNAS 99, 12562 (2002)], and statistical temperature molecular dynamics [PRL 97, 050601 (2006)], the major Monte Carlo and molecular dynamics workhorses for sampling from a generalized, multicanonical ensemble. We aim to help consolidate the developments in the different areas by indicating how methodological advancements can be transferred in a straightforward way, avoiding the parallel, largely independent development tracks observed in the past.
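The Wang-Landau iteration referenced here can be shown on a deliberately tiny toy (our own choice: the density of states of the sum of two dice, where g(E) is known exactly):

```python
import numpy as np

rng = np.random.default_rng(5)

# Wang-Landau flat-histogram sampling for E = die1 + die2, E in {2,...,12}.
log_g = np.zeros(11)                 # running estimate of ln g(E), index = E - 2
hist = np.zeros(11)
state = [1, 1]
lnf = 1.0                            # modification factor, halved at each flat stage

while lnf > 1e-4:
    for _ in range(10_000):
        new = list(state)
        new[int(rng.integers(2))] = int(rng.integers(1, 7))   # re-roll one die
        e_old, e_new = state[0] + state[1] - 2, new[0] + new[1] - 2
        # Accept with min(1, g(E_old)/g(E_new)): drives the walk toward rare E.
        if np.log(rng.random()) < log_g[e_old] - log_g[e_new]:
            state = new
        e = state[0] + state[1] - 2
        log_g[e] += lnf              # the Wang-Landau update of ln g
        hist[e] += 1
    if hist.min() > 0.8 * hist.mean():   # histogram flat enough: refine lnf
        lnf /= 2
        hist[:] = 0

g_est = np.exp(log_g - log_g[0])         # normalize so g(E=2) = 1
g_exact = np.array([1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1], float)
```

The formal mapping the abstract establishes identifies this histogram-driven update of ln g with the bias-potential deposition of metadynamics, so refinements developed for one (e.g., lnf schedules) transfer to the other.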
The Importance of Introductory Statistics Students Understanding Appropriate Sampling Techniques
ERIC Educational Resources Information Center
Menil, Violeta C.
2005-01-01
In this paper the author discusses the meaning of sampling, the reasons for sampling, the Central Limit Theorem, and the different techniques of sampling. Practical and relevant examples are given to make the appropriate sampling techniques understandable to students of Introductory Statistics courses. With a thorough knowledge of sampling…
19 CFR 151.67 - Sampling by importer.
Code of Federal Regulations, 2011 CFR
2011-04-01
... TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.67 Sampling by... quantities from the packages of wool or hair designated for examination, provided the bales or bags are...
19 CFR 151.67 - Sampling by importer.
Code of Federal Regulations, 2010 CFR
2010-04-01
... TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.67 Sampling by... quantities from the packages of wool or hair designated for examination, provided the bales or bags are...
19 CFR 151.67 - Sampling by importer.
Code of Federal Regulations, 2014 CFR
2014-04-01
... TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.67 Sampling by... quantities from the packages of wool or hair designated for examination, provided the bales or bags...
19 CFR 151.67 - Sampling by importer.
Code of Federal Regulations, 2012 CFR
2012-04-01
... TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.67 Sampling by... quantities from the packages of wool or hair designated for examination, provided the bales or bags...
19 CFR 151.67 - Sampling by importer.
Code of Federal Regulations, 2013 CFR
2013-04-01
... TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.67 Sampling by... quantities from the packages of wool or hair designated for examination, provided the bales or bags...
Sampson, Andrew; Le, Yi; Williamson, Jeffrey F.
2012-01-01
Purpose: To demonstrate potential of correlated sampling Monte Carlo (CMC) simulation to improve the calculation efficiency for permanent seed brachytherapy (PSB) implants without loss of accuracy. Methods: CMC was implemented within an in-house MC code family (PTRAN) and used to compute 3D dose distributions for two patient cases: a clinical PSB postimplant prostate CT imaging study and a simulated post lumpectomy breast PSB implant planned on a screening dedicated breast cone-beam CT patient exam. CMC tallies the dose difference, ΔD, between highly correlated histories in homogeneous and heterogeneous geometries. The heterogeneous geometry histories were derived from photon collisions sampled in a geometrically identical but purely homogeneous medium geometry, by altering their particle weights to correct for bias. The prostate case consisted of 78 Model-6711 125I seeds. The breast case consisted of 87 Model-200 103Pd seeds embedded around a simulated lumpectomy cavity. Systematic and random errors in CMC were unfolded using low-uncertainty uncorrelated MC (UMC) as the benchmark. CMC efficiency gains, relative to UMC, were computed for all voxels, and the mean was classified in regions that received minimum doses greater than 20%, 50%, and 90% of D90, as well as for various anatomical regions. Results: Systematic errors in CMC relative to UMC were less than 0.6% for 99% of the voxels and 0.04% for 100% of the voxels for the prostate and breast cases, respectively. For a 1 × 1 × 1 mm3 dose grid, efficiency gains were realized in all structures with 38.1- and 59.8-fold average gains within the prostate and breast clinical target volumes (CTVs), respectively. Greater than 99% of the voxels within the prostate and breast CTVs experienced an efficiency gain. Additionally, it was shown that efficiency losses were confined to low dose regions while the largest gains were located where little difference exists between the homogeneous and heterogeneous doses
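The variance-reduction mechanism behind CMC can be isolated in a one-line toy (our own sketch: two slightly different responses standing in for heterogeneous vs homogeneous dose, scored with shared versus independent random histories):

```python
import numpy as np

rng = np.random.default_rng(7)

def f_hom(x):
    return np.exp(-x)               # "homogeneous" response along a history

def f_het(x):
    return np.exp(-1.05 * x)        # slightly perturbed "heterogeneous" response

n = 100_000
x = rng.random(n)
delta_corr = f_het(x) - f_hom(x)                 # same histories: fluctuations cancel

x1, x2 = rng.random(n), rng.random(n)
delta_uncorr = f_het(x1) - f_hom(x2)             # two independent runs

gain = delta_uncorr.var() / delta_corr.var()     # efficiency gain from correlation
```

Because the two responses are nearly identical, their pointwise difference has far less variance than the difference of two independent estimates; CMC applies the same principle to photon histories, with weight corrections handling the geometry difference.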
Williams, Michael S; Ebel, Eric D
2014-11-18
The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the
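A weighted bootstrap for unequal selection probabilities can be sketched as follows (a toy of our own design: lognormal "contamination" values with risk-based selection proportional to the value, resampled with inverse-probability weights):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy population and a risk-based design that over-samples high values.
population = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
true_mean = population.mean()

pi = population / population.sum()        # selection probability proportional to value
idx = rng.choice(len(population), size=500, replace=False, p=pi)
sample, p_sel = population[idx], pi[idx]

naive_mean = sample.mean()                # biased upward by the sampling design

# Weighted bootstrap: resample the sample with probabilities inversely
# proportional to selection probability, then recompute the statistic.
w = 1.0 / p_sel
w /= w.sum()
boot_means = np.array([
    rng.choice(sample, size=len(sample), replace=True, p=w).mean()
    for _ in range(1000)
])
wb_mean = boot_means.mean()
ci = np.percentile(boot_means, [2.5, 97.5])
```

The inverse-probability resampling undoes the preferential selection, so the bootstrap distribution centers near the population mean while the naive sample mean stays badly biased; the bootstrap replicates also supply an interval.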
Monte Carlo simulation of a beta particle detector for food samples.
Sato, Y; Takahashi, H; Yamada, T; Unno, Y; Yunoki, A
2013-11-01
The accident at the Fukushima Daiichi Nuclear Power Plant in March 2011 released radionuclides into the environment. There is concern that (90)Sr will be concentrated in seafood. To measure the activities of (90)Sr in a short time without chemical processes, we have designed a new detector for measuring activity that obtains count rates using 10 layers of proportional counters that are separated by walls that absorb beta particles. Monte Carlo simulations were performed to confirm that its design is appropriate.
Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1998-01-01
Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time to failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support the decreasing Weibull failure distribution and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system application. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.
Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin
2009-02-09
Calculation of the exact prediction error variance-covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values, and the control of variance of response to selection. Alternatively, Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples that is computationally feasible is limited. The objective of this study was to compare the convergence rates of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates, and these were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive, and these made use of information on the variance of the estimated breeding value, on the variance of the true breeding value minus the estimated breeding value, or on the covariance between the true and estimated breeding values.
Sperandio, Olivier; Souaille, Marc; Delfaud, François; Miteva, Maria A; Villoutreix, Bruno O
2009-04-01
Obtaining an efficient sampling of the low- to medium-energy regions of a ligand's conformational space is of primary importance for gaining insight into relevant binding modes of drug candidates, for screening rigid molecular entities on the basis of a predefined pharmacophore, or for rigid-body docking. Here, we report the development of a new computer tool that samples the conformational space by using the Metropolis Monte Carlo algorithm combined with the MMFF94 van der Waals energy term. The performance of the program was assessed on 86 drug-like molecules that resulted from an ADME/tox profiling applied to cocrystallized small molecules, and was compared with the program Omega on the same dataset. Our program has also been assessed on the 85 molecules of the Astex diverse set. Both test sets show convincing performance of our program at sampling the conformational space.
Kurtz, R.J.; Heasler, P.G.; Baird, D.B.
1994-02-01
This report summarizes the results of three previous studies to evaluate and compare the effectiveness of sampling plans for steam generator tube inspections. An analytical evaluation and Monte Carlo simulation techniques were the methods used to evaluate sampling plan performance. To test the performance of candidate sampling plans under a variety of conditions, ranges of inspection system reliability were considered along with different distributions of tube degradation. Results from the eddy current reliability studies performed with the retired-from-service Surry 2A steam generator were utilized to guide the selection of appropriate probability of detection and flaw sizing models for use in the analysis. Different distributions of tube degradation were selected to span the range of conditions that might exist in operating steam generators. The principal means of evaluating sampling performance was to determine the effectiveness of the sampling plan for detecting and plugging defective tubes. A summary of key results from the eddy current reliability studies is presented. The analytical and Monte Carlo simulation analyses are discussed along with a synopsis of key results and conclusions.
Coe, Joshua D; Sewell, Thomas D; Shaw, M Sam
2009-08-21
An optimized variant of the nested Markov chain Monte Carlo [n(MC)(2)] method [J. Chem. Phys. 130, 164104 (2009)] is applied to fluid N(2). In this implementation of n(MC)(2), isothermal-isobaric (NPT) ensemble sampling on the basis of a pair potential (the "reference" system) is used to enhance the efficiency of sampling based on Perdew-Burke-Ernzerhof density functional theory with a 6-31G(*) basis set (PBE6-31G(*), the "full" system). A long sequence of Monte Carlo steps taken in the reference system is converted into a trial step taken in the full system; for a good choice of reference potential, these trial steps have a high probability of acceptance. Using decorrelated samples drawn from the reference distribution, the pressure and temperature of the full system are varied such that its distribution overlaps maximally with that of the reference system. Optimized pressures and temperatures then serve as input parameters for n(MC)(2) sampling of dense fluid N(2) over a wide range of thermodynamic conditions. The simulation results are combined to construct the Hugoniot of nitrogen fluid, yielding predictions in excellent agreement with experiment.
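The nested acceptance rule used in n(MC)(2) can be reduced to a 1D toy (our own sketch: a cheap "reference" density drives long sub-chains, and the endpoint is corrected to the "full" density; simple Gaussians stand in for the pair potential and DFT):

```python
import numpy as np

rng = np.random.default_rng(8)

def log_full(x):                  # "full" (expensive) model: standard normal
    return -0.5 * x * x

def log_ref(x):                   # "reference" (cheap) model: wider normal
    return -0.5 * x * x / 1.5

x, samples = 0.0, []
for _ in range(20_000):
    # Sub-chain: many Metropolis steps under the reference potential only.
    y = x
    for _ in range(10):
        prop = y + rng.normal(scale=0.8)
        if np.log(rng.random()) < log_ref(prop) - log_ref(y):
            y = prop
    # Nested acceptance: correct the reference-sampled endpoint to the full model.
    log_ratio = (log_full(y) - log_full(x)) - (log_ref(y) - log_ref(x))
    if np.log(rng.random()) < log_ratio:
        x = y
    samples.append(x)

samples = np.array(samples[2000:])
```

Only the outer accept/reject evaluates the full model, so a good reference potential converts many cheap steps into one large, high-acceptance trial move, which is exactly the economy exploited for DFT-based sampling in the abstract.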
Monte Carlo calculations of the energy deposited in biological samples and shielding materials
NASA Astrophysics Data System (ADS)
Akar Tarim, U.; Gurler, O.; Ozmutlu, E. N.; Yalcin, S.
2014-03-01
The energy deposited by gamma radiation from the Cs-137 isotope into body tissues (bone and muscle), a tissue-like medium (water), and radiation shielding materials (concrete, lead, and water), which is of interest for radiation dosimetry, was obtained using a simple Monte Carlo algorithm. The algorithm also provides a realistic picture of the distribution of backscattered photons from the target and the distribution of photons scattered forward after several scatterings in the scatterer, which is useful in studying radiation shielding. The method presented in this work constitutes an attempt to evaluate the amount of energy absorbed by body tissues and shielding materials.
Monte Carlo simulations of channeling spectra recorded for samples containing complex defects
Jagielski, Jacek; Turos, Prof. Andrzej; Nowicki, Lech; Jozwik, P.; Shutthanandan, Vaithiyalingam; Zhang, Yanwen; Sathish, N.; Thome, Lionel; Stonert, A.; Jozwik-Biala, Iwona
2012-01-01
The aim of the present paper is to describe the current status of the development of McChasy, a Monte Carlo simulation code, to make it suitable for the analysis of dislocations and dislocation loops in crystals. Factors such as the shape of the bent channel and geometrical distortions of the crystalline structure in the vicinity of a dislocation are discussed. The results obtained demonstrate that the new procedure applied to spectra recorded on crystals containing dislocations yields damage profiles which are independent of the energy of the analyzing beam.
Monte Carlo simulations of channeling spectra recorded for samples containing complex defects
Jagielski, Jacek K.; Turos, Andrzej W.; Nowicki, L.; Jozwik, Przemyslaw A.; Shutthanandan, V.; Zhang, Yanwen; Sathish, N.; Thome, Lionel; Stonert, A.; Jozwik Biala, Iwona
2012-02-15
The main aim of the present paper is to describe the current status of the development of McChasy, a Monte Carlo simulation code, to make it suitable for the analysis of dislocations and dislocation loops in crystals. Factors such as the shape of the bent channel and geometrical distortions of the crystalline structure in the vicinity of a dislocation are discussed. Several examples of the analysis performed at different energies of analyzing ions are presented. The results obtained demonstrate that the new procedure applied to spectra recorded on crystals containing dislocations yields damage profiles which are independent of the energy of the analyzing beam.
Mielke, Steven L; Truhlar, Donald G
2016-01-21
Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function.
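The rejection step at the heart of WPIS can be mimicked in a toy path setting (our own construction, not the authors' normal-mode scheme: free-particle paths modeled as discrete Brownian bridges, filtered by a simple quadratic weight standing in for the harmonic guidance):

```python
import numpy as np

rng = np.random.default_rng(9)

# Draw free-particle paths and reject so the survivors follow w = exp(-c*sum x^2).
P, c, n_try = 16, 0.05, 20_000

def bridge():
    """Discrete Brownian bridge with P beads, pinned at zero at both ends."""
    walk = np.cumsum(rng.normal(size=P))
    t = np.arange(1, P + 1) / P
    return walk - t * walk[-1]

accepted, msd_free = [], []
for _ in range(n_try):
    path = bridge()
    msd_free.append(np.mean(path**2))
    w = np.exp(-c * np.sum(path**2))     # importance weight; w <= 1, so no envelope needed
    if rng.random() < w:                  # rejection sampling step
        accepted.append(path)

acc_rate = len(accepted) / n_try
msd_acc = np.mean([np.mean(p**2) for p in accepted])
msd_all = np.mean(msd_free)
```

Paths that wander far from the centroid carry tiny weights and are discarded before any expensive integration, which mirrors the 98-99.9% rejection-without-integration figures quoted in the abstract.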
Radiative transfer and spectroscopic databases: A line-sampling Monte Carlo approach
NASA Astrophysics Data System (ADS)
Galtier, Mathieu; Blanco, Stéphane; Dauchet, Jérémi; El Hafi, Mouna; Eymet, Vincent; Fournier, Richard; Roger, Maxime; Spiesser, Christophe; Terrée, Guillaume
2016-03-01
Dealing with molecular-state transitions for radiative transfer purposes involves two successive steps that both reach the complexity level at which physicists start thinking about statistical approaches: (1) constructing line-shaped absorption spectra as the result of very numerous state transitions, and (2) integrating over optical-path domains. For the first time, we show here how these steps can be addressed simultaneously using the null-collision concept. This opens the door to the design of Monte Carlo codes that directly estimate radiative transfer observables from spectroscopic databases; the intermediate step of producing accurate high-resolution absorption spectra is no longer required. A Monte Carlo algorithm is proposed and applied to six one-dimensional test cases. It allows the computation of spectrally integrated intensities (over 25 cm-1 bands or the full IR range) in a few seconds, regardless of the retained database and line model. However, free parameters must be selected, and they affect convergence. A first possible selection is provided in full detail. We observe that this selection is highly satisfactory for quite distinct atmospheric and combustion configurations, but a more systematic exploration is still in progress.
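The null-collision idea the authors build on can be illustrated in its simplest transport form, Woodcock tracking: sample tentative collisions at a majorant rate k_max, then classify each as real or null with probability k(x)/k_max. A toy one-dimensional transmission estimate (the heterogeneous coefficient profile is a hypothetical example, not taken from the paper):

```python
import math
import random

def woodcock_transmission(k, k_max, length, n, rng):
    """Estimate exp(-integral of k over [0, length]) by null-collision
    (Woodcock) tracking: take exponential steps at the majorant rate k_max,
    then classify each tentative collision as real (absorb) or null (keep
    going) with probability k(x)/k_max."""
    transmitted = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += -math.log(1.0 - rng.random()) / k_max   # exponential step
            if x >= length:
                transmitted += 1
                break
            if rng.random() < k(x) / k_max:              # real collision
                break
    return transmitted / n

k = lambda x: 0.5 + 0.4 * math.sin(x)     # hypothetical heterogeneous coefficient
rng = random.Random(1)
est = woodcock_transmission(k, 0.9, 2.0, 20000, rng)
exact = math.exp(-(0.5 * 2.0 + 0.4 * (1.0 - math.cos(2.0))))
```

The key property, exploited by the paper at the spectroscopic level, is that the algorithm never needs the integral of k along the path — only pointwise evaluations under a known majorant.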
Code of Federal Regulations, 2014 CFR
2014-07-01
... requirements for importers who import gasoline into the United States by truck. 80.1349 Section 80.1349... FUELS AND FUEL ADDITIVES Gasoline Benzene Sampling, Testing and Retention Requirements § 80.1349 Alternative sampling and testing requirements for importers who import gasoline into the United States by...
Monte Carlo approaches to sampling forested tracts with lines or points
Harry T. Valentine; Jeffrey H. Gove; Timothy G. Gregoire
2001-01-01
Several line- and point-based sampling methods can be employed to estimate the aggregate dimensions of trees standing on a forested tract or pieces of coarse woody debris lying on the forest floor. Line methods include line intersect sampling, horizontal line sampling, and transect relascope sampling; point methods include variable- and fixed-radius plot sampling, and...
Morera-Gómez, Yasser; Cartas-Aguila, Héctor A; Alonso-Hernández, Carlos M; Bernal-Castillo, Jose L; Guillén-Arruebarrena, Aniel
2015-03-01
The Monte Carlo efficiency transfer method was used to determine the full-energy peak efficiency of a coaxial n-type HPGe detector. The efficiency calibration curves for three Certified Reference Materials were determined by efficiency transfer using a (152)Eu reference source. The efficiency values obtained after efficiency transfer were used to calculate the activity concentrations of the radionuclides detected in the three materials, which were measured in a low-background gamma spectrometry system. Reported and calculated activity concentrations show good agreement, with mean deviations of 5%, which is satisfactory for environmental sample measurements.
Tang, Ke; Zhang, Jinfeng; Liang, Jie
2014-01-01
Loops in proteins are flexible regions connecting regular secondary structures. They are often involved in protein functions through interacting with other molecules. The irregularity and flexibility of loops make their structures difficult to determine experimentally and challenging to model computationally. Conformation sampling and energy evaluation are the two key components in loop modeling. We have developed a new method for loop conformation sampling and prediction based on a chain growth sequential Monte Carlo sampling strategy, called Distance-guided Sequential chain-Growth Monte Carlo (DiSGro). With an energy function designed specifically for loops, our method can efficiently generate high quality loop conformations with low energy that are enriched with near-native loop structures. The average minimum global backbone RMSD for 1,000 conformations of 12-residue loops is Å, with a lowest energy RMSD of Å, and an average ensemble RMSD of Å. A novel geometric criterion is applied to speed up calculations. The computational cost of generating 1,000 conformations for each of the x loops in a benchmark dataset is only about cpu minutes for 12-residue loops, compared to ca cpu minutes using the FALCm method. Test results on benchmark datasets show that DiSGro performs comparably or better than previous successful methods, while requiring far less computing time. DiSGro is especially effective in modeling longer loops (– residues). PMID:24763317
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Carlsson, Gudrun Alm; Williamson, Jeffrey; Malusek, Alexandr
2011-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. PMID:21992844
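The bootstrap step described above — resampling per-history scores to obtain a confidence interval for the efficiency gain — can be sketched generically. The variance-ratio statistic and the synthetic score distributions below are illustrative stand-ins, not the paper's brachytherapy data:

```python
import random
import statistics

def bootstrap_ci(stat, scores_a, scores_b, n_boot, alpha, rng):
    """Percentile bootstrap confidence interval for a statistic comparing two
    per-history score samples (here: a variance ratio standing in for the
    efficiency gain)."""
    vals = []
    for _ in range(n_boot):
        ra = [rng.choice(scores_a) for _ in scores_a]   # resample with replacement
        rb = [rng.choice(scores_b) for _ in scores_b]
        vals.append(stat(ra, rb))
    vals.sort()
    lo = vals[int(alpha / 2 * n_boot)]
    hi = vals[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Gain proxy: variance of the conventional estimator over the correlated one.
gain = lambda a, b: statistics.pvariance(a) / statistics.pvariance(b)
rng = random.Random(2)
conv = [rng.gauss(1.0, 0.5) for _ in range(400)]   # conventional-MC scores (synthetic)
corr = [rng.gauss(1.0, 0.1) for _ in range(400)]   # correlated-sampling scores (synthetic)
point = gain(conv, corr)
lo, hi = bootstrap_ci(gain, conv, corr, 1000, 0.05, rng)
```

Because the bootstrap makes no normality assumption, it remains usable when a few high-weight histories skew the score distribution — exactly the non-normal regime the study highlights.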
NASA Astrophysics Data System (ADS)
Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.; Azbouche, A.
2007-07-01
The present paper describes the optimization of the sample dimensions of a 241Am-Be neutron source-based prompt gamma neutron activation analysis (PGNAA) setup devoted to in situ analysis of environmental water rejects. The optimal dimensions were achieved following extensive Monte Carlo neutron flux calculations using the MCNP5 computer code. A validation process was performed for the proposed preliminary setup with measurements of the thermal neutron flux by the activation technique, using indium foils both bare and covered with a cadmium sheet. Sensitivity calculations were subsequently performed to simulate real conditions of in situ analysis by determining thermal neutron flux perturbations in samples as chlorine and organic matter concentrations change. The desired optimal sample dimensions were finally achieved once the established constraints regarding neutron damage to the semiconductor gamma detector, pulse pile-up, dead time and radiation hazards were fully met.
Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin
2015-01-01
The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen
2013-10-01
Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer's disease classification task. As an additional benefit, the technique also allows one to compute informative "error bars" on the volume estimates of individual structures.
MCMC-ODPR: primer design optimization using Markov Chain Monte Carlo sampling.
Kitchen, James L; Moore, Jonathan D; Palmer, Sarah A; Allaby, Robin G
2012-11-05
Next generation sequencing technologies often require numerous primer designs to achieve good target coverage, which can be financially costly. We aimed to develop a system that implements primer reuse to design degenerate primers around SNPs, thereby finding the fewest necessary primers at the lowest cost whilst maintaining acceptable coverage. We have implemented Metropolis-Hastings Markov chain Monte Carlo for optimizing primer reuse, and call the result the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. After repeating the program 1020 times to assess the variance, MCMC-ODPR was found to require on average 17.14% fewer primers than designs without primer reuse, for equivalent coverage. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with the single-sequence primer design programs Primer3 and Primer-BLAST, achieving lower primer costs per amplicon base covered of 0.21, 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than the programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. MCMC-ODPR is a useful tool for designing primers at various melting temperatures with good target coverage. By combining degeneracy with optimal primer reuse, the user may increase the coverage of sequences amplified by the designed primers at significantly lower cost. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.
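The optimization engine named above, a Metropolis-Hastings search over discrete designs, can be sketched independently of the primer-specific cost model. Below, a toy set-cover objective stands in for MCMC-ODPR's coverage/cost trade-off (the `covers` table and weights are invented for illustration):

```python
import math
import random

def mh_minimize(cost, propose, x0, temp, steps, rng):
    """Metropolis-Hastings search over discrete designs: accept downhill moves
    always, uphill moves with probability exp(-delta/temp), and track the best
    design seen (a generic sketch of the optimization loop, not MCMC-ODPR)."""
    x, c = x0, cost(x0)
    best, best_c = x, c
    for _ in range(steps):
        y = propose(x, rng)
        cy = cost(y)
        if cy <= c or rng.random() < math.exp((c - cy) / temp):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
    return best, best_c

# Toy stand-in: pick a minimal subset of "primers" covering six "targets".
covers = {0: {0, 1, 2}, 1: {2, 3}, 2: {3, 4, 5}, 3: {0, 5}, 4: {1, 4}}

def cost(sel):
    hit = set().union(*(covers[i] for i in sel)) if sel else set()
    return len(sel) + 10 * (6 - len(hit))    # primer count + coverage penalty

def propose(sel, rng):
    i = rng.randrange(5)
    return sel ^ {i}                         # flip membership of primer i

rng = random.Random(3)
best, best_c = mh_minimize(cost, propose, frozenset(), 0.5, 2000, rng)
```

On this toy instance the unique optimum is primers {0, 2} at cost 2; the real algorithm replaces the penalty with costs built from degeneracy, reuse and SNP constraints.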
Code of Federal Regulations, 2014 CFR
2014-07-01
... refiners, gasoline importers and producers and importers of certified ethanol denaturant. 80.1630 Section... refiners, gasoline importers and producers and importers of certified ethanol denaturant. (a) Sample and test each batch of gasoline and certified ethanol denaturant. (1) Refiners and importers shall...
Sample Size Determination for Regression Models Using Monte Carlo Methods in R
ERIC Educational Resources Information Center
Beaujean, A. Alexander
2014-01-01
A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
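The Monte Carlo approach to sample-size planning described here (the article works in R) amounts to: simulate data under an assumed model, fit it, and count rejections. A sketch for the slope test in simple linear regression, with an illustrative effect size and an approximate large-sample critical value:

```python
import math
import random

def power_simple_regression(n, beta1, sigma, n_sim, t_crit, rng):
    """Monte Carlo power estimate for the slope test in simple linear
    regression: simulate datasets under the assumed effect size, fit OLS,
    and count how often |t| exceeds the critical value."""
    rejections = 0
    for _ in range(n_sim):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        ys = [1.0 + beta1 * x + rng.gauss(0.0, sigma) for x in xs]
        mx = sum(xs) / n
        my = sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        b0 = my - b1 * mx
        sse = sum((y - b0 - b1 * x) ** 2 for x, y in zip(xs, ys))
        se_b1 = math.sqrt(sse / (n - 2) / sxx)
        if abs(b1 / se_b1) > t_crit:      # approx. two-sided 5% test
            rejections += 1
    return rejections / n_sim

rng = random.Random(4)
power = power_simple_regression(n=100, beta1=0.3, sigma=1.0,
                                n_sim=500, t_crit=1.98, rng=rng)
```

Rerunning the loop over a grid of n values, and picking the smallest n whose estimated power clears the desired threshold, is the sample-size determination the article advocates; unlike closed-form formulae, the simulated data can violate any assumption the researcher wishes to stress-test.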
Mora, Leonor; Martínez, Indira; Figuera, Lourdes; Segura, Merlyn; Del Valle, Guilarte
2010-12-01
In Sucre state, the Manzanares river is threatened by domestic, agricultural and industrial activities, becoming an environmental risk factor for its inhabitants. In this context, we evaluated the presence of protozoans in the surface waters of tributaries of the Manzanares river (Orinoco river, Quebrada Seca, San Juan river), Montes municipality, Sucre state, together with faecal samples from inhabitants of the towns bordering these tributaries. We collected faecal and water samples from May 2006 through April 2007. The surface water samples were processed, after centrifugation, by direct examination and flocculation, using lugol, modified Kinyoun and trichromic stains. Faecal samples were analyzed by direct examination with physiological saline solution and the modified Ritchie concentration method, using the same staining techniques mentioned above. The protozoans most frequently observed in the surface waters of the three tributaries were amoebas, Blastocystis sp., Endolimax sp., Chilomastix sp. and Giardia sp., whereas in faecal samples Blastocystis hominis, Endolimax nana and Entamoeba coli had the greatest frequencies in the three communities. The inhabitants of Orinoco La Peña proved the most susceptible to these parasitic infections (77.60%), followed by San Juan river (46.63%) and Quebrada Seca (39.49%). The presence of pathogenic and nonpathogenic protozoans in surface waters demonstrates the faecal contamination of the tributaries, representing a constant focus of infection for their inhabitants, as inferred from the observation of the same species in both types of samples.
Sampling technique is important for optimal isolation of pharyngeal gonorrhoea.
Mitchell, M; Rane, V; Fairley, C K; Whiley, D M; Bradshaw, C S; Bissessor, M; Chen, M Y
2013-11-01
Culture is insensitive for the detection of pharyngeal gonorrhoea but isolation is pivotal to antimicrobial resistance surveillance. The aim of this study was to ascertain whether recommendations provided to clinicians (doctors and nurses) on pharyngeal swabbing technique could improve gonorrhoea detection rates and to determine which aspects of swabbing technique are important for optimal isolation. This study was undertaken at the Melbourne Sexual Health Centre, Australia. Detection rates among clinicians for pharyngeal gonorrhoea were compared before (June 2006-May 2009) and after (June 2009-June 2012) recommendations on swabbing technique were provided. Associations between detection rates and reported swabbing technique obtained via a clinician questionnaire were examined. The overall yield from testing before and after provision of the recommendations among 28 clinicians was 1.6% (134/8586) and 1.8% (264/15,046) respectively (p=0.17). Significantly higher detection rates were seen following the recommendations among clinicians who reported a change in their swabbing technique in response to the recommendations (2.1% vs. 1.5%; p=0.004), swabbing a larger surface area (2.0% vs. 1.5%; p=0.02), applying more swab pressure (2.5% vs. 1.5%; p<0.001) and a change in the anatomical sites they swabbed (2.2% vs. 1.5%; p=0.002). The predominant change in sites swabbed was an increase in swabbing of the oropharynx: from a median of 0% to 80% of the time. More thorough swabbing improves the isolation of pharyngeal gonorrhoea using culture. Clinicians should receive training to ensure swabbing is performed with sufficient pressure and that it covers an adequate area that includes the oropharynx.
Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.
2010-01-01
Monte Carlo (MC) is a well-utilized tool for simulating photon transport in single photon emission computed tomography (SPECT) due to its ability to accurately model the physical processes of photon transport. As a consequence of this accuracy, it suffers from relatively low detection efficiency and long computation times. One technique used to improve the speed of MC modeling is the effective and well-established variance reduction technique (VRT) known as forced detection (FD). With this method, photons are followed as they traverse the object under study but are then forced to travel in the direction of the detector surface, whereby they are detected at a single detector location. Another method, called convolution-based forced detection (CFD), is based on the fundamental idea of FD with the exception that detected photons are detected at multiple detector locations and determined with a distance-dependent blurring kernel. In order to further increase the speed of MC, a method named multiple projection convolution-based forced detection (MP-CFD) is presented. Rather than forcing photons to hit a single detector, the MP-CFD method follows the photon transport through the object but then, at each scatter site, forces the photon to interact with a number of detectors at a variety of angles surrounding the object. This way, it is possible to simulate all the projection images of a SPECT simulation in parallel, rather than as independent projections. The result is vastly improved simulation time, as much of the computational load of simulating photon transport through the object is incurred only once for all projection angles. The results of the proposed MP-CFD method agree well with the experimental data in measurements of the point spread function (PSF), producing a correlation coefficient (r²) of 0.99 compared to experimental data. The speed of MP-CFD is shown to be about 60 times faster than a regular forced detection MC program with similar results. PMID:20811587
Minimum Sample Size for Cronbach's Coefficient Alpha: A Monte-Carlo Study
ERIC Educational Resources Information Center
Yurdugul, Halil
2008-01-01
The coefficient alpha is the most widely used measure of internal consistency for composite scores in the educational and psychological studies. However, due to the difficulties of data gathering in psychometric studies, the minimum sample size for the sample coefficient alpha has been frequently debated. There are various suggested minimum sample…
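The kind of Monte Carlo study described here can be sketched directly: simulate one-factor test data at a candidate sample size and inspect the sampling distribution of coefficient alpha. The factor model and parameter values below are illustrative assumptions, not the article's design:

```python
import random
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    n = len(items[0])
    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var = sum(statistics.variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

def alpha_sampling_spread(k, n, n_sim, rng):
    """MC sketch: simulate a one-factor test (k items, n respondents) many
    times and summarize the sampling distribution of sample alpha, which is
    what a minimum-sample-size judgment rests on."""
    alphas = []
    for _ in range(n_sim):
        fs = [rng.gauss(0.0, 1.0) for _ in range(n)]              # common factor
        items = [[f + rng.gauss(0.0, 1.0) for f in fs] for _ in range(k)]
        alphas.append(cronbach_alpha(items))
    return statistics.mean(alphas), statistics.stdev(alphas)

rng = random.Random(5)
mean_a, sd_a = alpha_sampling_spread(k=5, n=200, n_sim=300, rng=rng)
```

Under this model the population alpha is 5/6 (item variance 2, inter-item covariance 1); shrinking n until sd_a exceeds one's tolerance reproduces the minimum-sample-size question empirically.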
Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.; Halappanavar, Mahantesh
2016-09-16
Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, that model actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker’s payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
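Two-phase Monte Carlo sampling, the first propagation technique named above, nests an aleatory inner loop inside an epistemic outer loop. A minimal sketch with a hypothetical attacker-payoff model (the distributions are placeholders, not the paper's):

```python
import random
import statistics

def two_phase_mc(sample_epistemic, sample_aleatory, n_outer, n_inner, rng):
    """Two-phase (nested) Monte Carlo: the outer loop draws uncertain model
    parameters (epistemic uncertainty), the inner loop propagates per-event
    variability (aleatory uncertainty) given them, yielding a distribution of
    expected payoffs rather than a single point utility estimate."""
    expected_payoffs = []
    for _ in range(n_outer):
        theta = sample_epistemic(rng)
        inner = [sample_aleatory(theta, rng) for _ in range(n_inner)]
        expected_payoffs.append(statistics.mean(inner))
    return expected_payoffs

# Hypothetical attacker-payoff model: the mean payoff is known only to lie
# in an interval, while individual attacks vary around that mean.
epistemic = lambda r: r.uniform(2.0, 4.0)       # uncertain mean payoff
aleatory = lambda mu, r: r.gauss(mu, 1.0)       # per-attack variability
rng = random.Random(6)
payoffs = two_phase_mc(epistemic, aleatory, 500, 200, rng)
m_pay = statistics.fmean(payoffs)
s_pay = statistics.stdev(payoffs)
```

The spread of `payoffs` reflects epistemic uncertainty only (the inner averaging washes out aleatory noise), which is the separation the framework is designed to preserve; probability bounds analysis replaces the outer distributions with intervals.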
Alrefae, T
2014-12-01
A simple method of efficiency calibration for gamma spectrometry was performed. This method, which focused on measuring the radioactivity of (137)Cs in food samples, was based on Monte Carlo simulations available in the free-of-charge toolkit GEANT4. Experimentally, the efficiency values of a high-purity germanium detector were calculated for three reference materials representing three different food items. These efficiency values were compared with their counterparts produced by a computer code that simulated experimental conditions. Interestingly, the output of the simulation code was in acceptable agreement with the experimental findings, thus validating the proposed method.
Zhang, Xiaofeng; Badea, Cristian; Hood, Greg; Wetzel, Arthur; Qi, Yi; Stiles, Joel; Johnson, G. Allan
2011-01-01
We present a method for high-resolution reconstruction of fluorescent images of the mouse thorax. It features an anatomically guided sampling method to retrospectively eliminate problematic data and a parallel Monte Carlo software package to compute the Jacobian matrix for the inverse problem. The proposed method was capable of resolving microliter-sized femtomole amount of quantum dot inclusions closely located in the middle of the mouse thorax. The reconstruction was verified against co-registered micro-CT data. Using the proposed method, the new system achieved significantly higher resolution and sensitivity compared to our previous system consisting of the same hardware. This method can be applied to any system utilizing similar imaging principles to improve imaging performance. PMID:21991539
NASA Astrophysics Data System (ADS)
Subramanian, Ramachandran; Schultz, Andrew J.; Kofke, David A.
2017-03-01
We develop an orientation sampling algorithm for rigid diatomic molecules, which allows direct generation of rings of images used for path-integral calculation of nuclear quantum effects. The algorithm treats the diatomic molecule as two independent atoms as opposed to one (quantum) rigid rotor. Configurations are generated according to a solvable approximate distribution that is corrected via the acceptance decision of the Monte Carlo trial. Unlike alternative methods that treat the systems as a quantum rotor, this atom-based approach is better suited for generalization to multi-atomic (more than two atoms) and flexible molecules. We have applied this algorithm in combination with some of the latest ab initio potentials of rigid H2 to compute fully quantum second virial coefficients, for which we observe excellent agreement with both experimental and simulation data from the literature.
NASA Astrophysics Data System (ADS)
Furuta, T.; Maeyama, T.; Ishikawa, K. L.; Fukunishi, N.; Fukasaku, K.; Takagi, S.; Noda, S.; Himeno, R.; Hayashi, S.
2015-08-01
In this research, we used a 135 MeV/nucleon carbon-ion beam to irradiate a biological sample composed of fresh chicken meat and bones, which was placed in front of a PAGAT gel dosimeter, and compared the measured and simulated transverse-relaxation-rate (R2) distributions in the gel dosimeter. We experimentally measured the three-dimensional R2 distribution, which records the dose induced by particles penetrating the sample, by using magnetic resonance imaging. The obtained R2 distribution reflected the heterogeneity of the biological sample. We also conducted Monte Carlo simulations using the PHITS code by reconstructing the elemental composition of the biological sample from its computed tomography images while taking into account the dependence of the gel response on the linear energy transfer. The simulation reproduced the experimental distal edge structure of the R2 distribution with an accuracy under about 2 mm, which is approximately the same as the voxel size currently used in treatment planning.
Mourant, J.R.; Hielscher, A.H.; Bigio, I.J.
1996-04-01
Details of the interaction of photons with tissue phantoms are elucidated using Monte Carlo simulations. In particular, photon sampling volumes and photon pathlengths are determined for a variety of scattering and absorption parameters. The Monte Carlo simulations are specifically designed to model light delivery and collection geometries relevant to clinical applications of optical biopsy techniques. The Monte Carlo simulations assume that light is delivered and collected by two nearly adjacent optical fibers and take into account the numerical aperture of the fibers as well as reflection and refraction at interfaces between different media. To determine the validity of the Monte Carlo simulations for modeling the interactions between the photons and the tissue phantom in these geometries, the simulations were compared to measurements of aqueous suspensions of polystyrene microspheres in the wavelength range 450-750 nm.
A new paradigm for petascale Monte Carlo simulation: Replica exchange Wang-Landau sampling
Li, Ying Wai; Vogel, Thomas; Wuest, Thomas; Landau, David P
2014-01-01
We introduce a generic, parallel Wang-Landau method that is naturally suited to implementation on massively parallel, petaflop supercomputers. The approach introduces a replica-exchange framework in which densities of states for overlapping sub-windows in energy space are determined iteratively by traditional Wang-Landau sampling. The advantages and general applicability of the method are demonstrated for several distinct systems that possess discrete or continuous degrees of freedom, including those with complex free energy landscapes and topological constraints.
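The building block of the replica-exchange framework is traditional Wang-Landau sampling within one energy window. A single-window sketch on a toy model with a known density of states (the model and schedule are illustrative; the paper's parallel sub-window and replica-exchange machinery is not shown):

```python
import math
import random

def wang_landau(n, lnf_final, flatness, rng):
    """Single-window Wang-Landau estimate of ln g(E) for a toy model whose
    energy is the number of up spins among n (so g(E) = C(n, E) exactly).
    Each visit adds ln f to the running ln g(E); the modification factor
    ln f is halved whenever the visit histogram is flat enough."""
    state = [0] * n
    energy = 0
    lng = [0.0] * (n + 1)
    lnf = 1.0
    while lnf > lnf_final:
        hist = [0] * (n + 1)
        flat = False
        while not flat:
            for _ in range(1000):
                i = rng.randrange(n)
                e_new = energy + (1 if state[i] == 0 else -1)
                # accept with probability min(1, g(E)/g(E_new))
                if rng.random() < math.exp(lng[energy] - lng[e_new]):
                    state[i] ^= 1
                    energy = e_new
                lng[energy] += lnf
                hist[energy] += 1
            flat = min(hist) > flatness * (sum(hist) / len(hist))
        lnf /= 2.0
    return [v - lng[0] for v in lng]    # normalize so ln g(0) = 0

rng = random.Random(7)
lng = wang_landau(4, 1e-4, 0.8, rng)
exact = [math.log(math.comb(4, e)) for e in range(5)]
```

Because g(E) is only determined up to a constant, neighboring windows in the replica-exchange scheme are joined by matching ln g over their overlap, which is where the parallelism of the petascale method enters.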
Optimal sampling efficiency in Monte Carlo simulation with an approximate potential.
Coe, Joshua D; Sewell, Thomas D; Shaw, M Sam
2009-04-28
Building on the work of Iftimie et al. [J. Chem. Phys. 113, 4852 (2000)] and Gelb [J. Chem. Phys. 118, 7747 (2003)], Boltzmann sampling of an approximate potential (the "reference" system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the end points of the chain, the energy is evaluated at a more accurate level (the "full" system) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. For reference system chains of sufficient length, consecutive full energies are statistically decorrelated and thus far fewer are required to build ensemble averages with a given variance. Without modifying the original algorithm, however, the maximum reference chain length is too short to decorrelate full configurations without dramatically lowering the acceptance probability of the composite move. This difficulty stems from the fact that the reference and full potentials sample different statistical distributions. By manipulating the thermodynamic variables characterizing the reference system (pressure and temperature, in this case), we maximize the average acceptance probability of composite moves, lengthening significantly the random walk between consecutive full energy evaluations. In this manner, the number of full energy evaluations needed to precisely characterize equilibrium properties is dramatically reduced. The method is applied to a model fluid, but implications for sampling high-dimensional systems with ab initio or density functional theory potentials are discussed.
NASA Astrophysics Data System (ADS)
Feroz, F.; Hobson, M. P.
2008-02-01
In performing a Bayesian analysis of astronomical data, two difficult problems often emerge. First, in estimating the parameters of some model for the data, the resulting posterior distribution may be multimodal or exhibit pronounced (curving) degeneracies, which can cause problems for traditional Markov Chain Monte Carlo (MCMC) sampling methods. Secondly, in selecting between a set of competing models, calculation of the Bayesian evidence for each model is computationally expensive using existing methods such as thermodynamic integration. The nested sampling method introduced by Skilling has greatly reduced the computational expense of calculating the evidence and also produces posterior inferences as a by-product. This method has been applied successfully in cosmological applications by Mukherjee, Parkinson & Liddle, but their implementation was efficient only for unimodal distributions without pronounced degeneracies. Shaw, Bridges & Hobson recently introduced a clustered nested sampling method which is significantly more efficient in sampling from multimodal posteriors and also determines the expectation and variance of the final evidence from a single run of the algorithm, hence providing a further increase in efficiency. In this paper, we build on the work of Shaw et al. and present three new methods for sampling and evidence evaluation from distributions that may contain multiple modes and significant degeneracies in very high dimensions; we also present an even more efficient technique for estimating the uncertainty on the evaluated evidence. These methods lead to a further substantial improvement in sampling efficiency and robustness, and are applied to two toy problems to demonstrate the accuracy and economy of the evidence calculation and parameter estimation. Finally, we discuss the use of these methods in performing Bayesian object detection in astronomical data sets, and show that they significantly outperform existing MCMC techniques. An implementation
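The core nested-sampling loop is compact. Below is a minimal 1D sketch with a uniform prior and unit-Gaussian likelihood (all parameters invented for illustration), so the evidence is very nearly 1/10. The naive rejection-sampling replacement step stands in for the constrained sampling that the clustered methods above are designed to accelerate.

```python
import math
import random

def nested_sampling(n_live=100, seed=3):
    """Toy 1D nested sampling: uniform prior on [-5, 5], unit-Gaussian
    likelihood; the exact evidence is ~0.1."""
    rng = random.Random(seed)
    log_l = lambda t: -0.5 * t * t - 0.5 * math.log(2 * math.pi)
    live = [rng.uniform(-5.0, 5.0) for _ in range(n_live)]
    log_z, log_x = -math.inf, 0.0   # running evidence, log prior volume left
    for i in range(1, 5000):
        worst = min(range(n_live), key=lambda j: log_l(live[j]))
        l_min = log_l(live[worst])
        log_x_new = -i / n_live     # mean compression per iteration
        log_w = l_min + math.log(math.exp(log_x) - math.exp(log_x_new))
        log_z = max(log_z, log_w) + math.log1p(math.exp(-abs(log_z - log_w)))
        log_x = log_x_new
        while True:                 # rejection-sample the prior above l_min
            t = rng.uniform(-5.0, 5.0)
            if log_l(t) > l_min:
                live[worst] = t
                break
        # stop when the best remaining point cannot change Z by > 0.1%
        if max(log_l(u) for u in live) + log_x < log_z + math.log(1e-3):
            break
    return math.exp(log_z)
```

The rejection step's acceptance rate shrinks with the enclosed prior volume, which is precisely why multimodal or degenerate likelihoods motivate the clustered and ellipsoidal sampling schemes the abstract describes.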
Tundisi, J G; Matsumura-Tundisi, T; Tundisi, J E M; Faria, C R L; Abe, D S; Blanco, F; Rodrigues Filho, J; Campanelli, L; Sidagis Galli, C; Teixeira-Silva, V; Degani, R; Soares, F S; Gatti Junior, P
2015-08-01
In this paper the authors describe the limnological approaches, the sampling methodology, and the strategy adopted in the study of the Xingu River in the area of influence of the future Belo Monte Power Plant. River ecosystems are characterized by a unidirectional current that is highly variable in time, depending on the climatic situation, the drainage pattern, and the hydrological cycle. Continuous vertical mixing by currents and turbulence is characteristic of these ecosystems. All these basic mechanisms were taken into consideration in the sampling strategy and field work carried out in the Xingu River Basin, upstream and downstream of the future Belo Monte Power Plant units.
Fallahpoor, Maryam; Abbasi, Mehrshad; Asghar Parach, Ali; Kalantari, Faraz
2017-02-28
Using digital phantoms as an atlas, compared to acquiring CT data, for internal radionuclide dosimetry decreases the patient's overall radiation dose and reduces the required analysis effort and time for organ segmentation. The drawback is that the phantom may not match the patient exactly. We assessed the effect of varying BMIs on dosimetry results for a bone pain palliation agent, ¹⁵³Sm-EDTMP. The simulation was done using the GATE Monte Carlo code. Female XCAT phantoms with the following different BMIs were employed: 18.6, 20.8, 22.1, 26.8, 30.3 and 34.7 kg/m². S-factors (mGy/(MBq·s)) and SAFs (kg⁻¹) were calculated for the dosimetry of the radiation from major source organs, including spine, ribs, kidney and bladder, into different target organs, as well as whole-body dosimetry from the spine. The differences in dose estimates from different phantoms, compared to those from the phantom with a BMI of 26.8 kg/m² as the reference, were calculated for both gamma and beta radiations. The relative differences (RD) of the S-factors or SAFs from the values of the reference phantom were calculated. RDs greater than 10% and 100% were frequent in radiations to organs for photon and beta particles, respectively. The relative differences in whole-body SAFs from the reference phantom were 15.4%, 7%, 4.2%, -9.8% and -1.4% for BMIs of 18.6, 20.8, 22.1, 30.3 and 34.7 kg/m², respectively. The differences in whole-body S-factors for the phantoms with BMIs of 18.6, 20.8, 22.1, 30.3 and 34.7 kg/m² were 39.5%, 19.4%, 8.8%, -7.9% and -4.3%, respectively. The dosimetry of the gamma photons and beta particles changes substantially with the use of phantoms with different BMIs. The change in S-factors is important for dose calculation and can change the prescribed therapeutic dose of ¹⁵³Sm-EDTMP. Thus a phantom with a BMI better matched to the patient is suggested for therapeutic purposes, where dose estimates closer to those in the actual patient are required.
Zhang, Jian; Nielsen, Scott E; Grainger, Tess N; Kohler, Monica; Chipchar, Tim; Farr, Daniel R
2014-01-01
Documenting and estimating species richness at regional or landscape scales has been a major emphasis for conservation efforts, as well as for the development and testing of evolutionary and ecological theory. Rarely, however, are sampling efforts assessed on how they affect detection and estimates of species richness and rarity. In this study, vascular plant richness was sampled in 356 quarter-hectare time-unlimited survey plots in the boreal region of northeast Alberta. These surveys consisted of 15,856 observations of 499 vascular plant species (97 considered to be regionally rare) collected by 12 observers over a 2 year period. Average survey time for each quarter-hectare plot was 82 minutes, ranging from 20 to 194 minutes, with a positive relationship between total survey time and total plant richness. When survey time was limited to a 20-minute search, as in other Alberta biodiversity methods, 61 species were missed. Extending the survey time to 60 minutes reduced the number of missed species to 20, while a 90-minute cut-off time resulted in the loss of 8 species. When surveys were separated by habitat type, 60 minutes of search effort sampled nearly 90% of total observed richness for all habitats. Relative to rare species, time-unlimited surveys had ∼65% higher rare plant detections post-20 minutes than during the first 20 minutes of the survey. Although exhaustive sampling was attempted, observer bias was noted among observers when a subsample of plots was re-surveyed by different observers. Our findings suggest that sampling time, combined with sample size and observer effects, should be considered in landscape-scale plant biodiversity surveys.
Hierarchical Bayesian modeling and Markov chain Monte Carlo sampling for tuning-curve analysis.
Cronin, Beau; Stevenson, Ian H; Sur, Mriganka; Körding, Konrad P
2010-01-01
A central theme of systems neuroscience is to characterize the tuning of neural responses to sensory stimuli or the production of movement. Statistically, we often want to estimate the parameters of the tuning curve, such as preferred direction, as well as the associated degree of uncertainty, characterized by error bars. Here we present a new sampling-based, Bayesian method that allows the estimation of tuning-curve parameters, the estimation of error bars, and hypothesis testing. This method also provides a useful way of visualizing which tuning curves are compatible with the recorded data. We demonstrate the utility of this approach using recordings of orientation and direction tuning in primary visual cortex, direction of motion tuning in primary motor cortex, and simulated data.
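A sampling-based Bayesian tuning-curve fit of this kind can be sketched with a plain Metropolis sampler. The cosine tuning curve, firing rates, and trial counts below are invented for illustration (this is not the authors' hierarchical model); posterior samples of the preferred direction directly yield the parameter estimate and its error bars.

```python
import math
import random

def tuning_curve_mcmc(n_iter=20000, burn=2000, seed=4):
    """Metropolis sampling of the posterior over preferred direction for a
    cosine tuning curve r(th) = 10 + 8 cos(th - pref) with Poisson spike
    counts. A flat prior makes the log-posterior the Poisson log-likelihood."""
    rng = random.Random(seed)
    true_pref = 1.0
    rate = lambda th, pref: 10.0 + 8.0 * math.cos(th - pref)

    def poisson(lam):               # Knuth's method, fine for small lam
        l, k, p = math.exp(-lam), 0, 1.0
        while p > l:
            k += 1
            p *= rng.random()
        return k - 1

    dirs = [2.0 * math.pi * k / 16 for k in range(16)]
    thetas = [th for th in dirs for _ in range(5)]   # 5 trials per direction
    counts = [poisson(rate(th, true_pref)) for th in thetas]

    def log_post(pref):
        return sum(c * math.log(rate(th, pref)) - rate(th, pref)
                   for th, c in zip(thetas, counts))

    pref, lp, samples = 0.0, log_post(0.0), []
    for it in range(n_iter):
        prop = pref + rng.gauss(0.0, 0.1)            # random-walk proposal
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            pref, lp = prop, lp_prop
        if it >= burn:
            samples.append(pref)
    return samples
```

The spread of the returned samples is the posterior uncertainty on the preferred direction, which is exactly the error-bar quantity the abstract highlights.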
Mamonov, Artem B.; Bhatt, Divesh; Cashman, Derek J.; Ding, Ying; Zuckerman, Daniel M.
2009-01-01
We introduce “library based Monte Carlo” (LBMC) simulation, which performs Boltzmann sampling of molecular systems based on pre-calculated statistical libraries of molecular-fragment configurations, energies, and interactions. The library for each fragment can be Boltzmann distributed and thus account for all correlations internal to the fragment. LBMC can be applied to both atomistic and coarse-grained models, as we demonstrate in this “proof of principle” report. We first verify the approach in a toy model and in implicitly solvated poly-alanine systems. We next study five proteins, up to 309 residues in size. Based on atomistic equilibrium libraries of peptide-plane configurations, the proteins are modeled with fully atomistic backbones and simplified Gō-like interactions among residues. We show that full equilibrium sampling can be obtained in days to weeks on a single processor, suggesting that more accurate models are well within reach. For the future, LBMC provides a convenient platform for constructing adjustable or mixed-resolution models: the configurations of all atoms can be stored at no run-time cost, while an arbitrary subset of interactions is “turned on.” PMID:19594147
Kuruvilla Verghese
2002-04-05
This report summarizes the highlights of the research performed under the 1-year NEER grant from the Department of Energy. The primary goal of this study was to investigate the effects of certain design changes in the Fisher Senoscan mammography system and in the degree of breast compression on the discernability of microcalcifications in calcification clusters often observed in mammograms with tumor lesions. The most important design change that one can contemplate in a digital mammography system to improve resolution of calcifications is the reduction of the pixel dimensions of the digital detector. Breast compression is painful to the patient and is thought to be a deterrent to women getting routine mammographic screening. Calcification clusters often serve as markers (indicators) of breast cancer.
NASA Astrophysics Data System (ADS)
Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin
2017-06-01
Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm employing a combination of importance sampling, a variant of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts from importance sampling concepts, using a proposed two-step updating rule for the design point. This part finishes after a small number of samples are generated. Then RSM starts to work using Bucher's experimental design, with the last design point and a proposed effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules are shown.
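The importance-sampling half of such an algorithm can be sketched in a few lines. In the toy reliability problem below (invented for illustration, not the paper's examples), failure is the rare event x1 + x2 > 4 for independent standard normals; sampling from a normal shifted to the design point (2, 2) and reweighting by the density ratio gives a low-variance estimate.

```python
import math
import random

def failure_prob_is(n=20000, seed=5):
    """Importance sampling estimate of p_f = P(x1 + x2 > 4) for independent
    standard normals; exact: 1 - Phi(4/sqrt(2)) ~ 2.3e-3."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x1 = rng.gauss(2.0, 1.0)    # draw from the importance density h
        x2 = rng.gauss(2.0, 1.0)
        if x1 + x2 > 4.0:           # indicator of the failure domain
            # weight = phi(x) / h(x) for two unit-variance normals
            log_w = (-0.5 * (x1 * x1 + x2 * x2)
                     + 0.5 * ((x1 - 2.0) ** 2 + (x2 - 2.0) ** 2))
            total += math.exp(log_w)
    return total / n
```

A crude MCS run of the same size would see only ~50 failures; centering the sampling density at the design point is what makes the small sample budget in the proposed algorithm feasible.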
Curran, Patrick J; Bollen, Kenneth A; Paxton, Pamela; Kirby, James; Chen, Feinian
2002-01-01
The noncentral chi-square distribution plays a key role in structural equation modeling (SEM). The likelihood ratio test statistic that accompanies virtually all SEMs asymptotically follows a noncentral chi-square under certain assumptions relating to misspecification and multivariate distribution. Many scholars use the noncentral chi-square distribution in the construction of fit indices, such as Steiger and Lind's (1980) Root Mean Square Error of Approximation (RMSEA) or the family of baseline fit indices (e.g., RNI, CFI), and for the computation of statistical power for model hypothesis testing. Despite this wide use, surprisingly little is known about the extent to which the test statistic follows a noncentral chi-square in applied research. Our study examines several hypotheses about the suitability of the noncentral chi-square distribution for the usual SEM test statistic under conditions commonly encountered in practice. We designed Monte Carlo computer simulation experiments to empirically test these research hypotheses. Our experimental conditions included seven sample sizes ranging from 50 to 1000, and three distinct model types, each with five specifications ranging from a correct model to the severely misspecified uncorrelated baseline model. In general, we found that for models with small to moderate misspecification, the noncentral chi-square distribution is well approximated when the sample size is large (e.g., greater than 200), but there was evidence of bias in both mean and variance in smaller samples. A key finding was that the test statistics for the uncorrelated variable baseline model did not follow the noncentral chi-square distribution for any model type across any sample size. We discuss the implications of our findings for the SEM fit indices and power estimation procedures that are based on the noncentral chi-square distribution as well as potential directions for future research.
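The experimental logic, simulating a test statistic many times and comparing its empirical distribution with the theoretical chi-square, can be illustrated with a toy likelihood-ratio test. This stand-in (a known-variance test of a normal mean, not the SEM models used in the study) has an exact chi-square(1) null distribution, so the Monte Carlo check should pass.

```python
import random

def lrt_statistics(n_rep=20000, n=50, seed=6):
    """Monte Carlo check of an asymptotic distribution: under H0: mu = 0 with
    known unit variance, -2 log LR = n * xbar^2 follows chi-square(1)."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_rep):
        xbar = sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n
        stats.append(n * xbar * xbar)   # the likelihood-ratio statistic
    return stats
```

The SEM experiments described above apply the same recipe under misspecification, where the reference distribution becomes a *noncentral* chi-square and the match is no longer guaranteed.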
ERIC Educational Resources Information Center
In'nami, Yo; Koizumi, Rie
2013-01-01
The importance of sample size, although widely discussed in the literature on structural equation modeling (SEM), has not been widely recognized among applied SEM researchers. To narrow this gap, we focus on second language testing and learning studies and examine the following: (a) Is the sample size sufficient in terms of precision and power of…
Ogata, Koji; Soejima, Kenji; Higo, Junichi
2006-10-01
We have developed a computational method of protein design to detect amino acid sequences that are adaptable to given main-chain coordinates of a protein. In this method, the selection of amino acid types employs a Metropolis Monte Carlo method with a scoring function in conjunction with the approximation of free energies computed from 3D structures. To compute the scoring function, a side-chain prediction using another Metropolis Monte Carlo method was performed to select structurally suitable side-chain conformations from a side-chain library. In total, two layers of Monte Carlo procedures were performed, first to select amino acid types (1st layer Monte Carlo) and then to predict side-chain conformations (2nd layer Monte Carlo). We applied this method to sequence design for the entire sequences of the SH3 domain, Protein G, and BPTI. The predicted sequences were similar to those of the wild-type proteins. We compared the results of the predictions with and without the 2nd layer Monte Carlo method. The results revealed that the two-layer Monte Carlo method produced better sequence similarity to the wild-type proteins than the one-layer method. Finally, we applied this method to neuraminidase of influenza virus. The results were consistent with the sequences identified from the isolated viruses.
Ledra, Mohammed; El Hdiy, Abdelillah
2015-09-21
A Monte Carlo simulation algorithm is used to study electron-beam-induced current in an intrinsic silicon sample that contains, at its surface, a linear arrangement of uncapped nanocrystals positioned in the irradiation trajectory around the hemispherical collecting nano-contact. The induced current is generated using an electron beam energy of 5 keV in a perpendicular configuration. Each nanocrystal is considered a recombination center, and the surface recombination velocity at the free surface is taken to be zero. It is shown that the induced current is affected by the distance separating each nanocrystal from the nano-contact. An increase in this separation distance translates to a decrease in the nanocrystal density and an increase in the minority carrier diffusion length. The results reveal a threshold separation distance beyond which the nanocrystals no longer affect the collection efficiency, and the diffusion length reaches the value obtained in the absence of nanocrystals. A cross-section characterizing the ability of the nano-contact to trap carriers was determined.
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2007-12-01
Markov chain Monte Carlo (MCMC) methods are widely used in fields ranging from physics and chemistry to finance, economics, and statistical inference for estimating the average properties of complex systems. The convergence rate of MCMC schemes is often observed, however, to be disturbingly low, limiting their practical use in many applications. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves. Here we show that significant improvements to the efficiency of MCMC algorithms can be made by using a self-adaptive Differential Evolution search strategy within a population-based evolutionary framework. This scheme differs fundamentally from existing MCMC algorithms in that trial jumps are simply a fixed multiple of the difference of randomly chosen members of the population, using various genetic operators that are adaptively updated during the search. In addition, the algorithm includes randomized subspace sampling to further improve convergence and acceptance rate. Detailed balance and ergodicity of the algorithm are proved, and hydrologic examples show that the proposed method significantly enhances the efficiency and applicability of MCMC simulations to complex, multi-modal search problems.
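The difference-based proposal at the heart of such population MCMC schemes is easy to sketch. The code below is a bare Differential Evolution Markov Chain in the spirit of the abstract (not the authors' full self-adaptive algorithm, and with an invented 1D target): each chain's trial jump is a scaled difference of two other randomly chosen chains plus small noise, accepted by the usual Metropolis rule.

```python
import math
import random

def de_mc(n_gen=3000, n_chains=10, burn=500, seed=7):
    """Differential Evolution Markov Chain sketch targeting a standard
    1D normal. The population's spread automatically sets the jump scale."""
    rng = random.Random(seed)
    log_p = lambda x: -0.5 * x * x
    gamma = 2.38 / math.sqrt(2.0)   # commonly recommended scale for d = 1
    pop = [rng.uniform(-3.0, 3.0) for _ in range(n_chains)]
    samples = []
    for gen in range(n_gen):
        for i in range(n_chains):
            a, b = rng.sample([j for j in range(n_chains) if j != i], 2)
            # trial jump: scaled difference of two other chains + jitter
            prop = pop[i] + gamma * (pop[a] - pop[b]) + rng.gauss(0.0, 1e-3)
            if rng.random() < math.exp(min(0.0, log_p(prop) - log_p(pop[i]))):
                pop[i] = prop
        if gen >= burn:             # keep all chains after burn-in
            samples.extend(pop)
    return samples
```

Because the proposal scale is derived from the current population, the sampler adapts itself to the target's width, which is the property the abstract credits for the improved convergence rate.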
Petaccia, Mauricio; Segui, Silvina; Castellano, Gustavo
2015-06-01
Electron probe microanalysis (EPMA) is based on the comparison of characteristic intensities induced by monoenergetic electrons. When the electron beam ionizes inner atomic shells and these ionizations cause the emission of characteristic X-rays, secondary fluorescence can occur, originating from ionizations induced by X-ray photons produced by the primary electron interactions. As detectors are unable to distinguish the origin of these characteristic X-rays, Monte Carlo simulation of radiation transport becomes a determinant tool in the study of this fluorescence enhancement. In this work, characteristic secondary fluorescence enhancement in EPMA has been studied by using the splitting routines offered by PENELOPE 2008 as a variance reduction alternative. This approach is controlled by a single parameter, NSPLIT, which represents the desired number of X-ray photon replicas. The dependence of the uncertainties associated with secondary intensities on NSPLIT was studied as a function of the accelerating voltage and the sample composition in a simple binary alloy in which this effect becomes relevant. The achieved efficiencies for the simulated secondary intensities improve remarkably as the NSPLIT parameter increases; although in most cases an NSPLIT value of 100 is sufficient, some less likely enhancements may require stronger splitting in order to increase the efficiency associated with the simulation of secondary intensities.
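The variance-reduction mechanism of particle splitting can be demonstrated with a deliberately simplified transport model (the depth distribution, direction factor, and escape probability below are all made up; this is not PENELOPE's physics). Each primary interaction spawns nsplit photon replicas of weight 1/nsplit, so the randomness downstream of the split is averaged out within each history.

```python
import math
import random

def secondary_tally(n_primaries=20000, nsplit=50, seed=10):
    """Toy photon splitting in the spirit of NSPLIT: a primary interaction
    at exponential depth d emits a photon that escapes with probability
    exp(-d/u) for a random direction factor u. Splitting tracks nsplit
    replicas of weight 1/nsplit per primary."""
    rng = random.Random(seed)
    plain, split = [], []
    for _ in range(n_primaries):
        d = rng.expovariate(1.0)    # depth of the primary interaction

        def escapes():
            u = rng.uniform(0.2, 1.0)   # hypothetical direction factor
            return rng.random() < math.exp(-d / u)

        plain.append(1.0 if escapes() else 0.0)           # analog tally
        split.append(sum(escapes() for _ in range(nsplit)) / nsplit)
    return plain, split
```

Both tallies are unbiased for the same mean, but the split tally's per-history variance retains only the depth variability, which is why the efficiency in the study above improves as NSPLIT grows.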
NASA Astrophysics Data System (ADS)
Han, Mancheon; Lee, Choong-Ki; Choi, Hyoung Joon
Hybridization-expansion continuous-time quantum Monte Carlo (CT-HYB) is a popular approach in real-material research because it allows one to deal with non-density-density-type interactions. In the conventional CT-HYB, we measure the Green's function and find the self-energy from the Dyson equation. Because one needs to compute the inverse of the statistical data in this approach, the obtained self-energy is very sensitive to statistical noise. For that reason, the measurement is not reliable except at low frequencies. Such an error can be suppressed by measuring a special type of higher-order correlation function, an approach previously implemented for density-density-type interactions. With the help of the recently reported worm-sampling measurement, we developed an improved self-energy measurement scheme that can be applied to any type of interaction. As an illustration, we calculated the self-energy for the 3-orbital Hubbard-Kanamori-type Hamiltonian with our newly developed method. This work was supported by NRF of Korea (Grant No. 2011-0018306) and KISTI supercomputing center (Project No. KSC-2015-C3-039)
40 CFR 80.330 - What are the sampling and testing requirements for refiners and importers?
Code of Federal Regulations, 2010 CFR
2010-07-01
... initio. (b) Sampling methods. For purposes of paragraph (a) of this section, refiners and importers shall sample each batch of gasoline by using one of the following methods: (1) Manual sampling of tanks and... applicable procedures in ASTM method D 5842-95, entitled “Standard Practice for Sampling and Handling of...
40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention Requirements for Refiners and Importers § 80.335 What gasoline sample...
40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention Requirements for Refiners and Importers § 80.335 What gasoline sample...
40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention Requirements for Refiners and Importers § 80.335 What gasoline sample...
40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention Requirements for Refiners and Importers § 80.335 What gasoline sample...
NASA Astrophysics Data System (ADS)
Dou, Tai H.; Min, Yugang; Neylon, John; Thomas, David; Kupelian, Patrick; Santhanam, Anand P.
2016-03-01
Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal input registration parameter set is critical to achieve the best registration performance with a specific algorithm. In this paper, we investigated a parameter optimization strategy for optical-flow based DIR of the 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling algorithm (FSA-AMC) was investigated for solving the complex non-convex parameter optimization problem. The metric for registration error for a given parameter set was computed using landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time in the parameter optimization process, a GPU-based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses on the parameter optimization for the DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets. Results showed that the proposed method efficiently estimated the optimum parameters for optical flow and closely matched the best registration parameters obtained using an exhaustive parameter search method.
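The simulated-annealing core of such a parameter search can be sketched on a 1D stand-in. The objective below is invented (a quadratic bowl with sinusoidal ripples, mimicking a non-convex registration-error surface), and this is plain geometric-cooling annealing with Metropolis sampling, not the authors' FSA-AMC scheme.

```python
import math
import random

def anneal_optimize(seed=9):
    """Simulated annealing on the made-up non-convex objective
    f(x) = x^2 + 3 sin(5x); the global minimum is ~-2.9 near x ~ -0.31."""
    rng = random.Random(seed)
    f = lambda x: x * x + 3.0 * math.sin(5.0 * x)
    x = rng.uniform(-4.0, 4.0)
    fx = f(x)
    best, fbest = x, fx
    t = 2.0                         # initial temperature
    while t > 1e-3:
        for _ in range(200):
            prop = x + rng.gauss(0.0, 0.5)
            fp = f(prop)
            # Metropolis: always accept downhill, sometimes accept uphill
            if fp < fx or rng.random() < math.exp((fx - fp) / t):
                x, fx = prop, fp
                if fx < fbest:
                    best, fbest = x, fx
        t *= 0.9                    # geometric cooling schedule
    return best, fbest
```

Accepting occasional uphill moves at high temperature lets the search escape the shallow local minima that trap a pure descent method, which is the motivation for annealing over exhaustive search in the paper above.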
Jia, Jianhua; Liu, Zi; Xiao, Xuan; Liu, Bingxiang; Chou, Kuo-Chen
2016-06-07
Carbonylation is a posttranslational modification (PTM or PTLM), where a carbonyl group is added to lysine (K), proline (P), arginine (R), and threonine (T) residue of a protein molecule. Carbonylation plays an important role in orchestrating various biological processes but it is also associated with many diseases such as diabetes, chronic lung disease, Parkinson's disease, Alzheimer's disease, chronic renal failure, and sepsis. Therefore, from the angles of both basic research and drug development, we are facing a challenging problem: for an uncharacterized protein sequence containing many residues of K, P, R, or T, which ones can be carbonylated, and which ones cannot? To address this problem, we have developed a predictor called iCar-PseCp by incorporating the sequence-coupled information into the general pseudo amino acid composition, and balancing out skewed training dataset by Monte Carlo sampling to expand positive subset. Rigorous target cross-validations on a same set of carbonylation-known proteins indicated that the new predictor remarkably outperformed its existing counterparts. For the convenience of most experimental scientists, a user-friendly web-server for iCar-PseCp has been established at http://www.jci-bioinfo.cn/iCar-PseCp, by which users can easily obtain their desired results without the need to go through the complicated mathematical equations involved. It has not escaped our notice that the formulation and approach presented here can also be used to analyze many other problems in computational proteomics.
An Overview of Importance Splitting for Rare Event Simulation
ERIC Educational Resources Information Center
Morio, Jerome; Pastel, Rudy; Le Gland, Francois
2010-01-01
Monte Carlo simulations are a classical tool to analyse physical systems. When unlikely events are to be simulated, the importance sampling technique is often used instead of Monte Carlo. Importance sampling has some drawbacks when the problem dimensionality is high or when the optimal importance sampling density is complex to obtain. In this…
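Importance splitting sidesteps those drawbacks by replacing one rare event with a product of more probable conditional events. A minimal fixed-effort sketch for a drifting random walk is below; the levels, drift, and kill barrier are all invented for illustration.

```python
import random

def multilevel_splitting(levels=(1.5, 3.0, 4.5), n=3000, seed=8):
    """Fixed-effort splitting estimate of the probability that a random walk
    with drift -0.5 (unit-variance Gaussian steps, killed below -3) ever
    exceeds the top level. Each stage restarts trajectories from entrance
    states of the previous level, concentrating effort on promising paths."""
    rng = random.Random(seed)

    def advance(x, target):
        # run the walk until it exceeds `target` or dies below -3
        while -3.0 < x < target:
            x += rng.gauss(-0.5, 1.0)
        return x if x >= target else None

    starts = [0.0]
    p = 1.0
    for lvl in levels:
        hits = [h for h in (advance(rng.choice(starts), lvl) for _ in range(n))
                if h is not None]
        if not hits:
            return 0.0
        p *= len(hits) / n          # conditional level-crossing probability
        starts = hits               # entrance states seed the next stage
    return p
```

Each stage only has to estimate a moderate conditional probability (~0.2 here), so no importance-sampling density over the whole path space is ever needed.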
Sampling High-Altitude and Stratified Mating Flights of Red Imported Fire Ant
USDA-ARS?s Scientific Manuscript database
With the exception of an airplane equipped with nets, no method has been developed that successfully samples red imported fire ant, Solenopsis invicta Buren, sexuals in mating/dispersal flights throughout their potential altitudinal trajectories. We developed and tested a method for sampling queens ...
40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention Requirements for Refiners and Importers § 80.335 What gasoline...
NASA Astrophysics Data System (ADS)
Misdaq, M. A.; Khajmi, H.; Ktata, A.
1998-10-01
Radon alpha-activities per unit volume have been measured inside and outside different building material samples by using CR-39 and LR-115 type II solid state nuclear track detectors (SSNTD). Radon emanation coefficients of the studied building materials have been evaluated. The porosities of the building material samples studied have been determined by using a Monte Carlo calculational method adapted to the experimental conditions and compared with data obtained by the Archimedes's method. The influence of the building material porosity on the radon emanation coefficient has been investigated.
Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks
NASA Astrophysics Data System (ADS)
Sun, Wei; Chang, K. C.
2005-05-01
Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under a given time constraint. Several simulation methods are currently available. They include logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, then we propose an improved importance sampling algorithm called the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
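Of the simulation methods listed above, likelihood weighting is the simplest to sketch. In the toy two-node network below (probabilities made up for illustration), the evidence node is clamped rather than sampled, and each sample contributes P(evidence | parents) as its weight.

```python
import random

def likelihood_weighting(n=50000, seed=11):
    """Likelihood weighting in a two-node network Rain -> WetGrass with
    evidence WetGrass = True. Exact posterior:
    P(Rain | Wet) = 0.2*0.9 / (0.2*0.9 + 0.8*0.3) = 0.18/0.42 ~ 0.4286."""
    rng = random.Random(seed)
    p_rain = 0.2
    p_wet = {True: 0.9, False: 0.3}     # P(WetGrass = True | Rain)
    num = den = 0.0
    for _ in range(n):
        rain = rng.random() < p_rain    # sample the non-evidence node
        w = p_wet[rain]                 # weight from the clamped evidence
        num += w * rain
        den += w
    return num / den
```

Importance-sampling variants such as LGIS improve on this by drawing the non-evidence nodes from an adaptively learned importance function instead of their prior conditionals, which matters when the evidence is unlikely and most weights are tiny.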
Coalescent: an open-science framework for importance sampling in coalescent theory.
Tewari, Susanta; Spouge, John L
2015-01-01
Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve the statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks for importance sampling, researchers often struggle to implement new sampling schemes or to benchmark against different schemes in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; license: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, following the infinite-sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
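As a taste of the random-sampling fundamentals such a course covers, the free-flight distance between collisions can be drawn by inverting the exponential CDF; the cross-section value below is an arbitrary illustration, not taken from the notes:

```python
import math
import random

# Inverse-CDF sampling of the distance to the next collision,
# d = -ln(1 - xi) / Sigma_t with xi ~ U[0, 1), a basic Monte Carlo
# transport ingredient. (1 - xi) avoids log(0) at xi = 0.
def sample_distance(sigma_t, rng):
    return -math.log(1.0 - rng.random()) / sigma_t

rng = random.Random(1)
sigma_t = 2.0   # hypothetical total macroscopic cross section, 1/cm
n = 200_000
mean_path = sum(sample_distance(sigma_t, rng) for _ in range(n)) / n
# the sample mean should approach the mean free path 1/sigma_t = 0.5 cm
```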
NASA Astrophysics Data System (ADS)
Barclay, Thomas; Quintana, Elisa; Adams, Fred; Ciardi, David; Huber, Daniel; Foreman-Mackey, Daniel; Montet, Benjamin Tyler; Caldwell, Douglas
2015-08-01
Kepler-296 is a binary star system with two M-dwarf components separated by 0.2 arcsec. Five transiting planets have been confirmed to be associated with the Kepler-296 system; given the evidence to date, however, the planets could in principle orbit either star. This ambiguity has made it difficult to constrain both the orbital and physical properties of the planets. Using both statistical and analytical arguments, this paper shows that all five planets are highly likely to orbit the primary star in this system. We performed a Markov chain Monte Carlo simulation using a five-transiting-planet model with uniform priors on the stellar density and dilution. Using importance sampling, we compared the model probabilities under the priors of the planets orbiting either the brighter or the fainter component of the binary. A model where the planets orbit the brighter component, Kepler-296A, is strongly preferred by the data. Combined with our assertion that all five planets orbit the same star, the two outer planets in the system, Kepler-296 Ae and Kepler-296 Af, have radii of 1.53 ± 0.26 and 1.80 ± 0.31 R⊕, respectively, and receive incident stellar fluxes of 1.40 ± 0.23 and 0.62 ± 0.10 times the incident flux the Earth receives from the Sun. This level of irradiation places both planets within or close to the circumstellar habitable zone of their parent star.
NASA Astrophysics Data System (ADS)
Rees, L. B.
1990-12-01
It has long been recognized that PIXE (particle-induced X-ray emission) spectra from thick targets need to be modified with respect to the thin target spectra used for calibration. This is due to the degradation of the energy of the protons entering the sample and the attenuation of the X-rays emerging from the sample. Thick-target corrections typically assume the target to be composed of a layer of sample material having uniform thickness. Because many environmental samples, however, are composed of particles averaging several μm in diameter, the usual thick-target corrections are inappropriate. It has previously been shown that size corrections for spherical particles of homogeneous composition can be significant. In the current work a method is presented which employs Monte Carlo techniques to calculate X-ray intensity corrections for particles of arbitrary shape, composition, orientation and size distribution. Empirical equations for proton stopping power and X-ray production cross sections are used in conjunction with X-ray attenuation coefficients to calculate the intensity of the emergent beam. The uncertainty associated with the Monte Carlo calculation is also explored. It is shown that the spherical particle corrections are approximately correct for particles of near-spherical shape; however, they are inadequate for highly elongated or flattened particles or for particles of nonuniform composition.
Sample Bytes to Protect Important Data from Unintentional Transmission in Advanced Embedded Device
NASA Astrophysics Data System (ADS)
Chung, Bo-Heung; Kim, Jung-Nye
Illegal or unintentional transmission of important data is a major security issue in embedded and mobile devices. Given restricted resources such as small memory size and low battery capacity, a simple and efficient method is needed to prevent this illegal activity without great effort. We therefore discuss a protection technique that takes these constraints into account. In our method, sample bytes are extracted from an important file and then used to prohibit illegal file transfer and modification. To keep an attacker from easily predicting the positions of the sample bytes, they are selected at random locations distributed evenly across the whole extent of the file. To avoid a large increase in the number of sample bytes, the candidate sampling area of the file is chosen carefully after an analysis of the lengths and number of files. Also, considering the computational overhead of calculating the number and positions of the sample bytes to be selected, we propose three types of sampling methods. We present an evaluation of these methods and recommend a suitable sampling approach for embedded devices with low computational power. With the help of this technique, data leakage can be prevented effectively and the device can be managed securely with low overhead.
ROMERO,VICENTE J.
2000-05-04
In order to devise an algorithm for autonomously terminating Monte Carlo sampling when sufficiently small and reliable confidence intervals (CI) are achieved on calculated probabilities, the behavior of CI estimators must be characterized. This knowledge is also required when comparing the accuracy of other probability estimation techniques to Monte Carlo results. Based on 100 trials in a hypothesis test, estimated 95% CI from classical approximate CI theory are empirically examined to determine whether they behave as true 95% CI over spectra of probabilities (population proportions) ranging from 0.001 to 0.99 in a test problem. Tests are conducted for population sizes of 500 and 10,000 samples where applicable. Significant differences between true and estimated 95% CI are found to occur at probabilities between 0.1 and 0.9, such that the estimated 95% CI can be rejected as not being true 95% CI with less than a 40% chance of incorrect rejection. With regard to Latin hypercube sampling (LHS), though no general theory has been verified for accurately estimating LHS CI, recent numerical experiments on the test problem have found LHS to be conservatively over an order of magnitude more efficient than simple random sampling (SRS) for similarly sized CI on probabilities ranging between 0.25 and 0.75. The efficiency advantage of LHS vanishes, however, as the probability extremes of 0 and 1 are approached.
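The classical approximate CI examined above is the familiar normal-theory (Wald) interval on a sample proportion; a minimal sketch, with an arbitrary illustrative true proportion:

```python
import math
import random

# Classical approximate 95% confidence interval on a probability
# estimated from n Monte Carlo samples: p_hat +/- 1.96 * sqrt(p_hat(1-p_hat)/n).
# Whether this estimated interval behaves as a true 95% CI is exactly
# what the study above examines empirically.
def wald_ci_95(successes, n):
    p_hat = successes / n
    half = 1.96 * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - half, p_hat + half

rng = random.Random(42)
p_true = 0.3            # arbitrary illustrative population proportion
n = 10_000
successes = sum(rng.random() < p_true for _ in range(n))
lo, hi = wald_ci_95(successes, n)
```

An autonomous termination rule of the kind motivated above would keep sampling until `hi - lo` falls below a target width.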
Silvia, Paul J; Kwapil, Thomas R; Walsh, Molly A; Myin-Germeys, Inez
2014-03-01
Experience-sampling research involves trade-offs between the number of questions asked per signal, the number of signals per day, and the number of days. By combining planned missing-data designs and multilevel latent variable modeling, we show how to reduce the items per signal without reducing the number of items. After illustrating different designs using real data, we present two Monte Carlo studies that explored the performance of planned missing-data designs across different within-person and between-person sample sizes and across different patterns of response rates. The missing-data designs yielded unbiased parameter estimates but slightly higher standard errors. With realistic sample sizes, even designs with extensive missingness performed well, so these methods are promising additions to an experience-sampler's toolbox.
A field test of cut-off importance sampling for bole volume
Jeffrey H. Gove; Harry T. Valentine; Michael J. Holmes
2000-01-01
Cut-off importance sampling has recently been introduced as a technique for estimating bole volume to some point below the tree tip, termed the cut-off point. A field test of this technique was conducted on a small population of eastern white pine trees using dendrometry as the standard for volume estimation. Results showed that the differences in volume estimates...
Importance Sampling in the Evaluation and Optimization of Buffered Failure Probability
2015-07-01
Importance Sampling in the Evaluation and Optimization of Buffered Failure Probability. Marwan M. Harajli, Graduate Student, Dept. of Civil and Environ... Seattle, USA; Johannes O. Royset, Associate Professor, Operations Research Dept., Naval Postgraduate School, Monterey, USA. ABSTRACT: Engineering design is... criterion is usually the failure probability. In this paper, we examine the buffered failure probability as an attractive alternative to the failure
Smyth, Nina; Thorn, Lisa; Hucklebridge, Frank; Evans, Phil; Clow, Angela
2015-08-01
Indices of post awakening cortisol secretion (PACS) include the rise in cortisol (cortisol awakening response: CAR) and overall cortisol concentrations (e.g., area under the curve with reference to ground: AUCg) in the first 30-45 min. Both are commonly investigated in relation to psychosocial variables. Although sampling within the domestic setting is ecologically valid, participant non-adherence to the required timing protocol results in erroneous measurement of PACS, and this may explain discrepancies in the literature linking these measures to trait well-being (TWB). We have previously shown that delays of little over 5 min (between awakening and the start of sampling) result in erroneous CAR estimates. In this study, we report for the first time on the negative impact of sample timing inaccuracy (verified by electronic monitoring) on the ability to detect significant relationships between PACS and TWB when measured in the domestic setting. Healthy females (N=49, 20.5±2.8 years) selected for differences in TWB collected saliva samples (S1-4) on 4 days at 0, 15, 30, and 45 min post awakening to determine PACS. Adherence to the sampling protocol was objectively monitored using a combination of electronic estimates of awakening (actigraphy) and sampling times (track caps). Relationships between PACS and TWB were found to depend on sample timing accuracy. Lower TWB was associated with higher post awakening cortisol AUCg in proportion to the mean sample timing accuracy (p<.005). There was no association between TWB and the CAR, even taking into account sample timing accuracy. These results highlight the importance of careful electronic monitoring of participant adherence for measurement of PACS in the domestic setting. Mean sample timing inaccuracy, mainly associated with delays of >5 min between awakening and collection of sample 1 (median = 8 min delay), negatively impacts the sensitivity of analysis to detect associations between PACS and TWB.
Huang, Wei; Lin, Zhixiong; van Gunsteren, Wilfred F
2014-06-19
The predictive power of biomolecular simulation critically depends on the quality of the force field or molecular model used and on the extent of conformational sampling that can be achieved. Both issues are addressed. First, it is shown that widely used force fields for simulation of proteins in aqueous solution appear to have rather different propensities to stabilize or destabilize α-, π-, and 3(10)-helical structures, which is an important feature of a biomolecular force field due to the omnipresence of such secondary structure in proteins. Second, the relative stability of secondary structure elements in proteins can only be computationally determined through so-called free-energy calculations, the accuracy of which critically depends on the extent of configurational sampling. It is shown that the method of enveloping distribution sampling is a very efficient way to extensively sample different parts of configurational space.
Tang, Ke; Zhang, Jinfeng; Liang, Jie
2017-01-10
Antibodies recognize antigens through the complementarity-determining region (CDR) formed by six hypervariable loops that are crucial for the diversity of antigen specificities. Among the six CDR loops, the H3 loop is the most challenging to predict because of its much higher variation in sequence length and identity, resulting in a much larger and more complex structural space compared to the other five loops. We developed a novel method based on a chain-growth sequential Monte Carlo method, called distance-guided sequential chain-growth Monte Carlo for H3 loops (DiSGro-H3). The new method samples protein chains in both forward and backward directions. It can efficiently generate low-energy, near-native H3 loop structures using the conformation types predicted from the sequences of H3 loops. DiSGro-H3 performs significantly better than another ab initio method, RosettaAntibody, in both sampling and prediction, while taking less computational time. It performs comparably to template-based methods. As an ab initio method, DiSGro-H3 offers satisfactory accuracy while being able to predict any H3 loop without templates.
Pavlou, Andrew T.; Ji, Wei; Brown, Forrest B.
2016-01-23
Here, a proper treatment of thermal neutron scattering requires accounting for chemical binding through a scattering law S(α,β,T). Monte Carlo codes sample the secondary neutron energy and angle after a thermal scattering event from probability tables generated from S(α,β,T) tables at discrete temperatures, requiring a large amount of data for multiscale and multiphysics problems with detailed temperature gradients. We have previously developed a method to handle this temperature dependence on-the-fly during the Monte Carlo random walk using polynomial expansions in 1/T to directly sample the secondary energy and angle. In this paper, the on-the-fly method is implemented into MCNP6 and tested in both graphite-moderated and light water-moderated systems. The on-the-fly method is compared with the thermal ACE libraries that come standard with MCNP6, yielding good agreement with integral reactor quantities like k-eigenvalue and differential quantities like single-scatter secondary energy and angle distributions. The simulation runtimes are comparable between the two methods (on the order of 5–15% difference for the problems tested) and the on-the-fly fit coefficients only require 5–15 MB of total data storage.
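The on-the-fly fits described above replace temperature-interpolated tables with polynomial expansions in 1/T evaluated during the random walk; a minimal sketch of such an evaluation (the coefficient values are made up for illustration, not actual MCNP6 fit data):

```python
# Horner evaluation of a polynomial in 1/T, the functional form used by
# on-the-fly temperature fits: coeffs[k] multiplies (1/T)**k.
# Coefficient values below are illustrative only.
def eval_inverse_T_poly(coeffs, T):
    x = 1.0 / T
    result = 0.0
    for c in reversed(coeffs):   # Horner's rule in powers of 1/T
        result = result * x + c
    return result

# e.g. the hypothetical fit 1.0 + 2.0/T evaluated at T = 2.0 gives 2.0
value = eval_inverse_T_poly([1.0, 2.0], 2.0)
```

Storing a handful of coefficients per quantity, rather than full tables at many discrete temperatures, is what keeps the reported data footprint at 5-15 MB.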
An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions
Li, Weixuan; Lin, Guang
2015-03-21
Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes’ rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computational-demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with limited number of forward simulations.
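The first ingredient, a Gaussian-mixture proposal for a multimodal posterior, can be sketched in one dimension; the bimodal target and the proposal parameters below are hypothetical stand-ins (and the polynomial chaos surrogate is omitted):

```python
import math
import random

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def target(x):
    # hypothetical bimodal posterior: equal mixture of N(-3,1) and N(3,1)
    return 0.5 * norm_pdf(x, -3.0, 1.0) + 0.5 * norm_pdf(x, 3.0, 1.0)

def gm_sample(rng):
    # two-component Gaussian mixture proposal matched to the modes,
    # deliberately a little wider than the target for weight stability
    mu = -3.0 if rng.random() < 0.5 else 3.0
    return rng.gauss(mu, 1.5)

def gm_pdf(x):
    return 0.5 * norm_pdf(x, -3.0, 1.5) + 0.5 * norm_pdf(x, 3.0, 1.5)

rng = random.Random(0)
xs = [gm_sample(rng) for _ in range(50_000)]
ws = [target(x) / gm_pdf(x) for x in xs]          # importance weights
# self-normalized estimate of E[x^2] under the target (exact value is 10)
second_moment = sum(w * x * x for w, x in zip(ws, xs)) / sum(ws)
```

The adaptive part of the algorithm would refit the mixture (number of components, means, covariances) from the weighted samples and iterate; a single-Gaussian proposal centered between the modes would weight the valley heavily and miss both.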
Catching Stardust and Bringing it Home: The Astronomical Importance of Sample Return
NASA Astrophysics Data System (ADS)
Brownlee, D.
2002-12-01
orbit of Mars will provide important insight into the materials, environments and processes that occurred from the central regions to the outer fringes of the solar nebula. One of the most exciting aspects of the January 2006 return of comet samples will be the synergistic linking of data on real comet and interstellar dust samples with the vast amount of astronomical data on these materials and on analogous particles that orbit other stars. Stardust is a NASA Discovery mission that has successfully traveled over 2.5 billion kilometers.
Baba, Justin S; Koju, Vijay; John, Dwayne O
2016-01-01
The modulation of the state of polarization of photons due to scatter generates an associated geometric phase that is being investigated as a means for decreasing the degree of uncertainty in back-projecting the paths traversed by photons detected in backscattered geometry. In our previous work, we established that the polarimetrically detected Berry phase correlates with the mean photon penetration depth of the backscattered photons collected for image formation. In this work, we report on the impact of state-of-linear-polarization (SOLP) filtering on both the magnitude and population distributions of image-forming detected photons as a function of the absorption coefficient of the scattering sample. The results, based on a Berry-phase-tracking implementation of a polarized Monte Carlo code, indicate that sample absorption plays a significant role in the mean depth attained by the image-forming backscattered detected photons.
Importance sampling variance reduction for the Fokker-Planck rarefied gas particle method
NASA Astrophysics Data System (ADS)
Collyer, B. S.; Connaughton, C.; Lockerby, D. A.
2016-11-01
The Fokker-Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.
Fragoso, Zachary L; Holcombe, Kyla J; McCluney, Courtney L; Fisher, Gwenith G; McGonagle, Alyssa K; Friebe, Susan J
2016-06-09
This study's purpose was twofold: first, to examine the relative importance of job demands and resources as predictors of burnout and engagement, and second, the relative importance of engagement and burnout related to health, depressive symptoms, work ability, organizational commitment, and turnover intentions in two samples of health care workers. Nurse leaders (n = 162) and licensed emergency medical technicians (EMTs; n = 102) completed surveys. In both samples, job demands predicted burnout more strongly than job resources, and job resources predicted engagement more strongly than job demands. Engagement held more weight than burnout for predicting commitment, and burnout held more weight for predicting health outcomes, depressive symptoms, and work ability. Results have implications for the design, evaluation, and effectiveness of workplace interventions to reduce burnout and improve engagement among health care workers. Actionable recommendations for increasing engagement and decreasing burnout in health care organizations are provided.
Thomas B. Lynch; Jeffrey H. Gove
2013-01-01
Critical height sampling (CHS) estimates cubic volume per unit area by multiplying the sum of critical heights measured on trees tallied in a horizontal point sample (HPS) by the HPS basal area factor. One of the barriers to practical application of CHS is the fact that trees near the field location of the point-sampling sample point have critical heights that occur...
Importance sampling allows Hd true tests of highly discriminating DNA profiles.
Taylor, Duncan; Curran, James M; Buckleton, John
2017-03-01
Hd true testing is a way of assessing the performance of a model, or DNA profile interpretation system. These tests involve simulating DNA profiles of non-donors to a DNA mixture and calculating a likelihood ratio (LR) with one proposition postulating their contribution and the alternative postulating their non-contribution. Following Turing, it is possible to predict that "the average LR for the Hd true tests should be one" [1]. This suggests a way of validating software. During discussions on the ISFG software validation guidelines [2] it was argued by some that this prediction had not been sufficiently examined experimentally to serve as a criterion for validation. More recently a high-profile report [3] has emphasised large-scale empirical examination. A limitation of Hd true tests, when non-donor profiles are generated at random (or in accordance with expectation from allele frequencies), is that the number of tests required depends on the discrimination power of the evidence profile. If the Hd true tests are to fully explore the genotype space that yields non-zero LRs, then the number of simulations required could run to tens of orders of magnitude (well outside practical computing limits). We describe here the use of importance sampling, which allows rare events to be simulated more commonly than they would occur at random, with the bias then adjusted for at the end of the simulation in order to recover all diagnostic values of interest. Importance sampling, while having been employed by others for Hd true tests, is largely unknown in forensic genetics. We take time in this paper to explain how importance sampling works, the advantages of using it and its application to Hd true tests. We conclude by showing that employing an importance sampling scheme brings Hd true testing ability to all profiles, regardless of discrimination power.
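The weight-correction idea can be sketched on a standard rare-event toy problem (a Gaussian tail probability, not a genotype simulation): sample from a proposal shifted onto the rare region, then undo the bias with likelihood-ratio weights:

```python
import math
import random

def norm_pdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

# Estimate the rare probability p = P(Z > 4) for Z ~ N(0,1) by sampling
# from the shifted proposal N(4,1), so the "rare" event occurs about half
# the time, and re-weighting each hit by the likelihood ratio p(z)/q(z).
rng = random.Random(7)
n = 100_000
total = 0.0
for _ in range(n):
    z = rng.gauss(4.0, 1.0)                           # biased proposal draw
    if z > 4.0:                                       # the event of interest
        total += norm_pdf(z, 0.0) / norm_pdf(z, 4.0)  # importance weight
p_hat = total / n
# the exact tail probability is about 3.17e-5; naive sampling would need
# millions of draws to observe even a handful of events
```

The same re-weighting recovers unbiased diagnostic values in the Hd-true setting, where the over-sampled rare events are high-LR non-donor genotypes rather than Gaussian tail draws.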
Longuespée, Rémi; Alberts, Deborah; Pottier, Charles; Smargiasso, Nicolas; Mazzucchelli, Gabriel; Baiwir, Dominique; Kriegsmann, Mark; Herfs, Michael; Kriegsmann, Jörg; Delvenne, Philippe; De Pauw, Edwin
2016-07-15
Proteomic methods are today widely applied to formalin-fixed paraffin-embedded (FFPE) tissue samples for several applications in research, especially in molecular pathology. To date, there is an unmet need for the analysis of small tissue samples, such as for early cancerous lesions. Indeed, no method has yet been proposed for the reproducible processing of small FFPE tissue samples to allow biomarker discovery. In this work, we tested several procedures to process laser microdissected tissue pieces bearing less than 3000 cells. Combined with appropriate settings for liquid chromatography mass spectrometry-mass spectrometry (LC-MS/MS) analysis, a citric acid antigen retrieval (CAAR)-based procedure was established, allowing the identification of more than 1400 proteins from a single microdissected breast cancer tissue biopsy. This work demonstrates important considerations concerning the handling and processing of laser microdissected tissue samples of extremely limited size, in the process opening new perspectives in molecular pathology. A proof of principle of the proposed method for biomarker discovery, with respect to these specific handling considerations, is illustrated using the differential proteomic analysis of invasive breast carcinoma of no special type and invasive lobular triple-negative breast cancer tissues. This work will be of utmost importance for early biomarker discovery or in support of matrix-assisted laser desorption/ionization (MALDI) imaging for microproteomics from small regions of interest. Copyright © 2016. Published by Elsevier Inc.
Salter, Tara La Roche; Bunch, Josephine; Gilmore, Ian S
2014-09-16
Many different types of samples have been analyzed in the literature using plasma-based ambient mass spectrometry sources; however, comprehensive studies of the important parameters for analysis are only just beginning. Here, we investigate the effect of the sample form and surface temperature on the signal intensities in plasma-assisted desorption ionization (PADI). The form of the sample is very important, with powders of all volatilities effectively analyzed. However, for the analysis of thin films at room temperature and using a low plasma power, a vapor pressure of greater than 10(-4) Pa is required to achieve a sufficiently good quality spectrum. Using thermal desorption, we are able to increase the signal intensity of less volatile materials with vapor pressures less than 10(-4) Pa, in thin film form, by between 4 and 7 orders of magnitude. This is achieved by increasing the temperature of the sample up to a maximum of 200 °C. Thermal desorption can also increase the signal intensity for the analysis of powders.
Reconstruction of Monte Carlo replicas from Hessian parton distributions
NASA Astrophysics Data System (ADS)
Hou, Tie-Jiun; Gao, Jun; Huston, Joey; Nadolsky, Pavel; Schmidt, Carl; Stump, Daniel; Wang, Bo-Ting; Xie, Ke Ping; Dulat, Sayipjamal; Pumplin, Jon; Yuan, C. P.
2017-03-01
We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte-Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte-Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte-Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte-Carlo representation are derived. A correction is proposed to address a bias in asymmetric uncertainties introduced by the Taylor series approximation. A numerical program is made available for conversion of Hessian PDFs into Monte-Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
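A minimal sketch of the Hessian-to-replica idea follows, with hypothetical toy data standing in for the CT14 error sets; this is not the published conversion program. Each replica displaces the central value along every Hessian eigenvector direction, with a standard-normal draw choosing both the magnitude and whether the plus or minus displacement is used (so asymmetric errors are preserved):

```python
import random

def hessian_to_replicas(f0, f_plus, f_minus, n_rep=5000, seed=0):
    """Convert a central value f0 and per-eigenvector displaced values
    (f_plus[i], f_minus[i]) into Monte Carlo replicas. For each replica and
    each eigenvector i, draw r_i ~ N(0,1) and shift by |r_i| times the plus
    displacement if r_i >= 0, else the minus displacement."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_rep):
        f = f0
        for fp, fm in zip(f_plus, f_minus):
            r = rng.gauss(0.0, 1.0)
            f += abs(r) * ((fp - f0) if r >= 0 else (fm - f0))
        reps.append(f)
    return reps
```

For symmetric displacements this reduces to a Gaussian spread of half-width |f_plus - f0|; with asymmetric inputs the replica distribution is skewed accordingly.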
Baccouche, S; Al-Azmi, D; Karunakara, N; Trabelsi, A
2012-01-01
Gamma-ray measurements in terrestrial/environmental samples require the use of highly efficient detectors because of the low radionuclide activity concentrations in the samples; scintillators are thus suitable for this purpose. Two scintillation detectors of identical size, CsI(Tl) and NaI(Tl), were studied in this work for the measurement of terrestrial samples. This work describes a Monte Carlo method for constructing the full-energy efficiency calibration curves for both detectors using gamma-ray energies associated with the decay of naturally occurring radionuclides (137)Cs (661 keV), (40)K (1460 keV), (238)U ((214)Bi, 1764 keV) and (232)Th ((208)Tl, 2614 keV), which are found in terrestrial samples. The magnitude of the coincidence summing effect occurring for the 2614 keV emission of (208)Tl is assessed by simulation. The method provides an efficient tool for making the full-energy efficiency calibration curve for scintillation detectors for any sample geometry and volume in order to determine accurate activity concentrations in terrestrial samples.
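Turning a handful of simulated full-energy efficiencies into a continuous calibration curve is commonly done with a log-log fit. A minimal sketch, assuming a simple power-law efficiency model (illustrative only, not the authors' procedure):

```python
import math

def fit_loglog_efficiency(points):
    """Least-squares fit of ln(eff) = a + b*ln(E) through calibration points
    [(energy_keV, efficiency), ...], e.g. MC-computed efficiencies at
    661, 1460, 1764 and 2614 keV. Returns a callable efficiency curve."""
    xs = [math.log(e) for e, _ in points]
    ys = [math.log(eff) for _, eff in points]
    n = len(points)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return lambda E: math.exp(a + b * math.log(E))
```

Real detector efficiency curves often need a higher-order polynomial in ln(E); the straight-line form is kept here only to make the fitting step transparent.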
The importance of a priori sample size estimation in strength and conditioning research.
Beck, Travis W
2013-08-01
The statistical power, or sensitivity of an experiment, is defined as the probability of rejecting a false null hypothesis. Only 3 factors can affect statistical power: (a) the significance level (α), (b) the magnitude or size of the treatment effect (effect size), and (c) the sample size (n). Of these 3 factors, only the sample size can be manipulated by the investigator because the significance level is usually selected before the study, and the effect size is determined by the effectiveness of the treatment. Thus, selection of an appropriate sample size is one of the most important components of research design but is often misunderstood by beginning researchers. The purpose of this tutorial is to describe procedures for estimating sample size for a variety of different experimental designs that are common in strength and conditioning research. Emphasis is placed on selecting an appropriate effect size because this step fully determines sample size when power and the significance level are fixed. There are many different software packages that can be used for sample size estimation. However, I chose to describe the procedures for the G*Power software package (version 3.1.4) because this software is freely downloadable and capable of estimating sample size for many of the different statistical tests used in strength and conditioning research. Furthermore, G*Power provides a number of different auxiliary features that can be useful for researchers when designing studies. It is my hope that the procedures described in this article will be beneficial for researchers in the field of strength and conditioning.
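The dependence of sample size on effect size, significance level, and power that the tutorial describes can be made concrete with the standard normal-approximation formula for a two-sided, two-sample comparison. This is a sketch of the textbook approximation; the exact t-based answer from a tool such as G*Power is typically one or two subjects larger per group:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided, two-sample t-test:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2, where d is Cohen's
    standardized effect size."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    zb = z.inv_cdf(power)          # quantile delivering the desired power
    return ceil(2 * ((za + zb) / d) ** 2)
```

Note how strongly the effect size dominates: halving d from 0.8 ("large") to 0.4 quadruples the required n, which is why the article stresses choosing d carefully.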
Carvalho, Ana Maria; Frazão-Moreira, Amélia
2011-11-23
Many European protected areas were legally created to preserve and maintain biological diversity, unique natural features and associated cultural heritage. Built over centuries as a result of geographical and historical factors interacting with human activity, these territories are reservoirs of resources, practices and knowledge that have been the essential basis of their creation. Under social and economical transformations several components of such areas tend to be affected and their protection status endangered. Carrying out ethnobotanical surveys and extensive field work using anthropological methodologies, particularly with key-informants, we report changes observed and perceived in two natural parks in Trás-os-Montes, Portugal, that affect local plant-use systems and consequently local knowledge. By means of informants' testimonies and of our own observation and experience we discuss the importance of local knowledge and of local communities' participation to protected areas design, management and maintenance. We confirm that local knowledge provides new insights and opportunities for sustainable and multipurpose use of resources and offers contemporary strategies for preserving cultural and ecological diversity, which are the main purposes and challenges of protected areas. To be successful it is absolutely necessary to make people active participants, not simply integrate and validate their knowledge and expertise. Local knowledge is also an interesting tool for educational and promotional programs. PMID:22112242
NASA Astrophysics Data System (ADS)
Naqvi, A. A.
2003-08-01
Monte Carlo calculations have been carried out to determine the prompt gamma ray yield from a Portland cement sample using keV neutrons from a 3H(p,n) reaction with a Maxwellian energy distribution with kT=52 keV. This work is a part of wider Monte Carlo studies being conducted at the King Fahd University of Petroleum and Minerals (KFUPM) in search of a more efficient neutron source for its D(d,n) reaction based (2.8 MeV neutrons) Prompt Gamma Ray Neutron Activation Analysis (PGNAA) facility. In this study a 3H(p,n) reaction based PGNAA setup was simulated. For comparison purposes, the diameter of a cylindrical external moderator of the 3H(p,n) reaction based PGNAA setup was assumed to be similar to the one used in the KFUPM PGNAA setup. The results of this study revealed that the optimum geometry of the 3H(p,n) reaction based setup is different from that of the KFUPM PGNAA facility. The performance of the 3H(p,n) reaction based setup is also better than that of the 2.8 MeV neutrons based KFUPM facility, and its prompt gamma ray yield is about 60-70% higher than that from the 2.8 MeV neutrons based facility. This study has provided a theoretical basis for an experimental test of a 3H(p,n) reaction based setup.
NASA Technical Reports Server (NTRS)
Welzenbach, L. C.; McCoy, T. J.; Glavin, D. P.; Dworkin, J. P.; Abell, P. A.
2012-01-01
turn led to a new wave of Mars exploration that ultimately could lead to sample return focused on evidence for past or present life. This partnership between collections and missions will be increasingly important in the coming decades as we discover new questions to be addressed and identify targets for both robotic and human exploration. Nowhere is this more true than in the ultimate search for the abiotic and biotic processes that produced life. Existing collections also provide the essential materials for developing and testing new analytical schemes to detect the rare markers of life and distinguish them from abiotic processes. Large collections of meteorites and the new types being identified within these collections, which come to us at a fraction of the cost of a sample return mission, will continue to shape the objectives of future missions and provide new ways of interpreting returned samples.
Sampling high-altitude and stratified mating flights of red imported fire ant.
Fritz, Gary N; Fritz, Ann H; Vander Meer, Robert K
2011-05-01
With the exception of an airplane equipped with nets, no method has been developed that successfully samples red imported fire ant, Solenopsis invicta Buren, sexuals in mating/dispersal flights throughout their potential altitudinal trajectories. We developed and tested a method for sampling queens and males during mating flights at altitudinal intervals reaching as high as ~140 m. Our trapping system uses an electric winch and a 1.2-m spindle bolted to a swiveling platform. The winch dispenses up to 183 m of Kevlar-core, nylon rope and the spindle stores 10 panels (0.9 by 4.6 m each) of nylon tulle impregnated with Tangle-Trap. The panels can be attached to the rope at various intervals and hoisted into the air by using a 3-m-diameter, helium-filled balloon. Raising or lowering all 10 panels takes approximately 15-20 min. This trap also should be useful for altitudinal sampling of other insects of medical importance.
2009-08-01
The n(MC)2 method [J. Chem. Phys. 130, 164104 (2009)] is applied to fluid N2. In this implementation of n(MC)2, isothermal-isobaric (NPT) ensemble sampling on the basis of a pair ... and Wk is a thermodynamic function appropriate to the ensemble being sampled. In the isothermal-isobaric (NPT) ensemble used below, W is defined as Wk ...
Classifying Imbalanced Data Streams via Dynamic Feature Group Weighting with Importance Sampling.
Wu, Ke; Edwards, Andrea; Fan, Wei; Gao, Jing; Zhang, Kun
2014-04-01
Data stream classification and imbalanced data learning are two important areas of data mining research. Each has been well studied to date with many interesting algorithms developed. However, only a few approaches reported in literature address the intersection of these two fields due to their complex interplay. In this work, we proposed an importance sampling driven, dynamic feature group weighting framework (DFGW-IS) for classifying data streams of imbalanced distribution. Two components are tightly incorporated into the proposed approach to address the intrinsic characteristics of concept-drifting, imbalanced streaming data. Specifically, the ever-evolving concepts are tackled by a weighted ensemble trained on a set of feature groups with each sub-classifier (i.e. a single classifier or an ensemble) weighted by its discriminative power and stability. The uneven class distribution, on the other hand, is typically battled by the sub-classifier built in a specific feature group with the underlying distribution rebalanced by the importance sampling technique. We derived the theoretical upper bound for the generalization error of the proposed algorithm. We also studied the empirical performance of our method on a set of benchmark synthetic and real world data, and significant improvement has been achieved over the competing algorithms in terms of standard evaluation metrics and parallel running time. Algorithm implementations and datasets are available upon request.
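The class-rebalancing role that importance sampling plays inside each feature group can be illustrated with a minimal resampling helper (a hypothetical sketch, not the DFGW-IS implementation): examples in a stream chunk are redrawn with weights inversely proportional to their class frequency, so the minority class is represented roughly equally after resampling.

```python
import random
from collections import Counter

def rebalance(chunk, seed=0):
    """Importance-sampling rebalance of one stream chunk. `chunk` is a list
    of (features, label) pairs; each example is resampled with replacement
    with weight 1/count(label), so rare labels are drawn more often."""
    rng = random.Random(seed)
    counts = Counter(label for _, label in chunk)
    weights = [1.0 / counts[label] for _, label in chunk]
    return rng.choices(chunk, weights=weights, k=len(chunk))
```

A 90/10 chunk comes out approximately 50/50; a downstream learner can then be trained on the rebalanced chunk, with the sampling weights available to correct any statistics back to the original distribution.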
NASA Astrophysics Data System (ADS)
Chiruta, D.; Linares, J.; Dahoo, P. R.; Dimian, M.
2015-02-01
In spin crossover (SCO) systems, the shape of the hysteresis curves is closely related to the interactions between the molecules, which play an important role in the response of the system to an external parameter. The effects of short-range interactions on the shape of the spin transition phenomena were investigated. In this contribution we solve the corresponding Hamiltonian for a three-dimensional SCO system taking into account short-range and long-range interactions using a biased Monte Carlo entropic sampling technique and a semi-analytical method. We discuss the competition between the two interactions, which governs the low spin (LS) - high spin (HS) process for a three-dimensional network, and the cooperative effects. We demonstrate a strong correlation between the shape of the transition and the strength of the short-range interaction between molecules, and we identify the role of system size in SCO systems.
Alfaro, Michael E; Zoller, Stefan; Lutzoni, François
2003-02-01
Bayesian Markov chain Monte Carlo sampling has become increasingly popular in phylogenetics as a method for both estimating the maximum likelihood topology and for assessing nodal confidence. Despite the growing use of posterior probabilities, the relationship between the Bayesian measure of confidence and the most commonly used confidence measure in phylogenetics, the nonparametric bootstrap proportion, is poorly understood. We used computer simulation to investigate the behavior of three phylogenetic confidence methods: Bayesian posterior probabilities calculated via Markov chain Monte Carlo sampling (BMCMC-PP), maximum likelihood bootstrap proportion (ML-BP), and maximum parsimony bootstrap proportion (MP-BP). We simulated the evolution of DNA sequence on 17-taxon topologies under 18 evolutionary scenarios and examined the performance of these methods in assigning confidence to correct monophyletic and incorrect monophyletic groups, and we examined the effects of increasing character number on support value. BMCMC-PP and ML-BP were often strongly correlated with one another but could provide substantially different estimates of support on short internodes. In contrast, BMCMC-PP correlated poorly with MP-BP across most of the simulation conditions that we examined. For a given threshold value, more correct monophyletic groups were supported by BMCMC-PP than by either ML-BP or MP-BP. When threshold values were chosen that fixed the rate of accepting incorrect monophyletic relationship as true at 5%, all three methods recovered most of the correct relationships on the simulated topologies, although BMCMC-PP and ML-BP performed better than MP-BP. BMCMC-PP was usually a less biased predictor of phylogenetic accuracy than either bootstrapping method. BMCMC-PP provided high support values for correct topological bipartitions with fewer characters than was needed for nonparametric bootstrap.
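The nonparametric bootstrap proportion at the heart of ML-BP and MP-BP reduces, in miniature, to resampling the data with replacement and counting how often a statistic of interest holds. A generic sketch (not phylogenetics-specific; in the simulations above the "statistic" would be recovery of a given bipartition by the tree search):

```python
import random

def bootstrap_proportion(data, statistic, n_boot=2000, seed=0):
    """Nonparametric bootstrap proportion: resample `data` with replacement
    n_boot times and return the fraction of replicates for which the boolean
    `statistic` holds."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]
        hits += statistic(resample)  # bool counts as 0/1
    return hits / n_boot
```

The Bayesian posterior probability, by contrast, is read directly off the MCMC sample as the fraction of visited trees containing the clade, which is one reason the two measures need not agree on short internodes.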
Biau, David Jean; Kernéis, Solen; Porcher, Raphaël
2008-09-01
The increasing volume of research by the medical community often leads to increasing numbers of contradictory findings and conclusions. Although the differences observed may represent true differences, the results also may differ because of sampling variability as all studies are performed on a limited number of specimens or patients. When planning a study reporting differences among groups of patients or describing some variable in a single group, sample size should be considered because it allows the researcher to control for the risk of reporting a false-negative finding (Type II error) or to estimate the precision his or her experiment will yield. Equally important, readers of medical journals should understand sample size because such understanding is essential to interpret the relevance of a finding with regard to their own patients. At the time of planning, the investigator must establish (1) a justifiable level of statistical significance, (2) the chances of detecting a difference of given magnitude between the groups compared, i.e., the power, (3) this targeted difference (i.e., effect size), and (4) the variability of the data (for quantitative data). We believe correct planning of experiments is an ethical issue of concern to the entire community.
Egger, C; Maurer, M
2015-04-15
Urban drainage design relying on observed precipitation series neglects the uncertainties associated with current and indeed future climate variability. Urban drainage design is further affected by the large stochastic variability of precipitation extremes and sampling errors arising from the short observation periods of extreme precipitation. Stochastic downscaling addresses anthropogenic climate impact by allowing relevant precipitation characteristics to be derived from local observations and an ensemble of climate models. This multi-climate model approach seeks to reflect the uncertainties in the data due to structural errors of the climate models. An ensemble of outcomes from stochastic downscaling allows for addressing the sampling uncertainty. These uncertainties are clearly reflected in the precipitation-runoff predictions of three urban drainage systems. They were mostly due to the sampling uncertainty. The contribution of climate model uncertainty was found to be of minor importance. Under the applied greenhouse gas emission scenario (A1B) and within the period 2036-2065, the potential for urban flooding in our Swiss case study is slightly reduced on average compared to the reference period 1981-2010. Scenario planning was applied to consider urban development associated with future socio-economic factors affecting urban drainage. The impact of scenario uncertainty was to a large extent found to be case-specific, thus emphasizing the need for scenario planning in every individual case. The results represent a valuable basis for discussions of new drainage design standards aiming specifically to include considerations of uncertainty.
Ait Kaci Azzou, Sadoune; Larribe, Fabrice; Froda, Sorana
2015-01-01
The effective population size over time (demographic history) can be retraced from a sample of contemporary DNA sequences. In this paper, we propose a novel methodology based on importance sampling (IS) for exploring such demographic histories. Our starting point is the generalized skyline plot with the main difference being that our procedure, the skywis plot, uses a large number of genealogies. The information provided by these genealogies is combined according to the IS weights. Thus, we compute a weighted average of the effective population sizes on specific time intervals (epochs), where the genealogies that agree more with the data are given more weight. We illustrate by a simulation study that the skywis plot correctly reconstructs the recent demographic history under the scenarios most commonly considered in the literature. In particular, our method can capture a change point in the effective population size, and its overall performance is comparable with that of the Bayesian skyline plot. We also introduce the case of serially sampled sequences and illustrate that it is possible to improve the performance of the skywis plot in the case of an exponential expansion of the effective population size. PMID:26300910
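The weighted-average step of the skywis plot — combining per-genealogy effective-size estimates according to their importance-sampling weights — can be sketched as follows. The (weight, sizes) pair structure is a hypothetical simplification for illustration, not the paper's data layout:

```python
def skywis_epoch_sizes(genealogies, n_epochs):
    """Combine per-genealogy effective population size estimates into one
    weighted skyline. `genealogies` is a list of (is_weight, sizes) pairs,
    where `sizes` holds one effective-size estimate per epoch; genealogies
    that agree more with the data carry larger IS weights."""
    totals = [0.0] * n_epochs
    wsum = 0.0
    for w, sizes in genealogies:
        wsum += w
        for i, s in enumerate(sizes):
            totals[i] += w * s
    return [t / wsum for t in totals]
```

With many sampled genealogies, epochs where the high-weight genealogies agree produce a stable size estimate, while disagreement shows up as sensitivity to the weighting.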
Randeniya, S; Mirkovic, D; Titt, U; Guan, F; Mohan, R
2014-06-01
Purpose: In intensity modulated proton therapy (IMPT), energy dependent protons per monitor unit (MU) calibration factors are important parameters that determine absolute dose values from energy deposition data obtained from Monte Carlo (MC) simulations. The purpose of this study was to assess the sensitivity of MC-computed absolute dose distributions to the protons/MU calibration factors in IMPT. Methods: A “verification plan” (i.e., treatment beams applied individually to a water phantom) of a head and neck patient plan was calculated using the MC technique. The patient plan had three beams: one posterior-anterior (PA) and two anterior oblique. The dose prescription was 66 Gy in 30 fractions. Of the total MUs, 58% was delivered in the PA beam, and 25% and 17% in the other two. Energy deposition data obtained from the MC simulation were converted to Gy using energy dependent protons/MU calibration factors obtained from two methods. The first method is based on experimental measurements and MC simulations. The second is based on hand calculations of how many ion pairs are produced per proton in the dose monitor and how many ion pairs equal 1 MU (the vendor recommended method). Dose distributions obtained from method one were compared with those from method two. Results: An average difference of 8% in protons/MU calibration factors between methods one and two translated into a 27% difference in absolute dose values for the PA beam; although the dose distributions preserved the shape of the 3D dose distribution qualitatively, they were different quantitatively. For the two oblique beams, no significant difference in absolute dose was observed. Conclusion: The results demonstrate that protons/MU calibration factors can have a significant impact on absolute dose values in IMPT, depending on the fraction of MUs delivered. As the number of MUs increases, the effect of the calibration factors is amplified. In determining protons/MU calibration factors, the experimental method should be preferred for MC dose calculations.
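The conversion itself is a linear chain, which is why an error in the protons/MU calibration factor propagates directly into the absolute dose. A minimal sketch with illustrative numbers (not the study's values):

```python
def absolute_dose(gy_per_proton, protons_per_mu, mu_delivered):
    """Convert MC energy deposition (Gy per simulated proton) to absolute
    dose: dose = (Gy/proton) * (protons/MU) * MU. Because the chain is
    linear, a relative error e in protons/MU scales the dose by (1 + e)."""
    return gy_per_proton * protons_per_mu * mu_delivered

# Hypothetical single-energy example: an 8% calibration shift gives an
# 8% dose shift for MUs delivered at that energy.
d_ref = absolute_dose(2e-9, 1.00e9, 33)
d_alt = absolute_dose(2e-9, 1.08e9, 33)
```

The larger 27% dose difference reported above presumably reflects energy-dependent calibration differences combined across the beam's energy layers, rather than a single uniform scale factor.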
Blood Sampling Seasonality as an Important Preanalytical Factor for Assessment of Vitamin D Status
Bonelli, Patrizia; Buonocore, Ruggero; Aloe, Rosalia
2016-01-01
Background: The measurement of vitamin D is now commonplace for preventing osteoporosis and restoring an appropriate concentration that would be effective to counteract the occurrence of other human disorders. The aim of this study was to establish whether blood sampling seasonality may influence total vitamin D concentration in a general population of Italian unselected outpatients. Methods: We performed a retrospective search in the laboratory information system of the University Hospital of Parma (Italy, temperate climate), to identify the values of total serum vitamin D (25-hydroxyvitamin D) measured in outpatients aged 18 years and older, who were referred for routine health check-up during the entire year 2014. Results: The study population consisted of 11,150 outpatients (median age 62 years; 8592 women and 2558 men). The concentration of vitamin D was consistently lower in samples collected in Winter than in the other three seasons. The frequency of subjects with vitamin D deficiency was approximately double in samples drawn in Winter and Spring than in Summer and Autumn. In the multivariate analysis, the concentration of total vitamin D was found to be independently associated with sex and season of blood testing, but not with the age of the patients. Conclusions: According to these findings, blood sampling seasonality should be regarded as an important preanalytical factor in vitamin D assessment. It is also reasonable to suggest that the amount of total vitamin D synthesized during the summer should be high enough to maintain the levels > 50 nmol/L throughout the remaining part of the year. PMID:28356869
Aberer, Andre J; Stamatakis, Alexandros; Ronquist, Fredrik
2016-01-01
Sampling tree space is the most challenging aspect of Bayesian phylogenetic inference. The sheer number of alternative topologies is problematic by itself. In addition, the complex dependency between branch lengths and topology increases the difficulty of moving efficiently among topologies. Current tree proposals are fast but sample new trees using primitive transformations or re-mappings of old branch lengths. This reduces acceptance rates and presumably slows down convergence and mixing. Here, we explore branch proposals that do not rely on old branch lengths but instead are based on approximations of the conditional posterior. Using a diverse set of empirical data sets, we show that most conditional branch posteriors can be accurately approximated via a [Formula: see text] distribution. We empirically determine the relationship between the logarithmic conditional posterior density, its derivatives, and the characteristics of the branch posterior. We use these relationships to derive an independence sampler for proposing branches with an acceptance ratio of ~90% on most data sets. This proposal samples branches between 2× and 3× more efficiently than traditional proposals with respect to the effective sample size per unit of runtime. We also compare the performance of standard topology proposals with hybrid proposals that use the new independence sampler to update those branches that are most affected by the topological change. Our results show that hybrid proposals can sometimes noticeably decrease the number of generations necessary for topological convergence. Inconsistent performance gains indicate that branch updates are not the limiting factor in improving topological convergence for the currently employed set of proposals. However, our independence sampler might be essential for the construction of novel tree proposals that apply more radical topology changes.
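The Metropolis independence sampler underlying the branch proposal accepts a draw x' from a fixed proposal q with probability min(1, pi(x')q(x) / (pi(x)q(x'))); the better q approximates pi, the higher the acceptance rate, which is how the approximated conditional branch posterior yields the ~90% acceptance reported above. A generic sketch (not the phylogenetic implementation):

```python
import math
import random

def independence_sampler(log_target, proposal_sample, proposal_logpdf, n, seed=0):
    """Metropolis independence sampler: proposals are drawn from a fixed
    distribution q independent of the current state, and accepted with
    probability min(1, pi(x')q(x) / (pi(x)q(x'))). Densities may be
    unnormalized; constants cancel in the log ratio."""
    rng = random.Random(seed)
    x = proposal_sample(rng)
    out = []
    for _ in range(n):
        y = proposal_sample(rng)
        log_a = (log_target(y) - log_target(x)) + (proposal_logpdf(x) - proposal_logpdf(y))
        if log_a >= 0.0 or rng.random() < math.exp(log_a):
            x = y
        out.append(x)
    return out
```

In the paper's setting, q would be the fitted approximation to the conditional branch-length posterior rather than the generic Gaussian used in the test below.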
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.
1990-01-01
Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
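The simulation logic — fly an intermittent observer over a synthetic rain field and tabulate the spread of (sampled minus true) monthly means — can be caricatured in a few lines. This uses an i.i.d. toy rain model, far simpler than the GATE-tuned stochastic model in the study, so the numbers are only qualitative:

```python
import random
import statistics

def sampling_error_pct(n_steps=720, visit_every=12, n_months=400, seed=0):
    """Toy TRMM-style experiment: generate a synthetic 'hourly' rain series
    for each month, average only every visit_every-th hour (the satellite
    overpasses), and return the RMS of the relative error, in percent."""
    rng = random.Random(seed)
    errs = []
    for _ in range(n_months):
        rain = [max(0.0, rng.gauss(1.0, 1.0)) for _ in range(n_steps)]
        truth = statistics.fmean(rain)            # "true" monthly mean
        seen = statistics.fmean(rain[::visit_every])  # intermittent sampling
        errs.append((seen - truth) / truth)
    return 100.0 * statistics.fmean(e * e for e in errs) ** 0.5
```

With temporally correlated rain, as in the GATE-tuned model, intermittent visits carry less independent information per overpass, so realistic sampling errors depend on the orbit's revisit pattern as well as the raw number of samples.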
Denning, Elizabeth J.; Woolf, Thomas B.
2009-01-01
The growing dataset of K+ channel x-ray structures provides an excellent opportunity to begin a detailed molecular understanding of voltage-dependent gating. These structures, while differing in sequence, represent either a stable open or closed state. However, an understanding of the molecular details of gating will require models for the transitions and experimentally testable predictions for the gating transition. To explore these ideas, we apply Dynamic Importance Sampling (DIMS) to a set of homology models for the molecular conformations of K+ channels for four different sets of sequences and eight different states. In our results, we highlight the importance of particular residues upstream from the PVP region to the gating transition. This supports growing evidence that the PVP region is important for influencing the flexibility of the S6 helix and thus the opening of the gating domain. The results further suggest how gating on the molecular level depends on intra-subunit motions to influence the cooperative behavior of all four subunits of the K+ channel. We hypothesize that the gating process occurs in steps: first sidechain movement, then inter-S5-S6 subunit motions, and lastly the large-scale domain rearrangements. PMID:19950367
Gomes, Andrew J.; Turzhitsky, Vladimir; Ruderman, Sarah; Backman, Vadim
2013-01-01
Polarization-gating has been widely used to probe superficial tissue structures, but the penetration depth properties of this method have not been completely elucidated. This study employs a polarization-sensitive Monte Carlo method to characterize the penetration depth statistics of polarization-gating. The analysis demonstrates that the penetration depth depends on both the illumination-collection geometry [illumination-collection area (R) and collection angle (θc)] and on the optical properties of the sample, which include the scattering coefficient (μs), absorption coefficient (μa), anisotropy factor (g), and the type of the phase function. We develop a mathematical expression relating the average penetration depth to the illumination-collection beam properties and optical properties of the medium. Finally, we quantify the sensitivity of the average penetration depth to changes in optical properties for different geometries of illumination and collection. The penetration depth model derived in this study can be applied to optimizing application-specific fiber-optic probes to target a sampling depth of interest with minimal sensitivity to the optical properties of the sample. PMID:22781238
Seichter, Felicia; Vogt, Josef; Radermacher, Peter; Mizaikoff, Boris
2017-01-25
The calibration of analytical systems is time-consuming and the effort for daily calibration routines should therefore be minimized, while maintaining the analytical accuracy and precision. The 'calibration transfer' approach proposes to combine calibration data already recorded with actual calibration measurements. However, this strategy was developed for the multivariate, linear analysis of spectroscopic data, and thus, cannot be applied to sensors with a single response channel and/or a non-linear relationship between signal and desired analytical concentration. To fill this gap for a non-linear calibration equation, we assume that the coefficients for the equation, collected over several calibration runs, are normally distributed. Considering that coefficients of an actual calibration are a sample of this distribution, only a few standards are needed for a complete calibration data set. The resulting calibration transfer approach is demonstrated for a fluorescence oxygen sensor and implemented as a hierarchical Bayesian model, combined with a Lagrange Multipliers technique and Markov-chain Monte-Carlo sampling. The latter provides realistic estimates for coefficients and prediction together with accurate error bounds by simulating known measurement errors and system fluctuations. Performance criteria for validation and optimal selection of a reduced set of calibration samples were developed and lead to a setup which maintains the analytical performance of a full calibration. Strategies for a rapid determination of problems occurring in a daily calibration routine are proposed, thereby opening the possibility of correcting the problem just in time.
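The transfer idea, a normal prior on calibration coefficients from past runs, updated by a handful of fresh standards via MCMC, can be sketched with a single-coefficient toy model. The Stern-Volmer response form and all numbers below are illustrative assumptions, not the paper's hierarchical model.

```python
import math
import random

def log_post(k, standards, prior_mean, prior_sd, noise_sd=0.01):
    """Log posterior of quenching coefficient k: normal prior from past
    calibration runs plus Gaussian likelihood of the fresh standards."""
    lp = -0.5 * ((k - prior_mean) / prior_sd) ** 2
    for conc, signal in standards:
        pred = 1.0 / (1.0 + k * conc)        # Stern-Volmer-like response
        lp += -0.5 * ((signal - pred) / noise_sd) ** 2
    return lp

def metropolis(standards, prior_mean=2.0, prior_sd=0.5, n=4000, seed=3):
    """Random-walk Metropolis sampler over the coefficient k."""
    rng = random.Random(seed)
    k, chain = prior_mean, []
    for _ in range(n):
        prop = k + rng.gauss(0.0, 0.1)
        if math.log(rng.random()) < (log_post(prop, standards, prior_mean, prior_sd)
                                     - log_post(k, standards, prior_mean, prior_sd)):
            k = prop
        chain.append(k)
    return chain
```

With only three standards, the posterior concentrates near the coefficient that generated them, which is the point of the reduced calibration set.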
NASA Astrophysics Data System (ADS)
Pavlou, Andrew Theodore
The Monte Carlo simulation of full-core neutron transport requires high fidelity data to represent not only the various types of possible interactions that can occur, but also the temperature and energy regimes for which these data are relevant. For isothermal conditions, nuclear cross section data are processed in advance of running a simulation. In reality, the temperatures in a neutronics simulation are not fixed, but change with respect to the temperatures computed from an associated heat transfer or thermal hydraulic (TH) code. To account for the temperature change, a code user must either 1) compute new data at the problem temperature inline during the Monte Carlo simulation or 2) pre-compute data at a variety of temperatures over the range of possible values. Inline data processing is computationally inefficient while pre-computing data at many temperatures can be memory expensive. An alternative on-the-fly approach to handle the temperature component of nuclear data is desired. By on-the-fly we mean a procedure that adjusts cross section data to the correct temperature adaptively during the Monte Carlo random walk instead of before the running of a simulation. The on-the-fly procedure should also preserve simulation runtime efficiency. While on-the-fly methods have recently been developed for higher energy regimes, the double differential scattering of thermal neutrons has not been examined in detail until now. In this dissertation, an on-the-fly sampling method is developed by investigating the temperature dependence of the thermal double differential scattering distributions. The temperature dependence is analyzed with a linear least squares regression test to develop fit coefficients that are used to sample thermal scattering data at any temperature. The amount of pre-stored thermal scattering data has been drastically reduced from around 25 megabytes per temperature per nuclide to only a few megabytes per nuclide by eliminating the need to compute data
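The fit-and-evaluate step behind such on-the-fly approaches, pre-computing regression coefficients once and evaluating them at the random walk's current temperature, can be sketched with a plain polynomial least-squares fit. The polynomial-in-T basis and the data are illustrative assumptions; the dissertation fits thermal scattering distributions, not a scalar cross section.

```python
def fit_temperature_coeffs(temps, values, degree=2, t_scale=1000.0):
    """Least-squares fit value(T) ~ sum_k c_k (T/t_scale)^k via the normal
    equations, solved by Gaussian elimination with partial pivoting."""
    u = [t / t_scale for t in temps]         # rescale for conditioning
    n = degree + 1
    A = [[sum(x ** (i + j) for x in u) for j in range(n)] for i in range(n)]
    b = [sum(v * x ** i for x, v in zip(u, values)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs

def evaluate_at(coeffs, T, t_scale=1000.0):
    """On-the-fly evaluation at an arbitrary temperature during the walk."""
    u = T / t_scale
    return sum(c * u ** k for k, c in enumerate(coeffs))
```

Storing only the coefficients rather than tabulated data at many temperatures is what yields the memory reduction described in the abstract.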
Fenley, Marcia O; Mascagni, Michael; McClain, James; Silalahi, Alexander R J; Simonov, Nikolai A
2010-01-01
Dielectric continuum or implicit solvent models provide a significant reduction in computational cost when accounting for the salt-mediated electrostatic interactions of biomolecules immersed in an ionic environment. These models, in which the solvent and ions are replaced by a dielectric continuum, seek to capture the average statistical effects of the ionic solvent, while the solute is treated at the atomic level of detail. For decades, the solution of the three-dimensional Poisson-Boltzmann equation (PBE), which has become a standard implicit-solvent tool for assessing electrostatic effects in biomolecular systems, has been based on various deterministic numerical methods. Some deterministic PBE algorithms have drawbacks, which include a lack of properly assessing their accuracy, geometrical difficulties caused by discretization, and for some problems their cost in both memory and computation time. Our original stochastic method resolves some of these difficulties by solving the PBE using the Monte Carlo method (MCM). This new approach to the PBE is capable of efficiently solving complex, multi-domain and salt-dependent problems in biomolecular continuum electrostatics to high precision. Here we improve upon our novel stochastic approach by simultaneously computing the electrostatic potential and solvation free energies at different ionic concentrations through correlated Monte Carlo (MC) sampling. By using carefully constructed correlated random walks in our algorithm, we can actually compute the solution to a standard system including the linearized PBE (LPBE) at all salt concentrations of interest, simultaneously. This approach not only accelerates our MCPBE algorithm, but seems to have cost and accuracy advantages over deterministic methods as well. We verify the effectiveness of this technique by applying it to two common electrostatic computations: the electrostatic potential and polar solvation free energy for calcium binding proteins that are compared
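The stochastic PDE-solving idea underlying such methods can be illustrated with the classic walk-on-spheres algorithm for Laplace's equation, a much simpler cousin of the LPBE solver: each walk repeatedly jumps to a uniform point on the largest sphere contained in the domain until it reaches the boundary. The unit-disk domain and boundary data below are illustrative assumptions.

```python
import math
import random

def walk_on_spheres(x, y, g, eps=1e-4, n_walks=2000, seed=7):
    """Estimate u(x, y) for Laplace's equation in the unit disk with boundary
    data g(bx, by), by averaging g at the exit points of walk-on-spheres walks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while True:
            r = 1.0 - math.hypot(px, py)     # distance to the circular boundary
            if r < eps:
                break
            a = rng.uniform(0.0, 2.0 * math.pi)
            px += r * math.cos(a)            # jump uniformly on the sphere
            py += r * math.sin(a)
        norm = math.hypot(px, py)
        total += g(px / norm, py / norm)     # project to boundary and score
    return total / n_walks
```

Correlated sampling, the paper's acceleration, would reuse the same random walks while scoring several salt-dependent variants of the problem at once.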
Morton, S E; Chiew, Y S; Pretty, C; Moltchanova, E; Scarrott, C; Redmond, D; Shaw, G M; Chase, J G
2017-02-01
Randomised controlled trials have sought to improve mechanical ventilation treatment. However, few trials to date have shown clinical significance. It is hypothesised that aside from effective treatment, the outcome metrics and sample sizes of the trial also affect the significance, and thus impact trial design. In this study, a Monte-Carlo simulation method was developed and used to investigate several outcome metrics of ventilation treatment, including 1) length of mechanical ventilation (LoMV); 2) Ventilator Free Days (VFD); and 3) LoMV-28, a combination of the other metrics. As these metrics have highly skewed distributions, it also investigated the impact of imposing clinically relevant exclusion criteria on study power to enable better design for significance. Data from invasively ventilated patients from a single intensive care unit were used in this analysis to demonstrate the method. Use of LoMV as an outcome metric required 160 patients/arm to reach 80% power with a clinically expected intervention difference of 25% LoMV if clinically relevant exclusion criteria were applied to the cohort, but 400 patients/arm if they were not. However, only 130 patients/arm would be required for the same statistical significance at the same intervention difference if VFD was used. A Monte-Carlo simulation approach using local cohort data combined with objective patient selection criteria can yield better design of ventilation studies to desired power and significance, with fewer patients per arm than traditional trial design methods, which in turn reduces patient risk. Outcome metrics, such as VFD, should be used when a difference in mortality is also expected between the two cohorts. Finally, the non-parametric approach taken is readily generalisable to a range of trial types where outcome data is similarly skewed.
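The Monte Carlo power calculation behind this design approach is straightforward to sketch: resample a skewed outcome from a local cohort, impose the expected intervention effect on one arm, and count how often a test rejects. The lognormal cohort and the plain two-sample z-test below are simplifying assumptions; a rank-based test would respect the skew better.

```python
import math
import random
import statistics

def mc_power(cohort, effect=0.75, n_per_arm=160, n_trials=400, seed=11):
    """Fraction of simulated trials (bootstrap-resampled arms, with a
    25% outcome reduction in the treatment arm) that reject at alpha=0.05."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        a = [rng.choice(cohort) for _ in range(n_per_arm)]             # control
        b = [rng.choice(cohort) * effect for _ in range(n_per_arm)]    # treated
        se = math.sqrt(statistics.variance(a) / n_per_arm
                       + statistics.variance(b) / n_per_arm)
        z = (statistics.fmean(a) - statistics.fmean(b)) / se
        if abs(z) > 1.96:                      # two-sided, alpha = 0.05
            hits += 1
    return hits / n_trials
```

Sweeping `n_per_arm` until the returned power crosses 80% reproduces the kind of patients-per-arm figures quoted in the abstract for a given cohort and metric.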
Adaptive importance sampling to accelerate training of a neural probabilistic language model.
Bengio, Y; Senecal, J S
2008-04-01
Previous work on statistical language modeling has shown that it is possible to train a feedforward neural network to approximate probabilities over sequences of words, resulting in significant error reduction when compared to standard baseline models based on n-grams. However, training the neural network model with the maximum-likelihood criterion requires computations proportional to the number of words in the vocabulary. In this paper, we introduce adaptive importance sampling as a way to accelerate training of the model. The idea is to use an adaptive n-gram model to track the conditional distributions produced by the neural network. We show that a very significant speedup can be obtained on standard problems.
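The expensive quantity in maximum-likelihood training is the softmax normalizer Z = Σ_w exp(s_w) over the whole vocabulary; importance sampling replaces the full sum with a few draws from a cheap proposal (the adaptive n-gram in the paper). A minimal sketch of the estimator, with toy scores and proposals standing in for the network and n-gram model:

```python
import math
import random

def is_log_partition(scores, proposal_weights, k=64, seed=5):
    """Importance-sampling estimate of log Z, Z = sum_w exp(scores[w]),
    using k draws from the proposal: Z ~ (1/k) sum exp(s_w) / q(w)."""
    rng = random.Random(seed)
    total_q = sum(proposal_weights)
    draws = rng.choices(range(len(scores)), weights=proposal_weights, k=k)
    acc = sum(math.exp(scores[w]) * total_q / proposal_weights[w] for w in draws)
    return math.log(acc / k)
```

The adaptivity in the paper amounts to keeping the proposal close to the network's own distribution; in the limit where q(w) ∝ exp(s_w), the estimator has zero variance, which is why tracking the model pays off.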
Shreif, Zeina; Striegel, Deborah A.
2015-01-01
A nucleotide sequence 35 base pairs long can take 1,180,591,620,717,411,303,424 possible values. An example of systems biology datasets, protein binding microarrays, contains activity data from about 40000 such sequences. The discrepancy between the number of possible configurations and the available activities is enormous. Thus, albeit that systems biology datasets are large in absolute terms, they oftentimes require methods developed for rare events due to the combinatorial increase in the number of possible configurations of biological systems. A plethora of techniques for handling large datasets, such as Empirical Bayes, or rare events, such as importance sampling, have been developed in the literature, but these cannot always be simultaneously utilized. Here we introduce a principled approach to Empirical Bayes based on importance sampling, information theory, and theoretical physics in the general context of sequence phenotype model induction. We present the analytical calculations that underlie our approach. We demonstrate the computational efficiency of the approach on concrete examples, and demonstrate its efficacy by applying the theory to publicly available protein binding microarray transcription factor datasets and to data on synthetic cAMP-regulated enhancer sequences. As further demonstrations, we find transcription factor binding motifs, predict the activity of new sequences and extract the locations of transcription factor binding sites. In summary, we present a novel method that is efficient (requiring minimal computational time and reasonable amounts of memory), has high predictive power that is comparable with that of models with hundreds of parameters, and has a limited number of optimized parameters, proportional to the sequence length. PMID:26092377
Shreif, Zeina; Striegel, Deborah A; Periwal, Vipul
2015-09-07
A nucleotide sequence 35 base pairs long can take 1,180,591,620,717,411,303,424 possible values. An example of systems biology datasets, protein binding microarrays, contains activity data from about 40,000 such sequences. The discrepancy between the number of possible configurations and the available activities is enormous. Thus, albeit that systems biology datasets are large in absolute terms, they oftentimes require methods developed for rare events due to the combinatorial increase in the number of possible configurations of biological systems. A plethora of techniques for handling large datasets, such as Empirical Bayes, or rare events, such as importance sampling, have been developed in the literature, but these cannot always be simultaneously utilized. Here we introduce a principled approach to Empirical Bayes based on importance sampling, information theory, and theoretical physics in the general context of sequence phenotype model induction. We present the analytical calculations that underlie our approach. We demonstrate the computational efficiency of the approach on concrete examples, and demonstrate its efficacy by applying the theory to publicly available protein binding microarray transcription factor datasets and to data on synthetic cAMP-regulated enhancer sequences. As further demonstrations, we find transcription factor binding motifs, predict the activity of new sequences and extract the locations of transcription factor binding sites. In summary, we present a novel method that is efficient (requiring minimal computational time and reasonable amounts of memory), has high predictive power that is comparable with that of models with hundreds of parameters, and has a limited number of optimized parameters, proportional to the sequence length. Published by Elsevier Ltd.
Prey Selection by an Apex Predator: The Importance of Sampling Uncertainty
Davis, Miranda L.; Stephens, Philip A.; Willis, Stephen G.; Bassi, Elena; Marcon, Andrea; Donaggio, Emanuela; Capitani, Claudia; Apollonio, Marco
2012-01-01
The impact of predation on prey populations has long been a focus of ecologists, but a firm understanding of the factors influencing prey selection, a key predictor of that impact, remains elusive. High levels of variability observed in prey selection may reflect true differences in the ecology of different communities but might also reflect a failure to deal adequately with uncertainties in the underlying data. Indeed, our review showed that less than 10% of studies of European wolf predation accounted for sampling uncertainty. Here, we relate annual variability in wolf diet to prey availability and examine temporal patterns in prey selection; in particular, we identify how considering uncertainty alters conclusions regarding prey selection. Over nine years, we collected 1,974 wolf scats and conducted drive censuses of ungulates in Alpe di Catenaia, Italy. We bootstrapped scat and census data within years to construct confidence intervals around estimates of prey use, availability and selection. Wolf diet was dominated by boar (61.5±3.90 [SE] % of biomass eaten) and roe deer (33.7±3.61%). Temporal patterns of prey densities revealed that the proportion of roe deer in wolf diet peaked when boar densities were low, not when roe deer densities were highest. Considering only the two dominant prey types, Manly's standardized selection index using all data across years indicated selection for boar (mean = 0.73±0.023). However, sampling error resulted in wide confidence intervals around estimates of prey selection. Thus, despite considerable variation in yearly estimates, confidence intervals for all years overlapped. Failing to consider such uncertainty could lead erroneously to the assumption of differences in prey selection among years. This study highlights the importance of considering temporal variation in relative prey availability and accounting for sampling uncertainty when interpreting the results of dietary studies. PMID:23110122
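The study's central computation, Manly's standardized selection index with a bootstrap confidence interval from resampled scat data, is easy to sketch. The counts below are illustrative, not the study's data; only the two-prey index and the percentile bootstrap are shown.

```python
import random

def manly_index(used, available):
    """Manly's standardized selection index for prey types:
    w_i = (u_i / a_i) / sum_j (u_j / a_j)."""
    ratios = [u / a for u, a in zip(used, available)]
    s = sum(ratios)
    return [r / s for r in ratios]

def bootstrap_ci(used, available, n_boot=2000, level=0.95, seed=2):
    """Percentile bootstrap CI for the first prey type's index,
    resampling the scat (use) data with replacement."""
    rng = random.Random(seed)
    scats = [0] * used[0] + [1] * used[1]        # 0 = boar, 1 = roe deer
    stats = []
    for _ in range(n_boot):
        res = [rng.choice(scats) for _ in scats]
        stats.append(manly_index([res.count(0), res.count(1)], available)[0])
    stats.sort()
    lo = stats[int((1 - level) / 2 * n_boot)]
    hi = stats[int((1 + level) / 2 * n_boot)]
    return lo, hi
```

Comparing such intervals across years, rather than the point estimates alone, is exactly the check the authors argue is usually skipped.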
Prey selection by an apex predator: the importance of sampling uncertainty.
Davis, Miranda L; Stephens, Philip A; Willis, Stephen G; Bassi, Elena; Marcon, Andrea; Donaggio, Emanuela; Capitani, Claudia; Apollonio, Marco
2012-01-01
The impact of predation on prey populations has long been a focus of ecologists, but a firm understanding of the factors influencing prey selection, a key predictor of that impact, remains elusive. High levels of variability observed in prey selection may reflect true differences in the ecology of different communities but might also reflect a failure to deal adequately with uncertainties in the underlying data. Indeed, our review showed that less than 10% of studies of European wolf predation accounted for sampling uncertainty. Here, we relate annual variability in wolf diet to prey availability and examine temporal patterns in prey selection; in particular, we identify how considering uncertainty alters conclusions regarding prey selection. Over nine years, we collected 1,974 wolf scats and conducted drive censuses of ungulates in Alpe di Catenaia, Italy. We bootstrapped scat and census data within years to construct confidence intervals around estimates of prey use, availability and selection. Wolf diet was dominated by boar (61.5 ± 3.90 [SE] % of biomass eaten) and roe deer (33.7 ± 3.61%). Temporal patterns of prey densities revealed that the proportion of roe deer in wolf diet peaked when boar densities were low, not when roe deer densities were highest. Considering only the two dominant prey types, Manly's standardized selection index using all data across years indicated selection for boar (mean = 0.73 ± 0.023). However, sampling error resulted in wide confidence intervals around estimates of prey selection. Thus, despite considerable variation in yearly estimates, confidence intervals for all years overlapped. Failing to consider such uncertainty could lead erroneously to the assumption of differences in prey selection among years. This study highlights the importance of considering temporal variation in relative prey availability and accounting for sampling uncertainty when interpreting the results of dietary studies.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Examination of goods by importer; sampling... goods by importer; sampling; repacking; examination of merchandise by prospective purchasers. Importers... conduct of Customs business and no danger to the revenue prospective purchaser may be permitted to examine...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 19 Customs Duties 1 2011-04-01 2011-04-01 false Examination of goods by importer; sampling... goods by importer; sampling; repacking; examination of merchandise by prospective purchasers. Importers... conduct of Customs business and no danger to the revenue prospective purchaser may be permitted to examine...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 19 Customs Duties 1 2013-04-01 2013-04-01 false Examination of goods by importer; sampling... goods by importer; sampling; repacking; examination of merchandise by prospective purchasers. Importers... conduct of Customs business and no danger to the revenue prospective purchaser may be permitted to examine...
Ciccotti, Giovanni; Meloni, Simone
2011-04-07
We introduce a new method to simulate the physics of rare events. The method, an extension of the Temperature Accelerated Molecular Dynamics, comes into use when the collective variables introduced to characterize the rare events are either non-analytical or so complex that computing their derivative is not practical. We illustrate the functioning of the method by studying the homogeneous crystallization in a sample of Lennard-Jones particles. The process is studied by introducing a new collective variable that we call Effective Nucleus Size N. We have computed the free energy barriers and the size of the critical nucleus, which are in agreement with data available in the literature. We have also performed simulations in the liquid domain of the phase diagram. We found a free energy curve monotonically growing with the nucleus size, consistent with the liquid domain.
40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention requirements... independent laboratory shall also include with the retained sample the test result for benzene as...
40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention requirements... independent laboratory shall also include with the retained sample the test result for benzene as...
NASA Astrophysics Data System (ADS)
Reyhancan, Iskender Atilla; Ebrahimi, Alborz; Çolak, Üner; Erduran, M. Nizamettin; Angin, Nergis
2017-01-01
A new Monte-Carlo Library Least Square (MCLLS) approach for treating non-linear radiation analysis problem in Neutron Inelastic-scattering and Thermal-capture Analysis (NISTA) was developed. 14 MeV neutrons were produced by a neutron generator via the ³H(²H,n)⁴He reaction. The prompt gamma ray spectra from bulk samples of seven different materials were measured by a Bismuth Germanate (BGO) gamma detection system. Polyethylene was used as neutron moderator along with iron and lead as neutron and gamma ray shielding, respectively. The gamma detection system was equipped with a list mode data acquisition system which streams spectroscopy data directly to the computer, event-by-event. A GEANT4 simulation toolkit was used for generating the single-element libraries of all the elements of interest. These libraries were then used in a Linear Library Least Square (LLLS) approach with an unknown experimental sample spectrum to fit it with the calculated elemental libraries. GEANT4 simulation results were also used for the selection of the neutron shielding material.
40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention requirements... include with the retained sample the test result for benzene as conducted pursuant to § 80.46(e). (b... sample the test result for benzene as conducted pursuant to § 80.47....
40 CFR 80.330 - What are the sampling and testing requirements for refiners and importers?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Practice for Manual Sampling of Petroleum and Petroleum Products.” (ii) Samples collected under the... present that could affect the sulfur test result. (2) Automatic sampling of petroleum products in..., entitled “Standard Practice for Automatic Sampling of Petroleum and Petroleum Products.” (c) Test...
Balokovic, M.; Smolcic, V.; Ivezic, Z.; Zamorani, G.; Schinnerer, E.; Kelly, B. C.
2012-11-01
We investigate the dichotomy in the radio loudness distribution of quasars by modeling their radio emission and various selection effects using a Monte Carlo approach. The existence of two physically distinct quasar populations, the radio-loud and radio-quiet quasars, is controversial and over the last decade a bimodal distribution of radio loudness of quasars has been both affirmed and disputed. We model the quasar radio luminosity distribution with simple unimodal and bimodal distribution functions. The resulting simulated samples are compared to a fiducial sample of 8300 quasars drawn from the SDSS DR7 Quasar Catalog and combined with radio observations from the FIRST survey. Our results indicate that the SDSS-FIRST sample is best described by a radio loudness distribution which consists of two components, with (12 ± 1)% of sources in the radio-loud component. On the other hand, the evidence for a local minimum in the loudness distribution (bimodality) is not strong and we find that previous claims for its existence were probably affected by the incompleteness of the FIRST survey close to its faint limit. We also investigate the redshift and luminosity dependence of the radio loudness distribution and find tentative evidence that at high redshift radio-loud quasars were rarer, on average louder, and exhibited a smaller range in radio loudness. In agreement with other recent work, we conclude that the SDSS-FIRST sample strongly suggests that the radio loudness distribution of quasars is not a universal function, and that more complex models than presented here are needed to fully explain available observations.
Fontanot, Marco; Iacumin, Lucilla; Cecchini, Francesca; Comi, Giuseppe; Manzano, Marisa
2014-10-01
The detection of Campylobacter, the most commonly reported cause of foodborne gastroenteritis in the European Union, is very important for human health. The most commonly recognised risk factor for infection is the handling and/or consumption of undercooked poultry meat. The methods typically applied to evaluate the presence/absence of Campylobacter in food samples are direct plating and/or enrichment culture based on the Horizontal Method for Detection and Enumeration of Campylobacter spp. (ISO 10272-1B: 2006) and PCR. Molecular methods also allow for the detection of cells that are viable but cannot be cultivated on agar media and that decrease the time required for species identification. The current study proposes the use of two molecular methods for species identification: dot blot and PCR. The dot blot method had a sensitivity of 25 ng for detection of DNA extracted from a pure culture using a digoxigenin-labelled probe for hybridisation; the target DNA was extracted from the enrichment broth at 24 h. PCR was performed using a pair of sensitive and specific primers for the detection of Campylobacter jejuni and Campylobacter coli after 24 h of enrichment in Preston broth. The initial samples were contaminated by 5 × 10 C. jejuni cells/g and 1.5 × 10² C. coli cells/g; thus the number of cells present in the enrichment broth at 0 h was 1 or 3 cells/g, respectively.
NASA Astrophysics Data System (ADS)
Mahlen, N. J.; Beard, B. L.; Johnson, C. M.; Lapen, T. J.
2005-12-01
Lu-Hf geochronology has gained attention due to its potential for precisely determining the age of garnet growth in a wide variety of rocks. A unique aspect of Lu-Hf analysis, however, is the disparate chemical behavior of Hf and Lu. For example, Hf is soluble in HF and Lu is soluble in HCl, which can create problems for spike-sample equilibration during dissolution as discussed by Unruh et al. 1984 JGR 89:B459 and later by Beard et al. 1998 GCA 62:525. Although partial dissolution may appear as an attractive means to preferentially dissolve garnet relative to refractory inclusions such as rutile and zircon, our investigations have shown that incomplete spike-sample equilibration may occur in such approaches. This leads to erroneous Lu and Hf contents that can adversely affect Lu-Hf isochron ages and calculated initial Hf isotope compositions. Dissolution of whole-rock powders using hot plates (low-pressure) or short-term microwave dissolution may produce inaccurate Lu-Hf isotope and concentration results, whereas high-temperature and -pressure dissolution in traditional Teflon steel-jacketed (Parr) bombs produces precise and accurate results. The greatest disparity in Lu-Hf isotope and concentration systematics of whole-rock powders among dissolution methods occurs for zircon- and garnet-bearing samples. In contrast, Sm-Nd isotope results are not affected by these different dissolution methods. Lu-Hf isochrons involving garnet may be affected by the dissolution method in a manner similar to that observed for whole-rock powders. Incomplete dissolution of garnet generally increases the measured Lu/Hf ratios, potentially increasing the precision of the isochron. In a number of lithologies, however, including garnet-bearing eclogites and amphibolites, significant errors may be introduced in the Lu-Hf age using hot plates (low-pressure) or short-term microwave dissolution, as compared to those obtained using high-temperature and -pressure dissolution bombs. These
A Classroom Note on Monte Carlo Integration.
ERIC Educational Resources Information Center
Kolpas, Sid
1998-01-01
The Monte Carlo method provides approximate solutions to a variety of mathematical problems by performing random sampling simulations with a computer. Presents a program written in Quick BASIC simulating the steps of the Monte Carlo method. (ASK)
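The note's program was written in Quick BASIC; the same classroom algorithm, sample-mean Monte Carlo integration, reduces to a few lines in any language. A Python equivalent:

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=4):
    """Sample-mean Monte Carlo: integral of f over [a, b] is approximately
    (b - a) times the average of f at n uniform random points."""
    rng = random.Random(seed)
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n
```

For example, integrating x² over [0, 1] converges toward 1/3, with error shrinking like 1/√n regardless of dimension, which is the pedagogical point of the method.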
Quantum speedup of Monte Carlo methods
Montanaro, Ashley
2015-01-01
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079
Jenkins, Mark David; Barrie, Peter; Buggy, Tom; Morison, Gordon
2016-12-08
The focus of this paper is a novel object tracking algorithm which combines an incrementally updated subspace-based appearance model, reconstruction error likelihood function and a two stage selective sampling importance resampling particle filter with motion estimation through autoregressive filtering techniques. The primary contribution of this paper is the use of multiple bags of subspaces with which we aim to tackle the issue of appearance model update. The use of a multibag approach allows our algorithm to revert to a previously successful appearance model in the event that the primary model fails. The aim of this is to eliminate tracker drift by undoing updates to the model that lead to error accumulation and to redetect targets after periods of occlusion by removing the subspace updates carried out during the period of occlusion. We compare our algorithm with several state-of-the-art methods and test on a range of challenging, publicly available image sequences. Our findings indicate a significant robustness to drift and occlusion as a result of our multibag approach and results show that our algorithm competes well with current state-of-the-art algorithms.
Cao, Youfang; Liang, Jie
2013-07-14
Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively
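The ABSIS biasing scheme itself is elaborate, but the underlying importance-sampling idea it builds on — sample from a biased distribution concentrated on the rare event, then reweight each draw by the likelihood ratio — can be sketched on a toy rare-event problem. The following Python sketch (the Gaussian tail problem, threshold, and sample count are illustrative, not from the paper) estimates P(Z > 4) for a standard normal by sampling from a proposal shifted onto the tail:

```python
import math
import random

def rare_tail_prob(threshold=4.0, n=200_000, seed=1):
    """Importance sampling for P(Z > threshold), Z ~ N(0,1):
    draw from the proposal N(threshold, 1) and reweight each hit by
    the likelihood ratio phi(x) / phi(x - threshold) = exp(-t*x + t^2/2)."""
    rng = random.Random(seed)
    t = threshold
    total = 0.0
    for _ in range(n):
        x = rng.gauss(t, 1.0)  # proposal centred on the rare region
        if x > t:
            total += math.exp(-t * x + 0.5 * t * t)
    return total / n

est = rare_tail_prob()  # true value is 1 - Phi(4), about 3.17e-5
```

A naive estimator would need on the order of 10^7 samples to see even a handful of hits here; the shifted proposal makes roughly half of all draws land in the tail, which is the variance reduction that schemes like ABSIS automate and adapt.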
Lakkaraju, Sirish Kaushik; Raman, E Prabhu; Yu, Wenbo; MacKerell, Alexander D
2014-06-10
Solute sampling of explicit bulk-phase aqueous environments in grand canonical (GC) ensemble simulations suffers from poor convergence due to low insertion probabilities of the solutes. To address this, we developed an iterative procedure involving Grand Canonical-like Monte Carlo (GCMC) and molecular dynamics (MD) simulations. Each iteration involves GCMC of both the solutes and water followed by MD, with the excess chemical potential (μex) of both the solute and the water oscillated to attain their target concentrations in the simulation system. By periodically varying the μex of the water and solutes over the GCMC-MD iterations, solute exchange probabilities and the spatial distributions of the solutes improved. The utility of the oscillating-μex GCMC-MD method is indicated by its ability to approximate the hydration free energy (HFE) of the individual solutes in aqueous solution as well as in dilute aqueous mixtures of multiple solutes. For seven organic solutes: benzene, propane, acetaldehyde, methanol, formamide, acetate, and methylammonium, the average μex of the solutes and the water converged close to their respective HFEs in both 1 M standard state and dilute aqueous mixture systems. The oscillating-μex GCMC methodology is also able to drive solute sampling in proteins in aqueous environments as shown using the occluded binding pocket of the T4 lysozyme L99A mutant as a model system. The approach was shown to satisfactorily reproduce the free energy of binding of benzene as well as sample the functional group requirements of the occluded pocket consistent with the crystal structures of known ligands bound to the L99A mutant as well as their relative binding affinities.
GeoLab Concept: The Importance of Sample Selection During Long Duration Human Exploration Mission
NASA Technical Reports Server (NTRS)
Calaway, M. J.; Evans, C. A.; Bell, M. S.; Graff, T. G.
2011-01-01
In the future when humans explore planetary surfaces on the Moon, Mars, and asteroids or beyond, the return of geologic samples to Earth will be a high priority for human spaceflight operations. All future sample return missions will have strict down-mass and volume requirements; methods for in-situ sample assessment and prioritization will be critical for selecting the best samples for return-to-Earth.
40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention requirements...
40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention requirements...
Harry V. Wiant, Jr.; Michael L. Spangler; John E. Baumgras
2002-01-01
Various taper systems and the centroid method were compared to unbiased volume estimates made by importance sampling for 720 hardwood trees selected throughout the state of West Virginia. Only the centroid method consistently gave volume estimates that did not differ significantly from those made by importance sampling, although some taper equations did well for most...
Code of Federal Regulations, 2014 CFR
2014-07-01
... producers and importers of certified ethanol denaturant. 80.1644 Section 80.1644 Protection of Environment... ethanol denaturant. (a) Sample and test each batch of certified ethanol denaturant. (1) Producers and importers of certified ethanol denaturant shall collect a representative sample from each batch of...
Tarquini, Gabriele; Nunziante Cesaro, Stella; Campanella, Luigi
2014-01-01
The application of Fourier Transform InfraRed (FTIR) spectroscopy to the analysis of oil residues in fragments of archeological amphorae (3rd century A.D.) from Monte Testaccio (Rome, Italy) is reported. In order to check the possibility of revealing the presence of oil residues in archeological pottery using micro-invasive and/or non-invasive techniques, different approaches were followed: firstly, FTIR spectroscopy was used to study oil residues extracted from Roman amphorae. Secondly, the presence of oil residues was ascertained by analyzing microamounts of archeological fragments with Diffuse Reflectance Infrared Fourier Transform spectroscopy (DRIFT). Finally, external reflection analysis of the ancient shards was performed without preliminary treatments, showing that oil traces can be detected through the most intense features of the oil spectrum. Incidentally, the existence of carboxylate salts of fatty acids was also observed in the DRIFT and reflectance spectra of archeological samples, supporting the Roman habit of spreading lime over the spoil heaps. The data collected in all steps were always compared with results obtained on purposely made replicas.
Code of Federal Regulations, 2010 CFR
2010-07-01
... applicable. (b) Quality assurance program. The importer must conduct a quality assurance program, as specified in this paragraph (b), for each truck or rail car loading terminal. (1) Quality assurance samples... frequency of the quality assurance sampling and testing must be at least one sample for each 50 of...
NASA Astrophysics Data System (ADS)
García Muñoz, A.; Mills, F. P.
2015-01-01
Context. The interpretation of polarised radiation emerging from a planetary atmosphere must rely on solutions to the vector radiative transport equation (VRTE). Monte Carlo integration of the VRTE is a valuable approach for its flexible treatment of complex viewing and/or illumination geometries, and it can intuitively incorporate elaborate physics. Aims: We present a novel pre-conditioned backward Monte Carlo (PBMC) algorithm for solving the VRTE and apply it to planetary atmospheres irradiated from above. As classical BMC methods, our PBMC algorithm builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. Methods: We show that the neglect of polarisation in the sampling of photon propagation directions in classical BMC algorithms leads to unstable and biased solutions for conservative, optically-thick, strongly polarising media such as Rayleigh atmospheres. The numerical difficulty is avoided by pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions. Pre-conditioning introduces a sense of history in the photon polarisation states through the simulated trajectories. Results: The PBMC algorithm is robust, and its accuracy is extensively demonstrated via comparisons with examples drawn from the literature for scattering in diverse media. Since the convergence rate for MC integration is independent of the integral's dimension, the scheme is a valuable option for estimating the disk-integrated signal of stellar radiation reflected from planets. Such a tool is relevant in the prospective investigation of exoplanetary phase curves. We lay out two frameworks for disk integration and, as an application, explore the impact of atmospheric stratification on planetary phase curves for large star-planet-observer phase angles. By construction, backward integration provides a better
Sampling Small Mammals in Southeastern Forests: The Importance of Trapping in Trees
Loeb, S.C.; Chapman, G.L.; Ridley, T.R.
1999-01-01
We investigated the effect of sampling methodology on the richness and abundance of small mammal communities in loblolly pine forests. Trapping in trees using Sherman live traps was included along with routine ground trapping using the same device. Estimates of species richness did not differ between samples in which tree traps were included or excluded. However, diversity indices (Shannon-Wiener, Simpson, Shannon and Brillouin) were strongly affected: the indices were significantly greater when tree samples were included, primarily as a result of flying squirrel captures. Without tree traps, the results suggested that cotton mice dominated the community. We recommend that tree traps be included in sampling.
40 CFR 80.1347 - What are the sampling and testing requirements for refiners and importers?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Benzene Sampling, Testing and Retention Requirements § 80.1347 What are the sampling and testing... benzene requirements of this subpart, except as modified by paragraphs (a)(2), (a)(3) and (a)(4) of this... benzene concentration for compliance with the requirements of this subpart. (ii) Independent...
40 CFR 80.1347 - What are the sampling and testing requirements for refiners and importers?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Benzene Sampling, Testing and Retention Requirements § 80.1347 What are the sampling and testing... benzene requirements of this subpart, except as modified by paragraphs (a)(2), (a)(3) and (a)(4) of this..., 2015, to determine its benzene concentration for compliance with the requirements of this...
40 CFR 80.1347 - What are the sampling and testing requirements for refiners and importers?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Benzene Sampling, Testing and Retention Requirements § 80.1347 What are the sampling and testing... benzene requirements of this subpart, except as modified by paragraphs (a)(2), (a)(3) and (a)(4) of this... benzene concentration for compliance with the requirements of this subpart. (ii) Independent...
Bandpass Sampling--An Opportunity to Stress the Importance of In-Depth Understanding
ERIC Educational Resources Information Center
Stern, Harold P. E.
2010-01-01
Many bandpass signals can be sampled at rates lower than the Nyquist rate, allowing significant practical advantages. Illustrating this phenomenon after discussing (and proving) Shannon's sampling theorem provides a valuable opportunity for an instructor to reinforce the principle that innovation is possible when students strive to have a complete…
Important issues related to using pooled samples for environmental chemical biomonitoring.
Caudill, Samuel P
2011-02-28
Pooling samples for analysis was first proposed in the 1940s to reduce analytical measurement costs associated with screening World War II recruits for syphilis. Later, it progressed to more complex screening strategies, to population prevalence estimation for discrete quantities, and to population mean estimation for continuous quantities. Recently, pooled samples have also been used to provide efficient alternatives for gene microarray analyses, epidemiologic studies of biomarkers of exposure, and characterization of populations regarding environmental chemical exposures. In this study, we address estimation and bias issues related to using pooled-sample variance information from an auxiliary source to augment pooled-sample variance estimates from the study of interest. The findings are illustrated by using pooled samples from the National Health and Nutrition Examination Survey 2001-2002 to assess exposures to perfluorooctanesulfonate and other polyfluoroalkyl compounds in the U.S. population. Published in 2011 by John Wiley & Sons, Ltd.
Code of Federal Regulations, 2012 CFR
2012-07-01
... requirements apply to importers who transport motor vehicle diesel fuel, NRLM diesel fuel, or ECA marine fuel...; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Sampling and Testing § 80.583 What... diesel fuel, or ECA marine fuel by truck or rail car? Importers who import diesel fuel subject to the...
The Importance of Sample Processing in Analysis of Asbestos Content in Rocks and Soils
NASA Astrophysics Data System (ADS)
Neumann, R. D.; Wright, J.
2012-12-01
Analysis of asbestos content in rocks and soils using Air Resources Board (ARB) Test Method 435 (M435) involves the processing of samples for subsequent analysis by polarized light microscopy (PLM). The use of different equipment and procedures by commercial laboratories to pulverize rock and soil samples could result in different particle size distributions. It has long been theorized that asbestos-containing samples can be over-pulverized to the point where the particle dimensions of the asbestos no longer meet the required 3:1 length-to-width aspect ratio or the particles become so small that they no longer can be tested for optical characteristics using PLM where maximum PLM magnification is typically 400X. Recent work has shed some light on this issue. ARB staff conducted an interlaboratory study to investigate variability in preparation and analytical procedures used by laboratories performing M435 analysis. With regard to sample processing, ARB staff found that different pulverization equipment and processing procedures produced powders that have varying particle size distributions. PLM analysis of the finest powders produced by one laboratory showed all but one of the 12 samples were non-detect or below the PLM reporting limit; in contrast to the other 36 coarser samples from the same field sample and processed by three other laboratories where 21 samples were above the reporting limit. The set of 12, exceptionally fine powder samples produced by the same laboratory was re-analyzed by transmission electron microscopy (TEM) and results showed that these samples contained asbestos above the TEM reporting limit. However, the use of TEM as a stand-alone analytical procedure, usually performed at magnifications between 3,000 to 20,000X, also has its drawbacks because of the miniscule mass of sample that this method examines. The small amount of powder analyzed by TEM may not be representative of the field sample. The actual mass of the sample powder analyzed by
2015-11-24
This image from NASA's 2001 Mars Odyssey spacecraft shows the northern margin of Tanaica Montes. These hills are cut by fractures that are in alignment with the regional trend of tectonic faulting found east of Alba Mons. Orbit Number: 61129 Latitude: 40.1468 Longitude: 269.641 Instrument: VIS Captured: 2015-09-25 03:03
Chen, Yunjie; Roux, Benoît
2015-08-11
Molecular dynamics (MD) trajectories based on a classical equation of motion provide a straightforward, albeit somewhat inefficient approach, to explore and sample the configurational space of a complex molecular system. While a broad range of techniques can be used to accelerate and enhance the sampling efficiency of classical simulations, only algorithms that are consistent with the Boltzmann equilibrium distribution yield a proper statistical mechanical computational framework. Here, a multiscale hybrid algorithm relying simultaneously on all-atom fine-grained (FG) and coarse-grained (CG) representations of a system is designed to improve sampling efficiency by combining the strength of nonequilibrium molecular dynamics (neMD) and Metropolis Monte Carlo (MC). This CG-guided hybrid neMD-MC algorithm comprises six steps: (1) a FG configuration of an atomic system is dynamically propagated for some period of time using equilibrium MD; (2) the resulting FG configuration is mapped onto a simplified CG model; (3) the CG model is propagated for a brief time interval to yield a new CG configuration; (4) the resulting CG configuration is used as a target to guide the evolution of the FG system; (5) the FG configuration (from step 1) is driven via a nonequilibrium MD (neMD) simulation toward the CG target; (6) the resulting FG configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-ends momentum reversal prescription is used for the neMD trajectories of the FG system to guarantee that the CG-guided hybrid neMD-MC algorithm obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The enhanced sampling achieved with the method is illustrated with a model system with hindered diffusion and explicit-solvent peptide simulations. Illustrative tests indicate that the method can yield a speedup of about 80 times for the model system and up
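The multiscale neMD-MC machinery above rests on the standard Metropolis acceptance criterion invoked in step (6). As a point of reference, a minimal single-scale Metropolis MC sampler can be sketched as follows (the one-dimensional double-well potential and all parameters are illustrative, not the authors' system):

```python
import math
import random

def metropolis_sample(energy, x0=0.0, beta=1.0, step=0.5, n=50_000, seed=1):
    """Plain Metropolis Monte Carlo: propose a uniform random displacement
    and accept with probability min(1, exp(-beta * dE)), so that the chain
    samples the Boltzmann distribution exp(-beta * E(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        x_new = x + rng.uniform(-step, step)
        dE = energy(x_new) - energy(x)
        if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
            x = x_new          # accept the move
        samples.append(x)      # rejected moves repeat the old state
    return samples

# Double-well potential with minima near x = +/-1; a correct sampler
# should visit both basins.
xs = metropolis_sample(lambda x: (x * x - 1.0) ** 2, beta=2.0)
```

Appending the old state on rejection is what preserves detailed balance, the same property the two-ends momentum-reversal prescription is designed to guarantee for the nonequilibrium FG trajectories.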
Tian, Zhen; Li, Yongbao; Hassan-Rezaeian, Nima; Jiang, Steve B; Jia, Xun
2017-03-01
We have previously developed a GPU-based Monte Carlo (MC) dose engine on the OpenCL platform, named goMC, with a built-in analytical linear accelerator (linac) beam model. In this paper, we report our recent improvement on goMC to move it toward clinical use. First, we have adapted a previously developed automatic beam commissioning approach to our beam model. The commissioning was conducted through an optimization process, minimizing the discrepancies between calculated dose and measurement. We successfully commissioned six beam models built for Varian TrueBeam linac photon beams, including four beams of different energies (6 MV, 10 MV, 15 MV, and 18 MV) and two flattening-filter-free (FFF) beams of 6 MV and 10 MV. Second, to facilitate the use of goMC for treatment plan dose calculations, we have developed an efficient source particle sampling strategy. It uses the pre-generated fluence maps (FMs) to bias the sampling of the control point for source particles already sampled from our beam model. It could effectively reduce the number of source particles required to reach a statistical uncertainty level in the calculated dose, as compared to the conventional FM weighting method. For a head-and-neck patient treated with volumetric modulated arc therapy (VMAT), a reduction factor of ~2.8 was achieved, accelerating dose calculation from 150.9 s to 51.5 s. The overall accuracy of goMC was investigated on a VMAT prostate patient case treated with 10 MV FFF beam. 3D gamma index test was conducted to evaluate the discrepancy between our calculated dose and the dose calculated in Varian Eclipse treatment planning system. The passing rate was 99.82% for 2%/2 mm criterion and 95.71% for 1%/1 mm criterion. Our studies have demonstrated the effectiveness and feasibility of our auto-commissioning approach and new source sampling strategy for fast and accurate MC dose calculations for treatment plans.
Fetal blood sampling in baboons (Papio spp.): important procedural aspects and literature review.
Joy, S D; O'Shaughnessy, R; Schlabritz-Loutsevitch, N; Leland, M M; Frost, P; Fan-Havard, P
2009-06-01
The baboon (Papio cynocephalus) shows similarities to humans in placentation and fetal development. Fetal blood sampling allows investigators to assess fetal condition at a specific point in gestation, as well as transplacental transfer of medications. Unfortunately, assessing fetal status during gestation has been difficult, and fetal instrumentation is associated with a high rate of pregnancy loss. Our objectives are to describe the technique of ultrasound-guided cordocentesis (UGC) in baboons, report post-procedural outcomes, and review existing publications. This is a procedural paper describing the technique of UGC in baboons. After confirming pregnancy and gestational age via ultrasound, animals participating in approved research protocols that required fetal assessment underwent UGC. We successfully performed UGC in four animals (five samples) using this technique. Animals were sampled in the second and third trimesters, with fetal blood obtained from a free cord loop, the placental cord insertion site, or the intrahepatic umbilical vein. All procedures were without complication, and the animals delivered at term. Ultrasound-guided fetal umbilical cord venipuncture is a useful and safe technique to sample the fetal circulation with minimal risk to the fetus or mother. We believe this technique could be used for repeated fetal venous blood sampling in baboons.
Wollaber, Allan Benton
2016-06-16
This is a PowerPoint presentation that serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: introduction (background; a simple example: estimating π), why the method works (the Law of Large Numbers, the Central Limit Theorem), how to sample (inverse transform sampling, rejection sampling), and an example from particle transport.
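The two worked items in that outline — estimating π by hit-or-miss and inverse transform sampling — can each be sketched in a few lines of Python (sample sizes and the exponential rate are illustrative):

```python
import math
import random

def estimate_pi(n=1_000_000, seed=1):
    """Hit-or-miss Monte Carlo: the fraction of uniform points in the
    unit square that land inside the quarter circle estimates pi/4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

def sample_exponential(rate, rng):
    """Inverse transform sampling: solving F(x) = u for u ~ Uniform(0,1)
    gives x = -ln(1 - u) / rate for the exponential distribution."""
    return -math.log(1.0 - rng.random()) / rate

pi_hat = estimate_pi()
rng = random.Random(2)
draws = [sample_exponential(0.5, rng) for _ in range(100_000)]
mean_hat = sum(draws) / len(draws)  # should approach 1/rate = 2
```

Both estimates tighten as 1/sqrt(n), which is the Law of Large Numbers / Central Limit Theorem point the lecture's "Why does this even work?" section addresses.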
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
The meaningful investigation of many problems in statistics can be solved through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
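As a concrete instance of the approach described, one can run a Monte Carlo study of a statistic whose sampling distribution is known in closed form and check the simulation against theory. A minimal Python sketch (the population, sample size, and replication count are illustrative):

```python
import math
import random
import statistics

def mc_sampling_distribution(stat, popdraw, n=30, reps=5_000, seed=1):
    """Monte Carlo study: repeatedly draw samples of size n from a known
    population and record the statistic's value, approximating its
    sampling distribution with the list of recorded values."""
    rng = random.Random(seed)
    return [stat([popdraw(rng) for _ in range(n)]) for _ in range(reps)]

# For the sample mean of N(0,1) data, theory gives a standard error of
# 1/sqrt(30) ~ 0.183; the simulated spread should come out close to that.
means = mc_sampling_distribution(statistics.fmean, lambda r: r.gauss(0.0, 1.0))
spread = statistics.stdev(means)
```

The same skeleton extends to statistics with intractable distributions — the mathematically hard cases the note has in mind — simply by swapping in a different `stat` or a non-normal `popdraw`.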
Report #17-P-0412, September 28, 2017. Low rates of inspections and sampling can create a risk that the EPA may not be identifying and deterring the import of pesticides harmful to people or the environment.
2015-09-22
This VIS image shows where an impact created a crater on top of a group of ridges called Tanaica Montes. The slightly out-of-round shape and the distribution of the ejecta were likely due to the pre-existing landforms. Orbit Number: 60555 Latitude: 39.6442 Longitude: 268.824 Instrument: VIS Captured: 2015-08-08 20:37 http://photojournal.jpl.nasa.gov/catalog/PIA19780
[Proportion of aflatoxin B1 contaminated kernels and its concentration in imported peanut samples].
Hirano, S; Shima, T; Shimada, T
2001-08-01
Moldy and split peanut kernels were separated from peanuts exported from Brazil, Sudan, India and Taiwan by visual inspection. The remaining peanuts from Brazil, Sudan and India were roasted lightly and the skins were removed. Stained peanuts were separated from the others. Aflatoxin was detected in moldy and stained peanuts. There was a positive correlation between % of aflatoxin-contaminated peanut kernels and aflatoxin B1 concentration in whole samples. Aflatoxin concentration of moldy peanuts was higher than that of stained peanut kernels.
Importance of sampling design and analysis in animal population studies: a comment on Sergio et al
Kery, M.; Royle, J. Andrew; Schmid, Hans
2008-01-01
1. The use of predators as indicators and umbrellas in conservation has been criticized. In the Trentino region, Sergio et al. (2006; hereafter SEA) counted almost twice as many bird species in quadrats located in raptor territories than in controls. However, SEA detected astonishingly few species. We used contemporary Swiss Breeding Bird Survey data from an adjacent region and a novel statistical model that corrects for overlooked species to estimate the expected number of bird species per quadrat in that region. 2. There are two anomalies in SEA which render their results ambiguous. First, SEA detected on average only 6.8 species, whereas a value of 32 might be expected. Hence, they probably overlooked almost 80% of all species. Secondly, the precision of their mean species counts was greater in two-thirds of cases than in the unlikely case that all quadrats harboured exactly the same number of equally detectable species. This suggests that they detected consistently only a biased, unrepresentative subset of species. 3. Conceptually, expected species counts are the product of true species number and species detectability p. Plenty of factors may affect p, including date, hour, observer, previous knowledge of a site and mobbing behaviour of passerines in the presence of predators. Such differences in p between raptor and control quadrats could have easily created the observed effects. Without a method that corrects for such biases, or without quantitative evidence that species detectability was indeed similar between raptor and control quadrats, the meaning of SEA's counts is hard to evaluate. Therefore, the evidence presented by SEA in favour of raptors as indicator species for enhanced levels of biodiversity remains inconclusive. 4. Synthesis and application. Ecologists should pay greater attention to sampling design and analysis in animal population estimation. Species richness estimation means sampling a community. Samples should be representative for the
Sampling small mammals in southeastern forests: the importance of trapping in trees
Susan C. Loeb; Gregg L. Chapman; Theodore R. Ridley
2001-01-01
Because estimates of small mammal species richness and diversity are strongly influenced by sampling methodology, 2 or more trap types are often used in studies of small mammal communities. However, in most cases, all traps are placed at ground level. In contrast, we used Sherman live traps placed at 1.5 m in trees in addition to Sherman live traps and Mosby box traps...
Jan, M Rasul; Shah, Jasmin; Muhammad, Mian; Ara, Behisht
2009-09-30
A simple selective spectrophotometric method has been developed for the determination of glyphosate herbicide in environmental and biological samples. Glyphosate was reacted with carbon disulphide to form dithiocarbamic acid, which then formed a complex with copper in the presence of ammonia. The absorbance of the resulting yellow-coloured copper dithiocarbamate complex was measured at 435 nm, with a molar absorptivity of 1.864 x 10(3) L mol(-1) cm(-1). The analytical parameters were optimized and Beer's law was obeyed in the range of 1.0-70 microg mL(-1). The composition ratio of the complex was glyphosate:copper (2:1), as established by Job's method, with a formation constant of 1.06 x 10(5). Glyphosate was satisfactorily determined with limits of detection and quantification of 1.1 and 3.7 microg mL(-1), respectively. The method was applied successfully to environmental samples. Recovery values in soil, wheat grain and water samples were 80.0+/-0.46 to 87.0+/-0.28%, 95.0+/-0.88 to 102.0+/-0.98% and 85.0+/-0.68 to 92.0+/-0.37%, respectively.
Determining the relative importance of soil sample locations to predict risk of child lead exposure.
Zahran, Sammy; Mielke, Howard W; McElmurry, Shawn P; Filippelli, Gabriel M; Laidlaw, Mark A S; Taylor, Mark P
2013-10-01
Soil lead in urban neighborhoods is a known predictor of child blood lead levels. In this paper, we address the question of where one ought to concentrate soil sample collection efforts to efficiently predict children at risk for soil Pb exposure. Two extensive data sets are combined, including 5467 surface soil samples collected from 286 census tracts, and geo-referenced blood Pb data for 55,551 children in metropolitan New Orleans, USA. Random intercept least squares, random intercept logistic, and quantile regression results indicate that soils collected within 1 m adjacent to residential streets most reliably predict child blood Pb levels. Regression decomposition results show that residential street soils account for 39.7% of between-neighborhood explained variation, followed by busy street soils (21.97%), open space soils (20.25%), and home foundation soils (18.71%). Just as the age of housing stock is used as a statistical shortcut for child risk of exposure to lead-based paint, our results indicate that one can shortcut the characterization of child risk of exposure to neighborhood soil Pb by concentrating sampling efforts within 1 m and adjacent to residential and busy streets, while significantly reducing the total costs of collection and analysis. This efficiency gain can help advance proactive, upstream, preventive methods of environmental Pb discovery.
Code of Federal Regulations, 2013 CFR
2013-07-01
... by truck or rail car? 80.583 Section 80.583 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... diesel fuel, or ECA marine fuel by truck or rail car? Importers who import diesel fuel subject to the 15... car may comply with the following requirements instead of the requirements to sample and test...
Code of Federal Regulations, 2011 CFR
2011-07-01
... by truck or rail car? 80.583 Section 80.583 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... diesel fuel, or ECA marine fuel by truck or rail car? Importers who import diesel fuel subject to the 15... car may comply with the following requirements instead of the requirements to sample and test...
Code of Federal Regulations, 2014 CFR
2014-07-01
... by truck or rail car? 80.583 Section 80.583 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... diesel fuel, or ECA marine fuel by truck or rail car? Importers who import diesel fuel subject to the 15... car may comply with the following requirements instead of the requirements to sample and test...
Smith, R.L.; Harvey, R.W.; LeBlanc, D.R.
1991-01-01
Vertical gradients of selected chemical constituents, bacterial populations, bacterial activity and electron acceptors were investigated for an unconfined aquifer contaminated with nitrate and organic compounds on Cape Cod, Massachusetts, U.S.A. Fifteen-port multilevel sampling devices (MLS's) were installed within the contaminant plume at the source of the contamination, and at 250 and 2100 m downgradient from the source. Depth profiles of specific conductance and dissolved oxygen at the downgradient sites exhibited vertical gradients that were both steep and inversely related. Narrow zones (2-4 m thick) of high N2O and NH4+ concentrations were also detected within the contaminant plume. A 27-fold change in bacterial abundance; a 35-fold change in frequency of dividing cells (FDC), an indicator of bacterial growth; a 23-fold change in 3H-glucose uptake, a measure of heterotrophic activity; and substantial changes in overall cell morphology were evident within a 9-m vertical interval at 250 m downgradient. The existence of these gradients argues for the need for closely spaced vertical sampling in groundwater studies because small differences in the vertical placement of a well screen can lead to incorrect conclusions about the chemical and microbiological processes within an aquifer.
Glisovic, Sanja; Eintracht, Shaun; Longtin, Yves; Oughton, Matthew; Brukner, Ivan
2017-08-08
Rectal swabs are routinely used by public health authorities to screen for multi-drug resistant enteric bacteria including vancomycin-resistant enterococci (VRE) and carbapenem-resistant Enterobacteriaceae (CRE). Screening sensitivity can be influenced by the quality of the swabbing, whether performed by the patient (self-swabbing) or a healthcare practitioner. One common exclusion criterion for rectal swabs is absence of "visible soiling" from fecal matter. In our institution, this criterion excludes almost 10% of rectal swabs received in the microbiology laboratory. Furthermore, over 30% of patients in whom rectal swabs are cancelled will not be re-screened within the next 48 h, resulting in delays in removing infection prevention measures. We describe two quantitative polymerase chain reaction (qPCR)-based assays, targeting human RNase P and eubacterial 16S rDNA, which might serve as suitable controls for sampling adequacy. However, the lower amount of amplifiable human DNA makes the 16S rDNA assay the better candidate for a sample adequacy control. Copyright © 2017. Published by Elsevier Ltd.
Bassuino, Daniele M; Konradt, Guilherme; Cruz, Raquel A S; Silva, Gustavo S; Gomes, Danilo C; Pavarini, Saulo P; Driemeier, David
2016-07-01
Twenty-six cattle and 7 horses were diagnosed with rabies. Samples of brain and spinal cord were processed for hematoxylin and eosin staining and immunohistochemistry (IHC). In addition, refrigerated fragments of brain and spinal cord were tested by direct fluorescent antibody test and intracerebral inoculation in mice. Statistical analyses and Fisher exact test were performed by commercial software. Histologic lesions were observed in the spinal cord in all of the cattle and horses. Inflammatory lesions in horses were moderate at the thoracic, lumbar, and sacral levels, and marked at the lumbar enlargement level. Gitter cells were present in large numbers in the lumbar enlargement region. IHC staining intensity ranged from moderate to strong. Inflammatory lesions in cattle were moderate in all spinal cord sections, and gitter cells were present in small numbers. IHC staining intensity was strong in all spinal cord sections. Only 2 horses exhibited lesions in the brain, which were located mainly in the obex and cerebellum; different from that observed in cattle, which had lesions in 25 cases. Fisher exact test showed that the odds of detecting lesions caused by rabies in horses are 3.5 times higher when spinal cord sections are analyzed, as compared to analysis of brain samples alone.
Non-specific interference in the measurement of plasma ammonia: importance of using a sample blank.
Herrera, Daniel Juan; Hutchin, Tim; Fullerton, Donna; Gray, George
2010-01-01
Enzymatic assays using glutamate dehydrogenase (GLDH) to monitor the transformation of NAD(P)H to NAD(P)(+) by a spectrophotometric technique are the most common methods to measure plasma ammonia (PA) in routine laboratories worldwide. However, these assays can potentially be subject to interference by substances in plasma able to oxidize NAD(P)H at a substantial rate, thereby providing falsely high results. To study this potential interference, we spiked a plasma pool with a liver homogenate and measured the ammonia concentration using a dry chemistry system (Vitros 250, Ortho Clinical Diagnostic, Raritan, NJ, USA), an enzymatic assay without a sample blanking step (Infinity Ammonia Liquid Stable Reagent, Thermo Fisher Scientific, Waltham, USA) and an enzymatic assay that corrects for the non-specific oxidation of NADPH (Ammonia kit, RANDOX Laboratories Ltd, Crumlin, UK). This experiment shows that the Infinity ammonia reagent kit is subject to a clinically significant interference and explains the discrepancies previously reported between these methods in patients with acute liver failure (ALF). When using enzymatic methods for the assessment of PA, we recommend including a sample blanking correction and this should be mandatory when monitoring patients with ALF.
Kranz, Thorsten M; Harroch, Sheila; Manor, Orly; Lichtenberg, Pesach; Friedlander, Yechiel; Seandel, Marco; Harkavy-Friedman, Jill; Walsh-Messinger, Julie; Dolgalev, Igor; Heguy, Adriana; Chao, Moses V; Malaspina, Dolores
2015-08-01
Schizophrenia is a debilitating syndrome with high heritability. Genomic studies reveal more than a hundred genetic variants, largely nonspecific and of small effect size, and not accounting for its high heritability. De novo mutations are one mechanism whereby disease-related alleles may be introduced into the population, although these have not been leveraged to explore the disease in general samples. This paper describes a framework to find high-impact genes for schizophrenia. This study consists of two different datasets. First, whole exome sequencing was conducted to identify disruptive de novo mutations in 14 complete parent-offspring trios with sporadic schizophrenia from Jerusalem, which identified 5 sporadic cases with de novo gene mutations in 5 different genes (PTPRG, TGM5, SLC39A13, BTK, CDKN3). Next, targeted exome capture of these genes was conducted in 48 well-characterized, unrelated, ethnically diverse schizophrenia cases, recruited and characterized by the same research team in New York (NY sample), which demonstrated extremely rare and potentially damaging variants in three of the five genes (MAF<0.01) in 12/48 cases (25%), including PTPRG (5 cases), SLC39A13 (4 cases) and TGM5 (4 cases), a higher number than usually identified by whole exome sequencing. Cases differed in cognition and illness features based on which mutation-enriched gene they carried. Functional de novo mutations in protein-interaction domains in sporadic schizophrenia can illuminate risk genes that increase the propensity to develop schizophrenia across ethnicities.
Whitaker, Thomas B; Saltsman, Joyce J; Ware, George M; Slate, Andrew B
2007-01-01
Hypoglycin A (HGA) is a toxic amino acid that is naturally produced in unripe ackee fruit. In 1973, the U.S. Food and Drug Administration (FDA) placed a worldwide import alert on ackee fruit, which banned the product from entering the United States. The FDA has considered establishing a regulatory limit for HGA and lifting the ban, which will require development of a monitoring program. The establishment of a regulatory limit for HGA requires the development of a scientifically based sampling plan to detect HGA in ackee fruit imported into the United States. Thirty-three lots of ackee fruit were sampled according to an experimental protocol in which 10 samples, i.e., ten 19 oz cans, were randomly taken from each lot and analyzed for HGA by using liquid chromatography. The total variance was partitioned into sampling and analytical variance components, which were found to be a function of the HGA concentration. Regression equations were developed to predict the total, sampling, and analytical variances as a function of HGA concentration. The observed HGA distribution among the test results for the 10 HGA samples was compared with the normal and lognormal distributions. A computer model based on the lognormal distribution was developed to predict the performance of sampling plan designs to detect HGA in ackee fruit shipments. The performance of several sampling plan designs was evaluated to demonstrate how to manipulate sample size and accept/reject limits to reduce misclassification of ackee fruit lots.
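The computer-model idea in this abstract can be sketched as a small Monte Carlo operating-characteristic calculation. This is an illustrative reconstruction, not the authors' model: it assumes a single lognormal variability term with a fixed coefficient of variation and an accept/reject rule on the mean of n sample results, and all numbers below are hypothetical:

```python
import math
import random

def accept_probability(lot_conc, accept_limit, n_samples, cv_total,
                       n_rep=20000, seed=1):
    """Estimate the probability that a lot with true HGA concentration
    `lot_conc` is accepted when the mean of `n_samples` test results must
    fall at or below `accept_limit`. Test results are drawn from a
    lognormal distribution whose coefficient of variation `cv_total`
    stands in for the combined sampling + analytical variability."""
    rng = random.Random(seed)
    sigma2 = math.log(1.0 + cv_total ** 2)    # lognormal shape from the CV
    mu = math.log(lot_conc) - sigma2 / 2.0    # so the distribution mean is lot_conc
    sigma = math.sqrt(sigma2)
    accepted = 0
    for _ in range(n_rep):
        mean_result = sum(rng.lognormvariate(mu, sigma)
                          for _ in range(n_samples)) / n_samples
        if mean_result <= accept_limit:
            accepted += 1
    return accepted / n_rep

# Illustrative operating characteristic: lots well below the limit are
# almost always accepted, lots well above are almost always rejected.
p_low = accept_probability(lot_conc=50.0, accept_limit=100.0, n_samples=10, cv_total=0.4)
p_high = accept_probability(lot_conc=200.0, accept_limit=100.0, n_samples=10, cv_total=0.4)
```

Sweeping `n_samples` and `accept_limit` in a model like this is how one manipulates sample size and accept/reject limits to trade off the two misclassification risks.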
Omer, Hélène; McDowell, Andrew; Alexeyev, Oleg A
Acne vulgaris is a chronic inflammatory skin condition classified by the Global Burden of Disease Study as the eighth most prevalent disease worldwide. The pathophysiology of the condition has been extensively studied, with an increase in sebum production, abnormal keratinization of the pilosebaceous follicle, and an inflammatory immune response all implicated in its etiology. One of the most disputed points, however, is the role of the gram-positive anaerobic bacterium Propionibacterium acnes in the development of acne, particularly when this organism is also found in normal sebaceous follicles of healthy skin. Against this background, we now describe the different sampling strategies that have been adopted for qualitative and quantitative study of P acnes within intact hair follicles of the skin and discuss the strengths and weaknesses of such methodologies for investigating the role of P acnes in the development of acne. Copyright © 2016 Elsevier Inc. All rights reserved.
Importance of long-time simulations for rare event sampling in zinc finger proteins.
Godwin, Ryan; Gmeiner, William; Salsbury, Freddie R
2016-01-01
Molecular dynamics (MD) simulation methods have seen significant improvement since their inception in the late 1950s. Constraints of simulation size and duration that once impeded the field have lessened with the advent of better algorithms, faster processors, and parallel computing. With newer techniques and hardware available, MD simulations of more biologically relevant timescales can now sample a broader range of conformational and dynamical changes including rare events. One concern in the literature has been under which circumstances it is sufficient to perform many shorter timescale simulations and under which circumstances fewer longer simulations are necessary. Herein, our simulations of the zinc finger NEMO (2JVX) using multiple simulations of length 15, 30, 1000, and 3000 ns are analyzed to provide clarity on this point.
The importance of measuring and accounting for potential biases in respondent-driven samples.
Rudolph, Abby E; Fuller, Crystal M; Latkin, Carl
2013-07-01
Respondent-driven sampling (RDS) is often viewed as a superior method for recruiting hard-to-reach populations disproportionately burdened with poor health outcomes. As an analytic approach, it has been praised for its ability to generate unbiased population estimates via post-stratified weights which account for non-random recruitment. However, population estimates generated with RDSAT (RDS Analysis Tool) are sensitive to variations in degree weights. Several assumptions are implicit in the degree weight and are not routinely assessed. Failure to meet these assumptions could result in inaccurate degree measures and consequently result in biased population estimates. We highlight potential biases associated with violating the assumptions implicit in degree weights for the RDSAT estimator and propose strategies to measure and possibly correct for biases in the analysis.
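To illustrate how degree weights drive such estimates, here is a sketch of the RDS-II (Volz-Heckathorn) estimator, which weights each respondent by the inverse of their reported network degree. The data are hypothetical, and RDSAT's estimator differs in detail, but the sensitivity to the degree measure is the same in spirit:

```python
def rds_ii_estimate(outcomes, degrees):
    """Volz-Heckathorn (RDS-II) prevalence estimate: each respondent is
    weighted by the inverse of the reported network degree, so errors in
    the degree measure propagate directly into the estimate."""
    weights = [1.0 / d for d in degrees]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, outcomes)) / total

# Hypothetical sample: binary outcome indicator and self-reported degrees.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
degrees  = [2, 10, 4, 2, 8, 10, 5, 20]
naive = sum(outcomes) / len(outcomes)          # unweighted proportion: 0.5
weighted = rds_ii_estimate(outcomes, degrees)  # up-weights low-degree respondents
```

Because each weight is 1/degree, systematic misreporting of degree (for example, respondents rounding their network size up) distorts the estimate directly, which is why the assumptions behind the degree measure deserve routine assessment.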
Podjasek, Joshua O; Cook-Norris, Robert H; Richardson, Donna M; Drage, Lisa A; Davis, Mark D P
2011-01-01
Exotic woods from tropical and subtropical regions (eg, from South America, south Asia, and Africa) frequently are used occupationally and recreationally by woodworkers and hobbyists. These exotic woods more commonly provoke irritant contact dermatitis reactions, but they also can provoke allergic contact dermatitis reactions. We report three patients seen at Mayo Clinic (Rochester, MN) with allergic contact dermatitis reactions to exotic woods. Patch testing was performed and included patient-provided wood samples. Avoidance of identified allergens was recommended. For all patients, the dermatitis cleared or improved after avoidance of the identified allergens. Clinicians must be aware of the potential for allergic contact dermatitis reactions to compounds in exotic woods. Patch testing should be performed with suspected woods for diagnostic confirmation and allowance of subsequent avoidance of the allergens.
The importance of measuring and accounting for potential biases in respondent-driven samples
Rudolph, Abby E.; Fuller, Crystal M.; Latkin, Carl
2013-01-01
Respondent-driven sampling (RDS) is often viewed as a superior method for recruiting hard-to-reach populations disproportionately burdened with poor health outcomes. As an analytic approach, it has been praised for its ability to generate unbiased population estimates via post-stratified weights which account for non-random recruitment. However, population estimates generated with RDSAT (RDS Analysis Tool) are sensitive to variations in degree weights. Several assumptions are implicit in the degree weight and are not routinely assessed. Failure to meet these assumptions could result in inaccurate degree measures and consequently result in biased population estimates. We highlight potential biases associated with violating the assumptions implicit in degree weights for the RDSAT estimator and propose strategies to measure and possibly correct for biases in the analysis. PMID:23515641
Chlamydophila pneumoniae diagnostics: importance of methodology in relation to timing of sampling.
Hvidsten, D; Halvorsen, D S; Berdal, B P; Gutteberg, T J
2009-01-01
The diagnostic impact of PCR-based detection was compared to single-serum IgM antibody measurement and IgG antibody seroconversion during an outbreak of Chlamydophila pneumoniae in a military community. Nasopharyngeal swabs for PCR-based detection, and serum, were obtained from 127 conscripts during the outbreak. Serum, drawn many months before the outbreak, provided the baseline antibody status. C. pneumoniae IgM and IgG antibodies were assayed using microimmunofluorescence (MIF), enzyme immunoassay (EIA) and recombinant ELISA (rELISA). Two reference standard tests were applied: (i) C. pneumoniae PCR; and (ii) assay of C. pneumoniae IgM antibodies, defined as positive if ≥2 IgM antibody assays (i.e. rELISA with MIF and/or EIA) were positive. In 33 subjects, of whom two tested negative according to IgM antibody assays and IgG seroconversion, C. pneumoniae DNA was detected by PCR. The sensitivities were 79%, 85%, 88% and 68%, respectively, and the specificities were 86%, 84%, 78% and 93%, respectively, for MIF IgM, EIA IgM, rELISA IgM and PCR. In two subjects, acute infection was diagnosed on the basis of IgG antibody seroconversion alone. The sensitivity of PCR detection was lower than that of any IgM antibody assay. This may be explained by the late sampling, or clearance of the organism following antibiotic treatment. The results of assay evaluation studies are affected not only by the choice of reference standard tests, but also by the timing of sampling for the different test principles used. On the basis of these findings, a combination of nasopharyngeal swabbing for PCR detection and specific single-serum IgM measurement is recommended in cases of acute respiratory C. pneumoniae infection.
2002-11-23
This image from NASA's Mars Odyssey spacecraft shows the rugged cratered highland region of Libya Montes, which forms part of the rim of an ancient impact basin called Isidis. This region of the highlands is fairly dissected with valley networks. There is still debate within the scientific community as to how valley networks themselves form: surface runoff (rainfall/snowmelt) or headward erosion via groundwater sapping. The degree of dissection in this region suggests surface runoff rather than groundwater sapping. Small dunes are also visible on the floors of some of these channels. http://photojournal.jpl.nasa.gov/catalog/PIA04008
A Modified Trap for Adult Sampling of Medically Important Flies (Insecta: Diptera)
Akbarzadeh, Kamran; Rafinejad, Javad; Nozari, Jamasb; Rassi, Yavar; Sedaghat, Mohammad Mehdi; Hosseini, Mostafa
2012-01-01
Background: Bait-trapping appears to be a generally useful method of studying fly populations. The aim of this study was to construct a new adult flytrap by modifying earlier versions and to evaluate its applicability in a subtropical zone in southern Iran. Methods: The traps were constructed by adding fittings to a lidded polyethylene container (18 × 20 × 33 cm). Fresh sheep meat was used as bait. In total, 27 modified traps were made and tested for their efficacy in attracting adult flies. The experiment was carried out in a range of different topographic areas of Fars Province during June 2010. Results: The traps attracted various groups of adult flies belonging to the families Calliphoridae, Sarcophagidae, Muscidae, and Fanniidae. Calliphora vicina (Diptera: Calliphoridae), Sarcophaga argyrostoma (Diptera: Sarcophagidae) and Musca domestica (Diptera: Muscidae) made up the majority of the flies collected by this sheep-meat-baited trap. Conclusion: This adult flytrap can be recommended for routine field sampling to study the diversity and population dynamics of flies where daily collection is difficult. PMID:23378969
Importance of Numeracy as a Risk Factor for Elder Financial Exploitation in a Community Sample.
Wood, Stacey A; Liu, Pi-Ju; Hanoch, Yaniv; Estevez-Cores, Sara
2016-11-01
To examine the role of numeracy, or comfort with numbers, as a potential risk factor for financial elder exploitation in a community sample. Individually administered surveys were given to 201 independent, community-dwelling adults aged 60 and older. Risk for financial elder exploitation was assessed using the Older Adult Financial Exploitation Measure (OAFEM). Other variables of interest included numeracy, executive functioning, and other risk factors identified from the literature. Assessments were completed individually at the Wood Lab at Scripps College in Claremont, CA and neighboring community centers. After controlling for other variables, including education, lower numeracy was related to higher scores on the OAFEM consistent with higher risk for financial exploitation. Self-reported physical and mental health, male gender, and younger age were also related to increased risk. Results indicated that numeracy is a significant risk factor for elder financial exploitation after controlling for other commonly reported variables. These findings are consistent with the broader literature relating numeracy to wealth and debt levels and extend them to the area of elder financial exploitation. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Importance of sample size for the estimation of repeater F waves in amyotrophic lateral sclerosis.
Fang, Jia; Liu, Ming-Sheng; Guan, Yu-Zhou; Cui, Bo; Cui, Li-Ying
2015-02-20
In amyotrophic lateral sclerosis (ALS), repeater F waves are increased. Accurate assessment of repeater F waves requires an adequate sample size. We studied the F waves of left ulnar nerves in ALS patients. Based on the presence or absence of pyramidal signs in the left upper limb, the ALS patients were divided into two groups: one group with pyramidal signs designated as the P group and the other without pyramidal signs designated as the NP group. The Index repeating neurons (RN) and Index repeater F waves (Freps) were compared among the P, NP and control groups following 20 and 100 stimuli, respectively. For each group, the Index RN and Index Freps obtained from 20 and 100 stimuli were compared. In the P group, the Index RN (P = 0.004) and Index Freps (P = 0.001) obtained from 100 stimuli were significantly higher than from 20 stimuli. For F waves obtained from 20 stimuli, no significant differences were identified between the P and NP groups for Index RN (P = 0.052) and Index Freps (P = 0.079); the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P group were significantly higher than in the control group; the Index RN (P = 0.002) of the NP group was significantly higher than in the control group. For F waves obtained from 100 stimuli, the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P group were significantly higher than in the NP group; the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P and NP groups were significantly higher than in the control group. Increased repeater F waves reflect increased excitability of the motor neuron pool and indicate upper motor neuron dysfunction in ALS. For an accurate evaluation of repeater F waves in ALS patients, especially those with moderate to severe muscle atrophy, 100 stimuli would be required.
Sofaer, Helen; Jarnevich, Catherine S.
2017-01-01
Aim: The distributions of exotic species reflect patterns of human-mediated dispersal, species climatic tolerances and a suite of other biotic and abiotic factors. The relative importance of each of these factors will shape how the spread of exotic species is affected by ongoing economic globalization and climate change. However, patterns of trade may be correlated with variation in scientific sampling effort globally, potentially confounding studies that do not account for sampling patterns. Location: Global. Time period: Museum records, generally from the 1800s up to 2015. Major taxa studied: Plant species exotic to the United States. Methods: We used data from the Global Biodiversity Information Facility (GBIF) to summarize the number of plant species with exotic occurrences in the United States that also occur in each other country world-wide. We assessed the relative importance of trade and climatic similarity for explaining variation in the number of shared species while evaluating several methods to account for variation in sampling effort among countries. Results: Accounting for variation in sampling effort reversed the relative importance of trade and climate for explaining numbers of shared species. Trade was strongly correlated with numbers of shared U.S. exotic plants between the United States and other countries before, but not after, accounting for sampling variation among countries. Conversely, accounting for sampling effort strengthened the relationship between climatic similarity and species sharing. Using the number of records as a measure of sampling effort provided a straightforward approach for the analysis of occurrence data, whereas species richness estimators and rarefaction were less effective at removing sampling bias. Main conclusions: Our work provides support for broad-scale climatic limitation on the distributions of exotic species, illustrates the need to account for variation in sampling effort in large biodiversity databases, and highlights the
Doolette, David J; Gault, Keith A; Gutvik, Christian R
2014-03-01
In studies of decompression procedures, ultrasonically detected venous gas emboli (VGE) are commonly used as a surrogate outcome if decompression sickness (DCS) is unlikely to be observed. There is substantial variability in observed VGE grades, and studies should be designed with sufficient power to detect an important effect. Data for estimating sample size requirements for studies using VGE as an outcome are provided by a comparison of two decompression schedules that found corresponding differences in DCS incidence (3/192 [DCS/dives] vs. 10/198) and median maximum VGE grade (2 vs. 3, P < 0.0001, Wilcoxon test). Sixty-two subjects dived each schedule at least once, accounting for 183 and 180 man-dives on each schedule. From these data, the frequency with which 10,000 randomly resampled, paired samples of maximum VGE grade were significantly different (paired Wilcoxon test, one-sided P ≤ 0.05 or 0.025) in the same direction as the VGE grades of the full data set were counted (estimated power). Resampling was also used to estimate the power of a Bayesian method that ranks two samples based on DCS risks estimated from the VGE grades. Paired sample sizes of 50 subjects yielded about 80% power, but the power dropped to less than 50% with fewer than 30 subjects. Comparisons of VGE grades that fail to find a difference between paired sample sizes of 30 or fewer must be interpreted cautiously. Studies can be considered well powered if the sample size is 50, even if only a one-grade difference in median VGE grade is of interest.
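The resampling design described above can be sketched in outline. The code below is an illustrative reconstruction, not the study's code: it substitutes an exact sign test for the paired Wilcoxon test and uses made-up paired VGE grades, but it shows how estimated power is simply the fraction of resampled subsets reaching significance:

```python
import random
from math import comb

def sign_test_p(diffs):
    """One-sided sign-test p-value for median(diff) > 0; a simplified
    stand-in for the paired Wilcoxon test used in the study."""
    nonzero = [d for d in diffs if d != 0]
    n = len(nonzero)
    k = sum(1 for d in nonzero if d > 0)
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n  # P(X >= k), X ~ Bin(n, 1/2)

def estimated_power(grades_a, grades_b, n_subjects, n_resamples=2000,
                    alpha=0.05, seed=2):
    """Fraction of resampled paired subsets of size `n_subjects` in which
    schedule B's grades test significantly higher than schedule A's."""
    rng = random.Random(seed)
    pairs = list(zip(grades_a, grades_b))
    hits = 0
    for _ in range(n_resamples):
        sample = [rng.choice(pairs) for _ in range(n_subjects)]
        if sign_test_p([b - a for a, b in sample]) <= alpha:
            hits += 1
    return hits / n_resamples

# Hypothetical paired maximum VGE grades (0-4) for two schedules, with
# schedule B tending roughly one grade higher, as in the study's medians.
grades_a = [0, 1, 2, 2, 3, 3, 1, 2, 3, 2, 1, 2, 4, 2]
grades_b = [1, 2, 3, 3, 2, 4, 2, 3, 2, 3, 2, 3, 3, 3]
power_50 = estimated_power(grades_a, grades_b, n_subjects=50)
power_15 = estimated_power(grades_a, grades_b, n_subjects=15)  # markedly lower
```

The sharp drop in estimated power as the paired sample shrinks mirrors the study's conclusion that comparisons with 30 or fewer subjects must be interpreted cautiously.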
NASA Technical Reports Server (NTRS)
2002-01-01
This image shows the rugged cratered highland region of Libya Montes. Libya Montes forms part of the rim of an ancient impact basin called Isidis. This region of the highlands is fairly dissected with valley networks. There is still debate within the scientific community as to how valley networks themselves form: surface runoff (rainfall/snowmelt) or headward erosion via groundwater sapping. The degree of dissection here in this region suggests surface runoff rather than groundwater sapping. Small dunes are also visible on the floors of some of these channels.
Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Quantum Monte Carlo for Molecules.
1984-11-01
[OCR-garbled DTIC report cover page; recoverable details:] Report AD-A148 159, "Quantum Monte Carlo for Molecules," William A. Lester, Jr. and Peter J. Reynolds, Lawrence Berkeley Laboratory, University of California, Berkeley, November 1984. Key words: quantum Monte Carlo; importance functions.
Monte Carlo fluorescence microtomography
NASA Astrophysics Data System (ADS)
Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge
2011-07-01
Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense light scattering significantly degrades the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. The approach is based on an l0-regularized tomography model, which favors sparse solutions. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probes accurately and reliably.
LMC: Logarithmantic Monte Carlo
NASA Astrophysics Data System (ADS)
Mantz, Adam B.
2017-06-01
LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
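A minimal sketch of the adaptive random-walk Metropolis idea that engines like LMC implement. This is an independent illustration, not LMC's actual code; the Gaussian target and the Robbins-Monro-style step-size adaptation schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Stand-in target: standard 2-D Gaussian. In the use case the abstract
    # describes, this would call expensive third-party likelihood code.
    return -0.5 * np.sum(x**2)

def adaptive_metropolis(log_post, x0, n_steps=5000, target_accept=0.35):
    """Random-walk Metropolis whose proposal scale adapts toward a target
    acceptance rate, with diminishing adaptation so the chain remains valid."""
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    scale, chain = 1.0, []
    for i in range(n_steps):
        prop = x + scale * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        accept = np.log(rng.random()) < lp_prop - lp
        if accept:
            x, lp = prop, lp_prop
        # nudge the proposal scale toward the target acceptance rate
        scale *= np.exp((accept - target_accept) / (i + 1) ** 0.6)
        chain.append(x.copy())
    return np.array(chain)

chain = adaptive_metropolis(log_post, x0=[3.0, -3.0])
```

Because adaptation decays as 1/(i+1)^0.6, early steps tune the scale aggressively while late steps leave the kernel essentially fixed.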
NASA Astrophysics Data System (ADS)
Muir, J. B.; Tkalcic, H.
2015-12-01
The body wave velocities of the lowermost mantle and the topography of the core-mantle boundary are intimately linked, both through the physics of temperature and buoyancy and through the difficulty of independently resolving their structure. We present a hierarchical Bayesian joint inversion of the P-wave velocity perturbations in the lowermost 300 km of the mantle and the topographic perturbations of the core-mantle boundary, using a novel dataset consisting of PcP - P, PKPab - PKPbc and P4KP - PcP differential travel times. This dataset is both free of the effects of the inner core and largely independent of upper mantle heterogeneity, allowing us to concentrate on the core-mantle boundary / lowermost mantle region. To generate the posterior parameter distributions, we employ a hybrid hierarchical Hamiltonian Monte Carlo (HMC) / Gibbs sampler, to our knowledge thus far unused in global seismology. The full hierarchical Bayesian approach, using the HMC/Gibbs sampler, allows the highly correlated and noise-dependent probability surface of the model space to be traversed efficiently. After confirming the efficacy of our sampler on a synthetic dataset, we invert for the lowermost mantle and core-mantle boundary. After including corrections to the differential travel time data to account for upper mantle structure, we find a root mean square P-wave velocity perturbation in the lowermost mantle of 1.26% and a root mean square topographic perturbation of the core-mantle boundary of 6.04 km.
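To illustrate the sampler family involved, here is one Hamiltonian Monte Carlo step with leapfrog integration on a toy Gaussian target. This only shows the basic HMC mechanism; the paper's hybrid hierarchical HMC/Gibbs machinery is far more involved, and the step size and trajectory length here are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def hmc_step(x, log_p, grad_log_p, eps=0.1, n_leapfrog=20):
    """One HMC step: sample a momentum, integrate Hamiltonian dynamics
    with the leapfrog scheme, then accept/reject on the energy error."""
    p = rng.standard_normal(x.shape)
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_p(x_new)          # initial half kick
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new                         # position drift
        p_new += eps * grad_log_p(x_new)             # full momentum kick
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_p(x_new)           # final half kick
    # Metropolis correction on the change in the Hamiltonian
    dh = (log_p(x_new) - 0.5 * p_new @ p_new) - (log_p(x) - 0.5 * p @ p)
    return x_new if np.log(rng.random()) < dh else x

# Toy target: 2-D standard Gaussian
log_p = lambda x: -0.5 * x @ x
grad = lambda x: -x
x = np.array([3.0, 3.0])
draws = []
for _ in range(2000):
    x = hmc_step(x, log_p, grad)
    draws.append(x)
draws = np.array(draws)
```

The gradient-guided trajectories are what let HMC traverse the highly correlated posteriors the abstract mentions far more efficiently than random-walk proposals.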
Macfarlane, Gary J; Jones, Gareth T; Swafe, Leyla; Reid, David M; Basu, Neil
2013-06-01
A common population sampling frame in countries with universal health care is health service registers. We have evaluated the use of such a register, in the United Kingdom, against a commercially available database claiming large population coverage, an alternative that offers ease of access and flexibility of use. A case-control study of vasculitis, which recruited cases from secondary care clinics in Scotland, compared two alternative sampling frames for population controls, namely the registers of National Health Service (NHS) primary care practices and a commercially available database. The characteristics of controls recruited from both sources were compared, in addition to separate case-control comparisons using logistic regression. A total of 166 of 189 cases participated (88% participation rate), while both the commercial database and NHS Central Register (NHSCR) controls achieved a participation rate of 24% among persons assumed to have received the invitation. On several measures, the NHSCR patients reported poorer health than the commercial database controls: low scores on the physical component score of the Short Form 36 (odds ratio [OR]: 2.3; 95% confidence interval [CI]: 1.3-4.1), chronic widespread pain (OR: 2.3; CI: 1.1-4.7), and high levels of fatigue (OR: 2.0; CI: 1.3-3.1). These differences had an important influence on the estimates of association with case status: one association (pain) was strong and significant using commercial database controls but absent with NHSCR controls. There are thus important differences in self-reported measures of health and quality of life between controls drawn from these two alternative population sampling frames, emphasizing the importance of methodological rigor and prior assessment when choosing sampling frames for case-control studies. Copyright © 2013 Elsevier Inc. All rights reserved.
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
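The create/track/tally/destroy pattern the benchmark exercises can be illustrated with a toy analog transport problem in a one-dimensional slab. This is an independent sketch, not the MCB source; the cross section, absorption probability, and slab width are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def transport(n_particles=20_000, slab_width=2.0, sigma_t=1.0, absorb_prob=0.5):
    """Toy analog Monte Carlo transport in a 1-D slab: each particle is
    created at the left face, tracked collision to collision, tallied,
    and destroyed on absorption or leakage."""
    transmitted = absorbed = reflected = 0
    for _ in range(n_particles):
        x, mu = 0.0, 1.0                              # birth: left face, moving right
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)  # flight to next collision
            if x >= slab_width:
                transmitted += 1; break               # leaked out the far side
            if x < 0.0:
                reflected += 1; break                 # leaked back out the near side
            if rng.random() < absorb_prob:            # collision: absorb...
                absorbed += 1; break
            mu = rng.uniform(-1.0, 1.0)               # ...or scatter isotropically
    return transmitted, absorbed, reflected

t, a, r = transport()
```

A real benchmark would distribute these histories over MPI ranks and reduce the tallies at the end; the per-particle loop is embarrassingly parallel.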
NASA Astrophysics Data System (ADS)
Incerti, S.; Barberet, Ph.; Dévès, G.; Michelet, C.; Francis, Z.; Ivantchenko, V.; Mantero, A.; El Bitar, Z.; Bernal, M. A.; Tran, H. N.; Karamitros, M.; Seznec, H.
2015-09-01
The general purpose Geant4 Monte Carlo simulation toolkit is able to simulate radiative and non-radiative atomic de-excitation processes such as fluorescence and Auger electron emission, occurring after interaction of incident ionising radiation with target atomic electrons. In this paper, we evaluate the Geant4 modelling capability for the simulation of fluorescence spectra induced by 1.5 MeV proton irradiation of thin high-Z foils (Fe, GdF3, Pt, Au) with potential interest for nanotechnologies and life sciences. Simulation results are compared to measurements performed at the Centre d'Etudes Nucléaires de Bordeaux-Gradignan AIFIRA nanobeam line irradiation facility in France. Simulation and experimental conditions are described and the influence of Geant4 electromagnetic physics models is discussed.
THE IMPORTANCE OF THE MAGNETIC FIELD FROM AN SMA-CSO-COMBINED SAMPLE OF STAR-FORMING REGIONS
Koch, Patrick M.; Tang, Ya-Wen; Ho, Paul T. P.; Chen, Huei-Ru Vivien; Liu, Hau-Yu Baobab; Yen, Hsi-Wei; Lai, Shih-Ping; Zhang, Qizhou; Chen, How-Huan; Ching, Tao-Chung; Girart, Josep M.; Frau, Pau; Li, Hua-Bai; Li, Zhi-Yun; Padovani, Marco; Qiu, Keping; Rao, Ramprasad
2014-12-20
Submillimeter dust polarization measurements of a sample of 50 star-forming regions, observed with the Submillimeter Array (SMA) and the Caltech Submillimeter Observatory (CSO) covering parsec-scale clouds to milliparsec-scale cores, are analyzed in order to quantify the magnetic field importance. The magnetic field misalignment δ—the local angle between magnetic field and dust emission gradient—is found to be a prime observable, revealing distinct distributions for sources where the magnetic field is preferentially aligned with or perpendicular to the source minor axis. Source-averaged misalignment angles ⟨|δ|⟩ fall into systematically different ranges, reflecting the different source-magnetic field configurations. Possible bimodal ⟨|δ|⟩ distributions are found for the separate SMA and CSO samples. Combining both samples broadens the distribution with a wide maximum peak at small ⟨|δ|⟩ values. Assuming the 50 sources to be representative, the prevailing source-magnetic field configuration is one that statistically prefers small magnetic field misalignments |δ|. When interpreting |δ| together with a magnetohydrodynamics force equation, as developed in the framework of the polarization-intensity gradient method, a sample-based log-linear scaling fits the magnetic field tension-to-gravity force ratio Σ_B versus ⟨|δ|⟩ with ⟨Σ_B⟩ = 0.116 · exp(0.047 · ⟨|δ|⟩) ± 0.20 (mean error), providing a way to estimate the relative importance of the magnetic field based only on measurable field misalignments |δ|. The force ratio Σ_B discriminates systems that are collapsible on average (⟨Σ_B⟩ < 1) from other molecular clouds where the magnetic field still provides enough resistance against gravitational collapse (⟨Σ_B⟩ > 1). The sample-wide trend shows a transition around ⟨|δ|⟩ ≈ 45°. Defining an effective gravitational force ∼1 − ⟨Σ_B⟩, the average magnetic-field-reduced star formation efficiency is at least a
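The reported log-linear scaling between the tension-to-gravity force ratio and the source-averaged misalignment angle can be evaluated directly. This snippet only applies the fitted relation quoted in the abstract; the ±0.20 mean error is not propagated.

```python
import math

def sigma_b(mean_abs_delta_deg):
    """Magnetic field tension-to-gravity force ratio from the sample-based
    fit 0.116 * exp(0.047 * <|delta|>), with the misalignment angle in
    degrees. Values below 1 indicate a system collapsible on average."""
    return 0.116 * math.exp(0.047 * mean_abs_delta_deg)

# The Sigma_B = 1 collapse threshold falls near <|delta|> ~ 45 degrees:
ratio_at_45 = sigma_b(45.0)
```

Evaluating the fit at 45° gives a ratio close to unity, consistent with the transition the abstract reports at that angle.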
Colombo, Fabio A; Vidal, José E; Penalva de Oliveira, Augusto C; Hernandez, Adrián V; Bonasser-Filho, Francisco; Nogueira, Roberta S; Focaccia, Roberto; Pereira-Chioccola, Vera Lucia
2005-10-01
Cerebral toxoplasmosis is the most common cerebral focal lesion in AIDS and still accounts for high morbidity and mortality in Brazil. Its occurrence is more frequent in patients with low CD4(+) T-cell counts. It is directly related to the prevalence of anti-Toxoplasma gondii antibodies in the population. Therefore, it is important to evaluate sensitive, less invasive, and rapid diagnostic tests. We evaluated the value of PCR using peripheral blood samples on the diagnosis of cerebral toxoplasmosis and whether its association with immunological assays can contribute to a timely diagnosis. We prospectively analyzed blood samples from 192 AIDS patients divided into two groups. The first group was composed of samples from 64 patients with cerebral toxoplasmosis diagnosed by clinical and radiological features. The second group was composed of samples from 128 patients with other opportunistic diseases. Blood collection from patients with cerebral toxoplasmosis was done before or on the third day of anti-toxoplasma therapy. PCR for T. gondii, indirect immunofluorescence, enzyme-linked immunosorbent assay, and an avidity test for toxoplasmosis were performed on all samples. The PCR sensitivity and specificity for diagnosis of cerebral toxoplasmosis in blood were 80% and 98%, respectively. Patients with cerebral toxoplasmosis (89%) presented higher titers of anti-T. gondii IgG antibodies than patients with other diseases (57%) (P<0.001). These findings suggest the clinical value of the use of both PCR and high titers of anti-T. gondii IgG antibodies for the diagnosis of cerebral toxoplasmosis. This strategy may prevent more invasive approaches.
Meaney, Christopher; Moineddin, Rahim
2014-01-24
In biomedical research, response variables are often encountered which have bounded support on the open unit interval (0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies include beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to those of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two-sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25), linear regression has superior type-1 error rates compared to the other models. Small-sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the
Tai, Bee-Choo; Grundy, Richard; Machin, David
2011-03-15
Purpose: To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. Methods and Materials: We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. Results: The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Conclusions: Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest.
NASA Technical Reports Server (NTRS)
Glavin, D. P.; Conrad, P.; Dworkin, J. P.; Eigenbrode, J.; Mahaffy, P. R.
2011-01-01
The search for evidence of life on Mars and elsewhere will continue to be one of the primary goals of NASA s robotic exploration program over the next decade. NASA and ESA are currently planning a series of robotic missions to Mars with the goal of understanding its climate, resources, and potential for harboring past or present life. One key goal will be the search for chemical biomarkers including complex organic compounds important in life on Earth. These include amino acids, the monomer building blocks of proteins and enzymes, nucleobases and sugars which form the backbone of DNA and RNA, and lipids, the structural components of cell membranes. Many of these organic compounds can also be formed abiotically as demonstrated by their prevalence in carbonaceous meteorites [1], though, their molecular characteristics may distinguish a biological source [2]. It is possible that in situ instruments may reveal such characteristics, however, return of the right sample (i.e. one with biosignatures or having a high probability of biosignatures) to Earth would allow for more intensive laboratory studies using a broad array of powerful instrumentation for bulk characterization, molecular detection, isotopic and enantiomeric compositions, and spatially resolved chemistry that may be required for confirmation of extant or extinct Martian life. Here we will discuss the current analytical capabilities and strategies for the detection of organics on the Mars Science Laboratory (MSL) using the Sample Analysis at Mars (SAM) instrument suite and how sample return missions from Mars and other targets of astrobiological interest will help advance our understanding of chemical biosignatures in the solar system.
Jayamani, Indumathy; Cupples, Alison M
2015-07-01
This study investigated the microorganisms involved in hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) degradation from a detonation area at a Navy base. Using Illumina sequencing, microbial communities were compared between the initial sample, samples following RDX degradation, and controls not amended with RDX to determine which phylotypes increased in abundance following RDX degradation. The effect of glucose on these communities was also examined. In addition, stable isotope probing (SIP) using labeled ((13)C3, (15)N3-ring) RDX was performed. Illumina sequencing revealed that several phylotypes were more abundant following RDX degradation compared to the initial soil and the no-RDX controls. For the glucose-amended samples, this trend was strong for an unclassified Pseudomonadaceae phylotype and for Comamonas. Without glucose, Acinetobacter exhibited the greatest increase following RDX degradation compared to the initial soil and no-RDX controls. Rhodococcus, a known RDX degrader, also increased in abundance following RDX degradation. For the SIP study, unclassified Pseudomonadaceae was the most abundant phylotype in the heavy fractions in both the presence and absence of glucose. In the glucose-amended heavy fractions, the 16S ribosomal RNA (rRNA) genes of Comamonas and Anaeromxyobacter were also present. Without glucose, the heavy fractions also contained the 16S rRNA genes of Azohydromonas and Rhodococcus. However, all four phylotypes were present at a much lower level compared to unclassified Pseudomonadaceae. Overall, these data indicate that unclassified Pseudomonadaceae was primarily responsible for label uptake in both treatments. This study indicates, for the first time, the importance of Comamonas for RDX removal.
Liu, Bin
2014-07-01
We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem to be how to find a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated based on the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute force methods just preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
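The effective-sample-size criterion used above to score how well a proposal resembles the posterior can be sketched in one dimension, with two Gaussian proposals standing in for the paper's adapted mixture proposals.

```python
import numpy as np

rng = np.random.default_rng(3)

def effective_sample_size(log_w):
    """ESS = 1 / sum(w_i^2) for self-normalized importance weights;
    the closer ESS is to the number of draws, the better the proposal
    resembles the target."""
    w = np.exp(log_w - log_w.max())      # subtract max for numerical stability
    w /= w.sum()
    return 1.0 / np.sum(w**2)

def log_target(x):                        # unnormalized N(0, 1) target
    return -0.5 * x**2

n = 10_000
x_bad = rng.normal(2.0, 1.0, n)           # poor proposal: N(2, 1)
x_good = rng.normal(0.0, 1.0, n)          # ideal proposal: the target itself
ess_bad = effective_sample_size(log_target(x_bad) - (-0.5 * (x_bad - 2.0) ** 2))
ess_good = effective_sample_size(log_target(x_good) - (-0.5 * x_good ** 2))
```

Sampling from the target itself yields ESS equal to the number of draws, while the shifted proposal wastes most of its draws on low-weight regions; the paper's delete/merge/add adaptation of the mixture components is driven by exactly this diagnostic.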
Batt, Angela L; Furlong, Edward T; Mash, Heath E; Glassmeyer, Susan T; Kolpin, Dana W
2017-02-01
A national-scale survey of 247 contaminants of emerging concern (CECs), including organic and inorganic chemical compounds, and microbial contaminants, was conducted in source and treated drinking water samples from 25 treatment plants across the United States. Multiple methods were used to determine these CECs, including six analytical methods to measure 174 pharmaceuticals, personal care products, and pesticides. A three-component quality assurance/quality control (QA/QC) program was designed for the subset of 174 CECs, which allowed us to assess and compare performances of the methods used. The three components included: 1) a common field QA/QC protocol and sample design, 2) individual investigator-developed method-specific QA/QC protocols, and 3) a suite of 46 method comparison analytes that were determined in two or more analytical methods. Overall method performance for the 174 organic chemical CECs was assessed by comparing spiked recoveries in reagent, source, and treated water over a two-year period. In addition to the 247 CECs reported in the larger drinking water study, another 48 pharmaceutical compounds measured did not consistently meet predetermined quality standards; the methodologies that proved unsuitable for these analytes are reviewed. The need to exclude analytes based on method performance demonstrates the importance of additional QA/QC protocols.
Kalos, M.
2006-05-09
The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo Mathematical technique for calculating the ground state energy of the hydrogen atom.
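In the spirit of VARHATOM, a minimal variational Monte Carlo calculation of the hydrogen ground-state energy might look like this. It is an independent Python sketch, not the original FORTRAN; atomic units are assumed, with trial wave function psi = exp(-alpha*r).

```python
import numpy as np

rng = np.random.default_rng(5)

def vmc_hydrogen(alpha, n_walkers=2000, n_steps=400, step=0.3):
    """Variational Monte Carlo for hydrogen with psi = exp(-alpha*r).
    Metropolis sampling of |psi|^2; the local energy is
    E_L = -alpha^2/2 + (alpha - 1)/r, which is constant (-0.5 Ha)
    at the exact value alpha = 1."""
    pos = rng.normal(0.0, 1.0, size=(n_walkers, 3))
    r = np.linalg.norm(pos, axis=1)
    for _ in range(n_steps):
        trial = pos + step * rng.standard_normal(pos.shape)
        r_trial = np.linalg.norm(trial, axis=1)
        # acceptance ratio |psi(r')|^2 / |psi(r)|^2 = exp(-2*alpha*(r'-r))
        accept = rng.random(n_walkers) < np.exp(-2.0 * alpha * (r_trial - r))
        pos[accept], r[accept] = trial[accept], r_trial[accept]
    e_local = -0.5 * alpha**2 + (alpha - 1.0) / r
    return e_local.mean()

energy = vmc_hydrogen(alpha=1.0)     # exactly -0.5 Ha: E_L is constant here
energy_08 = vmc_hydrogen(alpha=0.8)  # variational: lies above -0.5 Ha
```

The zero-variance property at alpha = 1 (every walker reports exactly -0.5 Ha) is a standard sanity check for VMC codes of this kind.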
Quantum Monte Carlo for Molecules.
1986-12-01
[OCR-garbled DTIC report cover page; recoverable details:] Summary report, "Quantum Monte Carlo for Molecules," William A. Lester et al., Lawrence Berkeley Laboratory, University of California, Berkeley, December 1986. Key words: quantum Monte Carlo; importance functions.
Wrzus, Cornelia; Egloff, Boris; Riediger, Michaela
2017-08-01
Implicit association tests (IATs) are increasingly used to indirectly assess people's traits, attitudes, or other characteristics. In addition to measuring traits or attitudes, IAT scores also reflect differences in cognitive abilities because scores are based on reaction times (RTs) and errors. As cognitive abilities change with age, questions arise concerning the usage and interpretation of IATs for people of different ages. To address these questions, the current study examined how cognitive abilities and cognitive processes (i.e., quad model parameters) contribute to IAT results in a large age-heterogeneous sample. Participants (N = 549; 51% female) in an age-stratified sample (range = 12-88 years) completed different IATs and 2 tasks to assess cognitive processing speed and verbal ability. From the IAT data, D2-scores were computed based on RTs, and quad process parameters (activation of associations, overcoming bias, detection, guessing) were estimated from individual error rates. IAT scores and all quad processes except guessing varied with age. Quad processes AC and D predicted D2-scores of the content-specific IAT. Importantly, the effects of cognitive abilities and quad processes on IAT scores were not significantly moderated by participants' age. These findings suggest that IATs are suitable for age-heterogeneous studies from adolescence to old age when IATs are constructed and analyzed appropriately, for example with D-scores and process parameters. We offer further insight into how D-scoring controls for method effects in IATs and what IAT scores capture in addition to implicit representations of characteristics. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
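The core of Greenwald-style IAT D-scoring is a latency difference standardized by the pooled variability of the respondent's own trials. The sketch below shows that core computation; the paper's D2 variant may differ in preprocessing details (error penalties, trial exclusions), and the reaction-time data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)

def iat_d_score(rt_compatible, rt_incompatible):
    """D-score: mean latency difference between incompatible and compatible
    blocks divided by the pooled standard deviation of all trials. Because
    each respondent is standardized by their own RT variability, slower
    overall responding (e.g., with age) inflates the score less than a raw
    RT difference would."""
    diff = rt_incompatible.mean() - rt_compatible.mean()
    pooled_sd = np.concatenate([rt_compatible, rt_incompatible]).std(ddof=1)
    return diff / pooled_sd

# Hypothetical reaction times (ms): slower responses in the incompatible block
rt_c = rng.normal(700, 100, 40)
rt_i = rng.normal(800, 120, 40)
d = iat_d_score(rt_c, rt_i)
```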
Savoye, S; Michelot, J-L; Matray, J-M; Wittebroodt, Ch; Mifsud, A
2012-02-01
Argillaceous formations are thought to be suitable natural barriers to the release of radionuclides from a radioactive waste repository. However, the safety assessment of a waste repository hosted by an argillaceous rock requires knowledge of several properties of the host rock such as the hydraulic conductivity, diffusion properties and the pore water composition. This paper presents an experimental design that allows the determination of these three types of parameters on the same cylindrical rock sample. The reliability of this method was evaluated using a core sample from a well-investigated indurated argillaceous formation, the Opalinus Clay from the Mont Terri Underground Research Laboratory (URL) (Switzerland). In this test, deuterium- and oxygen-18-depleted water, bromide and caesium were injected as tracer pulses in a reservoir drilled in the centre of a cylindrical core sample. The evolution of these tracers was monitored by means of samplers included in a circulation circuit for a period of 204 days. Then, a hydraulic test (pulse-test type) was performed. Finally, the core sample was dismantled and analysed to determine tracer profiles. Diffusion parameters determined for the four tracers are consistent with those previously obtained from laboratory through-diffusion and in-situ diffusion experiments. The reconstructed initial pore-water composition (chloride and water stable-isotope concentrations) was also consistent with those previously reported. In addition, the hydraulic test led to an estimate of hydraulic conductivity in good agreement with that obtained from in-situ tests.
Analytical Applications of Monte Carlo Techniques.
ERIC Educational Resources Information Center
Guell, Oscar A.; Holcombe, James A.
1990-01-01
Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
Dunham, Kylee; Grand, James B.
2016-01-01
We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
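The SISR algorithm used in this study can be sketched on a minimal scalar state-space model (our own toy example; the model, its parameters and plain multinomial resampling are assumptions, not the authors' kernel-smoothed implementation):

```python
import random, math

def sisr_filter(ys, n_particles=500, a=0.9, q=1.0, r=1.0, seed=1):
    """SISR for the toy model x_t = a*x_{t-1} + N(0,q), y_t = x_t + N(0,r).
    Returns the filtered posterior mean of x_t after each observation."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in ys:
        # Propagate each particle through the state equation (prior proposal),
        # then weight by the Gaussian likelihood p(y | x).
        particles = [a * x + rng.gauss(0.0, math.sqrt(q)) for x in particles]
        weights = [math.exp(-0.5 * (y - x) ** 2 / r) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        means.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling to combat weight degeneracy.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means

print(sisr_filter([5.1, 4.8, 5.3, 5.0, 4.9]))
```

Resampling after every observation fights weight degeneracy but causes the particle depletion the study addresses with kernel smoothing.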
Alsaleh, Asma N; Whiley, David M; Bialasiewicz, Seweryn; Lambert, Stephen B; Ware, Robert S; Nissen, Michael D; Sloots, Theo P; Grimwood, Keith
2014-01-09
Carefully conducted, community-based, longitudinal studies are required to gain further understanding of the nature and timing of respiratory viruses causing infections in the population. However, such studies pose unique challenges for field specimen collection, including, as we have observed, the appearance of mould in some nasal swab specimens. We therefore investigated the impact of sample collection quality and the presence of visible mould in samples upon respiratory virus detection by real-time polymerase chain reaction (PCR) assays. Anterior nasal swab samples were collected from infants participating in an ongoing community-based, longitudinal, dynamic birth cohort study. The samples were first collected from each infant shortly after birth and weekly thereafter. They were then mailed to the laboratory where they were catalogued, stored at -80°C and later screened by PCR for 17 respiratory viruses. The quality of specimen collection was assessed by screening for human deoxyribonucleic acid (DNA) using endogenous retrovirus 3 (ERV3). The impact of ERV3 load upon respiratory virus detection, and the impact of visible mould observed in a subset of swabs reaching the laboratory upon both ERV3 loads and respiratory virus detection, were determined. In total, 4933 nasal swabs were received in the laboratory. ERV3 load in nasal swabs was associated with respiratory virus detection. Reduced respiratory virus detection (odds ratio 0.35; 95% confidence interval 0.27-0.44) was observed in samples where the ERV3 could not be identified. Mould was associated with increased time of samples reaching the laboratory and reduced ERV3 loads and respiratory virus detection. Suboptimal sample collection and high levels of visible mould can impact negatively upon sample quality. Quality control measures, including monitoring human DNA loads using ERV3 as a marker for epithelial cell components in samples, should be undertaken to optimize the validity of real-time PCR results for
Extra Chance Generalized Hybrid Monte Carlo
NASA Astrophysics Data System (ADS)
Campos, Cédric M.; Sanz-Serna, J. M.
2015-01-01
We study a method, Extra Chance Generalized Hybrid Monte Carlo, to avoid rejections in the Hybrid Monte Carlo method and related algorithms. In the spirit of delayed rejection, whenever a rejection would occur, extra work is done to find a fresh proposal that, hopefully, may be accepted. We present experiments that clearly indicate that the additional work per sample carried out in the extra chance approach pays off in terms of the quality of the samples generated.
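The delayed-rejection idea behind the extra-chance approach can be sketched for a plain random-walk Metropolis sampler (a hedged toy under our own assumptions, not the authors' Hybrid Monte Carlo variant): when a bold first proposal is rejected, a timid second proposal is tried, accepted with the Tierney-Mira correction factor so the target stays invariant.

```python
import random, math

def log_target(x):
    return -0.5 * x * x        # standard normal, up to an additive constant

def dr_metropolis(n_steps, s1=2.5, s2=0.5, seed=2):
    """Random-walk Metropolis with one delayed-rejection 'extra chance'."""
    rng = random.Random(seed)
    q1_log = lambda frm, to: -0.5 * ((to - frm) / s1) ** 2   # stage-1 proposal logpdf
    x = 0.0
    chain = []
    for _ in range(n_steps):
        # Stage 1: bold symmetric proposal.
        y1 = x + rng.gauss(0.0, s1)
        a1 = min(1.0, math.exp(log_target(y1) - log_target(x)))
        if rng.random() < a1:
            x = y1
        else:
            # Stage 2: timid extra-chance proposal with the delayed-rejection
            # acceptance ratio (stage-2 proposal is symmetric and cancels).
            y2 = x + rng.gauss(0.0, s2)
            a1_rev = min(1.0, math.exp(log_target(y1) - log_target(y2)))
            if a1_rev < 1.0:
                log_ratio = (log_target(y2) + q1_log(y2, y1)
                             - log_target(x) - q1_log(x, y1))
                a2 = min(1.0, math.exp(log_ratio) * (1.0 - a1_rev) / (1.0 - a1))
                if rng.random() < a2:
                    x = y2
        chain.append(x)
    return chain
```

The second stage salvages many otherwise-rejected steps at the cost of one extra target evaluation, mirroring the "extra work per sample" trade-off discussed in the abstract.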
Ramirez, E; Guerra, P; Laosa, O; Duque, B; Tabares, B; Lei, S H; Carcas, A J; Frias, J
2008-08-01
Fulfilling bioequivalence criteria with highly variable drugs is difficult. The aim of this study was to compare the importance of sample size, intrasubject variability, and the point estimate of test and reference formulations with regard to meeting bioequivalence (BE) criteria [maximum observed plasma concentration (C(max)) and area under the concentration-time curve (AUC)]. We compared 137 pairs of data from BE studies with a conventional number of subjects, approximately 31-32 volunteers, developed in the last 10 years. One-third of the studies failed to demonstrate BE, partly due to an unacceptable difference between the mean ratios (T/R) (18 studies) and partly due to high variability combined with small differences between formulations (17 studies). Increasing the number of subjects is hard to justify, and expanding the confidence interval (CI) was insufficient for the most highly variable drugs. Therefore, for low-variable drugs, the difference between formulations was the cornerstone of the fulfillment of BE criteria, but for highly variable drugs, the intrasubject coefficient of variability (ICV) was decisive. Our proposal is that highly variable drugs falling outside the 90% CI limits for BE could still be considered bioequivalent in the absence of a formulation effect and with maximal differences between formulations below 20%.
Stockman, Jamila K; Campbell, Jacquelyn C; Celentano, David D
2009-01-01
Objectives Recent evidence suggests that it is important to consider behavioral-specific sexual violence measures in assessing women’s risk behaviors. This study investigated associations of history and types of sexual coercion on HIV risk behaviors in a nationally representative sample of heterosexually active American women. Methods Analyses were based on 5,857 women aged 18–44 participating in the 2002 National Survey of Family Growth. Types of lifetime sexual coercion included: victim given alcohol or drugs, verbally pressured, threatened with physical injury, and physically injured. Associations with HIV risk behaviors were assessed using logistic regression. Results Of 5,857 heterosexually active women, 16.4% reported multiple sex partners and 15.3% reported substance abuse. A coerced first sexual intercourse experience and coerced sex after sexual debut were independently associated with multiple sex partners and substance abuse; the highest risk was observed for women reporting a coerced first sexual intercourse experience. Among types of sexual coercion, alcohol or drug use at coerced sex was independently associated with multiple sex partners and substance abuse. Conclusions Our findings suggest that public health strategies are needed to address the violent components of heterosexual relationships. Future research should utilize longitudinal and qualitative research to characterize the relationship between continuums of sexual coercion and HIV risk. PMID:19734802
Phuc, Pham Van; Ngoc, Vu Bich; Lam, Dang Hoang; Tam, Nguyen Thanh; Viet, Pham Quoc; Ngoc, Phan Kim
2012-06-01
It is known that umbilical cord blood (UCB) is a rich source of stem cells with practical and ethical advantages. Three important types of stem cells which can be harvested from umbilical cord blood and used in disease treatment are hematopoietic stem cells (HSCs), mesenchymal stem cells (MSCs) and endothelial progenitor cells (EPCs). Since these stem cells have shown enormous potential in regenerative medicine, numerous umbilical cord blood banks have been established. In this study, we examined the ability of banked UCB to produce three types of stem cells from the same samples with characteristics of HSCs, MSCs and EPCs. We were able to obtain homogeneous plastic rapidly-adherent cells (with characteristics of MSCs), slowly-adherent cells (with characteristics of EPCs) and non-adherent cells (with characteristics of HSCs) from the mononuclear cell fractions of cryopreserved UCB. Using a protocol of 48 h supernatant transferring, we successfully isolated MSCs, which expressed CD13, CD44 and CD90 while being CD34, CD45 and CD133 negative, had a typical fibroblast-like shape, and were able to differentiate into adipocytes; EPCs, which were CD34 and CD90 positive, CD13, CD44, CD45 and CD133 negative, and adherent with a cobblestone-like shape; and HSCs, which formed colonies when cultured in MethoCult medium.
Nakamura, Hideaki; Aniya, Masaru
2006-03-01
The density of states of Ag2O-B2O3 glasses has been calculated using a modified scale-transformed energy-space sampling algorithm. This algorithm combines the scale-transformed energy-space sampling algorithm and the Wang-Landau method. It is shown how the two algorithms can be combined to improve the efficiency of the calculation. The thermodynamic properties, in particular the specific heat C_V, of the above-mentioned glass system are studied. At temperatures above 80 K, the value of the specific heat C_V is close to 22 J/mol/K. At low temperatures, deviations of C_V from a T^3 behavior are discernible; that is, C_V/T^3 exhibits a hump at T = 7 K, in good agreement with the reported experimental behavior.
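The Wang-Landau ingredient of the algorithm above can be illustrated on a system with a known density of states (our own toy: two six-sided dice; the sweep count and flatness threshold are assumptions). Each visit to an energy level raises its ln g estimate by ln f, which drives a flat random walk in energy; ln f is halved whenever the visit histogram is flat.

```python
import random, math

def wang_landau_dice(ln_f_final=1e-4, flatness=0.8, seed=3):
    """Wang-Landau estimate of the density of states g(E), E = d1 + d2 for
    two six-sided dice. Exact g(E) for E = 2..12 is 1,2,3,4,5,6,5,4,3,2,1."""
    rng = random.Random(seed)
    energies = range(2, 13)
    ln_g = {e: 0.0 for e in energies}
    hist = {e: 0 for e in energies}
    dice = [1, 1]
    ln_f = 1.0
    while ln_f > ln_f_final:
        for _ in range(10_000):
            i = rng.randrange(2)
            old = dice[i]
            e_old = sum(dice)
            dice[i] = rng.randint(1, 6)       # propose re-rolling one die
            e_new = sum(dice)
            # Accept with prob min(1, g(E_old)/g(E_new)): flat walk in E.
            if rng.random() >= math.exp(ln_g[e_old] - ln_g[e_new]):
                dice[i] = old                 # reject
                e_new = e_old
            ln_g[e_new] += ln_f
            hist[e_new] += 1
        mean_h = sum(hist.values()) / len(hist)
        if min(hist.values()) > flatness * mean_h:
            ln_f /= 2.0                       # flat enough: refine the update
            hist = {e: 0 for e in hist}
    return ln_g

dos = wang_landau_dice()
print(math.exp(dos[7] - dos[2]))  # should approach the exact ratio g(7)/g(2) = 6
```

Only ratios of g(E) are determined; an absolute normalization (as needed for C_V) requires pinning one known level.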
Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique
2017-01-01
Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task and demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities. As such, SDF models have recently been introduced for freeways and ramps in the HSM addendum. However, since these functions or models are fitted and validated using data from a few selected states, they are required to be calibrated to the local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor. However, the proposed methodology to calibrate SDFs was never validated through research. Furthermore, there are no concrete guidelines to select a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. The analysis indicated that as the value of the true calibration factor deviates further away from '1', more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average of the coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of crash severities that are used for the calibration process.
Joyal, Christian C
2015-12-01
interests, and the crucial difference between SF and sexual interest is underlined. Joyal CC. Defining "normophilic" and "paraphilic" sexual fantasies in a population-based sample: On the importance of considering subgroups. Sex Med 2015;3:321-330.
CosmoMC: Cosmological MonteCarlo
NASA Astrophysics Data System (ADS)
Lewis, Antony; Bridle, Sarah
2011-06-01
We present a fast Markov Chain Monte-Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent CMB experiments and provide parameter constraints, including sigma_8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae Ia and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_nu < 0.3eV), equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendices we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints, the effect of the prior, assess the goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.
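The appendix use of importance sampling described above (reweighting existing parameter samples to incorporate new data) can be sketched with a generic Gaussian toy (our own example, not CosmoMC code): weights proportional to the new likelihood turn draws from the old posterior into a weighted sample from the updated posterior.

```python
import random, math

def importance_reweight(samples, extra_loglike):
    """Importance weights w_i ∝ exp(extra_loglike(theta_i)), turning samples
    from an old posterior into a weighted sample from
    old_posterior x new_likelihood."""
    ws = [math.exp(extra_loglike(s)) for s in samples]
    total = sum(ws)
    return [w / total for w in ws]

rng = random.Random(4)
old_samples = [rng.gauss(0.0, 1.0) for _ in range(50_000)]  # old posterior N(0,1)
# 'New data': a Gaussian measurement theta ~ N(1, 1), log-likelihood -(t-1)^2/2.
weights = importance_reweight(old_samples, lambda t: -0.5 * (t - 1.0) ** 2)
mean_new = sum(w * t for w, t in zip(weights, old_samples))
print(round(mean_new, 2))   # analytic updated posterior is N(0.5, 1/2)
```

This avoids rerunning the chain, but is only reliable when the new posterior overlaps the old one well (high effective sample size).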
Two research studies funded and overseen by EPA have been conducted since October 2006 on soil gas sampling methods and variations in shallow soil gas concentrations with the purpose of improving our understanding of soil gas methods and data for vapor intrusion applications. Al...
NASA Astrophysics Data System (ADS)
Chan, H. Y.; Srinivasan, M. P.; Benistant, F.; Jin, H. M.; Chan, L.
2005-07-01
In this work, we study previously published Pearson models in amorphous silicon and present an improved Pearson IV model of ion implantation as a function of implant energy and crystal orientation for use in crystalline silicon. The first 4 moments of the Pearson IV distribution have been extracted from impurity profiles obtained from the Binary Collision Approximation (BCA) code Crystal TRIM for a wide energy range (0.1-300 keV) at varying tilts and rotations. By comparisons with experimental data, we show that certain amounts of channelling always occur in crystalline targets and the analytical Pearson technique should be replaced by a more robust method. We propose an alternative model based on sampling calibration of profiles and present implant tables that have been assimilated into the process simulator DIOS. Two-dimensional impurity profiles can be subsequently generated from these one-dimensional profiles when the lateral standard deviation is specified.
ERIC Educational Resources Information Center
Osborne, Jason W.
2011-01-01
Large surveys often use probability sampling in order to obtain representative samples, and these data sets are valuable tools for researchers in all areas of science. Yet many researchers are not formally prepared to appropriately utilize these resources. Indeed, users of one popular dataset were generally found "not" to have modeled…
Code of Federal Regulations, 2014 CFR
2014-07-01
... producers and importers of denatured fuel ethanol and other oxygenates for use by oxygenate blenders. 80... requirements for producers and importers of denatured fuel ethanol and other oxygenates for use by oxygenate blenders. Beginning January 1, 2017, producers and importers of denatured fuel ethanol (DFE) and...
Propagating probability distributions of stand variables using sequential Monte Carlo methods
Jeffrey H. Gove
2009-01-01
A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor-corrector'...
Use of single scatter electron monte carlo transport for medical radiation sciences
Svatos, Michelle M.
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan
2016-08-24
Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
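The telescoping identity at the heart of MLMC can be illustrated with a deliberately simple "discretisation" (our own stand-in for a PDE solve: quantizing a uniform draw on a level-dependent grid). The coarse level is estimated with many cheap samples; each correction term uses fewer samples because its variance decays with level.

```python
import random

def payoff(x, level):
    """Level-l approximation: quantize x on a grid of spacing 2**-level, then
    square. Finer levels are more accurate but (in a real PDE setting) costlier."""
    h = 2.0 ** -level
    return (round(x / h) * h) ** 2

def mlmc_estimate(max_level=6, n0=100_000, seed=5):
    """Multilevel telescoping estimator of E[X^2], X ~ Uniform(0,1):
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], sample sizes halved per level."""
    rng = random.Random(seed)
    est = 0.0
    for level in range(max_level + 1):
        n = max(n0 >> level, 100)     # fewer samples on finer (costlier) levels
        acc = 0.0
        for _ in range(n):
            x = rng.random()          # the SAME draw couples the two levels
            if level == 0:
                acc += payoff(x, 0)
            else:
                acc += payoff(x, level) - payoff(x, level - 1)
        est += acc / n
    return est

print(mlmc_estimate())  # exact answer is E[X^2] = 1/3
```

The coupling of consecutive levels through the same random draw is what makes the correction terms low-variance; the SMC extension in the paper addresses the case where such i.i.d. coupled sampling is not available.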
Di Camillo, Mauro; Marinaro, Vanessa; Argnani, Fiorenza; Foglietta, Tiziana; Vernia, Piero
2006-01-01
BACKGROUND: The hydrogen breath test (H2BT) is the most widely used procedure in the diagnostic workup of lactose malabsorption and lactose intolerance. AIM: To establish whether a simplified two-or three-sample test may reduce time, costs and staff resources without reducing the sensitivity of the procedure. PATIENTS AND METHODS: Data from 1112 patients (292 men, 820 women) with a positive 4 h, nine-sample H2BT were retrospectively analyzed. Patients were stratified according to the degree of lactose malabsorption, the occurrence and type of symptoms. Loss of sensitivity in the procedure was evaluated taking into account two-sample tests (0 min and 120 min or 0 min and 210 min) or three-sample tests (0 min, 120 min and 180 min or 0 min, 120 min and 210 min). RESULTS: Using a two-sample test (0 min and 120 min or 0 min and 210 min) the false-negative rate was 33.4% and 22.7%, respectively. With a three-sample test (0 min, 120 min and 180 min or 0 min, 120 min or 210 min), lactose malabsorption was diagnosed in 91.2% (1014 of 1112) patients and in 96.1% (1068 of 1112) patients, respectively. Of 594 patients with abdominal symptoms, 158 (26.6%) and 73 (12.2%) would have false-negative results with 0 min and 120 min or 0 min and 210 min two-sample tests, respectively. The three-sample tests, 0 min, 120 min and 180 min or 0 min, 120 min and 210 min, have a false-negative rate of 5.9% and 2.1%, respectively. CONCLUSIONS: A three-sample H2BT is time-and cost-sparing without significant loss of sensitivity for the diagnosis both of lactose malabsorption and lactose intolerance. PMID:16609755
Womersley, J. (Dept. of Physics)
1992-10-01
The D0 detector at the Fermilab Tevatron began its first data-taking run in May 1992. For analysis of the expected 25 pb^-1 data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.
Därr, Roland; Pamporaki, Christina; Peitzsch, Mirko; Miehle, Konstanze; Prejbisz, Aleksander; Peczkowska, Mariola; Weismann, Dirk; Beuschlein, Felix; Sinnott, Richard; Bornstein, Stefan R; Neumann, Hartmut P; Januszewicz, Andrzej; Lenders, Jacques; Eisenhofer, Graeme
2014-04-01
To document the influences of blood sampling under supine fasting versus seated nonfasting conditions on diagnosis of phaeochromocytomas and paragangliomas (PPGL) using plasma concentrations of normetanephrine, metanephrine and methoxytyramine. Biochemical testing for PPGL was performed on 762 patients at six centres, two of which complied with requirements for supine sampling after an overnight fast and four of which did not. Phaeochromocytomas and paragangliomas were found in 129 patients (67 noncompliant, 62 compliant) and not in 633 patients (195 noncompliant, 438 compliant). Plasma concentrations of normetanephrine and methoxytyramine did not differ between compliant and noncompliant sampling conditions in patients with PPGL but were 49-51% higher in patients without PPGL sampled under noncompliant compared with compliant conditions. The 97·5 percentiles of distributions were also higher under noncompliant compared with compliant conditions for normetanephrine (1·29 vs 0·79 nmol/l), metanephrine (0·49 vs 0·41 nmol/l) and methoxytyramine (0·42 vs 0·18 nmol/l). Use of upper cut-offs established from seated nonfasting sampling conditions resulted in substantially decreased diagnostic sensitivity (98% vs 85%). In contrast, use of upper cut-offs established from supine fasting conditions resulted in decreased diagnostic specificity for testing under noncompliant compared with compliant conditions (71% vs 95%). High diagnostic sensitivity of plasma normetanephrine, metanephrine and methoxytyramine for the detection of PPGL can only be guaranteed using upper cut-offs of reference intervals established with blood sampling under supine fasting conditions. With such cut-offs, sampling under seated nonfasting conditions can lead to a 5·7-fold increase in false-positive results necessitating repeat sampling under supine fasting conditions. © 2013 John Wiley & Sons Ltd.
Hartman, D; Benton, L; Morenos, L; Beyer, J; Spiden, M; Stock, A
2011-02-25
The identification of disaster victims through the use of DNA analysis is an integral part of any Disaster Victim Identification (DVI) response, regardless of the scale and nature of the disaster. As part of the DVI response to the 2009 Victorian Bushfires Disaster, DNA analysis was performed to assist in the identification of victims through kinship (familial matching to relatives) or direct (self-source sample) matching of DNA profiles. Although most of the DNA identifications achieved were to reference samples from relatives, there were a number of DNA identifications (12) made through direct matching. Guthrie cards, which have been collected in Australia over the past 30 years, were used to provide direct reference samples. Of the 236 ante-mortem (AM) samples received, 21 were Guthrie cards and one was a biopsy specimen; all of which yielded complete DNA profiles when genotyped. This publication describes the use of such biobanks and medical specimens as a sample source for the recovery of good quality DNA for comparisons to post-mortem (PM) samples.
On a full Monte Carlo approach to quantum mechanics
NASA Astrophysics Data System (ADS)
Sellier, J. M.; Dimov, I.
2016-12-01
The Monte Carlo approach to numerical problems has been shown to be remarkably efficient in performing very large computational tasks since it is an embarrassingly parallel technique. Additionally, Monte Carlo methods are well known to keep performance and accuracy with the increase of dimensionality of a given problem, a rather counterintuitive peculiarity not shared by any known deterministic method. Motivated by these very peculiar and desirable computational features, in this work we depict a full Monte Carlo approach to the problem of simulating single- and many-body quantum systems by means of signed particles. In particular we introduce a stochastic technique, based on the strategy known as importance sampling, for the computation of the Wigner kernel which, so far, has represented the main bottleneck of this method (it is equivalent to the calculation of a multi-dimensional integral, a problem in which complexity is known to grow exponentially with the dimensions of the problem). The advantage of introducing this stochastic technique for the kernel is twofold: firstly it reduces the complexity of a quantum many-body simulation from non-linear to linear, secondly it introduces an embarrassingly parallel approach to this very demanding problem. To conclude, we perform concise but indicative numerical experiments which clearly illustrate how a full Monte Carlo approach to many-body quantum systems is not only possible but also advantageous. This paves the way towards practical time-dependent, first-principle simulations of relatively large quantum systems by means of affordable computational resources.
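The variance-reduction role of importance sampling in a multi-dimensional integral can be sketched with a generic Gaussian toy (our own example, not the signed-particle Wigner-kernel computation). Drawing points from a density p and averaging integrand/p is unbiased for any p; choosing p proportional to the integrand gives the zero-variance optimal case.

```python
import random, math

def is_gaussian_integral(d=6, s=1.0, n=20_000, seed=9):
    """Importance-sampling estimate of I = ∫ exp(-|x|^2) dx over R^d
    (exact value pi^(d/2)), drawing x from N(0, s^2 I)."""
    rng = random.Random(seed)
    norm = (2.0 * math.pi * s * s) ** (d / 2.0)   # density normalisation
    total = 0.0
    for _ in range(n):
        x = [rng.gauss(0.0, s) for _ in range(d)]
        r2 = sum(xi * xi for xi in x)
        # weight = integrand / importance density
        total += math.exp(-r2) * norm * math.exp(r2 / (2.0 * s * s))
    return total / n

# With s^2 = 1/2 the importance density is proportional to the integrand,
# so every weight equals pi^(d/2) exactly: the zero-variance optimal case.
print(is_gaussian_integral(s=math.sqrt(0.5)))
print(math.pi ** 3)
```

With any other s the estimator remains unbiased but acquires variance, which is the trade-off any practical importance function (such as the one proposed for the Wigner kernel) must manage.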
Observations on variational and projector Monte Carlo methods.
Umrigar, C J
2015-10-28
Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.
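A minimal variational Monte Carlo example (our own textbook toy: the 1D harmonic oscillator with a Gaussian trial function, not code from the paper) shows the role of the local energy and why the exact trial function has zero variance:

```python
import random, math

def local_energy(x, a):
    # H = -0.5 d^2/dx^2 + 0.5 x^2 ; trial psi_a(x) = exp(-a x^2)
    # E_L = (H psi)/psi = a + x^2 (1/2 - 2 a^2)
    return a + x * x * (0.5 - 2.0 * a * a)

def vmc_energy(a, n_steps=50_000, step=1.0, seed=6):
    """Metropolis sampling of |psi_a|^2, averaging the local energy."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        y = x + rng.uniform(-step, step)
        # |psi(y)/psi(x)|^2 = exp(-2a(y^2 - x^2))
        if rng.random() < min(1.0, math.exp(-2.0 * a * (y * y - x * x))):
            x = y
        e_sum += local_energy(x, a)
    return e_sum / n_steps

print(vmc_energy(0.5))  # a = 1/2 is the exact ground state: E_L = 1/2 everywhere
print(vmc_energy(0.3))  # any other a gives a higher variational energy
```

At a = 1/2 the local energy is constant, so the estimator has zero variance; this is the VMC analogue of the optimal importance function discussed in the importance-sampling literature.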
Code of Federal Regulations, 2014 CFR
2014-07-01
... producers and importers of denaturant designated as suitable for the manufacture of denatured fuel ethanol... suitable for the manufacture of denatured fuel ethanol meeting federal quality requirements. Beginning January 1, 2017, or on the first day that any producer or importer of ethanol denaturant designates...
NASA Astrophysics Data System (ADS)
Fasnacht, Marc
We develop adaptive Monte Carlo methods for the calculation of the free energy as a function of a parameter of interest. The methods presented are particularly well suited for systems with complex energy landscapes, where standard sampling techniques have difficulties. The Adaptive Histogram Method uses a biasing potential derived from histograms recorded during the simulation to achieve uniform sampling in the parameter of interest. The Adaptive Integration Method directly calculates an estimate of the free energy from the average derivative of the Hamiltonian with respect to the parameter of interest and uses it as a biasing potential. We compare both methods to a state-of-the-art method, and demonstrate that they compare favorably for the calculation of potentials of mean force of dense Lennard-Jones fluids. We use the Adaptive Integration Method to calculate accurate potentials of mean force for different types of simple particles in a Lennard-Jones fluid. Our approach allows us to separate the contributions of the solvent to the potential of mean force from the effect of the direct interaction between the particles. With the contributions of the solvent determined, we can find the potential of mean force directly for any other direct interaction without additional simulations. We also test the accuracy of the Adaptive Integration Method on a thermodynamic cycle, which allows us to perform a consistency check between potentials of mean force and chemical potentials calculated using the Adaptive Integration Method. The results demonstrate a high degree of consistency of the method.
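The histogram-based biasing loop described above can be sketched in a few lines. This is a hedged, Wang-Landau-flavoured illustration on a toy double-well coordinate, not the authors' implementation; the energy function, step sizes and increment are all made up:

```python
import math
import random

def adaptive_histogram(energy, x0, nbins, xlo, xhi, sweeps, increment=0.01):
    """Metropolis walk with a bias potential built up from the visit histogram,
    driving roughly uniform sampling of the coordinate of interest."""
    width = (xhi - xlo) / nbins
    bias = [0.0] * nbins
    hist = [0] * nbins

    def bin_of(x):
        return min(nbins - 1, max(0, int((x - xlo) / width)))

    x = x0
    for _ in range(sweeps):
        xn = x + random.uniform(-0.2, 0.2)
        if xlo <= xn <= xhi:
            # Metropolis test on the biased energy E(x) + V_bias(x)
            d_e = (energy(xn) + bias[bin_of(xn)]) - (energy(x) + bias[bin_of(x)])
            if d_e <= 0 or random.random() < math.exp(-d_e):
                x = xn
        b = bin_of(x)
        hist[b] += 1
        bias[b] += increment   # penalise revisited bins -> histogram flattens
    return bias, hist

random.seed(1)
double_well = lambda x: (x * x - 1.0) ** 2 / 0.1   # minima at x = ±1, barrier at x = 0
bias, hist = adaptive_histogram(double_well, -1.0, 20, -1.5, 1.5, 200_000)
```

As the bias accumulates it cancels the barrier, so both wells are visited; the negative of the converged bias approximates the free energy profile up to an additive constant.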
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-20
... risk posed by the plants for planting. The risk-based sampling and inspection approach will allow us to... plant part) for or capable of propagation, including a tree, a tissue culture, a plantlet culture... articles (other than seeds, bulbs, or sterile cultures of orchid plants) from any country or locality...
A national-scale survey of 247 contaminants of emerging concern (CECs), including organic and inorganic chemical compounds, and microbial contaminants, was conducted in source and treated drinking water samples from 25 treatment plants across the United States. Multiple methods w...
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; ...
2014-05-29
We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^{−2}) or O(ε^{−2}(ln ε)^{2}), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^{−3}) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^{−5}. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate-limiting step, and its limitations.
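The multilevel telescoping sum underlying the method above can be illustrated on a toy SDE. This is a sketch assuming geometric Brownian motion and Euler-Maruyama coupling, not the Landau–Fokker–Planck setting of the paper; all parameters are illustrative:

```python
import math
import random

def mlmc_gbm(x0, mu, sigma, T, L, N):
    """Multilevel Monte Carlo estimate of E[X_T] for dX = mu*X dt + sigma*X dW,
    using Euler-Maruyama paths coupled across levels (fine step = half the coarse step)."""
    est = 0.0
    for level in range(L + 1):
        nf = 2 ** level          # number of fine-path timesteps at this level
        dtf = T / nf
        acc = 0.0
        for _ in range(N):
            xf = xc = x0
            dws = []
            for step in range(nf):
                dw = random.gauss(0.0, math.sqrt(dtf))
                xf += mu * xf * dtf + sigma * xf * dw
                dws.append(dw)
                if step % 2 == 1:
                    # coarse path reuses the summed Brownian increments
                    xc += mu * xc * (2 * dtf) + sigma * xc * (dws[-2] + dws[-1])
            acc += xf if level == 0 else xf - xc   # level 0: plain; else: correction term
        est += acc / N
    return est

random.seed(2)
est = mlmc_gbm(1.0, 0.05, 0.2, 1.0, 4, 4000)   # exact E[X_T] = exp(mu*T) ≈ 1.0513
```

Each level adds only the expected difference between fine and coarse discretizations driven by the same Brownian increments; since those differences have small variance, most samples can be spent on the cheap coarse levels, which is the source of the cost reduction the abstract quantifies.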
USDA-ARS?s Scientific Manuscript database
Hypoglycin A (HGA) is a toxic amino acid that is naturally produced in unripe ackee fruit. In 1973 the FDA placed a worldwide import alert on ackee fruit, which banned the product from entering the U.S. The FDA has considered establishing a regulatory limit for HGA and lifting the ban, which will re...
Wormhole Hamiltonian Monte Carlo
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2015-01-01
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551
Fast Monte Carlo for radiation therapy: the PEREGRINE Project
Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.
1997-11-11
The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient-specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently used algorithms reveals significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.
Hill, Robert V
2005-08-01
Several mutually exclusive hypotheses have been advanced to explain the phylogenetic position of turtles among amniotes. Traditional morphology-based analyses place turtles among extinct anapsids (reptiles with a solid skull roof), whereas more recent studies of both morphological and molecular data support an origin of turtles from within Diapsida (reptiles with a doubly fenestrated skull roof). Evaluation of these conflicting hypotheses has been hampered by nonoverlapping taxonomic samples and the exclusion of significant taxa from published analyses. Furthermore, although data from soft tissues and anatomical systems such as the integument may be particularly relevant to this problem, they are often excluded from large-scale analyses of morphological systematics. Here, conflicting hypotheses of turtle relationships are tested by (1) combining published data into a supermatrix of morphological characters to address issues of character conflict and missing data; (2) increasing taxonomic sampling by more than doubling the number of operational taxonomic units to test internal relationships within suprageneric ingroup taxa; and (3) increasing character sampling by approximately 25% by adding new data on the osteology and histology of the integument, an anatomical system that has been historically underrepresented in morphological systematics. The morphological data set assembled here represents the largest yet compiled for Amniota. Reevaluation of character data from prior studies of amniote phylogeny favors the hypothesis that turtles indeed have diapsid affinities. Addition of new ingroup taxa alone leads to a decrease in overall phylogenetic resolution, indicating that existing characters used for amniote phylogeny are insufficient to explain the evolution of more highly nested taxa. Incorporation of new data from the soft and osseous components of the integument, however, helps resolve relationships among both basal and highly nested amniote taxa. Analysis of a
Gonçalves, M; Peralta, A R; Monteiro Ferreira, J; Guilleminault, Christian
2015-01-01
Sleepiness is considered to be a leading cause of crashes. Despite the huge amount of information collected in questionnaire studies, only some are based on representative samples of the population. Specifics of the populations studied hinder the generalization of these previous findings. For the Portuguese population, data from sleep-related car crashes/near misses and sleepiness while driving are missing. The objective of this study is to determine the prevalence of near-miss and nonfatal motor vehicle crashes related to sleepiness in a representative sample of Portuguese drivers. Structured phone interviews regarding sleepiness and sleep-related crashes and near misses, driving habits, demographic data, and sleep quality were conducted using the Pittsburgh Sleep Quality Index and sleep apnea risk using the Berlin questionnaire. A multivariate regression analysis was used to determine the associations with sleepy driving (feeling sleepy or falling asleep while driving) and sleep-related near misses and crashes. Nine hundred subjects, representing the Portuguese population of drivers, were included; 3.1% acknowledged falling asleep while driving during the previous year and 0.67% recalled sleepiness-related crashes. Higher education, driving more than 15,000 km/year, driving more frequently between 12:00 a.m. and 6 a.m., fewer years of having a driver's license, less total sleep time per night, and higher scores on the Epworth Sleepiness Scale (ESS) were all independently associated with sleepy driving. Sleepiness-related crashes and near misses were associated only with falling asleep at the wheel in the previous year. Sleep-related crashes occurred more frequently in drivers who had also had sleep-related near misses. Portugal has lower self-reported sleepiness at the wheel and sleep-related near misses than most other countries where epidemiological data are available. Different population characteristics and cultural, social, and road safety specificities may
NASA Astrophysics Data System (ADS)
Mouslopoulou, Vasiliki; Nicol, Andrew; Walsh, John; Begg, John; Townsend, Dougal; Hristopulos, Dionissios
2013-04-01
The catastrophic earthquakes that recently (September 4th, 2010 and February 22nd, 2011) hit Christchurch, New Zealand, show that active faults, capable of generating large-magnitude earthquakes, can be hidden beneath the Earth's surface. In this study we combine near-surface paleoseismic data with deep (<5 km) onshore seismic-reflection lines to explore the growth of normal faults over short (<27 kyr) and long (>1 Ma) timescales in the Taranaki Rift, New Zealand. Our analysis shows that the integration of different timescale datasets provides a basis for identifying active faults not observed at the ground surface, estimating maximum fault-rupture lengths, inferring maximum short-term displacement rates and improving earthquake hazard assessment. We find that fault displacement rates become increasingly irregular (both faster and slower) on shorter timescales, leading to incomplete sampling of the active-fault population. Surface traces have been recognised for <50% of the active faults and along ∼50% of their lengths. The similarity of along-strike displacement profiles for short and long time intervals suggests that fault lengths and maximum single-event displacements have not changed over the last 3.6 Ma. Therefore, rate changes are likely to reflect temporal adjustments in earthquake recurrence intervals due to fault interactions and associated migration of earthquake activity within the rift.
Peterson, A Townsend; Moses, Lina M; Bausch, Daniel G
2014-01-01
Lassa fever is a disease that has been reported from sites across West Africa; it is caused by an arenavirus that is hosted by the rodent M. natalensis. Although it is confined to West Africa, and has been documented in detail in some well-studied areas, the details of the distribution of risk of Lassa virus infection remain poorly known at the level of the broader region. In this paper, we explored the effects of certainty of diagnosis, oversampling in well-studied regions, and error balance on the results of mapping exercises. Each of the three factors assessed in this study had clear and consistent influences on model results, overestimating risk in southern, humid zones in West Africa, and underestimating risk in drier and more northern areas. The final, adjusted risk map indicates broad risk areas across much of West Africa. Although risk maps are increasingly easy to develop from disease occurrence data and raster data sets summarizing aspects of environments and landscapes, this process is highly sensitive to issues of data quality, sampling design, and design of analysis, with macrogeographic implications of each of these issues and the potential for misrepresenting real patterns of risk.
Chamorro-Premuzic, Tomas; Reimers, Stian; Hsu, Anne; Ahmetoglu, Gorkan
2009-08-01
The present study examined individual differences in artistic preferences in a sample of 91,692 participants (60% women and 40% men), aged 13-90 years. Participants completed a Big Five personality inventory (Goldberg, 1999) and provided preference ratings for 24 different paintings corresponding to cubism, renaissance, impressionism, and Japanese art, which loaded on to a latent factor of overall art preferences. As expected, the personality trait openness to experience was the strongest and only consistent personality correlate of artistic preferences, affecting both overall and specific preferences, as well as visits to galleries, and artistic (rather than scientific) self-perception. Overall preferences were also positively influenced by age and visits to art galleries, and to a lesser degree, by artistic self-perception and conscientiousness (negatively). As for specific styles, after overall preferences were accounted for, more agreeable, more conscientious and less open individuals reported higher preference levels for impressionist art; younger and more extraverted participants showed higher levels of preference for cubism (as did males); and younger participants, as well as males, reported higher levels of preference for renaissance art. Limitations and recommendations for future research are discussed.
Compressible generalized hybrid Monte Carlo
NASA Astrophysics Data System (ADS)
Fang, Youhan; Sanz-Serna, J. M.; Skeel, Robert D.
2014-05-01
One of the most demanding calculations is to generate random samples from a specified probability distribution (usually with an unknown normalizing prefactor) in a high-dimensional configuration space. One often has to resort to using a Markov chain Monte Carlo method, which converges only in the limit to the prescribed distribution. Such methods typically inch through configuration space step by step, with acceptance of a step based on a Metropolis(-Hastings) criterion. An acceptance rate of 100% is possible in principle by embedding configuration space in a higher dimensional phase space and using ordinary differential equations. In practice, numerical integrators must be used, lowering the acceptance rate. This is the essence of hybrid Monte Carlo methods. Presented is a general framework for constructing such methods under relaxed conditions: the only geometric property needed is (weakened) reversibility; volume preservation is not needed. The possibilities are illustrated by deriving a couple of explicit hybrid Monte Carlo methods, one based on barrier-lowering variable-metric dynamics and another based on isokinetic dynamics.
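The generic hybrid Monte Carlo step described above — momentum refreshment, approximate Hamiltonian dynamics, then a Metropolis(-Hastings) test that corrects the integration error — can be sketched as follows. This is a minimal illustration for a 1D Gaussian target, not the compressible, non-volume-preserving variants the paper derives; step size and trajectory length are illustrative:

```python
import math
import random

def hmc_step(x, u, grad_u, eps=0.15, n_leap=10):
    """One hybrid Monte Carlo step: draw a momentum, integrate Hamiltonian
    dynamics with a leapfrog scheme, then accept/reject via Metropolis."""
    p = random.gauss(0.0, 1.0)
    x_new, p_new = x, p
    p_new -= 0.5 * eps * grad_u(x_new)       # initial half-step in momentum
    for i in range(n_leap):
        x_new += eps * p_new                 # full step in position
        if i < n_leap - 1:
            p_new -= eps * grad_u(x_new)     # full step in momentum
    p_new -= 0.5 * eps * grad_u(x_new)       # final half-step in momentum
    h_old = u(x) + 0.5 * p * p               # Hamiltonian = potential + kinetic
    h_new = u(x_new) + 0.5 * p_new * p_new
    if random.random() < math.exp(min(0.0, h_old - h_new)):
        return x_new, True                   # accepted
    return x, False                          # rejected: keep the old state

random.seed(3)
u = lambda x: 0.5 * x * x       # potential = -log density of N(0, 1)
grad_u = lambda x: x
samples, accepts, x = [], 0, 0.0
for _ in range(5000):
    x, ok = hmc_step(x, u, grad_u)
    accepts += ok
    samples.append(x)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, the acceptance rate stays high even for long trajectories, which is the property the abstract's exact-dynamics limit (100% acceptance) generalizes.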
Tang, Ke; Wong, Samuel W.K.; Liu, Jun S.; Zhang, Jinfeng; Liang, Jie
2015-01-01
Motivation: Loops in proteins are often involved in biochemical functions. Their irregularity and flexibility make experimental structure determination and computational modeling challenging. Most current loop modeling methods focus on modeling single loops. In protein structure prediction, multiple loops often need to be modeled simultaneously. As interactions among loops in spatial proximity can be rather complex, sampling the conformations of multiple interacting loops is a challenging task. Results: In this study, we report a new method called multi-loop Distance-guided Sequential chain-Growth Monte Carlo (M-DiSGro) for prediction of the conformations of multiple interacting loops in proteins. Our method achieves an average RMSD of 1.93 Å for lowest energy conformations of 36 pairs of interacting protein loops with the total length ranging from 12 to 24 residues. We further constructed a data set containing proteins with 2, 3 and 4 interacting loops. For the most challenging target proteins with four loops, the average RMSD of the lowest energy conformations is 2.35 Å. Our method is also tested for predicting multiple loops in β-barrel membrane proteins. For outer-membrane protein G, the lowest energy conformation has a RMSD of 2.62 Å for the three extracellular interacting loops with a total length of 34 residues (12, 12 and 10 residues in each loop). Availability and implementation: The software is freely available at: tanto.bioe.uic.edu/m-DiSGro. Contact: jinfeng@stat.fsu.edu or jliang@uic.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25861965
Löwbeer, C; Kawakami, T; Tähepõld, P; Gustafsson, S A; Vaage, J; Valen, G
2002-01-01
The isolated, buffer-perfused heart is probably the most widely used model in experimental heart research, and the coronary effluent is often analysed for markers of myocardial injury. Adsorption to surrounding materials may be a serious problem of protein measurements in solutions with low protein concentrations. The aims of the present study were to investigate the importance of the preanalytical phase when measuring cardiac troponin T (cTnT) in a buffer perfusate and to investigate whether addition of albumin to the effluent might increase recovery of cTnT and improve the assay. Coronary effluent was collected in tubes of different materials and in tubes with 40 g/L bovine albumin, and then frozen. cTnT was analysed at different time points after withdrawal from the freezer. cTnT was 2.3-119 times higher in effluent with albumin. In effluent without albumin, cTnT concentration declined to 2% of the initial concentration after two episodes of freezing and thawing. The cTnT loss could not be prevented by using polystyrene or siliconized glass, but was partially inhibited in effluent with albumin. Furthermore, creatine kinase and lactate dehydrogenase levels were higher in effluent with albumin. The within-series coefficient of variation for cTnT was markedly improved when using effluent with albumin.
Papadopoulos, Costas; Frontistis, Zacharias; Antonopoulou, Maria; Venieri, Danae; Konstantinou, Ioannis; Mantzavinos, Dionissios
2016-07-01
The sonochemical degradation of ethyl paraben (EP), a representative of the parabens family, was investigated. Experiments were conducted at a constant ultrasound frequency of 20 kHz and liquid bulk temperature of 30 °C in the following range of experimental conditions: EP concentration 250-1250 μg/L, ultrasound (US) density 20-60 W/L, reaction time up to 120 min, initial pH 3-8 and sodium persulfate 0-100 mg/L, either in ultrapure water or secondary treated wastewater. A factorial design methodology was adopted to elucidate the statistically important effects and their interactions, and a full empirical model comprising seventeen terms was originally developed. Omitting several terms of lower significance, a reduced model that can reliably simulate the process was finally proposed; this includes EP concentration, reaction time, power density and initial pH, as well as the interactions (EP concentration)×(US density), (EP concentration)×(pHo) and (EP concentration)×(time). Experiments at an increased EP concentration of 3.5 mg/L were also performed to identify degradation by-products. LC-TOF-MS analysis revealed that EP sonochemical degradation occurs through dealkylation of the ethyl chain to form methyl paraben, while successive hydroxylation of the aromatic ring yields 4-hydroxybenzoic, 2,4-dihydroxybenzoic and 3,4-dihydroxybenzoic acids. By-products are less toxic to the bacterium V. fischeri than the parent compound.
Card, Roderick; Vaughan, Kelly; Bagnall, Mary; Spiropoulos, John; Cooley, William; Strickland, Tony; Davies, Rob; Anjum, Muna F.
2016-01-01
Salmonella enterica is a foodborne zoonotic pathogen of significant public health concern. We have characterized the virulence and antimicrobial resistance gene content of 95 Salmonella isolates from 11 serovars by DNA microarray recovered from UK livestock or imported meat. Genes encoding resistance to sulphonamides (sul1, sul2), tetracycline [tet(A), tet(B)], streptomycin (strA, strB), aminoglycoside (aadA1, aadA2), beta-lactam (blaTEM), and trimethoprim (dfrA17) were common. Virulence gene content differed between serovars; S. Typhimurium formed two subclades based on virulence plasmid presence. Thirteen isolates were selected by their virulence profile for pathotyping using the Galleria mellonella pathogenesis model. Infection with a chicken invasive S. Enteritidis or S. Gallinarum isolate, a multidrug resistant S. Kentucky, or a S. Typhimurium DT104 isolate resulted in high mortality of the larvae; notably presence of the virulence plasmid in S. Typhimurium was not associated with increased larvae mortality. Histopathological examination showed that infection caused severe damage to the Galleria gut structure. Enumeration of intracellular bacteria in the larvae 24 h post-infection showed increases of up to 7 log above the initial inoculum and transmission electron microscopy (TEM) showed bacterial replication in the haemolymph. TEM also revealed the presence of vacuoles containing bacteria in the haemocytes, similar to Salmonella containing vacuoles observed in mammalian macrophages; although there was no evidence from our work of bacterial replication within vacuoles. This work shows that microarrays can be used for rapid virulence genotyping of S. enterica and that the Galleria animal model replicates some aspects of Salmonella infection in mammals. These procedures can be used to help inform on the pathogenicity of isolates that may be antibiotic resistant and have scope to aid the assessment of their potential public and animal health risk. PMID:27199965
Interaction picture density matrix quantum Monte Carlo
Malone, Fionn D.; Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.
2015-07-28
The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I
2014-06-15
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^{7} x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the
Probabilistic Assessments of the Plate Using Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Ismail, A. E.; Ariffin, A. K.; Abdullah, S.; Ghazali, M. J.
2011-02-01
This paper presents a probabilistic analysis of a plate with a hole using several multiaxial high-cycle fatigue criteria (MHFC). The Dang Van, Sines, and Crossland criteria were used, and the von Mises criterion was also considered for comparison purposes. A parametric finite element model of the plate was developed, several important random variable parameters were selected, and Latin Hypercube Sampling Monte Carlo Simulation (LHS-MCS) was used as the probabilistic analysis tool. It was found that different structural reliability and sensitivity factors were obtained using different failure criteria. According to the results, multiaxial fatigue criteria are the most significant criteria that need to be considered in assessing structural behavior, especially under complex loadings.
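Latin Hypercube Sampling, the sampling scheme named in this record, can be sketched in a few lines. This is a generic illustration, not code from the paper: each dimension of the unit cube is split into equal strata, one point is drawn per stratum, and the strata are shuffled independently per dimension so that every one-dimensional projection remains uniformly covered.

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Generate an n_samples x n_dims Latin hypercube design on [0, 1)."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_dims):
        # one jittered point inside each of the n_samples equal strata
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)  # decorrelate strata across dimensions
        columns.append(col)
    # transpose the columns into a list of sample points
    return [tuple(col[i] for col in columns) for i in range(n_samples)]
```

Each random input of the structural model would then be obtained by pushing one coordinate through the inverse CDF of its assumed distribution.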
Monte Carlo Test Assembly for Item Pool Analysis and Extension
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.
2005-01-01
A new test assembly algorithm based on a Monte Carlo random search is presented in this article. A major advantage of the Monte Carlo test assembly over other approaches (integer programming or enumerative heuristics) is that it performs a uniform sampling from the item pool, which provides every feasible item combination (test) with an equal…
Quantum Monte Carlo Endstation for Petascale Computing
Lubos Mitas
2011-01-26
published papers, 15 invited talks and lectures nationally and internationally. My former graduate student and postdoc Dr. Michal Bajdich, who was supported by this grant, is currently a postdoc with ORNL in the group of Dr. F. Reboredo and Dr. P. Kent and is using the developed tools in a number of DOE projects. The QWalk package has become a truly important research tool used by the electronic structure community and has attracted several new developers in other research groups. Our tools use several types of correlated wavefunction approaches (variational, diffusion, and reptation methods) and large-scale optimization methods for wavefunctions, and enable the calculation of energy differences such as cohesion and electronic gaps, as well as densities and other properties; using multiple runs, one can obtain equations of state for given structures and beyond. Our codes use efficient numerical and Monte Carlo strategies (high-accuracy numerical orbitals, multi-reference wave functions, highly accurate correlation factors, pairing orbitals, force-biased and correlated sampling Monte Carlo), are robustly parallelized, and run very efficiently on tens of thousands of cores. Our demonstration applications were focused on challenging research problems in several fields of materials science, such as transition metal solids. We note that our study of FeO solid was the first QMC calculation of transition metal oxides at high pressures.
Monte Carlo studies of uranium calorimetry
Brau, J.; Hargis, H.J.; Gabriel, T.A.; Bishop, B.L.
1985-01-01
Detailed Monte Carlo calculations of uranium calorimetry are presented which reveal a significant difference in the responses of liquid argon and plastic scintillator in uranium calorimeters. Due to saturation effects, neutrons from the uranium are found to contribute only weakly to the liquid argon signal. Electromagnetic sampling inefficiencies are significant and contribute substantially to compensation in both systems. 17 references.
Monte Carlo inversion of seismic data
NASA Technical Reports Server (NTRS)
Wiggins, R. A.
1972-01-01
The analytic solution to the linear inverse problem provides estimates of the uncertainty of the solution in terms of standard deviations of corrections to a particular solution, resolution of parameter adjustments, and information distribution among the observations. It is shown that Monte Carlo inversion, when properly executed, can provide all the same kinds of information for nonlinear problems. Proper execution requires a relatively uniform sampling of all possible models. The expense of performing Monte Carlo inversion generally requires strategies to improve the probability of finding passing models. Such strategies can lead to a very strong bias in the distribution of models examined unless great care is taken in their application.
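The uniform-sampling strategy described in this record, drawing candidate models at random and keeping those that "pass" the data fit, can be sketched generically. The code below is an illustration only, with a hypothetical misfit function and bounds; as the abstract warns, any strategy that biases the sampling toward passing models must be applied with care.

```python
import random

def monte_carlo_inversion(misfit, bounds, tolerance, n_trials=20_000, seed=3):
    """Uniformly sample candidate models within per-parameter bounds and
    keep those whose misfit to the data falls below tolerance ("passing"
    models). Uniform sampling avoids biasing the accepted ensemble."""
    rng = random.Random(seed)
    passing = []
    for _ in range(n_trials):
        model = tuple(lo + rng.random() * (hi - lo) for lo, hi in bounds)
        if misfit(model) <= tolerance:
            passing.append(model)
    return passing
```

The spread of the accepted models then plays the role of the uncertainty estimates that the analytic solution provides in the linear case.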
An Efficient MCMC Algorithm to Sample Binary Matrices with Fixed Marginals
ERIC Educational Resources Information Center
Verhelst, Norman D.
2008-01-01
Uniform sampling of binary matrices with fixed margins is known as a difficult problem. Two classes of algorithms to sample from a distribution not too different from the uniform are studied in the literature: importance sampling and Markov chain Monte Carlo (MCMC). Existing MCMC algorithms converge slowly, require a long burn-in period and yield…
Monte Carlo Reliability Analysis.
1987-10-01
E. E. Lewis and Z. Tu, "Monte Carlo Reliability Modeling by Inhomogeneous Markov Processes," Reliab. Engr. 16, 277-296 (1986). (4) E. Cinlar, Introduction to Stochastic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1975. (5) R. E. Barlow and F. Proschan, Statistical Theory of Reliability and Life...
Shell model the Monte Carlo way
Ormand, W.E.
1995-03-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
Study on phase function in Monte Carlo transmission characteristics of poly-disperse aerosol
NASA Astrophysics Data System (ADS)
Bai, Lu; Wu, Zhen-Sen; Tang, Shuang-Qing; Li, Ming; Xie, Pin-Hua; Wang, Shi-Mei
2011-01-01
The Henyey-Greenstein (H-G) phase function is typically used as an approximation to the Mie phase function, and its shortcomings have been discussed in numerous papers. However, a clear criterion for when the H-G phase function is valid remains ambiguous. In this paper, we use the direct-sample phase function method in transmittance calculations. A comparison of the direct-sample phase function method and the H-G phase function is presented, and the percentage of multiple scattering in Monte Carlo transfer computations is discussed. Numerical results show that using the H-G phase function leads to underestimation of the transmittance. The deviation of the root-mean-square error can be used as a criterion. Although the exact calculation of the sample phase function requires slightly more computation time, the rigorous phase function simulation method has an important role in Monte Carlo radiative transfer computation problems.
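For context, the H-G phase function discussed in this record is popular partly because its cumulative distribution inverts analytically, so scattering angles can be drawn directly. The sketch below shows that standard inversion (it is generic textbook material, not code from the paper); the mean of the sampled cos(theta) equals the asymmetry parameter g.

```python
import random

def sample_hg_costheta(g, rng):
    """Draw cos(theta) from the Henyey-Greenstein phase function with
    asymmetry parameter g, via the analytic inversion of its CDF."""
    xi = rng.random()
    if abs(g) < 1e-6:  # isotropic limit
        return 2.0 * xi - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)
```

Sampling a tabulated Mie phase function instead, as the paper advocates, requires a numerical inverse-CDF lookup rather than this closed form.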
Monte Carlo algorithms for Brownian phylogenetic models.
Horvilleur, Benjamin; Lartillot, Nicolas
2014-11-01
Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Monte Carlo algorithms for Brownian phylogenetic models
Horvilleur, Benjamin; Lartillot, Nicolas
2014-01-01
Motivation: Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. Results: A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. Availability: The program is freely available at www.phylobayes.org Contact: nicolas.lartillot@univ-lyon1.fr PMID:25053744
Barfi, Azadeh; Nazem, Habibollah; Saeidi, Iman; Peyrovi, Moazameh; Afsharzadeh, Maryam; Barfi, Behruz; Salavati, Hossein
2016-03-20
In the present study, an efficient and environmentally friendly method, called in-syringe reversed dispersive liquid-liquid microextraction (IS-R-DLLME), was developed to simultaneously extract three important components (para-anisaldehyde, trans-anethole and its isomer estragole) from different plant extracts (basil, fennel and tarragon) and from human plasma and urine samples prior to their determination using high-performance liquid chromatography. The importance of choosing these plant extracts as samples stems from the dual roles of their bioactive compounds (trans-anethole and estragole), which can positively or negatively alter different cellular processes, and from the need for a simple and efficient method for the extraction and sensitive determination of these compounds in the mentioned samples. Under the optimum conditions (extraction solvent: 120 μL of n-octanol; dispersive solvent: 600 μL of acetone; collecting solvent: 1000 μL of acetone; sample pH 3; no salt), limits of detection (LODs), linear dynamic ranges (LDRs) and recoveries (R) were 79-81 ng mL(-1), 0.26-6.9 μg mL(-1) and 94.1-99.9%, respectively. The results showed that IS-R-DLLME is a simple, fast and sensitive method with low consumption of extraction solvent that provides high recovery under the optimum conditions. The method was applied to investigate the absorbed amounts of the analytes by determining them before (in the plant extracts) and after (in the human plasma and urine samples) consumption, which can indicate the toxicity levels of the analytes (on the basis of their dosages) in the extracts.
MORSE Monte Carlo shielding calculations for the zirconium hydride reference reactor
NASA Technical Reports Server (NTRS)
Burgart, C. E.
1972-01-01
Verification of DOT-SPACETRAN transport calculations of a lithium hydride and tungsten shield for a SNAP reactor was performed using the MORSE (Monte Carlo) code. Transport of both neutrons and gamma rays was considered. Importance sampling was utilized in the MORSE calculations. Several quantities internal to the shield, as well as dose at several points outside of the configuration, were in satisfactory agreement with the DOT calculations of the same.
Dynamically stratified Monte Carlo forecasting
NASA Technical Reports Server (NTRS)
Schubert, Siegfried; Suarez, Max; Schemm, Jae-Kyung; Epstein, Edward
1992-01-01
A new method for performing Monte Carlo forecasts is introduced. The method, called dynamic stratification, selects initial perturbations based on a stratification of the error distribution. A simple implementation is presented in which the error distribution used for the stratification is estimated from a linear model derived from a large ensemble of 12-h forecasts with the full dynamic model. The stratification thus obtained is used to choose a small subsample of initial states with which to perform the dynamical Monte Carlo forecasts. Several test cases are studied using a simple two-level general circulation model with uncertain initial conditions. It is found that the method provides substantial reductions in the sampling error of the forecast mean and variance when compared to the more traditional approach of choosing the initial perturbations at random. The degree of improvement, however, is sensitive to the nature of the initial error distribution and to the base state. In practice the method may be viable only if the computational burden involved in obtaining an adequate estimate of the error distribution is shared with the data-assimilation procedure.
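Stratified sampling, the core idea behind the forecasting method in this record, is easy to demonstrate on a one-dimensional integral. The sketch below is generic (not the paper's dynamic stratification of an error distribution): the sampling domain is split into equal strata and each is sampled separately, which removes the between-stratum component of the variance relative to plain random sampling.

```python
import random

def stratified_mean(f, n_strata, per_stratum, seed=2):
    """Estimate the mean of f over [0, 1) by sampling each of n_strata
    equal sub-intervals separately instead of the whole interval at once."""
    rng = random.Random(seed)
    total = 0.0
    for k in range(n_strata):
        lo = k / n_strata
        for _ in range(per_stratum):
            total += f(lo + rng.random() / n_strata)
    return total / (n_strata * per_stratum)
```

With f(x) = x^2 the exact mean is 1/3, and the stratified estimate lands much closer to it, for the same total sample count, than an unstratified draw would.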
Single scatter electron Monte Carlo
Svatos, M.M.
1997-03-01
A single scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly-used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL) which has data for all elements with an atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.
Eriksson, Andreas; Giske, Christian G; Ternhag, Anders
2013-01-01
To determine the distribution of urinary tract pathogens with focus on Staphylococcus saprophyticus and analyse the seasonality, antibiotic susceptibility, and gender and age distributions in a large Swedish cohort. S. saprophyticus is considered an important causative agent of urinary tract infection (UTI) in young women, and some earlier studies have reported up to approximately 40% of UTIs in this patient group being caused by S. saprophyticus. We hypothesized that this may be true only in very specific outpatient settings. During the year 2010, 113,720 urine samples were sent for culture to the Karolinska University Hospital, from both clinics in the hospital and from primary care units. Patient age, gender and month of sampling were analysed for S. saprophyticus, Escherichia coli, Klebsiella pneumoniae and Proteus mirabilis. Species data were obtained for 42,633 (37%) of the urine samples. The most common pathogens were E. coli (57.0%), Enterococcus faecalis (6.5%), K. pneumoniae (5.9%), group B streptococci (5.7%), P. mirabilis (3.0%) and S. saprophyticus (1.8%). The majority of subjects with S. saprophyticus were women 15-29 years of age (63.8%). In this age group, S. saprophyticus constituted 12.5% of all urinary tract pathogens. S. saprophyticus is a common urinary tract pathogen in young women, but its relative importance is low compared with E. coli even in this patient group. For women in other ages and for men, growth of S. saprophyticus is a quite uncommon finding.
NASA Astrophysics Data System (ADS)
Hu, Xingzhi; Chen, Xiaoqian; Parks, Geoffrey T.; Yao, Wen
2016-10-01
Ever-increasing demands of uncertainty-based design, analysis, and optimization in aerospace vehicles motivate the development of Monte Carlo methods with wide adaptability and high accuracy. This paper presents a comprehensive review of typical improved Monte Carlo methods and summarizes their characteristics to aid the uncertainty-based multidisciplinary design optimization (UMDO). Among them, Bayesian inference aims to tackle the problems with the availability of prior information like measurement data. Importance sampling (IS) settles the inconvenient sampling and difficult propagation through the incorporation of an intermediate importance distribution or sequential distributions. Optimized Latin hypercube sampling (OLHS) is a stratified sampling approach to achieving better space-filling and non-collapsing characteristics. Meta-modeling approximation based on Monte Carlo saves the computational cost by using cheap meta-models for the output response. All the reviewed methods are illustrated by corresponding aerospace applications, which are compared to show their techniques and usefulness in UMDO, thus providing a beneficial reference for future theoretical and applied research.
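Importance sampling (IS), one of the improved Monte Carlo methods surveyed in this record, is most easily seen on a rare-event example. The sketch below is a generic illustration, not from the review: it estimates a Gaussian tail probability by drawing from a shifted exponential proposal concentrated in the tail and reweighting each draw by the density ratio, whereas plain Monte Carlo would almost never produce a tail sample.

```python
import math
import random

def importance_sample_tail(threshold, n=100_000, seed=0):
    """Estimate P(X > threshold) for X ~ N(0, 1) using an exponential
    proposal shifted to start at the threshold (rate = threshold)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = threshold + rng.expovariate(threshold)  # proposal sample in the tail
        target = math.exp(-0.5 * y * y) / math.sqrt(2.0 * math.pi)  # N(0,1) pdf
        proposal = threshold * math.exp(-threshold * (y - threshold))
        total += target / proposal  # importance weight
    return total / n
```

For threshold 4 the true probability is about 3.17e-5; a plain 100,000-sample estimator would typically see only a handful of tail hits, while the reweighted estimator has a tiny relative error.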
Dyrenforth, Portia S; Kashy, Deborah A; Donnellan, M Brent; Lucas, Richard E
2010-10-01
Three very large, nationally representative samples of married couples were used to examine the relative importance of 3 types of personality effects on relationship and life satisfaction: actor effects, partner effects, and similarity effects. Using data sets from Australia (N = 5,278), the United Kingdom (N = 6,554), and Germany (N = 11,418) provided an opportunity to test whether effects replicated across samples. Actor effects accounted for approximately 6% of the variance in relationship satisfaction and between 10% and 15% of the variance in life satisfaction. Partner effects (which were largest for Agreeableness, Conscientiousness, and Emotional Stability) accounted for between 1% and 3% of the variance in relationship satisfaction and between 1% and 2% of the variance in life satisfaction. Couple similarity consistently explained less than .5% of the variance in life and relationship satisfaction after controlling for actor and partner effects.
Dåderman, Anna Maria; Strindlund, Hans; Wiklund, Nils; Fredriksen, Svend-Otto; Lidberg, Lars
2003-10-14
The sedative-hypnotic benzodiazepine flunitrazepam (FZ) is abused worldwide. The purpose of our study was to investigate violence and anterograde amnesia following intoxication with FZ, and how this was legally evaluated in forensic psychiatric investigations, with the objective of drawing some conclusions about the importance of a urine sample in a case of suspected FZ intoxication. The case was a 23-year-old male university student who, intoxicated with FZ (and possibly with other substances such as diazepam, amphetamines or cannabis), first stabbed an acquaintance and, 2 years later, stabbed two friends to death. The police investigation files, including videotaped interviews, the forensic psychiatric files, and the results of the forensic autopsies of the victims were compared with the information obtained from the case. Only partial recovery from anterograde amnesia occurred over a period of several months. This case report contains some important new information: a forensic analysis of a blood sample instead of a urine sample may lead to confusion during the police investigation and forensic psychiatric assessment (FPA) of an FZ abuser and, in consequence, to wrong legal decisions. FZ, alone or combined with other substances, induces severe violence and is followed by anterograde amnesia. All cases of bizarre, unexpected aggression followed by anterograde amnesia should be assessed for abuse of FZ. A urine sample is needed in cases of suspected FZ intoxication. The police need to be more aware of these issues, and they must recognise that they play a crucial role in the assessment procedure. Declaring FZ an illegal drug is strongly recommended.
Monte Carlo eikonal scattering
NASA Astrophysics Data System (ADS)
Gibbs, W. R.; Dedonder, J. P.
2012-08-01
Background: The eikonal approximation is commonly used to calculate heavy-ion elastic scattering. However, the full evaluation has only been done (without the use of Monte Carlo techniques or additional approximations) for α-α scattering. Purpose: Develop, improve, and test the Monte Carlo eikonal method for elastic scattering over a wide range of nuclei, energies, and angles. Method: Monte Carlo evaluation is used to calculate heavy-ion elastic scattering for heavy nuclei, including the center-of-mass correction introduced in this paper and the Coulomb interaction in terms of a partial-wave expansion. A technique for the efficient expansion of the Glauber amplitude in partial waves is developed. Results: Angular distributions are presented for a number of nuclear pairs over a wide energy range using nucleon-nucleon scattering parameters taken from phase-shift analyses and densities from independent sources. We present the first calculations of the Glauber amplitude, without further approximation, and with realistic densities for nuclei heavier than helium. These densities respect the center-of-mass constraints. The Coulomb interaction is included in these calculations. Conclusion: The center-of-mass and Coulomb corrections are essential. Angular distributions can be predicted only up to certain critical angles which vary with the nuclear pairs and the energy, but we point out that all critical angles correspond to a momentum transfer near 1 fm^-1.
Halme, Alex S; Fritel, Xavier; Benedetti, Andrea; Eng, Ken; Tannenbaum, Cara
2015-03-01
Sample size calculations for treatment trials that aim to assess health-related quality-of-life (HRQOL) outcomes are often difficult to perform. Researchers must select a target minimal clinically important difference (MCID) in HRQOL for the trial, estimate the effect size of the intervention, and then consider the responsiveness of different HRQOL measures for detecting improvements. Generic preference-based HRQOL measures are usually less sensitive to gains in HRQOL than are disease-specific measures, but are nonetheless recommended to quantify an impact on HRQOL that can be translated into quality-adjusted life-years during cost-effectiveness analyses. Mapping disease-specific measures onto generic measures is a proposed method for yielding more efficient sample size requirements while retaining the ability to generate utility weights for cost-effectiveness analyses. This study sought to test this mapping strategy to calculate and compare the effect on sample size of three different methods. Three different methods were used for determining an MCID in HRQOL in patients with incontinence: 1) a global rating of improvement, 2) an incontinence-specific HRQOL instrument, and 3) a generic preference-based HRQOL instrument using mapping coefficients. The sample size required to detect a 20% difference in the MCID for the global rating of improvement was 52 per trial arm, 172 per arm for the incontinence-specific HRQOL outcome, and 500 per arm for the generic preference-based HRQOL outcome. We caution that treatment trials of conditions for which improvements are not easy to measure on generic HRQOL instruments will still require significantly greater sample size even when mapping functions are used to try to gain efficiency. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Pitchure, D J; Ricker, R E; Williams, M E; Claggett, S A
2010-01-01
Since many household systems are fabricated out of metallic materials, changes to the household environment that accelerate corrosion rates will increase the frequency of failures in these systems. Recently, it has been reported that homes constructed with imported wallboard have increased failure rates in appliances, air conditioner heat exchanger coils, and visible corrosion on electrical wiring and other metal components. At the request of the Consumer Product Safety Commission (CPSC), the National Institute of Standards and Technology (NIST) became involved through the Interagency Agreement CPSC-1-09-0023 to perform metallurgical analyses on samples and corrosion products removed from homes constructed using imported wallboard. This document reports on the analysis of the first group of samples received by NIST from CPSC. The samples received by NIST on September 28, 2009 consisted of copper tubing for supplying natural gas and two air conditioner heat exchanger coils. The examinations performed by NIST consisted of photography, metallurgical cross-sectioning, optical microscopy, scanning electron microscopy (SEM), and x-ray diffraction (XRD). Leak tests were also performed on the air conditioner heat exchanger coils. The objective of these examinations was to determine extent and nature of the corrosive attack, the chemical composition of the corrosion product, and the potential chemical reactions or environmental species responsible for accelerated corrosion. A thin black corrosion product was found on samples of the copper tubing. The XRD analysis of this layer indicated that this corrosion product was a copper sulfide phase and the diffraction peaks corresponded with those for the mineral digenite (Cu9S5). Corrosion products were also observed on other types of metals in the air conditioner coils where condensation would frequently wet the metals. The thickness of the corrosion product layer on a copper natural gas supply pipe with a wall thickness of 1
Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis
NASA Technical Reports Server (NTRS)
Hanson, J. M.; Beard, B. B.
2010-01-01
This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
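One of the questions this record raises, how many Monte Carlo runs are needed to verify a requirement, has a compact answer in the zero-failure case. The sketch below is a standard binomial-confidence result, not a formula quoted from the TP: if n independent runs all succeed, the claim "success probability ≥ r" holds with confidence c provided r^n ≤ 1 - c.

```python
import math

def runs_for_zero_failures(reliability, confidence):
    """Smallest number of consecutive successful Monte Carlo runs needed
    to claim the success probability is at least `reliability` at the
    given confidence level, assuming independent runs and no failures."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))
```

For example, demonstrating 90% reliability at 95% confidence requires 29 consecutive successes; demonstrating 99.7% reliability at 90% confidence requires 767. Any observed failure forces a larger sample via the full binomial tail.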
A palaeomagnetic study of Apennine thrusts, Italy: Monte Maiella and Monte Raparo
NASA Astrophysics Data System (ADS)
Jackson, K. C.
1990-06-01
Three separate structural blocks within the southern Apennines have been sampled for palaeomagnetic investigation to constrain their original separation and movement during mid-Tertiary deformation. The Mesozoic limestones are weakly magnetic, and the NRM intensity of all samples from Upper and Lower Cretaceous limestone from the Alburni platform and from Upper Cretaceous limestone at Monte Maiella were too low to yield results. Lower Cretaceous limestone at Monte Maiella contained a mean magnetisation (after structural correction) of D = 326°, I = +42°, k = 44, N = 9 (57°N, 263°E); and Cretaceous (?) limestone at Monte Raparo a mean of D = 132°, I = -61°, k = 50, N = 19 (54° N, 306°E). The Monte Maiella results, near the central part of the Apennine thrust-front, are compatible with a local, clockwise block-rotation during deformation, while Monte Raparo results may bear evidence of the major east-west thrust-motion during shortening in addition to anticlockwise block-rotations already reported from the southernmost Apennines.
Ierardo, Gaetano; Corridore, Denise; Di Carlo, Gabriele; Di Giorgio, Gianni; Leonardi, Emanuele; Campus, Guglielmo-Giuseppe; Vozza, Iole; Polimeni, Antonella; Bossù, Maurizio
2017-01-01
Background Data from epidemiological studies investigating the prevalence and severity of malocclusions in children are of great relevance to public health programs aimed at orthodontic prevention. Previous epidemiological studies focused mainly on the adolescent age group and reported a prevalence of malocclusion with high variability, ranging from 32% to 93%. The aim of our study was to assess the need for orthodontic treatment in a paediatric sample from Southern Italy in order to improve awareness among paediatricians about oral health preventive strategies in pediatric dentistry. Material and Methods The study used the IOTN-DHC index to evaluate the need for orthodontic treatment for several malocclusions (overjet, reverse overjet, overbite, openbite, crossbite) in a sample of 579 children in the 2-9 years age range. Results The most frequently altered occlusal parameter was the overbite (prevalence: 24.5%), while the occlusal anomaly that most frequently presented a need for orthodontic treatment was the crossbite (8.8%). The overall prevalence of need for orthodontic treatment was 19.3%, while 49% of the sample showed one or more altered occlusal parameters. No statistically significant difference was found between males and females. Conclusions Results from this study support the idea that the establishment of a malocclusion is a gradual process starting at an early age. Effective orthodontic prevention programs should therefore include preschool children and make paediatricians aware of the importance of an early first dental visit. Key words:Orthodontic treatment, malocclusion, oral health, pediatric dentistry. PMID:28936290
Avendaño, Jorge Enrique; Arbeláez-Cortés, Enrique; Cadena, Carlos Daniel
2017-06-01
Phylogeographic studies seeking to describe biogeographic patterns, infer evolutionary processes, and revise species-level classification should properly characterize the distribution ranges of study species, and thoroughly sample genetic variation across taxa and geography. This is particularly necessary for widely distributed organisms occurring in complex landscapes, such as the Neotropical region. Here, we clarify the geographic range and revisit the phylogeography of the Black-billed Thrush (Turdus ignobilis), a common passerine bird from lowland tropical South America, whose evolutionary relationships and species limits were recently evaluated employing phylogeographic analyses based on partial knowledge of its distribution and incomplete sampling of populations. Our work employing mitochondrial and nuclear DNA sequences sampled all named subspecies and multiple populations across northern South America, and uncovered patterns not apparent in earlier work, including a biogeographic interplay between the Amazon and Orinoco basins and the occurrence of distinct lineages with seemingly different habitat affinities in regional sympatry in the Colombian Amazon. In addition, we found that previous inferences about the affinities and taxonomic status of Andean populations assumed to be allied to populations from the Pantepui region were incorrect, implying that inferred biogeographic and taxonomic scenarios need re-evaluation. We propose a new taxonomic treatment, which recognizes two distinct biological species in the group. Our findings illustrate the importance of sufficient taxon and geographic sampling to reconstruct evolutionary history and to evaluate species limits among Neotropical organisms. Considering the scope of the questions asked, advances in Neotropical phylogeography will often require substantial cross-country scientific collaboration. Copyright © 2017 Elsevier Inc. All rights reserved.
Shield weight optimization using Monte Carlo transport calculations
NASA Technical Reports Server (NTRS)
Jordan, T. M.; Wohl, M. L.
1972-01-01
Outlines are given of the theory used in the FASTER-3 Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries. The code has the additional capability of calculating the minimum-weight layered unit-shield configuration that will meet a specified dose-rate constraint. It includes the treatment of geometric regions bounded by quadratic and quartic surfaces, with multiple radiation sources having a specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. Results are presented for sample problems involving primary neutron and both primary and secondary photon transport in a spherical reactor shield configuration; these results include the optimization of the shield configuration.
Go with the winners: a general Monte Carlo strategy
NASA Astrophysics Data System (ADS)
Grassberger, Peter
2002-08-01
We describe a general strategy for sampling configurations from a given distribution, not based on the standard Metropolis (Markov chain) strategy. It uses the fact that nontrivial problems in statistical physics are high dimensional and often close to Markovian. Therefore, configurations are built up in many, usually biased, steps. Due to the bias, each configuration carries its weight which changes at every step. If the bias is close to optimal, all weights are similar and importance sampling is perfect. If not, "population control" is applied by cloning/killing partial configurations with too high/low weight. This is done such that the final (weighted) distribution is unbiased. We apply this method (which is also closely related to diffusion type quantum Monte Carlo) to several problems of polymer statistics, reaction-diffusion models, sequence alignment, and percolation.
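The weighted-growth-plus-population-control idea can be made concrete with a toy example (my own sketch, not Grassberger's code): estimate the probability that n fair coin flips are all heads by building each configuration flip by flip with a bias toward heads, carrying the compensating weight, and cloning or killing walkers whose weights drift far from the population average.

```python
import random

def all_heads_prob(n_steps=20, n_walkers=5000, seed=1):
    """'Go with the winners' estimate of P(all heads) = 0.5**n_steps.
    Each flip is sampled with a random bias q toward heads; the walker
    weight is multiplied by 0.5/q so the weighted estimate stays unbiased.
    Cloning splits weight exactly; killing doubles survivors' weight, so
    both population-control moves preserve the expectation."""
    rng = random.Random(seed)
    weights = [1.0] * n_walkers
    for _ in range(n_steps):
        grown = []
        for w in weights:
            q = rng.uniform(0.8, 0.95)        # biased step toward heads
            if rng.random() < q:              # heads: path can still succeed
                grown.append(w * 0.5 / q)     # compensate for the bias
            # tails: the path is not all-heads, contributes 0 -> drop it
        if grown:
            avg = sum(grown) / len(grown)
            controlled = []
            for w in grown:
                if w > 2.0 * avg:             # clone: two half-weight copies
                    controlled.extend([0.5 * w, 0.5 * w])
                elif w < 0.5 * avg:           # kill half, double the survivors
                    if rng.random() < 0.5:
                        controlled.append(2.0 * w)
                else:
                    controlled.append(w)
            grown = controlled
        weights = grown
    return sum(weights) / n_walkers
```

With a near-optimal bias all weights stay similar and little control is needed; with a poor bias the clone/kill steps keep the surviving population concentrated on the "winners" without biasing the final weighted estimate.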
Thiéry, Vincent; Trincal, Vincent; Davy, Catherine A
2017-10-01
Ettringite, Ca6Al2(SO4)3(OH)12·26H2O, or C6AS̄3H32 in cement chemistry notation, is a major phase of interest in cement science as a hydration product, and in polluted soil treatment, since its structure can accommodate many hazardous cations. Beyond those anthropogenic features, ettringite is first of all a naturally occurring (although rare) mineral. An example of its behaviour under the scanning electron microscope and during energy dispersive spectroscopy (EDS) qualitative analysis is presented, based on the study of natural ettringite crystals from the N'Chwaning mine in South Africa. Monte Carlo modelling of the electron-matter interaction zone at various voltages is presented and confronted with actual, observed beam damage on crystals, which burst at the analysis spot. Finally, theoretical EDS spectra for all the ettringite group minerals have been computed, together with Monte Carlo modelling of the electron-matter interaction zone. An estimate of the size of this zone may thus be helpful for interpreting EDS analyses of cement pastes or ettringite-remediated soils. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
Helton, J C; Shiver, A W
1996-02-01
A Monte Carlo procedure for the construction of complementary cumulative distribution functions (CCDFs) for comparison with the U.S. Environmental Protection Agency (EPA) release limits for radioactive waste disposal (40 CFR 191, Subpart B) is described and illustrated with results from a recent performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP). The Monte Carlo procedure produces CCDF estimates similar to those obtained with importance sampling in several recent PAs for the WIPP. The advantages of the Monte Carlo procedure over importance sampling include increased resolution in the calculation of probabilities for complex scenarios involving drilling intrusions and better use of the necessarily limited number of mechanistic calculations that underlie CCDF construction.
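The CCDF at issue is simply P(outcome > x) as a function of x. A generic sketch of how one is built from Monte Carlo output (illustrative only, not the WIPP procedure):

```python
from bisect import bisect_right

def empirical_ccdf(samples):
    """Build an empirical complementary cumulative distribution function
    from Monte Carlo samples: returns ccdf(x) = estimated P(outcome > x),
    the fraction of samples strictly greater than x."""
    ordered = sorted(samples)
    n = len(ordered)

    def ccdf(x):
        return (n - bisect_right(ordered, x)) / n

    return ccdf
```

Plotted against a regulatory limit curve, such an empirical CCDF is the object compared with the 40 CFR 191 release limits; the resolution of its tail is exactly what the number of underlying Monte Carlo samples controls.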
Overy, Catherine; Booth, George H; Blunt, N S; Shepherd, James J; Cleland, Deidre; Alavi, Ali
2014-12-28
Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems.
NASA Astrophysics Data System (ADS)
Velazquez, L.; Castro-Palacio, J. C.
2013-07-01
Recently, Velazquez and Curilef proposed a methodology to extend Monte Carlo algorithms based on a canonical ensemble which aims to overcome slow sampling problems associated with temperature-driven discontinuous phase transitions. We show in this work that Monte Carlo algorithms extended with this methodology also exhibit a remarkable efficiency near a critical point. Our study is performed for the particular case of a two-dimensional four-state Potts model on a square lattice with periodic boundary conditions. This analysis reveals that the extended version of Metropolis importance sampling is more efficient than the usual Swendsen-Wang and Wolff cluster algorithms. These results demonstrate the effectiveness of this methodology to improve the efficiency of MC simulations of systems that undergo any type of temperature-driven phase transition.
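For reference, the unextended baseline the authors compare against can be sketched in a few lines: plain single-spin-flip Metropolis importance sampling for the q = 4 Potts model (a generic textbook implementation, not the extended canonical-ensemble algorithm studied in the paper):

```python
import math, random

def metropolis_potts(L=6, q=4, beta=1.0, sweeps=100, seed=2):
    """Single-spin-flip Metropolis for the q-state Potts model on an
    L x L square lattice with periodic boundaries. The energy counts -1
    for each bond joining equal spins; returns the mean energy per site
    averaged over the second half of the sweeps."""
    rng = random.Random(seed)
    spins = [[rng.randrange(q) for _ in range(L)] for _ in range(L)]

    def site_energy(i, j, s):
        # energy of the four bonds around site (i, j) if its spin were s
        e = 0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if spins[(i + di) % L][(j + dj) % L] == s:
                e -= 1
        return e

    e_sum = n_meas = 0
    for sweep in range(sweeps):
        for i in range(L):
            for j in range(L):
                new = rng.randrange(q)
                dE = site_energy(i, j, new) - site_energy(i, j, spins[i][j])
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    spins[i][j] = new
        if sweep >= sweeps // 2:        # crude burn-in
            E = sum(site_energy(i, j, spins[i][j])
                    for i in range(L) for j in range(L)) / 2  # each bond counted twice
            e_sum += E
            n_meas += 1
    return e_sum / n_meas / (L * L)
```

Near the discontinuous transition of the 4-state model, this plain sampler suffers the slow mixing that the extended methodology (and, differently, the Swendsen-Wang and Wolff cluster algorithms) is designed to overcome.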
Hermida, Ramón C; Ayala, Diana E; Fontao, María J; Mojón, Artemio; Fernández, José R
2013-03-01
estimated asleep SBP mean, the most significant prognostic marker of CVD events, in the range of -21.4 to +23.9 mm Hg. Cox proportional-hazard analyses adjusted for sex, age, diabetes, anemia, and chronic kidney disease revealed comparable hazard ratios (HRs) for mean BP values and sleep-time relative BP decline derived from the original complete 48-h ABPM profiles and those modified to simulate a sampling rate of one BP measurement every 1 or 2 h. The HRs, however, were markedly overestimated for SBP and underestimated for DBP when the duration of ABPM was reduced from 48 to only 24 h. This study on subjects evaluated prospectively by 48-h ABPM documents that reproducibility in the estimates of prognostic ABPM-derived parameters depends markedly on duration of monitoring, and only to a lesser extent on sampling rate. The HR of CVD events associated with increased ambulatory BP is poorly estimated by relying on 24-h ABPM, indicating ABPM for only 24 h may be insufficient for proper diagnosis of hypertension, identification of dipping status, evaluation of treatment efficacy, and, most important, CVD risk stratification.
An enhanced Monte Carlo outlier detection method.
Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi
2015-09-30
Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the root mean square error of prediction for validation against Kovats retention indices decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.
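The underlying mechanism, repeatedly refitting on random subsets and tracking each sample's held-out prediction error, can be sketched generically (a simple least-squares line fit stands in for the chemometric model; this is an illustration of Monte Carlo cross-validation, not the authors' enhanced method):

```python
import random, statistics

def mc_prediction_errors(xs, ys, n_splits=200, train_frac=0.75, seed=3):
    """Monte Carlo cross-validation with a univariate least-squares fit:
    repeatedly fit y = a*x + b on a random training subset, record each
    held-out sample's absolute prediction error, and return the mean
    error per sample. Samples whose mean error sits far above the rest
    are outlier candidates."""
    rng = random.Random(seed)
    n = len(xs)
    errs = [[] for _ in range(n)]
    idx = list(range(n))
    for _ in range(n_splits):
        rng.shuffle(idx)
        k = int(train_frac * n)
        train, test = idx[:k], idx[k:]
        mx = sum(xs[i] for i in train) / k
        my = sum(ys[i] for i in train) / k
        sxx = sum((xs[i] - mx) ** 2 for i in train)
        sxy = sum((xs[i] - mx) * (ys[i] - my) for i in train)
        a = sxy / sxx
        b = my - a * mx
        for i in test:
            errs[i].append(abs(ys[i] - (a * xs[i] + b)))
    return [statistics.mean(e) if e else 0.0 for e in errs]
```

On clean data contaminated with one gross outlier, that sample's mean held-out error dominates the list, which is the signal the diagnosis is built on.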
NASA Astrophysics Data System (ADS)
Glavin, D. P.; Brinckerhoff, W. B.; Conrad, P. G.; Dworkin, J. P.; Eigenbrode, J. L.; Getty, S.; Mahaffy, P. R.
2013-12-01
The search for evidence of life on Mars and elsewhere will continue to be one of the primary goals of NASA's robotic exploration program for decades to come. NASA and ESA are currently planning a series of robotic missions to Mars with the goal of understanding its climate, resources, and potential for harboring past or present life. One key goal will be the search for chemical biomarkers, including organic compounds important in life on Earth and their geological forms. These compounds include amino acids, the monomer building blocks of proteins and enzymes; nucleobases and sugars, which form the backbone of DNA and RNA; and lipids, the structural components of cell membranes. Many of these organic compounds can also be formed abiotically, as demonstrated by their prevalence in carbonaceous meteorites [1], though their molecular characteristics may distinguish a biological source [2]. It is possible that in situ instruments may reveal such characteristics; however, return of the right samples to Earth (i.e. samples containing chemical biosignatures or having a high probability of biosignature preservation) would enable more intensive laboratory studies using a broad array of powerful instrumentation for bulk characterization, molecular detection, isotopic and enantiomeric compositions, and spatially resolved chemistry that may be required for confirmation of extant or extinct life on Mars or elsewhere. In this presentation we will review the current in situ analytical capabilities and strategies for the detection of organics on the Mars Science Laboratory (MSL) rover using the Sample Analysis at Mars (SAM) instrument suite [3] and discuss how both future advanced in situ instrumentation [4] and laboratory measurements of samples returned from Mars and other targets of astrobiological interest, including the icy moons of Jupiter and Saturn, will help advance our understanding of chemical biosignatures in the Solar System. References: [1] Cronin, J. R and Chang S. (1993
Monte Carlo methods in genetic analysis
Lin, Shili
1996-12-31
Many genetic analyses require computation of probabilities and likelihoods of pedigree data. With more and more genetic marker data deriving from new DNA technologies becoming available to researchers, exact computations are often formidable with standard statistical methods and computational algorithms. The desire to utilize as much available data as possible, coupled with complexities of realistic genetic models, push traditional approaches to their limits. These methods encounter severe methodological and computational challenges, even with the aid of advanced computing technology. Monte Carlo methods are therefore increasingly being explored as practical techniques for estimating these probabilities and likelihoods. This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis. Three areas of applications are presented to demonstrate the versatility of Markov chain Monte Carlo for different types of genetic problems. A multilocus linkage analysis example is also presented to illustrate the sequential imputation method. Finally, important statistical issues of Markov chain Monte Carlo and sequential imputation, some of which are unique to genetic data, are discussed, and current solutions are outlined. 72 refs.
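The Markov chain Monte Carlo machinery the review builds on can be illustrated with a generic random-walk Metropolis-Hastings sampler (a minimal sketch of the general method, not a pedigree-likelihood code; the Gaussian target below is purely illustrative):

```python
import math, random

def metropolis_hastings(log_target, x0, n_samples=5000, step=1.0, seed=4):
    """Random-walk Metropolis-Hastings: draws from a distribution known
    only up to a normalizing constant via its log density. Proposals are
    Gaussian perturbations; acceptance uses the log-density difference."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    out = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_target(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        out.append(x)   # on rejection the current state is recorded again
    return out

# e.g. a standard normal target, whose log density is -x**2/2 up to a constant
draws = metropolis_hastings(lambda x: -0.5 * x * x, 0.0)
```

In genetic applications the log target is the (unnormalized) log likelihood or posterior over unobserved genotypes or model parameters; only pointwise evaluation is needed, which is what makes the approach attractive when exact pedigree computations are infeasible.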
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the possibility of Monte Carlo neutron transport using OpenCL, with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Kalos, M. H.; Pederiva, F.
1998-12-01
We review the fundamental challenge of fermion Monte Carlo for continuous systems, the "sign problem". We seek the eigenfunction of the many-body Schrödinger equation that is antisymmetric under interchange of the coordinates of pairs of particles. We describe methods that depend upon the use of correlated dynamics for pairs of correlated walkers that carry opposite signs. There is an algorithmic symmetry between such walkers that must be broken to create a method that is both exact and as effective as for symmetric functions. In our new method, it is broken by using different "guiding" functions for walkers of opposite signs, and a geometric correlation between steps of their walks. With a specific process of cancellation of the walkers, overlaps with antisymmetric test functions are preserved. Finally, we describe progress in treating free-fermion systems and a fermion fluid with 14 ³He atoms.
NASA Technical Reports Server (NTRS)
2003-01-01
MGS MOC Release No. MOC2-387, 10 June 2003
This is a Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle view of the Charitum Montes, south of Argyre Planitia, in early June 2003. The seasonal south polar frost cap, composed of carbon dioxide, has been retreating southward through this area since spring began a month ago. The bright features toward the bottom of this picture are surfaces covered by frost. The picture is located near 57°S, 43°W. North is at the top, south is at the bottom. Sunlight illuminates the scene from the upper left. The area shown is about 217 km (135 miles) wide.
Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy
NASA Astrophysics Data System (ADS)
Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James
2012-03-01
Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray, and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). Skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis, and dermis) as well as laterally asymmetric features (e.g. melanocytic invasion) were modeled in an inhomogeneous Monte Carlo model.
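Underlying any such layered photon-transport model is the same elementary sampling kernel: the free path between interaction events is drawn from an exponential distribution set by the layer's total attenuation coefficient. A generic sketch (standard Monte Carlo photon transport, not code from this particular model; mu_t is the sum of absorption and scattering coefficients):

```python
import math, random

def sample_free_paths(mu_t, n=100000, seed=5):
    """Sample photon step lengths between interaction events:
    s = -ln(xi)/mu_t for uniform xi in (0, 1], i.e. an exponential
    distribution whose mean free path is 1/mu_t."""
    rng = random.Random(seed)
    # 1 - rng.random() lies in (0, 1], so the log is always defined
    return [-math.log(1.0 - rng.random()) / mu_t for _ in range(n)]
```

In an inhomogeneous model, mu_t changes at each layer boundary, so a step crossing a boundary is split and the remaining optical depth is propagated into the next layer.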
Folk, R.L.; Lynch, F.L.
1997-05-01
Bacterial textures are present on clay minerals in Oligocene Frio Formation sandstones from the subsurface of the Corpus Christi area, Texas. In shallower samples, beads 0.05--0.1 {micro}m in diameter rim the clay flakes; at greater depth these beads become more abundant and eventually are perched on the ends of clay filaments of the same diameter. The authors believe that the beads are nannobacteria (dwarf forms) that have precipitated or transformed the clay minerals during burial of the sediments. Rosettes of chlorite also contain, after HCl etching, rows of 0.1 {micro}m bodies. In contrast, kaolinite shows no evidence of bacterial precipitation. The authors review other examples of bacterially precipitated clay minerals. A danger present in interpretation of earlier work (and much work of others) is the development of nannobacteria-looking artifacts caused by gold coating times in excess of one minute; the authors strongly recommend a 30-second coating time. Bacterial growth of clay minerals may be a very important process both in the surface and subsurface.
Kataoka, Mika; Okamoto, Yasuyuki
2009-02-01
The objective of this symposium was to promote effective communication between medical doctors (MD) and medical technologists (MT) for efficient team-based medical treatment. We analyzed a model patient with disseminated intravascular coagulation (DIC) to demonstrate our strategy, primarily as clinical laboratory hematologists. To assess the response of the clinical central laboratory to severe septic DIC, questionnaires on the performance of laboratory tests for DIC at night were sent to the laboratories of six hospitals in the Nara area. Extra tests other than those fixed for the emergency room were carried out in many laboratories in response to requests from the doctors. This tendency was more marked in smaller sized laboratories; therefore, the level of communication was better in these smaller laboratories. Forty MTs filled out the questionnaires on the blood coagulation test and influence of sampling and others, especially pertaining to the night shift, and their responses were relatively favorable, but more active approaches and information were needed even if their subspecialty was not clinical hematology. In our cases of thrombotic thrombocytopenic purpura and May-Hegglin anomaly, active and specific laboratory-based participation contributes to the diagnosis and treatment. In conclusion, the most important point is that MTs and MDs show respect for each other and communicate cordially, because our final mutual goal is the recovery of the patient.
Monte Carlo simulations and dosimetric studies of an irradiation facility
NASA Astrophysics Data System (ADS)
Belchior, A.; Botelho, M. L.; Vaz, P.
2007-09-01
There is an increasing utilization of ionizing radiation for industrial applications. Additionally, radiation technology offers a variety of advantages in areas such as sterilization and food preservation. For these applications, dosimetric tests are of crucial importance in order to assess the dose distribution throughout the sample being irradiated. The use of Monte Carlo methods and computational tools in support of the assessment of dose distributions in irradiation facilities can prove to be economically effective, representing savings in the utilization of dosemeters, among other benefits. One of the purposes of this study is the development of a Monte Carlo simulation, using a state-of-the-art computational tool (MCNPX), in order to determine the dose distribution inside a Cobalt-60 irradiation facility. This irradiation facility is currently in operation at the ITN campus and will feature an automation and robotics component, which will allow its remote utilization by an external user, under the REEQ/996/BIO/2005 project. The detailed geometrical description of the irradiation facility has been implemented in MCNPX, which features an accurate and full simulation of the electron-photon processes involved. The validation of the simulation results was performed by chemical dosimetry methods, namely a Fricke solution. The Fricke dosimeter is a standard dosimeter and is widely used in radiation processing for calibration purposes.
Multicanonical Monte Carlo for Simulation of Optical Links
NASA Astrophysics Data System (ADS)
Bononi, Alberto; Rusch, Leslie A.
Multicanonical Monte Carlo (MMC) is a simulation-acceleration technique for the estimation of the statistical distribution of a desired system output variable, given the known distribution of the system input variables. MMC, similarly to the powerful and well-studied method of importance sampling (IS) [1], is a useful method to efficiently simulate events occurring with probabilities smaller than ~10^-6, such as bit error rate (BER) and system outage probability. Modern telecommunications systems often employ forward error correcting (FEC) codes that allow pre-decoded channel error rates higher than 10^-3; these systems are well served by traditional Monte Carlo error counting. MMC and IS are, nonetheless, fundamental tools both to understand the statistics of the decision variable (as well as of any physical parameter of interest) and to validate any analytical or semianalytical BER calculation model. Several examples of such use will be provided in this chapter. As a case in point, outage probabilities are routinely below 10^-6, a sweet spot where MMC and IS provide the most efficient (sometimes the only) solution to estimate outages.
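Why biased sampling reaches probabilities near 10^-6 cheaply can be shown with a minimal importance-sampling sketch (a textbook Gaussian tail example, not an optical-link simulation): to estimate P(X > t) for X ~ N(0,1), sample from N(t,1) instead and reweight each draw by the exact likelihood ratio.

```python
import math, random

def tail_prob_importance(threshold=5.0, n=200000, seed=6):
    """Importance-sampling estimate of P(X > t) for X ~ N(0,1).
    Draws come from the shifted density N(t,1); each draw above t is
    weighted by the likelihood ratio phi(y)/phi(y - t)
    = exp(t**2/2 - t*y), which makes the estimator unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(threshold, 1.0)
        if y > threshold:
            total += math.exp(0.5 * threshold * threshold - threshold * y)
    return total / n
```

For t = 5 the true probability is about 2.9e-7; plain Monte Carlo would need billions of samples to see even a handful of such events, while the shifted sampler puts roughly half its draws in the region of interest.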
Diffusion quantum Monte Carlo for atomic spin-orbit interactions
NASA Astrophysics Data System (ADS)
Zhu, Minyi; Guo, Shi; Mitas, Lubos
2013-03-01
We present a generalization of quantum Monte Carlo methods (QMC) for dealing with spin-orbit (SO) effects in heavy-atom systems. For heavy elements, the spin-orbit interaction plays an important role in electronic structure calculations and becomes comparable to exchange, correlation, and other effects. We implement relativistic lj-dependent effective core potentials for valence-only calculations. Due to the spin-dependent Hamiltonian, the antisymmetric trial wave functions are constructed from two-component spinors in jj-coupling, so that the states are labeled by their total angular momentum J. A new spin representation is proposed which is based on summation over all possible spin states without generating large fluctuations, and the fixed-phase approximation is used to avoid the sign problem. Our approach is different from the recent idea based on rotating (sampling) the spinors according to the action of the spin-orbit operator. We demonstrate the approach on heavy-atom and small molecular systems in both variational and diffusion Monte Carlo methods, and we calculate both ground and excited states. The results show very good agreement with independent methods and experimental results within the accuracy of the effective core potentials used. Research supported by NSF and ARO.
Path Integral Monte Carlo Methods for Fermions
NASA Astrophysics Data System (ADS)
Ethan, Ethan; Dubois, Jonathan; Ceperley, David
2014-03-01
In general, quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not known a priori unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First, we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with a discussion of extensions to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10-ERD-058, and the Lawrence Scholar program.
Monte Carlo algorithm for free energy calculation.
Bi, Sheng; Tong, Ning-Hua
2015-07-01
We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows an excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.
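To make the notion of a free energy obtained from configuration-space sampling concrete, here is a deliberately naive sketch, my own illustration rather than the authors' temperature-scan algorithm: estimate the partition function Z by drawing configurations uniformly, which is feasible only for a tiny Ising lattice.

```python
import math, random
from itertools import product

def ising_energy(spins, L):
    """Energy of an L*L Ising configuration (flat tuple of +1/-1 spins),
    nearest-neighbour couplings, free boundaries, J = 1."""
    E = 0
    for i in range(L):
        for j in range(L):
            s = spins[i * L + j]
            if i + 1 < L:
                E -= s * spins[(i + 1) * L + j]
            if j + 1 < L:
                E -= s * spins[i * L + (j + 1)]
    return E

def free_energy_sampling(L=2, T=2.0, n=100000, seed=7):
    """Estimate F = -T ln Z by uniform configuration-space sampling:
    Z = 2**N * <exp(-E/T)>, the average taken over configurations drawn
    uniformly from all 2**N states."""
    rng = random.Random(seed)
    N = L * L
    acc = 0.0
    for _ in range(n):
        spins = tuple(rng.choice((-1, 1)) for _ in range(N))
        acc += math.exp(-ising_energy(spins, L) / T)
    Z = (2 ** N) * acc / n
    return -T * math.log(Z)
```

Uniform sampling fails badly at low temperature or for large lattices because the Boltzmann weight concentrates on exponentially few states; schemes like the authors' temperature scan, or Wang-Landau sampling in energy space, exist precisely to get F(T) where this naive estimator cannot.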
2015-01-01
criteria for paraphilia are too inclusive. Suggestions are given to improve the definition of pathological sexual interests, and the crucial difference between SF and sexual interest is underlined. Joyal CC. Defining “normophilic” and “paraphilic” sexual fantasies in a population‐based sample: On the importance of considering subgroups. Sex Med 2015;3:321–330. PMID:26797067
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential applications of the neutron monitor hardware as random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and variance of one is sufficient to obtain a stable standard normal random variate. Distributions under consideration pass all available normality tests. Inverse transform sampling is suggested to use as a source of the uniform random numbers. Variational Monte Carlo method for quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one minute resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into the faster algorithmic random number generator or create a buffer.
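One piece of the pipeline described here is easy to make concrete: once the detrended counts are scaled to standard-normal variates, uniform variates follow from the probability integral transform u = Phi(z). A sketch under that assumption (a generic transform, not the authors' exact processing chain):

```python
import math

def normal_to_uniform(zs):
    """Probability integral transform: map standard-normal variates z to
    Uniform(0,1) via u = Phi(z), the normal CDF, expressed with erf."""
    return [0.5 * (1.0 + math.erf(z / math.sqrt(2.0))) for z in zs]
```

The reverse direction, inverse transform sampling, applies the target distribution's inverse CDF to such uniforms, which is the route the abstract suggests for generating arbitrary variates from the neutron monitor data.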
Burger, Emily A; Sy, Stephen; Nygård, Mari; Kim, Jane J
2017-01-01
Human papillomavirus (HPV) testing allows women to self-collect cervico-vaginal cells at home (i.e., self-sampling). Using primary data from a randomized pilot study, we evaluated the long-term consequences and cost-effectiveness of using self-sampling to improve participation to routine cervical cancer screening in Norway. We compared a strategy reflecting screening participation (using reminder letters) to strategies that involved mailing self-sampling device kits to women noncompliant to screening within a 5- or 10-year period under two scenarios: (A) self-sampling respondents had moderate under-screening histories, or (B) respondents to self-sampling had moderate and severe under-screening histories. Model outcomes included quality-adjusted life-years (QALY) and lifetime costs. The "most cost-effective" strategy was identified as the strategy just below $100,000 per QALY gained. Mailing self-sampling device kits to all women noncompliant to screening within a 5- or 10-year period can be more effective and less costly than the current reminder letter policy; however, the optimal self-sampling strategy was dependent on the profile of self-sampling respondents. For example, "10-yearly self-sampling" is preferred ($95,500 per QALY gained) if "5-yearly self-sampling" could only attract moderate under-screeners; however, "5-yearly self-sampling" is preferred if this strategy could additionally attract severe under-screeners. Targeted self-sampling of noncompliers likely represents good value-for-money; however, the preferred strategy is contingent on the screening histories and compliance of respondents. The magnitude of the health benefit and optimal self-sampling strategy is dependent on the profile and behavior of respondents. Health authorities should understand these factors prior to selecting and implementing a self-sampling policy. Cancer Epidemiol Biomarkers Prev; 26(1); 95-103. ©2016 AACR. ©2016 American Association for Cancer Research.
NASA Astrophysics Data System (ADS)
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2016-03-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.
Auxiliary-Field Quantum Monte Carlo Simulations of Strongly-Correlated Molecules and Solids
Chang, C.; Morales, M. A.
2016-11-10
We propose a method of implementing projected wave functions for second-quantized auxiliary-field quantum Monte Carlo (AFQMC) techniques. The method is based on expressing the two-body projector as one-body terms coupled to binary Ising fields. To benchmark the method, we choose to study the two-dimensional (2D) one-band Hubbard model with repulsive interactions using the constrained-path MC (CPMC). The CPMC uses a trial wave function to guide the random walks so that the so-called fermion sign problem can be eliminated. The trial wave function also serves as the importance function in Monte Carlo sampling. As such, the quality of the trial wave function has a direct impact on the efficiency and accuracy of the simulations.
Region Covariance Matrices for Object Tracking in Quasi-Monte Carlo Filter
NASA Astrophysics Data System (ADS)
Ding, Xiaofeng; Xu, Lizhong; Wang, Xin; Lv, Guofang
Region covariance matrices (RCMs), categorized as a matrix-form feature in a low dimension, fuse multiple different image features which might be correlated. Region covariance matrices-based trackers are robust and versatile with a modest computational cost. In this paper, under the Bayesian inference framework, a region covariance matrices-based quasi-Monte Carlo filter tracker is proposed. The RCMs are used to model target appearances. The dissimilarity metric of the RCMs is measured on Riemannian manifolds. Based on the current object location and prior knowledge, the possible locations of the object candidates in the next frame are predicted by combining sequential quasi-Monte Carlo (SQMC) and importance sampling (IS) techniques. Experiments performed on different types of image sequences show that our approach is robust and effective.
Complete Monte Carlo Simulation of Neutron Scattering Experiments
NASA Astrophysics Data System (ADS)
Drosg, M.
2011-12-01
In the past, it was not possible to accurately correct for the finite geometry and the finite sample size of a neutron scattering set-up. The limited computing power of early computers, the lack of powerful Monte Carlo codes, and the limited databases available at the time prevented a complete simulation of the actual experiment. Using, e.g., the Monte Carlo neutron transport code MCNPX [1], neutron scattering experiments can now be simulated almost completely and with a high degree of precision on a modern PC, which has ten thousand times the computing power of a supercomputer of the early 1970s. Thus, better corrections can also be obtained easily for previously published data, provided that these experiments are sufficiently well documented. Better knowledge of reference data (e.g., atomic masses, relativistic corrections, and monitor cross sections) further contributes to data improvement. Elastic neutron scattering experiments from liquid samples of the helium isotopes performed around 1970 at LANL happen to be very well documented. Considering that cryogenic targets are expensive and complicated, it is certainly worthwhile to improve these data by correcting them using this comparatively straightforward method. As two thirds of all differential scattering cross section data of 3He(n,n)3He are connected to the LANL data, it became necessary to correct the dependent data measured in Karlsruhe, Germany, as well. A thorough simulation of both the LANL experiments and the Karlsruhe experiment is presented, starting from the neutron production, followed by the interaction in the air, the interaction with the cryostat structure, and finally the scattering medium itself. In addition, scattering from the hydrogen reference sample was simulated. For the LANL data, the multiple scattering corrections are smaller by a factor of at least five, making this work relevant. Even more important are the corrections to the Karlsruhe data due to the
Chorin, Alexandre J.
2007-12-12
A sampling method for spin systems is presented. The spin lattice is written as the union of a nested sequence of sublattices, all but the last with conditionally independent spins, which are sampled in succession using their marginals. The marginals are computed concurrently by a fast algorithm; errors in the evaluation of the marginals are offset by weights. There are no Markov chains and each sample is independent of the previous ones; the cost of a sample is proportional to the number of spins (but the number of samples needed for good statistics may grow with array size). The examples include the Edwards-Anderson spin glass in three dimensions.
Quantum Gibbs ensemble Monte Carlo
Fantoni, Riccardo; Moroni, Saverio
2014-09-21
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.
Monte Carlo Methods in the Physical Sciences
Kalos, M H
2007-06-06
I will review the role that Monte Carlo methods play in the physical sciences. They are very widely used for a number of reasons: they permit the rapid and faithful transformation of a natural or model stochastic process into a computer code. They are powerful numerical methods for treating the many-dimensional problems that derive from important physical systems. Finally, many of the methods naturally permit the use of modern parallel computers in efficient ways. In the presentation, I will emphasize four aspects of the computations: whether or not the computation derives from a natural or model stochastic process; whether the system under study is highly idealized or realistic; whether the Monte Carlo methodology is straightforward or mathematically sophisticated; and finally, the scientific role of the computation.
Quantum Monte Carlo applied to solids
Shulenburger, Luke; Mattsson, Thomas R.
2013-12-01
We apply diffusion quantum Monte Carlo to a broad set of solids, benchmarking the method by comparing bulk structural properties (equilibrium volume and bulk modulus) to experiment and density functional theory (DFT) based theories. The test set includes materials with many different types of binding including ionic, metallic, covalent, and van der Waals. We show that, on average, the accuracy is comparable to or better than that of DFT when using the new generation of functionals, including one hybrid functional and two dispersion corrected functionals. The excellent performance of quantum Monte Carlo on solids is promising for its application to heterogeneous systems and high-pressure/high-density conditions. Important to the results here is the application of a consistent procedure with regards to the several approximations that are made, such as finite-size corrections and pseudopotential approximations. This test set allows for any improvements in these methods to be judged in a systematic way.
Burger, Emily A; Sy, Stephen; Nygård, Mari; Kim, Jane J
2016-01-01
Background Human papillomavirus (HPV) testing allows women to self-collect cervico-vaginal cells at home (i.e., self-sampling). Using primary data from a randomized pilot study, we evaluated the long-term consequences and cost-effectiveness of using self-sampling to improve participation to routine cervical cancer screening in Norway. Methods We compared a strategy reflecting screening participation (using reminder letters) to strategies that involved mailing self-sampling device kits to women non-compliant to screening within a 5-year or 10-year period under two scenarios: A) self-sampling respondents had moderate under-screening histories, or B) respondents to self-sampling had moderate and severe under-screening histories. Model outcomes included quality-adjusted life-years (QALY) and lifetime costs. The ‘most cost-effective’ strategy was identified as the strategy just below $100,000 per QALY gained. Results Mailing self-sampling device kits to all women non-compliant to screening within a 5-year or 10-year period can be more effective and less costly than the current reminder letter policy; however, the optimal self-sampling strategy was dependent on the profile of self-sampling respondents. For example, ‘10-yearly self-sampling’ is preferred ($95,500 per QALY gained) if ‘5-yearly self-sampling’ could only attract moderate under-screeners; however, ‘5-yearly self-sampling’ is preferred if this strategy could additionally attract severe under-screeners. Conclusions Targeted self-sampling of non-compliers likely represents good value-for-money; however, the preferred strategy is contingent on the screening histories and compliance of respondents. Impact The magnitude of the health benefit and optimal self-sampling strategy is dependent on the profile and behavior of respondents. Health authorities should understand these factors prior to selecting and implementing a self-sampling policy. PMID:27624639
Isotropic Monte Carlo Grain Growth
Mason, J.
2013-04-25
IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
Castro, André L; Dias, Mário; Reis, Flávio; Teixeira, Helena M
2014-10-01
Gamma-hydroxybutyric acid (GHB) is an endogenous compound with a history of clinical use since the 1960s. However, due to its secondary effects, it has become a controlled substance, entering the illicit market for recreational and "dance club scene" use, muscle enhancement purposes and drug-facilitated sexual assaults. Its endogenous nature can complicate the interpretation, in a forensic context, of the analytical values obtained in biological samples. This manuscript reviewed several crucial aspects related to GHB forensic toxicology evaluation, such as its post-mortem behaviour in biological samples; endogenous production values, both in in vivo and in post-mortem samples; sampling and storage conditions (including stability tests); and cut-off reference values for different biological samples, such as whole blood, plasma, serum, urine, saliva, bile, vitreous humour and hair. This review highlights the need for specific sampling precautions, storage conditions, and cut-off reference value interpretation in different biological samples, all essential for proper practical application in forensic toxicology. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Monte Carlo variance reduction approaches for non-Boltzmann tallies
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.
McCreesh, Nicky; Tarsh, Matilda Nadagire; Seeley, Janet; Katongole, Joseph; White, Richard G
2013-01-01
Respondent-driven sampling (RDS) is a widely-used variant of snowball sampling. Respondents are selected not from a sampling frame, but from a social network of existing members of the sample. Incentives are provided for participation and for the recruitment of others. Ethical and methodological criticisms have been raised about RDS. Our purpose was to evaluate whether these criticisms were justified. In this study RDS was used to recruit male household heads in rural Uganda. We investigated community members’ understanding and experience of the method, and explored how these may have affected the quality of the RDS survey data. Our findings suggest that because participants recruit participants, the use of RDS in medical research may result in increased difficulties in gaining informed consent, and data collected using RDS may be particularly susceptible to bias due to differences in the understanding of key concepts between researchers and members of the community. PMID:24273435
Vargas, S L; Ponce, C; Bustamante, R; Calderón, E; Nevez, G; De Armas, Y; Matos, O; Miller, R F; Gallo, M J
2017-06-05
To understand the epidemiological significance of Pneumocystis detection in a lung tissue sample of non-immunosuppressed individuals, we examined sampling procedures, laboratory methodology, and patient characteristics of autopsy series reported in the literature. Number of tissue specimens, DNA-extraction procedures, age and underlying diagnosis highly influence yield and are critical to understand yield differences of Pneumocystis among reports of pulmonary colonization in immunocompetent individuals.
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
Hampton, Jerrad; Doostan, Alireza
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ₁-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
NASA Astrophysics Data System (ADS)
Mendoza-Borunda, R.; Herrero-Bervera, E.; Canon-Tapia, E.
2012-12-01
Recent work has suggested the convenience of dyke sampling along several profiles parallel and perpendicular to its walls to increase the probability of determining a geologically significant magma flow direction using anisotropy of magnetic susceptibility (AMS) measurements. For this work, we have resampled in great detail some dykes from the Kapaa Quarry, Koolau Volcano in Oahu Hawaii, comparing the results of a more detailed sampling scheme with those obtained previously with a traditional sampling scheme. In addition to the AMS results we will show magnetic properties, including magnetic grain sizes, Curie points and AMS measured at two different frequencies on a new MFK1-FA Spinner Kappabridge. Our results thus far provide further empirical evidence supporting the occurrence of a definite cyclic fabric acquisition during the emplacement of at least some of the dykes. This cyclic behavior can be captured using the new sampling scheme, but might be easily overlooked if the simple, more traditional sampling scheme is used. Consequently, previous claims concerning the advantages of adopting a more complex sampling scheme are justified since this approach can serve to reduce the uncertainty in the interpretation of AMS results.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
Brown, Forrest B.
2016-11-29
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations
Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments
Pevey, Ronald E.
2005-09-15
Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
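The 1/√N behavior behind the "more histories is better" intuition can be checked with a toy estimator. The standard-normal integrand and replicate counts below are arbitrary illustration choices, not a criticality tally:

```python
import numpy as np

rng = np.random.default_rng(42)

def se_of_mean(n, reps=2000):
    """Empirical standard deviation of the Monte Carlo mean estimator,
    measured over many independent replicates of n histories each."""
    estimates = rng.standard_normal((reps, n)).mean(axis=1)
    return estimates.std()

se1 = se_of_mean(100)
se2 = se_of_mean(400)
# Quadrupling the histories roughly halves the standard deviation.
```

The paper's point is that this monotone improvement does not translate into monotone improvement of the safety decision once a statistics-dependent margin is added to the upper subcritical limit.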
Huang, Xiao-Lan; Zhang, Jia-Zhong
2008-10-19
Acidic persulfate oxidation is one of the most common procedures used to digest dissolved organic phosphorus compounds in water samples for total dissolved phosphorus determination. It has been reported that the rates of phosphoantimonylmolybdenum blue complex formation were significantly reduced in the digested sample matrix. This study revealed that the intermediate products of persulfate oxidation, not the slight change in pH, cause the slowdown of color formation. This effect can be remedied by adjusting digested samples pH to a near neural to decompose the intermediate products. No disturbing effects of chlorine on the phosphoantimonylmolybdenum blue formation in seawater were observed. It is noted that the modification of mixed reagent recipe cannot provide near neutral pH for the decomposition of the intermediate products of persulfate oxidation. This study provides experimental evidence not only to support the recommendation made in APHA standard methods that the pH of the digested sample must be adjusted to within a narrow range of sample, but also to improve the understanding of role of residue from persulfate decomposition on the subsequent phosphoantimonylmolybdenum blue formation.
Jurek, Anne M; Maldonado, George; Greenland, Sander
2013-03-01
Special care must be taken when adjusting for outcome misclassification in case-control data. Basic adjustment formulas using either sensitivity and specificity or predictive values (as with external validation data) do not account for the fact that controls are sampled from a much larger pool of potential controls. A parallel problem arises in surveys and cohort studies in which participation or loss is outcome related. We review this problem and provide simple methods to adjust for outcome misclassification in case-control studies, and illustrate the methods in a case-control birth certificate study of cleft lip/palate and maternal cigarette smoking during pregnancy. Adjustment formulas for outcome misclassification that ignore case-control sampling can yield severely biased results. In the data we examined, the magnitude of error caused by not accounting for sampling is small when population sensitivity and specificity are high, but increases as (1) population sensitivity decreases, (2) population specificity decreases, and (3) the magnitude of the differentiality increases. Failing to account for case-control sampling can result in an odds ratio adjusted for outcome misclassification that is either too high or too low. One needs to account for outcome-related selection (such as case-control sampling) when adjusting for outcome misclassification using external information. Copyright © 2013 Elsevier Inc. All rights reserved.
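The core warning, that the textbook sensitivity/specificity back-correction assumes full source-population denominators, can be illustrated with expected counts. The Se/Sp values and cell counts below are hypothetical, not figures from the cleft lip/palate study:

```python
def classify(cases, noncases, se, sp):
    """Expected counts classified as case/control under outcome
    misclassification with sensitivity `se` and specificity `sp`."""
    obs_case = se * cases + (1 - sp) * noncases
    obs_ctrl = (1 - se) * cases + sp * noncases
    return obs_case, obs_ctrl

def naive_correct(obs_case, total, se, sp):
    """Textbook back-correction; valid only when `total` is the full
    source-population denominator, not a case-control sample."""
    return (obs_case - (1 - sp) * total) / (se + sp - 1)

se_, sp_ = 0.90, 0.95
true_cases, true_noncases = 1_000.0, 9_000.0      # one exposure stratum

obs_case, obs_ctrl = classify(true_cases, true_noncases, se_, sp_)

# Applied to the full population, the formula recovers the truth exactly:
full_pop = naive_correct(obs_case, obs_case + obs_ctrl, se_, sp_)

# Case-control design: keep every classified case, sample 10% of classified
# controls, then (wrongly) apply the same formula to the sampled totals.
f = 0.10
biased = naive_correct(obs_case, obs_case + f * obs_ctrl, se_, sp_)
```

Here `full_pop` returns the true 1000 cases, while `biased` lands far from the 910 true cases actually present in the sampled data: ignoring the control sampling fraction distorts the correction, exactly as the abstract warns.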
Molecular dynamics and dynamic Monte-Carlo simulation of irradiation damage with focused ion beams
NASA Astrophysics Data System (ADS)
Ohya, Kaoru
2017-03-01
The focused ion beam (FIB) has become an important tool for micro- and nanostructuring of samples such as milling, deposition and imaging. However, this leads to damage of the surface on the nanometer scale from implanted projectile ions and recoiled material atoms. It is therefore important to investigate each kind of damage quantitatively. We present a dynamic Monte-Carlo (MC) simulation code to simulate the morphological and compositional changes of a multilayered sample under ion irradiation and a molecular dynamics (MD) simulation code to simulate dose-dependent changes in the backscattering-ion (BSI)/secondary-electron (SE) yields of a crystalline sample. Recent progress in the codes for research to simulate the surface morphology and Mo/Si layers intermixing in an EUV lithography mask irradiated with FIBs, and the crystalline orientation effect on BSI and SE yields relating to the channeling contrast in scanning ion microscopes, is also presented.
Monte Carlo simulation of energy-dispersive x-ray fluorescence and applications
NASA Astrophysics Data System (ADS)
Li, Fusheng
Four key components with regard to Monte Carlo Library Least-Squares (MCLLS) have been developed by the author. These include: a comprehensive and accurate Monte Carlo simulation code - CEARXRF5 with Differential Operators (DO) and coincidence sampling, a Detector Response Function (DRF), an integrated Monte Carlo - Library Least-Squares (MCLLS) Graphical User Interface (GUI) visualization system (MCLLSPro), and a new reproducible and flexible benchmark experiment setup. All these developments and upgrades enable the MCLLS approach to be a useful and powerful tool for a tremendous variety of elemental analysis applications. CEARXRF, a comprehensive and accurate Monte Carlo code for simulating the total and individual library spectral responses of all elements, has recently been upgraded to version 5 by the author. The new version has several key improvements: an input file format fully compatible with MCNP5, a new efficient general geometry tracking code, versatile source definitions, various variance reduction techniques (e.g. weight window mesh and splitting, stratified sampling, etc.), a new cross section data storage and access method which improves the simulation speed by a factor of four, new cross section data, upgraded Differential Operator (DO) calculation capability, and an updated coincidence sampling scheme which includes K-L and L-L coincidence X-rays, while keeping all the capabilities of the previous version. The new Differential Operators method is powerful for measurement sensitivity studies and system optimization. For our Monte Carlo EDXRF elemental analysis system, it becomes an important technique for quantifying the matrix effect in near real time when combined with the MCLLS approach. An integrated visualization GUI system has been developed by the author to perform elemental analysis using the iterated Library Least-Squares method for various samples when an initial guess is provided. This software was built on the Borland C++ Builder
Continuous-time quantum Monte Carlo impurity solvers
NASA Astrophysics Data System (ADS)
Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias
2011-04-01
representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions. Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2]. Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper. Running time: 60 s-8 h per iteration.
Intergenerational Correlation in Monte Carlo k-Eigenvalue Calculation
Ueki, Taro
2002-06-15
This paper investigates intergenerational correlation in the Monte Carlo k-eigenvalue calculation of the effective neutron multiplication factor. To this end, the exponential transform for path stretching has been applied to large fissionable media with localized highly multiplying regions because in such media an exponentially decaying shape is a rough representation of the importance of source particles. The numerical results show that the difference between real and apparent variances virtually vanishes for an appropriate value of the exponential transform parameter. This indicates that the intergenerational correlation of k-eigenvalue samples could be eliminated by the adjoint biasing of particle transport. The relation between the biasing of particle transport and the intergenerational correlation is therefore investigated in the framework of collision estimators, and the following conclusion has been obtained: Within the leading order approximation with respect to the number of histories per generation, the intergenerational correlation vanishes when immediate importance is constant, and the immediate importance under simulation can be made constant by the biasing of particle transport with a function adjoint to the source neutron's distribution, i.e., the importance over all future generations.
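A minimal sketch of the exponential transform for path stretching, on a deep-penetration toy problem rather than a k-eigenvalue calculation: free paths are drawn from a stretched exponential and each transmitted score carries the likelihood-ratio weight. The cross section, slab thickness, and stretching parameter are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, T = 1.0, 5.0                  # total cross section, slab thickness
exact = np.exp(-sigma * T)           # analytic transmission probability

def transmission(n, stretch=0.0):
    """Score slab transmission with free paths sampled from a stretched
    exponential (exponential transform); stretch=0 is analog sampling."""
    s_star = sigma * (1.0 - stretch)             # stretched cross section
    x = rng.exponential(1.0 / s_star, n)
    # Likelihood ratio of the true to the biased path density:
    w = (sigma / s_star) * np.exp(-(sigma - s_star) * x)
    scores = np.where(x > T, w, 0.0)
    return scores.mean(), scores.std() / np.sqrt(n)

analog = transmission(100_000)
stretched = transmission(100_000, stretch=0.8)
```

Both estimators are unbiased, but stretching the paths toward deep penetration cuts the standard error several-fold here, illustrating why an exponentially decaying importance shape makes the transform effective in the media the paper studies.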
Perturbation Monte Carlo methods for tissue structure alterations.
Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Spanier, Jerome
2013-01-01
This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients; however, the phase function cannot be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers: whole nuclei, organelles such as lysosomes and mitochondria, and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied, the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ∼15-25% of the scattering parameters.
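The single-baseline idea can be seen in a stripped-down, absorption-only toy: one set of free paths is generated once, then reweighted by the likelihood ratio of the perturbed to the baseline path density to estimate transmission for a whole family of coefficients. This is a simplified stand-in for the paper's phase-function-perturbation machinery; `mu0`, `L`, and the perturbed values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)
mu0, L, n = 1.0, 2.0, 200_000
x = rng.exponential(1.0 / mu0, n)        # the single baseline simulation

def perturbed_transmission(mu):
    """Reuse the baseline free paths: each sample is reweighted by the
    likelihood ratio of the perturbed to the baseline path density."""
    w = (mu / mu0) * np.exp(-(mu - mu0) * x)
    return float(np.mean(np.where(x > L, w, 0.0)))

# Transmission for several perturbed coefficients from one baseline run:
ests = {mu: perturbed_transmission(mu) for mu in (0.8, 1.0, 1.2)}
```

Each estimate should track the analytic value exp(-mu*L); as in the paper, accuracy degrades as the perturbation grows, because the weight variance increases with the distance from the baseline.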
An improved method for treating Monte Carlo-diffusion interfaces
Densmore, J. D.
2004-01-01
Discrete Diffusion Monte Carlo (DDMC) has been suggested as a technique for increasing the efficiency of Monte Carlo simulations in diffusive media. In this technique, Monte Carlo particles travel discrete steps between spatial cells according to a discretized diffusion equation. An important part of the DDMC method is the treatment of the interface between a transport region, where standard Monte Carlo is used, and a diffusive region, where DDMC is employed. Previously developed DDMC methods use the Marshak boundary condition at transport-diffusion interfaces, and thus produce incorrect results if the Monte Carlo-calculated angular flux incident on the interface surface is anisotropic. In this summary we present a new interface method based on the asymptotic diffusion-limit boundary condition, which is able to produce accurate solutions if the incident angular flux is anisotropic. We show that this new interface technique has a simple Monte Carlo interpretation, and can be used in conjunction with the existing DDMC method. With a set of numerical simulations, we demonstrate that this asymptotic interface method is much more accurate than the previously developed Marshak interface method.
Uncertainty Analyses for Localized Tallies in Monte Carlo Eigenvalue Calculations
Mervin, Brenden T.; Maldonado, G Ivan; Mosher, Scott W; Wagner, John C
2011-01-01
It is well known that statistical estimates obtained from Monte Carlo criticality simulations can be adversely affected by cycle-to-cycle correlations in the fission source. In addition there are several other more fundamental issues that may lead to errors in Monte Carlo results. These factors can have a significant impact on the calculated eigenvalue, localized tally means and their associated standard deviations. In fact, modern Monte Carlo computational tools may generate standard deviation estimates that are a factor of five or more lower than the true standard deviation for a particular tally due to the inter-cycle correlations in the fission source. The magnitude of this under-prediction can climb as high as one hundred when combined with an ill-converged fission source or poor sampling techniques. Since Monte Carlo methods are widely used in reactor analysis (as a benchmarking tool) and criticality safety applications, an in-depth understanding of the effects of these issues must be developed in order to support the practical use of Monte Carlo software packages. A rigorous statistical analysis of localized tally results in eigenvalue calculations is presented using the SCALE/KENO-VI and MCNP Monte Carlo codes. The purpose of this analysis is to investigate the under-prediction in the uncertainty and its sensitivity to problem characteristics and calculational parameters, and to provide a comparative study between the two codes with respect to this under-prediction. It is shown herein that adequate source convergence along with proper specification of Monte Carlo parameters can reduce the magnitude of under-prediction in the uncertainty to reasonable levels; below a factor of 2 when inter-cycle correlations in the fission source are not a significant factor. In addition, through the use of a modified sampling procedure, the effects of inter-cycle correlations on both the mean value and standard deviation estimates can be isolated.
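The under-prediction of tally uncertainty caused by inter-cycle correlation can be demonstrated with a simple batch-means estimator (a generic sketch on synthetic AR(1) "cycle tallies"; it is not tied to SCALE/KENO-VI or MCNP): the i.i.d. standard-error formula ignores correlation, while grouping cycles into batches whose means are nearly independent recovers a realistic uncertainty.

```python
import math
import random

def naive_standard_error(samples):
    """i.i.d. formula: valid only when samples are uncorrelated."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(var / n)

def batch_standard_error(samples, batch_size):
    """Batch-means estimate: group correlated cycle tallies into batches
    whose means are nearly independent, then apply the i.i.d. formula
    to the batch means."""
    n_batches = len(samples) // batch_size
    means = [sum(samples[i * batch_size:(i + 1) * batch_size]) / batch_size
             for i in range(n_batches)]
    return naive_standard_error(means)
```

For an AR(1) tally sequence with correlation 0.9, the true standard error exceeds the naive one by a factor of about sqrt((1 + 0.9)/(1 - 0.9)) ≈ 4.4, consistent with the "factor of five or more" under-prediction quoted in the abstract; the batch-means estimate captures most of that factor.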
Parallel domain decomposition methods in fluid models with Monte Carlo transport
Alme, H.J.; Rodrigues, G.H.; Zimmerman, G.B.
1996-12-01
In coupling a Monte Carlo transport calculation to a finite-element fluid calculation, it is important to use a domain decomposition that is suitable for both models. We have developed a code that simulates a Monte Carlo calculation on a massively parallel processor. This code is used to examine the load-balancing behavior of three domain decompositions for a Monte Carlo calculation. Results are presented.
Convolution/superposition using the Monte Carlo method.
Naqvi, Shahid A; Earl, Matthew A; Shepard, David M
2003-07-21
The convolution/superposition calculations for radiotherapy dose distributions are traditionally performed by convolving polyenergetic energy deposition kernels with TERMA (total energy released per unit mass) precomputed in each voxel of the irradiated phantom. We propose an alternative method in which the TERMA calculation is replaced by random sampling of photon energy, direction and interaction point. Then, a direction is randomly sampled from the angular distribution of the monoenergetic kernel corresponding to the photon energy. The kernel ray is propagated across the phantom, and energy is deposited in each voxel traversed. An important advantage of the explicit sampling of energy is that spectral changes with depth are automatically accounted for. No spectral or kernel hardening corrections are needed. Furthermore, the continuous sampling of photon direction allows us to model sharp changes in fluence, such as those due to collimator tongue-and-groove. The use of explicit photon direction also facilitates modelling of situations where a given voxel is traversed by photons from many directions. Extra-focal radiation, for instance, can therefore be modelled accurately. Our method also allows efficient calculation of a multi-segment/multi-beam IMRT plan by sampling of beam angles and field segments according to their relative weights. For instance, an IMRT plan consisting of seven 14 × 12 cm2 beams with a total of 300 field segments can be computed in 15 min on a single CPU, with 2% statistical fluctuations at the isocentre of the patient's CT phantom divided into 4 × 4 × 4 mm3 voxels. The calculation contains all aperture-specific effects, such as tongue and groove, leaf curvature and head scatter. This contrasts with deterministic methods in which each segment is given equal importance, and the time taken scales with the number of segments. Thus, the Monte Carlo superposition provides a simple, accurate and efficient method for complex radiotherapy dose calculations.
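Sampling beam angles and field segments "according to their relative weights", as described above, reduces to drawing an index from a discrete distribution. A minimal sketch (the function name is ours, not the authors'):

```python
import random
from bisect import bisect

def sample_index(weights, rng):
    """Draw an index with probability proportional to weights[i],
    e.g. picking one field segment out of many per sampled photon
    instead of looping over every segment deterministically."""
    cum, total = [], 0.0
    for w in weights:
        total += w
        cum.append(total)          # running cumulative sum of the weights
    return bisect(cum, rng.random() * total)
```

Heavily weighted segments are then visited in proportion to their contribution to the dose, which is why the run time does not scale with the raw number of segments.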
A Monte Carlo Approach to the Design, Assembly, and Evaluation of Multistage Adaptive Tests
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.
2008-01-01
This article presents an application of Monte Carlo methods for developing and assembling multistage adaptive tests (MSTs). A major advantage of the Monte Carlo assembly over other approaches (e.g., integer programming or enumerative heuristics) is that it provides a uniform sampling from all MSTs (or MST paths) available from a given item pool.…
NASA Astrophysics Data System (ADS)
Li, Xiang
2016-10-01
Blood glucose monitoring is of great importance for managing diabetes and preventing its complications. At present, clinical blood glucose measurement is invasive and could be replaced by noninvasive spectroscopic analytical techniques. Among the various parameters of the optical fiber probe used in spectral measurement, the source-detector distance is the key one. The Monte Carlo technique is a flexible method for simulating light propagation in tissue. The simulation is based on the random walks that photons make as they travel through tissue, which are chosen by statistically sampling the probability distributions for step size and angular deflection per scattering event. The traditional way to determine the optimal distance between the transmitting fiber and the detector is to use Monte Carlo simulation to find the point where most photons exit. But there is a problem: the epidermal layer contains no arteries, veins or capillary vessels, so photons that propagate and interact with tissue only in the epidermal layer carry no glucose information. A new criterion for determining the optimal distance, named the effective path length in this paper, is therefore proposed. The path length each photon travels in the dermis is recorded during the Monte Carlo simulation; this is the effective path length defined above. The sum of the effective path lengths of all photons at each point is calculated, and the detector should be placed at the point with the largest total effective path length. The optimal measuring distance between the transmitting fiber and the detector is thereby determined.
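A toy two-dimensional version of the effective-path-length tally might look like the following (layer thickness, scattering coefficient and binning are invented for illustration; real simulations are 3-D and include absorption and anisotropic phase functions): each photon's path length below the epidermis is accumulated into the bin of its exit position.

```python
import math
import random

def effective_path_profile(n_photons=20_000, mu_s=10.0, epi=0.03,
                           n_bins=20, bin_w=0.05, seed=7):
    """Toy 2-D photon random walk (isotropic scattering, no absorption).
    For each photon that re-emerges from the surface, add the path length
    it traveled *below* the epidermis to the bin of its exit position."""
    rng = random.Random(seed)
    tally = [0.0] * n_bins
    for _ in range(n_photons):
        x, z, ux, uz = 0.0, 0.0, 0.0, 1.0   # launch straight down (+z)
        eff = 0.0
        for _ in range(1000):                # safety cap on steps per photon
            step = -math.log(rng.random()) / mu_s
            xn, zn = x + ux * step, z + uz * step
            if zn < 0.0:                     # photon exits the surface
                b = int(abs(xn) / bin_w)
                if b < n_bins:
                    tally[b] += eff
                break
            if (z + zn) / 2.0 > epi:         # step lies mainly in the dermis
                eff += step
            x, z = xn, zn
            theta = rng.random() * 2.0 * math.pi
            ux, uz = math.cos(theta), math.sin(theta)
        # photons still walking after the cap are simply discarded
    return tally
```

Under this criterion the detector position would be read off as `bin_w * (tally.index(max(tally)) + 0.5)`, the center of the bin with the largest accumulated dermis path length, rather than the bin with the most exiting photons.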
MC21 analysis of the nuclear energy agency Monte Carlo performance benchmark problem
Kelly, D. J.; Sutton, T. M.; Wilson, S. C.
2012-07-01
Due to the steadily decreasing cost and wider availability of large scale computing platforms, there is growing interest in the prospects for the use of Monte Carlo for reactor design calculations that are currently performed using few-group diffusion theory or other low-order methods. To facilitate the monitoring of the progress being made toward the goal of practical full-core reactor design calculations using Monte Carlo, a performance benchmark has been developed and made available through the Nuclear Energy Agency. A first analysis of this benchmark using the MC21 Monte Carlo code was reported on in 2010, and several practical difficulties were highlighted. In this paper, a newer version of MC21 that addresses some of these difficulties has been applied to the benchmark. In particular, the confidence-interval-determination method has been improved to eliminate source correlation bias, and a fission-source-weighting method has been implemented to provide a more uniform distribution of statistical uncertainties. In addition, the Forward-Weighted, Consistent-Adjoint-Driven Importance Sampling methodology has been applied to the benchmark problem. Results of several analyses using these methods are presented, as well as results from a very large calculation with statistical uncertainties that approach what is needed for design applications. (authors)
CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei
2014-12-01
We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimensions, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps.
Carrizo, Daniel; Chevallier, Olivier P; Woodside, Jayne V; Brennan, Sarah F; Cantwell, Marie M; Cuskelly, Geraldine; Elliott, Christopher T
2017-02-01
Persistent organic pollutants (POPs) are distributed globally and are associated with adverse health effects in humans. A study combining gas chromatography-mass spectrometry (GC-MS), high resolution mass spectrometry (UPLC-QTof-MS) and chemometrics for the analysis of adult human serum samples was undertaken. Levels of serum POPs found were in the low range of what has been reported in similar populations across Europe (median 33.84 p,p'-DDE, 3.02 HCB, 83.55 β-HCH, 246.62 PCBs ng/g lipids). Results indicated that compound concentrations differed significantly between the two POPs exposure groups (high vs. low) and classes (DDE, β-HCH, HCB, PCBs). Using orthogonal partial least-squares discriminant analysis (OPLS-DA), multivariate models were created for both modes of acquisition and POPs classes, explaining the maximum amount of variation between sample groups (positive mode R2 = 98-90%, Q2 = 94-75%, root mean squared error of validation (RMSEV) = 12-20%; negative mode R2 = 98-91%, Q2 = 94-81%, RMSEV = 10-19%). In the serum samples analyzed, totals of 3076 and 3121 ions of interest were detected in positive and negative mode, respectively. Of these, 40 were found to be significantly different (p < 0.05) between exposure levels. Sphingolipid and glycerophospholipid families were identified and found to differ significantly (p < 0.05) between high and low POPs exposure levels. This study has shown that metabolomic fingerprints may have the potential to serve as biomarkers of POPs exposure.
Zhang, Xianming; Wania, Frank
2012-09-04
Air sampling based on diffusion of target molecules from the atmospheric gas phase to passive sampling media (PSMs) is currently modeled using the two-film approach. Originally developed to describe chemical exchange between air and water, it assumes a uniform chemical distribution in the bulk phases on either side of the interfacial films. Although such an assumption may be satisfied when modeling uptake in PSMs in which chemicals have high mobility, its validity is questionable for PSMs such as polyurethane foam disks and XAD-resin packed mesh cylinders. Mass transfer of chemicals through the PSMs may be subject to a large resistance because of the low mass fraction of gas-phase chemicals in the pores, where diffusion occurs. Here we present a model that does not assume that chemicals distribute uniformly in the PSMs. It describes the sequential diffusion of vapors through a stagnant air-side boundary layer and the PSM pores, and the reversible sorption onto the PSM. Sensitivity analyses reveal the potential influence of the latter two processes on passive sampling rates (PSRs) unless the air-side boundary layer is assumed to be extremely thick (i.e., representative of negligible wind speeds). The model also reveals that the temperature dependence of PSRs, differences in PSRs between different compounds, and a two-stage uptake, all observed in field calibrations, can be attributed to those mass transfer processes within the PSM. The kinetics of chemical sorption to the PSM from the gas phase in the macro-pores is a knowledge gap that needs to be addressed before the model can be applied to specific compounds.
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Barlow, Daniel E; Biffinger, Justin C; Cockrell-Zugell, Allison L; Lo, Michael; Kjoller, Kevin; Cook, Debra; Lee, Woo Kyung; Pehrsson, Pehr E; Crookes-Goodson, Wendy J; Hung, Chia-Suei; Nadeau, Lloyd J; Russell, John N
2016-08-02
AFM-IR is a combined atomic force microscopy-infrared spectroscopy method that shows promise for nanoscale chemical characterization of biological-materials interactions. In an effort to apply this method to quantitatively probe mechanisms of microbiologically induced polyurethane degradation, we have investigated monolayer clusters of ∼200 nm thick Pseudomonas protegens Pf-5 bacteria (Pf) on a 300 nm thick polyether-polyurethane (PU) film. Here, the impact of the different biological and polymer mechanical properties on the thermomechanical AFM-IR detection mechanism was first assessed without the additional complication of polymer degradation. AFM-IR spectra of Pf and PU were compared with FTIR and showed good agreement. Local AFM-IR spectra of Pf on PU (Pf-PU) exhibited bands from both constituents, showing that AFM-IR is sensitive to chemical composition both at and below the surface. One distinct difference in local AFM-IR spectra on Pf-PU was an anomalous ∼4× increase in IR peak intensities for the probe in contact with Pf versus PU. This was attributed to differences in probe-sample interactions. In particular, significantly higher cantilever damping was observed for probe contact with PU, with a ∼10× smaller Q factor. AFM-IR chemical mapping at single wavelengths was also affected. We demonstrate ratioing of mapping data for chemical analysis as a simple method to cancel the extreme effects of the variable probe-sample interactions.
Köber, Christin; Habermas, Tilmann
2017-03-23
Considering life stories as the most individual layer of personality (McAdams, 2013) implies that life stories, similar to personality traits, exhibit some stability throughout life. Although stability of personality traits has been extensively investigated, only little is known about the stability of life stories. We therefore tested the influence of age, of the proportion of normative age-graded life events, and of global text coherence on the stability of the most important memories and of brief entire life narratives as 2 representations of the life story. We also explored whether normative age-graded life events form more stable parts of life narratives. In a longitudinal life span study covering up to 3 measurements across 8 years and 6 age groups (N = 164) the stability of important memories and of entire life narratives was measured as the percentage of events and narrative segments which were repeated in later tellings. Stability increased between ages 8 and 24, leveling off in middle adulthood. Beyond age, stability of life narratives was also predicted by proportion of normative age-graded life events and by causal-motivational text coherence in younger participants. Memories of normative developmental and social transitional life events were more stable than other memories. Stability of segments of life narratives exceeded the stability of single most important memories. Findings are discussed in terms of cognitive, personality, and narrative psychology and point to research questions in each of these fields.
Monte Carlo simulation of a clearance box monitor used for nuclear power plant decommissioning.
Bochud, François O; Laedermann, Jean-Pascal; Bailat, Claude J; Schuler, Christoph
2009-05-01
When decommissioning a nuclear facility it is important to be able to estimate activity levels of potentially radioactive samples and compare with clearance values defined by regulatory authorities. This paper presents a method of calibrating a clearance box monitor based on practical experimental measurements and Monte Carlo simulations. Adjusting the simulation for experimental data obtained using a simple point source permits the computation of absolute calibration factors for more complex geometries with an accuracy of a bit more than 20%. The uncertainty of the calibration factor can be improved to about 10% when the simulation is used relatively, in direct comparison with a measurement performed in the same geometry but with another nuclide. The simulation can also be used to validate the experimental calibration procedure when the sample is supposed to be homogeneous but the calibration factor is derived from a plate phantom. For more realistic geometries, like a small gravel dumpster, Monte Carlo simulation shows that the calibration factor obtained with a larger homogeneous phantom is correct within about 20%, if sample density is taken as the influencing parameter. Finally, simulation can be used to estimate the effect of a contamination hotspot. The research supporting this paper shows that activity could be largely underestimated in the event of a centrally-located hotspot and overestimated for a peripherally-located hotspot if the sample is assumed to be homogeneously contaminated. This demonstrates the usefulness of being able to complement experimental methods with Monte Carlo simulations in order to estimate calibration factors that cannot be directly measured because of a lack of available material or specific geometries.
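The "relative" use of the simulation described above amounts to transferring an experimentally measured calibration factor through a ratio of simulated efficiencies. As a hedged sketch (the function and argument names are ours, and the proportionality convention is an assumption):

```python
def relative_calibration(cal_measured_ref, sim_eff_ref, sim_eff_target):
    """Transfer a calibration factor measured for a reference nuclide to a
    target nuclide via the ratio of Monte Carlo simulated efficiencies,
    assuming the calibration factor scales with detection efficiency."""
    return cal_measured_ref * sim_eff_target / sim_eff_ref
```

Because systematic errors in the simulated geometry largely cancel in the ratio, this relative use is what brings the uncertainty down from roughly 20% to about 10% in the study above.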
Nonequilibrium Candidate Monte Carlo Simulations with Configurational Freezing Schemes.
Giovannelli, Edoardo; Gellini, Cristina; Pietraperzia, Giangaetano; Cardini, Gianni; Chelli, Riccardo
2014-10-14
Nonequilibrium Candidate Monte Carlo simulation [Nilmeier et al., Proc. Natl. Acad. Sci. U.S.A. 2011, 108, E1009-E1018] is a tool devised to design Monte Carlo moves with high acceptance probabilities that connect uncorrelated configurations. Such moves are generated through nonequilibrium driven dynamics, producing candidate configurations accepted with a Monte Carlo-like criterion that preserves the equilibrium distribution. The probability of accepting a candidate configuration as the next sample in the Markov chain depends essentially on the work performed on the system during the nonequilibrium trajectory, and increases as that work decreases. It is thus strategically relevant to find ways of producing nonequilibrium moves with low work, namely moves where dissipation is as low as possible. This is the goal of our methodology, in which we combine Nonequilibrium Candidate Monte Carlo with the Configurational Freezing schemes developed by Nicolini et al. (J. Chem. Theory Comput. 2011, 7, 582-593). The idea is to limit the configurational sampling to particles of a well-established region of the simulation sample, namely the region where dissipation occurs, while keeping the other particles fixed. This lets the system relax faster around the region perturbed by the finite-time switching move and hence reduces the dissipated work, eventually enhancing the probability of accepting the generated move. Our combined approach significantly enhances configurational sampling, as shown by the case of a bistable dimer immersed in a dense fluid.
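The work-based acceptance rule at the heart of the method can be sketched as follows (a generic Metropolis-like criterion in reduced units; the paper's actual protocol additionally involves generating the full driven trajectory that produces the work value):

```python
import math
import random

def ncmc_accept(work, kT, rng):
    """Accept a nonequilibrium candidate move with probability
    min(1, exp(-W / kT)): the less work is dissipated along the driven
    trajectory, the more likely the candidate is accepted."""
    return rng.random() < min(1.0, math.exp(-work / kT))
```

A move performed with zero work is always accepted, and the acceptance probability falls off exponentially with the dissipated work, which is why lowering dissipation (here, via configurational freezing) directly raises the acceptance rate.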
Exploring Mass Perception with Markov Chain Monte Carlo
ERIC Educational Resources Information Center
Cohen, Andrew L.; Ross, Michael G.
2009-01-01
Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…
The Metropolis Monte Carlo Method in Statistical Physics
NASA Astrophysics Data System (ADS)
Landau, David P.
2003-11-01
A brief overview is given of some of the advances in statistical physics that have been made using the Metropolis Monte Carlo method. By complementing theory and experiment, these have increased our understanding of phase transitions and other phenomena in condensed matter systems. A brief description of a new method, commonly known as "Wang-Landau sampling," will also be presented.
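As a reminder of the core algorithm surveyed above, here is a minimal Metropolis sampler for a standard normal target (a textbook illustration, not material from the talk): propose a uniform displacement, accept with probability min(1, p(x')/p(x)).

```python
import math
import random

def metropolis_gaussian(n_steps=200_000, step=1.0, seed=3):
    """Metropolis sampling of the standard normal density exp(-x^2 / 2):
    propose x' = x + U(-step, step), accept with min(1, p(x') / p(x))."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # acceptance ratio p(x')/p(x) = exp((x^2 - x'^2) / 2)
        if rng.random() < math.exp(0.5 * (x * x - x_new * x_new)):
            x = x_new
        samples.append(x)
    return samples
```

Rejected proposals repeat the current state, which is essential: dropping them instead would bias the sampled distribution.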
MontePython: Implementing Quantum Monte Carlo using Python
NASA Astrophysics Data System (ADS)
Nilsen, Jon Kristian
2007-11-01
We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system for which to apply QMC, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and we describe how to implement these methods in pure C++ and in C++/Python. Furthermore we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible. Program summary: Program title: MontePython Catalogue identifier: ADZP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZP_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 49 519 No. of bytes in distributed program, including test data, etc.: 114 484 Distribution format: tar.gz Programming language: C++, Python Computer: PC, IBM RS6000/320, HP, ALPHA Operating system: LINUX Has the code been vectorised or parallelized?: Yes, parallelized with MPI Number of processors used: 1-96 RAM: Depends on physical system to be simulated Classification: 7.6; 16.1 Nature of problem: Investigating ab initio quantum mechanical systems, specifically Bose-Einstein condensation in dilute gases of 87Rb Solution method: Quantum Monte Carlo Running time: 225 min with 20 particles (with 4800 walkers moved in 1750 time steps) on 1 AMD Opteron TM Processor 2218 processor; production run for, e.g., 200 particles takes around 24 hours on 32 such processors.
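The variational Monte Carlo step described above can be illustrated for the 1-D harmonic oscillator (a standard textbook example in reduced units hbar = m = omega = 1, independent of the MontePython code): Metropolis-sample |psi|^2 for a trial wavefunction and average the local energy.

```python
import math
import random

def vmc_energy(alpha, n_steps=100_000, step=1.0, seed=5):
    """Variational Monte Carlo for the 1-D harmonic oscillator with trial
    wavefunction psi(x) = exp(-alpha * x^2).
    Local energy: E_L(x) = alpha + x^2 * (1/2 - 2 * alpha^2)."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # |psi(x')|^2 / |psi(x)|^2 = exp(-2 * alpha * (x'^2 - x^2))
        if rng.random() < math.exp(-2.0 * alpha * (x_new * x_new - x * x)):
            x = x_new
        total += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return total / n_steps
```

At the exact value alpha = 1/2 the local energy is constant (E = 1/2, zero variance); any other alpha gives a higher average energy, which is the variational principle the parameter optimization exploits.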
Electronic structure quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Bajdich, Michal; Mitas, Lubos
2009-04-01
Quantum Monte Carlo (QMC) is an advanced simulation methodology for studies of manybody quantum systems. The QMC approaches combine analytical insights with stochastic computational techniques for efficient solution of several classes of important many-body problems such as the stationary Schrödinger equation. QMC methods of various flavors have been applied to a great variety of systems spanning continuous and lattice quantum models, molecular and condensed systems, BEC-BCS ultracold condensates, nuclei, etc. In this review, we focus on the electronic structure QMC, i.e., methods relevant for systems described by the electron-ion Hamiltonians. Some of the key QMC achievements include direct treatment of electron correlation, accuracy in predicting energy differences and favorable scaling in the system size. Calculations of atoms, molecules, clusters and solids have demonstrated QMC applicability to real systems with hundreds of electrons while providing 90-95% of the correlation energy and energy differences typically within a few percent of experiments. Advances in accuracy beyond these limits are hampered by the so-called fixed-node approximation which is used to circumvent the notorious fermion sign problem. Many-body nodes of fermion states and their properties have therefore become one of the important topics for further progress in predictive power and efficiency of QMC calculations. Some of our recent results on the wave function nodes and related nodal domain topologies will be briefly reviewed. This includes analysis of few-electron systems and descriptions of exact and approximate nodes using transformations and projections of the highly-dimensional nodal hypersurfaces into the 3D space. Studies of fermion nodes offer new insights into topological properties of eigenstates such as explicit demonstrations that generic fermionic ground states exhibit the minimal number of two nodal domains. Recently proposed trial wave functions based on Pfaffians with
Towards Fast, Scalable Hard Particle Monte Carlo Simulations on GPUs
NASA Astrophysics Data System (ADS)
Anderson, Joshua A.; Irrgang, M. Eric; Glaser, Jens; Harper, Eric S.; Engel, Michael; Glotzer, Sharon C.
2014-03-01
Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. We discuss the implementation of Monte Carlo for arbitrary hard shapes in HOOMD-blue, a GPU-accelerated particle simulation tool, to enable million particle simulations in a field where thousands is the norm. In this talk, we discuss our progress on basic parallel algorithms, optimizations that maximize GPU performance, and communication patterns for scaling to multiple GPUs. Research applications include colloidal assembly and other uses in materials design, biological aggregation, and operations research.