Science.gov

Sample records for importance sampling monte

  1. Path integral Monte Carlo with importance sampling for excitons interacting with an arbitrary phonon bath.

    PubMed

    Shim, Sangwoo; Aspuru-Guzik, Alán

    2012-12-14

    The reduced density matrix of excitons coupled to a phonon bath at a finite temperature is studied using the path integral Monte Carlo method. Appropriate choices of estimators and importance sampling schemes are crucial to the performance of the Monte Carlo simulation. We show that by choosing the population-normalized estimator for the reduced density matrix, an efficient and physically-meaningful sampling function can be obtained. In addition, the nonadiabatic phonon probability density is obtained as a byproduct during the sampling procedure. For importance sampling, we adopted the Metropolis-adjusted Langevin algorithm. The analytic expression for the gradient of the target probability density function associated with the population-normalized estimator cannot be obtained in closed form without a matrix power series. An approximated gradient that can be efficiently calculated is explored to achieve better computational scaling and efficiency. Application to a simple one-dimensional model system from the previous literature confirms the correctness of the method developed in this manuscript. The displaced harmonic model system within the single exciton manifold shows the numerically exact temperature dependence of the coherence and population of the excitonic system. The sampling scheme can be applied to an arbitrary anharmonic environment, such as multichromophoric systems embedded in the protein complex. The result of this study is expected to stimulate further development of real time propagation methods that satisfy the detailed balance condition for exciton populations. PMID:23249075
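
    The Metropolis-adjusted Langevin algorithm named above is a standard sampler; the sketch below, in Python, shows its generic form for a user-supplied log-density and gradient (here a toy two-dimensional Gaussian), not the population-normalized estimator or approximated gradient of the paper.

        import numpy as np

        def mala(log_p, grad_log_p, x0, eps=0.1, n_steps=5000, rng=None):
            """Metropolis-adjusted Langevin sampler (generic sketch)."""
            rng = np.random.default_rng() if rng is None else rng
            x = np.asarray(x0, dtype=float)
            samples = np.empty((n_steps, x.size))
            for i in range(n_steps):
                # Langevin proposal: drift along the gradient plus Gaussian noise.
                mean_fwd = x + 0.5 * eps**2 * grad_log_p(x)
                prop = mean_fwd + eps * rng.standard_normal(x.size)
                # Metropolis-Hastings correction for the asymmetric proposal.
                mean_bwd = prop + 0.5 * eps**2 * grad_log_p(prop)
                log_q_fwd = -np.sum((prop - mean_fwd)**2) / (2 * eps**2)
                log_q_bwd = -np.sum((x - mean_bwd)**2) / (2 * eps**2)
                log_alpha = log_p(prop) - log_p(x) + log_q_bwd - log_q_fwd
                if np.log(rng.uniform()) < log_alpha:
                    x = prop
                samples[i] = x
            return samples

        # Toy target: standard 2-D Gaussian (stand-in for the true target density).
        log_p = lambda x: -0.5 * np.dot(x, x)
        grad_log_p = lambda x: -x
        chain = mala(log_p, grad_log_p, x0=np.zeros(2))
        print(chain.mean(axis=0), chain.var(axis=0))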

  2. Improved importance sampling for Monte Carlo simulation of time-domain optical coherence tomography.

    PubMed

    Lima, Ivan T; Kalra, Anshul; Sherif, Sherif S

    2011-01-01

    We developed an importance sampling based method that significantly speeds up the calculation of the diffusive reflectance due to ballistic and to quasi-ballistic components of photons scattered in turbid media: Class I diffusive reflectance. These components of scattered photons make up the signal in optical coherence tomography (OCT) imaging. We show that the use of this method reduces the computation time of this diffusive reflectance in time-domain OCT by up to three orders of magnitude when compared with standard Monte Carlo simulation. Our method does not produce a systematic bias in the statistical result that is typically observed in existing methods to speed up Monte Carlo simulations of light transport in tissue. This fast Monte Carlo calculation of the Class I diffusive reflectance can be used as a tool to further study the physical process governing OCT signals, e.g., obtain the statistics of the depth-scan, including the effects of multiple scattering of light, in OCT. This is an important prerequisite to future research to increase penetration depth and to improve image extraction in OCT. PMID:21559120

  3. Monte Carlo importance sampling for the MCNP™ general source

    SciTech Connect

    Lichtenstein, H.

    1996-01-09

    Research was performed to develop an importance sampling procedure for a radiation source. The procedure was developed for the MCNP radiation transport code, but the approach itself is general and can be adapted to other Monte Carlo codes. The procedure, as adapted to MCNP, relies entirely on existing MCNP capabilities. It has been tested for very complex descriptions of a general source, in the context of the design of spent-reactor-fuel storage casks. Dramatic improvements in calculation efficiency have been observed in some test cases. In addition, the procedure has been found to accelerate convergence to acceptable levels, and to quickly identify user-specified variance reduction in the transport that causes unstable convergence.

  4. Towards an Effective Importance Sampling in Monte Carlo Simulations of a System with a Complex Action

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, K.; Azuma, T.; Nishimura, J.

    The sign problem is a notorious problem, which occurs in Monte Carlo simulations of a system with a partition function whose integrand is not positive. One way to simulate such a system is to use the factorization method where one enforces sampling in the part of the configuration space which gives an important contribution to the partition function. This is accomplished by using constraints on some observables chosen appropriately and minimizing the free energy associated with their joint distribution functions. These observables are maximally correlated with the complex phase. Observables not in this set essentially decouple from the phase and can be calculated without the sign problem in the corresponding "microcanonical" ensemble. These ideas are applied to a simple matrix model with a very strong sign problem and the results are found to be consistent with analytic calculations using the Gaussian Expansion Method.

  5. Importance sampling: promises and limitations.

    SciTech Connect

    West, Nicholas J.; Swiler, Laura Painton

    2010-04-01

    Importance sampling is an unbiased sampling method used to sample random variables from different densities than originally defined. These importance sampling densities are constructed to pick 'important' values of input random variables to improve the estimation of a statistical response of interest, such as a mean or probability of failure. Conceptually, importance sampling is very attractive: for example one wants to generate more samples in a failure region when estimating failure probabilities. In practice, however, importance sampling can be challenging to implement efficiently, especially in a general framework that will allow solutions for many classes of problems. We are interested in the promises and limitations of importance sampling as applied to computationally expensive finite element simulations which are treated as 'black-box' codes. In this paper, we present a customized importance sampler that is meant to be used after an initial set of Latin Hypercube samples has been taken, to help refine a failure probability estimate. The importance sampling densities are constructed based on kernel density estimators. We examine importance sampling with respect to two main questions: is importance sampling efficient and accurate for situations where we can only afford small numbers of samples? And does importance sampling require the use of surrogate methods to generate a sufficient number of samples so that the importance sampling process does increase the accuracy of the failure probability estimate? We present various case studies to address these questions.
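
    As a minimal illustration of the basic idea (not the kernel-density-based sampler described in the report), the following sketch estimates a small failure probability P[X > t] for a standard normal variable by sampling from a proposal density shifted into the failure region and reweighting by the density ratio; the threshold and proposal are hypothetical.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 5000
        t = 4.0                                   # failure threshold: fail if x > t

        # Crude Monte Carlo: almost no samples land in the rare failure region.
        x_mc = rng.standard_normal(n)
        p_mc = np.mean(x_mc > t)

        # Importance sampling: propose from N(t, 1), centred on the failure region,
        # and reweight each sample by f(x)/q(x) to keep the estimator unbiased.
        x_is = rng.normal(loc=t, scale=1.0, size=n)
        w = stats.norm.pdf(x_is) / stats.norm.pdf(x_is, loc=t, scale=1.0)
        p_is = np.mean((x_is > t) * w)

        print(f"crude MC:   {p_mc:.2e}")
        print(f"importance: {p_is:.2e}  (exact {stats.norm.sf(t):.2e})")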

  6. Monte Carlo sampling of fission multiplicity.

    SciTech Connect

    Hendricks, J. S.

    2004-01-01

    Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly ³He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in k_eff of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the non-integrable error function. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
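
    The following sketch, with illustrative numbers rather than evaluated nuclear data, reproduces the bias described above: drawing an integer multiplicity from a Gaussian about the average and simply rejecting negative draws shifts the sampled mean upward.

        import numpy as np

        rng = np.random.default_rng(1)
        nu_bar, width, n = 2.156, 1.1, 1_000_000     # average multiplicity and width (illustrative values)

        draws = rng.normal(nu_bar, width, size=n)
        kept = draws[draws >= 0.0]                   # reject the negative Gaussian tail
        multiplicities = np.rint(kept).astype(int)   # integer number of neutrons per fission

        print("nominal nu-bar :", nu_bar)
        print("sampled mean   :", multiplicities.mean())   # biased high by the lost tail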

  7. Multiple importance sampling for PET.

    PubMed

    Szirmay-Kalos, Laszló; Magdics, Milán; Tóth, Balázs

    2014-04-01

    This paper proposes the application of multiple importance sampling in fully 3-D positron emission tomography to speed up the iterative reconstruction process. The proposed method combines the results of line of response (LOR) driven and voxel driven projections, keeping their advantages: importance sampling, performance, and parallel execution on graphics processing units. Voxel driven methods can focus on point-like features, while LOR driven approaches are efficient in reconstructing homogeneous regions. The theoretical basis of the combination is the application of the mixture of the samples generated by the individual importance sampling methods, emphasizing a particular method where it is better than others. The proposed algorithms are built into the Tera-tomo system. PMID:24710165
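
    In generic form, the mixture combination referred to here is the balance heuristic of multiple importance sampling. The sketch below applies it to a toy one-dimensional integral with two hypothetical Gaussian sampling strategies, standing in for the LOR-driven and voxel-driven projectors of the paper.

        import numpy as np
        from scipy import stats
        from scipy.integrate import quad

        rng = np.random.default_rng(2)
        f = lambda x: np.exp(-x**2) * (1.0 + np.sin(3.0 * x))    # toy integrand

        # Two sampling techniques, each covering a different part of the domain.
        mu1, mu2, sd = -1.0, 1.0, 0.8
        n1 = n2 = 2000
        x1 = rng.normal(mu1, sd, n1)
        x2 = rng.normal(mu2, sd, n2)

        # Balance heuristic: a sample at x contributes f(x) / sum_j n_j p_j(x),
        # which automatically emphasises whichever technique covers x best.
        x_all = np.concatenate([x1, x2])
        denom = n1 * stats.norm.pdf(x_all, mu1, sd) + n2 * stats.norm.pdf(x_all, mu2, sd)
        estimate = np.sum(f(x_all) / denom)

        reference, _ = quad(f, -np.inf, np.inf)
        print(f"MIS estimate {estimate:.4f}   quadrature {reference:.4f}")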

  8. Annealed Importance Sampling Reversible Jump MCMC algorithms

    SciTech Connect

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms were proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see, the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.

  9. A pure-sampling quantum Monte Carlo algorithm

    SciTech Connect

    Ospadov, Egor; Rothstein, Stuart M.

    2015-01-14

    The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof of principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground-states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems.

  10. Annealed Importance Sampling for Neural Mass Models

    PubMed Central

    Penny, Will; Sengupta, Biswa

    2016-01-01

    Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected together into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC) which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters and we find evidence of non-Gaussianity in their posterior distribution. PMID:26942606
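
    A bare-bones annealed importance sampling sketch follows, assuming a toy one-dimensional Gaussian prior and likelihood and simple random-walk Metropolis transitions in place of the Langevin Monte Carlo proposals used in the paper; the mean importance weight estimates the marginal likelihood that enters the Bayes factors.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy model: prior N(0, 1), likelihood N(x | 1, 0.5^2) for a single datum.
        log_prior = lambda x: -0.5 * x**2
        log_like  = lambda x: -0.5 * ((1.0 - x) / 0.5) ** 2

        def ais(n_chains=2000, n_temps=50, step=0.5):
            betas = np.linspace(0.0, 1.0, n_temps)
            x = rng.standard_normal(n_chains)            # exact draws from the prior (beta = 0)
            log_w = np.zeros(n_chains)
            for b_prev, b in zip(betas[:-1], betas[1:]):
                # Weight update: ratio of consecutive tempered targets.
                log_w += (b - b_prev) * log_like(x)
                # One Metropolis step leaving prior(x) * likelihood(x)^b invariant.
                prop = x + step * rng.standard_normal(n_chains)
                log_alpha = (log_prior(prop) + b * log_like(prop)
                             - log_prior(x) - b * log_like(x))
                accept = np.log(rng.uniform(size=n_chains)) < log_alpha
                x = np.where(accept, prop, x)
            return x, log_w

        x, log_w = ais()
        # Mean importance weight estimates the marginal likelihood (up to the Gaussian
        # normalising constants dropped from log_prior / log_like above).
        print("log Z estimate:", np.log(np.mean(np.exp(log_w))))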

  11. Annealed Importance Sampling for Neural Mass Models.

    PubMed

    Penny, Will; Sengupta, Biswa

    2016-03-01

    Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected together into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC) which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters and we find evidence of non-Gaussianity in their posterior distribution. PMID:26942606

  12. Cool walking: a new Markov chain Monte Carlo sampling method.

    PubMed

    Brown, Scott; Head-Gordon, Teresa

    2003-01-15

    Effective relaxation processes for difficult systems like proteins or spin glasses require special simulation techniques that permit barrier crossing to ensure ergodic sampling. Numerous adaptations of the venerable Metropolis Monte Carlo (MMC) algorithm have been proposed to improve its sampling efficiency, including various hybrid Monte Carlo (HMC) schemes, and methods designed specifically for overcoming quasi-ergodicity problems such as Jump Walking (J-Walking), Smart Walking (S-Walking), Smart Darting, and Parallel Tempering. We present an alternative to these approaches that we call Cool Walking, or C-Walking. In C-Walking two Markov chains are propagated in tandem, one at a high (ergodic) temperature and the other at a low temperature. Nonlocal trial moves for the low temperature walker are generated by first sampling from the high-temperature distribution, then performing a statistical quenching process on the sampled configuration to generate a C-Walking jump move. C-Walking needs only one high-temperature walker, satisfies detailed balance, and offers the important practical advantage that the high and low-temperature walkers can be run in tandem with minimal degradation of sampling due to the presence of correlations. To make the C-Walking approach more suitable to real problems we decrease the required number of cooling steps by attempting to jump at intermediate temperatures during cooling. We further reduce the number of cooling steps by utilizing "windows" of states when jumping, which improves acceptance ratios and lowers the average number of cooling steps. We present C-Walking results with comparisons to J-Walking, S-Walking, Smart Darting, and Parallel Tempering on a one-dimensional rugged potential energy surface in which the exact normalized probability distribution is known. C-Walking shows superior sampling as judged by two ergodic measures. PMID:12483676

  13. Adaptive Importance Sampling for Control and Inference

    NASA Astrophysics Data System (ADS)

    Kappen, H. J.; Ruiz, H. C.

    2016-03-01

    Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows one to learn feedback controllers using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.

  14. Neutrino oscillation parameter sampling with MonteCUBES

    NASA Astrophysics Data System (ADS)

    Blennow, Mattias; Fernandez-Martinez, Enrique

    2010-01-01

    We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: The first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling.

    Program summary. Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator). Catalogue identifier: AEFJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public Licence. No. of lines in distributed program, including test data, etc.: 69 634. No. of bytes in distributed program, including test data, etc.: 3 980 776. Distribution format: tar.gz. Programming language: C. Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed. Operating system: 32 bit and 64 bit Linux. RAM: Typically a few MBs. Classification: 11.1. External routines: GLoBES [1,2] and routines/libraries used by GLoBES. Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439. Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, these new physics imply high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, such as those

  15. Adaptive sample map for Monte Carlo ray tracing

    NASA Astrophysics Data System (ADS)

    Teng, Jun; Luo, Lixin; Chen, Zhibo

    2010-07-01

    The Monte Carlo ray tracing algorithm is widely used by production-quality renderers to generate synthesized images in films and TV programs. Noise artifacts exist in synthetic images generated by Monte Carlo ray tracing methods. In this paper, a novel noise artifact detection and noise level representation method is proposed. We first apply a discrete wavelet transform (DWT) on a synthetic image; the high-frequency sub-bands of the DWT result encode the noise information. The sub-band coefficients are then combined to generate a noise level description of the synthetic image, which is called a noise map in this paper. This noise map is then subdivided into blocks for robust noise level metric calculation. Increasing the samples per pixel in a Monte Carlo ray tracer can reduce the noise of a synthetic image to a visually unnoticeable level. A noise-to-sample-number mapping algorithm is thus performed on each block of the noise map: higher noise values are mapped to larger sample numbers, lower noise values are mapped to smaller sample numbers, and the result of the mapping is called a sample map. Each pixel in a sample map can be used by a Monte Carlo ray tracer to reduce the noise level in the corresponding block of pixels in a synthetic image. However, this block-based scheme produces blocky artifacts, as appear in video and image compression algorithms. We use a Gaussian filter to smooth the sample map; the result is the adaptive sample map (ASP). The ASP serves two purposes in the rendering process: its statistics can be used as a noise level metric for the synthetic image, and it can also be used by a Monte Carlo ray tracer to refine the synthetic image adaptively in order to reduce the noise to an unnoticeable level with less rendering time than the brute-force method.

  16. Monte Carlo Sampling of Negative-temperature Plasma States

    SciTech Connect

    John A. Krommes; Sharadini Rath

    2002-07-19

    A Monte Carlo procedure is used to generate N-particle configurations compatible with two-temperature canonical equilibria in two dimensions, with particular attention to nonlinear plasma gyrokinetics. An unusual feature of the problem is the importance of a nontrivial probability density function R0(Φ), the probability of realizing a set Φ of Fourier amplitudes associated with an ensemble of uniformly distributed, independent particles. This quantity arises because the equilibrium distribution is specified in terms of Φ, whereas the sampling procedure naturally produces particle states γ; Φ and γ are related via a gyrokinetic Poisson equation, highly nonlinear in its dependence on γ. Expansion and asymptotic methods are used to calculate R0(Φ) analytically; excellent agreement is found between the large-N asymptotic result and a direct numerical calculation. The algorithm is tested by successfully generating a variety of states of both positive and negative temperature, including ones in which either the longest- or shortest-wavelength modes are excited to relatively very large amplitudes.

  17. The alias method: A fast, efficient Monte Carlo sampling technique

    SciTech Connect

    Rathkopf, J.A.; Edwards, A.L.; Smidt, R.K.

    1990-11-16

    The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equal probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 2 figs., 1 tab.
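
    For reference, a compact sketch of the discrete alias method in its original (Walker/Vose) form, before any of the continuous-distribution extensions described in the report: table construction is linear in the number of outcomes, and each sample costs one uniform bin choice plus one comparison.

        import numpy as np

        def build_alias(probs):
            """Vose's O(n) construction of the probability and alias tables."""
            n = len(probs)
            scaled = np.asarray(probs, dtype=float) * n / np.sum(probs)
            prob_table, alias = np.zeros(n), np.zeros(n, dtype=int)
            small = [i for i, p in enumerate(scaled) if p < 1.0]
            large = [i for i, p in enumerate(scaled) if p >= 1.0]
            while small and large:
                s, l = small.pop(), large.pop()
                prob_table[s], alias[s] = scaled[s], l
                scaled[l] -= 1.0 - scaled[s]          # donate mass from the large bin
                (small if scaled[l] < 1.0 else large).append(l)
            for i in small + large:                    # leftovers equal 1 up to round-off
                prob_table[i] = 1.0
            return prob_table, alias

        def alias_sample(prob_table, alias, size, rng):
            """Each draw: one uniform bin choice and one comparison."""
            n = len(prob_table)
            bins = rng.integers(0, n, size=size)
            keep = rng.uniform(size=size) < prob_table[bins]
            return np.where(keep, bins, alias[bins])

        rng = np.random.default_rng(4)
        p = [0.1, 0.2, 0.3, 0.4]
        table, alias = build_alias(p)
        draws = alias_sample(table, alias, 100_000, rng)
        print(np.bincount(draws) / draws.size)        # approaches [0.1, 0.2, 0.3, 0.4]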

  18. Extending the alias Monte Carlo sampling method to general distributions

    SciTech Connect

    Edwards, A.L.; Rathkopf, J.A.; Smidt, R.K.

    1991-01-07

    The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equal probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs.

  19. Sample Size Requirements in Single- and Multiphase Growth Mixture Models: A Monte Carlo Simulation Study

    ERIC Educational Resources Information Center

    Kim, Su-Young

    2012-01-01

    Just as growth mixture models are useful with single-phase longitudinal data, multiphase growth mixture models can be used with multiple-phase longitudinal data. One of the practically important issues in single- and multiphase growth mixture models is the sample size requirements for accurate estimation. In a Monte Carlo simulation study, the…

  20. A modified Monte Carlo 'local importance function transform' method

    SciTech Connect

    Keady, K. P.; Larsen, E. W.

    2013-07-01

    The Local Importance Function Transform (LIFT) method uses an approximation of the contribution transport problem to bias a forward Monte Carlo (MC) source-detector simulation [1-3]. Local (cell-based) biasing parameters are calculated from an inexpensive deterministic adjoint solution and used to modify the physics of the forward transport simulation. In this research, we have developed a new expression for the LIFT biasing parameter, which depends on a cell-average adjoint current to scalar flux (J*/φ*) ratio. This biasing parameter differs significantly from the original expression, which uses adjoint cell-edge scalar fluxes to construct a finite difference estimate of the flux derivative; the resulting biasing parameters exhibit spikes in magnitude at material discontinuities, causing the original LIFT method to lose efficiency in problems with high spatial heterogeneity. The new J*/φ* expression, while more expensive to obtain, generates biasing parameters that vary smoothly across the spatial domain. The result is an improvement in simulation efficiency. A representative test problem has been developed and analyzed to demonstrate the advantage of the updated biasing parameter expression with regards to solution figure of merit (FOM). For reference, the two variants of the LIFT method are compared to a similar variance reduction method developed by Depinay [4, 5], as well as MC with deterministic adjoint weight windows (WW). (authors)

  1. Continuous-time quantum Monte Carlo using worm sampling

    NASA Astrophysics Data System (ADS)

    Gunacker, P.; Wallerberger, M.; Gull, E.; Hausoel, A.; Sangiovanni, G.; Held, K.

    2015-10-01

    We present a worm sampling method for calculating one- and two-particle Green's functions using continuous-time quantum Monte Carlo simulations in the hybridization expansion (CT-HYB). Instead of measuring Green's functions by removing hybridization lines from partition function configurations, as in conventional CT-HYB, the worm algorithm directly samples the Green's function. We show that worm sampling is necessary to obtain general two-particle Green's functions which are not of density-density type and that it improves the sampling efficiency when approaching the atomic limit. Such two-particle Green's functions are needed to compute off-diagonal elements of susceptibilities and occur in diagrammatic extensions of the dynamical mean-field theory and in efficient estimators for the single-particle self-energy.

  2. Experimental validation of plutonium ageing by Monte Carlo correlated sampling

    SciTech Connect

    Litaize, O.; Bernard, D.; Santamarina, A.

    2006-07-01

    Integral measurements of plutonium ageing were performed in two homogeneous MOX cores (MISTRAL2 and MISTRALS) of the French MISTRAL Programme between 1996 and 2000. The analysis of the MISTRAL2 experiment with the JEF-2.2 nuclear data library highlighted an underestimation of the ²⁴¹Am capture cross section. The next experiment (MISTRALS) did not lead to the same conclusion. This paper presents a new analysis performed with the recent JEFF-3.1 library and a Monte Carlo perturbation method (correlated sampling) available in the French TRIPOLI4 code. (authors)

  3. A flexible importance sampling method for integrating subgrid processes

    NASA Astrophysics Data System (ADS)

    Raut, E. K.; Larson, V. E.

    2016-01-01

    Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). The resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
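
    A schematic of the category-weighted idea, assuming just two hypothetical categories (raining and clear) and a made-up process rate rather than SILHS itself: the modeler prescribes how many sample points to spend in each category, and the estimator reweights the category means by their area fractions.

        import numpy as np

        rng = np.random.default_rng(5)

        # Toy subgrid variability: rain water mixing ratio q_r is zero over most of
        # the grid box and lognormal within the raining fraction.
        rain_frac = 0.1
        rate = lambda q_r: np.where(q_r > 0.0, q_r ** 1.5, 0.0)   # hypothetical process rate

        def sample_category(raining, n):
            if raining:
                return np.exp(rng.normal(-7.0, 1.0, size=n))       # q_r inside rain
            return np.zeros(n)                                      # q_r outside rain

        def grid_box_average(n_rain, n_clear):
            # Prescribed sample counts per category; each category mean is reweighted
            # by that category's area fraction (a stratified importance estimate).
            mean_rain = rate(sample_category(True, n_rain)).mean()
            mean_clear = rate(sample_category(False, n_clear)).mean()
            return rain_frac * mean_rain + (1.0 - rain_frac) * mean_clear

        # Spending most points in the raining category cuts the sampling error even
        # though the raining area is small.
        estimates = [grid_box_average(n_rain=18, n_clear=2) for _ in range(200)]
        print("mean", np.mean(estimates), "std", np.std(estimates))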

  4. A flexible importance sampling method for integrating subgrid processes

    DOE PAGES Beta

    Raut, E. K.; Larson, V. E.

    2016-01-29

    Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). The resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.

  5. A flexible importance sampling method for integrating subgrid processes

    DOE PAGES Beta

    Raut, E. K.; Larson, V. E.

    2016-01-29

    Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.

  6. A flexible importance sampling method for integrating subgrid processes

    SciTech Connect

    Raut, E. K.; Larson, V. E.

    2016-01-01

    Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales.

    The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories.

    The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). The resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.

  7. Reactive Monte Carlo sampling with an ab initio potential

    NASA Astrophysics Data System (ADS)

    Leiding, Jeff; Coe, Joshua D.

    2016-05-01

    We present the first application of reactive Monte Carlo in a first-principles context. The algorithm samples in a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulation of NH3 to those of ab initio molecular dynamics (AIMD). We find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the "rare-event" character of chemical reactions.

  8. Importance sampling. I. Computing multimodel p values in linkage analysis

    SciTech Connect

    Kong, A.; Frigge, M.; Irwin, M.; Cox, N.

    1992-12-01

    In linkage analysis, when the lod score is maximized over multiple genetic models, standard asymptotic approximation of the significance level does not apply. Monte Carlo methods can be used to estimate the p value, but procedures currently used are extremely inefficient. The authors propose a Monte Carlo procedure based on the concept of importance sampling, which can be thousands of times more efficient than current procedures. With a reasonable amount of computing time, extremely accurate estimates of the p values can be obtained. Both theoretical results and an example of maturity-onset diabetes of the young (MODY) are presented to illustrate the efficiency performance of their method. Relations between single-model and multimodel p values are explored. The new procedure is also used to investigate the performance of asymptotic approximations in a single model situation. 22 refs., 6 figs., 1 tab.
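
    A generic illustration of why importance sampling pays off for small p values (not the linkage-specific sampler of the paper): the tail probability of a sample mean is estimated by simulating from a shifted distribution and reweighting each replicate by its likelihood ratio; the statistic, threshold, and sample sizes are hypothetical.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        n_obs, thresh, n_sim = 20, 1.0, 20_000      # p value of: mean of 20 N(0,1) draws > 1.0

        # Direct Monte Carlo: the event is rare, so most simulations are wasted.
        direct = rng.standard_normal((n_sim, n_obs)).mean(axis=1)
        p_direct = np.mean(direct > thresh)

        # Importance sampling: simulate under N(thresh, 1) so the event is common, and
        # reweight each replicate by the likelihood ratio prod_i f0(x_i) / f1(x_i).
        x = rng.normal(loc=thresh, size=(n_sim, n_obs))
        log_lr = np.sum(stats.norm.logpdf(x) - stats.norm.logpdf(x, loc=thresh), axis=1)
        p_is = np.mean((x.mean(axis=1) > thresh) * np.exp(log_lr))

        exact = stats.norm.sf(thresh * np.sqrt(n_obs))   # sample mean ~ N(0, 1/n)
        print(f"direct {p_direct:.2e}   importance {p_is:.2e}   exact {exact:.2e}")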

  9. Importance Sampling of Word Patterns in DNA and Protein Sequences

    PubMed Central

    Chan, Hock Peng; Chen, Louis H.Y.

    2010-01-01

    Monte Carlo methods can provide accurate p-value estimates of word counting test statistics and are easy to implement. They are especially attractive when an asymptotic theory is absent or when either the search sequence or the word pattern is too short for the application of asymptotic formulae. Naive direct Monte Carlo is undesirable for the estimation of small probabilities because the associated rare events of interest are seldom generated. We propose instead efficient importance sampling algorithms that use controlled insertion of the desired word patterns on randomly generated sequences. The implementation is illustrated on word patterns of biological interest: palindromes and inverted repeats, patterns arising from position-specific weight matrices (PSWMs), and co-occurrences of pairs of motifs. PMID:21128856

  10. CSnrc: Correlated sampling Monte Carlo calculations using EGSnrc

    SciTech Connect

    Buckley, Lesley A.; Kawrakow, I.; Rogers, D.W.O.

    2004-12-01

    CSnrc, a new user-code for the EGSnrc Monte Carlo system, is described. This user-code improves the efficiency when calculating ratios of doses from similar geometries. It uses a correlated sampling variance reduction technique. CSnrc is developed from an existing EGSnrc user-code CAVRZnrc and improves upon the correlated sampling algorithm used in an earlier version of the code written for the EGS4 Monte Carlo system. Improvements over the EGS4 version of the algorithm avoid repetition of sections of particle tracks. The new code includes a rectangular phantom geometry not available in other EGSnrc cylindrical codes. Comparison to CAVRZnrc shows gains in efficiency of up to a factor of 64 for a variety of test geometries when computing the ratio of doses to the cavity for two geometries. CSnrc is well suited to in-phantom calculations and is used to calculate the central electrode correction factor P_cel in high-energy photon and electron beams. Current dosimetry protocols base the value of P_cel on earlier Monte Carlo calculations. The current CSnrc calculations achieve 0.02% statistical uncertainties on P_cel, much lower than those previously published. The current values of P_cel compare well with the values used in dosimetry protocols for photon beams. For electron beams, CSnrc calculations are reported at the reference depth used in recent protocols and show up to a 0.2% correction for a graphite electrode, a correction currently ignored by dosimetry protocols. The calculations show that for a 1 mm diameter aluminum central electrode, the correction factor differs somewhat from the values used in both the IAEA TRS-398 code of practice and the AAPM's TG-51 protocol.

  11. Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle

    SciTech Connect

    Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M

    2012-08-01

    For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS, Consistent Adjoint Driven Importance Sampling. This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.
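
    A schematic of the space/energy CADIS bookkeeping under simplifying assumptions (a handful of hypothetical cells and a given adjoint solution): the source is biased in proportion to source strength times adjoint importance, and starting weights are set so that weight times importance is constant, which also serves as the weight-window target.

        import numpy as np

        # Hypothetical cell-wise quantities from a deterministic adjoint calculation.
        q = np.array([0.50, 0.30, 0.15, 0.05])        # true (normalised) source per cell
        phi_adj = np.array([0.2, 1.0, 5.0, 25.0])     # adjoint flux = importance to the tally

        # Estimated detector response R = sum_i q_i * phi*_i.
        R = np.sum(q * phi_adj)

        # CADIS-style biased source: sample cells in proportion to q_i * phi*_i.
        q_hat = q * phi_adj / R

        # Consistent starting weight in each cell: w_i = q_i / q_hat_i = R / phi*_i,
        # so w_i * phi*_i = R everywhere; the same values serve as weight-window centres.
        w_start = q / q_hat

        print("biased source:", np.round(q_hat, 3))
        print("start weights:", np.round(w_start, 3))
        print("w * phi* (constant = R):", np.round(w_start * phi_adj, 3))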

  12. Estimation of cosmological parameters using adaptive importance sampling

    SciTech Connect

    Wraith, Darren; Kilbinger, Martin; Benabed, Karim; Prunet, Simon; Cappe, Olivier; Fort, Gersende; Cardoso, Jean-Francois; Robert, Christian P.

    2009-07-15

    We present a Bayesian sampling algorithm called adaptive importance sampling or population Monte Carlo (PMC), whose computational workload is easily parallelizable and thus has the potential to considerably reduce the wall-clock time required for sampling, along with providing other benefits. To assess the performance of the approach for cosmological problems, we use simulated and actual data consisting of CMB anisotropies, supernovae of type Ia, and weak cosmological lensing, and provide a comparison of results to those obtained using state-of-the-art Markov chain Monte Carlo (MCMC). For both types of data sets, we find comparable parameter estimates for PMC and MCMC, with the advantage of a significantly lower wall-clock time for PMC. In the case of WMAP5 data, for example, the wall-clock time scale reduces from days for MCMC to hours using PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using the approach, are analyzed and discussed.
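
    A bare-bones population Monte Carlo loop is sketched below, with a correlated two-dimensional Gaussian standing in for the cosmological posterior; at each iteration the Gaussian proposal is re-fitted to the weighted sample, and the effective sample size tracks how well the proposal has adapted.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)

        # Stand-in posterior: a correlated 2-D Gaussian instead of a cosmological likelihood.
        target_mean = np.array([1.0, -2.0])
        target_cov = np.array([[1.0, 0.6], [0.6, 1.0]])
        log_target = lambda x: stats.multivariate_normal.logpdf(x, target_mean, target_cov)

        mu, cov, n = np.zeros(2), 4.0 * np.eye(2), 4000      # deliberately poor initial proposal
        for it in range(6):
            x = rng.multivariate_normal(mu, cov, size=n)
            logw = log_target(x) - stats.multivariate_normal.logpdf(x, mu, cov)
            w = np.exp(logw - logw.max())
            w /= w.sum()                                      # self-normalised importance weights
            # Population Monte Carlo update: re-fit the proposal to the weighted sample.
            mu = w @ x
            cov = (x - mu).T @ ((x - mu) * w[:, None]) + 1e-6 * np.eye(2)
            ess = 1.0 / np.sum(w ** 2)
            print(f"iteration {it}: effective sample size {ess:.0f} of {n}, mean {np.round(mu, 2)}")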

  13. Receiver function inversion by trans-dimensional Monte Carlo sampling

    NASA Astrophysics Data System (ADS)

    Agostinetti, N. Piana; Malinverno, A.

    2010-05-01

    A key question in the analysis of an inverse problem is the quantification of the non-uniqueness of the solution. Non-uniqueness arises when properties of an earth model can be varied without significantly worsening the fit to observed data. In most geophysical inverse problems, subsurface properties are parameterized using a fixed number of unknowns, and non-uniqueness has been tackled with a Bayesian approach by determining a posterior probability distribution in the parameter space that combines 'a priori' information with information contained in the observed data. However, less consideration has been given to the question of whether the data themselves can constrain the model complexity, that is, the number of unknowns needed to fit the observations. Answering this question requires solving a trans-dimensional inverse problem, where the number of unknowns is an unknown itself. Recently, the Bayesian approach to parameter estimation has been extended to quantify the posterior probability of the model complexity (the number of model parameters) with a quantity called 'evidence'. The evidence can be hard to estimate in a non-linear problem; a practical solution is to use a Monte Carlo sampling algorithm that samples models with different numbers of unknowns in proportion to their posterior probability. This study presents a method to solve in trans-dimensional fashion the non-linear inverse problem of inferring 1-D subsurface elastic properties from teleseismic receiver function data. The Earth parameterization consists of a variable number of horizontal layers, where little is assumed a priori about the elastic properties, the number of layers, and their thicknesses. We developed a reversible jump Markov Chain Monte Carlo algorithm that draws samples from the posterior distribution of Earth models. The solution of the inverse problem is a posterior probability distribution of the number of layers, their thicknesses and the elastic properties as a function of

  14. The Importance of Microhabitat for Biodiversity Sampling

    PubMed Central

    Mehrabi, Zia; Slade, Eleanor M.; Solis, Angel; Mann, Darren J.

    2014-01-01

    Responses to microhabitat are often neglected when ecologists sample animal indicator groups. Microhabitats may be particularly influential in non-passive biodiversity sampling methods, such as baited traps or light traps, and for certain taxonomic groups which respond to fine scale environmental variation, such as insects. Here we test the effects of microhabitat on measures of species diversity, guild structure and biomass of dung beetles, a widely used ecological indicator taxon. We demonstrate that choice of trap placement influences dung beetle functional guild structure and species diversity. We found that locally measured environmental variables were unable to fully explain trap-based differences in species diversity metrics or microhabitat specialism of functional guilds. To compare the effects of habitat degradation on biodiversity across multiple sites, sampling protocols must be standardized and scale-relevant. Our work highlights the importance of considering microhabitat scale responses of indicator taxa and designing robust sampling protocols which account for variation in microhabitats during trap placement. We suggest that this can be achieved either through standardization of microhabitat or through better efforts to record relevant environmental variables that can be incorporated into analyses to account for microhabitat effects. This is especially important when rapidly assessing the consequences of human activity on biodiversity loss and associated ecosystem function and services. PMID:25469770

  15. MARKOV CHAIN MONTE CARLO POSTERIOR SAMPLING WITH THE HAMILTONIAN METHOD

    SciTech Connect

    K. HANSON

    2001-02-01

    The Markov Chain Monte Carlo technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy φ, where φ is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of φ and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of φ, is proposed to measure the convergence of the MCMC sequence.
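
    A minimal Hamiltonian MCMC sketch with leapfrog integration follows, for a generic target whose log-density and gradient are supplied by the user (here an isotropic Gaussian, echoing the test case above); the step size and trajectory length are illustrative.

        import numpy as np

        def hmc(log_p, grad_log_p, x0, n_samples=2000, eps=0.15, n_leap=20, rng=None):
            rng = np.random.default_rng() if rng is None else rng
            x, d = np.asarray(x0, dtype=float), len(x0)
            out = np.empty((n_samples, d))
            for i in range(n_samples):
                p = rng.standard_normal(d)                       # fresh momenta for each trajectory
                x_new, p_new = x.copy(), p.copy()
                # Leapfrog integration of Hamiltonian dynamics: H = p.p/2 - log_p(x).
                p_new += 0.5 * eps * grad_log_p(x_new)
                for _ in range(n_leap - 1):
                    x_new += eps * p_new
                    p_new += eps * grad_log_p(x_new)
                x_new += eps * p_new
                p_new += 0.5 * eps * grad_log_p(x_new)
                # Accept or reject on the change in total energy.
                dH = (0.5 * p_new @ p_new - log_p(x_new)) - (0.5 * p @ p - log_p(x))
                if np.log(rng.uniform()) < -dH:
                    x = x_new
                out[i] = x
            return out

        dim = 50                                                  # moderately high-dimensional Gaussian
        samples = hmc(lambda x: -0.5 * x @ x, lambda x: -x, np.zeros(dim))
        print("per-dimension variance (should be near 1):", samples[500:].var(axis=0).mean())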

  16. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields

    SciTech Connect

    Armas-Pérez, Julio C.; Londono-Hurtado, Alejandro; Guzmán, Orlando; Hernández-Ortiz, Juan P.; Pablo, Juan J. de

    2015-07-28

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.

  17. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields.

    SciTech Connect

    Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando; Hernandez-Ortiz, Juan P.; de Pablo, Juan J.

    2015-07-27

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.

  18. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields.

    PubMed

    Armas-Pérez, Julio C; Londono-Hurtado, Alejandro; Guzmán, Orlando; Hernández-Ortiz, Juan P; de Pablo, Juan J

    2015-07-28

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate. PMID:26233107

  19. Advanced interacting sequential Monte Carlo sampling for inverse scattering

    NASA Astrophysics Data System (ADS)

    Giraud, F.; Minvielle, P.; Del Moral, P.

    2013-09-01

    The following electromagnetism (EM) inverse problem is addressed. It consists in estimating the local radioelectric properties of materials covering an object from global EM scattering measurements, at various incidences and wave frequencies. This large scale ill-posed inverse problem is explored by an intensive exploitation of an efficient 2D Maxwell solver, distributed on high performance computing machines. Applied to a large training data set, a statistical analysis reduces the problem to a simpler probabilistic metamodel, from which Bayesian inference can be performed. Considering the radioelectric properties as a hidden dynamic stochastic process that evolves according to the frequency, it is shown how advanced Markov chain Monte Carlo methods, called sequential Monte Carlo or interacting particles, can take advantage of the structure and provide local EM property estimates.

  20. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    SciTech Connect

    Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Mon Weather Rev 133:1155-1174, 2005), Raftery et al. recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.

  1. Predicting outcomes of steady-state 13C isotope tracing experiments using Monte Carlo sampling

    PubMed Central

    2012-01-01

    Background: Carbon-13 (13C) analysis is a commonly used method for estimating reaction rates in biochemical networks. The choice of carbon labeling pattern is an important consideration when designing these experiments. We present a novel Monte Carlo algorithm for finding the optimal substrate input label for a particular experimental objective (flux or flux ratio). Unlike previous work, this method does not require assumption of the flux distribution beforehand.

    Results: Using a large E. coli isotopomer model, different commercially available substrate labeling patterns were tested computationally for their ability to determine reaction fluxes. The choice of optimal labeled substrate was found to be dependent upon the desired experimental objective. Many commercially available labels are predicted to be outperformed by complex labeling patterns. Based on Monte Carlo sampling, the dimensionality of experimental data was found to be considerably less than anticipated, suggesting that the effectiveness of 13C experiments for determining reaction fluxes across a large-scale metabolic network is less than previously believed.

    Conclusions: While 13C analysis is a useful tool in systems biology, high redundancy in measurements limits the information that can be obtained from each experiment. It is however possible to compute potential limitations before an experiment is run and predict whether, and to what degree, the rate of each reaction can be resolved. PMID:22289253

  2. Automatic variance reduction for Monte Carlo simulations via the local importance function transform

    SciTech Connect

    Turner, S.A.

    1996-02-01

    The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero-variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low-density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.

  3. Source description and sampling techniques in PEREGRINE Monte Carlo calculations of dose distributions for radiation oncology

    SciTech Connect

    Schach von Wittenau, A.E.; Cox, L.J.; Bergstrom, P.H., Jr.; Chandler, W.P.; Hartmann-Siantar, C.L.; Hornstein, S.M.

    1997-10-31

    We outline the techniques used within PEREGRINE, a 3D Monte Carlo code calculation system, to model the photon output from medical accelerators. We discuss the methods used to reduce the phase-space data to a form that is accurately and efficiently sampled.

  4. Monte Carlo calculations of the HPGe detector efficiency for radioactivity measurement of large volume environmental samples.

    PubMed

    Azbouche, Ahmed; Belgaid, Mohamed; Mazrou, Hakim

    2015-08-01

    A fully detailed Monte Carlo geometrical model of a High Purity Germanium detector with a (152)Eu source, packed in a Marinelli beaker, was developed for routine analysis of large-volume environmental samples. The model parameters, in particular the dead layer thickness, were then adjusted using a specific irradiation configuration together with a fine-tuning procedure. Thereafter, the calculated efficiencies were compared to the measured ones for standard samples containing a (152)Eu source in both grass and resin matrices packed in a Marinelli beaker. From this comparison, good agreement between experiment and Monte Carlo calculation was obtained, thereby highlighting the consistency of the geometrical computational model proposed in this work. Finally, the computational model was applied successfully to determine the (137)Cs distribution in a soil matrix. From this application, instructive results were achieved, highlighting in particular the erosion and accumulation zones of the studied site. PMID:25982445

  5. Lévy-Ciesielski random series as a useful platform for Monte Carlo path integral sampling.

    PubMed

    Predescu, Cristian

    2005-04-01

    We demonstrate that the Lévy-Ciesielski implementation of Lie-Trotter products enjoys several properties that make it extremely suitable for path-integral Monte Carlo simulations: fast computation of paths, fast Monte Carlo sampling, and the ability to use different numbers of time slices for the different degrees of freedom, commensurate with the quantum effects. It is demonstrated that a Monte Carlo simulation for which particles or small groups of variables are updated in a sequential fashion has a statistical efficiency that is always comparable to or better than that of an all-particle or all-variable update sampler. The sequential sampler results in significant computational savings if updating a variable costs only a fraction of the cost for updating all variables simultaneously or if the variables are independent. In the Lévy-Ciesielski representation, the path variables are grouped in a small number of layers, with the variables from the same layer being statistically independent. The superior performance of the fast sampling algorithm is shown to be a consequence of these observations. Both mathematical arguments and numerical simulations are employed in order to quantify the computational advantages of the sequential sampler, the Lévy-Ciesielski implementation of path integrals, and the fast sampling algorithm. PMID:15903818
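
    As an illustration of the layered structure this record exploits, the following sketch (a plain Brownian bridge with unit variance, an assumption made for simplicity) builds a path by the Lévy midpoint construction; all midpoints within one layer are conditionally independent given the coarser layers, which is what allows fast layer-wise Monte Carlo updates.

        # A minimal sketch of the Lévy midpoint construction of a Brownian bridge
        # on [0, 1]: path variables are added layer by layer, and the midpoints of
        # a given layer are drawn with a single independent vector of Gaussians.
        import numpy as np

        def levy_bridge(n_levels, rng):
            """Build a Brownian bridge B(t) on a dyadic grid with 2**n_levels intervals."""
            n = 2 ** n_levels
            path = np.zeros(n + 1)               # B(0) = B(1) = 0 for a bridge
            for level in range(n_levels):
                step = n // 2 ** (level + 1)     # spacing of the new midpoints
                idx = np.arange(step, n, 2 * step)        # midpoints added at this layer
                left, right = path[idx - step], path[idx + step]
                sd = np.sqrt(step / (2.0 * n))            # conditional std of a midpoint
                path[idx] = 0.5 * (left + right) + sd * rng.normal(size=idx.size)
            return path

        rng = np.random.default_rng(1)
        print(levy_bridge(6, rng)[:8])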

  6. On scale invariant features and sequential Monte Carlo sampling for bronchoscope tracking

    NASA Astrophysics Data System (ADS)

    Luó, Xióngbiao; Feuerstein, Marco; Kitasaka, Takayuki; Natori, Hiroshi; Takabatake, Hirotsugu; Hasegawa, Yoshinori; Mori, Kensaku

    2011-03-01

    This paper presents an improved bronchoscope tracking method for bronchoscopic navigation using scale invariant features and sequential Monte Carlo sampling. Although image-based methods are widely discussed in the bronchoscope tracking community, they are still limited to characteristic information such as bronchial bifurcations or folds and cannot automatically resume the tracking procedure after failures, which usually result from problematic bronchoscopic video frames or airway deformation. To overcome these problems, we propose a new approach that integrates scale invariant feature-based camera motion estimation into sequential Monte Carlo sampling to achieve accurate and robust tracking. In our approach, sequential Monte Carlo sampling is employed to recursively estimate the posterior probability densities of the bronchoscope camera motion parameters according to an observation model based on scale invariant feature-based camera motion recovery. We evaluate our proposed method on patient datasets. Experimental results illustrate that our proposed method can track a bronchoscope more accurately and robustly than the current state-of-the-art method, increasing the tracking performance by 38.7% without using an additional position sensor.

  7. Sequential Importance Sampling for Rare Event Estimation with Computer Experiments

    SciTech Connect

    Williams, Brian J.; Picard, Richard R.

    2012-06-25

    Importance sampling often drastically improves the variance of percentile and quantile estimators of rare events. We propose a sequential strategy for iterative refinement of importance distributions for sampling uncertain inputs to a computer model, in order to estimate quantiles of model output or the probability that the model output exceeds a fixed or random threshold. A framework is introduced for updating a model surrogate to maximize its predictive capability for rare event estimation with sequential importance sampling. Examples of the proposed methodology involving materials strength and nuclear reactor applications are presented. The conclusions are: (1) importance sampling improves UQ of percentile and quantile estimates relative to a brute-force approach; (2) the benefits of importance sampling increase as percentiles become more extreme; (3) iterative refinement improves importance distributions in relatively few iterations; (4) surrogates are necessary for slow-running codes; (5) sequential design improves surrogate quality in the region of parameter space indicated by the importance distributions; and (6) importance distributions and VRFs stabilize quickly, while quantile estimates may converge slowly.
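
    The core estimator behind this kind of study can be illustrated with a toy standard-normal input and a mean-shifted Gaussian importance distribution (both are assumptions for illustration, not taken from the record):

        # A minimal sketch of importance sampling for a rare exceedance probability:
        # sample from a proposal concentrated near the failure region and reweight
        # each sample by the density ratio p(x)/q(x).
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)
        threshold = 4.0                     # rare event: X > 4 for X ~ N(0, 1)
        n = 100_000

        # Plain Monte Carlo
        x = rng.normal(size=n)
        p_mc = np.mean(x > threshold)

        # Importance sampling with proposal N(threshold, 1)
        y = rng.normal(loc=threshold, size=n)
        w = norm.pdf(y) / norm.pdf(y, loc=threshold)      # density ratio p/q
        p_is = np.mean((y > threshold) * w)

        print(p_mc, p_is, norm.sf(threshold))   # IS is close to the exact tail probability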

  8. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    SciTech Connect

    Infanger, G.

    1993-11-01

    The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages and problems get easily out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper bound estimation. Initial numerical results turned out to be promising.

  9. Sample Size and Power Estimates for a Confirmatory Factor Analytic Model in Exercise and Sport: A Monte Carlo Approach

    ERIC Educational Resources Information Center

    Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying

    2011-01-01

    Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…

  10. Extensive all-atom Monte Carlo sampling and QM/MM corrections in the SAMPL4 hydration free energy challenge.

    PubMed

    Genheden, Samuel; Cabedo Martinez, Ana I; Criddle, Michael P; Essex, Jonathan W

    2014-03-01

    We present our predictions for the SAMPL4 hydration free energy challenge. Extensive all-atom Monte Carlo simulations were employed to sample the compounds in explicit solvent. While the focus of our study was to demonstrate well-converged and reproducible free energies, we attempted to address the deficiencies of the general Amber force field with a simple QM/MM correction. We show that by using multiple independent simulations, including different starting configurations, and enhanced sampling with parallel tempering, we can obtain well-converged hydration free energies. Additional analysis using dihedral angle distributions, torsion root-mean-square deviation plots and thermodynamic cycles supports this assertion. We obtain a mean absolute deviation of 1.7 kcal mol(-1) and a Kendall's τ of 0.65 compared with experiment. PMID:24488307

  11. Monte Carlo Calculation of Thermal Neutron Inelastic Scattering Cross Section Uncertainties by Sampling Perturbed Phonon Spectra

    NASA Astrophysics Data System (ADS)

    Holmes, Jesse Curtis

    established that depends on uncertainties in the physics models and methodology employed to produce the DOS. Through Monte Carlo sampling of perturbations from the reference phonon spectrum, an S(alpha, beta) covariance matrix may be generated. In this work, density functional theory and lattice dynamics in the harmonic approximation are used to calculate the phonon DOS for hexagonal crystalline graphite. This form of graphite is used as an example material for the purpose of demonstrating procedures for analyzing, calculating and processing thermal neutron inelastic scattering uncertainty information. Several sources of uncertainty in thermal neutron inelastic scattering calculations are examined, including sources which cannot be directly characterized through a description of the phonon DOS uncertainty, and their impacts are evaluated. Covariances for hexagonal crystalline graphite S(alpha, beta) data are quantified by coupling the standard methodology of LEAPR with a Monte Carlo sampling process. The mechanics of efficiently representing and processing this covariance information is also examined. Finally, with appropriate sensitivity information, it is shown that an S(alpha, beta) covariance matrix can be propagated to generate covariance data for integrated cross sections, secondary energy distributions, and coupled energy-angle distributions. This approach enables a complete description of thermal neutron inelastic scattering cross section uncertainties which may be employed to improve the simulation of nuclear systems.

  12. Improved importance sampling technique for efficient simulation of digital communication systems

    NASA Technical Reports Server (NTRS)

    Lu, Dingqing; Yao, Kung

    1988-01-01

    A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evaluations of the simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evaluations are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no signal random memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and of IIS over CIS for simulations of digital communication systems.
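
    A toy version of the two biasing strategies compared in this record, for antipodal signaling in additive Gaussian noise (the signal level, noise level and sample size below are made-up values), might look like the following sketch: conventional IS scales the noise variance, while the improved IS translates the noise mean toward the decision boundary.

        # A minimal sketch contrasting variance-scaling (CIS-style) and mean-translation
        # (IIS-style) biasing for a bit-error-rate estimate; both reweight by the
        # density ratio between the true and the biased noise distributions.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        s, sigma, n = 1.0, 0.25, 50_000        # a bit error occurs when s + noise < 0

        def ber_is(loc, scale):
            noise = rng.normal(loc, scale, size=n)
            w = norm.pdf(noise, 0.0, sigma) / norm.pdf(noise, loc, scale)
            return np.mean((s + noise < 0) * w)

        exact = norm.cdf(-s / sigma)
        print("exact            :", exact)
        print("scaled (CIS-like):", ber_is(0.0, 3 * sigma))   # variance scaling
        print("shifted (IIS-like):", ber_is(-s, sigma))       # mean translation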

  13. Optimal sampling efficiency in Monte Carlo sampling with an approximate potential

    SciTech Connect

    Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D

    2009-01-01

    Building on the work of Iftimie et al., Boltzmann sampling of an approximate potential (the 'reference' system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is evaluated at a higher level of approximation (the 'full' system) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. For reference system chains of sufficient length, consecutive full energies are statistically decorrelated and thus far fewer are required to build ensemble averages with a given variance. Without modifying the original algorithm, however, the maximum reference chain length is too short to decorrelate full configurations without dramatically lowering the acceptance probability of the composite move. This difficulty stems from the fact that the reference and full potentials sample different statistical distributions. By manipulating the thermodynamic variables characterizing the reference system (pressure and temperature, in this case), we maximize the average acceptance probability of composite moves, lengthening significantly the random walk between consecutive full energy evaluations. In this manner, the number of full energy evaluations needed to precisely characterize equilibrium properties is dramatically reduced. The method is applied to a model fluid, but implications for sampling high-dimensional systems with ab initio or density functional theory (DFT) potentials are discussed.
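
    The composite-move construction can be sketched with two hypothetical 1D potentials standing in for the "full" and "reference" systems (the paper's isothermal-isobaric ensemble and the tuning of the reference pressure and temperature are omitted here):

        # A minimal sketch of nested Markov chain Monte Carlo: a short chain is run
        # with a cheap reference potential, and the composite move to its endpoint is
        # accepted against the expensive full potential with a modified Metropolis
        # criterion that corrects for the full/reference energy mismatch.
        import numpy as np

        rng = np.random.default_rng(4)
        beta = 1.0
        E_full = lambda x: 0.5 * x**2 + 0.1 * x**4      # "expensive" model (toy)
        E_ref  = lambda x: 0.5 * x**2                   # cheap approximation

        def reference_chain(x, n_steps=50, step=0.5):
            """Ordinary Metropolis walk driven by the reference potential only."""
            for _ in range(n_steps):
                y = x + rng.uniform(-step, step)
                if rng.random() < np.exp(-beta * (E_ref(y) - E_ref(x))):
                    x = y
            return x

        x, samples = 0.0, []
        for _ in range(2000):
            y = reference_chain(x)
            # Composite move: accept with exp(-beta * (dE_full - dE_ref))
            d_full, d_ref = E_full(y) - E_full(x), E_ref(y) - E_ref(x)
            if rng.random() < np.exp(-beta * (d_full - d_ref)):
                x = y
            samples.append(x)
        print(np.mean(samples), np.var(samples))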

  14. Replica-exchange Wang Landau sampling: pushing the limits of Monte Carlo simulations in materials sciences

    SciTech Connect

    Perera, Meewanage Dilina N; Li, Ying Wai; Eisenbach, Markus; Vogel, Thomas; Landau, David P

    2015-01-01

    We describe the study of thermodynamics of materials using replica-exchange Wang Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first principles calculations. We demonstrate that our framework leads to a significant speedup without compromising the accuracy and precision and facilitates the study of much larger systems than is possible with its serial counterpart.
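
    For orientation, a serial Wang-Landau walker on a tiny 2D Ising lattice (a standard textbook target, not the first-principles Heisenberg model of this record) is sketched below; the replica-exchange version distributes overlapping energy windows over many parallel walkers but keeps the same per-walker update.

        # A minimal sketch of serial Wang-Landau sampling: estimate ln g(E) by biasing
        # moves toward rarely visited energies and shrinking the modification factor f
        # whenever the visit histogram is roughly flat.
        import numpy as np

        rng = np.random.default_rng(5)
        L = 4
        spins = rng.choice([-1, 1], size=(L, L))

        def energy(s):
            return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))  # periodic b.c.

        e_levels = np.arange(-2 * L * L, 2 * L * L + 1, 4)   # possible energies
        index = {e: i for i, e in enumerate(e_levels)}
        log_g = np.zeros(len(e_levels))                      # running estimate of ln g(E)
        hist = np.zeros(len(e_levels))
        f = 1.0                                              # ln of modification factor

        E = energy(spins)
        while f > 1e-4:
            for _ in range(10_000):
                i, j = rng.integers(L, size=2)
                dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                        + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                new_E = E + dE
                # Wang-Landau acceptance: favor states with lower current ln g(E)
                if np.log(rng.random()) < log_g[index[E]] - log_g[index[new_E]]:
                    spins[i, j] *= -1
                    E = new_E
                log_g[index[E]] += f
                hist[index[E]] += 1
            if hist[hist > 0].min() > 0.7 * hist[hist > 0].mean():   # rough flatness check
                f, hist = f / 2.0, np.zeros_like(hist)
        print(log_g - log_g[0])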

  15. Adaptive importance sampling of random walks on continuous state spaces

    SciTech Connect

    Baggerly, K.; Cox, D.; Picard, R.

    1998-11-01

    The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material.

  16. Improving the sampling efficiency of Monte Carlo molecular simulations: an evolutionary approach

    NASA Astrophysics Data System (ADS)

    Leblanc, Benoit; Braunschweig, Bertrand; Toulhoat, Hervé; Lutton, Evelyne

    We present a new approach to improve the convergence of Monte Carlo (MC) simulations of molecular systems with complex energy landscapes: the problem is redefined in terms of the dynamic allocation of MC move frequencies depending on their past efficiency, measured with respect to a relevant sampling criterion. We introduce various empirical criteria with the aim of accounting for proper convergence in phase-space sampling. The dynamic allocation is performed over parallel simulations by means of a new evolutionary algorithm involving 'immortal' individuals. The method is benchmarked against conventional procedures on a model for melt linear polyethylene. We record significant improvements in sampling efficiency, and thus in computational load, while the optimal sets of move frequencies can provide interesting physical insights into the particular systems simulated. This last aspect should provide a new tool for designing more efficient new MC moves.

  17. Fast Monte Carlo simulation of a dispersive sample on the SEQUOIA spectrometer at the SNS

    SciTech Connect

    Granroth, Garrett E; Chen, Meili; Kohl, James Arthur; Hagen, Mark E; Cobb, John W

    2007-01-01

    Simulation of an inelastic scattering experiment, with a sample and a large pixelated detector, usually requires days of computation because of finite processor speeds. We report simulations of an SNS (Spallation Neutron Source) instrument, SEQUOIA, that reduce the time to less than 2 hours by using parallelization and the resources of the TeraGrid. SEQUOIA is a fine-resolution (∆E/Ei ~ 1%) chopper spectrometer under construction at the SNS. It utilizes incident energies from Ei = 20 meV to 2 eV and will have ~144,000 detector pixels covering 1.6 sr of solid angle. The full spectrometer, including a 1-D dispersive sample, has been simulated using the Monte Carlo package McStas. This paper summarizes the method of parallelization and the results from these simulations. In addition, limitations of and proposed improvements to current analysis software are discussed.

  18. Note: A pure-sampling quantum Monte Carlo algorithm with independent Metropolis

    NASA Astrophysics Data System (ADS)

    Vrbik, Jan; Ospadov, Egor; Rothstein, Stuart M.

    2016-07-01

    Recently, Ospadov and Rothstein published a pure-sampling quantum Monte Carlo algorithm (PSQMC) that features an auxiliary Path Z that connects the midpoints of the current and proposed Paths X and Y, respectively. When sufficiently long, Path Z provides statistical independence of Paths X and Y. Under those conditions, the Metropolis decision used in PSQMC is done without any approximation, i.e., not requiring microscopic reversibility and without having to introduce any G(x → x'; τ) factors into its decision function. This is a unique feature that contrasts with all competing reptation algorithms in the literature. An example illustrates that dependence of Paths X and Y has adverse consequences for pure sampling.

  19. ``Binless Wang-Landau sampling'' - a multicanonical Monte Carlo algorithm without histograms

    NASA Astrophysics Data System (ADS)

    Li, Ying Wai; Eisenbach, Markus

    Inspired by the very successful Wang-Landau (WL) sampling, we developed a multicanonical Monte Carlo algorithm to obtain the density of states (DOS) for physical systems with continuous state variables. Unlike the original WL scheme, where the DOS is obtained as a numerical array of finite resolution, our algorithm assumes an analytical form for the DOS using a well-chosen basis set, with coefficients determined iteratively in a manner similar to the WL approach. To avoid undesirable artificial errors caused by the discretization of state variables, we do not use a histogram to keep track of the number of visits to energy levels, but instead store the visited states directly for the fitting of coefficients. This new algorithm has the advantage of producing an analytical expression for the DOS, while the original WL sampling can be readily recovered. This research was supported by the Office of Science of the Department of Energy under Contract DE-AC05-00OR22725.

  20. Comparison of Monte Carlo simulations of cytochrome b6f with experiment using Latin hypercube sampling.

    PubMed

    Schumaker, Mark F; Kramer, David M

    2011-09-01

    We have programmed a Monte Carlo simulation of the Q-cycle model of electron transport in cytochrome b(6)f complex, an enzyme in the photosynthetic pathway that converts sunlight into biologically useful forms of chemical energy. Results were compared with published experiments of Kramer and Crofts (Biochim. Biophys. Acta 1183:72-84, 1993). Rates for the simulation were optimized by constructing large numbers of parameter sets using Latin hypercube sampling and selecting those that gave the minimum mean square deviation from experiment. Multiple copies of the simulation program were run in parallel on a Beowulf cluster. We found that Latin hypercube sampling works well as a method for approximately optimizing very noisy objective functions of 15 or 22 variables. Further, the simplified Q-cycle model can reproduce experimental results in the presence or absence of a quinone reductase (Q(i)) site inhibitor without invoking ad hoc side-reactions. PMID:21221830
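
    The sampling design itself is easy to sketch: the function below generates a Latin hypercube over a hypothetical 15-parameter space and keeps the best-scoring parameter set under a noisy toy objective standing in for the mean square deviation from experiment (all values are illustrative, not taken from the record).

        # A minimal sketch of Latin hypercube sampling: one point per stratum in every
        # dimension, with the strata randomly paired across dimensions, then the
        # candidate sets are scored and the best one is retained.
        import numpy as np

        def latin_hypercube(n_samples, n_dims, rng):
            u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
            for d in range(n_dims):
                u[:, d] = u[rng.permutation(n_samples), d]   # decorrelate the columns
            return u                                          # samples in [0, 1)^n_dims

        rng = np.random.default_rng(6)
        target = rng.random(15)                               # hypothetical "true" parameters
        lhs = latin_hypercube(2000, 15, rng)
        scores = np.sum((lhs - target) ** 2, axis=1) + 0.01 * rng.normal(size=2000)  # noisy objective
        best = lhs[np.argmin(scores)]
        print(np.abs(best - target).max())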

  1. Performance evaluation of an importance sampling technique in a Jackson network

    NASA Astrophysics Data System (ADS)

    Ebrahim Mahdipour, E.; Masoud Rahmani, Amir; Setayeshi, Saeed

    2014-03-01

    Importance sampling is a technique commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network settings. Estimating probabilities associated with rare events has been a topic of great importance in queueing theory, and in applied probability at large. In this article, we analyse the performance of an importance sampling estimator for a rare event probability in a Jackson network. The article applies strict deadlines to a two-node Jackson network with feedback whose arrival and service rates are modulated by an exogenous finite-state Markov process. We estimate the probability of network blocking for various sets of parameters, and also the probability of customers missing their deadlines for different loads and deadlines. We finally show that the probability of total population overflow may be affected by various deadline values, service rates and arrival rates.

  2. Source of statistical noises in the Monte Carlo sampling techniques for coherently scattered photons

    PubMed Central

    Muhammad, Wazir; Lee, Sang Hoon

    2013-01-01

    Detailed comparisons of the predictions of the Relativistic Form Factors (RFFs) and Modified Form Factors (MFFs), and of their advantages and shortcomings in calculating elastic scattering cross sections, can be found in the literature. However, the issues related to their implementation in the Monte Carlo (MC) sampling for coherently scattered photons are still under discussion. Secondly, the linear interpolation technique (LIT) is a popular method to draw the integrated values of squared RFFs/MFFs over squared momentum transfer. In the current study, the roles and issues of RFFs/MFFs and the LIT in MC sampling for coherent scattering were analyzed. The results showed that the relative probability density curves sampled on the basis of MFFs are unable to reveal any extra scientific information, as both the RFFs and MFFs produced the same MC-sampled curves. Furthermore, no relationship was established between the multiple small peaks and irregular step shapes (i.e. statistical noise) in the PDFs and either the RFFs or the MFFs. In fact, the noise in the PDFs appeared due to the use of the LIT. The density of the noise depends upon the interval length between two consecutive points in the input data table and has no physical origin. The probability density function curves became smoother as the interval lengths were decreased. In conclusion, these statistical noises can be efficiently removed by introducing more data points into the data tables. PMID:22984278

  3. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.

    2008-06-01

    An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near-continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near-equilibrium flows (DREAM-I) or the instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.

  4. Markov Chain Monte Carlo Sampling Methods for 1D Seismic and EM Data Inversion

    Energy Science and Technology Software Center (ESTSC)

    2008-09-22

    This software provides several Markov chain Monte Carlo sampling methods for the Bayesian model developed for inverting 1D marine seismic and controlled source electromagnetic (CSEM) data. The current software can be used for individual inversion of seismic AVO and CSEM data and for joint inversion of both seismic and EM data sets. The structure of the software is very general and flexible, and it allows users to incorporate their own forward simulation codes and rock physics model codes easily into this software. Although the software was developed using the C and C++ computer languages, the user-supplied codes can be written in C, C++, or various versions of Fortran. The software provides clear interfaces for users to plug in their own codes. The output of this software is in a format that the R free software CODA can directly read to build MCMC objects.

  5. Two-phase importance sampling for inference about transmission trees

    PubMed Central

    Numminen, Elina; Chewapreecha, Claire; Sirén, Jukka; Turner, Claudia; Turner, Paul; Bentley, Stephen D.; Corander, Jukka

    2014-01-01

    There has been growing interest in the statistics community to develop methods for inferring transmission pathways of infectious pathogens from molecular sequence data. For many datasets, the computational challenge lies in the huge dimension of the missing data. Here, we introduce an importance sampling scheme in which the transmission trees and phylogenies of pathogens are both sampled from reasonable importance distributions, alleviating the inference. Using this approach, arbitrary models of transmission could be considered, contrary to many earlier proposed methods. We illustrate the scheme by analysing transmissions of Streptococcus pneumoniae from household to household within a refugee camp, using data in which only a fraction of hosts is observed, but which is still rich enough to unravel the within-household transmission dynamics and pairs of households between whom transmission is plausible. We observe that while probability of direct transmission is low even for the most prominent cases of transmission, still those pairs of households are geographically much closer to each other than expected under random proximity. PMID:25253455

  6. Two-phase importance sampling for inference about transmission trees.

    PubMed

    Numminen, Elina; Chewapreecha, Claire; Sirén, Jukka; Turner, Claudia; Turner, Paul; Bentley, Stephen D; Corander, Jukka

    2014-11-01

    There has been growing interest in the statistics community to develop methods for inferring transmission pathways of infectious pathogens from molecular sequence data. For many datasets, the computational challenge lies in the huge dimension of the missing data. Here, we introduce an importance sampling scheme in which the transmission trees and phylogenies of pathogens are both sampled from reasonable importance distributions, alleviating the inference. Using this approach, arbitrary models of transmission could be considered, contrary to many earlier proposed methods. We illustrate the scheme by analysing transmissions of Streptococcus pneumoniae from household to household within a refugee camp, using data in which only a fraction of hosts is observed, but which is still rich enough to unravel the within-household transmission dynamics and pairs of households between whom transmission is plausible. We observe that while probability of direct transmission is low even for the most prominent cases of transmission, still those pairs of households are geographically much closer to each other than expected under random proximity. PMID:25253455

  7. Monte Carlo based investigation of Berry phase for depth resolved characterization of biomedical scattering samples

    SciTech Connect

    Baba, Justin S; John, Dwayne O; Koju, Vijay

    2015-01-01

    The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computational modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute-force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10 million) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman et al. to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.

  8. Refined elasticity sampling for Monte Carlo-based identification of stabilizing network patterns

    PubMed Central

    Childs, Dorothee; Grimbs, Sergio; Selbig, Joachim

    2015-01-01

    Motivation: Structural kinetic modelling (SKM) is a framework to analyse whether a metabolic steady state remains stable under perturbation, without requiring detailed knowledge about individual rate equations. It provides a representation of the system’s Jacobian matrix that depends solely on the network structure, steady state measurements, and the elasticities at the steady state. For a measured steady state, stability criteria can be derived by generating a large number of SKMs with randomly sampled elasticities and evaluating the resulting Jacobian matrices. The elasticity space can be analysed statistically in order to detect network positions that contribute significantly to the perturbation response. Here, we extend this approach by examining the kinetic feasibility of the elasticity combinations created during Monte Carlo sampling. Results: Using a set of small example systems, we show that the majority of sampled SKMs would yield negative kinetic parameters if they were translated back into kinetic models. To overcome this problem, a simple criterion is formulated that mitigates such infeasible models. After evaluating the small example pathways, the methodology was used to study two steady states of the neuronal TCA cycle and the intrinsic mechanisms responsible for their stability or instability. The findings of the statistical elasticity analysis confirm that several elasticities are jointly coordinated to control stability and that the main source for potential instabilities are mutations in the enzyme alpha-ketoglutarate dehydrogenase. Contact: dorothee.childs@embl.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26072485

  9. Estimation variance bounds of importance sampling simulations in digital communication systems

    NASA Technical Reports Server (NTRS)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.

  10. Using self-consistent fields to bias Monte Carlo methods with applications to designing and sampling protein sequences

    NASA Astrophysics Data System (ADS)

    Zou, Jinming; Saven, Jeffery G.

    2003-02-01

    For complex multidimensional systems, Monte Carlo methods are useful for sampling probable regions of a configuration space and, in the context of annealing, for determining "low energy" or "high scoring" configurations. Such methods have been used in protein design as means to identify amino acid sequences that are energetically compatible with a particular backbone structure. As with many other applications of Monte Carlo methods, such searches can be inefficient if trial configurations (protein sequences) in the Markov chain are chosen randomly. Here a mean-field biased Monte Carlo method (MFBMC) is presented and applied to designing and sampling protein sequences. The MFBMC method uses predetermined sequence identity probabilities wi(α) to bias the sequence selection. The wi(α) are calculated using a self-consistent, mean-field theory that can estimate the number and composition of sequences having predetermined values of energetically related foldability criteria. The MFBMC method is applied to both a simple protein model, the 27-mer lattice model, and an all-atom protein model. Compared to conventional Monte Carlo (MC) and configurational bias Monte Carlo (BMC), the MFBMC method converges faster to low energy sequences and samples such sequences more efficiently. The MFBMC method also tolerates faster cooling rates than the MC and BMC methods. The MFBMC method can be applied not only to protein sequence search, but also to a wide variety of polymeric and condensed phase systems.

  11. Clever particle filters, sequential importance sampling and the optimal proposal

    NASA Astrophysics Data System (ADS)

    Snyder, Chris

    2014-05-01

    Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. Both these schemes employ proposal distributions at time tk+1 that depend on the state at tk and the observations at time tk+1. I show that, beginning with particles drawn randomly from the conditional distribution of the state at tk given observations through tk, the optimal proposal (the distribution of the state at tk+1 given the state at tk and the observations at tk+1) minimizes the variance of the importance weights for particles at tk over all possible proposal distributions. This means that bounds on the performance of the optimal proposal, such as those given by Snyder (2011), also bound the performance of the implicit and equivalent-weights particle filters. In particular, in spite of the fact that they may be dramatically more effective than other particle filters in specific instances, those schemes will suffer degeneracy (maximum importance weight approaching unity) unless the ensemble size is exponentially large in a quantity that, in the simplest case that all degrees of freedom in the system are i.i.d., is proportional to the system dimension. I will also discuss the behavior to be expected in more general cases, such as global numerical weather prediction, and how that behavior depends qualitatively on the observing network. Snyder, C., 2012: Particle filters, the "optimal" proposal and high-dimensional systems. Proceedings, ECMWF Seminar on Data Assimilation for Atmosphere and Ocean, 6-9 September 2011.
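
    The weight-degeneracy argument can be illustrated numerically. The sketch below (an i.i.d. linear-Gaussian model with assumed unit variances, not taken from the abstract) compares the bootstrap proposal with the optimal proposal for a single assimilation step and shows the maximum normalized weight growing with system dimension for both.

        # A minimal sketch of one sequential importance sampling update for the model
        # x_{k+1} = x_k + N(0, Q), y_{k+1} = x_{k+1} + N(0, R), component-wise i.i.d.
        import numpy as np

        rng = np.random.default_rng(7)
        Q, R, n_particles = 1.0, 1.0, 1000

        def max_weight(dim, optimal):
            x = rng.normal(size=(n_particles, dim))              # particles at time k
            truth = rng.normal(size=dim) + rng.normal(0, np.sqrt(Q), dim)
            y = truth + rng.normal(0, np.sqrt(R), dim)           # observation at k+1
            if optimal:
                # Optimal proposal: importance weight depends only on p(y | x_k)
                logw = -0.5 * np.sum((y - x) ** 2, axis=1) / (Q + R)
            else:
                # Bootstrap proposal: draw from the transition, weight by the likelihood
                xp = x + rng.normal(0, np.sqrt(Q), size=x.shape)
                logw = -0.5 * np.sum((y - xp) ** 2, axis=1) / R
            w = np.exp(logw - logw.max())
            return (w / w.sum()).max()

        for dim in (1, 10, 50, 100):
            print(dim, max_weight(dim, optimal=False), max_weight(dim, optimal=True))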

  12. Sampling Enrichment toward Target Structures Using Hybrid Molecular Dynamics-Monte Carlo Simulations

    PubMed Central

    Yang, Kecheng; Różycki, Bartosz; Cui, Fengchao; Shi, Ce; Chen, Wenduo; Li, Yunqi

    2016-01-01

    Sampling enrichment toward a target state, an analogue of the improvement of sampling efficiency (SE), is critical in both the refinement of protein structures and the generation of near-native structure ensembles for the exploration of structure-function relationships. We developed a hybrid molecular dynamics (MD)-Monte Carlo (MC) approach to enrich the sampling toward the target structures. In this approach, the higher SE is achieved by perturbing the conventional MD simulations with an MC structure-acceptance judgment, which is based on the coincidence degree of small angle x-ray scattering (SAXS) intensity profiles between the simulation structures and the target structure. We found that the hybrid simulations could significantly improve SE by making the top-ranked models much closer to the target structures in both secondary and tertiary structure. Specifically, for the 20 mono-residue peptides, when the initial structures had a root-mean-square deviation (RMSD) from the target structure smaller than 7 Å, the hybrid MD-MC simulations brought the models, on average, 0.83 Å and 1.73 Å closer in RMSD to the target than the parallel MD simulations at 310 K and 370 K, respectively. Meanwhile, the average SE values also increased by 13.2% and 15.7%. The enrichment of sampling becomes more significant when the target states are gradually detectable in the MD-MC simulations in comparison with the parallel MD simulations, providing a >200% improvement in SE. We also tested the hybrid MD-MC approach on real protein systems; the results showed that the SE is improved for 3 out of 5 proteins. Overall, this work presents an efficient way of utilizing solution SAXS to improve protein structure prediction and refinement, as well as the generation of near-native structures for function annotation. PMID:27227775

  13. Understanding Mars: The Geologic Importance of Returned Samples

    NASA Astrophysics Data System (ADS)

    Christensen, P. R.

    2011-12-01

    what are the nature, ages, and origin of the diverse suite of aqueous environments, were any of them habitable, how, when, and why did environments vary through time, and finally, did any of them host life or its precursors? A critical next step toward answering these questions would be provided through the analysis of carefully selected samples from geologically diverse and well-characterized sites that are returned to Earth for detailed study. This sample return campaign is envisioned as a sequence of three missions that collect the samples, place them into Mars orbit, and return them to Earth. Our existing scientific knowledge of Mars makes it possible to select a site at which specific, detailed hypotheses can be tested, and from which the orbital mapping can be validated and extended globally. Existing and future analysis techniques developed in laboratories around the world will provide the means to perform a wide array of tests on these samples, develop hypotheses for the origin of their chemical, isotopic, and morphologic signatures, and, most importantly, perform follow-up measurements to test and validate the findings. These analyses will dramatically improve our understanding of the geologic processes and history of Mars, and through their ties to the global geologic context, will once again revolutionize our understanding of this complex planet.

  14. 19 CFR 151.67 - Sampling by importer.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.67 Sampling by... quantities from the packages of wool or hair designated for examination, provided the bales or bags...

  15. 19 CFR 151.67 - Sampling by importer.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.67 Sampling by... quantities from the packages of wool or hair designated for examination, provided the bales or bags...

  16. 19 CFR 151.67 - Sampling by importer.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.67 Sampling by... quantities from the packages of wool or hair designated for examination, provided the bales or bags...

  17. 19 CFR 151.67 - Sampling by importer.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.67 Sampling by... quantities from the packages of wool or hair designated for examination, provided the bales or bags...

  18. 19 CFR 151.67 - Sampling by importer.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.67 Sampling by... quantities from the packages of wool or hair designated for examination, provided the bales or bags...

  19. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    SciTech Connect

    Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H

    2008-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov Chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
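
    The differential-evolution proposal at the heart of DREAM can be sketched in its simplest form (plain DE-MC without DREAM's randomized subspace sampling, outlier handling or crossover adaptation; the correlated Gaussian target below is a made-up example):

        # A minimal sketch of a population of chains with differential-evolution
        # proposals: each chain jumps along the difference of two other randomly
        # chosen chains, so the proposal scale and orientation adapt to the target.
        import numpy as np

        rng = np.random.default_rng(8)
        cov_inv = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
        def log_post(x):                              # toy correlated Gaussian target
            return -0.5 * x @ cov_inv @ x

        n_chains, dim, n_iter = 10, 2, 5000
        gamma = 2.38 / np.sqrt(2 * dim)               # standard DE-MC jump scale
        chains = rng.normal(size=(n_chains, dim))
        lp = np.array([log_post(c) for c in chains])
        samples = []
        for _ in range(n_iter):
            for i in range(n_chains):
                a, b = rng.choice([j for j in range(n_chains) if j != i], 2, replace=False)
                prop = chains[i] + gamma * (chains[a] - chains[b]) + 1e-6 * rng.normal(size=dim)
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp[i]:
                    chains[i], lp[i] = prop, lp_prop
                samples.append(chains[i].copy())
        print(np.cov(np.array(samples)[1000:].T))     # should approach the target covariance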

  20. 40 CFR 80.1349 - Alternative sampling and testing requirements for importers who import gasoline into the United...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... requirements for importers who import gasoline into the United States by truck. 80.1349 Section 80.1349... FUELS AND FUEL ADDITIVES Gasoline Benzene Sampling, Testing and Retention Requirements § 80.1349 Alternative sampling and testing requirements for importers who import gasoline into the United States...

  1. Calculation of gamma-ray mass attenuation coefficients of some Egyptian soil samples using Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Medhat, M. E.; Demir, Nilgun; Akar Tarim, Urkiye; Gurler, Orhan

    2014-08-01

    Monte Carlo simulations, with FLUKA and Geant4, were performed to study mass attenuation for various types of soil at 59.5, 356.5, 661.6, 1173.2 and 1332.5 keV photon energies. Appreciable variations are noted for all parameters when changing the photon energy and the chemical composition of the sample. The simulation results were compared with experimental data and with the XCOM program. The simulations show that the calculated mass attenuation coefficient values were closer to the experimental values than those obtained theoretically from the XCOM database for the same soil samples. The results indicate that Geant4 and FLUKA can be applied to estimate mass attenuation for various biological materials at different energies. The Monte Carlo method may be employed to make additional calculations on the photon attenuation characteristics of different soil samples collected from other places.
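
    The quantity being simulated can be illustrated with a toy narrow-beam transmission calculation (the attenuation coefficient, density and thickness below are made-up values, not data from this record):

        # A minimal sketch relating a Monte Carlo transmission simulation to the mass
        # attenuation coefficient: photons travel exponentially distributed free paths,
        # and the uncollided fraction through a slab of thickness t recovers exp(-mu*t).
        import numpy as np

        rng = np.random.default_rng(9)
        mu = 0.2        # hypothetical linear attenuation coefficient, 1/cm
        rho = 1.6       # hypothetical soil density, g/cm^3
        t = 5.0         # slab thickness, cm
        n = 1_000_000

        free_paths = rng.exponential(1.0 / mu, size=n)     # distance to first interaction
        transmitted = np.mean(free_paths > t)              # uncollided fraction

        mu_over_rho_mc = -np.log(transmitted) / (rho * t)  # mass attenuation, cm^2/g
        print(mu_over_rho_mc, mu / rho)                    # MC estimate vs input value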

  2. Local three-dimensional earthquake tomography by trans-dimensional Monte Carlo sampling

    NASA Astrophysics Data System (ADS)

    Piana Agostinetti, Nicola; Giacomuzzi, Genny; Malinverno, Alberto

    2015-06-01

    Local earthquake tomography is a non-linear and non-unique inverse problem that uses event arrival times to solve for the spatial distribution of elastic properties. The typical approach is to apply iterative linearization and derive a preferred solution, but such solutions are biased by a number of subjective choices: the starting model that is iteratively adjusted, the degree of regularization used to obtain a smooth solution, and the assumed noise level in the arrival time data. These subjective choices also affect the estimation of the uncertainties in the inverted parameters. The method presented here is developed in a Bayesian framework where a priori information and measurements are combined to define a posterior probability density of the parameters of interest: elastic properties in a subsurface 3-D model, hypocentre coordinates and noise level in the data. We apply a trans-dimensional Markov chain Monte Carlo algorithm that asymptotically samples the posterior distribution of the investigated parameters. This approach allows us to overcome the issues raised above. First, starting a number of sampling chains from random samples of the prior probability distribution lessens the dependence of the solution from the starting point. Secondly, the number of elastic parameters in the 3-D subsurface model is one of the unknowns in the inversion, and the parsimony of Bayesian inference ensures that the degree of detail in the solution is controlled by the information in the data, given realistic assumptions for the error statistics. Finally, the noise level in the data, which controls the uncertainties of the solution, is also one of the inverted parameters, providing a first-order estimate of the data errors. We apply our method to both synthetic and field arrival time data. The synthetic data inversion successfully recovers velocity anomalies, hypocentre coordinates and the level of noise in the data. The Bayesian inversion of field measurements gives results

  3. Trans-dimensional Monte Carlo sampling applied to the magnetotelluric inverse problem

    NASA Astrophysics Data System (ADS)

    Mandolesi, Eric; Piana Agostinetti, Nicola

    2015-01-01

    The data required to build geological models of the subsurface are often unavailable from direct measurements or well logs. In order to image the subsurface geological structures several geophysical methods have been developed. The magnetotelluric (MT) method uses natural, time-varying electromagnetic (EM) fields as its source to measure the EM impedance of the subsurface. The interpretation of these data is routinely undertaken by solving inverse problems to produce 1D, 2D or 3D electrical conductivity models of the subsurface. In classical MT inverse problems the investigated models are parametrized using a fixed number of unknowns (i.e. fixed number of layers in a 1D model, or a fixed number of cells in a 2D model), and the non-uniqueness of the solution is handled by a regularization term added to the objective function. This study presents a different approach to the 1D MT inverse problem, by using a trans-dimensional Monte Carlo sampling algorithm, where trans-dimensionality implies that the number of unknown parameters is a parameter itself. This construction has been shown to have a built-in Occam razor, so that the regularization term is not required to produce a simple model. The influences of subjective choices in the interpretation process can therefore be sensibly reduced. The inverse problem is solved within a Bayesian framework, where posterior probability distribution of the investigated parameters are sought, rather than a single best-fit model, and uncertainties on the model parameters, and their correlation, can be easily measured.

  4. A whole-path importance-sampling scheme for Feynman path integral calculations of absolute partition functions and free energies.

    PubMed

    Mielke, Steven L; Truhlar, Donald G

    2016-01-21

    Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function. PMID:26801023

  5. A whole-path importance-sampling scheme for Feynman path integral calculations of absolute partition functions and free energies

    NASA Astrophysics Data System (ADS)

    Mielke, Steven L.; Truhlar, Donald G.

    2016-01-01

    Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function.
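
    The rejection step that makes this scheme cheap can be illustrated with a single path variable instead of a whole discretized path (the two widths below are hypothetical): candidates drawn from the broad free-particle distribution are accepted against a narrower harmonically guided importance function, so most candidates are discarded before any expensive integration.

        # A minimal sketch of rejection sampling from a broad "free-particle" Gaussian
        # toward a narrower "harmonically guided" Gaussian target.
        import numpy as np

        rng = np.random.default_rng(10)
        sigma_free, sigma_harm = 2.0, 0.4          # hypothetical widths (low temperature)
        n = 200_000

        x = rng.normal(0.0, sigma_free, size=n)    # free-particle candidates
        # Acceptance ratio target(x) / (M * proposal(x)), with M = max of the density ratio
        ratio = (sigma_free / sigma_harm) * np.exp(-0.5 * x**2 * (1/sigma_harm**2 - 1/sigma_free**2))
        accept = rng.random(n) < ratio / ratio.max()

        print("acceptance fraction:", accept.mean())
        print("sample std (target):", x[accept].std(), "vs", sigma_harm)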

  6. Stretching semiflexible polymer chains: evidence for the importance of excluded volume effects from Monte Carlo simulation.

    PubMed

    Hsu, Hsiao-Ping; Binder, Kurt

    2012-01-14

    Semiflexible macromolecules in dilute solution under very good solvent conditions are modeled by self-avoiding walks on the simple cubic lattice (d = 3 dimensions) and square lattice (d = 2 dimensions), varying chain stiffness by an energy penalty ε(b) for chain bending. In the absence of excluded volume interactions, the persistence length l(p) of the polymers would then simply be l(p) = l(b) (2d - 2)^(-1) q(b)^(-1) with q(b) = exp(-ε(b)/k(B)T), the bond length l(b) being the lattice spacing, and k(B)T is the thermal energy. Using Monte Carlo simulations applying the pruned-enriched Rosenbluth method (PERM), both q(b) and the chain length N are varied over a wide range (0.005 ≤ q(b) ≤ 1, N ≤ 50,000), and also a stretching force f is applied to one chain end (fixing the other end at the origin). In the absence of this force, in d = 2 a single crossover from rod-like behavior (for contour lengths less than l(p)) to swollen coils occurs, invalidating the Kratky-Porod model, while in d = 3 a double crossover occurs, from rods to Gaussian coils (as implied by the Kratky-Porod model) and then to coils that are swollen due to the excluded volume interaction. If the stretching force is applied, excluded volume interactions matter for the force versus extension relation irrespective of chain stiffness in d = 2, while theories based on the Kratky-Porod model are found to work in d = 3 for stiff chains in an intermediate regime of chain extensions. While for q(b) ≪ 1 in this model a persistence length can be estimated from the initial decay of bond-orientational correlations, it is argued that this is not possible for more complex wormlike chains (e.g., bottle-brush polymers). Consequences for the proper interpretation of experiments are briefly discussed. PMID:22260610

  7. Optimized Nested Markov Chain Monte Carlo Sampling: Application to the Liquid Nitrogen Hugoniot Using Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Shaw, M. Sam; Coe, Joshua D.; Sewell, Thomas D.

    2009-06-01

    An optimized version of the Nested Markov Chain Monte Carlo sampling method is applied to the calculation of the Hugoniot for liquid nitrogen. The ``full'' system of interest is calculated using density functional theory (DFT) with a 6-31G* basis set for the configurational energies. The ``reference'' system is given by a model potential fit to the anisotropic pair interaction of two nitrogen molecules from DFT calculations. The EOS is sampled in the isobaric-isothermal (NPT) ensemble with a trial move constructed from many Monte Carlo steps in the reference system. The trial move is then accepted with a probability chosen to give the full system distribution. The P's and T's of the reference and full systems are chosen separately to optimize the computational time required to produce the full system EOS. The method is numerically very efficient and predicts a Hugoniot in excellent agreement with experimental data.
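    The nested acceptance rule described here can be sketched on a toy one-dimensional system. In the sketch below, e_ref is a cheap "reference" potential and e_full an "expensive" full potential (stand-ins for the pair fit and the DFT energies); a short Metropolis sub-chain is run on the reference potential, and its end point is accepted into the full-system chain with probability min(1, exp(-β ΔΔE)). Both levels are kept at the same temperature, so the paper's separate optimization of the reference and full P's and T's is not reproduced; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0
n_inner = 25          # reference-system MC steps per composite trial move
n_outer = 4000

def e_full(x):        # "expensive" full potential (toy stand-in for DFT)
    return 0.5 * x**2 + 0.3 * np.sin(3.0 * x)

def e_ref(x):         # cheap reference potential (toy stand-in for the pair fit)
    return 0.5 * x**2

x = 0.0
samples = []
for _ in range(n_outer):
    # inner sub-chain: ordinary Metropolis on the reference potential
    y = x
    for _ in range(n_inner):
        y_trial = y + rng.normal(0.0, 0.5)
        if rng.uniform() < np.exp(-beta * (e_ref(y_trial) - e_ref(y))):
            y = y_trial
    # nested acceptance: correct the reference-chain proposal to the full distribution
    d_new = e_full(y) - e_ref(y)
    d_old = e_full(x) - e_ref(x)
    if rng.uniform() < np.exp(-beta * (d_new - d_old)):
        x = y
    samples.append(x)

print("mean of full-system chain:", np.mean(samples))
```

    The acceptance probability only involves the difference between full and reference energies at the two end points, which is why the expensive potential needs to be evaluated once per composite move rather than once per inner step.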

  8. Optimized Nested Markov Chain Monte Carlo Sampling: Application to the Liquid Nitrogen Hugoniot Using Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Shaw, M. Sam; Coe, Joshua D.; Sewell, Thomas D.

    2009-12-01

    An optimized version of the Nested Markov Chain Monte Carlo sampling method is applied to the calculation of the Hugoniot for liquid nitrogen. The "full" system of interest is calculated using density functional theory (DFT) with a 6-31G* basis set for the configurational energies. The "reference" system is given by a model potential fit to the anisotropic pair interaction of two nitrogen molecules from DFT calculations. The EOS is sampled in the isobaric-isothermal (NPT) ensemble with a trial move constructed from many Monte Carlo steps in the reference system. The trial move is then accepted with a probability chosen to give the full system distribution. The P's and T's of the reference and full systems are chosen separately to optimize the computational time required to produce the full system EOS. The method is numerically very efficient and predicts a Hugoniot in excellent agreement with experimental data.

  9. Optimized nested Markov chain Monte Carlo sampling: application to the liquid nitrogen Hugoniot using density functional theory

    SciTech Connect

    Shaw, Milton Sam; Coe, Joshua D; Sewell, Thomas D

    2009-01-01

    An optimized version of the Nested Markov Chain Monte Carlo sampling method is applied to the calculation of the Hugoniot for liquid nitrogen. The 'full' system of interest is calculated using density functional theory (DFT) with a 6-31 G* basis set for the configurational energies. The 'reference' system is given by a model potential fit to the anisotropic pair interaction of two nitrogen molecules from DFT calculations. The EOS is sampled in the isobaric-isothermal (NPT) ensemble with a trial move constructed from many Monte Carlo steps in the reference system. The trial move is then accepted with a probability chosen to give the full system distribution. The P's and T's of the reference and full systems are chosen separately to optimize the computational time required to produce the full system EOS. The method is numerically very efficient and predicts a Hugoniot in excellent agreement with experimental data.

  10. 40 CFR 80.1630 - Sampling and testing requirements for refiners, gasoline importers and producers and importers of...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... refiners, gasoline importers and producers and importers of certified ethanol denaturant. 80.1630 Section... refiners, gasoline importers and producers and importers of certified ethanol denaturant. (a) Sample and test each batch of gasoline and certified ethanol denaturant. (1) Refiners and importers shall...

  11. Monte Carlo and Molecular Dynamics in the Multicanonical Ensemble: Connections between Wang-Landau Sampling and Metadynamics

    NASA Astrophysics Data System (ADS)

    Vogel, Thomas; Perez, Danny; Junghans, Christoph

    2014-03-01

    We show direct formal relationships between the Wang-Landau iteration [PRL 86, 2050 (2001)], metadynamics [PNAS 99, 12562 (2002)] and statistical temperature molecular dynamics [PRL 97, 050601 (2006)], the major Monte Carlo and molecular dynamics workhorses for sampling from a generalized, multicanonical ensemble. We aim to help consolidate the developments in the different areas by indicating how methodological advancements can be transferred in a straightforward way, avoiding the parallel, largely independent development tracks observed in the past.
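    For readers unfamiliar with the Wang-Landau iteration referenced here, the following is a minimal sketch on a toy system whose density of states is known exactly (N independent two-state spins, with the "energy" taken as the number of up spins). The system size, flatness threshold and modification-factor schedule are illustrative choices, not those used in the paper.

```python
import numpy as np
from math import comb, log

rng = np.random.default_rng(2)

N = 20                           # 20 two-state spins; "energy" E = number of up spins
spins = rng.integers(0, 2, size=N)
E = int(spins.sum())
log_g = np.zeros(N + 1)          # running estimate of ln g(E)
hist = np.zeros(N + 1)
log_f = 1.0                      # initial modification factor ln f

while log_f > 1e-3:
    for _ in range(10_000):
        i = rng.integers(N)
        E_new = E + (1 - 2 * spins[i])          # energy after flipping spin i
        # Wang-Landau rule: visit rarely recorded energies preferentially
        if rng.uniform() < np.exp(log_g[E] - log_g[E_new]):
            spins[i] ^= 1
            E = E_new
        log_g[E] += log_f
        hist[E] += 1
    if hist.min() > 0.8 * hist.mean():          # crude flatness criterion
        hist[:] = 0
        log_f *= 0.5                            # refine the modification factor

exact = np.array([log(comb(N, k)) for k in range(N + 1)])
print("max |error| in relative ln g(E):",
      np.max(np.abs((log_g - log_g.mean()) - (exact - exact.mean()))))
```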

  12. Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1998-01-01

    Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time-to-failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support a Weibull failure distribution with decreasing failure rate and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long-duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.
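    A minimal sketch of how Weibull failure times enter such a Monte Carlo reliability model: times to failure are drawn by inverse-transform sampling and the mission reliability is estimated by counting survivors. The shape, scale and mission-duration values are hypothetical, chosen only to illustrate the decreasing-failure-rate case (shape parameter below one).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
shape_k = 0.7          # k < 1: decreasing failure rate (illustrative)
scale_lam = 10_000.0   # characteristic life in hours (hypothetical)

# Inverse-transform sampling of Weibull times to failure: t = lam * (-ln U)^(1/k)
u = rng.uniform(size=n)
t_fail = scale_lam * (-np.log(u)) ** (1.0 / shape_k)

# Reliability for a 5,000-hour mission, by Monte Carlo and from the exact formula
mission = 5_000.0
print("MC reliability:   ", np.mean(t_fail > mission))
print("exact reliability:", np.exp(-(mission / scale_lam) ** shape_k))
```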

  13. Fitting a distribution to censored contamination data using Markov Chain Monte Carlo methods and samples selected with unequal probabilities.

    PubMed

    Williams, Michael S; Ebel, Eric D

    2014-11-18

    The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the
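    One simple variant of the weighted bootstrap referred to above resamples the observations with replacement, with selection probabilities proportional to the design weights (the inverses of the selection probabilities), and recomputes the statistic of interest on each replicate. The concentrations and selection probabilities below are invented for illustration, and the paper's censored-data distribution fitting is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical concentrations and their selection probabilities (risk-based sampling:
# high concentrations were more likely to be selected).
conc = np.array([0.2, 0.5, 0.8, 1.1, 2.4, 3.0, 5.6, 7.9, 12.0, 20.0])
p_select = np.array([0.02, 0.02, 0.03, 0.03, 0.05, 0.05, 0.08, 0.10, 0.15, 0.20])
w = 1.0 / p_select                     # design weights
probs = w / w.sum()

B = 5_000
boot_means = np.empty(B)
for b in range(B):
    idx = rng.choice(len(conc), size=len(conc), replace=True, p=probs)
    boot_means[b] = conc[idx].mean()   # statistic of interest on each replicate

print("direct design-weighted mean:   ", np.sum(w * conc) / w.sum())
print("weighted-bootstrap mean:       ", boot_means.mean())
print("weighted-bootstrap 95% interval:", np.percentile(boot_means, [2.5, 97.5]))
```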

  14. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using the sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently
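    The second stage of the scheme (quasi-Monte Carlo sampling of a cheap surrogate posterior) can be sketched in a few lines. The sketch assumes SciPy's quasi-Monte Carlo module (scipy.stats.qmc, available in SciPy 1.7 and later); the Gaussian "surrogate posterior" and the linear "prediction" function are hypothetical stand-ins for the sparse-grid interpolant and the forward model.

```python
import numpy as np
from scipy.stats import qmc

# Assumed cheap surrogate posterior on the unit parameter box (stand-in for the
# sparse-grid interpolant of the true posterior).
def surrogate_log_post(theta):
    return -0.5 * np.sum(((theta - np.array([0.3, 0.7])) / 0.1) ** 2, axis=1)

def prediction(theta):                  # stand-in for the forward-model prediction
    return theta[:, 0] + 2.0 * theta[:, 1]

sampler = qmc.Sobol(d=2, scramble=True, seed=5)
theta = sampler.random(2 ** 14)         # quasi-Monte Carlo design on the unit box

w = np.exp(surrogate_log_post(theta))
w /= w.sum()                            # posterior weights of the QMC points
y = prediction(theta)

# posterior-weighted histogram approximates the prediction PDF
pdf, edges = np.histogram(y, bins=60, weights=w, density=True)
print("posterior mean prediction:", np.sum(w * y))
```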

  15. Fast patient-specific Monte Carlo brachytherapy dose calculations via the correlated sampling variance reduction technique

    SciTech Connect

    Sampson, Andrew; Le Yi; Williamson, Jeffrey F.

    2012-02-15

    Purpose: To demonstrate potential of correlated sampling Monte Carlo (CMC) simulation to improve the calculation efficiency for permanent seed brachytherapy (PSB) implants without loss of accuracy. Methods: CMC was implemented within an in-house MC code family (PTRAN) and used to compute 3D dose distributions for two patient cases: a clinical PSB postimplant prostate CT imaging study and a simulated post lumpectomy breast PSB implant planned on a screening dedicated breast cone-beam CT patient exam. CMC tallies the dose difference, ΔD, between highly correlated histories in homogeneous and heterogeneous geometries. The heterogeneous geometry histories were derived from photon collisions sampled in a geometrically identical but purely homogeneous medium geometry, by altering their particle weights to correct for bias. The prostate case consisted of 78 Model-6711 ¹²⁵I seeds. The breast case consisted of 87 Model-200 ¹⁰³Pd seeds embedded around a simulated lumpectomy cavity. Systematic and random errors in CMC were unfolded using low-uncertainty uncorrelated MC (UMC) as the benchmark. CMC efficiency gains, relative to UMC, were computed for all voxels, and the mean was classified in regions that received minimum doses greater than 20%, 50%, and 90% of D90, as well as for various anatomical regions. Results: Systematic errors in CMC relative to UMC were less than 0.6% for 99% of the voxels and 0.04% for 100% of the voxels for the prostate and breast cases, respectively. For a 1 × 1 × 1 mm³ dose grid, efficiency gains were realized in all structures with 38.1- and 59.8-fold average gains within the prostate and breast clinical target volumes (CTVs), respectively. Greater than 99% of the voxels within the prostate and breast CTVs experienced an efficiency gain. Additionally, it was shown that efficiency losses were confined to low dose regions while the largest gains were located where little difference exists between the homogeneous and
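    The variance-reduction idea behind correlated sampling is the use of common random histories for both geometries when estimating a difference. The toy sketch below is generic (the exponential "dose" responses and the perturbation are invented and have nothing to do with PTRAN or brachytherapy); it only illustrates how correlating the two estimates shrinks the statistical error of the difference relative to independent runs.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000

def dose_homog(xi):        # toy response of a history xi in the "homogeneous" geometry
    return np.exp(-xi)

def dose_hetero(xi):       # toy response in the "heterogeneous" geometry (small perturbation)
    return np.exp(-1.05 * xi)

# Correlated sampling: the same histories drive both geometries
xi = rng.exponential(1.0, n)
delta_corr = dose_hetero(xi) - dose_homog(xi)

# Uncorrelated estimate: independent histories for each geometry
xi1 = rng.exponential(1.0, n)
xi2 = rng.exponential(1.0, n)
delta_uncorr = dose_hetero(xi1) - dose_homog(xi2)

print("mean difference (corr / uncorr):", delta_corr.mean(), delta_uncorr.mean())
print("std error of the estimate (corr / uncorr):",
      delta_corr.std() / np.sqrt(n), delta_uncorr.std() / np.sqrt(n))
```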

  16. Evaluation of sampling plans for in-service inspection of steam generator tubes. Volume 2, Comprehensive analytical and Monte Carlo simulation results for several sampling plans

    SciTech Connect

    Kurtz, R.J.; Heasler, P.G.; Baird, D.B.

    1994-02-01

    This report summarizes the results of three previous studies to evaluate and compare the effectiveness of sampling plans for steam generator tube inspections. An analytical evaluation and Monte Carlo simulation techniques were the methods used to evaluate sampling plan performance. To test the performance of candidate sampling plans under a variety of conditions, ranges of inspection system reliability were considered along with different distributions of tube degradation. Results from the eddy current reliability studies performed with the retired-from-service Surry 2A steam generator were utilized to guide the selection of appropriate probability of detection and flaw sizing models for use in the analysis. Different distributions of tube degradation were selected to span the range of conditions that might exist in operating steam generators. The principal means of evaluating sampling performance was to determine the effectiveness of the sampling plan for detecting and plugging defective tubes. A summary of key results from the eddy current reliability studies is presented. The analytical and Monte Carlo simulation analyses are discussed along with a synopsis of key results and conclusions.

  17. Determination of gamma-ray self-attenuation correction in environmental samples by combining transmission measurements and Monte Carlo simulations.

    PubMed

    Šoštarić, Marko; Babić, Dinko; Petrinec, Branko; Zgorelec, Željka

    2016-07-01

    We develop a simple and widely applicable method for determining the self-attenuation correction in gamma-ray spectrometry on environmental samples. The method relies on measurements of the transmission of photons over the matrices of a calibration standard and an analysed sample. Results of this experiment are used in subsequent Monte Carlo simulations in which we first determine the linear attenuation coefficients (μ) of the two matrices and then the self-attenuation correction for the analysed sample. The method is validated by reproducing, over a wide energy range, the literature data for the μ of water. We demonstrate the use of the method on a sample of sand, for which we find that the correction is considerable below ~400 keV, where many naturally occurring radionuclides emit gamma rays. At the lowest inspected energy (~60 keV), one measures an activity that is smaller than its true value by a factor of ~1.8. PMID:27157125

  18. Monte Carlo simulation of the self-absorption corrections for natural samples in gamma-ray spectrometry.

    PubMed

    Vargas, M Jurado; Timón, A Fernández; Díaz, N Cornejo; Sánchez, D Pérez

    2002-12-01

    Gamma-ray self-attenuation corrections in the energy range 60-2000 keV were evaluated by means of Monte Carlo calculations for environmental samples in a cylindrical measuring geometry. The dependence of the full-energy peak efficiency on the sample density was obtained for some particular photon energies and, as a result, the corresponding self-attenuation correction factors were derived. The calculations were performed by assuming that natural materials have mass attenuation coefficients very similar to those of water in the energy range studied. Three different HPGe coaxial detectors were considered: an n-type detector with 44.3% relative efficiency and two p-type detectors of relative efficiencies 20.0% and 30.5%. Our calculations were in very good agreement with the self-attenuation correction factors obtained experimentally by other workers for environmental samples of different densities. This work demonstrates the reliability of Monte Carlo calculations for correcting photon self-attenuation in natural samples. The results also show that the corresponding correction factors are essentially unaffected by the specific coaxial detector used. PMID:12406634

  19. Radiative transfer and spectroscopic databases: A line-sampling Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Galtier, Mathieu; Blanco, Stéphane; Dauchet, Jérémi; El Hafi, Mouna; Eymet, Vincent; Fournier, Richard; Roger, Maxime; Spiesser, Christophe; Terrée, Guillaume

    2016-03-01

    Dealing with molecular-state transitions for radiative transfer purposes involves two successive steps that both reach the complexity level at which physicists start thinking about statistical approaches: (1) constructing line-shaped absorption spectra as the result of very numerous state-transitions, (2) integrating over optical-path domains. For the first time, we show here how these steps can be addressed simultaneously using the null-collision concept. This opens the door to the design of Monte Carlo codes directly estimating radiative transfer observables from spectroscopic databases. The intermediate step of producing accurate high-resolution absorption spectra is no longer required. A Monte Carlo algorithm is proposed and applied to six one-dimensional test cases. It allows the computation of spectrally integrated intensities (over 25 cm⁻¹ bands or the full IR range) in a few seconds, regardless of the retained database and line model. But free parameters need to be selected and they impact the convergence. A first possible selection is provided in full detail. We observe that this selection is highly satisfactory for quite distinct atmospheric and combustion configurations, but a more systematic exploration is still in progress.

  20. Gamma spectrometry efficiency calibration using Monte Carlo methods to measure radioactivity of 137Cs in food samples.

    PubMed

    Alrefae, T

    2014-12-01

    A simple method of efficiency calibration for gamma spectrometry was performed. This method, which focused on measuring the radioactivity of 137Cs in food samples, was based on Monte Carlo simulations available in the free-of-charge toolkit GEANT4. Experimentally, the efficiency values of a high-purity germanium detector were calculated for three reference materials representing three different food items. These efficiency values were compared with their counterparts produced by a computer code that simulated experimental conditions. Interestingly, the output of the simulation code was in acceptable agreement with the experimental findings, thus validating the proposed method. PMID:24214912

  1. Fast Protein Loop Sampling and Structure Prediction Using Distance-Guided Sequential Chain-Growth Monte Carlo Method

    PubMed Central

    Tang, Ke; Zhang, Jinfeng; Liang, Jie

    2014-01-01

    Loops in proteins are flexible regions connecting regular secondary structures. They are often involved in protein functions through interacting with other molecules. The irregularity and flexibility of loops make their structures difficult to determine experimentally and challenging to model computationally. Conformation sampling and energy evaluation are the two key components in loop modeling. We have developed a new method for loop conformation sampling and prediction based on a chain growth sequential Monte Carlo sampling strategy, called Distance-guided Sequential chain-Growth Monte Carlo (DiSGro). With an energy function designed specifically for loops, our method can efficiently generate high quality loop conformations with low energy that are enriched with near-native loop structures. The average minimum global backbone RMSD for 1,000 conformations of 12-residue loops is Å, with a lowest energy RMSD of Å, and an average ensemble RMSD of Å. A novel geometric criterion is applied to speed up calculations. The computational cost of generating 1,000 conformations for each of the x loops in a benchmark dataset is only about cpu minutes for 12-residue loops, compared to ca cpu minutes using the FALCm method. Test results on benchmark datasets show that DiSGro performs comparably or better than previous successful methods, while requiring far less computing time. DiSGro is especially effective in modeling longer loops (– residues). PMID:24763317

  2. Symmetry relationships for multiple scattering of polarized light in turbid spherical samples: theory and a Monte Carlo simulation.

    PubMed

    Otsuki, Soichi

    2016-02-01

    This paper presents a theory describing totally incoherent multiple scattering of turbid spherical samples. It is proved that if reciprocity and mirror symmetry hold for single scattering by a particle, they also hold for multiple scattering in spherical samples. Monte Carlo simulations generate a reduced effective scattering Mueller matrix, which virtually satisfies reciprocity and mirror symmetry. The scattering matrix was factorized by using the symmetric decomposition in a predefined form, as well as the Lu-Chipman polar decomposition, approximately into a product of a pure depolarizer and vertically oriented linear retarding diattenuators. The parameters of these components were calculated as a function of the polar angle. While the turbid spherical sample is a pure depolarizer at low polar angles, it obtains more functions of the retarding diattenuator with increasing polar angle. PMID:26831777

  3. Application of Enhanced Sampling Monte Carlo Methods for High-Resolution Protein-Protein Docking in Rosetta

    PubMed Central

    Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin

    2015-01-01

    The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
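    The temperature replica-exchange ingredient of these enhanced-sampling schemes can be illustrated on a toy one-dimensional rugged landscape. The sketch below is a plain parallel-tempering Metropolis sampler; the energy function, temperature ladder and step size are invented for illustration, and the well-tempered ensemble and Hamiltonian-exchange components evaluated in the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

def energy(x):                          # rugged toy "docking" energy landscape
    return (x ** 2 - 1.0) ** 2 + 0.3 * np.sin(8.0 * x)

betas = np.array([4.0, 2.0, 1.0, 0.5])  # inverse-temperature ladder (illustrative)
x = rng.uniform(-2.0, 2.0, size=betas.size)
E = energy(x)

for sweep in range(20_000):
    # local Metropolis move in every replica
    x_new = x + rng.normal(0.0, 0.3, size=x.size)
    E_new = energy(x_new)
    accept = np.log(rng.uniform(size=x.size)) < -betas * (E_new - E)
    x[accept], E[accept] = x_new[accept], E_new[accept]

    # attempt a swap between a random pair of neighbouring temperatures
    i = rng.integers(betas.size - 1)
    if np.log(rng.uniform()) < (betas[i] - betas[i + 1]) * (E[i] - E[i + 1]):
        x[i], x[i + 1] = x[i + 1], x[i]
        E[i], E[i + 1] = E[i + 1], E[i]

print("coldest replica settled near x =", x[0])
```

    Hot replicas cross barriers easily and, through the swap moves, feed decorrelated configurations to the cold replica, which is the mechanism the enhanced-sampling variants above try to make as efficient as possible.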

  4. Application of Enhanced Sampling Monte Carlo Methods for High-Resolution Protein-Protein Docking in Rosetta.

    PubMed

    Zhang, Zhe; Schindler, Christina E M; Lange, Oliver F; Zacharias, Martin

    2015-01-01

    The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419

  5. Monte Carlo optimization of sample dimensions of an 241Am Be source-based PGNAA setup for water rejects analysis

    NASA Astrophysics Data System (ADS)

    Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.; Azbouche, A.

    2007-07-01

    The present paper describes the optimization of sample dimensions of a 241Am-Be neutron source-based prompt gamma neutron activation analysis (PGNAA) setup devoted to in situ environmental water rejects analysis. The optimal dimensions have been achieved following extensive Monte Carlo neutron flux calculations using the MCNP5 computer code. A validation process has been performed for the proposed preliminary setup with measurements of thermal neutron flux by the activation technique with indium foils, bare and covered with a cadmium sheet. Sensitive calculations were subsequently performed to simulate real conditions of in situ analysis by determining thermal neutron flux perturbations in samples according to changes in chlorine and organic matter concentrations. The desired optimal sample dimensions were finally achieved once established constraints regarding neutron damage to the semiconductor gamma detector, pulse pile-up, dead time and radiation hazards were fully met.

  6. Improved inference in Bayesian segmentation using Monte Carlo sampling: application to hippocampal subfield volumetry.

    PubMed

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen

    2013-10-01

    Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer's disease classification task. As an additional benefit, the technique also allows one to compute informative "error bars" on the volume estimates of individual structures. PMID:23773521

  7. Improved Inference in Bayesian Segmentation Using Monte Carlo Sampling: Application to Hippocampal Subfield Volumetry

    PubMed Central

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Leemput, Koen Van

    2013-01-01

    Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer’s disease classification task. As an additional benefit, the technique also allows one to compute informative “error bars” on the volume estimates of individual structures. PMID:23773521

  8. Sampling Plant Diversity and Rarity at Landscape Scales: Importance of Sampling Time in Species Detectability

    PubMed Central

    Zhang, Jian; Nielsen, Scott E.; Grainger, Tess N.; Kohler, Monica; Chipchar, Tim; Farr, Daniel R.

    2014-01-01

    Documenting and estimating species richness at regional or landscape scales has been a major emphasis for conservation efforts, as well as for the development and testing of evolutionary and ecological theory. Rarely, however, are sampling efforts assessed on how they affect detection and estimates of species richness and rarity. In this study, vascular plant richness was sampled in 356 quarter hectare time-unlimited survey plots in the boreal region of northeast Alberta. These surveys consisted of 15,856 observations of 499 vascular plant species (97 considered to be regionally rare) collected by 12 observers over a 2 year period. Average survey time for each quarter-hectare plot was 82 minutes, ranging from 20 to 194 minutes, with a positive relationship between total survey time and total plant richness. When survey time was limited to a 20-minute search, as in other Alberta biodiversity methods, 61 species were missed. Extending the survey time to 60 minutes, reduced the number of missed species to 20, while a 90-minute cut-off time resulted in the loss of 8 species. When surveys were separated by habitat type, 60 minutes of search effort sampled nearly 90% of total observed richness for all habitats. Relative to rare species, time-unlimited surveys had ∼65% higher rare plant detections post-20 minutes than during the first 20 minutes of the survey. Although exhaustive sampling was attempted, observer bias was noted among observers when a subsample of plots was re-surveyed by different observers. Our findings suggest that sampling time, combined with sample size and observer effects, should be considered in landscape-scale plant biodiversity surveys. PMID:24740179

  9. Sample Size Determination for Regression Models Using Monte Carlo Methods in R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2014-01-01

    A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…

  10. [Protozoans in superficial waters and faecal samples of individuals of rural populations of the Montes municipality, Sucre state, Venezuela].

    PubMed

    Mora, Leonor; Martínez, Indira; Figuera, Lourdes; Segura, Merlyn; Del Valle, Guilarte

    2010-12-01

    In Sucre state, the Manzanares river is threatened by domestic, agricultural and industrial activities, becoming an environmental risk factor for its inhabitants. In this sense, the presence of protozoans in superficial waters of tributaries of the Manzanares river (Orinoco river, Quebrada Seca, San Juan river), Montes municipality, Sucre state, as well as the analysis of faecal samples from inhabitants of towns bordering these tributaries, was evaluated. We collected faecal and water samples from May 2006 through April 2007. The superficial water samples were processed after centrifugation by direct examination and flocculation, using Lugol, modified Kinyoun and trichromic colorations. Faecal samples were analyzed by direct examination with physiological saline solution and the modified Ritchie concentration method, and using the other coloration techniques mentioned above. The most frequently observed protozoans in superficial waters in the three tributaries were amoebas, Blastocystis sp., Endolimax sp., Chilomastix sp. and Giardia sp., whereas in faecal samples Blastocystis hominis, Endolimax nana and Entamoeba coli had the greatest frequencies in the three communities. The inhabitants of Orinoco La Peña turned out to be most susceptible to these parasitic infections (77.60%), followed by San Juan River (46.63%) and Quebrada Seca (39.49%). The presence of pathogenic and nonpathogenic protozoans in superficial waters demonstrates the faecal contamination of the tributaries, representing a constant focus of infection for their inhabitants, inferred by the observation of the same species in both types of samples. PMID:21365874

  11. Minimum Sample Size for Cronbach's Coefficient Alpha: A Monte-Carlo Study

    ERIC Educational Resources Information Center

    Yurdugul, Halil

    2008-01-01

    The coefficient alpha is the most widely used measure of internal consistency for composite scores in the educational and psychological studies. However, due to the difficulties of data gathering in psychometric studies, the minimum sample size for the sample coefficient alpha has been frequently debated. There are various suggested minimum sample…

  12. Fast and accurate Monte Carlo sampling of first-passage times from Wiener diffusion models

    PubMed Central

    Drugowitsch, Jan

    2016-01-01

    We present a new, fast approach for drawing boundary crossing samples from Wiener diffusion models. Diffusion models are widely applied to model choices and reaction times in two-choice decisions. Samples from these models can be used to simulate the choices and reaction times they predict. These samples, in turn, can be utilized to adjust the models’ parameters to match observed behavior from humans and other animals. Usually, such samples are drawn by simulating a stochastic differential equation in discrete time steps, which is slow and leads to biases in the reaction time estimates. Our method, instead, makes use of known expressions for first-passage time densities, which results in unbiased, exact samples and a hundred- to thousand-fold speed increase in typical situations. In its most basic form it is restricted to diffusion models with symmetric boundaries and non-leaky accumulation, but our approach can be extended to also handle asymmetric boundaries or to approximate leaky accumulation. PMID:26864391
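    For context, the slow baseline the abstract contrasts against is the discrete-time Euler simulation of the diffusion. The sketch below implements that naive baseline (not the authors' exact sampler) for a zero-starting-point Wiener process between symmetric boundaries; the drift, boundary and step size are illustrative. Its cost grows with 1/dt and its first-passage times carry an O(dt) bias, which is exactly what exact density-based samplers avoid.

```python
import numpy as np

rng = np.random.default_rng(8)

def naive_first_passage(drift, bound, dt=1e-3, n=1000):
    """Discrete-time Euler simulation of a Wiener process between symmetric
    boundaries +/- bound, starting at 0. Returns decision times and choices
    (1 = upper boundary, 0 = lower). Deliberately slow: this is the baseline."""
    times = np.empty(n)
    choices = np.empty(n, dtype=int)
    sqrt_dt = np.sqrt(dt)
    for k in range(n):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + sqrt_dt * rng.normal()
            t += dt
        times[k], choices[k] = t, int(x >= bound)
    return times, choices

rt, choice = naive_first_passage(drift=0.5, bound=1.0)
print("mean RT:", rt.mean(), " upper-boundary fraction:", choice.mean())
```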

  13. Exhaustive Metropolis Monte Carlo sampling and analysis of polyalanine conformations adopted under the influence of hydrogen bonds.

    PubMed

    Podtelezhnikov, Alexei A; Wild, David L

    2005-10-01

    We propose a novel Metropolis Monte Carlo procedure for protein modeling and analyze the influence of hydrogen bonding on the distribution of polyalanine conformations. We use an atomistic model of the polyalanine chain with rigid and planar polypeptide bonds, and elastic alpha carbon valence geometry. We adopt a simplified energy function in which only hard-sphere repulsion and hydrogen bonding interactions between the atoms are considered. Our Metropolis Monte Carlo procedure utilizes local crankshaft moves and is combined with parallel tempering to exhaustively sample the conformations of 16-mer polyalanine. We confirm that Flory's isolated-pair hypothesis (the steric independence between the dihedral angles of individual amino acids) does not hold true in long polypeptide chains. In addition to 3(10)- and alpha-helices, we identify a kink stabilized by 2 hydrogen bonds with a shared acceptor as a common structural motif. Varying the strength of hydrogen bonds, we induce the helix-coil transition in the model polypeptide chain. We compare the propensities for various hydrogen bonding patterns and determine the degree of cooperativity of hydrogen bond formation in terms of the Hill coefficient. The observed helix-coil transition is also quantified according to Zimm-Bragg theory. PMID:16049911

  14. Comparison between Monte Carlo simulation and measurement with a 3D polymer gel dosimeter for dose distributions in biological samples

    NASA Astrophysics Data System (ADS)

    Furuta, T.; Maeyama, T.; Ishikawa, K. L.; Fukunishi, N.; Fukasaku, K.; Takagi, S.; Noda, S.; Himeno, R.; Hayashi, S.

    2015-08-01

    In this research, we used a 135 MeV/nucleon carbon-ion beam to irradiate a biological sample composed of fresh chicken meat and bones, which was placed in front of a PAGAT gel dosimeter, and compared the measured and simulated transverse-relaxation-rate (R2) distributions in the gel dosimeter. We experimentally measured the three-dimensional R2 distribution, which records the dose induced by particles penetrating the sample, by using magnetic resonance imaging. The obtained R2 distribution reflected the heterogeneity of the biological sample. We also conducted Monte Carlo simulations using the PHITS code by reconstructing the elemental composition of the biological sample from its computed tomography images while taking into account the dependence of the gel response on the linear energy transfer. The simulation reproduced the experimental distal edge structure of the R2 distribution with an accuracy under about 2 mm, which is approximately the same as the voxel size currently used in treatment planning.

  15. Comparison between Monte Carlo simulation and measurement with a 3D polymer gel dosimeter for dose distributions in biological samples.

    PubMed

    Furuta, T; Maeyama, T; Ishikawa, K L; Fukunishi, N; Fukasaku, K; Takagi, S; Noda, S; Himeno, R; Hayashi, S

    2015-08-21

    In this research, we used a 135 MeV/nucleon carbon-ion beam to irradiate a biological sample composed of fresh chicken meat and bones, which was placed in front of a PAGAT gel dosimeter, and compared the measured and simulated transverse-relaxation-rate (R2) distributions in the gel dosimeter. We experimentally measured the three-dimensional R2 distribution, which records the dose induced by particles penetrating the sample, by using magnetic resonance imaging. The obtained R2 distribution reflected the heterogeneity of the biological sample. We also conducted Monte Carlo simulations using the PHITS code by reconstructing the elemental composition of the biological sample from its computed tomography images while taking into account the dependence of the gel response on the linear energy transfer. The simulation reproduced the experimental distal edge structure of the R2 distribution with an accuracy under about 2 mm, which is approximately the same as the voxel size currently used in treatment planning. PMID:26266894

  16. A new paradigm for petascale Monte Carlo simulation: Replica exchange Wang Landau sampling

    SciTech Connect

    Li, Ying Wai; Vogel, Thomas; Wuest, Thomas; Landau, David P

    2014-01-01

    We introduce a generic, parallel Wang Landau method that is naturally suited to implementation on massively parallel, petaflop supercomputers. The approach introduces a replica-exchange framework in which densities of states for overlapping sub-windows in energy space are determined iteratively by traditional Wang Landau sampling. The advantages and general applicability of the method are demonstrated for several distinct systems that possess discrete or continuous degrees of freedom, including those with complex free energy landscapes and topological constraints.

  17. Multimodal nested sampling: an efficient and robust alternative to Markov Chain Monte Carlo methods for astronomical data analyses

    NASA Astrophysics Data System (ADS)

    Feroz, F.; Hobson, M. P.

    2008-02-01

    In performing a Bayesian analysis of astronomical data, two difficult problems often emerge. First, in estimating the parameters of some model for the data, the resulting posterior distribution may be multimodal or exhibit pronounced (curving) degeneracies, which can cause problems for traditional Markov Chain Monte Carlo (MCMC) sampling methods. Secondly, in selecting between a set of competing models, calculation of the Bayesian evidence for each model is computationally expensive using existing methods such as thermodynamic integration. The nested sampling method introduced by Skilling has greatly reduced the computational expense of calculating evidence and also produces posterior inferences as a by-product. This method has been applied successfully in cosmological applications by Mukherjee, Parkinson & Liddle, but their implementation was efficient only for unimodal distributions without pronounced degeneracies. Shaw, Bridges & Hobson recently introduced a clustered nested sampling method which is significantly more efficient in sampling from multimodal posteriors and also determines the expectation and variance of the final evidence from a single run of the algorithm, hence providing a further increase in efficiency. In this paper, we build on the work of Shaw et al. and present three new methods for sampling and evidence evaluation from distributions that may contain multiple modes and significant degeneracies in very high dimensions; we also present an even more efficient technique for estimating the uncertainty on the evaluated evidence. These methods lead to a further substantial improvement in sampling efficiency and robustness, and are applied to two toy problems to demonstrate the accuracy and economy of the evidence calculation and parameter estimation. Finally, we discuss the use of these methods in performing Bayesian object detection in astronomical data sets, and show that they significantly outperform existing MCMC techniques. An implementation
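    The core nested-sampling loop that these methods build on can be shown in a minimal one-dimensional form: live points drawn from the prior are repeatedly replaced by new prior draws constrained to have higher likelihood, and the evidence is accumulated from the discarded points. The likelihood, prior box and run lengths below are toy choices, and the constrained draw is done by brute-force rejection, which only works in this tiny example (the clustered and multimodal machinery of the paper is not reproduced).

```python
import numpy as np

rng = np.random.default_rng(9)

def log_like(theta):                        # toy unimodal likelihood
    return -0.5 * (theta / 0.3) ** 2 - 0.5 * np.log(2 * np.pi * 0.3 ** 2)

lo, hi = -2.0, 2.0                          # uniform prior box
n_live, n_iter = 100, 600

live = rng.uniform(lo, hi, n_live)
live_logL = log_like(live)

logZ, logX_prev = -np.inf, 0.0
for i in range(1, n_iter + 1):
    worst = int(np.argmin(live_logL))
    logL_min = live_logL[worst]
    logX = -i / n_live                      # expected log prior volume remaining
    log_w = np.log(np.exp(logX_prev) - np.exp(logX))
    logZ = np.logaddexp(logZ, logL_min + log_w)
    logX_prev = logX
    # replace the worst live point with a prior draw of higher likelihood
    # (brute-force rejection; feasible only for this 1D toy problem)
    while True:
        cand = rng.uniform(lo, hi)
        if log_like(cand) > logL_min:
            live[worst], live_logL[worst] = cand, log_like(cand)
            break

# add the contribution of the remaining live points
logZ = np.logaddexp(logZ, np.logaddexp.reduce(live_logL) - np.log(n_live) + logX_prev)
print("nested-sampling log-evidence:", logZ)
print("analytic value:              ", np.log(1.0 / (hi - lo)))
```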

  18. Limnological and ecological methods: approaches, and sampling strategies for middle Xingu River in the area of influence of future Belo Monte Power Plant.

    PubMed

    Tundisi, J G; Matsumura-Tundisi, T; Tundisi, J E M; Faria, C R L; Abe, D S; Blanco, F; Rodrigues Filho, J; Campanelli, L; Sidagis Galli, C; Teixeira-Silva, V; Degani, R; Soares, F S; Gatti Junior, P

    2015-08-01

    In this paper the authors describe the limnological approaches, the sampling methodology, and the strategy adopted in the study of the Xingu River in the area of influence of the future Belo Monte Power Plant. River ecosystems are characterized by a unidirectional current that is highly variable in time, depending on the climatic situation, the drainage pattern and the hydrological cycle. Continuous vertical mixing by currents and turbulence is characteristic of these ecosystems. All these basic mechanisms were taken into consideration in the sampling strategy and field work carried out in the Xingu River Basin, upstream and downstream of the future Belo Monte Power Plant units. PMID:26691072

  19. Review of Sample Size for Structural Equation Models in Second Language Testing and Learning Research: A Monte Carlo Approach

    ERIC Educational Resources Information Center

    In'nami, Yo; Koizumi, Rie

    2013-01-01

    The importance of sample size, although widely discussed in the literature on structural equation modeling (SEM), has not been widely recognized among applied SEM researchers. To narrow this gap, we focus on second language testing and learning studies and examine the following: (a) Is the sample size sufficient in terms of precision and power of…

  20. Mammographic Imaging Studies Using the Monte Carlo Image Simulation-Differential Sampling (MCMIS-DS) Code

    SciTech Connect

    Kuruvilla Verghese

    2002-04-05

    This report summarizes the highlights of the research performed under the 1-year NEER grant from the Department of Energy. The primary goal of this study was to investigate the effects of certain design changes in the Fisher Senoscan mammography system and in the degree of breast compression on the discernibility of microcalcifications in calcification clusters often observed in mammograms with tumor lesions. The most important design change that one can contemplate in a digital mammography system to improve resolution of calcifications is the reduction of pixel dimensions of the digital detector. Breast compression is painful to the patient and is thought to be a deterrent to women getting routine mammographic screening. Calcification clusters often serve as markers (indicators) of breast cancer.

  1. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm, on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
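    The two initial designs compared above differ only in the offset placed inside each stratum of the Latin hypercube: the stratum midpoint versus a uniform random position. A basic construction (without any subsequent space-filling optimization) is sketched below; the sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(10)

def latin_hypercube(n, d, midpoint=True):
    """Basic LHS design on the unit hypercube: one point per stratum in each
    dimension, with strata paired through independent random permutations."""
    sample = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)
        offset = 0.5 if midpoint else rng.uniform(size=n)   # midpoint vs random LHS
        sample[:, j] = (perm + offset) / n
    return sample

init_mid = latin_hypercube(20, 2, midpoint=True)    # initial design fed to the optimizer
init_rand = latin_hypercube(20, 2, midpoint=False)
print(init_mid[:3])
```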

  2. Monte-Carlo simulation of nano-collected current from a silicon sample containing a linear arrangement of uncapped nanocrystals

    SciTech Connect

    Ledra, Mohammed; El Hdiy, Abdelillah

    2015-09-21

    A Monte-Carlo simulation algorithm is used to study electron beam induced current in an intrinsic silicon sample, which contains at its surface a linear arrangement of uncapped nanocrystals positioned in the irradiation trajectory around the hemispherical collecting nano-contact. The induced current is generated using a 5 keV electron beam in a perpendicular configuration. Each nanocrystal is considered as a recombination center, and the surface recombination velocity at the free surface is taken to be zero. It is shown that the induced current is affected by the distance separating each nanocrystal from the nano-contact. An increase of this separation distance translates to a decrease of the nanocrystal density and an increase of the minority carrier diffusion length. The results reveal a threshold separation distance beyond which nanocrystals have no further effect on the collection efficiency, and the diffusion length reaches the value obtained in the absence of nanocrystals. A cross-section characterizing the nano-contact's ability to trap carriers was determined.

  3. Improved measurement scheme of the self energy in the worm-sampled hybridization-expansion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Han, Mancheon; Lee, Choong-Ki; Choi, Hyoung Joon

    Hybridization-expansion continuous-time quantum Monte Carlo (CT-HYB) is a popular approach in real-material research because it can handle non-density-density-type interactions. In conventional CT-HYB, one measures the Green's function and obtains the self-energy from the Dyson equation. Because this approach requires inverting statistical data, the resulting self-energy is very sensitive to statistical noise, so the measurement is unreliable except at low frequencies. Such errors can be suppressed by measuring a special type of higher-order correlation function, a scheme previously implemented for density-density-type interactions. With the help of the recently reported worm-sampling measurement, we developed an improved self-energy measurement scheme that can be applied to any type of interaction. As an illustration, we calculated the self-energy for the 3-orbital Hubbard-Kanamori-type Hamiltonian with our newly developed method. This work was supported by NRF of Korea (Grant No. 2011-0018306) and KISTI supercomputing center (Project No. KSC-2015-C3-039)

  4. 40 CFR 80.330 - What are the sampling and testing requirements for refiners and importers?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 17 2013-07-01 2013-07-01 false What are the sampling and testing requirements for refiners and importers? 80.330 Section 80.330 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and...

  5. 40 CFR 80.330 - What are the sampling and testing requirements for refiners and importers?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What are the sampling and testing requirements for refiners and importers? 80.330 Section 80.330 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and...

  6. Sampling High-Altitude and Stratified Mating Flights of Red Imported Fire Ant

    Technology Transfer Automated Retrieval System (TEKTRAN)

    With the exception of an airplane equipped with nets, no method has been developed that successfully samples red imported fire ant, Solenopsis invicta Buren, sexuals in mating/dispersal flights throughout their potential altitudinal trajectories. We developed and tested a method for sampling queens ...

  7. An Overview of Importance Splitting for Rare Event Simulation

    ERIC Educational Resources Information Center

    Morio, Jerome; Pastel, Rudy; Le Gland, Francois

    2010-01-01

    Monte Carlo simulations are a classical tool to analyse physical systems. When unlikely events are to be simulated, the importance sampling technique is often used instead of Monte Carlo. Importance sampling has some drawbacks when the problem dimensionality is high or when the optimal importance sampling density is complex to obtain. In this…
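    The contrast between plain Monte Carlo and importance sampling for rare events, which motivates importance splitting, is easy to see on a textbook example: estimating a small Gaussian tail probability. The shifted-normal proposal below is the classic choice for this toy problem; the example does not implement importance splitting itself.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
n = 100_000
threshold = 4.0

# Plain Monte Carlo: almost no samples ever exceed the threshold
z = rng.normal(size=n)
p_mc = np.mean(z > threshold)

# Importance sampling: propose from N(threshold, 1) and reweight by the density ratio
x = rng.normal(loc=threshold, size=n)
w = np.exp(threshold ** 2 / 2 - threshold * x)    # phi(x) / q(x) for the shifted proposal
p_is = np.mean((x > threshold) * w)

print("true value         :", norm.sf(threshold))
print("plain Monte Carlo  :", p_mc)
print("importance sampling:", p_is)
```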

  8. Coalescent: an open-science framework for importance sampling in coalescent theory

    PubMed Central

    Spouge, John L.

    2015-01-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time following infinite sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only

  9. Detection and cultivation of indigenous microorganisms in Mesozoic claystone core samples from the Opalinus Clay Formation (Mont Terri Rock Laboratory)

    NASA Astrophysics Data System (ADS)

    Mauclaire, L.; McKenzie, J. A.; Schwyn, B.; Bossart, P.

    Although microorganisms have been isolated from various deep-subsurface environments, the persistence of microbial activity in claystones buried to great depths and on geological time scales has been poorly studied. The presence of in-situ microbial life in the Opalinus Clay Formation (Mesozoic claystone, 170 million years old) at the Mont Terri Rock Laboratory, Canton Jura, Switzerland was investigated. Opalinus Clay is a host rock candidate for a radioactive waste repository. Particle tracer tests demonstrated the uncontaminated nature of the cored samples, showing their suitability for microbiological investigations. To determine whether microorganisms are a consistent and characteristic component of the Opalinus Clay Formation, two approaches were used: (i) the cultivation of indigenous micoorganisms focusing mainly on the cultivation of sulfate-reducing bacteria, and (ii) the direct detection of molecular biomarkers of bacteria. The goal of the first set of experiments was to assess the presence of cultivable microorganisms within the Opalinus Clay Formation. After few months of incubation, the number of cell ranged from 0.1 to 2 × 10 3 cells ml -1 media. The microorganisms were actively growing as confirmed by the observation of dividing cells, and detection of traces of sulfide. To avoid cultivation bias, quantification of molecular biomarkers (phospholipid fatty acids) was used to assess the presence of autochthonous microorganisms. These molecules are good indicators of the presence of living cells. The Opalinus Clay contained on average 64 ng of PLFA g -1 dry claystone. The detected microbial community comprises mainly Gram-negative anaerobic bacteria as indicated by the ratio of iso/anteiso phospholipids (about 2) and the detection of large amount of β-hydroxy substituted fatty acids. The PLFA composition reveals the presence of specific functional groups of microorganisms in particular sulfate-reducing bacteria ( Desulfovibrio, Desulfobulbus, and

  10. The Five Planets in the Kepler-296 Binary System All Orbit the Primary: An Application of Importance Sampling

    NASA Astrophysics Data System (ADS)

    Barclay, Thomas; Quintana, Elisa; Adams, Fred; Ciardi, David; Huber, Daniel; Foreman-Mackey, Daniel; Montet, Benjamin Tyler; Caldwell, Douglas

    2015-08-01

    Kepler-296 is a binary star system with two M-dwarf components separated by 0.2 arcsec. Five transiting planets have been confirmed to be associated with the Kepler-296 system; given the evidence to date, however, the planets could in principle orbit either star. This ambiguity has made it difficult to constrain both the orbital and physical properties of the planets. Using both statistical and analytical arguments, this paper shows that all five planets are highly likely to orbit the primary star in this system. We performed a Markov-Chain Monte Carlo simulation using a model with five transiting planets, placing uniform priors on the stellar density and dilution. Using importance sampling, we compared the model probabilities under the priors of the planets orbiting either the brighter or the fainter component of the binary. A model where the planets orbit the brighter component, Kepler-296A, is strongly preferred by the data. Combined with our assertion that all five planets orbit the same star, the two outer planets in the system, Kepler-296 Ae and Kepler-296 Af, have radii of 1.53 ± 0.26 and 1.80 ± 0.31 R⊕, respectively, and receive incident stellar fluxes of 1.40 ± 0.23 and 0.62 ± 0.10 times the incident flux the Earth receives from the Sun. This level of irradiation places both planets within or close to the circumstellar habitable zone of their parent star.
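    The general reweighting trick described here (using importance sampling to compare model probabilities under different priors from a single set of posterior draws) can be sketched with invented numbers. Posterior samples obtained under a broad reference prior are reweighted by the ratio of each candidate prior to the reference prior; the ratio of the resulting averages estimates the evidence ratio. The parameter, priors and sample values below are hypothetical and unrelated to the actual Kepler-296 analysis.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(12)

# Pretend these are MCMC draws of a parameter (e.g. a stellar density) obtained
# under a broad uniform "reference" prior on [0, 10].
post_samples = rng.normal(3.0, 0.4, size=20_000)
post_samples = post_samples[(post_samples > 0) & (post_samples < 10)]
ref_prior = 1.0 / 10.0

# Two competing priors, standing in for "planets orbit star A" vs "star B"
prior_A = norm(loc=3.2, scale=0.5).pdf(post_samples)
prior_B = norm(loc=6.0, scale=0.5).pdf(post_samples)

# Importance-sampling estimate of the evidence ratio under the two priors
ratio = np.mean(prior_A / ref_prior) / np.mean(prior_B / ref_prior)
print("evidence ratio (model A relative to model B):", ratio)
```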

  11. Importance sampling for Lambda-coalescents in the infinitely many sites model.

    PubMed

    Birkner, Matthias; Blath, Jochen; Steinrücken, Matthias

    2011-06-01

    We present and discuss new importance sampling schemes for the approximate computation of the sample probability of observed genetic types in the infinitely many sites model from population genetics. More specifically, we extend the 'classical framework', where genealogies are assumed to be governed by Kingman's coalescent, to the more general class of Lambda-coalescents and develop further Hobolth et al.'s (2008) idea of deriving importance sampling schemes based on 'compressed genetrees'. The resulting schemes extend earlier work by Griffiths and Tavaré (1994), Stephens and Donnelly (2000), Birkner and Blath (2008) and Hobolth et al. (2008). We conclude with a performance comparison of classical and new schemes for Beta- and Kingman coalescents. PMID:21296095

  12. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    SciTech Connect

    Li, Weixuan; Lin, Guang

    2015-08-01

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.
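
    A minimal sketch of the adaptive Gaussian-mixture proposal idea (population Monte Carlo flavour) is given below; it assumes scikit-learn is available, uses a toy bimodal one-dimensional target in place of a real posterior, and omits the polynomial chaos surrogate entirely.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def log_target(x):
          # toy bimodal 1-D "posterior" (unnormalised): two well-separated modes
          return np.logaddexp(-0.5 * ((x[:, 0] - 2.0) / 0.3) ** 2,
                              -0.5 * ((x[:, 0] + 2.0) / 0.3) ** 2)

      rng = np.random.default_rng(1)
      x = rng.normal(0.0, 5.0, size=(2000, 1))      # draws from the initial broad proposal
      log_q = -0.5 * (x[:, 0] / 5.0) ** 2           # initial proposal log-density (unnormalised)

      for _ in range(5):                            # adaptation iterations
          log_w = log_target(x) - log_q             # importance weights (log scale)
          w = np.exp(log_w - log_w.max())
          w /= w.sum()
          # refit the Gaussian-mixture proposal to the weighted sample via resampling
          idx = rng.choice(len(x), size=len(x), p=w)
          gm = GaussianMixture(n_components=2, covariance_type="full").fit(x[idx])
          x, _ = gm.sample(2000)                    # draw from the adapted proposal
          log_q = gm.score_samples(x)

      print("adapted proposal means:", gm.means_.ravel())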

  13. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    SciTech Connect

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes’ rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.

  14. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE PAGESBeta

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes’ rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.

  15. Importance of sample form and surface temperature for analysis by ambient plasma mass spectrometry (PADI).

    PubMed

    Salter, Tara La Roche; Bunch, Josephine; Gilmore, Ian S

    2014-09-16

    Many different types of samples have been analyzed in the literature using plasma-based ambient mass spectrometry sources; however, comprehensive studies of the important parameters for analysis are only just beginning. Here, we investigate the effect of the sample form and surface temperature on the signal intensities in plasma-assisted desorption ionization (PADI). The form of the sample is very important, with powders of all volatilities effectively analyzed. However, for the analysis of thin films at room temperature and using a low plasma power, a vapor pressure of greater than 10⁻⁴ Pa is required to achieve a sufficiently good quality spectrum. Using thermal desorption, we are able to increase the signal intensity of less volatile materials with vapor pressures less than 10⁻⁴ Pa, in thin film form, by between 4 and 7 orders of magnitude. This is achieved by increasing the temperature of the sample up to a maximum of 200 °C. Thermal desorption can also increase the signal intensity for the analysis of powders. PMID:25137443

  16. A laser microdissection-based workflow for FFPE tissue microproteomics: Important considerations for small sample processing.

    PubMed

    Longuespée, Rémi; Alberts, Deborah; Pottier, Charles; Smargiasso, Nicolas; Mazzucchelli, Gabriel; Baiwir, Dominique; Kriegsmann, Mark; Herfs, Michael; Kriegsmann, Jörg; Delvenne, Philippe; De Pauw, Edwin

    2016-07-15

    Proteomic methods are today widely applied to formalin-fixed paraffin-embedded (FFPE) tissue samples for several applications in research, especially in molecular pathology. To date, there is an unmet need for the analysis of small tissue samples, such as for early cancerous lesions. Indeed, no method has yet been proposed for the reproducible processing of small FFPE tissue samples to allow biomarker discovery. In this work, we tested several procedures to process laser microdissected tissue pieces bearing less than 3000 cells. Combined with appropriate settings for liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis, a citric acid antigen retrieval (CAAR)-based procedure was established, allowing more than 1400 proteins to be identified from a single microdissected breast cancer tissue biopsy. This work demonstrates important considerations concerning the handling and processing of laser microdissected tissue samples of extremely limited size, in the process opening new perspectives in molecular pathology. A proof of concept of the proposed method for biomarker discovery, with respect to these specific handling considerations, is illustrated using the differential proteomic analysis of invasive breast carcinoma of no special type and invasive lobular triple-negative breast cancer tissues. This work will be of utmost importance for early biomarker discovery or in support of matrix-assisted laser desorption/ionization (MALDI) imaging for microproteomics from small regions of interest. PMID:26690073

  17. The Importance of Meteorite Collections to Sample Return Missions: Past, Present, and Future Considerations

    NASA Technical Reports Server (NTRS)

    Welzenbach, L. C.; McCoy, T. J.; Glavin, D. P.; Dworkin, J. P.; Abell, P. A.

    2012-01-01

    turn led to a new wave of Mars exploration that ultimately could lead to sample return focused on evidence for past or present life. This partnership between collections and missions will be increasingly important in the coming decades as we discover new questions to be addressed and identify targets for both robotic and human exploration. Nowhere is this more true than in the ultimate search for the abiotic and biotic processes that produced life. Existing collections also provide the essential materials for developing and testing new analytical schemes to detect the rare markers of life and distinguish them from abiotic processes. Large collections of meteorites and the new types being identified within these collections, which come to us at a fraction of the cost of a sample return mission, will continue to shape the objectives of future missions and provide new ways of interpreting returned samples.

  18. Importance of elastic scattering to particle direction determination in Monte Carlo calculations of DT reactions in flight

    SciTech Connect

    Devaney, J.J.

    1982-04-01

    The importance of single, large-angle, nuclear-Coulombic, nuclear-hadronic, hadronic-Coulombic interference, and multiple nuclear-Coulombic scattering is investigated for tritons incident on deuterium, iron, and plutonium for very high temperatures and densities and for ordinary liquid and solid densities at low temperature. Depending on the accuracy desired, we conclude that for 10-keV-temperature DT plasmas it is not necessary to include elastic scattering deflection in reaction-in-flight calculations. For higher temperatures, or where angular accuracies greater than 10° are significant, or for higher-Z targets, or for other special circumstances, one must include elastic scattering from Coulomb forces.

  19. Effect of initial seed and number of samples on simple-random and Latin-Hypercube Monte Carlo probabilities (confidence interval considerations)

    SciTech Connect

    ROMERO,VICENTE J.

    2000-05-04

    In order to devise an algorithm for autonomously terminating Monte Carlo sampling when sufficiently small and reliable confidence intervals (CI) are achieved on calculated probabilities, the behavior of CI estimators must be characterized. This knowledge is also required in comparing the accuracy of other probability estimation techniques to Monte Carlo results. Based on 100 trials in a hypothesis test, estimated 95% CI from classical approximate CI theory are empirically examined to determine if they behave as true 95% CI over spectrums of probabilities (population proportions) ranging from 0.001 to 0.99 in a test problem. Tests are conducted for population sizes of 500 and 10,000 samples where applicable. Significant differences between true and estimated 95% CI are found to occur at probabilities between 0.1 and 0.9, such that estimated 95% CI can be rejected as not being true 95% CI at less than a 40% chance of incorrect rejection. With regard to Latin Hypercube sampling (LHS), though no general theory has been verified for accurately estimating LHS CI, recent numerical experiments on the test problem have found LHS to be conservatively over an order of magnitude more efficient than SRS for similar sized CI on probabilities ranging between 0.25 and 0.75. The efficiency advantage of LHS vanishes, however, as the probability extremes of 0 and 1 are approached.
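
    The normal-approximation confidence interval examined in this record is simple to reproduce in a toy setting; the sketch below (illustrative values only) draws repeated Monte Carlo estimates of a known probability and checks how often the nominal 95% interval actually covers it.

      import numpy as np

      rng = np.random.default_rng(2)
      p_true, n, trials, z = 0.1, 500, 100, 1.96   # true probability, sample size, trials, 95% z-value
      covered = 0
      for _ in range(trials):
          hits = rng.binomial(n, p_true)           # one simple-random Monte Carlo estimate
          p_hat = hits / n
          half = z * np.sqrt(p_hat * (1 - p_hat) / n)   # classical approximate 95% CI half-width
          covered += (p_hat - half <= p_true <= p_hat + half)
      print(f"empirical coverage of the nominal 95% CI: {covered / trials:.2f}")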

  20. Importance of anthropogenic climate impact, sampling error and urban development in sewer system design.

    PubMed

    Egger, C; Maurer, M

    2015-04-15

    Urban drainage design relying on observed precipitation series neglects the uncertainties associated with current and indeed future climate variability. Urban drainage design is further affected by the large stochastic variability of precipitation extremes and sampling errors arising from the short observation periods of extreme precipitation. Stochastic downscaling addresses anthropogenic climate impact by allowing relevant precipitation characteristics to be derived from local observations and an ensemble of climate models. This multi-climate model approach seeks to reflect the uncertainties in the data due to structural errors of the climate models. An ensemble of outcomes from stochastic downscaling allows for addressing the sampling uncertainty. These uncertainties are clearly reflected in the precipitation-runoff predictions of three urban drainage systems. They were mostly due to the sampling uncertainty. The contribution of climate model uncertainty was found to be of minor importance. Under the applied greenhouse gas emission scenario (A1B) and within the period 2036-2065, the potential for urban flooding in our Swiss case study is slightly reduced on average compared to the reference period 1981-2010. Scenario planning was applied to consider urban development associated with future socio-economic factors affecting urban drainage. The impact of scenario uncertainty was to a large extent found to be case-specific, thus emphasizing the need for scenario planning in every individual case. The results represent a valuable basis for discussions of new drainage design standards aiming specifically to include considerations of uncertainty. PMID:25644630

  1. A new method for estimating the demographic history from DNA sequences: an importance sampling approach

    PubMed Central

    Ait Kaci Azzou, Sadoune; Larribe, Fabrice; Froda, Sorana

    2015-01-01

    The effective population size over time (demographic history) can be retraced from a sample of contemporary DNA sequences. In this paper, we propose a novel methodology based on importance sampling (IS) for exploring such demographic histories. Our starting point is the generalized skyline plot, with the main difference being that our procedure, the skywis plot, uses a large number of genealogies. The information provided by these genealogies is combined according to the IS weights. Thus, we compute a weighted average of the effective population sizes on specific time intervals (epochs), where the genealogies that agree more with the data are given more weight. We illustrate by a simulation study that the skywis plot correctly reconstructs the recent demographic history under the scenarios most commonly considered in the literature. In particular, our method can capture a change point in the effective population size, and its overall performance is comparable with that of the Bayesian skyline plot. We also introduce the case of serially sampled sequences and illustrate that it is possible to improve the performance of the skywis plot in the case of an exponential expansion of the effective population size. PMID:26300910

  2. Molecular characterization of Salmonella enterica serovar Saintpaul isolated from imported seafood, pepper, environmental and clinical samples.

    PubMed

    Akiyama, Tatsuya; Khan, Ashraf A; Cheng, Chorng-Ming; Stefanova, Rossina

    2011-09-01

    A total of 39 Salmonella enterica serovar Saintpaul strains from imported seafood, pepper and from environmental and clinical samples were analyzed for the presence of virulence genes, antibiotic resistance, plasmid and plasmid replicon types. Pulsed-field gel electrophoresis (PFGE) fingerprinting using the XbaI restriction enzyme and plasmid profiling were performed to assess genetic diversity. None of the isolates showed resistance to ampicillin, chloramphenicol, gentamicin, kanamycin, streptomycin, sulfisoxazole, and tetracycline. Seventeen virulence genes were screened for by PCR. All strains were positive for 14 genes (spiA, sifA, invA, spaN, sopE, sipB, iroN, msgA, pagC, orgA, prgH, lpfC, sitC, and tolC) and negative for three genes (spvB, pefA, and cdtB). Twelve strains, including six from clinical samples and six from seafood, carried one or more plasmids. Large plasmids, sized greater than 50 kb, were detected in one clinical and three food isolates. One plasmid was able to be typed as IncI1 by PCR-based replicon typing. There were 25 distinct PFGE-XbaI patterns, clustered into two groups. Cluster A, with 68.5% similarity, mainly consisted of clinical isolates, while Cluster C, with 67.6% similarity, mainly consisted of shrimp isolates from India. Our findings indicate the genetic diversity of S. Saintpaul in clinical samples, imported seafood, and the environment, and show that this serotype possesses several virulence genes and plasmids and can cause salmonellosis. PMID:21645810

  3. Monte Carlo fundamentals

    SciTech Connect

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
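
    As a small illustration of the random-sampling fundamentals listed above, the following sketch uses the inverse-transform method to sample distances to the next collision from an exponential distribution; the cross-section value is arbitrary.

      import numpy as np

      rng = np.random.default_rng(3)
      sigma_t = 0.5                            # macroscopic total cross-section (1/cm), arbitrary
      u = rng.random(100_000)                  # uniform random numbers on [0, 1)
      distances = -np.log(1.0 - u) / sigma_t   # inverse-transform sampling of an exponential

      print("sampled mean free path:", distances.mean(), " expected:", 1.0 / sigma_t)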

  4. Model reduction algorithms for optimal control and importance sampling of diffusions

    NASA Astrophysics Data System (ADS)

    Hartmann, Carsten; Schütte, Christof; Zhang, Wei

    2016-08-01

    We propose numerical algorithms for solving optimal control and importance sampling problems based on simplified models. The algorithms combine model reduction techniques for multiscale diffusions and stochastic optimization tools, with the aim of reducing the original, possibly high-dimensional problem to a lower dimensional representation of the dynamics, in which only a few relevant degrees of freedom are controlled or biased. Specifically, we study situations in which either a reaction coordinate onto which the dynamics can be projected is known, or situations in which the dynamics shows strongly localized behavior in the small noise regime. No explicit assumptions about small parameters or scale separation have to be made. We illustrate the approach with simple, but paradigmatic numerical examples.

  5. Simulative Investigation on Spectral Efficiency of Unipolar Codes based OCDMA System using Importance Sampling Technique

    NASA Astrophysics Data System (ADS)

    Farhat, A.; Menif, M.; Rezig, H.

    2013-09-01

    This paper analyses the spectral efficiency of Optical Code Division Multiple Access (OCDMA) system using Importance Sampling (IS) technique. We consider three configurations of OCDMA system namely Direct Sequence (DS), Spectral Amplitude Coding (SAC) and Fast Frequency Hopping (FFH) that exploits the Fiber Bragg Gratings (FBG) based encoder/decoder. We evaluate the spectral efficiency of the considered system by taking into consideration the effect of different families of unipolar codes for both coherent and incoherent sources. The results show that the spectral efficiency of OCDMA system with coherent source is higher than the incoherent case. We demonstrate also that DS-OCDMA outperforms both others in terms of spectral efficiency in all conditions.

  6. The jigsaw puzzle of sequence phenotype inference: Piecing together Shannon entropy, importance sampling, and Empirical Bayes.

    PubMed

    Shreif, Zeina; Striegel, Deborah A; Periwal, Vipul

    2015-09-01

    A nucleotide sequence 35 base pairs long can take 1,180,591,620,717,411,303,424 possible values. One example of a systems biology dataset, protein binding microarrays, contains activity data from about 40,000 such sequences. The discrepancy between the number of possible configurations and the available activities is enormous. Thus, although systems biology datasets are large in absolute terms, they often require methods developed for rare events due to the combinatorial increase in the number of possible configurations of biological systems. A plethora of techniques for handling large datasets, such as Empirical Bayes, or rare events, such as importance sampling, have been developed in the literature, but these cannot always be simultaneously utilized. Here we introduce a principled approach to Empirical Bayes based on importance sampling, information theory, and theoretical physics in the general context of sequence phenotype model induction. We present the analytical calculations that underlie our approach. We demonstrate the computational efficiency of the approach on concrete examples, and demonstrate its efficacy by applying the theory to publicly available protein binding microarray transcription factor datasets and to data on synthetic cAMP-regulated enhancer sequences. As further demonstrations, we find transcription factor binding motifs, predict the activity of new sequences and extract the locations of transcription factor binding sites. In summary, we present a novel method that is efficient (requiring minimal computational time and reasonable amounts of memory), has high predictive power that is comparable with that of models with hundreds of parameters, and has a limited number of optimized parameters, proportional to the sequence length. PMID:26092377

  7. Implementation and testing of the on-the-fly thermal scattering Monte Carlo sampling method for graphite and light water in MCNP6

    DOE PAGESBeta

    Pavlou, Andrew T.; Ji, Wei; Brown, Forrest B.

    2016-01-23

    Here, a proper treatment of thermal neutron scattering requires accounting for chemical binding through a scattering law S(α,β,T). Monte Carlo codes sample the secondary neutron energy and angle after a thermal scattering event from probability tables generated from S(α,β,T) tables at discrete temperatures, requiring a large amount of data for multiscale and multiphysics problems with detailed temperature gradients. We have previously developed a method to handle this temperature dependence on-the-fly during the Monte Carlo random walk using polynomial expansions in 1/T to directly sample the secondary energy and angle. In this paper, the on-the-fly method is implemented into MCNP6 and tested in both graphite-moderated and light water-moderated systems. The on-the-fly method is compared with the thermal ACE libraries that come standard with MCNP6, yielding good agreement with integral reactor quantities like k-eigenvalue and differential quantities like single-scatter secondary energy and angle distributions. The simulation runtimes are comparable between the two methods (on the order of 5–15% difference for the problems tested) and the on-the-fly fit coefficients only require 5–15 MB of total data storage.

  8. 40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention requirements... independent laboratory shall also include with the retained sample the test result for benzene as...

  9. 40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention requirements... independent laboratory shall also include with the retained sample the test result for benzene as...

  10. The impact of absorption coefficient on polarimetric determination of Berry phase based depth resolved characterization of biomedical scattering samples: a polarized Monte Carlo investigation

    SciTech Connect

    Baba, Justin S; Koju, Vijay; John, Dwayne O

    2016-01-01

    The modulation of the state of polarization of photons due to scatter generates associated geometric phase that is being investigated as a means for decreasing the degree of uncertainty in back-projecting the paths traversed by photons detected in backscattered geometry. In our previous work, we established that polarimetrically detected Berry phase correlates with the mean photon penetration depth of the backscattered photons collected for image formation. In this work, we report on the impact of state-of-linear-polarization (SOLP) filtering on both the magnitude and population distributions of image-forming detected photons as a function of the absorption coefficient of the scattering sample. The results, based on a Berry-phase-tracking implementation of the polarized Monte Carlo code, indicate that sample absorption plays a significant role in the mean depth attained by the image-forming backscattered detected photons.

  11. 40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention requirements... include with the retained sample the test result for benzene as conducted pursuant to § 80.46(e). (b... sample the test result for benzene as conducted pursuant to § 80.47....

  12. DS86 neutron dose: Monte Carlo analysis for depth profile of 152Eu activity in a large stone sample.

    PubMed

    Endo, S; Iwatani, K; Oka, T; Hoshi, M; Shizuma, K; Imanaka, T; Takada, J; Fujita, S; Hasai, H

    1999-06-01

    The depth profile of 152Eu activity induced in a large granite stone pillar by Hiroshima atomic bomb neutrons was calculated by a Monte Carlo N-Particle Transport Code (MCNP). The pillar was on the Motoyasu Bridge, located at a distance of 132 m (WSW) from the hypocenter. It was a square column with a horizontal sectional size of 82.5 cm x 82.5 cm and height of 179 cm. Twenty-one cells from the north to south surface at the central height of the column were specified for the calculation and 152Eu activities for each cell were calculated. The incident neutron spectrum was assumed to be the angular fluence data of the Dosimetry System 1986 (DS86). The angular dependence of the spectrum was taken into account by dividing the whole solid angle into twenty-six directions. The calculated depth profile of specific activity did not agree with the measured profile. A discrepancy was found in the absolute values at each depth with a mean multiplication factor of 0.58 and also in the shape of the relative profile. The results indicated that a reassessment of the neutron energy spectrum in DS86 is required for correct dose estimation. PMID:10494148

  13. Importance of local knowledge in plant resources management and conservation in two protected areas from Trás-os-Montes, Portugal

    PubMed Central

    2011-01-01

    Many European protected areas were legally created to preserve and maintain biological diversity, unique natural features and associated cultural heritage. Built over centuries as a result of geographical and historical factors interacting with human activity, these territories are reservoirs of resources, practices and knowledge that have been the essential basis of their creation. Under social and economical transformations several components of such areas tend to be affected and their protection status endangered. Carrying out ethnobotanical surveys and extensive field work using anthropological methodologies, particularly with key-informants, we report changes observed and perceived in two natural parks in Trás-os-Montes, Portugal, that affect local plant-use systems and consequently local knowledge. By means of informants' testimonies and of our own observation and experience we discuss the importance of local knowledge and of local communities' participation to protected areas design, management and maintenance. We confirm that local knowledge provides new insights and opportunities for sustainable and multipurpose use of resources and offers contemporary strategies for preserving cultural and ecological diversity, which are the main purposes and challenges of protected areas. To be successful it is absolutely necessary to make people active participants, not simply integrate and validate their knowledge and expertise. Local knowledge is also an interesting tool for educational and promotional programs. PMID:22112242

  14. Multihistogram reweighting for nonequilibrium Markov processes using sequential importance sampling methods.

    PubMed

    Bojesen, Troels Arnfred

    2013-04-01

    We present a multihistogram reweighting technique for nonequilibrium Markov chains with discrete energies. The method generalizes the single-histogram method of Yin et al. [Phys. Rev. E 72, 036122 (2005)], making it possible to calculate the time evolution of observables at a posteriori chosen couplings based on a set of simulations performed at other couplings. In the same way as multihistogram reweighting in an equilibrium setting improves the practical reweighting range as well as use of available data compared to single-histogram reweighting, the method generalizes the multihistogram advantages to nonequilibrium simulations. We demonstrate the procedure for the Ising model with Metropolis dynamics, but stress that the method is generally applicable to a range of models and Monte Carlo update schemes. PMID:23679555
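
    For orientation, the basic (equilibrium, single-histogram) reweighting step that this record generalizes can be sketched as follows; the energy samples and observable are toy stand-ins, and the nonequilibrium multihistogram machinery of the paper is not reproduced.

      import numpy as np

      rng = np.random.default_rng(6)
      beta0, beta = 0.40, 0.42                      # simulated and target inverse temperatures
      E = rng.normal(-1000.0, 30.0, size=20_000)    # toy energy samples recorded at beta0
      O = -E                                        # toy observable correlated with the energy

      # Boltzmann reweighting from beta0 to beta (log scale for numerical safety).
      log_w = -(beta - beta0) * E
      w = np.exp(log_w - log_w.max())
      print("observable at beta0:", O.mean())
      print("reweighted to beta: ", np.sum(w * O) / np.sum(w))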

  15. 40 CFR 80.1644 - Sampling and testing requirements for producers and importers of certified ethanol denaturant.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... producers and importers of certified ethanol denaturant. 80.1644 Section 80.1644 Protection of Environment... ethanol denaturant. (a) Sample and test each batch of certified ethanol denaturant. (1) Producers and importers of certified ethanol denaturant shall collect a representative sample from each batch of...

  16. GeoLab Concept: The Importance of Sample Selection During Long Duration Human Exploration Mission

    NASA Technical Reports Server (NTRS)

    Calaway, M. J.; Evans, C. A.; Bell, M. S.; Graff, T. G.

    2011-01-01

    In the future when humans explore planetary surfaces on the Moon, Mars, and asteroids or beyond, the return of geologic samples to Earth will be a high priority for human spaceflight operations. All future sample return missions will have strict down-mass and volume requirements; methods for in-situ sample assessment and prioritization will be critical for selecting the best samples for return-to-Earth.

  17. 40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention...

  18. 40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 17 2012-07-01 2012-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention...

  19. Analysis of Host–Parasite Incongruence in Papillomavirus Evolution Using Importance Sampling

    PubMed Central

    Shah, Seena D.; Doorbar, John; Goldstein, Richard A.

    2010-01-01

    The papillomaviruses (PVs) are a family of viruses infecting several mammalian and nonmammalian species that cause cervical cancer in humans. The evolutionary history of the PVs as it associated with a wide range of host species is not well understood. Incongruities between the phylogenetic trees of various viral genes as well as between these genes and the host phylogenies suggest historical viral recombination as well as violations of strict virus–host cospeciation. The extent of recombination events among PVs is uncertain, however, and there is little evidence to support a theory of PV spread via recent host transfers. We have investigated incongruence between PV genes and hence, the possibility of recombination, using Bayesian phylogenetic methods. We find significant evidence for phylogenetic incongruence among the six PV genes E1, E2, E6, E7, L1, and L2, indicating substantial recombination. Analysis of E1 and L1 phylogenies suggests ancestral recombination events. We also describe a new method for examining alternative host–parasite association mechanisms by applying importance sampling to Bayesian divergence time estimation. This new approach is not restricted by a fixed viral tree topology or knowledge of viral divergence times, multiple parasite taxa per host may be included, and it can distinguish between prior divergence of the virus before host speciation and host transfer of the virus following speciation. Using this method, we find prior divergence of PV lineages associated with the ancestral mammalian host resulting in at least 6 PV lineages prior to speciation of this host. These PV lineages have then followed paths of prior divergence and cospeciation to eventually become associated with the extant host species. Only one significant instance of host transfer is supported, the transfer of the ancestral L1 gene between a Primate and Hystricognathi host based on the divergence times between the υ human type 41 and porcupine PVs. PMID:20093429

  20. Sampling Small Mammals in Southeastern Forests: The Importance of Trapping in Trees

    SciTech Connect

    Loeb, S.C.; Chapman, G.L.; Ridley, T.R.

    1999-01-01

    We investigated the effect of sampling methodology on the richness and abundance of small mammal communities in loblolly pine forests. Trapping in trees using Sherman live traps was included along with routine ground trapping using the same device. Estimates of species richness did not differ among samples in which tree traps were included or excluded. However, diversity indices (Shannon-Wiener, Simpson, Shannon and Brillouin) were strongly affected. The indices were significantly greater when tree samples were included, primarily as a result of flying squirrel captures. Without tree traps, the results suggested that cotton mice dominated the community. We recommend that tree traps be included in sampling.

  1. SU-E-T-491: Importance of Energy Dependent Protons Per MU Calibration Factors in IMPT Dose Calculations Using Monte Carlo Technique

    SciTech Connect

    Randeniya, S; Mirkovic, D; Titt, U; Guan, F; Mohan, R

    2014-06-01

    Purpose: In intensity-modulated proton therapy (IMPT), energy-dependent protons-per-monitor-unit (MU) calibration factors are important parameters that determine absolute dose values from energy deposition data obtained from Monte Carlo (MC) simulations. The purpose of this study was to assess the sensitivity of MC-computed absolute dose distributions to the protons/MU calibration factors in IMPT. Methods: A “verification plan” (i.e., the treatment beams applied individually to a water phantom) of a head and neck patient plan was calculated using the MC technique. The patient plan had three beams: one posterior-anterior (PA) and two anterior oblique. The dose prescription was 66 Gy in 30 fractions. Of the total MUs, 58% were delivered in the PA beam and 25% and 17% in the other two. Energy deposition data obtained from the MC simulation were converted to Gy using energy-dependent protons/MU calibration factors obtained from two methods. The first method is based on experimental measurements and MC simulations. The second is based on hand calculations of how many ion pairs are produced per proton in the dose monitor and how many ion pairs correspond to 1 MU (the vendor-recommended method). Dose distributions obtained from method one were compared with those from method two. Results: An average difference of 8% in the protons/MU calibration factors between methods one and two translated into a 27% difference in absolute dose values for the PA beam; although the dose distributions qualitatively preserved the shape of the 3D dose distribution, they differed quantitatively. For the two oblique beams, no significant difference in absolute dose was observed. Conclusion: The results demonstrate that protons/MU calibration factors can have a significant impact on absolute dose values in IMPT, depending on the fraction of MUs delivered. As the number of MUs increases, the effect of the calibration factors is amplified. In determining protons/MU calibration factors for MC dose calculations, the experimental method should be preferred.

  2. Bandpass Sampling--An Opportunity to Stress the Importance of In-Depth Understanding

    ERIC Educational Resources Information Center

    Stern, Harold P. E.

    2010-01-01

    Many bandpass signals can be sampled at rates lower than the Nyquist rate, allowing significant practical advantages. Illustrating this phenomenon after discussing (and proving) Shannon's sampling theorem provides a valuable opportunity for an instructor to reinforce the principle that innovation is possible when students strive to have a complete…

  3. 40 CFR 80.1347 - What are the sampling and testing requirements for refiners and importers?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Benzene Sampling, Testing and Retention Requirements § 80.1347 What are the sampling and testing... benzene requirements of this subpart, except as modified by paragraphs (a)(2), (a)(3) and (a)(4) of this... benzene concentration for compliance with the requirements of this subpart. (ii) Independent...

  4. 40 CFR 80.1347 - What are the sampling and testing requirements for refiners and importers?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Benzene Sampling, Testing and Retention Requirements § 80.1347 What are the sampling and testing... benzene requirements of this subpart, except as modified by paragraphs (a)(2), (a)(3) and (a)(4) of this..., 2015, to determine its benzene concentration for compliance with the requirements of this...

  5. 40 CFR 80.1347 - What are the sampling and testing requirements for refiners and importers?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Benzene Sampling, Testing and Retention Requirements § 80.1347 What are the sampling and testing... benzene requirements of this subpart, except as modified by paragraphs (a)(2), (a)(3) and (a)(4) of this... benzene concentration for compliance with the requirements of this subpart. (ii) Independent...

  6. An Efficient Independence Sampler for Updating Branches in Bayesian Markov chain Monte Carlo Sampling of Phylogenetic Trees.

    PubMed

    Aberer, Andre J; Stamatakis, Alexandros; Ronquist, Fredrik

    2016-01-01

    Sampling tree space is the most challenging aspect of Bayesian phylogenetic inference. The sheer number of alternative topologies is problematic by itself. In addition, the complex dependency between branch lengths and topology increases the difficulty of moving efficiently among topologies. Current tree proposals are fast but sample new trees using primitive transformations or re-mappings of old branch lengths. This reduces acceptance rates and presumably slows down convergence and mixing. Here, we explore branch proposals that do not rely on old branch lengths but instead are based on approximations of the conditional posterior. Using a diverse set of empirical data sets, we show that most conditional branch posteriors can be accurately approximated via a [Formula: see text] distribution. We empirically determine the relationship between the logarithmic conditional posterior density, its derivatives, and the characteristics of the branch posterior. We use these relationships to derive an independence sampler for proposing branches with an acceptance ratio of ~90% on most data sets. This proposal samples branches between 2× and 3× more efficiently than traditional proposals with respect to the effective sample size per unit of runtime. We also compare the performance of standard topology proposals with hybrid proposals that use the new independence sampler to update those branches that are most affected by the topological change. Our results show that hybrid proposals can sometimes noticeably decrease the number of generations necessary for topological convergence. Inconsistent performance gains indicate that branch updates are not the limiting factor in improving topological convergence for the currently employed set of proposals. However, our independence sampler might be essential for the construction of novel tree proposals that apply more radical topology changes. PMID:26231183
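
    An independence-sampler Metropolis-Hastings update of a single branch length can be sketched as below. The approximating distribution named in this record is redacted in the abstract ("[Formula: see text]"); a Gamma proposal is used here purely as an illustrative stand-in, and the toy conditional posterior is likewise hypothetical.

      import numpy as np
      from scipy import stats

      def log_post(b):
          # toy conditional posterior of one branch length (hypothetical)
          return stats.gamma.logpdf(b, a=3.0, scale=0.1)

      q = stats.gamma(a=2.5, scale=0.12)        # fixed approximating proposal (stand-in)

      rng = np.random.default_rng(4)
      b, accepted, n_steps = 0.2, 0, 10_000
      for _ in range(n_steps):
          b_new = q.rvs(random_state=rng)
          # independence-sampler Metropolis-Hastings acceptance ratio
          log_alpha = (log_post(b_new) - log_post(b)) + (q.logpdf(b) - q.logpdf(b_new))
          if np.log(rng.random()) < log_alpha:
              b, accepted = b_new, accepted + 1
      print("acceptance rate:", accepted / n_steps)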

  7. Sampling errors for satellite-derived tropical rainfall - Monte Carlo study using a space-time stochastic model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.

    1990-01-01

    Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
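
    A toy version of the sampling-error experiment can be sketched as follows: a synthetic rain-rate series is "observed" only at intermittent overpasses, and the monthly mean from those snapshots is compared with the true monthly mean. The AR(1) rain model and revisit interval are illustrative and are not tuned to GATE statistics.

      import numpy as np

      rng = np.random.default_rng(7)
      hours = 30 * 24                          # one "month" of hourly area-averaged rain rates
      rain = np.empty(hours)
      rain[0] = 1.0
      for k in range(1, hours):                # simple AR(1) stand-in for the rain process
          rain[k] = max(0.0, 0.95 * rain[k - 1] + rng.normal(0.0, 0.3))

      revisit = 12                             # hours between satellite overpasses
      sampled_mean = rain[::revisit].mean()
      true_mean = rain.mean()
      print("relative sampling error: "
            f"{100 * abs(sampled_mean - true_mean) / true_mean:.1f}%")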

  8. Sampling errors for satellite-derived tropical rainfall: Monte Carlo study using a space-time stochastic model

    SciTech Connect

    Bell, T.L.; Abdullah, A.; Martin, R.L.; North, G.R.

    1990-02-28

    Estimates of monthly average rainfall based on satellite observations from a low Earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The authors estimate the size of this error for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). They first examine in detail the statistical description of rainfall on scales from 1 to 10{sup 3} km, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10% of the mean for rainfall averaged over a 500 {times} 500 km{sup 2} area.

  9. The Importance of Sample Processing in Analysis of Asbestos Content in Rocks and Soils

    NASA Astrophysics Data System (ADS)

    Neumann, R. D.; Wright, J.

    2012-12-01

    Analysis of asbestos content in rocks and soils using Air Resources Board (ARB) Test Method 435 (M435) involves the processing of samples for subsequent analysis by polarized light microscopy (PLM). The use of different equipment and procedures by commercial laboratories to pulverize rock and soil samples could result in different particle size distributions. It has long been theorized that asbestos-containing samples can be over-pulverized to the point where the particle dimensions of the asbestos no longer meet the required 3:1 length-to-width aspect ratio or the particles become so small that they no longer can be tested for optical characteristics using PLM where maximum PLM magnification is typically 400X. Recent work has shed some light on this issue. ARB staff conducted an interlaboratory study to investigate variability in preparation and analytical procedures used by laboratories performing M435 analysis. With regard to sample processing, ARB staff found that different pulverization equipment and processing procedures produced powders that have varying particle size distributions. PLM analysis of the finest powders produced by one laboratory showed all but one of the 12 samples were non-detect or below the PLM reporting limit; in contrast to the other 36 coarser samples from the same field sample and processed by three other laboratories where 21 samples were above the reporting limit. The set of 12, exceptionally fine powder samples produced by the same laboratory was re-analyzed by transmission electron microscopy (TEM) and results showed that these samples contained asbestos above the TEM reporting limit. However, the use of TEM as a stand-alone analytical procedure, usually performed at magnifications between 3,000 to 20,000X, also has its drawbacks because of the miniscule mass of sample that this method examines. The small amount of powder analyzed by TEM may not be representative of the field sample. The actual mass of the sample powder analyzed by

  10. 40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... certify that the procedures meet the requirements of the ASTM procedures required under 40 CFR 80.330. (d... plus a sample of the ethanol used to conduct the handblend testing pursuant to § 80.69 must be retained....

  11. Importance of covariance components of waveform data with high sampling rate in seismic source inversion

    NASA Astrophysics Data System (ADS)

    Yagi, Y.; Fukahata, Y.

    2007-12-01

    As computer technology has advanced, it has become possible to observe seismic waves at higher sampling rates and to perform inversions for larger data sets. In general, waveform data with a higher sampling rate are needed to obtain a finer image of seismic source processes. This raises the question of whether there is any limit to the useful sampling rate in waveform inversion. In traditional seismic source inversion, covariance components of sampled waveform data have commonly been neglected. In fact, however, observed waveform data are not completely independent of each other, at least in the time domain, because they are always affected by un-elastic attenuation as seismic waves propagate through the Earth. In this study, we have developed a method of seismic source inversion that takes the data covariance into account, and applied it to teleseismic P-wave data of the 2003 Boumerdes-Zemmouri, Algeria earthquake. From a comparison of the final slip distributions inverted with the new and the traditional formulations, we found that the effect of covariance components is crucial for data sets with higher sampling rates (≥ 5 Hz). For higher sampling rates, the slip distributions from the new formulation are stable, whereas those from the traditional formulation tend to concentrate into small patches because the information content of the observed data is overestimated. Our result indicates that the un-elastic behaviour of the Earth limits the resolution of inverted seismic source models. It has been pointed out that seismic source models obtained from waveform data analyses are quite different from one another. One possible reason for the discrepancy is the neglect of covariance components. The new formulation should be useful for obtaining a standard seismic source model.
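
    The effect of neglecting data covariance can be illustrated with a small generalized-least-squares sketch: correlated noise (a crude stand-in for attenuation-induced temporal correlation) is fitted once with the full covariance and once assuming independent samples. The forward operator, correlation length, and noise level are all toy values.

      import numpy as np

      rng = np.random.default_rng(5)
      n, dt, tau, sigma = 200, 0.1, 0.5, 0.2
      t = np.arange(n) * dt
      G = np.column_stack([np.ones(n), t])          # toy linear forward operator
      m_true = np.array([1.0, -0.3])

      # Exponentially correlated data covariance: a crude stand-in for the temporal
      # correlation that attenuation introduces between neighbouring samples.
      C = sigma**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau)
      d = G @ m_true + np.linalg.cholesky(C) @ rng.normal(size=n)

      # Estimates with and without the covariance (both are unbiased here).
      m_ols = np.linalg.lstsq(G, d, rcond=None)[0]
      m_gls = np.linalg.solve(G.T @ np.linalg.solve(C, G), G.T @ np.linalg.solve(C, d))

      # Formal parameter uncertainty: ignoring correlations overstates the precision.
      cov_indep = sigma**2 * np.linalg.inv(G.T @ G)
      cov_gls = np.linalg.inv(G.T @ np.linalg.solve(C, G))
      print("OLS:", m_ols, " GLS:", m_gls)
      print("slope std, independence assumed:", np.sqrt(cov_indep[1, 1]))
      print("slope std, covariance included: ", np.sqrt(cov_gls[1, 1]))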

  12. On the importance of sampling variance to investigations of temporal variation in animal population size

    USGS Publications Warehouse

    Link, W.A.; Nichols, J.D.

    1994-01-01

    Our purpose here is to emphasize the need to properly deal with sampling variance when studying population variability and to present a means of doing so. We present an estimator for temporal variance of population size for the general case in which there are both sampling variances and covariances associated with estimates of population size. We illustrate the estimation approach with a series of population size estimates for black-capped chickadees (Parus atricapillus) wintering in a Connecticut study area and with a series of population size estimates for breeding populations of ducks in southwestern Manitoba.
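
    The core idea, stripped of the covariance terms handled by the general estimator in this record, can be sketched with a method-of-moments correction: the raw variance of the estimates overstates true temporal variance by roughly the mean sampling variance. All numbers below are made up for illustration.

      import numpy as np

      N_hat = np.array([120.0, 140.0, 95.0, 160.0, 130.0])    # annual population estimates
      var_hat = np.array([80.0, 95.0, 60.0, 110.0, 85.0])      # their sampling variances

      total_var = N_hat.var(ddof=1)                            # naive temporal variance
      process_var = max(total_var - var_hat.mean(), 0.0)       # sampling-variance corrected
      print("naive temporal variance:    ", total_var)
      print("corrected temporal variance:", process_var)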

  13. Sampling efficacy for the imported fire ant Solenopsis invicta (Hymenoptera: Formicidae)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The cost-effective detection of incipient invasive ant colonies before their establishment in new ranges is imperative for reducing their global impact and protection of national borders. We examined the sampling efficiency of food-baits, baited and un-baited pitfall traps in detecting isolated red ...

  14. 40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... certify that the procedures meet the requirements of the ASTM procedures required under 40 CFR 80.330. (d... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline...

  15. IMPORTANCE OF SAMPLE PH ON RECOVERY OF MUTAGENICITY FROM DRINKING WATER BY XAD RESINS

    EPA Science Inventory

    Sample pH and the presence of a chlorine residual were evaluated for their effects on the recovery of mutagenicity in drinking water following concentration by XAD resins. The levels of mutagenicity in the pH 2 concentrates were 7-8 fold higher than those of the pH 8 concentrates...

  16. 40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... certify that the procedures meet the requirements of the ASTM procedures required under 40 CFR 80.330. (d... 40 Protection of Environment 17 2013-07-01 2013-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline...

  17. 40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... certify that the procedures meet the requirements of the ASTM procedures required under 40 CFR 80.330. (d... 40 Protection of Environment 17 2012-07-01 2012-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline...

  18. 40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... certify that the procedures meet the requirements of the ASTM procedures required under 40 CFR 80.330. (d... 40 Protection of Environment 17 2014-07-01 2014-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline...

  19. Disclosing the Radio Loudness Distribution Dichotomy in Quasars: An Unbiased Monte Carlo Approach Applied to the SDSS-FIRST Quasar Sample

    NASA Astrophysics Data System (ADS)

    Baloković, M.; Smolčić, V.; Ivezić, Ž.; Zamorani, G.; Schinnerer, E.; Kelly, B. C.

    2012-11-01

    We investigate the dichotomy in the radio loudness distribution of quasars by modeling their radio emission and various selection effects using a Monte Carlo approach. The existence of two physically distinct quasar populations, the radio-loud and radio-quiet quasars, is controversial and over the last decade a bimodal distribution of radio loudness of quasars has been both affirmed and disputed. We model the quasar radio luminosity distribution with simple unimodal and bimodal distribution functions. The resulting simulated samples are compared to a fiducial sample of 8300 quasars drawn from the SDSS DR7 Quasar Catalog and combined with radio observations from the FIRST survey. Our results indicate that the SDSS-FIRST sample is best described by a radio loudness distribution which consists of two components, with (12 ± 1)% of sources in the radio-loud component. On the other hand, the evidence for a local minimum in the loudness distribution (bimodality) is not strong and we find that previous claims for its existence were probably affected by the incompleteness of the FIRST survey close to its faint limit. We also investigate the redshift and luminosity dependence of the radio loudness distribution and find tentative evidence that at high redshift radio-loud quasars were rarer, on average louder, and exhibited a smaller range in radio loudness. In agreement with other recent work, we conclude that the SDSS-FIRST sample strongly suggests that the radio loudness distribution of quasars is not a universal function, and that more complex models than presented here are needed to fully explain available observations.

  20. DISCLOSING THE RADIO LOUDNESS DISTRIBUTION DICHOTOMY IN QUASARS: AN UNBIASED MONTE CARLO APPROACH APPLIED TO THE SDSS-FIRST QUASAR SAMPLE

    SciTech Connect

    Balokovic, M.; Smolcic, V.; Ivezic, Z.; Zamorani, G.; Schinnerer, E.; Kelly, B. C.

    2012-11-01

    We investigate the dichotomy in the radio loudness distribution of quasars by modeling their radio emission and various selection effects using a Monte Carlo approach. The existence of two physically distinct quasar populations, the radio-loud and radio-quiet quasars, is controversial and over the last decade a bimodal distribution of radio loudness of quasars has been both affirmed and disputed. We model the quasar radio luminosity distribution with simple unimodal and bimodal distribution functions. The resulting simulated samples are compared to a fiducial sample of 8300 quasars drawn from the SDSS DR7 Quasar Catalog and combined with radio observations from the FIRST survey. Our results indicate that the SDSS-FIRST sample is best described by a radio loudness distribution which consists of two components, with (12 ± 1)% of sources in the radio-loud component. On the other hand, the evidence for a local minimum in the loudness distribution (bimodality) is not strong and we find that previous claims for its existence were probably affected by the incompleteness of the FIRST survey close to its faint limit. We also investigate the redshift and luminosity dependence of the radio loudness distribution and find tentative evidence that at high redshift radio-loud quasars were rarer, on average louder, and exhibited a smaller range in radio loudness. In agreement with other recent work, we conclude that the SDSS-FIRST sample strongly suggests that the radio loudness distribution of quasars is not a universal function, and that more complex models than presented here are needed to fully explain available observations.

  1. 19 CFR 19.8 - Examination of goods by importer; sampling; repacking; examination of merchandise by prospective...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...; repacking; examination of merchandise by prospective purchasers. 19.8 Section 19.8 Customs Duties U.S... goods by importer; sampling; repacking; examination of merchandise by prospective purchasers. Importers... conduct of Customs business and no danger to the revenue prospective purchaser may be permitted to...

  2. Importance of sampling design and analysis in animal population studies: a comment on Sergio et al

    USGS Publications Warehouse

    Kery, M.; Royle, J. Andrew; Schmid, H.

    2008-01-01

    1. The use of predators as indicators and umbrellas in conservation has been criticized. In the Trentino region, Sergio et al. (2006; hereafter SEA) counted almost twice as many bird species in quadrats located in raptor territories as in controls. However, SEA detected astonishingly few species. We used contemporary Swiss Breeding Bird Survey data from an adjacent region and a novel statistical model that corrects for overlooked species to estimate the expected number of bird species per quadrat in that region. 2. There are two anomalies in SEA which render their results ambiguous. First, SEA detected on average only 6.8 species, whereas a value of 32 might be expected. Hence, they probably overlooked almost 80% of all species. Secondly, the precision of their mean species counts was greater in two-thirds of cases than in the unlikely case that all quadrats harboured exactly the same number of equally detectable species. This suggests that they consistently detected only a biased, unrepresentative subset of species. 3. Conceptually, expected species counts are the product of true species number and species detectability p. Plenty of factors may affect p, including date, hour, observer, previous knowledge of a site and mobbing behaviour of passerines in the presence of predators. Such differences in p between raptor and control quadrats could have easily created the observed effects. Without a method that corrects for such biases, or without quantitative evidence that species detectability was indeed similar between raptor and control quadrats, the meaning of SEA's counts is hard to evaluate. Therefore, the evidence presented by SEA in favour of raptors as indicator species for enhanced levels of biodiversity remains inconclusive. 4. Synthesis and application. Ecologists should pay greater attention to sampling design and analysis in animal population estimation. Species richness estimation means sampling a community. Samples should be representative for the

  3. Importance of closely spaced vertical sampling in delineating chemical and microbiological gradients in groundwater studies

    USGS Publications Warehouse

    Smith, R.L.; Harvey, R.W.; LeBlanc, D.R.

    1991-01-01

    Vertical gradients of selected chemical constituents, bacterial populations, bacterial activity and electron acceptors were investigated for an unconfined aquifer contaminated with nitrate and organic compounds on Cape Cod, Massachusetts, U.S.A. Fifteen-port multilevel sampling devices (MLS's) were installed within the contaminant plume at the source of the contamination, and at 250 and 2100 m downgradient from the source. Depth profiles of specific conductance and dissolved oxygen at the downgradient sites exhibited vertical gradients that were both steep and inversely related. Narrow zones (2-4 m thick) of high N2O and NH4+ concentrations were also detected within the contaminant plume. A 27-fold change in bacterial abundance; a 35-fold change in frequency of dividing cells (FDC), an indicator of bacterial growth; a 23-fold change in 3H-glucose uptake, a measure of heterotrophic activity; and substantial changes in overall cell morphology were evident within a 9-m vertical interval at 250 m downgradient. The existence of these gradients argues for the need for closely spaced vertical sampling in groundwater studies because small differences in the vertical placement of a well screen can lead to incorrect conclusions about the chemical and microbiological processes within an aquifer.

  4. Sampling of Organic Solutes in Aqueous and Heterogeneous Environments Using Oscillating Excess Chemical Potentials in Grand Canonical-like Monte Carlo-Molecular Dynamics Simulations

    PubMed Central

    2015-01-01

    Solute sampling of explicit bulk-phase aqueous environments in grand canonical (GC) ensemble simulations suffers from poor convergence due to low insertion probabilities of the solutes. To address this, we developed an iterative procedure involving Grand Canonical-like Monte Carlo (GCMC) and molecular dynamics (MD) simulations. Each iteration involves GCMC of both the solutes and water followed by MD, with the excess chemical potential (μex) of both the solute and the water oscillated to attain their target concentrations in the simulation system. By periodically varying the μex of the water and solutes over the GCMC-MD iterations, solute exchange probabilities and the spatial distributions of the solutes improved. The utility of the oscillating-μex GCMC-MD method is indicated by its ability to approximate the hydration free energy (HFE) of the individual solutes in aqueous solution as well as in dilute aqueous mixtures of multiple solutes. For seven organic solutes: benzene, propane, acetaldehyde, methanol, formamide, acetate, and methylammonium, the average μex of the solutes and the water converged close to their respective HFEs in both 1 M standard state and dilute aqueous mixture systems. The oscillating-μex GCMC methodology is also able to drive solute sampling in proteins in aqueous environments, as shown using the occluded binding pocket of the T4 lysozyme L99A mutant as a model system. The approach was shown to satisfactorily reproduce the free energy of binding of benzene as well as sample the functional group requirements of the occluded pocket, consistent with the crystal structures of known ligands bound to the L99A mutant and with their relative binding affinities. PMID:24932136

  5. Characterization of spinal cord lesions in cattle and horses with rabies: the importance of correct sampling.

    PubMed

    Bassuino, Daniele M; Konradt, Guilherme; Cruz, Raquel A S; Silva, Gustavo S; Gomes, Danilo C; Pavarini, Saulo P; Driemeier, David

    2016-07-01

    Twenty-six cattle and 7 horses were diagnosed with rabies. Samples of brain and spinal cord were processed for hematoxylin and eosin staining and immunohistochemistry (IHC). In addition, refrigerated fragments of brain and spinal cord were tested by direct fluorescent antibody test and intracerebral inoculation in mice. Statistical analyses, including the Fisher exact test, were performed using commercial software. Histologic lesions were observed in the spinal cord in all of the cattle and horses. Inflammatory lesions in horses were moderate at the thoracic, lumbar, and sacral levels, and marked at the lumbar enlargement level. Gitter cells were present in large numbers in the lumbar enlargement region. IHC staining intensity ranged from moderate to strong. Inflammatory lesions in cattle were moderate in all spinal cord sections, and gitter cells were present in small numbers. IHC staining intensity was strong in all spinal cord sections. Only 2 horses exhibited lesions in the brain, located mainly in the obex and cerebellum; in contrast, brain lesions were observed in 25 of the cattle. The Fisher exact test showed that the odds of detecting lesions caused by rabies in horses are 3.5 times higher when spinal cord sections are analyzed, as compared to analysis of brain samples alone. PMID:27240569

  6. De novo mutations from sporadic schizophrenia cases highlight important signaling genes in an independent sample.

    PubMed

    Kranz, Thorsten M; Harroch, Sheila; Manor, Orly; Lichtenberg, Pesach; Friedlander, Yechiel; Seandel, Marco; Harkavy-Friedman, Jill; Walsh-Messinger, Julie; Dolgalev, Igor; Heguy, Adriana; Chao, Moses V; Malaspina, Dolores

    2015-08-01

    Schizophrenia is a debilitating syndrome with high heritability. Genomic studies reveal more than a hundred genetic variants, largely nonspecific and of small effect size, and not accounting for its high heritability. De novo mutations are one mechanism whereby disease-related alleles may be introduced into the population, although these have not been leveraged to explore the disease in general samples. This paper describes a framework to find high impact genes for schizophrenia. This study consists of two different datasets. First, whole exome sequencing was conducted to identify disruptive de novo mutations in 14 complete parent-offspring trios with sporadic schizophrenia from Jerusalem, which identified 5 sporadic cases with de novo gene mutations in 5 different genes (PTPRG, TGM5, SLC39A13, BTK, CDKN3). Next, targeted exome capture of these genes was conducted in 48 well-characterized, unrelated, ethnically diverse schizophrenia cases, recruited and characterized by the same research team in New York (NY sample), which demonstrated extremely rare and potentially damaging variants in three of the five genes (MAF<0.01) in 12/48 cases (25%); including PTPRG (5 cases), SLC39A13 (4 cases) and TGM5 (4 cases), a higher number than usually identified by whole exome sequencing. Cases differed in cognition and illness features based on which mutation-enriched gene they carried. Functional de novo mutations in protein-interaction domains in sporadic schizophrenia can illuminate risk genes that increase the propensity to develop schizophrenia across ethnicities. PMID:26091878

  7. The importance of effective sampling for exploring the population dynamics of haploid-diploid seaweeds.

    PubMed

    Krueger-Hadfield, Stacy A; Hoban, Sean M

    2016-02-01

    The mating system partitions genetic diversity within and among populations and the links between life history traits and mating systems have been extensively studied in diploid organisms. As such most evolutionary theory is focused on species for which sexual reproduction occurs between diploid male and diploid female individuals. However, there are many multicellular organisms with biphasic life cycles in which the haploid stage is prolonged and undergoes substantial somatic development. In particular, biphasic life cycles are found across green, brown and red macroalgae. Yet, few studies have addressed the population structure and genetic diversity in both the haploid and diploid stages in these life cycles. We have developed some broad guidelines with which to develop population genetic studies of haploid-diploid macroalgae and to quantify the relationship between power and sampling strategy. We address three common goals for studying macroalgal population dynamics, including haploid-diploid ratios, genetic structure and paternity analyses. PMID:26987084

  8. Antimicrobial activity of different solvent extracted samples from the flowers of medicinally important Plumeria obstusa.

    PubMed

    Ali, Nasir; Ahmad, Dawood; Bakht, Jehan

    2015-01-01

    The present research work was carried out to investigate the antimicrobial (eight bacteria and one fungus) activities of different solvent (ethanol, petroleum ether, chloroform, ethyl acetate and isobutanol) extracted samples from flowers of P. obstusa by the disc diffusion method. Analysis of the data revealed that all five extracts from flowers of P. obstusa showed different ranges of antimicrobial activities. Petroleum ether fractions showed inhibitory activities against all nine microbial species except Klebsiella pneumoniae. Ethyl acetate and isobutanol fractions showed inhibitory effects against all the tested microbial species except Pseudomonas aeruginosa. Chloroform and ethanol extracts had varying levels of inhibition against all of the tested microorganisms. The most susceptible gram positive bacterium was Bacillus subtilis, which was inhibited by all five extracts, while the most resistant gram positive bacterium was Staphylococcus aureus. Erwinia carotovora was the most susceptible gram negative bacterium, while Pseudomonas aeruginosa was highly resistant among the gram negative bacteria. PMID:25553696

  9. Analysis of the influence of germanium dead layer on detector calibration simulation for environmental radioactive samples using the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Ródenas, J.; Pascual, A.; Zarza, I.; Serradell, V.; Ortiz, J.; Ballesteros, L.

    2003-01-01

    Germanium crystals have a dead layer that causes a decrease in efficiency, since the layer is not useful for detection but strongly attenuates photons. The thickness of this inactive layer is not well known due to the existence of a transition zone where photons are increasingly absorbed. Therefore, using data provided by manufacturers in the detector simulation model, some strong discrepancies appear between calculated and measured efficiencies. The Monte Carlo method is applied to simulate the calibration of an HPGe detector in order to determine the total inactive germanium layer thickness and the active volume that are needed in order to obtain the minimum discrepancy between estimated and experimental efficiency. Calculations and measurements were performed for all of the radionuclides included in a standard calibration gamma cocktail solution. A Marinelli beaker was considered for this analysis, as it is one of the most commonly used sample containers for environmental radioactivity measurements. Results indicated that a good agreement between calculated and measured efficiencies is obtained using a value for the inactive germanium layer thickness equal to approximately twice the value provided by the detector manufacturer. For all energy peaks included in the calibration, the best agreement with experimental efficiency was found using a combination of a small thickness of the inactive germanium layer and a small detection active volume.

  10. Identification of oil residues in Roman amphorae (Monte Testaccio, Rome): a comparative FTIR spectroscopic study of archeological and artificially aged samples.

    PubMed

    Tarquini, Gabriele; Nunziante Cesaro, Stella; Campanella, Luigi

    2014-01-01

    The application of Fourier Transform InfraRed (FTIR) spectroscopy to the analysis of oil residues in fragments of archeological amphorae (3rd century A.D.) from Monte Testaccio (Rome, Italy) is reported. In order to check the possibility of revealing the presence of oil residues in archeological pottery using micro-invasive and/or non-invasive techniques, different approaches have been followed: firstly, FTIR spectroscopy was used to study oil residues extracted from Roman amphorae. Secondly, the presence of oil residues was ascertained by analyzing microamounts of archeological fragments with Diffuse Reflectance Infrared Fourier Transform Spectroscopy (DRIFT). Finally, external reflection analysis of the ancient shards was performed without preliminary treatments, demonstrating the possibility of detecting oil traces through the observation of the most intense features of its spectrum. Incidentally, the existence of carboxylate salts of fatty acids was also observed in the DRIFT and reflectance spectra of archeological samples, supporting the Roman habit of spreading lime over the spoil heaps. The data collected in all steps were always compared with results obtained on purposely made replicas. PMID:24274288

  11. Pre-conditioned backward Monte Carlo solutions to radiative transport in planetary atmospheres. Fundamentals: Sampling of propagation directions in polarising media

    NASA Astrophysics Data System (ADS)

    García Muñoz, A.; Mills, F. P.

    2015-01-01

    Context. The interpretation of polarised radiation emerging from a planetary atmosphere must rely on solutions to the vector radiative transport equation (VRTE). Monte Carlo integration of the VRTE is a valuable approach for its flexible treatment of complex viewing and/or illumination geometries, and it can intuitively incorporate elaborate physics. Aims: We present a novel pre-conditioned backward Monte Carlo (PBMC) algorithm for solving the VRTE and apply it to planetary atmospheres irradiated from above. Like classical BMC methods, our PBMC algorithm builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. Methods: We show that the neglect of polarisation in the sampling of photon propagation directions in classical BMC algorithms leads to unstable and biased solutions for conservative, optically-thick, strongly polarising media such as Rayleigh atmospheres. The numerical difficulty is avoided by pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions. Pre-conditioning introduces a sense of history in the photon polarisation states through the simulated trajectories. Results: The PBMC algorithm is robust, and its accuracy is extensively demonstrated via comparisons with examples drawn from the literature for scattering in diverse media. Since the convergence rate for MC integration is independent of the integral's dimension, the scheme is a valuable option for estimating the disk-integrated signal of stellar radiation reflected from planets. Such a tool is relevant in the prospective investigation of exoplanetary phase curves. We lay out two frameworks for disk integration and, as an application, explore the impact of atmospheric stratification on planetary phase curves for large star-planet-observer phase angles. By construction, backward integration provides a better

  12. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079
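
    The near-quadratic quantum speedup described above is measured against the classical baseline of plain sample averaging, whose root-mean-square error shrinks like σ/√N. The sketch below (Python, illustrative only, with a made-up toy subroutine standing in for the "randomized subroutine with bounded variance") shows that classical scaling; the quantum algorithm itself is not reproduced here.

        # Classical baseline that the quantum algorithm accelerates: estimate the
        # expected output of a randomized subroutine by plain sample averaging.
        import numpy as np

        rng = np.random.default_rng(0)

        def randomized_subroutine(n):
            # Toy subroutine with bounded variance: noisy evaluations with mean 0.5.
            return rng.beta(2.0, 2.0, size=n)

        for n in (10**2, 10**4, 10**6):
            x = randomized_subroutine(n)
            est = x.mean()
            stderr = x.std(ddof=1) / np.sqrt(n)   # error shrinks like sigma / sqrt(N)
            print(f"N={n:>7d}  estimate={est:.4f}  std. error={stderr:.5f}")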

  13. 40 CFR 80.583 - What alternative sampling and testing requirements apply to importers who transport motor vehicle...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What alternative sampling and testing requirements apply to importers who transport motor vehicle diesel fuel, NRLM diesel fuel, or ECA marine fuel by truck or rail car? 80.583 Section 80.583 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...

  14. 40 CFR 80.583 - What alternative sampling and testing requirements apply to importers who transport motor vehicle...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... requirements apply to importers who transport motor vehicle diesel fuel, NRLM diesel fuel, or ECA marine fuel... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Motor Vehicle Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Sampling and Testing § 80.583...

  15. Monte Carlo burnup code acceleration with the correlated sampling method. Preliminary test on an UOX cell with TRIPOLI-4{sup R}

    SciTech Connect

    Dieudonne, C.; Dumonteil, E.; Malvagi, F.; Diop, C. M.

    2013-07-01

    For several years, Monte Carlo burnup/depletion codes have appeared which couple a Monte Carlo code, to simulate the neutron transport, with a deterministic method that computes the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way allows fine three-dimensional effects to be tracked and avoids the multi-group approximations made by deterministic solvers. The counterpart is the prohibitive calculation time due to the expensive Monte Carlo solver called at each time step. Therefore, great improvements in terms of calculation time could be expected if most of the Monte Carlo transport sequences could be avoided. For example, it may seem interesting to run an initial Monte Carlo simulation only once, for the first time/burnup step, and then to use the concentration perturbation capability of the Monte Carlo code to replace the other time/burnup steps (the different burnup steps are seen as perturbations of the concentrations of the initial burnup step). This paper presents some advantages and limitations of this technique and preliminary results in terms of speed-up and figure of merit. Finally, we detail different possible calculation schemes based on this method. (authors)

  16. Extending canonical Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Velazquez, L.; Curilef, S.

    2010-02-01

    In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods on the basis of the consideration of the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities C < 0. The resulting framework appears to be a suitable generalization of the methodology associated with the so-called dynamical ensemble, which is applied to the extension of two well-known Monte Carlo methods: the Metropolis importance sampling and the Swendsen-Wang cluster algorithm. These Monte Carlo algorithms are employed to study the anomalous thermodynamic behavior of the Potts models with many spin states q defined on a d-dimensional hypercubic lattice with periodic boundary conditions, and successfully reduce the exponential divergence of the decorrelation time τ with increasing system size N to a weak power-law divergence τ ∝ N^α with α ≈ 0.2 for the particular case of the 2D ten-state Potts model.
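
    The first of the two samplers extended above is standard Metropolis importance sampling for the q-state Potts model. As a point of reference, a minimal canonical-ensemble Metropolis sketch for a small 2D Potts lattice with periodic boundaries is given below (Python; the lattice size, temperature and coupling are assumed toy values, and the paper's generalized dynamical-ensemble weights are not implemented).

        # Minimal Metropolis importance sampling for a 2D q-state Potts model with
        # periodic boundaries (standard canonical sampler; sketch only).
        import numpy as np

        rng = np.random.default_rng(1)
        L, q, beta, J = 16, 10, 1.0, 1.0            # assumed toy parameters
        spins = rng.integers(q, size=(L, L))

        def local_energy(s, i, j, value):
            # -J per aligned nearest neighbour (periodic boundaries).
            nn = [s[(i + 1) % L, j], s[(i - 1) % L, j],
                  s[i, (j + 1) % L], s[i, (j - 1) % L]]
            return -J * sum(value == n for n in nn)

        for sweep in range(200):
            for _ in range(L * L):
                i, j = rng.integers(L, size=2)
                new = rng.integers(q)
                dE = local_energy(spins, i, j, new) - local_energy(spins, i, j, spins[i, j])
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    spins[i, j] = new

        # Fraction of sites in the majority state, a simple order-parameter proxy.
        print(np.bincount(spins.ravel(), minlength=q).max() / (L * L))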

  17. Clinical importance and representation of toxigenic and non-toxigenic Clostridium difficile cultivated from stool samples of hospitalized patients

    PubMed Central

    Predrag, Stojanovic; Branislava, Kocic; Miodrag, Stojanovic; Biljana, Miljkovic – Selimovic; Suzana, Tasic; Natasa, Miladinovic – Tasic; Tatjana, Babic

    2012-01-01

    The aim of this study was to establish the clinical importance and representation of toxigenic and non-toxigenic Clostridium difficile isolated from stool samples of hospitalized patients. This survey included 80 hospitalized patients with diarrhea and positive findings of Clostridium difficile in stool samples, and 100 hospitalized patients with formed stool as a control group. Bacteriological examination of the stool samples was conducted using standard microbiological methods. Stool samples were inoculated directly onto nutrient media for bacterial cultivation (blood agar with 5% sheep blood, Endo agar, selective Salmonella Shigella agar, Selenite-F broth, CIN agar and Skirrow’s medium), and onto selective cycloserine-cefoxitin-fructose agar (CCFA) (Biomedics, Parque Tecnológico, Madrid, Spain) for isolation of Clostridium difficile. Clostridium difficile toxin was detected by ELISA-ridascreen Clostridium difficile Toxin A/B (R-Biopharm AG, Germany) and the ColorPAC Toxin A test (Becton Dickinson, USA). Examination of stool specimens for the presence of parasites (causing diarrhea) was done using standard methods (conventional microscopy), the commercial concentration test Paraprep S Gold kit (Dia Mondial, France) and the RIDA®QUICK Cryptosporidium/Giardia Combi test (R-Biopharm AG, Germany). Examination of stool specimens for the presence of fungi (causing diarrhea) was performed by standard methods. All stool samples positive for Clostridium difficile were tested for Rota, Noro, Astro and Adeno viruses by ELISA – ridascreen (R-Biopharm AG, Germany). In this research we isolated 99 Clostridium difficile strains from 116 stool samples of 80 hospitalized patients with diarrhea. Fifty-three (66.25%) of the patients with diarrhea were positive for toxins A and B, and one (1.25%) was positive for toxin B only. Non-toxigenic Clostridium difficile was isolated from samples of 26 (32.5%) patients. However, other pathogenic microorganisms of the intestinal tract were cultivated from samples of 16 patients

  18. Monte Carlo Simulation for Perusal and Practice.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.

    Many problems in statistics can be meaningfully investigated through Monte Carlo methods. Monte Carlo studies can help solve mathematically intractable problems through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…

  19. Monte Carlo techniques for real-time quantum dynamics

    SciTech Connect

    Dowling, Mark R. . E-mail: dowling@physics.uq.edu.au; Davis, Matthew J.; Drummond, Peter D.; Corney, Joel F.

    2007-01-10

    The stochastic-gauge representation is a method of mapping the equation of motion for the quantum mechanical density operator onto a set of equivalent stochastic differential equations. One of the stochastic variables is termed the 'weight', and its magnitude is related to the importance of the stochastic trajectory. We investigate the use of Monte Carlo algorithms to improve the sampling of the weighted trajectories and thus reduce sampling error in a simulation of quantum dynamics. The method can be applied to calculations in real time, as well as imaginary time for which Monte Carlo algorithms are more commonly used. The Monte Carlo algorithms are applicable when the weight is guaranteed to be real, and we demonstrate how to ensure this is the case. Examples are given for the anharmonic oscillator, where large improvements over stochastic sampling are observed.

  20. THE IMPORTANCE OF THE MAGNETIC FIELD FROM AN SMA-CSO-COMBINED SAMPLE OF STAR-FORMING REGIONS

    SciTech Connect

    Koch, Patrick M.; Tang, Ya-Wen; Ho, Paul T. P.; Chen, Huei-Ru Vivien; Liu, Hau-Yu Baobab; Yen, Hsi-Wei; Lai, Shih-Ping; Zhang, Qizhou; Chen, How-Huan; Ching, Tao-Chung; Girart, Josep M.; Frau, Pau; Li, Hua-Bai; Li, Zhi-Yun; Padovani, Marco; Qiu, Keping; Rao, Ramprasad

    2014-12-20

    Submillimeter dust polarization measurements of a sample of 50 star-forming regions, observed with the Submillimeter Array (SMA) and the Caltech Submillimeter Observatory (CSO) covering parsec-scale clouds to milliparsec-scale cores, are analyzed in order to quantify the magnetic field importance. The magnetic field misalignment δ—the local angle between magnetic field and dust emission gradient—is found to be a prime observable, revealing distinct distributions for sources where the magnetic field is preferentially aligned with or perpendicular to the source minor axis. Source-averaged misalignment angles ⟨|δ|⟩ fall into systematically different ranges, reflecting the different source-magnetic field configurations. Possible bimodal ⟨|δ|⟩ distributions are found for the separate SMA and CSO samples. Combining both samples broadens the distribution with a wide maximum peak at small ⟨|δ|⟩ values. Assuming the 50 sources to be representative, the prevailing source-magnetic field configuration is one that statistically prefers small magnetic field misalignments |δ|. When interpreting |δ| together with a magnetohydrodynamics force equation, as developed in the framework of the polarization-intensity gradient method, a sample-based log-linear scaling fits the magnetic field tension-to-gravity force ratio ⟨Σ_B⟩ versus ⟨|δ|⟩ with ⟨Σ_B⟩ = 0.116 · exp(0.047 · ⟨|δ|⟩) ± 0.20 (mean error), providing a way to estimate the relative importance of the magnetic field, only based on measurable field misalignments |δ|. The force ratio Σ_B discriminates systems that are collapsible on average (⟨Σ_B⟩ < 1) from other molecular clouds where the magnetic field still provides enough resistance against gravitational collapse (⟨Σ_B⟩ > 1). The sample-wide trend shows a transition around ⟨|δ|⟩ ≈ 45°. Defining an effective gravitational force ∼1 − ⟨Σ_B⟩, the average magnetic-field-reduced star formation efficiency is at least a

  1. Monte Carlo and quasi-Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Caflisch, Russel E.

    Monte Carlo is one of the most versatile and widely used numerical methods. Its convergence rate, O(N^(-1/2)), is independent of dimension, which shows Monte Carlo to be very robust but also slow. This article presents an introduction to Monte Carlo methods for integration problems, including convergence theory, sampling methods and variance reduction techniques. Accelerated convergence for Monte Carlo quadrature is attained using quasi-random (also called low-discrepancy) sequences, which are a deterministic alternative to random or pseudo-random sequences. The points in a quasi-random sequence are correlated to provide greater uniformity. The resulting quadrature method, called quasi-Monte Carlo, has a convergence rate of approximately O((log N)^k N^(-1)). For quasi-Monte Carlo, both theoretical error estimates and practical limitations are presented. Although the emphasis in this article is on integration, Monte Carlo simulation of rarefied gas dynamics is also discussed. In the limit of small mean free path (that is, the fluid dynamic limit), Monte Carlo loses its effectiveness because the collisional distance is much less than the fluid dynamic length scale. Computational examples are presented throughout the text to illustrate the theory. A number of open problems are described.
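
    A minimal illustration of the contrast drawn above between pseudo-random and quasi-random quadrature: the sketch below (Python) estimates a smooth 2-D integral with plain Monte Carlo and with a Halton low-discrepancy sequence. The integrand and sample size are arbitrary choices for demonstration, not taken from the article.

        # Plain Monte Carlo vs. quasi-Monte Carlo (Halton sequence) for a smooth
        # 2-D integral; illustrative sketch of the convergence behaviour described above.
        import numpy as np

        def halton(n, base):
            # Van der Corput / Halton low-discrepancy sequence in one dimension.
            seq = np.zeros(n)
            for i in range(n):
                f, x, k = 1.0, 0.0, i + 1
                while k > 0:
                    f /= base
                    x += f * (k % base)
                    k //= base
                seq[i] = x
            return seq

        def integrand(u, v):
            return np.exp(-(u**2 + v**2))        # exact value on [0,1]^2 is about 0.557746

        rng = np.random.default_rng(2)
        n = 4096
        u_mc, v_mc = rng.random(n), rng.random(n)
        u_qmc, v_qmc = halton(n, 2), halton(n, 3)  # coprime bases for the two dimensions
        print("MC  estimate:", integrand(u_mc, v_mc).mean())
        print("QMC estimate:", integrand(u_qmc, v_qmc).mean())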

  2. Importance of participation rate in sampling of data in population based studies, with special reference to bone mass in Sweden.

    PubMed Central

    Düppe, H; Gärdsell, P; Hanson, B S; Johnell, O; Nilsson, B E

    1996-01-01

    OBJECTIVE: To study the effects of participation rate in sampling on "normative" bone mass data. DESIGN: This was a comparison between two randomly selected samples from the same population. The participation rates in the two samples were 61.9% and 83.6%. Measurements were made of bone mass at different skeletal sites and of muscle strength, as well as an assessment of physical activity. SETTING: Malmö, Sweden. SUBJECTS: There were 230 subjects (117 men, 113 women), aged 21 to 42 years. RESULTS: A total of 163 subjects participated in both studies. The 67 subjects who took part only in the study with the higher participation rate almost invariably had higher values for bone mass density at the sites measured (up to 7.6% for men) than participants in the study with the lower participation rate. No differences in muscle strength were recorded. CONCLUSION: A high degree of compliance is important to achieve a reliable result in determining normal values in population based studies. PMID:8762383

  3. On the Importance of Accounting for Competing Risks in Pediatric Brain Cancer: II. Regression Modeling and Sample Size

    SciTech Connect

    Tai, Bee-Choo; Grundy, Richard; Machin, David

    2011-03-15

    Purpose: To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. Methods and Materials: We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. Results: The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Conclusions: Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest.

  4. The Importance of Sample Return in Establishing Chemical Evidence for Life on Mars or Other Solar System Bodies

    NASA Technical Reports Server (NTRS)

    Glavin, D. P.; Conrad, P.; Dworkin, J. P.; Eigenbrode, J.; Mahaffy, P. R.

    2011-01-01

    The search for evidence of life on Mars and elsewhere will continue to be one of the primary goals of NASA's robotic exploration program over the next decade. NASA and ESA are currently planning a series of robotic missions to Mars with the goal of understanding its climate, resources, and potential for harboring past or present life. One key goal will be the search for chemical biomarkers including complex organic compounds important in life on Earth. These include amino acids, the monomer building blocks of proteins and enzymes, nucleobases and sugars which form the backbone of DNA and RNA, and lipids, the structural components of cell membranes. Many of these organic compounds can also be formed abiotically as demonstrated by their prevalence in carbonaceous meteorites [1], though their molecular characteristics may distinguish a biological source [2]. It is possible that in situ instruments may reveal such characteristics; however, return of the right sample (i.e. one with biosignatures or having a high probability of biosignatures) to Earth would allow for more intensive laboratory studies using a broad array of powerful instrumentation for bulk characterization, molecular detection, isotopic and enantiomeric compositions, and spatially resolved chemistry that may be required for confirmation of extant or extinct Martian life. Here we will discuss the current analytical capabilities and strategies for the detection of organics on the Mars Science Laboratory (MSL) using the Sample Analysis at Mars (SAM) instrument suite and how sample return missions from Mars and other targets of astrobiological interest will help advance our understanding of chemical biosignatures in the solar system.

  5. Stable isotope probing reveals the importance of Comamonas and Pseudomonadaceae in RDX degradation in samples from a Navy detonation site.

    PubMed

    Jayamani, Indumathy; Cupples, Alison M

    2015-07-01

    This study investigated the microorganisms involved in hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) degradation from a detonation area at a Navy base. Using Illumina sequencing, microbial communities were compared between the initial sample, samples following RDX degradation, and controls not amended with RDX to determine which phylotypes increased in abundance following RDX degradation. The effect of glucose on these communities was also examined. In addition, stable isotope probing (SIP) using labeled ((13)C3, (15)N3-ring) RDX was performed. Illumina sequencing revealed that several phylotypes were more abundant following RDX degradation compared to the initial soil and the no-RDX controls. For the glucose-amended samples, this trend was strong for an unclassified Pseudomonadaceae phylotype and for Comamonas. Without glucose, Acinetobacter exhibited the greatest increase following RDX degradation compared to the initial soil and no-RDX controls. Rhodococcus, a known RDX degrader, also increased in abundance following RDX degradation. For the SIP study, unclassified Pseudomonadaceae was the most abundant phylotype in the heavy fractions in both the presence and absence of glucose. In the glucose-amended heavy fractions, the 16S ribosomal RNA (rRNA) genes of Comamonas and Anaeromyxobacter were also present. Without glucose, the heavy fractions also contained the 16S rRNA genes of Azohydromonas and Rhodococcus. However, all four phylotypes were present at a much lower level compared to unclassified Pseudomonadaceae. Overall, these data indicate that unclassified Pseudomonadaceae was primarily responsible for label uptake in both treatments. This study indicates, for the first time, the importance of Comamonas for RDX removal. PMID:25721530

  6. ADAPTIVE ANNEALED IMPORTANCE SAMPLING FOR MULTIMODAL POSTERIOR EXPLORATION AND MODEL SELECTION WITH APPLICATION TO EXTRASOLAR PLANET DETECTION

    SciTech Connect

    Liu, Bin

    2014-07-01

    We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem into finding a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then carefully tailor its parameters to make it resemble the posterior as closely as possible. We use the effective sample size (ESS) calculated from the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute-force methods simply preset it to a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
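
    The approximation measure used above, the effective sample size (ESS) computed from importance-sampling draws, can be illustrated in a few lines. The sketch below (Python) uses a toy one-dimensional bimodal target and a single Gaussian proposal, both invented for illustration; the ESS is 1/Σ w_i² with self-normalized weights w_i.

        # Effective sample size (ESS) of importance-sampling draws; toy 1-D bimodal
        # target and a single Gaussian proposal (illustrative sketch only).
        import numpy as np

        rng = np.random.default_rng(3)

        def log_target(x):
            # Unnormalised bimodal density: mixture of two Gaussians.
            return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

        def log_proposal(x, mu=0.0, sigma=4.0):
            return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

        n = 5000
        x = rng.normal(0.0, 4.0, size=n)          # draws from the proposal
        logw = log_target(x) - log_proposal(x)
        w = np.exp(logw - logw.max())
        w /= w.sum()                              # self-normalised weights
        ess = 1.0 / np.sum(w ** 2)                # ESS = 1 / sum(w_i^2)
        print(f"ESS = {ess:.0f} of {n} draws")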

  7. An atlas of selected beta-ray spectra and depth-dose distributions in lithium fluoride and soft tissue generated by a fast Monte-Carlo-based sampling method

    NASA Astrophysics Data System (ADS)

    Samei, Ehsan; Kearfott, Kimberlee J.; Gillespie, Timothy J.; Chris Wang, C.-K.

    1996-12-01

    A method to generate depth-dose distributions due to beta radiation in LiF and soft tissue is proposed. In this method, the EGS4 Monte Carlo radiation transport code is initially used to generate a library of monoenergetic electron depth-dose distributions in the material for electron energies in the range of 10 keV to 5 MeV in 10 keV increments. A polynomial least-squares fit is applied to each distribution. In addition, a theoretical model is developed to generate beta-ray energy spectra of selected radionuclides. A standard Monte Carlo random sampling technique is then employed to sample the spectra and generate the depth-dose distributions in LiF and soft tissue. The proposed method has an advantage over more traditional methods in that the actual radiation transport in the media is performed only once for a set of monoenergetic cases and the beta depth-dose distributions are easily generated by sampling this previously-acquired database in a matter of minutes. This method therefore reduces the demand on computer resources and time. The method can be used to calculate depth-dose distribution due to any beta-emitting nuclide or combination of nuclides with up to ten beta components.
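
    The key step described above is random sampling of a tabulated beta spectrum followed by a lookup in the precomputed monoenergetic depth-dose library. The sketch below (Python) shows that sampling-and-averaging step with placeholder tables; the spectrum shape and library values are invented for illustration and are not the EGS4-generated data of the paper.

        # Sketch of the sampling step: draw beta energies from a tabulated spectrum by
        # inverse-CDF sampling and average the matching entries of a precomputed
        # monoenergetic depth-dose library. All tables below are placeholders.
        import numpy as np

        rng = np.random.default_rng(4)
        energies = np.linspace(0.01, 2.00, 200)            # MeV grid, 10 keV steps
        spectrum = energies * np.exp(-3.0 * energies)      # fake beta spectrum shape
        cdf = np.cumsum(spectrum)
        cdf /= cdf[-1]

        depths = np.linspace(0.0, 1.0, 50)                 # depth grid (arbitrary units)
        # Placeholder library: one attenuation-like depth-dose curve per grid energy.
        library = np.exp(-np.outer(5.0 / energies, depths))

        n_hist = 50_000
        idx = np.searchsorted(cdf, rng.random(n_hist))     # inverse-CDF energy draws
        depth_dose = library[idx].mean(axis=0)             # Monte Carlo averaged curve
        print(depth_dose[:5])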

  8. Sexual violence and HIV risk behaviors among a nationally representative sample of heterosexual American women: The importance of sexual coercion

    PubMed Central

    Stockman, Jamila K; Campbell, Jacquelyn C; Celentano, David D

    2009-01-01

    Objectives Recent evidence suggests that it is important to consider behavioral-specific sexual violence measures in assessing women’s risk behaviors. This study investigated associations of history and types of sexual coercion on HIV risk behaviors in a nationally representative sample of heterosexually active American women. Methods Analyses were based on 5,857 women aged 18–44 participating in the 2002 National Survey of Family Growth. Types of lifetime sexual coercion included: victim given alcohol or drugs, verbally pressured, threatened with physical injury, and physically injured. Associations with HIV risk behaviors were assessed using logistic regression. Results Of 5,857 heterosexually active women, 16.4% reported multiple sex partners and 15.3% reported substance abuse. A coerced first sexual intercourse experience and coerced sex after sexual debut were independently associated with multiple sex partners and substance abuse; the highest risk was observed for women reporting a coerced first sexual intercourse experience. Among types of sexual coercion, alcohol or drug use at coerced sex was independently associated with multiple sex partners and substance abuse. Conclusions Our findings suggest that public health strategies are needed to address the violent components of heterosexual relationships. Future research should utilize longitudinal and qualitative research to characterize the relationship between continuums of sexual coercion and HIV risk. PMID:19734802

  9. Nasal swab samples and real-time polymerase chain reaction assays in community-based, longitudinal studies of respiratory viruses: the importance of sample integrity and quality control

    PubMed Central

    2014-01-01

    Background Carefully conducted, community-based, longitudinal studies are required to gain further understanding of the nature and timing of respiratory viruses causing infections in the population. However, such studies pose unique challenges for field specimen collection, including, as we have observed, the appearance of mould in some nasal swab specimens. We therefore investigated the impact of sample collection quality and of the presence of visible mould in samples upon respiratory virus detection by real-time polymerase chain reaction (PCR) assays. Methods Anterior nasal swab samples were collected from infants participating in an ongoing community-based, longitudinal, dynamic birth cohort study. Samples were collected from each infant shortly after birth and weekly thereafter. They were then mailed to the laboratory where they were catalogued, stored at -80°C and later screened by PCR for 17 respiratory viruses. The quality of specimen collection was assessed by screening for human deoxyribonucleic acid (DNA) using endogenous retrovirus 3 (ERV3). We then determined the impact of ERV3 load upon respiratory virus detection, and the impact of visible mould (observed in a subset of swabs reaching the laboratory) upon both ERV3 loads and respiratory virus detection. Results In total, 4933 nasal swabs were received in the laboratory. ERV3 load in nasal swabs was associated with respiratory virus detection. Reduced respiratory virus detection (odds ratio 0.35; 95% confidence interval 0.27-0.44) was observed in samples where ERV3 could not be identified. Mould was associated with increased time for samples to reach the laboratory, and with reduced ERV3 loads and respiratory virus detection. Conclusion Suboptimal sample collection and high levels of visible mould can impact negatively upon sample quality. Quality control measures, including monitoring human DNA loads using ERV3 as a marker for epithelial cell components in samples, should be undertaken to optimize the

  10. Reverse Monte Carlo study of spherical sample under non-periodic boundary conditions: the structure of Ru nanoparticles based on x-ray diffraction data

    NASA Astrophysics Data System (ADS)

    Gereben, Orsolya; Petkov, Valeri

    2013-11-01

    A new method to fit experimental diffraction data with non-periodic structure models for spherical particles was implemented in the reverse Monte Carlo simulation code. The method was tested on x-ray diffraction data for ruthenium (Ru) nanoparticles approximately 5.6 nm in diameter. It was found that the atomic ordering in the ruthenium nanoparticles is quite distorted, barely resembling the hexagonal structure of bulk Ru. The average coordination number decreased from the bulk value of 12 to 11.25. A similar lack of structural order has been observed with other nanoparticles (e.g. Petkov et al 2008 J. Phys. Chem. C 112 8907-11), indicating that atomic disorder is a widespread feature of nanoparticles less than 10 nm in diameter.

  11. A Monte Carlo simulation study comparing linear regression, beta regression, variable-dispersion beta regression and fractional logit regression at recovering average difference measures in a two sample design

    PubMed Central

    2014-01-01

    Background In biomedical research, response variables are often encountered which have bounded support on the open unit interval (0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to those of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. Methods In the Monte Carlo experiment we assume a simple two-sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. Results If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar
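
    A skeleton of the two-sample Monte Carlo design described above is sketched below (Python): beta-distributed responses are simulated with prescribed group means and a common dispersion, and the bias and Monte Carlo error of the raw difference in sample means are tabulated over replicates. The parameter values are illustrative, and the actual regression fits (beta, variable-dispersion beta, fractional logit) are omitted.

        # Skeleton of a two-sample Monte Carlo experiment with beta-distributed
        # responses; parameters are illustrative, not taken from the paper.
        import numpy as np

        rng = np.random.default_rng(5)
        mu0, mu1, phi = 0.40, 0.55, 10.0              # group means and common dispersion
        n_per_group, n_rep = 25, 2000
        true_diff = mu1 - mu0

        diffs = np.empty(n_rep)
        for r in range(n_rep):
            y0 = rng.beta(mu0 * phi, (1 - mu0) * phi, size=n_per_group)
            y1 = rng.beta(mu1 * phi, (1 - mu1) * phi, size=n_per_group)
            diffs[r] = y1.mean() - y0.mean()

        bias = diffs.mean() - true_diff
        mc_error = diffs.std(ddof=1) / np.sqrt(n_rep)  # Monte Carlo error of the bias
        print(f"bias = {bias:+.4f} +/- {mc_error:.4f}")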

  12. Monte Carlo Integration Using Spatial Structure of Markov Random Field

    NASA Astrophysics Data System (ADS)

    Yasuda, Muneki

    2015-03-01

    Monte Carlo integration (MCI) techniques are important in various fields. In this study, a new MCI technique for Markov random fields (MRFs) is proposed. MCI consists of two successive parts: the first involves sampling using a technique such as the Markov chain Monte Carlo method, and the second involves an averaging operation using the obtained sample points. In the averaging operation, a simple sample averaging technique is often employed. The method proposed in this paper improves the averaging operation by addressing the spatial structure of the MRF and is mathematically guaranteed to statistically outperform standard MCI using the simple sample averaging operation. Moreover, the proposed method can be improved in a systematic manner and is numerically verified by numerical simulations using planar Ising models. In the latter part of this paper, the proposed method is applied to the inverse Ising problem and we observe that it outperforms the maximum pseudo-likelihood estimation.

  13. RESULTS FROM EPA FUNDED RESEARCH PROGRAMS ON THE IMPORTANCE OF PURGE VOLUME, SAMPLE VOLUME, SAMPLE FLOW RATE AND TEMPORAL VARIATIONS ON SOIL GAS CONCENTRATIONS

    EPA Science Inventory

    Two research studies funded and overseen by EPA have been conducted since October 2006 on soil gas sampling methods and variations in shallow soil gas concentrations with the purpose of improving our understanding of soil gas methods and data for vapor intrusion applications. Al...

  14. Best Practices in Using Large, Complex Samples: The Importance of Using Appropriate Weights and Design Effect Compensation

    ERIC Educational Resources Information Center

    Osborne, Jason W.

    2011-01-01

    Large surveys often use probability sampling in order to obtain representative samples, and these data sets are valuable tools for researchers in all areas of science. Yet many researchers are not formally prepared to appropriately utilize these resources. Indeed, users of one popular dataset were generally found "not" to have modeled the analyses…

  15. 40 CFR 80.1642 - Sampling and testing requirements for producers and importers of denatured fuel ethanol and other...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... producers and importers of denatured fuel ethanol and other oxygenates for use by oxygenate blenders. 80... requirements for producers and importers of denatured fuel ethanol and other oxygenates for use by oxygenate blenders. Beginning January 1, 2017, producers and importers of denatured fuel ethanol (DFE) and...

  16. Development of "best practices" for sampling of an important surface-dwelling soil mite in pastoral landscapes.

    PubMed

    Nansen, Christian; Gumley, Jerome; Groves, Lloyd; Nansen, Maria; Severtson, Dustin; Ridsdill-Smith, Thomas James

    2015-07-01

    In this study, we analyzed 1145 vacuum samples of redlegged earth mites (RLEM) [Halotydeus destructor (Tucker) (Acari: Penthaleidae)] from 18 sampling events at six locations in pastoral landscapes of Western Australia during three growing seasons (2012-2014) (total of 228,299 RLEM individuals). The specific objectives were to determine: (1) presence/absence effects of a range of vegetation characteristics, (2) possible factors influencing RLEM sampling performance during the course of the season and day, (3) effects of size of area sampled and duration of sampling, (4) the spatial structure of RLEM counts in uniform pastoral vegetation, and (5) to develop "best practices" regarding field-based vacuum sampling of surface-dwelling soil mites in pastoral landscapes. We found that sampling of completely bare ground will lead to very low RLEM counts, but spots with sparse vegetation (some bare ground present) probably offer more microhabitats for mites to shelter in and therefore lead to higher RLEM counts. RLEM counts were positively associated with the height of vegetation, at least up to about 15 cm in height. In early-season sampling (May-August), the highest RLEM counts will be obtained in the afternoon hours (2-4 pm), whereas in late-season sampling (August-November), the highest RLEM counts will be obtained around noon. Higher RLEM counts should be expected from spots with grazed or mowed vegetation, including cape weed, and without grasses and stubble present. Variogram analyses of high-resolution data sets suggested that a considerable range of spatial autocorrelation should be expected in fields with fairly uniform vegetation, especially if RLEM population densities are high. We therefore recommend that samples be collected at least 30 m apart if the objective is to obtain independent (spatially uncorrelated) counts. The results from this study may be used to develop effective sampling protocols deployed in field ecology studies of soil surface dwelling

  17. Monte Carlo Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  18. 40 CFR 80.1645 - Sample retention requirements for producers and importers of denaturant designated as suitable...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... producers and importers of denaturant designated as suitable for the manufacture of denatured fuel ethanol... suitable for the manufacture of denatured fuel ethanol meeting federal quality requirements. Beginning January 1, 2017, or on the first day that any producer or importer of ethanol denaturant designates...

  19. 76 FR 65165 - Importation of Plants for Planting; Risk-Based Sampling and Inspection Approach and Propagative...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-20

    ... plant part) for or capable of propagation, including a tree, a tissue culture, a plantlet culture... data to establish that the plants for planting present a medium or low risk. If a taxon of plants for planting from a certain country is determined to present a medium or low risk, it will be sampled at...

  20. Evaluating the performance of sampling plans to detect hypoglycin A in ackee fruit shipments imported into the United States

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hypoglycin A (HGA) is a toxic amino acid that is naturally produced in unripe ackee fruit. In 1973 the FDA placed a worldwide import alert on ackee fruit, which banned the product from entering the U.S. The FDA has considered establishing a regulatory limit for HGA and lifting the ban, which will re...

  1. DNA-flow cytometry of head and neck carcinoma: the importance of uniform tissue sampling and tumor sites.

    PubMed

    Westerbeek, H A; Mooi, W J; Begg, C; Dessing, M; Balm, A J

    1992-01-01

    Flow cytometric DNA ploidy measurements using deparaffinized tumor specimens were performed on 46 squamous cell carcinomas of the head and neck, including 22 carcinomas of the oropharynx, 18 carcinomas of the larynx and six carcinomas of the oral cavity. Aneuploidy was found in 14 of these tumors with carcinomas of the larynx and oral cavity showing almost equal percentages of DNA aneuploidy (10/18 and 3/6, respectively). In contrast, only 1 of the oropharyngeal carcinomas was aneuploid. Accurate microscopy-controlled sampling of tumor tissue from the histological tissue blocks was found to be mandatory in order to obtain reliable ploidy measurements. PMID:1642865

  2. Cluster analysis of molecular simulation trajectories for systems where both conformation and orientation of the sampled states are important.

    PubMed

    Abramyan, Tigran M; Snyder, James A; Thyparambil, Aby A; Stuart, Steven J; Latour, Robert A

    2016-08-01

    Clustering methods have been widely used to group together similar conformational states from molecular simulations of biomolecules in solution. For applications such as the interaction of a protein with a surface, the orientation of the protein relative to the surface is also an important clustering parameter because of its potential effect on adsorbed-state bioactivity. This study presents cluster analysis methods that are specifically designed for systems where both molecular orientation and conformation are important, and the methods are demonstrated using test cases of adsorbed proteins for validation. Additionally, because cluster analysis can be a very subjective process, an objective procedure for identifying both the optimal number of clusters and the best clustering algorithm to be applied to analyze a given dataset is presented. The method is demonstrated for several agglomerative hierarchical clustering algorithms used in conjunction with three cluster validation techniques. © 2016 Wiley Periodicals, Inc. PMID:27292100

  3. Monte Carlo Example Programs

    Energy Science and Technology Software Center (ESTSC)

    2006-05-09

    The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo Mathematical technique for calculating the ground state energy of the hydrogen atom.
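
    In the same spirit as these example programs (though not code taken from them), a minimal variational Monte Carlo estimate of the hydrogen ground-state energy uses Metropolis sampling of |psi|^2 for the trial function psi(r) = exp(-alpha*r), whose local energy in atomic units is E_L = -alpha^2/2 + (alpha - 1)/r; the step size and sample counts below are arbitrary choices.

        import numpy as np

        def vmc_hydrogen_energy(alpha=0.9, n_steps=200_000, step=0.5, seed=0):
            """Variational Monte Carlo for the H atom with trial wavefunction psi = exp(-alpha * r)."""
            rng = np.random.default_rng(seed)
            r = np.array([1.0, 0.0, 0.0])                  # walker position (Bohr radii)
            local_energies = []
            for i in range(n_steps):
                trial = r + step * rng.uniform(-1.0, 1.0, 3)         # symmetric Metropolis move
                # acceptance ratio |psi(trial)|^2 / |psi(r)|^2 = exp(-2*alpha*(|trial| - |r|))
                if rng.random() < np.exp(-2.0 * alpha * (np.linalg.norm(trial) - np.linalg.norm(r))):
                    r = trial
                if i > n_steps // 10:                                # discard burn-in
                    local_energies.append(-0.5 * alpha**2 + (alpha - 1.0) / np.linalg.norm(r))
            return np.mean(local_energies)

        # Expect about -0.495 Hartree for alpha = 0.9; the exact ground state, -0.5, is recovered at alpha = 1.
        print(vmc_hydrogen_energy())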

  4. Integration of morphological data sets for phylogenetic analysis of Amniota: the importance of integumentary characters and increased taxonomic sampling.

    PubMed

    Hill, Robert V

    2005-08-01

    Several mutually exclusive hypotheses have been advanced to explain the phylogenetic position of turtles among amniotes. Traditional morphology-based analyses place turtles among extinct anapsids (reptiles with a solid skull roof), whereas more recent studies of both morphological and molecular data support an origin of turtles from within Diapsida (reptiles with a doubly fenestrated skull roof). Evaluation of these conflicting hypotheses has been hampered by nonoverlapping taxonomic samples and the exclusion of significant taxa from published analyses. Furthermore, although data from soft tissues and anatomical systems such as the integument may be particularly relevant to this problem, they are often excluded from large-scale analyses of morphological systematics. Here, conflicting hypotheses of turtle relationships are tested by (1) combining published data into a supermatrix of morphological characters to address issues of character conflict and missing data; (2) increasing taxonomic sampling by more than doubling the number of operational taxonomic units to test internal relationships within suprageneric ingroup taxa; and (3) increasing character sampling by approximately 25% by adding new data on the osteology and histology of the integument, an anatomical system that has been historically underrepresented in morphological systematics. The morphological data set assembled here represents the largest yet compiled for Amniota. Reevaluation of character data from prior studies of amniote phylogeny favors the hypothesis that turtles indeed have diapsid affinities. Addition of new ingroup taxa alone leads to a decrease in overall phylogenetic resolution, indicating that existing characters used for amniote phylogeny are insufficient to explain the evolution of more highly nested taxa. Incorporation of new data from the soft and osseous components of the integument, however, helps resolve relationships among both basal and highly nested amniote taxa. Analysis of a

  5. Novel Hybrid Monte Carlo/Deterministic Technique for Shutdown Dose Rate Analyses of Fusion Energy Systems

    SciTech Connect

    Ibrahim, Ahmad M; Peplow, Douglas E.; Peterson, Joshua L; Grove, Robert E

    2013-01-01

    The rigorous 2-step (R2S) method uses three-dimensional Monte Carlo transport simulations to calculate the shutdown dose rate (SDDR) in fusion reactors. Accurate full-scale R2S calculations are impractical in fusion reactors because they require calculating space- and energy-dependent neutron fluxes everywhere inside the reactor. The use of global Monte Carlo variance reduction techniques was suggested for accelerating the neutron transport calculation of the R2S method. The prohibitive computational costs of these approaches, which increase with the problem size and amount of shielding materials, inhibit their use in the accurate full-scale neutronics analyses of fusion reactors. This paper describes a novel hybrid Monte Carlo/deterministic technique that uses the Consistent Adjoint Driven Importance Sampling (CADIS) methodology but focuses on multi-step shielding calculations. The Multi-Step CADIS (MS-CADIS) method speeds up the Monte Carlo neutron calculation of the R2S method using an importance function that represents the importance of the neutrons to the final SDDR. In a simplified example, preliminary results showed that the use of MS-CADIS enhanced the efficiency of the neutron Monte Carlo simulation of an SDDR calculation by a factor of 550 compared to standard global variance reduction techniques, and by a factor of more than 10,000 compared to analog Monte Carlo.
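
    The core CADIS idea, biasing the source by an adjoint importance function and compensating with statistical weights, can be illustrated with a toy one-dimensional example (invented numbers, not the MS-CADIS implementation): the biased source PDF is proportional to q(x) times the importance, each particle carries weight q/q_hat, and the variance of the tally collapses when the importance matches the response exactly.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy 1-D problem on a grid of source cells (all numbers are made up).
        x = np.linspace(0.0, 10.0, 100)
        q = np.exp(-0.2 * x); q /= q.sum()           # analog source PDF over cells
        importance = np.exp(0.4 * (x - 10.0))        # stand-in adjoint importance, peaking at the detector
        response = importance                        # in this toy the tally per source particle IS the importance

        # CADIS-style biased source: q_hat proportional to q * importance, weight w = q / q_hat.
        q_hat = q * importance; q_hat /= q_hat.sum()
        w = q / q_hat

        def estimate(pdf, weights, n=100_000):
            cells = rng.choice(len(x), size=n, p=pdf)
            scores = weights[cells] * response[cells]
            return scores.mean(), scores.std() / np.sqrt(n)

        print("analog sampling:", estimate(q, np.ones_like(q)))
        print("CADIS-biased   :", estimate(q_hat, w))   # same mean, (near) zero variance in this ideal toy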

  6. Extra Chance Generalized Hybrid Monte Carlo

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.; Sanz-Serna, J. M.

    2015-01-01

    We study a method, Extra Chance Generalized Hybrid Monte Carlo, to avoid rejections in the Hybrid Monte Carlo method and related algorithms. In the spirit of delayed rejection, whenever a rejection would occur, extra work is done to find a fresh proposal that, hopefully, may be accepted. We present experiments that clearly indicate that the additional work per sample carried out in the extra-chance approach pays off in terms of the quality of the samples generated.

  7. Quasi-Monte Carlo integration

    SciTech Connect

    Morokoff, W.J.; Caflisch, R.E.

    1995-12-01

    The standard Monte Carlo approach to evaluating multidimensional integrals using (pseudo)-random integration nodes is frequently used when quadrature methods are too difficult or expensive to implement. As an alternative to the random methods, it has been suggested that lower error and improved convergence may be obtained by replacing the pseudo-random sequences with more uniformly distributed sequences known as quasi-random. In this paper quasi-random (Halton, Sobol', and Faure) and pseudo-random sequences are compared in computational experiments designed to determine the effects on convergence of certain properties of the integrand, including variance, variation, smoothness, and dimension. The results show that variation, which plays an important role in the theoretical upper bound given by the Koksma-Hlawka inequality, does not affect convergence, while variance, the determining factor in random Monte Carlo, is shown to provide a rough upper bound, but does not accurately predict performance. In general, quasi-Monte Carlo methods are superior to random Monte Carlo, but the advantage may be slight, particularly in high dimensions or for integrands that are not smooth. For discontinuous integrands, we derive a bound which shows that the exponent for algebraic decay of the integration error from quasi-Monte Carlo is only slightly larger than 1/2 in high dimensions. 21 refs., 6 figs., 5 tabs.
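
    For readers who want to reproduce the qualitative comparison, SciPy's quasi-Monte Carlo generators make it easy to contrast pseudo-random, Sobol' and Halton nodes on a smooth separable test integrand whose exact integral over the unit cube is 1; the dimension, sample size and integrand below are arbitrary choices, not those of the paper.

        import numpy as np
        from scipy.stats import qmc

        d, n, exact = 6, 2**12, 1.0
        f = lambda u: np.prod(np.abs(4.0 * u - 2.0), axis=1)   # separable test integrand, integral = 1

        rng = np.random.default_rng(0)
        mc_estimate = f(rng.random((n, d))).mean()              # pseudo-random Monte Carlo

        sobol_estimate = f(qmc.Sobol(d=d, scramble=True, seed=0).random(n)).mean()
        halton_estimate = f(qmc.Halton(d=d, seed=0).random(n)).mean()

        print("pseudo-random error:", abs(mc_estimate - exact))
        print("Sobol' error       :", abs(sobol_estimate - exact))
        print("Halton error       :", abs(halton_estimate - exact))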

  8. Quasi-Monte Carlo Integration

    NASA Astrophysics Data System (ADS)

    Morokoff, William J.; Caflisch, Russel E.

    1995-12-01

    The standard Monte Carlo approach to evaluating multidimensional integrals using (pseudo)-random integration nodes is frequently used when quadrature methods are too difficult or expensive to implement. As an alternative to the random methods, it has been suggested that lower error and improved convergence may be obtained by replacing the pseudo-random sequences with more uniformly distributed sequences known as quasi-random. In this paper quasi-random (Halton, Sobol', and Faure) and pseudo-random sequences are compared in computational experiments designed to determine the effects on convergence of certain properties of the integrand, including variance, variation, smoothness, and dimension. The results show that variation, which plays an important role in the theoretical upper bound given by the Koksma-Hlawka inequality, does not affect convergence, while variance, the determining factor in random Monte Carlo, is shown to provide a rough upper bound, but does not accurately predict performance. In general, quasi-Monte Carlo methods are superior to random Monte Carlo, but the advantage may be slight, particularly in high dimensions or for integrands that are not smooth. For discontinuous integrands, we derive a bound which shows that the exponent for algebraic decay of the integration error from quasi-Monte Carlo is only slightly larger than {1}/{2} in high dimensions.

  9. Fault-slip accumulation in an active rift over thousands to millions of years and the importance of paleoearthquake sampling

    NASA Astrophysics Data System (ADS)

    Mouslopoulou, Vasiliki; Nicol, Andrew; Walsh, John; Begg, John; Townsend, Dougal; Hristopulos, Dionissios

    2013-04-01

    The catastrophic earthquakes that recently (September 4th, 2010 and February 22nd, 2011) hit Christchurch, New Zealand, show that active faults, capable of generating large-magnitude earthquakes, can be hidden beneath the Earth's surface. In this study we combine near-surface paleoseismic data with deep (<5 km) onshore seismic-reflection lines to explore the growth of normal faults over short (<27 kyr) and long (>1 Ma) timescales in the Taranaki Rift, New Zealand. Our analysis shows that the integration of different timescale datasets provides a basis for identifying active faults not observed at the ground surface, estimating maximum fault-rupture lengths, inferring maximum short-term displacement rates and improving earthquake hazard assessment. We find that fault displacement rates become increasingly irregular (both faster and slower) on shorter timescales, leading to incomplete sampling of the active-fault population. Surface traces have been recognised for <50% of the active faults and along ∼50% of their lengths. The similarity of along-strike displacement profiles for short and long time intervals suggests that fault lengths and maximum single-event displacements have not changed over the last 3.6 Ma. Therefore, rate changes are likely to reflect temporal adjustments in earthquake recurrence intervals due to fault interactions and associated migration of earthquake activity within the rift.

  10. Fault-slip accumulation in an active rift over thousands to millions of years and the importance of paleoearthquake sampling

    NASA Astrophysics Data System (ADS)

    Mouslopoulou, Vasiliki; Nicol, Andrew; Walsh, John J.; Begg, John G.; Townsend, Dougal B.; Hristopulos, Dionissios T.

    2012-03-01

    The catastrophic earthquakes that recently (September 4th, 2010 and February 22nd, 2011) hit Christchurch, New Zealand, show that active faults, capable of generating large-magnitude earthquakes, can be hidden beneath the Earth's surface. In this article we combine near-surface paleoseismic data with deep (<5 km) onshore seismic-reflection lines to explore the growth of normal faults over short (<27 kyr) and long (>1 Ma) timescales in the Taranaki Rift, New Zealand. Our analysis shows that the integration of different timescale datasets provides a basis for identifying active faults not observed at the ground surface, estimating maximum fault-rupture lengths, inferring maximum short-term displacement rates and improving earthquake hazard assessment. We find that fault displacement rates become increasingly irregular (both faster and slower) on shorter timescales, leading to incomplete sampling of the active-fault population. Surface traces have been recognised for <50% of the active faults and along ≤50% of their lengths. The similarity of along-strike displacement profiles for short and long time intervals suggests that fault lengths and maximum single-event displacements have not changed over the last 3.6 Ma. Therefore, rate changes are likely to reflect temporal adjustments in earthquake recurrence intervals due to fault interactions and associated migration of earthquake activity within the rift.

  11. Who art thou? Personality predictors of artistic preferences in a large UK sample: the importance of openness.

    PubMed

    Chamorro-Premuzic, Tomas; Reimers, Stian; Hsu, Anne; Ahmetoglu, Gorkan

    2009-08-01

    The present study examined individual differences in artistic preferences in a sample of 91,692 participants (60% women and 40% men), aged 13-90 years. Participants completed a Big Five personality inventory (Goldberg, 1999) and provided preference ratings for 24 different paintings corresponding to cubism, renaissance, impressionism, and Japanese art, which loaded on to a latent factor of overall art preferences. As expected, the personality trait openness to experience was the strongest and only consistent personality correlate of artistic preferences, affecting both overall and specific preferences, as well as visits to galleries, and artistic (rather than scientific) self-perception. Overall preferences were also positively influenced by age and visits to art galleries and, to a lesser degree, by artistic self-perception and conscientiousness (negatively). As for specific styles, after overall preferences were accounted for, more agreeable, more conscientious and less open individuals reported higher preference levels for impressionism; younger and more extraverted participants showed higher levels of preference for cubism (as did males); and younger participants, as well as males, reported higher levels of preference for renaissance art. Limitations and recommendations for future research are discussed. PMID:19026107

  12. Mapping Transmission Risk of Lassa Fever in West Africa: The Importance of Quality Control, Sampling Bias, and Error Weighting

    PubMed Central

    Peterson, A. Townsend; Moses, Lina M.; Bausch, Daniel G.

    2014-01-01

    Lassa fever is a disease that has been reported from sites across West Africa; it is caused by an arenavirus that is hosted by the rodent M. natalensis. Although it is confined to West Africa, and has been documented in detail in some well-studied areas, the details of the distribution of risk of Lassa virus infection remain poorly known at the level of the broader region. In this paper, we explored the effects of certainty of diagnosis, oversampling in well-studied regions, and error balance on results of mapping exercises. Each of the three factors assessed in this study had clear and consistent influences on model results, overestimating risk in southern, humid zones in West Africa, and underestimating risk in drier and more northern areas. The final, adjusted risk map indicates broad risk areas across much of West Africa. Although risk maps are increasingly easy to develop from disease occurrence data and raster data sets summarizing aspects of environments and landscapes, this process is highly sensitive to issues of data quality, sampling design, and design of analysis, with macrogeographic implications of each of these issues and the potential for misrepresenting real patterns of risk. PMID:25105746

  13. Mapping transmission risk of Lassa fever in West Africa: the importance of quality control, sampling bias, and error weighting.

    PubMed

    Peterson, A Townsend; Moses, Lina M; Bausch, Daniel G

    2014-01-01

    Lassa fever is a disease that has been reported from sites across West Africa; it is caused by an arenavirus that is hosted by the rodent M. natalensis. Although it is confined to West Africa, and has been documented in detail in some well-studied areas, the details of the distribution of risk of Lassa virus infection remain poorly known at the level of the broader region. In this paper, we explored the effects of certainty of diagnosis, oversampling in well-studied regions, and error balance on results of mapping exercises. Each of the three factors assessed in this study had clear and consistent influences on model results, overestimating risk in southern, humid zones in West Africa, and underestimating risk in drier and more northern areas. The final, adjusted risk map indicates broad risk areas across much of West Africa. Although risk maps are increasingly easy to develop from disease occurrence data and raster data sets summarizing aspects of environments and landscapes, this process is highly sensitive to issues of data quality, sampling design, and design of analysis, with macrogeographic implications of each of these issues and the potential for misrepresenting real patterns of risk. PMID:25105746

  14. Use of single scatter electron monte carlo transport for medical radiation sciences

    DOEpatents

    Svatos, Michelle M.

    2001-01-01

    The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
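
    The low-level operation such a single-scatter code repeats billions of times, sampling the next interaction from tabulated data, amounts to discrete sampling over interaction channels plus an inverse-CDF lookup for the scattering angle. The sketch below uses invented tables purely for illustration; it is not the CREEP library format.

        import numpy as np

        rng = np.random.default_rng(42)

        # Invented tabulated data at one electron energy: channel probabilities and an
        # angular distribution tabulated on a cos(theta) grid.
        channels = ["elastic", "ionisation", "bremsstrahlung"]
        channel_prob = np.array([0.70, 0.25, 0.05])

        cos_theta_grid = np.linspace(-1.0, 1.0, 201)
        pdf = np.exp(8.0 * cos_theta_grid)              # made-up forward-peaked angular PDF
        cdf = np.cumsum(pdf); cdf /= cdf[-1]

        def sample_interaction():
            channel = rng.choice(channels, p=channel_prob)             # which process occurs
            cos_theta = np.interp(rng.random(), cdf, cos_theta_grid)   # inverse-CDF angle sample
            return channel, float(cos_theta)

        print([sample_interaction() for _ in range(3)])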

  15. Sonochemical degradation of ethyl paraben in environmental samples: Statistically important parameters determining kinetics, by-products and pathways.

    PubMed

    Papadopoulos, Costas; Frontistis, Zacharias; Antonopoulou, Maria; Venieri, Danae; Konstantinou, Ioannis; Mantzavinos, Dionissios

    2016-07-01

    The sonochemical degradation of ethyl paraben (EP), a representative of the parabens family, was investigated. Experiments were conducted at constant ultrasound frequency of 20 kHz and liquid bulk temperature of 30°C in the following range of experimental conditions: EP concentration 250-1250 μg/L, ultrasound (US) density 20-60 W/L, reaction time up to 120 min, initial pH 3-8 and sodium persulfate 0-100 mg/L, either in ultrapure water or secondary treated wastewater. A factorial design methodology was adopted to elucidate the statistically important effects and their interactions and a full empirical model comprising seventeen terms was originally developed. Omitting several terms of lower significance, a reduced model that can reliably simulate the process was finally proposed; this includes EP concentration, reaction time, power density and initial pH, as well as the interactions (EP concentration)×(US density), (EP concentration)×(pHo) and (EP concentration)×(time). Experiments at an increased EP concentration of 3.5 mg/L were also performed to identify degradation by-products. LC-TOF-MS analysis revealed that EP sonochemical degradation occurs through dealkylation of the ethyl chain to form methyl paraben, while successive hydroxylation of the aromatic ring yields 4-hydroxybenzoic, 2,4-dihydroxybenzoic and 3,4-dihydroxybenzoic acids. By-products are less toxic to bacterium V. fischeri than the parent compound. PMID:26964924

  16. Virulence Characterisation of Salmonella enterica Isolates of Differing Antimicrobial Resistance Recovered from UK Livestock and Imported Meat Samples

    PubMed Central

    Card, Roderick; Vaughan, Kelly; Bagnall, Mary; Spiropoulos, John; Cooley, William; Strickland, Tony; Davies, Rob; Anjum, Muna F.

    2016-01-01

    Salmonella enterica is a foodborne zoonotic pathogen of significant public health concern. We have characterized the virulence and antimicrobial resistance gene content of 95 Salmonella isolates from 11 serovars by DNA microarray recovered from UK livestock or imported meat. Genes encoding resistance to sulphonamides (sul1, sul2), tetracycline [tet(A), tet(B)], streptomycin (strA, strB), aminoglycoside (aadA1, aadA2), beta-lactam (blaTEM), and trimethoprim (dfrA17) were common. Virulence gene content differed between serovars; S. Typhimurium formed two subclades based on virulence plasmid presence. Thirteen isolates were selected by their virulence profile for pathotyping using the Galleria mellonella pathogenesis model. Infection with a chicken invasive S. Enteritidis or S. Gallinarum isolate, a multidrug resistant S. Kentucky, or a S. Typhimurium DT104 isolate resulted in high mortality of the larvae; notably presence of the virulence plasmid in S. Typhimurium was not associated with increased larvae mortality. Histopathological examination showed that infection caused severe damage to the Galleria gut structure. Enumeration of intracellular bacteria in the larvae 24 h post-infection showed increases of up to 7 log above the initial inoculum and transmission electron microscopy (TEM) showed bacterial replication in the haemolymph. TEM also revealed the presence of vacuoles containing bacteria in the haemocytes, similar to Salmonella containing vacuoles observed in mammalian macrophages; although there was no evidence from our work of bacterial replication within vacuoles. This work shows that microarrays can be used for rapid virulence genotyping of S. enterica and that the Galleria animal model replicates some aspects of Salmonella infection in mammals. These procedures can be used to help inform on the pathogenicity of isolates that may be antibiotic resistant and have scope to aid the assessment of their potential public and animal health risk. PMID:27199965

  17. Virulence Characterisation of Salmonella enterica Isolates of Differing Antimicrobial Resistance Recovered from UK Livestock and Imported Meat Samples.

    PubMed

    Card, Roderick; Vaughan, Kelly; Bagnall, Mary; Spiropoulos, John; Cooley, William; Strickland, Tony; Davies, Rob; Anjum, Muna F

    2016-01-01

    Salmonella enterica is a foodborne zoonotic pathogen of significant public health concern. We have characterized the virulence and antimicrobial resistance gene content of 95 Salmonella isolates from 11 serovars by DNA microarray recovered from UK livestock or imported meat. Genes encoding resistance to sulphonamides (sul1, sul2), tetracycline [tet(A), tet(B)], streptomycin (strA, strB), aminoglycoside (aadA1, aadA2), beta-lactam (bla TEM), and trimethoprim (dfrA17) were common. Virulence gene content differed between serovars; S. Typhimurium formed two subclades based on virulence plasmid presence. Thirteen isolates were selected by their virulence profile for pathotyping using the Galleria mellonella pathogenesis model. Infection with a chicken invasive S. Enteritidis or S. Gallinarum isolate, a multidrug resistant S. Kentucky, or a S. Typhimurium DT104 isolate resulted in high mortality of the larvae; notably presence of the virulence plasmid in S. Typhimurium was not associated with increased larvae mortality. Histopathological examination showed that infection caused severe damage to the Galleria gut structure. Enumeration of intracellular bacteria in the larvae 24 h post-infection showed increases of up to 7 log above the initial inoculum and transmission electron microscopy (TEM) showed bacterial replication in the haemolymph. TEM also revealed the presence of vacuoles containing bacteria in the haemocytes, similar to Salmonella containing vacuoles observed in mammalian macrophages; although there was no evidence from our work of bacterial replication within vacuoles. This work shows that microarrays can be used for rapid virulence genotyping of S. enterica and that the Galleria animal model replicates some aspects of Salmonella infection in mammals. These procedures can be used to help inform on the pathogenicity of isolates that may be antibiotic resistant and have scope to aid the assessment of their potential public and animal health risk. PMID:27199965

  18. The D0 Monte Carlo

    SciTech Connect

    Womersley, J. (Dept. of Physics)

    1992-10-01

    The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb⁻¹ data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.

  19. Observations on variational and projector Monte Carlo methods

    SciTech Connect

    Umrigar, C. J.

    2015-10-28

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.
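
    For reference, the "usual prescription" alluded to here is, in real-space diffusion-type projector Monte Carlo, to multiply the sampled distribution by a trial function ψ_T, so that the mixed distribution f(R, τ) = ψ_T(R) Φ(R, τ) evolves with a drift and a local-energy branching term (standard textbook form in atomic units, not a formula quoted from this paper):

        \frac{\partial f(\mathbf{R},\tau)}{\partial \tau}
          = \tfrac{1}{2}\nabla^{2} f
          - \nabla \cdot \left[ \mathbf{v}(\mathbf{R})\, f \right]
          - \left[ E_{L}(\mathbf{R}) - E_{T} \right] f,
        \qquad
        \mathbf{v} = \nabla \ln \lvert \psi_{T} \rvert,
        \qquad
        E_{L} = \frac{\hat{H}\psi_{T}}{\psi_{T}} .

    The paper's point is that this choice, while standard, is not always the most advantageous for a given method and observable, and alternate prescriptions are presented there.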

  20. Multilevel sequential Monte Carlo samplers

    DOE PAGES Beta

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-08-24

    Here, we study the approximation of expectations with respect to probability distributions associated with the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, which leads to a discretisation bias, with the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort needed to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated with a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
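
    A minimal sketch of the plain (non-sequential) multilevel telescoping estimator may help fix ideas: below, the quantity of interest is the mean endpoint of a geometric Brownian motion discretised by Euler–Maruyama, the fine and coarse paths on each level share the same Brownian increments, and all parameters and sample counts are illustrative choices rather than anything from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        T, mu, sigma, x0 = 1.0, 0.05, 0.2, 1.0       # illustrative SDE: dX = mu*X dt + sigma*X dW

        def euler_level_pair(level, n_samples, m=2):
            """Per-sample fine-level payoffs and fine-minus-coarse corrections at this level."""
            n_fine = m ** level
            dt_f = T / n_fine
            dW = rng.normal(0.0, np.sqrt(dt_f), size=(n_samples, n_fine))
            xf = np.full(n_samples, x0)
            for i in range(n_fine):                                  # fine path
                xf = xf + mu * xf * dt_f + sigma * xf * dW[:, i]
            if level == 0:
                return xf, xf                                        # no coarser level exists
            xc = np.full(n_samples, x0)
            dt_c = T / (n_fine // m)
            for i in range(n_fine // m):                             # coarse path, driven by summed increments
                xc = xc + mu * xc * dt_c + sigma * xc * dW[:, m * i:m * (i + 1)].sum(axis=1)
            return xf, xf - xc

        max_level, samples_per_level = 4, [40_000, 20_000, 10_000, 5_000, 2_500]
        estimate = 0.0
        for level in range(max_level + 1):                           # telescoping sum over levels
            fine, correction = euler_level_pair(level, samples_per_level[level])
            estimate += fine.mean() if level == 0 else correction.mean()

        print("MLMC estimate of E[X_T]:", estimate)                  # exact value x0*exp(mu*T) ~ 1.051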

  1. Composite biasing in Monte Carlo radiative transfer

    NASA Astrophysics Data System (ADS)

    Baes, Maarten; Gordon, Karl D.; Lunttila, Tuomas; Bianchi, Simone; Camps, Peter; Juvela, Mika; Kuiper, Rolf

    2016-05-01

    Biasing or importance sampling is a powerful technique in Monte Carlo radiative transfer, and can be applied in different forms to increase the accuracy and efficiency of simulations. One of the drawbacks of the use of biasing is the potential introduction of large weight factors. We discuss a general strategy, composite biasing, to suppress the appearance of large weight factors. We use this composite biasing approach for two different problems faced by current state-of-the-art Monte Carlo radiative transfer codes: the generation of photon packages from multiple components, and the penetration of radiation through high optical depth barriers. In both cases, the implementation of the relevant algorithms is trivial and does not interfere with any other optimisation techniques. Through simple test models, we demonstrate the general applicability, accuracy and efficiency of the composite biasing approach. In particular, for the penetration of high optical depths, the gain in efficiency is spectacular for the specific problems that we consider: in simulations with composite path length stretching, high accuracy results are obtained even for simulations with modest numbers of photon packages, while simulations without biasing cannot reach convergence, even with a huge number of photon packages.
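
    One ingredient mentioned here, path length stretching, is a simple form of biasing: draw the optical depth from a stretched exponential instead of the physical one and carry the ratio of the two densities as a weight. A toy estimate of the transmission through an optically thick slab (assumed parameters, not the authors' code) illustrates why the unbiased estimator fails while the stretched one works.

        import numpy as np

        rng = np.random.default_rng(7)
        tau_slab, n = 15.0, 200_000                  # slab optical depth (made up), photon packages

        # Analog sampling: a photon is transmitted if its sampled optical depth exceeds tau_slab.
        tau_analog = rng.exponential(1.0, n)
        analog = (tau_analog > tau_slab).mean()      # expect exp(-15) ~ 3e-7, essentially never sampled

        # Path length stretching: sample tau from a*exp(-a*tau) with a < 1 and attach the
        # weight w = p_true(tau) / p_biased(tau) = exp(-(1 - a)*tau) / a.
        a = 1.0 / tau_slab
        tau_biased = rng.exponential(1.0 / a, n)
        weights = np.exp(-(1.0 - a) * tau_biased) / a
        stretched = np.where(tau_biased > tau_slab, weights, 0.0).mean()

        print("exact     :", np.exp(-tau_slab))
        print("analog    :", analog)                 # usually exactly 0 at this sample size
        print("stretched :", stretched)              # close to exp(-15) with modest statistics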

  2. Multilevel Monte Carlo simulation of Coulomb collisions

    DOE PAGES Beta

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.

    2014-05-29

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.

  3. Multilevel Monte Carlo simulation of Coulomb collisions

    SciTech Connect

    Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.

    2014-05-29

    We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.

  4. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wu, Keyi; Li, Jinglai

    2016-09-01

    In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithm, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitude of speedup over standard Monte Carlo methods.

  5. In-syringe reversed dispersive liquid-liquid microextraction for the evaluation of three important bioactive compounds of basil, tarragon and fennel in human plasma and urine samples.

    PubMed

    Barfi, Azadeh; Nazem, Habibollah; Saeidi, Iman; Peyrovi, Moazameh; Afsharzadeh, Maryam; Barfi, Behruz; Salavati, Hossein

    2016-03-20

    In the present study, an efficient and environmentally friendly method (called in-syringe reversed dispersive liquid-liquid microextraction (IS-R-DLLME)) was developed to extract three important components (i.e. para-anisaldehyde, trans-anethole and its isomer estragole) simultaneously from different plant extracts (basil, fennel and tarragon) and from human plasma and urine samples prior to their determination using high-performance liquid chromatography. The importance of choosing these plant extracts as samples stems from the dual roles of their bioactive compounds (trans-anethole and estragole), which can positively or negatively alter different cellular processes, and from the need for a simple and efficient method for the extraction and sensitive determination of these compounds in the mentioned samples. Under the optimum conditions (extraction solvent: 120 μL of n-octanol; dispersive solvent: 600 μL of acetone; collecting solvent: 1000 μL of acetone; sample pH 3; no salt), limits of detection (LODs), linear dynamic ranges (LDRs) and recoveries (R) were 79-81 ng mL⁻¹, 0.26-6.9 μg mL⁻¹ and 94.1-99.9%, respectively. The obtained results showed that IS-R-DLLME is a simple, fast and sensitive method with low consumption of extraction solvent that provides high recovery under the optimum conditions. The present method was applied to investigate the absorbed amounts of the mentioned analytes by determining the analytes before (in the plant extracts) and after (in the human plasma and urine samples) consumption, which can indicate the toxicity levels of the analytes (on the basis of their dosages) in the extracts. PMID:26802527

  6. MonteCUBES

    SciTech Connect

    Blennow, Mattias

    2010-03-30

    We introduce the software package MonteCUBES, which is designed to easily and effectively perform Markov Chain Monte Carlo simulations for analyzing neutrino oscillation experiments. We discuss the methods used in the software as well as why we believe that it is particularly useful for simulating new physics effects.

  7. Fast Monte Carlo for radiation therapy: the PEREGRINE Project

    SciTech Connect

    Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.

    1997-11-11

    The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently-used algorithms reveals significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.

  8. The importance of a urine sample in persons intoxicated with flunitrazepam--legal issues in a forensic psychiatric case study of a serial murderer.

    PubMed

    Dåderman, Anna Maria; Strindlund, Hans; Wiklund, Nils; Fredriksen, Svend-Otto; Lidberg, Lars

    2003-10-14

    The sedative-hypnotic benzodiazepine flunitrazepam (FZ) is abused worldwide. The purpose of our study was to investigate violence and anterograde amnesia following intoxication with FZ, and how this was legally evaluated in forensic psychiatric investigations, with the objective of drawing some conclusions about the importance of a urine sample in cases of suspected FZ intoxication. The case was a 23-year-old male university student who, intoxicated with FZ (and possibly with other substances such as diazepam, amphetamines or cannabis), first stabbed an acquaintance and, 2 years later, two friends to death. The police investigation files, including videotaped interviews, the forensic psychiatric files, and also the results from the forensic autopsy of the victims, were compared with the information obtained from the case. Only partial recovery from anterograde amnesia was shown during a period of several months. Some important new information is contained in this case report: a forensic analysis of a blood sample instead of a urine sample might lead to confusion during the police investigation and forensic psychiatric assessment (FPA) of an FZ abuser, and in consequence to wrong legal decisions. FZ, alone or combined with other substances, induces severe violence and is followed by anterograde amnesia. All cases of bizarre, unexpected aggression followed by anterograde amnesia should be assessed for abuse of FZ. A urine sample is needed in cases of suspected FZ intoxication. The police need to be more aware of these issues, and they must recognise that they play a crucial role in the assessment procedure. Declaring FZ an illegal drug is strongly recommended. PMID:14550609

  9. An Efficient MCMC Algorithm to Sample Binary Matrices with Fixed Marginals

    ERIC Educational Resources Information Center

    Verhelst, Norman D.

    2008-01-01

    Uniform sampling of binary matrices with fixed margins is known as a difficult problem. Two classes of algorithms to sample from a distribution not too different from the uniform are studied in the literature: importance sampling and Markov chain Monte Carlo (MCMC). Existing MCMC algorithms converge slowly, require a long burn-in period and yield…
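
    The classical MCMC move for this problem (one of the slowly converging chains the abstract alludes to) is the "checkerboard swap": pick two rows and two columns and, if the 2x2 submatrix is a diagonal or anti-diagonal pattern, flip it. Every such move preserves all row and column sums, and the symmetric proposal makes the stationary distribution uniform over the connected set of matrices with those margins, though, as noted above, mixing can be slow. A bare-bones sketch with an arbitrary starting matrix:

        import numpy as np

        rng = np.random.default_rng(3)

        A = np.array([[1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 1, 0, 0]])                 # arbitrary starting binary matrix

        def checkerboard_swap_chain(A, n_moves=10_000):
            """Random walk over binary matrices sharing the row and column sums of A."""
            A = A.copy()
            n_rows, n_cols = A.shape
            for _ in range(n_moves):
                r1, r2 = rng.choice(n_rows, 2, replace=False)
                c1, c2 = rng.choice(n_cols, 2, replace=False)
                sub = A[np.ix_([r1, r2], [c1, c2])]
                if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
                    A[np.ix_([r1, r2], [c1, c2])] = 1 - sub   # flip the 2x2 checkerboard
            return A

        B = checkerboard_swap_chain(A)
        assert (B.sum(axis=0) == A.sum(axis=0)).all() and (B.sum(axis=1) == A.sum(axis=1)).all()
        print(B)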

  10. Wormhole Hamiltonian Monte Carlo

    PubMed Central

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak

    2015-01-01

    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551