NASA Astrophysics Data System (ADS)
Lima, Ivan T., Jr.; Kalra, Anshul; Hernández-Figueroa, Hugo E.; Sherif, Sherif S.
2012-03-01
Computer simulations of light transport in multi-layered turbid media are an effective way to theoretically investigate light transport in tissue, which can be applied to the analysis, design and optimization of optical coherence tomography (OCT) systems. We present a computationally efficient method to calculate the diffuse reflectance due to ballistic and quasi-ballistic components of photons scattered in turbid media, which represents the signal in optical coherence tomography systems. Our importance sampling based Monte Carlo method enables the calculation of the OCT signal with less than one hundredth of the computational time required by the conventional Monte Carlo method. Unlike existing methods to speed up Monte Carlo simulations of light transport in tissue, it does not introduce a systematic bias into the statistical result. This method can be used to assess and optimize the performance of existing OCT systems, and it can also be used to design novel OCT systems.
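The core of such an approach is the likelihood-ratio weight that keeps a biased simulation unbiased. Below is a minimal sketch (not the authors' code; the anisotropy values and the backscatter criterion are illustrative) of importance-sampling a Henyey-Greenstein scattering cosine toward the backward direction and reweighting so that the rare backscattered, OCT-relevant photons are sampled heavily without biasing the estimate.

    import numpy as np

    rng = np.random.default_rng(1)
    g, g_bias = 0.9, -0.9          # true tissue anisotropy; backward-peaked proposal (illustrative)
    n = 100_000

    def hg_sample(g, xi):
        # Henyey-Greenstein inverse-CDF sampling of the scattering cosine
        return (1 + g**2 - ((1 - g**2) / (1 - g + 2*g*xi))**2) / (2*g)

    def hg_pdf(g, mu):
        return 0.5 * (1 - g**2) / (1 + g**2 - 2*g*mu)**1.5

    mu = hg_sample(g_bias, rng.random(n))       # draw from the biased phase function
    w = hg_pdf(g, mu) / hg_pdf(g_bias, mu)      # likelihood-ratio weight restores unbiasedness
    est = np.mean((mu < -0.9) * w)              # P(strong backscatter) under the true phase function

    # analytic check from the HG CDF
    cdf = lambda m: 0.5*(1 - g**2)/g * ((1 + g**2 - 2*g*m)**-0.5 - (1 + g**2 + 2*g)**-0.5)
    print(est, cdf(-0.9))

With the forward-peaked true phase function (g = 0.9) this event has probability around 1.4e-3, so the biased sampler sees it thousands of times more often than an analog simulation of the same size.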
Periyasamy, Vijitha; Pramanik, Manojit
2016-04-10
Monte Carlo simulation for light propagation in biological tissue is widely used to study light-tissue interaction. Simulation for optical coherence tomography (OCT) studies requires handling of embedded objects of various shapes. In this work, time-domain OCT simulations for multilayered tissue with embedded objects (such as sphere, cylinder, ellipsoid, and cuboid) were performed. Improved importance sampling (IS) was implemented in the proposed OCT simulation for faster execution. At first, IS was validated against standard and angular-biased Monte Carlo methods for OCT. Both class I and class II photons were in agreement in all three methods. However, the IS method had more than tenfold improvement in terms of simulation time. Next, B-scan images were obtained for four types of embedded objects. All four shapes are clearly visible from the B-scan OCT images. With the improved IS, B-scan OCT images of embedded objects can be obtained with reasonable simulation time using a standard desktop computer. This user-friendly, C-based Monte Carlo simulation for tissue layers with embedded objects for OCT (MCEO-OCT) will be very useful for time-domain OCT simulations in many biological applications.
NASA Astrophysics Data System (ADS)
Rafiee, Mohammad; Barrau, Axel; Bayen, Alexandre M.
2013-06-01
This article investigates the performance of Monte Carlo-based estimation methods for estimation of flow state in large-scale open channel networks. After constructing a state space model of the flow based on the Saint-Venant equations, we implement the optimal sampling importance resampling filter to perform state estimation in a case in which measurements are available at every time step. Considering a case in which measurements become available intermittently, a random-map implementation of the implicit particle filter is applied to estimate the state trajectory in the interval between the measurements. Finally, some heuristics are proposed, which are shown to improve the estimation results and lower the computational cost. In the first heuristic, considering the case in which measurements are available at every time step, we apply the implicit particle filter over time intervals of a desired size while incorporating all the available measurements over the corresponding time interval. As a second heuristic method, we introduce a maximum a posteriori (MAP) method, which does not require sampling. It will be seen, through implementation, that the MAP method provides more accurate results in the case of our application while having a smaller computational cost. All estimation methods are tested on a network of 19 tidally forced subchannels and one reservoir, Clifton Court Forebay, in the Sacramento-San Joaquin Delta in California, and numerical results are presented.
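As a minimal illustration of the sampling importance resampling (SIR) idea applied to state estimation, the sketch below runs a bootstrap particle filter on a hypothetical scalar state-space model; the dynamics and noise levels are stand-ins, not the Saint-Venant flow model.

    import numpy as np

    rng = np.random.default_rng(2)
    T, N = 50, 1000                      # time steps, particles
    a, q, r = 0.9, 0.5, 0.5              # toy scalar model coefficients (illustrative)

    # simulate a truth trajectory and noisy measurements
    x_true = np.zeros(T)
    for t in range(1, T):
        x_true[t] = a*x_true[t-1] + rng.normal(0, np.sqrt(q))
    y = x_true + rng.normal(0, np.sqrt(r), T)

    # sampling importance resampling filter
    particles = rng.normal(0, 1, N)
    est = np.zeros(T)
    for t in range(T):
        particles = a*particles + rng.normal(0, np.sqrt(q), N)   # propagate
        logw = -0.5*(y[t] - particles)**2/r                      # Gaussian likelihood
        w = np.exp(logw - logw.max()); w /= w.sum()
        est[t] = np.sum(w*particles)                             # posterior-mean estimate
        particles = particles[rng.choice(N, N, p=w)]             # resample every step

    print(np.mean((est - x_true)**2))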
CosmoPMC: Cosmology sampling with Population Monte Carlo
NASA Astrophysics Data System (ADS)
Kilbinger, Martin; Benabed, Karim; Cappé, Olivier; Coupon, Jean; Cardoso, Jean-François; Fort, Gersende; McCracken, Henry Joy; Prunet, Simon; Robert, Christian P.; Wraith, Darren
2012-12-01
CosmoPMC is a Monte Carlo sampling code to explore the likelihood of various cosmological probes. The sampling engine, implemented in the package pmclib, uses Population Monte Carlo (PMC), a novel technique to sample from the posterior. PMC is an adaptive importance sampling method which iteratively improves the proposal to approximate the posterior. This code has been introduced, tested and applied to various cosmology data sets.
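A minimal sketch of the PMC idea, assuming a single Gaussian proposal refit from importance-weighted samples and a toy two-parameter posterior (the released code uses mixture proposals and real cosmological likelihoods):

    import numpy as np

    rng = np.random.default_rng(3)
    log_post = lambda x: -0.5*np.sum((x - 3.0)**2/0.25, axis=-1)   # toy 2-D posterior

    d, N = 2, 2000
    mean, cov = np.zeros(d), 25.0*np.eye(d)       # deliberately poor initial proposal
    for it in range(5):
        x = rng.multivariate_normal(mean, cov, N)
        diff = x - mean
        logq = (-0.5*np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
                - 0.5*np.log(np.linalg.det(cov)))
        logw = log_post(x) - logq                 # importance weights (unnormalized)
        w = np.exp(logw - logw.max()); w /= w.sum()
        mean = w @ x                              # weighted refit of the proposal
        cov = (x - mean).T @ ((x - mean) * w[:, None]) + 1e-6*np.eye(d)
        print(it, 1.0/np.sum(w**2))               # effective sample size grows as it adapts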
A pure-sampling quantum Monte Carlo algorithm.
Ospadov, Egor; Rothstein, Stuart M
2015-01-14
The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof of principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems.
Calculation of Monte Carlo importance functions for use in nuclear-well logging calculations
Soran, P.D.; McKeon, D.C.; Booth, T.E. (Schlumberger Well Services, Houston, TX; Los Alamos National Lab., NM)
1989-07-01
Importance sampling is essential to the timely solution of Monte Carlo nuclear-logging computer simulations. Achieving minimum variance (maximum precision) of a response in minimum computation time is one criterion for the choice of an importance function. Various methods for calculating importance functions will be presented, new methods investigated, and comparisons with porosity and density tools will be shown.
Cool walking: a new Markov chain Monte Carlo sampling method.
Brown, Scott; Head-Gordon, Teresa
2003-01-15
Effective relaxation processes for difficult systems like proteins or spin glasses require special simulation techniques that permit barrier crossing to ensure ergodic sampling. Numerous adaptations of the venerable Metropolis Monte Carlo (MMC) algorithm have been proposed to improve its sampling efficiency, including various hybrid Monte Carlo (HMC) schemes, and methods designed specifically for overcoming quasi-ergodicity problems such as Jump Walking (J-Walking), Smart Walking (S-Walking), Smart Darting, and Parallel Tempering. We present an alternative to these approaches that we call Cool Walking, or C-Walking. In C-Walking two Markov chains are propagated in tandem, one at a high (ergodic) temperature and the other at a low temperature. Nonlocal trial moves for the low temperature walker are generated by first sampling from the high-temperature distribution, then performing a statistical quenching process on the sampled configuration to generate a C-Walking jump move. C-Walking needs only one high-temperature walker, satisfies detailed balance, and offers the important practical advantage that the high and low-temperature walkers can be run in tandem with minimal degradation of sampling due to the presence of correlations. To make the C-Walking approach more suitable to real problems we decrease the required number of cooling steps by attempting to jump at intermediate temperatures during cooling. We further reduce the number of cooling steps by utilizing "windows" of states when jumping, which improves acceptance ratios and lowers the average number of cooling steps. We present C-Walking results with comparisons to J-Walking, S-Walking, Smart Darting, and Parallel Tempering on a one-dimensional rugged potential energy surface in which the exact normalized probability distribution is known. C-Walking shows superior sampling as judged by two ergodic measures.
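C-Walking's jump move quenches a configuration drawn from the hot walker before offering it to the cold one; as a minimal sketch of the underlying two-temperature machinery, the following implements the simpler parallel-tempering-style exchange the abstract compares against, on a hypothetical rugged 1-D potential (potential, temperatures, and step sizes are illustrative).

    import numpy as np

    rng = np.random.default_rng(4)
    U = lambda x: x**2 + 4*np.cos(4*np.pi*x)     # rugged 1-D potential with many barriers
    b_cold, b_hot = 5.0, 0.2                     # inverse temperatures (illustrative)

    def metropolis(x, beta, step):
        xp = x + rng.normal(0, step)
        return xp if np.log(rng.random()) < -beta*(U(xp) - U(x)) else x

    x_cold, x_hot, samples = 0.0, 0.0, []
    for sweep in range(20000):
        x_cold = metropolis(x_cold, b_cold, 0.1)
        x_hot = metropolis(x_hot, b_hot, 1.0)
        if sweep % 10 == 0:
            # nonlocal move for the cold walker via the ergodic hot walker
            if np.log(rng.random()) < (b_cold - b_hot)*(U(x_cold) - U(x_hot)):
                x_cold, x_hot = x_hot, x_cold    # detailed-balance-preserving swap
        samples.append(x_cold)
    print(np.mean(samples), np.std(samples))

The swap acceptance min(1, exp((β_cold − β_hot)(E_cold − E_hot))) is the standard replica-exchange criterion; C-Walking replaces the direct swap with a statistical quench of the hot configuration.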
Annealed Importance Sampling for Neural Mass Models
Penny, Will; Sengupta, Biswa
2016-01-01
Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected together into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC) which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters and we find evidence of non-Gaussianity in their posterior distribution. PMID:26942606
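A minimal AIS sketch on a conjugate 1-D toy problem where the marginal likelihood (Bayes factor numerator) is known in closed form; the model, tempering schedule, and random-walk kernel are illustrative stand-ins, not the neural mass model setup or the LMC proposals used in the paper.

    import numpy as np

    rng = np.random.default_rng(5)
    # toy model: prior N(0,1), likelihood N(y=2 | theta, 0.5^2); evidence is analytic
    loglik = lambda th: -0.5*(2.0 - th)**2/0.25 - 0.5*np.log(2*np.pi*0.25)

    N, K = 500, 100
    betas = np.linspace(0, 1, K)                 # annealing schedule from prior to posterior
    th = rng.normal(0, 1, N)                     # start from prior draws
    logw = np.zeros(N)
    for j in range(1, K):
        logw += (betas[j] - betas[j-1]) * loglik(th)      # AIS weight update
        for _ in range(2):                                # MH steps targeting prior * lik^beta_j
            prop = th + rng.normal(0, 0.5, N)
            loga = betas[j]*(loglik(prop) - loglik(th)) - 0.5*(prop**2 - th**2)
            th = np.where(np.log(rng.random(N)) < loga, prop, th)

    logZ = np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
    exact = -0.5*np.log(2*np.pi*1.25) - 0.5*4/1.25        # log N(2 | 0, 1 + 0.25)
    print(logZ, exact)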
Adaptive Importance Sampling for Control and Inference
NASA Astrophysics Data System (ADS)
Kappen, H. J.; Ruiz, H. C.
2016-03-01
Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers, and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows one to learn feedback controllers using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method, or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.
Neutrino oscillation parameter sampling with MonteCUBES
NASA Astrophysics Data System (ADS)
Blennow, Mattias; Fernandez-Martinez, Enrique
2010-01-01
We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software so that the existing experiment definitions for GLoBES, describing long baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: The first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling.
Program summary:
Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator)
Catalogue identifier: AEFJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public Licence
No. of lines in distributed program, including test data, etc.: 69 634
No. of bytes in distributed program, including test data, etc.: 3 980 776
Distribution format: tar.gz
Programming language: C
Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed
Operating system: 32 bit and 64 bit Linux
RAM: Typically a few MBs
Classification: 11.1
External routines: GLoBES [1,2] and routines/libraries used by GLoBES
Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439
Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, these new physics imply high-dimensional parameter spaces that are difficult to explore using classical methods such as multi-dimensional projections and minimizations, such as those
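A minimal sketch of the Metropolis random-walk sampling that such a package performs, using a hypothetical two-parameter chi-squared surface as a stand-in for a GLoBES likelihood; none of the names or values below reflect the actual MonteCUBES API.

    import numpy as np

    rng = np.random.default_rng(6)

    def chi2(theta):
        # invented two-parameter surface (stand-in for an oscillation fit)
        s2th13, delta = theta
        return (s2th13 - 0.09)**2/0.01**2 + 50*(1 - np.cos(delta - 1.5))

    theta = np.array([0.05, 0.0])
    c2, chain = chi2(theta), []
    for _ in range(50000):
        prop = theta + rng.normal(0, [0.005, 0.1])   # random-walk proposal
        c2p = chi2(prop)
        if np.log(rng.random()) < -0.5*(c2p - c2):   # L ∝ exp(-chi^2/2)
            theta, c2 = prop, c2p
        chain.append(theta.copy())
    chain = np.array(chain)
    print(chain.mean(axis=0), np.percentile(chain[:, 1], [16, 84]))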
Monte Carlo Sampling of Negative-temperature Plasma States
John A. Krommes; Sharadini Rath
2002-07-19
A Monte Carlo procedure is used to generate N-particle configurations compatible with two-temperature canonical equilibria in two dimensions, with particular attention to nonlinear plasma gyrokinetics. An unusual feature of the problem is the importance of a nontrivial probability density function R0(Φ), the probability of realizing a set Φ of Fourier amplitudes associated with an ensemble of uniformly distributed, independent particles. This quantity arises because the equilibrium distribution is specified in terms of Φ, whereas the sampling procedure naturally produces particle states γ; Φ and γ are related via a gyrokinetic Poisson equation, highly nonlinear in its dependence on γ. Expansion and asymptotic methods are used to calculate R0(Φ) analytically; excellent agreement is found between the large-N asymptotic result and a direct numerical calculation. The algorithm is tested by successfully generating a variety of states of both positive and negative temperature, including ones in which either the longest- or shortest-wavelength modes are excited to very large amplitudes.
Instantaneous GNSS attitude determination: A Monte Carlo sampling approach
NASA Astrophysics Data System (ADS)
Sun, Xiucong; Han, Chao; Chen, Pei
2017-04-01
A novel instantaneous GNSS ambiguity resolution approach which makes use of only single-frequency carrier phase measurements for ultra-short baseline attitude determination is proposed. The Monte Carlo sampling method is employed to obtain the probability density function of ambiguities from a quaternion-based GNSS-attitude model and the LAMBDA method strengthened with a screening mechanism is then utilized to fix the integer values. Experimental results show that 100% success rate could be achieved for ultra-short baselines.
ERIC Educational Resources Information Center
Kim, Su-Young
2012-01-01
Just as growth mixture models are useful with single-phase longitudinal data, multiphase growth mixture models can be used with multiple-phase longitudinal data. One of the practically important issues in single- and multiphase growth mixture models is the sample size requirements for accurate estimation. In a Monte Carlo simulation study, the…
Reactive Monte Carlo sampling with an ab initio potential
Leiding, Jeff; Coe, Joshua D.
2016-05-04
Here, we present the first application of reactive Monte Carlo in a first-principles context. The algorithm samples in a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We also discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulation of NH3 to those of ab initio molecular dynamics (AIMD). We find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the "rare-event" character of chemical reactions.
CSnrc: Correlated sampling Monte Carlo calculations using EGSnrc
Buckley, Lesley A.; Kawrakow, I.; Rogers, D.W.O.
2004-12-01
CSnrc, a new user-code for the EGSnrc Monte Carlo system, is described. This user-code improves the efficiency when calculating ratios of doses from similar geometries. It uses a correlated sampling variance reduction technique. CSnrc is developed from an existing EGSnrc user-code, CAVRZnrc, and improves upon the correlated sampling algorithm used in an earlier version of the code written for the EGS4 Monte Carlo system. Improvements over the EGS4 version of the algorithm avoid repetition of sections of particle tracks. The new code includes a rectangular phantom geometry not available in other EGSnrc cylindrical codes. Comparison to CAVRZnrc shows gains in efficiency of up to a factor of 64 for a variety of test geometries when computing the ratio of doses to the cavity for two geometries. CSnrc is well suited to in-phantom calculations and is used to calculate the central electrode correction factor P_cel in high-energy photon and electron beams. Current dosimetry protocols base the value of P_cel on earlier Monte Carlo calculations. The current CSnrc calculations achieve 0.02% statistical uncertainties on P_cel, much lower than those previously published. The current values of P_cel compare well with the values used in dosimetry protocols for photon beams. For electron beams, CSnrc calculations are reported at the reference depth used in recent protocols and show up to a 0.2% correction for a graphite electrode, a correction currently ignored by dosimetry protocols. The calculations show that for a 1 mm diameter aluminum central electrode, the correction factor differs somewhat from the values used in both the IAEA TRS-398 code of practice and the AAPM's TG-51 protocol.
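The efficiency gain of correlated sampling comes from evaluating both geometries with the same particle histories, so that statistical noise largely cancels in the ratio. A minimal sketch with a toy one-parameter "geometry" (the scoring function and parameter values are illustrative, not EGSnrc transport):

    import numpy as np

    rng = np.random.default_rng(7)
    mu1, mu2 = 1.00, 1.05                  # two slightly different toy geometries
    score = lambda x, mu: np.exp(-mu*x)    # invented per-history dose score

    def dose_ratio(correlated, n=10000):
        x1 = rng.exponential(1.0, n)
        x2 = x1 if correlated else rng.exponential(1.0, n)   # reuse the same histories?
        return score(x1, mu1).mean() / score(x2, mu2).mean()

    for corr in (False, True):
        r = [dose_ratio(corr) for _ in range(200)]
        print(corr, np.mean(r), np.std(r))   # correlated runs: same mean, far smaller spread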
Hellman-Feynman operator sampling in diffusion Monte Carlo calculations.
Gaudoin, R; Pitarke, J M
2007-09-21
Diffusion Monte Carlo (DMC) calculations typically yield highly accurate results in solid-state and quantum-chemical calculations. However, operators that do not commute with the Hamiltonian are at best sampled correctly up to second order in the error of the underlying trial wave function once simple corrections have been applied. This error is of the same order as that for the energy in variational calculations. Operators that suffer from these problems include potential energies and the density. This Letter presents a new method, based on the Hellman-Feynman theorem, for the correct DMC sampling of all operators diagonal in real space. Our method is easy to implement in any standard DMC code.
Monte Carlo Studies of Sampling Strategies for Estimating Tributary Loads
NASA Astrophysics Data System (ADS)
Richards, R. Peter; Holloway, Jim
1987-10-01
Monte Carlo techniques were used to evaluate the accuracy and precision of tributary load estimates, as these are affected by sampling frequency and pattern, calculation method, watershed size, and parameter behavior during storm runoff events. Simulated years consisting of 1460 observations were chosen at random with replacement from data sets of more than 4000 samples. Patterned subsampling of these simulated years produced data appropriate to each sampling frequency and pattern, from which load estimates were calculated. Thus results for all sampling strategies were based on the same series of simulated years. Sampling frequencies ranged from 12 to roughly 600 samples per year. Unstratified and flow-stratified sampling were examined, and loads were calculated with and without the use of the Beale Ratio Estimator. All loads were evaluated by comparison with loads calculated from all 1460 samples in the simulated year. Studies consisting of 1000 iterations were repeated twice for each of five parameters in each of three watersheds. The results show that bias and precision of loading estimates are affected not only by the frequency and pattern of sampling and the calculation approach used, but also by the watershed size and the behavior of the chemical species being monitored. Furthermore, considerable interaction exists between these factors. In every case, loads based on flow-stratified sampling and calculated using the Beale ratio estimator provided the best results among the strategies examined. Differences in bias and precision among watersheds and among transported materials are related to the variability of instantaneous fluxes in the systems being monitored. These differences are qualitatively predictable from knowledge of the time behavior of the material and hydrological systems involved. Attempts to derive quantitative relationships to predict the sampling effort required to achieve a specified level of precision have not been successful.
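For reference, a sketch of the Beale ratio estimator in the bias-corrected form commonly given in the tributary-loading literature, applied to synthetic flow and flux data; the data model, sample size, and simple random (rather than flow-stratified) sampling are illustrative.

    import numpy as np

    rng = np.random.default_rng(8)
    # synthetic "year": 1460 six-hourly flows and fluxes (flux roughly proportional to flow)
    q = rng.lognormal(3.0, 0.8, 1460)              # discharge, continuously recorded
    l = q * rng.lognormal(0.0, 0.3, 1460)          # instantaneous flux l = c*q
    true_mean_flux = l.mean()

    idx = rng.choice(1460, 24, replace=False)      # sparse concentration sampling
    ls, qs = l[idx], q[idx]
    n, ml, mq = len(idx), ls.mean(), qs.mean()
    slq = np.cov(ls, qs)[0, 1]                     # sample covariance of flux and flow
    sqq = qs.var(ddof=1)

    # Beale ratio estimator: ratio of sample means, scaled by the known mean flow,
    # with a first-order bias correction
    beale = q.mean() * (ml/mq) * (1 + slq/(n*ml*mq)) / (1 + sqq/(n*mq**2))
    naive = ml                                     # ignores the continuous flow record
    print(true_mean_flux, beale, naive)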
A flexible importance sampling method for integrating subgrid processes
Raut, E. K.; Larson, V. E.
2016-01-29
Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). The resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
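A minimal sketch of the category-based importance weighting, assuming four hypothetical categories with prescribed sampling fractions; the categories, rates, and fractions below are invented stand-ins, not SILHS itself.

    import numpy as np

    rng = np.random.default_rng(9)
    # hypothetical grid box: areal fractions p and per-category process rates
    p = np.array([0.05, 0.15, 0.10, 0.70])    # e.g. rain+cloud, cloud, rain, clear
    rate_mean = np.array([8.0, 1.0, 5.0, 0.1])
    rate_sd = np.array([4.0, 0.5, 3.0, 0.05])
    true = np.dot(p, rate_mean)               # grid-box-averaged rate

    def estimate(q, n=200):
        k = rng.choice(4, n, p=q)             # draw sample points by category
        f = rng.normal(rate_mean[k], rate_sd[k])
        return np.mean(f * p[k]/q[k])         # importance weight p/q keeps it unbiased

    for q in (p, np.array([0.4, 0.1, 0.4, 0.1])):   # proportional vs biased toward rain
        ests = [estimate(q) for _ in range(2000)]
        print(np.mean(ests), np.std(ests), true)    # same mean, smaller spread when biased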
Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle
Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M
2012-08-01
For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS, Consistent Adjoint Driven Importance Sampling. This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.
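The consistent construction is compact enough to state directly: the biased source is proportional to source times adjoint, and birth weights are chosen so that weight times importance is constant across the source region. A sketch on a hypothetical discretized source (the cell values are illustrative):

    import numpy as np

    # toy discretized problem: source strength q_i and deterministic adjoint flux adj_i
    q = np.array([0.5, 0.3, 0.15, 0.05])        # true source distribution (normalized)
    adj = np.array([1e-4, 1e-3, 1e-2, 1e-1])    # adjoint ~ expected tally contribution

    R = np.dot(q, adj)                          # deterministic estimate of the response
    q_hat = q*adj / R                           # CADIS biased source distribution
    w0 = q/q_hat                                # consistent birth weights (= R/adj)

    # every biased-source particle starts with weight*importance equal to R,
    # so all histories contribute comparably to the tally:
    print(q_hat, w0, w0*adj)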
MARKOV CHAIN MONTE CARLO POSTERIOR SAMPLING WITH THE HAMILTONIAN METHOD
K. HANSON
2001-02-01
The Markov Chain Monte Carlo technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique, in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy φ, where φ is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of φ and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of φ, is proposed to measure the convergence of the MCMC sequence.
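A minimal Hamiltonian MCMC sketch with a leapfrog integrator on a correlated 2-D Gaussian target; the step size, trajectory length, and target are illustrative choices, not the paper's test problems.

    import numpy as np

    rng = np.random.default_rng(11)
    # target: correlated 2-D Gaussian; phi = -log(pdf) up to a constant
    Cinv = np.linalg.inv(np.array([[1.0, 0.95], [0.95, 1.0]]))
    phi = lambda x: 0.5 * x @ Cinv @ x
    grad = lambda x: Cinv @ x

    def hmc_step(x, eps=0.1, L=20):
        p = rng.normal(size=2)                      # fresh momenta each trajectory
        H0 = phi(x) + 0.5*p @ p
        xn, pn = x.copy(), p - 0.5*eps*grad(x)      # leapfrog: half kick ...
        for _ in range(L):
            xn = xn + eps*pn                        # ... full drift
            pn = pn - eps*grad(xn)                  # ... full kick
        pn = pn + 0.5*eps*grad(xn)                  # trim the last kick back to a half
        H1 = phi(xn) + 0.5*pn @ pn
        return xn if np.log(rng.random()) < H0 - H1 else x   # Metropolis correction

    x, chain = np.zeros(2), []
    for _ in range(5000):
        x = hmc_step(x)
        chain.append(x)
    print(np.cov(np.array(chain).T))    # should approach the target covariance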
The Importance of Microhabitat for Biodiversity Sampling
Mehrabi, Zia; Slade, Eleanor M.; Solis, Angel; Mann, Darren J.
2014-01-01
Responses to microhabitat are often neglected when ecologists sample animal indicator groups. Microhabitats may be particularly influential in non-passive biodiversity sampling methods, such as baited traps or light traps, and for certain taxonomic groups which respond to fine scale environmental variation, such as insects. Here we test the effects of microhabitat on measures of species diversity, guild structure and biomass of dung beetles, a widely used ecological indicator taxon. We demonstrate that choice of trap placement influences dung beetle functional guild structure and species diversity. We found that locally measured environmental variables were unable to fully explain trap-based differences in species diversity metrics or microhabitat specialism of functional guilds. To compare the effects of habitat degradation on biodiversity across multiple sites, sampling protocols must be standardized and scale-relevant. Our work highlights the importance of considering microhabitat scale responses of indicator taxa and designing robust sampling protocols which account for variation in microhabitats during trap placement. We suggest that this can be achieved either through standardization of microhabitat or through better efforts to record relevant environmental variables that can be incorporated into analyses to account for microhabitat effects. This is especially important when rapidly assessing the consequences of human activity on biodiversity loss and associated ecosystem function and services. PMID:25469770
Semantic Importance Sampling for Statistical Model Checking
2014-10-18
we implement SIS in a tool called osmosis and use it to verify a number of stochastic systems with rare events. Our results indicate that SIS reduces... background definitions and concepts. Section 4 presents SIS, and Section 5 presents our tool osmosis. In Section 6, we present our experiments and results... Fig. 5: Architecture of osmosis
Armas-Pérez, Julio C.; Londono-Hurtado, Alejandro; Guzmán, Orlando; Hernández-Ortiz, Juan P.; de Pablo, Juan J.
2015-07-27
A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
A new approach to importance sampling for the simulation of false alarms. [in radar systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1987-01-01
In this paper a modified importance sampling technique for improving the convergence of importance sampling is given. By using this approach to estimate low false alarm rates in radar simulations, the number of Monte Carlo runs can be reduced significantly. For one-dimensional exponential, Weibull, and Rayleigh distributions, a uniformly minimum variance unbiased estimator is obtained. For the Gaussian distribution, the estimator in this approach is uniformly better than that of the previously known importance sampling approach. For a cell averaging system, by combining this technique and group sampling, the reduction in Monte Carlo runs for a reference cell of 20 and a false alarm rate of 1E-6 is on the order of 170 as compared to the previously known importance sampling approach.
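A minimal sketch of the underlying idea for the Gaussian case: shift the sampling density to the detection threshold and reweight, so that a 1E-6 false alarm rate becomes estimable with modest sample sizes. The threshold and sample sizes are illustrative, and this is plain exponential tilting, not the paper's modified estimator.

    import numpy as np
    from math import erfc, sqrt

    rng = np.random.default_rng(12)
    t = 4.753               # threshold giving P_fa ~ 1e-6 for N(0,1) noise
    n = 10000

    # plain Monte Carlo essentially never sees the event at this sample size
    plain = np.mean(rng.normal(size=n) > t)

    # importance sampling: draw from N(t,1) and reweight by N(0,1)/N(t,1)
    x = rng.normal(t, 1, n)
    w = np.exp(-t*x + 0.5*t**2)
    is_est = np.mean((x > t) * w)

    print(plain, is_est, 0.5*erfc(t/sqrt(2.0)))   # compare with the exact tail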
Sampling uncertainty evaluation for data acquisition board based on Monte Carlo method
NASA Astrophysics Data System (ADS)
Ge, Leyi; Wang, Zhongyu
2008-10-01
Evaluating data acquisition board sampling uncertainty is a difficult problem in the field of signal sampling. This paper first analyzes the sources of data acquisition board sampling uncertainty, then introduces a simulation theory for data acquisition board sampling uncertainty evaluation based on the Monte Carlo method and puts forward a relation model between sampling uncertainty results, sample numbers and simulation times. For different sample numbers and different signal scopes, the authors establish a random sampling uncertainty evaluation program for a PCI-6024E data acquisition board to execute the simulation. The results of the proposed Monte Carlo simulation method are in good agreement with the GUM ones, demonstrating the validity of the Monte Carlo method.
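A minimal sketch of Monte Carlo uncertainty evaluation for a hypothetical acquisition-board model; the resolution, range, and noise level below are invented stand-ins, not PCI-6024E specifications.

    import numpy as np

    rng = np.random.default_rng(13)
    # hypothetical board model: 12-bit ADC over +/-10 V with additive analog noise
    vref, bits, noise_sd = 10.0, 12, 2e-3
    lsb = 2*vref / 2**bits

    def measure(v_true, n_samples):
        v = v_true + rng.normal(0, noise_sd, n_samples)           # analog noise
        codes = np.clip(np.round((v + vref)/lsb), 0, 2**bits - 1) # quantization
        return np.mean(codes*lsb - vref)                          # averaged reading

    # Monte Carlo evaluation: repeat the whole measurement many times and
    # read the sampling uncertainty off the spread of the results
    for n_samples in (1, 16, 256):
        results = np.array([measure(3.3, n_samples) for _ in range(5000)])
        print(n_samples, results.mean(), results.std())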
Cavity-Bias Sampling in Reaction Ensemble Monte Carlo Simulation
2006-09-01
biased Monte Carlo algorithm, we wish to generate configurations in a biasing manner, thus making α(o → n) ≠ α(n → o). It is clear from equation (4) that for the move to a new configuration

α(o → n) = f[U(n)]   (5)

while for the reverse move

α(n → o) = f[U(o)].   (6)

From equation (4), then,

acc(o → n)/acc(n → o) = f[U(o)]/f[U(n)]

... vibrational, rotational, and electronic; Λ_i is the thermal de Broglie wavelength of species i; and V is the total volume of the system [1, 2]. Equation (9) is
Importance of sampling frequency when collecting diatoms
Wu, Naicheng; Faber, Claas; Sun, Xiuming; Qu, Yueming; Wang, Chao; Ivetic, Snjezana; Riis, Tenna; Ulrich, Uta; Fohrer, Nicola
2016-01-01
There has been increasing interest in diatom-based bio-assessment but we still lack a comprehensive understanding of how to capture diatoms’ temporal dynamics with an appropriate sampling frequency (ASF). To cover this research gap, we collected and analyzed daily riverine diatom samples over a 1-year period (25 April 2013–30 April 2014) at the outlet of a German lowland river. The samples were classified into five clusters (1–5) by a Kohonen Self-Organizing Map (SOM) method based on similarity between species compositions over time. ASFs were determined to be 25 days at Cluster 2 (June-July 2013) and 13 days at Cluster 5 (February-April 2014), whereas no specific ASFs were found at Cluster 1 (April-May 2013), 3 (August-November 2013) (>30 days) and Cluster 4 (December 2013 - January 2014) (<1 day). ASFs showed dramatic seasonality and were negatively related to hydrological wetness conditions, suggesting that sampling interval should be reduced with increasing catchment wetness. A key implication of our findings for freshwater management is that long-term bio-monitoring protocols should be developed with the knowledge of tracking algal temporal dynamics with an appropriate sampling frequency. PMID:27841310
Ensemble bayesian model averaging using markov chain Monte Carlo sampling
Vrugt, Jasper A; Diks, Cees G H; Clark, Martyn P
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
Monte Carlo path sampling approach to modeling aeolian sediment transport
NASA Astrophysics Data System (ADS)
Hardin, E. J.; Mitasova, H.; Mitas, L.
2011-12-01
but evolve the system according to rules that are abstractions of the governing physics. This work presents the Green function solution to the continuity equations that govern sediment transport. The Green function solution is implemented using a path sampling approach whereby sand mass is represented as an ensemble of particles that evolve stochastically according to the Green function. In this approach, particle density is a particle representation that is equivalent to the field representation of elevation. Because aeolian transport is nonlinear, particles must be propagated according to their updated field representation with each iteration. This is achieved using a particle-in-cell technique. The path sampling approach offers a number of advantages. The integral form of the Green function solution makes it robust to discontinuities in complex terrains. Furthermore, this approach is spatially distributed, which can help elucidate the role of complex landscapes in aeolian transport. Finally, path sampling is highly parallelizable, making it ideal for execution on modern clusters and graphics processing units.
Monte Carlo simulation of air sampling methods for the measurement of radon decay products.
Sima, Octavian; Luca, Aurelian; Sahagia, Maria
2017-02-21
A stochastic model of the processes involved in the measurement of the activity of the (222)Rn decay products was developed. The distributions of the relevant factors, including air sampling and radionuclide collection, are propagated using Monte Carlo simulation to the final distribution of the measurement results. The uncertainties of the (222)Rn decay products concentrations in the air are realistically evaluated.
Automatic variance reduction for Monte Carlo simulations via the local importance function transform
Turner, S.A.
1996-02-01
The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.
Liang, Faming; Jin, Ick-Hoon
2013-08-01
Simulating from distributions with intractable normalizing constants has been a long-standing problem in machine learning. In this letter, we propose a new algorithm, the Monte Carlo Metropolis-Hastings (MCMH) algorithm, for tackling this problem. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm. It replaces the unknown normalizing constant ratio by a Monte Carlo estimate in simulations, while still converging, as shown in the letter, to the desired target distribution under mild conditions. The MCMH algorithm is illustrated with spatial autologistic models and exponential random graph models. Unlike other auxiliary variable Markov chain Monte Carlo (MCMC) algorithms, such as the Møller and exchange algorithms, the MCMH algorithm avoids the requirement for perfect sampling, and thus can be applied to many statistical models for which perfect sampling is not available or very expensive. The MCMH algorithm can also be applied to Bayesian inference for random effect models and missing data problems that involve simulations from a distribution with intractable integrals.
Azbouche, Ahmed; Belgaid, Mohamed; Mazrou, Hakim
2015-08-01
A fully detailed Monte Carlo geometrical model of a High Purity Germanium detector with a (152)Eu source, packed in Marinelli beaker, was developed for routine analysis of large volume environmental samples. Then, the model parameters, in particular, the dead layer thickness were adjusted thanks to a specific irradiation configuration together with a fine-tuning procedure. Thereafter, the calculated efficiencies were compared to the measured ones for standard samples containing (152)Eu source filled in both grass and resin matrices packed in Marinelli beaker. From this comparison, a good agreement between experiment and Monte Carlo calculation results was obtained highlighting thereby the consistency of the geometrical computational model proposed in this work. Finally, the computational model was applied successfully to determine the (137)Cs distribution in soil matrix. From this application, instructive results were achieved highlighting, in particular, the erosion and accumulation zone of the studied site.
Fast sampling in the slow manifold: The momentum-enhanced hybrid Monte Carlo method
NASA Astrophysics Data System (ADS)
Andricioaei, Ioan
2005-03-01
We will present a novel dynamic algorithm, the MEHMC method, which enhances sampling while yielding correct Boltzmann-weighted statistical distributions. The gist of the MEHMC method is to use momentum averaging to identify the slow manifold and to bias, along this manifold, the Maxwell distribution of momenta usually employed in hybrid Monte Carlo. Several tests and applications exemplify the method.
NASA Astrophysics Data System (ADS)
Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.
2016-10-01
Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach to the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (< ˜20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe tradeoffs - an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provide a meaningful framework for interrogating Earth structure from receiver function data.
ERIC Educational Resources Information Center
Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying
2011-01-01
Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
Modeling N2O Emissions From Temperate Agroecosystems: A Literature Review Using Monte Carlo Sampling
NASA Astrophysics Data System (ADS)
Tonitto, C.
2006-12-01
In this work, we model annual N2O flux based on field experiments in temperate agroecosystems reported in the literature. Understanding potential N2O flux as a consequence of ecosystem management is important for mitigating global change. While loss of excess N as N2 has no environmental consequences, loss as N2O contributes to the greenhouse effect; over a 100 year time horizon N2O has 310 times the global warming potential (GWP) of CO2. Nitrogen trace gas flux remains difficult to accurately quantify under field conditions due to temporal and spatial limitations of sampling. Trace gas measurement techniques often rely on small chambers sampled at regular intervals. This measurement scheme can undersample stochastic events, such as high precipitation, which correspond to periods of high N trace gas flux. We apply Monte Carlo sampling of field measurements to project N2O losses under different crops and soil textures. Three statistical models are compared: 1) annual N2O flux as a function of process rates derived from temporally aggregated field observations, 2) annual N2O flux incorporating the probability of precipitation events, and 3) annual N2O flux as a function of crop growth. Using the temporally aggregated model, predicted annual N2O flux was highest for corn and wheat, which receive higher fertilizer inputs relative to barley and ryegrass. Within a cropping system, clayey soil textures resulted in the highest N2O flux. The incorporation of precipitation events in the model has the greatest effect on clayey soils. Relative to the aggregated model the inclusion of precipitation events changed predicted mean annual N2O flux from 31 to 49 kg N ha-1 for corn grown on clay loam and shifted the 75% confidence interval (CI) from 20-42 to 38-61 kg N ha-1. In contrast, comparisons between the aggregated and precipitation event models resulted in indistinguishable predictions of mean annual N2O loss for corn grown on silty loam and loam soils. Similarly, application
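A minimal sketch of the resampling step: build Monte Carlo simulated years from daily flux observations and read off a central 75% confidence interval. The daily flux model below is an invented stand-in for field data, not the literature values used in the study.

    import numpy as np

    rng = np.random.default_rng(14)
    # hypothetical daily N2O fluxes (kg N/ha/day): low baseline plus rare rain-driven pulses
    base = rng.lognormal(-3.5, 0.5, 365)
    pulses = (rng.random(365) < 0.05) * rng.lognormal(0.0, 0.7, 365)
    daily = base + pulses

    # Monte Carlo simulated years: resample daily observations with replacement
    annual = np.array([rng.choice(daily, 365, replace=True).sum()
                       for _ in range(10000)])
    lo, hi = np.percentile(annual, [12.5, 87.5])    # central 75% confidence interval
    print(annual.mean(), (lo, hi))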
Sequential Importance Sampling for Rare Event Estimation with Computer Experiments
Williams, Brian J.; Picard, Richard R.
2012-06-25
Importance sampling often drastically improves the variance of percentile and quantile estimators of rare events. We propose a sequential strategy for iterative refinement of importance distributions for sampling uncertain inputs to a computer model to estimate quantiles of model output or the probability that the model output exceeds a fixed or random threshold. A framework is introduced for updating a model surrogate to maximize its predictive capability for rare event estimation with sequential importance sampling. Examples of the proposed methodology involving materials strength and nuclear reactor applications will be presented. The conclusions are: (1) Importance sampling improves UQ of percentile and quantile estimates relative to brute force approach; (2) Benefits of importance sampling increase as percentiles become more extreme; (3) Iterative refinement improves importance distributions in relatively few iterations; (4) Surrogates are necessary for slow running codes; (5) Sequential design improves surrogate quality in region of parameter space indicated by importance distributions; and (6) Importance distributions and VRFs stabilize quickly, while quantile estimates may converge slowly.
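A minimal sketch of sequential refinement in the cross-entropy style: the Gaussian importance distribution is pulled toward the exceedance region one quantile level at a time, then used for a final weighted estimate. The toy "computer model", threshold, and parameters are illustrative, and the paper's surrogate updating is not shown.

    import numpy as np

    rng = np.random.default_rng(15)
    g = lambda x: x[:, 0] + 2*x[:, 1]      # toy model output; inputs are N(0, I)
    t = 12.0                               # rare threshold (P ~ 4e-8 here)
    mu, n, rho = np.zeros(2), 2000, 0.1

    for it in range(20):
        x = mu + rng.normal(size=(n, 2))
        y = g(x)
        gamma = min(t, np.quantile(y, 1 - rho))       # current, reachable level
        elite = x[y >= gamma]
        # likelihood-ratio weights N(0,I)/N(mu,I) for the elite samples
        logw = -0.5*np.sum(elite**2, 1) + 0.5*np.sum((elite - mu)**2, 1)
        w = np.exp(logw - logw.max()); w /= w.sum()
        mu = w @ elite                                # weighted refit of the proposal mean
        if gamma >= t:
            break

    # final importance-sampling estimate of P(g(X) >= t)
    x = mu + rng.normal(size=(100000, 2))
    w = np.exp(-0.5*np.sum(x**2, 1) + 0.5*np.sum((x - mu)**2, 1))
    print(np.mean((g(x) >= t) * w))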
NASA Astrophysics Data System (ADS)
Holmes, Jesse Curtis
established that depends on uncertainties in the physics models and methodology employed to produce the DOS. Through Monte Carlo sampling of perturbations from the reference phonon spectrum, an S(α, β) covariance matrix may be generated. In this work, density functional theory and lattice dynamics in the harmonic approximation are used to calculate the phonon DOS for hexagonal crystalline graphite. This form of graphite is used as an example material for the purpose of demonstrating procedures for analyzing, calculating and processing thermal neutron inelastic scattering uncertainty information. Several sources of uncertainty in thermal neutron inelastic scattering calculations are examined, including sources which cannot be directly characterized through a description of the phonon DOS uncertainty, and their impacts are evaluated. Covariances for hexagonal crystalline graphite S(α, β) data are quantified by coupling the standard methodology of LEAPR with a Monte Carlo sampling process. The mechanics of efficiently representing and processing this covariance information is also examined. Finally, with appropriate sensitivity information, it is shown that an S(α, β) covariance matrix can be propagated to generate covariance data for integrated cross sections, secondary energy distributions, and coupled energy-angle distributions. This approach enables a complete description of thermal neutron inelastic scattering cross section uncertainties which may be employed to improve the simulation of nuclear systems.
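A minimal Python sketch of the covariance-generation step described above, assuming a toy Debye-like reference DOS and an uncorrelated lognormal perturbation model (both illustrative stand-ins for the physics-based uncertainties in the text):

```python
# Covariance of a spectrum from Monte Carlo perturbation sampling.
# The Debye-like reference DOS and the lognormal perturbation model are
# illustrative stand-ins, not the paper's uncertainty model.
import numpy as np

rng = np.random.default_rng(0)
e = np.linspace(0.01, 0.2, 50)        # energy grid (eV), illustrative
de = e[1] - e[0]
dos_ref = e**2
dos_ref /= (dos_ref * de).sum()       # normalize the reference DOS

samples = []
for _ in range(1000):
    dos = dos_ref * rng.lognormal(0.0, 0.05, e.size)  # perturbed spectrum
    dos /= (dos * de).sum()                           # keep unit normalization
    samples.append(dos)
cov = np.cov(np.array(samples), rowvar=False)         # 50x50 covariance matrix
print(cov.shape)
```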
Perera, Meewanage Dilina N; Li, Ying Wai; Eisenbach, Markus; Vogel, Thomas; Landau, David P
2015-01-01
We describe the study of thermodynamics of materials using replica-exchange Wang-Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang-Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first-principles calculations. We demonstrate that our framework leads to a significant speedup without compromising accuracy or precision, and facilitates the study of much larger systems than is possible with its serial counterpart.
A new approach to Monte Carlo simulations in statistical physics: Wang-Landau sampling
NASA Astrophysics Data System (ADS)
Landau, D. P.; Tsai, Shan-Ho; Exler, M.
2004-10-01
We describe a Monte Carlo algorithm for doing simulations in classical statistical physics in a different way. Instead of sampling the probability distribution at a fixed temperature, a random walk is performed in energy space to extract an estimate for the density of states. The probability can be computed at any temperature by weighting the density of states by the appropriate Boltzmann factor. Thermodynamic properties can be determined from suitable derivatives of the partition function and, unlike "standard" methods, the free energy and entropy can also be computed directly. To demonstrate the simplicity and power of the algorithm, we apply it to models exhibiting first-order or second-order phase transitions.
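The random walk in energy space can be sketched compactly. The following Python example runs Wang-Landau sampling for a small 2D Ising model; the lattice size and the stage schedule (a fixed number of steps per stage, then halving ln f) are simplifications of the usual histogram-flatness test.

```python
# Wang-Landau sampling of the density of states of a 4x4 periodic 2D Ising
# model. The stage schedule (fixed sweeps, then halve ln f) is a
# simplification of the usual histogram-flatness test.
import numpy as np

rng = np.random.default_rng(0)
L = 4
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    # each nearest-neighbor bond counted once via periodic rolls
    return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

lng, lnf = {}, 1.0                    # running ln g(E); modification factor
E = energy(spins)
while lnf > 1e-4:
    for _ in range(20_000):
        i, j = rng.integers(L, size=2)
        dE = 2 * spins[i, j] * (spins[(i+1) % L, j] + spins[(i-1) % L, j]
                                + spins[i, (j+1) % L] + spins[i, (j-1) % L])
        # accept flip with probability min(1, g(E)/g(E+dE))
        if np.log(rng.random()) < lng.get(E, 0.0) - lng.get(E + dE, 0.0):
            spins[i, j] *= -1
            E = E + dE
        lng[E] = lng.get(E, 0.0) + lnf   # update ln g at the visited energy
    lnf /= 2.0                           # refine the modification factor
print(sorted(lng.items()))
```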
9 CFR 327.11 - Receipts to importers for import product samples.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Receipts to importers for import product samples. 327.11 Section 327.11 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE... AND VOLUNTARY INSPECTION AND CERTIFICATION IMPORTED PRODUCTS § 327.11 Receipts to importers for...
Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs
Infanger, G.
1993-11-01
The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages, and problems easily get out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of expected future costs as well as of the gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper bound estimation. Initial numerical results are promising.
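As a minimal, hedged illustration of the importance-sampling ingredient only (the Benders decomposition itself is not shown), the following Python sketch estimates an expected recourse cost by sampling scenarios in proportion to a cheap additive cost approximation and reweighting; the scenario set and cost function are toy assumptions.

```python
# Importance sampling of scenarios for an expected recourse cost, with
# sampling probabilities proportional to a cheap cost approximation.
import numpy as np

rng = np.random.default_rng(0)
demand = np.arange(10.0)              # one stochastic parameter, 10 outcomes
p = np.full(10, 0.1)                  # true scenario probabilities

def cost(d):                          # stand-in for a second-stage LP value
    return max(d - 4.0, 0.0) ** 2

c = np.array([cost(d) for d in demand])
q = (c + 1e-3) / (c + 1e-3).sum()     # importance distribution over scenarios
idx = rng.choice(10, size=1000, p=q)
est = np.mean(p[idx] / q[idx] * c[idx])   # reweighted estimate of E[cost]
print(est, (p * c).sum())                 # compare with the exact expectation
```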
On the importance of incorporating sampling weights in ...
Occupancy models are used extensively to assess wildlife-habitat associations and to predict species distributions across large geographic regions. Occupancy models were developed as a tool to properly account for imperfect detection of a species. Current guidelines on survey design requirements for occupancy models focus on the number of sample units and the pattern of revisits to a sample unit within a season. We focus on the sampling design, i.e., how the sample units are selected in geographic space (e.g., stratified, simple random, unequal probability, etc.). In a probability design, each sample unit has a sample weight which quantifies the number of sample units it represents in the finite (oftentimes areal) sampling frame. We demonstrate the importance of including sampling weights in occupancy model estimation when the design is not a simple random sample or equal probability design. We assume a finite areal sampling frame as proposed for a national bat monitoring program. We compare several unequal and equal probability designs and varying sampling intensity within a simulation study. We found the traditional single-season occupancy model produced biased estimates of occupancy and lower confidence interval coverage rates compared to occupancy models that accounted for the sampling design. We also discuss how our findings inform the analyses proposed for the nascent North American Bat Monitoring Program and other collaborative synthesis efforts that propose h
Markov chain Monte Carlo sampling of gene genealogies conditional on unphased SNP genotype data.
Burkett, Kelly M; McNeney, Brad; Graham, Jinko
2013-10-01
The gene genealogy is a tree describing the ancestral relationships among genes sampled from unrelated individuals. Knowledge of the tree is useful for inference of population-genetic parameters and has potential application in gene-mapping. Markov chain Monte Carlo approaches that sample genealogies conditional on observed genetic data typically assume that haplotype data are observed even though commonly-used genotyping technologies provide only unphased genotype data. We have extended our haplotype-based genealogy sampler, sampletrees, to handle unphased genotype data. We use the sampled haplotype configurations as a diagnostic for adequate sampling of the tree space based on the reasoning that if haplotype sampling is restricted, sampling from the tree space will also be restricted. We compare the distributions of sampled haplotypes across multiple runs of sampletrees, and to those estimated by the phase inference program, PHASE. Performance was excellent for the majority of individuals as shown by the consistency of results across multiple runs. However, for some individuals in some datasets, sampletrees had problems sampling haplotype configurations; longer run lengths would be required for these datasets. For many datasets though, we expect that sampletrees will be useful for sampling from the posterior distribution of gene genealogies given unphased genotype data.
Reactive Monte Carlo sampling with an ab initio potential
Leiding, Jeff; Coe, Joshua D.
2016-05-04
Here, we present the first application of reactive Monte Carlo (RxMC) in a first-principles context. The algorithm samples in a modified NVT ensemble in which the volume, temperature, and total number of atoms of a given type are held fixed, but molecular composition is allowed to evolve through stochastic variation of chemical connectivity. We also discuss general features of the method, as well as techniques needed to enhance the efficiency of Boltzmann sampling. Finally, we compare the results of simulation of NH3 to those of ab initio molecular dynamics (AIMD). We find that there are regions of state space for which RxMC sampling is much more efficient than AIMD due to the “rare-event” character of chemical reactions.
Multiscale Monte Carlo Sampling of Protein Sidechains: Application to Binding Pocket Flexibility
Nilmeier, Jerome; Jacobson, Matt
2008-01-01
We present a Monte Carlo sidechain sampling procedure and apply it to assessing the flexibility of protein binding pockets. We implemented a multiple “time step” Monte Carlo algorithm to optimize sidechain sampling with a surface generalized Born implicit solvent model. In this approach, certain forces (those due to long-range electrostatics and the implicit solvent model) are updated infrequently, in “outer steps”, while short-range forces (covalent, local nonbonded interactions) are updated at every “inner step”. Two multistep protocols were studied. The first protocol rigorously obeys detailed balance, and the second protocol introduces an approximation to the solvation term that increases the acceptance ratio. The first protocol gives a 10-fold improvement over a protocol that does not use multiple time steps, while the second protocol generates comparable ensembles and gives a 15-fold improvement. A range of 50–200 inner steps per outer step was found to give optimal performance for both protocols. The resultant method is a practical means to assess sidechain flexibility in ligand binding pockets, as we illustrate with proof-of-principle calculations on six proteins: DB3 antibody, thermolysin, estrogen receptor, PPAR-γ, PI3 kinase, and CDK2. The resulting sidechain ensembles of the apo binding sites correlate well with known induced fit conformational changes and provide insights into binding pocket flexibility. PMID:19119325
Gil, Victor A; Lecina, Daniel; Grebner, Christoph; Guallar, Victor
2016-10-15
Normal mode methods are becoming a popular alternative to sample the conformational landscape of proteins. In this study, we describe the implementation of an internal coordinate normal mode analysis method and its application in exploring protein flexibility by using the Monte Carlo method PELE. This new method alternates two different stages, a perturbation of the backbone through the application of torsional normal modes, and a resampling of the side chains. We have evaluated the new approach using two test systems, ubiquitin and c-Src kinase, and the differences from the original ANM method are assessed by comparing both results to reference molecular dynamics simulations. The results suggest that the sampled phase space in the internal coordinate approach is closer to the molecular dynamics phase space than the one coming from a Cartesian coordinate anisotropic network model. In addition, the new method shows a considerable speedup (∼5-7×), making it a good candidate for future normal mode implementations in Monte Carlo methods.
Improved algorithms and coupled neutron-photon transport for auto-importance sampling method
NASA Astrophysics Data System (ADS)
Wang, Xin; Li, Jun-Li; Wu, Zhen; Qiu, Rui; Li, Chun-Yan; Liang, Man-Chun; Zhang, Hui; Gang, Zhi; Xu, Hong
2017-01-01
The Auto-Importance Sampling (AIS) method is a Monte Carlo variance reduction technique proposed for deep penetration problems, which can significantly improve computational efficiency without pre-calculation of the importance distribution. However, the AIS method has only been validated on several simple examples and cannot be used for coupled neutron-photon transport. This paper presents improved algorithms for the AIS method, including particle transport, fictitious particle creation and adjustment, fictitious surface geometry, random number allocation and calculation of the estimated relative error. These improvements allow the AIS method to be applied to complicated deep penetration problems with complex geometry and multiple materials. A completely Coupled Neutron-Photon Auto-Importance Sampling (CNP-AIS) method is proposed to solve the deep penetration problems of coupled neutron-photon transport using the improved algorithms. The NUREG/CR-6115 PWR benchmark was calculated by using the methods of CNP-AIS, geometry splitting with Russian roulette and analog Monte Carlo, respectively. The calculation results of CNP-AIS are in good agreement with those of geometry splitting with Russian roulette and the benchmark solutions. The computational efficiency of CNP-AIS for both neutrons and photons is much better than that of geometry splitting with Russian roulette in most cases, and is several orders of magnitude higher than that of the analog Monte Carlo. Supported by the subject of National Science and Technology Major Project of China (2013ZX06002001-007, 2011ZX06004-007) and National Natural Science Foundation of China (11275110, 11375103)
Note: A pure-sampling quantum Monte Carlo algorithm with independent Metropolis.
Vrbik, Jan; Ospadov, Egor; Rothstein, Stuart M
2016-07-14
Recently, Ospadov and Rothstein published a pure-sampling quantum Monte Carlo algorithm (PSQMC) that features an auxiliary Path Z that connects the midpoints of the current and proposed Paths X and Y, respectively. When sufficiently long, Path Z provides statistical independence of Paths X and Y. Under those conditions, the Metropolis decision used in PSQMC is made without any approximation, i.e., not requiring microscopic reversibility and without having to introduce any G(x → x'; τ) factors into its decision function. This is a unique feature that contrasts with all competing reptation algorithms in the literature. An example illustrates that dependence between Paths X and Y has adverse consequences for pure sampling.
Fast Monte Carlo simulation of a dispersive sample on the SEQUOIA spectrometer at the SNS
Granroth, Garrett E; Chen, Meili; Kohl, James Arthur; Hagen, Mark E; Cobb, John W
2007-01-01
Simulation of an inelastic scattering experiment, with a sample and a large pixelated detector, usually requires days of computation because of finite processor speeds. We report simulations of an SNS (Spallation Neutron Source) instrument, SEQUOIA, that reduce the time to less than 2 hours by using parallelization and the resources of the TeraGrid. SEQUOIA is a fine-resolution (∆E/Ei ~ 1%) chopper spectrometer under construction at the SNS. It utilizes incident energies from Ei = 20 meV to 2 eV and will have ~144,000 detector pixels covering 1.6 sr of solid angle. The full spectrometer, including a 1-D dispersive sample, has been simulated using the Monte Carlo package McStas. This paper summarizes the parallelization method and the results from these simulations. In addition, limitations of and proposed improvements to current analysis software are discussed.
Schumaker, Mark F; Kramer, David M
2011-09-01
We have programmed a Monte Carlo simulation of the Q-cycle model of electron transport in cytochrome b(6)f complex, an enzyme in the photosynthetic pathway that converts sunlight into biologically useful forms of chemical energy. Results were compared with published experiments of Kramer and Crofts (Biochim. Biophys. Acta 1183:72-84, 1993). Rates for the simulation were optimized by constructing large numbers of parameter sets using Latin hypercube sampling and selecting those that gave the minimum mean square deviation from experiment. Multiple copies of the simulation program were run in parallel on a Beowulf cluster. We found that Latin hypercube sampling works well as a method for approximately optimizing very noisy objective functions of 15 or 22 variables. Further, the simplified Q-cycle model can reproduce experimental results in the presence or absence of a quinone reductase (Q(i)) site inhibitor without invoking ad hoc side-reactions.
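A minimal sketch of the Latin hypercube construction used above to generate candidate parameter sets; the dimensions and counts are illustrative, and real use would rescale each column from [0, 1] to a physical rate range.

```python
# Latin hypercube sampling of candidate parameter sets in [0, 1]^k.
import numpy as np

def latin_hypercube(n, k, rng):
    """n points in [0, 1]^k with one point per equal-probability stratum."""
    u = (rng.random((n, k)) + np.arange(n)[:, None]) / n  # jitter in strata
    for j in range(k):
        u[:, j] = u[rng.permutation(n), j]  # decouple strata across dims
    return u

rng = np.random.default_rng(0)
params = latin_hypercube(200, 15, rng)      # 200 candidate sets of 15 rates
# each column hits all 200 strata exactly once:
assert all(len(np.unique((params[:, j] * 200).astype(int))) == 200
           for j in range(15))
```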
Improved importance sampling technique for efficient simulation of digital communication systems
NASA Technical Reports Server (NTRS)
Lu, Dingqing; Yao, Kung
1988-01-01
A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evaluations of the estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evaluations are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a memoryless linear system without signals is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and IIS over CIS for simulations of digital communication systems.
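A minimal Python sketch contrasting the two biasing strategies named above for a toy Gaussian threshold detector: variance scaling in the spirit of CIS and mean translation in the spirit of IIS. The threshold and biasing parameters are illustrative, not the paper's optimized values.

```python
# Two importance sampling biasing strategies for P(N > T), N ~ N(0, 1):
# variance scaling (CIS-like) and mean translation (IIS-like).
import numpy as np

rng = np.random.default_rng(0)
T, n = 4.0, 200_000                        # decision threshold, sample count

def is_estimate(draw, log_ratio):
    x = draw(n)
    return np.mean((x > T) * np.exp(log_ratio(x)))

s = 2.5                                    # scaled noise: g = N(0, s^2)
cis = is_estimate(lambda m: rng.normal(0, s, m),
                  lambda x: -0.5 * x**2 + 0.5 * (x / s)**2 + np.log(s))
c = T                                      # translated noise: g = N(c, 1)
iis = is_estimate(lambda m: rng.normal(c, 1, m),
                  lambda x: -0.5 * x**2 + 0.5 * (x - c)**2)
print(cis, iis)  # both approach 1 - Phi(4), about 3.17e-5
```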
Accelerated nonrigid intensity-based image registration using importance sampling.
Bhagalia, Roshni; Fessler, Jeffrey A; Kim, Boklye
2009-08-01
Nonrigid image registration methods using intensity-based similarity metrics are becoming increasingly common tools to estimate many types of deformations. Nonrigid warps can be very flexible with a large number of parameters and gradient optimization schemes are widely used to estimate them. However, for large datasets, the computation of the gradient of the similarity metric with respect to these many parameters becomes very time consuming. Using a small random subset of image voxels to approximate the gradient can reduce computation time. This work focuses on the use of importance sampling to reduce the variance of this gradient approximation. The proposed importance sampling framework is based on an edge-dependent adaptive sampling distribution designed for use with intensity-based registration algorithms. We compare the performance of registration based on stochastic approximations with and without importance sampling to that using deterministic gradient descent. Empirical results, on simulated magnetic resonance brain data and real computed tomography inhale-exhale lung data from eight subjects, show that a combination of stochastic approximation methods and importance sampling accelerates the registration process while preserving accuracy.
Hoti, Fabian J; Sillanpää, Mikko J; Holmström, Lasse
2002-04-01
We provide an overview of the use of kernel smoothing to summarize the quantitative trait locus posterior distribution from a Markov chain Monte Carlo sample. More traditional distributional summary statistics based on the histogram depend both on the bin width and on the sideway shift of the bin grid used. These factors influence both the overall mapping accuracy and the estimated location of the mode of the distribution. Replacing the histogram by kernel smoothing helps to alleviate these problems. Using simulated data, we performed numerical comparisons between the two approaches. The results clearly illustrate the superiority of the kernel method. The kernel approach is particularly efficient when one needs to point out the best putative quantitative trait locus position on the marker map. In such situations, the smoothness of the posterior estimate is especially important because rough posterior estimates easily produce biased mode estimates. Different kernel implementations are available from Rolf Nevanlinna Institute's web page (http://www.rni.helsinki.fi/~fjh).
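A minimal sketch of the kernel alternative: replace the histogram of sampled QTL positions with a Gaussian kernel estimate and read off the smooth posterior mode. The sample and the rule-of-thumb bandwidth are illustrative assumptions.

```python
# Kernel density summary of an MCMC sample of QTL positions.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.normal(37.0, 2.0, 5000)          # sampled QTL positions (cM), toy
grid = np.linspace(0.0, 100.0, 1001)
h = 1.06 * pos.std() * len(pos) ** -0.2    # Silverman's rule of thumb
z = (grid[:, None] - pos[None, :]) / h
dens = np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))
print(grid[np.argmax(dens)])               # smooth estimate of posterior mode
```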
Computing ensembles of transitions from stable states: Dynamic importance sampling.
Perilla, Juan R; Beckstein, Oliver; Denning, Elizabeth J; Woolf, Thomas B
2011-01-30
There is an increasing dataset of solved biomolecular structures in more than one conformation and increasing evidence that large-scale conformational change is critical for biomolecular function. In this article, we present our implementation of a dynamic importance sampling (DIMS) algorithm that is directed toward improving our understanding of important intermediate states between experimentally defined starting and ending points. This complements traditional molecular dynamics methods where most of the sampling time is spent in the stable free energy wells defined by these initial and final points. As such, the algorithm creates a candidate set of transitions that provide insights for the much slower and probably most important, functionally relevant degrees of freedom. The method is implemented in the program CHARMM and is tested on six systems of growing size and complexity. These systems, the folding of Protein A and of Protein G, the conformational changes in the calcium sensor S100A6, the glucose-galactose-binding protein, maltodextrin, and lactoferrin, are also compared against other approaches that have been suggested in the literature. The results suggest good sampling on a diverse set of intermediates for all six systems with an ability to control the bias and thus to sample distributions of trajectories for the analysis of intermediate states.
NASA Astrophysics Data System (ADS)
Baba, J. S.; Koju, V.; John, D.
2015-03-01
The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computational modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute-force solutions to stochastic light-matter interactions entailing scattering by facilitating timely propagation of sufficient (>10⁷) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization sensitive Monte Carlo method of Ramella-Roman et al. to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
Monte Carlo non-local means: random sampling for large-scale image filtering.
Chan, Stanley H; Zickler, Todd; Lu, Yue M
2014-08-01
We propose a randomized version of the nonlocal means (NLM) algorithm for large-scale image filtering. The new algorithm, called Monte Carlo nonlocal means (MCNLM), speeds up the classical NLM by computing a small subset of image patch distances, which are randomly selected according to a designed sampling pattern. We make two contributions. First, we analyze the performance of the MCNLM algorithm and show that, for large images or large external image databases, the random outcomes of MCNLM are tightly concentrated around the deterministic full NLM result. In particular, our error probability bounds show that, at any given sampling ratio, the probability for MCNLM to have a large deviation from the original NLM solution decays exponentially as the size of the image or database grows. Second, we derive explicit formulas for optimal sampling patterns that minimize the error probability bound by exploiting partial knowledge of the pairwise similarity weights. Numerical experiments show that MCNLM is competitive with other state-of-the-art fast NLM algorithms for single-image denoising. When applied to denoising images using an external database containing ten billion patches, MCNLM returns a randomized solution that is within 0.2 dB of the full NLM solution while reducing the runtime by three orders of magnitude.
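A minimal sketch of the random-sampling idea (without the paper's optimized sampling patterns): approximate the NLM weighted average at one pixel from a random subset of patch distances. The image, patch size, bandwidth, and sampling ratio are toy assumptions.

```python
# Randomized non-local means at one pixel: average center intensities of a
# random subset of patches, weighted by patch distance to the reference.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))
h = 0.1                                     # filtering bandwidth
patches = np.lib.stride_tricks.sliding_window_view(img, (5, 5)).reshape(-1, 25)
ref = patches[2000]                         # patch around the pixel to denoise
idx = rng.choice(patches.shape[0], 500, replace=False)  # ~14% sampling ratio
d2 = np.sum((patches[idx] - ref) ** 2, axis=1) / 25.0
w = np.exp(-d2 / h**2)
centers = patches[idx][:, 12]               # center pixel of each sampled patch
print(np.sum(w * centers) / np.sum(w))      # randomized NLM estimate
```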
Exact Tests for the Rasch Model via Sequential Importance Sampling
ERIC Educational Resources Information Center
Chen, Yuguo; Small, Dylan
2005-01-01
Rasch proposed an exact conditional inference approach to testing his model but never implemented it because it involves the calculation of a complicated probability. This paper furthers Rasch's approach by (1) providing an efficient Monte Carlo methodology for accurately approximating the required probability and (2) illustrating the usefulness…
Performance evaluation of an importance sampling technique in a Jackson network
NASA Astrophysics Data System (ADS)
Mahdipour, Ebrahim; Rahmani, Amir Masoud; Setayeshi, Saeed
2014-03-01
Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network settings. Estimating probabilities associated with rare events has been a topic of great importance in queueing theory, and in applied probability at large. In this article, we analyse the performance of an importance sampling estimator for a rare event probability in a Jackson network. The article considers strict deadlines in a two-node Jackson network with feedback whose arrival and service rates are modulated by an exogenous finite-state Markov process. We have estimated the probability of network blocking for various sets of parameters, and also the probability of customers missing their deadlines for different loads and deadlines. We finally show that the probability of total population overflow may be affected by various deadline values, service rates and arrival rates.
Muhammad, Wazir; Lee, Sang Hoon
2013-01-01
Detailed comparisons of the predictions of the Relativistic Form Factors (RFFs) and Modified Form Factors (MFFs), and of their advantages and shortcomings in calculating elastic scattering cross sections, can be found in the literature. However, the issues related to their implementation in the Monte Carlo (MC) sampling for coherently scattered photons are still under discussion. Secondly, the linear interpolation technique (LIT) is a popular method to draw the integrated values of squared RFFs/MFFs (i.e., A(Z, vᵢ²)) over squared momentum transfer (vᵢ² = v₁², …, v₅₉²). In the current study, the roles of and issues with RFFs/MFFs and the LIT in MC sampling for coherent scattering were analyzed. The results showed that the relative probability density curves sampled on the basis of MFFs are unable to reveal any extra scientific information, as both the RFFs and MFFs produced the same MC sampled curves. Furthermore, no relationship was established between the multiple small peaks and irregular step shapes (i.e., statistical noise) in the PDFs and either the RFFs or MFFs. In fact, the noise in the PDFs appeared due to the use of the LIT. The density of the noise depends upon the interval length between two consecutive points in the input data table of A(Z, vᵢ²) and has no physical origin. The probability density function curves became smoother as the interval lengths were decreased. In conclusion, this statistical noise can be efficiently removed by introducing more data points in the A(Z, vᵢ²) data tables.
Estimation variance bounds of importance sampling simulations in digital communication systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
Petaccia, M; Segui, S; Castellano, G
2016-11-01
Fluorescence enhancement in samples irradiated in a scanning electron microscope or an electron microprobe should be appropriately assessed in order not to distort quantitative analyses. Several models have been proposed to take into account this effect and current quantification routines are based on them, many of which have been developed under the assumption that bremsstrahlung fluorescence correction is negligible when compared to characteristic enhancement; however, no concluding arguments have been provided in order to support this assumption. As detectors are unable to discriminate primary from secondary characteristic X-rays, Monte Carlo simulation of radiation transport becomes a determinant tool in the study of this fluorescence enhancement. In this work, bremsstrahlung fluorescence enhancement in electron probe microanalysis has been studied by using the interaction forcing routine offered by penelope 2008 as a variance reduction alternative. The developed software allowed us to show that bremsstrahlung and characteristic fluorescence corrections are in fact comparable in the studied cases. As an extra result, the interaction forcing approach appears as a most efficient method, not only in the computation of the continuum enhancement but also for the assessment of the characteristic fluorescence correction.
Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
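A minimal sketch of the differential-evolution proposal at the heart of DREAM-style samplers, omitting the subspace sampling, outlier handling, and adaptation described above; the bimodal target and tuning constants are illustrative.

```python
# Differential-evolution proposal: perturb each chain by the difference of
# two other randomly chosen chains, then accept with a Metropolis test.
import numpy as np

rng = np.random.default_rng(0)
d, n_chains = 5, 10

def log_post(x):                            # toy two-mode target
    return np.logaddexp(-0.5 * np.sum((x - 3.0) ** 2),
                        -0.5 * np.sum((x + 3.0) ** 2))

X = rng.normal(0.0, 5.0, (n_chains, d))
gamma = 2.38 / np.sqrt(2 * d)               # standard DE jump scale
for step in range(5000):
    for i in range(n_chains):
        r1, r2 = rng.choice([j for j in range(n_chains) if j != i],
                            2, replace=False)
        prop = X[i] + gamma * (X[r1] - X[r2]) + rng.normal(0.0, 1e-6, d)
        if np.log(rng.random()) < log_post(prop) - log_post(X[i]):
            X[i] = prop                     # Metropolis acceptance
```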
Kanick, S C; Robinson, D J; Sterenborg, H J C M; Amelink, A
2009-11-21
Single fiber reflectance spectroscopy is a method to noninvasively quantitate tissue absorption and scattering properties. This study utilizes a Monte Carlo (MC) model to investigate the effect that optical properties have on the propagation of photons that are collected during the single fiber reflectance measurement. MC model estimates of the single fiber photon path length (L(SF)) show excellent agreement with experimental measurements and predictions of a mathematical model over a wide range of optical properties and fiber diameters. Simulation results show that L(SF) is unaffected by changes in anisotropy (g ∈ {0.8, 0.9, 0.95}), but is sensitive to changes in phase function (Henyey-Greenstein versus modified Henyey-Greenstein). A 20% decrease in L(SF) was observed for the modified Henyey-Greenstein compared with the Henyey-Greenstein phase function; an effect that is independent of optical properties and fiber diameter and is approximated with a simple linear offset. The MC model also returns depth-resolved absorption profiles that are used to estimate the mean sampling depth (Z(SF)) of the single fiber reflectance measurement. Simulated data are used to define a novel mathematical expression for Z(SF) that is expressed in terms of optical properties, fiber diameter and L(SF). The model of sampling depth indicates that the single fiber reflectance measurement is dominated by shallow scattering events, even for large fibers; a result that suggests that the utility of single fiber reflectance measurements of tissue in vivo will be in the quantification of the optical properties of superficial tissues.
An importance sampling algorithm for estimating extremes of perpetuity sequences
NASA Astrophysics Data System (ADS)
Collamore, Jeffrey F.
2012-09-01
In a wide class of problems in insurance and financial mathematics, it is of interest to study the extremal events of a perpetuity sequence. This paper addresses the problem of numerically evaluating these rare event probabilities. Specifically, an importance sampling algorithm is described which is efficient in the sense that it exhibits bounded relative error, and which is optimal in an appropriate asymptotic sense. The main idea of the algorithm is to use a "dual" change of measure, which is applied to an associated Markov chain over a randomly stopped time interval. The algorithm also makes use of the so-called forward sequences generated by the given stochastic recursion, together with elements of Markov chain theory.
19 CFR 151.67 - Sampling by importer.
Code of Federal Regulations, 2014 CFR
2014-04-01
... TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.67 Sampling by... quantities from the packages of wool or hair designated for examination, provided the bales or bags...
19 CFR 151.67 - Sampling by importer.
Code of Federal Regulations, 2012 CFR
2012-04-01
... TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.67 Sampling by... quantities from the packages of wool or hair designated for examination, provided the bales or bags...
19 CFR 151.67 - Sampling by importer.
Code of Federal Regulations, 2013 CFR
2013-04-01
... TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.67 Sampling by... quantities from the packages of wool or hair designated for examination, provided the bales or bags...
19 CFR 151.67 - Sampling by importer.
Code of Federal Regulations, 2010 CFR
2010-04-01
... TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.67 Sampling by... quantities from the packages of wool or hair designated for examination, provided the bales or bags...
19 CFR 151.67 - Sampling by importer.
Code of Federal Regulations, 2011 CFR
2011-04-01
... TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.67 Sampling by... quantities from the packages of wool or hair designated for examination, provided the bales or bags...
Monte Carlo entropic sampling applied to Ising-like model for 2D and 3D systems
NASA Astrophysics Data System (ADS)
Jureschi, C. M.; Linares, J.; Dahoo, P. R.; Alayli, Y.
2016-08-01
In this paper we present Monte Carlo entropic sampling (MCES) applied to an Ising-like model for 2D and 3D systems in order to show the influence of the interaction of the system's edge molecules with their local environment. We show that, as for 1D and 2D spin crossover (SCO) systems, the origin of multi-step transitions in 3D SCO is the interaction of edge molecules with their local environment, together with short- and long-range interactions. Another important result worth noting is the coexistence of step transitions with and without hysteresis. By increasing the value of the edge interaction, L, the transition is shifted to lower temperatures: this means that the role of the edge interaction is equivalent to an applied negative pressure, because the edge interaction favours the HS state while the applied pressure favours the LS state. We also analyse, in this contribution, the roles of the short-range interaction J and the long-range interaction G with respect to the environment interaction L.
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping; Binder, Kurt
2012-01-01
Semiflexible macromolecules in dilute solution under very good solvent conditions are modeled by self-avoiding walks on the simple cubic lattice (d = 3 dimensions) and square lattice (d = 2 dimensions), varying chain stiffness by an energy penalty ɛb for chain bending. In the absence of excluded volume interactions, the persistence length ℓp of the polymers would then simply be ℓp = ℓb(2d − 2)⁻¹qb⁻¹ with qb = exp(−ɛb/kBT), the bond length ℓb being the lattice spacing, and kBT the thermal energy. Using Monte Carlo simulations applying the pruned-enriched Rosenbluth method (PERM), both qb and the chain length N are varied over a wide range (0.005 ⩽ qb ⩽ 1, N ⩽ 50 000), and also a stretching force f is applied to one chain end (fixing the other end at the origin). In the absence of this force, in d = 2 a single crossover from rod-like behavior (for contour lengths less than ℓp) to swollen coils occurs, invalidating the Kratky-Porod model, while in d = 3 a double crossover occurs, from rods to Gaussian coils (as implied by the Kratky-Porod model) and then to coils that are swollen due to the excluded volume interaction. If the stretching force is applied, excluded volume interactions matter for the force-versus-extension relation irrespective of chain stiffness in d = 2, while theories based on the Kratky-Porod model are found to work in d = 3 for stiff chains in an intermediate regime of chain extensions. While for qb ≪ 1 in this model a persistence length can be estimated from the initial decay of bond-orientational correlations, it is argued that this is not possible for more complex wormlike chains (e.g., bottle-brush polymers). Consequences for the proper interpretation of experiments are briefly discussed.
Shaw, Milton Sam; Coe, Joshua D; Sewell, Thomas D
2009-01-01
An optimized version of the Nested Markov Chain Monte Carlo sampling method is applied to the calculation of the Hugoniot for liquid nitrogen. The 'full' system of interest is calculated using density functional theory (DFT) with a 6-31G* basis set for the configurational energies. The 'reference' system is given by a model potential fit to the anisotropic pair interaction of two nitrogen molecules from DFT calculations. The EOS is sampled in the isobaric-isothermal (NPT) ensemble with a trial move constructed from many Monte Carlo steps in the reference system. The trial move is then accepted with a probability chosen to give the full system distribution. The P's and T's of the reference and full systems are chosen separately to optimize the computational time required to produce the full system EOS. The method is numerically very efficient and predicts a Hugoniot in excellent agreement with experimental data.
NASA Astrophysics Data System (ADS)
Vogel, Thomas; Perez, Danny; Junghans, Christoph
2014-03-01
We show direct formal relationships between the Wang-Landau iteration [PRL 86, 2050 (2001)], metadynamics [PNAS 99, 12562 (2002)] and statistical temperature molecular dynamics [PRL 97, 050601 (2006)], the major Monte Carlo and molecular dynamics workhorses for sampling from a generalized, multicanonical ensemble. We aim to help consolidate the developments in the different areas by indicating how methodological advancements can be transferred in a straightforward way, avoiding the parallel, largely independent development tracks observed in the past.
Mielke, Steven L; Truhlar, Donald G
2016-01-21
Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function.
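The rejection step can be sketched in one dimension: draw candidates from a broad free-particle-like proposal and keep each with probability given by the ratio of a narrower harmonic importance function to the proposal. Both densities below are illustrative 1-D stand-ins for the path-space distributions in the text.

```python
# Rejection sampling from a broad "free-particle" proposal against a
# narrower "harmonic" importance function, in one dimension.
import numpy as np

rng = np.random.default_rng(0)
omega = 3.0                                  # width from a normal-mode guess

cand = rng.normal(0.0, 1.0, 100_000)         # free-particle-like candidates
# target/(proposal * M), with M chosen so the ratio peaks at 1 (at x = 0)
p_accept = np.exp(-0.5 * (omega**2 - 1.0) * cand**2)
kept = cand[rng.random(cand.size) < p_accept]
print(kept.size / cand.size, kept.std())     # acceptance rate; std -> 1/omega
```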
Williams, Michael S; Ebel, Eric D
2014-11-18
The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the
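A minimal sketch of a design-weighted bootstrap consistent with the idea above: resample observations with probability proportional to their sampling weights and refit on each replicate. The data, selection probabilities, and lognormal summary are invented placeholders, not the paper's estimation framework in full.

```python
# Design-weighted bootstrap for samples collected with unequal selection
# probabilities.
import numpy as np

rng = np.random.default_rng(0)
conc = rng.lognormal(0.0, 1.0, 200)          # measured concentrations, toy
pi = np.linspace(0.2, 1.0, 200)              # unequal selection probabilities
w = (1.0 / pi) / np.sum(1.0 / pi)            # normalized sampling weights

mus = []
for _ in range(2000):
    boot = rng.choice(conc, size=conc.size, replace=True, p=w)
    mus.append(np.log(boot).mean())          # lognormal mu for this replicate
print(np.percentile(mus, [2.5, 97.5]))       # interval honoring the design
```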
Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1998-01-01
Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time to failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support the decreasing Weibull failure distribution and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system application. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.
Code of Federal Regulations, 2014 CFR
2014-07-01
... requirements for importers who import gasoline into the United States by truck. 80.1349 Section 80.1349... FUELS AND FUEL ADDITIVES Gasoline Benzene Sampling, Testing and Retention Requirements § 80.1349 Alternative sampling and testing requirements for importers who import gasoline into the United States...
Monte Carlo simulation of a beta particle detector for food samples.
Sato, Y; Takahashi, H; Yamada, T; Unno, Y; Yunoki, A
2013-11-01
The accident at the Fukushima Daiichi Nuclear Power Plant in March 2011 released radionuclides into the environment. There is concern that (90)Sr will be concentrated in seafood. To measure the activities of (90)Sr in a short time without chemical processing, we have designed a new detector that obtains count rates using 10 layers of proportional counters separated by beta-absorbing walls. Monte Carlo simulations were performed to confirm that its design is appropriate.
Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin
2009-02-09
Calculation of the exact prediction error variance-covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values, and the control of variance of response to selection. Alternatively, Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples that is computationally feasible is limited. The objective of this study was to compare the convergence rates of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates, and these were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive, and these made use of information on the variance of the estimated breeding value, on the variance of the true breeding value minus the estimated breeding value, or on the covariance between the true and estimated breeding values.
Sperandio, Olivier; Souaille, Marc; Delfaud, François; Miteva, Maria A; Villoutreix, Bruno O
2009-04-01
Obtaining an efficient sampling of the low- to medium-energy regions of a ligand conformational space is of primary importance for getting insight into relevant binding modes of drug candidates, for the screening of rigid molecular entities on the basis of a predefined pharmacophore, or for rigid-body docking. Here, we report the development of a new computer tool that samples the conformational space by using the Metropolis Monte Carlo algorithm combined with the MMFF94 van der Waals energy term. The performance of the program was assessed on 86 drug-like molecules that resulted from an ADME/tox profiling applied to cocrystallized small molecules, and was compared with the program Omega on the same dataset. Our program has also been assessed on the 85 molecules of the Astex diverse set. Both test sets show convincing performance of our program at sampling the conformational space.
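A minimal sketch of Metropolis Monte Carlo over torsion angles, with a generic periodic energy standing in for the MMFF94 van der Waals term used by the program; the energy model, temperature, and step size are illustrative assumptions.

```python
# Metropolis Monte Carlo over torsion angles with a placeholder energy.
import numpy as np

rng = np.random.default_rng(0)

def energy(torsions):                  # placeholder conformer energy
    return np.sum(1.0 + np.cos(3.0 * torsions))

kT, step = 0.6, 0.3                    # kcal/mol and radians, illustrative
x = rng.uniform(-np.pi, np.pi, 6)      # six rotatable bonds
E = energy(x)
ensemble = []
for it in range(20_000):
    xp = x.copy()
    xp[rng.integers(x.size)] += rng.normal(0.0, step)  # perturb one torsion
    Ep = energy(xp)
    if np.log(rng.random()) < -(Ep - E) / kT:          # Metropolis criterion
        x, E = xp, Ep
    if it % 100 == 0:
        ensemble.append(x.copy())      # retain conformers for later filtering
```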
Kurtz, R.J.; Heasler, P.G.; Baird, D.B.
1994-02-01
This report summarizes the results of three previous studies to evaluate and compare the effectiveness of sampling plans for steam generator tube inspections. An analytical evaluation and Monte Carlo simulation techniques were the methods used to evaluate sampling plan performance. To test the performance of candidate sampling plans under a variety of conditions, ranges of inspection system reliability were considered along with different distributions of tube degradation. Results from the eddy current reliability studies performed with the retired-from-service Surry 2A steam generator were utilized to guide the selection of appropriate probability of detection and flaw sizing models for use in the analysis. Different distributions of tube degradation were selected to span the range of conditions that might exist in operating steam generators. The principal means of evaluating sampling performance was to determine the effectiveness of the sampling plan for detecting and plugging defective tubes. A summary of key results from the eddy current reliability studies is presented. The analytical and Monte Carlo simulation analyses are discussed along with a synopsis of key results and conclusions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... refiners, gasoline importers and producers and importers of certified ethanol denaturant. 80.1630 Section... refiners, gasoline importers and producers and importers of certified ethanol denaturant. (a) Sample and test each batch of gasoline and certified ethanol denaturant. (1) Refiners and importers shall...
Coe, Joshua D; Sewell, Thomas D; Shaw, M Sam
2009-08-21
An optimized variant of the nested Markov chain Monte Carlo [n(MC)(2)] method [J. Chem. Phys. 130, 164104 (2009)] is applied to fluid N(2). In this implementation of n(MC)(2), isothermal-isobaric (NPT) ensemble sampling on the basis of a pair potential (the "reference" system) is used to enhance the efficiency of sampling based on Perdew-Burke-Ernzerhof density functional theory with a 6-31G* basis set (PBE6-31G*, the "full" system). A long sequence of Monte Carlo steps taken in the reference system is converted into a trial step taken in the full system; for a good choice of reference potential, these trial steps have a high probability of acceptance. Using decorrelated samples drawn from the reference distribution, the pressure and temperature of the full system are varied such that its distribution overlaps maximally with that of the reference system. Optimized pressures and temperatures then serve as input parameters for n(MC)(2) sampling of dense fluid N(2) over a wide range of thermodynamic conditions. The simulation results are combined to construct the Hugoniot of nitrogen fluid, yielding predictions in excellent agreement with experiment.
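The nested construction can be sketched compactly: run many Metropolis steps under a cheap reference energy, then accept the whole composite move in the full system using only the difference of (full − reference) energies at the two endpoints. Both energy functions below are toy stand-ins for the pair potential and DFT energies.

```python
# Nested Markov chain move: inner Metropolis chain in a cheap reference
# system, then one full-system acceptance of the composite move.
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0

def e_ref(x):                          # cheap "reference" energy
    return 0.5 * np.sum(x**2)

def e_full(x):                         # expensive "full" energy stand-in
    return 0.5 * np.sum(x**2) + 0.1 * np.sum(x**4)

x = rng.normal(0.0, 1.0, 10)
for outer in range(1000):
    y = x.copy()
    for inner in range(50):            # Metropolis chain in the reference
        yp = y + rng.normal(0.0, 0.3, y.size)
        if np.log(rng.random()) < -beta * (e_ref(yp) - e_ref(y)):
            y = yp
    # composite acceptance: reference contributions cancel
    d = (e_full(y) - e_ref(y)) - (e_full(x) - e_ref(x))
    if np.log(rng.random()) < -beta * d:
        x = y
```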
Monte Carlo calculations of the energy deposited in biological samples and shielding materials
NASA Astrophysics Data System (ADS)
Akar Tarim, U.; Gurler, O.; Ozmutlu, E. N.; Yalcin, S.
2014-03-01
The energy deposited by gamma radiation from the Cs-137 isotope into body tissues (bone and muscle), tissue-like medium (water), and radiation shielding materials (concrete, lead, and water), which is of interest for radiation dosimetry, was obtained using a simple Monte Carlo algorithm. The algorithm also provides a realistic picture of the distribution of backscattered photons from the target and the distribution of photons scattered forward after several scatterings in the scatterer, which is useful in studying radiation shielding. The presented method in this work constitutes an attempt to evaluate the amount of energy absorbed by body tissues and shielding materials.
Radiative transfer and spectroscopic databases: A line-sampling Monte Carlo approach
NASA Astrophysics Data System (ADS)
Galtier, Mathieu; Blanco, Stéphane; Dauchet, Jérémi; El Hafi, Mouna; Eymet, Vincent; Fournier, Richard; Roger, Maxime; Spiesser, Christophe; Terrée, Guillaume
2016-03-01
Dealing with molecular-state transitions for radiative transfer purposes involves two successive steps that both reach the complexity level at which physicists start thinking about statistical approaches: (1) constructing line-shaped absorption spectra as the result of very numerous state-transitions, (2) integrating over optical-path domains. For the first time, we show here how these steps can be addressed simultaneously using the null-collision concept. This opens the door to the design of Monte Carlo codes directly estimating radiative transfer observables from spectroscopic databases. The intermediate step of producing accurate high-resolution absorption spectra is no longer required. A Monte Carlo algorithm is proposed and applied to six one-dimensional test cases. It allows the computation of spectrally integrated intensities (over 25 cm⁻¹ bands or the full IR range) in a few seconds, regardless of the retained database and line model. But free parameters need to be selected and they impact the convergence. A first possible selection is provided in full detail. We observe that this selection is highly satisfactory for quite distinct atmospheric and combustion configurations, but a more systematic exploration is still in progress.
Alrefae, T
2014-12-01
A simple method of efficiency calibration for gamma spectrometry was performed. This method, which focused on measuring the radioactivity of (137)Cs in food samples, was based on Monte Carlo simulations available in the free-of-charge toolkit GEANT4. Experimentally, the efficiency values of a high-purity germanium detector were calculated for three reference materials representing three different food items. These efficiency values were compared with their counterparts produced by a computer code that simulated experimental conditions. Interestingly, the output of the simulation code was in acceptable agreement with the experimental findings, thus validating the proposed method.
Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin
2015-01-01
The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed.
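A schematic of the bootstrap construction of the shortest 95% confidence interval (Python; the per-history scores and the definition of the gain are illustrative stand-ins, not the paper's brachytherapy tallies):

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical per-history scores from a correlated-sampling run and a
# computing-time ratio; in practice both come from the Monte Carlo code
d = rng.standard_normal(10_000) * 0.1 + 1.0
time_ratio = 8.0   # conventional time / correlated-sampling time

def efficiency_gain(sample):
    # schematic definition: gain ~ time ratio x variance reduction
    return time_ratio / np.var(sample, ddof=1)

boot = np.array([
    efficiency_gain(rng.choice(d, size=d.size, replace=True))
    for _ in range(5_000)
])
boot.sort()

# shortest interval containing 95% of the bootstrap replicates
k = int(0.95 * boot.size)
widths = boot[k:] - boot[:boot.size - k]
i = int(np.argmin(widths))
print("gain:", efficiency_gain(d), "shortest 95% CI:", (boot[i], boot[i + k]))
```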
Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen
2013-10-01
Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer's disease classification task. As an additional benefit, the technique also allows one to compute informative "error bars" on the volume estimates of individual structures.
Sample Size Determination for Regression Models Using Monte Carlo Methods in R
ERIC Educational Resources Information Center
Beaujean, A. Alexander
2014-01-01
A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
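The Monte Carlo approach referred to here can be sketched in a few lines (a Python analogue of the article's R-based procedure; the regression model, effect strength, and replication counts below are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def power(n, slope=0.3, sigma=1.0, alpha=0.05, reps=2_000):
    """Monte Carlo power of the slope t-test in y = slope * x + noise."""
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        y = slope * x + sigma * rng.standard_normal(n)
        hits += stats.linregress(x, y).pvalue < alpha
    return hits / reps

# scan candidate sample sizes for the assumed model and effect strength
for n in (30, 60, 90, 120):
    print(n, power(n))
```

The smallest n on the grid whose simulated power exceeds the target (say 0.80) is the required sample size under the assumed model.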
Mora, Leonor; Martínez, Indira; Figuera, Lourdes; Segura, Merlyn; Del Valle, Guilarte
2010-12-01
In Sucre state, the Manzanares river is threatened by domestic, agricultural, and industrial activities, making it an environmental risk factor for its inhabitants. Accordingly, the presence of protozoans in the surface waters of tributaries of the Manzanares river (Orinoco river, Quebrada Seca, San Juan river), Montes municipality, Sucre state, was evaluated, along with the analysis of faecal samples from inhabitants of the towns bordering these tributaries. We collected faecal and water samples from May 2006 through April 2007. The surface-water samples were concentrated by centrifugation and flocculation and examined directly, using Lugol's iodine, modified Kinyoun, and trichrome stains. Faecal samples were analyzed by direct examination with physiological saline solution, by the modified Ritchie concentration method, and with the staining techniques mentioned above. The protozoans most frequently observed in the surface waters of the three tributaries were amoebae, Blastocystis sp., Endolimax sp., Chilomastix sp., and Giardia sp., whereas in faecal samples Blastocystis hominis, Endolimax nana, and Entamoeba coli had the greatest frequencies in the three communities. The inhabitants of Orinoco La Peña proved most susceptible to these parasitic infections (77.60%), followed by San Juan River (46.63%) and Quebrada Seca (39.49%). The presence of pathogenic and nonpathogenic protozoans in surface waters demonstrates the faecal contamination of the tributaries, representing a constant focus of infection for their inhabitants, as inferred from the observation of the same species in both types of samples.
Minimum Sample Size for Cronbach's Coefficient Alpha: A Monte-Carlo Study
ERIC Educational Resources Information Center
Yurdugul, Halil
2008-01-01
The coefficient alpha is the most widely used measure of internal consistency for composite scores in the educational and psychological studies. However, due to the difficulties of data gathering in psychometric studies, the minimum sample size for the sample coefficient alpha has been frequently debated. There are various suggested minimum sample…
Zhang, Jian; Nielsen, Scott E.; Grainger, Tess N.; Kohler, Monica; Chipchar, Tim; Farr, Daniel R.
2014-01-01
Documenting and estimating species richness at regional or landscape scales has been a major emphasis for conservation efforts, as well as for the development and testing of evolutionary and ecological theory. Rarely, however, are sampling efforts assessed on how they affect detection and estimates of species richness and rarity. In this study, vascular plant richness was sampled in 356 quarter hectare time-unlimited survey plots in the boreal region of northeast Alberta. These surveys consisted of 15,856 observations of 499 vascular plant species (97 considered to be regionally rare) collected by 12 observers over a 2 year period. Average survey time for each quarter-hectare plot was 82 minutes, ranging from 20 to 194 minutes, with a positive relationship between total survey time and total plant richness. When survey time was limited to a 20-minute search, as in other Alberta biodiversity methods, 61 species were missed. Extending the survey time to 60 minutes, reduced the number of missed species to 20, while a 90-minute cut-off time resulted in the loss of 8 species. When surveys were separated by habitat type, 60 minutes of search effort sampled nearly 90% of total observed richness for all habitats. Relative to rare species, time-unlimited surveys had ∼65% higher rare plant detections post-20 minutes than during the first 20 minutes of the survey. Although exhaustive sampling was attempted, observer bias was noted among observers when a subsample of plots was re-surveyed by different observers. Our findings suggest that sampling time, combined with sample size and observer effects, should be considered in landscape-scale plant biodiversity surveys. PMID:24740179
Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.; Halappanavar, Mahantesh
2016-09-16
Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches that model the actions of strategic decision-makers are increasingly being applied to address cybersecurity resource-allocation challenges. Such game-based models account for multiple player actions and represent cyber-attacker payoffs mostly as point utility estimates. Since a cyber-attacker's payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework for a notional cyber system through: 1) representation of uncertain attacker- and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
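A two-phase (nested) Monte Carlo loop of the kind referred to above might look as follows (a schematic Python sketch; the payoff model and all distributions are hypothetical placeholders for the notional cyber system):

```python
import numpy as np

rng = np.random.default_rng(8)

def attacker_payoff(skill, noise):
    """Hypothetical payoff model for a notional cyber system."""
    return 10.0 * skill + noise

N_EPISTEMIC, N_ALEATORY = 200, 1_000
envelope = []
for _ in range(N_EPISTEMIC):                  # outer loop: epistemic uncertainty
    skill = rng.uniform(0.2, 0.8)             # poorly known attacker attribute
    payoffs = attacker_payoff(skill, rng.normal(0.0, 1.0, N_ALEATORY))
    envelope.append(payoffs.mean())           # inner loop: aleatory variability

print("payoff percentiles over epistemic draws:",
      np.percentile(envelope, [5, 50, 95]))
```

Separating the two loops keeps reducible (knowledge) uncertainty distinct from irreducible (random) variability, which is the point of the two-phase scheme.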
Zhang, Xiaofeng; Badea, Cristian; Hood, Greg; Wetzel, Arthur; Qi, Yi; Stiles, Joel; Johnson, G. Allan
2011-01-01
We present a method for high-resolution reconstruction of fluorescent images of the mouse thorax. It features an anatomically guided sampling method to retrospectively eliminate problematic data and a parallel Monte Carlo software package to compute the Jacobian matrix for the inverse problem. The proposed method was capable of resolving microliter-sized femtomole amount of quantum dot inclusions closely located in the middle of the mouse thorax. The reconstruction was verified against co-registered micro-CT data. Using the proposed method, the new system achieved significantly higher resolution and sensitivity compared to our previous system consisting of the same hardware. This method can be applied to any system utilizing similar imaging principles to improve imaging performance. PMID:21991539
NASA Astrophysics Data System (ADS)
Subramanian, Ramachandran; Schultz, Andrew J.; Kofke, David A.
2017-03-01
We develop an orientation sampling algorithm for rigid diatomic molecules, which allows direct generation of rings of images used for path-integral calculation of nuclear quantum effects. The algorithm treats the diatomic molecule as two independent atoms as opposed to one (quantum) rigid rotor. Configurations are generated according to a solvable approximate distribution that is corrected via the acceptance decision of the Monte Carlo trial. Unlike alternative methods that treat the systems as a quantum rotor, this atom-based approach is better suited for generalization to multi-atomic (more than two atoms) and flexible molecules. We have applied this algorithm in combination with some of the latest ab initio potentials of rigid H2 to compute fully quantum second virial coefficients, for which we observe excellent agreement with both experimental and simulation data from the literature.
Mourant, J.R.; Hielscher, A.H.; Bigio, I.J.
1996-04-01
Details of the interaction of photons with tissue phantoms are elucidated using Monte Carlo simulations. In particular, photon sampling volumes and photon pathlengths are determined for a variety of scattering and absorption parameters. The Monte Carlo simulations are specifically designed to model light delivery and collection geometries relevant to clinical applications of optical biopsy techniques. The Monte Carlo simulations assume that light is delivered and collected by two nearly adjacent optical fibers and take into account the numerical aperture of the fibers as well as reflectance and refraction at interfaces between different media. To determine the validity of the Monte Carlo simulations for modeling the interactions between the photons and the tissue phantom in these geometries, the simulations were compared to measurements of aqueous suspensions of polystyrene microspheres in the wavelength range 450-750 nm.
A new paradigm for petascale Monte Carlo simulation: Replica exchange Wang-Landau sampling
NASA Astrophysics Data System (ADS)
Li, Ying Wai; Vogel, Thomas; Wüst, Thomas; Landau, David P.
2014-05-01
We introduce a generic, parallel Wang-Landau method that is naturally suited to implementation on massively parallel, petaflop supercomputers. The approach introduces a replica-exchange framework in which densities of states for overlapping sub-windows in energy space are determined iteratively by traditional Wang-Landau sampling. The advantages and general applicability of the method are demonstrated for several distinct systems that possess discrete or continuous degrees of freedom, including those with complex free energy landscapes and topological constraints.
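The serial Wang-Landau kernel that this replica-exchange framework parallelizes can be sketched for a small 2-D Ising model (Python; a single energy window with a simplified flatness test and a loose termination threshold, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
L = 8
N = L * L
spins = rng.choice([-1, 1], size=(L, L))

def local_energy(s, i, j):
    return -s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                       + s[i, (j + 1) % L] + s[i, (j - 1) % L])

E = sum(local_energy(spins, i, j) for i in range(L) for j in range(L)) // 2
bins = {e: k for k, e in enumerate(range(-2 * N, 2 * N + 1, 4))}
lng = np.zeros(len(bins))     # running estimate of ln g(E)
hist = np.zeros(len(bins))
f = 1.0                       # ln of the modification factor

while f > 1e-4:               # production runs push f far lower
    for _ in range(10_000):
        i, j = rng.integers(L, size=2)
        dE = -2 * local_energy(spins, i, j)
        a, b = bins[E], bins[E + dE]
        if rng.random() < np.exp(lng[a] - lng[b]):   # accept with g(old)/g(new)
            spins[i, j] *= -1
            E += dE
            a = b
        lng[a] += f
        hist[a] += 1
    visited = hist > 0
    if hist[visited].min() > 0.8 * hist[visited].mean():  # crude flatness test
        hist[:] = 0.0
        f *= 0.5

print("ln g(E) span over visited energies:", lng[lng > 0].max() - lng[lng > 0].min())
```

In the replica-exchange variant, many such walkers cover overlapping energy sub-windows and periodically exchange configurations, which is what makes the method suitable for massively parallel machines.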
Optimal Sampling Efficiency in Monte Carlo Simulation With an Approximate Potential
2009-02-01
Boltzmann sampling of an approximate potential (the “reference” system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the end points of the chain, the energy is evaluated at a more accurate level (the “full” system). In the isothermal-isobaric ensemble, for which the corresponding potential is the Gibbs free energy, the configurational weight is W_i = -β(U_i + P V_i) + N ln V_i.
Optimal sampling efficiency in Monte Carlo simulation with an approximate potential.
Coe, Joshua D; Sewell, Thomas D; Shaw, M Sam
2009-04-28
Building on the work of Iftimie et al. [J. Chem. Phys. 113, 4852 (2000)] and Gelb [J. Chem. Phys. 118, 7747 (2003)], Boltzmann sampling of an approximate potential (the "reference" system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the end points of the chain, the energy is evaluated at a more accurate level (the "full" system) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. For reference system chains of sufficient length, consecutive full energies are statistically decorrelated and thus far fewer are required to build ensemble averages with a given variance. Without modifying the original algorithm, however, the maximum reference chain length is too short to decorrelate full configurations without dramatically lowering the acceptance probability of the composite move. This difficulty stems from the fact that the reference and full potentials sample different statistical distributions. By manipulating the thermodynamic variables characterizing the reference system (pressure and temperature, in this case), we maximize the average acceptance probability of composite moves, lengthening significantly the random walk between consecutive full energy evaluations. In this manner, the number of full energy evaluations needed to precisely characterize equilibrium properties is dramatically reduced. The method is applied to a model fluid, but implications for sampling high-dimensional systems with ab initio or density functional theory potentials are discussed.
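A schematic of the composite-move construction (Python; a 1-D toy with a cheap reference potential and a double-well "full" potential standing in for the expensive energy evaluation, sampled in the canonical rather than the isothermal-isobaric ensemble for brevity):

```python
import numpy as np

rng = np.random.default_rng(4)
beta = 1.0

def u_full(x):                 # "full" potential (expensive in practice)
    return (x**2 - 1.0) ** 2

def u_ref(x):                  # cheap approximate "reference" potential
    return 0.5 * (abs(x) - 1.0) ** 2

def ref_chain(x, m=50, step=0.5):
    """m ordinary Metropolis steps sampling the reference potential."""
    for _ in range(m):
        y = x + step * rng.uniform(-1.0, 1.0)
        if np.log(rng.random()) < -beta * (u_ref(y) - u_ref(x)):
            x = y
    return x

x, samples = 0.0, []
for _ in range(20_000):
    y = ref_chain(x)
    # composite move: one accept/reject corrects the whole reference segment
    log_acc = -beta * (u_full(y) - u_full(x)) + beta * (u_ref(y) - u_ref(x))
    if np.log(rng.random()) < log_acc:
        x = y
    samples.append(x)

print("<x^2> under the full potential:", np.mean(np.square(samples)))
```

Only two full-potential evaluations are needed per composite move, while the intervening reference steps decorrelate the endpoints, which is the source of the method's savings.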
Tundisi, J G; Matsumura-Tundisi, T; Tundisi, J E M; Faria, C R L; Abe, D S; Blanco, F; Rodrigues Filho, J; Campanelli, L; Sidagis Galli, C; Teixeira-Silva, V; Degani, R; Soares, F S; Gatti Junior, P
2015-08-01
In this paper the authors describe the limnological approaches, the sampling methodology, and the strategy adopted in the study of the Xingu River in the area of influence of the future Belo Monte Power Plant. River ecosystems are characterized by a unidirectional current that is highly variable in time, depending on the climatic situation, the drainage pattern, and the hydrological cycle. Continuous vertical mixing by currents and turbulence is characteristic of these ecosystems. All these basic mechanisms were taken into consideration in the sampling strategy and field work carried out in the Xingu River Basin, upstream and downstream of the future Belo Monte Power Plant units.
Fallahpoor, Maryam; Abbasi, Mehrshad; Asghar Parach, Ali; Kalantari, Faraz
2017-02-28
Using digital phantoms as an atlas, instead of acquiring CT data, for internal radionuclide dosimetry decreases the patient's overall radiation dose and reduces the analysis effort and time required for organ segmentation. The drawback is that the phantom may not match the patient exactly. We assessed the effect of varying BMIs on dosimetry results for a bone pain palliation agent, (153)Sm-EDTMP. The simulation was done using the GATE Monte Carlo code. Female XCAT phantoms with the following BMIs were employed: 18.6, 20.8, 22.1, 26.8, 30.3 and 34.7 kg/m(2). S-factors (mGy/MBq.s) and SAFs (kg(-1)) were calculated for the dosimetry of the radiation from major source organs including spine, ribs, kidney and bladder into different target organs, as well as whole-body dosimetry from spine. The differences in dose estimates from the different phantoms compared to those from the phantom with a BMI of 26.8 kg/m(2), taken as the reference, were calculated for both gamma and beta radiations. The relative differences (RD) of the S-factors or SAFs from the values of the reference phantom were calculated. RDs greater than 10% and 100% were frequent in radiations to organs for photons and beta particles, respectively. The relative differences in whole-body SAFs from the reference phantom were 15.4%, 7%, 4.2%, -9.8% and -1.4% for BMIs of 18.6, 20.8, 22.1, 30.3 and 34.7 kg/m(2), respectively. The differences in whole-body S-factors for the phantoms with BMIs of 18.6, 20.8, 22.1, 30.3 and 34.7 kg/m(2) were 39.5%, 19.4%, 8.8%, -7.9% and -4.3%, respectively. The dosimetry of gamma photons and beta particles changes substantially with the use of phantoms with different BMIs. The change in S-factors is important for dose calculation and can change the prescribed therapeutic dose of (153)Sm-EDTMP. Thus a phantom with a BMI better matched to the patient is suggested for therapeutic purposes where dose estimates closer to those in the actual patient are required.
Hierarchical Bayesian modeling and Markov chain Monte Carlo sampling for tuning-curve analysis.
Cronin, Beau; Stevenson, Ian H; Sur, Mriganka; Körding, Konrad P
2010-01-01
A central theme of systems neuroscience is to characterize the tuning of neural responses to sensory stimuli or the production of movement. Statistically, we often want to estimate the parameters of the tuning curve, such as preferred direction, as well as the associated degree of uncertainty, characterized by error bars. Here we present a new sampling-based, Bayesian method that allows the estimation of tuning-curve parameters, the estimation of error bars, and hypothesis testing. This method also provides a useful way of visualizing which tuning curves are compatible with the recorded data. We demonstrate the utility of this approach using recordings of orientation and direction tuning in primary visual cortex, direction of motion tuning in primary motor cortex, and simulated data.
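A minimal version of such sampling-based tuning-curve inference (Python; random-walk Metropolis with a Poisson likelihood and an assumed circular-Gaussian tuning function, rather than the authors' exact model):

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic orientation-tuning data: spike counts at 8 stimulus directions
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
rate = lambda th, base, gain, pref: base + gain * np.exp(np.cos(th - pref) - 1.0)
counts = rng.poisson(rate(theta, 2.0, 10.0, 1.0))

def log_post(p):
    base, gain, pref = p
    if base <= 0 or gain <= 0:
        return -np.inf                      # flat priors on the positive axis
    lam = rate(theta, base, gain, pref)
    return np.sum(counts * np.log(lam) - lam)

p = np.array([1.0, 5.0, 0.0])
chain = []
for _ in range(50_000):
    q = p + rng.normal(0.0, [0.2, 0.5, 0.1])  # random-walk Metropolis proposal
    if np.log(rng.random()) < log_post(q) - log_post(p):
        p = q
    chain.append(p.copy())

chain = np.array(chain[10_000:])              # drop burn-in
pref = np.mod(chain[:, 2], 2.0 * np.pi)
print("preferred direction: %.2f rad, 95%% interval: %s"
      % (pref.mean(), np.percentile(pref, [2.5, 97.5])))
```

The posterior samples directly supply both point estimates and the "error bars" discussed in the abstract, without any Gaussian approximation.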
Mamonov, Artem B.; Bhatt, Divesh; Cashman, Derek J.; Ding, Ying; Zuckerman, Daniel M.
2009-01-01
We introduce “library based Monte Carlo” (LBMC) simulation, which performs Boltzmann sampling of molecular systems based on pre-calculated statistical libraries of molecular-fragment configurations, energies, and interactions. The library for each fragment can be Boltzmann distributed and thus account for all correlations internal to the fragment. LBMC can be applied to both atomistic and coarse-grained models, as we demonstrate in this “proof of principle” report. We first verify the approach in a toy model and in implicitly solvated poly-alanine systems. We next study five proteins, up to 309 residues in size. Based on atomistic equilibrium libraries of peptide-plane configurations, the proteins are modeled with fully atomistic backbones and simplified Gō-like interactions among residues. We show that full equilibrium sampling can be obtained in days to weeks on a single processor, suggesting that more accurate models are well within reach. For the future, LBMC provides a convenient platform for constructing adjustable or mixed-resolution models: the configurations of all atoms can be stored at no run-time cost, while an arbitrary subset of interactions is “turned on.” PMID:19594147
Kuruvilla Verghese
2002-04-05
This report summarizes the highlights of the research performed under the 1-year NEER grant from the Department of Energy. The primary goal of this study was to investigate the effects of certain design changes in the Fisher Senoscan mammography system, and of the degree of breast compression, on the discernibility of microcalcifications in calcification clusters often observed in mammograms with tumor lesions. The most important design change that one can contemplate in a digital mammography system to improve the resolution of calcifications is the reduction of the pixel dimensions of the digital detector. Breast compression is painful to the patient and is thought to be a deterrent to women seeking routine mammographic screening. Calcification clusters often serve as markers (indicators) of breast cancer.
ERIC Educational Resources Information Center
In'nami, Yo; Koizumi, Rie
2013-01-01
The importance of sample size, although widely discussed in the literature on structural equation modeling (SEM), has not been widely recognized among applied SEM researchers. To narrow this gap, we focus on second language testing and learning studies and examine the following: (a) Is the sample size sufficient in terms of precision and power of…
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
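The two initial-design variants compared in the study can be sketched directly (Python; the maximin minimum-distance score used below is one common space-filling criterion, applied here to the raw initial designs before any optimization step):

```python
import numpy as np

rng = np.random.default_rng(9)

def lhs(n, d, midpoint=True):
    """Latin hypercube design: one point per stratum in every dimension."""
    offset = 0.5 if midpoint else rng.random((n, d))
    return (np.argsort(rng.random((n, d)), axis=0) + offset) / n

def min_dist(x):
    """Maximin space-filling score: smallest pairwise distance."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return d[np.triu_indices(len(x), k=1)].min()

for label, mid in (("midpoint LHS", True), ("random LHS", False)):
    scores = [min_dist(lhs(20, 2, mid)) for _ in range(200)]
    print(label, "mean min-distance:", round(float(np.mean(scores)), 4))
```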
NASA Astrophysics Data System (ADS)
Han, Mancheon; Lee, Choong-Ki; Choi, Hyoung Joon
Hybridization-expansion continuous-time quantum Monte Carlo (CT-HYB) is a popular approach in real-material research because it can treat non-density-density-type interactions. In conventional CT-HYB, one measures Green's function and obtains the self-energy from the Dyson equation. Because this approach requires inverting statistically noisy data, the resulting self-energy is very sensitive to statistical noise, and the measurement is unreliable except at low frequencies. Such an error can be suppressed by measuring a special type of higher-order correlation function, which has been implemented for density-density-type interactions. With the help of the recently reported worm-sampling measurement, we developed an improved self-energy measurement scheme that can be applied to any type of interaction. As an illustration, we calculated the self-energy for the 3-orbital Hubbard-Kanamori-type Hamiltonian with our newly developed method. This work was supported by NRF of Korea (Grant No. 2011-0018306) and KISTI supercomputing center (Project No. KSC-2015-C3-039)
Petaccia, Mauricio; Segui, Silvina; Castellano, Gustavo
2015-06-01
Electron probe microanalysis (EPMA) is based on the comparison of characteristic intensities induced by monoenergetic electrons. When the electron beam ionizes inner atomic shells and these ionizations cause the emission of characteristic X-rays, secondary fluorescence can occur, originating from ionizations induced by X-ray photons produced by the primary electron interactions. As detectors are unable to distinguish the origin of these characteristic X-rays, Monte Carlo simulation of radiation transport becomes a decisive tool in the study of this fluorescence enhancement. In this work, characteristic secondary fluorescence enhancement in EPMA has been studied by using the splitting routines offered by PENELOPE 2008 as a variance-reduction alternative. This approach is controlled by a single parameter, NSPLIT, which represents the desired number of X-ray photon replicas. The dependence of the uncertainties associated with secondary intensities on NSPLIT was studied as a function of the accelerating voltage and the sample composition in a simple binary alloy in which this effect becomes relevant. The efficiencies achieved for the simulated secondary intensities improve remarkably as NSPLIT increases; although in most cases an NSPLIT value of 100 is sufficient, some less likely enhancements may require stronger splitting in order to increase the efficiency associated with the simulation of secondary intensities.
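The splitting mechanism itself is simple to sketch (Python; emission and detection probabilities are hypothetical, and this does not reproduce PENELOPE's actual splitting routines):

```python
import numpy as np

rng = np.random.default_rng(10)

P_EMIT = 0.05     # hypothetical probability a primary event emits an X-ray
P_DETECT = 1e-3   # hypothetical chance a secondary photon reaches the detector
NSPLIT = 100      # number of replicas per emitted photon

def detected(weight):
    """Score one replica: identical physics, reduced statistical weight."""
    return weight if rng.random() < P_DETECT else 0.0

score = 0.0
n_primaries = 100_000
for _ in range(n_primaries):
    if rng.random() < P_EMIT:
        # replace one photon of weight 1 with NSPLIT replicas of weight 1/NSPLIT
        score += sum(detected(1.0 / NSPLIT) for _ in range(NSPLIT))

print("detected intensity per primary:", score / n_primaries)
```

Because each replica carries weight 1/NSPLIT, the estimator stays unbiased while the rare detection event is sampled far more often.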
Ledra, Mohammed; El Hdiy, Abdelillah
2015-09-21
A Monte Carlo simulation algorithm is used to study electron-beam-induced current in an intrinsic silicon sample, which contains at its surface a linear arrangement of uncapped nanocrystals positioned in the irradiation trajectory around the hemispherical collecting nano-contact. The induced current is generated with an electron-beam energy of 5 keV in a perpendicular configuration. Each nanocrystal is considered a recombination center, and the surface recombination velocity at the free surface is taken to be zero. It is shown that the induced current is affected by the distance separating each nanocrystal from the nano-contact. An increase in this separation distance translates to a decrease in nanocrystal density and an increase in the minority-carrier diffusion length. The results reveal a threshold separation distance beyond which the nanocrystals no longer affect the collection efficiency and the diffusion length reaches the value obtained in the absence of nanocrystals. A cross-section characterizing the ability of the nano-contact to trap carriers was determined.
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2007-12-01
Markov chain Monte Carlo (MCMC) methods are widely used in fields ranging from physics and chemistry to finance, economics, and statistical inference for estimating the average properties of complex systems. The convergence rate of MCMC schemes is often observed, however, to be disturbingly low, limiting their practical use in many applications. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves. Here we show that significant improvements to the efficiency of MCMC algorithms can be made by using a self-adaptive Differential Evolution search strategy within a population-based evolutionary framework. This scheme differs fundamentally from existing MCMC algorithms in that trial jumps are simply a fixed multiple of the difference of randomly chosen members of the population, combined with genetic operators that are adaptively updated during the search. In addition, the algorithm includes randomized subspace sampling to further improve convergence and acceptance rate. Detailed balance and ergodicity of the algorithm are proved, and hydrologic examples show that the proposed method significantly enhances the efficiency and applicability of MCMC simulations for complex, multi-modal search problems.
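The core differential-evolution proposal can be sketched as follows (Python; the basic DE-MC kernel only, without the self-adaptive updating and randomized subspace sampling described above, on an assumed bimodal target):

```python
import numpy as np

rng = np.random.default_rng(11)

def log_target(x):
    """Bimodal example target: mixture of two well-separated Gaussians."""
    return np.logaddexp(-0.5 * np.sum((x - 3.0) ** 2),
                        -0.5 * np.sum((x + 3.0) ** 2))

npop, d = 10, 2
X = rng.normal(0.0, 5.0, (npop, d))      # population of parallel chains
gamma = 2.38 / np.sqrt(2 * d)            # standard DE-MC jump scale

for _ in range(5_000):
    for i in range(npop):
        a, b = rng.choice([j for j in range(npop) if j != i], 2, replace=False)
        prop = X[i] + gamma * (X[a] - X[b]) + rng.normal(0.0, 1e-4, d)
        if np.log(rng.random()) < log_target(prop) - log_target(X[i]):
            X[i] = prop

print("final population mean:", X.mean(axis=0))
```

Because the jump is built from differences of population members, its scale and orientation automatically adapt to the target's covariance, which is what lifts the acceptance rate relative to a fixed proposal.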
An Overview of Importance Splitting for Rare Event Simulation
ERIC Educational Resources Information Center
Morio, Jerome; Pastel, Rudy; Le Gland, Francois
2010-01-01
Monte Carlo simulations are a classical tool to analyse physical systems. When unlikely events are to be simulated, the importance sampling technique is often used instead of Monte Carlo. Importance sampling has some drawbacks when the problem dimensionality is high or when the optimal importance sampling density is complex to obtain. In this…
40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention Requirements for Refiners and Importers § 80.335 What gasoline...
40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention Requirements for Refiners and Importers § 80.335 What gasoline...
40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention Requirements for Refiners and Importers § 80.335 What gasoline...
40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention Requirements for Refiners and Importers § 80.335 What gasoline...
40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention Requirements for Refiners and Importers § 80.335 What gasoline...
Sampling High-Altitude and Stratified Mating Flights of Red Imported Fire Ant
Technology Transfer Automated Retrieval System (TEKTRAN)
With the exception of an airplane equipped with nets, no method has been developed that successfully samples red imported fire ant, Solenopsis invicta Buren, sexuals in mating/dispersal flights throughout their potential altitudinal trajectories. We developed and tested a method for sampling queens ...
NASA Astrophysics Data System (ADS)
Dou, Tai H.; Min, Yugang; Neylon, John; Thomas, David; Kupelian, Patrick; Santhanam, Anand P.
2016-03-01
Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal input registration parameter set is critical to achieve the best registration performance with a specific algorithm. In this paper, we investigated a parameter optimization strategy for optical-flow based DIR of the 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling algorithm (FSA-AMC) was investigated for solving the complex non-convex parameter optimization problem. The metric for registration error for a given parameter set was computed using landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time in the parameter optimization process, a GPU based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses on the parameter optimization for the DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets. Results showed that the proposed method efficiently estimated the optimum parameters for optical-flow and closely matched the best registration parameters obtained using an exhaustive parameter search method.
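A schematic of simulated-annealing parameter search of this kind (Python; the error surface is a hypothetical stand-in for a landmark-based mTRE evaluation, and the adaptive Monte Carlo sampling of FSA-AMC is reduced to a plain Gaussian move):

```python
import numpy as np

rng = np.random.default_rng(12)

def mtre(params):
    """Hypothetical stand-in for a landmark-based registration error (mTRE)."""
    smooth, step = params
    return (smooth - 0.7) ** 2 + (step - 0.3) ** 2 + 0.01 * rng.standard_normal()

p = np.array([0.5, 0.5])                  # normalized optical-flow parameters
cur = mtre(p)
best, best_err = p.copy(), cur
for k in range(2_000):
    T = 1.0 / (1 + k)                     # fast-annealing-style cooling schedule
    q = np.clip(p + rng.normal(0.0, 0.1, 2), 0.0, 1.0)  # Monte Carlo move
    err = mtre(q)
    if err < cur or rng.random() < np.exp(-(err - cur) / T):
        p, cur = q, err
        if cur < best_err:
            best, best_err = p.copy(), cur

print("estimated optimum parameters:", best, " mTRE:", best_err)
```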
Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks
NASA Astrophysics Data System (ADS)
Sun, Wei; Chang, K. C.
2005-05-01
Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under a given time constraint. Several simulation methods are currently available. They include logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods; then we propose an improved importance sampling algorithm, the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
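For reference, the likelihood-weighting baseline on a minimal hybrid network can be sketched as follows (Python; LGIS itself would additionally learn a linear Gaussian importance function from previous samples, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(13)

# minimal hybrid network: discrete A -> continuous X, with X | A ~ N(mu[A], 1)
p_a = np.array([0.7, 0.3])
mu = np.array([0.0, 4.0])
x_obs = 2.5                       # evidence on the continuous child

def gauss(x, m, s=1.0):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# likelihood weighting: sample from the prior, weight by the evidence likelihood
n = 100_000
a = rng.choice(2, size=n, p=p_a)
w = gauss(x_obs, mu[a])
post = np.bincount(a, weights=w) / w.sum()
print("IS estimate of P(A | X = 2.5):", post)

exact = p_a * gauss(x_obs, mu)    # closed form for this two-node network
print("exact posterior:           ", exact / exact.sum())
```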
Coalescent: an open-science framework for importance sampling in coalescent theory.
Tewari, Susanta; Spouge, John L
2015-01-01
Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time following infinite sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only
NASA Astrophysics Data System (ADS)
Barclay, Thomas; Quintana, Elisa; Adams, Fred; Ciardi, David; Huber, Daniel; Foreman-Mackey, Daniel; Montet, Benjamin Tyler; Caldwell, Douglas
2015-08-01
Kepler-296 is a binary star system with two M-dwarf components separated by 0.2 arcsec. Five transiting planets have been confirmed to be associated with the Kepler-296 system; given the evidence to date, however, the planets could in principle orbit either star. This ambiguity has made it difficult to constrain both the orbital and physical properties of the planets. Using both statistical and analytical arguments, this paper shows that all five planets are highly likely to orbit the primary star in this system. We performed a Markov-Chain Monte Carlo simulation using a five transiting planet model, leaving the stellar density and dilution with uniform priors. Using importance sampling, we compared the model probabilities under the priors of the planets orbiting either the brighter or the fainter component of the binary. A model where the planets orbit the brighter component, Kepler-296A, is strongly preferred by the data. Combined with our assertion that all five planets orbit the same star, the two outer planets in the system, Kepler-296 Ae and Kepler-296 Af, have radii of 1.53 ± 0.26 and 1.80 ± 0.31 R⊕, respectively, and receive incident stellar fluxes of 1.40 ± 0.23 and 0.62 ± 0.10 times the incident flux the Earth receives from the Sun. This level of irradiation places both planets within or close to the circumstellar habitable zone of their parent star.
Smyth, Nina; Thorn, Lisa; Hucklebridge, Frank; Evans, Phil; Clow, Angela
2015-08-01
Indices of post-awakening cortisol secretion (PACS) include the rise in cortisol (cortisol awakening response: CAR) and overall cortisol concentrations (e.g., area under the curve with reference to ground: AUCg) in the first 30-45 min. Both are commonly investigated in relation to psychosocial variables. Although sampling within the domestic setting is ecologically valid, participant non-adherence to the required timing protocol results in erroneous measurement of PACS, and this may explain discrepancies in the literature linking these measures to trait well-being (TWB). We have previously shown that delays of little over 5 min between awakening and the start of sampling result in erroneous CAR estimates. In this study, we report for the first time on the negative impact of sample timing inaccuracy (verified by electronic monitoring) on the efficacy to detect significant relationships between PACS and TWB when measured in the domestic setting. Healthy females (N=49, 20.5±2.8 years) selected for differences in TWB collected saliva samples (S1-4) on 4 days at 0, 15, 30, and 45 min post-awakening to determine PACS. Adherence to the sampling protocol was objectively monitored using a combination of electronic estimates of awakening (actigraphy) and sampling times (track caps). Relationships between PACS and TWB were found to depend on sample timing accuracy. Lower TWB was associated with higher post-awakening cortisol AUCg in proportion to the mean sample timing accuracy (p<.005). There was no association between TWB and the CAR, even taking into account sample timing accuracy. These results highlight the importance of careful electronic monitoring of participant adherence for measurement of PACS in the domestic setting. Mean sample timing inaccuracy, mainly associated with delays of >5 min between awakening and collection of sample 1 (median = 8 min delay), negatively impacts the sensitivity of analyses to detect associations between PACS and TWB.
Huang, Wei; Lin, Zhixiong; van Gunsteren, Wilfred F
2014-06-19
The predictive power of biomolecular simulation critically depends on the quality of the force field or molecular model used and on the extent of conformational sampling that can be achieved. Both issues are addressed. First, it is shown that widely used force fields for the simulation of proteins in aqueous solution appear to have rather different propensities to stabilize or destabilize α-, π-, and 3(10)-helical structures, an important consideration given the omnipresence of such secondary structure in proteins. Second, the relative stability of secondary-structure elements in proteins can only be determined computationally through so-called free-energy calculations, the accuracy of which critically depends on the extent of configurational sampling. It is shown that enveloping distribution sampling is a very efficient method for extensively sampling different parts of configurational space.
NASA Astrophysics Data System (ADS)
Rees, L. B.
1990-12-01
It has long been recognized that PIXE (particle-induced X-ray emission) spectra from thick targets need to be modified with respect to the thin target spectra used for calibration. This is due to the degradation of the energy of the protons entering the sample and the attenuation of the X-rays emerging from the sample. Thick-target corrections typically assume the target to be composed of a layer of sample material having uniform thickness. Because many environmental samples, however, are composed of particles averaging several μm in diameter, the usual thick-target corrections are inappropriate. It has previously been shown that size corrections for spherical particles of homogeneous composition can be significant. In the current work a method is presented which employs Monte Carlo techniques to calculate X-ray intensity corrections for particles of arbitrary shape, composition, orientation and size distribution. Empirical equations for proton stopping power and X-ray production cross sections are used in conjunction with X-ray attenuation coefficients to calculate the intensity of the emergent beam. The uncertainty associated with the Monte Carlo calculation is also explored. It is shown that the spherical particle corrections are approximately correct for particles of near-spherical shape; however, they are inadequate for highly elongated or flattened particles or for particles of nonuniform composition.
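The geometric core of such a particle-size correction can be sketched for a homogeneous sphere (Python; attenuation coefficient, radius, and detector direction are illustrative assumptions, and proton energy degradation along the track is ignored here, whereas the full method folds in stopping power and production cross sections):

```python
import numpy as np

rng = np.random.default_rng(14)

MU = 0.05    # hypothetical X-ray attenuation coefficient in the particle, 1/um
R = 5.0      # particle radius, um
DET = np.array([0.0, np.sin(np.radians(45.0)), np.cos(np.radians(45.0))])

def points_in_sphere(n):
    """Uniform points inside the sphere, standing in for ionization sites."""
    p = rng.normal(size=(n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    return R * p * rng.random(n)[:, None] ** (1.0 / 3.0)

p = points_in_sphere(200_000)
b = p @ DET                                   # projection on detector direction
t = -b + np.sqrt(R * R - np.sum(p * p, axis=1) + b * b)  # exit path length
print("mean X-ray escape factor:", np.exp(-MU * t).mean())
```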
An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions
Li, Weixuan; Lin, Guang
2015-08-01
Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computational-demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with limited number of forward simulations.
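A stripped-down version of the adaptive GM importance sampler (Python; a 1-D toy posterior, hard-assignment weighted updates in place of a full EM fit, and no polynomial chaos surrogate):

```python
import numpy as np

rng = np.random.default_rng(15)

def log_post(x):
    """Bimodal 1-D 'posterior' standing in for an expensive inverse problem."""
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

# two-component Gaussian-mixture proposal, adapted over a few IS rounds
means = np.array([-1.0, 1.0])
sigmas = np.array([3.0, 3.0])
weights = np.array([0.5, 0.5])

for _ in range(10):
    comp = rng.choice(2, size=2_000, p=weights)
    x = rng.normal(means[comp], sigmas[comp])
    log_q = np.logaddexp(
        np.log(weights[0]) - 0.5 * ((x - means[0]) / sigmas[0]) ** 2 - np.log(sigmas[0]),
        np.log(weights[1]) - 0.5 * ((x - means[1]) / sigmas[1]) ** 2 - np.log(sigmas[1]))
    w = np.exp(log_post(x) - log_q)
    w /= w.sum()                       # self-normalized importance weights
    for k in (0, 1):                   # hard-assignment weighted update
        r = w * (comp == k)
        tot = max(r.sum(), 1e-12)
        means[k] = (r * x).sum() / tot
        sigmas[k] = np.sqrt((r * (x - means[k]) ** 2).sum() / tot)
        weights[k] = r.sum()
    weights /= weights.sum()

print("adapted mixture means:", means)  # expected to migrate toward -4 and +4
```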
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
Catching Stardust and Bringing it Home: The Astronomical Importance of Sample Return
NASA Astrophysics Data System (ADS)
Brownlee, D.
2002-12-01
orbit of Mars will provide important insight into the materials, environments and processes that occurred from the central regions to the outer fringes of the solar nebula. One of the most exciting aspects of the January 2006 return of comet samples will be the synergistic linking of data on real comet and interstellar dust samples with the vast amount of astronomical data on these materials and on analogous particles that orbit other stars. Stardust is a NASA Discovery mission that has successfully traveled over 2.5 billion kilometers.
Importance sampling variance reduction for the Fokker-Planck rarefied gas particle method
NASA Astrophysics Data System (ADS)
Collyer, B. S.; Connaughton, C.; Lockerby, D. A.
2016-11-01
The Fokker-Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.
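The essence of this variance reduction can be shown on a toy estimator of a small mean velocity (Python; the gas is reduced to a 1-D Maxwellian with unit thermal speed, and the importance-sampled estimator draws from equilibrium and scores only the weighted deviation, so its noise scales with the flow speed):

```python
import numpy as np

rng = np.random.default_rng(16)

u = 0.01       # characteristic flow speed << thermal speed (set to 1)
n = 100_000

# direct estimator: sample molecular velocities from the drifting Maxwellian
plain = rng.normal(u, 1.0, n)

# importance-sampled estimator: draw from the equilibrium Maxwellian and
# carry weights w = f_flow(v) / f_eq(v); since E_eq[v] = 0, scoring (w - 1) v
# measures only the deviation from equilibrium
v = rng.normal(0.0, 1.0, n)
w = np.exp(u * v - 0.5 * u**2)        # ratio of the two Gaussian densities
dev = (w - 1.0) * v

print("direct:   %.5f +/- %.5f" % (plain.mean(), plain.std() / np.sqrt(n)))
print("weighted: %.5f +/- %.5f" % (dev.mean(), dev.std() / np.sqrt(n)))
```

For u = 0.01 the weighted estimator's standard error is roughly two orders of magnitude below the direct one, mirroring the low-speed regime described in the abstract.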
Fragoso, Zachary L; Holcombe, Kyla J; McCluney, Courtney L; Fisher, Gwenith G; McGonagle, Alyssa K; Friebe, Susan J
2016-06-09
This study's purpose was twofold: first, to examine the relative importance of job demands and resources as predictors of burnout and engagement, and second, to examine the relative importance of engagement and burnout as predictors of health, depressive symptoms, work ability, organizational commitment, and turnover intentions in two samples of health care workers. Nurse leaders (n = 162) and licensed emergency medical technicians (EMTs; n = 102) completed surveys. In both samples, job demands predicted burnout more strongly than job resources did, and job resources predicted engagement more strongly than job demands did. Engagement held more weight than burnout for predicting commitment, and burnout held more weight for predicting health outcomes, depressive symptoms, and work ability. The results have implications for the design, evaluation, and effectiveness of workplace interventions to reduce burnout and improve engagement among health care workers. Actionable recommendations for increasing engagement and decreasing burnout in health care organizations are provided.
ROMERO,VICENTE J.
2000-05-04
In order to devise an algorithm for autonomously terminating Monte Carlo sampling when sufficiently small and reliable confidence intervals (CI) are achieved on calculated probabilities, the behavior of CI estimators must be characterized. This knowledge is also required when comparing the accuracy of other probability estimation techniques to Monte Carlo results. Based on 100 trials in a hypothesis test, estimated 95% CI from classical approximate CI theory are empirically examined to determine whether they behave as true 95% CI over spectra of probabilities (population proportions) ranging from 0.001 to 0.99 in a test problem. Tests are conducted for population sizes of 500 and 10,000 samples where applicable. Significant differences between true and estimated 95% CI are found to occur at probabilities between 0.1 and 0.9, such that estimated 95% CI can be rejected as not being true 95% CI at less than a 40% chance of incorrect rejection. With regard to Latin hypercube sampling (LHS), though no general theory has been verified for accurately estimating LHS CI, recent numerical experiments on the test problem have found LHS to be conservatively over an order of magnitude more efficient than simple random sampling (SRS) for similarly sized CI on probabilities ranging between 0.25 and 0.75. The efficiency advantage of LHS vanishes, however, as the probability extremes of 0 and 1 are approached.
Salter, Tara La Roche; Bunch, Josephine; Gilmore, Ian S
2014-09-16
Many different types of samples have been analyzed in the literature using plasma-based ambient mass spectrometry sources; however, comprehensive studies of the important parameters for analysis are only just beginning. Here, we investigate the effect of the sample form and surface temperature on the signal intensities in plasma-assisted desorption ionization (PADI). The form of the sample is very important, with powders of all volatilities effectively analyzed. However, for the analysis of thin films at room temperature and using a low plasma power, a vapor pressure of greater than 10⁻⁴ Pa is required to achieve a sufficiently good quality spectrum. Using thermal desorption, we are able to increase the signal intensity of less volatile materials with vapor pressures less than 10⁻⁴ Pa, in thin film form, by between 4 and 7 orders of magnitude. This is achieved by increasing the temperature of the sample up to a maximum of 200 °C. Thermal desorption can also increase the signal intensity for the analysis of powders.
Silvia, Paul J; Kwapil, Thomas R; Walsh, Molly A; Myin-Germeys, Inez
2014-03-01
Experience-sampling research involves trade-offs between the number of questions asked per signal, the number of signals per day, and the number of days. By combining planned missing-data designs and multilevel latent variable modeling, we show how to reduce the items per signal without reducing the number of items. After illustrating different designs using real data, we present two Monte Carlo studies that explored the performance of planned missing-data designs across different within-person and between-person sample sizes and across different patterns of response rates. The missing-data designs yielded unbiased parameter estimates but slightly higher standard errors. With realistic sample sizes, even designs with extensive missingness performed well, so these methods are promising additions to an experience-sampler's toolbox.
The importance of a priori sample size estimation in strength and conditioning research.
Beck, Travis W
2013-08-01
The statistical power, or sensitivity of an experiment, is defined as the probability of rejecting a false null hypothesis. Only 3 factors can affect statistical power: (a) the significance level (α), (b) the magnitude or size of the treatment effect (effect size), and (c) the sample size (n). Of these 3 factors, only the sample size can be manipulated by the investigator because the significance level is usually selected before the study, and the effect size is determined by the effectiveness of the treatment. Thus, selection of an appropriate sample size is one of the most important components of research design but is often misunderstood by beginning researchers. The purpose of this tutorial is to describe procedures for estimating sample size for a variety of different experimental designs that are common in strength and conditioning research. Emphasis is placed on selecting an appropriate effect size because this step fully determines sample size when power and the significance level are fixed. There are many different software packages that can be used for sample size estimation. However, I chose to describe the procedures for the G*Power software package (version 3.1.4) because this software is freely downloadable and capable of estimating sample size for many of the different statistical tests used in strength and conditioning research. Furthermore, G*Power provides a number of different auxiliary features that can be useful for researchers when designing studies. It is my hope that the procedures described in this article will be beneficial for researchers in the field of strength and conditioning.
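The tutorial itself uses the G*Power GUI; as a scripted stand-in (our substitution, not the article's tool), the same a priori calculation for an independent-samples t-test can be done with Python's statsmodels:

```python
from statsmodels.stats.power import TTestIndPower

# A priori sample size per group: two-sided independent-samples t-test,
# alpha = .05, power = .80, medium standardized effect (Cohen's d = 0.5).
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.8, alternative='two-sided')
print(round(n_per_group))   # ~64 per group, matching standard power tables
```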
NASA Technical Reports Server (NTRS)
Welzenbach, L. C.; McCoy, T. J.; Glavin, D. P.; Dworkin, J. P.; Abell, P. A.
2012-01-01
turn led to a new wave of Mars exploration that ultimately could lead to sample return focused on evidence for past or present life. This partnership between collections and missions will be increasingly important in the coming decades as we discover new questions to be addressed and identify targets for both robotic and human exploration. Nowhere is this more true than in the ultimate search for the abiotic and biotic processes that produced life. Existing collections also provide the essential materials for developing and testing new analytical schemes to detect the rare markers of life and distinguish them from abiotic processes. Large collections of meteorites, and the new types being identified within these collections, which come to us at a fraction of the cost of a sample return mission, will continue to shape the objectives of future missions and provide new ways of interpreting returned samples.
Sampling high-altitude and stratified mating flights of red imported fire ant.
Fritz, Gary N; Fritz, Ann H; Vander Meer, Robert K
2011-05-01
With the exception of an airplane equipped with nets, no method has been developed that successfully samples red imported fire ant, Solenopsis invicta Buren, sexuals in mating/dispersal flights throughout their potential altitudinal trajectories. We developed and tested a method for sampling queens and males during mating flights at altitudinal intervals reaching as high as ~140 m. Our trapping system uses an electric winch and a 1.2-m spindle bolted to a swiveling platform. The winch dispenses up to 183 m of Kevlar-core nylon rope and the spindle stores 10 panels (0.9 by 4.6 m each) of nylon tulle impregnated with Tangle-Trap. The panels can be attached to the rope at various intervals and hoisted into the air by using a 3-m-diameter, helium-filled balloon. Raising or lowering all 10 panels takes approximately 15-20 min. This trap also should be useful for altitudinal sampling of other insects of medical importance.
Tang, Ke; Zhang, Jinfeng; Liang, Jie
2017-01-10
Antibodies recognize antigens through the complementarity-determining regions (CDRs) formed by six hypervariable loops that are crucial for the diversity of antigen specificities. Among the six CDR loops, the H3 loop is the most challenging to predict because of its much higher variation in sequence length and identity, resulting in a much larger and more complex structural space compared to the other five loops. We developed a novel method based on a chain-growth sequential Monte Carlo method, called distance-guided sequential chain-growth Monte Carlo for H3 loops (DiSGro-H3). The new method samples protein chains in both forward and backward directions. It can efficiently generate low-energy, near-native H3 loop structures using the conformation types predicted from the sequences of H3 loops. DiSGro-H3 performs significantly better than another ab initio method, RosettaAntibody, in both sampling and prediction, while taking less computational time. It performs comparably to template-based methods. As an ab initio method, DiSGro-H3 offers satisfactory accuracy while being able to predict any H3 loop without templates.
Pavlou, Andrew T.; Ji, Wei; Brown, Forrest B.
2016-01-23
Here, a proper treatment of thermal neutron scattering requires accounting for chemical binding through a scattering law S(α,β,T). Monte Carlo codes sample the secondary neutron energy and angle after a thermal scattering event from probability tables generated from S(α,β,T) tables at discrete temperatures, requiring a large amount of data for multiscale and multiphysics problems with detailed temperature gradients. We have previously developed a method to handle this temperature dependence on-the-fly during the Monte Carlo random walk using polynomial expansions in 1/T to directly sample the secondary energy and angle. In this paper, the on-the-fly method is implemented into MCNP6 and tested in both graphite-moderated and light water-moderated systems. The on-the-fly method is compared with the thermal ACE libraries that come standard with MCNP6, yielding good agreement with integral reactor quantities like k-eigenvalue and differential quantities like single-scatter secondary energy and angle distributions. The simulation runtimes are comparable between the two methods (on the order of 5–15% difference for the problems tested) and the on-the-fly fit coefficients only require 5–15 MB of total data storage.
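As a toy illustration of the fit-and-evaluate pattern (not the MCNP6 implementation; the numbers below are invented stand-ins for one S(α,β,T)-derived quantity):

```python
import numpy as np

# Hypothetical tabulated values of one sampled quantity at discrete temperatures,
# standing in for the many quantities derived from S(alpha, beta, T) tables.
T = np.array([296.0, 400.0, 500.0, 600.0, 800.0, 1000.0])  # kelvin
q = np.array([1.00, 1.21, 1.38, 1.52, 1.77, 1.98])         # made-up values

coeffs = np.polyfit(1.0 / T, q, deg=3)   # offline fit: polynomial in 1/T

def q_on_the_fly(temperature):
    """Evaluate the stored fit at an arbitrary temperature during the random walk."""
    return np.polyval(coeffs, 1.0 / temperature)

print(q_on_the_fly(650.0))   # no 650 K table is ever generated or stored
```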
Baba, Justin S; Koju, Vijay; John, Dwayne O
2016-01-01
The modulation of the state of polarization of photons due to scatter generates an associated geometric phase that is being investigated as a means for decreasing the degree of uncertainty in back-projecting the paths traversed by photons detected in backscattered geometry. In our previous work, we established that polarimetrically detected Berry phase correlates with the mean photon penetration depth of the backscattered photons collected for image formation. In this work, we report on the impact of state-of-linear-polarization (SOLP) filtering on both the magnitude and population distributions of image-forming detected photons as a function of the absorption coefficient of the scattering sample. The results, based on a Berry-phase-tracking implementation of a polarized Monte Carlo code, indicate that sample absorption plays a significant role in the mean depth attained by the image-forming backscattered detected photons.
Baccouche, S; Al-Azmi, D; Karunakara, N; Trabelsi, A
2012-01-01
Gamma-ray measurements in terrestrial/environmental samples require the use of highly efficient detectors because of the low radionuclide activity concentrations in the samples; thus scintillators are suitable for this purpose. Two scintillation detectors of identical size, CsI(Tl) and NaI(Tl), were studied in this work for a performance comparison in the measurement of terrestrial samples. This work describes a Monte Carlo method for making the full-energy efficiency calibration curves for both detectors using gamma-ray energies associated with the decay of naturally occurring radionuclides: ¹³⁷Cs (661 keV), ⁴⁰K (1460 keV), ²³⁸U (²¹⁴Bi, 1764 keV) and ²³²Th (²⁰⁸Tl, 2614 keV), which are found in terrestrial samples. The magnitude of the coincidence summing effect occurring for the 2614 keV emission of ²⁰⁸Tl is assessed by simulation. The method provides an efficient tool for making the full-energy efficiency calibration curve for scintillation detectors for any sample geometry and volume, in order to determine accurate activity concentrations in terrestrial samples.
Egger, C; Maurer, M
2015-04-15
Urban drainage design relying on observed precipitation series neglects the uncertainties associated with current and indeed future climate variability. Urban drainage design is further affected by the large stochastic variability of precipitation extremes and sampling errors arising from the short observation periods of extreme precipitation. Stochastic downscaling addresses anthropogenic climate impact by allowing relevant precipitation characteristics to be derived from local observations and an ensemble of climate models. This multi-climate model approach seeks to reflect the uncertainties in the data due to structural errors of the climate models. An ensemble of outcomes from stochastic downscaling allows for addressing the sampling uncertainty. These uncertainties are clearly reflected in the precipitation-runoff predictions of three urban drainage systems. They were mostly due to the sampling uncertainty. The contribution of climate model uncertainty was found to be of minor importance. Under the applied greenhouse gas emission scenario (A1B) and within the period 2036-2065, the potential for urban flooding in our Swiss case study is slightly reduced on average compared to the reference period 1981-2010. Scenario planning was applied to consider urban development associated with future socio-economic factors affecting urban drainage. The impact of scenario uncertainty was to a large extent found to be case-specific, thus emphasizing the need for scenario planning in every individual case. The results represent a valuable basis for discussions of new drainage design standards aiming specifically to include considerations of uncertainty.
Biau, David Jean; Kernéis, Solen; Porcher, Raphaël
2008-09-01
The increasing volume of research by the medical community often leads to increasing numbers of contradictory findings and conclusions. Although the differences observed may represent true differences, the results also may differ because of sampling variability, as all studies are performed on a limited number of specimens or patients. When planning a study reporting differences among groups of patients or describing some variable in a single group, sample size should be considered because it allows the researcher to control for the risk of reporting a false-negative finding (Type II error) or to estimate the precision his or her experiment will yield. Equally important, readers of medical journals should understand sample size because such understanding is essential to interpret the relevance of a finding with regard to their own patients. At the time of planning, the investigator must establish (1) a justifiable level of statistical significance, (2) the chances of detecting a difference of given magnitude between the groups compared, i.e., the power, (3) this targeted difference (i.e., the effect size), and (4) the variability of the data (for quantitative data). We believe correct planning of experiments is an ethical issue of concern to the entire community.
Blood Sampling Seasonality as an Important Preanalytical Factor for Assessment of Vitamin D Status
Bonelli, Patrizia; Buonocore, Ruggero; Aloe, Rosalia
2016-01-01
Background: The measurement of vitamin D is now commonplace for preventing osteoporosis and restoring an appropriate concentration that would be effective to counteract the occurrence of other human disorders. The aim of this study was to establish whether blood sampling seasonality may influence total vitamin D concentration in a general population of Italian unselected outpatients. Methods: We performed a retrospective search in the laboratory information system of the University Hospital of Parma (Italy, temperate climate) to identify the values of total serum vitamin D (25-hydroxyvitamin D) measured in outpatients aged 18 years and older who were referred for routine health check-up during the entire year 2014. Results: The study population consisted of 11,150 outpatients (median age 62 years; 8592 women and 2558 men). The concentration of vitamin D was consistently lower in samples collected in Winter than in the other three seasons. The frequency of subjects with vitamin D deficiency was approximately double in samples drawn in Winter and Spring compared with Summer and Autumn. In the multivariate analysis, the concentration of total vitamin D was found to be independently associated with sex and season of blood testing, but not with the age of the patients. Conclusions: According to these findings, blood sampling seasonality should be regarded as an important preanalytical factor in vitamin D assessment. It is also reasonable to suggest that the amount of total vitamin D synthesized during the summer should be high enough to maintain levels > 50 nmol/L throughout the remaining part of the year. PMID:28356869
Carvalho, Ana Maria; Frazão-Moreira, Amélia
2011-11-23
Many European protected areas were legally created to preserve and maintain biological diversity, unique natural features and associated cultural heritage. Built over centuries as a result of geographical and historical factors interacting with human activity, these territories are reservoirs of resources, practices and knowledge that have been the essential basis of their creation. Under social and economic transformations, several components of such areas tend to be affected and their protection status endangered. Carrying out ethnobotanical surveys and extensive field work using anthropological methodologies, particularly with key informants, we report changes observed and perceived in two natural parks in Trás-os-Montes, Portugal, that affect local plant-use systems and consequently local knowledge. By means of informants' testimonies and of our own observation and experience, we discuss the importance of local knowledge and of local communities' participation in protected area design, management and maintenance. We confirm that local knowledge provides new insights and opportunities for sustainable and multipurpose use of resources and offers contemporary strategies for preserving cultural and ecological diversity, which are the main purposes and challenges of protected areas. To be successful, it is absolutely necessary to make people active participants, not simply integrate and validate their knowledge and expertise. Local knowledge is also an interesting tool for educational and promotional programs.
Reconstruction of Monte Carlo replicas from Hessian parton distributions
NASA Astrophysics Data System (ADS)
Hou, Tie-Jiun; Gao, Jun; Huston, Joey; Nadolsky, Pavel; Schmidt, Carl; Stump, Daniel; Wang, Bo-Ting; Xie, Ke Ping; Dulat, Sayipjamal; Pumplin, Jon; Yuan, C. P.
2017-03-01
We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte-Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte-Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte-Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte-Carlo representation are derived. A correction is proposed to address a bias in asymmetric uncertainties introduced by the Taylor series approximation. A numerical program is made available for conversion of Hessian PDFs into Monte-Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
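One common prescription for such a conversion, sketched below with hypothetical array shapes (symmetric Gaussian sampling only; the paper's program additionally handles asymmetric errors, positivity, and log-normal sampling):

```python
import numpy as np

def hessian_to_replicas(f0, f_plus, f_minus, n_rep=1000, seed=1):
    """Build Monte-Carlo replicas from Hessian eigenvector PDF sets.

    f0      : central PDF values, shape (npoints,)
    f_plus  : plus-direction eigenvector sets, shape (neig, npoints)
    f_minus : minus-direction eigenvector sets, shape (neig, npoints)
    Replica k: f0 + sum_i r_ik * (f_plus_i - f_minus_i) / 2, with r ~ N(0, 1).
    """
    rng = np.random.default_rng(seed)
    r = rng.standard_normal((n_rep, f_plus.shape[0]))  # one Gaussian per eigenvector
    return f0 + r @ (f_plus - f_minus) / 2.0           # shape (n_rep, npoints)
```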
Denning, Elizabeth J.; Woolf, Thomas B.
2009-01-01
The growing dataset of K+ channel x-ray structures provides an excellent opportunity to begin a detailed molecular understanding of voltage-dependent gating. These structures, while differing in sequence, represent either a stable open or closed state. However, an understanding of the molecular details of gating will require models for the transitions and experimentally testable predictions for the gating transition. To explore these ideas, we apply Dynamic Importance Sampling (DIMS) to a set of homology models for the molecular conformations of K+ channels for four different sets of sequences and eight different states. In our results, we highlight the importance of particular residues upstream from the PVP region to the gating transition. This supports growing evidence that the PVP region is important for influencing the flexibility of the S6 helix and thus the opening of the gating domain. The results further suggest how gating on the molecular level depends on intra-subunit motions to influence the cooperative behavior of all four subunits of the K+ channel. We hypothesize that the gating process occurs in steps: first sidechain movement, then inter-subunit S5-S6 motions, and lastly the large-scale domain rearrangements. PMID:19950367
2009-08-01
The n(MC)² method [J. Chem. Phys. 130, 164104 (2009)] is applied to fluid N₂. In this implementation of n(MC)², isothermal-isobaric (NPT) ensemble sampling is performed on the basis of a pair potential, where W_k is a thermodynamic function appropriate to the ensemble being sampled.
Shreif, Zeina; Striegel, Deborah A; Periwal, Vipul
2015-09-07
A nucleotide sequence 35 base pairs long can take 4³⁵ = 1,180,591,620,717,411,303,424 possible values. Protein binding microarrays, an example of systems biology datasets, contain activity data from about 40,000 such sequences. The discrepancy between the number of possible configurations and the available activities is enormous. Thus, although systems biology datasets are large in absolute terms, they oftentimes require methods developed for rare events, owing to the combinatorial increase in the number of possible configurations of biological systems. A plethora of techniques for handling large datasets, such as Empirical Bayes, or rare events, such as importance sampling, have been developed in the literature, but these cannot always be simultaneously utilized. Here we introduce a principled approach to Empirical Bayes based on importance sampling, information theory, and theoretical physics in the general context of sequence phenotype model induction. We present the analytical calculations that underlie our approach. We demonstrate the computational efficiency of the approach on concrete examples, and demonstrate its efficacy by applying the theory to publicly available protein binding microarray transcription factor datasets and to data on synthetic cAMP-regulated enhancer sequences. As further demonstrations, we find transcription factor binding motifs, predict the activity of new sequences and extract the locations of transcription factor binding sites. In summary, we present a novel method that is efficient (requiring minimal computational time and reasonable amounts of memory), has high predictive power comparable with that of models with hundreds of parameters, and has a limited number of optimized parameters, proportional to the sequence length.
NASA Astrophysics Data System (ADS)
Chiruta, D.; Linares, J.; Dahoo, P. R.; Dimian, M.
2015-02-01
In spin crossover (SCO) systems, the shape of the hysteresis curve is closely related to the interactions between the molecules, which play an important role in the response of the system to an external parameter. The effects of short-range interactions on the different shapes of the spin transition phenomena were investigated. In this contribution we solve the corresponding Hamiltonian for a three-dimensional SCO system, taking into account short-range and long-range interactions, using a biased Monte Carlo entropic sampling technique and a semi-analytical method. We discuss the competition between the two interactions, which governs the low-spin (LS) to high-spin (HS) process for a three-dimensional network, and the cooperative effects. We demonstrate a strong correlation between the shape of the transition and the strength of the short-range interaction between molecules, and we identify the role of system size for SCO systems.
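For orientation, a bare-bones Metropolis sketch of a two-state (LS = -1, HS = +1) model with competing short-range and infinite-range couplings; this is a generic stand-in, not the authors' biased entropic-sampling technique, and every parameter value is an assumption:

```python
import numpy as np

L, J, G, T, delta = 10, 0.2, 0.1, 1.0, 0.5   # lattice size, couplings, temperature, gap
rng = np.random.default_rng(0)
s = rng.choice([-1, 1], size=(L, L, L))      # -1 = low spin, +1 = high spin

def local_field(s, i, j, k):
    # short-range: six nearest neighbours; long-range: mean field over the lattice
    nn = (s[(i+1) % L, j, k] + s[(i-1) % L, j, k] + s[i, (j+1) % L, k] +
          s[i, (j-1) % L, k] + s[i, j, (k+1) % L] + s[i, j, (k-1) % L])
    return J * nn + G * s.mean() - delta

for _ in range(200_000):                     # single-site Metropolis updates
    i, j, k = rng.integers(0, L, 3)
    dE = 2.0 * s[i, j, k] * local_field(s, i, j, k)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[i, j, k] *= -1

print("HS fraction:", (s == 1).mean())
```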
40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention requirements... independent laboratory shall also include with the retained sample the test result for benzene as...
40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention requirements... independent laboratory shall also include with the retained sample the test result for benzene as...
Alfaro, Michael E; Zoller, Stefan; Lutzoni, François
2003-02-01
Bayesian Markov chain Monte Carlo sampling has become increasingly popular in phylogenetics as a method for both estimating the maximum likelihood topology and for assessing nodal confidence. Despite the growing use of posterior probabilities, the relationship between the Bayesian measure of confidence and the most commonly used confidence measure in phylogenetics, the nonparametric bootstrap proportion, is poorly understood. We used computer simulation to investigate the behavior of three phylogenetic confidence methods: Bayesian posterior probabilities calculated via Markov chain Monte Carlo sampling (BMCMC-PP), maximum likelihood bootstrap proportion (ML-BP), and maximum parsimony bootstrap proportion (MP-BP). We simulated the evolution of DNA sequences on 17-taxon topologies under 18 evolutionary scenarios, examined the performance of these methods in assigning confidence to correct and incorrect monophyletic groups, and examined the effects of increasing character number on support values. BMCMC-PP and ML-BP were often strongly correlated with one another but could provide substantially different estimates of support on short internodes. In contrast, BMCMC-PP correlated poorly with MP-BP across most of the simulation conditions that we examined. For a given threshold value, more correct monophyletic groups were supported by BMCMC-PP than by either ML-BP or MP-BP. When threshold values were chosen that fixed the rate of accepting incorrect monophyletic relationships as true at 5%, all three methods recovered most of the correct relationships on the simulated topologies, although BMCMC-PP and ML-BP performed better than MP-BP. BMCMC-PP was usually a less biased predictor of phylogenetic accuracy than either bootstrapping method. BMCMC-PP provided high support values for correct topological bipartitions with fewer characters than were needed for the nonparametric bootstrap.
40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention requirements... include with the retained sample the test result for benzene as conducted pursuant to § 80.46(e). (b... sample the test result for benzene as conducted pursuant to § 80.47....
40 CFR 80.330 - What are the sampling and testing requirements for refiners and importers?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Practice for Manual Sampling of Petroleum and Petroleum Products.” (ii) Samples collected under the... present that could affect the sulfur test result. (2) Automatic sampling of petroleum products in..., entitled “Standard Practice for Automatic Sampling of Petroleum and Petroleum Products.” (c) Test...
40 CFR 80.330 - What are the sampling and testing requirements for refiners and importers?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Practice for Manual Sampling of Petroleum and Petroleum Products.” (ii) Samples collected under the... present that could affect the sulfur test result. (2) Automatic sampling of petroleum products in..., entitled “Standard Practice for Automatic Sampling of Petroleum and Petroleum Products.” (c) Test...
Randeniya, S; Mirkovic, D; Titt, U; Guan, F; Mohan, R
2014-06-01
Purpose: In intensity modulated proton therapy (IMPT), energy-dependent protons-per-monitor-unit (MU) calibration factors are important parameters that determine absolute dose values from energy deposition data obtained from Monte Carlo (MC) simulations. The purpose of this study was to assess the sensitivity of MC-computed absolute dose distributions to the protons/MU calibration factors in IMPT. Methods: A “verification plan” (i.e., treatment beams applied individually to a water phantom) of a head and neck patient plan was calculated using the MC technique. The patient plan had three beams: one posterior-anterior (PA) and two anterior oblique. The dose prescription was 66 Gy in 30 fractions. Of the total MUs, 58% was delivered in the PA beam, and 25% and 17% in the other two. Energy deposition data obtained from the MC simulation were converted to Gy using energy-dependent protons/MU calibration factors obtained from two methods. The first method is based on experimental measurements and MC simulations. The second is based on hand calculations of how many ion pairs are produced per proton in the dose monitor and how many ion pairs equal 1 MU (the vendor-recommended method). Dose distributions obtained from method one were compared with those from method two. Results: An average difference of 8% in protons/MU calibration factors between the two methods translated into a 27% difference in absolute dose values for the PA beam; although the dose distributions preserved the shape of the 3D dose distribution qualitatively, they differed quantitatively. For the two oblique beams, no significant difference in absolute dose was observed. Conclusion: The results demonstrate that protons/MU calibration factors can have a significant impact on absolute dose values in IMPT, depending on the fraction of MUs delivered. As the number of MUs increases, the effect of the calibration factors is amplified. In determining protons/MU calibration factors, the experimental method should be preferred for MC dose calculations.
Fontanot, Marco; Iacumin, Lucilla; Cecchini, Francesca; Comi, Giuseppe; Manzano, Marisa
2014-10-01
The detection of Campylobacter, the most commonly reported cause of foodborne gastroenteritis in the European Union, is very important for human health. The most commonly recognised risk factor for infection is the handling and/or consumption of undercooked poultry meat. The methods typically applied to evaluate the presence/absence of Campylobacter in food samples are direct plating and/or enrichment culture based on the Horizontal Method for Detection and Enumeration of Campylobacter spp. (ISO 10272-1B: 2006) and PCR. Molecular methods also allow for the detection of cells that are viable but cannot be cultivated on agar media, and they decrease the time required for species identification. The current study proposes the use of two molecular methods for species identification: dot blot and PCR. The dot blot method had a sensitivity of 25 ng for detection of DNA extracted from a pure culture using a digoxigenin-labelled probe for hybridisation; the target DNA was extracted from the enrichment broth at 24 h. PCR was performed using a pair of sensitive and specific primers for the detection of Campylobacter jejuni and Campylobacter coli after 24 h of enrichment in Preston broth. The initial samples were contaminated by 5 × 10 C. jejuni cells/g and 1.5 × 10² C. coli cells/g, thus the number of cells present in the enrichment broth at 0 h was 1 or 3 cells/g, respectively.
Aberer, Andre J; Stamatakis, Alexandros; Ronquist, Fredrik
2016-01-01
Sampling tree space is the most challenging aspect of Bayesian phylogenetic inference. The sheer number of alternative topologies is problematic by itself. In addition, the complex dependency between branch lengths and topology increases the difficulty of moving efficiently among topologies. Current tree proposals are fast but sample new trees using primitive transformations or re-mappings of old branch lengths. This reduces acceptance rates and presumably slows down convergence and mixing. Here, we explore branch proposals that do not rely on old branch lengths but instead are based on approximations of the conditional posterior. Using a diverse set of empirical data sets, we show that most conditional branch posteriors can be accurately approximated via a Γ distribution. We empirically determine the relationship between the logarithmic conditional posterior density, its derivatives, and the characteristics of the branch posterior. We use these relationships to derive an independence sampler for proposing branches with an acceptance ratio of ~90% on most data sets. This proposal samples branches between 2× and 3× more efficiently than traditional proposals with respect to the effective sample size per unit of runtime. We also compare the performance of standard topology proposals with hybrid proposals that use the new independence sampler to update those branches that are most affected by the topological change. Our results show that hybrid proposals can sometimes noticeably decrease the number of generations necessary for topological convergence. Inconsistent performance gains indicate that branch updates are not the limiting factor in improving topological convergence for the currently employed set of proposals. However, our independence sampler might be essential for the construction of novel tree proposals that apply more radical topology changes.
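The skeleton of such an independence sampler (our generic sketch; in the paper the proposal is fitted from the log-posterior and its derivatives):

```python
import numpy as np
from scipy import stats

def independence_sampler(log_post, proposal, x0, n=10_000, seed=0):
    """Metropolis independence sampler: proposals do not depend on the current state."""
    rng = np.random.default_rng(seed)
    x, accepted, out = x0, 0, np.empty(n)
    for i in range(n):
        y = proposal.rvs(random_state=rng)
        # acceptance ratio: [pi(y) q(x)] / [pi(x) q(y)], in log space
        log_r = (log_post(y) - log_post(x)) + (proposal.logpdf(x) - proposal.logpdf(y))
        if np.log(rng.random()) < log_r:
            x, accepted = y, accepted + 1
        out[i] = x
    return out, accepted / n

# Sanity check: a proposal matching the target exactly is always accepted;
# a merely approximate proposal yields rates like the ~90% reported above.
target = stats.gamma(3.0, scale=0.1)
samples, rate = independence_sampler(target.logpdf, target, x0=0.3)
print(rate)   # 1.0
```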
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.
1990-01-01
Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Program (GARP) Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 × 500 km area.
Seichter, Felicia; Vogt, Josef; Radermacher, Peter; Mizaikoff, Boris
2017-01-25
The calibration of analytical systems is time-consuming and the effort for daily calibration routines should therefore be minimized, while maintaining the analytical accuracy and precision. The 'calibration transfer' approach proposes to combine calibration data already recorded with actual calibration measurements. However, this strategy was developed for the multivariate, linear analysis of spectroscopic data, and thus cannot be applied to sensors with a single response channel and/or a non-linear relationship between signal and desired analytical concentration. To fill this gap for a non-linear calibration equation, we assume that the coefficients for the equation, collected over several calibration runs, are normally distributed. Considering that the coefficients of an actual calibration are a sample of this distribution, only a few standards are needed for a complete calibration data set. The resulting calibration transfer approach is demonstrated for a fluorescence oxygen sensor and implemented as a hierarchical Bayesian model, combined with a Lagrange multipliers technique and Markov-chain Monte-Carlo sampling. The latter provides realistic estimates for coefficients and predictions, together with accurate error bounds, by simulating known measurement errors and system fluctuations. Performance criteria for validation and optimal selection of a reduced set of calibration samples were developed and led to a setup which maintains the analytical performance of a full calibration. Strategies for a rapid determination of problems occurring in a daily calibration routine are proposed, thereby opening the possibility of correcting the problem just in time.
NASA Astrophysics Data System (ADS)
Pavlou, Andrew Theodore
The Monte Carlo simulation of full-core neutron transport requires high fidelity data to represent not only the various types of possible interactions that can occur, but also the temperature and energy regimes for which these data are relevant. For isothermal conditions, nuclear cross section data are processed in advance of running a simulation. In reality, the temperatures in a neutronics simulation are not fixed, but change with respect to the temperatures computed from an associated heat transfer or thermal hydraulic (TH) code. To account for the temperature change, a code user must either 1) compute new data at the problem temperature inline during the Monte Carlo simulation or 2) pre-compute data at a variety of temperatures over the range of possible values. Inline data processing is computationally inefficient while pre-computing data at many temperatures can be memory expensive. An alternative on-the-fly approach to handle the temperature component of nuclear data is desired. By on-the-fly we mean a procedure that adjusts cross section data to the correct temperature adaptively during the Monte Carlo random walk instead of before the running of a simulation. The on-the-fly procedure should also preserve simulation runtime efficiency. While on-the-fly methods have recently been developed for higher energy regimes, the double differential scattering of thermal neutrons has not been examined in detail until now. In this dissertation, an on-the-fly sampling method is developed by investigating the temperature dependence of the thermal double differential scattering distributions. The temperature dependence is analyzed with a linear least squares regression test to develop fit coefficients that are used to sample thermal scattering data at any temperature. The amount of pre-stored thermal scattering data has been drastically reduced from around 25 megabytes per temperature per nuclide to only a few megabytes per nuclide by eliminating the need to compute data
Fenley, Marcia O; Mascagni, Michael; McClain, James; Silalahi, Alexander R J; Simonov, Nikolai A
2010-01-01
Dielectric continuum or implicit solvent models provide a significant reduction in computational cost when accounting for the salt-mediated electrostatic interactions of biomolecules immersed in an ionic environment. These models, in which the solvent and ions are replaced by a dielectric continuum, seek to capture the average statistical effects of the ionic solvent, while the solute is treated at the atomic level of detail. For decades, the solution of the three-dimensional Poisson-Boltzmann equation (PBE), which has become a standard implicit-solvent tool for assessing electrostatic effects in biomolecular systems, has been based on various deterministic numerical methods. Some deterministic PBE algorithms have drawbacks, which include a lack of proper assessment of their accuracy, geometrical difficulties caused by discretization, and, for some problems, their cost in both memory and computation time. Our original stochastic method resolves some of these difficulties by solving the PBE using the Monte Carlo method (MCM). This new approach to the PBE is capable of efficiently solving complex, multi-domain and salt-dependent problems in biomolecular continuum electrostatics to high precision. Here we improve upon our novel stochastic approach by simultaneously computing electrostatic potentials and solvation free energies at different ionic concentrations through correlated Monte Carlo (MC) sampling. By using carefully constructed correlated random walks in our algorithm, we can compute the solution to a standard system including the linearized PBE (LPBE) at all salt concentrations of interest, simultaneously. This approach not only accelerates our MCPBE algorithm, but seems to have cost and accuracy advantages over deterministic methods as well. We verify the effectiveness of this technique by applying it to two common electrostatic computations: the electrostatic potential and polar solvation free energy for calcium binding proteins that are compared
Morton, S E; Chiew, Y S; Pretty, C; Moltchanova, E; Scarrott, C; Redmond, D; Shaw, G M; Chase, J G
2017-02-01
Randomised controlled trials have sought to improve mechanical ventilation treatment. However, few trials to date have shown clinical significance. It is hypothesised that, aside from effective treatment, the outcome metrics and sample sizes of the trial also affect the significance, and thus impact trial design. In this study, a Monte-Carlo simulation method was developed and used to investigate several outcome metrics of ventilation treatment, including 1) length of mechanical ventilation (LoMV); 2) Ventilator Free Days (VFD); and 3) LoMV-28, a combination of the other metrics. As these metrics have highly skewed distributions, the study also investigated the impact of imposing clinically relevant exclusion criteria on study power to enable better design for significance. Data from invasively ventilated patients from a single intensive care unit were used in this analysis to demonstrate the method. Use of LoMV as an outcome metric required 160 patients/arm to reach 80% power with a clinically expected intervention difference of 25% LoMV if clinically relevant exclusion criteria were applied to the cohort, but 400 patients/arm if they were not. However, only 130 patients/arm would be required for the same statistical significance at the same intervention difference if VFD was used. A Monte-Carlo simulation approach using local cohort data combined with objective patient selection criteria can yield better design of ventilation studies to desired power and significance, with fewer patients per arm than traditional trial design methods, which in turn reduces patient risk. Outcome metrics, such as VFD, should be used when a difference in mortality is also expected between the two cohorts. Finally, the non-parametric approach taken is readily generalisable to a range of trial types where outcome data are similarly skewed.
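A compact sketch of this style of Monte-Carlo power estimation, with a made-up log-normal cohort standing in for local LoMV data; arm size, effect size, and the choice of rank-sum test are illustrative assumptions:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def mc_power(cohort, effect=0.25, n_per_arm=160, alpha=0.05, trials=2000, seed=0):
    """Fraction of simulated trials whose rank-sum test reaches significance."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        a = rng.choice(cohort, n_per_arm)                   # control arm
        b = rng.choice(cohort, n_per_arm) * (1.0 - effect)  # e.g., 25% shorter LoMV
        if mannwhitneyu(a, b).pvalue < alpha:
            hits += 1
    return hits / trials

cohort = np.random.default_rng(1).lognormal(1.0, 1.0, 500)  # skewed, like LoMV (days)
print(mc_power(cohort))
```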
GeoLab Concept: The Importance of Sample Selection During Long Duration Human Exploration Mission
NASA Technical Reports Server (NTRS)
Calaway, M. J.; Evans, C. A.; Bell, M. S.; Graff, T. G.
2011-01-01
In the future when humans explore planetary surfaces on the Moon, Mars, and asteroids or beyond, the return of geologic samples to Earth will be a high priority for human spaceflight operations. All future sample return missions will have strict down-mass and volume requirements; methods for in-situ sample assessment and prioritization will be critical for selecting the best samples for return-to-Earth.
40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention...
40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention...
Cao, Youfang; Liang, Jie
2013-07-14
Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively
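To make the weighted-SSA idea concrete, a minimal importance-sampled Gillespie sketch for a birth-death process with one fixed bias parameter (unlike ABSIS, which adapts its two bias parameters from look-ahead paths); every rate and threshold here is invented:

```python
import numpy as np

def biased_ssa_hit_probability(k_b=1.0, k_d=0.5, bias=3.0, threshold=12,
                               t_max=10.0, runs=5000, seed=0):
    """Estimate P(population reaches `threshold` before t_max) by biasing births
    upward and correcting each trajectory with a likelihood-ratio weight."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(runs):
        x, t, logw = 0, 0.0, 0.0
        while t < t_max and x < threshold:
            a = np.array([k_b, k_d * x])          # true propensities
            b = np.array([k_b * bias, k_d * x])   # biased propensities
            B = b.sum()
            tau = rng.exponential(1.0 / B)
            j = 0 if rng.random() < b[0] / B else 1
            # weight update: (true step density) / (biased step density)
            logw += np.log(a[j] / b[j]) + (B - a.sum()) * tau
            t += tau
            x += 1 if j == 0 else -1
        if x >= threshold:
            total += np.exp(logw)
    return total / runs

print(biased_ssa_hit_probability())
```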
Code of Federal Regulations, 2014 CFR
2014-07-01
... producers and importers of certified ethanol denaturant. 80.1644 Section 80.1644 Protection of Environment... ethanol denaturant. (a) Sample and test each batch of certified ethanol denaturant. (1) Producers and importers of certified ethanol denaturant shall collect a representative sample from each batch of...
Code of Federal Regulations, 2010 CFR
2010-07-01
... applicable. (b) Quality assurance program. The importer must conduct a quality assurance program, as specified in this paragraph (b), for each truck or rail car loading terminal. (1) Quality assurance samples... frequency of the quality assurance sampling and testing must be at least one sample for each 50 of...
Ciccotti, Giovanni; Meloni, Simone
2011-04-07
We introduce a new method to simulate the physics of rare events. The method, an extension of Temperature Accelerated Molecular Dynamics, is useful when the collective variables introduced to characterize the rare events are either non-analytical or so complex that computing their derivatives is not practical. We illustrate the functioning of the method by studying homogeneous crystallization in a sample of Lennard-Jones particles. The process is studied by introducing a new collective variable that we call the Effective Nucleus Size N. We have computed the free energy barriers and the size of the critical nucleus, which are in agreement with data available in the literature. We have also performed simulations in the liquid domain of the phase diagram. We found a free energy curve monotonically growing with the nucleus size, consistent with the liquid domain.
NASA Astrophysics Data System (ADS)
Reyhancan, Iskender Atilla; Ebrahimi, Alborz; Çolak, Üner; Erduran, M. Nizamettin; Angin, Nergis
2017-01-01
A new Monte-Carlo Library Least Square (MCLLS) approach for treating non-linear radiation analysis problems in Neutron Inelastic-scattering and Thermal-capture Analysis (NISTA) was developed. 14 MeV neutrons were produced by a neutron generator via the ³H(²H,n)⁴He reaction. The prompt gamma-ray spectra from bulk samples of seven different materials were measured by a Bismuth Germanate (BGO) gamma detection system. Polyethylene was used as neutron moderator, along with iron and lead as neutron and gamma-ray shielding, respectively. The gamma detection system was equipped with a list-mode data acquisition system which streams spectroscopy data directly to the computer, event-by-event. The GEANT4 simulation toolkit was used for generating the single-element libraries of all the elements of interest. These libraries were then used in a Linear Library Least Square (LLLS) approach to fit an unknown experimental sample spectrum with the calculated elemental libraries. GEANT4 simulation results were also used for the selection of the neutron shielding material.
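The LLLS step amounts to expressing the measured spectrum as a non-negative linear combination of the simulated single-element libraries; a self-contained sketch with synthetic stand-ins for the GEANT4 libraries (all shapes and amounts invented):

```python
import numpy as np
from scipy.optimize import nnls

channels, elements = 1024, 7
rng = np.random.default_rng(0)
libraries = rng.random((channels, elements))          # one column per element
true_amounts = np.array([5.0, 0.0, 2.0, 0.0, 1.0, 0.0, 3.0])
spectrum = libraries @ true_amounts + rng.normal(0, 0.1, channels)  # noisy measurement

amounts, residual = nnls(libraries, spectrum)         # non-negative least squares
print(np.round(amounts, 2))                           # recovers the elemental amounts
```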
Sampling Small Mammals in Southeastern Forests: The Importance of Trapping in Trees
Loeb, S.C.; Chapman, G.L.; Ridley, T.R.
1999-01-01
We investigated the effect of sampling methodology on the richness and abundance of small mammal communities in loblolly pine forests. Trapping in trees using Sherman live traps was included along with routine ground trapping using the same device. Estimates of species richness did not differ between samples in which tree traps were included or excluded. However, diversity indices (Shannon-Wiener, Simpson, Shannon and Brillouin) were strongly affected. The indices were significantly greater when tree samples were included, primarily as a result of flying squirrel captures. Without tree traps, the results suggested that cotton mice dominated the community. We recommend that tree traps be included in sampling.
Balokovic, M.; Smolcic, V.; Ivezic, Z.; Zamorani, G.; Schinnerer, E.; Kelly, B. C.
2012-11-01
We investigate the dichotomy in the radio loudness distribution of quasars by modeling their radio emission and various selection effects using a Monte Carlo approach. The existence of two physically distinct quasar populations, the radio-loud and radio-quiet quasars, is controversial and over the last decade a bimodal distribution of radio loudness of quasars has been both affirmed and disputed. We model the quasar radio luminosity distribution with simple unimodal and bimodal distribution functions. The resulting simulated samples are compared to a fiducial sample of 8300 quasars drawn from the SDSS DR7 Quasar Catalog and combined with radio observations from the FIRST survey. Our results indicate that the SDSS-FIRST sample is best described by a radio loudness distribution which consists of two components, with (12 ± 1)% of sources in the radio-loud component. On the other hand, the evidence for a local minimum in the loudness distribution (bimodality) is not strong and we find that previous claims for its existence were probably affected by the incompleteness of the FIRST survey close to its faint limit. We also investigate the redshift and luminosity dependence of the radio loudness distribution and find tentative evidence that at high redshift radio-loud quasars were rarer, on average louder, and exhibited a smaller range in radio loudness. In agreement with other recent work, we conclude that the SDSS-FIRST sample strongly suggests that the radio loudness distribution of quasars is not a universal function, and that more complex models than presented here are needed to fully explain available observations.
40 CFR 80.1347 - What are the sampling and testing requirements for refiners and importers?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Benzene Sampling, Testing and Retention Requirements § 80.1347 What are the sampling and testing... benzene requirements of this subpart, except as modified by paragraphs (a)(2), (a)(3) and (a)(4) of this... benzene concentration for compliance with the requirements of this subpart. (ii) Independent...
40 CFR 80.1347 - What are the sampling and testing requirements for refiners and importers?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Benzene Sampling, Testing and Retention Requirements § 80.1347 What are the sampling and testing... benzene requirements of this subpart, except as modified by paragraphs (a)(2), (a)(3) and (a)(4) of this..., 2015, to determine its benzene concentration for compliance with the requirements of this...
40 CFR 80.1347 - What are the sampling and testing requirements for refiners and importers?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Benzene Sampling, Testing and Retention Requirements § 80.1347 What are the sampling and testing... benzene requirements of this subpart, except as modified by paragraphs (a)(2), (a)(3) and (a)(4) of this... benzene concentration for compliance with the requirements of this subpart. (ii) Independent...
Bandpass Sampling--An Opportunity to Stress the Importance of In-Depth Understanding
ERIC Educational Resources Information Center
Stern, Harold P. E.
2010-01-01
Many bandpass signals can be sampled at rates lower than the Nyquist rate, allowing significant practical advantages. Illustrating this phenomenon after discussing (and proving) Shannon's sampling theorem provides a valuable opportunity for an instructor to reinforce the principle that innovation is possible when students strive to have a complete…
Code of Federal Regulations, 2012 CFR
2012-07-01
... requirements apply to importers who transport motor vehicle diesel fuel, NRLM diesel fuel, or ECA marine fuel...; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Sampling and Testing § 80.583 What... diesel fuel, or ECA marine fuel by truck or rail car? Importers who import diesel fuel subject to the...
The Importance of Sample Processing in Analysis of Asbestos Content in Rocks and Soils
NASA Astrophysics Data System (ADS)
Neumann, R. D.; Wright, J.
2012-12-01
Analysis of asbestos content in rocks and soils using Air Resources Board (ARB) Test Method 435 (M435) involves the processing of samples for subsequent analysis by polarized light microscopy (PLM). The use of different equipment and procedures by commercial laboratories to pulverize rock and soil samples could result in different particle size distributions. It has long been theorized that asbestos-containing samples can be over-pulverized to the point where the particle dimensions of the asbestos no longer meet the required 3:1 length-to-width aspect ratio or the particles become so small that they no longer can be tested for optical characteristics using PLM, where maximum PLM magnification is typically 400X. Recent work has shed some light on this issue. ARB staff conducted an interlaboratory study to investigate variability in the preparation and analytical procedures used by laboratories performing M435 analysis. With regard to sample processing, ARB staff found that different pulverization equipment and processing procedures produced powders that have varying particle size distributions. PLM analysis of the finest powders produced by one laboratory showed that all but one of the 12 samples were non-detect or below the PLM reporting limit, in contrast with the 36 coarser samples prepared from the same field samples by three other laboratories, of which 21 were above the reporting limit. The set of 12 exceptionally fine powder samples produced by the same laboratory was re-analyzed by transmission electron microscopy (TEM), and the results showed that these samples contained asbestos above the TEM reporting limit. However, the use of TEM as a stand-alone analytical procedure, usually performed at magnifications between 3,000 and 20,000X, also has its drawbacks because of the minuscule mass of sample that this method examines. The small amount of powder analyzed by TEM may not be representative of the field sample. The actual mass of the sample powder analyzed by
Lakkaraju, Sirish Kaushik; Raman, E Prabhu; Yu, Wenbo; MacKerell, Alexander D
2014-06-10
Solute sampling of explicit bulk-phase aqueous environments in grand canonical (GC) ensemble simulations suffers from poor convergence due to low insertion probabilities of the solutes. To address this, we developed an iterative procedure involving Grand Canonical-like Monte Carlo (GCMC) and molecular dynamics (MD) simulations. Each iteration involves GCMC of both the solutes and water followed by MD, with the excess chemical potential (μex) of both the solute and the water oscillated to attain their target concentrations in the simulation system. By periodically varying the μex of the water and solutes over the GCMC-MD iterations, solute exchange probabilities and the spatial distributions of the solutes improved. The utility of the oscillating-μex GCMC-MD method is indicated by its ability to approximate the hydration free energy (HFE) of the individual solutes in aqueous solution as well as in dilute aqueous mixtures of multiple solutes. For seven organic solutes: benzene, propane, acetaldehyde, methanol, formamide, acetate, and methylammonium, the average μex of the solutes and the water converged close to their respective HFEs in both 1 M standard state and dilute aqueous mixture systems. The oscillating-μex GCMC methodology is also able to drive solute sampling in proteins in aqueous environments as shown using the occluded binding pocket of the T4 lysozyme L99A mutant as a model system. The approach was shown to satisfactorily reproduce the free energy of binding of benzene as well as sample the functional group requirements of the occluded pocket consistent with the crystal structures of known ligands bound to the L99A mutant as well as their relative binding affinities.
Tarquini, Gabriele; Nunziante Cesaro, Stella; Campanella, Luigi
2014-01-01
The application of Fourier Transform InfraRed (FTIR) spectroscopy to the analysis of oil residues in fragments of archeological amphorae (3rd century A.D.) from Monte Testaccio (Rome, Italy) is reported. In order to check the possibility of revealing the presence of oil residues in archeological pottery using microinvasive and/or non-invasive techniques, different approaches were followed: first, FTIR spectroscopy was used to study oil residues extracted from Roman amphorae. Second, the presence of oil residues was ascertained by analyzing microamounts of archeological fragments with Diffuse Reflectance Infrared Spectroscopy (DRIFT). Finally, external reflection analysis of the ancient shards was performed without preliminary treatments, demonstrating the possibility of detecting oil traces through the observation of the most intense features of their spectra. Incidentally, the existence of carboxylate salts of fatty acids was also observed in DRIFT and reflectance spectra of the archeological samples, supporting the Roman habit of spreading lime over the spoil heaps. The data collected in all steps were always compared with results obtained on purposely made replicas.
A Classroom Note on Monte Carlo Integration.
ERIC Educational Resources Information Center
Kolpas, Sid
1998-01-01
The Monte Carlo method provides approximate solutions to a variety of mathematical problems by performing random sampling simulations with a computer. Presents a program written in Quick BASIC simulating the steps of the Monte Carlo method. (ASK)
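The Quick BASIC listing itself is not reproduced in the abstract; as a minimal illustration of the technique the note teaches, the following Python sketch (the integrand, interval, and sample count are arbitrary choices of ours) estimates a definite integral by averaging random evaluations:

```python
import random

def mc_integrate(f, a, b, n=100_000):
    """Estimate the integral of f over [a, b]: average f at uniformly
    random points, then scale by the interval width."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

if __name__ == "__main__":
    # Example: integral of x^2 over [0, 1]; the exact value is 1/3.
    print(f"estimate = {mc_integrate(lambda x: x * x, 0.0, 1.0):.4f}")
```

The estimate's error shrinks like 1/√n, which is the property the classroom note exploits for problems without closed-form answers.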
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
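For context on the claimed speedup, here is a minimal classical sketch (the toy subroutine and sample counts are our own stand-ins) of the baseline the quantum algorithm improves on: estimating the mean output of a bounded-variance randomized subroutine by plain averaging, whose error decays like 1/√n, so additive precision ε costs O(1/ε²) samples classically versus roughly O(1/ε) for the quantum method.

```python
import random
import statistics

def noisy_subroutine():
    """Toy randomized subroutine with bounded variance; its true mean
    (0.3) plays the role of the quantity to be estimated."""
    return 1.0 if random.random() < 0.3 else 0.0

def classical_mean_estimate(n):
    """Plain Monte Carlo averaging: RMS error decays like 1/sqrt(n),
    so additive precision eps costs O(1/eps^2) subroutine runs; the
    quantum algorithm described above achieves roughly O(1/eps)."""
    return statistics.fmean(noisy_subroutine() for _ in range(n))

if __name__ == "__main__":
    for n in (100, 10_000, 1_000_000):
        print(f"n = {n:>9}: |error| = {abs(classical_mean_estimate(n) - 0.3):.5f}")
```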
NASA Astrophysics Data System (ADS)
García Muñoz, A.; Mills, F. P.
2015-01-01
Context. The interpretation of polarised radiation emerging from a planetary atmosphere must rely on solutions to the vector radiative transport equation (VRTE). Monte Carlo integration of the VRTE is a valuable approach for its flexible treatment of complex viewing and/or illumination geometries, and it can intuitively incorporate elaborate physics. Aims: We present a novel pre-conditioned backward Monte Carlo (PBMC) algorithm for solving the VRTE and apply it to planetary atmospheres irradiated from above. As classical BMC methods, our PBMC algorithm builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. Methods: We show that the neglect of polarisation in the sampling of photon propagation directions in classical BMC algorithms leads to unstable and biased solutions for conservative, optically-thick, strongly polarising media such as Rayleigh atmospheres. The numerical difficulty is avoided by pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions. Pre-conditioning introduces a sense of history in the photon polarisation states through the simulated trajectories. Results: The PBMC algorithm is robust, and its accuracy is extensively demonstrated via comparisons with examples drawn from the literature for scattering in diverse media. Since the convergence rate for MC integration is independent of the integral's dimension, the scheme is a valuable option for estimating the disk-integrated signal of stellar radiation reflected from planets. Such a tool is relevant in the prospective investigation of exoplanetary phase curves. We lay out two frameworks for disk integration and, as an application, explore the impact of atmospheric stratification on planetary phase curves for large star-planet-observer phase angles. By construction, backward integration provides a better
Code of Federal Regulations, 2013 CFR
2013-07-01
... by truck or rail car? 80.583 Section 80.583 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... diesel fuel, or ECA marine fuel by truck or rail car? Importers who import diesel fuel subject to the 15... car may comply with the following requirements instead of the requirements to sample and test...
Code of Federal Regulations, 2011 CFR
2011-07-01
... by truck or rail car? 80.583 Section 80.583 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... diesel fuel, or ECA marine fuel by truck or rail car? Importers who import diesel fuel subject to the 15... car may comply with the following requirements instead of the requirements to sample and test...
Code of Federal Regulations, 2014 CFR
2014-07-01
... by truck or rail car? 80.583 Section 80.583 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... diesel fuel, or ECA marine fuel by truck or rail car? Importers who import diesel fuel subject to the 15... car may comply with the following requirements instead of the requirements to sample and test...
Importance of sampling design and analysis in animal population studies: a comment on Sergio et al
Kery, M.; Royle, J. Andrew; Schmid, Hans
2008-01-01
1. The use of predators as indicators and umbrellas in conservation has been criticized. In the Trentino region, Sergio et al. (2006; hereafter SEA) counted almost twice as many bird species in quadrats located in raptor territories than in controls. However, SEA detected astonishingly few species. We used contemporary Swiss Breeding Bird Survey data from an adjacent region and a novel statistical model that corrects for overlooked species to estimate the expected number of bird species per quadrat in that region. 2. There are two anomalies in SEA which render their results ambiguous. First, SEA detected on average only 6.8 species, whereas a value of 32 might be expected. Hence, they probably overlooked almost 80% of all species. Secondly, the precision of their mean species counts was greater in two-thirds of cases than in the unlikely case that all quadrats harboured exactly the same number of equally detectable species. This suggests that they detected consistently only a biased, unrepresentative subset of species. 3. Conceptually, expected species counts are the product of true species number and species detectability p. Plenty of factors may affect p, including date, hour, observer, previous knowledge of a site and mobbing behaviour of passerines in the presence of predators. Such differences in p between raptor and control quadrats could have easily created the observed effects. Without a method that corrects for such biases, or without quantitative evidence that species detectability was indeed similar between raptor and control quadrats, the meaning of SEA's counts is hard to evaluate. Therefore, the evidence presented by SEA in favour of raptors as indicator species for enhanced levels of biodiversity remains inconclusive. 4. Synthesis and application. Ecologists should pay greater attention to sampling design and analysis in animal population estimation. Species richness estimation means sampling a community. Samples should be representative for the
Determining the relative importance of soil sample locations to predict risk of child lead exposure.
Zahran, Sammy; Mielke, Howard W; McElmurry, Shawn P; Filippelli, Gabriel M; Laidlaw, Mark A S; Taylor, Mark P
2013-10-01
Soil lead in urban neighborhoods is a known predictor of child blood lead levels. In this paper, we address the question of where one ought to concentrate soil sample collection efforts to efficiently predict children at risk for soil Pb exposure. Two extensive data sets are combined, including 5467 surface soil samples collected from 286 census tracts, and geo-referenced blood Pb data for 55,551 children in metropolitan New Orleans, USA. Random intercept least squares, random intercept logistic, and quantile regression results indicate that soils collected within 1 m adjacent to residential streets most reliably predict child blood Pb levels. Regression decomposition results show that residential street soils account for 39.7% of between-neighborhood explained variation, followed by busy street soils (21.97%), open space soils (20.25%), and home foundation soils (18.71%). Just as the age of housing stock is used as a statistical shortcut for child risk of exposure to lead-based paint, our results indicate that one can shortcut the characterization of child risk of exposure to neighborhood soil Pb by concentrating sampling efforts within 1 m of and adjacent to residential and busy streets, while significantly reducing the total costs of collection and analysis. This efficiency gain can help advance proactive, upstream, preventive methods of environmental Pb discovery.
Chen, Yunjie; Roux, Benoît
2015-08-11
Molecular dynamics (MD) trajectories based on a classical equation of motion provide a straightforward, albeit somewhat inefficient approach, to explore and sample the configurational space of a complex molecular system. While a broad range of techniques can be used to accelerate and enhance the sampling efficiency of classical simulations, only algorithms that are consistent with the Boltzmann equilibrium distribution yield a proper statistical mechanical computational framework. Here, a multiscale hybrid algorithm relying simultaneously on all-atom fine-grained (FG) and coarse-grained (CG) representations of a system is designed to improve sampling efficiency by combining the strength of nonequilibrium molecular dynamics (neMD) and Metropolis Monte Carlo (MC). This CG-guided hybrid neMD-MC algorithm comprises six steps: (1) a FG configuration of an atomic system is dynamically propagated for some period of time using equilibrium MD; (2) the resulting FG configuration is mapped onto a simplified CG model; (3) the CG model is propagated for a brief time interval to yield a new CG configuration; (4) the resulting CG configuration is used as a target to guide the evolution of the FG system; (5) the FG configuration (from step 1) is driven via a nonequilibrium MD (neMD) simulation toward the CG target; (6) the resulting FG configuration at the end of the neMD trajectory is then accepted or rejected according to a Metropolis criterion before returning to step 1. A symmetric two-ends momentum reversal prescription is used for the neMD trajectories of the FG system to guarantee that the CG-guided hybrid neMD-MC algorithm obeys microscopic detailed balance and rigorously yields the equilibrium Boltzmann distribution. The enhanced sampling achieved with the method is illustrated with a model system with hindered diffusion and explicit-solvent peptide simulations. Illustrative tests indicate that the method can yield a speedup of about 80 times for the model system and up
Tian, Zhen; Li, Yongbao; Hassan-Rezaeian, Nima; Jiang, Steve B; Jia, Xun
2017-03-01
We have previously developed a GPU-based Monte Carlo (MC) dose engine on the OpenCL platform, named goMC, with a built-in analytical linear accelerator (linac) beam model. In this paper, we report our recent improvement on goMC to move it toward clinical use. First, we have adapted a previously developed automatic beam commissioning approach to our beam model. The commissioning was conducted through an optimization process, minimizing the discrepancies between calculated dose and measurement. We successfully commissioned six beam models built for Varian TrueBeam linac photon beams, including four beams of different energies (6 MV, 10 MV, 15 MV, and 18 MV) and two flattening-filter-free (FFF) beams of 6 MV and 10 MV. Second, to facilitate the use of goMC for treatment plan dose calculations, we have developed an efficient source particle sampling strategy. It uses the pre-generated fluence maps (FMs) to bias the sampling of the control point for source particles already sampled from our beam model. It could effectively reduce the number of source particles required to reach a statistical uncertainty level in the calculated dose, as compared to the conventional FM weighting method. For a head-and-neck patient treated with volumetric modulated arc therapy (VMAT), a reduction factor of ~2.8 was achieved, accelerating dose calculation from 150.9 s to 51.5 s. The overall accuracy of goMC was investigated on a VMAT prostate patient case treated with 10 MV FFF beam. 3D gamma index test was conducted to evaluate the discrepancy between our calculated dose and the dose calculated in Varian Eclipse treatment planning system. The passing rate was 99.82% for 2%/2 mm criterion and 95.71% for 1%/1 mm criterion. Our studies have demonstrated the effectiveness and feasibility of our auto-commissioning approach and new source sampling strategy for fast and accurate MC dose calculations for treatment plans.
Smith, R.L.; Harvey, R.W.; LeBlanc, D.R.
1991-01-01
Vertical gradients of selected chemical constituents, bacterial populations, bacterial activity and electron acceptors were investigated for an unconfined aquifer contaminated with nitrate and organic compounds on Cape Cod, Massachusetts, U.S.A. Fifteen-port multilevel sampling devices (MLS's) were installed within the contaminant plume at the source of the contamination, and at 250 and 2100 m downgradient from the source. Depth profiles of specific conductance and dissolved oxygen at the downgradient sites exhibited vertical gradients that were both steep and inversely related. Narrow zones (2-4 m thick) of high N2O and NH4+ concentrations were also detected within the contaminant plume. A 27-fold change in bacterial abundance; a 35-fold change in frequency of dividing cells (FDC), an indicator of bacterial growth; a 23-fold change in 3H-glucose uptake, a measure of heterotrophic activity; and substantial changes in overall cell morphology were evident within a 9-m vertical interval at 250 m downgradient. The existence of these gradients argues for the need for closely spaced vertical sampling in groundwater studies because small differences in the vertical placement of a well screen can lead to incorrect conclusions about the chemical and microbiological processes within an aquifer.
Whitaker, Thomas B; Saltsman, Joyce J; Ware, George M; Slate, Andrew B
2007-01-01
Hypoglycin A (HGA) is a toxic amino acid that is naturally produced in unripe ackee fruit. In 1973, the U.S. Food and Drug Administration (FDA) placed a worldwide import alert on ackee fruit, which banned the product from entering the United States. The FDA has considered establishing a regulatory limit for HGA and lifting the ban, which will require development of a monitoring program. The establishment of a regulatory limit for HGA requires the development of a scientifically based sampling plan to detect HGA in ackee fruit imported into the United States. Thirty-three lots of ackee fruit were sampled according to an experimental protocol in which 10 samples, i.e., ten 19 oz cans, were randomly taken from each lot and analyzed for HGA by using liquid chromatography. The total variance was partitioned into sampling and analytical variance components, which were found to be a function of the HGA concentration. Regression equations were developed to predict the total, sampling, and analytical variances as a function of HGA concentration. The observed HGA distribution among the test results for the 10 HGA samples was compared with the normal and lognormal distributions. A computer model based on the lognormal distribution was developed to predict the performance of sampling plan designs to detect HGA in ackee fruit shipments. The performance of several sampling plan designs was evaluated to demonstrate how to manipulate sample size and accept/reject limits to reduce misclassification of ackee fruit lots.
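The fitted regression coefficients and the regulatory limit are not given in the abstract, so the sketch below is only a shape-of-the-computation illustration, with a placeholder coefficient of variation and acceptance limit, of how a lognormal-based model like the one described can trace an operating-characteristic curve for a 10-can sampling plan:

```python
import math
import random

def acceptance_probability(lot_conc, accept_limit, cv=0.25,
                           n_samples=10, n_trials=20_000):
    """Estimate the probability that a lot with true HGA concentration
    `lot_conc` is accepted: the mean of n_samples lognormally
    distributed test results must fall at or below accept_limit.
    The coefficient of variation `cv` stands in for the paper's fitted
    variance-vs-concentration regression (placeholder value)."""
    sigma2 = math.log(1.0 + cv * cv)          # lognormal parameters that
    mu = math.log(lot_conc) - sigma2 / 2.0    # reproduce the target mean
    accepted = 0
    for _ in range(n_trials):
        results = [random.lognormvariate(mu, math.sqrt(sigma2))
                   for _ in range(n_samples)]
        if sum(results) / n_samples <= accept_limit:
            accepted += 1
    return accepted / n_trials

if __name__ == "__main__":
    # Hypothetical limit; trace out an operating-characteristic curve.
    for conc in (50, 100, 150, 200):
        print(conc, acceptance_probability(conc, accept_limit=100.0))
```

Manipulating `n_samples` and `accept_limit` in such a simulation is exactly how one trades off the two misclassification risks the authors evaluate.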
Kranz, Thorsten M; Harroch, Sheila; Manor, Orly; Lichtenberg, Pesach; Friedlander, Yechiel; Seandel, Marco; Harkavy-Friedman, Jill; Walsh-Messinger, Julie; Dolgalev, Igor; Heguy, Adriana; Chao, Moses V; Malaspina, Dolores
2015-08-01
Schizophrenia is a debilitating syndrome with high heritability. Genomic studies reveal more than a hundred genetic variants, largely nonspecific and of small effect size, and not accounting for its high heritability. De novo mutations are one mechanism whereby disease-related alleles may be introduced into the population, although these have not been leveraged to explore the disease in general samples. This paper describes a framework to find high-impact genes for schizophrenia. This study consists of two different datasets. First, whole exome sequencing was conducted to identify disruptive de novo mutations in 14 complete parent-offspring trios with sporadic schizophrenia from Jerusalem, which identified 5 sporadic cases with de novo gene mutations in 5 different genes (PTPRG, TGM5, SLC39A13, BTK, CDKN3). Next, targeted exome capture of these genes was conducted in 48 well-characterized, unrelated, ethnically diverse schizophrenia cases, recruited and characterized by the same research team in New York (NY sample), which demonstrated extremely rare and potentially damaging variants in three of the five genes (MAF<0.01) in 12/48 cases (25%); including PTPRG (5 cases), SLC39A13 (4 cases) and TGM5 (4 cases), a higher number than usually identified by whole exome sequencing. Cases differed in cognition and illness features based on which mutation-enriched gene they carried. Functional de novo mutations in protein-interaction domains in sporadic schizophrenia can illuminate risk genes that increase the propensity to develop schizophrenia across ethnicities.
Podjasek, Joshua O; Cook-Norris, Robert H; Richardson, Donna M; Drage, Lisa A; Davis, Mark D P
2011-01-01
Exotic woods from tropical and subtropical regions (eg, from South America, south Asia, and Africa) frequently are used occupationally and recreationally by woodworkers and hobbyists. These exotic woods more commonly provoke irritant contact dermatitis reactions, but they also can provoke allergic contact dermatitis reactions. We report three patients seen at Mayo Clinic (Rochester, MN) with allergic contact dermatitis reactions to exotic woods. Patch testing was performed and included patient-provided wood samples. Avoidance of identified allergens was recommended. For all patients, the dermatitis cleared or improved after avoidance of the identified allergens. Clinicians must be aware of the potential for allergic contact dermatitis reactions to compounds in exotic woods. Patch testing should be performed with suspected woods for diagnostic confirmation and allowance of subsequent avoidance of the allergens.
Importance of long-time simulations for rare event sampling in zinc finger proteins.
Godwin, Ryan; Gmeiner, William; Salsbury, Freddie R
2016-01-01
Molecular dynamics (MD) simulation methods have seen significant improvement since their inception in the late 1950s. Constraints of simulation size and duration that once impeded the field have lessened with the advent of better algorithms, faster processors, and parallel computing. With newer techniques and hardware available, MD simulations of more biologically relevant timescales can now sample a broader range of conformational and dynamical changes including rare events. One concern in the literature has been under which circumstances it is sufficient to perform many shorter timescale simulations and under which circumstances fewer longer simulations are necessary. Herein, our simulations of the zinc finger NEMO (2JVX) using multiple simulations of length 15, 30, 1000, and 3000 ns are analyzed to provide clarity on this point.
Chlamydophila pneumoniae diagnostics: importance of methodology in relation to timing of sampling.
Hvidsten, D; Halvorsen, D S; Berdal, B P; Gutteberg, T J
2009-01-01
The diagnostic impact of PCR-based detection was compared to single-serum IgM antibody measurement and IgG antibody seroconversion during an outbreak of Chlamydophila pneumoniae in a military community. Nasopharyngeal swabs for PCR-based detection, and serum, were obtained from 127 conscripts during the outbreak. Serum, drawn many months before the outbreak, provided the baseline antibody status. C. pneumoniae IgM and IgG antibodies were assayed using microimmunofluorescence (MIF), enzyme immunoassay (EIA) and recombinant ELISA (rELISA). Two reference standard tests were applied: (i) C. pneumoniae PCR; and (ii) assay of C. pneumoniae IgM antibodies, defined as positive if ≥2 IgM antibody assays (i.e. rELISA with MIF and/or EIA) were positive. In 33 subjects, of whom two tested negative according to IgM antibody assays and IgG seroconversion, C. pneumoniae DNA was detected by PCR. The sensitivities were 79%, 85%, 88% and 68%, respectively, and the specificities were 86%, 84%, 78% and 93%, respectively, for MIF IgM, EIA IgM, rELISA IgM and PCR. In two subjects, acute infection was diagnosed on the basis of IgG antibody seroconversion alone. The sensitivity of PCR detection was lower than that of any IgM antibody assay. This may be explained by the late sampling, or clearance of the organism following antibiotic treatment. The results of assay evaluation studies are affected not only by the choice of reference standard tests, but also by the timing of sampling for the different test principles used. On the basis of these findings, a combination of nasopharyngeal swabbing for PCR detection and specific single-serum IgM measurement is recommended in cases of acute respiratory C. pneumoniae infection.
A Modified Trap for Adult Sampling of Medically Important Flies (Insecta: Diptera)
Akbarzadeh, Kamran; Rafinejad, Javad; Nozari, Jamasb; Rassi, Yavar; Sedaghat, Mohammad Mehdi; Hosseini, Mostafa
2012-01-01
Background: Bait-trapping appears to be a generally useful method of studying fly populations. The aim of this study was to construct a new adult flytrap by making some modifications to former versions and to evaluate its applicability in a subtropical zone in southern Iran. Methods: The traps were constructed by adding some equipment to a polyethylene container (18 × 20 × 33 cm) with lid. Fresh sheep meat was used as bait. In total, 27 modified adult traps were made and tested for their efficacy in attracting adult flies. The experiment was carried out in a range of different topographic areas of Fars Province during June 2010. Results: The traps were able to attract various groups of adult flies belonging to the families Calliphoridae, Sarcophagidae, Muscidae, and Fanniidae. The species Calliphora vicina (Diptera: Calliphoridae), Sarcophaga argyrostoma (Diptera: Sarcophagidae) and Musca domestica (Diptera: Muscidae) comprised the majority of the flies collected by this sheep-meat baited trap. Conclusion: This adult flytrap can be recommended for routine field sampling to study the diversity and population dynamics of flies where conducting daily collections is difficult. PMID:23378969
Wollaber, Allan Benton
2016-06-16
This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
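The slides themselves are not reproduced here; the following Python sketch illustrates two items from the outline, the π-estimation example and inverse transform sampling (the exponential distribution is our own choice of worked case):

```python
import math
import random

def estimate_pi(n=1_000_000):
    """Monte Carlo estimate of pi: the fraction of uniform points in
    the unit square that land inside the quarter circle tends to pi/4."""
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / n

def sample_exponential(rate):
    """Inverse transform sampling: push a uniform draw through the
    inverse CDF of the exponential law, F^-1(u) = -ln(1 - u) / rate."""
    return -math.log(1.0 - random.random()) / rate

if __name__ == "__main__":
    print("pi estimate:", estimate_pi())
    print("one Exp(2) draw:", sample_exponential(2.0))
```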
Monte Carlo Simulation for Perusal and Practice.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.; Robey, Randall R.
The meaningful investigation of many problems in statistics can be solved through Monte Carlo methods. Monte Carlo studies can help solve problems that are mathematically intractable through the analysis of random samples from populations whose characteristics are known to the researcher. Using Monte Carlo simulation, the values of a statistic are…
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Examination of goods by importer; sampling; repacking; examination of merchandise by prospective purchasers. 19.8 Section 19.8 Customs Duties U.S... WAREHOUSES, CONTAINER STATIONS AND CONTROL OF MERCHANDISE THEREIN General Provisions § 19.8 Examination...
Monte Carlo fluorescence microtomography
NASA Astrophysics Data System (ADS)
Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge
2011-07-01
Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense scattering of light would significantly degrade the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an l0-regularized tomography model and provides an excellent solution. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probe accurately and reliably.
THE IMPORTANCE OF THE MAGNETIC FIELD FROM AN SMA-CSO-COMBINED SAMPLE OF STAR-FORMING REGIONS
Koch, Patrick M.; Tang, Ya-Wen; Ho, Paul T. P.; Chen, Huei-Ru Vivien; Liu, Hau-Yu Baobab; Yen, Hsi-Wei; Lai, Shih-Ping; Zhang, Qizhou; Chen, How-Huan; Ching, Tao-Chung; Girart, Josep M.; Frau, Pau; Li, Hua-Bai; Li, Zhi-Yun; Padovani, Marco; Qiu, Keping; Rao, Ramprasad
2014-12-20
Submillimeter dust polarization measurements of a sample of 50 star-forming regions, observed with the Submillimeter Array (SMA) and the Caltech Submillimeter Observatory (CSO) covering parsec-scale clouds to milliparsec-scale cores, are analyzed in order to quantify the magnetic field importance. The magnetic field misalignment δ—the local angle between magnetic field and dust emission gradient—is found to be a prime observable, revealing distinct distributions for sources where the magnetic field is preferentially aligned with or perpendicular to the source minor axis. Source-averaged misalignment angles ⟨|δ|⟩ fall into systematically different ranges, reflecting the different source-magnetic field configurations. Possible bimodal ⟨|δ|⟩ distributions are found for the separate SMA and CSO samples. Combining both samples broadens the distribution with a wide maximum peak at small ⟨|δ|⟩ values. Assuming the 50 sources to be representative, the prevailing source-magnetic field configuration is one that statistically prefers small magnetic field misalignments |δ|. When interpreting |δ| together with a magnetohydrodynamics force equation, as developed in the framework of the polarization-intensity gradient method, a sample-based log-linear scaling fits the magnetic field tension-to-gravity force ratio Σ_B versus ⟨|δ|⟩ with ⟨Σ_B⟩ = 0.116 · exp(0.047 · ⟨|δ|⟩) ± 0.20 (mean error), providing a way to estimate the relative importance of the magnetic field, only based on measurable field misalignments |δ|. The force ratio Σ_B discriminates systems that are collapsible on average (⟨Σ_B⟩ < 1) from other molecular clouds where the magnetic field still provides enough resistance against gravitational collapse (⟨Σ_B⟩ > 1). The sample-wide trend shows a transition around ⟨|δ|⟩ ≈ 45°. Defining an effective gravitational force ∼1 − ⟨Σ_B⟩, the average magnetic-field-reduced star formation efficiency is at least a
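Taken at face value, the quoted sample-based scaling can be evaluated directly; this sketch simply computes ⟨Σ_B⟩ = 0.116 · exp(0.047 · ⟨|δ|⟩) and flags the ⟨|δ|⟩ ≈ 45° transition noted above (the coefficients come from the abstract, everything else is illustrative):

```python
import math

def mean_sigma_b(mean_abs_delta_deg):
    """Sample-based scaling from the abstract: mean field
    tension-to-gravity force ratio <Sigma_B> as a function of the
    source-averaged misalignment angle <|delta|> in degrees
    (quoted mean error: +/- 0.20)."""
    return 0.116 * math.exp(0.047 * mean_abs_delta_deg)

if __name__ == "__main__":
    for delta in (10, 45, 80):
        ratio = mean_sigma_b(delta)
        regime = "collapsible" if ratio < 1 else "magnetically supported"
        print(f"<|delta|> = {delta:2d} deg -> <Sigma_B> = {ratio:.2f} ({regime})")
```

Evaluating at 45° gives ⟨Σ_B⟩ ≈ 0.96, consistent with the transition the authors report.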
Tai, Bee-Choo; Grundy, Richard; Machin, David
2011-03-15
Purpose: To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. Methods and Materials: We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. Results: The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Conclusions: Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest.
NASA Technical Reports Server (NTRS)
Glavin, D. P.; Conrad, P.; Dworkin, J. P.; Eigenbrode, J.; Mahaffy, P. R.
2011-01-01
The search for evidence of life on Mars and elsewhere will continue to be one of the primary goals of NASA's robotic exploration program over the next decade. NASA and ESA are currently planning a series of robotic missions to Mars with the goal of understanding its climate, resources, and potential for harboring past or present life. One key goal will be the search for chemical biomarkers including complex organic compounds important in life on Earth. These include amino acids, the monomer building blocks of proteins and enzymes, nucleobases and sugars which form the backbone of DNA and RNA, and lipids, the structural components of cell membranes. Many of these organic compounds can also be formed abiotically as demonstrated by their prevalence in carbonaceous meteorites [1], though, their molecular characteristics may distinguish a biological source [2]. It is possible that in situ instruments may reveal such characteristics, however, return of the right sample (i.e. one with biosignatures or having a high probability of biosignatures) to Earth would allow for more intensive laboratory studies using a broad array of powerful instrumentation for bulk characterization, molecular detection, isotopic and enantiomeric compositions, and spatially resolved chemistry that may be required for confirmation of extant or extinct Martian life. Here we will discuss the current analytical capabilities and strategies for the detection of organics on the Mars Science Laboratory (MSL) using the Sample Analysis at Mars (SAM) instrument suite and how sample return missions from Mars and other targets of astrobiological interest will help advance our understanding of chemical biosignatures in the solar system.
Batt, Angela L; Furlong, Edward T; Mash, Heath E; Glassmeyer, Susan T; Kolpin, Dana W
2017-02-01
A national-scale survey of 247 contaminants of emerging concern (CECs), including organic and inorganic chemical compounds, and microbial contaminants, was conducted in source and treated drinking water samples from 25 treatment plants across the United States. Multiple methods were used to determine these CECs, including six analytical methods to measure 174 pharmaceuticals, personal care products, and pesticides. A three-component quality assurance/quality control (QA/QC) program was designed for the subset of 174 CECs which allowed us to assess and compare performances of the methods used. The three components included: 1) a common field QA/QC protocol and sample design, 2) individual investigator-developed method-specific QA/QC protocols, and 3) a suite of 46 method comparison analytes that were determined in two or more analytical methods. Overall method performance for the 174 organic chemical CECs was assessed by comparing spiked recoveries in reagent, source, and treated water over a two-year period. In addition to the 247 CECs reported in the larger drinking water study, another 48 pharmaceutical compounds measured did not consistently meet predetermined quality standards. Methodologies that did not seem suitable for these analytes are overviewed. The need to exclude analytes based on method performance demonstrates the importance of additional QA/QC protocols.
Liu, Bin
2014-07-01
We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem to be how to find a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated based on the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute force methods just preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
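A small sketch of the ESS diagnostic the method optimizes may help; it assumes the standard definition ESS = (Σw)²/Σw², computed from log-weights for numerical stability (the Gaussian toy target and proposal are our own choices, not the exoplanet posteriors):

```python
import numpy as np

def effective_sample_size(log_weights):
    """ESS of a set of importance weights: (sum w)^2 / sum w^2.
    It approaches the number of draws when the proposal resembles the
    target and collapses toward 1 when a few draws dominate."""
    w = np.exp(log_weights - np.max(log_weights))  # rescale; ESS is scale-free
    return w.sum() ** 2 / np.sum(w ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy example: standard normal proposal, N(1, 1) target density.
    x = rng.standard_normal(10_000)
    log_w = -0.5 * (x - 1.0) ** 2 + 0.5 * x ** 2  # log(target / proposal)
    print(f"ESS = {effective_sample_size(log_w):.0f} of 10000 draws")
```

In the adaptive scheme described above, the mixture proposal's parameters (and its number of components) are tuned precisely to drive this quantity upward.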
2010-10-20
The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
Dunham, Kylee; Grand, James B.
2016-01-01
We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
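The authors' stage-structured model and kernel-smoothing step are not reproduced here; as a minimal generic illustration of SISR itself, the sketch below runs a bootstrap particle filter on a toy Gaussian random-walk state-space model (all noise scales and particle counts are placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

def sisr_filter(obs, n_particles=2000, q=0.1, r=0.5):
    """Bootstrap SISR filter for a toy model: Gaussian random-walk
    state x_t = x_{t-1} + N(0, q^2), observation y_t = x_t + N(0, r^2).
    Returns the filtered posterior mean at each time step."""
    particles = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in obs:
        particles = particles + rng.normal(0.0, q, n_particles)  # propagate
        log_w = -0.5 * ((y - particles) / r) ** 2                # weight
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        means.append(np.sum(w * particles))                      # estimate
        idx = rng.choice(n_particles, size=n_particles, p=w)     # resample
        particles = particles[idx]
    return np.array(means)

if __name__ == "__main__":
    true_state = np.cumsum(rng.normal(0.0, 0.1, 50))
    data = true_state + rng.normal(0.0, 0.5, 50)
    est = sisr_filter(data)
    print("filter RMSE:", np.sqrt(np.mean((est - true_state) ** 2)))
```

Estimating static demographic parameters alongside the state, as the study does, is where particle depletion sets in and motivates their kernel-smoothing step.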
2014-01-01
Background: Carefully conducted, community-based, longitudinal studies are required to gain further understanding of the nature and timing of respiratory viruses causing infections in the population. However, such studies pose unique challenges for field specimen collection, including, as we have observed, the appearance of mould in some nasal swab specimens. We therefore investigated the impact of sample collection quality and the presence of visible mould in samples upon respiratory virus detection by real-time polymerase chain reaction (PCR) assays. Methods: Anterior nasal swab samples were collected from infants participating in an ongoing community-based, longitudinal, dynamic birth cohort study. The samples were first collected from each infant shortly after birth and weekly thereafter. They were then mailed to the laboratory where they were catalogued, stored at -80°C and later screened by PCR for 17 respiratory viruses. The quality of specimen collection was assessed by screening for human deoxyribonucleic acid (DNA) using endogenous retrovirus 3 (ERV3). The impact of ERV3 load upon respiratory virus detection and the impact of visible mould observed in a subset of swabs reaching the laboratory upon both ERV3 loads and respiratory virus detection was determined. Results: In total, 4933 nasal swabs were received in the laboratory. ERV3 load in nasal swabs was associated with respiratory virus detection. Reduced respiratory virus detection (odds ratio 0.35; 95% confidence interval 0.27-0.44) was observed in samples where the ERV3 could not be identified. Mould was associated with increased time of samples reaching the laboratory and reduced ERV3 loads and respiratory virus detection. Conclusion: Suboptimal sample collection and high levels of visible mould can impact negatively upon sample quality. Quality control measures, including monitoring human DNA loads using ERV3 as a marker for epithelial cell components in samples should be undertaken to optimize the
Phuc, Pham Van; Ngoc, Vu Bich; Lam, Dang Hoang; Tam, Nguyen Thanh; Viet, Pham Quoc; Ngoc, Phan Kim
2012-06-01
It is known that umbilical cord blood (UCB) is a rich source of stem cells with practical and ethical advantages. Three important types of stem cells which can be harvested from umbilical cord blood and used in disease treatment are hematopoietic stem cells (HSCs), mesenchymal stem cells (MSCs) and endothelial progenitor cells (EPCs). Since these stem cells have shown enormous potential in regenerative medicine, numerous umbilical cord blood banks have been established. In this study, we examined the ability of banked UCB to produce three types of stem cells from the same samples with the characteristics of HSCs, MSCs and EPCs. We were able to obtain homogeneous, rapidly plastic-adherent cells (with characteristics of MSCs), slowly adherent cells (with characteristics of EPCs) and non-adherent cells (with characteristics of HSCs) from the mononuclear cell fractions of cryopreserved UCB. Using a protocol of 48 h supernatant transferring, we successfully isolated MSCs, which expressed CD13, CD44 and CD90 while being CD34, CD45 and CD133 negative, had a typical fibroblast-like shape, and were able to differentiate into adipocytes; EPCs, which were CD34 and CD90 positive, CD13, CD44, CD45 and CD133 negative, and adherent with a cobblestone-like shape; and HSCs, which formed colonies when cultured in MethoCult medium.
Stockman, Jamila K; Campbell, Jacquelyn C; Celentano, David D
2009-01-01
Objectives: Recent evidence suggests that it is important to consider behavioral-specific sexual violence measures in assessing women’s risk behaviors. This study investigated associations of history and types of sexual coercion on HIV risk behaviors in a nationally representative sample of heterosexually active American women. Methods: Analyses were based on 5,857 women aged 18–44 participating in the 2002 National Survey of Family Growth. Types of lifetime sexual coercion included: victim given alcohol or drugs, verbally pressured, threatened with physical injury, and physically injured. Associations with HIV risk behaviors were assessed using logistic regression. Results: Of 5,857 heterosexually active women, 16.4% reported multiple sex partners and 15.3% reported substance abuse. A coerced first sexual intercourse experience and coerced sex after sexual debut were independently associated with multiple sex partners and substance abuse; the highest risk was observed for women reporting a coerced first sexual intercourse experience. Among types of sexual coercion, alcohol or drug use at coerced sex was independently associated with multiple sex partners and substance abuse. Conclusions: Our findings suggest that public health strategies are needed to address the violent components of heterosexual relationships. Future research should utilize longitudinal and qualitative research to characterize the relationship between continuums of sexual coercion and HIV risk. PMID:19734802
Two research studies funded and overseen by EPA have been conducted since October 2006 on soil gas sampling methods and variations in shallow soil gas concentrations with the purpose of improving our understanding of soil gas methods and data for vapor intrusion applications. Al...
NASA Astrophysics Data System (ADS)
Savoye, S.; Michelot, J.-L.; Matray, J.-M.; Wittebroodt, Ch.; Mifsud, A.
2012-02-01
Argillaceous formations are thought to be suitable natural barriers to the release of radionuclides from a radioactive waste repository. However, the safety assessment of a waste repository hosted by an argillaceous rock requires knowledge of several properties of the host rock such as the hydraulic conductivity, diffusion properties and the pore water composition. This paper presents an experimental design that allows the determination of these three types of parameters on the same cylindrical rock sample. The reliability of this method was evaluated using a core sample from a well-investigated indurated argillaceous formation, the Opalinus Clay from the Mont Terri Underground Research Laboratory (URL) (Switzerland). In this test, deuterium- and oxygen-18-depleted water, bromide and caesium were injected as tracer pulses in a reservoir drilled in the centre of a cylindrical core sample. The evolution of these tracers was monitored by means of samplers included in a circulation circuit for a period of 204 days. Then, a hydraulic test (pulse-test type) was performed. Finally, the core sample was dismantled and analysed to determine tracer profiles. Diffusion parameters determined for the four tracers are consistent with those previously obtained from laboratory through-diffusion and in-situ diffusion experiments. The reconstructed initial pore-water composition (chloride and water stable-isotope concentrations) was also consistent with those previously reported. In addition, the hydraulic test led to an estimate of hydraulic conductivity in good agreement with that obtained from in-situ tests.
ERIC Educational Resources Information Center
Osborne, Jason W.
2011-01-01
Large surveys often use probability sampling in order to obtain representative samples, and these data sets are valuable tools for researchers in all areas of science. Yet many researchers are not formally prepared to appropriately utilize these resources. Indeed, users of one popular dataset were generally found "not" to have modeled…
Kalos, M.
2006-05-09
The Monte Carlo example programs VARHATOM and DMCATOM are two small, simple FORTRAN programs that illustrate the use of the Monte Carlo mathematical technique for calculating the ground state energy of the hydrogen atom.
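The FORTRAN sources are not reproduced here; the following Python sketch re-creates the flavor of such a variational Monte Carlo calculation for the hydrogen ground state (trial wavefunction ψ = e^(−αr), Metropolis sampling of |ψ|², local energy averaged in atomic units). It is an illustrative reimplementation under those standard assumptions, not VARHATOM itself:

```python
import math
import random

def vmc_hydrogen(alpha=0.9, n_steps=200_000, step=0.5):
    """Variational Monte Carlo for hydrogen: sample |psi|^2 for the
    trial wavefunction psi = exp(-alpha * r) with Metropolis moves and
    average the local energy E_L(r) = -alpha^2/2 + (alpha - 1)/r
    (atomic units). alpha = 1 recovers the exact energy, -0.5 hartree."""
    x, y, z, r = 1.0, 0.0, 0.0, 1.0
    energy, count = 0.0, 0
    for i in range(n_steps):
        xn = x + random.uniform(-step, step)
        yn = y + random.uniform(-step, step)
        zn = z + random.uniform(-step, step)
        rn = math.sqrt(xn * xn + yn * yn + zn * zn)
        # Metropolis acceptance with probability |psi(rn)/psi(r)|^2
        if random.random() < math.exp(-2.0 * alpha * (rn - r)):
            x, y, z, r = xn, yn, zn, rn
        if i > n_steps // 10:   # discard equilibration steps
            energy += -0.5 * alpha * alpha + (alpha - 1.0) / r
            count += 1
    return energy / count

if __name__ == "__main__":
    print("E(alpha=0.9) ~", vmc_hydrogen())  # near, but above, -0.5
```

Minimizing the returned energy over α is the variational step; the diffusion Monte Carlo counterpart (DMCATOM) removes the trial-function bias instead.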
Code of Federal Regulations, 2014 CFR
2014-07-01
... producers and importers of denatured fuel ethanol and other oxygenates for use by oxygenate blenders. 80... requirements for producers and importers of denatured fuel ethanol and other oxygenates for use by oxygenate blenders. Beginning January 1, 2017, producers and importers of denatured fuel ethanol (DFE) and...
Nakamura, Hideaki; Aniya, Masaru
2006-03-01
The density of states of Ag₂O–B₂O₃ glasses has been calculated by using a modified scale-transformed energy space sampling algorithm. This algorithm combines the scale-transformed energy space sampling algorithm and the Wang-Landau method. It is shown how the two algorithms can be combined to improve the efficiency of calculation. The thermodynamic properties, in particular the specific heat C_V, of the above-mentioned glass system are studied. At temperatures above 80 K, the value of the specific heat C_V is close to 22 J/mol/K. At low temperatures, deviations of C_V from a T³ behavior are discernible; that is, C_V/T³ exhibits a hump at T = 7 K, which is in good agreement with the reported experimental behavior.
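The paper's modified scale-transformed algorithm is not reproduced here; as a compact illustration of the Wang-Landau ingredient it builds on, the sketch below estimates the density of states of a toy system of independent binary spins, where the exact answer is a binomial coefficient:

```python
import math
import random

def wang_landau(n_spins=12, log_f_final=1e-5, flatness=0.8):
    """Wang-Landau estimate of the density of states g(E) for a toy
    system of independent binary spins, with E = number of 'up' spins;
    the exact g(E) is the binomial coefficient C(n_spins, E)."""
    n_levels = n_spins + 1
    log_g = [0.0] * n_levels   # running estimate of ln g(E)
    hist = [0] * n_levels      # visit histogram for the flatness test
    spins = [0] * n_spins
    e = 0                      # current energy (number of up spins)
    log_f = 1.0                # modification factor, halved when flat
    while log_f > log_f_final:
        for _ in range(5000):
            i = random.randrange(n_spins)
            e_new = e + (1 if spins[i] == 0 else -1)
            # Accept with probability min(1, g(E)/g(E_new))
            d = log_g[e] - log_g[e_new]
            if d >= 0 or random.random() < math.exp(d):
                spins[i] ^= 1
                e = e_new
            log_g[e] += log_f  # penalize the visited level
            hist[e] += 1
        if min(hist) > flatness * sum(hist) / n_levels:
            hist = [0] * n_levels
            log_f /= 2.0
    return log_g

if __name__ == "__main__":
    log_g = wang_landau()
    shift = log_g[0]           # normalize so that g(0) = 1 exactly
    for e, lg in enumerate(log_g):
        print(e, round(math.exp(lg - shift)), math.comb(12, e))
```

Once ln g(E) is known, thermodynamic quantities such as C_V follow from canonical averages over g(E)e^(−E/kT), which is how the paper extracts the specific heat.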
Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique
2017-01-01
Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task and demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities. As such, SDF models have recently been introduced for freeway and ramps in HSM addendum. However, since these functions or models are fitted and validated using data from a few selected number of states, they are required to be calibrated to the local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor. However, the proposed methodology to calibrate SDFs was never validated through research. Furthermore, there are no concrete guidelines to select a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. It was indicated that as the value of the true calibration factor deviates further away from '1', more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average of the coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of crash severities that are used for the calibration process.
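The HSM's exact SDF calibration procedure is not spelled out in the abstract; as a generic illustration of estimating a scalar calibration factor and the sampling spread that drives sample-size guidance, the following sketch (site counts, the Poisson outcome model, and the 'true' factor of 1.3 are all hypothetical) repeats an observed-over-predicted calculation many times:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical calibration data: model-predicted severe ('KAB') crash
# counts at 50 sites; observed counts are generated with a 'true'
# calibration factor of 1.3.
true_c = 1.3
predicted = rng.uniform(0.5, 5.0, size=50)

def one_calibration():
    """Scalar calibration factor: observed over predicted totals."""
    observed = rng.poisson(true_c * predicted)
    return observed.sum() / predicted.sum()

# Sampling spread of the estimated factor around the true value; a
# larger coefficient of variation would call for more calibration sites.
estimates = np.array([one_calibration() for _ in range(2000)])
print(f"true C = {true_c}, mean estimate = {estimates.mean():.3f}, "
      f"CV = {estimates.std() / estimates.mean():.3f}")
```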
Hartman, D; Benton, L; Morenos, L; Beyer, J; Spiden, M; Stock, A
2011-02-25
The identification of disaster victims through the use of DNA analysis is an integral part of any Disaster Victim Identification (DVI) response, regardless of the scale and nature of the disaster. As part of the DVI response to the 2009 Victorian Bushfires Disaster, DNA analysis was performed to assist in the identification of victims through kinship (familial matching to relatives) or direct (self source sample) matching of DNA profiles. Although most of the DNA identifications achieved were to reference samples from relatives, there were a number of DNA identifications (12) made through direct matching. Guthrie cards, which have been collected in Australia over the past 30 years, were used to provide direct reference samples. Of the 236 ante-mortem (AM) samples received, 21 were Guthrie cards and one was a biopsy specimen; all yielding complete DNA profiles when genotyped. This publication describes the use of such Biobanks and medical specimens as a sample source for the recovery of good quality DNA for comparisons to post-mortem (PM) samples.
Extra Chance Generalized Hybrid Monte Carlo
NASA Astrophysics Data System (ADS)
Campos, Cédric M.; Sanz-Serna, J. M.
2015-01-01
We study a method, Extra Chance Generalized Hybrid Monte Carlo, to avoid rejections in the Hybrid Monte Carlo method and related algorithms. In the spirit of delayed rejection, whenever a rejection would occur, extra work is done to find a fresh proposal that, hopefully, may be accepted. We present experiments indicating that the additional work per sample carried out in the extra chance approach clearly pays off in terms of the quality of the samples generated.
Code of Federal Regulations, 2014 CFR
2014-07-01
... producers and importers of denaturant designated as suitable for the manufacture of denatured fuel ethanol... suitable for the manufacture of denatured fuel ethanol meeting federal quality requirements. Beginning January 1, 2017, or on the first day that any producer or importer of ethanol denaturant designates...
A national-scale survey of 247 contaminants of emerging concern (CECs), including organic and inorganic chemical compounds, and microbial contaminants, was conducted in source and treated drinking water samples from 25 treatment plants across the United States. Multiple methods w...
CosmoMC: Cosmological MonteCarlo
NASA Astrophysics Data System (ADS)
Lewis, Antony; Bridle, Sarah
2011-06-01
We present a fast Markov Chain Monte-Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent CMB experiments and provide parameter constraints, including sigma_8, from the CMB independent of other data. We next combine data from the CMB, HST Key Project, 2dF galaxy redshift survey, supernovae Ia and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6 and 9 parameter analyses of flat models, and an 11 parameter analysis of non-flat models. Our results include constraints on the neutrino mass (m_nu < 0.3eV), equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendices we describe the many uses of importance sampling, including computing results from new data and accuracy correction of results generated from an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints, the effect of the prior, assess the goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.
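The appendix use of importance sampling to fold in new data can be illustrated schematically: given posterior draws from an existing chain, each sample is reweighted by the extra likelihood factor contributed by the new data. The sketch below is a generic illustration of that reweighting, not CosmoMC's actual interface; the array of log-likelihood differences is an assumed input.

```python
import numpy as np

def reweight_chain(samples, delta_loglike):
    """Importance-sample an existing chain to include a new likelihood term.

    samples       : (n, d) array of posterior draws from the original run
    delta_loglike : (n,) log of the extra likelihood factor for the new data,
                    evaluated at each sample (assumed to be supplied)
    Returns the weighted mean, weighted covariance and effective sample size.
    """
    w = np.exp(delta_loglike - delta_loglike.max())   # stabilise the exponential
    w /= w.sum()
    mean = w @ samples
    diff = samples - mean
    cov = (w[:, None] * diff).T @ diff
    ess = 1.0 / np.sum(w ** 2)                        # effective sample size
    return mean, cov, ess
```

A small effective sample size signals that the new data are too informative for reweighting to be reliable and a fresh chain is needed.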
Dillon, James C K; Bezerra, Leonardo; Del Pilar Sosa Peña, María; Neu-Baker, Nicole M; Brenner, Sara A
2017-01-31
Hyperspectral imaging (HSI) and mapping are increasingly used for visualization and identification of nanoparticles (NPs) in a variety of matrices, including aqueous suspensions and biological samples. Reference spectral libraries (RSLs) contain hyperspectral data collected from materials of known composition and are used to detect the known materials in experimental samples through a one-to-one pixel "mapping" process. In some HSI studies, RSLs created from raw NPs were used to map NPs in experimental samples in a different matrix; for example, RSLs created from NPs in suspension to map NPs in biological tissue. Others have utilized RSLs created from NPs in the same matrix. However, few studies have systematically compared hyperspectral data as a function of the matrix in which the NPs are found and its impact on mapping results. The objective of this study is to compare RSLs created from metal oxide NPs in aqueous suspensions to RSLs created from the same NPs in rat tissues following in vivo inhalation exposure, and to investigate the differences in mapping that result from the use of each RSL. Results demonstrate that the spectral profiles of these NPs are matrix dependent: RSLs created from NPs in positive control tissues mapped to experimental tissues more appropriately than RSLs created from NPs in suspension. Aqueous suspension RSLs mapped 0-602 out of 500,424 pixels per tissue image while tissue RSLs mapped 689-18,435 pixels for the same images. This study underscores the need for appropriate positive controls for the creation of RSLs for mapping NPs in experimental samples.
Use of single scatter electron monte carlo transport for medical radiation sciences
Svatos, Michelle M.
2001-01-01
The single scatter Monte Carlo code CREEP models precise microscopic interactions of electrons with matter to enhance physical understanding of radiation sciences. It is designed to simulate electrons in any medium, including materials important for biological studies. It simulates each interaction individually by sampling from a library which contains accurate information over a broad range of energies.
Technology Transfer Automated Retrieval System (TEKTRAN)
Hypoglycin A (HGA) is a toxic amino acid that is naturally produced in unripe ackee fruit. In 1973 the FDA placed a worldwide import alert on ackee fruit, which banned the product from entering the U.S. The FDA has considered establishing a regulatory limit for HGA and lifting the ban, which will re...
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan
2016-08-24
Here, we study the approximation of expectations w.r.t. probability distributions associated with the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, which leads to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In this context, it is known that the multilevel Monte Carlo (MLMC) method can reduce the computational effort needed to estimate expectations for a given level of error. This is achieved via a telescoping identity associated with a Monte Carlo approximation of a sequence of probability distributions with discretisation levels infinity > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve i.i.d. sampling from the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that, under appropriate assumptions, the attractive reduction in computational effort to estimate expectations for a given level of error can be maintained within the SMC context.
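The telescoping identity at the heart of MLMC can be illustrated with a toy stochastic differential equation, independent of the SMC machinery in this paper; the drift, volatility, level sizes and sample counts below are arbitrary illustrative choices, not those of any Bayesian inverse problem.

```python
import numpy as np

rng = np.random.default_rng(1)
MU, SIGMA, T, X0 = 0.05, 0.2, 1.0, 1.0    # illustrative SDE parameters

def euler_paths(n_steps, n_paths):
    """Euler-Maruyama paths of dX = MU*X dt + SIGMA*X dW, returning X_T."""
    dt = T / n_steps
    x = np.full(n_paths, X0)
    for _ in range(n_steps):
        x = x + MU * x * dt + SIGMA * x * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

def level_correction(n_steps, n_paths):
    """Sample P_l - P_{l-1} with fine and coarse paths driven by the same
    Brownian increments -- the coupling that makes MLMC cheap."""
    dt = T / n_steps
    xf = np.full(n_paths, X0)              # fine level: n_steps
    xc = np.full(n_paths, X0)              # coarse level: n_steps // 2
    for _ in range(n_steps // 2):
        dw1 = np.sqrt(dt) * rng.standard_normal(n_paths)
        dw2 = np.sqrt(dt) * rng.standard_normal(n_paths)
        xf = xf + MU * xf * dt + SIGMA * xf * dw1
        xf = xf + MU * xf * dt + SIGMA * xf * dw2
        xc = xc + MU * xc * 2 * dt + SIGMA * xc * (dw1 + dw2)
    return xf - xc

# Telescoping identity: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
# with many cheap samples on coarse levels and few on fine ones.
estimate = euler_paths(1, 200_000).mean()
for n_steps, n_paths in [(2, 50_000), (4, 20_000), (8, 8_000), (16, 3_000)]:
    estimate += level_correction(n_steps, n_paths).mean()
print("MLMC estimate:", estimate, " exact:", X0 * np.exp(MU * T))
```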
Womersley, J. (Dept. of Physics)
1992-10-01
The D0 detector at the Fermilab Tevatron began its first data taking run in May 1992. For analysis of the expected 25 pb^-1 data sample, roughly half a million simulated events will be needed. The GEANT-based Monte Carlo program used to generate these events is described, together with comparisons to test beam data. Some novel techniques used to speed up execution and simplify geometrical input are described.
Observations on variational and projector Monte Carlo methods.
Umrigar, C J
2015-10-28
Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.
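As a concrete baseline for the variational branch discussed here, the sketch below runs a textbook variational Monte Carlo calculation for the 1D harmonic oscillator with a Gaussian trial wavefunction; the trial form, step size and sample counts are illustrative and are not tied to the specific importance-sampling prescriptions compared in the paper.

```python
import numpy as np

def vmc_harmonic(alpha, n_steps=200_000, step=1.0, seed=0):
    """Variational Monte Carlo for H = -1/2 d^2/dx^2 + x^2/2 with trial
    psi_alpha(x) = exp(-alpha x^2).  Metropolis samples |psi|^2 and averages
    the local energy E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)."""
    rng = np.random.default_rng(seed)
    x = 0.0
    e_sum = 0.0
    n_kept = 0
    for i in range(n_steps):
        x_new = x + step * (rng.random() - 0.5)
        # acceptance ratio |psi(x_new)/psi(x)|^2
        if rng.random() < np.exp(-2 * alpha * (x_new**2 - x**2)):
            x = x_new
        if i > n_steps // 10:                     # discard burn-in
            e_sum += alpha + x * x * (0.5 - 2 * alpha**2)
            n_kept += 1
    return e_sum / n_kept

for a in (0.3, 0.5, 0.7):
    print(a, vmc_harmonic(a))    # minimum (and exact value 0.5) at alpha = 0.5
```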
On a full Monte Carlo approach to quantum mechanics
NASA Astrophysics Data System (ADS)
Sellier, J. M.; Dimov, I.
2016-12-01
The Monte Carlo approach to numerical problems has been shown to be remarkably efficient in performing very large computational tasks since it is an embarrassingly parallel technique. Additionally, Monte Carlo methods are well known to maintain performance and accuracy as the dimensionality of a given problem increases, a rather counterintuitive peculiarity not shared by any known deterministic method. Motivated by these peculiar and desirable computational features, in this work we describe a full Monte Carlo approach to the problem of simulating single- and many-body quantum systems by means of signed particles. In particular, we introduce a stochastic technique, based on the strategy known as importance sampling, for the computation of the Wigner kernel which, so far, has represented the main bottleneck of this method (it is equivalent to the calculation of a multi-dimensional integral, a problem in which complexity is known to grow exponentially with the dimensions of the problem). The benefit of this stochastic technique for the kernel is twofold: firstly, it reduces the complexity of a quantum many-body simulation from non-linear to linear; secondly, it introduces an embarrassingly parallel approach to this very demanding problem. To conclude, we perform concise but indicative numerical experiments which clearly illustrate how a full Monte Carlo approach to many-body quantum systems is not only possible but also advantageous. This paves the way towards practical time-dependent, first-principle simulations of relatively large quantum systems by means of affordable computational resources.
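To give a feel for how importance sampling tames a high-dimensional integral of this kind, the sketch below estimates a simple oscillatory Gaussian integral by absorbing the Gaussian factor into the sampling density; it is a generic toy with an illustrative integrand and dimension, not the actual Wigner kernel of the signed-particle formulation.

```python
import numpy as np

def oscillatory_integral(a, n_samples=100_000, seed=0):
    """Importance-sampling estimate of I = integral over R^d of
    exp(-|x|^2) cos(a.x) dx.  The Gaussian factor is absorbed into the
    sampling density q(x) ~ N(0, I/2), so the estimator reduces to
    pi^{d/2} * mean(cos(a.x_i)) with x_i ~ q.  The closed form,
    pi^{d/2} exp(-|a|^2/4), is returned for comparison."""
    rng = np.random.default_rng(seed)
    d = len(a)
    x = rng.normal(scale=np.sqrt(0.5), size=(n_samples, d))
    estimate = np.pi ** (d / 2) * np.mean(np.cos(x @ a))
    exact = np.pi ** (d / 2) * np.exp(-np.dot(a, a) / 4)
    return estimate, exact

print(oscillatory_integral(np.array([1.0, 2.0, 0.5, 1.5, 1.0, 0.5])))
```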
Chamorro-Premuzic, Tomas; Reimers, Stian; Hsu, Anne; Ahmetoglu, Gorkan
2009-08-01
The present study examined individual differences in artistic preferences in a sample of 91,692 participants (60% women and 40% men), aged 13-90 years. Participants completed a Big Five personality inventory (Goldberg, 1999) and provided preference ratings for 24 different paintings corresponding to cubism, renaissance, impressionism, and Japanese art, which loaded on to a latent factor of overall art preferences. As expected, the personality trait openness to experience was the strongest and only consistent personality correlate of artistic preferences, affecting both overall and specific preferences, as well as visits to galleries, and artistic (rather than scientific) self-perception. Overall preferences were also positively influenced by age and visits to art galleries, and to a lesser degree, by artistic self-perception and conscientiousness (negatively). As for specific styles, after overall preferences were accounted for, more agreeable, more conscientious and less open individuals reported higher preference levels for impressionist art; younger and more extraverted participants showed higher levels of preference for cubism (as did males); and younger participants, as well as males, reported higher levels of preference for renaissance art. Limitations and recommendations for future research are discussed.
Peterson, A Townsend; Moses, Lina M; Bausch, Daniel G
2014-01-01
Lassa fever is a disease that has been reported from sites across West Africa; it is caused by an arenavirus that is hosted by the rodent M. natalensis. Although it is confined to West Africa, and has been documented in detail in some well-studied areas, the details of the distribution of risk of Lassa virus infection remain poorly known at the level of the broader region. In this paper, we explored the effects of certainty of diagnosis, oversampling in well-studied regions, and error balance on the results of mapping exercises. Each of the three factors assessed in this study had clear and consistent influences on model results, which overestimated risk in southern, humid zones in West Africa and underestimated risk in drier and more northern areas. The final, adjusted risk map indicates broad risk areas across much of West Africa. Although risk maps are increasingly easy to develop from disease occurrence data and raster data sets summarizing aspects of environments and landscapes, this process is highly sensitive to issues of data quality, sampling design, and design of analysis, with macrogeographic implications of each of these issues and the potential for misrepresenting real patterns of risk.
Papadopoulos, Costas; Frontistis, Zacharias; Antonopoulou, Maria; Venieri, Danae; Konstantinou, Ioannis; Mantzavinos, Dionissios
2016-07-01
The sonochemical degradation of ethyl paraben (EP), a representative of the parabens family, was investigated. Experiments were conducted at constant ultrasound frequency of 20 kHz and liquid bulk temperature of 30 °C in the following range of experimental conditions: EP concentration 250-1250 μg/L, ultrasound (US) density 20-60 W/L, reaction time up to 120 min, initial pH 3-8 and sodium persulfate 0-100 mg/L, either in ultrapure water or secondary treated wastewater. A factorial design methodology was adopted to elucidate the statistically important effects and their interactions and a full empirical model comprising seventeen terms was originally developed. Omitting several terms of lower significance, a reduced model that can reliably simulate the process was finally proposed; this includes EP concentration, reaction time, power density and initial pH, as well as the interactions (EP concentration)×(US density), (EP concentration)×(pHo) and (EP concentration)×(time). Experiments at an increased EP concentration of 3.5 mg/L were also performed to identify degradation by-products. LC-TOF-MS analysis revealed that EP sonochemical degradation occurs through dealkylation of the ethyl chain to form methyl paraben, while successive hydroxylation of the aromatic ring yields 4-hydroxybenzoic, 2,4-dihydroxybenzoic and 3,4-dihydroxybenzoic acids. By-products are less toxic to bacterium V. fischeri than the parent compound.
Card, Roderick; Vaughan, Kelly; Bagnall, Mary; Spiropoulos, John; Cooley, William; Strickland, Tony; Davies, Rob; Anjum, Muna F.
2016-01-01
Salmonella enterica is a foodborne zoonotic pathogen of significant public health concern. We have characterized the virulence and antimicrobial resistance gene content of 95 Salmonella isolates from 11 serovars by DNA microarray recovered from UK livestock or imported meat. Genes encoding resistance to sulphonamides (sul1, sul2), tetracycline [tet(A), tet(B)], streptomycin (strA, strB), aminoglycoside (aadA1, aadA2), beta-lactam (blaTEM), and trimethoprim (dfrA17) were common. Virulence gene content differed between serovars; S. Typhimurium formed two subclades based on virulence plasmid presence. Thirteen isolates were selected by their virulence profile for pathotyping using the Galleria mellonella pathogenesis model. Infection with a chicken invasive S. Enteritidis or S. Gallinarum isolate, a multidrug resistant S. Kentucky, or a S. Typhimurium DT104 isolate resulted in high mortality of the larvae; notably presence of the virulence plasmid in S. Typhimurium was not associated with increased larvae mortality. Histopathological examination showed that infection caused severe damage to the Galleria gut structure. Enumeration of intracellular bacteria in the larvae 24 h post-infection showed increases of up to 7 log above the initial inoculum and transmission electron microscopy (TEM) showed bacterial replication in the haemolymph. TEM also revealed the presence of vacuoles containing bacteria in the haemocytes, similar to Salmonella containing vacuoles observed in mammalian macrophages; although there was no evidence from our work of bacterial replication within vacuoles. This work shows that microarrays can be used for rapid virulence genotyping of S. enterica and that the Galleria animal model replicates some aspects of Salmonella infection in mammals. These procedures can be used to help inform on the pathogenicity of isolates that may be antibiotic resistant and have scope to aid the assessment of their potential public and animal health risk. PMID:27199965
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M.S.; Ricketson, L.F.; Dimits, A.M.; Caflisch, R.E.; Cohen, B.I.
2014-10-01
We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^−2) or O(ε^−2(ln ε)^2), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^−3) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^−5. We discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; Caflisch, R. E.; Cohen, B. I.
2014-05-29
We present a new, for plasma physics, highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau–Fokker–Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε^{–2}) or O(ε^{–2}(ln ε)^{2}), depending on the underlying discretization, Milstein or Euler–Maruyama respectively. This is to be contrasted with a cost of O(ε^{–3}) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10^{–5}. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate limiting step, and its limitations.
Fast Monte Carlo for radiation therapy: the PEREGRINE Project
Hartmann Siantar, C.L.; Bergstrom, P.M.; Chandler, W.P.; Cox, L.J.; Daly, T.P.; Garrett, D.; House, R.K.; Moses, E.I.; Powell, C.L.; Patterson, R.W.; Schach von Wittenau, A.E.
1997-11-11
The purpose of the PEREGRINE program is to bring high-speed, high-accuracy, high-resolution Monte Carlo dose calculations to the desktop in the radiation therapy clinic. PEREGRINE is a three-dimensional Monte Carlo dose calculation system designed specifically for radiation therapy planning. It provides dose distributions from external beams of photons, electrons, neutrons, and protons as well as from brachytherapy sources. Each external radiation source particle passes through collimator jaws and beam modifiers such as blocks, compensators, and wedges that are used to customize the treatment to maximize the dose to the tumor. Absorbed dose is tallied in the patient or phantom as Monte Carlo simulation particles are followed through a Cartesian transport mesh that has been manually specified or determined from a CT scan of the patient. This paper describes PEREGRINE capabilities, results of benchmark comparisons, calculation times and performance, and the significance of Monte Carlo calculations for photon teletherapy. PEREGRINE results show excellent agreement with a comprehensive set of measurements for a wide variety of clinical photon beam geometries, on both homogeneous and heterogeneous test samples or phantoms. PEREGRINE is capable of calculating >350 million histories per hour for a standard clinical treatment plan. This results in a dose distribution with voxel standard deviations of <2% of the maximum dose on 4 million voxels with 1 mm resolution in the CT-slice plane in under 20 minutes. Calculation times include tracking particles through all patient-specific beam delivery components as well as the patient. Most importantly, comparison of Monte Carlo dose calculations with currently-used algorithms reveals significantly different dose distributions for a wide variety of treatment sites, due to the complex 3-D effects of missing tissue, tissue heterogeneities, and accurate modeling of the radiation source.
Wormhole Hamiltonian Monte Carlo
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2015-01-01
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551
Wormhole Hamiltonian Monte Carlo.
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2014-07-31
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function.
Tang, Ke; Wong, Samuel W.K.; Liu, Jun S.; Zhang, Jinfeng; Liang, Jie
2015-01-01
Motivation: Loops in proteins are often involved in biochemical functions. Their irregularity and flexibility make experimental structure determination and computational modeling challenging. Most current loop modeling methods focus on modeling single loops. In protein structure prediction, multiple loops often need to be modeled simultaneously. As interactions among loops in spatial proximity can be rather complex, sampling the conformations of multiple interacting loops is a challenging task. Results: In this study, we report a new method called multi-loop Distance-guided Sequential chain-Growth Monte Carlo (M-DiSGro) for prediction of the conformations of multiple interacting loops in proteins. Our method achieves an average RMSD of 1.93 Å for lowest energy conformations of 36 pairs of interacting protein loops with the total length ranging from 12 to 24 residues. We further constructed a data set containing proteins with 2, 3 and 4 interacting loops. For the most challenging target proteins with four loops, the average RMSD of the lowest energy conformations is 2.35 Å. Our method is also tested for predicting multiple loops in β-barrel membrane proteins. For outer-membrane protein G, the lowest energy conformation has a RMSD of 2.62 Å for the three extracellular interacting loops with a total length of 34 residues (12, 12 and 10 residues in each loop). Availability and implementation: The software is freely available at: tanto.bioe.uic.edu/m-DiSGro. Contact: jinfeng@stat.fsu.edu or jliang@uic.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25861965
Interaction picture density matrix quantum Monte Carlo
Malone, Fionn D.; Lee, D. K. K.; Foulkes, W. M. C.; Blunt, N. S.; Shepherd, James J.; Spencer, J. S.
2015-07-28
The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I
2014-06-15
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation speeds on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the
Barfi, Azadeh; Nazem, Habibollah; Saeidi, Iman; Peyrovi, Moazameh; Afsharzadeh, Maryam; Barfi, Behruz; Salavati, Hossein
2016-03-20
In the present study, an efficient and environmentally friendly method (in-syringe reversed dispersive liquid-liquid microextraction, IS-R-DLLME) was developed to extract three important components (para-anisaldehyde, trans-anethole and its isomer estragole) simultaneously from different plant extracts (basil, fennel and tarragon) and from human plasma and urine samples prior to their determination using high-performance liquid chromatography. The importance of choosing these plant extracts as samples stems from the dual roles of their bioactive compounds (trans-anethole and estragole), which can alter different cellular processes positively or negatively, and from the need for a simple and efficient method for the extraction and sensitive determination of these compounds in such samples. Under the optimum conditions (extraction solvent: 120 μL of n-octanol; dispersive solvent: 600 μL of acetone; collecting solvent: 1000 μL of acetone; sample pH 3; no salt), the limits of detection (LODs), linear dynamic ranges (LDRs) and recoveries (R) were 79-81 ng mL(-1), 0.26-6.9 μg mL(-1) and 94.1-99.9%, respectively. The results showed that IS-R-DLLME is a simple, fast and sensitive method with low consumption of extraction solvent that provides high recovery under the optimum conditions. The method was applied to investigate the absorption of the analytes by determining them before (in the plant extracts) and after (in the human plasma and urine samples) consumption, which can indicate the toxicity levels of the analytes (on the basis of their dosages) in the extracts.
An Efficient MCMC Algorithm to Sample Binary Matrices with Fixed Marginals
ERIC Educational Resources Information Center
Verhelst, Norman D.
2008-01-01
Uniform sampling of binary matrices with fixed margins is known to be a difficult problem. Two classes of algorithms to sample from a distribution not too different from the uniform are studied in the literature: importance sampling and Markov chain Monte Carlo (MCMC). Existing MCMC algorithms converge slowly, require a long burn-in period and yield…
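A minimal sketch of the MCMC class of samplers referred to here is the classic "checkerboard swap" chain, which preserves row and column sums by construction; this is a generic illustration with an arbitrary starting matrix, not the specific algorithm proposed in the article, and in practice it needs the long burn-in the abstract warns about.

```python
import numpy as np

def margin_preserving_chain(m, n_swaps=10_000, seed=0):
    """MCMC over 0/1 matrices with fixed row and column sums.

    Repeatedly pick two rows and two columns; if the 2x2 submatrix is a
    'checkerboard' ([[1,0],[0,1]] or [[0,1],[1,0]]), swapping it preserves
    all margins.  With symmetric proposals and always-accept moves, the
    chain targets the uniform distribution over reachable matrices."""
    rng = np.random.default_rng(seed)
    m = m.copy()
    rows, cols = m.shape
    for _ in range(n_swaps):
        i, j = rng.choice(rows, size=2, replace=False)
        k, l = rng.choice(cols, size=2, replace=False)
        sub = m[np.ix_([i, j], [k, l])]
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            m[np.ix_([i, j], [k, l])] = 1 - sub       # flip the checkerboard
    return m

start = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 0]])
sample = margin_preserving_chain(start)
assert (sample.sum(0) == start.sum(0)).all() and (sample.sum(1) == start.sum(1)).all()
```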
Eriksson, Andreas; Giske, Christian G; Ternhag, Anders
2013-01-01
To determine the distribution of urinary tract pathogens with focus on Staphylococcus saprophyticus and analyse the seasonality, antibiotic susceptibility, and gender and age distributions in a large Swedish cohort. S. saprophyticus is considered an important causative agent of urinary tract infection (UTI) in young women, and some earlier studies have reported up to approximately 40% of UTIs in this patient group being caused by S. saprophyticus. We hypothesized that this may be true only in very specific outpatient settings. During the year 2010, 113,720 urine samples were sent for culture to the Karolinska University Hospital, from both clinics in the hospital and from primary care units. Patient age, gender and month of sampling were analysed for S. saprophyticus, Escherichia coli, Klebsiella pneumoniae and Proteus mirabilis. Species data were obtained for 42,633 (37%) of the urine samples. The most common pathogens were E. coli (57.0%), Enterococcus faecalis (6.5%), K. pneumoniae (5.9%), group B streptococci (5.7%), P. mirabilis (3.0%) and S. saprophyticus (1.8%). The majority of subjects with S. saprophyticus were women 15-29 years of age (63.8%). In this age group, S. saprophyticus constituted 12.5% of all urinary tract pathogens. S. saprophyticus is a common urinary tract pathogen in young women, but its relative importance is low compared with E. coli even in this patient group. For women in other ages and for men, growth of S. saprophyticus is a quite uncommon finding.
Avendaño, Jorge Enrique; Arbeláez-Cortés, Enrique; Cadena, Carlos Daniel
2017-03-24
Phylogeographic studies seeking to describe biogeographic patterns, infer evolutionary processes, and revise species-level classification should properly characterize the distribution ranges of study species, and thoroughly sample genetic variation across taxa and geography. This is particularly necessary for widely distributed organisms occurring in complex landscapes, such as the Neotropical region. Here, we clarify the geographic range and revisit the phylogeography of the Black-billed Thrush (Turdus ignobilis), a common passerine bird from lowland tropical South America, whose evolutionary relationships and species limits were recently evaluated employing phylogeographic analyses based on partial knowledge of its distribution and incomplete sampling of populations. Our work employing mitochondrial and nuclear DNA sequences sampled all named subspecies and multiple populations across northern South America, and uncovered patterns not apparent in earlier work, including a biogeographic interplay between the Amazon and Orinoco basins and the occurrence of distinct lineages with seemingly different habitat affinities in regional sympatry in the Colombian Amazon. In addition, we found that previous inferences about the affinities and taxonomic status of Andean populations assumed to be allied to populations from the Pantepui region were incorrect, implying that inferred biogeographic and taxonomic scenarios need re-evaluation. We propose a new taxonomic treatment, which recognizes two distinct biological species in the group. Our findings illustrate the importance of sufficient taxon and geographic sampling to reconstruct evolutionary history and to evaluate species limits among Neotropical organisms. Considering the scope of the questions asked, advances in Neotropical phylogeography will often require substantial cross-country scientific collaboration.
Dåderman, Anna Maria; Strindlund, Hans; Wiklund, Nils; Fredriksen, Svend-Otto; Lidberg, Lars
2003-10-14
The sedative-hypnotic benzodiazepine flunitrazepam (FZ) is abused worldwide. The purpose of our study was to investigate violence and anterograde amnesia following intoxication with FZ, and how this was legally evaluated in forensic psychiatric investigations, with the objective of drawing some conclusions about the importance of a urine sample in cases of suspected FZ intoxication. The case was a 23-year-old male university student who, intoxicated with FZ (and possibly with other substances such as diazepam, amphetamines or cannabis), first stabbed an acquaintance and, 2 years later, two friends to death. The police investigation files, including videotaped interviews, the forensic psychiatric files, and also results from the forensic autopsy of the victims, were compared with the information obtained from the case. Only partial recovery from anterograde amnesia was shown during a period of several months. Some important new information is contained in this case report: a forensic analysis of a blood sample instead of a urine sample might lead to confusion during the police investigation and forensic psychiatric assessment (FPA) of an FZ abuser, and consequently to wrong legal decisions. FZ, alone or combined with other substances, induces severe violence and is followed by anterograde amnesia. All cases of bizarre, unexpected aggression followed by anterograde amnesia should be assessed for abuse of FZ. A urine sample is needed in case of suspected FZ intoxication. The police need to be more aware of these issues, and they must recognise that they play a crucial role in an assessment procedure. Declaring FZ an illegal drug is strongly recommended.
Monte Carlo studies of uranium calorimetry
Brau, J.; Hargis, H.J.; Gabriel, T.A.; Bishop, B.L.
1985-01-01
Detailed Monte Carlo calculations of uranium calorimetry are presented which reveal a significant difference in the responses of liquid argon and plastic scintillator in uranium calorimeters. Due to saturation effects, neutrons from the uranium are found to contribute only weakly to the liquid argon signal. Electromagnetic sampling inefficiencies are significant and contribute substantially to compensation in both systems. 17 references.
Quantum Monte Carlo Endstation for Petascale Computing
Lubos Mitas
2011-01-26
published papers, 15 invited talks and lectures nationally and internationally. My former graduate student and postdoc Dr. Michal Bajdich, who was supported by this grant, is currently a postdoc with ORNL in the group of Dr. F. Reboredo and Dr. P. Kent and is using the developed tools in a number of DOE projects. The QWalk package has become a truly important research tool used by the electronic structure community and has attracted several new developers in other research groups. Our tools use several types of correlated wavefunction approaches (variational, diffusion and reptation methods) and large-scale optimization methods for wavefunctions, and enable the calculation of energy differences such as cohesion and electronic gaps, but also densities and other properties; using multiple runs, one can obtain equations of state for given structures and beyond. Our codes use efficient numerical and Monte Carlo strategies (high accuracy numerical orbitals, multi-reference wave functions, highly accurate correlation factors, pairing orbitals, force biased and correlated sampling Monte Carlo), are robustly parallelized and enable runs on tens of thousands of cores very efficiently. Our demonstration applications were focused on challenging research problems in several fields of materials science such as transition metal solids. We note that our study of FeO solid was the first QMC calculation of transition metal oxides at high pressures.
Study on phase function in Monte Carlo transmission characteristics of poly-disperse aerosol
NASA Astrophysics Data System (ADS)
Bai, Lu; Wu, Zhen-Sen; Tang, Shuang-Qing; Li, Ming; Xie, Pin-Hua; Wang, Shi-Mei
2011-01-01
The Henyey-Greenstein (H-G) phase function is typically used as an approximation to the Mie phase function, and its shortcomings have been discussed in numerous papers. However, a clear criterion for when the H-G phase function is valid remains ambiguous. In this paper, we use the direct-sample phase function method in transmittance calculations. A comparison of the direct-sample phase function method and the H-G phase function is presented, and the percentage of multiple scattering in Monte Carlo transfer computations is discussed. Numerical results showed that using the H-G phase function led to underestimating the transmittance. The deflection of the root mean square error can be used as a criterion. Although the exact calculation of the sample phase function requires slightly more computation time, the rigorous phase function simulation method has an important role in Monte Carlo radiative transfer computation problems.
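For reference, the H-G approximation that the paper compares against can be sampled in closed form; the sketch below uses the standard inverse-CDF formula widely used in tissue-optics Monte Carlo codes, with an illustrative anisotropy factor.

```python
import numpy as np

def sample_hg_costheta(g, n, rng=None):
    """Draw scattering-angle cosines from the Henyey-Greenstein phase function
    by inverting its CDF; g is the anisotropy factor (mean of cos(theta))."""
    rng = rng or np.random.default_rng()
    xi = rng.random(n)
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0                       # isotropic limit
    t = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - t * t) / (2.0 * g)

cos_theta = sample_hg_costheta(0.9, 100_000)
print("mean cos(theta):", cos_theta.mean())         # should be close to g = 0.9
```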
Monte Carlo inversion of seismic data
NASA Technical Reports Server (NTRS)
Wiggins, R. A.
1972-01-01
The analytic solution to the linear inverse problem provides estimates of the uncertainty of the solution in terms of standard deviations of corrections to a particular solution, resolution of parameter adjustments, and information distribution among the observations. It is shown that Monte Carlo inversion, when properly executed, can provide all the same kinds of information for nonlinear problems. Proper execution requires a relatively uniform sampling of all possible models. The expense of performing Monte Carlo inversion generally requires strategies to improve the probability of finding passing models. Such strategies can lead to a very strong bias in the distribution of models examined unless great care is taken in their application.
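A minimal sketch of the accept/reject search over model space described here, assuming a generic forward model and a root-mean-square misfit threshold (both are illustrative placeholders, not the paper's actual seismic problem):

```python
import numpy as np

def monte_carlo_inversion(forward, data, bounds, misfit_tol, n_trials=20_000, seed=0):
    """Uniformly sample candidate models within bounds and keep those whose
    predicted data fit the observations to within misfit_tol.

    bounds : (d, 2) array-like of [low, high] per model parameter.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    accepted = []
    for _ in range(n_trials):
        model = lo + (hi - lo) * rng.random(lo.size)   # uniform over the model space
        misfit = np.sqrt(np.mean((forward(model) - data) ** 2))
        if misfit < misfit_tol:
            accepted.append(model)
    return np.array(accepted)

# toy usage: recover slope/intercept of a line from noisy observations
xs = np.linspace(0, 1, 20)
obs = 2.0 * xs + 1.0 + 0.05 * np.random.default_rng(1).standard_normal(xs.size)
models = monte_carlo_inversion(lambda m: m[0] * xs + m[1], obs,
                               bounds=[[0, 4], [0, 2]], misfit_tol=0.1)
print(len(models), "passing models; parameter spread:",
      models.std(axis=0) if len(models) else None)
```

The spread of the passing models plays the role of the uncertainty estimates discussed in the abstract; biased search strategies would distort that spread, which is the caution the author raises.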
Monte Carlo Reliability Analysis.
1987-10-01
MORSE Monte Carlo shielding calculations for the zirconium hydride reference reactor
NASA Technical Reports Server (NTRS)
Burgart, C. E.
1972-01-01
Verification of DOT-SPACETRAN transport calculations of a lithium hydride and tungsten shield for a SNAP reactor was performed using the MORSE (Monte Carlo) code. Transport of both neutrons and gamma rays was considered. Importance sampling was utilized in the MORSE calculations. Several quantities internal to the shield, as well as dose at several points outside of the configuration, were in satisfactory agreement with the DOT calculations of the same.
Hermida, Ramón C; Ayala, Diana E; Fontao, María J; Mojón, Artemio; Fernández, José R
2013-03-01
estimated asleep SBP mean, the most significant prognostic marker of CVD events, in the range of -21.4 to +23.9 mm Hg. Cox proportional-hazard analyses adjusted for sex, age, diabetes, anemia, and chronic kidney disease revealed comparable hazard ratios (HRs) for mean BP values and sleep-time relative BP decline derived from the original complete 48-h ABPM profiles and those modified to simulate a sampling rate of one BP measurement every 1 or 2 h. The HRs, however, were markedly overestimated for SBP and underestimated for DBP when the duration of ABPM was reduced from 48 to only 24 h. This study on subjects evaluated prospectively by 48-h ABPM documents that reproducibility in the estimates of prognostic ABPM-derived parameters depends markedly on duration of monitoring, and only to a lesser extent on sampling rate. The HR of CVD events associated with increased ambulatory BP is poorly estimated by relying on 24-h ABPM, indicating ABPM for only 24 h may be insufficient for proper diagnosis of hypertension, identification of dipping status, evaluation of treatment efficacy, and, most important, CVD risk stratification.
NASA Astrophysics Data System (ADS)
Hu, Xingzhi; Chen, Xiaoqian; Parks, Geoffrey T.; Yao, Wen
2016-10-01
Ever-increasing demands of uncertainty-based design, analysis, and optimization in aerospace vehicles motivate the development of Monte Carlo methods with wide adaptability and high accuracy. This paper presents a comprehensive review of typical improved Monte Carlo methods and summarizes their characteristics to aid the uncertainty-based multidisciplinary design optimization (UMDO). Among them, Bayesian inference aims to tackle the problems with the availability of prior information like measurement data. Importance sampling (IS) settles the inconvenient sampling and difficult propagation through the incorporation of an intermediate importance distribution or sequential distributions. Optimized Latin hypercube sampling (OLHS) is a stratified sampling approach to achieving better space-filling and non-collapsing characteristics. Meta-modeling approximation based on Monte Carlo saves the computational cost by using cheap meta-models for the output response. All the reviewed methods are illustrated by corresponding aerospace applications, which are compared to show their techniques and usefulness in UMDO, thus providing a beneficial reference for future theoretical and applied research.
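As a concrete illustration of the stratification idea behind OLHS, the sketch below generates a basic (non-optimized) Latin hypercube sample; true OLHS would additionally optimize the pairing of strata for space-filling and non-collapsing behavior, which is not attempted here.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Basic Latin hypercube sample on the unit hypercube: each dimension is
    split into n_samples strata and each stratum is hit exactly once."""
    rng = rng or np.random.default_rng()
    u = rng.random((n_samples, n_dims))                 # jitter within strata
    strata = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    return (strata + u) / n_samples

pts = latin_hypercube(10, 3)
# every column has exactly one point in each interval [k/10, (k+1)/10)
assert all(len(set(np.floor(pts[:, d] * 10).astype(int))) == 10 for d in range(3))
```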
Single scatter electron Monte Carlo
Svatos, M.M.
1997-03-01
A single scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly-used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL) which has data for all elements with an atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.
NASA Astrophysics Data System (ADS)
Glavin, D. P.; Brinckerhoff, W. B.; Conrad, P. G.; Dworkin, J. P.; Eigenbrode, J. L.; Getty, S.; Mahaffy, P. R.
2013-12-01
The search for evidence of life on Mars and elsewhere will continue to be one of the primary goals of NASA's robotic exploration program for decades to come. NASA and ESA are currently planning a series of robotic missions to Mars with the goal of understanding its climate, resources, and potential for harboring past or present life. One key goal will be the search for chemical biomarkers including organic compounds important in life on Earth and their geological forms. These compounds include amino acids, the monomer building blocks of proteins and enzymes, nucleobases and sugars which form the backbone of DNA and RNA, and lipids, the structural components of cell membranes. Many of these organic compounds can also be formed abiotically as demonstrated by their prevalence in carbonaceous meteorites [1], though, their molecular characteristics may distinguish a biological source [2]. It is possible that in situ instruments may reveal such characteristics, however, return of the right samples to Earth (i.e. samples containing chemical biosignatures or having a high probability of biosignature preservation) would enable more intensive laboratory studies using a broad array of powerful instrumentation for bulk characterization, molecular detection, isotopic and enantiomeric compositions, and spatially resolved chemistry that may be required for confirmation of extant or extinct life on Mars or elsewhere. In this presentation we will review the current in situ analytical capabilities and strategies for the detection of organics on the Mars Science Laboratory (MSL) rover using the Sample Analysis at Mars (SAM) instrument suite [3] and discuss how both future advanced in situ instrumentation [4] and laboratory measurements of samples returned from Mars and other targets of astrobiological interest including the icy moons of Jupiter and Saturn will help advance our understanding of chemical biosignatures in the Solar System. References: [1] Cronin, J. R and Chang S. (1993
Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis
NASA Technical Reports Server (NTRS)
Hanson, J. M.; Beard, B. B.
2010-01-01
This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
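One of the questions raised here, how many runs are needed to verify a requirement, has a simple illustrative answer under a zero-failure binomial argument; the sketch below is a generic rule of thumb, not the TP's own derivation, which also accounts for consumer risk and related refinements.

```python
import math

def runs_for_zero_failure_demo(reliability, confidence):
    """Smallest n such that observing zero failures in n independent Monte
    Carlo runs demonstrates the stated reliability at the stated confidence
    (standard binomial zero-failure argument)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

print(runs_for_zero_failure_demo(0.997, 0.90))   # 767 runs
print(runs_for_zero_failure_demo(0.9973, 0.90))  # 852 runs
```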
Monte Carlo eikonal scattering
NASA Astrophysics Data System (ADS)
Gibbs, W. R.; Dedonder, J. P.
2012-08-01
Background: The eikonal approximation is commonly used to calculate heavy-ion elastic scattering. However, the full evaluation has only been done (without the use of Monte Carlo techniques or additional approximations) for α-α scattering.Purpose: Develop, improve, and test the Monte Carlo eikonal method for elastic scattering over a wide range of nuclei, energies, and angles.Method: Monte Carlo evaluation is used to calculate heavy-ion elastic scattering for heavy nuclei including the center-of-mass correction introduced in this paper and the Coulomb interaction in terms of a partial-wave expansion. A technique for the efficient expansion of the Glauber amplitude in partial waves is developed.Results: Angular distributions are presented for a number of nuclear pairs over a wide energy range using nucleon-nucleon scattering parameters taken from phase-shift analyses and densities from independent sources. We present the first calculations of the Glauber amplitude, without further approximation, and with realistic densities for nuclei heavier than helium. These densities respect the center-of-mass constraints. The Coulomb interaction is included in these calculations.Conclusion: The center-of-mass and Coulomb corrections are essential. Angular distributions can be predicted only up to certain critical angles which vary with the nuclear pairs and the energy, but we point out that all critical angles correspond to a momentum transfer near 1 fm-1.
Helton, J C; Shiver, A W
1996-02-01
A Monte Carlo procedure for the construction of complementary cumulative distribution functions (CCDFs) for comparison with the U.S. Environmental Protection Agency (EPA) release limits for radioactive waste disposal (40 CFR 191, Subpart B) is described and illustrated with results from a recent performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP). The Monte Carlo procedure produces CCDF estimates similar to those obtained with importance sampling in several recent PAs for the WIPP. The advantages of the Monte Carlo procedure over importance sampling include increased resolution in the calculation of probabilities for complex scenarios involving drilling intrusions and better use of the necessarily limited number of mechanistic calculations that underlie CCDF construction.
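A bare-bones version of the CCDF construction step, with uniformly weighted futures and illustrative release values (the WIPP PA itself assigns scenario-dependent probabilities), might look like this:

```python
import numpy as np

def empirical_ccdf(releases, probabilities):
    """Build a complementary CDF, P(Release > R), from sampled futures.

    releases      : (n,) normalized release for each sampled future
    probabilities : (n,) probability weight of each future (uniform weights
                    reproduce plain Monte Carlo; both inputs are illustrative)
    Returns sorted release values and the exceedance probability at each.
    """
    order = np.argsort(releases)
    r = np.asarray(releases, dtype=float)[order]
    p = np.asarray(probabilities, dtype=float)[order]
    exceed = np.cumsum(p[::-1])[::-1] - p          # strict exceedance P(Release > r)
    return r, exceed

r, ccdf = empirical_ccdf(np.random.default_rng(0).lognormal(size=1000),
                         np.full(1000, 1e-3))
# the resulting curve is then compared against the 40 CFR 191 limit pairs,
# e.g. P(R > 1) < 0.1 and P(R > 10) < 0.001
```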
A palaeomagnetic study of Apennine thrusts, Italy: Monte Maiella and Monte Raparo
NASA Astrophysics Data System (ADS)
Jackson, K. C.
1990-06-01
Three separate structural blocks within the southern Apennines have been sampled for palaeomagnetic investigation to constrain their original separation and movement during mid-Tertiary deformation. The Mesozoic limestones are weakly magnetic, and the NRM intensity of all samples from Upper and Lower Cretaceous limestone from the Alburni platform and from Upper Cretaceous limestone at Monte Maiella were too low to yield results. Lower Cretaceous limestone at Monte Maiella contained a mean magnetisation (after structural correction) of D = 326°, I = +42°, k = 44, N = 9 (57°N, 263°E); and Cretaceous (?) limestone at Monte Raparo a mean of D = 132°, I = -61°, k = 50, N = 19 (54° N, 306°E). The Monte Maiella results, near the central part of the Apennine thrust-front, are compatible with a local, clockwise block-rotation during deformation, while Monte Raparo results may bear evidence of the major east-west thrust-motion during shortening in addition to anticlockwise block-rotations already reported from the southernmost Apennines.
NASA Astrophysics Data System (ADS)
Velazquez, L.; Castro-Palacio, J. C.
2013-07-01
Recently, Velazquez and Curilef proposed a methodology to extend Monte Carlo algorithms based on a canonical ensemble which aims to overcome slow sampling problems associated with temperature-driven discontinuous phase transitions. We show in this work that Monte Carlo algorithms extended with this methodology also exhibit a remarkable efficiency near a critical point. Our study is performed for the particular case of a two-dimensional four-state Potts model on a square lattice with periodic boundary conditions. This analysis reveals that the extended version of Metropolis importance sampling is more efficient than the usual Swendsen-Wang and Wolff cluster algorithms. These results demonstrate the effectiveness of this methodology to improve the efficiency of MC simulations of systems that undergo any type of temperature-driven phase transition.
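For context, the conventional Metropolis importance-sampling baseline against which the extended algorithm is compared can be sketched for the 2D four-state Potts model as follows; the lattice size, temperature and sweep counts are illustrative, and the extended (canonical-ensemble-modified) version itself is not reproduced here.

```python
import numpy as np

def potts_metropolis(L=16, q=4, T=0.91, n_sweeps=500, seed=0):
    """Single-site Metropolis importance sampling for the 2D q-state Potts
    model with E = -sum_<ij> delta(s_i, s_j) and periodic boundaries.
    T = 0.91 is near the q = 4 critical temperature 1/ln(1 + sqrt(q)) ~ 0.910.
    Returns the mean energy per site after a crude burn-in."""
    rng = np.random.default_rng(seed)
    s = rng.integers(q, size=(L, L))
    energies = []
    for sweep in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            new = rng.integers(q)
            nb = [s[(i + 1) % L, j], s[(i - 1) % L, j], s[i, (j + 1) % L], s[i, (j - 1) % L]]
            dE = sum(n == s[i, j] for n in nb) - sum(n == new for n in nb)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] = new
        if sweep >= n_sweeps // 5:       # discard the first 20% as burn-in
            bonds = sum(np.sum(s == np.roll(s, 1, axis=a)) for a in (0, 1))
            energies.append(-bonds / (L * L))
    return np.mean(energies)

print("energy per site:", potts_metropolis())
```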
Overy, Catherine; Blunt, N. S.; Shepherd, James J.; Booth, George H.; Cleland, Deidre; Alavi, Ali
2014-12-28
Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems.
Folk, R.L.; Lynch, F.L.
1997-05-01
Bacterial textures are present on clay minerals in Oligocene Frio Formation sandstones from the subsurface of the Corpus Christi area, Texas. In shallower samples, beads 0.05–0.1 µm in diameter rim the clay flakes; at greater depth these beads become more abundant and eventually are perched on the ends of clay filaments of the same diameter. The authors believe that the beads are nannobacteria (dwarf forms) that have precipitated or transformed the clay minerals during burial of the sediments. Rosettes of chlorite also contain, after HCl etching, rows of 0.1 µm bodies. In contrast, kaolinite shows no evidence of bacterial precipitation. The authors review other examples of bacterially precipitated clay minerals. A danger present in interpretation of earlier work (and much work of others) is the development of nannobacteria-looking artifacts caused by gold coating times in excess of one minute; the authors strongly recommend a 30-second coating time. Bacterial growth of clay minerals may be a very important process both in the surface and subsurface.
An enhanced Monte Carlo outlier detection method.
Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi
2015-09-30
Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the root mean square error of prediction in the validation against Kovats retention indices decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers.
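A minimal sketch of the general idea, Monte Carlo cross-validation with per-sample prediction-error distributions, follows. The linear least-squares model, split fraction, and iteration count are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def mc_outlier_scores(X, y, n_iter=500, train_frac=0.7, seed=0):
    """Monte Carlo cross-validation: collect each sample's prediction errors
    over many random train/test splits of a simple linear model."""
    rng = np.random.default_rng(seed)
    n = len(y)
    errors = [[] for _ in range(n)]
    for _ in range(n_iter):
        idx = rng.permutation(n)
        ntr = int(train_frac * n)
        tr, te = idx[:ntr], idx[ntr:]
        # ordinary least squares fit on the training split
        A = np.c_[X[tr], np.ones(ntr)]
        coef, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
        pred = np.c_[X[te], np.ones(len(te))] @ coef
        for i, e in zip(te, y[te] - pred):
            errors[i].append(e)
    # mean and spread of each sample's prediction-error distribution;
    # samples with large |mean| and/or spread are flagged as dubious
    mean = np.array([np.mean(e) for e in errors])
    std = np.array([np.std(e) for e in errors])
    return mean, std
```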
2015-01-01
criteria for paraphilia are too inclusive. Suggestions are given to improve the definition of pathological sexual interests, and the crucial difference between SF and sexual interest is underlined. Joyal CC. Defining “normophilic” and “paraphilic” sexual fantasies in a population‐based sample: On the importance of considering subgroups. Sex Med 2015;3:321–330. PMID:26797067
Monte Carlo methods in genetic analysis
Lin, Shili
1996-12-31
Many genetic analyses require computation of probabilities and likelihoods of pedigree data. With more and more genetic marker data deriving from new DNA technologies becoming available to researchers, exact computations are often formidable with standard statistical methods and computational algorithms. The desire to utilize as much available data as possible, coupled with complexities of realistic genetic models, push traditional approaches to their limits. These methods encounter severe methodological and computational challenges, even with the aid of advanced computing technology. Monte Carlo methods are therefore increasingly being explored as practical techniques for estimating these probabilities and likelihoods. This paper reviews the basic elements of the Markov chain Monte Carlo method and the method of sequential imputation, with an emphasis upon their applicability to genetic analysis. Three areas of applications are presented to demonstrate the versatility of Markov chain Monte Carlo for different types of genetic problems. A multilocus linkage analysis example is also presented to illustrate the sequential imputation method. Finally, important statistical issues of Markov chain Monte Carlo and sequential imputation, some of which are unique to genetic data, are discussed, and current solutions are outlined. 72 refs.
Multicanonical Monte Carlo for Simulation of Optical Links
NASA Astrophysics Data System (ADS)
Bononi, Alberto; Rusch, Leslie A.
Multicanonical Monte Carlo (MMC) is a simulation-acceleration technique for the estimation of the statistical distribution of a desired system output variable, given the known distribution of the system input variables. MMC, similarly to the powerful and well-studied method of importance sampling (IS) [1], is a useful method to efficiently simulate events occurring with probabilities smaller than ~10⁻⁶, such as bit error rate (BER) and system outage probability. Modern telecommunications systems often employ forward error correcting (FEC) codes that allow pre-decoded channel error rates higher than 10⁻³; these systems are well served by traditional Monte Carlo error counting. MMC and IS are, nonetheless, fundamental tools both to understand the statistics of the decision variable (as well as of any physical parameter of interest) and to validate any analytical or semianalytical BER calculation model. Several examples of such use will be provided in this chapter. As a case in point, outage probabilities are routinely below 10⁻⁶, a sweet spot where MMC and IS provide the most efficient (sometimes the only) solution to estimate outages.
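To make the importance sampling idea concrete, here is a minimal sketch that estimates a tail probability of order 10⁻⁷ by exponential tilting (mean shifting) of a Gaussian decision variable. The Gaussian model is an assumption for illustration only, not the chapter's system.

```python
import numpy as np

def tail_prob_is(a, n=100000, seed=1):
    """Estimate P(X > a) for X ~ N(0,1) by sampling from the shifted
    proposal N(a,1) and reweighting with the likelihood ratio."""
    rng = np.random.default_rng(seed)
    y = rng.normal(loc=a, scale=1.0, size=n)   # shifted (tilted) proposal
    w = np.exp(-a * y + 0.5 * a**2)            # phi(y) / phi(y - a)
    vals = (y > a) * w
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n)

# P(X > 5) ~ 2.9e-7: far beyond the reach of plain error counting at this n
print(tail_prob_is(5.0))
```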
Monte Carlo simulations and dosimetric studies of an irradiation facility
NASA Astrophysics Data System (ADS)
Belchior, A.; Botelho, M. L.; Vaz, P.
2007-09-01
There is an increasing utilization of ionizing radiation for industrial applications. Additionally, radiation technology offers a variety of advantages in areas such as sterilization and food preservation. For these applications, dosimetric tests are of crucial importance in order to assess the dose distribution throughout the sample being irradiated. The use of Monte Carlo methods and computational tools in support of the assessment of the dose distributions in irradiation facilities can prove to be economically effective, representing savings in the utilization of dosemeters, among other benefits. One of the purposes of this study is the development of a Monte Carlo simulation, using a state-of-the-art computational tool (MCNPX), in order to determine the dose distribution inside a cobalt-60 irradiation facility. This irradiation facility is currently in operation at the ITN campus and will feature an automation and robotics component, which will allow its remote utilization by an external user under the REEQ/996/BIO/2005 project. The detailed geometrical description of the irradiation facility has been implemented in MCNPX, which features an accurate and full simulation of the electron-photon processes involved. The validation of the simulation results obtained was performed by chemical dosimetry methods, namely a Fricke solution. The Fricke dosimeter is a standard dosimeter and is widely used in radiation processing for calibration purposes.
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy
NASA Astrophysics Data System (ADS)
Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James
2012-03-01
Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray, and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). Skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis, and dermis) as well as laterally asymmetric features (e.g., melanocytic invasion) were modeled in an inhomogeneous Monte Carlo model.
NASA Technical Reports Server (NTRS)
2003-01-01
MGS MOC Release No. MOC2-387, 10 June 2003
This is a Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle view of the Charitum Montes, south of Argyre Planitia, in early June 2003. The seasonal south polar frost cap, composed of carbon dioxide, has been retreating southward through this area since spring began a month ago. The bright features toward the bottom of this picture are surfaces covered by frost. The picture is located near 57°S, 43°W. North is at the top, south is at the bottom. Sunlight illuminates the scene from the upper left. The area shown is about 217 km (135 miles) wide.
Path Integral Monte Carlo Methods for Fermions
NASA Astrophysics Data System (ADS)
Ethan, Ethan; Dubois, Jonathan; Ceperley, David
2014-03-01
In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with discussion concerning extension to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10-ERD-058, and the Lawrence Scholar program.
Monte Carlo algorithm for free energy calculation.
Bi, Sheng; Tong, Ning-Hua
2015-07-01
We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows an excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential applications of the neutron monitor hardware as a random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one-minute-resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into a faster algorithmic random number generator or create a buffer.
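A minimal sketch of the final test described here: variational Monte Carlo for the 1-D quantum harmonic oscillator, fed with standard normal variates. The `rng.standard_normal` draws below are a stand-in for the detrended, standardized neutron-monitor counts; the Gaussian trial function and the units (ħ = m = ω = 1) are conventional assumptions.

```python
import numpy as np

def vmc_energy(z, alpha):
    """Variational MC energy of the 1-D harmonic oscillator for the trial
    function psi(x) = exp(-alpha x^2), given standard normal draws z.
    |psi|^2 is Gaussian, so x is sampled directly: x = z / (2 sqrt(alpha))."""
    x = z / (2.0 * np.sqrt(alpha))
    e_loc = alpha + x**2 * (0.5 - 2.0 * alpha**2)  # local energy
    return e_loc.mean(), e_loc.std(ddof=1) / np.sqrt(len(z))

# Stand-in for the detrended, standardized neutron-monitor counts:
z = np.random.default_rng(2).standard_normal(200000)
print(vmc_energy(z, alpha=0.45))   # near 0.5; exactly 0.5 at alpha = 0.5
```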
NASA Astrophysics Data System (ADS)
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2016-03-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.
Auxiliary-Field Quantum Monte Carlo Simulations of Strongly-Correlated Molecules and Solids
Chang, C.; Morales, M. A.
2016-11-10
We propose a method of implementing projected wave functions for second-quantized auxiliary-field quantum Monte Carlo (AFQMC) techniques. The method is based on expressing the two-body projector as one-body terms coupled to binary Ising fields. To benchmark the method, we choose to study the two-dimensional (2D) one-band Hubbard model with repulsive interactions using the constrained-path MC (CPMC). The CPMC uses a trial wave function to guide the random walks so that the so-called fermion sign problem can be eliminated. The trial wave function also serves as the importance function in Monte Carlo sampling. As such, the quality of the trial wave function has a direct impact on the efficiency and accuracy of the simulations.
Huang, Xiao-Lan; Zhang, Jia-Zhong
2008-10-19
Acidic persulfate oxidation is one of the most common procedures used to digest dissolved organic phosphorus compounds in water samples for total dissolved phosphorus determination. It has been reported that the rates of phosphoantimonylmolybdenum blue complex formation were significantly reduced in the digested sample matrix. This study revealed that the intermediate products of persulfate oxidation, not the slight change in pH, cause the slowdown of color formation. This effect can be remedied by adjusting the pH of digested samples to near neutral to decompose the intermediate products. No disturbing effects of chlorine on the phosphoantimonylmolybdenum blue formation in seawater were observed. It is noted that the modification of the mixed reagent recipe cannot provide a near-neutral pH for the decomposition of the intermediate products of persulfate oxidation. This study provides experimental evidence not only to support the recommendation made in the APHA standard methods that the pH of the digested sample must be adjusted to within a narrow range, but also to improve the understanding of the role of residues from persulfate decomposition in the subsequent phosphoantimonylmolybdenum blue formation.
Complete Monte Carlo Simulation of Neutron Scattering Experiments
NASA Astrophysics Data System (ADS)
Drosg, M.
2011-12-01
In the far past, it was not possible to accurately correct for the finite geometry and the finite sample size of a neutron scattering set-up. The limited calculation power of the ancient computers, as well as the lack of powerful Monte Carlo codes and the limitations of the database available then, prevented a complete simulation of the actual experiment. Using e.g. the Monte Carlo neutron transport code MCNPX [1], neutron scattering experiments can be simulated almost completely with a high degree of precision using a modern PC, which has a computing power that is ten thousand times that of a supercomputer of the early 1970s. Thus, (better) corrections can also be obtained easily for previously published data, provided that these experiments are sufficiently well documented. Better knowledge of reference data (e.g. atomic mass, relativistic correction, and monitor cross sections) further contributes to data improvement. Elastic neutron scattering experiments from liquid samples of the helium isotopes performed around 1970 at LANL happen to be very well documented. Considering that the cryogenic targets are expensive and complicated, it is certainly worthwhile to improve these data by correcting them using this comparatively straightforward method. As two thirds of all differential scattering cross section data of 3He(n,n)3He are connected to the LANL data, it became necessary to correct the dependent data measured in Karlsruhe, Germany, as well. A thorough simulation of both the LANL experiments and the Karlsruhe experiment is presented, starting from the neutron production, followed by the interaction in the air, the interaction with the cryostat structure, and finally the scattering medium itself. In addition, scattering from the hydrogen reference sample was simulated. For the LANL data, the multiple scattering corrections are smaller by a factor of five at least, making this work relevant. Even more important are the corrections to the Karlsruhe data due to the
Chorin, Alexandre J.
2007-12-12
A sampling method for spin systems is presented. The spin lattice is written as the union of a nested sequence of sublattices, all but the last with conditionally independent spins, which are sampled in succession using their marginals. The marginals are computed concurrently by a fast algorithm; errors in the evaluation of the marginals are offset by weights. There are no Markov chains and each sample is independent of the previous ones; the cost of a sample is proportional to the number of spins (but the number of samples needed for good statistics may grow with array size). The examples include the Edwards-Anderson spin glass in three dimensions.
Quantum Monte Carlo applied to solids
Shulenburger, Luke; Mattsson, Thomas R.
2013-12-01
We apply diffusion quantum Monte Carlo to a broad set of solids, benchmarking the method by comparing bulk structural properties (equilibrium volume and bulk modulus) to experiment and density functional theory (DFT) based theories. The test set includes materials with many different types of binding including ionic, metallic, covalent, and van der Waals. We show that, on average, the accuracy is comparable to or better than that of DFT when using the new generation of functionals, including one hybrid functional and two dispersion corrected functionals. The excellent performance of quantum Monte Carlo on solids is promising for its application to heterogeneous systems and high-pressure/high-density conditions. Important to the results here is the application of a consistent procedure with regards to the several approximations that are made, such as finite-size corrections and pseudopotential approximations. This test set allows for any improvements in these methods to be judged in a systematic way.
Quantum Gibbs ensemble Monte Carlo
Fantoni, Riccardo; Moroni, Saverio
2014-09-21
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.
Isotropic Monte Carlo Grain Growth
Mason, J.
2013-04-25
IMCGG performs Monte Carlo simulations of normal grain growth in metals on a hexagonal grid in two dimensions with periodic boundary conditions. This may be performed with either an isotropic or a misorientation- and inclination-dependent grain boundary energy.
Monte Carlo variance reduction approaches for non-Boltzmann tallies
Booth, T.E.
1992-12-01
Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.
Barlow, Daniel E; Biffinger, Justin C; Cockrell-Zugell, Allison L; Lo, Michael; Kjoller, Kevin; Cook, Debra; Lee, Woo Kyung; Pehrsson, Pehr E; Crookes-Goodson, Wendy J; Hung, Chia-Suei; Nadeau, Lloyd J; Russell, John N
2016-08-02
AFM-IR is a combined atomic force microscopy-infrared spectroscopy method that shows promise for nanoscale chemical characterization of biological-materials interactions. In an effort to apply this method to quantitatively probe mechanisms of microbiologically induced polyurethane degradation, we have investigated monolayer clusters of ∼200 nm thick Pseudomonas protegens Pf-5 bacteria (Pf) on a 300 nm thick polyether-polyurethane (PU) film. Here, the impact of the different biological and polymer mechanical properties on the thermomechanical AFM-IR detection mechanism was first assessed without the additional complication of polymer degradation. AFM-IR spectra of Pf and PU were compared with FTIR and showed good agreement. Local AFM-IR spectra of Pf on PU (Pf-PU) exhibited bands from both constituents, showing that AFM-IR is sensitive to chemical composition both at and below the surface. One distinct difference in local AFM-IR spectra on Pf-PU was an anomalous ∼4× increase in IR peak intensities for the probe in contact with Pf versus PU. This was attributed to differences in probe-sample interactions. In particular, significantly higher cantilever damping was observed for probe contact with PU, with a ∼10× smaller Q factor. AFM-IR chemical mapping at single wavelengths was also affected. We demonstrate ratioing of mapping data for chemical analysis as a simple method to cancel the extreme effects of the variable probe-sample interactions.
Köber, Christin; Habermas, Tilmann
2017-03-23
Considering life stories as the most individual layer of personality (McAdams, 2013) implies that life stories, similar to personality traits, exhibit some stability throughout life. Although stability of personality traits has been extensively investigated, only little is known about the stability of life stories. We therefore tested the influence of age, of the proportion of normative age-graded life events, and of global text coherence on the stability of the most important memories and of brief entire life narratives as 2 representations of the life story. We also explored whether normative age-graded life events form more stable parts of life narratives. In a longitudinal life span study covering up to 3 measurements across 8 years and 6 age groups (N = 164), the stability of important memories and of entire life narratives was measured as the percentage of events and narrative segments which were repeated in later tellings. Stability increased between ages 8 and 24, leveling off in middle adulthood. Beyond age, stability of life narratives was also predicted by proportion of normative age-graded life events and by causal-motivational text coherence in younger participants. Memories of normative developmental and social transitional life events were more stable than other memories. Stability of segments of life narratives exceeded the stability of single most important memories. Findings are discussed in terms of cognitive, personality, and narrative psychology and point to research questions in each of these fields.
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
Brown, Forrest B.
2016-11-29
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations
Monte Carlo simulation of energy-dispersive x-ray fluorescence and applications
NASA Astrophysics Data System (ADS)
Li, Fusheng
Four key components with regard to Monte Carlo Library Least Squares (MCLLS) have been developed by the author. These include: a comprehensive and accurate Monte Carlo simulation code, CEARXRF5, with Differential Operators (DO) and coincidence sampling; a Detector Response Function (DRF); an integrated Monte Carlo - Library Least-Squares (MCLLS) Graphical User Interface (GUI) visualization system (MCLLSPro); and a new reproducible and flexible benchmark experiment setup. All these developments and upgrades enable the MCLLS approach to be a useful and powerful tool for a tremendous variety of elemental analysis applications. CEARXRF, a comprehensive and accurate Monte Carlo code for simulating the total and individual library spectral responses of all elements, has recently been upgraded to version 5 by the author. The new version has several key improvements: an input file format fully compatible with MCNP5, a new efficient general geometry tracking code, versatile source definitions, various variance reduction techniques (e.g. weight window mesh and splitting, stratified sampling, etc.), a new cross section data storage and accessing method which improves the simulation speed by a factor of four together with new cross section data, upgraded differential operators (DO) calculation capability, and an updated coincidence sampling scheme which includes K-L and L-L coincidence X-rays, while keeping all the capabilities of the previous version. The new Differential Operators method is powerful for measurement sensitivity studies and system optimization. For our Monte Carlo EDXRF elemental analysis system, it becomes an important technique for quantifying the matrix effect in near real time when combined with the MCLLS approach. An integrated visualization GUI system has been developed by the author to perform elemental analysis using the iterated Library Least-Squares method for various samples when an initial guess is provided. This software was built on the Borland C++ Builder.
Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments
Pevey, Ronald E.
2005-09-15
Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
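The trade-off can be illustrated numerically. The sketch below assumes a hypothetical acceptance rule (k-calculated plus two calculational standard deviations must fall below the USL) and illustrative values for the USL, bias, and benchmarking spread; it is not the paper's derivation, but it reproduces the qualitative conclusion that the risk of accepting a supercritical configuration is minimized at a non-zero calculational standard deviation.

```python
import numpy as np
from scipy.stats import norm

# Risk of accepting a configuration whose true k-eff is 1.0, as a function
# of the calculational standard deviation sigma. Assumed acceptance rule:
# k_calc + 2*sigma <= USL, with k_calc ~ N(k_true - bias, sigma^2 + sigma_b^2).
usl, bias, sigma_b, k_true = 0.97, 0.0, 0.01, 1.00   # illustrative values
for sigma in [0.0005, 0.002, 0.005, 0.01, 0.02, 0.05]:
    z = (usl - 2 * sigma - (k_true - bias)) / np.hypot(sigma, sigma_b)
    print(f"sigma={sigma:.4f}  P(accept supercritical)={norm.cdf(z):.2e}")
# The risk is not monotonic in sigma: it dips at an intermediate value,
# echoing the conclusion that a non-zero sigma minimizes the risk.
```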
Intergenerational Correlation in Monte Carlo k-Eigenvalue Calculation
Ueki, Taro
2002-06-15
This paper investigates intergenerational correlation in the Monte Carlo k-eigenvalue calculation of a neutron effective multiplicative factor. To this end, the exponential transform for path stretching has been applied to large fissionable media with localized highly multiplying regions because in such media an exponentially decaying shape is a rough representation of the importance of source particles. The numerical results show that the difference between real and apparent variances virtually vanishes for an appropriate value of the exponential transform parameter. This indicates that the intergenerational correlation of k-eigenvalue samples could be eliminated by the adjoint biasing of particle transport. The relation between the biasing of particle transport and the intergenerational correlation is therefore investigated in the framework of collision estimators, and the following conclusion has been obtained: Within the leading order approximation with respect to the number of histories per generation, the intergenerational correlation vanishes when immediate importance is constant, and the immediate importance under simulation can be made constant by the biasing of particle transport with a function adjoint to the source neutron's distribution, i.e., the importance over all future generations.
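For reference, a minimal sketch of the exponential transform (path stretching) for a single free-flight distance follows: sampling from a reduced cross section lengthens paths, and a weight factor restores an unbiased estimate. The one-parameter form below is a textbook simplification, not the adjoint-based biasing analyzed in the paper.

```python
import numpy as np

def stretched_flight(sigma_t, p, rng):
    """Sample a flight distance from a stretched exponential (exponential
    transform) and return the distance plus the compensating weight factor.
    p in (0, 1) stretches paths via a smaller effective cross section."""
    sigma_star = sigma_t * (1.0 - p)           # biased total cross section
    s = rng.exponential(1.0 / sigma_star)      # stretched free flight
    # weight = true pdf / biased pdf evaluated at the sampled distance
    w = (sigma_t / sigma_star) * np.exp(-(sigma_t - sigma_star) * s)
    return s, w

rng = np.random.default_rng(7)
s, w = stretched_flight(sigma_t=1.0, p=0.5, rng=rng)  # illustrative values
```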
Leigh, J P
1991-01-01
This study reports on research which looks for employee and job characteristics that correlate with absenteeism. A large cross-sectional national probability sample of workers employed for at least 20 hr per week is analyzed (n = 1308). The dependent variable is the number of self-reported absences during the past 14 days. Thirty-seven independent variables are considered. Ordinary Least Squares (multiple regressions), two-limit Tobits, and two-part models are used to assess the statistical and practical significance of possible covariates. Statistically significant predictors included health variables such as being overweight, complaining of insomnia, and hazardous working conditions; job characteristics such as inflexible hours; and personal variables such as being a mother with small children. Variables reflecting dangerous working conditions appear to be the strongest correlates of absenteeism. Notable variables which do not predict absenteeism include age, race, wages, and job satisfaction. Future research should direct attention toward workers' health and working conditions as covariates of absenteeism, since they are strongly significant in this study and have been neglected by most absenteeism investigators.
Stamer, J.K.
1996-01-01
The temporal distribution of the herbicides alachlor, atrazine, cyanazine, and metolachlor was documented from September 1991 through August 1992 in the Platte River at Louisville, Neb., the drainage of the Central Nebraska Basins. Lincoln, Omaha, and other municipalities withdraw groundwater for public supplies from the adjacent alluvium, which is hydraulically connected to the Platte River. Data were collected, in part, to provide information to managers, planners, and public utilities on the likelihood of water supplies being adversely affected by these herbicides. Three computational procedures - monthly means, monthly subsampling, and quarterly subsampling - were used to calculate annual mean herbicide concentrations. When the sampling was conducted quarterly rather than monthly, alachlor and atrazine concentrations were more likely to exceed their respective maximum contaminant levels (MCLs) of 2.0 μg/L and 3.0 μg/L, and cyanazine concentrations were more likely to exceed the health advisory level of 1.0 μg/L. The US Environmental Protection Agency has established a tentative MCL of 1.0 μg/L for cyanazine; data indicate that cyanazine is likely to exceed this level under most hydrologic conditions.
Uncertainty Analyses for Localized Tallies in Monte Carlo Eigenvalue Calculations
Mervin, Brenden T.; Maldonado, G Ivan; Mosher, Scott W; Wagner, John C
2011-01-01
It is well known that statistical estimates obtained from Monte Carlo criticality simulations can be adversely affected by cycle-to-cycle correlations in the fission source. In addition there are several other more fundamental issues that may lead to errors in Monte Carlo results. These factors can have a significant impact on the calculated eigenvalue, localized tally means and their associated standard deviations. In fact, modern Monte Carlo computational tools may generate standard deviation estimates that are a factor of five or more lower than the true standard deviation for a particular tally due to the inter-cycle correlations in the fission source. The magnitude of this under-prediction can climb as high as one hundred when combined with an ill-converged fission source or poor sampling techniques. Since Monte Carlo methods are widely used in reactor analysis (as a benchmarking tool) and criticality safety applications, an in-depth understanding of the effects of these issues must be developed in order to support the practical use of Monte Carlo software packages. A rigorous statistical analysis of localized tally results in eigenvalue calculations is presented using the SCALE/KENO-VI and MCNP Monte Carlo codes. The purpose of this analysis is to investigate the under-prediction in the uncertainty and its sensitivity to problem characteristics and calculational parameters, and to provide a comparative study between the two codes with respect to this under-prediction. It is shown herein that adequate source convergence along with proper specification of Monte Carlo parameters can reduce the magnitude of under-prediction in the uncertainty to reasonable levels; below a factor of 2 when inter-cycle correlations in the fission source are not a significant factor. In addition, through the use of a modified sampling procedure, the effects of inter-cycle correlations on both the mean value and standard deviation estimates can be isolated.
Perturbation Monte Carlo methods for tissue structure alterations.
Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Spanier, Jerome
2013-01-01
This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients; however, the phase function cannot be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers: whole nuclei; organelles such as lysosomes and mitochondria; and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength is varied, the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ∼15-25% of the scattering parameters.
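For context, the conventional perturbation Monte Carlo reweighting that the paper extends (to phase-function perturbations) can be sketched as follows; the function below implements only the standard scattering/absorption-coefficient rule, not the paper's extension.

```python
import numpy as np

def pmc_weight(n_coll, path_len, mus, mua, mus_p, mua_p):
    """Standard perturbation Monte Carlo reweighting for a photon that made
    n_coll collisions and traveled path_len inside the perturbed region:
    baseline coefficients (mus, mua) -> perturbed (mus_p, mua_p)."""
    mut, mut_p = mus + mua, mus_p + mua_p
    return (mus_p / mus) ** n_coll * np.exp(-(mut_p - mut) * path_len)

# e.g., reweight a baseline photon history for a 10% higher mus (values
# here are illustrative, in 1/mm):
w = pmc_weight(n_coll=12, path_len=1.5, mus=10.0, mua=0.1,
               mus_p=11.0, mua_p=0.1)
```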
NASA Astrophysics Data System (ADS)
Devour, Brian M.; Bell, Eric F.
2016-06-01
We study the relative dust attenuation-inclination relation in 78 721 nearby galaxies using the axis ratio dependence of optical-near-IR colour, as measured by the Sloan Digital Sky Survey, the Two Micron All Sky Survey, and the Wide-field Infrared Survey Explorer. To avoid attenuation-driven biases to the greatest extent possible, we carefully select galaxies using dust attenuation-independent near- and mid-IR luminosities and colours. Relative u-band attenuation between face-on and edge-on disc galaxies along the star-forming main sequence varies from ~0.55 mag up to ~1.55 mag. The strength of the relative attenuation varies strongly with both specific star formation rate and galaxy luminosity (or stellar mass). The dependence of relative attenuation on luminosity is not monotonic, but rather peaks at M(3.4 μm) ≈ -21.5, corresponding to M* ≈ 3 × 10¹⁰ M⊙. This behaviour seemingly stands in contrast to some older studies; we show that older works failed to reliably probe to higher luminosities, and were insensitive to the decrease in attenuation with increasing luminosity for the brightest star-forming discs. Back-of-the-envelope scaling relations predict the strong variation of dust optical depth with specific star formation rate and stellar mass. More in-depth comparisons using the scaling relations to model the relative attenuation require the inclusion of star-dust geometry to reproduce the details of these variations (especially at high luminosities), highlighting the importance of these geometrical effects.
Bicmen, Can; Gunduz, Ayriz T.; Coskun, Meral; Senol, Gunes; Cirak, A. Kadri; Ozsoz, Ayse
2011-01-01
Although the sensitivity and specificity of nucleic acid amplification assays are high with smear-positive samples, the sensitivity with smear-negative and extrapulmonary samples for the diagnosis of tuberculosis in suspicious tuberculosis cases still remains to be investigated. This study evaluates the performance of the GenoType Mycobacteria Direct (GTMD) test for rapid molecular detection and identification of the Mycobacterium tuberculosis complex and four clinically important nontuberculous mycobacteria (M. avium, M. intracellulare, M. kansasii, and M. malmoense) in smear-negative samples. A total of 1,570 samples (1,103 bronchial aspiration, 127 sputum, and 340 extrapulmonary samples) were analyzed. When we evaluated the performance criteria in combination with a positive culture result and/or the clinical outcome of the patients, the overall sensitivity, specificity, and positive and negative predictive values were found to be 62.4, 99.5, 95.9, and 93.9%, respectively, whereas they were 63.2, 99.4, 95.7, and 92.8%, respectively, for pulmonary samples and 52.9, 100, 100, and 97.6%, respectively, for extrapulmonary samples. Among the culture-positive samples which had Mycobacterium species detectable by the GTMD test, three samples were identified to be M. intracellulare and one sample was identified to be M. avium. However, five M. intracellulare samples and an M. kansasii sample could not be identified by the molecular test and were found to be negative. The GTMD test has been a reliable, practical, and easy tool for rapid diagnosis of smear-negative pulmonary and extrapulmonary tuberculosis so that effective precautions may be taken and appropriate treatment may be initiated. However, the low sensitivity level should be considered in the differentiation of suspected tuberculosis and some other clinical condition until the culture result is found to be negative and a true picture of the clinical outcome is obtained. PMID:21653780
Estimating rare events in biochemical systems using conditional sampling
NASA Astrophysics Data System (ADS)
Sundar, V. S.
2017-01-01
The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
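A compact sketch of subset simulation in standard normal space follows. For simplicity it uses a full-vector random-walk Metropolis move rather than the componentwise modified Metropolis-Hastings algorithm of the paper; the limit-state function g, level probability p0, and chain parameters are illustrative assumptions.

```python
import numpy as np

def subset_simulation(g, dim, n=1000, p0=0.1, sigma=1.0, seed=3):
    """Estimate the rare-event probability P(g(z) > 0), z ~ N(0, I_dim),
    as a product of conditional probabilities over nested threshold levels."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, dim))
    gz = np.apply_along_axis(g, 1, z)
    prob = 1.0
    for _ in range(20):                            # cap the number of levels
        b = np.quantile(gz, 1.0 - p0)              # intermediate threshold
        if b >= 0.0:                               # target level reached
            return prob * np.mean(gz > 0.0)
        prob *= p0
        seeds = z[gz > b]                          # conditional samples
        chain = list(seeds)
        while len(chain) < n:                      # repopulate the level
            zi = chain[rng.integers(len(chain))]
            cand = zi + sigma * rng.standard_normal(dim)
            # Metropolis acceptance under the standard normal density,
            # plus the constraint that the sample stays inside the level
            logr = 0.5 * (zi @ zi - cand @ cand)
            ok = np.log(rng.random()) < logr and g(cand) > b
            chain.append(cand if ok else zi)
        z = np.array(chain[:n])
        gz = np.apply_along_axis(g, 1, z)
    return prob

# Example: P(z0 > 4) for a standard normal; exact value is about 3.2e-5.
print(subset_simulation(lambda zv: zv[0] - 4.0, dim=1))
```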
A Monte Carlo Approach to the Design, Assembly, and Evaluation of Multistage Adaptive Tests
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.
2008-01-01
This article presents an application of Monte Carlo methods for developing and assembling multistage adaptive tests (MSTs). A major advantage of the Monte Carlo assembly over other approaches (e.g., integer programming or enumerative heuristics) is that it provides a uniform sampling from all MSTs (or MST paths) available from a given item pool.…
MC21 analysis of the nuclear energy agency Monte Carlo performance benchmark problem
Kelly, D. J.; Sutton, T. M.; Wilson, S. C.
2012-07-01
Due to the steadily decreasing cost and wider availability of large scale computing platforms, there is growing interest in the prospects for the use of Monte Carlo for reactor design calculations that are currently performed using few-group diffusion theory or other low-order methods. To facilitate the monitoring of the progress being made toward the goal of practical full-core reactor design calculations using Monte Carlo, a performance benchmark has been developed and made available through the Nuclear Energy Agency. A first analysis of this benchmark using the MC21 Monte Carlo code was reported on in 2010, and several practical difficulties were highlighted. In this paper, a newer version of MC21 that addresses some of these difficulties has been applied to the benchmark. In particular, the confidence-interval-determination method has been improved to eliminate source correlation bias, and a fission-source-weighting method has been implemented to provide a more uniform distribution of statistical uncertainties. In addition, the Forward-Weighted, Consistent-Adjoint-Driven Importance Sampling methodology has been applied to the benchmark problem. Results of several analyses using these methods are presented, as well as results from a very large calculation with statistical uncertainties that approach what is needed for design applications. (authors)
NASA Astrophysics Data System (ADS)
Li, Xiang
2016-10-01
Blood glucose monitoring is of great importance for managing the course of diabetes and preventing its complications. At present, clinical blood glucose concentration measurement is invasive and could be replaced by noninvasive spectroscopic analytical techniques. Among the various parameters of the optical fiber probe used in spectrum measurement, the source-detector distance is the key one. The Monte Carlo technique is a flexible method for simulating light propagation in tissue. The simulation is based on the random walks that photons make as they travel through tissue, which are chosen by statistically sampling the probability distributions for step size and angular deflection per scattering event. The traditional method for determining the optimal distance between the transmitting fiber and the detector is to use Monte Carlo simulation to find the point where most photons come out. But there is a problem: in the epidermal layer there is no artery, vein, or capillary vessel. Thus, when photons propagate and interact with tissue in the epidermal layer, they acquire no glucose information. A new criterion is proposed to determine the optimal distance, named the effective path length in this paper. The path length of each photon travelling in the dermis is recorded when running the Monte Carlo simulation; this is the effective path length defined above. The sum of the effective path lengths of all photons at each point is calculated, and the detector should be placed at the point with the greatest total effective path length. The optimal measuring distance between the transmitting fiber and the detector is thereby determined.
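A minimal sketch of the sampling steps the abstract describes: exponential free paths and Henyey-Greenstein deflections, with a crude tally of the "effective path length" accumulated below an assumed epidermis depth. The optical coefficients, layer thickness, and the whole-step crediting rule are illustrative assumptions; absorption weighting and lateral position tracking (needed to bin by detector distance) are omitted.

```python
import numpy as np

def hg_cosine(g, u):
    """Draw a scattering deflection cosine from the Henyey-Greenstein
    phase function with anisotropy g, using a uniform variate u."""
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0                       # isotropic limit
    t = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - t * t) / (2.0 * g)

rng = np.random.default_rng(4)
mu_t, g = 10.0, 0.9       # assumed interaction coefficient [1/mm], anisotropy
epi = 0.1                 # assumed epidermis thickness [mm]
z, uz, eff_path = 0.0, 1.0, 0.0
for _ in range(10000):                             # one photon history
    s = -np.log(rng.random()) / mu_t               # exponential free path
    if z > epi:
        eff_path += s     # crude: credit the whole step if it starts in dermis
    z += uz * s
    if z < 0.0:
        break             # photon re-emerges; a full code tallies it here
    cos_t = hg_cosine(g, rng.random())
    sin_t = np.sqrt(1.0 - cos_t * cos_t)
    psi = 2.0 * np.pi * rng.random()
    # exact update of the z-direction cosine under a uniform azimuth
    uz = uz * cos_t + np.sqrt(max(0.0, 1.0 - uz * uz)) * sin_t * np.cos(psi)
# a full simulation repeats this over many photons, summing eff_path per
# exit position to locate the optimal source-detector distance
```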
CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei
2014-12-01
We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimensions, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps.
The Metropolis Monte Carlo Method in Statistical Physics
NASA Astrophysics Data System (ADS)
Landau, David P.
2003-11-01
A brief overview is given of some of the advances in statistical physics that have been made using the Metropolis Monte Carlo method. By complementing theory and experiment, these have increased our understanding of phase transitions and other phenomena in condensed matter systems. A brief description of a new method, commonly known as "Wang-Landau sampling," will also be presented.
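For readers new to the method, a minimal single-spin-flip Metropolis sketch for the 2-D Ising model follows; the lattice size, coupling, and sweep count are arbitrary illustrative choices.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2-D Ising model (J = 1) with periodic
    boundaries: propose single-spin flips, accept with min(1, e^-beta*dE)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb                # energy cost of the flip
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(5)
L, beta = 32, 0.44                                 # near the critical coupling
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(500):
    metropolis_sweep(spins, beta, rng)
print("magnetization per spin:", spins.mean())
```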
Nonequilibrium Candidate Monte Carlo Simulations with Configurational Freezing Schemes.
Giovannelli, Edoardo; Gellini, Cristina; Pietraperzia, Giangaetano; Cardini, Gianni; Chelli, Riccardo
2014-10-14
Nonequilibrium Candidate Monte Carlo simulation [Nilmeier et al., Proc. Natl. Acad. Sci. U.S.A. 2011, 108, E1009-E1018] is a tool devised to design Monte Carlo moves with high acceptance probabilities that connect uncorrelated configurations. Such moves are generated through nonequilibrium driven dynamics, producing candidate configurations accepted with a Monte Carlo-like criterion that preserves the equilibrium distribution. The probability of accepting a candidate configuration as the next sample in the Markov chain basically depends on the work performed on the system during the nonequilibrium trajectory and increases as that work decreases. It is thus strategically relevant to find ways of producing nonequilibrium moves with low work, namely moves where dissipation is as low as possible. This is the goal of our methodology, in which we combine Nonequilibrium Candidate Monte Carlo with the Configurational Freezing schemes developed by Nicolini et al. (J. Chem. Theory Comput. 2011, 7, 582-593). The idea is to limit the configurational sampling to particles of a well-established region of the simulation sample, namely the region where dissipation occurs, while leaving the other particles fixed. This allows the system to relax faster around the region perturbed by the finite-time switching move and hence reduces the dissipated work, eventually enhancing the probability of accepting the generated move. Our combined approach enhances configurational sampling significantly, as shown by the case of a bistable dimer immersed in a dense fluid.
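The acceptance step can be sketched compactly. Assuming a driven protocol that satisfies the appropriate symmetry condition, a candidate generated by the nonequilibrium trajectory is accepted with probability min(1, e^(-βW)), where W is the accumulated protocol work; the general criterion carries additional correction terms omitted in this sketch.

```python
import numpy as np

def ncmc_accept(work, beta, rng):
    """Accept/reject a nonequilibrium candidate move with probability
    min(1, exp(-beta * W)) for protocol work W (symmetric protocol
    assumed; the general criterion includes extra correction terms)."""
    return rng.random() < min(1.0, np.exp(-beta * work))

rng = np.random.default_rng(6)
# A low-dissipation move (W = 0.2 kT) is accepted ~82% of the time,
# while a highly dissipative one (W = 5 kT) is accepted <1% of the time.
print(ncmc_accept(0.2, 1.0, rng), ncmc_accept(5.0, 1.0, rng))
```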
Exploring Mass Perception with Markov Chain Monte Carlo
ERIC Educational Resources Information Center
Cohen, Andrew L.; Ross, Michael G.
2009-01-01
Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
NASA Astrophysics Data System (ADS)
Bosá, Ivana; Rothstein, Stuart M.
2004-09-01
We append forward walking to a diffusion Monte Carlo algorithm which maintains a fixed number of walkers. This removes the importance sampling bias of expectation values of operators which do not commute with the Hamiltonian. We demonstrate the effectiveness of this approach by employing three importance sampling functions for the hydrogen atom ground state, two very crude. We estimate moments of the electron-nuclear distance, static polarizabilities, and high-order hyperpolarizabilities up to the fourth power in the electric field, where no use is made of the finite field approximation. The results agree with the analytical values, with a statistical error which increases substantially with decreasing overlap of the guiding function with the exact wave function.
Changes in optical properties of biological tissue: experiment and Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Kaspar, Pavel; Prokopyeva, Elena; Tománek, Pavel; Grmela, Lubomír.
2016-12-01
Biological tissue is a very complex, yet important, material to describe and analyze. Its properties are affected by chemical processes too numerous to easily understand and describe. By simplifying and grouping some aspects together, we are able to create a model for simulating the behavior of a photon inside a biological sample. Using the Monte Carlo method, an algorithm can be created for calculating photon propagation through the tissue based on several optical parameters, such as the absorption and scattering coefficients, refractive indices, and optical anisotropy. Based on some of the results of the simulation, a comparative measurement on a muscle sample was performed to prove the usefulness of such a model and to describe changes in the tissue sample based on the aforementioned optical parameters in both real life and the simulation.
An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks
NASA Technical Reports Server (NTRS)
Kim, Stacy
2011-01-01
Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk are under-sampled, such as the optically thick disk interior, or are of particular interest, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracies and calculation speeds.
MontePython: Implementing Quantum Monte Carlo using Python
NASA Astrophysics Data System (ADS)
Nilsen, Jon Kristian
2007-11-01
We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system to which QMC can be applied, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and how to implement these methods in pure C++ and in C++/Python. Furthermore, we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible.
Program summary:
Program title: MontePython
Catalogue identifier: ADZP_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZP_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 49 519
No. of bytes in distributed program, including test data, etc.: 114 484
Distribution format: tar.gz
Programming language: C++, Python
Computer: PC, IBM RS6000/320, HP, ALPHA
Operating system: LINUX
Has the code been vectorised or parallelized?: Yes, parallelized with MPI
Number of processors used: 1-96
RAM: Depends on physical system to be simulated
Classification: 7.6; 16.1
Nature of problem: Investigating ab initio quantum mechanical systems, specifically Bose-Einstein condensation in dilute gases of 87Rb
Solution method: Quantum Monte Carlo
Running time: 225 min with 20 particles (with 4800 walkers moved in 1750 time steps) on one AMD Opteron 2218 processor; a production run for, e.g., 200 particles takes around 24 hours on 32 such processors.
Towards Fast, Scalable Hard Particle Monte Carlo Simulations on GPUs
NASA Astrophysics Data System (ADS)
Anderson, Joshua A.; Irrgang, M. Eric; Glaser, Jens; Harper, Eric S.; Engel, Michael; Glotzer, Sharon C.
2014-03-01
Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. We discuss the implementation of Monte Carlo for arbitrary hard shapes in HOOMD-blue, a GPU-accelerated particle simulation tool, to enable million particle simulations in a field where thousands is the norm. In this talk, we discuss our progress on basic parallel algorithms, optimizations that maximize GPU performance, and communication patterns for scaling to multiple GPUs. Research applications include colloidal assembly and other uses in materials design, biological aggregation, and operations research.
Electronic structure quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Bajdich, Michal; Mitas, Lubos
2009-04-01
Quantum Monte Carlo (QMC) is an advanced simulation methodology for studies of many-body quantum systems. The QMC approaches combine analytical insights with stochastic computational techniques for efficient solution of several classes of important many-body problems such as the stationary Schrödinger equation. QMC methods of various flavors have been applied to a great variety of systems spanning continuous and lattice quantum models, molecular and condensed systems, BEC-BCS ultracold condensates, nuclei, etc. In this review, we focus on electronic structure QMC, i.e., methods relevant for systems described by electron-ion Hamiltonians. Some of the key QMC achievements include direct treatment of electron correlation, accuracy in predicting energy differences, and favorable scaling in the system size. Calculations of atoms, molecules, clusters and solids have demonstrated QMC applicability to real systems with hundreds of electrons while providing 90-95% of the correlation energy and energy differences typically within a few percent of experiments. Advances in accuracy beyond these limits are hampered by the so-called fixed-node approximation, which is used to circumvent the notorious fermion sign problem. Many-body nodes of fermion states and their properties have therefore become one of the important topics for further progress in predictive power and efficiency of QMC calculations. Some of our recent results on the wave function nodes and related nodal domain topologies will be briefly reviewed. This includes analysis of few-electron systems and descriptions of exact and approximate nodes using transformations and projections of the highly-dimensional nodal hypersurfaces into the 3D space. Studies of fermion nodes offer new insights into topological properties of eigenstates such as explicit demonstrations that generic fermionic ground states exhibit the minimal number of two nodal domains. Recently proposed trial wave functions based on Pfaffians with…
Semantic Importance Sampling for Statistical Model Checking
2015-01-16
…SMT calls while maintaining correctness. Finally, we implement SIS in a tool called osmosis and use it to verify a number of stochastic systems.
Monte Carlo tests of the ELIPGRID-PC algorithm
Davidson, J.R.
1995-04-01
The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
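The Monte Carlo side of such a validation can be sketched in a few lines: drop an elliptical hot spot at a random offset and orientation relative to a square sampling grid and test whether any grid node falls inside it. The grid spacing, ellipse axes, and sample count below are assumed for illustration, not taken from the report:

```python
import numpy as np

rng = np.random.default_rng(1)
G = 1.0                 # grid spacing (assumed)
a, b = 0.6, 0.2         # ellipse semi-axes of a hypothetical hot spot

def detected():
    cx, cy = rng.random() * G, rng.random() * G      # random center within one cell
    phi = rng.random() * np.pi                       # random orientation
    # Since the semi-major axis is smaller than G, checking a 3x3 block of
    # neighboring grid nodes is sufficient.
    for ix in range(-1, 3):
        for iy in range(-1, 3):
            dx, dy = ix * G - cx, iy * G - cy
            u =  dx * np.cos(phi) + dy * np.sin(phi)  # rotate into ellipse frame
            v = -dx * np.sin(phi) + dy * np.cos(phi)
            if (u / a) ** 2 + (v / b) ** 2 <= 1.0:
                return True
    return False

n = 100_000
p = sum(detected() for _ in range(n)) / n
print(f"estimated detection probability: {p:.3f}")
```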
SPQR: a Monte Carlo reactor kinetics code. [LMFBR]
Cramer, S.N.; Dodds, H.L.
1980-02-01
The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations.
Photon beam description in PEREGRINE for Monte Carlo dose calculations
Cox, L. J., LLNL
1997-03-04
The goal of PEREGRINE is to provide the capability for accurate, fast Monte Carlo calculation of radiation therapy dose distributions, for routine clinical use and for research into the efficacy of improved dose calculation. An accurate, efficient method of describing and sampling radiation sources is needed, and a simple, flexible solution is provided. The teletherapy source package for PEREGRINE, coupled with state-of-the-art Monte Carlo simulations of treatment heads, makes it possible to describe any teletherapy photon beam to the precision needed for highly accurate Monte Carlo dose calculations in complex clinical configurations that use standard patient modifiers such as collimator jaws, wedges, blocks, and/or multi-leaf collimators. Generic beam descriptions for a class of treatment machines can readily be adjusted to yield dose calculations that match specific clinical sites.
Quantum annealing of an Ising spin-glass by Green's function Monte Carlo.
Stella, Lorenzo; Santoro, Giuseppe E
2007-03-01
We present an implementation of quantum annealing (QA) via lattice Green's function Monte Carlo (GFMC), focusing on its application to the Ising spin glass in a transverse field. In particular, we study whether or not such a method is more effective than path-integral Monte Carlo (PIMC) based QA, as well as classical simulated annealing (CA), previously tested on the same optimization problem. We identify the issue of importance sampling, i.e., the necessity of possessing reasonably good (variational) trial wave functions, as the key point of the algorithm. We performed GFMC-QA runs using a Boltzmann-type trial wave function, finding results for the residual energies that are qualitatively similar to those of CA (but at a much larger computational cost), and definitely worse than PIMC-QA. We conclude that, at present, without a serious effort in constructing reliable importance sampling variational wave functions for a quantum glass, GFMC-QA is not a true competitor of PIMC-QA.
Frequency domain optical tomography using a Monte Carlo perturbation method
NASA Astrophysics Data System (ADS)
Yamamoto, Toshihiro; Sakamoto, Hiroki
2016-04-01
A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. The frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions, which in ill-posed inverse problems is otherwise degraded by cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been analyzed mostly for the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with the results obtained from conventional deterministic or Monte Carlo methods.
Suitable Candidates for Monte Carlo Solutions.
ERIC Educational Resources Information Center
Lewis, Jerome L.
1998-01-01
Discusses Monte Carlo methods, powerful and useful techniques that rely on random numbers to solve deterministic problems whose solutions may be too difficult to obtain using conventional mathematics. Reviews two excellent candidates for the application of Monte Carlo methods. (ASK)
Applications of Monte Carlo Methods in Calculus.
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
Monte Carlo methods to calculate impact probabilities
NASA Astrophysics Data System (ADS)
Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.
2014-09-01
Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward
Malektaji, Siavash; Lima, Ivan T; Sherif, Sherif S
2014-04-01
We developed a Monte Carlo-based simulator of optical coherence tomography (OCT) imaging for turbid media with arbitrary spatial distributions. This simulator allows computation of both Class I diffusive reflectance due to ballistic and quasiballistic scattered photons and Class II diffusive reflectance due to multiple scattered photons. It was implemented using a tetrahedron-based mesh and importance sampling to significantly reduce computational time. Our simulation results were verified by comparing them with results from two previously validated OCT simulators for multilayered media. We present simulation results for OCT imaging of a sphere inside a background slab, which would not have been possible with earlier simulators. We also discuss three important aspects of our simulator: (1) resolution, (2) accuracy, and (3) computation time. Our simulator could be used to study important OCT phenomena and to design OCT systems with improved performance.
Calculating Pi Using the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Williamson, Timothy
2013-11-01
During the summer of 2012, I had the opportunity to participate in a research experience for teachers at the center for sustainable energy at Notre Dame University (RET @ cSEND), working with Professor John LoSecco on the problem of using antineutrino detection to accurately determine the fuel makeup and operating power of nuclear reactors. During full power operation, a reactor may produce 10^21 antineutrinos per second, with approximately 100 per day being detected. While becoming familiar with the design and operation of the detectors, and how the total antineutrino flux could be obtained from such a small sample, I read about a simulation program called Monte Carlo. Further investigation led me to the Monte Carlo method page of Wikipedia [2], where I saw an example of approximating pi using this simulation. Other examples where this method was applied were typically done with computer simulations [2] or purely mathematically [3]. It is my belief that this method may be easily related to the students by performing the simple activity of sprinkling rice on an arc drawn in a square. The activity that follows was inspired by those simulations and was used by my AP Physics class last year with very good results.
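The classroom demonstration translates directly into a few lines of code; a minimal sketch of the same dart-throwing estimate, with the sample count chosen arbitrarily:

```python
import random

n = 1_000_000
# Count random points in the unit square that fall under the quarter-circle arc.
inside = sum(1 for _ in range(n)
             if random.random() ** 2 + random.random() ** 2 <= 1.0)
print(4.0 * inside / n)   # fraction inside times 4 converges to pi as n grows
```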
Monte Carlo Simulation of Plumes Spectral Emission
2005-06-07
The project comprises: (a) a subroutine calculating the spectral (group) phase function from the Henyey-Greenstein scattering indicatrix; (b) the computing code SRT-RTMC-NSM, intended for narrow-band Spectral Radiation Transfer Ray Tracing simulation by the Monte Carlo method; and (c) computing codes for random (Monte Carlo) simulation of molecular lines with reference to a problem of radiation transfer.
Estimators of the Squared Cross-Validity Coefficient: A Monte Carlo Investigation.
ERIC Educational Resources Information Center
Drasgow, Fritz; And Others
1979-01-01
A Monte Carlo experiment was used to evaluate four procedures for estimating the population squared cross-validity of a sample least squares regression equation. One estimator was particularly recommended. (Author/BH)
Play It Again: Teaching Statistics with Monte Carlo Simulation
ERIC Educational Resources Information Center
Sigal, Matthew J.; Chalmers, R. Philip
2016-01-01
Monte Carlo simulations (MCSs) provide important information about statistical phenomena that would be impossible to assess otherwise. This article introduces MCS methods and their applications to research and statistical pedagogy using a novel software package for the R Project for Statistical Computing constructed to lessen the often steep…
Complete Monte Carlo Simulation of Neutron Scattering Experiments
Drosg, M.
2011-12-13
In the far past, it was not possible to accurately correct for the finite geometry and the finite sample size of a neutron scattering set-up. The limited calculation power of the ancient computers, as well as the lack of powerful Monte Carlo codes and the limitations of the database available then, prevented a complete simulation of the actual experiment. Using, e.g., the Monte Carlo neutron transport code MCNPX [1], neutron scattering experiments can be simulated almost completely with a high degree of precision using a modern PC, which has a computing power ten thousand times that of a supercomputer of the early 1970s. Thus, (better) corrections can also be obtained easily for previously published data, provided that these experiments are sufficiently well documented. Better knowledge of reference data (e.g., atomic mass, relativistic correction, and monitor cross sections) further contributes to data improvement. Elastic neutron scattering experiments from liquid samples of the helium isotopes performed around 1970 at LANL happen to be very well documented. Considering that the cryogenic targets are expensive and complicated, it is certainly worthwhile to improve these data by correcting them using this comparatively straightforward method. As two thirds of all differential scattering cross section data of ³He(n,n)³He are connected to the LANL data, it became necessary to correct the dependent data measured in Karlsruhe, Germany, as well. A thorough simulation of both the LANL experiments and the Karlsruhe experiment is presented, starting from the neutron production, followed by the interaction in the air, the interaction with the cryostat structure, and finally the scattering medium itself. In addition, scattering from the hydrogen reference sample was simulated. For the LANL data, the multiple scattering corrections are smaller by a factor of at least five, making this work relevant. Even more important are the corrections to the Karlsruhe data.
Searching for convergence in phylogenetic Markov chain Monte Carlo.
Beiko, Robert G; Keith, Jonathan M; Harlow, Timothy J; Ragan, Mark A
2006-08-01
Markov chain Monte Carlo (MCMC) is a methodology that is gaining widespread use in the phylogenetics community and is central to phylogenetic software packages such as MrBayes. An important issue for users of MCMC methods is how to select appropriate values for adjustable parameters such as the length of the Markov chain or chains, the sampling density, the proposal mechanism, and, if Metropolis-coupled MCMC is being used, the number of heated chains and their temperatures. Although some parameter settings have been examined in detail in the literature, others are frequently chosen with more regard to computational time or personal experience with other data sets. Such choices may lead to inadequate sampling of tree space or an inefficient use of computational resources. We performed a detailed study of convergence and mixing for 70 randomly selected, putatively orthologous protein sets with different sizes and taxonomic compositions. Replicated runs from multiple random starting points permit a more rigorous assessment of convergence, and we developed two novel statistics, delta and epsilon, for this purpose. Although likelihood values invariably stabilized quickly, adequate sampling of the posterior distribution of tree topologies took considerably longer. Our results suggest that multimodality is common for data sets with 30 or more taxa and that this results in slow convergence and mixing. However, we also found that the pragmatic approach of combining data from several short, replicated runs into a "metachain" to estimate bipartition posterior probabilities provided good approximations, and that such estimates were no worse in approximating a reference posterior distribution than those obtained using a single long run of the same length as the metachain. Precision appears to be best when heated Markov chains have low temperatures, whereas chains with high temperatures appear to sample trees with high posterior probabilities only rarely.
Classical Perturbation Theory for Monte Carlo Studies of System Reliability
Lewins, Jeffrey D.
2001-03-15
A variational principle for a Markov system allows the derivation of perturbation theory for models of system reliability, with prospects of extension to generalized Markov processes of a wide nature. It is envisaged that Monte Carlo or stochastic simulation will supply the trial functions for such a treatment, which obviates the standard difficulties of direct analog Monte Carlo perturbation studies. The development is given in the specific mode for first- and second-order theory, using an example with known analytical solutions. The adjoint equation is identified with the importance function and a discussion given as to how both the forward and backward (adjoint) fields can be obtained from a single Monte Carlo study, with similar interpretations for the additional functions required by second-order theory. Generalized Markov models with age-dependence are identified as coming into the scope of this perturbation theory.
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
• Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
• Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
• Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain.
• Visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is completed, and spatial redecomposition.
These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Multiple-time-stepping generalized hybrid Monte Carlo methods
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
Parallel tempering Monte Carlo in LAMMPS.
Rintoul, Mark Daniel; Plimpton, Steven James; Sears, Mark P.
2003-11-01
We present here the details of the implementation of the parallel tempering Monte Carlo technique in LAMMPS, a heavily used massively parallel molecular dynamics code at Sandia. This technique allows many replicas of a system to be run at different simulation temperatures. At various points in the simulation, configurations can be swapped between different temperature environments and then continued. This allows large regions of energy space to be sampled very quickly, and allows minimum energy configurations to emerge in very complex systems, such as large biomolecular systems. By including this algorithm in an existing code, we immediately gain all of the previous work that had been put into LAMMPS, and make this technique quickly available to the entire Sandia and international LAMMPS community. Finally, we present an example of this code applied to folding a small protein.
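The heart of the technique is the replica-exchange acceptance test; a minimal sketch under the usual Metropolis swap criterion, with a generic energy function standing in for the full MD potential (in LAMMPS the exchange happens between replicas running full molecular dynamics):

```python
import math
import random

def try_swap(x_i, x_j, beta_i, beta_j, energy):
    """Attempt to exchange configurations between two temperature replicas.

    Accept with probability min(1, exp[(beta_i - beta_j) * (E_i - E_j)]),
    which preserves detailed balance across the extended ensemble.
    """
    delta = (beta_i - beta_j) * (energy(x_i) - energy(x_j))
    if random.random() < math.exp(min(0.0, delta)):
        return x_j, x_i   # swap: each configuration continues at the other temperature
    return x_i, x_j       # reject: replicas keep their configurations
```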
Exploring theory space with Monte Carlo reweighting
Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; Mrenna, Stephen; Park, Myeonghun
2014-10-13
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
NASA Astrophysics Data System (ADS)
Šolc, Jaroslav; Dryák, Pavel; Moser, Hannah; Branger, Thierry; García-Toraño, Eduardo; Peyrés, Virginia; Tzika, Faidra; Lutter, Guillaume; Capogni, Marco; Fazio, Aldo; Luca, Aurelian; Vodenik, Branko; Oliveira, Carlos; Saraiva, Andre; Szucs, Laszlo; Dziel, Tomasz; Burda, Oleksiy; Arnold, Dirk; Martinkovič, Jozef; Siiskonen, Teemu; Mattila, Aleksi
2015-11-01
One of the outputs of the European Metrology Research Programme project "Ionising radiation metrology for the metallurgical industry" (MetroMetal) was a recommendation on a novel radionuclide-specific detector system optimised for the measurement of radioactivity in metallurgical samples. The detection efficiency of the recommended system for the standards of cast steel, slag and fume dust developed within the project was characterized by Monte Carlo (MC) simulations performed using different MC codes. The capabilities of MC codes were also tested for simulation of true coincidence summing (TCS) effects for several radionuclides of interest in the metallurgical industry. The TCS correction factors reached up to 32%, showing that TCS effects are of high importance in the close measurement geometries met in routine analyses of metallurgical samples.
NASA Astrophysics Data System (ADS)
Goldman, Saul
1983-10-01
A method we call energy-scaled displacement Monte Carlo (ESDMC), whose purpose is to improve sampling efficiency and thereby speed up convergence rates in Monte Carlo calculations, is presented. The method involves scaling the maximum displacement a particle may make on a trial move to the particle's configurational energy. The scaling is such that, on average, the most stable particles make the smallest moves and the most energetic particles the largest moves. The method is compared to Metropolis Monte Carlo (MMC) and Force Bias Monte Carlo (FBMC) by applying all three methods to a dense Lennard-Jones fluid at two temperatures, and to hot ST2 water. The functions monitored as the Markov chains developed were, for the Lennard-Jones case: melting, radial distribution functions, internal energies, and heat capacities. For hot ST2 water, we monitored energies and heat capacities. The results suggest that ESDMC samples configuration space more efficiently than either MMC or FBMC in these systems for the biasing parameters used here. The benefit from using ESDMC seemed greatest for the Lennard-Jones systems.
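A minimal sketch of the displacement-scaling rule described above, with an assumed linear map between a particle's configurational energy and its maximum trial displacement; the bounds d_min and d_max are illustrative tuning choices, and an asymmetric proposal of this kind in general calls for a Hastings correction in the acceptance test:

```python
import numpy as np

def scaled_max_displacement(e, e_min, e_max, d_min=0.01, d_max=0.5):
    """Map a particle's configurational energy onto a trial-move size:
    the most stable particles (e near e_min) get the smallest moves."""
    t = (e - e_min) / (e_max - e_min) if e_max > e_min else 0.5
    return d_min + t * (d_max - d_min)

def trial_move(r, e, e_min, e_max, rng=np.random.default_rng()):
    # Uniform random displacement within the energy-scaled cube around r.
    d = scaled_max_displacement(e, e_min, e_max)
    return r + rng.uniform(-d, d, size=np.shape(r))
```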
Monte Carlo and analytic simulations in nanoparticle-enhanced radiation therapy
Paro, Autumn D; Hossain, Mainul; Webster, Thomas J; Su, Ming
2016-01-01
Analytical and Monte Carlo simulations have been used to predict dose enhancement factors in nanoparticle-enhanced X-ray radiation therapy. Both simulations predict an increase in dose enhancement in the presence of nanoparticles, but the two methods predict different levels of enhancement over the studied energy, nanoparticle materials, and concentration regime for several reasons. The Monte Carlo simulation calculates energy deposited by electrons and photons, while the analytical one only calculates energy deposited by source photons and photoelectrons; the Monte Carlo simulation accounts for electron–hole recombination, while the analytical one does not; and the Monte Carlo simulation randomly samples photon or electron path and accounts for particle interactions, while the analytical simulation assumes a linear trajectory. This study demonstrates that the Monte Carlo simulation will be a better choice to evaluate dose enhancement with nanoparticles in radiation therapy. PMID:27695329
Trahan, Travis J.; Gentile, Nicholas A.
2012-09-10
Statistical uncertainty is inherent to any Monte Carlo simulation of radiation transport problems. In space-angle-frequency independent radiative transfer calculations, the uncertainty in the solution is entirely due to random sampling of source photon emission times. We have developed a modification to the Implicit Monte Carlo algorithm that eliminates noise due to sampling of the emission time of source photons. In problems that are independent of space, angle, and energy, the new algorithm generates a smooth solution, while a standard implicit Monte Carlo solution is noisy. For space- and angle-dependent problems, the new algorithm exhibits reduced noise relative to standard implicit Monte Carlo in some cases, and comparable noise in all other cases. In conclusion, the improvements are limited to short time scales; over long time scales, noise due to random sampling of spatial and angular variables tends to dominate the noise reduction from the new algorithm.
Application de la methode des sous-groupes au calcul Monte-Carlo multigroupe (Application of the subgroup method to multigroup Monte Carlo calculations)
NASA Astrophysics Data System (ADS)
Martin, Nicolas
This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables over the whole energy spectrum. This study is intended to test and validate this approach in lattice physics and criticality-safety applications. The probability table method seems promising since it introduces an alternative computational path between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data invoked in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computational time; on the other hand, this model preserves the quality of the physical laws present in the ENDF format. Due to its cheap computational cost, the multigroup Monte Carlo approach is usually at the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes; this is generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables on the whole energy range permits self-shielding effects to be taken into account directly, and can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) The consistent computation of probability tables with an energy grid comprising only 295 or 361 groups; the CALENDF moment approach led to probability tables suitable for a Monte Carlo code. (2) The combination of probability table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm. (3) The derivation of a model for taking into account anisotropic
A user's manual for MASH 1. 0: A Monte Carlo Adjoint Shielding Code System
Johnson, J.O.
1992-03-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System
C. O. Slater; J. M. Barnes; J. O. Johnson; J.D. Drischler
1998-10-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.
Markov chain Monte Carlo methods: an introductory example
NASA Astrophysics Data System (ADS)
Klauenberg, Katy; Elster, Clemens
2016-02-01
When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms may be hindered currently by the difficulty to assess the convergence of MCMC output and thus to assure the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
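As the abstract notes, only a few lines of code are needed; a minimal random-walk Metropolis-Hastings sketch targeting a standard normal density (the proposal width of 1.0 is an arbitrary tuning choice, not from the paper):

```python
import math
import random

def log_target(x):
    return -0.5 * x * x              # unnormalized log density of N(0, 1)

x, chain = 0.0, []
for _ in range(10_000):
    proposal = x + random.gauss(0.0, 1.0)    # symmetric random-walk proposal
    # Accept with probability min(1, target(proposal) / target(x)).
    if math.log(random.random()) < log_target(proposal) - log_target(x):
        x = proposal                          # accept the move
    chain.append(x)                           # on rejection, keep the current state
print(sum(chain) / len(chain))                # posterior-mean estimate, near 0
```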
Reduced Variance for Material Sources in Implicit Monte Carlo
Urbatsch, Todd J.
2012-06-25
Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
Bayesian adaptive Markov chain Monte Carlo estimation of genetic parameters.
Mathew, B; Bauer, A M; Koistinen, P; Reetz, T C; Léon, J; Sillanpää, M J
2012-10-01
Accurate and fast estimation of genetic parameters that underlie quantitative traits using mixed linear models with additive and dominance effects is of great importance in both natural and breeding populations. Here, we propose a new fast adaptive Markov chain Monte Carlo (MCMC) sampling algorithm for the estimation of genetic parameters in the linear mixed model with several random effects. In the learning phase of our algorithm, we use the hybrid Gibbs sampler to learn the covariance structure of the variance components. In the second phase of the algorithm, we use this covariance structure to formulate an effective proposal distribution for a Metropolis-Hastings algorithm, which uses a likelihood function in which the random effects have been integrated out. Compared with the hybrid Gibbs sampler, the new algorithm had better mixing properties and was approximately twice as fast to run. Our new algorithm was able to detect different modes in the posterior distribution. In addition, the posterior mode estimates from the adaptive MCMC method were close to the REML (residual maximum likelihood) estimates. Moreover, our exponential prior for inverse variance components was vague and enabled the estimated mode of the posterior variance to be practically zero, which was in agreement with the support from the likelihood (in the case of no dominance). The method performance is illustrated using simulated data sets with replicates and field data in barley.
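A minimal sketch of the two-phase idea described above: a learning phase estimates the covariance structure of the sampled parameters, and the second phase uses that covariance (with the common 2.38²/d scaling of Haario et al.) to shape a Metropolis-Hastings proposal. The two-dimensional Gaussian target is a stand-in for the paper's marginal likelihood with the random effects integrated out:

```python
import numpy as np

rng = np.random.default_rng(2)
prec = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))   # toy correlated target

def log_post(th):
    return -0.5 * th @ prec @ th

def mh_run(n, prop_cov, th0):
    th, out = th0, []
    chol = np.linalg.cholesky(prop_cov)
    for _ in range(n):
        cand = th + chol @ rng.standard_normal(2)
        if np.log(rng.random()) < log_post(cand) - log_post(th):
            th = cand
        out.append(th)
    return np.array(out)

# Phase 1: crude spherical proposal to learn the covariance structure.
warmup = mh_run(2_000, 0.5 * np.eye(2), np.zeros(2))
learned = np.cov(warmup.T) + 1e-6 * np.eye(2)
# Phase 2: proposal shaped by the learned covariance, scaled for dimension d = 2.
samples = mh_run(10_000, (2.38 ** 2 / 2) * learned, warmup[-1])
```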
Bojarski, P; Synak, A; Kułak, L; Rangelowa-Jankowska, S; Kubicki, A; Grobelna, B
2012-01-01
The Monte Carlo simulation method is described and applied as an efficient tool to analyze experimental data in the presence of energy transfer in selected systems where the use of analytical approaches is limited or even impossible. Several numerical and physical problems accompanying Monte Carlo simulation are addressed. It is shown that Monte Carlo simulation makes it possible to obtain the orientation factor in partly ordered systems, as well as other important energy transfer parameters unavailable directly from experiments. It is also shown how Monte Carlo simulation can predict important features of energy transport, such as its directional character in ordered media.
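The orientation factor mentioned above is a natural Monte Carlo target; a minimal sketch estimating the Förster orientation factor κ² over random dipole orientations, which must average to 2/3 in the fully isotropic case (a partly ordered system would restrict the sampled orientations instead):

```python
import numpy as np

rng = np.random.default_rng(3)

def random_unit(n):
    # Isotropic unit vectors via normalized Gaussian draws.
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n = 200_000
d, a, r = random_unit(n), random_unit(n), random_unit(n)  # donor, acceptor, separation
cos_t = np.sum(d * a, axis=1)    # angle between transition dipoles
cos_d = np.sum(d * r, axis=1)    # donor dipole vs. separation vector
cos_a = np.sum(a * r, axis=1)    # acceptor dipole vs. separation vector
kappa2 = (cos_t - 3.0 * cos_d * cos_a) ** 2
print(kappa2.mean())             # approaches 2/3 for isotropic orientations
```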
Lunar Regolith Albedos Using Monte Carlos
NASA Technical Reports Server (NTRS)
Wilson, T. L.; Andersen, V.; Pinsky, L. S.
2003-01-01
The analysis of planetary regoliths for their backscatter albedos produced by cosmic rays (CRs) is important for space exploration and its potential contributions to science investigations in fundamental physics and astrophysics. Albedos affect all such experiments and the personnel that operate them. Groups have analyzed the production rates of various particles and elemental species by planetary surfaces when bombarded with Galactic CR fluxes, both theoretically and by means of various transport codes, some of which have emphasized neutrons. Here we report on the preliminary results of our current Monte Carlo investigation into the production of charged particles, neutrons, and neutrinos by the lunar surface using FLUKA. In contrast to previous work, the effects of charm are now included.
Novel Quantum Monte Carlo Approaches for Quantum Liquids
NASA Astrophysics Data System (ADS)
Rubenstein, Brenda M.
Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While
Monte Carlo Strategies for Selecting Parameter Values in Simulation Experiments.
Leigh, Jessica W; Bryant, David
2015-09-01
Simulation experiments are used widely throughout evolutionary biology and bioinformatics to compare models, promote methods, and test hypotheses. The biggest practical constraint on simulation experiments is the computational demand, particularly as the number of parameters increases. Given the extraordinary success of Monte Carlo methods for conducting inference in phylogenetics, and indeed throughout the sciences, we investigate ways in which Monte Carlo framework can be used to carry out simulation experiments more efficiently. The key idea is to sample parameter values for the experiments, rather than iterate through them exhaustively. Exhaustive analyses become completely infeasible when the number of parameters gets too large, whereas sampled approaches can fare better in higher dimensions. We illustrate the framework with applications to phylogenetics and genetic archaeology.
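The contrast the authors draw is easy to make concrete: an exhaustive design costs levels^k runs in k dimensions, while a sampled design has a fixed budget. A minimal sketch with hypothetical parameter names and ranges:

```python
import itertools
import random

# Hypothetical simulation parameters and their ranges.
ranges = {"mutation_rate": (1e-9, 1e-7),
          "pop_size": (100, 10_000),
          "num_taxa": (10, 200)}

# Exhaustive design: 10 levels per parameter -> 10**3 runs (10**k in k dimensions).
levels = 10
grid = list(itertools.product(*[
    [lo + i * (hi - lo) / (levels - 1) for i in range(levels)]
    for lo, hi in ranges.values()]))

# Sampled design: a fixed budget of runs regardless of dimensionality.
budget = 200
sampled = [{k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
           for _ in range(budget)]
print(len(grid), "grid runs vs", len(sampled), "sampled runs")
```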
Active neutron multiplicity analysis and Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Krick, M. S.; Ensslin, N.; Langner, D. G.; Miller, M. C.; Siebelist, R.; Stewart, J. E.; Ceo, R. N.; May, P. K.; Collins, L. L., Jr.
Active neutron multiplicity measurements of high-enrichment uranium metal and oxide samples have been made at Los Alamos and Y-12. The data from the measurements of standards at Los Alamos were analyzed to obtain values for neutron multiplication and source-sample coupling. These results are compared to equivalent results obtained from Monte Carlo calculations. An approximate relationship between coupling and multiplication is derived and used to correct doubles rates for multiplication and coupling. The utility of singles counting for uranium samples is also examined.
Invariance on Multivariate Results: A Monte Carlo Study of Canonical Coefficients.
ERIC Educational Resources Information Center
Thompson, Bruce
In the present study Monte Carlo methods were employed to evaluate the degree to which canonical function and structure coefficients may be differentially sensitive to sampling error. Sampling error influences were investigated across variations in variable and sample (n) sizes, and across variations in average within-set correlation sizes and in…
Monte Carlo methods for multidimensional integration for European option pricing
NASA Astrophysics Data System (ADS)
Todorov, V.; Dimov, I. T.
2016-10-01
In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European-style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; then the average of independent samples of this random variable is used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk-neutral valuation formula. Then, with an appropriate change of the constants, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule against one of the best low-discrepancy sequences, that of Sobol, for numerical integration. Quasi-Monte Carlo methods are compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is of interest to know which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
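A minimal sketch of the crude Monte Carlo half of such a comparison: a European call under risk-neutral geometric Brownian motion, priced by averaging discounted payoffs over independent samples. The contract and market parameters are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0   # assumed spot, strike, rate, vol, maturity

n = 1_000_000
z = rng.standard_normal(n)
# Terminal price under the risk-neutral measure.
ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
payoff = np.maximum(ST - K, 0.0)
price = np.exp(-r * T) * payoff.mean()               # discounted sample mean
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n)
print(f"price ~ {price:.4f} +/- {stderr:.4f}")
```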
Semistochastic Projector Monte Carlo Method
NASA Astrophysics Data System (ADS)
Petruzielo, F. R.; Holmes, A. A.; Changlani, Hitesh J.; Nightingale, M. P.; Umrigar, C. J.
2012-12-01
We introduce a semistochastic implementation of the power method to compute, for very large matrices, the dominant eigenvalue and expectation values involving the corresponding eigenvector. The method is semistochastic in that the matrix multiplication is partially implemented numerically exactly and partially stochastically with respect to expectation values only. Compared to a fully stochastic method, the semistochastic approach significantly reduces the computational time required to obtain the eigenvalue to a specified statistical uncertainty. This is demonstrated by the application of the semistochastic quantum Monte Carlo method to systems with a sign problem: the fermion Hubbard model and the carbon dimer.
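A minimal sketch of the semistochastic splitting described above, on a toy symmetric matrix: the matrix-vector product is applied exactly on a small "deterministic" index set and estimated without bias by importance-sampled columns on the rest. This illustrates only the splitting, not the authors' quantum Monte Carlo implementation:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 200, 20
B = rng.standard_normal((n, n))
A = (B + B.T) / (2 * np.sqrt(n)) + 2.0 * np.eye(n)   # toy symmetric matrix
det = np.arange(k)          # indices handled exactly ("deterministic space")
rest = np.arange(k, n)      # indices handled stochastically

def semistochastic_matvec(v, n_samples=400):
    out = A[:, det] @ v[det]                 # exact contribution
    p = np.abs(v[rest])
    if p.sum() == 0.0:
        return out
    p = p / p.sum()                          # importance weights ~ |v_j|
    idx = rng.choice(rest, size=n_samples, p=p)
    # Unbiased importance-sampled estimate of A[:, rest] @ v[rest].
    out += A[:, idx] @ (v[idx] / (n_samples * p[idx - k]))
    return out

v = rng.standard_normal(n)
for _ in range(50):                          # power iteration with the mixed matvec
    w = semistochastic_matvec(v)
    v = w / np.linalg.norm(w)
print("dominant eigenvalue estimate:", v @ (A @ v))
```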
Research in the Mont Terri Rock laboratory: Quo vadis?
NASA Astrophysics Data System (ADS)
Bossart, Paul; Thury, Marc
During the past 10 years, the 12 Mont Terri partner organisations ANDRA, BGR, CRIEPI, ENRESA, FOWG (now SWISSTOPO), GRS, HSK, IRSN, JAEA, NAGRA, OBAYASHI and SCK-CEN have jointly carried out and financed a research programme in the Mont Terri Rock Laboratory. An important strategic question for the Mont Terri project is what type of new experiments should be carried out in the future. This question has been discussed among partner delegates, authorities, scientists, principal investigators and experiment delegates. All experiments at Mont Terri - past, ongoing and future - can be assigned to the following three categories: (1) process and mechanism understanding in undisturbed argillaceous formations, (2) experiments related to excavation- and repository-induced perturbations, and (3) experiments related to repository performance during the operational and post-closure phases. In each of these three areas there are still open questions and hence potential experiments to be carried out in the future. A selection of key issues and questions that have not, or have only partly, been addressed so far, and in which the project partners, the safety authorities and other research organisations may be interested, is presented in the following. The Mont Terri Rock Laboratory is positioned as a generic rock laboratory, where research and development is key: mainly developing methods for site characterisation of argillaceous formations, process understanding and demonstration of safety. Due to geological constraints, there will never be a site-specific rock laboratory at Mont Terri. The added value for the 12 partners in terms of future experiments is threefold: (1) the Mont Terri project provides an international scientific platform of high reputation for research on radioactive waste disposal (= state-of-the-art research in argillaceous materials); (2) errors are explicitly allowed (= rock laboratory as a "playground" where experience is often gained through
Monte Carlo Simulations: Number of Iterations and Accuracy
2015-07-01
Keywords: Monte Carlo, confidence interval, central limit theorem, number of iterations, Wilson score method, Wald method, normal probability plot. The report includes an appendix with MATLAB code to produce a normal probability plot (NPP). Tests for normality can be performed to quantify the confidence level of a normality assumption; the basic idea of an NPP is to plot the sample data in
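Of the two interval methods named in the keywords, the Wilson score interval behaves better for proportions near 0 or 1, which is the common case when a Monte Carlo run estimates a rare-event probability. A minimal sketch (z = 1.96 for roughly 95% confidence):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Example: 37 hits observed in 1000 Monte Carlo trials.
print(wilson_interval(37, 1_000))
```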
The X-43A Six Degree of Freedom Monte Carlo Analysis
NASA Technical Reports Server (NTRS)
Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger; Richard, Michael
2007-01-01
This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.
The X-43A Six Degree of Freedom Monte Carlo Analysis
NASA Technical Reports Server (NTRS)
Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger
2008-01-01
This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A inflight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.
Romero, V.J.; Bankston, S.D.
1998-03-01
Optimal response surface construction is being investigated as part of Sandia discretionary (LDRD) research into Analytic Nondeterministic Methods. The goal is to achieve an adequate representation of system behavior over the relevant parameter space of a problem with a minimum of computational and user effort. This is important in global optimization and in estimation of system probabilistic response, which are both made more viable by replacing large complex computer models with fast-running accurate and noiseless approximations. A Finite Element/Lattice Sampling (FE/LS) methodology for constructing progressively refined finite element response surfaces that reuse previous generations of samples is described here. Similar finite element implementations can be extended to N-dimensional problems and/or random fields and applied to other types of structured sampling paradigms, such as classical experimental design and Gauss, Lobatto, and Patterson sampling. Here the FE/LS model is applied in a ``decoupled`` Monte Carlo analysis of two sets of probability quantification test problems. The analytic test problems, spanning a large range of probabilities and very demanding failure region geometries, constitute a good testbed for comparing the performance of various nondeterministic analysis methods. In results here, FE/LS decoupled Monte Carlo analysis required orders of magnitude less computer time than direct Monte Carlo analysis, with no appreciable loss of accuracy. Thus, when arriving at probabilities or distributions by Monte Carlo, it appears to be more efficient to expend computer-model function evaluations on building a FE/LS response surface than to expend them in direct Monte Carlo sampling.
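The sketch below illustrates the general flavour of a "decoupled" analysis rather than the FE/LS method itself: an assumed expensive model is evaluated on a small structured grid, an interpolating response surface is built, and the failure probability is then estimated by Monte Carlo sampling of the cheap surrogate; the model, grid and failure threshold are placeholders.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def expensive_model(x1, x2):
        # stand-in for a costly computer model
        return np.sin(3 * x1) + 0.5 * x2**2

    # structured "lattice" samples of the expensive model (here 9 x 9 = 81 runs)
    g1 = np.linspace(-2, 2, 9)
    g2 = np.linspace(-2, 2, 9)
    G1, G2 = np.meshgrid(g1, g2, indexing="ij")
    surrogate = RegularGridInterpolator((g1, g2), expensive_model(G1, G2))

    # decoupled Monte Carlo: a large sample evaluated on the cheap surrogate only
    rng = np.random.default_rng(2)
    x = rng.normal(0.0, 0.6, size=(500_000, 2)).clip(-2, 2)
    y = surrogate(x)
    print("P(failure) from response-surface Monte Carlo:", np.mean(y > 1.5))

All of the expensive-model evaluations go into building the surface; the half-million probability samples cost essentially nothing, which is the efficiency argument made in the abstract.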
Diamantis, Nikolaos G; Manousakis, Efstratios
2013-10-01
The diagrammatic Monte Carlo (DiagMC) method is a numerical technique which samples the entire diagrammatic series of the Green's function in quantum many-body systems. In this work, we incorporate the flat histogram principle in the diagrammatic Monte Carlo method, and we term the improved version the "flat histogram diagrammatic Monte Carlo" method. We demonstrate the superiority of this method over the standard DiagMC in extracting the long-imaginary-time behavior of the Green's function, without incorporating any a priori knowledge about this function, by applying the technique to the polaron problem.
Tryggestad, E; Armour, M; Iordachita, I; Verhaegen, F; Wong, J W
2011-01-01
Our group has constructed the small animal radiation research platform (SARRP) for delivering focal, kilo-voltage radiation to targets in small animals under robotic control using cone-beam CT guidance. The present work was undertaken to support the SARRP’s treatment planning capabilities. We have devised a comprehensive system for characterizing the radiation dosimetry in water for the SARRP and have developed a Monte Carlo dose engine with the intent of reproducing these measured results. We find that the SARRP provides sufficient therapeutic dose rates ranging from 102 to 228 cGy min−1 at 1 cm depth for the available set of high-precision beams ranging from 0.5 to 5 mm in size. In terms of depth–dose, the mean of the absolute percentage differences between the Monte Carlo calculations and measurement is 3.4% over the full range of sampled depths spanning 0.5–7.2 cm for the 3 and 5 mm beams. The measured and computed profiles for these beams agree well overall; of note, good agreement is observed in the profile tails. Especially for the smallest 0.5 and 1 mm beams, including a more realistic description of the effective x-ray source into the Monte Carlo model may be important. PMID:19687532
Multidimensional stochastic approximation Monte Carlo
NASA Astrophysics Data System (ADS)
Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1, E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1, E2).
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate a minimum weight shield configuration meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
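As a stand-alone, hedged illustration of why importance sampling matters in deep-penetration shielding problems (not the FASTER-III algorithm itself), the sketch below estimates the probability that an exponentially distributed free path exceeds a thick slab by sampling from a stretched distribution and weighting back to the true one.

    import numpy as np

    rng = np.random.default_rng(3)

    mfp = 1.0          # true mean free path
    depth = 20.0       # slab thickness in mean free paths; P(penetrate) = exp(-20) ~ 2e-9
    n = 100_000

    # analog Monte Carlo: essentially never scores at this depth
    analog = np.mean(rng.exponential(mfp, n) > depth)

    # importance sampling: sample from a stretched exponential and weight back
    mfp_biased = depth
    x = rng.exponential(mfp_biased, n)
    weights = (np.exp(-x / mfp) / mfp) / (np.exp(-x / mfp_biased) / mfp_biased)
    is_est = np.mean(weights * (x > depth))

    print("analog estimate:             ", analog)
    print("importance-sampling estimate:", is_est)
    print("exact value:                 ", np.exp(-depth / mfp))

The biased density oversamples long flights, and the likelihood-ratio weights keep the estimator unbiased, which is the essence of the importance-sampling option described for the shield calculations.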
Polarized light in birefringent samples (Conference Presentation)
NASA Astrophysics Data System (ADS)
Chue-Sang, Joseph; Bai, Yuqiang; Ramella-Roman, Jessica
2016-02-01
Full-field polarized light imaging provides the capability of investigating the alignment and density of birefringent tissue such as collagen abundantly found in scars, the cervix, and other sites of connective tissue. These can be indicators of disease and conditions affecting a patient. Two-dimensional polarized light Monte Carlo simulations which allow the input of an optical axis of a birefringent sample relative to a detector have been created and validated using optically anisotropic samples such as tendon; yet, unlike tendon, most collagen-based tissues are significantly less directional and anisotropic. Most important is the incorporation of three-dimensional structures for polarized light to interact with in order to simulate more realistic biological environments. Here we describe the development of a new polarization-sensitive Monte Carlo method capable of handling birefringent materials with any spatial distribution. The new computational platform is based on tissue digitization and classification including tissue birefringence and the principal axis of polarization. Validation of the system was conducted both numerically and experimentally.
Harnessing graphical structure in Markov chain Monte Carlo learning
Stolorz, P.E.; Chew, P.C.
1996-12-31
The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many datamining problems. Generalized Hidden Markov Models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focusses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably-crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.
Uncertainty importance analysis using parametric moment ratio functions.
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2014-02-01
This article presents a new importance analysis framework, called parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed, and the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction on the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators, so the computational cost does not grow with the input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance.
Johannesson, G; Chow, F K; Glascoe, L; Glaser, R E; Hanley, W G; Kosovic, B; Krnjajic, M; Larsen, S C; Lundquist, J K; Mirin, A A; Nitao, J J; Sugiyama, G A
2005-11-16
Atmospheric releases of hazardous materials are highly effective means to impact large populations. We propose an atmospheric event reconstruction framework that couples observed data and predictive computer-intensive dispersion models via Bayesian methodology. Due to the complexity of the model framework, a sampling-based approach is taken for posterior inference that combines Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) strategies.
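A minimal sketch of the MCMC ingredient of such a framework, using a one-parameter toy release-rate inference; the forward "dispersion model", noise level and prior here are hypothetical stand-ins, not the coupled dispersion code used by the authors.

    import numpy as np

    rng = np.random.default_rng(4)

    def forward_model(release_rate, distances):
        # trivial stand-in for a dispersion model: concentration falls off with distance
        return release_rate / (1.0 + distances**2)

    distances = np.array([1.0, 2.0, 3.0, 5.0])
    true_rate = 4.0
    obs = forward_model(true_rate, distances) + rng.normal(0, 0.05, distances.size)

    def log_posterior(rate):
        if rate <= 0:                       # flat prior on positive release rates
            return -np.inf
        resid = obs - forward_model(rate, distances)
        return -0.5 * np.sum((resid / 0.05) ** 2)

    # random-walk Metropolis sampling of the posterior
    chain, rate = [], 1.0
    lp = log_posterior(rate)
    for _ in range(20_000):
        prop = rate + rng.normal(0, 0.2)
        lp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            rate, lp = prop, lp_prop
        chain.append(rate)

    print("posterior mean release rate:", np.mean(chain[5000:]))

In the actual framework each log-posterior evaluation would call a computationally intensive dispersion model, which is what motivates combining MCMC with sequential Monte Carlo strategies.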
Quantum Monte Carlo Calculations Applied to Magnetic Molecules
Engelhardt, Larry
2006-01-01
We have calculated the equilibrium thermodynamic properties of Heisenberg spin systems using a quantum Monte Carlo (QMC) method. We have used some of these systems as models to describe recently synthesized magnetic molecules, and, upon comparing the results of these calculations with experimental data, have obtained accurate estimates for the basic parameters of these models. We have also performed calculations for other systems that are of more general interest, being relevant both for existing experimental data and for future experiments. Utilizing the concept of importance sampling, these calculations can be carried out in an arbitrarily large quantum Hilbert space, while still avoiding any approximations that would introduce systematic errors. The only errors are statistical in nature, and as such, their magnitudes are accurately estimated during the course of a simulation. Frustrated spin systems present a major challenge to the QMC method; nevertheless, in many instances progress can be made. In this chapter, the field of magnetic molecules is introduced, paying particular attention to the characteristics that distinguish magnetic molecules from other systems that are studied in condensed matter physics. We briefly outline the typical path by which we learn about magnetic molecules, which requires a close relationship between experiments and theoretical calculations. The typical experiments are introduced here, while the theoretical methods are discussed in the next chapter. Each of these theoretical methods has a considerable limitation, also described in Chapter 2, which together serve to motivate the present work. As is shown throughout the later chapters, the present QMC method is often able to provide useful information where other methods fail. In Chapter 3, the use of Monte Carlo methods in statistical physics is reviewed, building up the fundamental ideas that are necessary in order to understand the method that has been used in this work. With these
Markov Chain Monte Carlo and Irreversibility
NASA Astrophysics Data System (ADS)
Ottobre, Michela
2016-06-01
Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and we discuss some analytical methods to approach the study of continuous-time irreversible diffusions; ii) most of the rigorous results on irreversible diffusions are available for continuous-time processes; however, for computational purposes one needs to discretize such dynamics. It is well known that the resulting discretized chain will not, in general, retain all the good properties of the process that it is obtained from. In particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible. Therefore iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of a nonreversible dynamics.
Global Monte Carlo Simulation with High Order Polynomial Expansions
William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin
2007-12-13
The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as “local” piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi’s method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source
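A hedged one-dimensional sketch of the FET idea: for a source density on [-1, 1], the Legendre expansion coefficients can be tallied as sample means of the polynomials evaluated at sampled source sites, c_k = (2k+1)/2 E[P_k(X)]. The cosine-shaped toy source below is illustrative, not a fission-source model.

    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(5)

    # toy "fission sites": a cosine-shaped source on [-1, 1], sampled by rejection
    def sample_sites(n):
        out = []
        while len(out) < n:
            x = rng.uniform(-1, 1, n)
            keep = rng.uniform(0, 1, n) < np.cos(np.pi * x / 2)
            out.extend(x[keep][: n - len(out)])
        return np.array(out)

    sites = sample_sites(100_000)

    # FET: coefficient of P_k estimated from the sample mean of P_k(x) over the sites
    order = 6
    coeffs = np.array([(2 * k + 1) / 2 * np.mean(legendre.legval(sites, np.eye(order + 1)[k]))
                       for k in range(order + 1)])

    # reconstruct the source shape from the low-order expansion
    x = np.linspace(-1, 1, 5)
    print("reconstructed density:", legendre.legval(x, coeffs))
    print("true density:         ", np.pi / 4 * np.cos(np.pi * x / 2))

Because every sampled site contributes to every mode, the expansion carries information across the whole geometry, which is the convergence argument made for the fission-source application.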
Schultz, M G
1974-01-01
There have been 4 waves of imported malaria in the USA. They occurred during the colonization of the country and during the Second World War, the UN Police Action in Korea, and the Viet-Nam conflict. The first 3 episodes are briefly described and the data on imported malaria from Viet-Nam are discussed in detail.Endemic malaria is resurgent in many tropical countries and international travel is also on the rise. This increases the likelihood of malaria being imported from an endemic area and introduced into a receptive area. The best defence for countries threatened by imported malaria is a vigorous surveillance programme. The principles of surveillance are discussed and an example of their application is provided by a description of the methods used to conduct surveillance of malaria in the USA.
Inglis, Stephen; Melko, Roger G
2013-01-01
We implement a Wang-Landau sampling technique in quantum Monte Carlo (QMC) simulations for the purpose of calculating the Rényi entanglement entropies and associated mutual information. The algorithm converges an estimate for an analog to the density of states for stochastic series expansion QMC, allowing a direct calculation of Rényi entropies without explicit thermodynamic integration. We benchmark results for the mutual information on two-dimensional (2D) isotropic and anisotropic Heisenberg models, a 2D transverse field Ising model, and a three-dimensional Heisenberg model, confirming a critical scaling of the mutual information in cases with a finite-temperature transition. We discuss the benefits and limitations of broad sampling techniques compared to standard importance sampling methods.
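For readers unfamiliar with flat-histogram sampling, the sketch below is a bare-bones classical Wang-Landau estimate of the density of states g(E) for a small 2D Ising lattice; the stochastic-series-expansion version used in the paper is considerably more involved, so this only conveys the update rule (favour rarely visited energies, refine the modification factor once the histogram is roughly flat). Lattice size, flatness criterion and refinement schedule are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(6)
    L = 6
    N = L * L
    spins = rng.choice([-1, 1], size=(L, L))

    def total_energy(s):
        return int(-np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

    def delta_e(s, i, j):
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        return int(2 * s[i, j] * nb)

    idx = lambda e: (e + 2 * N) // 4        # energy bins: -2N, -2N+4, ..., 2N
    log_g = np.zeros(N + 1)                 # running estimate of ln g(E)
    hist = np.zeros(N + 1)
    f = 1.0                                 # ln of the modification factor
    E = total_energy(spins)

    for level in range(12):                 # halve f twelve times
        hist[:] = 0
        for batch in range(50):             # capped work per refinement level
            for _ in range(5000):
                i, j = rng.integers(L, size=2)
                dE = delta_e(spins, i, j)
                # Wang-Landau rule: accept moves towards rarely visited energies
                if np.log(rng.uniform()) < log_g[idx(E)] - log_g[idx(E + dE)]:
                    spins[i, j] *= -1
                    E += dE
                log_g[idx(E)] += f
                hist[idx(E)] += 1
            visited = hist[hist > 0]
            if visited.min() > 0.8 * visited.mean():   # crude flatness check
                break
        f /= 2.0

    print("ln[g(E=0)/g(ground state)] estimate:", log_g[idx(0)] - log_g[idx(-2 * N)])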
Efficient Simulation of Secondary Fluorescence Via NIST DTSA-II Monte Carlo.
Ritchie, Nicholas W M
2017-03-13
Secondary fluorescence, the final term in the familiar matrix correction triumvirate Z·A·F, is the most challenging for Monte Carlo models to simulate. In fact, only two implementations of Monte Carlo models commonly used to simulate electron probe X-ray spectra can calculate secondary fluorescence: PENEPMA and NIST DTSA-II (DTSA-II is discussed herein). These two models share many physical models but there are some important differences in the way each implements X-ray emission including secondary fluorescence. PENEPMA is based on PENELOPE, a general purpose software package for simulation of both relativistic and subrelativistic electron/positron interactions with matter. On the other hand, NIST DTSA-II was designed exclusively for simulation of X-ray spectra generated by subrelativistic electrons. NIST DTSA-II uses variance reduction techniques unsuited to a general purpose code. These optimizations help NIST DTSA-II to be orders of magnitude more computationally efficient while retaining detector position sensitivity. Simulations execute in minutes rather than hours and can model differences that result from detector position. Both PENEPMA and NIST DTSA-II are capable of handling complex sample geometries and we will demonstrate that both are of similar accuracy when modeling experimental secondary fluorescence data from the literature.
Modeling intermittent generation (IG) in a Monte-Carlo regional system analysis model
Yamayee, Z.A.
1984-01-01
A simulation model capable of simulating the operation of a given load/resource scenario is developed under the umbrella of PNUCC's System Analysis Committee. This model, called System Analysis Model (SAM), employs the Monte-Carlo technique to incorporate quantifiable uncertainties. Explicit uncertainties in SAM include: hydro conditions, load forecast errors, construction duration, availability of thermal units, renewable resources (wind, solar, geothermal, and biomass), cogeneration, and conservation. This paper presents an approach to modeling renewable resources, especially wind energy availability. Due to randomness of wind velocity at a wind site, and randomness from one site to another, it is important to have a model of uncertain wind energy availability. The model starts with historical hourly wind data at each site in the area covered by the Pacific Northwest Power Act (7). Using wind data, machine and site characteristics, along with Justus, et al. time series model for simulating hourly wind power, hourly energy for each site is calculated. Assuming independence between different sites, a probability density function for each month is computed. These density functions along with a uniformly distributed random number generator are used to draw observed seasonal and/or monthly energy for each of the Monte-Carlo games. The monthly observed energy along with a typical hourly shape for a month are used to calculate hourly observed wind energy for the hourly portion of SAM. A sample case study is made to show the approach.
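A hedged sketch of the sampling step described above: one uniform random number per Monte Carlo game is mapped through the empirical CDF of a historical record of monthly wind energy to produce the "observed" monthly energy. The historical data here are synthetic placeholders, not Pacific Northwest site data.

    import numpy as np

    rng = np.random.default_rng(7)

    # placeholder for historical monthly wind-energy totals at one site
    historical = rng.gamma(shape=2.0, scale=30.0, size=30)   # e.g. 30 years of January totals

    def draw_monthly_energy(history, n_draws):
        """Inverse-transform sampling from the empirical CDF of the historical record."""
        sorted_vals = np.sort(history)
        cdf = np.arange(1, len(sorted_vals) + 1) / len(sorted_vals)
        u = rng.uniform(size=n_draws)                # one uniform draw per Monte Carlo game
        return np.interp(u, cdf, sorted_vals)

    draws = draw_monthly_energy(historical, n_draws=1000)
    print("mean of draws vs history:", draws.mean(), historical.mean())

In the full model the drawn monthly total would then be spread over hours using a typical hourly shape, as described in the abstract.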
Farr, W. M.; Mandel, I.; Stevens, D.
2015-01-01
Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, yet it cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient ‘global' proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently. PMID:26543580
Structural mapping of Maxwell Montes
NASA Technical Reports Server (NTRS)
Keep, Myra; Hansen, Vicki L.
1993-01-01
Four sets of structures were mapped in the western and southern portions of Maxwell Montes. An early north-trending set of penetrative lineaments is cut by dominant, spaced ridges and paired valleys that trend northwest. To the south the ridges and valleys splay and graben form in the valleys. The spaced ridges and graben are cut by northeast-trending graben. The northwest-trending graben formed synchronously with or slightly later than the spaced ridges. Formation of the northeast-trending graben may have overlapped with that of the northwest-trending graben, but occurred in a spatially distinct area (regions of 2 deg slope). Graben formation, with northwest-southeast extension, may be related to gravity-sliding. Individually and collectively these structures are too small to support the immense topography of Maxwell, and are interpreted as parasitic features above a larger mass that supports the mountain belt.
Challenges of Monte Carlo Transport
Long, Alex Roberts
2016-06-10
These are slides from a presentation for Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. Open SHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.
More about Zener drag studies with Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Di Prinzio, Carlos L.; Druetta, Esteban; Nasello, Olga Beatriz
2013-03-01
Grain growth (GG) processes in the presence of second-phase and stationary particles have been widely studied but the results found are inconsistent. We present new GG simulations in two- and three-dimensional (2D and 3D) polycrystalline samples with second phase stationary particles, using the Monte Carlo technique. Simulations using values of particle concentration greater than 15% and particle radii different from 1 or 3 are performed, thus covering a range of particle radii and concentrations not previously studied. It is shown that only the results for 3D samples follow Zener's law.
Wet-based glaciation in Phlegra Montes, Mars.
NASA Astrophysics Data System (ADS)
Gallagher, Colman; Balme, Matt
2016-04-01
Eskers are sinuous landforms composed of sediments deposited from meltwaters in ice-contact glacial conduits. This presentation describes the first definitive identification of eskers on Mars still physically linked with their parent system (1), a Late Amazonian-age glacier (~150 Ma) in Phlegra Montes. Previously described Amazonian-age glaciers on Mars are generally considered to have been dry based, having moved by creep in the absence of subglacial water required for sliding, but our observations indicate significant sub-glacial meltwater routing. The confinement of the Phlegra Montes glacial system to a regionally extensive graben is evidence that the esker formed due to sub-glacial melting in response to an elevated, but spatially restricted, geothermal heat flux rather than climate-induced warming. Now, however, new observations reveal the presence of many assemblages of glacial abrasion forms and associated channels that could be evidence of more widespread wet-based glaciation in Phlegra Montes, including the collapse of several distinct ice domes. This landform assemblage has not been described in other glaciated, mid-latitude regions of the martian northern hemisphere. Moreover, Phlegra Montes are flanked by lowlands displaying evidence of extensive volcanism, including contact between plains lava and piedmont glacial ice. These observations provide a rationale for investigating non-climatic forcing of glacial melting and associated landscape development on Mars, and can build on insights from Earth into the importance of geothermally-induced destabilisation of glaciers as a key amplifier of climate change. (1) Gallagher, C. and Balme, M. (2015). Eskers in a complete, wet-based glacial system in the Phlegra Montes region, Mars, Earth and Planetary Science Letters, 431, 96-109.
Semiclassical Monte-Carlo approach for modelling non-adiabatic dynamics in extended molecules
Gorshkov, Vyacheslav N.; Tretiak, Sergei; Mozyrsky, Dmitry
2013-01-01
Modelling of non-adiabatic dynamics in extended molecular systems and solids is a next frontier of atomistic electronic structure theory. The underlying numerical algorithms should operate only with a few quantities (that can be efficiently obtained from quantum chemistry), provide a controlled approximation (which can be systematically improved) and capture important phenomena such as branching (multiple products), detailed balance and evolution of electronic coherences. Here we propose a new algorithm based on Monte-Carlo sampling of classical trajectories, which satisfies the above requirements and provides a general framework for existing surface hopping methods for non-adiabatic dynamics simulations. In particular, our algorithm can be viewed as a post-processing technique for analysing numerical results obtained from the conventional surface hopping approaches. Presented numerical tests for several model problems demonstrate efficiency and accuracy of the new method. PMID:23864100
Of bugs and birds: Markov Chain Monte Carlo for hierarchical modeling in wildlife research
Link, W.A.; Cam, E.; Nichols, J.D.; Cooch, E.G.
2002-01-01
Markov chain Monte Carlo (MCMC) is a statistical innovation that allows researchers to fit far more complex models to data than is feasible using conventional methods. Despite its widespread use in a variety of scientific fields, MCMC appears to be underutilized in wildlife applications. This may be due to a misconception that MCMC requires the adoption of a subjective Bayesian analysis, or perhaps simply to its lack of familiarity among wildlife researchers. We introduce the basic ideas of MCMC and software BUGS (Bayesian inference using Gibbs sampling), stressing that a simple and satisfactory intuition for MCMC does not require extraordinary mathematical sophistication. We illustrate the use of MCMC with an analysis of the association between latent factors governing individual heterogeneity in breeding and survival rates of kittiwakes (Rissa tridactyla). We conclude with a discussion of the importance of individual heterogeneity for understanding population dynamics and designing management plans.
COSMOABC: Likelihood-free inference via Population Monte Carlo Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Ishida, E. E. O.; Vitenti, S. D. P.; Penna-Lima, M.; Cisewski, J.; de Souza, R. S.; Trindade, A. M. M.; Cameron, E.; Busti, V. C.
2015-11-01
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogues. Here we present COSMOABC, a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code is very flexible and can be easily coupled to an external simulator, while allowing the user to incorporate arbitrary distance and prior functions. As an example of practical application, we coupled COSMOABC with the NUMCOSMO library and demonstrate how it can be used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function. COSMOABC is published under the GPLv3 license on PyPI and GitHub and documentation is available at http://goo.gl/SmB8EX.
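To make the ABC idea concrete, here is a minimal rejection-ABC sketch for a toy problem (inferring the mean of a Gaussian from a sample-mean summary statistic); COSMOABC's Population Monte Carlo variant adapts the proposal and tolerance over iterations instead of using the fixed prior draws and threshold assumed below.

    import numpy as np

    rng = np.random.default_rng(8)

    observed = rng.normal(3.0, 1.0, size=100)       # "catalogue" generated by the true model
    obs_summary = observed.mean()

    def simulator(mu):
        return rng.normal(mu, 1.0, size=100)

    # rejection ABC: keep parameters whose mock data lie close to the observation
    accepted = []
    while len(accepted) < 1000:
        mu = rng.uniform(-10, 10)                   # draw from the prior
        mock = simulator(mu)
        if abs(mock.mean() - obs_summary) < 0.1:    # distance threshold (tolerance)
            accepted.append(mu)

    accepted = np.array(accepted)
    print("ABC posterior mean and std:", accepted.mean(), accepted.std())

No likelihood is ever evaluated; only the simulator, a distance and a prior are needed, which is why the approach suits expensive forward models such as cluster number-count predictions.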
Fission Matrix Capability for MCNP Monte Carlo
Carney, Sean E.; Brown, Forrest B.; Kiedrowski, Brian C.; Martin, William R.
2012-09-05
In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k_eff). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP[1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
Large-cell Monte Carlo renormalization of irreversible growth processes
NASA Technical Reports Server (NTRS)
Nakanishi, H.; Family, F.
1985-01-01
Monte Carlo sampling is applied to a recently formulated direct-cell renormalization method for irreversible, disorderly growth processes. Large-cell Monte Carlo renormalization is carried out for various nonequilibrium problems based on the formulation dealing with relative probabilities. Specifically, the method is demonstrated by application to the 'true' self-avoiding walk and the Eden model of growing animals for d = 2, 3, and 4 and to the invasion percolation problem for d = 2 and 3. The results are asymptotically in agreement with expectations; however, unexpected complications arise, suggesting the possibility of crossovers, and in any case, demonstrating the danger of using small cells alone, because of the very slow convergence as the cell size b is extrapolated to infinity. The difficulty of applying the present method to the diffusion-limited-aggregation model is commented on.
Estimation of beryllium ground state energy by Monte Carlo simulation
Kabir, K. M. Ariful; Halder, Amal
2015-05-15
Quantum Monte Carlo methods represent a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schrödinger equation for atoms, molecules, solids and a variety of model systems. Using the variational Monte Carlo method we have calculated the ground state energy of the beryllium atom. Our calculations are based on a modified four-parameter trial wave function, which leads to a good result compared with the few-parameter trial wave functions presented before. Based on random numbers we can generate a large sample of electron locations to estimate the ground state energy of beryllium. Our calculation gives a good estimate of the ground state energy of the beryllium atom compared with the corresponding exact data.
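The beryllium calculation requires a four-parameter, multi-electron trial wave function; as a hedged stand-in, the sketch below applies the same variational Monte Carlo recipe (Metropolis sampling of |psi|^2 and averaging of the local energy) to a one-dimensional harmonic oscillator with a Gaussian trial function of adjustable width alpha, for which alpha = 0.5 recovers the exact energy 0.5.

    import numpy as np

    rng = np.random.default_rng(9)

    def local_energy(x, alpha):
        # E_L = -(1/2) psi''/psi + x^2/2 for psi = exp(-alpha x^2)
        return alpha + x**2 * (0.5 - 2.0 * alpha**2)

    def vmc_energy(alpha, n_steps=100_000, step=1.0):
        x, e_sum, n_kept = 0.0, 0.0, 0
        for k in range(n_steps):
            x_new = x + rng.uniform(-step, step)
            # Metropolis acceptance on |psi|^2
            if rng.uniform() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
                x = x_new
            if k > 1000:                      # discard equilibration steps
                e_sum += local_energy(x, alpha)
                n_kept += 1
        return e_sum / n_kept

    for alpha in (0.3, 0.5, 0.7):
        print(f"alpha = {alpha}: <E> = {vmc_energy(alpha):.4f}")

Minimizing the sampled energy over the trial-function parameters is the variational step; for beryllium the same loop runs over many-electron configurations and a correlated trial wave function.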
Fast Off-Lattice Monte Carlo Simulations with Soft Potentials
NASA Astrophysics Data System (ADS)
Zong, Jing; Yang, Delian; Yin, Yuhua; Zhang, Xinghua; Wang, Qiang (David)
2011-03-01
Fast off-lattice Monte Carlo simulations with soft repulsive potentials that allow particle overlapping give orders of magnitude faster/better sampling of the configurational space than conventional molecular simulations with hard-core repulsions (such as the hard-sphere or Lennard-Jones repulsion). Here we present our fast off-lattice Monte Carlo simulations ranging from small-molecule soft spheres and liquid crystals to polymeric systems including homopolymers and rod-coil diblock copolymers. The simulation results are compared with various theories based on the same Hamiltonian as in the simulations (thus without any parameter-fitting) to quantitatively reveal the consequences of approximations in these theories. Q. Wang and Y. Yin, J. Chem. Phys., 130, 104903 (2009).
Sign problem and Monte Carlo calculations beyond Lefschetz thimbles
Alexandru, Andrei; Basar, Gokce; Bedaque, Paulo F.; Ridgway, Gregory W.; Warrington, Neill C.
2016-05-10
We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action (“Lefschetz thimble”). We describe a family of such manifolds that interpolate between the tangent space at one critical point (where the sign problem is milder compared to the real plane but in some cases still severe) and the union of relevant thimbles (where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling). We exemplify this approach using a simple 0+1 dimensional fermion model previously used in sign problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.
Implicit sampling and its connection to variational data assimilation
NASA Astrophysics Data System (ADS)
Morzfeld, Matthias; Chorin, Alexandre
2013-04-01
Implicit sampling is a Monte Carlo importance sampling method and we present its application to data assimilation. The basic idea is to construct an importance function such that the samples (often called particles) are guided towards the high-probability regions of the posterior pdf, which is defined jointly by the model and the data. This is done in two steps. First, the high-probability regions are identified via numerical minimization; second, samples within the high-probability regions are obtained by solving data dependent algebraic equations with a random right-hand-side. Specifically, one first finds the mode of the posterior pdf and then solves algebraic equations to obtain samples in the neighborhood of this mode. In variational data assimilation, one finds the mode of the posterior pdf. There is thus a connection between implicit sampling and variational data assimilation. In particular, one can turn variational codes into implicit sampling methods (filters or smoothers) by adding a sampling step (i.e. solving simple algebraic equations). The benefit can be that implicit sampling can be used to obtain the conditional mean, which is the minimum mean square error estimate, as well as quantitative information about the uncertainty of this state estimate. We present an example in detail to explain the implicit sampling and the resulting data assimilation algorithms in their variational implementation.
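A hedged one-dimensional sketch of the two steps described above, for a toy non-Gaussian posterior: minimize F(x) = -log p(x|data) to find the mode, then for each reference Gaussian draw xi solve F(x) - F(x*) = xi^2/2 on the side indicated by the sign of xi. The Jacobian-based weight |dx/dxi| used here is one common choice, and F itself is purely illustrative.

    import numpy as np
    from scipy.optimize import brentq, minimize_scalar

    rng = np.random.default_rng(10)

    def F(x):
        # minus the log of a toy, unnormalized, non-Gaussian posterior density
        return 0.5 * (x - 1.0) ** 2 + 0.1 * x ** 4

    def dF(x):
        return (x - 1.0) + 0.4 * x ** 3

    # step 1: locate the mode of the posterior (the minimizer of F)
    x_mode = minimize_scalar(F).x
    phi = F(x_mode)

    # step 2: map each reference Gaussian draw xi to a sample by solving F(x) - phi = xi^2/2
    def implicit_sample(xi):
        if xi == 0.0:                        # measure-zero case: the sample is the mode itself
            return x_mode, 1.0 / np.sqrt(1.0 + 1.2 * x_mode ** 2)
        level = phi + 0.5 * xi ** 2
        if xi > 0:
            b = x_mode + 1.0
            while F(b) < level:              # expand the bracket to the right of the mode
                b += 1.0
            x = brentq(lambda t: F(t) - level, x_mode, b)
        else:
            a = x_mode - 1.0
            while F(a) < level:              # expand the bracket to the left of the mode
                a -= 1.0
            x = brentq(lambda t: F(t) - level, a, x_mode)
        return x, abs(xi / dF(x))            # weight proportional to the Jacobian |dx/dxi|

    xis = rng.normal(size=5000)
    samples, weights = map(np.array, zip(*(implicit_sample(xi) for xi in xis)))
    weights /= weights.sum()
    print("posterior mean estimate:", np.sum(weights * samples))

Because every sample lands near the mode by construction, the weights stay well behaved, which is the sense in which the samples are guided towards the high-probability region.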
Preliminary results of paleontological salvage at Belo Monte power plant construction.
Tomassi, H Z; Almeida, C M; Ferreira, B C; Brito, M B; Barberi, M; Rodrigues, G C; Teixeira, S P; Capuzzo, J P; Gama-Júnior, J M; Santos, M G K G
2015-08-01
In this paper some preliminary fossil specimens are presented. They represent a collection sampled by Belo Monte's Programa de Salvamento do Patrimônio Paleontológico (PSPP), which includes unprecedented invertebrate fauna and fossil vertebrates from the Pitinga, Jatapu, Manacapuru, Maecuru and Alter do Chão formations of the Amazonas basin, Brazil. The Belo Monte paleontological salvage was able to recover 495 microfossil samples and 1744 macrofossil samples over 30 months of sampling activities, and it is still ongoing. The macrofossils identified are possible plant remains, ichnofossils, graptolites, brachiopods, molluscs, arthropods, Agnatha, palynomorphs (miospores, acritarchs, algae cysts, fungi spores and unidentified types) and unidentified fossils. However, deep scientific research is not part of the scope of the program, and this collection must be further studied by researchers who visit Museu Paraense Emilio Goeldi, where the fossils will be housed. More material will be collected until the end of the program. The collection sampled allows a mosaic composition with the necessary elements to assign, in later papers, taxonomic features which may lead to accurate species identification and palaeoenvironmental interpretations.
Lifting—A nonreversible Markov chain Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Vucelja, Marija
2016-12-01
Markov chain Monte Carlo algorithms are invaluable tools for exploring stationary properties of physical systems, especially in situations where direct sampling is unfeasible. Common implementations of Monte Carlo algorithms employ reversible Markov chains. Reversible chains obey detailed balance and thus ensure that the system will eventually relax to equilibrium, though detailed balance is not necessary for convergence to equilibrium. We review nonreversible Markov chains, which violate detailed balance and yet still relax to a given target stationary distribution. In particular cases, nonreversible Markov chains are substantially better at sampling than the conventional reversible Markov chains with up to a square root improvement in the convergence time to the steady state. One kind of nonreversible Markov chain is constructed from the reversible ones by enlarging the state space and by modifying and adding extra transition rates to create non-reversible moves. Because of the augmentation of the state space, such chains are often referred to as lifted Markov Chains. We illustrate the use of lifted Markov chains for efficient sampling on several examples. The examples include sampling on a ring, sampling on a torus, the Ising model on a complete graph, and the one-dimensional Ising model. We also provide a pseudocode implementation, review related work, and discuss the applicability of such methods.
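A hedged sketch of the ring example: the state of a Metropolis walker on N sites is augmented with a direction variable, proposals always continue in the current direction, and the direction is flipped on rejection. This lifted chain is nonreversible yet leaves the target distribution (times a uniform distribution over the two directions) invariant; the target below is an arbitrary illustrative choice, and the flip rule shown is only one of the lifted constructions discussed in the literature.

    import numpy as np

    rng = np.random.default_rng(11)

    N = 50
    pi = np.exp(-0.5 * ((np.arange(N) - 25) / 6.0) ** 2)   # unnormalized target on the ring
    pi /= pi.sum()

    def lifted_chain(n_steps):
        i, direction = 0, 1
        visits = np.zeros(N)
        for _ in range(n_steps):
            j = (i + direction) % N
            if rng.uniform() < min(1.0, pi[j] / pi[i]):
                i = j                       # accepted: keep moving in the same direction
            else:
                direction = -direction      # rejected: stay put and reverse direction
            visits[i] += 1
        return visits / n_steps

    empirical = lifted_chain(200_000)
    print("max |empirical - target|:", np.abs(empirical - pi).max())

The persistent motion suppresses the diffusive back-and-forth of the reversible random-walk chain, which is the mechanism behind the square-root improvement in convergence time mentioned above.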
Burrows, John
2013-04-01
An introduction to the use of the mathematical technique of Monte Carlo simulations to evaluate least squares regression calibration is described. Monte Carlo techniques involve the repeated sampling of data from a population that may be derived from real (experimental) data, but is more conveniently generated by a computer using a model of the analytical system and a randomization process to produce a large database. Datasets are selected from this population and fed into the calibration algorithms under test, thus providing a facile way of producing a sufficiently large number of assessments of the algorithm to enable a statistically valid appraisal of the calibration process to be made. This communication provides a description of the technique that forms the basis of the results presented in Parts II and III of this series, which follow in this issue, and also highlights the issues arising from the use of small data populations in bioanalysis.
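A hedged sketch of the kind of exercise described (not the specific models of Parts II and III): many synthetic calibration datasets are generated from an assumed straight-line response with concentration-dependent noise, each is fitted by ordinary least squares, and the distribution of back-calculated concentrations for an unknown summarizes the bias and precision of the calibration procedure.

    import numpy as np

    rng = np.random.default_rng(12)

    true_slope, true_intercept = 2.0, 0.5
    conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])          # calibration standards

    estimates = []
    for _ in range(10_000):
        # simulate one calibration experiment with constant plus proportional noise
        response = true_intercept + true_slope * conc
        response = response + rng.normal(0, 0.05 + 0.02 * response)
        slope, intercept = np.polyfit(conc, response, 1)
        # back-calculate an unknown sample whose true concentration is 4.0
        unknown_response = true_intercept + true_slope * 4.0 + rng.normal(0, 0.05 + 0.02 * 8.5)
        estimates.append((unknown_response - intercept) / slope)

    estimates = np.array(estimates)
    print("bias:", estimates.mean() - 4.0, " RSD (%):", 100 * estimates.std() / 4.0)

Repeating the loop with fewer standards or different weighting schemes gives the head-to-head comparison of calibration algorithms that the series goes on to present.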
Atomistic Monte Carlo Simulation of Lipid Membranes
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules of which analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol. PMID:24469314
Pattern Recognition for a Flight Dynamics Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; Hurtado, John E.
2011-01-01
The design, analysis, and verification and validation of a spacecraft relies heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data but flight dynamics engineers lack the time and resources to analyze it all. The growing amounts of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
Drag coefficient modeling for grace using Direct Simulation Monte Carlo
NASA Astrophysics Data System (ADS)
Mehta, Piyush M.; McLaughlin, Craig A.; Sutton, Eric K.
2013-12-01
Drag coefficient is a major source of uncertainty in predicting the orbit of a satellite in low Earth orbit (LEO). Computational methods like the Test Particle Monte Carlo (TPMC) and Direct Simulation Monte Carlo (DSMC) are important tools in accurately computing physical drag coefficients. However, the methods are computationally expensive and cannot be employed real time. Therefore, modeling of the physical drag coefficient is required. This work presents a technique of developing parameterized drag coefficients models using the DSMC method. The technique is validated by developing a model for the Gravity Recovery and Climate Experiment (GRACE) satellite. Results show that drag coefficients computed using the developed model for GRACE agree to within 1% with those computed using DSMC.
Monte Carlo Shower Counter Studies
NASA Technical Reports Server (NTRS)
Snyder, H. David
1991-01-01
Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs are provided along with example data plots.
Quantum Monte Carlo methods and lithium cluster properties
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, and is shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.
Stete, Katarina; Kern, Winfried V; Rieg, Siegbert; Serr, Annerose; Maurer, Christian; Tintelnot, Kathrin; Wagner, Dirk
2015-06-01
Infections with Histoplasma capsulatum are rare in Germany and are mostly imported from endemic areas. Infections can present as localized or disseminated disease in immunocompromised as well as immunocompetent hosts. A travel history may be a major clue for diagnosing histoplasmosis. Diagnostic tools include histology, culture-based and molecular detection, and serology. Here we present four cases of patients diagnosed and treated in Freiburg between 2004 and 2013 that demonstrate the broad range of clinical manifestations of histoplasmosis: an immunocompetent patient with chronic basal meningitis; a patient with HIV infection and fatal disseminated disease; a patient with pulmonary and cutaneous disease and mediastinal and cervical lymphadenopathy; and an immunosuppressed patient with disseminated involvement of the lungs, bone marrow and adrenal glands.
Sample Size Tables, "t" Test, and a Prevalent Psychometric Distribution.
ERIC Educational Resources Information Center
Sawilowsky, Shlomo S.; Hillman, Stephen B.
Psychology studies often have low statistical power. Sample size tables, as given by J. Cohen (1988), may be used to increase power, but they are based on Monte Carlo studies of relatively "tame" mathematical distributions, as compared to psychology data sets. In this study, Monte Carlo methods were used to investigate Type I and Type II…
Event group importance measures for top event frequency analyses
1995-07-31
Three traditional importance measures, risk reduction, partial derivative, and variance reduction, have been extended to permit analyses of the relative importance of groups of underlying failure rates to the frequencies of resulting top events. The partial derivative importance measure was extended by assessing the contribution of a group of events to the gradient of the top event frequency. Given the moments of the distributions that characterize the uncertainties in the underlying failure rates, the expectation values of the top event frequency, its variance, and all of the new group importance measures can be quantified exactly for two familiar cases: (1) when all underlying failure rates are presumed independent, and (2) when pairs of failure rates based on common data are treated as being equal (totally correlated). In these cases, the new importance measures, which can also be applied to assess the importance of individual events, obviate the need for Monte Carlo sampling. The event group importance measures are illustrated using a small example problem and demonstrated by applications made as part of a major reactor facility risk assessment. These illustrations and applications indicate both the utility and the versatility of the event group importance measures.
Valence-bond quantum Monte Carlo algorithms defined on trees.
Deschner, Andreas; Sørensen, Erik S
2014-09-01
We present a class of algorithms for performing valence-bond quantum Monte Carlo simulations of quantum spin models. Valence-bond quantum Monte Carlo is a projective T=0 Monte Carlo method based on sampling of a set of operator strings that can be viewed as forming a treelike structure. The algorithms presented here utilize the notion of a worm that moves up and down this tree and changes the associated operator string. In quite general terms, we derive a set of equations whose solutions correspond to a whole class of algorithms. As specific examples of this class of algorithms, we focus on two cases: the bouncing worm algorithm, in which updates are always accepted by allowing the worm to bounce up and down the tree, and the driven worm algorithm, in which a single parameter controls how far up the tree the worm reaches before turning around. The latter algorithm involves only a single bounce, where the worm turns from going up the tree to going down. The presence of the control parameter necessitates the introduction of an acceptance probability for the update.
A New Approach to Monte Carlo Simulations in Statistical Physics
NASA Astrophysics Data System (ADS)
Landau, David P.
2002-08-01
Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions: critical slowing down near 2nd-order transitions and metastability near 1st-order transitions limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E 64, 056101 (2001).
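The energy-space random walk at the heart of this approach is compact enough to sketch. The following is a minimal illustration of a Wang-Landau-style flat-histogram walk for a small 2D Ising model; the lattice size, modification-factor schedule, and flatness threshold are illustrative choices, not values from the talk or the cited papers.

```python
# Minimal Wang-Landau sketch for an 8x8 periodic Ising model
# (illustrative parameters throughout; not from the cited work).
import numpy as np

rng = np.random.default_rng(0)
L = 8
N = L * L
spins = rng.choice([-1, 1], size=(L, L))

def total_energy(s):
    # each nearest-neighbor bond counted once via shifted copies
    return -int(np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1))))

def flip_cost(s, i, j):
    nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
    return 2 * s[i, j] * nn

energies = np.arange(-2 * N, 2 * N + 1, 4)   # two edge bins are never visited
index = {int(E): k for k, E in enumerate(energies)}
log_g = np.zeros(len(energies))              # running estimate of ln g(E)
hist = np.zeros(len(energies))
lnf = 1.0                                    # ln of the modification factor
E = total_energy(spins)

while lnf > 1e-4:
    for _ in range(20000):
        i, j = rng.integers(L, size=2)
        dE = flip_cost(spins, i, j)
        # accept with min(1, g(E_old)/g(E_new)) to flatten the energy histogram
        if np.log(rng.random()) < log_g[index[E]] - log_g[index[E + dE]]:
            spins[i, j] *= -1
            E += dE
        log_g[index[E]] += lnf
        hist[index[E]] += 1
    visited = hist[hist > 0]
    if visited.min() > 0.8 * visited.mean():  # crude flatness criterion
        hist[:] = 0
        lnf /= 2                              # ln f -> ln f / 2, i.e. f -> sqrt(f)
```

Once log_g has converged, canonical averages at any temperature follow by re-weighting, which is the sense in which all thermodynamic properties become available from a single run.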
Autocorrelation and Dominance Ratio in Monte Carlo Criticality Calculations
Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Kornreich, Drew E.
2003-11-15
The cycle-to-cycle correlation (autocorrelation) in Monte Carlo criticality calculations is analyzed in terms of the dominance ratio of the fission kernel. The mathematical analysis focuses on how the eigenfunctions of a fission kernel decay if operated on by the cycle-to-cycle error propagation operator of the Monte Carlo stationary source distribution. The analytical results obtained can be summarized as follows: When the dominance ratio of a fission kernel is close to unity, autocorrelation of the k-effective tallies is weak and may be negligible, while the autocorrelation of the source distribution is strong and decays slowly. The practical implication is that when one analyzes a critical reactor with a large dominance ratio by Monte Carlo methods, the confidence interval estimation of the fission rate and other quantities at individual locations must account for the strong autocorrelation. Numerical results are presented for sample problems with a dominance ratio of 0.85-0.99, where Shannon and relative entropies are utilized to exclude the influence of initial nonstationarity.
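To make the practical implication concrete, the sketch below contrasts the naive independent-cycle standard error of a tally with one inflated by the estimated integrated autocorrelation. The tally series is a synthetic AR(1) process standing in for cycle-wise fission-rate tallies; the correlation strength and the lag cutoff are illustrative assumptions.

```python
# Synthetic demonstration: autocorrelated cycle tallies and the resulting
# under-estimation of the standard error by the i.i.d. formula.
import numpy as np

rng = np.random.default_rng(1)
n, rho = 5000, 0.9                  # strong cycle-to-cycle correlation (assumed)
tally = np.empty(n)
tally[0] = rng.normal()
for i in range(1, n):               # AR(1) stand-in for cycle-wise tallies
    tally[i] = rho * tally[i - 1] + rng.normal()

var_iid = tally.var(ddof=1) / n     # naive variance of the mean, assumes independence

# integrated autocorrelation, summed over lags until it becomes negligible
c = tally - tally.mean()
denom = np.dot(c, c)
acf = np.array([np.dot(c[:n - k], c[k:]) / denom for k in range(1, 200)])
inflation = 1.0 + 2.0 * acf[acf > 0.01].sum()
print(f"naive s.e. {np.sqrt(var_iid):.4f}, "
      f"autocorrelation-corrected s.e. {np.sqrt(var_iid * inflation):.4f}")
```

For rho = 0.9 the variance correction factor approaches (1 + rho)/(1 - rho) ≈ 19, i.e. the naive confidence interval is more than four times too narrow.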
Accelerating Monte Carlo power studies through parametric power estimation.
Ueckert, Sebastian; Karlsson, Mats O; Hooker, Andrew C
2016-04-01
Estimating the power for a non-linear mixed-effects model-based analysis is challenging due to the lack of a closed-form analytic expression. Often, computationally intensive Monte Carlo studies need to be employed to evaluate the power of a planned experiment. This is especially time consuming if full power versus sample size curves are to be obtained. A novel parametric power estimation (PPE) algorithm utilizing the theoretical distribution of the alternative hypothesis is presented in this work. The PPE algorithm estimates the unknown non-centrality parameter in the theoretical distribution from a limited number of Monte Carlo simulations and estimations. The estimated parameter linearly scales with study size, allowing a quick generation of the full power versus study size curve. A comparison of the PPE with the classical, purely Monte Carlo-based power estimation (MCPE) algorithm for five diverse pharmacometric models showed an excellent agreement between both algorithms, with a low bias of less than 1.2% and higher precision for the PPE. The power extrapolated from a specific study size was in very good agreement with power curves obtained with the MCPE algorithm. PPE represents a promising approach to accelerate the power calculation for non-linear mixed-effects models.
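A toy version of the PPE idea can be written in a few lines: estimate the noncentrality parameter of the assumed noncentral chi-square distribution of the likelihood-ratio statistic from a handful of Monte Carlo replicates, then scale it linearly with study size. The two-group z-test model below is a hypothetical stand-in for the pharmacometric models of the paper.

```python
# Sketch of parametric power estimation (PPE) on a toy two-group model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
df, alpha = 1, 0.05
crit = stats.chi2.ppf(1 - alpha, df)          # critical value of the LRT

def simulate_lrt(n_per_group, effect=0.3, n_mc=50):
    """A few Monte Carlo replicates of the LRT statistic (here, z^2)."""
    lrt = np.empty(n_mc)
    for r in range(n_mc):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect, 1.0, n_per_group)
        z = (b.mean() - a.mean()) / np.sqrt(2.0 / n_per_group)
        lrt[r] = z ** 2
    return lrt

n0 = 50                                        # reference study size
lam0 = max(simulate_lrt(n0).mean() - df, 0.0)  # E[noncentral chi2] = df + lambda

for n in (25, 50, 100, 200):                   # full power-vs-N curve, no new MC runs
    lam = lam0 * n / n0                        # noncentrality scales linearly with N
    print(f"N={n:4d}  estimated power = {stats.ncx2.sf(crit, df, lam):.3f}")
```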
Improved Monte Carlo Renormalization Group Method
DOE R&D Accomplishments Database
Gupta, R.; Wilson, K. G.; Umrigar, C.
1985-01-01
An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.
PEREGRINE: Bringing Monte Carlo based treatment planning calculations to today's clinic
Patterson, R; Daly, T; Garrett, D; Hartmann-Siantar, C; House, R; May, S
1999-12-13
Monte Carlo simulation of radiotherapy is now available for routine clinical use. It brings improved accuracy of dose calculations for treatments where important physics comes into play, and provides a robust, general tool for planning where empirical solutions have not been implemented. Through the use of Monte Carlo, new information, including the effects of the composition of materials in the patient, the effects of electron transport, and the details of the distribution of energy deposition, can be applied to the field. PEREGRINE™ is a Monte Carlo dose calculation solution that was designed and built specifically for the purpose of providing a practical, affordable Monte Carlo capability to the clinic. The system solution was crafted to facilitate insertion of this powerful tool into day-to-day treatment planning, while being extensible to accommodate improvements in techniques, computers, and interfaces.
NASA Astrophysics Data System (ADS)
Dick, G. J.; Andersson, A.; Banfield, J. F.
2007-12-01
Our understanding of environmental microbiology has been greatly enhanced by community genome sequencing of DNA recovered directly from the environment. Community genomics provides insights into the diversity, community structure, metabolic function, and evolution of natural populations of uncultivated microbes, thereby revealing dynamics of how microorganisms interact with each other and their environment. Recent studies have demonstrated the potential for reconstructing near-complete genomes from natural environments while highlighting the challenges of analyzing community genomic sequence, especially from diverse environments. A major challenge of shotgun community genome sequencing is identification of DNA fragments from minor community members for which only low coverage of genomic sequence is present. We analyzed community genome sequence retrieved from biofilms in an acid mine drainage (AMD) system in the Richmond Mine at Iron Mountain, CA, with an emphasis on identification and assembly of DNA fragments from low-abundance community members. The Richmond mine hosts an extensive, relatively low diversity subterranean chemolithoautotrophic community that is sustained entirely by oxidative dissolution of pyrite. The activity of these microorganisms greatly accelerates the generation of AMD. Previous and ongoing work in our laboratory has focused on reconstructing genomes of dominant community members, including several bacteria and archaea. We binned contigs from several samples (including one new sample and two that had been previously analyzed) by tetranucleotide frequency with clustering by Self-Organizing Maps (SOM). The binning, evaluated by comparison with information from the manually curated assembly of the dominant organisms, was found to be very effective: fragments were correctly assigned with 95% accuracy. Improperly assigned fragments often contained sequences that are either evolutionarily constrained (e.g. 16S rRNA genes) or mobile elements that are
Kernel density estimator methods for Monte Carlo radiation transport
NASA Astrophysics Data System (ADS)
Banerjee, Kaushik
In this dissertation, the Kernel Density Estimator (KDE), a nonparametric probability density estimator, is studied and used to represent global Monte Carlo (MC) tallies. KDE is also employed to remove the singularities from two important Monte Carlo tallies, namely point detector and surface crossing flux tallies. Finally, KDE is also applied to accelerate the Monte Carlo fission source iteration for criticality problems. In conventional MC calculations, histograms are used to represent global tallies, which divide the phase space into multiple bins. Partitioning the phase space into bins can add significant overhead to the MC simulation, and the histogram provides only a first order approximation to the underlying distribution. The KDE method is attractive because it can estimate MC tallies in any location within the required domain without any particular bin structure. Post-processing of the KDE tallies is sufficient to extract detailed, higher order tally information for an arbitrary grid. The quantitative and numerical convergence properties of KDE tallies are also investigated and they are shown to be superior to conventional histograms as well as the functional expansion tally developed by Griesheimer. Monte Carlo point detector and surface crossing flux tallies are two widely used tallies but they suffer from an unbounded variance. As a result, the central limit theorem cannot be used for these tallies to estimate confidence intervals. By construction, KDE tallies can be directly used to estimate flux at a point but the variance of this point estimate does not converge as 1/N, which is not unexpected for a point quantity. However, an improved approach is to modify both point detector and surface crossing flux tallies directly by using KDE within a variance reduction approach by taking advantage of the fact that KDE estimates the underlying probability density function. This methodology is demonstrated by several numerical examples and demonstrates that
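The contrast between a KDE tally and a histogram tally is easy to illustrate. In the sketch below, synthetic draws stand in for Monte Carlo collision sites; the Gaussian kernel, bandwidth, and evaluation grid are all illustrative choices rather than anything from the dissertation.

```python
# KDE tally versus histogram tally on mock collision-site data.
import numpy as np

rng = np.random.default_rng(3)
sites = rng.exponential(scale=1.0, size=5000)     # mock collision depths

def kde_tally(x, samples, h=0.1):
    """Gaussian-kernel density estimate at arbitrary points x (no bin structure)."""
    u = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

x = np.linspace(0.0, 5.0, 51)
smooth = kde_tally(x, sites)                      # evaluable anywhere, post-processable

# histogram tally: piecewise constant and tied to a bin structure chosen in advance
hist, edges = np.histogram(sites, bins=20, range=(0.0, 5.0), density=True)
```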
Monte Carlo simulation of particle acceleration at astrophysical shocks
NASA Technical Reports Server (NTRS)
Campbell, Roy K.
1989-01-01
A Monte Carlo code was developed for the simulation of particle acceleration at astrophysical shocks. The code is implemented in Turbo Pascal on a PC. It is modularized and structured in such a way that modification and maintenance are relatively painless. Monte Carlo simulations of particle acceleration at shocks follow the trajectories of individual particles as they scatter repeatedly across the shock front, gaining energy with each crossing. The particles are assumed to scatter from magnetohydrodynamic (MHD) turbulence on both sides of the shock. A scattering law is used which is related to the assumed form of the turbulence and to the particle and shock parameters. High-energy cosmic-ray spectra derived from Monte Carlo simulations show the observed power-law behavior, just as the spectra derived from analytic calculations based on a diffusion equation do. This high-energy behavior is not sensitive to the scattering law used. In contrast with Monte Carlo calculations, diffusive calculations rely on the initial injection of supra-thermal particles into the shock environment. Monte Carlo simulations are the only known way to describe the extraction of particles directly from the thermal pool; this has been the triumph of the Monte Carlo approach. The question of acceleration efficiency is an important one in shock acceleration. Whether shock waves are efficient enough to account for the observed flux of high-energy galactic cosmic rays was examined. The efficiency of the acceleration process depends in detail on the thermal-particle pick-up and hence on the low-energy scattering. One of the goals is the self-consistent derivation of the accelerated particle spectra and the MHD turbulence spectra. Presumably the upstream turbulence, which scatters the particles so they can be accelerated, is excited by the streaming accelerated particles, and the needed downstream turbulence is convected from the upstream region. The present code is to be modified to include a better
First principles Monte Carlo simulations of aggregation in the vapor phase of hydrogen fluoride
McGrath, Matthew J.; Ghogomu, Julius. N.; Mundy, Christopher J.; Kuo, I-F. Will; Siepmann, J. Ilja
2010-01-01
The aggregation of superheated hydrogen fluoride vapor is explored through the use of Monte Carlo simulations employing Kohn-Sham density functional theory with the exchange/correlation functional of Becke-Lee-Yang-Parr to describe the molecular interactions. Simulations were carried out in the canonical ensemble for a system consisting of ten molecules at constant density (2700 Å³/molecule) and at three different temperatures (T = 310, 350, and 390 K). Aggregation-volume-bias and configurational-bias Monte Carlo approaches (along with pre-sampling with an approximate potential) were employed to increase the sampling efficiency of cluster formation and destruction.
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Lazopoulos, Achilleas
2006-07-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
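The failure mode of the standard estimator is simple to demonstrate numerically. The check below integrates a smooth function over the unit square with a Halton point set: the actual error is far smaller than the i.i.d.-based estimate, which is exactly the mismatch the abstract addresses. The integrand and point count are arbitrary choices for the demonstration.

```python
# Standard-MC error estimator applied (wrongly) to a quasi-Monte Carlo point set.
import numpy as np

def halton(n, base):
    """First n points of the van der Corput sequence in the given base."""
    seq = np.empty(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

n = 4096
pts = np.column_stack([halton(n, 2), halton(n, 3)])  # 2D Halton set
f = np.prod(3.0 * pts**2, axis=1)                    # exact integral over [0,1]^2 is 1

est = f.mean()
naive = f.std(ddof=1) / np.sqrt(n)                   # i.i.d. assumption baked in
print(f"estimate {est:.6f}  true error {abs(est - 1.0):.1e}  naive estimate {naive:.1e}")
```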
Monte Carlo docking with ubiquitin.
Cummings, M. D.; Hart, T. N.; Read, R. J.
1995-01-01
The development of general strategies for the performance of docking simulations is prerequisite to the exploitation of this powerful computational method. Comprehensive strategies can only be derived from docking experiences with a diverse array of biological systems, and we have chosen the ubiquitin/diubiquitin system as a learning tool for this process. Using our multiple-start Monte Carlo docking method, we have reconstructed the known structure of diubiquitin from its two halves as well as from two copies of the uncomplexed monomer. For both of these cases, our relatively simple potential function ranked the correct solution among the lowest energy configurations. In the experiments involving the ubiquitin monomer, various structural modifications were made to compensate for the lack of flexibility and for the lack of a covalent bond in the modeled interaction. Potentially flexible regions could be identified using available biochemical and structural information. A systematic conformational search ruled out the possibility that the required covalent bond could be formed in one family of low-energy configurations, which was distant from the observed dimer configuration. A variety of analyses was performed on the low-energy dockings obtained in the experiment involving structurally modified ubiquitin. Characterization of the size and chemical nature of the interface surfaces was a powerful adjunct to our potential function, enabling us to distinguish more accurately between correct and incorrect dockings. Calculations with the structure of tetraubiquitin indicated that the dimer configuration in this molecule is much less favorable than that observed in the diubiquitin structure, for a simple monomer-monomer pair. Based on the analysis of our results, we draw conclusions regarding some of the approximations involved in our simulations, the use of diverse chemical and biochemical information in experimental design and the analysis of docking results, as well as
A user's manual for MASH 1.0: A Monte Carlo Adjoint Shielding Code System
Johnson, J.O.
1992-03-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
Monte Carlo simulations of electron transport in strongly attaching gases
NASA Astrophysics Data System (ADS)
Petrovic, Zoran; Miric, Jasmina; Simonovic, Ilija; Bosnjakovic, Danko; Dujko, Sasa
2016-09-01
Extensive loss of electrons in strongly attaching gases imposes significant difficulties in Monte Carlo simulations at low electric field strengths. In order to compensate for such losses, some kind of rescaling procedures must be used. In this work, we discuss two rescaling procedures for Monte Carlo simulations of electron transport in strongly attaching gases: (1) discrete rescaling, and (2) continuous rescaling. The discrete rescaling procedure is based on duplication of electrons randomly chosen from the remaining swarm at certain discrete time steps. The continuous rescaling procedure employs a dynamically defined fictitious ionization process with the constant collision frequency chosen to be equal to the attachment collision frequency. These procedures should not in any way modify the distribution function. Monte Carlo calculations of transport coefficients for electrons in SF6 and CF3I are performed in a wide range of electric field strengths. However, special emphasis is placed upon the analysis of transport phenomena in the limit of lower electric fields where the transport properties are strongly affected by electron attachment. Two important phenomena arise: (1) the reduction of the mean energy with increasing E/N for electrons in SF6, and (2) the occurrence of negative differential conductivity in the bulk drift velocity of electrons in both SF6 and CF3I.
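The discrete rescaling procedure, in particular, amounts to a few lines of bookkeeping around the transport loop. The sketch below shows only that bookkeeping, with a bare array of electron energies and a fixed attachment probability as stand-ins for a real free-flight/collision simulation of SF6 or CF3I.

```python
# Discrete rescaling: duplicate randomly chosen survivors once the swarm halves.
import numpy as np

rng = np.random.default_rng(4)
n0 = 10000
swarm = rng.gamma(2.0, 1.0, n0)           # mock electron energies (illustrative)

def transport_step(energies, attach_prob=0.02):
    """Placeholder time step: each electron is lost to attachment with fixed probability."""
    return energies[rng.random(len(energies)) > attach_prob]

for t in range(500):
    swarm = transport_step(swarm)
    if len(swarm) < n0 // 2:              # rescaling trigger at half the population
        clones = rng.choice(swarm, size=n0 - len(swarm), replace=True)
        swarm = np.concatenate([swarm, clones])   # duplicates inherit their state
```

Because the clones are drawn uniformly from the survivors, the energy (and, in a full simulation, velocity) distribution is untouched, which is the requirement stated above.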
A novel parallel-rotation algorithm for atomistic Monte Carlo simulation of dense polymer systems
NASA Astrophysics Data System (ADS)
Santos, S.; Suter, U. W.; Müller, M.; Nievergelt, J.
2001-06-01
We develop and test a new elementary Monte Carlo move for use in the off-lattice simulation of polymer systems. This novel Parallel-Rotation algorithm (ParRot) permits very efficient moves of torsion angles deep inside long chains in melts. The parallel-rotation move is extremely simple and is also demonstrated to be computationally efficient and appropriate for Monte Carlo simulation. The ParRot move does not affect the orientation of those parts of the chain outside the moving unit. The move consists of a concerted rotation around four adjacent skeletal bonds. No assumption is made concerning the backbone geometry other than that bond lengths and bond angles are held constant during the elementary move. Properly weighted sampling techniques are needed for ensuring detailed balance because the new move involves a correlated change in four degrees of freedom along the chain backbone. The ParRot move is supplemented with the classical Metropolis Monte Carlo, the Continuum-Configurational-Bias, and Reptation techniques in an isothermal-isobaric Monte Carlo simulation of melts of short and long chains. Comparisons are made with the capabilities of other Monte Carlo techniques to move the torsion angles in the middle of the chains. We demonstrate that ParRot constitutes a highly promising Monte Carlo move for the treatment of long polymer chains in the off-lattice simulation of realistic models of dense polymer systems.
Exploring mass perception with Markov chain Monte Carlo.
Cohen, Andrew L; Ross, Michael G
2009-12-01
Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal interparticipant differences and a qualitative distinction between the perception of 1:1 and 1:2 ratios. The results strongly suggest that participants' perceptions of 1:1 collisions are described by simple heuristics. The evidence for 1:2 collisions favors heuristic perception models that are sensitive to the sign but not the magnitude of perceived mass differences.
Multidimensional master equation and its Monte-Carlo simulation.
Pang, Juan; Bai, Zhan-Wu; Bao, Jing-Dong
2013-02-28
We derive an integral form of the multidimensional master equation for a Markovian process, in which the transition function is obtained in terms of a set of discrete Langevin equations. The solution of the master equation, namely the probability density function, is calculated by using the Monte-Carlo composite sampling method. In comparison with the usual Langevin-trajectory simulation, the present approach effectively decreases the coarse-graining error. We apply the master equation to investigate the time-dependent barrier escape rate of a particle from a two-dimensional metastable potential and show the advantage of this approach in the calculation of quantities that depend on the probability density function.
Non-Boltzmann Ensembles and Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Murthy, K. P. N.
2016-10-01
Boltzmann sampling based on the Metropolis algorithm has been extensively used for simulating a canonical ensemble and for calculating macroscopic properties of a closed system at desired temperatures. An estimate of a mechanical property, like energy, of an equilibrium system is made by averaging over a large number of microstates generated by Boltzmann Monte Carlo methods. This is possible because we can assign a numerical value for energy to each microstate. However, a thermal property like entropy is not easily accessible to these methods. The reason is simple: we cannot assign a numerical value for entropy to a microstate. Entropy is not a property associated with any single microstate. It is a collective property of all the microstates. Toward calculating entropy and other thermal properties, a non-Boltzmann Monte Carlo technique called umbrella sampling was proposed some forty years ago. Umbrella sampling has since undergone several metamorphoses and we now have multicanonical Monte Carlo, entropic sampling, flat histogram methods, the Wang-Landau algorithm, etc. This class of methods generates non-Boltzmann ensembles, which are unphysical. However, physical quantities can be calculated as follows. First un-weight a microstate of the entropic ensemble; then re-weight it to the desired physical ensemble. Carry out a weighted average over the entropic ensemble to estimate physical quantities. In this talk I shall tell you of the most recent non-Boltzmann Monte Carlo method and show how to calculate free energy for a few systems. We first consider estimation of free energy as a function of energy at different temperatures to characterize the phase transition in a hairpin DNA in the presence of an unzipping force. Next we consider free energy as a function of order parameter, and to this end we estimate the density of states g(E, M) as a function of both energy E and order parameter M. This is carried out in two stages. We estimate g(E) in the first stage. Employing g
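The un-weight/re-weight step is worth spelling out. Given an estimate of the density of states from any flat-histogram run, canonical expectation values at an arbitrary temperature follow by re-weighting; the synthetic ln g(E) below is a hypothetical stand-in for real output.

```python
# Re-weighting a non-Boltzmann (entropic) ensemble to canonical averages.
import numpy as np

energies = np.linspace(-2.0, 2.0, 401)     # energy grid (illustrative)
log_g = 100.0 * (1.0 - energies**2 / 4.0)  # hypothetical ln g(E), not real output

def canonical_average(obs, beta):
    """<obs(E)> at inverse temperature beta from ln g(E)."""
    w = log_g - beta * energies            # ln[g(E) exp(-beta E)]
    w -= w.max()                           # stabilize before exponentiating
    p = np.exp(w)
    p /= p.sum()
    return float(np.sum(obs(energies) * p))

for beta in (0.5, 1.0, 2.0):
    print(f"beta={beta:.1f}  <E> = {canonical_average(lambda E: E, beta):+.4f}")
```

The free energy follows from the same weights, F(beta) = -(1/beta) ln Σ_E g(E) e^(-beta E), which is how a single density-of-states estimate serves all temperatures.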
Neutron stimulated emission computed tomography: a Monte Carlo simulation approach.
Sharma, A C; Harrawood, B P; Bender, J E; Tourassi, G D; Kapadia, A J
2007-10-21
A Monte Carlo simulation has been developed for neutron stimulated emission computed tomography (NSECT) using the GEANT4 toolkit. NSECT is a new approach to biomedical imaging that allows spectral analysis of the elements present within the sample. In NSECT, a beam of high-energy neutrons interrogates a sample and the nuclei in the sample are stimulated to an excited state by inelastic scattering of the neutrons. The characteristic gammas emitted by the excited nuclei are captured in a spectrometer to form multi-energy spectra. Currently, a tomographic image is formed using a collimated neutron beam to define the line integral paths for the tomographic projections. These projection data are reconstructed to form a representation of the distribution of individual elements in the sample. To facilitate the development of this technique, a Monte Carlo simulation model has been constructed from the GEANT4 toolkit. This simulation includes modeling of the neutron beam source and collimation, the samples, the neutron interactions within the samples, the emission of characteristic gammas, and the detection of these gammas in a Germanium crystal. In addition, the model allows the absorbed radiation dose to be calculated for internal components of the sample. NSECT presents challenges not typically addressed in Monte Carlo modeling of high-energy physics applications. In order to address issues critical to the clinical development of NSECT, this paper will describe the GEANT4 simulation environment and three separate simulations performed to accomplish three specific aims. First, comparison of a simulation to a tomographic experiment will verify the accuracy of both the gamma energy spectra produced and the positioning of the beam relative to the sample. Second, parametric analysis of simulations performed with different user-defined variables will determine the best way to effectively model low energy neutrons in tissue, which is a concern with the high hydrogen content in
Monte Carlo Volcano Seismic Moment Tensors
NASA Astrophysics Data System (ADS)
Waite, G. P.; Brill, K. A.; Lanza, F.
2015-12-01
Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelength of the VLP data. The nonlinear inversion reveals well resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.
Improved diffusion Monte Carlo propagators for bosonic systems using Itô calculus.
Håkansson, P; Mella, M; Bressanini, Dario; Morosi, Gabriele; Patrone, Marta
2006-11-14
The construction of importance sampled diffusion Monte Carlo (DMC) schemes accurate to second order in the time step is discussed. A central aspect in obtaining efficient second order schemes is the numerical solution of the stochastic differential equation (SDE) associated with the Fokker-Planck equation responsible for the importance sampling procedure. In this work, stochastic predictor-corrector schemes solving the SDE and consistent with Itô calculus are used in DMC simulations of helium clusters. These schemes are numerically compared with alternative algorithms obtained by splitting the Fokker-Planck operator, an approach that we analyze using the analytical tools provided by Itô calculus. The numerical results show that predictor-corrector methods are indeed accurate to second order in the time step and that they present a smaller time step bias and a better efficiency than second order split-operator derived schemes when computing ensemble averages for bosonic systems. The possible extension of the predictor-corrector methods to higher orders is also discussed.
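The difference between the schemes can be seen on a one-dimensional toy problem. The sketch below integrates the importance-sampled drift-diffusion SDE with a plain Euler step and with a trapezoidal predictor-corrector step; the Gaussian trial function and all parameters are toy assumptions, not the helium-cluster setup of the paper.

```python
# Euler versus predictor-corrector integration of the DMC drift-diffusion SDE.
import numpy as np

rng = np.random.default_rng(5)

def drift(x):
    # D * grad ln psi_T^2 with D = 1/2 and trial function psi_T = exp(-x^2 / 2)
    return -x

def euler(x, dt, dw):
    return x + drift(x) * dt + dw

def predictor_corrector(x, dt, dw):
    xp = x + drift(x) * dt + dw                        # predictor: plain Euler
    return x + 0.5 * (drift(x) + drift(xp)) * dt + dw  # trapezoidal corrector

dt, nsteps, nwalk = 0.1, 1000, 50000
for stepper in (euler, predictor_corrector):
    x = rng.normal(0.0, 1.0, nwalk)
    for _ in range(nsteps):
        x = stepper(x, dt, rng.normal(0.0, np.sqrt(dt), nwalk))
    # the exact stationary density is psi_T^2 = exp(-x^2), for which <x^2> = 0.5
    print(f"{stepper.__name__}: <x^2> = {np.mean(x**2):.4f}")
```

At this coarse dt the Euler average overshoots 0.5 noticeably while the corrected step stays close to exact, mirroring the smaller time-step bias reported for the predictor-corrector schemes.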
Identifying influential observations in Bayesian models by using Markov chain Monte Carlo.
Jackson, Dan; White, Ian R; Carpenter, James
2012-05-20
In statistical modelling, it is often important to know how much parameter estimates are influenced by particular observations. An attractive approach is to re-estimate the parameters with each observation deleted in turn, but this is computationally demanding when fitting models by using Markov chain Monte Carlo (MCMC), as obtaining complete sample estimates is often in itself a very time-consuming task. Here we propose two efficient ways to approximate the case-deleted estimates by using output from MCMC estimation. Our first proposal, which directly approximates the usual influence statistics in maximum likelihood analyses of generalised linear models (GLMs), is easy to implement and avoids any further evaluation of the likelihood. Hence, unlike the existing alternatives, it does not become more computationally intensive as the model complexity increases. Our second proposal, which utilises model perturbations, also has this advantage and does not require the form of the GLM to be specified. We show how our two proposed methods are related and evaluate them against the existing method of importance sampling and case deletion in a logistic regression analysis with missing covariates. We also provide practical advice for those implementing our procedures, so that they may be used in many situations where MCMC is used to fit statistical models.
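For orientation, the importance-sampling baseline against which the two proposals are evaluated can be stated very compactly: re-weight each posterior draw by the reciprocal of the deleted observation's likelihood contribution. The normal-mean model below is a hypothetical example chosen so that the likelihood terms are trivial.

```python
# Case-deletion by importance re-weighting of MCMC output (the baseline method).
import numpy as np

rng = np.random.default_rng(6)
y = rng.normal(1.0, 1.0, 30)                        # data, sigma = 1 known
# stand-in "MCMC draws": the posterior of the mean under a flat prior is normal
theta = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), 5000)

def case_deleted_mean(i):
    """E[theta | y without case i], via weights proportional to 1 / f(y_i | theta)."""
    log_w = 0.5 * (y[i] - theta) ** 2               # log 1/N(y_i | theta, 1), up to a constant
    log_w -= log_w.max()                            # stabilize
    w = np.exp(log_w)
    return np.sum(w * theta) / np.sum(w)

influence = np.array([theta.mean() - case_deleted_mean(i) for i in range(len(y))])
print("most influential observation:", int(np.argmax(np.abs(influence))))
```

As the abstract notes, such weights can degenerate as model complexity grows, which is what motivates the two proposed approximations.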
Variational method for estimating the rate of convergence of Markov-chain Monte Carlo algorithms.
Casey, Fergal P; Waterfall, Joshua J; Gutenkunst, Ryan N; Myers, Christopher R; Sethna, James P
2008-10-01
We demonstrate the use of a variational method to determine a quantitative lower bound on the rate of convergence of Markov chain Monte Carlo (MCMC) algorithms as a function of the target density and proposal density. The bound relies on approximating the second largest eigenvalue in the spectrum of the MCMC operator using a variational principle and the approach is applicable to problems with continuous state spaces. We apply the method to one dimensional examples with Gaussian and quartic target densities, and we contrast the performance of the random walk Metropolis-Hastings algorithm with a "smart" variant that incorporates gradient information into the trial moves, a generalization of the Metropolis adjusted Langevin algorithm. We find that the variational method agrees quite closely with numerical simulations. We also see that the smart MCMC algorithm often fails to converge geometrically in the tails of the target density except in the simplest case we examine, and even then care must be taken to choose the appropriate scaling of the deterministic and random parts of the proposed moves. Again, this calls into question the utility of smart MCMC in more complex problems. Finally, we apply the same method to approximate the rate of convergence in multidimensional Gaussian problems with and without importance sampling. There we demonstrate the necessity of importance sampling for target densities which depend on variables with a wide range of scales.
Monte Carlo based calibration of an air monitoring system for gamma and beta+ radiation.
Sarnelli, A; Negrini, M; D'Errico, V; Bianchini, D; Strigari, L; Mezzenga, E; Menghi, E; Marcocci, F; Benassi, M
2015-11-01
Marinelli beaker systems are used to monitor the activity of radioactive samples. These systems are usually calibrated with water solutions, and the determination of the activity in gases requires correction coefficients accounting for the different mass-thickness of the sample. For beta+ radionuclides, the different distribution of the positron annihilation points should also be considered. In this work a Monte Carlo simulation based on Geant4 is used to compute correction coefficients for the measurement of the activity of air samples.
Zonios, George
2014-09-01
Knowledge of light penetration characteristics is very important in almost all studies in biomedical optics. In this work, the reflectance sampling depth in biological tissues was investigated using Monte Carlo simulations for various common illumination/collection configurations. The analysis shows that the average sampling depth can be described by two simple empirical analytical expressions over the entire typical ranges of absorption and scattering properties relevant to in vivo biological tissue, regardless of the specific illumination/collection configuration details. These results are promising and helpful for the quick, efficient, and accurate design of reflectance studies for various biological tissue applications.
Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code
NASA Astrophysics Data System (ADS)
Merheb, C.; Petegnief, Y.; Talbot, J. N.
2007-02-01
Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized like the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic™ animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic™ system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed
pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis
NASA Astrophysics Data System (ADS)
White, J.; Brakefield, L. K.
2015-12-01
The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files, and even a text editor. pyNSMC is an open-source Python module that automates the workflow of null-space Monte Carlo uncertainty analysis. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a Python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in Python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease of use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
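The draw at the core of any null-space Monte Carlo workflow, which the ensemble class wraps, reduces to a projection. The sketch below builds the null space of a stand-in Jacobian by SVD and perturbs the calibrated parameters only within it; the matrix and dimensions are synthetic, whereas in practice the Jacobian would come from PEST/PEST++ output.

```python
# Core null-space Monte Carlo draw: perturb only along the Jacobian's null space.
import numpy as np

rng = np.random.default_rng(7)
n_obs, n_par = 40, 200
jac = rng.normal(size=(n_obs, n_par))      # stand-in sensitivity (Jacobian) matrix
theta_cal = rng.normal(size=n_par)         # calibrated parameter vector (synthetic)

u, s, vt = np.linalg.svd(jac, full_matrices=True)
rank = int(np.sum(s > 1e-8 * s[0]))        # numerical rank
v_null = vt[rank:].T                       # columns span the null space

def realization(scale=1.0):
    """Calibrated vector plus a random perturbation projected onto the null space."""
    z = rng.normal(scale=scale, size=n_par)
    return theta_cal + v_null @ (v_null.T @ z)

ensemble = np.array([realization() for _ in range(100)])
# to first order, every realization reproduces the calibrated observations
print(np.allclose(jac @ ensemble.T, (jac @ theta_cal)[:, None], atol=1e-6))
```

In the full workflow each realization would then be re-calibrated against the observations; the point here is only the projection step that the file juggling tends to obscure.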
Review of Fast Monte Carlo Codes for Dose Calculation in Radiation Therapy Treatment Planning
Jabbari, Keyvan
2011-01-01
An important requirement in radiation therapy is a fast and accurate treatment planning system. This system, using computed tomography (CT) data and the direction and characteristics of the beam, calculates the dose at all points of the patient's volume. The two main factors in a treatment planning system are accuracy and speed, and according to these factors various generations of treatment planning systems have been developed. This article is a review of fast Monte Carlo treatment planning algorithms, which are accurate and fast at the same time. The Monte Carlo techniques are based on the transport of each individual particle (e.g., photon or electron) in the tissue. The transport of the particle is done using the physics of the interaction of the particles with matter. Other techniques transport the particles as a group. For a typical dose calculation in radiation therapy the code has to transport several million particles, which takes a few hours; therefore, Monte Carlo techniques are accurate, but slow for clinical use. In recent years, with the development of the ‘fast' Monte Carlo systems, one is able to perform dose calculation in a reasonable time for clinical use. The acceptable time for dose calculation is in the range of one minute. There is currently a growing interest in fast Monte Carlo treatment planning systems and there are many commercial treatment planning systems that perform dose calculation in radiation therapy based on the Monte Carlo technique. PMID:22606661
NASA Astrophysics Data System (ADS)
Lin, Lin; Zhang, Mei
2015-02-01
The scaling Monte Carlo method and the Gaussian model are applied to simulate the transport of light beams with arbitrary waist radius. Monte Carlo simulations are usually performed for pencil or cone beams, in which the initial status of every photon is identical. In practical applications, however, the incident light is focused on the sample, forming an approximately Gaussian distribution on the surface, and as the focus position within the sample is altered, the initial status of the photons is no longer identical. Using the hyperboloid method, the initial angles and coordinates are generated statistically according to the size of the Gaussian waist and the focus depth. Scaling calculations are performed with baseline data from a standard Monte Carlo simulation. The scaling method incorporated with the Gaussian model was tested and proved effective over a range of scattering coefficients from 20% to 180% of the value used in the baseline simulation; in most cases, the percentage error was less than 10%. Increasing the focus depth results in larger errors in the scaled radial reflectance in the region close to the optical axis. In addition to evaluating the accuracy of the scaling Monte Carlo method, this study has implications for inverse Monte Carlo with arbitrary parameters of the optical system.
NASA Astrophysics Data System (ADS)
Alexander, Andrew William
Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP), provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. A developing treatment modality called energy and intensity modulated electron radiotherapy (MERT) is a promising modality, which has the fundamental capabilities to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. The structure of the MERT planning toolkit software and
Monte-Carlo histories of refractory interstellar dust
NASA Technical Reports Server (NTRS)
Clayton, D. D.; Liffman, K.
1988-01-01
Monte-Carlo histories of 6 × 10⁶ individual dust particles injected uniformly from stars into the interstellar medium during a 6 × 10⁹ yr history are calculated. The particles are given a two-phase internal structure of successive thermal condensates, and are distributed in initial radius as 1/a³ for a between 0.01 and 0.1 micron. The evolution of this system illustrates the distinction between several different lifetimes for interstellar dust. Most are destroyed, but some grow in size. Several important consequences for interstellar dust are described.
Parallelized quantum Monte Carlo algorithm with nonlocal worm updates.
Masaki-Kato, Akiko; Suzuki, Takafumi; Harada, Kenji; Todo, Synge; Kawashima, Naoki
2014-04-11
Based on the worm algorithm in the path-integral representation, we propose a general quantum Monte Carlo algorithm suitable for parallelizing on a distributed-memory computer by domain decomposition. Of particular importance is its application to large lattice systems of bosons and spins. A large number of worms are introduced and their population is controlled by a fictitious transverse field. For a benchmark, we study the size dependence of the Bose-condensation order parameter of the hard-core Bose-Hubbard model with L×L×βt=10240×10240×16, using 3200 computing cores, which shows good parallelization efficiency.
Error propagation in first-principles kinetic Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Matera, Sebastian
2017-04-01
First-principles kinetic Monte Carlo models allow for the modeling of catalytic surfaces with predictive quality. This comes at the price of non-negligible errors induced by the underlying approximate density functional calculation. On the example of CO oxidation on RuO2(110), we demonstrate a novel, efficient approach to global sensitivity analysis, with which we address the error propagation in these multiscale models. We find that we can still derive the most important atomistic factors for reactivity, even though the errors in the simulation results are sizable. The presented approach might also be applied in hierarchical model construction or computational catalyst screening.
Monte Carlo simulation with fixed steplength for diffusion processes in nonhomogeneous media
NASA Astrophysics Data System (ADS)
Ruiz Barlett, V.; Hoyuelos, M.; Mártin, H. O.
2013-04-01
Monte Carlo simulation is one of the most important tools in the study of diffusion processes. For constant diffusion coefficients, an appropriate Gaussian distribution of particle steplengths can generate exact results when compared with integration of the diffusion equation. It is important to notice that the same method is completely erroneous when applied to non-homogeneous diffusion coefficients. A simple alternative, jumping at fixed steplengths with appropriate transition probabilities, produces correct results. Here, a model for diffusion of calcium ions in the neuromuscular junction of the crayfish is used as a test to compare Monte Carlo simulation with fixed and Gaussian steplengths.
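The correct fixed-steplength alternative is short enough to show. In the sketch below, walkers hop a fixed distance delta with probabilities built from the diffusion coefficient evaluated at the midpoints of the jumps, which discretizes the Fickian equation for a space-dependent D(x); the form of D(x) and all parameters are illustrative, not the calcium model of the paper.

```python
# Fixed-steplength walk for a nonhomogeneous diffusion coefficient D(x).
import numpy as np

rng = np.random.default_rng(8)

def D(x):
    return 1.0 + 0.8 * np.tanh(x)          # smooth, position-dependent (illustrative)

delta, dt, nsteps = 0.05, 1e-4, 2000
x = np.zeros(20000)                         # all walkers start at the origin

for _ in range(nsteps):
    p_right = D(x + delta / 2) * dt / delta**2   # midpoint-evaluated hop rates,
    p_left = D(x - delta / 2) * dt / delta**2    # discretizing d/dx [D(x) d rho/dx]
    u = rng.random(x.size)
    x = np.where(u < p_right, x + delta,
        np.where(u < p_right + p_left, x - delta, x))
# by contrast, Gaussian steps drawn with a locally evaluated D(x) converge
# to the wrong equation, which is the error the abstract warns about
```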
Obtaining representative ground water samples is important for site assessment and
remedial performance monitoring objectives. Issues which must be considered prior to initiating a ground-water monitoring program include defining monitoring goals and objectives, sampling point...
Finding organic vapors - a Monte Carlo approach
NASA Astrophysics Data System (ADS)
Vuollekoski, Henri; Boy, Michael; Kerminen, Veli-Matti; Kulmala, Markku
2010-05-01
drawbacks in accuracy, the inability to find diurnal variation and the lack of size resolution. Here, we aim to shed some light onto the problem by applying an ad hoc Monte Carlo algorithm to a well established aerosol dynamical model, the University of Helsinki Multicomponent Aerosol model (UHMA). By performing a side-by-side comparison with measurement data within the algorithm, this approach has the significant advantage of decreasing the amount of manual labor. But more importantly, by basing the comparison on particle number size distribution data - a quantity that can be quite reliably measured - the accuracy of the results is good.
Monte Carlo simulation of classical spin models with chaotic billiards.
Suzuki, Hideyuki
2013-11-01
It has recently been shown that the computing abilities of Boltzmann machines, or Ising spin-glass models, can be implemented by chaotic billiard dynamics without any use of random numbers. In this paper, we further numerically investigate the capabilities of the chaotic billiard dynamics as a deterministic alternative to random Monte Carlo methods by applying it to classical spin models in statistical physics. First, we verify that the billiard dynamics can yield samples that converge to the true distribution of the Ising model on a small lattice, and we show that it appears to have the same convergence rate as random Monte Carlo sampling. Second, we apply the billiard dynamics to finite-size scaling analysis of the critical behavior of the Ising model and show that the phase-transition point and the critical exponents are correctly obtained. Third, we extend the billiard dynamics to spins that take more than two states and show that it can be applied successfully to the Potts model. We also discuss the possibility of extensions to continuous-valued models such as the XY model.
Self-learning Monte Carlo method
NASA Astrophysics Data System (ADS)
Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang
2017-01-01
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.
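A conceptual sketch of the SLMC loop may help fix ideas: fit a cheap effective model to configurations from a short trial run, use it to drive long proposal moves, and restore exactness with a final Metropolis test against the original energy. Everything below, including the one-dimensional toy Hamiltonian and the linear effective model, is an illustrative assumption, not the model studied in the paper.

```python
# Self-learning Monte Carlo, toy version: learn a surrogate, propose with it,
# correct with the original energy.
import numpy as np

rng = np.random.default_rng(9)
n, beta = 64, 0.5

def e_orig(s):
    """'Expensive' energy: nearest- plus weak next-nearest-neighbor couplings."""
    return -np.sum(s * np.roll(s, 1)) - 0.2 * np.sum(s * np.roll(s, 2))

def feats(s):
    return np.array([np.sum(s * np.roll(s, 1)), np.sum(s * np.roll(s, 2))])

# training stage: short local-update run with the original energy
s = rng.choice([-1, 1], n)
X, y = [], []
for sweep in range(4000):
    i = rng.integers(n)
    s2 = s.copy(); s2[i] *= -1
    if rng.random() < np.exp(-beta * (e_orig(s2) - e_orig(s))):
        s = s2
    if sweep % 40 == 0:
        X.append(feats(s)); y.append(e_orig(s))
coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
e_eff = lambda s: feats(s) @ coef          # fast learned surrogate

# simulation stage: a surrogate-driven sub-run serves as one big proposal
for sweep in range(1000):
    prop = s.copy()
    for _ in range(50):                    # local moves judged by the surrogate only
        i = rng.integers(n)
        t = prop.copy(); t[i] *= -1
        if rng.random() < np.exp(-beta * (e_eff(t) - e_eff(prop))):
            prop = t
    # final test restores detailed balance with respect to the original model
    dE = (e_orig(prop) - e_orig(s)) - (e_eff(prop) - e_eff(s))
    if rng.random() < np.exp(-beta * dE):
        s = prop
```

The speedup comes from evaluating the expensive energy only once per long proposal rather than once per local move, while the final acceptance test keeps the sampling unbiased.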
Adiabatic optimization versus diffusion Monte Carlo methods
NASA Astrophysics Data System (ADS)
Jarret, Michael; Jordan, Stephen P.; Lackey, Brad
2016-10-01
Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1 and L2 normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice, however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.
Monte Carlo simulation of two-component aerosol processes
NASA Astrophysics Data System (ADS)
Huertas, Jose Ignacio
Aerosol processes have been extensively used for production of nanophase materials. However, when temperatures and number densities are high, particle agglomeration is a serious drawback for these techniques. This problem can be addressed by encapsulating the particles with a second material before they agglomerate. These particles will agglomerate, but the primary particles within them will not. When the encapsulation is later removed, the resulting powder will contain only weakly agglomerated particles. To demonstrate the applicability of the particle encapsulation method for the production of high-purity unagglomerated nanosize materials, tungsten (W) and tungsten titanium alloy (W-Ti) particles were synthesized in a sodium/halide flame. The particles were characterized by XRD, SEM, TEM and EDAX. The particles appeared unagglomerated, cubic and hexagonal in shape, and had a size of 30-50 nm. No contamination was detected even after extended exposure to atmospheric conditions. The nanosized W and W-Ti particles were consolidated into pellets 6 mm in diameter and 6-8 mm long. Hardness measurements indicate values four times that of conventional tungsten. 100% densification was achieved by hot isostatic pressing (HIPing) the samples. To study the particle encapsulation method, a code to simulate particle formation in two-component aerosols was developed. The simulation was carried out using a Monte Carlo technique. This approach allowed for the treatment of both probabilistic and deterministic events. Thus, the coagulation term of the general dynamic equation (GDE) was Monte Carlo simulated, and the condensation term was solved analytically and incorporated into the model. The model includes condensation, coagulation, sources, and sinks for two-component aerosol processes. The Kelvin effect has been included in the model as well. The code is general and does not suffer from problems associated with mass conservation, high rates of condensation and approximations on particle composition. It has
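For the stochastic part, a Gillespie-style event loop is the usual shape of a Monte Carlo coagulation step. The fragment below does this for a constant kernel and single-component particles, which keeps the pair selection uniform; a real two-component code would carry a composition with each particle and a size-dependent kernel.

```python
# Event-driven Monte Carlo coagulation with a constant kernel (illustrative).
import numpy as np

rng = np.random.default_rng(10)
K = 1.0                                   # constant coagulation kernel (assumed)
vols = np.ones(2000)                      # initially monodisperse volumes
t = 0.0

while vols.size > 200:
    n = vols.size
    rate = K * n * (n - 1) / 2.0          # total coagulation rate over all pairs
    t += rng.exponential(1.0 / rate)      # waiting time to the next event
    i, j = rng.choice(n, size=2, replace=False)  # uniform pair: constant kernel
    vols[i] += vols[j]                    # merge j into i, conserving volume
    vols = np.delete(vols, j)
print(f"mean volume {vols.mean():.1f} at t = {t:.4f}")
```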
A Monte Carlo Study of Using the First Eigenvalue for Averaging Intercorrelations.
ERIC Educational Resources Information Center
Dunlap, William P.; And Others
1987-01-01
A procedure proposed by H. F. Kaiser (1968) for averaging coefficients using the first eigenvalue of an intercorrelation matrix was studied via Monte Carlo methods. The study also assessed a modification of the Kaiser procedure and the use of Fisher's "z." Applications to sample size effects are discussed. (TJH)
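Kaiser's proposal rests on the identity that a p x p equicorrelation matrix with common off-diagonal value r has first eigenvalue lambda_1 = 1 + (p - 1)r, so inverting this relation yields an eigenvalue-based "average" correlation. A minimal sketch (function name and example matrix are our own):

```python
import numpy as np

def kaiser_average_r(R):
    """Average intercorrelation via the first eigenvalue, using the identity
    lambda_1 = 1 + (p - 1) * r for an equicorrelation matrix."""
    p = R.shape[0]
    lam1 = np.linalg.eigvalsh(R)[-1]     # largest eigenvalue
    return (lam1 - 1.0) / (p - 1.0)

R = np.array([[1.0, 0.3, 0.5],
              [0.3, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
# close to, but not necessarily equal to, the simple mean (0.4 here)
print(kaiser_average_r(R))
```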
ERIC Educational Resources Information Center
Kim, Jee-Seon; Bolt, Daniel M.
2007-01-01
The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…
A Monte Carlo Study of Eight Confidence Interval Methods for Coefficient Alpha
ERIC Educational Resources Information Center
Romano, Jeanine L.; Kromrey, Jeffrey D.; Hibbard, Susan T.
2010-01-01
The purpose of this research is to examine eight of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions. In general, the differences in…
Confidence Interval Estimation of KR-20--Some Monte Carlo Results.
ERIC Educational Resources Information Center
Mandeville, Garrett K.
An investigation presents extensive Monte Carlo results indicating the conditions under which a confidence interval procedure based on the F distribution can be used, and examines the robustness of such procedures for small samples. A review of the literature is presented. The procedure uses a binary data matrix. Results indicate that…
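The F-distribution procedure studied here and in the coefficient alpha literature above is, as far as we can tell, of the type introduced by Feldt (1965), with degrees of freedom n - 1 and (n - 1)(k - 1). A hedged sketch; the exact percentile conventions should be checked against the original sources.

```python
from scipy.stats import f

def feldt_ci(alpha_hat, n, k, conf=0.95):
    """Feldt-type confidence interval for coefficient alpha / KR-20 (sketch;
    conventions should be verified against Feldt, 1965).
    n = number of persons, k = number of items."""
    g = 1.0 - conf
    df1, df2 = n - 1, (n - 1) * (k - 1)
    lower = 1.0 - (1.0 - alpha_hat) * f.ppf(1.0 - g / 2, df1, df2)
    upper = 1.0 - (1.0 - alpha_hat) * f.ppf(g / 2, df1, df2)
    return lower, upper

print(feldt_ci(0.85, n=200, k=20))
```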
Practical and conceptual path sampling issues
NASA Astrophysics Data System (ADS)
Bolhuis, P. G.; Dellago, C.
2015-09-01
In the past 15 years transition path sampling (TPS) has evolved from its basic algorithm to an entire collection of methods and a framework for investigating rare events in complex systems. The methodology is applicable to a wide variety of systems and processes, ranging from transitions in small clusters or molecules to chemical reactions, phase transitions, and conformational changes in biomolecules. The basic idea of TPS is to harvest unbiased dynamical trajectories that connect a reactant with a product by a Markov chain Monte Carlo procedure called shooting. This simple importance sampling yields the rate constants, the free energy surface, insight into the mechanism of the rare event of interest, and, via the concept of the committor, access to the reaction coordinate. In the last decade extensions to TPS have been developed, notably the transition interface sampling (TIS) method and its generalization, multiple state TIS. Combination with advanced sampling methods such as replica exchange and the Wang-Landau algorithm, among others, improves sampling efficiency. Notwithstanding the success of TPS, there are issues left to discuss and, despite the method's apparent simplicity, many pitfalls to avoid. This paper discusses several of these issues and pitfalls: the choice of stable states and interface order parameters, the problem of positioning the TPS windows and TIS interfaces, the matter of convergence of the path ensemble, the matter of kinetic traps, and the question of whether TPS is able to investigate and sample Markov state models. We also review the reweighting technique used to join path ensembles. Finally we discuss the use of the sampled path ensemble to obtain reaction coordinates.
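A minimal sketch of a shooting move, using the one-way variant for overdamped Langevin dynamics on a double-well potential; the paper's discussion also covers two-way deterministic shooting, and the potential, state definitions, and parameters below are invented for illustration.

```python
import numpy as np

# One-way shooting move for TPS with overdamped Langevin dynamics on a
# double well V(x) = (x^2 - 1)^2. States: A = left well, B = right well.
rng = np.random.default_rng(1)
dt, beta, T = 1e-3, 3.0, 2000

def force(x): return -4.0 * x * (x * x - 1.0)
inA = lambda x: x < -0.8
inB = lambda x: x > 0.8

def propagate(x0, nsteps):
    x, out = x0, []
    for _ in range(nsteps):
        x = x + force(x) * dt + np.sqrt(2 * dt / beta) * rng.normal()
        out.append(x)
    return np.array(out)

def shooting_move(path):
    j = rng.integers(1, len(path) - 1)          # random shooting point
    trial = np.concatenate([path[:j], propagate(path[j - 1], len(path) - j)])
    # for one-way shooting with stochastic dynamics the acceptance reduces
    # to the path-ensemble constraint: keep only reactive A->B trajectories
    return trial if inA(trial[0]) and inB(trial[-1]) else path

# assume `path` is an existing reactive A->B trajectory of length T,
# obtained in practice from, e.g., a high-temperature run
```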
Parallel Markov chain Monte Carlo simulations.
Ren, Ruichao; Orkoulas, G
2007-06-07
With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.
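The sequential-updating idea can be emulated serially: sub-domains are swept one after another, so a full sweep is a composition of valid Markov kernels rather than a set of concurrent updates that could conflict at domain boundaries. A toy 1D lattice-gas stand-in; the two "domains" here mimic two processors, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
L, beta, mu = 32, 1.0, -0.5
occ = rng.integers(0, 2, size=L)          # 1D lattice-gas occupancies

def dE_flip(occ, i):
    """Energy change for flipping site i under H = -sum n_i n_{i+1} - mu sum n_i."""
    nn = occ[i - 1] + occ[(i + 1) % L]
    new = 1 - occ[i]
    return -(new - occ[i]) * (nn + mu)

domains = [range(0, L // 2), range(L // 2, L)]   # two "processors"

for sweep in range(1000):
    for dom in domains:              # sequential: domains never update
        for i in dom:                # concurrently, so each update sees the
            if rng.random() < np.exp(-beta * dE_flip(occ, i)):  # current
                occ[i] = 1 - occ[i]  # boundary state of its neighbour
```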
The Rational Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Clark, Michael
2006-12-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.
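The rational-approximation idea is simple to sketch: a rational function in partial-fraction form turns a fractional matrix power into a sum of shifted linear solves. The coefficients below are placeholders, not a tuned Remez approximation, and production codes would use a multishift Krylov solver rather than dense solves.

```python
import numpy as np

# Sketch: r(x) = a0 + sum_k a_k / (x + b_k) can approximate, e.g., x^(-1/2)
# over the spectral interval of a matrix M; each partial fraction becomes a
# shifted solve (M + b_k I) z = v. Coefficients here are illustrative only.
a0 = 0.3
a = np.array([0.5, 1.2])
b = np.array([0.1, 2.0])

def r_apply(M, v):
    out = a0 * v
    for ak, bk in zip(a, b):
        out += ak * np.linalg.solve(M + bk * np.eye(len(M)), v)
    return out
```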
Geodesic Monte Carlo on Embedded Manifolds
Byrne, Simon; Girolami, Mark
2013-01-01
Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
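On the unit hypersphere the geodesic flow used for such proposals has a closed form: a great-circle rotation of the position and its tangent velocity. A sketch of that single ingredient (the full scheme in the paper embeds it in a Hamiltonian-style update):

```python
import numpy as np

def sphere_geodesic(x, v, t):
    """Flow for time t along the great circle defined by a unit vector x
    and a tangent velocity v (with x . v = 0) on the unit hypersphere."""
    speed = np.linalg.norm(v)
    if speed == 0.0:
        return x, v
    u = v / speed
    x_t = x * np.cos(speed * t) + u * np.sin(speed * t)
    v_t = speed * (-x * np.sin(speed * t) + u * np.cos(speed * t))
    return x_t, v_t

x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 2.0, 0.0])              # tangent at x
xt, vt = sphere_geodesic(x, v, 0.3)
print(np.linalg.norm(xt), np.dot(xt, vt))  # stays on sphere, v stays tangent
```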
Monte Carlo simulation of neutron scattering instruments
Seeger, P.A.
1995-12-31
A library of Monte Carlo subroutines has been developed for the design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described, and the programs are used to compare instruments at continuous-wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.
NASA Technical Reports Server (NTRS)
Gayda, J.
1994-01-01
A specialized, microstructural lattice model, termed MCFET for combined Monte Carlo Finite Element Technique, has been developed to simulate microstructural evolution in material systems where modulated phases occur and the directionality of the modulation is influenced by internal and external stresses. Since many of the physical properties of materials are determined by microstructure, it is important to be able to predict and control microstructural development. MCFET uses a microstructural lattice model that can incorporate all relevant driving forces and kinetic considerations. Unlike molecular dynamics, this approach was developed specifically to predict macroscopic behavior, not atomistic behavior. In this approach, the microstructure is discretized into a fine lattice. Each element in the lattice is labeled in accordance with its microstructural identity. Diffusion of material at elevated temperatures is simulated by allowing exchanges of neighboring elements if the exchange lowers the total energy of the system. A Monte Carlo approach is used to select the exchange site while the change in energy associated with stress fields is computed using a finite element technique. The MCFET analysis has been validated by comparing this approach with a closed-form, analytical method for stress-assisted, shape changes of a single particle in an infinite matrix. Sample MCFET analyses for multiparticle problems have also been run and, in general, the resulting microstructural changes associated with the application of an external stress are similar to those observed in Ni-Al-Cr alloys at elevated temperatures. This program is written in FORTRAN for use on a 370 series IBM mainframe. It has been implemented on an IBM 370 running VM/SP and an IBM 3084 running MVS. It requires the IMSL math library and 220K of RAM for execution. The standard distribution medium for this program is a 9-track 1600 BPI magnetic tape in EBCDIC format.
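The exchange move at the heart of the method is easy to sketch. Here the finite-element elastic energy is replaced by a simple interface-energy placeholder, and a thermal Metropolis acceptance stands in for the strictly energy-lowering criterion described above; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, kT = 32, 0.5
phase = rng.integers(0, 2, size=(N, N))     # element identity labels

def total_energy(ph):
    """Stand-in for the MCFET energy: interface bonds only. In MCFET the
    elastic contribution would come from a finite element solve."""
    return float(np.sum(ph != np.roll(ph, 1, 0)) +
                 np.sum(ph != np.roll(ph, 1, 1)))

E = total_energy(phase)
for step in range(20000):
    i, j = rng.integers(N, size=2)
    di, dj = [(0, 1), (1, 0)][rng.integers(2)]   # pick a neighbour direction
    i2, j2 = (i + di) % N, (j + dj) % N
    if phase[i, j] == phase[i2, j2]:
        continue                                 # exchange changes nothing
    phase[i, j], phase[i2, j2] = phase[i2, j2], phase[i, j]
    E_new = total_energy(phase)
    if rng.random() < np.exp(-(E_new - E) / kT):
        E = E_new                                # accept the exchange
    else:
        phase[i, j], phase[i2, j2] = phase[i2, j2], phase[i, j]  # revert
```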
Hunt, R.J.; Steuer, J.J.; Mansor, M.T.C.; Bullen, T.D.
2001-01-01
Recharge areas of spring systems can be hard to identify, but they can be critically important for protection of a spring resource. A recharge area for a spring complex in southern Wisconsin was delineated using a variety of complementary techniques. A telescopic mesh refinement (TMR) model was constructed from an existing regional-scale ground water flow model. This TMR model was formally optimized using parameter estimation techniques; the optimized "best fit" to measured heads and fluxes was obtained by using a horizontal hydraulic conductivity 200% larger than the original regional model for the upper bedrock aquifer and 80% smaller for the lower bedrock aquifer. The uncertainty in hydraulic conductivity was formally considered using a stochastic Monte Carlo approach. Two-hundred model runs used uniformly distributed, randomly sampled, horizontal hydraulic conductivity values within the range given by the TMR optimized values and the previously constructed regional model. A probability distribution of particles captured by the spring, or a "probabilistic capture zone," was calculated from the realistic Monte Carlo results (136 runs of 200). In addition to portions of the local surface watershed, the capture zone encompassed areas outside of the watershed - demonstrating that the ground watershed and surface watershed do not coincide. Analysis of water collected from the site identified relatively large contrasts in chemistry, even for springs within 15 m of one another. The differences showed a distinct gradation from Ordovician-carbonate-dominated water in western spring vents to Cambrian-sandstone-influenced water in eastern spring vents. The difference in chemistry was attributed to distinctive bedrock geology as demonstrated by overlaying the capture zone derived from numerical modeling over a bedrock geology map for the area. This finding gives additional confidence to the capture zone calculated by modeling.
Monte Carlo simulations of systems with complex energy landscapes
NASA Astrophysics Data System (ADS)
Wüst, T.; Landau, D. P.; Gervais, C.; Xu, Y.
2009-04-01
Non-traditional Monte Carlo simulations are a powerful approach to the study of systems with complex energy landscapes. After reviewing several of these specialized algorithms we shall describe the behavior of typical systems including spin glasses, lattice proteins, and models for "real" proteins. In the Edwards-Anderson spin glass it is now possible to produce probability distributions in the canonical ensemble and thermodynamic results of high numerical quality. In the hydrophobic-polar (HP) lattice protein model Wang-Landau sampling with an improved move set (pull-moves) produces results of very high quality. These can be compared with the results of other methods of statistical physics. A more realistic membrane protein model for Glycophorin A is also examined. Wang-Landau sampling allows the study of the dimerization process including an elucidation of the nature of the process.
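For reference, the Wang-Landau iteration underlying several of the studies mentioned above can be sketched compactly for the 2D Ising model: a running estimate of ln g(E) penalizes revisited energies until the histogram is flat, after which the modification factor is reduced. Bin layout, flatness threshold, and the halving schedule are common choices, not unique ones.

```python
import numpy as np

rng = np.random.default_rng(4)
L = 8
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    return -int(np.sum(s * np.roll(s, 1, 0)) + np.sum(s * np.roll(s, 1, 1)))

# energies lie in {-2L^2, ..., 2L^2} in steps of 4 (some bins unreachable)
idx = lambda E: (E + 2 * L * L) // 4
n_bins = L * L + 1
lng = np.zeros(n_bins)       # running estimate of ln g(E)
lnf = 1.0                    # modification factor
E = energy(spins)

while lnf > 1e-4:
    hist = np.zeros(n_bins)
    while True:
        for _ in range(10000):
            i, j = rng.integers(L, size=2)
            dE = 2 * spins[i, j] * (spins[i - 1, j] + spins[(i + 1) % L, j] +
                                    spins[i, j - 1] + spins[i, (j + 1) % L])
            if rng.random() < np.exp(min(0.0, lng[idx(E)] - lng[idx(E + dE)])):
                spins[i, j] *= -1
                E += dE
            lng[idx(E)] += lnf           # penalize the current energy
            hist[idx(E)] += 1
        visited = hist[hist > 0]
        if visited.min() > 0.8 * visited.mean():   # flatness criterion
            break
    lnf /= 2.0                           # refine the modification factor
```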
Uncovering mental representations with Markov chain Monte Carlo.
Sanborn, Adam N; Griffiths, Thomas L; Shiffrin, Richard M
2010-03-01
A key challenge for cognitive psychology is the investigation of mental representations, such as object categories, subjective probabilities, choice utilities, and memory traces. In many cases, these representations can be expressed as a non-negative function defined over a set of objects. We present a behavioral method for estimating these functions. Our approach uses people as components of a Markov chain Monte Carlo (MCMC) algorithm, a sophisticated sampling method originally developed in statistical physics. Experiments 1 and 2 verified the MCMC method by training participants on various category structures and then recovering those structures. Experiment 3 demonstrated that the MCMC method can be used to estimate the structures of the real-world animal shape categories of giraffes, horses, dogs, and cats. Experiment 4 combined the MCMC method with multidimensional scaling to demonstrate how different accounts of the structure of categories, such as prototype and exemplar models, can be tested, producing samples from the categories of apples, oranges, and grapes.
Monte Carlo simulations of the HP model (the "Ising model" of protein folding)
NASA Astrophysics Data System (ADS)
Li, Ying Wai; Wüst, Thomas; Landau, David P.
2011-09-01
Using Wang-Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined) we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. For free chains these proteins are known to first undergo a collapse "transition" to a globule state followed by a second "transition" into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of "transitions". These transitions depend upon the relative interaction strengths and are largely inaccessible to "standard" Monte Carlo methods.
De Niederhäusern, Simona; Bondi, Moreno; Anacarso, Immacolata; Iseppi, Ramona; Sabia, Carla; Bitonte, Fabiano; Messi, Patrizia
2013-01-01
Considering the limited knowledge about the biological characters of enterococci isolated from surface waters, we investigated antibiotic and heavy-metal resistance, bacteriocin production, and some important virulence traits of 165 enterococci collected in water samples from Monte Cotugno Lake, the largest earth-built artificial basin in Europe. The species distribution of isolates was as follows: Enterococcus faecium (80%), Enterococcus faecalis (12.7%), Enterococcus casseliflavus (3%), Enterococcus mundtii (1.8%), Enterococcus hirae (1.8%), Enterococcus durans (0.6%). All enterococci showed heavy-metal resistance toward Cu, Ni, Pb and Zn, were susceptible to Ag and Hg, and at the same time a large percentage (83.7%) exhibited resistance to one or more of the antibiotics tested. With regard to virulence factor genes, 50.9% of enterococci were positive for gelatinase (gelE), 10.9% for aggregation substance (agg), and 12.7% and 66.6% for the cell wall adhesins (efaAfs and efaAfm), respectively. No amplicons were detected after PCR for the cytolysin production (cylA, cylB and cylM) and enterococcal surface protein (esp) genes. Bacteriocin production was found in most of the isolates. Given that the waters of Monte Cotugno Lake are used for different purposes, among which are farming and recreational activities, they can contribute to spreading enterococci endowed with virulence factors and with resistance to antibiotics and heavy metals to humans.
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES) such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
Bayesian model comparison in cosmology with Population Monte Carlo
NASA Astrophysics Data System (ADS)
Kilbinger, Martin; Wraith, Darren; Robert, Christian P.; Benabed, Karim; Cappé, Olivier; Cardoso, Jean-François; Fort, Gersende; Prunet, Simon; Bouchet, François R.
2010-07-01
We use Bayesian model selection techniques to test extensions of the standard flat Λ cold dark matter (ΛCDM) paradigm. Dark-energy and curvature scenarios, and primordial perturbation models are considered. To that end, we calculate the Bayesian evidence in favour of each model using Population Monte Carlo (PMC), a new adaptive sampling technique which was recently applied in a cosmological context. In contrast to the case of other sampling-based inference techniques such as Markov chain Monte Carlo (MCMC), the Bayesian evidence is immediately available from the PMC sample used for parameter estimation without further computational effort, and it comes with an associated error evaluation. Also, it provides an unbiased estimator of the evidence after any fixed number of iterations and it is naturally parallelizable, in contrast with MCMC and nested sampling methods. By comparison with analytical predictions for simulated data, we show that our results obtained with PMC are reliable and robust. The variability in the evidence evaluation and the stability for various cases are estimated both from simulations and from data. For the cases we consider, the log-evidence is calculated with a precision of better than 0.08. Using a combined set of recent cosmic microwave background, type Ia supernovae and baryonic acoustic oscillation data, we find inconclusive evidence between flat ΛCDM and simple dark-energy models. A curved universe is moderately to strongly disfavoured with respect to a flat cosmology. Using physically well-motivated priors within the slow-roll approximation of inflation, we find a weak preference for a running spectral index. A Harrison-Zel'dovich spectrum is weakly disfavoured. With the current data, tensor modes are not detected; the large prior volume on the tensor-to-scalar ratio r results in moderate evidence in favour of r = 0.
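The point that the evidence comes "for free" from the PMC sample is easy to see in a sketch: the evidence is the mean unnormalized importance weight, while the normalized weights drive the proposal adaptation. The single-Gaussian adaptation below is a deliberate simplification of the mixture updates used in the paper, on an invented one-dimensional target.

```python
import numpy as np

rng = np.random.default_rng(5)

# Unnormalized toy target; its evidence (integral) is 0.5*sqrt(2*pi) ~ 1.2533
def log_target(x):
    return -0.5 * ((x - 1.0) / 0.5) ** 2

mu, sig, N = 0.0, 3.0, 5000          # initial (deliberately poor) proposal
for it in range(5):
    x = rng.normal(mu, sig, N)       # sample the current proposal
    log_q = -0.5 * ((x - mu) / sig) ** 2 - np.log(sig * np.sqrt(2 * np.pi))
    w = np.exp(log_target(x) - log_q)
    evidence = w.mean()              # free by-product of the PMC sample
    w /= w.sum()
    mu = np.sum(w * x)               # adapt proposal by weighted moments
    sig = np.sqrt(np.sum(w * (x - mu) ** 2))
    print(it, evidence, mu, sig)
```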
Anisotropic seismic inversion using a multigrid Monte Carlo approach
NASA Astrophysics Data System (ADS)
Mewes, Armin; Kulessa, Bernd; McKinley, John D.; Binley, Andrew M.
2010-10-01
We propose a new approach for the inversion of anisotropic P-wave data based on Monte Carlo methods combined with a multigrid approach. Simulated annealing facilitates objective minimization of the functional characterizing the misfit between observed and predicted traveltimes, as controlled by the Thomsen anisotropy parameters (ɛ, δ). Cycling between finer and coarser grids enhances the computational efficiency of the inversion process, thus accelerating the convergence of the solution while acting as a regularization technique of the inverse problem. Multigrid perturbation samples the probability density function without the requirement for the user to adjust tuning parameters. This increases the probability that the preferred global, rather than a poor local, minimum is attained. Undertaking multigrid refinement and Monte Carlo search in parallel produces more robust convergence than does the initially more intuitive approach of completing them sequentially. We demonstrate the usefulness of the new multigrid Monte Carlo (MGMC) scheme by applying it to (a) synthetic, noise-contaminated data reflecting an isotropic subsurface of constant slowness, horizontally layered geologic media and discrete subsurface anomalies; and (b) a crosshole seismic data set acquired by previous authors at the Reskajeage test site in Cornwall, UK. Inverted distributions of slowness (s) and the Thomsen anisotropy parameters (ɛ, δ) compare favourably with those obtained previously using a popular matrix-based method. Reconstruction of the Thomsen ɛ parameter is particularly robust compared to that of slowness and the Thomsen δ parameter, even in the face of complex subsurface anomalies. The Thomsen ɛ and δ parameters have enhanced sensitivities to bulk-fabric and fracture-based anisotropies in the TI medium at Reskajeage. Because reconstruction of slowness (s) is intimately linked to that of ɛ and δ in the MGMC scheme, inverted images of phase velocity reflect the integrated
Scalable Domain Decomposed Monte Carlo Particle Transport
O'Brien, Matthew Joseph
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Monte Carlo Simulation of Counting Experiments.
ERIC Educational Resources Information Center
Ogden, Philip M.
A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
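The construction described above is straightforward to reproduce: counts are simulated as a binomial over m subintervals, and as m grows the variance approaches the mean, the Poisson signature. This is a hedged reconstruction; the original program's details are not available to us.

```python
import random

# Counts in an interval of length t at a given rate, simulated as a binomial
# over m subintervals with at most one count each; the Poisson limit emerges
# as m grows.
def simulate_counts(rate, t, m):
    p = rate * t / m                    # per-subinterval count probability
    return sum(random.random() < p for _ in range(m))

rate, t = 5.0, 1.0
for m in (10, 100, 10000):
    samples = [simulate_counts(rate, t, m) for _ in range(5000)]
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    print(m, round(mean, 2), round(var, 2))   # variance -> mean (Poisson)
```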
A comparison of Monte Carlo generators
Golan, Tomasz
2015-05-15
A comparison of GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: protons multiplicity, total visible energy, most energetic proton momentum, and π{sup +} two-dimensional energy vs cosine distribution.
Structural Reliability and Monte Carlo Simulation.
ERIC Educational Resources Information Center
Laumakis, P. J.; Harlow, G.
2002-01-01
Analyzes a simple boom structure and assesses its reliability using elementary engineering mechanics. Demonstrates the power and utility of Monte-Carlo simulation by showing that such a simulation can be implemented more readily with results that compare favorably to the theoretical calculations. (Author/MM)
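The generic pattern of such a reliability simulation is to sample random loads and capacities and count limit-state violations. The distributions below are illustrative, not those of the boom structure in the article.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 100_000
load = rng.normal(10.0, 2.0, N)        # applied load (kN), assumed
capacity = rng.normal(16.0, 1.5, N)    # member capacity (kN), assumed
g = capacity - load                    # limit state: failure when g < 0
pf = np.mean(g < 0.0)
se = np.sqrt(pf * (1 - pf) / N)        # binomial standard error
print(f"P_failure = {pf:.4f} +/- {se:.4f}")
```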
Search and Rescue Monte Carlo Simulation.
1985-03-01
…confidence interval) of the number of lives saved. A single-page output and computer graphic present the information to the user in an easily understood format. The confidence interval can be reduced by making additional runs of this Monte Carlo model. (Author)
Monte Carlo studies of ARA detector optimization
NASA Astrophysics Data System (ADS)
Stockham, Jessica
2013-04-01
The Askaryan Radio Array (ARA) is a neutrino detector deployed in the Antarctic ice sheet near the South Pole. The array is designed to detect ultra high energy neutrinos in the range of 0.1-10 EeV. Detector optimization is studied using Monte Carlo simulations.
Murray, Aja Louise; McKenzie, Karen; Kuenssberg, Renate; O'Donnell, Michael
2014-11-01
The magnitude of symptom inter-correlations in diagnosed individuals has contributed to the evidence that autism spectrum disorder (ASD) is a fractionable disorder. Such correlations may substantially under-estimate the population correlations among symptoms due to simultaneous selection on the areas of deficit required for diagnosis. Using statistical simulations of this selection mechanism, we provide estimates of the extent of this bias, given different levels of population correlation between symptoms. We then use real data to compare domain inter-correlations in the Autism Spectrum Quotient in those with ASD versus a combined ASD and non-ASD sample. Results from both studies indicate that samples restricted to individuals with a diagnosis of ASD potentially substantially under-estimate the magnitude of association between features of ASD.
Monte Carlo Green's function formalism for the propagation of partially coherent light.
Prahl, Scott A; Fischer, David G; Duncan, Donald D
2009-07-01
We present a Monte Carlo-derived Green's function for the propagation of partially spatially coherent fields. This Green's function, which is derived by sampling Huygens-Fresnel wavelets, can be used to propagate fields through an optical system and to compute first- and second-order field statistics directly. The concept is illustrated for a cylindrical f/1 imaging system. A Gaussian copula is used to synthesize realizations of a Gaussian Schell-model field in the pupil plane. Physical optics and Monte Carlo predictions are made for the first- and second-order statistics of the field in the vicinity of the focal plane for a variety of source coherence conditions. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. This formalism can be generally employed to treat the interaction of partially coherent fields with diffracting structures.
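One common way to synthesize realizations of a Gaussian Schell-model field is spectral filtering of complex white noise under a Gaussian envelope. The paper instead uses a Gaussian copula, so the sketch below is a simplified stand-in, and its width conventions are approximate.

```python
import numpy as np

rng = np.random.default_rng(7)
n, dx = 1024, 0.01
x = (np.arange(n) - n // 2) * dx
sigma_I, sigma_mu = 1.0, 0.2           # intensity and coherence widths

def gsm_realization():
    """One realization of a Gaussian Schell-model-like field: complex white
    noise smoothed by a Gaussian kernel, under a Gaussian envelope."""
    noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    kernel = np.exp(-x**2 / sigma_mu**2)
    phi = np.fft.ifft(np.fft.fft(noise) * np.fft.fft(np.fft.ifftshift(kernel)))
    phi /= np.sqrt(np.mean(np.abs(phi)**2))      # normalize fluctuations
    return np.exp(-x**2 / (4 * sigma_I**2)) * phi

# second-order statistics estimated from an ensemble of realizations
fields = np.array([gsm_realization() for _ in range(200)])
J = fields.conj().T @ fields / len(fields)       # mutual intensity J(x1, x2)
```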
MULTILEVEL MONTE CARLO (MLMC) SIMULATIONS: PERFORMANCE RESULTS FOR SPE10 (XY SLICES)
Kalchev, Delyan; Vassilevski, Panayot S.
2016-02-26
In this report we first describe a generic multilevel Monte Carlo method and then illustrate its superior performance over a traditional single-level Monte Carlo method for second-order elliptic PDEs corresponding to two-dimensional layers in the (x, y)-direction of the Tenth SPE Comparative Solution Project (SPE10), which provides high-contrast permeability coefficients. The SPE10 data set is used as a coarse level in the Monte Carlo method, and the respective permeability coefficient k (provided in the SPE10 dataset) is used as a mean in the simulation. The actual coefficients are drawn based on a KL-expansion, assuming that the log-mean is perturbed by log-normally distributed samples.
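The generic estimator referred to in the first sentence is the telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with the same random input used on both levels of each correction term so that the corrections have small variance. A toy sketch; the level functional below is a simple Euler integrator, not the SPE10 pressure solve.

```python
import numpy as np

rng = np.random.default_rng(8)

def P(level, omega):
    """Toy level-l quantity of interest: Euler solve of y' = -y, y(0)=omega,
    with step h = 2^-level (a stand-in for a PDE solve at resolution h)."""
    h = 2.0 ** -level
    t, y = 0.0, omega
    while t < 1.0:
        y += h * (-y)
        t += h
    return y

def mlmc(levels, n_per_level):
    est = 0.0
    for l, N in zip(levels, n_per_level):
        s = 0.0
        for _ in range(N):
            omega = rng.normal(1.0, 0.1)          # one random coefficient
            # key MLMC point: same sample feeds both resolutions
            s += P(l, omega) - (P(l - 1, omega) if l > 0 else 0.0)
        est += s / N
    return est

# many cheap coarse samples, few expensive fine ones
print(mlmc(levels=[0, 1, 2, 3], n_per_level=[4000, 1000, 250, 60]))
```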
Begy, Robert-Csaba; Cosma, Constantin; Timar, Alida; Fulea, Dan
2009-05-01
The 1001 keV gamma line of (234m)Pa has become important in gamma-spectrometric measurements of samples with (238)U content with the advent of large, high-efficiency HPGe detectors. In this study the emission probability Y(gamma) of the 1001 keV peak of (234m)Pa was determined by gamma-ray spectrometric measurements performed on uranium-containing glass, using a Monte Carlo simulation code for efficiency calibration. To our knowledge, this method of calculation has not been applied to the values quoted in the literature so far. The measurements gave an average of 0.836 +/- 0.022%, a value in very good agreement with some of the recent results previously presented.
The First 24 Years of Reverse Monte Carlo Modelling, Budapest, Hungary, 20-22 September 2012
NASA Astrophysics Data System (ADS)
Keen, David A.; Pusztai, László
2013-11-01
This special issue contains a collection of papers reflecting the content of the fifth workshop on reverse Monte Carlo (RMC) methods, held in a hotel on the banks of the Danube in the Budapest suburbs in the autumn of 2012. Over fifty participants gathered to hear talks and discuss a broad range of science based on the RMC technique in very convivial surroundings. Reverse Monte Carlo modelling is a method for producing three-dimensional disordered structural models in quantitative agreement with experimental data. The method was developed in the late 1980s and has since achieved wide acceptance within the scientific community [1], producing an average of over 90 papers and 1200 citations per year over the last five years. It is particularly suitable for the study of the structures of liquid and amorphous materials, as well as the structural analysis of disordered crystalline systems. The principal experimental data that are modelled are obtained from total x-ray or neutron scattering experiments, using the reciprocal space structure factor and/or the real space pair distribution function (PDF). Additional data might be included from extended x-ray absorption fine structure spectroscopy (EXAFS), Bragg peak intensities or indeed any measured data that can be calculated from a three-dimensional atomistic model. It is this use of total scattering (diffuse and Bragg), rather than just the Bragg peak intensities more commonly used for crystalline structure analysis, which enables RMC modelling to probe the often important deviations from the average crystal structure, to probe the structures of poorly crystalline or nanocrystalline materials, and the local structures of non-crystalline materials where only diffuse scattering is observed. This flexibility across various condensed matter structure-types has made the RMC method very attractive in a wide range of disciplines, as borne out in the contents of this special issue. It is however important to point out that since
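At its core, RMC is a Metropolis-like walk on atomic configurations in which the "energy" is the misfit between calculated and measured data. A schematic sketch, with a toy pair-distance histogram standing in for the structure-factor or PDF calculation of a real RMC code; all functions and parameters here are our own illustration.

```python
import numpy as np

rng = np.random.default_rng(9)

def calc_pdf(model):
    """Toy stand-in for the RMC forward calculation: a normalized histogram
    of pair distances (a real code computes S(Q) and/or the PDF)."""
    d = np.linalg.norm(model[:, None, :] - model[None, :, :], axis=-1)
    hist, _ = np.histogram(d[np.triu_indices(len(model), 1)],
                           bins=20, range=(0.0, 5.0))
    return hist / max(hist.sum(), 1)

def chi2(model, data, sigma):
    return np.sum((calc_pdf(model) - data) ** 2) / sigma ** 2

def rmc_step(model, data, sigma, max_disp=0.1):
    trial = model.copy()
    trial[rng.integers(len(trial))] += rng.uniform(-max_disp, max_disp, 3)
    d_chi2 = chi2(trial, data, sigma) - chi2(model, data, sigma)
    # Metropolis-like rule on the data misfit rather than on an energy
    return trial if rng.random() < np.exp(min(0.0, -d_chi2 / 2.0)) else model

atoms = rng.uniform(0.0, 5.0, size=(50, 3))
target = calc_pdf(rng.uniform(0.0, 5.0, size=(50, 3)))   # synthetic "data"
for _ in range(1000):
    atoms = rmc_step(atoms, target, sigma=0.05)
```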
Infinite variance in fermion quantum Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
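The symptom is easy to demonstrate in isolation: when an estimator's second moment diverges, the quoted error bar (sample standard deviation over sqrt(N)) never stabilizes with sample size. A toy illustration with a heavy-tailed stand-in estimator; this shows the diagnostic only, not the paper's bridge-link remedy.

```python
import numpy as np

rng = np.random.default_rng(10)
M = 1_000_000

# Two toy "estimators": one with finite variance, one whose second moment
# diverges (a Pareto/Lomax tail with exponent 1.5 has finite mean but
# infinite variance -- a stand-in for the pathological QMC estimator).
finite = rng.normal(1.0, 1.0, M)
infinite = rng.pareto(1.5, M) + 1.0

for name, x in (("finite-variance", finite), ("infinite-variance", infinite)):
    for N in (10**4, 10**5, 10**6):
        err = x[:N].std() / np.sqrt(N)   # naive Monte Carlo error bar
        print(f"{name:18s} N={N:7d} mean={x[:N].mean():6.3f} err={err:.4f}")
# finite case: err shrinks like 1/sqrt(N); infinite case: the error bar
# keeps jumping because the sample variance itself never converges
```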